Introduction: The workshop session will provide a quick tour covering high-level concepts, commands, and processes for using Linux and HPC on our Beartooth cluster. It will cover enough to allow an attendee to access the cluster and to perform the analysis associated with this workshop.
...
...
More extensive and in-depth information and walkthroughs are available on our wiki and under workshops/tutorials. You are welcome to dive into those in your own time. Content within them should provide you with a lot of the foundational concepts you would need to be familiar with to become a proficient HPC user.
...
Based on: Wiki Front Page: About ARCC
In short, we maintain internally housed scientific resources, including more than one HPC cluster, data storage, and several research computing servers and resources.
We are here to assist UW researchers like yourself with your research computing needs.
...
What is HPC
HPC stands for High Performance Computing and is one of UW ARCC’s core services. HPC is the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop or workstation. HPC is commonly used to solve large problems, and has some common use cases:
...
We typically have multiple users independently running jobs concurrently across compute nodes.
Resources are shared, but jobs do not interfere with anyone else’s resources, i.e. you have your own cores and your own block of memory.
If someone else’s job fails, it does NOT affect yours.
Example: The two GPU compute nodes that are part of this reservation each have 8 GPU devices. We can have different, individual jobs run on each of these compute nodes without affecting each other.
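For example, an interactive session using a single GPU device on one of those nodes could be requested as follows (a minimal sketch: the reservation name is a placeholder and the time value is illustrative; the account is this workshop’s):
Code Block
# Request 1 GPU device on the workshop reservation (reservation name is a placeholder):
[<username>@blog2 ~]$ salloc --account=arccanetrain --reservation=<reservation-name> --gres=gpu:1 --time=40:00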
...
Homogeneous vs Heterogeneous HPCs
There are two types of HPC systems:
Homogeneous: All compute nodes in the system share the same architecture. CPU, memory, and storage are the same across the system. (Ex: NWSC’s Derecho)
Heterogeneous: The compute nodes in the system can vary architecturally with respect to CPU, memory, even storage, and whether they have GPUs or not. Usually, the nodes are grouped in partitions. Beartooth is a heterogeneous cluster and our partitions are described on the Beartooth Hardware Summary Table on our ARCC Wiki.
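On a heterogeneous cluster you can list the partitions, and the nodes within them, from the command line with sinfo (a sketch: the partition names, sizes, and time limits shown are illustrative, not Beartooth’s actual layout):
Code Block
# List partitions and their nodes (output shown is illustrative):
[<username>@blog2 ~]$ sinfo
PARTITION  AVAIL  TIMELIMIT  NODES  STATE  NODELIST
teton*        up 7-00:00:00    180   idle  t[001-180]
teton-gpu     up 7-00:00:00      8   idle  tg[01-08]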
...
...
A reservation can be considered a temporary partition.
It is a set of compute nodes reserved for a period of time for a set of users/projects, who get priority use.
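Slurm jobs target a reservation explicitly via the --reservation option (a sketch: the reservation name is a placeholder, and the script name in the sbatch example is hypothetical):
Code Block
# Interactive session on the reserved nodes (users on the reservation get priority):
[<username>@blog2 ~]$ salloc --account=arccanetrain --reservation=<reservation-name> --time=40:00
# The same option works for batch submissions:
[<username>@blog2 ~]$ sbatch --reservation=<reservation-name> r_workflow.sh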
...
Important Dates:
After the 17th of June this reservation will stop, and you will drop down to general usage if you have another Beartooth project.
The project itself will be removed after the 24th of June, at which point you will not be able to use or access it. Please copy anything you require out of the project before then.
...
Southpass is our Open OnDemand resource, allowing users to access Beartooth via a web-based portal. Learn more about Southpass here.
Goals:
Demonstrate how users log into Southpass
Demonstrate requesting and using an XFCE Desktop Session
Introduce the Linux File System and how it compares to common workstation environments
Introduce HPC specific directories and how they’re used
Introduce Beartooth specific directories and how they’re used
Demonstrate how to access files using the Beartooth File Browsing Application
Demonstrate the use of emacs, available as a GUI based text-editor
Based on: SouthPass
...
Log in and Access the Cluster
...
Note: While we use a webform to request Beartooth resources on Southpass, later training will show how resource configurations can be requested through the command line via the salloc or sbatch commands.
...
/apps
(Specific to ARCC HPC) is similar to C:\Program Files on Windows or /Applications on a Mac: it is where applications are installed and where modules are loaded from. (More on that later.)
/alcova
(Specific to ARCC HPC). Additional research storage for research projects that may not require HPC but is accessible from Beartooth.
You won’t have access to it unless you were added to an alcova project by the PI.
...
Exercise: File Browsing in Southpass GUI
...
...
The Beartooth Shell Access opens a new browser tab with a shell running on a login node. Do not run any computation on these.
[<username>@blog2 ~]$
The SouthPass Interactive Desktop (terminal) is already running on a compute node.
[<username>@t402 ~]$
...
Login Node Policy
...
Anything compute-intensive (tasks using significant computational/hardware resources - e.g. using 100% of a login node’s CPU)
Long running tasks (over 10 min)
Any collection of a large number of tasks resulting in a similar hardware footprint to the actions mentioned previously.
Not sure? Use salloc to be on the safe side. This will be covered later.
Ex: salloc --account=arccanetrain --time=40:00
See more on ARCC’s Login Node Policy here
...
...
File navigation, demonstrating the use of the core commands:
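A minimal sketch, assuming the standard pwd, ls, and cd were the commands demonstrated:
Code Block
# Print the current working directory:
[<username>@t402 ~]$ pwd
/home/<username>
# List the contents of the current directory (-l long format, -a include hidden files):
[<username>@t402 ~]$ ls -la
# Change into the workshop project space, then return to your home directory:
[<username>@t402 ~]$ cd /project/biocompworkshop
[<username>@t402 biocompworkshop]$ cd ~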
...
Creating, moving and copying files and folders:
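A minimal sketch, assuming the standard mkdir, touch, cp, mv, rm, and rmdir were the commands demonstrated (the file and folder names are hypothetical):
Code Block
# Create a new folder and an empty file inside it:
[<username>@t402 ~]$ mkdir analysis
[<username>@t402 ~]$ touch analysis/notes.txt
# Copy a file (cp), then rename/move the copy (mv):
[<username>@t402 ~]$ cp analysis/notes.txt analysis/notes_backup.txt
[<username>@t402 ~]$ mv analysis/notes_backup.txt analysis/old_notes.txt
# Remove the files, then the (now empty) folder:
[<username>@t402 ~]$ rm analysis/old_notes.txt analysis/notes.txt
[<username>@t402 ~]$ rmdir analysis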
...
Text Editor Cheatsheets
Cheatsheets are available for both Vi/Vim and Nano.
Note: On Beartooth, vi maps to vim, i.e. if you open vi, you’re actually starting vim.
...
Demonstrating vi/vim text editor
...
Vim Tutor is a walkthrough for new users to get used to Vim. Run vimtutor from the command line to start it.
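For quick reference, the absolute basics of a vim session (the file name is hypothetical; keystrokes are shown as comments):
Code Block
# Open (or create) a file in vim:
[<username>@t402 ~]$ vim notes.txt
# Inside vim:
#   i     enter insert mode and type your text
#   Esc   return to normal mode
#   :w    save (write) the file
#   :q    quit (:wq saves and quits; :q! quits without saving)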
...
*** Break ***
...
04 Using Linux to Search/Parse Text Files
...
Since the cluster has to cater to everyone, we cannot provide a single desktop environment that provides everything.
Instead, we provide modules that a user loads to configure their environment for their particular needs within a session.
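A typical session then looks like the following (a sketch: the module name and version are illustrative; use module avail to see what is actually installed):
Code Block
# See which modules are available to load:
[<username>@t402 ~]$ module avail
# Load one (this name/version is illustrative):
[<username>@t402 ~]$ module load r/4.4.0
# Confirm what this session now has loaded:
[<username>@t402 ~]$ module list
# Reset back to a clean environment:
[<username>@t402 ~]$ module purge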
...
Code Block
# Within the R Terminal:
> library(Seurat)
Error in library(Seurat) : there is no package called 'Seurat'
> .libPaths(c('/project/biocompworkshop/software/r/libraries/4.4.0', '/apps/u/spack/gcc/12.2.0/r/4.4.0-7i7afpk/rlib/R/library'))
# Notice how the list of System Library packages listed in RStudio has changed.
> library(Seurat)
Loading required package: SeuratObject
Loading required package: sp

Attaching package: 'SeuratObject'

The following objects are masked from 'package:base':

    intersect, t
...
Linux/bash commands and scripts.
Module loads.
Application command-line calls.
Let’s consider our R workflow. I have:
...
Code Block
# You can view the contents of your output file:
[@blog2]$ cat r_16054193.out
R Workflow Example
Start: 06/05/24 14:02:01
SLURM_JOB_ID: 16054193
SLURM_JOB_NAME: r_job
SLURM_JOB_NODELIST: t402
Sleeping...

[@blog1]$ squeue -u salexan5
JOBID    PARTITION NAME   USER     ST TIME NODES NODELIST(REASON)
16054193 teton     r_job  salexan5 R  0:18     1 t402

# If the job id is no longer in the queue then the job is no longer running.
# It might have completed, or failed and exited.
[@blog1]$ squeue -u salexan5
JOBID    PARTITION NAME   USER     ST TIME NODES NODELIST(REASON)
...
Why is my Job Not Running?
Previously we explained: The two GPU compute nodes that are part of this reservation each have 8 GPU devices. We can have different, individual jobs run on each of these compute nodes without affecting each other.
So, we can have 16 concurrent jobs all running with a single GPU each.
But, what if a 17th person submitted a similar job?
Slurm will add this job to the queue, but it will be PENDING (PD) while it waits for the necessary resources to become available.
As soon as resources free up, this 17th job will start, and its status will update to RUNNING (R).
Slurm manages this for you.
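In the queue this looks something like the following (illustrative output: the job id, partition, and user are made up):
Code Block
# The 17th job sits PENDING (PD), with (Resources) given as the reason, until GPUs free up:
[@blog1]$ squeue -u <username>
JOBID    PARTITION   NAME   USER       ST  TIME  NODES NODELIST(REASON)
16054210 <partition> r_job  <username> PD  0:00      1 (Resources)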
...
Monitor your Job: Continued…
...
Code Block
# You can monitor the queue and/or log file to check if running.
[salexan5@blog2 salexan5]$ cat r_16054193.out
R Workflow Example
Start: 06/05/24 14:02:01
SLURM_JOB_ID: 16054193
SLURM_JOB_NAME: r_job
SLURM_JOB_NODELIST: t402
Sleeping...
Loading required package: SeuratObject
Loading required package: sp

Attaching package: 'SeuratObject'

The following objects are masked from 'package:base':

    intersect, t

End: 06/05/24 14:02:29
# OR...
...
We’ve covered the following high-level concepts, commands and processes:
What is HPC and what is a cluster - focusing on ARCC’s Beartooth cluster.
An introduction to Linux and its File System, and how to navigate around using an Interactive Desktop and/or using the command-line.
Linux command-line commands to view, search, parse, sort text files.
How to pipe the output of one command to the input of another, and how to redirect output to a file.
Using vim as a command-line text editor and/or emacs as a GUI within an Interactive Desktop.
Setting up your environment (using modules) to provide R/Python environments, and other software applications.
Accessing compute nodes via a SouthPass Interactive Desktop, and requesting different resources (cores, memory, GPUs).
Requesting interactive sessions (from a login node) using salloc.
Setting up a workflow, within a script, that can then be submitted to the Slurm queue using sbatch, and how to monitor jobs.
...
Everything covered can be found in previous workshops and additional information can be found on our Wiki.
ARCC personnel will be around in person for the first three days to assist with cluster/Linux related questions and issues.
We will provide virtual support over Thursday/Friday. Submit questions via the Slack channel; these will be passed on to us, and we will endeavor to set up a Zoom call via our Office Hours.
...