...
Steps to get started in HPC with ARCC:
1: Get an ARCC HPC account by being added to an HPC project
To access an ARCC HPC resource, you must be added as a member of a project on that resource, whether you’re a UWyo faculty member (Principal Investigator; PI), researcher, or student. (If you’ve received an e-mail from arcc-admin@uwyo.edu indicating you’ve been added to a project, you have access to the HPC cluster.)
2: Log into HPC
SSH Login
If you prefer to log into the cluster over SSH from the command line, the directions depend on the client you’re connecting from and the HPC resource you’re accessing.
On MedicineBow, you must configure your client to log in using an SSH key and certificate.
Information for configuring keys is provided here.
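As a minimal sketch only (the key and certificate filenames come from ARCC’s key-configuration instructions linked above, and every name below is a placeholder), an entry in your local ~/.ssh/config might look like this:

```
# Sketch of an ~/.ssh/config entry; all names are placeholders.
# Use the key and certificate files produced by ARCC's key-configuration process.
Host medicinebow
    HostName <cluster_name>.arcc.uwyo.edu
    User <your_username>
    IdentityFile ~/.ssh/<your_key>               # private key (placeholder filename)
    CertificateFile ~/.ssh/<your_key>-cert.pub   # SSH certificate (placeholder filename)
```

With an entry like this in place you could shorten the login command to ssh medicinebow; otherwise, use the full form shown next.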
In your command line window, type in the following command:
ssh <your_username>@<cluster_name>.arcc.uwyo.edu
When connected, a bunch of text will scroll by; the content varies by cluster. On MedicineBow, for example, you’ll see usage rules, tips, and a summary of your storage utilization across all projects you’re part of.
Upon login to the HPC, the command prompt will look something like this:
[arccuser@mblog1 ~]$
To learn more about the command prompt and command line, please look through our documentation on Command Line Interface.
3: Start Processing
While processing, you may also need to get access to software (3a), transfer data on or off the HPC (3b), or use a graphical interface (3c); see the subsections below.
A key principle of any shared computing environment is that resources are shared among users and therefore must be scheduled. Please DO NOT simply log into the HPC and run your computations without requesting or scheduling resources from Slurm through a batch script or an interactive job.
ARCC uses the Slurm Workload Manager to regulate and schedule user-submitted jobs on our HPC systems. For your job to submit properly to Slurm, you must at minimum specify your account and a time in your submission. There are two ways to run your work on an ARCC HPC system from the command line:
Option 1: Run it as an Interactive Job
Interactive jobs give users access to compute nodes where applications can be run in real time. This may be necessary when performing heavy processing of files or compiling large applications. Interactive jobs are requested with the salloc command. ARCC has configured the clusters so that interactive jobs provide shell access on the compute nodes themselves rather than running on the login node. Example salloc requests are shown below.
The following is the simplest example of a command to start an interactive job. It contains the bare minimum information (account and time) needed to run any job on an ARCC cluster:
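A minimal sketch (the project name is a placeholder):

```bash
# Request an interactive job with only the required information.
# --account is your project name (placeholder shown); --time is the wall-clock limit in HH:MM:SS.
salloc --account=<your_project_name> --time=01:00:00
```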
Breaking it down:
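- salloc asks Slurm to allocate resources and, on ARCC clusters, opens a shell on a compute node once the allocation is granted.
- --account=<your_project_name> charges the job to your project; every job must specify an account.
- --time=01:00:00 requests one hour of wall time; every job must specify a time.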
The next example is a set of commands. The first line allocates an interactive job requesting specific hardware to perform the computations in our session. The second line runs a Python script:
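Again a sketch only; the hardware numbers and script name below are placeholders rather than ARCC recommendations:

```bash
# Request an interactive job with specific hardware (placeholder values).
salloc --account=<your_project_name> --time=02:00:00 --nodes=1 --ntasks=1 --cpus-per-task=4 --mem=8G

# Once the allocation is granted and you have a shell on a compute node,
# run the (hypothetical) Python script; you may first need to load a Python module (see 3a).
python my_script.py
```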
Breaking it down:
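- --nodes=1 requests a single compute node.
- --ntasks=1 requests a single task.
- --cpus-per-task=4 requests four CPU cores for that task.
- --mem=8G requests 8 GB of memory.
- python my_script.py runs the script inside the allocation, on the compute node rather than the login node.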
Option 2: Run it as a Batch Job
A batch job runs one or more tasks on the computing environment. Batch jobs are initiated using scripts or command-line parameters and run to completion without further human intervention (fire and forget). They are submitted to a job scheduler (on ARCC HPC systems, Slurm) and run on the first available compute node(s).
In the following example we create our own batch script, which Slurm then runs to execute our job and any associated tasks. Below is an example of such a batch script.
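A sketch of such a script, using the hypothetical filename run_analysis.sh; every value, the module name, and the Python script name are placeholders:

```bash
#!/bin/bash
# Sketch of a batch script (hypothetical filename: run_analysis.sh).
# All values are placeholders; adjust them for your project and workload.

#SBATCH --account=<your_project_name>   # project to charge (required)
#SBATCH --time=02:00:00                 # wall-clock limit, HH:MM:SS (required)
#SBATCH --nodes=1                       # number of compute nodes
#SBATCH --ntasks=1                      # number of tasks
#SBATCH --cpus-per-task=4               # CPU cores per task
#SBATCH --mem=8G                        # memory for the job
#SBATCH --job-name=example_job          # name shown in the queue
#SBATCH --output=slurm_%j.out           # output file (%j becomes the job ID)

# Everything below runs on the allocated compute node(s).
# Load any software you need (see 3a); the module and script names are hypothetical.
module load python
python my_script.py
```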
Breaking it down:
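- The first line (#!/bin/bash) tells the system to run the script with bash.
- Each #SBATCH line is a directive that Slurm reads when the script is submitted; as with interactive jobs, the account and time directives are required.
- Everything after the directives is an ordinary shell script that runs on the compute node(s) Slurm allocates; here it loads a module and runs a Python script.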
Assuming our batch script and the Python script are complete and ready to run, we log into the HPC, navigate to the location of our script, and submit the job with the following command:
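Using the hypothetical filename from the sketch above:

```bash
# Submit the batch script to Slurm.
sbatch run_analysis.sh
```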
Since this batch script first makes a request to Slurm to schedule our job and allocate resources before performing any computations, we can submit it on the login node.
To learn more about running parallel jobs, running jobs with GPUs, and avoiding common issues, see our Slurm tutorial.
3a. Get access to software
Option 1: Use The Module System
LMOD is software used on HPC clusters to maintain dynamic user environments and let users switch between software stacks and packages. You can check whether software is available as a module by running module spider, as in the example below.
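A sketch of the workflow; the package name python is only an illustration:

```bash
# Search the module system for a package (the name here is only an example).
module spider python

# module spider lists the available versions and any prerequisite modules;
# once you know what you want, load it:
module load python
```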
If you have a software package that is not installed as a module but you think it would be widely utilized, submit a request to us to see if it can be installed. Learn more about using LMOD here.
Option 2: Install it Yourself
If your software packages are research-specific, you may install them in your project space. ARCC will be providing an additional allocation of 250GB in every MedicineBow project directory under /project/ for software installations. Instructions for installing software on your own will vary depending on the software; general instructions may be found here.
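As a generic sketch only (the download URL, package name, and paths are placeholders, and the real steps depend entirely on the software you are installing):

```bash
# Generic "install into project space" sketch; every name and path is a placeholder.
# Many build systems accept a --prefix (or similar option) pointing at a directory you own.
cd /project/<your_project_name>/software
wget https://example.org/some-tool-1.0.tar.gz
tar -xzf some-tool-1.0.tar.gz
cd some-tool-1.0
./configure --prefix=/project/<your_project_name>/software/some-tool
make
make install

# Add the install location to your PATH so the shell can find the new tool:
export PATH=/project/<your_project_name>/software/some-tool/bin:$PATH
```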
3b. Transfer Data on/off HPC
Data transfer can be performed between HPC resources using a number of methods. The two easiest ways to transfer data are detailed below. A cumulative list of methods to transfer data on or off of ARCC resources is detailed here.
Option 1: Southpass OnDemand
Option 2: Globus (For big data transfers)
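In addition to the web-based options above, command-line tools that ride on SSH can also move data; the following is only a sketch with placeholder names and paths, assuming the SSH access configured in step 2 (see the cumulative list linked above for the full set of supported methods):

```bash
# Copy a file from your local machine to your project directory on the cluster
# (placeholder username, cluster name, and paths throughout).
scp ./results.tar.gz <your_username>@<cluster_name>.arcc.uwyo.edu:/project/<your_project_name>/

# rsync can resume interrupted transfers and synchronize whole directory trees:
rsync -avP ./data/ <your_username>@<cluster_name>.arcc.uwyo.edu:/project/<your_project_name>/data/
```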
3c. View Visual Data or Access HPC with Graphics / Visual Interface
If you want to view visual output you’ve created on the cluster or just need access to a GUI (Graphical User Interface), please use Southpass OnDemand. Pages have been created for accessing Beartooth, WildIris, and MedicineBow in a graphical user interface.
Want to learn more or need help on something specific?
Links to ARCC’s HPC Training: Access our in-depth training pages for using our cluster resources by starting here.
Link to Frequently Asked Questions (FAQs)
Links to wiki subpages with more detailed documentation: