Getting Started

Steps to get started in HPC with ARCC:
1: Get an ARCC HPC account by being added to an HPC project

To access an ARCC HPC resource, you must be added to a project on that resource, whether you're a UWyo faculty member (Principal Investigator; PI), researcher, or student. (If you've received an e-mail from arcc-admin@uwyo.edu indicating you've been added to a project, you have access to the HPC cluster.)

If you're a UWyo faculty member (PI), create a project here: Request an HPC research project
If you're a collaborative researcher/student, please contact the PI you're working with and ask them to either: (i) request a new HPC project with you listed as a member, or (ii) add you to an existing project.
If UWyo is not your primary institution, please contact the UWyo faculty member you're working with and have them Request an external collaborator account.
After you get your external collaborator account, the PI will need to make a request to add your external collaborator account to a project (step ii).
Be sure to ask the PI which HPC resource the project is associated with - ARCC has several.

While you wait to be associated with a project, please take some time to learn how to set up and use Two-factor authentication (2FA).
If you plan to work from off campus, you'll need to learn how to set up and use a VPN connection.
Note: the process of becoming associated with a project can take hours to days, depending on workload. Once the process is complete, you will receive an automatically generated email from arcc-admin@uwyo.edu (don't email this address - it is not monitored).

2: Log into HPC

Now you’re ready to connect/login to a cluster!
Note: you must not be connected to the UWguest wireless network.
Open a terminal window or similar command line interface (CLI). Learn how.
Type "ssh <username>@<clustername>.arcc.uwyo.edu" and press Enter.
For example:
ssh arccuser@teton.arcc.uwyo.edu
The first time you log in you will get a message saying the 'authenticity of the host … can't be established' and asking if you 'are sure you want to continue?'.
Enter ‘yes’.
You will then see a Notice to Users and a Two-factor Authentication message. With your mobile device ready, enter your password and accept the Duo Mobile (2FA) challenge when it pops up.
The Two-factor message may say something about entering your password in this form: <password, token>. This is no longer necessary, but still possible to do.
Once you are connected, a bunch of text will scroll by. This will vary depending on the cluster. On Teton, for example, there are usage rules, tips, and a summary of your storage utilization across all projects that you are part of.
Note that when you are logged in, the command prompt will look something like this: [arccuser@tlog3 ~]$
This shows your username and which login node you are currently utilizing. The login node (here, tlog3) can, and probably will, change from one session to the next.
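If you connect frequently, an entry in your local OpenSSH client configuration can shorten the login command. A minimal sketch; the alias and username here are hypothetical, so substitute your own:

# in ~/.ssh/config on your local machine
Host teton
    HostName teton.arcc.uwyo.edu
    User arccuser        # replace with your username

# afterwards you can connect with just: ssh teton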
3: Start Processing

While processing, you may also need to: get access to software (3a), transfer data on/off HPC (3b), or view visual data (3c).

A key principle of any shared computing environment is that resources are shared among users and therefore must be scheduled. Please DO NOT simply log into the HPC and run your computations without requesting or scheduling resources from Slurm through a batch script or interactive job. ARCC uses the Slurm Workload Manager to regulate and schedule user-submitted jobs on our HPC systems. In order for your job to submit properly to Slurm, you must at minimum specify your account and a time in your submission.

There are 2 ways to run your work on ARCC HPC systems from the Command Line:

Option 1: Run it as an Interactive Job
These are jobs that allow users access to compute nodes where applications can be run in real time. This may be necessary when performing heavy processing of files, or compiling large applications. Interactive jobs can be requested with an salloc command, as in the sketch below.
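A minimal sketch of an interactive request; the account name is a hypothetical placeholder (use your own project's Slurm account), and the node/task counts are just examples:

# request 1 task on 1 node for 1 hour; 'arccproject' is a placeholder
salloc --account=arccproject --time=01:00:00 --nodes=1 --ntasks=1
# when the allocation is granted you get an interactive shell;
# run your work there, then type 'exit' to release the resources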
Option 2: Run it as a Batch Job
This means running one or more tasks in a computing environment. Batch jobs are initiated using scripts or command-line parameters. They run to completion without further human intervention (fire and forget). Batch jobs are submitted to a job scheduler (on ARCC HPC, Slurm) and run on the first available compute node(s). A minimal script sketch follows.
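A minimal sketch of a Slurm batch script; the account name, job name, and workload are hypothetical placeholders, and your cluster may expect additional directives:

#!/bin/bash
#SBATCH --account=arccproject     # placeholder - use your project's account
#SBATCH --time=00:30:00           # walltime limit (30 minutes)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --job-name=myjob          # placeholder job name
#SBATCH --output=myjob_%j.out     # %j expands to the Slurm job ID

echo "Running on $(hostname)"     # replace with your actual work

Submit it with "sbatch myscript.sh" and monitor it with "squeue -u $USER"; both are standard Slurm commands.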
To learn more about running parallel jobs, running jobs with GPUs, and avoiding common issues, see our SLURM tutorial.
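As a taste of what the tutorial covers, parallel and GPU resources are requested with additional directives in the same script; the counts below are illustrative and GPU availability varies by cluster:

#SBATCH --ntasks=16       # run 16 parallel tasks (e.g., MPI ranks)
#SBATCH --gres=gpu:1      # request one GPU on the allocated node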
3a. Get access to software

Option 1: Use the Module System
LMOD is software used on HPC clusters to maintain dynamic user environments and allow users to switch between software stacks and packages on HPC resources. You may check whether software is available as a module by running a module spider search, as in the example below.
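A short sketch of the typical Lmod workflow; 'python' and the version shown are illustrative, and the exact names/versions available vary by cluster:

module spider python         # search the module tree for a package
module spider python/3.10    # show how to load a specific version found above
module load python/3.10      # load it into your environment
module list                  # confirm what is currently loaded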
If you have a software package that is not installed as a module, but you think it would be widely utilized, make a request with us to see if it can be installed. Learn more about using LMOD here.

Option 2: Install it Yourself
If your software packages are somewhat research specific, you may install them to your project. ARCC will be providing an additional allocation of 250GB in every MedicineBow /project directory under /project/ for software installations. Information on installing software on your own will vary depending on the software. General instructions may be found here. One common pattern is sketched below.
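For example, a Python package can be installed into a project directory rather than your home directory; the project path and package names here are hypothetical, so adapt them to your software and actual directory:

module load python                  # if Python is provided as a module
pip install --target=/project/myproject/software/mytools sometool
# make the install visible to Python in this session:
export PYTHONPATH=/project/myproject/software/mytools:$PYTHONPATH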
3b. Transfer Data on/off HPC

Data transfer can be performed between HPC resources using a number of methods. The two easiest ways to transfer data are detailed below. A cumulative list of methods to transfer data on or off of ARCC resources is detailed here.

Option 1: Southpass
Option 2: Globus (For big data transfers)
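Beyond these two, standard command-line tools also work for smaller transfers; a sketch run from your local machine, with hypothetical username, cluster, and paths:

# copy one file from your local machine to the cluster
scp results.csv arccuser@teton.arcc.uwyo.edu:/project/myproject/

# or mirror a directory, with progress and resumable partial transfers
rsync -avP ./dataset/ arccuser@teton.arcc.uwyo.edu:/project/myproject/dataset/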
3c. View Visual Data or Access HPC with Graphics / Visual Interface

If you want to view visual output you've created on Beartooth or just need access to a GUI (Graphical User Interface), please use Southpass. Pages have been created for accessing Beartooth and Wildiris in a graphical user interface.