The Teton HPC cluster is the successor to Mount Moran. Teton contains several new compute nodes, and all Mount Moran nodes have been reprovisioned within the Teton HPC Cluster. The system is available over SSH at the hostname teton.arcc.uwyo.edu or teton.uwyo.edu. We ask that everyone who uses ARCC resources cite them accordingly; see Citing Teton. Newcomers to research computing should also consider reading the Research Computing Quick Reference.
...
...
Teton has a Digital Object Identifier (DOI) (https://doi.org/10.15786/M2FY47) and we request that all use of Teton appropriately acknowledges the system. Please see Citing Teton for more information.
Available Nodes
See Partitions for information regarding Slurm Partitions on Teton.
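You can also inspect the partitions directly from a login node with the standard Slurm sinfo utility; for example (the flags shown are standard sinfo options, and the partition name is just the example from this page):

sinfo -s          # one-line summary of each partition and its node states
sinfo -p teton    # node details for the teton partition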
...
Teton has login nodes for users to access the cluster. The login nodes are publicly reachable at the hostname teton.arcc.uwyo.edu or teton.uwyo.edu. On macOS or Linux, SSH is available natively through the terminal via the ssh command. X11 forwarding is supported, but if you need graphical applications we recommend using FastX whenever possible. Additionally, you may want to configure your OpenSSH client for connection multiplexing if you require multiple terminal sessions. If your network connectivity is unreliable, you may want to start tmux or screen after logging in to keep sessions alive during disconnects; you can then reconnect to these sessions later.
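As an illustrative sketch (the Host alias and socket path below are arbitrary choices, and the ~/.ssh/sockets directory must exist), connection multiplexing can be enabled by adding an entry like this to ~/.ssh/config on your own machine:

Host teton
    HostName teton.arcc.uwyo.edu
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m

With this in place, "ssh teton" starts the first (master) connection, and additional sessions opened while it is alive reuse the same connection.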
...
Teton has several shells available for use. The default is bash. To change your default shell, please submit a request through the standard ARCC request methods.
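If you are unsure which shell you are currently using, or which shells the login nodes provide, standard commands such as the following will tell you:

echo $SHELL        # your current default login shell
cat /etc/shells    # shells installed on the system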
...
The following tables list each node that has GPUs and the type of GPU installed.
Table #1: GPU nodes and installed GPU types.
The following two GPU nodes are reserved for AI use.
Table #2
Node | Partition | GPU Type | Number of Devices | GPU Memory Size (GB) | Compute Capability | GRES Flag | In Teton Partition? | Notes |
---|---|---|---|---|---|---|---|---|
mdgx01 | dgx | Tesla V100 | 8 | 16 | 7.0 | gpu:V100-16g:{1-8} | No | |
tdgx01 | dgx | Tesla V100 | 8 | 32 | 7.0 | gpu:V100-32g:{1-8} | No | |
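As a sketch of how the GRES flags in Table #2 are used (assuming your project has been granted access to the dgx partition), a job asking for four of the 16 GB V100 devices would include directives such as:

#SBATCH --partition=dgx
#SBATCH --gres=gpu:V100-16g:4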
...
Node sharing is accessed by requesting fewer than the full number of GPUs or CPUs on a node, or less than its full memory; sharing can be constrained by any one of these, or by all three. By default, each job gets 3.5 GB of memory per requested core (the lowest common denominator among our cluster nodes), so to request a different amount of memory you must use the "--mem" flag. To request all of a node's memory, and thereby effectively exclusive use of the node, use "--mem=0".
Example #1
An example script requesting two Teton nodes, each with two K20m GPUs, including all cores and all memory, and running one GPU per MPI task, would look like this:
#SBATCH --nodes=2
#SBATCH --mem=0
#SBATCH --partition=teton
#SBATCH --account=<account>
#SBATCH --gres=gpu:k20m:2
#SBATCH --time=1:00:00

... Other job prep

srun myprogram.exe
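Assuming the script above is saved as, say, gpu_job.sh (an illustrative filename), it is submitted and monitored with the standard Slurm commands:

sbatch gpu_job.sh     # submit the job script
squeue -u $USER       # check the job's place in the queue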
Example #2
To request all 8 K80 GPUs on a Teton node, again using one GPU per MPI task, we would do:
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH --partition=teton
#SBATCH --account=<account>
#SBATCH --gres=gpu:k80:8
#SBATCH --time=1:00:00

... Other job prep

srun myprogram.exe
Example #3
As another example, the job script below requests four GPUs, four CPU cores, and 8 GB of memory. The remaining GPUs, CPUs, and memory are then accessible to other jobs.
#SBATCH --ntasks=4
#SBATCH --nodes=1
#SBATCH --mem=8G
#SBATCH --partition=teton
#SBATCH --account=<account>
#SBATCH --gres=gpu:k80:4
#SBATCH --time=00:30:00

... Other job prep

srun myprogram.exe
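Once a shared-node job like this starts, Slurm confines it to the GPUs it was allocated. One way to confirm which devices your job sees (assuming Slurm exports CUDA_VISIBLE_DEVICES for GPU GRES allocations, as is typical) is to add checks like these to the job script:

echo $CUDA_VISIBLE_DEVICES   # GPU indices assigned to this job
nvidia-smi                   # hardware view of the visible GPUs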
Example #4
To run a parallel interactive job with MPI, do not use the usual "srun" command, as this does not work properly with the "gres" request. Instead, use the "salloc" command, as sketched below.
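A minimal sketch, reusing the partition, account, and GRES flags from the batch examples above:

salloc --ntasks=4 --nodes=1 --partition=teton --account=<account> --gres=gpu:k80:4 --time=00:30:00

Once the allocation is granted, salloc drops you into a shell from which you can launch your MPI program on the allocated resources.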
...