The Teton HPC cluster is the successor to Mount Moran. Teton contains several new compute nodes, and all Mount Moran nodes have been reprovisioned within the Teton HPC cluster. The system is available over SSH at hostname teton.arcc.uwyo.edu or teton.uwyo.edu. We ask that everyone who uses ARCC resources cite them accordingly; see Citing Teton. Newcomers to research computing should also consider reading the Research Computing Quick Reference.


...


...

Code Block
ssh USERNAME@teton.arcc.uwyo.edu

ssh -l USERNAME teton.arcc.uwyo.edu

ssh -Y -l USERNAME teton.arcc.uwyo.edu                          # For secure forwarding of X11 displays

ssh -X -l USERNAME teton.arcc.uwyo.edu                          # For forwarding of X11 displays

OpenSSH Configuration File (BSD, Linux, macOS)

...

Code Block
Host teton
  HostName teton.arcc.uwyo.edu
  User USERNAME
  # Reuse a single authenticated connection for multiple sessions
  ControlMaster auto
  ControlPath ~/.ssh/ssh-%r@%h:%p

WARNING: While ARCC allows SSH multiplexing, other research computing sites may not. Do not assume this will always work on systems not administered by ARCC.
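With the configuration above in place, the host alias can be used directly. A quick sketch of how multiplexing behaves in practice (the session shown is illustrative):

Code Block
ssh teton        # first connection authenticates and creates the control socket
ssh teton        # later connections reuse the socket and skip re-authentication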

...

  • Specialty Nodes: These are specialty nodes that are available to specific users and are requested via a partition request, e.g. "dgx"; see Table 2 above. Use the following partition request to access these nodes.
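A sketch of the corresponding batch directive, assuming the partition is named exactly as listed in Table 2 (e.g. "dgx"):

Code Block
#SBATCH --partition=dgx        # request the DGX specialty nodes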

...

The "gres" flag attached to each type of node can be found in the second-to-last column of Table 1. For example, the flag -gres=gpu:titanx:1

...

 must must be used to request one (1) GTX Titan X device that can only be satisfied by the nodes with the GTX Titan X in them.
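For instance, the flag can be given either as a batch directive or directly on an srun command line (values here are illustrative):

Code Block
#SBATCH --gres=gpu:titanx:1              # batch directive: one GTX Titan X per node
srun --gres=gpu:titanx:1 nvidia-smi      # or on an srun command line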

...

Code Block
echo $CUDA_VISIBLE_DEVICES    # lists the GPU device indices assigned to this job

An empty output string implies NO access to the node's GPU devices.
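A sketch of the two cases (the requested gres value is illustrative):

Code Block
# Job step that requested GPUs, e.g. --gres=gpu:2
echo $CUDA_VISIBLE_DEVICES    # -> 0,1
# Job step that requested no GPUs
echo $CUDA_VISIBLE_DEVICES    # -> (empty; no GPU access)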

...

Node sharing can be accessed by requesting fewer than the full number of GPUs, CPUs, or memory on a node; sharing can be done on the basis of GPUs, CPUs, and/or memory, or all three. By default, each job gets 3.5 GB of memory per core requested (the lowest common denominator among our cluster nodes), so to request a different amount of memory you must use the "--mem" flag. To request exclusive use of the node, use "--mem=0", which claims all of the node's memory.

Example #1

An example script that would request two Teton nodes with 2xK20m GPUs, including all cores and all memory, running one GPU per MPI task, would look like this:
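A minimal sketch of such a script, assuming the "k20m" gres label from Table 1 (treat the exact gres label and partition name as placeholders to verify against the tables):

Code Block
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2        # one MPI task per GPU
#SBATCH --gres=gpu:k20m:2          # two K20m devices per node (label is a placeholder)
#SBATCH --exclusive                # all cores on each node
#SBATCH --mem=0                    # all memory on each node
#SBATCH --partition=teton          # placeholder partition name
#SBATCH --account=<account>
#SBATCH --time=01:00:00
srun myprogram.exe                 # launch 4 MPI tasks (2 per node); the application assigns one GPU per task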

...

Another example: the job script below will request four GPUs, four CPU cores, and 8 GB of memory. The remaining GPUs, CPUs, and memory will then be accessible to other jobs.

Code Block
#SBATCH --ntasks=4               # four tasks (one CPU core each)
#SBATCH --nodes=1
#SBATCH --mem=8G                 # 8 GB total (a bare number would be interpreted as MB)
#SBATCH --partition=teton
#SBATCH --account=<account>
#SBATCH --gres=gpu:k80:4         # four K80 GPU devices
#SBATCH --time=00:30:00
... Other job prep
srun myprogram.exe

...

This will allocate the resources to the job but keep the prompt on the login node. You can then use "srun" or "mpirun" commands to launch the calculation on the allocated compute node resources.
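As a sketch, assuming the allocation is made with salloc (account, partition, and sizes are illustrative):

Code Block
salloc --account=<account> --partition=teton --nodes=1 --ntasks=4 --time=00:30:00
srun ./myprogram.exe    # runs on the allocated compute node, not the login node
exit                    # release the allocation when finished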

...

To invoke OpenACC, use the "-acc" flag. More information on OpenACC can be obtained at http://www.openacc.org.

...