...
The ARCC Beartooth cluster has a number of compute nodes that contain GPUs. The following table lists each GPU-equipped node type and the model of GPU installed.
Some partitions may be in the process of migrating to MedicineBow (MB); run `sinfo` for the current list of partitions.

| GPU Type | Partition | Example Slurm request | # of Nodes | GPU Devices per Node | CUDA Cores per GPU | Tensor Cores per GPU | GPU Memory per GPU | Compute Capability |
|---|---|---|---|---|---|---|---|---|
| Tesla P100 | `teton-gpu` (all available on `non-investor`) | `#SBATCH --partition=teton-gpu`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 8 | 2 | 3584 | 0 | 16 GB | 6.0 |
| V100 | `dgx` (both available on `non-investor`) | `#SBATCH --partition=dgx`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 2 | 8 | 5120 | 640 | 16/32 GB | 7.0 |
| A30 | `beartooth-gpu` (4)<br>`mb-a30` (8)<br>`non-investor` (3) | `#SBATCH --partition=beartooth-gpu`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 15 | 7 on Beartooth/`non-investor`; 8 on MedicineBow | 3584 | 224 | 24 GB | 8.0 |
| T4 | `non-investor` | `#SBATCH --partition=non-investor`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 2 | 3 | 2560 (3804 on MB) | 320 (224 on MB) | 16 GB (24 GB on MB) | 7.5 |
| L40S | `mb-l40s` (5) | `#SBATCH --partition=mb-l40s`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 5 | 8 | | 568 | 48 GB | |
| H100 | `mb-h100` (6) | `#SBATCH --partition=mb-h100`<br>`#SBATCH --gres=gpu:<#_gpu_requested>` | 6 | 8 | 16896 | 528 | 80 GB | |
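As a usage sketch, the `--partition` and `--gres` directives above can be combined into a complete batch script. The job name, walltime, and account below are placeholders (not site defaults); substitute your own project account and the partition you need:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-test        # placeholder job name
#SBATCH --account=<your_project>   # replace with your ARCC project account
#SBATCH --time=00:10:00            # placeholder walltime
#SBATCH --partition=teton-gpu     # any GPU partition from the table
#SBATCH --gres=gpu:1               # request one GPU on the node

# Confirm which GPU(s) Slurm assigned to this job
nvidia-smi
```

Submit with `sbatch <script_name>`. The same flags work for interactive sessions, e.g. `salloc --account=<your_project> --partition=teton-gpu --gres=gpu:1`.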
...