
| Partition | Slurm Partition Name | Requestable Features | Node Count | Sockets/Node | Cores/Socket | Threads/Core | Total Cores/Node | RAM (GB) | Processor (x86_64) | Local Disks | OS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Moran Regular | moran | fdr,intel,sandy,ivy,community | 280 | 2 | 8 | 1 | 16 | 64 or 128 | Intel Ivy Bridge / Sandy Bridge | 1 TB HD | RHEL 7.9 |
| Moran BigMem | moran, moran-bigmem-gpu | fdr,intel,haswell | 2 | 2 | 8 | 1 | 16 | 512 | Intel Haswell | 1 TB HD | RHEL 7.9 |
| Moran Debug | moran | fdr,intel,ivy,debug | 2 | 2 | 8 | 1 | 16 | 64 | Intel Ivy Bridge | 1 TB HD | RHEL 7.9 |
| Moran HugeMem | moran, moran-hugemem | fdr,intel,haswell,community | 2 | 2 | 8 | 1 | 16 | 1024 | Intel Haswell | 1 TB HD | RHEL 7.9 |
| Moran DGX | dgx | edr,intel,broadwell | 1 | 2 | 10 | 2 | 40 | 512 | Intel Broadwell | 7 TB SSD | Ubuntu 18.04.2 LTS |
| Moran Test | arcc | fdr,intel,haswell | 1 | 2 | 10 | 1 | 20 | 64 | Intel Haswell | 300 GB HD | RHEL 7.9 |
| Teton Regular | teton | edr,intel,broadwell,community | 180 | 2 | 16 | 1 | 32 | 128 | Intel Broadwell | 240 GB SSD | RHEL 7.9 |
| Teton Cascade | teton-cascade | edr,intel,cascade,community | 56 | 2 | 20 | 1 | 40 | 192 or 768 | Intel Cascade Lake | 240 GB SSD | RHEL 7.9 |
| Teton BigMem GPU | teton-gpu | edr,intel,broadwell,community | 8 | 2 | 16 | 1 | 32 | 512 | Intel Broadwell | 240 GB SSD | RHEL 7.9 |
| Teton HugeMem | teton-hugemem | edr,intel,broadwell | 10 | 2 | 16 | 1 | 32 | 1024 | Intel Broadwell | 240 GB SSD | RHEL 7.9 |
| Teton Massive Memory | teton-massmem | edr,amd,epyc | 2 | 2 | 24 | 1 | 48 | 4096 | AMD EPYC | 4096 GB SSD | RHEL 7.9 |
| Teton KNL | teton-knl | edr,intel,knl | 12 | 1 | 18 | 4 | 72 | 384 | Intel Knights Landing | 240 GB SSD | RHEL 7.9 |
| Teton DGX | dgx | edr,intel,broadwell | 1 | 2 | 10 | 2 | 40 | 512 | Intel Broadwell | 7 TB SSD | Ubuntu 18.04.2 LTS |
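
To target one of these partitions from a batch job, pass the Slurm partition name from the table to --partition. The sketch below is a minimal example; the account, job name, and executable are placeholders and should be replaced with your own.

```bash
#!/bin/bash
# Minimal sketch of a batch job aimed at the Teton regular partition.
# Account, job name, and executable below are placeholders.
#SBATCH --job-name=example_job
#SBATCH --account=youraccount          # replace with your ARCC project/account
#SBATCH --partition=teton              # Slurm partition name from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32           # Teton regular nodes have 32 cores
#SBATCH --time=01:00:00

srun ./my_program                      # placeholder executable
```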

| Feature | Description of Feature |
|---|---|
| fdr | nodes are connected with an InfiniBand (FDR) link with a signaling rate of 14.0625 Gbit/s |
| edr | nodes are connected with an InfiniBand (EDR) link with a signaling rate of 25.78125 Gbit/s |
| intel | nodes with Intel processors |
| ivy | nodes with Intel Ivy Bridge processors |
| sandy | nodes with Intel Sandy Bridge processors |
| broadwell | nodes with Intel Broadwell processors |
| haswell | nodes with Intel Haswell processors |
| knl | nodes with Intel Knights Landing processors |
| amd | nodes with AMD processors |
| epyc | nodes with AMD EPYC processors |
| community | indicates a node shared equally among the research community. Jobs on these nodes can't be pre-empted, but can be queued for far longer. |
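
Features are requested with Slurm's --constraint option; multiple features can be combined with & (AND) or | (OR). A minimal sketch, assuming a job that must land on a Haswell node with FDR InfiniBand (account, job name, and executable are placeholders):

```bash
#!/bin/bash
# Sketch: constrain a job to nodes carrying particular features.
# Account, job name, and executable are placeholders.
#SBATCH --job-name=feature_example
#SBATCH --account=youraccount
#SBATCH --partition=moran
#SBATCH --constraint="fdr&haswell"     # AND of two features; use | for OR, e.g. "ivy|sandy"
#SBATCH --nodes=1
#SBATCH --time=00:30:00

srun ./my_program                      # placeholder executable
```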

GPUs and Accelerators

The ARCC Teton cluster has a number of compute nodes that contain GPUs. The following table lists each partition that contains GPU nodes and the type of GPU installed.

| Partition | GPU Type | Slurm Value to Request | # of Nodes | # of GPU Devices per Node | CUDA Cores | GPU Memory Size (GB) | Compute Capability |
|---|---|---|---|---|---|---|---|
| moran | GeForce GTX Titan | | 1 | 1 | 2688 | 6 | 3.5 |
| moran | GeForce GTX Titan X | | 2 | one node has 2, one node has 1 | 3072 | 12 | 5.2 |
| moran | Tesla K20m | `#SBATCH --gres=gpu:k20:1` | 18 | 2 (one node has only 1) | 2496 | 4.7 | 3.5 |
| moran | Tesla K20Xm | | 14 | 2 (one node has only 1) | 2688 | 5.7 | 3.5 |
| moran | Tesla K40c | | 1 | 2 | 2880 | 11.4 | 3.5 |
| moran-bigmem-gpu | Tesla K80 | `#SBATCH --partition=moran-bigmem-gpu` and `#SBATCH --gres=gpu:x` (where x is the number of devices) | 2 | 8 | 2496 | 11.4 | 3.7 |
| teton-gpu | Tesla P100 | `#SBATCH --partition=teton-gpu` and `#SBATCH --gres=gpu:2` | 8 | 2 | 3584 | 16 | 6.0 |
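
Putting the pieces together, a GPU job selects the GPU partition and requests devices through --gres. Below is a minimal sketch for one Tesla P100 on the teton-gpu partition; the account, job name, module name, and executable are placeholders, not prescribed values.

```bash
#!/bin/bash
# Sketch: request a single P100 GPU on the teton-gpu partition.
# Account, job name, module name, and executable are placeholders.
#SBATCH --job-name=gpu_example
#SBATCH --account=youraccount
#SBATCH --partition=teton-gpu
#SBATCH --gres=gpu:1                   # request one GPU device; use gpu:2 for both
#SBATCH --nodes=1
#SBATCH --time=02:00:00

module load cuda                       # module name is an assumption; check 'module avail'
srun ./my_gpu_program                  # placeholder executable
```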

