...

Node Name               Description
Management Node         Runs the Slurm controller and manages the cluster.
Login Node              User access point for the cluster; provides SSH access to the Linux CLI (see the example after this table).
Storage Node            Medium-speed storage for working datasets; 110 TB of usable space.
Compute Nodes (55)      55 compute nodes, each with 4 x Tesla K20Xm 6 GB GPUs.
Large Memory Nodes (4)  Large-memory nodes; some allow direct login for running graphics codes.
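As a minimal sketch of cluster access (the hostname is a placeholder; use the site's actual login address), a user connects through the login node and can list the partitions described below with standard Slurm commands:

Code Block
ssh username@cluster-login.example.org   # SSH to the login node (placeholder address)
sinfo                                    # list partitions and node states
sinfo -N -l                              # per-node view: CPUs, memory, state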

Node groups by partition:

Nodes: 01, 03-10, 12, 13, 15-17, 19, 21-26, 28, 29, 32-35, 38, 40-43, 47-50
Partition: Loren (default)
Spec:
Code Block
MemTotal:            65G
CPU(s):              20
Thread(s) per core:  1
Core(s) per socket:  10
Socket(s):           2
CPU family:          6
Model:               62
Model name:          Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
CPU MHz:             3600.000
4 x GPU: Tesla K20Xm

Nodes: 02, 14, 18, 37, 39, 44, 46
Partition: Loren (default)
Spec: Same as above, but with CPU MHz: 3100.384

Nodes: 30-31
Partition: quick
Spec:
Code Block
MemTotal:            65G
CPU(s):              28
Thread(s) per core:  1
Core(s) per socket:  14
Socket(s):           2
CPU family:          6
Model:               79
Model name:          Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU MHz:             3300.000
4 x GPU: Tesla K20Xm

Nodes: 70-73
Partition: Loren-k80
Spec:
Code Block
MemTotal:            125G
CPU(s):              40
Thread(s) per core:  1
Core(s) per socket:  20
Socket(s):           2
CPU family:          6
Model:               79
Model name:          Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
CPU MHz:             3600.000
4 x GPU: Tesla K80

Nodes: 11, 20, 27, 36, 45, 52, 55, 56
Partition: offline

Nodes: 51, 53-54, 57-60
Partition: mango

...
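As a usage sketch for the partition table above (the partition name, the GRES name "gpu", and the resource sizes are assumptions about this site's Slurm configuration, not confirmed settings), a batch job requesting one of the K20Xm GPUs might look like:

Code Block
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=Loren     # default partition from the table above (assumed name)
#SBATCH --gres=gpu:1          # one of the node's 4 GPUs; assumes the GRES is named "gpu"
#SBATCH --cpus-per-task=10    # half of a 20-core Loren node
#SBATCH --mem=32G             # within the 65G MemTotal shown above
#SBATCH --time=01:00:00

nvidia-smi                    # report the GPU(s) allocated to the job

Submit the script with "sbatch job.sh"; squeue shows its state while it waits for a free node.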