Loren Hardware

Nodes

Node Name

Description

Management Node

The management node runs the Slurm controller and manages the cluster.

Login Node

User access node for the cluster. Provides SSH access for users to interact with the Linux CLI.

Storage Node

Provides medium-speed storage for working datasets on the cluster. Contains 110 TB of usable storage space.

Compute Nodes (55)

55 compute nodes, each with 4 Tesla K20Xm GPUs (6 GB each).

These GPUs have compute capability 3.5.

Large Memory Nodes (4)

Some allow direct login for running graphics codes.
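Work on the compute nodes is scheduled through Slurm. The following is a minimal batch-script sketch, assuming a "gpu" generic resource (GRES) is defined for the K20Xm cards; the job and program names are hypothetical:

```shell
#!/bin/bash
# Minimal Slurm batch script sketch; job and program names are hypothetical.
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --gres=gpu:2        # request 2 of the node's 4 K20Xm GPUs (assumes a "gpu" GRES is defined)
#SBATCH --time=01:00:00

srun ./my_gpu_program       # hypothetical executable
```

Submit from the login node with `sbatch job.sh` and monitor the queue with `squeue`.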

 

Nodes

Nodes: 0-10, 12-19, 21-26, 28, 29, 32-35, 37-44, 46-51, 53-54, 57-60
Partition: Loren (default)
Spec: MemTotal: 65G; CPU(s): 20; Thread(s) per core: 1; Core(s) per socket: 10; Socket(s): 2; CPU family: 6; Model: 62; Model name: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz; CPU MHz: 3600.000; GPUs: 4 x Tesla K20Xm

Nodes: 30-31
Partition: quick
Spec: MemTotal: 65G; CPU(s): 28; Thread(s) per core: 1; Core(s) per socket: 14; Socket(s): 2; CPU family: 6; Model: 79; Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz; CPU MHz: 3300.000; GPUs: 4 x Tesla K20Xm

Nodes: 70-73
Partition: Loren-k80
Spec: MemTotal: 125G; CPU(s): 40; Thread(s) per core: 1; Core(s) per socket: 20; Socket(s): 2; CPU family: 6; Model: 79; Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz; CPU MHz: 3600.000; GPUs: 4 x Tesla K80

Nodes: 11, 20, 27, 36, 45, 52, 55, 56
Partition: offline

Nodes: 51, 53-54, 57-60
Partition: mango

Networks

Network Name

Description

Management Network

This network carries inter-node communication for Slurm and other management functions.

High-speed InfiniBand Network

This network is used for moving data and running MPI jobs within the cluster.

Both networks connect to all of the nodes listed above.
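An MPI job that exercises the InfiniBand network would typically be launched through Slurm as well. A sketch, assuming Slurm's srun integration with the installed MPI library; the executable name is hypothetical:

```shell
#!/bin/bash
# Sketch of a multi-node MPI job; inter-rank traffic runs over the InfiniBand
# fabric when the MPI library is built with InfiniBand support.
#SBATCH --job-name=mpi-example   # hypothetical job name
#SBATCH --nodes=4                # spread ranks across four compute nodes
#SBATCH --ntasks-per-node=20     # one rank per core on the default-partition nodes
#SBATCH --time=02:00:00

srun ./my_mpi_program            # hypothetical MPI executable
```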