MedicineBow Hardware Summary Table

MedicineBow Hardware

The MedicineBow cluster was developed and released to the UW campus community in the summer of 2024 and is operated under a condo model (detailed further in this section). It currently hosts UW ARCC nodes and UW researcher-investment nodes. Any MedicineBow user can access any node on the cluster on a preemptible basis; in addition, 15 non-investor CPU nodes in the non-investor partition are available to all MedicineBow users without being subject to preemption.
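As a minimal sketch of how a user might request a node from a partition, assuming standard Slurm commands are available on the login nodes (the partition name `mb` comes from the table below; the account name is a placeholder you would replace with your own project):

```shell
# Request an interactive session on a MedicineBow compute node.
# --account is a placeholder; substitute your own project/account name.
salloc --account=yourproject --partition=mb --nodes=1 --ntasks=4 --time=01:00:00

# List partitions and their node states to see what is available.
sinfo --summarize
```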

New MedicineBow Hardware

| Slurm Partition Name | Requestable Features | Node Count | Sockets/Node | Cores/Socket | Threads/Core | Total Cores/Node | RAM (GB) | Processor (x86_64) | Local Disks | OS | Use Case | Key Attributes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mb | amd, epyc | 25 | 2 | 48 | 1 | 96 | 1024 | 2x 48-core/96-thread 4th Gen AMD EPYC 9454 | 4 TB SSD | RHEL 9.3 | Compute jobs on the latest MedicineBow hardware | MB compute with 1 TB RAM |
| mb-a30 | amd, epyc | 8 | | | | | 768 | | | | DL inference, AI, mainstream acceleration | MB compute with 24 GB RAM/GPU & A30 GPUs |
| mb-l40s | amd, epyc | 5 | | | | | 768 | | | | DL inference, Omniverse/rendering, mainstream acceleration | MB compute with 48 GB RAM/GPU & L40S GPUs |
| mb-h100 | amd, epyc | 6 | | | | | 1228 | | | | DL training and inference, DA, AI, mainstream acceleration | MB compute with 80 GB RAM/GPU & NVIDIA SXM5 H100 GPUs |
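A GPU job targeting one of the partitions above might look like the following batch-script sketch. The `--gres=gpu:1` and `--constraint` syntax is standard Slurm; the per-node GPU counts are not listed in the table, so requesting a single GPU here is an assumption, and the account name and `train.py` are placeholders:

```shell
#!/bin/bash
#SBATCH --account=yourproject     # placeholder; use your own project name
#SBATCH --partition=mb-h100       # H100 partition from the table above
#SBATCH --gres=gpu:1              # request one GPU (per-node count is an assumption)
#SBATCH --constraint=epyc         # requestable feature from the table
#SBATCH --time=04:00:00

srun python train.py              # placeholder workload
```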

Former Beartooth Hardware (to be consolidated into MedicineBow or retired - pending)