Facilities Statement


The Advanced Research Computing Center (ARCC), located at the University of Wyoming (UW), is the university's primary research computing facility. All ARCC operations promote the growth of UW's research and education activities. Resources provided by the department include high performance computing (HPC), large-scale research data storage, and user training and consultation.

Research Computing Systems

Specialized ARCC staff members perform on-site administration, maintenance, and support of all ARCC-hosted systems and research support resources, including UW's primary HPC cluster, Medicinebow, a heterogeneous condominium cluster open to all facets of research. ARCC also supports several specialty clusters and research services used by UW research organizations.

Data Center and Infrastructure

The Division of Information Technology opened its 6,000-square-foot data center in the winter of 2008. HPC hardware is housed in liquid-cooled enclosures, and the facility provides highly redundant power, cooling, and security infrastructure. Systems are interconnected using InfiniBand and high-speed Ethernet.

Medicinebow HPC Cluster Configuration

Medicinebow is an x86_64-based HPC cluster running RHEL 9. Job scheduling is based on the condominium model, which encourages researchers to purchase nodes for use in our HPC environment in exchange for priority and predictable access. ARCC leverages fair-share mechanisms to distribute computational workloads across varied research projects, along with weighted scheduling parameters (job size, age, etc.), to maximize efficient cluster utilization; the sketch below illustrates the idea. Medicinebow is backed by the VAST data platform, hosting over 3 PB of high-performance storage.
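
To make the weighting concrete, here is a toy Python sketch of a Slurm-style multifactor priority calculation. It is an illustration only, not ARCC's actual configuration: the weights are hypothetical, and real values are set in the site's scheduler configuration.

```python
# Toy sketch of Slurm-style multifactor job priority: each factor is
# normalized to [0.0, 1.0] and combined as a weighted sum.
# The weights below are hypothetical; real values are site-configured.
def job_priority(age, fairshare, jobsize,
                 w_age=1000, w_fairshare=10000, w_jobsize=500):
    return int(w_age * age + w_fairshare * fairshare + w_jobsize * jobsize)

# A fresh job from a lightly used project (high fair-share factor)
# outranks an older job from a heavily used project.
print(job_priority(age=0.2, fairshare=0.9, jobsize=0.2))  # 9300
print(job_priority(age=0.9, fairshare=0.1, jobsize=0.2))  # 2000
```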

The Medicinebow cluster (currently over 300 nodes) is heterogeneous, comprising several specialty partitions that address a wide variety of scientific pipelines and workloads, including mass-memory (4 TB RAM) nodes and a variety of GPU nodes. The system uses the Slurm workload manager and Lmod environment modules to provide a robust and flexible user experience, and it supports a wide range of compilers (GNU, Intel oneAPI, and NVIDIA HPC SDK) as well as containerization frameworks; a minimal job-submission sketch follows the hardware table below.

Medicinebow hardware specifications are summarized below:

Partition     | Nodes | Cores/Node | Memory            | GPU        | GPUs/Node | Processor                                   | Tensor / CUDA Cores per GPU
--------------|-------|------------|-------------------|------------|-----------|---------------------------------------------|----------------------------
teton         | 175   | 32         | 128GB/Node        | N/A        | N/A       | Intel Broadwell                             | N/A
teton         | 56    | 40         | 192 or 768GB/Node | N/A        | N/A       | Intel Cascade Lake                          | N/A
teton         | 8     | 43         | 1024GB/Node       | N/A        | N/A       | Intel Broadwell                             | N/A
teton         | 2     | 48         | 4096GB/Node       | N/A        | N/A       | AMD EPYC                                    | N/A
teton-knl     | 12    | 72         | 384GB/Node        | N/A        | N/A       | Intel Knights Landing                       | N/A
wildiris      | 5     | 48/56      | 512GB/Node        | N/A        | N/A       | Intel Icelake                               | N/A
beartooth     | 2     | 56         | 256GB/Node        | N/A        | N/A       | Intel Icelake                               | N/A
beartooth     | 6     | 56         | 515GB/Node        | N/A        | N/A       | Intel Icelake                               | N/A
beartooth     | 8     | 56         | 1024GB/Node       | N/A        | N/A       | Intel Icelake                               | N/A
teton-gpu     | 6     | 32         | 512GB/Node        | Tesla P100 | 2         | Intel Broadwell                             | 3584 CUDA
teton-gpu     | 2     | 40         | 512GB/Node        | V100       | 8         | Intel Broadwell                             | 640 TC, 5120 CUDA
beartooth-gpu | 5     | 56         | 256GB/Node        | A30        | 2         | Intel Icelake                               | 224 TC, 3584 CUDA
beartooth-gpu | 2     | 56         | 256GB/Node        | Tesla T4   | 3         | Intel Icelake                               | 320 TC, 2560 CUDA
mb            | 25    | 96         | 1024GB/Node       | N/A        | N/A       | 2x 48-Core/96-Thread 4th Gen AMD EPYC 9454  | N/A
mb-a30        | 8     |            | 24GB/GPU          | A30        | 8         |                                             | 224 TC, 3584 FP32 CUDA
mb-l40s       | 5     |            | 48GB/GPU          | L40S       | 8         |                                             | 568 TC, 18176 FP32 CUDA
mb-h100       | 6     |            | 80GB/GPU          | SXM5 H100  | 8         |                                             | 528 TC, 16896 FP32 CUDA
mb-a6000      | 1     | 64         | 250GB/Node        | A6000      | 4         |                                             | 336 TC, 10752 CUDA
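
As a concrete example of the Slurm and Lmod workflow described above, here is a minimal Python sketch that writes and submits a batch script from a login node. The partition name comes from the table above; the account name and module name are hypothetical placeholders, and the sketch assumes Slurm's sbatch is on PATH.

```python
# Minimal sketch: generate a Slurm batch script and submit it with sbatch.
# Assumes a Medicinebow login node; account and module names are hypothetical.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --partition=mb           # general-compute partition from the table
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --time=00:10:00
    #SBATCH --account=myproject      # hypothetical project account

    module load gcc                  # Lmod module; exact name/version may differ
    srun hostname
""")

with open("demo.sbatch", "w") as f:
    f.write(job_script)

# sbatch prints e.g. "Submitted batch job 12345" on success.
result = subprocess.run(["sbatch", "demo.sbatch"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```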

High Performance Data Storage

In addition to the storage used directly within Medicinebow, our facility houses two main research storage resources, both available to all UW researchers for short- and long-term archival data storage. The first, Alcova, is a peta-scale-capable system. Both Medicinebow and Alcova file storage use the VAST data platform, a highly scalable, in-line, block-based data storage service built for performance. Connected to the UW network, Alcova offers transfer speeds starting at 100 GB/s, and it also facilitates the publication of datasets curated by UW Libraries.

Pathfinder, ARCC's other primary research storage resource, uses the S3 protocol over Ceph object storage to provide a low-cost option with the added ability to serve large datasets at scale to web-based applications. All ARCC-hosted data storage services support GridFTP via Globus data transfer servers connected to the UW Science DMZ (100 Gbps Internet2 link). Alcova and Medicinebow storage also support SMB/CIFS and NFS, making it easy for researchers to transfer data from their daily work environments.
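
As an illustration of Pathfinder's S3 interface, below is a minimal Python sketch using boto3. The endpoint URL, bucket name, and credentials are hypothetical placeholders; actual values are issued by ARCC when access is granted.

```python
# Minimal sketch: list and download objects from Pathfinder's S3 interface.
# Endpoint, bucket, and credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://pathfinder.example.edu",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# List the objects in a bucket, then fetch one locally.
for obj in s3.list_objects_v2(Bucket="my-dataset").get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("my-dataset", "sample/data.csv", "data.csv")
```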

Personnel & Support

The University of Wyoming is served by Central Information Technology (UWIT) and UW ARCC. UWIT's staff consists of 95 FTEs across multiple service areas, including desktop support, networking, security, enterprise systems, and application development and support. In addition to computational resources, UW ARCC personnel focus on IT services that directly support research and UW researchers. Members of our team assist researchers by offering guidance, consultation, and training to all users, both new and experienced, who seek help incorporating HPC and research-specific technologies into their research pipelines. UW ARCC collaborates with the UW Libraries Digital Scholarship Center to provide training that grows the understanding and use of HPC across UW and throughout our state. ARCC's team consists of 8 FTEs and reports to the Faculty Director for Computing Resources.