Goal: Introduction to UW ARCC and our services.
...
Core Service 1: High Performance Computing (HPC)
We maintain a number of clusters that allow researchers to run a variety of workloads, such as:
...
Released to campus beginning July 15th, 2024
This is an HPC cluster with enhanced GPU offerings to expand research capabilities in AI, machine and deep learning, and enhanced modeling.
It currently consists of ~40 nodes with over 4,224 CPU cores, 152 GPUs, and 3 PB of storage.
This is the HPC resource we’ve been performing our training on throughout the bootcamp.
First released to campus January 2023
An HPC cluster with ~375 nodes, over 10K CPU cores, 52 GPUs, and 1.2 PB of storage.
Eventually, all Beartooth nodes and their associated hardware will be consolidated into MedicineBow; Beartooth is planned for retirement at the end of 2024.
...
Exercises:
Log Into MedicineBow OnDemand
What do you initially see?
How would you open a new ssh/shell window/connection?
How would you get help?
...
Answers:
Log Into MedicineBow OnDemand: What do you initially see?
You should see the MedicineBow Open OnDemand Dashboard. This should include pinned apps, the message of the day, and links for help.
How can you open up a new shell window with an ssh connection to MedicineBow?
Click on the MedicineBow Shell Access application icon in the Pinned Apps.
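Alternatively, a shell connection can be opened from a local terminal using an ssh client. A minimal sketch; the hostname below is an assumption based on the cluster name, not a confirmed ARCC address, so check the arccwiki for the actual login host:

```shell
# Replace <username> with your UW username.
# NOTE: the hostname is an assumption, not a confirmed ARCC address.
ssh <username>@medicinebow.arcc.uwyo.edu
```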
How would you get help?
Go to the main arccwiki link under the “Getting Help” option.
...
Core Service 2: Research Data Storage
Safe and secure storage and transfer of data that researchers can share and collaborate on with others within UW and with other institutions across the world.
High performance data storage geared toward project-oriented data.
Storage for published research data.
Low-cost storage solution that enables a Cloud-like presence for research data hosted by ARCC.
Hosting onsite backups and enabling data sharing and collaboration.
...
Consider this the more traditional storage: it can be accessed via SMB/AD through Windows File Explorer or via Globus.
Access is organized around projects that users belong to, authenticated via username/AD.
A cheaper storage solution, accessed via a client and/or programmatically, that uses S3 to provide object storage via buckets.
Access is provided via access/secret key tokens, which can be time-limited.
Data can be made publicly available.
It does not use the notion of projects/usernames.
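As a sketch of what bucket access looks like in practice, assuming the AWS CLI as the S3 client (the endpoint URL, bucket, and object names below are placeholders, not ARCC's real addresses):

```shell
# Configure the access/secret key pair issued for the bucket.
aws configure set aws_access_key_id <access-key>
aws configure set aws_secret_access_key <secret-key>

# List objects in a bucket on an S3-compatible endpoint (placeholder URL).
aws s3 ls s3://<bucket>/ --endpoint-url https://<arcc-s3-endpoint>

# Generate a time-limited (here, one hour) URL that can be shared publicly.
aws s3 presign s3://<bucket>/<object> --expires-in 3600 --endpoint-url https://<arcc-s3-endpoint>
```

The presigned URL illustrates the time-based token idea: anyone holding the URL can fetch the object until it expires, without needing a username or project membership.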
Come and discuss what your needs and use cases are…
...
Core Service 2: Research Data Storage Changes:
Effective June 1, 2024, ARCC introduced the ‘ARCC Data Portal’, serving the dual purpose of providing high-performance back-end storage for the MedicineBow HPC system and a data storage solution for researchers needing a centralized repository for ongoing research projects.
Data Portal storage is FREE up to the default allocation quota.
ARCC’s Data Portal is composed of VAST data storage: high-speed, all-NVMe storage housing 3 petabytes of raw capacity. VAST storage employs data de-duplication, allowing the system to logically store more than the raw 3 PB available.
Alcova storage on the ARCC Data Portal can be thought of as the “new Alcova” and will replace the prior Alcova storage space listed here. This space is intended for use as collaborative data storage space using SMB protocol for interactive access. This space is backed up by ARCC and can only be used by researchers with a uwyo.edu account.
MedBow space can be thought of as the root level directory of the HPC system, separated into home, project, and gscratch directories, intended for use with HPC workflows where speed and minimal overhead are prioritized over backups.
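As an illustration of that separation from a login-node shell (the directory names come from the text above, but the exact mount paths are assumptions; consult the arccwiki for the real layout):

```shell
# Hypothetical paths -- directory names from the text; mount points assumed.
ls /home/<username>        # per-user files and settings
ls /project/<project>/     # shared, project-oriented data
ls /gscratch/<project>/    # fast scratch space; speed prioritized over backups
```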
MedicineBow Data Storage is available upon the go-live of MedicineBow on July 15th.
– the essence of these services will remain
– but the underlying systems are being updated
...
Core Service 3: End User Support
...