
Goal: Introduction to UW ARCC and our services.



ARCC: Mission

Provide support for research computing endeavors including:

  • high performance computing

  • large research data storage

  • consultation

  • miscellaneous services - if you’re not sure, reach out to us at arcc-help@uwyo.edu  

to further the University of Wyoming’s and the State of Wyoming’s strategic priorities by providing researchers with the computational resources they need.


Core Service 1: High Performance Computing (HPC)

We maintain a number of clusters that allow researchers to address a wide variety of use cases, such as running:

  • Computation-intensive analysis on large datasets.

  • Long-running, large-scale simulations.

  • 10s/100s/1000s of small, short tasks - nothing is too small.

  • and many other use cases…
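
For example, batch work on clusters like these is commonly submitted through a scheduler such as Slurm (the source of the “partition” terminology used on this page). A minimal sketch of a batch script - the account, partition, module, and file names below are placeholders, not real ARCC values:

#!/bin/bash
#SBATCH --job-name=example-job       # name shown in the queue
#SBATCH --account=my-project         # placeholder: your ARCC project/account
#SBATCH --partition=some-partition   # placeholder: see the Hardware Summary Table
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00              # wall-time limit (HH:MM:SS)

# Load whatever software the analysis needs (module names vary by cluster).
module load python

# Run the actual work.
python my_analysis.py

You would submit the script with sbatch example-job.sh and monitor it with squeue -u $USER.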


ARCC HPC Clusters

ARCC currently maintains two HPC clusters for general UW research use. These clusters support a wide array of computation-intensive analyses across various areas of science:

  1. MedicineBow

    1. Released to campus beginning July 15th, 2024

    2. A high performance cluster with enhanced GPU offerings to expand research capabilities in AI, machine and deep learning, and enhanced modeling.

    3. Currently consists of ~40 nodes with over 4,224 CPU cores, 152 GPUs, and 3 PB of storage.

    4. This is the HPC resource we’ve been using for training throughout the bootcamp.

    5. Users can view partition information on our Hardware Summary Table.

  2. Beartooth

    1. First released to campus January 2023

    2. An HPC cluster with ~375 nodes, over 10K CPU cores, 52 GPUs, and 1.2 PB of storage.

    3. Eventually, all Beartooth nodes and their associated hardware will be consolidated into MedicineBow; Beartooth is planned for retirement at the end of 2024.

    4. Users can view partition information on our Hardware Summary Table.
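
Partition details can also be listed from the command line once you are logged in to a cluster. A quick sketch, assuming the Slurm scheduler (the partition name is a placeholder):

# Summarize partitions: availability, time limits, and node counts.
sinfo -s

# Show per-node detail for a single partition.
sinfo --partition=some-partition --Node --long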


Exercises:

  1. Log Into MedicineBow OnDemand. What do you initially see?

  2. How would you open a new ssh/shell window/connection?

  3. How would you get help?


Answers:

  1. Log Into MedicineBow OnDemand: What do you initially see?

 Answer

You should see the MedicineBow Open OnDemand dashboard. This should include pinned apps, the message of the day, and links for help.

  2. How can you open up a new shell window with an ssh connection to MedicineBow?

 Answer

Click on the MedicineBow Shell Access application icon in the Pinned Apps.

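Alternatively, you can connect from your own terminal with an ssh client. A sketch - the hostname below is a placeholder, so check ARCC’s documentation for the actual login address:

# Replace 'username' with your UW username; the hostname is a placeholder.
ssh username@medicinebow.arcc.uwyo.edu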

  3. How would you get help?

 Answer

Bonus Answer: Submit a request using the ARCC Service Portal.


Core Service 2: Research Data Storage 

Safe and secure storage and transfer of data that researchers can share and collaborate on with others within UW and at other institutions across the world.

  1.  Alcova:

    1. High performance data storage geared toward project-oriented data.

    2. Storage for published research data.

  2.  Pathfinder: 

    1. Low-cost storage solution that enables a Cloud-like presence for research data hosted by ARCC. 

    2. Hosting onsite backups and enabling data sharing and collaboration.


Core Service 2: Research Data Storage: Which One?

Neither of these options is “tied to” HPC resources. Therefore, you do not NEED to perform work on our HPC clusters or have an HPC account to request these storage options, but both are accessible from ARCC HPC resources.
Considerations: Cost, accessibility, use cases:

Alcova: More “Traditional” Block and File Storage

  1. Cost:

    1. Free, up to quota. Beyond quota, charged more per TB than Pathfinder.

  2. Accessibility:

    1. Consider it more traditional storage.

    2. Can be accessed via SMB/AD through a traditional Windows File Explorer, or via Globus.

  3. Permissions: follows the idea of a project that users are part of, authenticated via username.

    1. Management of permissions is tied to Active Directory.

    2. Or shared through Globus.

  4. Use cases:

    1. Very fast.

    2. Traditional use cases: a hierarchy of files and folders/directories.

    3. Focused on data storage for research.

    4. Shared with others, explicitly. (Not easily made available publicly.)

Pathfinder: Cloud-like S3 Object Storage

  1. Cost:

    1. A less expensive storage solution. Charged per TB.

  2. Accessibility:

    1. Accessed either via an S3-compatible client (CloudBerry, Cyberduck, some limited functionality in Globus) and/or programmatically (rclone, rsync, etc.) over S3, providing object storage via buckets. A sketch of programmatic access follows this comparison.

    2. Access is provided via access/secret key tokens. Tokens can be time-based.

    3. Data can be made publicly available through a URL. URLs can be set with expiration dates.

  3. Permissions: does not use the notion of projects/usernames; permissions are strictly maintained through key tokens.

  4. Use cases:

    1. Not as fast. May have some S3 latency compared to traditional research data storage, but most users will not notice a difference unless very large datasets are stored/accessed.

    2. Access for secure data may be less familiar to most users, initially.

    3. Data is easier to share publicly over a URL/web link.

    4. Flexible data structure storage - you can store any kind of data (vs. traditional databases with rigid data schemas).

    5. Functionality to allow access to a wide scale of large datasets for web-based applications.
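
As a concrete example of programmatic S3 access, Pathfinder-style buckets can be reached with rclone using the access/secret key pair ARCC issues. A minimal sketch - the remote name, keys, endpoint URL, and bucket name below are all placeholders:

# One-time setup: define an S3 remote in ~/.config/rclone/rclone.conf
# (the keys and endpoint below are placeholders; ARCC issues the real values)
[pathfinder]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://s3.example.edu

# Then, from a shell: list the contents of a bucket...
rclone ls pathfinder:my-bucket

# ...or copy local results into it.
rclone copy ./results pathfinder:my-bucket/results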

Come and discuss what your needs and use cases are…


Core Service 2: Research Data Storage Changes:

  1. Data Portal:

    1. Effective June 1, 2024, ARCC introduced the ‘ARCC Data Portal’, serving the dual purpose of providing high performance back-end storage for the MedicineBow HPC system and a data storage solution for researchers needing a centralized data repository for ongoing research projects.

    2. Data Portal storage is FREE up to the default allocation quota.

    3. ARCC’s Data Portal is composed of VAST data storage built on high speed all-NVMe hardware, housing 3 petabytes of raw storage. VAST storage employs data de-duplication, allowing the system to logically store more than the raw 3 PB available.

    4. MedicineBow vs Alcova Spaces:

      1. Alcova storage on the ARCC Data Portal can be thought of as the “new Alcova” and will replace the prior Alcova storage space listed here. This space is intended for use as a collaborative data storage space, accessed interactively via the SMB protocol. It is backed up by ARCC and can only be used by researchers with a uwyo.edu account.

      2. MedBow space can be thought of as the root level directory of the HPC system, separated into home, project, and gscratch directories, intended for use with HPC workflows where speed and minimal overhead are prioritized over backups.

        1. MedicineBow Data Storage is available upon the go-live of MedicineBow on July 15th.
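
For orientation, once logged in to MedicineBow these spaces typically appear as separate top-level directories. A quick sketch - the exact paths and names here are assumptions, so confirm them on the cluster:

# Personal home space.
cd /home/username

# Shared project space (project name is a placeholder).
ls /project/my-project

# Global scratch for fast, temporary HPC I/O (not backed up).
cd /gscratch/username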

In short: the essence of these services will remain, but the underlying systems are being updated.


Core Service 3: End User Support

Available to all researchers at UW: User Services

  • Website/Wiki - examples and suggested best practices.

  • Service Portal

  • Zoom office hours.

  • One-on-one consultation.

  • Scheduled in-person and online trainings.

  • YouTube Channel



Other Services

  • OnDemand: Web-based access to the clusters - including Jupyter Notebooks.

  • Linux Desktop Support. 

  • Hosting of services - R Shiny application…

  • A GitLab service.

  • Proposal Development.

  • As research needs grow, so will services that we offer.

  • We’re always open to constructive feedback and suggestions. 

  • We’re here to provide the services for you.


Try it: Use the Portal

Put in a request through our ARCC Service Portal


 

 
