
Overview

The Beartooth Compute Environment (Beartooth) is a high performance computing (HPC) cluster that offers over 500 compute nodes and 1.2 PB of storage, with an expected uptime of 98%, allowing researchers to perform computation-intensive analyses on datasets of various sizes.

Beartooth can be accessed securely from anywhere, at any time, via SSH with UWyo two-factor authentication.

Beartooth hardware (dev)

This will link to a summary of the Beartooth hardware.

Beartooth Storage

Beartooth’s storage is divided into three isolated filesystems so that researchers retain control over where their data resides and who can access it:

  • /home: for configuration files and user-specific software installations.

  • /project: a collaborative area shared among project members.

  • /gscratch: a large, fast storage area for temporarily holding large datasets while they are being processed. This area is not backed up and is subject to periodic purges of old data.
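As a quick orientation, the layout might look like the following; the username cowboyjoe and project name wildfire are hypothetical placeholders, and the exact directory structure should be confirmed on the cluster:

    ls /home/cowboyjoe       # personal configs and software installations
    ls /project/wildfire     # collaborative space shared with project members
    ls /gscratch/cowboyjoe   # fast scratch space; not backed up, purged periodically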

Software

This links to a summary of the Beartooth software.

Project and Account Requests

For research projects, UWyo faculty members (Principal Investigators, or PIs) can request a research project on ARCC HPC (high performance computing) resources using this form. Note: an initial set of users can also be submitted on the same form.

User accounts require a valid UWyo email address and a UWyo-affiliated PI sponsor. UWyo faculty members can sponsor their own accounts, while students, post-doctoral researchers, and research associates must use their PI as their sponsor. Users with a valid UWyo email address can be added in the project request or added later using the Request a Change to Your Project form.

Non-UWyo external collaborators (Ex_Co) must be sponsored by a current UWyo faculty member. Ex_Co accounts can be requested here. Please supply the Ex_Co username when requesting that they be added to a project.

Logging Into Beartooth

Once access is granted, connection to ARCC HPC resources may be established via SSH. Note that SSH connections require Two-Factor Authentication. For reference, please see The Command Line Interface.

To connect to Beartooth, please run: ssh <username>@beartooth.arcc.uwyo.edu
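For example, a user with the hypothetical username cowboyjoe would run the following from a terminal and then complete the two-factor prompt:

    # You will be asked for your UWyo password and two-factor authentication
    # before landing on a Beartooth login node.
    ssh cowboyjoe@beartooth.arcc.uwyo.edu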

Testing Procedures and Considerations

During the testing phase for Beartooth, there are some considerations to be aware of:

What to test: in addition to running jobs, please test all aspects of using and interacting with Beartooth, such as filesystem performance, access, and navigation. A simple feedback form is here: https://arccwiki.atlassian.net/servicedesk/customer/portal/2/group/15/create/49
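As one minimal sketch of a filesystem performance check (the scratch path is a placeholder; adjust sizes to suit), you could time a large sequential write and note the throughput dd reports:

    # Write ~1 GiB of zeros to scratch, report throughput, then clean up.
    dd if=/dev/zero of=/gscratch/$USER/ddtest.bin bs=1M count=1024 oflag=direct
    rm /gscratch/$USER/ddtest.bin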

Software to be tested: we ask that users explicitly test the applications they have requested, confirming they work as expected, and being mindful that module versions (and their dependencies) have probably been updated.

Considerations:

Southpass: since Southpass is still serving Teton, a test site specific to Beartooth provides similar functionality: https://ondemand-test.arcc.uwyo.edu/
To use OnDemand-test, users will need to be on campus or connected through the VPN.

Data from jobs run on Beartooth during testing: historical information about jobs will be lost once Beartooth goes live, but data resulting from those jobs will remain in /project, /home, and/or /gscratch.

LMOD issue: ARCC has found an issue where the user’s local module cache does not differentiate between clusters, which can result in module versions installed on Beartooth being shown while on Teton, and vice versa. A workaround is to add “--ignore-cache” to the module spider command, which forces LMOD to navigate only the module tree for the cluster you are using.
For example: module --ignore-cache spider <module name>
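A concrete session might look like the following; the module name gcc is illustrative, so substitute the application you actually use:

    # Bypass the stale local cache and list versions available on this cluster
    module --ignore-cache spider gcc
    # Then load the version reported for the cluster you are on
    module load gcc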

Citing Beartooth

For information on citing Beartooth, please reference the citing section in Documentation and Help.
