BioCompWorkshop: ARCC Presentation

Introduction: The workshop session will provide a quick tour covering high-level concepts, commands and processes for using Linux and HPC on our Beartooth cluster. It will cover enough to allow an attendee to access the cluster and to perform analysis associated with this workshop.

Goals:

  • Introduce ARCC and what types of services we provide including “what is HPC?”

  • Define what a cluster is and how it is made up of partitions and compute nodes.

  • How to access and start using ARCC’s Beartooth cluster - using our SouthPass service.

  • How to start an interactive desktop and open a terminal to use Linux commands within.

  • Introduce the basics of Linux, the command-line, and how its File System looks on Beartooth.

  • Introduce Linux commands to allow navigation and file/folder manipulation.

  • Introduce Linux commands to allow text files to be searched and manipulated.

  • Introduce using a command-line text-editor and an alternative GUI based application.

  • How to set up a Linux environment to use R (and Python) and start RStudio, by loading modules.

  • How to start interactive sessions on a compute node, requesting appropriate resources for your computation.

  • How to put elements together to construct a workflow that can be submitted as a job to the cluster, which can then be monitored.



0 Getting Started

  • Users may log in with their BYODs (do you have a computer with you to follow along with the workshop?)

    • Log into UWYO wifi if you can. (Non-UW users will be unable to).

  • Logging in:

    • If you have a UWYO username and password: UW Users may test their HPC access by opening a browser and then going to the following URL: https://southpass.arcc.uwyo.edu.

    • The standard WyoLogin page will be presented. Log in with your UWYO username and password.

    • If you do not have a UWYO username and password: come see me for a YubiKey and directions that will allow you to access the Beartooth HPC cluster.


00 Introduction and Setting the Scope:

The roadmap to becoming a proficient HPC user can be long, complicated, and varies depending on the user. There are a large number of concepts to cover. Some of these concepts are included in today’s training but given time constraints, it’s impossible to get to all of them. This workshop session introduces key high-level concepts, and follows a very hands-on demonstration approach, for you to follow.

Our training will help provide the foundation necessary for you to use Beartooth cluster, specifically to perform some of the exercises later in this workshop over the week.

Because of our limited time this morning, please submit any questions to the slack channel for this workshop and workshop instructors can address them as they are available.

More extensive and in-depth information and walkthroughs are available on our wiki and under workshops/tutorials. You are welcome to dive into those in your own time. Content within them should provide you with a lot of the foundational concepts you would need to be familiar with to become a proficient HPC user.


01 About UW ARCC and HPC

Goals:

  • Describe ARCC’s role at UW.

  • Provide resources for ARCC Researchers to seek help.

  • Introduce staff members, including those available throughout the workshop.

  • Introduce the concept of an HPC cluster, its architecture, and when to use one.

  • Introduce the Beartooth HPC architecture, hardware, and partitions.


About ARCC and how to reach us

Based on: Wiki Front Page: About ARCC

ARCC Wiki
  • In short, we maintain internally housed scientific resources including more than one HPC Cluster, data storage, and several research computing servers and resources.

  • We are here to assist UW researchers like yourself with your research computing needs.

Three ARCC staff members will be available throughout the workshop if you need help using Beartooth:

ARCC End User Support

  • Simon Alexander - HPC & Research Software Manager

  • Dylan Perkins - Research Computing Facilitator

  • Lisa Stafford - Research Computing Facilitator

What is HPC

HPC stands for High Performance Computing and is one of UW ARCC’s core services. HPC is the practice of aggregating computing power in a way that delivers a much higher performance than one could get out of a typical desktop or workstation. HPC is commonly used to solve large problems, and has some common use cases:

  1. Performing computation-intensive analyses on large datasets: MB/GB/TB in a single or many files, computations requiring RAM in excess of what is available on a single workstation, or analysis performed across multiple CPUs (cores) or GPUs.

  2. Performing long, large-scale simulations: Hours, days, weeks, spread across multiple nodes each using multiple cores.

  3. Running repetitive tasks in parallel: 10s/100s/1000s of small short tasks.

  • Users log in from their clients (desktops, laptops, workstations) into a login node.

  • In an HPC cluster, each compute node can be thought of as its own desktop, but the hardware resources of the cluster are available collectively as a single system.

  • Users may request specific allocations of resources available on the cluster - beyond that of a single node.

  • Allocated resources may include CPUs (Cores), Nodes, RAM/Memory, GPUs, etc.



What is a Compute Node?

  • We typically have multiple users independently running jobs concurrently across compute nodes.

  • Resources are shared, but your jobs do not interfere with anyone else's resources.

    • i.e. you have your own cores, your own block of memory.

  • If someone else’s job fails it does NOT affect yours.

  • Example: The two GPU compute nodes that are part of this reservation each have 8 GPU devices. Different, individual jobs can run on each of these compute nodes without affecting each other.


Homogeneous vs Heterogeneous HPCs

There are two types of HPC systems:

  1. Homogeneous: All compute nodes in the system share the same architecture. CPU, memory, and storage are the same across the system. (Ex: NWSC’s Derecho)

  2. Heterogeneous: The compute nodes in the system can vary architecturally with respect to CPU, memory, even storage, and whether they have GPUs or not. Usually, the nodes are grouped in partitions. Beartooth is a heterogeneous cluster and our partitions are described on the Beartooth Hardware Summary Table on our ARCC Wiki.


Beartooth Cluster: Heterogeneous: Partitions

Beartooth Hardware and Partitions

See Beartooth Hardware Summary Table on the ARCC Wiki.


Reservation

A reservation can be considered a temporary partition.

It is a set of compute nodes reserved for a period of time for a set of users/projects, who get priority use.

For this workshop we will be using the following: biocompworkshop:

ReservationName=biocompworkshop
StartTime=06.09-09:00:00
EndTime=06.17-17:00:00
Duration=8-08:00:00
Nodes=mdgx01,t[402-421],tdgx01
NodeCnt=22
CoreCnt=720
Users= Groups=biocompworkshop

Important Dates:

  1. After the 17th of June this reservation will stop and you will drop down to general usage if you have another Beartooth project.

  2. The project itself will be removed after the 24th of June, after which you will not be able to use or access it. Please copy anything you require out of the project before then.


02 Using Southpass to access the Beartooth HPC Cluster

Southpass is our Open OnDemand resource allowing users to access Beartooth over a web-based portal. Learn more about Southpass here.

Goals:

  • Demonstrate how users log into Southpass

  • Demonstrate requesting and using a XFCE Desktop Session

  • Introduce the Linux File System and how it compares to common workstation environments

    • Introduce HPC specific directories and how they’re used

    • Introduce Beartooth specific directories and how they’re used

  • Demonstrate how to access files using the Beartooth File Browsing Application

  • Demonstrate the use of emacs, available as a GUI based text-editor

Based on: https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1298071553


Log in and Access the Cluster

Login to Southpass

If you haven’t yet:

  1. Open a browser of your choice.

  2. Go to https://southpass.arcc.uwyo.edu

  3. Log in with your UWYO username and password, or the username and password to the training account you’ve been provided.

  4. Once in you will be presented with the Southpass Dashboard:

    [Screenshot: the Southpass Dashboard]

Using Southpass

Interactive Applications in Southpass are requested by filling out a webform to specify hardware requirements while you use the application.

Other applications can be accessed without filling out a webform:

  1. Job Composer (To create batch scripts)

  2. Active Jobs (To view your active jobs)

  3. Home Directory (File Explorer/Upload/Download)

  4. Beartooth System Status (View cluster status)


Exercise: Beartooth XFCE Desktop

Requests are made through a webform in which you specifically request certain hardware or software to use on Beartooth.

  1. Click on Beartooth XFCE Desktop
    You will be presented with a form asking for specific information.

     

    1. Project/Account: specifies the project you have access to on the HPC Cluster

    2. Reservation: not usually used for general cluster use, but set up here to access the specific hardware that has been reserved for this workshop.

    3. Number of Hours: How long you plan to use the Remote Desktop Connection to the Beartooth HPC.

    4. Desktop Configuration: how many CPUs and how much memory you require to perform your computations within this remote desktop session.

    5. GPU Type: the GPU hardware you want to access, specific to your use case. This may be set to “None - No GPU” if your computations do not require a GPU. Note: you can select DGX GPUs (listed as V100s in the GPU Type drop-down).

       

  2. You should see an interactive session starting. When it’s ready, it will turn green.

    1. Note the Host: field. Your Interactive session has been allocated to a specific host on the cluster. This is the node you are working on when you’re using your remote desktop session.

    2. Click Launch Beartooth XFCE Desktop to open your Remote Desktop session

       

  3. You should now see a Linux Desktop in your browser window

    [Screenshot: the Beartooth XFCE Desktop]

     

    1. Beartooth runs Red Hat Enterprise Linux. If you’ve worked on a Red Hat System, it will probably look familiar.

    2. If not, hopefully it looks similar enough to a Windows or Mac Graphical OS Interface.

      1. Apps dock at the bottom (similar to macOS, or pinned apps in the taskbar on Windows).

      2. Desktop icons provide links to specific folder locations and files (like on Mac and PC).

         

Note: While we use a webform to request Beartooth resources on Southpass, later training will show how resource configurations can be requested through command line via salloc or sbatch commands.
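To preview what that looks like, here is a minimal sketch of a Slurm batch script requesting roughly the same resources as the webform. The account and reservation names are assumptions based on this workshop's setup; check your own project and reservation before submitting.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch -- all values are illustrative placeholders.
#SBATCH --account=biocompworkshop      # project/account (assumed; use your own project name)
#SBATCH --reservation=biocompworkshop  # workshop reservation (assumed; omit for general use)
#SBATCH --time=01:00:00                # walltime, like "Number of Hours" in the webform
#SBATCH --cpus-per-task=1              # CPUs, like "Desktop Configuration"
#SBATCH --mem=4G                       # memory, like "Desktop Configuration"

echo "Running on $(hostname)"
```

You would submit such a script with sbatch; salloc accepts the same options for interactive sessions.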


Structure of the Linux File System and HPC Directories

Linux File Structure

We are now remote logged into a Linux Desktop.

  1. To take a look at the top level of the file structure, click on “Filesystem”.

     

This is specific to the Beartooth HPC, but most Linux environments will look very similar.

[Screenshot: top level of the Beartooth file system]

Linux Operating Systems (Generally)


Compare and Contrast: Linux, HPC Specific, Beartooth Specific

Based on: https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1714913281

HPC Specific Folders:

  1. /home (Common across most shared HPC Resources)

    1. What is it for? Similar to the Users folder on a PC, or Macintosh HD → Users on a Mac.

    2. Permissions: It should have files specific to you, personally, as the HPC user. By default no one else has access to your files in your home.

    3. Directory Path: every HPC user on Beartooth has a folder under /home/<your_username>, also reachable via $HOME.

    4. Default Quota: 25GB

  2. /project (Common across most shared HPC Resources)

    1. What is it for? Think of it as a shared folder for you and all your project members. Similar to /glade/campaign on NCAR HPC.

    2. Permissions: All project members have access to the folder. By default, all project members can read any files or folders within, and can write in the main project directory.

    3. Directory path: get to it at /project/biocompworkshop/

    4. A subfolder in /project/biocompworkshop/ is created for each user when they are added to the project, but only that user can write to their own subfolder.

    5. Default Quota: 1TB, which applies to the project folder itself, including all its contents and subfolders.

  3. /gscratch (Scratch folder, common across most HPC resources but sometimes just called "scratch")

    1. What is it for? It’s “scratch space”, so it’s storage dedicated for you to store temporary data you need access to.

    2. Permissions: like /home, its contents are specific to you, personally, as the HPC user. By default no one else has access to your files in your /gscratch.

    3. Directory Path: every HPC user on Beartooth has a gscratch directory under /gscratch/<your_username>, also reachable via $SCRATCH.

    4. Default Quota: 5TB

      1. Don’t store anything in /gscratch that you need and don’t have backed up elsewhere. It’s not meant for long-term storage.

      2. Everyone’s /gscratch directory is subject to ARCC's purge policy.
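The quotas above can be compared against your actual usage with the standard du command. A small sketch using a throwaway directory; on Beartooth you would point it at $HOME, /project/biocompworkshop, or your /gscratch directory instead:

```shell
# Summarize the total size of a directory tree with du.
dir=$(mktemp -d)                  # throwaway directory standing in for $HOME etc.
echo "some data" > "$dir/file.txt"
du -sh "$dir"                     # -s: one summary line, -h: human-readable units
rm -rf "$dir"                     # clean up
```

The arccquota printout shown at login also reports usage against these quotas.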

Beartooth Specific

  1. /apps (Specific to ARCC HPC) is like Program Files on Windows or Applications on a Mac.

    1. Where applications are installed and where modules are loaded from. (More on that later).

  2. /alcova (Specific to ARCC HPC).

    1. Additional research storage for research projects that may not require HPC but is accessible from Beartooth.

    2. You won’t have access to it unless you were added to an alcova project by the PI.


Exercise: File Browsing in Southpass GUI

Users can access their files using the Southpass file browser app.


Demonstration: Opening the emacs GUI-based text editor

Once you’re in a XFCE Desktop Session:

  1. Open the Applications Menu in the top right corner of your desktop

  2. Choose Run Program

  3. An Application Finder window will pop up. In the text box, type in: emacs

  4. Click Launch

  5. This will open a new window with the emacs text editor

  6. Users can click on the File menu and select Visit New File to create a new file, or Open file to continue working on one they’ve already started.


03 Using Linux and the Command Line

Goals:

  • Introduce the shell terminal and command line interface

    • Demonstrate starting a Beartooth SSH shell using Southpass

    • Demonstrate information provided in a command prompt

  • Introduce Policy for HPC Login Nodes

  • Demonstrate how to navigate the file system to create and remove files and folders using command line interface (CLI)

    • mkdir, cd, ls, mv, cp

  • Demonstrate the use of man, --help and identify when these should be used

  • Demonstrate using a command-line text editor, vi

Based on: https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1596194853


Exercise: Shell Terminal Introducing Command Line

  1. Click the following Icon on the Beartooth Dashboard

  2. This opens up a Beartooth SSH session in a web-based terminal:

  3. Login will display:

    1. Cluster you’ve logged into

    2. How to get help

    3. Important message(s) of the day

    4. A printout of arccquota

  4. Anatomy of the command-line prompt, e.g. [<username>@blog2 ~]$:

    1. Who (am I?): your username

    2. What (system am I talking to/working on?): the hostname, e.g. blog2

    3. Where (am I on the system?): your current working directory, e.g. ~ (your home)


What am I Using?

Remember:

  • The Beartooth Shell Access opens up a new browser tab that is running on a login node. Do not run any computation on these.
    [<username>@blog2 ~]$

  • The SouthPass Interactive Desktop (terminal) is already running on a compute node.
    [<username>@t402 ~]$


Login Node Policy

As a courtesy to your colleagues, please do not run the following on any login nodes:  

  1. Anything compute-intensive (tasks using significant computational/hardware resources, e.g. using 100% of a login node’s CPUs)

  2. Long running tasks (over 10 min)

  3. Any collection of a large # of tasks resulting in a similar hardware footprint to actions mentioned previously.  

  4. Not sure? Use salloc to be on the safe side. This will be covered later.
    Ex: salloc --account=arccanetrain --time=40:00

  5. See more on ARCC’s Login Node Policy here


Demonstrating how to get help in CLI

  • man - Short for the manual page. This is an interface to view the reference manual for the application or command.

  • man pages are only available on the login nodes.

 

[arcc-t10@blog2 ~]$ man pwd
NAME
       pwd - print name of current/working directory
SYNOPSIS
       pwd [OPTION]...
DESCRIPTION
       Print the full filename of the current working directory.
       -L, --logical
              use PWD from environment, even if it contains symlinks
       -P, --physical
              avoid all symlinks
       --help display this help and exit
       --version
              output version information and exit
       If no option is specified, -P is assumed.
       NOTE: your shell may have its own version of pwd, which usually supersedes
       the version described here. Please refer to your shell's documentation for
       details about the options it supports.
  • --help - an option accepted by most commands that prints a brief summary of the command’s usage and options.

[arcc-t10@blog1 ~]$ cp --help
Usage: cp [OPTION]... [-T] SOURCE DEST
  or:  cp [OPTION]... SOURCE... DIRECTORY
  or:  cp [OPTION]... -t DIRECTORY SOURCE...
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.
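As a quick exercise, you can combine --help with head to see just the usage line; this should work for most GNU tools on the cluster:

```shell
# Show only the first line of a command's --help output.
cp --help | head -n 1             # e.g. "Usage: cp [OPTION]... [-T] SOURCE DEST"
```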

Demonstrating file navigation in CLI

File Navigation demonstrating the use of:

  • pwd (Print Working Directory)

  • ls (“List” lists information about directories and any type of files in the working directory)

  • ls flags

    • -l (tells the mode, # of links, owner, group, size (in bytes), and time of last modification for each file)

    • -a (Lists all entries in the directory, including the entries that begin with a . which are hidden)

  • cd (Change Directory)

  • cd .. (Change Directory - up one level)

[arcc-t10@blog2 ~]$ pwd
/home/arcc-t10
[arcc-t10@blog2 ~]$ ls
Desktop  Documents  Downloads  ondemand  R
[arcc-t10@blog2 ~]$ cd /project/biocompworkshop
[arcc-t10@blog2 biocompworkshop]$ pwd
/project/biocompworkshop
[arcc-t10@blog2 biocompworkshop]$ cd arcc-t10
[arcc-t10@blog2 arcc-t10]$ ls -la
total 2.0K
drwxr-sr-x  2 arcc-t10 biocompworkshop 4.0K May 23 11:05 .
drwxrws--- 80 root     biocompworkshop 4.0K Jun  4 14:39 ..
[arcc-t10@blog2 arcc-t10]$ pwd
/project/biocompworkshop/arcc-t10
[arcc-t10@blog2 arcc-t10]$ cd ..
[arcc-t10@blog2 biocompworkshop]$ pwd
/project/biocompworkshop
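You can rehearse the same navigation commands anywhere, not just in /project. A self-contained sketch using a temporary sandbox directory (mktemp -d stands in for your project or home space):

```shell
# Practice pwd/ls/cd/mkdir in a throwaway sandbox directory.
start_dir=$(pwd)            # remember where we began
work=$(mktemp -d)           # make a temporary sandbox
cd "$work"
pwd                         # print the sandbox's full path
mkdir -p demo/sub           # create a small directory tree
cd demo/sub                 # descend two levels
cd ..                       # back up one level (now in demo)
ls -la                      # long listing, including hidden . and ..
cd "$start_dir"             # return to where we started
rm -rf "$work"              # clean up the sandbox
```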

Demonstrating how to create and remove files and folders using CLI

Creating, moving and copying files and folders:

  • touch (Used to create a file without content. The file created using the touch command is empty)

  • mkdir (Make Directory - to create an empty directory)

  • mv (Move - moves a file or directory from one location to another)

  • cd .. (Change Directory - up one level)

  • cp (Copy - copies a file or directory from one location to another)

    • -r flag (Recursive)

  • ~ (Alias for /home/user)

  • rm (Remove - removes a file or if used with -r, removes directory and recursively removes files in directory)

[arcc-t10@blog2 arcc-t10]$ touch testfile
[arcc-t10@blog2 arcc-t10]$ mkdir testdirectory
[arcc-t10@blog2 arcc-t10]$ ls
testdirectory  testfile
[arcc-t10@blog2 arcc-t10]$ mv testfile testdirectory
[arcc-t10@blog2 arcc-t10]$ cd testdirectory
[arcc-t10@blog2 testdirectory]$ ls
testfile
[arcc-t10@blog2 testdirectory]$ cd ..
[arcc-t10@blog2 arcc-t10]$ cp -r testdirectory ~
[arcc-t10@blog2 arcc-t10]$ cd ~
[arcc-t10@blog2 ~]$ ls
Desktop  Documents  Downloads  ondemand  R  testdirectory
[arcc-t10@blog2 ~]$ cd testdirectory
[arcc-t10@blog2 testdirectory]$ ls
testfile
[arcc-t10@blog2 testdirectory]$ rm testfile
[arcc-t10@blog2 testdirectory]$ ls
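The same sequence can be rehearsed safely in a temporary sandbox; this sketch exercises touch, mkdir, mv, cp -r, and rm without touching your real home directory:

```shell
# Recreate the touch/mkdir/mv/cp/rm sequence in a temporary sandbox.
work=$(mktemp -d)
cd "$work"
touch testfile              # create an empty file
mkdir testdirectory         # create an empty directory
mv testfile testdirectory   # move the file into the directory
cp -r testdirectory copydir # recursively copy the whole directory
ls copydir                  # the copy contains testfile too
rm copydir/testfile         # remove a single file
rm -r copydir testdirectory # remove directories and their contents
cd /
rm -rf "$work"              # clean up the sandbox
```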

Text Editor Cheatsheets

  • Vi/Vim Cheatsheet: https://phoenixnap.com/kb/vim-commands-cheat-sheet

  • Nano Cheatsheet: https://geek-university.com/nano-text-editor/

Note: On Beartooth, vi maps to vim i.e. if you open vi, you're actually starting vim.