Genomic Data Science Fall 2025
Introduction: The workshop session will provide a quick tour covering high-level concepts, commands and processes for using Linux and HPC on our MedicineBow cluster. It will cover enough to allow an attendee to access the cluster and to perform analysis associated with this workshop.
Last Updated: 20250904
Goals:
Introduce ARCC and what types of services we provide including “what is HPC?”
Define “what is a cluster”, and how it is made up of partitions and compute nodes.
How to access and start using ARCC’s MedicineBow cluster - using our OnDemand service.
How to start an interactive desktop and open a terminal to use Linux commands within.
Introduce the basics of Linux, the command-line, and how its File System looks on MedicineBow.
Introduce Linux commands to allow navigation and file/folder manipulation.
Introduce Linux commands to allow text files to be searched and manipulated.
Introduce using a command-line text-editor and an alternative GUI based application.
How to set up a Linux environment to use R(/Python) and start RStudio, by loading modules.
How to start interactive sessions to run on a compute node, to allow computation, requesting appropriate resources.
How to put elements together to construct a workflow that can be submitted as a job to the cluster, which can then be monitored.
We will not be covering the following topics, but workshops are available on:
Using a terminal to SSH onto the Cluster - see Intro to Accessing the Cluster.
Data Management nor Data Transfer (such as using Globus).
Using / Creating Conda Environments - one method for installing your own software.
Using the Jupyter Service via OnDemand.
Sections
- *** Class 01 ***
- 00 Introduction and Setting the Scope:
- 01 About UW ARCC and HPC
- 02 Using OnDemand to access the MedicineBow HPC Cluster
- 03 Using Linux and the Command Line
- 04 Text Editors
- *** Class 02 ***
- 05 Using Linux to Search/Parse Text Files
- 06 Let's start using R(/Python) and RStudio
- 07 Create a basic workflow and submit jobs
- 08 Summary and Next Steps
*** Class 01 ***
00 Introduction and Setting the Scope:
HPC Skills to Learn: The roadmap to becoming a proficient HPC user can be long, complicated, and varies depending on the user.
What we’re going to cover over the next two classes would typically take two full days.
So bear in mind that we’ll be introducing key high-level concepts, with less time for questions/exercises than we’d normally provide.
The classes will be hands-on demonstrations for you to listen to and follow along with where you can - but you’ll be expected to work through the material in your own time.
More extensive and in-depth information and walkthroughs are available on our wiki and under workshops/tutorials. You are welcome to dive into those in your own time. Content within them will provide you with a lot more detail and examples of the foundational concepts you would need to be familiar with to become a proficient HPC user.
01 About UW ARCC and HPC
Goals:
Describe ARCC’s role at UW.
Provide resources for ARCC Researchers to seek help.
Introduce staff members, including those available throughout the workshop.
Introduce the concept of an HPC cluster, its architecture, and when to use one.
Introduce the MedicineBow HPC architecture, hardware, and partitions.
About ARCC and how to reach us
In short, we maintain internally housed scientific resources including more than one HPC Cluster, data storage, and several research computing servers and resources.
We are here to assist UW researchers like yourself with your research computing needs.
Exercise: Navigate to our Service Portal and submit a General Research Computing Support question.
Under the Please further describe your issue section, make sure to enter the word “Test”.
What is HPC?
HPC stands for High Performance Computing and is one of UW ARCC’s core services. HPC is the practice of aggregating computing power in a way that delivers a much higher performance than one could get out of a typical desktop or workstation. HPC is commonly used to solve large problems, and has some common use cases:
Performing computation-intensive analyses on large datasets: MB/GB/TB in a single or many files, computations requiring RAM in excess of what is available on a single workstation, or analysis performed across multiple CPUs (cores) or GPUs.
Performing long, large-scale simulations: Hours, days, weeks, spread across multiple nodes each using multiple cores.
Running repetitive tasks in parallel: 10s/100s/1000s of small short tasks.
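To make the “repetitive tasks in parallel” pattern concrete: on Slurm-based clusters such as MedicineBow (job submission is covered later), many small tasks are typically expressed as a job array. The sketch below is hypothetical - the job name, account, time limit, and workload are placeholders:

```shell
#!/bin/bash
# Hypothetical Slurm job-array script: asks the scheduler to run 100
# independent copies of this script, each with its own task ID.
#SBATCH --job-name=demo-array
#SBATCH --account=genomicdatasci
#SBATCH --time=00:10:00
#SBATCH --array=1-100

# Each array task would select and process its own input here.
echo "Processing sample ${SLURM_ARRAY_TASK_ID}"
```

Submitted once with sbatch, this queues 100 small tasks that the scheduler runs wherever resources are free.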
What is a Compute Node?
We typically have multiple users independently running jobs concurrently across compute nodes - multi-tenancy.
Resources are shared, but your jobs do not interfere with anyone else’s resources.
i.e. you have your own cores and your own block of memory.
If someone else’s job fails, it does NOT affect yours.
Homogeneous vs Heterogeneous HPCs
There are 2 types of HPC systems:
Homogeneous: All compute nodes in the system share the same architecture. CPU, memory, and storage are the same across the system. (Ex: NWSC’s Derecho)
Heterogeneous: The compute nodes in the system can vary architecturally with respect to CPU, memory, even storage, and whether they have GPUs or not. Usually, the nodes are grouped in partitions. MedicineBow is a heterogeneous cluster.
Cluster: Heterogeneous: Partitions
MedicineBow Hardware Summary Table: Understand what resources are available.
02 Using OnDemand to access the MedicineBow HPC Cluster
Goals:
Demonstrate how users log into OnDemand.
Demonstrate requesting and using a XFCE Desktop Session
Introduce the Linux File System and how it compares to common workstation environments
Introduce HPC specific directories and how they’re used
Introduce MedicineBow specific directories and how they’re used
Demonstrate how to access files using the MedicineBow File Browsing Application
Demonstrate the use of emacs, available as a GUI based text-editor
Log in and Access the Cluster
Open OnDemand Dashboard: This service allows users to access the MedicineBow cluster through a web-based portal, via a browser.
Exercise: Open an Interactive Desktop:
You can also access the cluster by using a terminal and SSH-ing onto it - see Intro to Accessing the Cluster.
Structure of the Linux File System and HPC Directories
From within the Interactive Desktop:
Linux File Structure: Double click on the Home icon, and then File System.
This is specific to the MedicineBow HPC but most Linux environments will look very similar:
Linux Operating Systems (Generally)
Compare and Contrast: Linux, HPC Specific, MedicineBow Specific
The project name for this class is: genomicdatasci
Exercise: File Browsing in OnDemand GUI: The Files Category and App
03 Using Linux and the Command Line
Goals:
Introduce the shell terminal and command line interface
Demonstrate starting a MedicineBow SSH shell using OnDemand
Demonstrate information provided in a command prompt
Introduce Policy for HPC Login Nodes
Demonstrate how to navigate the file system to create and remove files and folders using the command line interface (CLI):
mkdir, cd, ls, mv, cp
Demonstrate the use of man and --help, and identify when these should be used
Demonstrate using a command-line text editor, vi
Based on Workshop: Intro to Linux Command-Line: The File System
Exercise: Shell Terminal Introducing Command Line
Getting Started: Using the OnDemand service: Using the Terminal:
What am I Using?
Remember:
The MedicineBow Shell Access opens up a new browser tab that is running on a login node. Do not run any computation on these.
[<username>@mblog1/2 ~]$
The OnDemand Interactive Desktop (terminal) is already running on a compute node.
[<username>@mbcpu-001 ~]$
Login Node Policy
As a courtesy to your colleagues, please do not run the following on any login nodes:
Anything compute-intensive (tasks using significant computational/hardware resources - Ex: using 100% cluster CPU)
Any collection of a large # of tasks resulting in a similar hardware footprint to actions mentioned previously.
Either start an Interactive Desktop, an interactive session (salloc), or submit a job (sbatch). These will be covered later.
See more on our ARCC HPC Policies.
Demonstrating how to get help in CLI
[<username>@mblog1 ~]$ man pwd
NAME
pwd - print name of current/working directory
SYNOPSIS
pwd [OPTION]...
DESCRIPTION
Print the full filename of the current working directory.
-L, --logical
use PWD from environment, even if it contains symlinks
-P, --physical
avoid all symlinks
--help display this help and exit
--version
output version information and exit
If no option is specified, -P is assumed.
NOTE: your shell may have its own version of pwd, which usually supersedes the version described here. Please refer to your shell's documentation
for details about the options it supports.
[<username>@mblog1 ~]$ cp --help
Usage: cp [OPTION]... [-T] SOURCE DEST
or: cp [OPTION]... SOURCE... DIRECTORY
or: cp [OPTION]... -t DIRECTORY SOURCE...
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.
Demonstrating file navigation in CLI
File Navigation demonstrating the use of:
[<username>@mblog1 ~]$ pwd
/home/<username>
[<username>@mblog1 ~]$ ls
Desktop Documents Downloads ondemand R
[<username>@mblog1 ~]$ cd /project/genomicdatasci
[<username>@mblog1 genomicdatasci]$ pwd
/project/genomicdatasci
[<username>@mblog1 genomicdatasci]$ cd <username>
[<username>@mblog1 <username>]$ ls -la
total 2.0K
drwxr-sr-x 2 <username> genomicdatasci 4.0K May 23 11:05 .
drwxrws--- 80 root genomicdatasci 4.0K Jun 4 14:39 ..
[<username>@mblog1 <username>]$ pwd
/project/genomicdatasci/<username>
[<username>@mblog1 <username>]$ cd ..
[<username>@mblog1 genomicdatasci]$ pwd
/project/genomicdatasci
Demonstrating how to create and remove files and folders using CLI
Creating, moving and copying files and folders:
[<username>@mblog1 genomicdatasci]$ cd <username>
[<username>@mblog1 <username>]$ touch testfile
[<username>@mblog1 <username>]$ mkdir testdirectory
[<username>@mblog1 <username>]$ ls
testdirectory testfile
[<username>@mblog1 <username>]$ mv testfile testdirectory
[<username>@mblog1 <username>]$ ls
testdirectory
[<username>@mblog1 <username>]$ cd testdirectory
[<username>@mblog1 testdirectory]$ ls
testfile
[<username>@mblog1 testdirectory]$ cd ..
[<username>@mblog1 <username>]$ cp -r testdirectory ~
[<username>@mblog1 <username>]$ cd ~
[<username>@mblog1 ~]$ pwd
/home/<username>
[<username>@mblog1 ~]$ ls
Desktop Documents Downloads ondemand R testdirectory
[<username>@mblog1 ~]$ cd testdirectory
[<username>@mblog1 testdirectory]$ ls
testfile
[<username>@mblog1 testdirectory]$ rm testfile
[<username>@mblog1 testdirectory]$ ls
[<username>@mblog1 testdirectory]$
04 Text Editors
See Workshop: Intro to Text Editors in Linux
You can use Text Editors:
from the command-line, e.g. vi.
via a GUI, from an Interactive Desktop, using emacs: Applications > Accessories > Emacs
*** Class 02 ***
05 Using Linux to Search/Parse Text Files
Goals:
Using the command-line, demonstrate how to search and parse text files.
Show how export can be used to set up environment variables, and echo to see what values they store.
Linux Commands: find, cat/head/tail, grep, sort/uniq
Pipe (|) output from one command to the input of another, and redirect to a file using >, >>.
Based on Workshop: Intro to Linux Command-Line: View Find and Search Files
Your Environment: Echo and Export
# View the settings configured within your environment.
[~]$ env
# View a particular environment variable
# PATH: Where your environment will look for executables/commands.
[~]$ echo $PATH
# Create an environment variable that points to the workshop data folder.
[~]$ export TEST_DATA=/project/genomicdatasci/software/test_data
# Check it has been correctly set.
[~]$ echo $TEST_DATA
/project/genomicdatasci/software/test_data
Use Our Environment Variable
# Lets use it.
# Navigate to your home.
[~]$ cd
# Navigate to the workshop data folder.
[~]$ cd $TEST_DATA
[test_data]$ pwd
/project/genomicdatasci/software/test_data
These are only available within this particular terminal/session.
Once you close this terminal, they are gone.
They are not available across other terminals.
Advanced: To make them 'permanent', you can update your ~/.bashrc
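For example, a sketch of persisting the variable (writing to a scratch file here for safety; in practice you would append the same line to your real ~/.bashrc):

```shell
# Append the export to a shell startup file so new shells pick it up.
RC_FILE=/tmp/demo_bashrc          # in practice: RC_FILE="$HOME/.bashrc"
echo 'export TEST_DATA=/project/genomicdatasci/software/test_data' >> "$RC_FILE"

# New login shells read ~/.bashrc automatically; to apply it right now:
source "$RC_FILE"
echo "$TEST_DATA"                 # /project/genomicdatasci/software/test_data
```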
Search for a File
Based on: Search for a File
Linux is case-sensitive.
[test_data]$ cd /project/genomicdatasci/software/test_data
# Find a file using its full name.
[test_data]$ find . -name "epithelial_overrep_gene_list.tsv"
./scRNASeq_Results/epithelial_overrep_gene_list.tsv
# Remember, Linux is case sensitive
# Returned to command prompt with no output.
[test_data]$ find . -name "Epithelial_overrep_gene_list.tsv"
[test_data]$
# Use case-insensitive option:
[test_data]$ find . -iname "Epithelial_overrep_gene_list.tsv"
./scRNASeq_Results/epithelial_overrep_gene_list.tsv
Use Wildcards *
# Use Wildcards:
[test_data]$ find . -name "epithelial*"
./scRNASeq_Results/epithelial_overrep_gene_list.tsv
./scRNASeq_Results/epithelial_de_gsea.tsv
[test_data]$ find . -name "*.tsv"
./Grch38/Hisat2/exons.tsv
./Grch38/Hisat2/splicesites.tsv
./DE_Results/DE_sig_genes_DESeq2.tsv
./DE_Results/DE_all_genes_DESeq2.tsv
./scRNASeq_Results/epithelial_overrep_gene_list.tsv
./scRNASeq_Results/epithelial_de_gsea.tsv
./Pathway_Results/fc.go.cc.p.down.tsv
./Pathway_Results/fc.go.cc.p.up.tsv
./BatchCorrection_Results/DE_genes_uhr_vs_hbr_corrected.tsv
View the Contents of a File
Based on: View/Search a File
[]$ cd /project/genomicdatasci/software/test_data/scRNASeq_Results
# View the contents of a TEXT based file:
# Prints everything.
[scRNASeq_Results]$ cat epithelial_overrep_gene_list.tsv
# View 'page-by-page'
# Press 'q' to exit and return to the command-line prompt.
[scRNASeq_Results]$ more epithelial_overrep_gene_list.tsv
View the Start and End of a File
# View the first 10 lines.
[]$ head epithelial_overrep_gene_list.tsv
# View the first 15 lines.
[]$ head -n 15 epithelial_overrep_gene_list.tsv
# View the last 10 lines.
[]$ tail epithelial_overrep_gene_list.tsv
# View the last 5 lines.
[]$ tail -n 5 epithelial_overrep_gene_list.tsv
# On a login node, remember you can use 'man head'
# or 'tail --help' to look up all the options for a command.
Search the Contents of a Text File
[]$ cd /project/genomicdatasci/software/test_data/scRNASeq_Results
# Find rows containing "Zfp1"
# Remember: Linux is case-sensitive
# Searching for all lower case: zfp1
[]$ grep zfp1 epithelial_overrep_gene_list.tsv
[]$
# Searching with correct upper/lower case combination: Zfp1
# Returns all the lines that contain this piece of text.
[]$ grep Zfp1 epithelial_overrep_gene_list.tsv
Zfp106
Zfp146
Zfp185
Zfp1
Grep-ing with Case-Insensitive and Line Numbers
# Grep ignoring case.
[]$ grep -i zfp1 epithelial_overrep_gene_list.tsv
Zfp106
Zfp146
Zfp185
Zfp1
# What line numbers are the elements on?
[]$ grep -n -i zfp1 epithelial_overrep_gene_list.tsv
696:Zfp106
1998:Zfp146
2041:Zfp185
2113:Zfp1
Pipe: Count, Sort
Based on: Output Redirection and Pipes
[]$ cd /project/genomicdatasci/software/test_data/scRNASeq_Results
# Pipe: direct the output of one command to the input of another.
# Count how many lines/rows are in a file.
[]$ cat epithelial_overrep_gene_list.tsv | wc -l
2254
# Alphabetically sort a file:
[]$ sort epithelial_overrep_gene_list.tsv
...
Zswim4
Zyx
Zzz3
Zzz3
# Count lines after sorting.
[]$ sort epithelial_overrep_gene_list.tsv | wc -l
2254
Uniq
# Find and list the unique elements within a file.
# You need to sort your elements first.
[]$ sort epithelial_overrep_gene_list.tsv | uniq
...
Zswim4
Zyx
Zzz3
# You can pipe multiple commands together.
# Find, list and count the unique elements within a file:
[]$ sort epithelial_overrep_gene_list.tsv | uniq | wc -l
2253
Redirect Output into a File
# Redirect an output into a file.
# >  : Overwrites a file.
# >> : Appends to a file.
[]$ sort epithelial_overrep_gene_list.tsv > sorted.tsv
# This will fail for anyone else.
-bash: sorted.tsv: Permission denied
# You do not have write permission within this folder.
[]$ cd ..
[]$ ls -al
drwxr-sr-x 2 <username> genomicdatasci 4096 May 31 13:50 scRNASeq_Results
# Redirect to a location where you do have write permission - your home folder.
[]$ cd scRNASeq_Results/
[]$ sort epithelial_overrep_gene_list.tsv > ~/sorted.tsv
[]$ ls ~
... sorted.tsv ...
[]$ head ~/sorted.tsv
For further details on permissions, read through File Ownership and Permissions.
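The pipe and redirection ideas above can be recapped in a self-contained example that works anywhere you have write access (here a scratch file in /tmp, with made-up contents):

```shell
# Create a small demo file: three lines, one duplicate.
printf 'Zyx\nZzz3\nZzz3\n' > /tmp/genes.txt

# Pipe the contents into a line count.
cat /tmp/genes.txt | wc -l                    # 3

# Sort, de-duplicate, and redirect (>) into a new file.
sort /tmp/genes.txt | uniq > /tmp/genes_unique.txt
wc -l < /tmp/genes_unique.txt                 # 2

# >> appends instead of overwriting.
echo 'Zfp1' >> /tmp/genes_unique.txt
wc -l < /tmp/genes_unique.txt                 # 3
```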
06 Let's start using R(/Python) and RStudio
Goals:
Using a terminal (via an Interactive Desktop), demonstrate how to load modules to setup an environment that uses R/RStudio and how to start the GUI.
Mention how the module system will be used, in later workshops, to load other software applications.
(Indicate how this relates to setting up environment variables behind the scenes.)
Further explain the differences between using a login node (which requires an salloc to access a compute node) and already running on a compute node (with limited resources) via an Interactive Desktop.
Confirm arguments for partition, gres/gpu.
Note that you can confirm a GPU device is available by running nvidia-smi -L from the command-line.
Show how the resources from the Interactive Desktop configuration start mapping to those used by salloc.
Based on Workshops:
Open a Terminal
You can access a Linux terminal from OnDemand by:
Opening up an Interactive Desktop and opening a terminal.
Running on a compute node: Command prompt:
[<username>@t402 ~]$
Only select what you require:
How many hours? Your session will NOT run any longer than the number of hours you requested.
Some Desktop Configurations will NOT work with some GPU Types.
Do you actually need a GPU?
Unless your software/library/package has been developed to utilize a GPU, simply selecting one will NOT make any difference - it won't make your code magically run faster.
Selecting a MedicineBow Shell Access which opens up a new browser tab.
Running on the login node:
[<username>@mblog1/2 ~]$
To run any GUI application, you must use OnDemand and an Interactive Desktop.
Setting Up a Session Environment
Across the class, you’ll be using a number of different environments.
Running specific software applications.
Programming with R and using various R libraries.
Programming with Python and using various Python packages.
Environments built with Miniconda - a package/environment manager.
Since the cluster has to cater for everyone, we cannot provide a single desktop environment that provides everything.
Instead we provide modules that a user loads to configure their environment for their particular needs within a session.
Loading a module configures various environment variables within that Session.
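Conceptually, loading a module is shorthand for a set of environment-variable edits like the ones below (a simplified sketch using the r/4.4.0 install path shown later in this section; real module files set many more variables):

```shell
# Roughly what 'module load r/4.4.0' does behind the scenes:
# prepend the application's bin directory to PATH...
export PATH=/apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/bin:$PATH
# ...and set application-specific variables.
export R_HOME=/apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/rlib/R

echo "$R_HOME"    # /apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/rlib/R
```

Unloading (or module purge) reverses these edits, which is why a fresh session starts without R on the PATH.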
What is Available?
We have environments available based on compilers, Apptainer containers (formerly Singularity), Conda, and Linux binaries.
[]$ module avail
[]$ gcc --version
gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
[]$ which gcc
/usr/bin/gcc
[]$ echo $PATH
/home/<username>/.local/bin:/home/<username>/bin:/apps/s/arcc/1.0/bin:/apps/s/slurm/latest/bin:
/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
Is Python and/or R available?
# An old version of Python is available on the System.
# Systems are updated! Do NOT rely on them for your environment with regard to versions/reproducibility.
[]$ which python
/usr/bin/python
[]$ python --version
Python 3.9.21
# R is NOT available.
[]$ which R
/usr/bin/which: no R in (/home/<username>/.local/bin:/home/<username>/bin:
/apps/s/arcc/1.0/bin:/apps/s/slurm/latest/bin:/usr/share/Modules/bin:
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
# Nothing returned.
[]$ echo $R_HOME
[]$
Load a Compiler
# Load a compiler:
[]$ module load gcc/14.2.0
[]$ module avail
# Notice there are a lot more applications available under this loaded compiler.
[]$ gcc --version
gcc (Spack GCC) 14.2.0
[]$ which gcc
/apps/u/spack/gcc/11.4.1/gcc/14.2.0-vzbrz6i/bin/gcc
# Notice that the environment variables have been extended.
[]$ echo $PATH
/apps/u/spack/gcc/11.4.1/gcc/14.2.0-vzbrz6i/bin:/apps/u/spack/gcc/14.2.0/zstd/1.5.5-4jnrrl7/bin:
/home/<username>/.local/bin:/home/<username>/bin:/apps/s/arcc/1.0/bin:
/apps/s/slurm/latest/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
# Notice R is now available and newer versions of Python are available under gcc/14.2.0
r/4.4.0
python/3.10.6
python/3.12.0
Load a Newer Version of Python
[]$ module load python/3.10.6
[]$ which python
/apps/u/spack/gcc/14.2.0/python/3.10.6-6lvrsdd/bin/python
[]$ python --version
Python 3.10.6
Typically Loading R
[]$ module load r/4.4.0
# Notice the environment variable has now been set.
[]$ echo $R_HOME
/apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/rlib/R
[]$ which R
/apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/bin/R
# Notice ALL the dependencies:
[]$ module list
Currently Loaded Modules:
1) slurm/latest (S) 42) libxau/1.0.8
2) arcc/1.0 (S) 43) libxdmcp/1.1.4
...
40) libpthread-stubs/0.4 81) r/4.4.0
41) xproto/7.0.31
[]$ R --version
R version 4.4.0 (2024-04-24) -- "Puppy Cup"
You then perform install.packages() and manage these packages yourself.
Same with Python: you perform the pip install to install whichever Python packages you require.
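A sketch of this per-user install pattern (the package names are just examples, not requirements of the class):

```shell
# R: after 'module load r/4.4.0', start R and install into your user library:
#   > install.packages("ggplot2")   # offers to create a personal library the first time
# Python: after loading a python module, install into your user site directory:
#   pip install --user pandas
# You can check where per-user Python packages are placed:
python3 -c 'import site; print(site.getusersitepackages())'
```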
Using module purge to reset your session/environment
[]$ module purge
The following modules were not unloaded:
(Use "module --force purge" to unload all):
1) slurm/latest 2) arcc/1.0
# ml is a shortcut for module list
[<username>@mblog2 testdirectory]$ ml
Currently Loaded Modules:
1) slurm/latest (S) 2) arcc/1.0 (S)
Where:
S: Module is Sticky, requires --force to unload or purge
Modules Specific for this Class
We have created two modules specifically for this class:
R/4.4.0 + Library of 477 R Packages:
[]$ ls /project/genomicdatasci/software/r/libraries_gcc14/
abind DBI ggnewscale libcoin RcppAnnoy sourcetools
alabaster.base dbplyr ggplot2 lifecycle RcppArmadillo sp
alabaster.matrix DelayedArray ggplotify limma RcppEigen spam
...
R/4.3.3 and R Package Pigengene
Due to dependency hell issues, we could not install Pigengene within the R library collection.
There are two separate environments.
With different versions of R.
Using R/4.4.0 + Library
[]$ module purge
[]$ module use /project/genomicdatasci/software/modules/
[]$ module avail
...
------------------- /project/genomicdatasci/software/modules -------------------
bam-readcount/0.8.0 (D) pigengene/3.18
bedops/2.4.41 r/4.4.0-genomic-gcc14
fastp/0.23.4 (D) regtools/1.0.0 (D)
fastqc/0.12.1 (D) rseqc/5.0.3 (D)
hisat-genotype/1.3.3 (D) salmon/1.10.3
hisat2/2.2.1 (D) samtools/1.20
kentutils/1.04.0 (D) sratoolkit/3.1.1 (D)
multiqc/1.24.1 (D) subread/2.0.6 (D)
picard/3.2.0 (D) tophat/2.1.1 (D)
...
If you do not call the .libPaths() command from within R (or an R script), you will not get access to the packages.
Version: r/4.4.0-genomic-gcc14
This later version has been built using gcc/14.2.0 - we recommend using this version.
[]$ module purge
[]$ module load r/4.4.0-genomic-gcc14
[]$ R
R version 4.4.0 (2024-04-24) -- "Puppy Cup"
...
> .libPaths(c('/project/genomicdatasci/software/r/libraries_gcc14', '/apps/u/spack/gcc/14.2.0/r/4.4.0-w7xoohc/rlib/R/library'))
R/4.3.3 and R Package Pigengene
[<username>@mblog2 testdirectory]$ module purge
[<username>@mblog2 testdirectory]$ module use /project/genomicdatasci/software/modules/
[<username>@mblog2 testdirectory]$ module load pigengene/3.18
[<username>@mblog2 testdirectory]$ R --version
R version 4.3.3 (2024-02-29) -- "Angel Food Cake"
...
# Start R
[<username>@mblog2 testdirectory]$ R
R version 4.3.3 (2024-02-29) -- "Angel Food Cake"
...
> library(Pigengene)
Loading required package: graph
Loading required package: BiocGenerics
...