Slurm: Getting Started - Jobs and Nodes

Overview

Slurm is the basis on which all jobs, both batch and interactive, are submitted. Slurm consists of several user-facing commands, each of which has a Unix man page that should be consulted for full details. On this page, users will find information about running and submitting jobs, nodes, available partitions, basic Slurm commands, troubleshooting, and the steps used to configure Slurm for investments.

Required Inputs, Default Values, and Limits

There are some default values and limits set for Slurm jobs. The following are required for every submission:

  1. Walltime limit: --time=[days-hours:mins:secs]

  2. Project account: --account=account

Default Values

Additionally, the default submission has the following characteristics:

  1. Node count: one node (-N 1, --nodes=1)

  2. Task count: one task (-n 1, --ntasks=1)

  3. Memory: 1000 MB of RAM per CPU (--mem-per-cpu=1000)

These can be changed by requesting different allocation schemes with the appropriate flags. Please reference our Slurm documentation.

Default Limits

On historic ARCC HPC resources, default limits were expressed as the number of cores each project account could use concurrently, and investors received an increase in concurrent core usage. To facilitate more flexible scheduling for all research groups, ARCC is looking at implementing limits based on concurrent usage of cores, memory, and job walltime. These limits will be defined in the near future and will be subject to FAC review.

Commands

sacct

  • Query detailed accounting information about running or completed jobs.
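
For example, to summarize a job (where 12345 is a placeholder for the job ID):

$ sacct -j 12345 --format=JobID,JobName,Elapsed,MaxRSS,State,ExitCode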

salloc

  • Request an interactive job for debugging and/or interactive computing. ARCC configures the salloc command to launch an interactive shell on individual compute nodes, with your environment carried over from the current session. This command requires specifying a project account (-A or --account=) and walltime (-t or --time=).
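
For example, to request a one-hour interactive session (replace arcc with your own project account):

$ salloc --account=arcc --time=1:00:00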

sbatch

  • Submit a batch job consisting of a single job or job array. Several methods can be used to submit batch jobs: a script file can be provided as an argument on the command line, or, more rarely, the batch job can be constructed interactively from standard input. We recommend writing the batch job as a script so that it can be referenced at a later time.

scancel

  • Cancel jobs after submission. Works on pending and running jobs. By default, provide a jobid or set of jobids to cancel. Alternatively, flags can be used to cancel specific jobs by account, name, partition, qos, reservation, or nodelist. To cancel all array tasks, specify the parent jobid.
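
For example, to cancel all of your own pending jobs (using the standard $USER environment variable):

$ scancel --user=$USER --state=PENDING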

sinfo

  • View the status of the Slurm partitions or nodes. The reason nodes are drained or down can be seen using the -R flag.
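
For example, to list partitions and node states, and to show why any nodes are drained or down:

$ sinfo
$ sinfo -R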

squeue

  • View what is running or waiting to run in the job queue. Several modifiers and formats can be supplied to the command. You may be interested in the use of arccq as an alternative. The command arccjobs also provides a summary.

sreport

  • Obtain information regarding usage since the last database roll up (usually around midnight each day). sreport can be used as an interactive tool to see the usage of the clusters.
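
For example, to report account utilization by user over a date range (the dates below are placeholders):

$ sreport cluster AccountUtilizationByUser Start=2024-01-01 End=2024-02-01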

srun

  • A front-end launcher for job steps, including serial and parallel jobs. srun can be considered equivalent to mpirun or mpiexec when launching MPI jobs. Each srun invocation inside a job creates a job step, which provides accounting information on memory, CPU time, and other parameters that are valuable when a job terminates unexpectedly or when historical information is needed.
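
For example, inside a batch script, each of the following lines runs as its own job step (./my_app is a placeholder executable):

srun ./my_app
srun -n 4 ./my_app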

There are some additional commands; however, they are not covered here because they are of limited use to general users on our systems. Reading the man pages for the Slurm commands can be highly beneficial, and if you have questions about submitting jobs, ARCC encourages you to contact arcc-help@uwyo.edu.

Batch Jobs

Batch jobs are submitted via a job script (or commands typed into sbatch interactively), enter the queueing system, and then execute when resources become available. Execution may start immediately if the queue has capacity, after a short period if preemption was opted for, or after an extensive wait if the queue is full or running limits have already been reached.

A simple sbatch script to submit a simple "Hello World!" type problem follows:

#!/bin/bash
### Assume this file is named hello.sh

#SBATCH --account=arcc
#SBATCH --time=24:00:00

echo "Hello World!"

The two '#SBATCH' directives above are required for all job submissions, whether interactive or batch. The account value should be changed to the appropriate project account, and the time should be changed to an appropriate walltime limit (this is a walltime limit, not CPU time). These values can also be supplied directly on the command line when submitting. By default, Slurm jobs use one node, one task per node, and one CPU per task.

Submitting Jobs

$ sbatch hello.sh

or, with account and time on the command line directly rather than as directives in the shell script:

$ sbatch --account=arcc --time=24:00:00 hello.sh

Single Node, Multi-Core Jobs

Slurm creates allocations of resources, and the resources required can vary depending on the work to be done on the cluster. A batch job that requires multiple cores can have a few different layouts depending on what is intended to be run. If the job is a multi-threaded application, such as one using OpenMP or pthreads, it is best to set the number of tasks to 1. The script below requests a single node with 4 cores available. The job script, assuming OpenMP, sets the number of threads to the job-provided environment variable SLURM_CPUS_PER_TASK.
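
A minimal sketch of such a script (./my_omp_app is a placeholder for your OpenMP executable):

#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

# Match the OpenMP thread count to the allocated CPUs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./my_omp_app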

Single Node, Multi-Tasks

This could be a multi-tasked job where the application has its own parallel processing engine or uses MPI but experiences poor scaling over multiple nodes.
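
A minimal sketch of such a script, keeping all tasks on one node (the task count is illustrative and ./my_mpi_app is a placeholder executable):

#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1

# Launch 4 tasks on the single allocated node
srun ./my_mpi_app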

Multi-Node, Non-Multithreaded

An application that strictly uses MPI can often use multiple nodes. However, many MPI programs do not implement multithreading, so the number of CPUs per task should be set to 1.
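
A minimal sketch spreading MPI ranks across two nodes (node and task counts are illustrative; ./my_mpi_app is a placeholder executable):

#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1

# 2 nodes x 4 tasks per node = 8 MPI ranks
srun ./my_mpi_app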

Multi-Node, Multithreaded

Some applications have been developed to take advantage of both distributed memory parallelism and shared memory parallelism, such that they are capable of using MPI and threading together. This often requires the user to find the right balance based on additional resource requirements such as memory per task, network bandwidth, and node core count. The example below requests that 4 nodes be allocated, each supporting 4 MPI ranks, with each MPI rank supporting 4 threads. The total CPU request aggregates to 64 (i.e., 4 x 4 x 4).
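
A minimal sketch of such a hybrid MPI/OpenMP script (./my_hybrid_app is a placeholder executable):

#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4

# 4 nodes x 4 ranks per node x 4 threads per rank = 64 CPUs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./my_hybrid_app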

Checking Status and Canceling

You can use the squeue command to display the status of all your jobs:
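
For example, to list only your own jobs (using the standard $USER environment variable):

$ squeue -u $USER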

and scancel to delete a particular job from the queue:
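
For example (where 12345 is a placeholder for the job ID returned by sbatch):

$ scancel 12345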

Viewing the Results

Once your job has completed, you should see its output in the directory from which you submitted the job. By default, Slurm writes both standard output and standard error to a single file named slurm-XXXXX.out (where the X's are replaced by the numerical portion of the job identifier returned by sbatch). Separate output and error files, for example <jobname>.oXXXXX and <jobname>.eXXXXX, can be requested with the --output and --error options. In the Hello World example, the "Hello World!" text sent to standard output will appear in this output file.
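
For example, to produce the separate <jobname>.oXXXXX and <jobname>.eXXXXX files described above, the following directives could be added to the script (%x expands to the job name and %j to the job ID):

#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j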

Interactive Jobs

Interactive jobs allow shell access to compute nodes, where applications can be run interactively, files can be heavily processed, or large applications can be compiled. They can be requested with arguments similar to batch jobs. ARCC has configured the clusters so that Slurm interactive allocations give shell access on the compute nodes themselves rather than keeping the shell on the login node. The salloc command is the appropriate way to launch interactive jobs.

The value of interactive jobs is in allowing users to work interactively at the command line, with debuggers (ddt, gdb), profilers (map, gprof), or language interpreters such as Python, R, or Julia.
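
For example, to request a one-hour interactive session with 4 cores on a single node (replace arcc with your own project account), then work at the resulting shell prompt on the compute node:

$ salloc --account=arcc --time=1:00:00 --nodes=1 --ntasks=1 --cpus-per-task=4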