Slurm is the workload manager through which all jobs are submitted, including both batch and interactive jobs. Slurm provides several user-facing commands, all of which have Unix man pages that should be consulted for full details.
Batch Jobs
Batch jobs are submitted with the sbatch command, either as a job script or as commands entered on sbatch's standard input. The job enters the queueing system and executes when resources become available: execution may start immediately if the queue has room, after a short wait if preemption is opted for, or after an extended wait if the queue is full or running limits have already been reached.
A simple sbatch script for a "Hello World!" type job follows:
#!/bin/bash
### Assume this file is named hello.sh
#SBATCH --account=arcc
#SBATCH --time=24:00:00

echo "Hello World!"
The two '#SBATCH' directives above are required for all job submissions, whether interactive or batch. The account value should be changed to the appropriate project account, and the time should be changed to an appropriate walltime limit. Note that this is a walltime limit, not CPU time. These values may also be supplied directly on the command line when submitting. By default, Slurm gives jobs one node, one task per node, and one CPU per task.
Submitting Jobs
$ sbatch hello.sh
or, with account and time on the command line directly rather than as directives in the shell script:
$ sbatch --account=arcc --time=24:00:00 hello.sh
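If the job is accepted, sbatch responds with the job ID, which is used for monitoring and canceling the job. The ID below is illustrative:

$ sbatch hello.sh
Submitted batch job 123456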
Single Node, Multi-Core Jobs
Slurm creates allocations of resources, and the resources needed vary with the work being done on the cluster. A batch job that requires multiple cores can be laid out in a few different ways depending on what is intended to run. If the job is a multi-threaded application, such as one using OpenMP or pthreads, it is best to set the number of tasks to 1. The script below requests a single node with 4 cores available. The job script, assuming OpenMP, sets the number of threads from the job-provided environment variable SLURM_CPUS_PER_TASK.
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./application
Single Node, Multi-Task
This could be a multi-task job where the application has its own parallel processing engine or uses MPI, but experiences poor scaling over multiple nodes.
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1

### Assuming MPI application
srun ./application
Multi-Node, Non-Multithreaded
An application that strictly uses MPI can often make use of multiple nodes. However, many MPI programs do not implement multithreading, in which case the number of CPUs per task should be set to 1.
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1

### Assuming 'application' is on your $PATH environment variable
srun application
Multi-Node, Multithreaded
Some applications have been developed to take advantage of both distributed memory parallelism and shared memory parallelism, such that they are capable of using MPI and threading together. This often requires the user to find the right balance based on additional resource requirements such as memory per task, network bandwidth, and node core count. The example below requests that 4 nodes be allocated, each supporting 4 MPI ranks, with each MPI rank supporting 4 threads. The total CPU request aggregates to 64 (i.e., 4 x 4 x 4).
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=24:00:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun application -arg1 -arg2
Checking Status and Canceling
You can use the squeue command to display the status of all your jobs:
$ squeue -u $USER
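The default output columns show the job ID, partition, job name, user, state (e.g., PD for pending, R for running), elapsed time, node count, and node list. Illustrative output (the job IDs, partition, and node names are examples only):

  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 123456      arcc hello.sh   user01  R       0:42      1 t305
 123457      arcc  mpi_job   user01 PD       0:00      4 (Resources)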
Use scancel to delete a particular job from the queue:
$ scancel <jobid>
Viewing the Results
Once your job has completed, you should see an output file in the directory from which you submitted the job. By default, Slurm writes both standard output and standard error to a single file named slurm-XXXXX.out (where the X's are replaced by the numerical job identifier returned by sbatch). In the Hello World example, the "Hello World!" text sent to standard output will appear in this file.
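If you prefer separately named output files, the --output and --error directives can be added to the job script; the %j pattern expands to the job ID. A minimal sketch:

#SBATCH --output=hello.o%j
#SBATCH --error=hello.e%j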
Interactive Jobs
Interactive jobs provide shell access on the compute nodes, where applications can be run interactively, files can be heavily processed, or large applications can be compiled. They can be requested with arguments similar to those of batch jobs. ARCC has configured the clusters such that Slurm interactive allocations give shell access on the compute nodes themselves rather than keeping the shell on the login node. The salloc command is the appropriate way to launch interactive jobs.
$ salloc --account=arcc --time=40:00 --nodes=1 --ntasks-per-node=1 --cpus-per-task=8
The value of interactive jobs is to allow users to work interactively at the CLI, or to make interactive use of debuggers (ddt, gdb), profilers (map, gprof), or language interpreters such as Python, R, or Julia.
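A typical interactive session might look like the following; the allocation messages are printed by salloc, while the job ID and script name are illustrative:

$ salloc --account=arcc --time=40:00 --nodes=1 --ntasks-per-node=1 --cpus-per-task=8
salloc: Granted job allocation 123458
$ python my_analysis.py
$ exit
salloc: Relinquishing job allocation 123458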
Special Hardware / Configuration Requests
Slurm is a flexible and powerful workload manager. It has been configured to be very expressive in allocating particular node features and specialized hardware. Some features are requested as a Generic Resource (GRES), while others are requested through the constraints option.
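For example, node features (such as the haswell feature visible in the troubleshooting output later in this document) can be requested with the --constraint option. A sketch, assuming a feature named haswell exists on the cluster:

$ salloc --account=arcc --time=40:00 --constraint=haswell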
GPU Requests
Request 16 CPUs and 2 GPUs for an interactive session:
$ salloc -A arcc --time=40:00 -N 1 --ntasks-per-node=1 --cpus-per-task=16 --gres=gpu:2
Request 16 CPUs and 1 GPU of type P100 in a batch script:
#!/bin/bash
#SBATCH --account=arcc
#SBATCH --time=1-00:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --gres=gpu:P100:1

srun gpu_application
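To see which GPUs (and other generic resources) are available on which nodes, sinfo can print the GRES column; exact output varies by cluster:

$ sinfo -o "%P %N %G"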
Long Job QOS Configuration
To allow projects to temporarily run jobs for up to 14 days, ARCC has established a special QOS (long-jobs-14) with the following limits:
14-day wall clock time limit
10 max running jobs
ARCC can create other QOSs with different limits as needed.
QOS Creation
To create the QOS for this feature, issue the following command as root on tmgt1:
sacctmgr add qos <QOS name> set Flags=PartitionTimeLimit MaxWall=14-0 MaxJobsPA=10
For example, to create a QOS with a 14-day wall time limit and a maximum of 10 running jobs:
sacctmgr add qos long-jobs-14 set Flags=PartitionTimeLimit MaxWall=14-0 MaxJobsPA=10
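You can verify the QOS was created with the expected limits:

sacctmgr show qos long-jobs-14 format=Name,Flags,MaxWall,MaxJobsPA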
Allow Access to the QOS
Once the QOS with the proper limits has been created, you need to apply it to the project:
sacctmgr modify account <project name> where cluster=teton set qos+=long-jobs-14
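To confirm the QOS is now attached to the project's association:

sacctmgr show assoc where account=<project name> format=Account,QOS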
Now that you have enabled the long-jobs-14 QOS on a project, inform the users to add:
--qos=long-jobs-14
to their salloc, sbatch, or srun command.
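For example (the walltime and script name are illustrative):

$ sbatch --qos=long-jobs-14 --account=arcc --time=10-00:00:00 long_job.sh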
Remove Access to the QOS
Once the project no longer needs to run longer jobs, remove its access to the QOS:
sacctmgr modify account <project name> where cluster=teton set qos-=long-jobs-14
Troubleshooting
Node won't come online
If a node won't come online for some reason, check the node information for a Slurm reason. Run:
scontrol show node=XXX
The command output should include the reason why Slurm won't bring the node online. For example:
root@tmgt1:/apps/s/lenovo/dsa# scontrol show node=mtest2
NodeName=mtest2 Arch=x86_64 CoresPerSocket=10
   CPUAlloc=0 CPUTot=20 CPULoad=0.02
   AvailableFeatures=ib,dau,haswell,arcc
   ActiveFeatures=ib,dau,haswell,arcc
   Gres=(null)
   NodeAddr=mtest2 NodeHostName=mtest2 Version=18.08
   OS=Linux 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018
   RealMemory=64000 AllocMem=0 FreeMem=55805 Sockets=2 Boards=1
   State=IDLE+DRAIN ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=arcc
   BootTime=06.08-11:44:57 SlurmdStartTime=06.08-11:47:35
   CfgTRES=cpu=20,mem=62.50G,billing=20
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
   Reason=Low RealMemory [slurm@06.10-10:00:27]
The Reason=Low RealMemory line indicates that the memory definition for the node and what Slurm actually found are different. You can use
free -m
to see what the system thinks it has in terms of memory.
The node definition should specify a memory value less than or equal to the total shown by the "free" command. Verify that the settings are correct for the memory the node should have. If not, investigate and determine the cause of the discrepancy.
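Once the underlying problem has been corrected (for example, by fixing the RealMemory value in the node definition and reconfiguring Slurm), the node can be returned to service:

scontrol update nodename=mtest2 state=resume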
Configuring Slurm for Investments
The Teton cluster is the University of Wyoming's condo cluster, which provides computing resources to the general UW research community. Being a condo cluster, researchers can invest funds into the cluster in order to expand its capacity. As an investor, a researcher is afforded special privileges, specifically first access to the nodes their funds purchased.
To establish an investment within Slurm, follow these steps:
1. First, define an investor partition that refers to the purchased nodes. To create the partition definition, edit /apps/s/slurm/latest/etc/partitions-invest.conf and add:
# Comment describing the investment
PartitionName=inv-<investment-name> AllowQos=<investment-name> \
    Default=No \
    Priority=10 \
    State=UP \
    Nodes=<nodelist> \
    PreemptMode=off \
    TRESBillingWeights="CPU=1.0,Mem=.00025"

Where:
investment-name is the name you wish to call the new investment
nodelist is the list of nodes to be included in the investment definition, e.g. t[305-315],t317
Adjust the TRESBillingWeights accordingly based on the node specifications
Note: The nodes should also be added to the general partition list, i.e. teton
2. Once you have checked and re-checked your work for correctness, reconfigure Slurm with the new partition definition:
scontrol reconfigure
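You can confirm that the new partition is active and refers to the intended nodes:

scontrol show partition inv-<investment-name>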
For the following steps you will need access to two ARCC-created commands:
add_slurm_inv
add_proj_to_inv
3. Now that you have the investor partition set up, you need to create the associated Slurm DB entries. First, run:
/root/bin/idm_scripts/add_slurm_inv inv-<investment-name>
This will create the investor umbrella account that ties the investment to projects.
4. Now add the investor project to the investor umbrella account:
/root/bin/idm_scripts/add_proj_to_inv inv-<investment-name> <project>
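As a final sanity check, you can confirm that the project now appears under the investor umbrella account (the format fields shown are one reasonable choice):

sacctmgr show assoc where account=inv-<investment-name> format=Account,User,QOS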