Goal: Introduction to Slurm and how to start interactive sessions, submit jobs, and monitor them.

...

ARCC: Slurm: Wiki Pages 

Info

A quick read can be found under: Slurm: Getting Started-Jobs and Nodes

ARCC also hosts a number of more detailed and specific wiki pages:

...

...

Interactive Session: salloc

Info
  • You’re there doing the work.

  • Suitable for developing and testing over a few hours.

Code Block
[]$ salloc

Code Block
[]$ salloc --help
[]$ man salloc
# Lots of options. 

# The bare minimum.
# This will provide the defaults of one node, one core and 1G of memory.
[]$ salloc -A <project-name> -t <wall-time>
Info
  • As with other Linux commands, there are typically short and long forms for the options.

    • -A vs --account and -t vs --time.

  • Format for -t/--time: acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
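
For example, a sketch of requesting different wall times (the project name is a placeholder):

Code Block
# 30 minutes:
[]$ salloc -A <project-name> -t 30

# 2 hours, using the long form options:
[]$ salloc --account=<project-name> --time=2:00:00

# 1 day and 12 hours:
[]$ salloc -A <project-name> -t 1-12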

...

Interactive Session: salloc: Workshop

Info

You’ll only use the reservation for this (and/or other) workshop.

Once you have an account you typically do not need it.

But there are use cases where we can create a specific reservation for you, which itself might require a partition to be defined if you’re using a GPU node (more about that later).

Code Block
# CPU only compute node.
[]$ salloc -A <project-name> -t 1:00 --reservation=<reservation-name>

# GPU partition/compute node.
[]$ salloc -A <project-name> -t 1:00 --reservation=<reservation-name> --partition=<partition-name>

...

Info

Closing the session will also release the job.
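
For example, a minimal sketch from within an interactive session:

Code Block
# Leaving the shell closes the salloc session and releases the allocated resources:
[]$ exit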

...

Exercise: salloc: Give It A Go

From a login node, create some interactive sessions using salloc.

Try different wall times:

  • Short times to experience an automatic timeout.

  • Longer times so you can call squeue and see your job in the queue.

Notice how the command-line prompt changes.
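
As a sketch of what to look for (the prompt format and node names will vary depending on the cluster and your shell configuration):

Code Block
# On the login node:
[<username>@<login-node> ~]$ salloc -A <project-name> -t 10:00
# Once the allocation is granted, the prompt changes to the allocated compute node:
[<username>@<compute-node> ~]$ squeue -u <username>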

...

Submit Jobs: sbatch

Info
  • You submit a job to the queue and walk away.

  • Monitor its progress/state using command-line and/or email notifications.

  • Once complete, come back and analyze results.

...

Submit Jobs: sbatch: Example

...

Code Block
#!/bin/bash                               
# Shebang indicating this is a bash script.
# Do NOT put a comment after the shebang; this will cause an error.
#SBATCH --account=<project-name>          # Use #SBATCH to define Slurm related values.
#SBATCH --time=10:00                      # Must define an account and wall-time.
#SBATCH --reservation=<reservation-name>
echo "SLURM_JOB_ID:" $SLURM_JOB_ID        # Can access Slurm related Environment variables.
start=$(date +'%D %T')                    # Can call bash commands.
echo "Start:" $start
module purge
module load gcc/14.2.0 python/3.10.6        # Load the modules you require for your environment.
python python01.py                        # Call your scripts/commands.
sleep 1m
end=$(date +'%D %T')
echo "End:" $end
Note
  • As with salloc, a submission script must at a minimum have an #SBATCH --account and #SBATCH --time defined.

  • Notice we are using the long forms in the example above.

...

Submit Jobs: squeue: What’s happening?

...

Code Block
[]$ sbatch run.sh
Submitted batch job 13526340

[]$ squeue -u <username>
             JOBID PARTITION     NAME      USER  ST       TIME  NODES NODELIST(REASON)
          13526340     moran   run.sh <username>  R       0:05      1 m233
[]$ ls
python01.py  run.sh  slurm-13526340.out

[]$ cat slurm-13526340.out
SLURM_JOB_ID: 13526340
Start: 03/22/24 09:38:36
Python version: 3.10.6 (main, Sep  3 2024, 15:13:56) [GCC 14.2.0]
Version info: sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)

[]$ squeue -u <username>
             JOBID PARTITION     NAME       USER ST       TIME  NODES NODELIST(REASON)
          13526340     moran   run.sh <username>  R       0:17      1 m233

...

Submit Jobs: squeue: What’s happening? Continued

Code Block
[]$ squeue -u <username>
             JOBID PARTITION     NAME       USER ST       TIME  NODES NODELIST(REASON)
          13526340     moran   run.sh <username>  R       0:29      1 m233

[]$ squeue -u <username>
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

[]$ cat slurm-13526340.out
SLURM_JOB_ID: 13526340
Start: 03/22/24 09:38:36
Python version: 3.10.6 (main, Sep  3 2024, 15:13:56) [GCC 14.2.0]
Version info: sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
End: 03/22/24 09:39:36
Info
  • The squeue command only shows pending and running jobs.

  • If a job is no longer in the queue then it has finished.

  • Finished can mean success, failure, timeout, out-of-memory... It’s just no longer running.

...

More squeue Information

...

Info

For more information see the main Slurm squeue page and use:

Code Block
# Lots more information
[]$ squeue --help
[]$ man squeue
Info

For example, using the --Format option you can display additional columns, such as how much time is left from your requested wall time using TimeLeft:

Code Block
[]$ squeue -u <username> --Format="Account,UserName,JobID,SubmitTime,StartTime,TimeLeft"
ACCOUNT             USER                JOBID               SUBMIT_TIME         START_TIME          TIME_LEFT
<project-name>      <username>          1795458             2024-08-14T10:31:07 2024-08-14T10:31:09 6-04:42:51
<project-name>      <username>          1795453             2024-08-14T10:31:06 2024-08-14T10:31:07 6-04:42:49
<project-name>      <username>          1795454             2024-08-14T10:31:06 2024-08-14T10:31:07 6-04:42:49
...

Info

There are various other time related columns:

  • SubmitTime: The time that the job was submitted at.

  • StartTime: Actual or expected start time of the job or job step. This will be different than the submit time if your job has been pending in the queue.

  • TimeLeft: Time left for the job to execute. This value is calculated by subtracting the job's time used from its time limit.

  • TimeLimit: Time limit for the job.

  • TimeUsed: Time used by the job.

  • EndTime: The time of job termination, actual or expected.

There are lots of other columns that can be defined including ones related to resources (nodes, cores, memory) that have been specifically allocated.
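
As a sketch, several of the time and resource related columns can be combined into a single query (the column names follow the --Format conventions shown above):

Code Block
[]$ squeue -u <username> --Format="JobID,TimeLimit,TimeUsed,TimeLeft,EndTime,NumNodes,NumCPUs,MinMemory"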

...

Submission from your Current Working Directory

Info

Remember from Linux that your current location is your Current Working Directory, abbreviated to CWD.

By default, Slurm will look for files, and write output, in the folder you submitted your script from, i.e. your CWD.

In the example above, if I called sbatch run.sh from ~/intro_to_modules/ then the Python script should reside within this folder. Any output will be written into this folder.

Within the submission script you can define paths (absolute/relative) to other locations.
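
For example, a hypothetical sketch (the folder and file names below are placeholders):

Code Block
# Submitted from ~/intro_to_modules/, so relative paths resolve from there:
python python01.py

# Absolute paths can read data from, and write output to, other allowed locations:
python /project/<project-name>/scripts/python01.py > /gscratch/<username>/results.txt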

Info

You can submit a script from any of your allowed locations /home, /project and/or /gscratch.

But you need to manage and describe paths to scripts, data, output appropriately.

...

Submit Jobs: scancel: Cancel?

Info

If you have submitted a job, and for whatever reason you want/need to stop it early, then use scancel <job-id>.

This will stop the job at its current point within the computation, and return any associated resources back to the cluster.

Code Block
[]$ sbatch run.sh
Submitted batch job 13526341
[]$ squeue -u <username>
             JOBID PARTITION     NAME       USER ST       TIME  NODES NODELIST(REASON)
          13526341     moran   run.sh <username>  R       0:03      1 m233

[]$ scancel 13526341

[]$ squeue -u <username>
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

[]$ cat slurm-13526341.out
SLURM_JOB_ID: 13526341
Start: 03/22/24 09:40:09
Python version: 3.10.6 (main, Sep  3 2024, 15:13:56) [GCC 14.2.0]
Version info: sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
slurmstepd: error: *** JOB 13526341 ON m233 CANCELLED AT 2024-03-22T09:40:17 ***

...

Info

Use the sacct command to view your jobs that have completed.

By default this will only list jobs since midnight of the current day.

See the -S/--starttime (and -E/--endtime=<end_time>) options on the main Slurm sacct page to understand how to define a start (and end) time and query different date/time intervals.

It too has a --format option allowing you to display additional columns:

Code Block
[]$ sacct -u <username> -X
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
13526337     interacti+      moran arccanetr+          1    TIMEOUT      0:0
13526338     interacti+      moran arccanetr+          1  COMPLETED      0:0
13526340         run.sh      moran arccanetr+          1  COMPLETED      0:0
13526341         run.sh      moran arccanetr+          1 CANCELLED+      0:0

[]$ sacct -u <username> --format="JobID,Partition,nnodes,NodeList,NCPUS,ReqMem,State,Start,Elapsed" -X
JobID         Partition   NNodes        NodeList      NCPUS     ReqMem      State               Start    Elapsed
------------ ---------- -------- --------------- ---------- ---------- ---------- ------------------- ----------
13526337          moran        1            m233          1      1000M    TIMEOUT 2024-03-22T09:35:25   00:01:28
13526338          moran        1            m233          1      1000M  COMPLETED 2024-03-22T09:37:41   00:00:06
13526340          moran        1            m233          1      1000M  COMPLETED 2024-03-22T09:38:35   00:01:01
13526341          moran        1            m233          1      1000M CANCELLED+ 2024-03-22T09:40:08   00:00:09
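
As a sketch, the -S/--starttime option mentioned above widens the reporting window beyond the current day (the date is a placeholder):

Code Block
# List all of your jobs, one line per job, started since the given date:
[]$ sacct -u <username> -X -S 2024-03-01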
Info

For more information see the main Slurm sacct page and use:

Code Block
[]$ sacct --help
[]$ man sacct

...

Submit Jobs: sbatch: Options

...

Info
  • Both salloc and sbatch have dozens of options, in both short and long forms.

  • Some options mimic functionality; for example, -G/--gpus can work the same as --gres=gpu:1 (see the sketch below).

  • Please consult the command --help and man pages and/or web links to discover further options not listed.
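
A minimal sketch of the equivalent GPU requests mentioned above (the project and partition names are placeholders):

Code Block
# Request one GPU using the -G/--gpus option:
[]$ salloc -A <project-name> -t 1:00:00 --partition=<partition-name> --gpus=1

# Request one GPU using the generic resource syntax:
[]$ salloc -A <project-name> -t 1:00:00 --partition=<partition-name> --gres=gpu:1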

...

Code Block
#!/bin/bash                               
#SBATCH --account=<project-name>
#SBATCH --time=10:00
#SBATCH --reservation=<reservation-name>

#SBATCH --job-name=pytest                 # Name shown in the squeue NAME column.
#SBATCH --nodes=1                         # Request a single node.
#SBATCH --cpus-per-task=1                 # Request a single core for the task.
#SBATCH --mail-type=ALL                   # Email notifications for all job state changes.
#SBATCH --mail-user=<email-address>       # Where to send the notifications.
#SBATCH --output=slurms/pyresults_%A.out  # Output file under slurms/ with %A replaced by the job id.

echo "SLURM_JOB_ID:" $SLURM_JOB_ID        # Can access Slurm related Environment variables.
start=$(date +'%D %T')                    # Can call bash commands.
echo "Start:" $start
module purge
module load gcc/14.2.0 python/3.10.6        # Load the modules you require for your environment.
python python01.py                        # Call your scripts/commands.
sleep 1m
end=$(date +'%D %T')
echo "End:" $end

...

Info

With the above settings (written into a file called run.sh), a submission will look something like the following:

Expand: Example Flow and Output
Code Block
# Submit the job:
[intro_to_modules]$ sbatch run.sh
Submitted batch job 1817260

# Notice the NAME is now 'pytest'
[intro_to_modules]$ squeue -u <username>
             JOBID PARTITION     NAME       USER ST       TIME  NODES NODELIST(REASON)
           1817260        mb   pytest <username>  R       0:58      1 mbcpu-002

# I can view the output while the job is running.
# The output is now in a subfolder under slurms/
# It also uses the name 'pyresults_<job_id>.out'
[intro_to_modules]$ cat slurms/pyresults_1817260.out
SLURM_JOB_ID: 1817260
Start: 08/14/24 14:48:38
Python version: 3.10.6 (main, Sep  3 2024, 15:13:56) [GCC 14.2.0]
Version info: sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)

[intro_to_modules]$ squeue -u <username>
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

[intro_to_modules]$ cat slurms/pyresults_1817260.out
SLURM_JOB_ID: 1817260
Start: 08/14/24 14:48:38
Python version: 3.10.6 (main, Sep  3 2024, 15:13:56) [GCC 14.2.0]
Version info: sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
End: 08/14/24 14:49:38
Info

In my inbox, I also received two emails with the subjects:

  1. medicinebow Slurm Job_id=1817260 Name=pytest Began, Queued time 00:00:00

    • This will have no text within the email body.

  2. medicinebow Slurm Job_id=1817260 Name=pytest Ended, Run time 00:01:01, COMPLETED, ExitCode 0

    • The body of this email contained the seff results.
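
If you do not use email notifications, a similar efficiency summary can be generated manually with the seff command (a sketch, assuming seff is available on the cluster; the job id is taken from the example above):

Code Block
[]$ seff 1817260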

...

Exercise: sbatch: Give It A Go

Using the script examples (adjust where appropriate) try submitting some jobs.

  • Once submitted (within a different session) monitor the jobs using the squeue command.

  • Track the job ids, and try changing the job name to distinguish when viewing the pending/running jobs.

  • Cancel some of the jobs.

  • Maybe try increasing the sleep value to be longer than the requested wall time to trigger a timeout.

  • Once they’ve completed, run sacct to view the finished jobs, and look at their state.

...

...