HPC System and Job Queries
- 1 Overview: HPC Information and Compute Job Information
- 2 Common SLURM Commands
- 3 ARCCJOBS: Get a report of jobs currently running on the cluster
- 4 ARCCQUOTA: Get a report of your common HPC data storage locations and usage
Overview: HPC Information and Compute Job Information
System querying helps you understand what is happening on the system: which compute jobs are running, what your storage quotas are, your job history, and so on. This page contains commands and examples for finding that information.
Common SLURM Commands
The following describes common SLURM commands and common flags you may want to include when running them. SLURM commands are often run with flags (appended to the command as `--flag`) to stipulate specific information that should be included in the output.
SQUEUE: Get information about running and queued jobs on the cluster with squeue
This command is used to pull up information about the jobs that currently exist in the SLURM queue. Run with no flags, it prints all running and queued jobs on the cluster, listing each job's job ID, partition, job name, username, job state, run time, number of nodes, and a node list with the names of the nodes allocated to each job:
Helpful flags when calling `squeue` to tailor your query
Flag | Use this when | Short Form | Short Form Ex. | Long Form | Useful flag info, Long Form Example & Output |
---|---|---|---|---|---|
me | To get a printout with just your jobs | n/a | n/a | `--me` | `squeue --me` (example output below) |
user | To get a printout of a specific user's jobs | `-u` | `squeue -u joeblow` | `--user=<username>` | `squeue --user=joeblow` prints the same columns as the default output, restricted to joeblow's jobs |
long | To get a printout of jobs including wall time | `-l` | `squeue -l` | `--long` | Adds a TIME_LIMIT column showing each job's wall-time limit |
format | To get a squeue printout with specified format & output | `-o` | `squeue -o "%i %u %t"` | `--format=<spec>` | Output columns are chosen with `%` type codes, e.g. `%i` (job ID), `%u` (user), `%t` (state), `%M` (time used) |

Example output for `squeue --me`:

```
[jsmith@mblog1 ~]$ squeue --me
JOBID   PARTITION NAME    USER   ST TIME       NODES NODELIST(REASON)
1000002 inv-lab2  AIML-CE jsmith R  6-13:02:32 1     mba30-004
1000005 inv-lab2  AIML-CE jsmith R  6-17:31:53 1     mba30-004
```

** you can also run `squeue --help` to get a comprehensive list of flags available to run with the `squeue` command
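As a sketch of combining these flags, the following (hypothetical) invocation restricts output to your own jobs and uses a custom format string; the `%` codes are squeue's standard type specifiers, and running it of course requires a cluster with SLURM installed:

```shell
# Hypothetical example: show only your own jobs with a custom format.
# %i = job ID, %j = job name, %t = state, %M = elapsed time,
# %R = node list (or pending reason). Widths like %.10i pad/truncate.
FMT="%.10i %.20j %.4t %.12M %R"
squeue --me --format="$FMT"
```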
SACCT: Get information about recent or completed jobs on the cluster with sacct
The default `sacct` command: run with no flags, this prints a list of your recent or recently completed jobs
Helpful flags when calling `sacct` to tailor your query
Flag | Use this when | Short Form | Short Form Ex. | Long Form | Useful flag info, Long Form Example & Output |
---|---|---|---|---|---|
job | To get info about specific job#(s) | `-j` | `sacct -j 1000002` | `--jobs=<job#>[,<job#>...]` | Multiple job IDs may be listed, separated by commas |
batch script | To view batch / submission script for a specific job | `-B` | `sacct -B -j 1000002` | `--batch-script` | You must specify a job with the `-j`/`--jobs` flag |
user | To get a printout of a specific user's jobs | `-u` | `sacct -u joeblow` | `--user=<username>` | Lists recent jobs belonging to the named user |
start | To get a printout of job(s) starting after a date/time | `-S` | `sacct -S 2024-01-01` | `--starttime=<date/time>` | Dates and times should be specified with format `YYYY-MM-DD[THH:MM[:SS]]` |
end | To get a printout of job(s) ending before a given date/time | `-E` | `sacct -E 2024-01-31` | `--endtime=<date/time>` | Dates and times should be specified with format `YYYY-MM-DD[THH:MM[:SS]]` |
format | To get sacct printout with specified format & output | `-o` | `sacct -o jobid,state,elapsed` | `--format=<field,...>` | Run `sacct --helpformat` for the full list of available output fields |
submit line | To view the submit command for a specified job | n/a | n/a | `--format=submitline` | This is a way of using the `--format` flag: `sacct -j <job#> --format=submitline` prints the command line used to submit the job |

** you can also run `sacct --help` to get a comprehensive list of flags available to run with the `sacct` command
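As a sketch of combining these flags, the following (hypothetical) invocation lists your jobs from the last 7 days with a chosen set of output fields; GNU `date` is used here to build a timestamp in the `YYYY-MM-DD[THH:MM[:SS]]` format sacct expects:

```shell
# Hypothetical example: your jobs started within the last 7 days.
# Build the start time in sacct's expected format with GNU date.
START=$(date -d "7 days ago" +%Y-%m-%dT%H:%M:%S)
sacct --user="$USER" --starttime="$START" \
      --format=JobID,JobName,Partition,State,Elapsed,ExitCode
```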
SINFO: Get information about cluster nodes and partitions
The default `sinfo` command: run with no flags, this prints a list of all partitions, their states, availability, and associated nodes on the cluster
Helpful flags when calling `sinfo` to tailor your query
Flag | Use this when | Short Form | Short Form Ex. | Long Form | Useful flag info, Long Form Example & Output |
---|---|---|---|---|---|
state | Shows any nodes in the state(s) specified | `-t` | `sinfo -t idle` | `--states=<state>[,<state>...]` | The state may be, e.g., `idle`, `alloc`, `mixed`, or `down`; multiple states may be given, separated by commas |
format | To get sinfo printout with specified format & output | `-o` | `sinfo -o "%P %t %N"` | `--format=<spec>` | Output columns are chosen with `%` type codes, e.g. `%P` (partition), `%t` (state), `%D` (node count), `%N` (node list) |
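As a sketch of combining these flags, the following (hypothetical) invocation lists only idle nodes with a custom format; the `%` codes are sinfo's standard type specifiers, and running it requires a cluster with SLURM installed:

```shell
# Hypothetical example: idle nodes only, custom columns.
# %P = partition, %t = state, %D = node count, %N = node list.
FMT="%.15P %.6t %.6D %N"
sinfo --states=idle --format="$FMT"
```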
SEFF: Analyze the efficiency of a completed job with seff
Below is a short breakdown of using the seff command. Please see this page for a detailed description of how to evaluate your job's performance and efficiency.
The `seff` command reports the CPU and memory efficiency of your job when provided a valid job number as the argument: `seff <job#>`. This information is only accurate if the job has completed successfully. Jobs that are still running, or that end with an out-of-memory or other error, will have inaccurate `seff` output.
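Since seff output is only trustworthy for successfully completed jobs, a small wrapper can check the job's state via sacct first. This is a hypothetical helper (the job ID is a placeholder), not an ARCC-provided tool:

```shell
# Hypothetical helper: run seff only if sacct reports COMPLETED.
# -n suppresses the sacct header; -X restricts output to the
# job allocation itself (no per-step lines).
JOB=1000002   # placeholder job ID
STATE=$(sacct -j "$JOB" -n -X -o state | tr -d ' ')
if [ "$STATE" = "COMPLETED" ]; then
    seff "$JOB"
else
    echo "Job $JOB is in state ${STATE:-UNKNOWN}; seff output may be inaccurate."
fi
```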
ARCCJOBS: Get a report of jobs currently running on the cluster
arccjobs shows a summary of running jobs, their CPU resources, and requested/used CPU time. It doesn't take any arguments or options.
ARCCQUOTA: Get a report of your common HPC data storage locations and usage
arccquota shows information relating to storage quotas. By default, it displays your $HOME and $SCRATCH quotas first, followed by your associated project quotas. This is a change on Teton from Mount Moran, but the tool is much more comprehensive. The command takes arguments to show project quotas only (i.e., no $HOME or $SCRATCH info displayed), to give an extensive listing of each user's quota and usage within project directories, or to summarize quotas (i.e., no user-specific usage on project spaces).