Loren uses the Slurm job scheduler to manage and run jobs on the cluster. The scheduler ensures that cluster resources are allocated efficiently among user jobs. Jobs are submitted to a single partition, and Slurm runs them on a set of selected compute nodes.
See Slurm
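As a minimal sketch of submitting a job to Slurm (the job name, resource requests, and command below are illustrative placeholders, not Loren-specific values):

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- all names and limits are placeholders.
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --nodes=1                 # number of compute nodes
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)

srun hostname                     # run the command on the allocated node
```

Saved as `example.sh`, this would be submitted with `sbatch example.sh` and monitored with `squeue -u $USER`.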
arccjobs shows a summary of jobs, CPU resources, and requested/used CPU time. It takes no arguments or options.
$ arccjobs
arcchist shows resource-utilization information for completed or running jobs. By default, it reports on the calling user's jobs from the past 14 days. An optional Slurm job ID or username can be supplied to request specific information.
$ arcchist
$ arcchist -j JOBID
$ arcchist -u USER
arccq shows a summary of the invoking user's jobs or of all jobs. The default is to show only the calling user's jobs.
$ arccq
$ arccq -a
arccquota shows storage quota information. By default, it displays the $HOME and $SCRATCH quotas first, followed by the user's associated project quotas. This is a change on Teton from Mount Moran, but the tool is much more comprehensive. Options allow project-only output (i.e., no $HOME or $SCRATCH information), an extensive listing of each user's quota and usage within project directories, and summarized project quotas (i.e., no per-user usage on project spaces).
Default:
$ arccquota

Project-Only:
$ arccquota -P

Project-Only Summary:
$ arccquota -P -s

Extensive:
$ arccquota -e

Specific project(s):
$ arccquota -p PROJECT_NAME

Specific project(s) and extensive:
$ arccquota -e -p PROJECT_NAME

Specific user (only one user):
$ arccquota -u USER

Specific user and extensive:
$ arccquota -e -u USER