Effective January 13, 2025, changes were made to the MedicineBow job scheduler. These changes are detailed here. As a result, your jobs may not run the same way they did previously, may end in an error, or may sit in the queue for a longer period of time. Please reference the troubleshooting section below for issues that may occur with jobs after maintenance, typical error messages, and the more common solutions. If this troubleshooting page does not resolve your problem, please contact arcc-help@uwyo.edu for assistance.
Unless you are performing computations that require a GPU, you may not run CPU-only jobs on a GPU node. The exception is investors: investors and their group members may run CPU jobs on the GPU nodes that fall within their investment.
If you do not specify a QoS as part of your job, a QoS will be assigned to the job based on its partition or wall-time. Different partitions and wall-times are associated with different QoS, as detailed in our published Slurm Policy. If no QoS, partition, or wall-time is specified, the job is placed by default in the Normal queue with a 3-day wall-time.
Similar to jobs with an unspecified QoS, if no wall-time is given, a wall-time is assigned to the job based on its other specifications, such as QoS or partition. Specifying a QoS or partition in a job submission results in the default wall-time associated with those flags. If no QoS, partition, or wall-time is specified, the job is placed by default in the Normal queue with a 3-day wall-time.
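If you want to set these values explicitly rather than rely on the defaults, you can add the corresponding directives to your batch script. A minimal sketch, where the account, partition, and QoS names are placeholders rather than actual MedicineBow values:
Example:
#!/bin/bash
# Placeholder account, partition, and QoS names; substitute your own values.
#SBATCH --account=projectname
#SBATCH --partition=partitionname
#SBATCH --qos=normal
# 2-day wall-time, within the 3-day Normal limit described above.
#SBATCH --time=2-00:00:00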
If you are requesting a GPU, you must also specify a partition with GPU nodes. Otherwise, you are not required to specify a partition. Users requesting GPUs should use a --gres=gpu:# or --gpus-per-node flag AND a --partition flag in their job submission.
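For example, an interactive GPU request might look like the following, where projectname and partitionname are placeholders and should be replaced with your project and a partition that actually contains GPU nodes:
Example:
salloc -A projectname -p partitionname --gres=gpu:1 -t 2:00:00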
To encourage users to request only the time they need, all interactive jobs, including those requested through OnDemand, have been limited to 8 hours in length. Please specify a time of 8 hours or less on the OnDemand webform.
This is usually the result of the specified walltime. If you specified a walltime over 3 days (for example, 7 days) using the --time or -t flag, your job is placed in the “long” queue, which may result in a longer wait time. If your job doesn’t require 7 days, please try specifying a shorter walltime (ideally 3 days or less). This should result in your job being placed in a queue with a shorter wait time.
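For instance, in a batch script you might replace
#SBATCH --time=7-00:00:00
with a shorter request such as
#SBATCH --time=2-00:00:00
so that the job is eligible for a queue with a shorter wait (the exact values here are only illustrative).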
Post maintenance, interactive jobs are restricted to an 8-hour walltime. Please submit your salloc command with a walltime of 8 hours or less.
Example:
salloc -A projectname -t 8:00:00
If accompanied by “sbatch/salloc: error: Batch job submission failed: Invalid account or account/partition combination specified”, it’s likely you need to specify an account in your batch script or salloc command, or the account name provided after the -A or --account flag is invalid. The account flag should specify the name of the project in which you’re running your job. Example: salloc -A projectname -t 8:00:00
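In a batch script, the equivalent account specification is a directive near the top of the file, where projectname is again a placeholder for your actual project name:
Example:
#SBATCH --account=projectname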
Users may no longer request all memory on a node using the --mem=0 flag and are encouraged to request only the memory they require to run their job. If you know you need an entire node, replace the --mem=0 specification in your job with --exclusive to get use of an entire node and all its resources.
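For example, in a batch script you might replace
#SBATCH --mem=0
with
#SBATCH --exclusive
or, if you do not truly need the whole node, request a specific amount of memory instead, such as #SBATCH --mem=16G (the 16G value is only illustrative).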
Users must specify a GPU device if requesting a GPU partition. Assuming you plan to use a GPU in your computations, please specify a GPU by including either the --gres=gpu:# or --gpus-per-node=# flag in your job submission.
This may occur for a number of reasons. Please e-mail arcc-help@uwyo.edu with the location of the batch script or the salloc command you’re attempting to run, and the error message you receive.
Users must specify the interactive or debug queue, or a time under 8 hours, when requesting an interactive job.
Users should specify a walltime that is within the limit for their specified queue, e.g.:
Debug (<= 1 hr)
Interactive (<= 8 hrs)
Fast (<= 12 hrs)
Normal (<= 3 days)
Long (<= 7 days)
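For example, a short test job intended for the debug queue should request no more than one hour. The QoS name below is an assumption based on the queue names above; check the published Slurm Policy for the exact QoS names on MedicineBow.
Example:
salloc -A projectname --qos=debug -t 1:00:00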
This may occur for a number of reasons, but is likely due to the combination of nodes and hardware you’ve requested and whether that hardware is available on the requested node/partition. If you need assistance, please e-mail arcc-help@uwyo.edu with the location of the batch script or the salloc command you’re attempting to run, and the error message you receive.
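If you want to check which partitions provide the hardware (such as GPUs) you are requesting before resubmitting, a generic Slurm query like the following can help; this is standard Slurm, not an ARCC-specific tool:
Example:
sinfo -o "%P %G %N"
This lists each partition with its generic resources (GRES, e.g. GPUs) and the nodes it contains.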