...
If jobs belonging to users who are not members of your project are running on your investment, Slurm will preempt those jobs (i.e. stop them and return them to the queue) and start your job immediately.
If, however, your investment is full with jobs from users who are members of your project, Slurm will try to allocate your job across the other partitions, if resources are available.
The list of other partitions on Beartooth, tried in order, is: moran, teton, teton-cascade, teton-gpu, teton-hugemem.
The list of other partitions on MedicineBow, tried in order, is: mb, mb-a30, mb-l40s, mb-h100.
If no resources are available to fit your job (i.e. cluster usage is very high), your job will sit in the pending state (i.e. waiting in the queue). Slurm monitors the queue at regular intervals and runs the job when appropriate resources become available.
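To see where your job ended up and why it may be pending, you can query Slurm directly. The commands below are a minimal sketch using standard Slurm tools (`squeue` and `scontrol`); the job ID `1234567` is a placeholder for your own job's ID, and the exact field values will depend on your cluster.

```shell
# List your own jobs with their partition, state, and pending reason
squeue --me -O jobid,partition,statecompact,reasonlist

# Show full details for a single job (1234567 is a placeholder job ID),
# filtering to the state, reason, and partition fields
scontrol show job 1234567 | grep -E "JobState|Reason|Partition"
```

A `Reason` such as `(Priority)` or `(Resources)` indicates your job is queued normally and will start once resources free up; no action is needed on your part.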
...