Frequently Asked Questions (FAQs)

This page contains frequently asked questions. If you have any trouble, questions, concerns, or suggestions, or this FAQ page doesn’t provide the answer you need, please contact ARCC through our service portal or by e-mailing arcc-help@uwyo.edu.


Troubleshooting

For general issues, please see our Known Issues page to check whether the problem is something we are already aware of, along with any recommended solutions or workarounds.

General Questions

  • Where can I check the status of ARCC Systems?

  • Where is the link to the latest announcements from arcc-announce?

  • I’m not getting messages from arcc-announce. How can I subscribe?

  • Is there documentation on known issues? Can I add something to the documentation?

    • ARCC staff maintain a document of known issues. The known issues documentation for Beartooth is here, and you may e-mail us at arcc-help@uwyo.edu at any time to suggest updates to the documentation.

Information for citing ARCC and our resources can be found here. Additionally, ARCC maintains a Facilities Statement, available as both a 1-page and a 2-page document for all researchers, here.

My UWyo user account is locked/disabled, can ARCC restore it so I can use ARCC resources?

  • No, ARCC does not manage UWyo accounts, but to start troubleshooting, go here: Password Resets and Account Lock Outs

    • If you are an external collaborator, you will need to contact the UWIT Accounts Office to reset your password

    • Otherwise, call the UW IT Helpdesk at 766-HELP (on campus: 6-4357) to have it unlocked

    • For further information on UWyo accounts see UW Accounts FAQ

Does ARCC provide a document for researchers to reference in grant proposals?

  • Yes! ARCC maintains a facilities statement that’s available in several formats here.

All project creations must be requested by a PI. Who counts as a PI?

  • Per ARCC Policy: “Principal Investigators (PIs) are any University of Wyoming faculty member with an extended-term position with UW. This designation does not extend to Adjunct Faculty.”

  • Detailed information on PI qualifications may be found on our policy page here, and any information on our published policy page overrides information on the wiki, which may at times be outdated.

  • PIs may read more about PI Responsibilities on the policy page (Under General Policies: 1 -> User Responsibilities: B -> PI Responsibilities: iii)

Over Quota: My directory is full. Can I get more space?

  • HPC Storage:

    • We rarely increase /home directory space, and we ask that you please clean up any unnecessary files first.

    • To determine what is taking up space in your home directory, cd to your home directory and run du -h --max-depth=1 (see the example after this list).

    • You may want to move files from your /home directory to your /project directory if appropriate to share with other project members, or if your data isn’t needed long term or is backed up elsewhere, you may move it to your /gscratch directory.

  • Alcova Storage:

    • Storage increases

  • If the problem persists, email arcc-help@uwyo.edu to describe your issue.

  • Any requests for quota increases must be approved by the PI to whom the quota increase would be billed.
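
For example, a minimal sketch of checking what is consuming space (the paths are illustrative; arccquota is the ARCC command described later on this page):

$ cd ~
$ du -h --max-depth=1 | sort -h
$ arccquota

The sort -h step simply orders subdirectories by size so the largest consumers appear last.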

ARCC publishes our policies on our main website, linked here.

Do I need a smart phone to use two-factor for ARCC services?

  • No, there are other options:

    • You can purchase a Yubikey and then self-enroll the key, or have it set up by UWIT.

    • You can enroll any phone number or device in Duo.

      • When logging into Beartooth using the landline method, type “<your password>, phone” and your enrolled number will get a call.


Cluster/HPC Questions

How do I get access to the cluster?

PIs (Principal Investigators on research) may request a cluster project by filling out the Request New Project form and selecting the cluster name in the ‘If known, resources’ section. Please see ARCC HPC Policies and HPC/HPS Account Policies on our policy page for who qualifies as a PI.

May I have sudo access to the cluster?

Users are not permitted to have sudo access. Software can be installed anywhere users have read/write access (usually their associated /home, /gscratch, and /project spaces), and software should be installed to one of these locations (see the sketch below).

Users can also email arcc-help@uwyo.edu for assistance.
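
As an illustrative sketch only (the project name, tool name, and directory layout below are placeholders, not ARCC-prescribed conventions), user-space installs typically look like one of the following:

# Python package into your home directory
$ pip install --user some-package

# Building a tool from source into a project space
$ ./configure --prefix=/project/<projectname>/software/<toolname>
$ make && make install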

Please reference the ARCC Login Page for how to log in, here, and be sure to note whether your account is a UWYO account or an arcc-only account.

Medicinebow SSH Login requires setting up SSH keys on your local client and directions are specific to the operating system running on your local machine.

Information about SSH Keys and how they work can be found here
Windows specific directions for SSH key setup can be found here
Mac specific directions for SSH key setup can be found here
Linux specific directions for SSH key setup can be found here

If you have already set up keys and are still unable to access Medicinebow, please e-mail ARCC-help@uwyo.edu
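
As a general sketch (the exact, OS-specific steps are in the pages linked above; the key type and comment below are just examples), generating a key pair on your local machine and then connecting typically looks like:

$ ssh-keygen -t ed25519 -C "<your username>"
$ ssh <username>@medicinebow.arcc.uwyo.edu

The public key (for example, ~/.ssh/id_ed25519.pub) must be associated with your account as described in the linked instructions before the ssh command will succeed.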

How can I get more information about the cluster status and jobs running on the cluster currently?

Cluster usage and job information is available from a number of different locations:

  • Users can query cluster and job information as indicated here.

  • Users can run the arccjobs command from the CLI to get a printout of running jobs

  • Users can get general cluster status and job information from OnDemand as shown below.

    You can see your job information from the drop down menu under Jobs -> Active Jobs, then expand your job ID for more information.

  • You can see overall cluster usage and status from the drop down menu under Clusters->Medicinebow System Status:

    This will display a graphic of overall usage in the different partitions.

Yes! ARCC provides several commands for querying cluster, job, and quota information from the command line. The list is as follows:

Command

Used For

Example Output


arccquota

Prints a table of your quotas in home, scratch & project(s)

[arcc-t01@mblog1 ~]$ arccquota
+----------------------------------------------------------------------+
| arccquota                       |               Block                |
+----------------------------------------------------------------------+
| Path                            |     Used        Limit        %     |
+----------------------------------------------------------------------+
| /home/arcc-t01                  | 00.00 GB     50.00 GB     00.00    |
| /gscratch/arcc-t01              | 00.00 GB     05.00 TB     00.00    |
| /project/arccanetrain           | 00.00 GB     05.00 TB     00.00    |
| /project/mbtestproj             | 00.00 GB     05.00 TB     00.00    |
| /project/sept24bootcamp         | 02.81 GB     05.00 TB     00.05    |
+----------------------------------------------------------------------+

arccjobs

Prints a table showing active jobs by project(s) and user(s)

[arcc-t01@mblog1 ~]$ arccjobs
===============================================================================
Account                      Running                       Pending
  User              jobs    cpus       cpuh      jobs    cpus        cpuh
===============================================================================
proj1                  3      64     196.36         0       0        0.00
  user1                3      64     196.36         0       0        0.00
proj2                  1       1       2.17         0       0        0.00
  user5                1       1       2.17         0       0        0.00
proj3                 23     144    1650.15         0       0        0.00
  user2               17      34     893.49         0       0        0.00
  user3                5      80     734.52         0       0        0.00
  user4                1      30      22.15         0       0        0.00
proj4                  3     129   10378.16         0       0        0.00
  user6                1      64    6127.18         0       0        0.00
  user7                2      65    4250.97         0       0        0.00
proj5                  1       4       0.48         0       0        0.00
  user8                1       4       0.48         0       0        0.00
proj6                  1      32       0.46         0       0        0.00
  user9                1      32       0.46         0       0        0.00
proj7                 12      74    1964.68         0       0        0.00
  user10               2      64     999.90         0       0        0.00
  user11              10      10     964.78         0       0        0.00
proj8                  1       1       0.77         0       0        0.00
  user12               1       1       0.77         0       0        0.00
proj9                  3     288    3637.65         0       0        0.00
  user13               3     288    3637.65         0       0        0.00
proj10              3094    3094   21641.46      3708    3708   622944.00
  user14            3094    3094   21641.46      3708    3708   622944.00
===============================================================================
TOTALS:             3142    3831   39472.35      3708    3708   622944.00
===============================================================================
Nodes        54/345         (15.65%)
Cores        6423/18084     (35.52%)
Memory (GB)  27521/125570   (21.92%)
CPU Load     4882.21        (27.00%)
===============================================================================

pestat

Prints a node list with allocated jobs

[arcc-t01@mblog1 ~]$ pestat Hostname Partition Node Num_CPU CPUload Memsize Freemem Joblist State Use/Tot (15min) (MB) (MB) JobID(JobArrayID) User ... b521 beartooth idle 0 56 0.00 257000 250421 b522 beartooth idle 0 56 0.00 257000 250604 b523 inv-inbre idle 0 56 0.00 1023528 650382 b525 beartooth-gpu idle 0 56 0.00 257000 251068 b526 beartooth-gpu idle 0 56 0.00 257000 251213 bbm01 inv-physics idle 0 56 0.00 515000 509085 bbm02 inv-physics drain$ 0 56 0.00 515000 510582 bbm03 inv-physics drain$ 0 56 0.00 515000 510517 bbm04 inv-physics idle 0 56 0.00 515000 509061 bbm05 inv-desousa mix 16 56 14.68* 515000 390233 4681301 user2 bbm06 inv-desousa idle 0 56 0.00 515000 504525 bhm01 inv-mccoy idle 0 56 0.00 1024000 1021794 bhm02 inv-mccoy idle 0 56 0.00 1024000 1021938 bhm03 inv-desousa idle 0 56 0.01 1024000 1019927 bhm04 inv-desousa idle 0 56 0.00 1024000 1020213 bhm05 inv-desousa idle 0 56 0.01 1024000 1020318 bhm06 beartooth mix 4 56 0.85* 1024000 954736 4681349 user4 bhm07 inv-desousa mix 48 56 46.86* 1024000 896169 4663754 user2 4681300 user2 4681299 user2 mba30-001 mb-a30 idle 0 96 0.00 765525 644918 mba30-002 inv-klab alloc 96 96 518.95* 765525 617526 4670459 user3 mba30-003 inv-klab alloc 96 96 538.48* 765525 597959 4670460 user3 mba30-004 inv-klab alloc 96 96 524.35* 765525 611625 4670461 user3 mba30-005 mb-a30 idle 0 96 0.00 765525 654891 mba30-006 mb-a30 idle 0 96 0.00 765525 223041 mba30-007 mb-a30 idle 0 96 0.00 765525 678885 mba30-008 mb-a30 idle 0 96 0.00 765525 678928 mbh100-001 mb-h100 mix 32 96 7.41* 1281554 966323 mbh100-002 mb-h100 idle 0 96 0.00 1281554 422899 mbh100-003 mb-h100 mix 16 96 8.89* 1281554 163025* mbh100-004 mb-h100 mix 16 96 8.02* 1281554 584036 mbh100-005 mb-h100 drain* 0 96 0.00 1281554 689431 mbl40s-001 mb-l40s idle 0 96 0.00 765525 125077* mbl40s-002 mb-l40s idle 0 96 0.00 765525 666055 mbl40s-003 mb-l40s idle 0 96 0.00 765525 150534* mbl40s-004 mb-l40s resv* 0 96 0.00 765525 51058* mbl40s-007 inv-inbre mix 65 96 38.42* 765525 28587* mdgx01 teton-gpu drain* 0 40 0.00 512000 513128 ondemand1 beartooth-gpu idle 0 96 0.00 192000 183536 ondemand2 beartooth-gpu idle 0 96 0.00 192000 183648 ondemand3 beartooth-gpu idle 0 56 0.00 1031000 1025146 ondemand4 beartooth-gpu idle 0 56 0.00 1031000 1025180 ondemand5 beartooth-gpu idle 0 56 0.00 1031000 1025470 t285 teton idle 0 32 0.00 119962 99838 t286 teton drain* 0 32 0.00 119962 120562 t287 teton idle 0 32 0.00 119962 101554 t288 teton idle 0 32 0.00 119962 104936 t289 teton idle 0 32 0.00 119962 116464 t290 teton idle 0 32 0.00 119962 116607 t291 teton idle 0 32 0.00 119962 102520 t292 teton idle 0 32 0.00 119962 105766 t293 teton idle 0 32 0.00 119962 115488 t294 teton idle 0 32 0.00 119962 116156 t295 teton idle 0 32 0.00 119962 105345 t296 teton idle 0 32 0.00 119962 103999 t297 inv-microbiome idle 0 32 0.00 128000 119275 t298 inv-microbiome idle 0 32 0.00 128000 119335 t299 inv-microbiome idle 0 32 0.00 128000 119291 t300 inv-microbiome idle 0 32 0.00 128000 120385 t301 inv-microbiome idle 0 32 0.00 128000 120356 t302 inv-microbiome idle 0 32 0.00 128000 120451 t303 inv-microbiome idle 0 32 0.00 128000 120409 t304 inv-microbiome idle 0 32 0.00 128000 120232 t305 inv-microbiome idle 0 32 0.00 128000 120329 t306 inv-microbiome idle 0 32 0.00 128000 120963 t307 inv-microbiome idle 0 32 0.00 128000 120362 t308 inv-microbiome idle 0 32 0.00 128000 120248 t309 inv-microbiome idle 0 32 0.00 128000 120350 t310 inv-microbiome idle 0 32 0.00 128000 120278 t311 inv-microbiome idle 0 32 0.00 128000 120472 t312 
inv-microbiome idle 0 32 0.00 128000 120209 t313 inv-microbiome idle 0 32 0.00 128000 120355 t314 inv-microbiome idle 0 32 0.00 128000 120328 t315 inv-microbiome idle 0 32 0.00 128000 120937 t316 inv-microbiome idle 0 32 0.00 128000 120383 t317 inv-microbiome idle 0 32 0.00 128000 120373 t318 inv-microbiome idle 0 32 0.00 128000 120387 t319 inv-microbiome idle 0 32 0.00 128000 120290 t320 inv-microbiome idle 0 32 0.00 128000 120457 t321 inv-microbiome idle 0 32 0.00 128000 119792 t322 inv-microbiome idle 0 32 0.00 128000 119766 t323 inv-microbiome idle 0 32 0.00 128000 119656 t324 inv-microbiome idle 0 32 0.00 128000 119761 t325 inv-microbiome idle 0 32 0.00 128000 120379 t326 inv-microbiome idle 0 32 0.00 128000 119698 t327 inv-microbiome idle 0 32 0.00 128000 120335 t328 inv-microbiome idle 0 32 0.00 128000 120372 t329 inv-microbiome idle 0 32 0.00 128000 120244 t330 inv-microbiome idle 0 32 0.00 128000 119936 t331 inv-microbiome idle 0 32 0.00 128000 120180 t332 inv-microbiome idle 0 32 0.00 128000 120413 t333 inv-microbiome idle 0 32 0.00 128000 120413 t334 inv-microbiome idle 0 32 0.00 128000 120346 t335 inv-microbiome idle 0 32 0.00 128000 120325 t336 inv-microbiome idle 0 32 0.00 128000 120404 t337 inv-microbiome idle 0 32 0.00 128000 120438 t338 inv-microbiome idle 0 32 0.00 128000 120362 t339 inv-microbiome idle 0 32 0.01 128000 120463 t340 inv-microbiome idle 0 32 0.00 128000 120370 t341 inv-microbiome idle 0 32 0.00 128000 120845 t342 inv-microbiome idle 0 32 0.00 128000 118895 t343 inv-microbiome idle 0 32 0.00 128000 118866 t344 inv-microbiome idle 0 32 0.00 128000 119288 t345 inv-microbiome idle 0 32 0.00 128000 120352 t346 inv-microbiome idle 0 32 0.00 128000 120356 t347 inv-microbiome idle 0 32 0.00 128000 119233 t348 inv-microbiome idle 0 32 0.00 128000 120356 t349 inv-microbiome idle 0 32 0.01 128000 120348 t350 inv-microbiome idle 0 32 0.00 128000 120386 t351 inv-microbiome idle 0 32 0.00 128000 120375 t352 inv-microbiome idle 0 32 0.00 128000 120349 t353 inv-microbiome idle 0 32 0.00 128000 120289 t354 inv-microbiome idle 0 32 0.00 128000 120339 t355 inv-microbiome idle 0 32 0.00 128000 120277 t356 inv-microbiome idle 0 32 0.00 128000 120297 t357 inv-microbiome idle 0 32 0.00 128000 120357 t358 inv-microbiome idle 0 32 0.00 128000 120413 t359 inv-microbiome idle 0 32 0.00 128000 120321 t360 inv-microbiome idle 0 32 0.00 128000 121007 t361 inv-microbiome idle 0 32 0.00 128000 120243 t362 inv-microbiome idle 0 32 0.00 128000 120912 t363 inv-microbiome idle 0 32 0.00 128000 120386 t364 inv-microbiome idle 0 32 0.00 128000 120369 t365 inv-microbiome idle 0 32 0.00 128000 120376 t366 inv-microbiome idle 0 32 0.00 128000 120399 t367 inv-microbiome idle 0 32 0.00 128000 120207 t368 inv-microbiome idle 0 32 0.00 128000 120415 t369 inv-microbiome idle 0 32 0.00 128000 120714 t370 inv-microbiome idle 0 32 0.00 128000 120922 t371 inv-microbiome idle 0 32 0.00 128000 117851 t372 inv-microbiome idle 0 32 0.00 128000 119075 t373 inv-microbiome idle 0 32 0.00 128000 118381 t374 inv-microbiome idle 0 32 0.00 128000 118568 t375 inv-microbiome idle 0 32 0.00 128000 120849 t376 inv-microbiome idle 0 32 0.00 128000 120988 t377 inv-microbiome idle 0 32 0.00 128000 118603 t378 inv-microbiome idle 0 32 0.01 128000 118623 t379 inv-microbiome idle 0 32 0.00 128000 118658 t380 inv-microbiome idle 0 32 0.00 128000 120926 t381 inv-microbiome idle 0 32 0.00 128000 118632 t382 inv-microbiome idle 0 32 0.00 128000 118528 t383 inv-microbiome idle 0 32 0.00 128000 120809 t384 inv-microbiome idle 0 32 
0.00 128000 119712 t385 non-investor idle 0 32 0.01 128000 120572 t386 non-investor drain* 0 32 0.00 128000 121129 t387 non-investor idle 0 32 0.00 128000 121112 t388 non-investor idle 0 32 0.00 128000 120984 t389 non-investor idle 0 32 0.00 128000 120947 t390 non-investor idle 0 32 0.00 128000 121132 t391 non-investor idle 0 32 0.00 128000 120944 t392 non-investor idle 0 32 0.00 128000 121065 t393 non-investor drain* 0 32 0.00 128000 0 t394 non-investor idle 0 32 0.00 128000 120958 t395 non-investor idle 0 32 0.00 128000 121035 t396 non-investor idle 0 32 0.00 128000 120935 t397 non-investor idle 0 32 0.00 128000 120679 t398 inv-physics idle 0 32 0.00 128000 120983 t399 inv-physics idle 0 32 0.00 128000 118662 t400 inv-physics idle 0 32 0.00 128000 119077 t401 inv-physics idle 0 32 0.00 128000 120981 t402 non-investor idle 0 32 0.00 128000 120932 t403 non-investor idle 0 32 0.00 128000 120982 t404 non-investor idle 0 32 0.00 128000 120856 t405 non-investor drain* 0 32 0.00 128000 121104 t406 non-investor idle 0 32 0.00 128000 120831 t407 non-investor idle 0 32 0.00 128000 120944 t408 non-investor idle 0 32 0.00 128000 120931 t409 non-investor idle 0 32 0.00 128000 120942 t410 non-investor idle 0 32 0.00 128000 120935 t411 non-investor idle 0 32 0.00 128000 120975 t412 non-investor idle 0 32 0.00 128000 120858 t413 non-investor idle 0 32 0.00 128000 120919 t414 non-investor idle 0 32 0.00 128000 120986 t415 non-investor idle 0 32 0.00 128000 120980 t416 non-investor idle 0 32 0.00 128000 120891 t417 non-investor idle 0 32 0.00 128000 120848 t418 non-investor idle 0 32 0.00 128000 120915 t419 non-investor idle 0 32 0.00 128000 120864 t420 non-investor idle 0 32 0.00 128000 120943 t421 non-investor idle 0 32 0.00 128000 120909 t422 non-investor idle 0 32 0.00 128000 120888 t423 non-investor idle 0 32 0.00 128000 121013 t424 non-investor idle 0 32 0.00 128000 120795 t425 non-investor idle 0 32 0.00 128000 120750 t426 non-investor idle 0 32 0.00 128000 120941 t427 non-investor idle 0 32 0.00 128000 120808 t428 non-investor idle 0 32 0.00 128000 120845 t429 non-investor idle 0 32 0.00 128000 120817 t430 non-investor idle 0 32 0.00 128000 121129 t431 non-investor idle 0 32 0.00 128000 120600 t432 non-investor idle 0 32 0.01 128000 121029 t433 non-investor idle 0 32 0.00 128000 120992 t434 non-investor idle 0 32 0.00 128000 120994 t435 non-investor idle 0 32 0.00 128000 120965 t436 non-investor idle 0 32 0.01 128000 120877 t437 non-investor idle 0 32 0.00 128000 120903 t438 non-investor idle 0 32 0.00 128000 120813 t439 non-investor idle 0 32 0.00 128000 120911 t440 non-investor drain* 0 32 0.00 128000 121136 t441 non-investor idle 0 32 0.00 128000 121004 t442 non-investor idle 0 32 0.00 128000 121010 t443 non-investor idle 0 32 0.00 128000 120865 t444 non-investor idle 0 32 0.00 128000 120811 t445 non-investor idle 0 32 0.00 128000 121025 t446 non-investor idle 0 32 0.00 128000 120806 t447 non-investor idle 0 32 0.00 128000 120889 t448 non-investor idle 0 32 0.00 128000 120993 t449 non-investor idle 0 32 0.00 128000 120885 t450 non-investor idle 0 32 0.00 128000 120859 t451 non-investor idle 0 32 0.00 128000 120711 t452 non-investor idle 0 32 0.00 128000 120988 t453 non-investor idle 0 32 0.00 128000 120845 t454 non-investor idle 0 32 0.00 128000 120950 t455 non-investor idle 0 32 0.00 128000 120991 t456 non-investor idle 0 32 0.00 128000 120936 t457 non-investor idle 0 32 0.00 128000 120918 t458 non-investor drain* 0 32 0.00 128000 121077 t459 non-investor idle 0 32 0.00 128000 121044 t460 
inv-arcc idle 0 32 0.00 128000 118800 t461 inv-atmo2grid idle 0 32 0.00 128000 114370 t462 inv-atmo2grid idle 0 32 0.00 128000 118755 t463 inv-atmo2grid idle 0 32 0.00 128000 118870 t464 inv-atmo2grid drain* 0 32 0.00 128000 118797 t465 inv-atmo2grid idle 0 40 0.00 192000 177236 t466 inv-atmo2grid idle 0 40 0.00 192000 183226 t467 inv-atmo2grid idle 0 40 0.00 192000 183155 t468 inv-atmo2grid idle 0 40 0.00 192000 183047 t469 inv-atmo2grid idle 0 40 0.00 192000 183689 t470 inv-atmo2grid idle 0 40 0.00 192000 183619 t471 inv-atmo2grid idle 0 40 0.00 192000 183627 t472 inv-atmo2grid idle 0 40 0.00 192000 183833 t473 inv-atmo2grid idle 0 40 0.00 192000 183671 t474 inv-atmo2grid idle 0 40 0.00 192000 183878 t475 inv-atmo2grid idle 0 40 0.00 192000 188510 t476 inv-atmo2grid idle 0 40 0.00 192000 188539 t477 inv-atmo2grid idle 0 40 0.00 192000 181480 t478 inv-atmo2grid idle 0 40 0.00 192000 186914 t479 inv-atmo2grid idle 0 40 0.00 192000 187067 t480 inv-atmo2grid idle 0 40 0.00 192000 185831 t481 inv-camml idle 0 40 0.00 768000 765881 t482 inv-camml idle 0 40 0.00 768000 765887 t483 teton mix 2 40 2.00 768000 763713 t484 inv-camml idle 0 40 0.00 768000 765562 t485 inv-camml idle 0 40 0.00 768000 765932 t486 inv-camml idle 0 40 0.00 768000 767057 t487 inv-camml idle 0 40 0.00 768000 767094 t488 inv-camml idle 0 40 0.00 768000 767050 t489 inv-desousa idle 0 40 0.00 192000 185066 t490 inv-desousa idle 0 40 0.00 192000 186395 t491 inv-desousa idle 0 40 0.00 192000 183664 t492 inv-desousa idle 0 40 0.00 192000 186570 t493 inv-desousa idle 0 40 0.00 192000 186571 t494 inv-desousa idle 0 40 0.00 192000 186461 t495 inv-atmo2grid idle 0 40 0.00 192000 183545 t496 inv-atmo2grid idle 0 40 0.00 192000 186239 t497 inv-atmo2grid idle 0 40 0.00 192000 187884 t498 non-investor idle 0 40 0.00 192000 187886 t499 non-investor idle 0 40 0.00 192000 187870 t500 non-investor idle 0 40 0.01 192000 188012 t501 inv-inbre idle 0 40 0.00 184907 129080 t502 inv-inbre idle 0 40 0.00 184907 151099 t503 inv-inbre idle 0 40 0.00 184907 155913 t504 inv-inbre idle 0 40 0.00 184907 151709 t505 inv-inbre idle 0 40 0.00 184907 160277 t506 inv-inbre idle 0 40 0.00 184907 153148 t507 inv-inbre idle 0 40 0.00 184907 153179 t508 inv-inbre idle 0 40 0.00 184907 156567 t509 non-investor idle 0 40 0.00 192000 188000 t510 non-investor idle 0 40 0.00 192000 187761 t511 non-investor idle 0 40 0.00 192000 186995 t512 non-investor idle 0 40 0.00 192000 188036 t513 teton mix 8 40 8.00 192000 174843 4648822 user1 4648821 user1 4572176 user1 4634181 user1 t514 teton mix 4 40 4.00 192000 183065 4648824 user1 4648823 user1 t515 teton mix 20 40 20.00 192000 167322 4648814 user2 4648826 user1 4648825 user1 t516 teton mix 12 40 12.00 192000 173994 4648832 user1 4648831 user1 4648830 user1 4648829 user1 4648828 user1 4648827 user1 t517 teton mix 4 40 4.00 192000 185504 4 t518 non-investor idle 0 40 0.00 192000 187010 t519 non-investor idle 0 40 0.00 192000 187015 t520 non-investor idle 0 40 0.00 192000 186851 tbm03 teton-gpu idle 0 32 0.00 512000 506585 tbm04 teton-gpu idle 0 32 0.00 512000 506617 tbm05 teton-gpu idle 0 32 0.00 512000 506619 tbm06 teton-gpu idle 0 32 0.00 512000 506624 tbm07 teton-gpu idle 0 32 0.00 512000 506581 tbm08 teton-gpu idle 0 32 0.00 512000 506613 tbm09 teton-gpu idle 0 32 0.00 512000 506589 tbm10 teton-gpu idle 0 32 0.00 512000 505794 tdgx01 teton-gpu idle 0 40 0.00 512000 508928 thm03 inv-inbre idle 0 32 0.00 1023053 982164 thm04 inv-inbre idle 0 32 0.00 1023053 967985 thm05 inv-inbre idle 0 32 0.00 1023053 984820 thm06 
non-investor idle 0 32 0.00 1024000 1021356 thm07 non-investor idle 0 32 0.00 1024000 1021557 thm08 non-investor idle 0 32 0.00 1024000 1021818 thm09 non-investor idle 0 32 0.00 1024000 1022073 thm10 non-investor idle 0 32 0.00 1024000 1021739 thm11 non-investor idle 0 32 0.00 1024000 1023491 thm12 non-investor idle 0 32 0.00 1024000 1022912 tmass01 inv-inbre idle 0 48 0.00 4116279 4004702 tmass02 inv-inbre idle 0 48 0.00 4116279 2350585 ttest01 debug idle 0 28 0.00 128000 121400 ttest02 debug idle 0 28 0.00 128000 121439 vl40s-002 inv-arcc idle 0 12 0.01 75469 80153 wi001 wildiris idle 0 48 0.00 506997 510076 wi002 wildiris idle 0 48 0.00 506997 509984 wi003 wildiris idle 0 48 0.00 506997 509986 wi004 wildiris idle 0 48 0.00 506997 510133 wi005 wildiris idle 0 56 0.00 1020129 1021497

showpartitions

Prints a list of partitions on the cluster with node information

[arcc-t01@mblog1 ~]$ showpartitions
Partition statistics for cluster medicinebow at Tue Feb 11 08:47:23 AM MST 2025
 Partition         #Nodes      #CPU_cores  Cores_pending  Job_Nodes  MaxJobTime  Cores  Mem/Node
 Name       State  Total Idle  Total Idle  Resorc  Other  Min  Max   Day-hr:mn   /node  (GB)
 mb                up    25    0   2400     1      0   3728  1  infin  7-00:00    96  1023
 mb-a30            up     8    5    768   480      0      0  1  infin  7-00:00    96   765
 mb-l40s           up     5    3    480   415      0      0  1  infin  7-00:00    96   765
 mb-h100           up     5    1    480   320      0      0  1  infin  7-00:00    96  1281
 mb-a6000          up     1    1     64    64      0      0  1  infin  7-00:00    64  1023
 debug             up     2    2     56    56      0      0  1  infin  7-00:00    28   128
 wildiris          up     5    5    248   248      0      0  1  infin  7-00:00    48   506+
 teton             up   248  235   8416  8142      0      0  1  infin  7-00:00    32   119+
 teton-knl         up    12    0   3456     0      0      0  1  infin  7-00:00   288   384
 teton-gpu         up    10    9    336   296      0      0  1  infin  7-00:00    32   512
 beartooth         up    16   11    896   716      0      0  1  infin  7-00:00    56   257+
 beartooth-gpu     up     7    7    472   472      0      0  1  infin  7-00:00    56   192+
 inv-arcc          up     3    3    140   140      0      0  1  infin  infinite   12    75
 inv-inbre         up    28   25   1144   951      0      0  1  infin  7-00:00    32   119+
 inv-ssheshap      up     1    1     64    64      0      0  1  infin  7-00:00    64  1023
 inv-wysbc         up     2    1    192    96      0      0  1  infin  7-00:00    96   765
 inv-soc           up     1    1     96    96      0      0  1  infin  7-00:00    96   765
 inv-wildiris      up     5    5    248   248      0      0  1  infin  7-00:00    48   506+
 inv-klab          up    10    1    960    96      0      0  1  infin  7-00:00    96   765+
 inv-dale          up     1    0     96     0      0      0  1  infin  7-00:00    96  1023
 inv-wsbc          up     2    1    192    96      0      0  1  infin  7-00:00    96   765
 inv-physics       up    10    8    464   352      0      0  1  infin  7-00:00    32   128+
 inv-coe           up     2    2    112   112      0      0  1  infin  7-00:00    56   257
 inv-atmo2grid     up    23   22    888   856      0      0  1  infin  7-00:00    32   128+
 inv-camml         up     8    7    320   318      0      0  1  infin  7-00:00    40   768
 inv-compsci       up    12    0   3456     0      0   3728  1  infin  7-00:00   288   384
 inv-microbiome    up    88   88   2816  2816      0      0  1  infin  7-00:00    32   128
 inv-mccoy         up     3    2    168   164      0      0  1  infin  7-00:00    56  1024
 inv-desousa       up    12   10    576   512      0      0  1  infin  7-00:00    40   192+
 non-investor      up   108   83   4536  2889      0      0  1  infin  7-00:00    32   128+
 non-investor-gpu  up    23   18   1520  1360      0      0  1  infin  7-00:00    32   192+
Note: The cluster default partition name is indicated by :*
[arcc-t01@mblog1 ~]$

How do I find out which software packages are installed/available?

  • While logged into the cluster, executing the command "module spider" will list all modules installed on the cluster you’re logged into.

$ module spider

To learn more about a package execute:

   $ module spider Foo

where "Foo" is the name of a module.

To find detailed information about a particular package you must specify the version if there is more than one version:

   $ module spider Foo/11.1

How do I run my code on the cluster?

  • You must submit your code to the job scheduler. Running code on login nodes is not permitted.

    • For more information about the job scheduler, please see our page on Slurm. A minimal example batch script is shown below.
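
A minimal sketch of a batch script (the project name, module, and resource values are placeholders; adjust them to your allocation and workload):

#!/bin/bash
#SBATCH --account=<projectname>
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=4G

module load <software-module>
srun ./my_program

Submit it from a login node with sbatch myscript.sh; the scheduler will then run it on a compute node.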

How do I access the cluster when I am not on the UWyo network?

  • ARCC HPC cluster login nodes are on the Science DMZ network outside of UWyo firewalls, meaning you can log into Beartooth from anywhere, as long as you have an internet connection and a terminal or SSH client.

Yes. Our list of Medicinebow software can be found here.

You can check by running the arccquota command to get a list of quotas for your /home, /gscratch, and associated /project directories, or by navigating to the specific directory you’d like to check and running du -h --max-depth=1.

Slurm is the system that manages and schedules jobs on ARCC HPC systems. To learn more about Slurm, click here.

There are a number of ways to check your jobs and get information about them. Depending on how your job was submitted, you may be provided a log or error files after the job runs.

With OnDemand: If the job is currently running, you can check your jobs on OnDemand by clicking on Jobs in the yellow drop down menu at the top.
Using the command line (over SSH): Run an sacct command specifying a start and end date to get jobs that ran over a specific timeframe. See here for more information on running sacct commands.
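
For example (the dates and format fields below are illustrative), to list your jobs from a given window:

$ sacct --starttime 2025-02-01 --endtime 2025-02-11 --format=JobID,JobName,Partition,State,Elapsed,ExitCode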

Wildiris is now part of the Medicinebow cluster. Users logging into Medicinebow with an arcc-only (non-UWYO) account should follow instructions here.


MedicineBow SLURM Job Scheduling (After Changes Jan 13, 2025)

Users cannot run CPU-only jobs on a GPU node unless the specified nodes fall within their own investment hardware.
In all other cases, users are restricted from running CPU-only jobs on a GPU node, and jobs on GPU nodes should request a GPU with a --gres=gpu flag in the submission script or salloc command.

If you do not specify a QoS as part of your job, a QoS will be assigned to that job based on partition or wall-time. Different partitions and wall-times are associated with different QoS, as detailed in our published Slurm Policy. Should no QoS, partition, or wall-time be specified, the job will by default be placed in the Normal queue with a 3 day wall-time.

Similar to jobs with an unspecified QoS, wall-time is assigned to a job based on other job specifications, such as QoS or partition: specifying a QoS or partition in a job submission results in the default wall-time associated with those flags. If no QoS, partition, or wall-time is specified, the job is by default placed in the Normal queue with a 3 day wall-time.

If you are requesting a GPU, you must also specify a partition with GPU nodes. Otherwise, you are not required to specify a partition. Users requesting GPUs should likely use a --gres=gpu:# or --gpus-per-node flag AND a --partition flag in their job submission.
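
For example (the project name and GPU count are placeholders, and mb-h100 is just one of the GPU partitions listed by showpartitions):

$ salloc --account=<projectname> --partition=mb-h100 --gres=gpu:1 --time=2:00:00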

To encourage users to use only the time they need, all interactive jobs, including those requested through OnDemand, have been limited to 8 hours in length. Please specify a time of under 8 hours on the OnDemand webform.

This is usually the result of the specified walltime. If you have specified a walltime over 3 days (for example, 7 days) using the --time or -t flag, your job will be placed in the “long” queue, which may result in a longer wait time. If your job doesn’t require 7 days, please try specifying a shorter walltime (ideally under 3 days). This should result in your job being placed in a queue with a shorter wait time.
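
For example, to request 2 days instead of 7 (values are illustrative), either in the batch script:

#SBATCH --time=2-00:00:00

or on the command line:

$ sbatch --time=2-00:00:00 myscript.sh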

sbatch/salloc: error: Interactive jobs cannot be longer than 8 hours

Post maintenance, interactive jobs are restricted to an 8 hour walltime. Please submit your salloc command with a walltime of 8 hours or less.
Example:

salloc -A projectname -t 8:00:00

sbatch/salloc: error: You didn't specify a project account (-A,--account). Please open a ticket at arcc-help@uwyo.edu for help

If accompanied by “sbatch/salloc: error: Batch job submission failed: Invalid account or account/partition combination specified” it’s likely you need to specify an account in your batch script or salloc command, or the account name provided after the -A or --account flag is invalid. The account flag should specify the name of the project in which you’re running your job. Example:
salloc -A projectname -t 8:00:00

sbatch/salloc: error: Use of --mem=0 is not permitted. Consider using --exclusive instead

Users may no longer request all memory on a node using the --mem=0 flag and are encouraged to request only the memory they require to run their job. If you know you need an entire node, replace the --mem=0 flag in your job with --exclusive to get the use of an entire node and all its resources.
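
For example (other directives omitted), the change in a batch script is simply:

# Before (no longer permitted)
#SBATCH --mem=0

# After
#SBATCH --exclusive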

sbatch/salloc: error: QOSMinGRES

Users must specify a GPU device if requesting a GPU partition. Assuming you plan to use a GPU in your computations, please specify a GPU by including either the --gres=gpu:# or --gpus-per-node=# flag in your job submission.

sbatch/salloc: error: Job submit/allocate failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

This may occur for a number of reasons. Please e-mail arcc-help@uwyo.edu with the location of the batch script you’re attempting to run, or salloc command you’re attempting to run, and the error message you receive.

salloc: error: Interactive jobs must be run under the 'interactive' QOS or 'debug' QOS, not 'fast'

Users must specify the interactive or debug queue, or a time under 8 hrs when requesting an interactive job.

sbatch/salloc: error: Job submit/allocate failed: Invalid qos specification

Users should specify a walltime that is within the limit for their specified queue (see the example below), i.e.:

  • Debug (<= 1 hr)

  • Interactive (<= 8 hrs)

  • Fast (<= 12 hrs)

  • Normal (<= 3 days)

  • Long (<= 7 days)
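
For example (the project name is a placeholder), an interactive session that fits the interactive QoS limit:

$ salloc --account=<projectname> --qos=interactive --time=4:00:00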

sbatch: error: Batch job submission failed: Requested node configuration is not available

This may occur for a number of reasons, but is likely due to the combination of nodes and hardware you’ve requested, and whether that hardware is available on the node/partition. If you need assistance please e-mail arcc-help@uwyo.edu with the location of the batch script you’re attempting to run, or salloc command you’re attempting to run, and the error message you receive.



HPC Migration Questions: Beartooth to MedicineBow

I’m having trouble logging into Medicinebow OnDemand

Please reference the ARCC Login Page for how to log in, here.

I’m having trouble SSH’ing to Medicinebow

New users trying to ssh for the first time without configuring SSH authentication will receive an error message as follows:
ssh <username>@medicinebow.arcc.uwyo.edu: Permission denied (publickey)

ARCC has changed the way users authenticate over SSH to Medicinebow and authentication must be configured with SSH keys. See this page on configuring SSH keys to SSH into Medicinebow.

How do I move my data from Beartooth to Medicinebow?

The easiest method for moving data from Beartooth to Medicinebow is to use Globus. Instructions for logging into and configuring Globus and moving data between Beartooth and Medicinebow may be found here.
Alternative methods for moving data are detailed here.


Beartooth

I am no longer able to access Beartooth when I could previously

Beartooth is scheduled for end of life/decommission effective January 2, 2025. Users should no longer be able to access Beartooth after this date.

Users with old projects on Beartooth should migrate to Medicinebow. Migration Request forms may be filled out for projects here, on our portal.


Data Storage Questions

I need storage but I’m not sure whether I need Alcova, Pathfinder or something else. How do I know which one I need?

ARCC offers two main storage services outside of HPC data storage: Alcova and Pathfinder. Researchers should be aware that neither storage solution is currently approved to store HIPAA data.

  • Alcova is likely appropriate for most users looking for data storage that functions similarly to a mapped drive (like /warehouse storage from IT) but specifically for research data.

  • Pathfinder is more appropriate as a cloud based option for secondary backup and archival of data, storing and serving static website content, large data set analytics, and to facilitate the open sharing of data through a web URL.

  • If you’re still not sure, e-mail us at arcc-help@uwyo.edu. We will need to talk to you about your particular use case to identify the most appropriate storage solution.

There are a number of ways to move data, depending on where it currently lives (moving from) and where you want to move it to. Your options are detailed here: Data Moving and Access. Newer ARCC users who are less familiar with HPC and the command line interface may elect to move data themselves using graphical user interface options; ARCC EUS recommends using Globus.



Alcova

How can I get data storage on Alcova?

PIs (Principal Investigators on research) may request an Alcova project by filling out the Request New Project form and selecting ‘Data Storage’ in the requested resources section and listing Alcova in the ‘If known, resources’ section. Please see ARCC HPC Policies and HPC/HPS Account Policies on our policy page for who qualifies as a PI.

How do I access my Alcova allocation when I am not on the UWyo network?

Users will need to use the UWyo VPN (wyosecure) to get onto the UWyo network.

I would like to add user(s) to my Alcova Project

Only the project PI may make changes to Alcova Project permissions. If you are not the PI on the project, ARCC will need to obtain the PI’s approval to make changes to project permissions and add members. If you are the project PI, you may request a project change through our portal.

I can’t access my Alcova data at alcova.arcc.uwyo.edu/<projectname>

Alcova was migrated to the ARCC Data Portal effective June 2024. Please see this page for information on accessing your new project space.

If this doesn’t help, please contact ARCC at arcc-help@uwyo.edu so we may troubleshoot your issue.

 


Migrations: Alcova to ARCC Data Portal

First, check the location at which you’re attempting to access your data. If you’re accessing data at alcova.arcc.uwyo.edu\<projectname>, this was the prior location for Alcova data. Alcova projects were migrated effective June 2024 to data.arcc.uwyo.edu\cluster\alcova\<projectname>, and users should access data at the new location.

If you are attempting to access your data at the new location and are unable to reach it, this may depend on your network configuration. If you’re off campus, you must connect to the UWYO network using the Wyosecure VPN. Information for Wyosecure is available here.

If you’ve attempted to access the new Alcova location and are connected to Wyosecure, but are still unable to do so, please contact arcc-help@uwyo.edu for assistance.


Pathfinder

What is Pathfinder and why would I use it over Alcova?

Pathfinder is storage in the form of an S3 bucket. Pathfinder storage is more appropriate as a cloud-based option in cases where a user wants storage for secondary backup and archival of data, storing and serving static website content, large data set analytics, and facilitating the open sharing of data through a web URL.

What is an S3 Bucket?

Imagine a large box in which you can store things. This box lets you put things in, take things out, and share things with your friends. An S3 bucket is similar to this “big box” concept, but for storing files on the internet. S3 stands for Simple Storage Service, a protocol originally developed by Amazon. ARCC’s S3 storage is provided through an object storage service called Ceph, from Red Hat. You can access your files at any time and decide who can see or use the files in the S3 bucket. Bucket permissions are typically set to public or private; if the bucket is private, it’s only accessible to those with a set of keys.

Pathfinder storage is accessed as S3 storage, and you should use an S3 client to connect to it, or an application that supports the S3 protocol. A list of S3 clients can be found here.
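
As an illustrative sketch only (the bucket name and endpoint URL are placeholders; use the endpoint and access/secret keys ARCC provides for your Pathfinder project), listing a bucket with the AWS CLI against a custom S3 endpoint looks like:

$ aws configure                      # enter the access key and secret key for your project
$ aws s3 ls s3://<bucketname> --endpoint-url https://<pathfinder-endpoint>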

Pathfinder data can be shared with anyone who has the URL. Information on sharing Pathfinder data using a URL can be found here. If you do not wish to share your data openly through a URL, users may be added by the project’s PI to access Pathfinder project data.

Only the project PI may change project permissions. If you are not the PI on the project, ARCC will need to obtain the PI’s approval to make changes to project permissions and add members. If you are the project PI, you may request a project change through our portal.
