MedicineBow Filesystem
Overview:
MedicineBow uses the Data (VAST) filesystem, configured as a 3 PB NVMe system with advanced block-based data deduplication, giving an expected overall capacity of roughly 6 PB. The cluster provides several storage locations, each allocated for a specific purpose and described below.
home - /home/$USER/
Space for configuration files and software installations. This file space is intended to be small and always resides on SSDs. The /home file space is snapshotted to recover from accidental deletions.
alias:
$HOME
project - /project/project_name/$USER/
Space to collaborate among project members. Data here is persistent and is exempt from purge policy. The /project file space is snapshotted to recover from accidental deletions.
gscratch - /gscratch/$USER/
Scratch space for individual users' computational work. Data here is subject to the purge policy defined below; warning emails are sent before deletions occur. No snapshots.
alias:
$SCRATCH
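As a quick orientation, here is a minimal sketch (in Python) that resolves these locations inside a login or job session. It assumes the $HOME and $SCRATCH variables are set as described above; "my_project_name" is a hypothetical placeholder for your own project allocation.

```python
import os
from pathlib import Path

# $HOME and $SCRATCH are the aliases described above; "my_project_name" is a
# hypothetical placeholder for the name of your own project allocation.
home = Path(os.environ["HOME"])                                        # /home/$USER
scratch = Path(os.environ.get("SCRATCH", f"/gscratch/{os.environ['USER']}"))
project = Path("/project") / "my_project_name" / os.environ["USER"]

for label, path in [("home", home), ("gscratch", scratch), ("project", project)]:
    print(f"{label:10s} {path}  exists={path.exists()}")
```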
Global Filesystems
Filesystem Name | Quota (GB) | Snapshots | Backups | Purge Policy | Additional Info |
---|---|---|---|---|---|
home | | Yes | No | No | Always on SSD |
project | | Yes | No | No | Aging data will move to HDD |
gscratch | | No | No | Yes | Aging data will move to HDD |
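To get a rough sense of how full these filesystems are from a login node, a small sketch like the following can be used. Note that shutil.disk_usage reports totals for the whole mounted filesystem, not your personal quota, and the project name is a placeholder.

```python
import os
import shutil

user = os.environ["USER"]
# Paths follow the layout described above; replace "my_project_name" with your own project.
paths = [f"/home/{user}", f"/project/my_project_name/{user}", f"/gscratch/{user}"]

for p in paths:
    try:
        usage = shutil.disk_usage(p)  # totals for the whole mounted filesystem, not your quota
        print(f"{p}: {usage.used / 1e12:.2f} TB used of {usage.total / 1e12:.2f} TB total")
    except FileNotFoundError:
        print(f"{p}: not found (check the path and your project name)")
```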
Snapshots vs Backups
Snapshots are point-in-time references to a filesystem that can be used to recover files after changes are made. The snapshot data stays on the same storage system, so it is not recoverable if there is an issue with that storage system.
Backups copy data to another storage system for safekeeping, so the data remains recoverable if an issue occurs with the source filesystem.
Purge Policy
File spaces within the MedicineBow cluster filesystem may be subject to a purge policy, and ARCC reserves the right to purge data covered by that policy. Before a purge event is performed, the owners of files subject to purging will be notified by email.
Additional information summarizing ARCC’s storage policy is available here.
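As an illustration only, the sketch below lists files in $SCRATCH older than an assumed 90-day cutoff so you can review candidates before a purge notice arrives; the actual purge age and criterion (access vs. modification time) are set by ARCC policy, not by this example.

```python
import os
import time
from pathlib import Path

CUTOFF_DAYS = 90  # hypothetical threshold for illustration; the real purge age is set by ARCC policy
cutoff = time.time() - CUTOFF_DAYS * 86400

scratch = Path(os.environ.get("SCRATCH", f"/gscratch/{os.environ['USER']}"))
for path in scratch.rglob("*"):
    try:
        if path.is_file() and path.stat().st_mtime < cutoff:
            print(path)  # candidate for cleanup, or for copying to /project to keep it
    except OSError:
        pass  # files can disappear or be unreadable while scanning
```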
Storage Increases on MedicineBow
Special Filesystems
Certain filesystems exist only on specific nodes of the cluster to meet specialized requirements. The table below summarizes these filesystems.
Specialty Filesystems
Filesystem Name | Mount Location | Notes |
---|---|---|
Data | /data | Only on login nodes |
node local scratch | /lscratch | Only on compute nodes, Moran is 1TB HDD, MedicineBow is 240GB SSD |
memory filesystem | /dev/shm | RAM-based tmpfs available as part of RAM for very rapid I/O operations; small capacity |
The node-local scratch or lscratch filesystem is purged at the end of each job.
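Because of this, any results written to /lscratch must be copied back to persistent storage before the job completes. A minimal sketch of that stage-in/stage-out pattern, assuming a Slurm-style job ID variable and a writable per-user directory under /lscratch:

```python
import os
import shutil
from pathlib import Path

# Work in node-local scratch for fast I/O, then copy results back to gscratch
# before the job finishes, since /lscratch is purged at the end of each job.
job_id = os.environ.get("SLURM_JOB_ID", "interactive")  # assumes a Slurm-style job ID variable
local = Path("/lscratch") / os.environ["USER"] / job_id
local.mkdir(parents=True, exist_ok=True)

# ... run the I/O-intensive part of the job against files under `local` ...

dest = Path(os.environ.get("SCRATCH", f"/gscratch/{os.environ['USER']}")) / f"results_{job_id}"
shutil.copytree(local, dest, dirs_exist_ok=True)  # persist results before the node cleans up
```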
The memory filesystem can greatly enhance the performance of small I/O operations. If you have a localized, single-node job with very intensive random-access I/O patterns, this filesystem may improve the performance of your compute job.
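For example, a job can place small temporary files in /dev/shm and treat them like ordinary files; the sketch below is one way to do this safely, cleaning up automatically so the files do not linger in node memory.

```python
import os
import tempfile

# /dev/shm is a RAM-backed tmpfs: very fast for small random I/O, but it consumes
# node memory, so keep files small and remove them when finished.
with tempfile.TemporaryDirectory(dir="/dev/shm") as tmpdir:
    scratch_file = os.path.join(tmpdir, "intermediate.bin")
    with open(scratch_file, "wb") as f:
        for _ in range(1000):
            f.write(os.urandom(4096))  # many small writes land in RAM, not on disk
    print(os.path.getsize(scratch_file), "bytes written to", scratch_file)
# the temporary directory and its contents are removed automatically on exit
```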
The new Alcova filesystems are only available from the login nodes, not from the compute nodes. Storage space on the MedicineBow global filesystems does not imply storage space on ARCC's Alcova storage, or vice versa. For more information about the Alcova filesystem on Data, please see: Data on Alcova