Teton Filesystem

Overview: Global Filesystems

The Teton global parallel filesystem is configured with a 160 TB SSD tier for active data and a 1.2 PB HDD capacity tier for less frequently used data. The system's policy engine moves data automatically between pools (disks and tiers); when the SSD tier reaches 70% used capacity, data is automatically migrated to HDD. Teton provides several file spaces for users, described below.

home - /home/username ($HOME)

  • Space for configuration files and software installations. This file space is intended to be small and always resides on SSDs. The /home file space is snapshotted to recover from accidental deletions.

project - /project/project_name/[username]

  • Space to collaborate among project members. Data here is persistent and exempt from the purge policy. The /project file space is snapshotted to recover from accidental deletions.

gscratch - /gscratch/username ($SCRATCH)

  • Space to perform computing for individual users. Data here is subject to the purge policy defined below; warning emails are sent before deletions begin. No snapshots. A short example of addressing each of these spaces from the shell follows this list.
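As a quick orientation, the sketch below shows how these spaces are typically referenced from a shell session. The $HOME and $SCRATCH variables are those noted above; my_project is a placeholder for an actual project name.

    # Home: configuration files and small software installs (25 GB quota)
    cd "$HOME"

    # Project: shared, persistent collaboration space
    # (replace my_project with your project's name)
    cd /project/my_project/"$USER"

    # Scratch: high-volume working space, subject to the purge policy
    cd "$SCRATCH"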

Global Filesystems

Filesystem Name | Quota (GB) | Snapshots | Backups | Purge Policy | Additional Info
home            | 25         | Yes       | No      | No           | Always on SSD
project         | 1024       | Yes       | No      | No           | Aging data will move to HDD
gscratch        | 5120       | No        | No      | Yes          | Aging data will move to HDD
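To gauge how much of a quota a space is consuming, a simple (if slow on large directory trees) check is du. A site-specific quota command would be more accurate, but the exact tool depends on Teton's filesystem software, so only the generic approach is sketched here.

    # Report total usage of your home directory against the 25 GB quota
    du -sh "$HOME"

    # Report usage of your scratch space against the 5120 GB quota
    du -sh "$SCRATCH"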

Snapshots vs Backups

Snapshots are point-in-time references to a filesystem that can be used to retrieve earlier versions of files after changes are made. The data stays on the same storage system, so it would not be recoverable if there is an issue with the storage system itself.

Backups copy data to another storage system for safekeeping, so the data remains recoverable if an issue occurs with the source filesystem.
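As a hypothetical sketch of what snapshot recovery can look like: many snapshot-capable filesystems expose read-only copies under a hidden directory. The .snapshots path and snapshot name below are placeholders, not confirmed Teton paths; the actual location should be confirmed with ARCC.

    # List available snapshots (hypothetical path; confirm with ARCC)
    ls /home/.snapshots

    # Copy an accidentally deleted file back from a snapshot
    # (snapshot name and file are illustrative placeholders)
    cp /home/.snapshots/daily-2020-01-15/username/notes.txt "$HOME"/notes.txt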

Purge Policy

  • File spaces within the Teton cluster filesystem may be subject to a purge policy. ARCC reserves the right to purge data in these areas after 30 to 90 days without access, measured from the last access or creation time. Before an actual purge event, the owner of the file(s) will be notified by email several times that the files are subject to being purged.
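To anticipate which files may be at risk, the standard find utility can report files not accessed within a given window; the 30-day threshold below matches the lower bound of the policy above.

    # List files in scratch that have not been accessed in the last 30 days
    find "$SCRATCH" -type f -atime +30

    # The same list with sizes, to prioritize what to move or delete
    find "$SCRATCH" -type f -atime +30 -exec ls -lh {} +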

Storage Increases on Teton

  • Project PIs can purchase additional scratch and/or project space at a cost of $100 / TB / year.

  • Additionally, PIs can request allocation increases at no cost for scratch and/or project space by submitting a proposal that describes:

    • the scientific gain and insights that will be or have been obtained by using the system, and

    • how data is organized and accessed in an effort to maximize performance and usage.

  • Proposals must be renewed when substantial cluster or storage changes occur, and projects are limited to one no-cost increase.

  • For more information, please contact ARCC.

Special Filesystems

Certain filesystems exist only on particular nodes of the cluster to meet specialized requirements. The table below summarizes these specialty filesystems.

Specialty Filesystems

Filesystem Name    | Mount Location       | Notes
petaLibrary        | /petalibrary/homes   | Only on login nodes
petaLibrary        | /petalibrary/Commons | Only on login nodes
node local scratch | /lscratch            | Only on compute nodes; Moran nodes have 1 TB HDD, Teton nodes have 240 GB SSD
memory filesystem  | /dev/shm             | RAM-based tmpfs for very rapid I/O operations; capacity is small and comes out of node RAM

The node-local scratch filesystem (/lscratch) is purged at the end of each job.
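Because /lscratch is wiped when a job ends, any results written there must be copied back before the job exits. The sketch below assumes a Slurm batch environment; the per-job directory name built from $SLURM_JOB_ID is a convention, not a guaranteed path, and my_program is a placeholder for your own executable.

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --time=01:00:00

    # Stage input into fast node-local scratch (directory name is a convention)
    WORKDIR=/lscratch/"$USER"/"$SLURM_JOB_ID"
    mkdir -p "$WORKDIR"
    cp "$SCRATCH"/input.dat "$WORKDIR"/

    # Run against local scratch (my_program is a placeholder)
    cd "$WORKDIR"
    ./my_program input.dat > output.dat

    # Copy results back before the job ends; /lscratch is purged afterwards
    cp output.dat "$SCRATCH"/results/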

The memory filesystem can greatly enhance the performance of small I/O operations. If you have a localized, single-node job with very intensive random-access I/O patterns, this filesystem may improve the performance of your compute job.
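A minimal sketch of using /dev/shm as a fast temporary directory follows. Because tmpfs capacity comes out of node RAM, files placed there must stay small and should be removed (or copied out) before the job ends; the directory naming below is a hypothetical convention.

    # Create a per-job temporary directory in RAM-backed tmpfs
    TMPDIR=/dev/shm/"$USER"-"$SLURM_JOB_ID"
    mkdir -p "$TMPDIR"

    # Point tools that honor TMPDIR at the fast RAM filesystem
    export TMPDIR

    # ... run random-access-heavy work here ...

    # Free the RAM by removing the directory before the job exits
    rm -rf "$TMPDIR"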

The petaLibrary filesystems are available only from the login nodes, not from the compute nodes. Storage space on the Teton global filesystems does not imply storage space on the ARCC petaLibrary, or vice versa. For more information, please see the petaLibrary page.

The Bighorn filesystems will be provided for a limited amount of time so that researchers can move data to the petaLibrary, Teton storage, or some other storage media. The actual date that these mounts will be removed is still TBD.