...

Remember to use the following form of the command: srun g16 ...
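For example, a minimal batch script might look like the following sketch. The module name, resource values, and input file name are placeholders; adjust them to your own allocation:

  #!/bin/bash
  #SBATCH --job-name=g16-test
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=16G
  #SBATCH --time=01:00:00

  # Module name is an assumption; check 'module spider gaussian' on your cluster
  module load gaussian

  # Launch Gaussian through srun so Slurm manages and tracks the process
  srun g16 water.gjf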

GPUs

As stated on Gaussian's "Using GPUs" page: "Gaussian 16 can use NVIDIA K40, K80, P100 (Rev. B.01), V100 (Rev. C.01) and A100 (Rev. C.02) GPUs under Linux. Earlier GPUs do not have the computational capabilities or memory size to run the algorithms in Gaussian 16." We have successfully tested using the P100s on both Beartooth and Teton. Bear in mind that just as older GPUs are not supported, the same is true for newer GPUs: we have not been able to successfully run a test using the newer A30s on Beartooth.

Essentially, to use GPUs you have to bind each GPU device to a specific controlling CPU. The "Optimizing the runtime of your jobs" section at https://wiki.hpcuser.uni-oldenburg.de/index.php?title=Gaussian_2016 provides a good example of how to identify the CPUs your job is running on, and then how to set the %CPU and %GPUCPU parameters within your .gjf input file.
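For instance, here is a sketch for a job allocated four CPU cores and one GPU; the core and GPU numbers are illustrative and must match what Slurm actually assigns to your job:

  Run inside the job to see which cores you were given:
    $ taskset -cp $$
    pid 12345's current affinity list: 0-3

  Matching Link 0 lines at the top of the .gjf file:
    %CPU=0-3
    %GPUCPU=0=0

Here %CPU=0-3 uses all four cores for computation, and %GPUCPU=0=0 pairs GPU 0 with controlling core 0 (the general form is %GPUCPU=gpu-list=core-list).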

Memory Considerations

  • The amount of memory you require is a bit of a dark art and will depend on your input files and the number of cores requested, so it takes an element of experimentation and analysis.

  • The freqmem utility takes parameters for a frequency calculation and determines the amount of memory required to complete all steps in one pass for maximum efficiency. Use this to approximate memory requirements.

  • According to the Gaussian 16 Rev. C.01 Release Notes, under the Parallel Performance tab, the memory allocation section recommends setting %mem to half the total memory you allocate for your job. In other words, once you've approximated Gaussian's memory needs, request twice as much in your batch script; see the sketch after this list.
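A minimal sketch of that rule of thumb, assuming Gaussian is estimated to need roughly 32 GB (all values are placeholders):

  In the batch script, request double the estimate:
    #SBATCH --mem=64G

  In the .gjf Link 0 section, give Gaussian half of the Slurm request:
    %Mem=32GB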

...