DFTB
Overview
DFTB+ stands for "Density Functional based Tight Binding (and more)". It implements the Density Functional based Tight Binding (DFTB) method together with many extensions to the original method. Development is supported by various groups, making DFTB+ probably the most versatile DFTB implementation, with some unique features not yet available in other implementations.
Features: DFTB+ offers an approximate density functional theory based quantum simulation tool with functionality similar to ab initio quantum mechanical packages while being one to two orders of magnitude faster. You can optimize the structure of molecules and solids, and extract one-electron spectra, band structures, and various other useful quantities. Additionally, you can calculate electron transport under non-equilibrium conditions.
Documentation: recipes and manuals are available.
Using
Use the module name `dftb` to discover the versions available and to load the application.
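On clusters using Lmod, the discovery and load steps typically look like the following; this is a sketch, and the exact module commands available on your system may differ:

```
module spider dftb   # list the available versions of dftb
module load dftb     # load the default version
```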
Due to the specific installation of this application, `dftb` must be run using `srun dftb+` within both `sbatch` and `salloc` sessions.
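Putting these steps together, a minimal batch script might look like the sketch below; the account and time values are placeholders that you would replace with your own:

```
#!/bin/bash
#SBATCH --account=<your-project>   # placeholder: your project/account name
#SBATCH --time=01:00:00            # placeholder: adjust to your job
#SBATCH --nodes=1

module load dftb

# dftb+ reads its input (dftb_in.hsd) from the current working directory
srun dftb+
```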
Multicore
Some of the DFTB versions have been built with MPI to allow processing across multiple nodes - see the tables below.
Please read section 2.11 (Parallel) of the manual to understand how to configure your input to affect the parallel behavior of your code.
DFTB uses the `OMP_NUM_THREADS` environment variable to define the number of OpenMP threads a job can run on any node. Use:

```
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
```

Configure the `Parallel {}` block within your input with appropriate `Groups` and `UseOmpThreads` options.
For example:

```
# sbatch file:
#SBATCH --nodes=2
#SBATCH --cpus-per-task=16
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
```

```
# Within: dftb_in.hsd
Parallel {
  Groups = 2
  UseOmpThreads = Yes
}
```
You can then confirm the configuration within the output by noticing:
```
# Within the output:
MPI processes: 2
OpenMP threads: 16
```
Illegal Instruction Issue
Due to the instruction set used, the application can fail with illegal instruction errors if run across the older `moran` nodes. To resolve this, explicitly state the partition (typically teton) that you wish to run your jobs across.
```
Program received signal SIGILL: Illegal instruction.

Backtrace for this error:
#0 0x2b4ce98af3ff in ???
...
#13 0x42591c in ???
#14 0xffffffffffffffff in ???
srun: error: m003: task 0: Illegal instruction
srun: launch/slurm: _step_signal: Terminating StepId=2328549.0
```
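For example, adding a partition request to your batch script (using the teton partition named above) keeps the job off the moran nodes:

```
#SBATCH --partition=teton
```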
Using on Beartooth
Version | Notes |
---|---|
22.2 | Provides multi-node parallelism. It has not been built with the optional packages ARPACK-NG, ELSI, MAGMA, or PLUMED2. Note: This is the first version that installs DFTB via a conda environment; previous versions were manually installed. MPI functionality is provided by and packaged within the conda environment itself, and does not require loading compilers/openmpi libraries. To utilize MPI, your command line needs to take the form: `srun dftb+`. This conda environment, once this module version has been loaded, uses Python 3.11.2. |
21.2_mdForces | Provides multi-node parallelism. This version is built from the specific mdForces branch off of the main application. It has been built with the optional package ARPACK-NG. |
22.1-ompi | Provides multi-node parallelism. To utilize MPI, your command line needs to take the form: `srun dftb+`. |
Using on Teton
Version | Notes |
---|---|
21.2 | Provides multi-node parallelism (which in itself does not support the ARPACK-NG library). It has not been built with the optional packages ELSI, MAGMA, or PLUMED2. |
21.2_mdForces | Provides multi-node parallelism. This version is built from the specific mdForces branch off of the main application. |
20.1 | Version 20.1 has been built with the optional ARPACK-NG library for excited-state DFTB functionality. It does NOT provide multi-node parallelism. |