DENISE (Black Edition)

Overview

DENISE Black Edition is a 2D time-domain isotropic (visco)elastic finite-difference modeling and full waveform inversion (FWI) code for P/SV-waves, developed together with André Kurzmann, Denise De Nil and Thomas Bohlen. Since then the code has been extended by Lisa Groos, Sven Heider, Martin Schäfer, Linbin Zhang, and Daniel Wehner.

Using

Use the module name denise to discover the available versions and to load the application.

 

Testing

The following test is based on Chapter 7, Example 1 in the manual. It assumes you have:

  • Cloned the DENISE-Black-Edition repository and have a copy of the par subfolder.

  • Cloned the DENISE-Benchmark repository and copied the Marmousi test files as instructed.

The documentation describes using mpirun to run a simulation. Instead, you need to use the following method:

  • Create a batch file that you'll submit using sbatch or use salloc to create an interactive session.

  • Use srun to start your simulation.
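As a sketch, the interactive route might look like the following; the project name, time limit, and core count are placeholders taken from Example 1 below and must be adapted to your own allocation:

```shell
# Request an interactive allocation (placeholder account and limits)
salloc --account=type-your-project-name --time=00:05:00 --nodes=1 --ntasks-per-node=15

# Inside the allocation: load the module and launch with srun, not mpirun
module load denise/1.3
srun -n 15 denise DENISE_marm_OBC.inp
```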

Notes

  • The value of nodes * ntasks-per-node must match the srun -n value.

  • This value in turn must match NPROCX * NPROCY as defined in the DENISE_marm_OBC.inp file:

 

Example 1

#!/bin/bash
#SBATCH --account=type-your-project-name
#SBATCH --time=00:05:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=15
#SBATCH --cpus-per-task=1
#SBATCH --output=denise_%A.out

module load denise/1.3

srun -n 15 denise DENISE_marm_OBC.inp

With the matching settings in the DENISE_marm_OBC.inp file:

#-------------- Domain Decomposition -----------------------------
number_of_processors_in_x-direction_(NPROCX) = 5
number_of_processors_in_y-direction_(NPROCY) = 3
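As a quick sanity check, the consistency between the batch settings and the parameter file can be verified with a few lines of shell arithmetic; the values below are those of Example 1:

```shell
#!/bin/bash
# Values from the batch file (Example 1)
NODES=1
NTASKS_PER_NODE=15
# Values from DENISE_marm_OBC.inp
NPROCX=5
NPROCY=3

# nodes * ntasks-per-node must equal NPROCX * NPROCY (and the srun -n value)
if [ $((NODES * NTASKS_PER_NODE)) -eq $((NPROCX * NPROCY)) ]; then
    echo "consistent: $((NPROCX * NPROCY)) MPI tasks"
else
    echo "mismatch: srun -n $((NODES * NTASKS_PER_NODE)) vs $((NPROCX * NPROCY)) domains"
fi
```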

 

Example 2

#!/bin/bash
#SBATCH --account=type-your-project-name
#SBATCH --time=00:05:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=30
#SBATCH --cpus-per-task=1
#SBATCH --output=denise_%A.out

module load denise/1.3

srun -n 60 denise DENISE_marm_OBC.inp

With the matching settings in the DENISE_marm_OBC.inp file:

#-------------- Domain Decomposition -----------------------------
number_of_processors_in_x-direction_(NPROCX) = 10
number_of_processors_in_y-direction_(NPROCY) = 6

Remember: The number of cores you want to use must satisfy the conditions for the domain decomposition.

NX % NPROCX = 0
NY % NPROCY = 0
  • where NX, NY denote the number of FD grid points in x- and y-direction,

  • NPROCX, NPROCY are the number of MPI-processes in each spatial direction

  • and the % (modulo) operator yields the remainder from the division of the first argument by the second.

    For the given Marmousi-2 model discretization: NX = 500, NY = 174.
    With the domain decomposition NPROCX = 10, NPROCY = 6 in the DENISE parameter file, DENISE runs on NP = NPROCX * NPROCY = 60 cores.
    In the batch file we can therefore define 2 nodes, each using 30 cores: 2 * 30 = 60,
    or 4 nodes, each using 15 cores: 4 * 15 = 60.
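The divisibility conditions can be verified with shell arithmetic; the values below are the Marmousi-2 figures quoted above:

```shell
#!/bin/bash
# Marmousi-2 grid size and the decomposition used in Example 2
NX=500
NY=174
NPROCX=10
NPROCY=6

# Both remainders must be zero for a valid domain decomposition
if [ $((NX % NPROCX)) -eq 0 ] && [ $((NY % NPROCY)) -eq 0 ]; then
    echo "valid decomposition on $((NPROCX * NPROCY)) cores"
else
    echo "invalid: NX % NPROCX = $((NX % NPROCX)), NY % NPROCY = $((NY % NPROCY))"
fi
```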

  • This software depends on the following modules:

    • swset/2018.05

    • intel/18.0.1

    • intel-mpi/2018.2.199

    • fftw/3.3.8-impi (compiled with intel/18.0.1)

    Note: the module load denise/1.3 line will automatically load these modules for you.