Cmdstan

Overview

  • Stan: a state-of-the-art platform for statistical modeling and high-performance statistical computation.

  • Stan interfaces: the Stan modeling language and statistical algorithms are exposed through interfaces to many popular computing environments; CmdStan is the command-line interface to Stan.

Using

Use the module name cmdstan to discover versions available and to load the application.
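For example, a minimal sketch of discovering and loading the module (the version strings listed will depend on the system):

  module avail cmdstan     # list available versions (module spider cmdstan on Lmod-based systems)
  module load cmdstan      # load the default version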

General Process: There are three core steps to using cmdstan:

  1. Convert a .stan program into a .hpp file using stanc.

    1. For example: stanc src/bernoulli.stan will generate src/bernoulli.hpp

  2. Build the .hpp into an executable using make.

    1. You will need to use: make -C $CMDSTAN $(pwd)/src/bernoulli

      1. The $CMDSTAN environment variable is set when the module is loaded and points to the directory containing the makefile that ships with the installation; that makefile links against the Stan core and math includes/libraries that are part of the installation.

      2. Passing the absolute path $(pwd)/src/bernoulli builds the executable inside the home/project folder you are currently in, rather than inside the CmdStan installation tree.

  3. Run the executable. Because cmdstan has been built against OpenMPI, you cannot simply run the executable directly; you will need to:

    1. Start an interactive allocation with salloc or submit a batch job with sbatch.

    2. Launch the executable with srun: srun ./src/bernoulli ... (see the worked sketch below).
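Putting the three steps together, a minimal batch-script sketch might look like the following; the job name, task count, and walltime are illustrative placeholders, and the sample/data arguments follow the bernoulli example used on this page:

  #!/bin/bash
  #SBATCH --job-name=bernoulli      # illustrative job name
  #SBATCH --ntasks=1                # a single task is enough for this small example
  #SBATCH --time=00:10:00           # illustrative walltime

  module load cmdstan

  # Launch through srun so the MPI runtime is set up correctly
  srun ./src/bernoulli sample data file=src/bernoulli.data.json output file=output.csv

Submit the script with sbatch from the directory that contains src/bernoulli.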

If you run the executable without srun, you will observe output like the following:

  [@m001 testing]$ ./src/bernoulli sample data file=src/bernoulli.data.json
  [m001:91996] OPAL ERROR: Unreachable in file ext3x_client.c at line 112
  --------------------------------------------------------------------------
  The application appears to have been direct launched using "srun",
  but OMPI was not built with SLURM's PMI support and therefore cannot
  execute. There are several options for building PMI support under
  SLURM, depending upon the SLURM version you are using:

    version 16.05 or later: you can use SLURM's PMIx support. This
    requires that you configure and build SLURM --with-pmix.

    Versions earlier than 16.05: you must use either SLURM's PMI-1 or
    PMI-2 support. SLURM builds PMI-1 by default, or you can manually
    install PMI-2. You must then build Open MPI using --with-pmi pointing
    to the SLURM PMI library location.

  Please configure as appropriate and try again.
  --------------------------------------------------------------------------
  *** An error occurred in MPI_Init
  *** on a NULL communicator
  *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
  ***    and potentially your MPI job)
  [m001:91996] Local abort before MPI_INIT completed completed successfully, but am not able to
  aggregate error messages, and not able to guarantee that all other processes were killed!
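By contrast, the same command launched with srun inside an allocation initializes MPI correctly; the salloc options below are illustrative:

  salloc --ntasks=1 --time=00:10:00    # request an interactive allocation
  srun ./src/bernoulli sample data file=src/bernoulli.data.json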

Multicore

cmdstan has been built with MPI capabilities. Because exercising these features requires domain expertise in Stan, they have not been fully tested on this system.

The CmdStan User's Guide includes a section on parallelization. Please note that this installation of cmdstan has been built with TBB and MPI capabilities, but not OpenCL.
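As a sketch of how within-chain parallelism might be requested at run time, assuming a model written to use reduce_sum or map_rect (the bernoulli example itself is not parallelized, so src/my_model below is only a placeholder), the thread and task counts are illustrative:

  # TBB threading: only helps if the model uses reduce_sum / map_rect
  export STAN_NUM_THREADS=4
  srun --ntasks=1 --cpus-per-task=4 ./src/my_model sample data file=src/my_model.data.json

  # MPI: only meaningful for models using map_rect with this installation's MPI-enabled build
  srun --ntasks=4 ./src/my_model sample data file=src/my_model.data.json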