
Overview

  • Rmpi: An interface (wrapper) to MPI. It also provides an interactive R manager and worker environment.

We currently only have version 0.7-1 available.

Note: Loading this module will also load r/4.2.2, meaning you get an environment with both r/4.2.2 and the Rmpi library. You do NOT need to load r/4.2.2 as a separate module.

Using

This section describes how to use this library on Beartooth.

Multicore

This R library is designed to run across multiple nodes, with multiple tasks on each node.

ONLY using the Rmpi library

If you are only using the Rmpi library, and no related parallel libraries such as snow, then to allow Rmpi to run on the cluster you first need to copy the following file into your home folder. (If you already have a .Rprofile file in your home, you'll need to update it instead; one approach is sketched below.)

[]$ cp /apps/u/spack/gcc/12.2.0/r-rmpi/0.7-1-3eiutsq/rlib/R/library/Rmpi/Rprofile ~/.Rprofile
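If a ~/.Rprofile already exists, one option (a sketch only; review the result afterwards) is to append the Rmpi profile to it so that your existing settings still run first:

[]$ cat /apps/u/spack/gcc/12.2.0/r-rmpi/0.7-1-3eiutsq/rlib/R/library/Rmpi/Rprofile >> ~/.Rprofile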

To then use the module, you’ll need to load the following modules:

module load gcc/12.2.0 openmpi/4.1.4 r-rmpi/0.7-1-ompi
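To confirm the environment, module list should show r/4.2.2 loaded alongside the Rmpi module:

[]$ module list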

Example

This example was based on an example here:

 rmpi_test.R
# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}

# mpi.universe.size() reports the total number of MPI slots allocated by
# Slurm; one slot is used by the master process, so spawn one fewer worker.
ns <- mpi.universe.size() - 1

# Spawn the workers; each prints an identifying line to the job output.
mpi.spawn.Rslaves(nslaves=ns)

# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
       if (is.loaded("mpi_initialize")){
           if (mpi.comm.size(1) > 0){
               print("Please use mpi.close.Rslaves() to close slaves.")
               mpi.close.Rslaves()
           }
           print("Please use mpi.quit() to quit R")
           .Call("mpi_finalize")
       }
}

# Tell all slaves to return a message identifying themselves
mpi.bcast.cmd( id <- mpi.comm.rank() )
mpi.bcast.cmd( ns <- mpi.comm.size() )
mpi.bcast.cmd( host <- mpi.get.processor.name() )
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))

# Test computations: each worker draws x = 5 random normals; the results
# come back as a data frame with one column per worker.
x <- 5
x <- mpi.remote.exec(rnorm, x)
length(x)
x

# Tell all slaves to close down, and exit the program
mpi.close.Rslaves(dellog = FALSE)
mpi.quit()
 run.sh
#!/bin/bash
#SBATCH --job-name=rmpi-test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=10:00
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<your-email-addr>
#SBATCH --account=<your-project>

module load gcc/12.2.0 openmpi/4.1.4 r-rmpi/0.7-1-ompi

srun Rscript rmpi_test.R
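With --nodes=2 and --ntasks-per-node=4, Slurm allocates 2 x 4 = 8 MPI slots, so mpi.universe.size() returns 8 and the script spawns 7 workers: one master plus seven slaves, as in the output below.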
 Example Run and Output
[]$ sbatch run.sh
Submitted batch job 13116893

[]$ cat slurm-13116893.out
master (rank 0, comm 1) of size 8 is running on: ttest01
slave1 (rank 1, comm 1) of size 8 is running on: ttest01
slave2 (rank 2, comm 1) of size 8 is running on: ttest01
slave3 (rank 3, comm 1) of size 8 is running on: ttest01
slave4 (rank 4, comm 1) of size 8 is running on: ttest02
slave5 (rank 5, comm 1) of size 8 is running on: ttest02
slave6 (rank 6, comm 1) of size 8 is running on: ttest02
slave7 (rank 7, comm 1) of size 8 is running on: ttest02
Error in mpi.spawn.Rslaves(nslaves = ns) :
  It seems there are some slaves running on comm  1
$slave1
[1] "I am 1 of 8"

$slave2
[1] "I am 2 of 8"

$slave3
[1] "I am 3 of 8"

$slave4
[1] "I am 4 of 8"

$slave5
[1] "I am 5 of 8"

$slave6
[1] "I am 6 of 8"

$slave7
[1] "I am 7 of 8"

[1] 7
           X1          X2         X3         X4         X5         X6
1  0.25231568 -0.70670787 -0.8623333 -1.1538241 -1.3747273 -0.9696954
2 -0.91498764  1.09819580  0.5737269 -0.6856323  0.8941616 -1.7339326
3  1.51865169  1.63120359 -0.9954300  0.2413086  0.2627482  1.6690493
4  1.09594877  0.08905511 -0.1490578  1.2190246 -0.1724257 -1.3822756
5  0.09966169 -1.92527468  0.9805431  1.8346315  0.2773092 -0.7084154
           X7
1 -0.26325262
2 -1.20082024
3 -0.04534522
4 -0.14685414
5  0.34071411
[1] 1
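The [1] 7 line is the result of length(x): the test computation returns one column of five random draws for each of the seven workers.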
 Error if no .Rprofile
# If you do not copy the .Rprofile file into your home directory, you'll see an error of the form:
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"),  :
  MPI_ERR_SPAWN: could not spawn processes
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted

Using the Rmpi library with Snow
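
When combining Rmpi with snow, the usual pattern is to let snow create and manage the MPI workers rather than spawning slaves yourself. Below is a minimal sketch based on snow's standard MPI cluster interface (makeCluster() with type = "MPI" spawns its workers through Rmpi); snow_test.R is a hypothetical file name, and this assumes the snow package is installed in your R library path.

 snow_test.R
# Load Rmpi and snow; snow's "MPI" cluster type spawns its workers via Rmpi.
library(Rmpi)
library(snow)

# Leave one MPI slot free for the master process.
cl <- makeCluster(mpi.universe.size() - 1, type = "MPI")

# Ask every worker which node it is running on.
print(clusterCall(cl, function() Sys.info()[["nodename"]]))

# Shut the workers down cleanly, then exit.
stopCluster(cl)
mpi.quit()

The launch mechanics under Slurm (for example, whether to start a single task that spawns the rest) may differ from the plain-Rmpi example above; check the snow documentation for details.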
