oneAPI: Compiling [beartooth]
Overview
The Intel oneAPI ecosystem provides a number of compilers and libraries. This page describes the compilers provided on Beartooth, including basic examples and command-line instructions for compiling.
Compilers Available
Below is a table summarizing which compilers are provided by which modules:
| Compiler | Description | Language | Module |
|---|---|---|---|
| `icc` | Intel® C++ Compiler Classic. The primary use case is for standard C/C++ applications. | C/C++ | `icc/2022.2.0` |
| `dpcpp` | Intel® oneAPI DPC++/C++ Compiler. The primary use case is for Data Parallel applications. | C/C++ | `compiler/2022.2.0` |
| `icx` | Intel® oneAPI DPC++/C++ Compiler. The primary use case is for standard C applications. | C | `compiler/2022.2.0` |
| `icpx` | Intel® oneAPI DPC++/C++ Compiler. The primary use case is for standard C++ applications. | C++ | `compiler/2022.2.0` |
| `ifx` | Intel® Fortran Compiler. The LLVM-based Fortran compiler with GPU offload support. | Fortran | `compiler/2022.2.0` |
| `ifort` | Intel® Fortran Compiler Classic. For calendar year 2022, `ifort` continues to be Intel's best-in-class Fortran compiler for customers not needing GPU offload support. | Fortran | `compiler/2022.2.0` |
| `mpicc` | MPI C compiler that uses generic wrappers for the underlying default compiler. | C | `mpi/2021.7.0` |
| `mpicxx` | MPI C++ compiler that uses generic wrappers for the underlying default compiler. | C/C++ | `mpi/2021.7.0` |
| `mpifc` | MPI Fortran compiler that uses generic wrappers for the underlying default compiler. | Fortran | `mpi/2021.7.0` |
| `mpif77` | MPI Fortran compiler that uses GNU wrappers for the GNU Fortran compiler (Fortran 77). | Fortran | `mpi/2021.7.0` |
| `mpif90` | MPI Fortran compiler that uses GNU wrappers for the GNU Fortran compiler (Fortran 90). | Fortran | `mpi/2021.7.0` |
| `mpigcc` | MPI C compiler that uses GNU wrappers for the `gcc` compiler. | C | `mpi/2021.7.0` |
| `mpigxx` | MPI C++ compiler that uses GNU wrappers for the `g++` compiler. | C/C++ | `mpi/2021.7.0` |
| `mpiicc` | MPI C compiler that uses Intel wrappers for the `icc` compiler. | C | `mpi/2021.7.0` |
| `mpiicpc` | MPI C++ compiler that uses Intel wrappers for the `icpc` compiler. | C++ | `mpi/2021.7.0` |
| `mpiifort` | MPI Fortran compiler that uses Intel wrappers for the `ifort` compiler. | Fortran | `mpi/2021.7.0` |
Acronyms:
MKL: Math Kernel Library: a computing math library of highly optimized and extensively parallelized routines for applications that require maximum performance.
TBB: Threading Building Blocks: a widely used C++ library for task-based, shared memory parallel programming on the host.
MPI: Message Passing Interface: a multifabric message-passing library that implements the open source MPICH specification.
Best Practice: Because compiling code can be computationally intensive or may have long compilation times, we ask that large compilations be done inside either a `salloc` session or an `sbatch` job.
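For example, a large build could be run in an interactive session along the lines of the sketch below. The account name and resource values are placeholders, not Beartooth-specific recommendations; substitute your own project and limits.

```shell
# Request a short interactive allocation for compiling (placeholder values).
salloc --account=<your-project> --time=01:00:00 --cpus-per-task=4

# ...run your module loads and compile commands inside the session, then:
exit  # release the allocation when the build finishes
```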
Examples:
MKL: C
Using icc compiler
[@m001 test]$ module load oneapi/2022.3 icc/2022.2.0 mkl/2022.2.0
[@m001 test]$ icc -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_icc mkl_test.c
Using icx compiler
[@m001 test]$ module load oneapi/2022.3 compiler/2022.2.0 mkl/2022.2.0
[@m001 test]$ icx -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_icx mkl_test.c
Output
Since all of the C MKL examples are built with the same source files, the results of the executable will be the same.
MKL: C++
Using dpcpp compiler
Using icc compiler
Using icpx compiler
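The build commands for the C++ sections were lost from this page. A plausible reconstruction, following the pattern of the C examples above, is shown below; the source file name `mkl_test.cpp` and the reuse of the C link line are assumptions, so verify the flags your code needs (for example with Intel's MKL Link Line Advisor).

```shell
# Classic icc lives in its own module; dpcpp and icpx come from "compiler".
module load oneapi/2022.3 icc/2022.2.0 compiler/2022.2.0 mkl/2022.2.0

# Assumed source name and link flags, mirroring the C examples above.
dpcpp -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_dpcpp mkl_test.cpp
icc   -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_icc   mkl_test.cpp
icpx  -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_icpx  mkl_test.cpp
```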
Output
Since all of the C++ MKL examples are built with the same source files, the results of the executable will be the same.
MKL: Fortran
Using ifx compiler
Using ifort compiler
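The Fortran build commands were also stripped from the page. A sketch following the same pattern is below; the source file name `mkl_test.f90` and the LP64 link line are assumptions, so check them against the MKL Link Line Advisor for your own code.

```shell
# Both ifx and ifort are provided by the "compiler" module.
module load oneapi/2022.3 compiler/2022.2.0 mkl/2022.2.0

# Assumed source name and link flags.
ifx   -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_ifx   mkl_test.f90
ifort -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl -o mkl_test_ifort mkl_test.f90
```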
Output
Since all of the Fortran MKL examples are built with the same source files, the results of the executable will be the same.
TBB: C++
The module `tbb/2021.7.0` is automatically loaded when loading `compiler/2022.2.0` or `mkl/2022.2.0`; for this reason it is omitted from the module loads used in the examples.
As per the Specifications section of the Intel oneAPI Threading Building Blocks documentation, this module only works with C++ compilers.
Using dpcpp compiler
Using icc compiler
Using icpx compiler
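The compile commands for the TBB examples were lost from the page; a reconstruction following the earlier examples is below. The source file name `tbb_test.cpp` is an assumption, and whether the `icc` module also auto-loads `tbb` is not stated above, so the explicit module list here is a guess.

```shell
# tbb/2021.7.0 is pulled in automatically by compiler/2022.2.0 (see note above).
module load oneapi/2022.3 icc/2022.2.0 compiler/2022.2.0

# Assumed source name; link against TBB explicitly.
dpcpp -ltbb -o tbb_test_dpcpp tbb_test.cpp
icc   -ltbb -o tbb_test_icc   tbb_test.cpp
icpx  -ltbb -o tbb_test_icpx  tbb_test.cpp
```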
Output
Since all of the C++ examples are built with the same source files, the results of the executable will be the same. The executable will also work across multiple CPUs, so experiment with the number of cores requested in the `salloc` session.
MPI in oneAPI Ecosystem
Intel’s oneAPI MPI compilers use their own copy of `mpirun`, which uses a built-in process manager that pulls the relevant information from SLURM to run code compiled with the oneAPI compilers.
Messages Appearing During Code Execution
Because the process manager used by the oneAPI MPI compilers automatically pulls information from SLURM, it will warn the user that it is ignoring certain environment variables.
To suppress these errors, enter the following in either the `sbatch` script or `salloc` session:
Sample Code: The sample code used on this page is available on Beartooth at: /apps/u/opt/compilers/oneapi/2022.3/mpi/2021.7.0/test
MPI: C
Using mpicc
Using mpicxx
Using mpigcc
Using mpigxx
Using mpiicc
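The MPI C build commands were stripped from the page; a sketch following the module pattern above is below. The source file name `mpi_test.c` is an assumption (the sample directory listed above holds the actual files).

```shell
module load oneapi/2022.3 mpi/2021.7.0

# Assumed source name; one build per wrapper.
mpicc  -o mpi_test_mpicc  mpi_test.c
mpicxx -o mpi_test_mpicxx mpi_test.c
mpigcc -o mpi_test_mpigcc mpi_test.c
mpigxx -o mpi_test_mpigxx mpi_test.c
mpiicc -o mpi_test_mpiicc mpi_test.c

# Run inside a SLURM allocation; Intel's mpirun picks up the task count from SLURM.
mpirun ./mpi_test_mpicc
```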
Output
Since all of the C examples are built with the same source files, the results of the executable will be the same.
MPI: C++
Using mpicxx
Using mpigxx
Using mpiicpc
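As with the C section, the build commands here were lost; a sketch following the same pattern is below. The source file name `mpi_test.cpp` is an assumption.

```shell
module load oneapi/2022.3 mpi/2021.7.0

# Assumed source name; one build per C++ wrapper.
mpicxx  -o mpi_test_mpicxx  mpi_test.cpp
mpigxx  -o mpi_test_mpigxx  mpi_test.cpp
mpiicpc -o mpi_test_mpiicpc mpi_test.cpp
```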
Output
Since all of the C++ examples are built with the same source files, the results of the executable will be the same.
MPI: Fortran
Using mpif77
Using mpif90
Using mpifc compiler
Using mpiifort compiler
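The Fortran build commands were also stripped; a sketch following the same pattern is below. The source file names are assumptions, and `mpif77` typically expects fixed-form source, hence the separate `.f` file here.

```shell
module load oneapi/2022.3 mpi/2021.7.0

# Assumed source names; one build per Fortran wrapper.
mpif77   -o mpi_test_mpif77   mpi_test.f
mpif90   -o mpi_test_mpif90   mpi_test.f90
mpifc    -o mpi_test_mpifc    mpi_test.f90
mpiifort -o mpi_test_mpiifort mpi_test.f90
```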
Output
Since all of the Fortran examples are built with the same source files, the results of the executable will be the same.