Community Open MPI
Scenarios
This section describes how to install and use community Open MPI (version 3.1.0 is used as an example) on a BMS.
Perform the operations on each BMS in a cluster.
Prerequisites
Password-free login has been configured between BMSs in the cluster.
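If password-free login still needs to be set up, a minimal sketch using standard OpenSSH tools is shown below; the peer address is a placeholder, and the commands should be run as the user that will launch MPI jobs.
$ ssh-keygen -t rsa
$ ssh-copy-id <IP address of the peer BMS>    # placeholder address, repeat for each BMS in the cluster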
Procedure
- Install the HPC-X toolkit.
- When community Open MPI is used, the Mellanox HPC-X toolkit is also required. The HPC-X package for CentOS 7.3 is hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64.tbz.
Download path: https://developer.nvidia.com/networking/hpc-x
- Copy the downloaded software package to a directory on the BMS (/home/rhel is recommended).
- Run the following commands to decompress the HPC-X toolkit and move it to /opt/hpcx-v2.2.0:
# tar -xvf hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64.tbz
# mv hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64 /opt/hpcx-v2.2.0
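Before building Open MPI, you can optionally confirm that the MXM component exists at the path that will be passed to configure later (a simple check, assuming the directory layout shown above):
# ls /opt/hpcx-v2.2.0/mxm    # the mxm subdirectory should be listed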
- Install Open MPI.
- Download the community Open MPI package openmpi-3.1.0.tar.gz.
Download path: https://www.open-mpi.org/software/ompi/v3.1/
- Copy the downloaded software package to a directory on the BMS (/home/rhel is recommended).
- Run the following commands to decompress the software package:
# tar -xzvf openmpi-3.1.0.tar.gz
# cd openmpi-3.1.0
- Install the required library files. Ensure that the BMS can access the Internet before the installation.
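The required libraries are not listed here; as an assumption, a typical set of build dependencies on CentOS 7.3 can be installed with yum, for example:
# yum install -y gcc gcc-c++ make    # assumed build dependencies for compiling Open MPI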
- Run the following commands to compile and install Open MPI:
# mkdir build && cd build
# ../configure --prefix=/opt/openmpi-310 --with-mxm=/opt/hpcx-v2.2.0/mxm
# make all install
Figure 2 Successful installation of Open MPI
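To confirm that the build was installed into /opt/openmpi-310 (an assumed check, not part of the original procedure), query the version of the newly installed mpirun:
# /opt/openmpi-310/bin/mpirun --version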
- Configure MPI environment variables.
- Add the following environment variables to ~/.bashrc as a common user:
export PATH=$PATH:/opt/openmpi-310/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-310/lib
- Run the following command to import the MPI environment variables:
$ source ~/.bashrc
- Run the following command to check whether the MPI environment variables are correct:
If information shown in Figure 3 is displayed, the environment variables have been correctly configured.
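The exact check command is not reproduced above; as an assumption, the following commands verify that the shell resolves mpirun to the new installation and report its version:
$ which mpirun
$ mpirun --version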
- Run community Open MPI on a BMS.
- Run the following commands to generate an executable file:
$ vi hello.c
Edit the following content:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    // Initialize the MPI environment.
    MPI_Init(NULL, NULL);
    // Get the number of processes.
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    // Get the rank of the process.
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    // Get the name of the processor.
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);
    // Print off a hello world message.
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
    // Finalize the MPI environment.
    MPI_Finalize();
}
$ mpicc hello.c -o hello
The hello executable is specific to the MPI version used to compile it. Whenever you switch to a different MPI version, re-compile hello.c by running the mpicc hello.c -o hello command again.
- Run the following command to run community Open MPI on a BMS:
$ mpirun -np 2 /home/rhel/hello
If information shown in Figure 4 is displayed, community Open MPI has run successfully on the BMS.
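Because password-free login is configured between the BMSs in the cluster, the same binary can also be launched across nodes with a hostfile. The following is a sketch only: the host names bms01 and bms02 are hypothetical, and /home/rhel/hello must exist at the same path on every node.
$ cat hostfile    # hypothetical hostfile listing the cluster nodes
bms01 slots=1
bms02 slots=1
$ mpirun -np 2 -hostfile hostfile /home/rhel/hello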
