Updated on 2022-05-09 GMT+08:00

Community Open MPI

Scenarios

This section describes how to install and use community Open MPI (version 3.1.0 is used as an example) on a BMS.

Perform the following operations on each BMS in the cluster.

Prerequisites

Password-free login has been configured between BMSs in the cluster.

Procedure

  1. Install the HPC-X toolkit.

    1. Community Open MPI requires the Mellanox HPC-X toolkit. For CentOS 7.3, use hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64.tbz.

      Download path: https://developer.nvidia.com/networking/hpc-x

    2. Copy the downloaded software package to a directory on the BMS (/home/rhel is recommended).
    3. Run the following commands to decompress the HPC-X toolkit and move it to /opt/hpcx-v2.2.0:

      # tar -xvf hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64.tbz

      # mv hpcx-v2.2.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.3-x86_64 /opt/hpcx-v2.2.0
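
      (Optional) HPC-X ships an initialization script that adds its libraries to the current environment. The following is a minimal sketch, assuming the toolkit was moved to /opt/hpcx-v2.2.0 as above; the script name may differ slightly between HPC-X releases:

      # source /opt/hpcx-v2.2.0/hpcx-init.sh

      # hpcx_load

      # env | grep HPCX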

  2. Install Open MPI.

    1. Download the community Open MPI 3.1.0 package openmpi-3.1.0.tar.gz.

      Download path: https://www.open-mpi.org/software/ompi/v3.1/

    2. Copy the downloaded software package to a directory on the BMS (/home/rhel is recommended).
    3. Run the following commands to decompress the software package and go to the extracted directory:

      # tar -xzvf openmpi-3.1.0.tar.gz

      # cd openmpi-3.1.0

    4. Install the required library files. Ensure that the BMS can access the Internet before the installation.
      1. Run the following command to install the dependency packages:

        # yum install binutils-devel.x86_64 gcc-c++ autoconf automake libtool

        Figure 1 Successful installation of the dependency package
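
        To confirm the packages are in place before compiling, you can query rpm directly, for example:

        # rpm -q gcc-c++ autoconf automake libtool binutils-devel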
    5. Run the following commands to compile and install Open MPI:

      # ./autogen.pl

      # mkdir build && cd build

      # ../configure --prefix=/opt/openmpi-310 --with-mxm=/opt/hpcx-v2.2.0/mxm

      # make all install

      Figure 2 Successful installation of Open MPI
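
      To verify that the build used the intended installation prefix and picked up MXM support, you can query ompi_info; the exact component list depends on your build options:

      # /opt/openmpi-310/bin/ompi_info | grep "Open MPI:"

      # /opt/openmpi-310/bin/ompi_info | grep mxm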

  3. Configure MPI environment variables.

    1. As a common user, add the following environment variables to ~/.bashrc:

      export PATH=$PATH:/opt/openmpi-310/bin

      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-310/lib

    2. Run the following command to import MPI environment variables:

      $ source ~/.bashrc

    3. Run the following command to check whether the MPI environment variables are correctly configured:

      $ which mpirun

      Figure 3 Correctly configured environment variables

      If information shown in Figure 3 is displayed, the environment variables have been correctly configured.
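
      As an additional check, you can confirm that the shell resolves the newly built binaries rather than a system-wide MPI:

      $ mpirun --version

      Based on the 3.1.0 build above, the command should report mpirun (Open MPI) 3.1.0, and which mpirun should print /opt/openmpi-310/bin/mpirun.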

  4. Run community Open MPI on a BMS.

    1. Run the following commands to generate an executable file:

      $ cd ~

      $ vi hello.c

      Add the following content:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char** argv) {
          // Initialize the MPI environment.
          MPI_Init(NULL, NULL);

          // Get the number of processes.
          int world_size;
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);

          // Get the rank of the process.
          int world_rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

          // Get the name of the processor.
          char processor_name[MPI_MAX_PROCESSOR_NAME];
          int name_len;
          MPI_Get_processor_name(processor_name, &name_len);

          // Print a hello world message.
          printf("Hello world from processor %s, rank %d out of %d processors\n",
                 processor_name, world_rank, world_size);

          // Finalize the MPI environment.
          MPI_Finalize();
          return 0;
      }

      $ mpicc hello.c -o hello

      The compiled hello executable is bound to the MPI installation used to build it. Whenever you change the MPI version, re-compile hello.c by running the mpicc hello.c -o hello command.
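
      If you need to confirm which MPI installation the mpicc wrapper is using (for example, after switching MPI versions), the Open MPI compiler wrapper can print the underlying compile command:

      $ which mpicc

      $ mpicc -showme hello.c -o hello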

    2. Run the following command to run community Open MPI on a BMS:

      $ mpirun -np 2 /home/rhel/hello

      Figure 4 Successful execution of community Open MPI

      If information shown in Figure 4 is displayed, community Open MPI is running on the BMS.
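
      Because password-free login is already configured between the BMSs (see Prerequisites), the same binary can also be launched across multiple nodes with a hostfile. The following is a minimal sketch; bms-01 and bms-02 are placeholder hostnames, and /home/rhel/hello must exist at the same path on every node:

      $ vi hostfile

      Add the following content:

      bms-01 slots=2
      bms-02 slots=2

      $ mpirun -np 4 --hostfile hostfile /home/rhel/hello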