Updated on 2022-05-09 GMT+08:00

Community Open MPI

Scenarios

This section describes how to install and use community Open MPI (version 4.0.2 is used as an example) on a BMS.

Perform these operations on each BMS in the cluster.

Prerequisites

Password-free login has been configured between BMSs in the cluster.

Procedure

  1. Install Open MPI.

    1. Download the community Open MPI package openmpi-4.0.2.tar.bz2.

      Download path: https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.bz2

    2. Copy the downloaded software package to a directory on the BMS. /home/rhel is recommended.
    3. Run the following commands to decompress the software package:

      # tar -xjvf openmpi-4.0.2.tar.bz2

      # cd openmpi-4.0.2

    4. Install the required dependency packages. Ensure that the BMS can access the Internet before the installation.

      # yum install binutils-devel.x86_64 gcc-c++ autoconf automake libtool

      Figure 1 Successful installation of dependency packages

    5. Run the following commands to configure, compile, and install Open MPI:

      # ./configure --prefix=/opt/openmpi-402 --enable-mpirun-prefix-by-default --enable-mpi1-compatibility --with-ucx=/opt/ucx160

      # make -j 128 && make install

      Figure 2 Successful installation of Open MPI

  2. Configure MPI environment variables.

    1. Add the following environment variables in ~/.bashrc as a common user:

      export PATH=$PATH:/opt/openmpi-402/bin

      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-402/lib

    2. Run the following command to import MPI environment variables:

      $ source ~/.bashrc

    3. Run the following command to check whether the MPI environment variables are correct:

      $ which mpirun

      Figure 3 Correctly configured environment variables

      If information shown in Figure 3 is displayed, the environment variables have been correctly configured.
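      The two export lines above can also be appended in an idempotent way, so that running the setup more than once does not duplicate them in ~/.bashrc. A minimal sketch, assuming the /opt/openmpi-402 installation prefix chosen at configure time:

      ```shell
      # Append the Open MPI environment variables to the profile file only if
      # they are not already present. PROFILE defaults to ~/.bashrc; override
      # it to target a different profile file.
      PROFILE="${PROFILE:-$HOME/.bashrc}"
      MPI_HOME=/opt/openmpi-402

      if ! grep -q "$MPI_HOME/bin" "$PROFILE" 2>/dev/null; then
          echo "export PATH=\$PATH:$MPI_HOME/bin" >> "$PROFILE"
          echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:$MPI_HOME/lib" >> "$PROFILE"
      fi
      ```

      After appending, run source on the profile file as shown above to load the variables into the current shell.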

  3. Run community Open MPI on a BMS.

    1. Run the following commands to generate an executable file:

      $ cd ~

      $ vi hello.c

      Edit the following content:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char** argv) {
          // Initialize the MPI environment
          MPI_Init(NULL, NULL);
          // Get the number of processes
          int world_size;
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);
          // Get the rank of the process
          int world_rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
          // Get the name of the processor
          char processor_name[MPI_MAX_PROCESSOR_NAME];
          int name_len;
          MPI_Get_processor_name(processor_name, &name_len);
          // Print off a hello world message
          printf("Hello world from processor %s, rank %d out of %d processors\n",
                 processor_name, world_rank, world_size);
          // Finalize the MPI environment
          MPI_Finalize();
          return 0;
      }

      $ mpicc hello.c -o hello

      The compiled binary is tied to the MPI installation used to build it. Whenever you switch MPI versions, re-compile hello.c by running the mpicc hello.c -o hello command.

    2. Run community Open MPI on a BMS.

      $ mpirun -np 2 /home/rhel/hello

      Figure 4 Successful execution of community Open MPI

      If information shown in Figure 4 is displayed, community Open MPI is running on the BMS.
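      The single-node run above can be extended across the cluster, relying on the password-free login configured in Prerequisites. mpirun reads the node list from a hostfile. A minimal sketch, with bms-01 and bms-02 as placeholder hostnames:

      ```shell
      # Create a hostfile: one BMS per line; "slots" caps the number of
      # processes launched on that node. bms-01/bms-02 are placeholder
      # hostnames for the cluster nodes.
      cat > hostfile <<EOF
      bms-01 slots=2
      bms-02 slots=2
      EOF

      # A cluster-wide launch would then look like (not run here; the hello
      # binary must exist at the same path on every node):
      #   mpirun -np 4 --hostfile hostfile /home/rhel/hello
      ```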