
Open MPI Delivered with the IB Driver

Scenarios

This section describes how to install and use Open MPI (version 3.1.0rc2 is used as an example) delivered with the IB driver on a BMS.

Perform the following operations on each BMS in the cluster.

Prerequisites

Password-free login has been configured between BMSs in the cluster.
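
If password-free login has not been configured yet, the following is a minimal sketch using standard OpenSSH tools (run on each BMS; <peer-bms> is a placeholder for another node's address; press Enter at the ssh-keygen prompts to accept the defaults, and repeat ssh-copy-id for every other BMS in the cluster):

$ ssh-keygen -t rsa

$ ssh-copy-id root@<peer-bms>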

Procedure

  1. Check whether the IB driver has been installed.

    1. Run the following commands to check whether the IB driver has been installed:

      $ ls /usr/mpi/gcc/openmpi-3.1.0rc2/bin/mpirun

      $ rpm -qa | grep mlnx-ofa

      Figure 1 Installed IB driver
    2. Check the command output.
      • If the mpirun binary exists and the rpm command returns mlnx-ofa packages, as shown in Figure 1, the IB driver has been installed. Go to 3.
      • If the IB driver has not been installed, go to 2. (The two checks can also be combined; see the sketch after this list.)
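
      The two checks can be combined into a single command. This is a minimal sketch that assumes only the paths used in this section:

      $ ls /usr/mpi/gcc/openmpi-3.1.0rc2/bin/mpirun >/dev/null 2>&1 && rpm -qa | grep -q mlnx-ofa && echo "IB driver installed" || echo "IB driver missing"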

  2. Install the IB driver.

    1. Download the installation package MLNX_OFED_LINUX-4.3-1.0.1.0-rhel7.3-x86_64.tgz.

      Download path: https://network.nvidia.com/products/infiniband-drivers/linux/mlnx_ofed/

      Figure 2 IB driver download center
    2. Run the following commands to install the software package:

      # yum install tk tcl

      # tar -xvf MLNX_OFED_LINUX-4.3-1.0.1.0-rhel7.3-x86_64.tgz

      # cd MLNX_OFED_LINUX-4.3-1.0.1.0-rhel7.3-x86_64

      # ./mlnxofedinstall
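
    3. (Optional) Run the following commands to verify the installation. ofed_info and ibstat are tools shipped with the OFED package; the exact output depends on the adapter model:

      # /etc/init.d/openibd restart

      # ofed_info -s

      # ibstat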

  3. Configure environment variables.

    1. Use vim to open the ~/.bashrc file and add the following lines to it:

      export PATH=$PATH:/usr/mpi/gcc/openmpi-3.1.0rc2/bin

      export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-3.1.0rc2/lib64
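
    2. Run the following command so that the variables take effect in the current shell:

      $ source ~/.bashrc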

    3. Run the following command to check whether the MPI environment variables are correct:

      $ which mpirun

      Figure 3 Viewing the Open MPI environment variables

      If the command returns the mpirun path in /usr/mpi/gcc/openmpi-3.1.0rc2/bin, as shown in Figure 3, the environment variables have been configured.
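
      Optionally, you can also confirm the version reported by the binary; it should match the Open MPI build delivered with the IB driver (3.1.0rc2 in this example):

      $ mpirun --version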

  4. Run the following command to start Open MPI delivered with the IB driver on a BMS. It executes the IMB-MPI1 PingPong benchmark with two processes:

    $ mpirun -np 2 -mca btl_openib_if_include "mlx5_0:1" -x MXM_IB_USE_GRH=y /usr/mpi/gcc/openmpi-3.1.0rc2/tests/imb/IMB-MPI1 PingPong

    Figure 4 Running Open MPI on a BMS
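
    Because password-free login is configured between the BMSs (see Prerequisites), the same benchmark can also be run across two nodes with a hostfile. This is a minimal sketch: bms-1 and bms-2 are placeholder hostnames, and the installation paths are assumed to be identical on every node.

    $ printf "bms-1 slots=1\nbms-2 slots=1\n" > hostfile

    $ mpirun -np 2 -hostfile hostfile -mca btl_openib_if_include "mlx5_0:1" -x MXM_IB_USE_GRH=y /usr/mpi/gcc/openmpi-3.1.0rc2/tests/imb/IMB-MPI1 PingPong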