Updated on 2022-05-09 GMT+08:00

Open MPI Delivered with the IB Driver

Scenarios

This section describes how to run Open MPI (version 4.0.2a1 is used as an example) delivered with the IB driver in a Kunpeng BMS cluster.

Prerequisites

  • Password-free login has been configured between BMSs in the cluster.
  • Open MPI delivered with the IB driver has been installed on all BMSs in the cluster.

Procedure

  1. Disable the firewall.

    1. Log in to a BMS in the cluster.
    2. Run the following commands to disable the BMS firewall:

      # service firewalld stop

      # iptables -F

    3. Run the following command to check whether the firewall has been disabled:

      # service firewalld status

      Figure 1 Disabled firewall

    4. Log in to all other BMSs in the cluster and repeat 1.b to 1.c to disable firewalls on all BMSs.
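Because password-free SSH is already configured between the BMSs (see Prerequisites), steps 1.b to 1.c can be repeated on every node from one BMS with a simple loop. This is only a sketch: the NODES list uses the example hostnames from this guide, and the RUN wrapper defaults to `echo ssh` (a dry run that prints each command) so nothing is executed until you change it to `ssh`.

```shell
# Hypothetical node list -- replace with the hostnames of your BMSs.
NODES="bms-arm-ib-0001 bms-arm-ib-0002"

# Dry run by default: prints the ssh commands instead of executing them.
# Set RUN=ssh to actually disable the firewall on each node.
RUN=${RUN:-"echo ssh"}

for host in $NODES; do
  $RUN "$host" "service firewalld stop && iptables -F && service firewalld status"
done
```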

  2. Modify the configuration file.

    1. Log in to any BMS in the cluster and run the following command to edit the hosts configuration file:

      # vi /etc/hosts

      Add the private network IP addresses and hostnames of all BMSs in the cluster. For example, add the following entries:

      192.168.1.138 bms-arm-ib-0001

      192.168.1.45 bms-arm-ib-0002

      ...

    2. Run the following command to create the hostfile file:

      $ vi hostfile

      Add the hostnames of all BMSs in the cluster and the number of slots per node (for example, 2 slots):

      bms-arm-ib-0001 slots=2

      bms-arm-ib-0002 slots=2

      ...

    3. Log in to all other BMSs in the cluster and repeat 2.a to 2.b.
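To keep the hosts entries and the hostfile consistent across edits, both files can be generated from a single node list. The snippet below is a sketch using the example IP addresses and hostnames from this section; it writes a local hosts.entries file, which on a real BMS you would append to /etc/hosts as root.

```shell
# Hypothetical node list in ip:hostname form (values from the example above).
NODES="192.168.1.138:bms-arm-ib-0001 192.168.1.45:bms-arm-ib-0002"
SLOTS=2   # slots per node, matching the hostfile example

# Start both files empty, then emit one line per node into each.
: > hosts.entries
: > hostfile
for node in $NODES; do
  ip=${node%%:*}     # part before the colon
  name=${node##*:}   # part after the colon
  echo "$ip $name" >> hosts.entries
  echo "$name slots=$SLOTS" >> hostfile
done
cat hostfile
```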

  3. Run the MPI benchmark.

    1. Run the following command on any BMS to check whether the hostfile file has been correctly configured:

      $ mpirun -np 2 -pernode --hostfile hostfile -mca btl_openib_if_include "mlx5_0:1" -x MXM_IB_USE_GRH=y hostname

      Figure 2 Checking the configuration file

      If the hostnames of all BMSs in the cluster are displayed, as shown in Figure 2, the hostfile file has been configured.
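The visual check above can also be scripted: every hostname listed in hostfile should appear in the output of the `mpirun ... hostname` command. The comparison below is a sketch; the mpirun_hostnames.txt contents are sample data standing in for output you would capture by redirecting the mpirun command.

```shell
# Sample data (hypothetical hostnames from this guide):
printf 'bms-arm-ib-0001 slots=2\nbms-arm-ib-0002 slots=2\n' > hostfile
# Stand-in for output captured via: mpirun ... hostname > mpirun_hostnames.txt
printf 'bms-arm-ib-0002\nbms-arm-ib-0001\n' > mpirun_hostnames.txt

# Compare the sorted hostname columns; any mismatch means a node is
# missing from hostfile or did not respond to mpirun.
expected=$(awk '{print $1}' hostfile | sort)
actual=$(sort -u mpirun_hostnames.txt)
if [ "$expected" = "$actual" ]; then
  echo "hostfile OK"
else
  echo "hostfile mismatch"
fi
```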

    2. Run the MPI benchmark on any BMS with the hostfile path specified.

      For example, if there are two BMSs in the cluster, run the following command:

      $ mpirun -np 2 -pernode --hostfile hostfile -mca btl_openib_if_include "mlx5_0:1" -x MXM_IB_USE_GRH=y /usr/mpi/gcc/openmpi-4.0.2a1/tests/imb/IMB-MPI1 PingPong

      Figure 3 Running Open MPI delivered with the IB driver in the cluster

    If information similar to that shown in Figure 3 is displayed, Open MPI delivered with the IB driver is running correctly in the cluster.