
Community Open MPI

Scenarios

This section describes how to run community Open MPI (version 4.0.2 is used as an example) in a BMS cluster.

Prerequisites

  • Password-free login has been configured between BMSs in the cluster.
  • Community Open MPI has been installed on all BMSs in the cluster.
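
  You can check both prerequisites from any BMS before you start. For example (the hostname bms-arm-ib-0002 is an example that matches the hostfile used later in this section):

  $ ssh bms-arm-ib-0002 hostname

  $ mpirun --version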

Procedure

  1. Disable the firewall.

    1. Log in to a BMS in the cluster.
    2. Run the following commands to disable the BMS firewall:

      # service firewalld stop

      # iptables -F

    3. Run the following command to check whether the firewall has been disabled:

      # service firewalld status

      Figure 1 Disabled firewall

    4. Log in to all other BMSs in the cluster and repeat 1.b to 1.c to disable the firewalls on all BMSs, or disable them remotely as sketched below.
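
    If password-free login has been configured for the root user, you can run the same commands on the other BMSs remotely instead of logging in to each one. A minimal sketch (the hostname is an example):

      # ssh bms-arm-ib-0002 "service firewalld stop; iptables -F; service firewalld status"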

  2. Modify the configuration file.

    1. Log in to any BMS in the cluster and run the following command to edit the hosts configuration file:

      # vi /etc/hosts

      Add the private network IP addresses and hostnames of all BMSs in the cluster. For example, add the following entries:

      192.168.1.138 bms-arm-ib-0001

      192.168.1.45 bms-arm-ib-0002

      ...

    2. Run the following command to create the hostfile file:

      $ vi hostfile

      Add the hostnames of all BMSs in the cluster and the number of slots on each BMS, which typically matches the number of cores (for example, 2):

      bms-arm-ib-0001 slots=2

      bms-arm-ib-0002 slots=2

      ...

    3. Log in to all other BMSs in the cluster and repeat 2.a to 2.b, or copy the two files from this BMS as sketched below.
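
    If password-free login has been configured for the root user, you can copy both files from the current BMS instead of editing them on each one. A minimal sketch (the hostname is an example):

      # scp /etc/hosts bms-arm-ib-0002:/etc/hosts

      $ scp hostfile bms-arm-ib-0002:~/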

  3. Log in to any BMS and run community Open MPI.

    For example, if there are two BMSs in the cluster, run the following command:

    $ mpirun -np 2 --pernode -hostfile hostfile /home/rhel/hello

    Figure 2 Successful execution of community Open MPI in the cluster

    In this command, -np 2 starts two processes in total and --pernode places one process on each BMS. Specify the path of hostfile if it is not in the current directory. The path of the executable file hello must be absolute, and the executable file must be in the same directory on every BMS in the cluster.
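
    The executable file hello can be any MPI program. The following is a minimal sketch of such a program in C (the file name hello.c and the printed text are illustrative, not part of the original procedure):

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, size, len;
          char name[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);                /* Initialize the MPI environment */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* Rank of this process */
          MPI_Comm_size(MPI_COMM_WORLD, &size);  /* Total number of processes */
          MPI_Get_processor_name(name, &len);    /* Hostname of the BMS running this rank */

          printf("Hello from rank %d of %d on %s\n", rank, size, name);

          MPI_Finalize();                        /* Clean up the MPI environment */
          return 0;
      }

    Compile it with the Open MPI compiler wrapper and copy the binary to the same path on every BMS, for example:

      $ mpicc hello.c -o /home/rhel/hello

      $ scp /home/rhel/hello bms-arm-ib-0002:/home/rhel/hello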