Updated on 2022-06-28 GMT+08:00

Community Open MPI

Scenarios

This section describes how to run community Open MPI (version 3.1.1) on a configured ECS.

Prerequisites

  • An ECS equipped with InfiniBand NICs has been created, and an EIP has been bound to it.
  • Multiple ECSs have been created using a private image.

Procedure

  1. Use PuTTY and a key pair to log in to the ECS.

    Ensure that the username specified during ECS creation is used to establish the connection.

  2. Run the following command to disable user logout upon system timeout:

    # TMOUT=0
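
    If you want the setting to persist across login sessions, you can also append it to /etc/profile (an optional variation on the command above), for example:

    # echo "export TMOUT=0" >> /etc/profile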

  3. Run the following command to check whether the tested ECSs can log in to each other without a password:

    $ ssh Username@SERVER_IP
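
    If password-free login has not been configured yet, one common way to set it up (a sketch; Username and SERVER_IP are placeholders for the actual account and peer IP address) is to generate an SSH key pair and copy the public key to each of the other ECSs:

    $ ssh-keygen -t rsa

    $ ssh-copy-id Username@SERVER_IP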

  4. Run the following commands to disable the firewall of the ECS:

    # iptables -F

    # service firewalld stop
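
    If the ECS runs a systemd-based OS (as in the EL7 example output later in this section), you can also stop and disable firewalld with systemctl so that the firewall does not start again after a reboot (an optional alternative to the commands above):

    # systemctl stop firewalld

    # systemctl disable firewalld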

  5. Run the following command to set the hostname of each tested ECS. For example, on the first ECS:

    # hostnamectl set-hostname vm1
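
    Set the other ECSs in the same way, each with its own hostname. For example, on the second ECS:

    # hostnamectl set-hostname vm2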

  6. Run the following command to edit the /etc/hosts file:

    # vi /etc/hosts

    Add the hostnames and IP addresses of all tested ECSs to the file, for example:

    # cat /etc/hosts

    192.168.1.3 vm1

    192.168.1.4 vm2

    ...
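
    After saving the file, you can check that the hostnames resolve correctly, for example:

    $ ping -c 3 vm2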

  7. Run the following command to create the hostfile file:

    # vi hostfile

    Add the hostnames of the tested ECSs, one per line, for example:

    vm1

    vm2

    ...
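
    The benchmark command in the next step uses --pernode to start one process per host. If you instead want to allow several processes per host, Open MPI also accepts an optional slot count after each hostname, for example:

    vm1 slots=2

    vm2 slots=2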

  8. Modify the hostfile as needed and run the MPI benchmark with the path of the hostfile specified.

    For example, to run the MPI benchmark on two ECSs, run the following command:

    # mpirun --allow-run-as-root -np 2 --pernode -hostfile /root/hostfile /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong

    The following is the output of running the Intel MPI benchmark in a cluster containing two nodes. On the RDMA network, the minimum latency in this example is approximately 1.75 us.

    #------------------------------------------------------------
    #    Intel (R) MPI Benchmarks 4.1, MPI-1 part
    #------------------------------------------------------------
    # Date                  : Mon Jul 16 09:42:15 2018
    # Machine               : x86_64
    # System                : Linux
    # Release               : 3.10.0-514.10.2.el7.x86_64
    # Version               : #1 SMP Fri Mar 3 00:04:05 UTC 2017
    # MPI Version           : 3.1
    # MPI Thread Environment:
    
    # New default behavior from Version 3.2 on:
    
    # the number of iterations per message size is cut down
    # dynamically when a certain run time (per message size sample)
    # is expected to be exceeded. Time limit is defined by variable
    # "SECS_PER_SAMPLE" (=> IMB_settings.h)
    # or through the flag => -time
    
    # Calling sequence was:
    
    # /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong
    
    # Minimum message length in bytes:   0
    # Maximum message length in bytes:   4194304
    #
    # MPI_Datatype                   :   MPI_BYTE
    # MPI_Datatype for reductions    :   MPI_FLOAT
    # MPI_Op                         :   MPI_SUM
    #
    #
    
    # List of Benchmarks to run:
    
    # PingPong
    
    #---------------------------------------------------
    # Benchmarking PingPong
    # #processes = 2
    #---------------------------------------------------
           #bytes #repetitions      t[usec]   Mbytes/sec
                0         1000         1.75         0.00
                1         1000         1.75         0.55
                2         1000         1.74         1.10
                4         1000         1.74         2.19
                8         1000         1.77         4.31
               16         1000         1.79         8.54
               32         1000         1.77        17.26
               64         1000         1.85        33.02
              128         1000         1.89        64.45
              256         1000         2.39       102.29
              512         1000         2.54       192.56
             1024         1000         2.81       346.99
             2048         1000         3.24       603.08
             4096         1000         4.30       907.66
             8192         1000         5.91      1321.23
            16384         1000         8.61      1814.29
            32768         1000        12.31      2537.83
            65536          640        21.80      2867.15
           131072          320        33.91      3686.23
           262144          160        42.65      5861.95
           524288           80        68.61      7287.12
          1048576           40       120.06      8329.50
          2097152           20       221.55      9027.12
          4194304           10       424.35      9426.16
    
    
    # All processes entering MPI_Finalize
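
    The IMB-MPI1 binary accepts other benchmark names from the Intel MPI Benchmarks suite as the last argument. For example, to run the Allreduce benchmark instead (an illustrative variation of the command above), run the following command:

    # mpirun --allow-run-as-root -np 2 --pernode -hostfile /root/hostfile /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 Allreduce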
  9. Deploy your MPI application in the Linux cluster and run it using the preceding method, as illustrated by the sketch below.
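
    The following is a minimal sketch of this step, assuming your application source file is /root/hello_mpi.c (a placeholder name). Compile it with the mpicc wrapper shipped with Open MPI, copy the binary to the same path on every node listed in the hostfile, and launch it with mpirun in the same way as the benchmark:

    # mpicc /root/hello_mpi.c -o /root/hello_mpi

    # scp /root/hello_mpi vm2:/root/

    # mpirun --allow-run-as-root -np 2 --pernode -hostfile /root/hostfile /root/hello_mpi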