High Performance Computing

Updated at: Feb 10, 2022 GMT+08:00

Community Open MPI

Scenarios

This section describes how to install and use the community version of Open MPI (version 3.1.1 is used as an example).

Prerequisites

Password-free (SSH key-based) login has been configured between the ECSs.
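
A minimal sketch for configuring and verifying password-free login from one ECS to another (the host name ecs-hpc-002 is an example, not a value from this document):

  # ssh-keygen -t rsa              # generate a key pair; press Enter to accept the defaults
  # ssh-copy-id root@ecs-hpc-002   # copy the public key to the peer ECS
  # ssh root@ecs-hpc-002 date      # should print the date without asking for a password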

Procedure

  1. Install the HPC-X toolkit.

    1. Download the desired HPC-X toolkit and Open MPI.

      To use the community Open MPI, you must use the Mellanox HPC-X toolkit. Download the desired version of the HPC-X toolkit based on the ECS OS and IB driver versions. An example of the HPC-X toolkit version is hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64.tbz.
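
      A hedged way to check the OS release and the installed IB driver version before choosing a package (the ofed_info command is available only if the Mellanox OFED driver is installed):

      # cat /etc/redhat-release

      # ofed_info -s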

    2. Run the following command to decompress the HPC-X toolkit:

      # tar -xvf hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64.tbz

    3. (Optional) Run the following command to move the extracted HPC-X directory to /opt:

      # mv hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64 /opt/hpcx-v2.0.0
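
      The configure command in a later step expects the MXM libraries under /opt/hpcx-v2.0.0/mxm. A quick check (the path assumes the optional move above was performed):

      # ls /opt/hpcx-v2.0.0/mxm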

  2. Install Open MPI.

    1. Copy the Open MPI package (for example, openmpi-3.1.1.tar.gz) to the ECS and run the following commands to decompress it:

      # tar -xzvf openmpi-3.1.1.tar.gz

      # cd openmpi-3.1.1

    2. Run the following command to install the required library files:

      # yum install binutils-devel.x86_64 libibverbs-devel
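
      A hedged check that both packages are in place (rpm is used because the example OS is Red Hat-based):

      # rpm -q binutils-devel libibverbs-devel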

    3. Run the following commands to compile and install Open MPI:

      # ./autogen.pl

      # mkdir build

      # cd build

      # ../configure --prefix=/opt/openmpi-311 --with-mxm=/opt/hpcx-v2.0.0/mxm

      # make all install

      Figure 1 Installing Open MPI

      If output similar to that in Figure 1 is displayed and no errors are reported, the installation is successful.
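
      A hedged way to double-check the result (ompi_info is installed together with Open MPI; an mxm component is listed only if configure found the HPC-X MXM path):

      # /opt/openmpi-311/bin/mpirun --version

      # /opt/openmpi-311/bin/ompi_info | grep mxm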

  3. Configure MPI environment variables.

    1. Add the following environment variables to ~/.bashrc:

      export PATH=$PATH:/opt/openmpi-311/bin

      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-311/lib

    2. Run the following command to import MPI environment variables:

      # source ~/.bashrc

    3. Run the following command to check whether the MPI environment variables are correct:

      # which mpirun

      Figure 2 Checking community Open MPI environment variables

      If the command returns /opt/openmpi-311/bin/mpirun, as shown in Figure 2, the environment variables are configured correctly.
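
      Optionally, confirm that the mpirun found in the PATH is the newly installed 3.1.1 build rather than the one shipped with the IB driver:

      # mpirun --version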

  4. Run the following command to run the Intel MPI Benchmarks (IMB) on an ECS (a two-node variant is sketched after the sample output):

    $ mpirun --allow-run-as-root -np 2 /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong

    Information similar to the following is displayed:

    #------------------------------------------------------------
    #    Intel (R) MPI Benchmarks 4.1, MPI-1 part
    #------------------------------------------------------------
    # Date                  : Mon Jul 16 09:38:20 2018
    # Machine               : x86_64
    # System                : Linux
    # Release               : 3.10.0-514.10.2.el7.x86_64
    # Version               : #1 SMP Fri Mar 3 00:04:05 UTC 2017
    # MPI Version           : 3.1
    # MPI Thread Environment:
    
    # New default behavior from Version 3.2 on:
    
    # the number of iterations per message size is cut down
    # dynamically when a certain run time (per message size sample)
    # is expected to be exceeded. Time limit is defined by variable
    # "SECS_PER_SAMPLE" (=> IMB_settings.h)
    # or through the flag => -time
    
    # Calling sequence was:
    
    # /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong
    
    # Minimum message length in bytes:   0
    # Maximum message length in bytes:   4194304
    #
    # MPI_Datatype                   :   MPI_BYTE
    # MPI_Datatype for reductions    :   MPI_FLOAT
    # MPI_Op                         :   MPI_SUM
    #
    #
    
    # List of Benchmarks to run:
    
    # PingPong
    
    #---------------------------------------------------
    # Benchmarking PingPong
    # #processes = 2
    #---------------------------------------------------
           #bytes #repetitions      t[usec]   Mbytes/sec
                0         1000         0.23         0.00
                1         1000         0.23         4.06
                2         1000         0.24         8.04
                4         1000         0.24        16.19
                8         1000         0.24        32.29
               16         1000         0.24        64.06
               32         1000         0.27       114.46
               64         1000         0.27       229.02
              128         1000         0.37       333.48
              256         1000         0.46       535.83
              512         1000         0.52       944.51
             1024         1000         0.63      1556.77
             2048         1000         0.83      2349.92
             4096         1000         1.35      2896.07
             8192         1000         2.29      3415.98
            16384         1000         1.46     10727.65
            32768         1000         2.08     15037.62
            65536          640         3.53     17691.38
           131072          320         6.52     19159.59
           262144          160        15.62     16002.93
           524288           80        31.37     15938.06
          1048576           40        61.78     16185.93
          2097152           20       124.04     16124.41
          4194304           10       242.42     16500.33
    
    # All processes entering MPI_Finalize
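
    To verify MPI communication between two ECSs, the same benchmark can be launched across both nodes. The following is a hedged sketch: the host names ecs-hpc-001 and ecs-hpc-002 and the hostfile path are examples, and it assumes password-free login between the nodes and an identical Open MPI installation on both of them.

    $ echo ecs-hpc-001 > /root/hostfile

    $ echo ecs-hpc-002 >> /root/hostfile

    $ mpirun --allow-run-as-root -np 2 --hostfile /root/hostfile -x PATH -x LD_LIBRARY_PATH /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong

    The -x options export PATH and LD_LIBRARY_PATH to the remote node so that the same mpirun and MPI libraries are used there.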
