
Community Open MPI

Scenarios

This section describes how to install and use community Open MPI (version 4.0.2 is used as an example) on a BMS.

Perform the operations on each BMS in a cluster.

Prerequisites

Password-free login has been configured between BMSs in the cluster.
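
If password-free login has not been configured yet, it can usually be set up with an SSH key pair. The commands below are only a minimal sketch, run as the common user (rhel in this section) on each BMS; bms-node2 is a placeholder hostname for a peer BMS in the cluster:

$ ssh-keygen -t rsa

$ ssh-copy-id rhel@bms-node2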

Procedure

  1. Install Open MPI.

    1. Download the community Open MPI package openmpi-4.0.2.tar.bz2.

      Download path: https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.bz2

    2. Copy the downloaded software package to a directory on the BMS (/home/rhel is recommended).
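
      Alternatively, if the BMS already has Internet access (as required later for installing the dependency packages), the package can be downloaded directly on the BMS. A minimal sketch, using /home/rhel as the working directory:

      $ cd /home/rhel

      $ wget https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.bz2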
    3. Run the following commands to decompress the software package:

      # tar -xjvf openmpi-4.0.2.tar.bz2

      # cd openmpi-4.0.2

    4. Install the required dependency packages. Ensure that the BMS can access the Internet before the installation.

      # yum install binutils-devel.x86_64 gcc-c++ autoconf automake libtool

      Figure 1 Successful installation of dependency packages

    5. Run the following commands to compile and install Open MPI:

      # ./configure --prefix=/opt/openmpi-402 --enable-mpirun-prefix-by-default --enable-mpi1-compatibility --with-ucx=/opt/ucx160

      # make -j 128 && make install

      Figure 2 Successful installation of Open MPI
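
      Optionally, you can verify the installation before configuring environment variables by calling the installed binary with its full path. A quick check, based on the /opt/openmpi-402 prefix used above (the reported version should be 4.0.2):

      $ /opt/openmpi-402/bin/mpirun --version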

  2. Configure MPI environment variables.

    1. Add the following environment variables to ~/.bashrc as a common user:

      export PATH=$PATH:/opt/openmpi-402/bin

      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-402/lib

    2. Run the following command to import MPI environment variables:

      $ source ~/.bashrc

    3. Run the following command to check whether the MPI environment variables are correct:

      $ which mpirun

      Figure 3 Correctly configured environment variables

      If mpirun resolves to the /opt/openmpi-402 installation path, as shown in Figure 3, the environment variables have been correctly configured.

  3. Run community Open MPI on a BMS.

    1. Run the following commands to generate an executable file:

      $ cd ~

      $ vi hello.c

      Add the following content to the file:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char** argv){
          // Initialize the MPI environment
          MPI_Init(NULL, NULL);
          // Get the number of processes
          int world_size;
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);
          // Get the rank of the process
          int world_rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
          // Get the name of the processor
          char processor_name[MPI_MAX_PROCESSOR_NAME];
          int name_len;
          MPI_Get_processor_name(processor_name, &name_len);
          // Print off a hello world message
          printf("Hello world from processor %s, rank %d out of %d processors\n",
                 processor_name, world_rank, world_size);
          // Finalize the MPI environment
          MPI_Finalize();
          return 0;
      }

      $ mpicc hello.c -o hello

      The executable generated from hello.c is tied to the MPI version used to compile it. Whenever you switch to a different MPI version, re-compile hello.c by running the mpicc hello.c -o hello command.
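
      To check which MPI installation the wrapper compiler currently points to (for example, after switching MPI versions), you can use the Open MPI wrapper options. A quick check:

      $ which mpicc

      $ mpicc --showme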

    2. Run community Open MPI on a BMS.

      $ mpirun -np 2 /home/rhel/hello

      Figure 4 Successful execution of community Open MPI

      If information shown in Figure 4 is displayed, community Open MPI is running on the BMS.
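
      To extend the run to the other BMSs in the cluster (which relies on the password-free login configured as a prerequisite and on the hello executable existing at the same path on every node), mpirun is typically given a host file. The following is only a minimal sketch; bms-node1 and bms-node2 are placeholder hostnames:

      $ vi /home/rhel/hostfile

      Add one line per node, where slots is the number of processes to place on that node:

      bms-node1 slots=2
      bms-node2 slots=2

      $ mpirun -np 4 -hostfile /home/rhel/hostfile /home/rhel/hello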
