Community Open MPI
Scenarios
This section describes how to install and use community Open MPI (version 4.0.2 is used as an example) on a BMS.
Perform the operations on each BMS in a cluster.
Prerequisites
Password-free login has been configured between BMSs in the cluster.
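Password-free login is typically configured with SSH key pairs. A minimal sketch, assuming the common user rhel and a peer node named bms-node2 (adapt the user and host names to your cluster):
$ ssh-keygen -t rsa                # generate a key pair; press Enter to accept the defaults
$ ssh-copy-id rhel@bms-node2       # copy the public key to each of the other BMSs
$ ssh rhel@bms-node2 hostname      # verify that login no longer prompts for a password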
Procedure
- Install Open MPI.
- Download the community Open MPI package openmpi-4.0.2.tar.bz2.
Download path: https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.bz2
- Copy the downloaded software package to a directory on the BMS. The /home/rhel directory is recommended.
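Alternatively, if the BMS can already access the Internet, the package can be downloaded directly into the recommended directory (an illustrative command using the download path above):
$ wget https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.bz2 -P /home/rhel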
- Run the following commands to decompress the software package:
# tar -xjvf openmpi-4.0.2.tar.bz2
# cd openmpi-4.0.2
- Install the required dependency packages. Ensure that the BMS can access the Internet before the installation.
# yum install binutils-devel.x86_64 gcc-c++ autoconf automake libtool
Figure 1 Successful installation of dependency packages
- Run the following commands to compile and install Open MPI:
# ./configure --prefix=/opt/openmpi-402 --enable-mpirun-prefix-by-default --enable-mpi1-compatibility --with-ucx=/opt/ucx160
# make -j 128 && make install
Figure 2 Successful installation of Open MPI
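A quick sanity check, assuming the --prefix used above, is to query the freshly installed binary and confirm that it reports version 4.0.2:
# /opt/openmpi-402/bin/mpirun --version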
- Configure MPI environment variables.
- Add the following environment variables to ~/.bashrc as a common user:
export PATH=$PATH:/opt/openmpi-402/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-402/lib
- Run the following command to import the MPI environment variables:
$ source ~/.bashrc
- Run the following command to check whether the MPI environment variables are correct:
If information shown in Figure 3 is displayed, the environment variables have been correctly configured.
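The exact check command is not reproduced here; a common check, assuming ~/.bashrc has been sourced, is to confirm that mpirun resolves to the /opt/openmpi-402 installation:
$ which mpirun
$ mpirun --version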
- Run community Open MPI on a BMS.
- Run the following commands to generate an executable file:
$ vi hello.c
Edit the following content:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);
    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);
    // Print off a hello world message.
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
    // Finalize the MPI environment.
    MPI_Finalize();
}
$ mpicc hello.c -o hello
The compiled executable is tied to the MPI implementation used to build it. If you switch to a different MPI version, re-compile hello.c by running the mpicc hello.c -o hello command again.
- Run the following command to run community Open MPI on the BMS:
$ mpirun -np 2 /home/rhel/hello
If information shown in Figure 4 is displayed, community Open MPI is running on the BMS.
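Although this section runs the executable on a single BMS, mpirun can also launch it across several BMSs once password-free login is in place. A minimal sketch, assuming placeholder host names bms-node1 and bms-node2 and that /home/rhel/hello exists at the same path on every node:
$ cat > /home/rhel/hostfile << EOF
bms-node1 slots=2
bms-node2 slots=2
EOF
$ mpirun -np 4 -hostfile /home/rhel/hostfile /home/rhel/hello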