Community Open MPI
Scenarios
This section describes how to install and use community Open MPI (for example, version 3.1.1).
Prerequisites
You have configured password-free login between ECSs.
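Password-free login is typically configured with SSH keys. A minimal sketch; the peer address is a hypothetical example:
# ssh-keygen -t rsa
# ssh-copy-id root@192.168.0.2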
Procedure
- Install the HPC-X toolkit.
- Download the desired HPC-X toolkit and Open MPI.
To use community Open MPI, you must also use the Mellanox HPC-X toolkit. Download the HPC-X version that matches the ECS OS and InfiniBand (IB) driver versions, for example, hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64.tbz.
- Run the following command to decompress the HPC-X toolkit:
# tar -xvf hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64.tbz
- (Optional) Run the following command to move the extracted HPC-X directory to a shorter path. The compilation step below assumes this path:
# mv hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64 /opt/hpcx-v2.0.0
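HPC-X also provides an initialization script that loads its bundled compilers and libraries into the current shell. A minimal sketch, assuming the toolkit now resides in /opt/hpcx-v2.0.0:
# source /opt/hpcx-v2.0.0/hpcx-init.sh
# hpcx_load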
- Install Open MPI.
- Copy the Open MPI package (for example, openmpi-3.1.1.tar.gz) to the ECS and run the following commands to decompress it:
# tar -xzvf openmpi-3.1.1.tar.gz
# cd openmpi-3.1.1
- Install the library files required for compilation.
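For example, on a CentOS/RHEL 7 ECS, typical build prerequisites can be installed with yum. The package list below is an assumption; adjust it to your image:
# yum install -y gcc gcc-c++ make libibverbs-devel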
- Run the following commands to compile and install Open MPI:
# mkdir build
# cd build
# ../configure --prefix=/opt/openmpi-311 --with-mxm=/opt/hpcx-v2.0.0/mxm
# make all install
If the command output reports no errors, the installation is successful.
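You can verify the build by querying the version of the newly installed binary. The path follows from the --prefix value used above:
# /opt/openmpi-311/bin/mpirun --version
The output should report mpirun (Open MPI) 3.1.1.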
- Configure MPI environment variables.
- Add the following environment variables to ~/.bashrc:
export PATH=$PATH:/opt/openmpi-311/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi-311/lib
- Run the following command to import the MPI environment variables:
$ source ~/.bashrc
- Run the following command to check whether the MPI environment variables are correct:
$ which mpirun
If the command output shows /opt/openmpi-311/bin/mpirun, the environment configuration is correct.
- Run the following command to run the Intel MPI Benchmarks (IMB) PingPong test on an ECS (the precompiled IMB-MPI1 binary shipped with the IB driver is used here):
$ mpirun --allow-run-as-root -np 2 /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong
Information similar to the following is displayed:
#------------------------------------------------------------
#    Intel (R) MPI Benchmarks 4.1, MPI-1 part
#------------------------------------------------------------
# Date                  : Mon Jul 16 09:38:20 2018
# Machine               : x86_64
# System                : Linux
# Release               : 3.10.0-514.10.2.el7.x86_64
# Version               : #1 SMP Fri Mar 3 00:04:05 UTC 2017
# MPI Version           : 3.1
# MPI Thread Environment:

# New default behavior from Version 3.2 on:
# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

# Calling sequence was:
# /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         0.23         0.00
            1         1000         0.23         4.06
            2         1000         0.24         8.04
            4         1000         0.24        16.19
            8         1000         0.24        32.29
           16         1000         0.24        64.06
           32         1000         0.27       114.46
           64         1000         0.27       229.02
          128         1000         0.37       333.48
          256         1000         0.46       535.83
          512         1000         0.52       944.51
         1024         1000         0.63      1556.77
         2048         1000         0.83      2349.92
         4096         1000         1.35      2896.07
         8192         1000         2.29      3415.98
        16384         1000         1.46     10727.65
        32768         1000         2.08     15037.62
        65536          640         3.53     17691.38
       131072          320         6.52     19159.59
       262144          160        15.62     16002.93
       524288           80        31.37     15938.06
      1048576           40        61.78     16185.93
      2097152           20       124.04     16124.41
      4194304           10       242.42     16500.33

# All processes entering MPI_Finalize
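To run the same benchmark across two ECSs, you can supply a hostfile listing the nodes. The file name and IP addresses below are hypothetical examples; password-free login between the ECSs (see Prerequisites) is required:
$ cat hostfile
192.168.0.1 slots=1
192.168.0.2 slots=1
$ mpirun --allow-run-as-root -np 2 --hostfile hostfile /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong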