Community Open MPI
Scenarios
This section describes how to run community Open MPI (version 4.0.2 is used as an example) in a BMS cluster.
Prerequisites
- Password-free login has been configured between BMSs in the cluster.
- Community Open MPI has been installed on all BMSs in the cluster.
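You can verify password-free login from any BMS with a quick SSH check (the hostname below is from the sample configuration later in this section and is only an illustration):
$ ssh bms-arm-ib-0002 hostname
If the command prints the remote hostname without prompting for a password, the prerequisite is met.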
Procedure
- Disable the firewall.
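For example, on an image that uses firewalld (an assumption; use the equivalent commands for your OS), run the following on each BMS:
$ systemctl stop firewalld
$ systemctl disable firewalld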
- Modify the configuration file.
- Log in to any BMS in the cluster and run the following command to edit the hosts configuration file:
$ vi /etc/hosts
Add the private network IP addresses and hostnames of all BMSs in the cluster. For example:
192.168.1.138 bms-arm-ib-0001
192.168.1.45 bms-arm-ib-0002
...
- Run the following command to add the hostfile file:
$ vi hostfile
Add the hostnames of all BMSs in the cluster and the number of slots for each (for example, 2 slots per BMS):
bms-arm-ib-0001 slots=2
bms-arm-ib-0002 slots=2
...
- Log in to each of the other BMSs in the cluster and repeat the preceding substeps so that every BMS has the same hosts and hostfile configuration.
- Log in to any BMS and run community Open MPI.
For example, if there are two BMSs in the cluster, run the following command:
$ mpirun -np 2 --pernode -hostfile hostfile /home/rhel/hello
Figure 2 Successful execution of community Open MPI in the cluster
Specify the path of hostfile when running the command. The path of the executable file hello must be absolute, and the executable file must be located in the same directory on every BMS in the cluster.
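The hello file in the command above is a compiled MPI executable. A minimal sketch of such a program is shown below; the file name hello.c and the output path /home/rhel/hello are taken from the example command, while the program body itself is only an illustration:

/* hello.c: minimal MPI program used for the cluster test */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Get_processor_name(name, &len);   /* node on which this rank runs */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Compile it with the Open MPI wrapper compiler and copy the binary to the same absolute path on every BMS, for example:
$ mpicc hello.c -o /home/rhel/hello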