Interconnecting HDFS with OBS Using an IAM Agency
After configuring decoupled storage and compute for a cluster by referring to Interconnecting an MRS Cluster with OBS Using an IAM Agency, you can view and create OBS file directories from the HDFS client.
Interconnecting HDFS with OBS
- Log in to the node where the HDFS client is installed as the client installation user.
Run the following command to switch to the client installation directory:
cd Client installation directory
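For example, if the client is installed in /opt/hadoopclient (an assumed path; substitute your actual installation directory):
cd /opt/hadoopclient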
- Run the following command to configure environment variables:
source bigdata_env
If the cluster is in security mode, run the following command to authenticate the user. In normal mode, skip this step.
kinit Component service user
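For example, to authenticate as a hypothetical component service user named test_user:
kinit test_user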
Specify the OBS file system to be accessed explicitly in HDFS commands. For example:
- Run the following command to access the OBS file system:
hdfs dfs -ls obs://OBS parallel file system name/Path
For example, run the following command to access the mrs-word001 parallel file system. If the file list is returned, OBS is successfully accessed.
hadoop fs -ls obs://mrs-word001/
Figure 1 Returned file list
- Run the following command to upload the /opt/test.txt file from the client node to the OBS file system path:
hdfs dfs -put /opt/test.txt obs://OBS parallel file system name/Path
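You can also create directories in or download files from the OBS file system using the standard HDFS commands. The following is a sketch, assuming the mrs-word001 file system from the earlier example and a hypothetical directory name testdir:
hdfs dfs -mkdir obs://mrs-word001/testdir
hdfs dfs -get obs://mrs-word001/test.txt /opt/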
If the OBS client prints a large number of logs, read and write performance may be affected. You can adjust the log level of the OBS client as follows:
cd Client installation directory/HDFS/hadoop/etc/hadoop
vi log4j.properties
Add the OBS log level configuration to the file as follows:
log4j.logger.org.apache.hadoop.fs.obs=WARN
log4j.logger.com.obs=WARN
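Conversely, when troubleshooting OBS access problems, you can temporarily set these loggers to DEBUG; remember to revert them to WARN afterward, since verbose logging degrades read and write performance:
log4j.logger.org.apache.hadoop.fs.obs=DEBUG
log4j.logger.com.obs=DEBUG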
Save the file and exit. The new log level takes effect the next time you run an HDFS client command.