
Interconnecting HDFS with OBS

Updated at: Sep 02, 2021 GMT+08:00

Before performing the following operations, ensure that you have configured a storage-compute decoupled cluster by referring to Configuring a Storage-Compute Decoupled Cluster (Agency) or Configuring a Storage-Compute Decoupled Cluster (AK/SK).

  1. Log in to the node on which the HDFS client is installed as a client installation user.
  2. Run the following command to switch to the client installation directory:

    cd ${client_home}

  3. Run the following command to configure environment variables:

    source bigdata_env

  4. If the cluster is in security mode, run the following command to authenticate the user. If the cluster is in normal mode, skip this step.

    kinit Component service user

  5. Explicitly specify the OBS file system to be accessed in the HDFS command.

    For example, you can run the following command to access the OBS file system (a fuller end-to-end sketch follows this list):

    hdfs dfs -ls obs://OBS parallel file system name/Path
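For reference, the following is a minimal end-to-end sketch of the procedure above. The service user hdfsuser, the parallel file system name mrs-demo01, and the path /user/test are hypothetical placeholders; replace them with the values used in your environment.

cd ${client_home}
source bigdata_env
kinit hdfsuser
hdfs dfs -mkdir -p obs://mrs-demo01/user/test
hdfs dfs -put localfile.txt obs://mrs-demo01/user/test
hdfs dfs -ls obs://mrs-demo01/user/test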

If a large number of logs are printed during read and write operations on the OBS file system, read and write performance may be affected. You can adjust the log level of the OBS client as follows:

cd ${client_home}/HDFS/hadoop/etc/hadoop

vi log4j.properties

Add the OBS log level configuration to the file as follows:

log4j.logger.org.apache.hadoop.fs.obs=WARN

log4j.logger.com.obs=WARN
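If you prefer to apply this change without opening an editor (for example, when configuring several client nodes), the same two lines can be appended from the shell. This is only a sketch, assuming the log4j.properties path shown above:

cat >> ${client_home}/HDFS/hadoop/etc/hadoop/log4j.properties << 'EOF'
log4j.logger.org.apache.hadoop.fs.obs=WARN
log4j.logger.com.obs=WARN
EOF

Because the client reads log4j.properties when a command starts, the new log level applies to subsequently executed client commands; no service restart is required.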
