
Using an MRS Client on Nodes Inside a Cluster

Updated at: Mar 25, 2021 GMT+08:00

Scenario

You need to use a client on a Master or Core node in a cluster.

Before using the client, install it on the node. In the examples below, the client installation directory is /opt/hadoopclient.

Procedure

  • Using the client on a Master node
    1. Log in to the active management node (a Master node) where the client is installed, and run the sudo su - omm command to switch to user omm. Then run the following command to go to the client directory:

      cd /opt/hadoopclient

    2. Run the following command to configure the environment variables:

      source bigdata_env

    3. If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the user. If Kerberos authentication is disabled for the current cluster, skip this step.

      kinit <MRS cluster user>

      Example: kinit admin

      User admin is created by default for MRS clusters with Kerberos authentication enabled and is used for administrators to maintain the clusters.

    4. Run the client command of a component directly.

      For example, run the hdfs dfs -ls / command to view files in the HDFS root directory.
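The Master-node steps above can be sketched as a single dry-run script. The paths and the admin user mirror the examples in this section; the run helper and the KERBEROS_ENABLED flag are illustrative, not part of the product:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the Master-node procedure: each step is echoed rather
# than executed, so the sequence can be reviewed outside a cluster.
# KERBEROS_ENABLED and MRS_USER are illustrative variables, not product settings.
KERBEROS_ENABLED=true
MRS_USER=admin                 # replace with your MRS cluster user
CLIENT_DIR=/opt/hadoopclient   # client installation directory from this example

run() { printf '+ %s\n' "$*"; }  # swap 'printf' for real execution on the node

run sudo su - omm                # switch to user omm on the active Master node
run cd "$CLIENT_DIR"             # go to the client directory
run source bigdata_env           # configure the environment variables
if [ "$KERBEROS_ENABLED" = true ]; then
  run kinit "$MRS_USER"          # authenticate only if Kerberos is enabled
fi
run hdfs dfs -ls /               # verify: list the HDFS root directory
```

Running the script prints each command prefixed with `+` without touching the cluster; replace the `run` helper's body with real execution once the sequence looks right for your node.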

  • Using the client on a Core node
    1. Download the client configuration file package on the active management node.
    2. Use the IP address to locate the active management node, and log in to it using VNC.
    3. Run the following command to switch to user omm:

      sudo su - omm

    4. On the MRS management console, view the IP address on the Nodes tab page of the specified cluster.

      Record the IP address of the Core node that will use the client.

    5. On the active management node, run the following command to copy the client configuration file package to the Core node:

      MRS 2.1.0 or earlier:

      scp -p /tmp/MRS-client/MRS_Services_Client.tar <IP address of the Core node>:/opt/hadoopclient

      MRS 3.x or later:

      scp -p /tmp/FusionInsight-Client/FusionInsight_Cluster_1_Services_Client.tar <IP address of the Core node>:/opt/hadoopclient

    6. Log in to the Core node as user root.

      Master nodes support Cloud-Init. The preset username for Cloud-Init is root and the password is the one you set during cluster creation.

    7. Run the following command to go to the client directory:

      cd /opt/hadoopclient

    8. Run the following command to update client configurations:

      MRS 2.1.0 or earlier:

      sh refreshConfig.sh <Client installation directory> <Full path of the client configuration file package>

      For example, run the following command:

      sh refreshConfig.sh /opt/hadoopclient /opt/hadoopclient/MRS_Services_Client.tar

      For clusters of MRS 1.8.5 or later, you can also perform steps 1 to 8 by referring to method 2 in Updating a Client.

      MRS 3.x or later:

      cd /opt/hadoopclient

      tar -xvf FusionInsight_Cluster_1_Services_Client.tar

      tar -xvf FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles.tar

      sh refreshConfig.sh <Client installation directory> <Configuration file directory>

      This operation updates the configuration files of an existing client. For details about how to install a new client, see Installing a Client.

      For example, run the following command:

      sh refreshConfig.sh /opt/hadoopclient /opt/hadoopclient/FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles

    9. Run the following commands to switch to the client directory and configure environment variables:

      cd /opt/hadoopclient

      source bigdata_env

    10. If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the user. If Kerberos authentication is disabled for the current cluster, skip this step.

      kinit <MRS cluster user>

      Example: kinit admin

    11. Run the client command of a component directly.

      For example, run the hdfs dfs -ls / command to view files in the HDFS root directory.
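The Core-node procedure above hinges on two version-dependent details: which package to copy in step 5, and what the second argument to refreshConfig.sh is in step 8 (the .tar package itself for MRS 2.1.0 or earlier, the extracted ConfigFiles directory for MRS 3.x or later). A minimal sketch that assembles both commands, where MRS_VERSION and CORE_IP are illustrative placeholders:

```shell
#!/usr/bin/env bash
# Sketch: pick the client package and the refreshConfig.sh arguments by
# MRS version line, then print the commands for review instead of running
# them. MRS_VERSION and CORE_IP are illustrative placeholders, not product
# settings; the paths follow the examples in this section.
MRS_VERSION="3.1.0"
CORE_IP="192.168.0.10"          # Core-node IP recorded in step 4
CLIENT_DIR=/opt/hadoopclient

case "$MRS_VERSION" in
  1.*|2.*)  # MRS 2.1.0 or earlier: refreshConfig.sh takes the .tar package
    PKG=/tmp/MRS-client/MRS_Services_Client.tar
    REFRESH_ARG="$CLIENT_DIR/MRS_Services_Client.tar"
    ;;
  *)        # MRS 3.x or later: refreshConfig.sh takes the extracted config dir
    PKG=/tmp/FusionInsight-Client/FusionInsight_Cluster_1_Services_Client.tar
    REFRESH_ARG="$CLIENT_DIR/FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles"
    ;;
esac

echo "scp -p $PKG root@$CORE_IP:$CLIENT_DIR"          # step 5, run on the Master node
echo "sh refreshConfig.sh $CLIENT_DIR $REFRESH_ARG"   # step 8, run on the Core node
```

Printing the commands first makes it easy to double-check the version-specific package name and IP address before anything is copied.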
