Installing a Client (Version 3.x or Later)
Scenario
This section describes how to install clients of all services (excluding Flume) in an MRS cluster. For details about how to install the Flume client, see Installing the Flume Client.
A client can be installed on a node inside or outside the cluster. This section uses the installation directory /opt/hadoopclient as an example. Replace it with the actual installation directory.
Prerequisites
- An installation directory will be automatically created if it does not exist. If the directory exists, it must be empty. The directory path cannot contain any spaces.
- If a server outside the cluster is used as the client node, ensure that the node can communicate with the cluster service plane. Otherwise, client installation will fail.
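A quick reachability check can be run from the candidate client node before installation. This is only a sketch; the IP address below is a placeholder for the service plane IP address of any cluster node.
ping -c 3 192.168.0.101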
Installing a Client
- Obtain the software package.
Log in to FusionInsight Manager. For details, see Accessing FusionInsight Manager (MRS 3.x or Later). Click the name of the cluster to be operated in the Cluster drop-down list.
Choose More > Download Client. The Download Cluster Client dialog box is displayed.
Figure 1 Downloading a client
In the scenario where only one client is to be installed, choose Cluster > Service > Service name > More > Download Client. The Download Client dialog box is displayed.
- Set the client type to Complete Client.
Configuration Files Only downloads only the client configuration files. It applies to the following scenario: a complete client has been downloaded and installed, MRS cluster administrators have modified server configurations on Manager, and developers need to update the configuration files during application development.
The platform type can be set to x86_64 or aarch64.
- x86_64: indicates the client software package that can be deployed on the x86 platform.
- aarch64: indicates the client software package that can be deployed on the TaiShan server.
The cluster supports two types of clients: x86_64 and aarch64. The client type must match the architecture of the node where the client is to be installed. Otherwise, client installation will fail.
- Determine whether to generate a client file on the cluster node.
- If yes, select Save to Path, and click OK to generate the client file. By default, the client file is generated in /tmp/FusionInsight-Client on the active management node. You can also store the client file in another directory, provided that user omm has read, write, and execute permissions on that directory. Click OK, and copy the software package as user omm or root to the directory, for example, /opt/Bigdata/client, on the server where the client is to be installed. Then go to 5.
If you cannot obtain permissions of user root, use user omm.
- If no, click OK, specify a local save path, and download the complete client. Wait until the download is complete and go to 4.
- Upload the software package.
Use WinSCP to upload the obtained software package as the user (such as user_client) who prepares for the installation, to the directory (such as /opt/Bigdata/client) of the server where the client is to be installed.
The format of the client software package name is as follows: FusionInsight_Cluster_<Cluster ID>_Services_Client.tar.
The following steps and sections use FusionInsight_Cluster_1_Services_Client.tar as an example. The host where the client is to be installed can be a node inside or outside the cluster. If the node is a server outside the cluster, it must be able to communicate with the cluster, and the NTP service must be enabled to ensure that its time is the same as that of the cluster.
For example, you can configure the same NTP clock source for external servers as that of the cluster. After the configuration, run the ntpq -np command to check whether the time is synchronized.
- If there is an asterisk (*) before the IP address of the NTP clock source in the command output, the synchronization is normal. For example:
remote refid st t when poll reach delay offset jitter
==============================================================================
*10.10.10.162 .LOCL. 1 u 1 16 377 0.270 -1.562 0.014
- If there is no asterisk (*) before the IP address of the NTP clock source and the value of refid is .INIT., or if the command output is abnormal, the synchronization is abnormal. Contact technical support.
remote refid st t when poll reach delay offset jitter
==============================================================================
10.10.10.162 .INIT. 1 u 1 16 377 0.270 -1.562 0.014
You can also configure the same chrony clock source for external servers as that for the cluster. After the configuration, run the chronyc sources command to check whether the time is synchronized.
- In the command output, if there is an asterisk (*) before the IP address of the chrony service on the active OMS node, the synchronization is normal. For example:
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 10.10.10.162 10 10 377 626 +16us[ +15us] +/- 308us
- In the command output, if there is no asterisk (*) before the IP address of the chrony service on the active OMS node, and the value of Reach is 0, the synchronization is abnormal.
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? 10.1.1.1 0 10 0 - +0ns[ +0ns] +/- 0ns
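The following is a minimal sketch of pointing an external client node at the same chrony clock source as the cluster. It assumes chrony is installed on the node and that 10.10.10.162 (the address from the sample output above) is the cluster clock source; adjust the address to your environment. Add the server line to /etc/chrony.conf, restart the chronyd service, and check the synchronization again:
server 10.10.10.162 iburst
systemctl restart chronyd
chronyc sources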
- Log in to the server where the client software package is located as user user_client.
- Decompress the software package.
Go to the directory where the installation package is stored, such as /opt/Bigdata/client. Run the following command to decompress the installation package to a local directory:
tar -xvf FusionInsight_Cluster_1_Services_Client.tar
- Verify the software package.
Run the following command to verify the decompressed file and check whether the command output is consistent with the information in the sha256 file.
sha256sum -c FusionInsight_Cluster_1_Services_ClientConfig.tar.sha256
FusionInsight_Cluster_1_Services_ClientConfig.tar: OK
- Decompress the obtained installation file.
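For example, assuming the package name verified in the previous step, run the following command:
tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar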
- Configure network connections for the client.
- Ensure that the host where the client is installed can communicate with the hosts listed in the hosts file of the decompressed package (for example, /opt/Bigdata/client/FusionInsight_Cluster_<Cluster ID>_Services_ClientConfig/hosts).
- If the host where the client is installed is not a host in the cluster, you need to set the mapping between the host name and the service plane IP address of each cluster node in /etc/hosts (root permissions are required to modify this file). Each host name must map to exactly one IP address. You can perform the following steps to import the domain name mapping of the cluster to the hosts file (see the example after these steps):
- Switch to user root or a user who has the permission to modify the hosts file.
su - root
- Go to the directory where the client package is decompressed.
cd /opt/Bigdata/client/FusionInsight_Cluster_1_Services_ClientConfig
- Run the cat realm.ini >> /etc/hosts command to import the domain name mapping to the hosts file.
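Each entry in /etc/hosts uses the standard hosts format: an IP address followed by the host name it maps to. The values below are hypothetical placeholders, not values from your cluster:
192.168.0.101 node-master1-abcde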
- If the host where the client is installed is not a node in the cluster, configure network connections for the client to prevent errors when you run commands on the client.
- If Spark tasks are executed in yarn-client mode, add the spark.driver.host parameter to the Client installation directory/Spark/spark/conf/spark-defaults.conf file and set the parameter to the client IP address (see the example after this list).
- If the yarn-client mode is used, you need to configure the mapping between the IP address and host name of the client in the hosts file on the active and standby Yarn nodes (ResourceManager nodes in the cluster) to make sure that the Spark web UI is properly displayed.
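A hypothetical example of the spark.driver.host setting in spark-defaults.conf; 192.168.0.100 is a placeholder for the actual client IP address:
spark.driver.host 192.168.0.100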
- Go to the directory where the installation package is stored, and run the following command to install the client to a specified directory (an absolute path), for example, /opt/hadoopclient:
cd /opt/Bigdata/client/FusionInsight_Cluster_1_Services_ClientConfig
Run the ./install.sh /opt/hadoopclient command to install the client. The client is successfully installed if information similar to the following is displayed:
The component client is installed successfully
- If the /opt/hadoopclient directory is already used by the clients of some or all services, use a different directory when you install clients for other services.
- You must delete the client installation directory when uninstalling a client.
- To ensure that an installed client can only be used by the installation user (for example, user_client), add parameter -o during the installation. That is, run the ./install.sh /opt/hadoopclient -o command to install the client.
- If time synchronization is implemented in chrony mode, ensure that the chrony parameter is added during the installation. That is, run the ./install.sh /opt/hadoopclient -o chrony command to install the client.
- If an HBase client is installed, it is recommended that the client installation directory contain only uppercase and lowercase letters, digits, and characters (_-?.@+=) due to the limitation of the Ruby syntax used by HBase.
- If the client node is a server outside the cluster and cannot communicate with the service plane IP address of the active OMS node or cannot access port 20029 of the active OMS node, the client can be successfully installed but cannot be registered with the cluster or displayed on the GUI.
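A quick way to check whether port 20029 of the active OMS node is reachable from the client node is a plain TCP probe. This is only a sketch that assumes a bash shell; replace <OMS_IP> with the service plane IP address of the active OMS node:
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<OMS_IP>/20029' && echo "port 20029 reachable" || echo "port 20029 unreachable"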
Using a Client
- On the node where the client is installed, run the sudo su - omm command to switch the user. Run the following command to go to the client directory:
cd /opt/hadoopclient
- Run the following command to configure environment variables:
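In a standard MRS client installation, the environment script is bigdata_env in the client installation directory; the file name is assumed here and may differ in your environment:
source bigdata_env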
- If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the user. If Kerberos authentication is disabled for the current cluster, skip this step.
Example: kinit admin
User admin is created by default for MRS clusters with Kerberos authentication enabled and is used for administrators to maintain the clusters.
- Run the client command of a component directly.
For example, run the hdfs dfs -ls / command to view files in the HDFS root directory.