Installing a Client

Updated on 2022-12-14 GMT+08:00

Scenario

This section describes how to install the client of all services (except Flume). MRS provides shell scripts for each service so that maintenance personnel can log in to the related maintenance client and perform maintenance operations.

NOTE:
  • Reinstall the client after server configuration is modified on the Manager portal or after the system is upgraded. Otherwise, the versions of the client and server will be inconsistent.

Prerequisites

  • The client installation directory is automatically created if it does not exist. If it already exists, it must be empty. The directory path must not contain spaces.
  • If the node where the client is to be installed is a server outside the cluster, it must be able to communicate with the service plane. Otherwise, the client will fail to be installed.
  • The NTP service must be enabled on the node where the client is to be installed, and its time must be synchronized with the server. Otherwise, the client installation will fail.
  • The HDFS and MapReduce components are stored in the same directory (client directory/HDFS/) after being downloaded.
  • You can install or use the client as any user. Obtain the username and password from the administrator. This section uses user user_client as an example. User user_client is the owner of both the server file directory (such as /opt/Bigdata/client) and the client installation directory (such as /opt/Bigdata/hadoopclient), and both directories have 755 permissions.
  • You have obtained the component service user (default user or new user) and password.
  • If the /var/tmp/patch directory already exists when you install the client as a user other than root or omm, change the permissions on the directory to 777 and the permissions on the logs in the directory to 666.
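The directory requirements above can be checked with a short shell sketch. This demo uses a path under /tmp so it runs without root; on a real node the directory would be, for example, /opt/Bigdata/hadoopclient, and you would also chown it to the installing user (user_client in this section), which is omitted here because it requires root:

```shell
# Demo path: a stand-in for the real client installation directory.
CLIENT_DIR="/tmp/hadoopclient_demo"
rm -rf "$CLIENT_DIR"
mkdir -p "$CLIENT_DIR"

# The path must not contain spaces.
case "$CLIENT_DIR" in (*" "*) echo "path contains spaces" >&2; exit 1;; esac

# The directory must be empty before installation.
[ -z "$(ls -A "$CLIENT_DIR")" ] && echo "directory is empty"

# The directory should have 755 permissions.
chmod 755 "$CLIENT_DIR"
```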

Procedure

  1. Obtain the software package.

    Log in to FusionInsight Manager. For details, see Accessing FusionInsight Manager (MRS 3.x or Later). Select the desired cluster from the Cluster drop-down list.

    Choose More > Download Client. The Download Cluster Client window is displayed.
    NOTE:

    In a single-client scenario, choose Cluster > Name of the desired cluster > Services > Service name > More > Download Client. The Download Client dialog box is displayed.

  2. Set Select Client type to Complete Client.

    Configuration Files Only downloads only the client configuration files. It applies to the following scenario: after all clients have been downloaded and installed, administrators modify the server configuration on the Manager portal, and development personnel need to update the configuration files during application development.

    There are two client software packages:

    • x86_64: client software package that can be deployed on the x86 platform.
    • aarch64: client software package that can be deployed on the TaiShan platform.
    NOTE:

    The cluster supports x86_64 and aarch64 clients. However, the client type must match the architecture of the target node. Otherwise, the client installation will fail.

  3. Determine whether to generate the client file on a cluster node.

    • If yes, select Save to Path, and click OK to generate the client file. By default, the client file is generated in /tmp/FusionInsight-Client on the active management node. You can customize the directory, but user omm must have read, write, and execute permissions on it. Click OK, then copy the software package to the file directory (for example, /opt/Bigdata/client) on the server where the client is to be installed, as user omm or root. Then, go to 5.
      NOTE:

      If you cannot obtain the permissions of user root, use user omm.

    • If no, click OK, specify a local save path, and download the complete client. Wait until the download is complete, and go to 4.

  4. Upload the software package. Use WinSCP to upload the software package to the server file directory where the client is to be installed (such as /opt/Bigdata/client) as the user who is to install the client (any user, such as user user_client).

    The format of the client software package name is as follows: FusionInsight_Cluster_<Cluster ID>_Services_Client.tar. The following steps and sections use FusionInsight_Cluster_1_Services_Client.tar as an example.

    NOTE:
    • The host where the client is to be installed can be a node in the cluster or outside the cluster. If the node is a server outside the cluster, it must be able to communicate with the cluster, and the NTP service must be enabled to ensure that the time is the same as that on the server.
    • For example, you can configure the same NTP clock source for the external client server as that for the cluster. Then run the ntpq -np command to check whether the time is synchronized.
      • If an asterisk (*) appears before the IP address of the NTP clock source, time synchronization is normal. For example:
        remote refid st t when poll reach delay offset jitter 
        ============================================================================== 
        *10.10.10.162 .LOCL. 1 u 1 16 377 0.270 -1.562 0.014
      • If there is no asterisk (*) before the IP address of the NTP clock source and refid is .INIT., or the output is otherwise abnormal, time synchronization has failed. Contact technical support. For example:
        remote refid st t when poll reach delay offset jitter 
        ============================================================================== 
        10.10.10.162 .INIT. 1 u 1 16 377 0.270 -1.562 0.014
    • You can also configure the same chrony clock source for external servers as that for the cluster. After the configuration, run the chronyc sources command to check whether the time is synchronized.
      • In the command output, if an asterisk (*) exists before the IP address of the chrony service on the active OMS node, the synchronization is in normal state. For example:
        MS Name/IP address         Stratum Poll Reach LastRx Last sample                
         =============================================================================== 
         ^* 10.10.10.162             10  10   377   626    +16us[  +15us] +/-  308us
      • If there is no asterisk (*) before the IP address of the chrony service on the active OMS node and Reach is 0, the synchronization is abnormal. For example:
        MS Name/IP address         Stratum Poll Reach LastRx Last sample                
         =============================================================================== 
         ^? 10.1.1.1                      0  10     0     -     +0ns[   +0ns] +/-    0ns
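    The checks above boil down to looking for a peer line that begins with an asterisk. A minimal sketch of that rule (generalized from the sample output above; this parsing convention is an assumption, not part of the product documentation):

```shell
# check_sync reads "ntpq -np"-style output on stdin and reports whether a
# selected peer (a line starting with "*") is present. For chrony, pipe
# "chronyc sources" output instead and match lines starting with "^*".
check_sync() {
  if grep -q '^\*'; then echo "synchronized"; else echo "not synchronized"; fi
}

# Checked against the two sample outputs above:
printf '*10.10.10.162 .LOCL. 1 u 1 16 377 0.270 -1.562 0.014\n' | check_sync
printf '10.10.10.162 .INIT. 1 u 1 16 377 0.270 -1.562 0.014\n' | check_sync
# On a real node: ntpq -np | check_sync
```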

  5. Log in to the server where the client software package is located as user user_client.
  6. Decompress the package.

    Go to the directory where the package is stored, for example, /opt/Bigdata/client. Run the following command to decompress the package to a local directory:

    tar -xvf FusionInsight_Cluster_1_Services_Client.tar

  7. Verify the software package.

    Run the sha256sum command to verify the integrity of the downloaded file. For example:

    sha256sum -c FusionInsight_Cluster_1_Services_ClientConfig.tar.sha256

    FusionInsight_Cluster_1_Services_ClientConfig.tar: OK     

  8. Run the following command to decompress the retrieved file:

    tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

  9. Configure network connections for the client.

    1. Ensure that the host where the client is installed can communicate with the hosts listed in the hosts file stored in the directory containing the decompressed package, for example, /opt/Bigdata/client/FusionInsight_Cluster_<Cluster ID>_Services_ClientConfig/hosts.
    2. If the host where the client is installed is not a host in the cluster, you need to set the mapping between the host name and the service plane IP address for each cluster node in the /etc/hosts file (root permissions are required to modify this file). Each host name maps to exactly one IP address. You can perform the following steps to import the cluster's domain name mappings into the hosts file:
      1. Switch to the root user or a user who has permission to modify the hosts file.

        su - root

      2. Go to the directory where the client package is decompressed.

        cd /opt/Bigdata/client/FusionInsight_Cluster_1_Services_ClientConfig

      3. Run the cat realm.ini >> /etc/hosts command to import the domain name mapping to the hosts file.
    NOTE:
    • If the host where the client is installed is not a host in the cluster, configure network connections for the client to prevent errors from occurring when you run commands on the client.
    • If the Spark task is run in yarn-client mode, add the spark.driver.host parameter in the Client installation directory/Spark/spark/conf/spark-defaults.conf file and set the parameter value to the IP address of the client.
    • When yarn-client mode is used, to ensure that the Spark WebUI can properly display information, add the mappings between the client IP addresses and host names to the hosts file on the active and standby Yarn nodes, that is, the ResourceManager nodes in the cluster.
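    For the yarn-client note above, the edit to spark-defaults.conf can be sketched as follows. The IP address is a placeholder, and the demo writes to a file under /tmp; on a real client the file is client installation directory/Spark/spark/conf/spark-defaults.conf:

```shell
# Demo config file: stands in for the client's spark-defaults.conf.
CONF="/tmp/spark-defaults-demo.conf"
: > "$CONF"

CLIENT_IP="192.168.0.100"   # placeholder: replace with this client's IP address

# Append spark.driver.host only if it is not already set.
grep -q '^spark\.driver\.host' "$CONF" || \
  echo "spark.driver.host $CLIENT_IP" >> "$CONF"

cat "$CONF"
```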

  10. Go to the directory where the installation package was decompressed, and install the client to a specified directory (an absolute path), for example, /opt/hadoopclient.

    cd /opt/Bigdata/client/FusionInsight_Cluster_1_Services_ClientConfig

    Run the ./install.sh /opt/hadoopclient command and wait for the client installation to complete. The client is successfully installed if information similar to the following is displayed:

    The component client is installed successfully
    NOTE:
    • If the /opt/hadoopclient directory is already used by a previously installed client (for all or some services), choose a different directory when installing another client.
    • Delete the client installation directory to uninstall the client.
    • To ensure that the client you install can only be used by you, add the -o parameter. That is, run the ./install.sh /opt/hadoopclient -o command to install the client.
    • If the NTP service runs in chrony mode, add the chrony parameter during installation; that is, run the ./install.sh /opt/hadoopclient -o chrony command to install the client.
    • Because HBase uses the Ruby syntax, if the installed client contains the HBase client, it is recommended that the client installation directory contain only uppercase letters, lowercase letters, digits, and _-?.@+= characters.
    • If the client node is a server outside the cluster and cannot communicate with the service plane IP address of the active OMS node or cannot access port 20029 of the active OMS node, the client can be successfully installed but cannot be registered with the cluster and cannot be displayed on the GUI.

  11. Log in to the client to check whether the client is successfully installed.

    1. Run the cd /opt/hadoopclient command to go to the client installation directory.
    2. Run the source bigdata_env command to configure the environment variables for the client.
    3. If the cluster is in security mode, run the following command to perform kinit authentication and enter the password of the user logging in to the client. In normal mode, user authentication is not required.

      kinit admin

      Password for admin@HADOOP.COM:          #Enter the login password of user admin (this password is the same as the user password for cluster login).
    4. Run the klist command to view and confirm authentication details.
      Ticket cache: FILE:/tmp/krb5cc_0 
      Default principal: admin@HADOOP.COM   
      
      Valid starting     Expires            Service principal 
      04/09/2021 18:22:35  04/10/2021 18:22:29  krbtgt/HADOOP.COM@HADOOP.COM
      NOTE:
      • When kinit authentication is used, the ticket is stored in the /tmp/krb5cc_uid directory by default.

        uid indicates the ID of the user who logs in to the operating system. For example, if the UID of user root is 0, the ticket generated for kinit authentication after user root logs in to the system is stored in the /tmp/krb5cc_0 directory.

        If the current user does not have read and write permissions on the /tmp directory, the ticket cache path changes to client installation directory/tmp/krb5cc_uid. For example, if the client installation directory is /opt/hadoopclient, the kinit authentication ticket is stored in /opt/hadoopclient/tmp/krb5cc_uid.

      • If kinit authentication is used and the same user is used to log in to the operating system, there is a risk that tickets are overwritten. You can set the -c cache_name parameter to specify the ticket buffer location or set the KRB5CCNAME environment variable to avoid this problem.
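      One way to apply the KRB5CCNAME suggestion above; the cache file name suffix ("myjob") is an arbitrary example:

```shell
# Give each job its own ticket cache so concurrent kinit runs by the same
# OS user do not overwrite each other's tickets.
export KRB5CCNAME="/tmp/krb5cc_$(id -u)_myjob"
echo "ticket cache: $KRB5CCNAME"

# With the variable set, subsequent Kerberos commands use the per-job cache:
#   kinit admin     # writes the ticket to $KRB5CCNAME
#   klist           # lists tickets from $KRB5CCNAME
```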

  12. After the cluster is reinstalled, the previously installed client is no longer available. Perform the following operations to reinstall it:

    1. Log in to the node where the client is located as user root.
    2. Run the following command to check the directory where the client is located. (In the following example, the client is located in the /opt/hadoopclient directory.)

      ll /opt

      drwxr-x---. 6 root root       4096 Dec 11 19:00 hadoopclient 
      drwxr-xr-x. 3 root root       4096 Dec  9 02:04 godi 
      drwx------. 2 root root      16384 Nov  6 01:03 lost+found 
      drwxr-xr-x. 2 root root       4096 Nov  7 09:49 rh 
    3. Run the mv command to move the directory where the client program is located, together with all files in it, to a backup location. (For example, move the /opt/hadoopclient directory and all files in it to /tmp/clientbackup.)

      mv /opt/hadoopclient /tmp/clientbackup

    4. Reinstall the client.
