Configuring a Traditional Data Source

Updated on 2022-12-14 GMT+08:00

Scenario

This section describes how to add a Hive data source on HSConsole.

Currently, HetuEngine supports Hive data sources in the following traditional data formats: AVRO, TEXT, RCTEXT, Binary, ORC, Parquet, and SequenceFile.

Prerequisites

  • The domain name of the cluster where the data source is located must be different from the HetuEngine cluster domain name.
  • The nodes of the cluster where the data source is located can communicate with the HetuEngine cluster nodes (see the connectivity check sketched after this list).
  • A HetuEngine compute instance has been created.
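
  A quick connectivity check is sketched below. The IP address is a hypothetical placeholder (use an actual node address of the data source cluster), and the command assumes a standard Linux shell on a HetuEngine cluster node:

  # Verify that a data source cluster node is reachable from a HetuEngine node
  ping -c 3 10.92.8.42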

Procedure

  1. Obtain the hdfs-site.xml and core-site.xml configuration files of the Hive data source cluster.

    1. Log in to FusionInsight Manager of the cluster where the Hive data source is located.
    2. Choose Cluster > Dashboard.
    3. Choose More > Download Client and download the client file to the local computer.
    4. Decompress the downloaded client file package and obtain the core-site.xml and hdfs-site.xml files in the FusionInsight_Cluster_1_Services_ClientConfig/HDFS/config directory.
    5. Check whether the core-site.xml file contains the fs.trash.interval configuration item. If it does not, add the following configuration:
      <property>
        <name>fs.trash.interval</name>
        <value>2880</value>
      </property>
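
      A quick way to perform this check is sketched below, assuming a standard Linux shell run in the directory that contains the file:

      # Report whether core-site.xml already defines fs.trash.interval
      grep -q 'fs.trash.interval' core-site.xml && echo 'present' || echo 'missing'
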
    6. Change the value of dfs.client.failover.proxy.provider.hacluster in the hdfs-site.xml file to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
      <property>
        <name>dfs.client.failover.proxy.provider.hacluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      • If HDFS has multiple NameServices, change the value of dfs.client.failover.proxy.provider.<NameService name> for each NameService in the hdfs-site.xml file to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
      • In addition, if the hdfs-site.xml file references the host name of a node that is not in the HetuEngine cluster, add the mapping between that host name and its IP address to the /etc/hosts file of each HetuEngine cluster node. Otherwise, HetuEngine cannot connect by host name to nodes outside this cluster.
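
      For example, an /etc/hosts entry might look like the following; the host name and IP address are hypothetical placeholders:

      # Map a host name referenced in hdfs-site.xml to its IP address
      10.92.8.42 node-master1.hadoop.com
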
      NOTICE:

      If the Hive data source to be interconnected is in the same Hadoop cluster as HetuEngine, you can log in to the HDFS client and run the following commands to obtain the hdfs-site.xml and core-site.xml configuration files. For details, see Using the HDFS Client.

      hdfs dfs -get /user/hetuserver/fiber/restcatalog/hive/core-site.xml

      hdfs dfs -get /user/hetuserver/fiber/restcatalog/hive/hdfs-site.xml

  2. Obtain the user.keytab and krb5.conf files of the proxy user of the Hive data source.

    1. Log in to FusionInsight Manager of the cluster where the Hive data source is located.
    2. Choose System > Permission > User.
    3. Locate the row that contains the target data source user, click More in the Operation column, and select Download Authentication Credential.
    4. Decompress the downloaded package to obtain the user.keytab and krb5.conf files, as sketched after the note below.
      NOTE:

      The proxy user of the Hive data source must be associated with at least the hive user group.
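
      For reference, the extraction might look like the following sketch. It assumes the downloaded package is a .tar archive; the package file name is a hypothetical placeholder:

      # Extract the downloaded authentication credential package
      tar -xvf hiveuser1_credential.tar
      # Confirm that both files are present
      ls user.keytab krb5.conf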

  3. Obtain the MetaStore URL and the Principal of the server.

    1. Decompress the client package of the cluster where the Hive data source is located and obtain the hive-site.xml file from the FusionInsight_Cluster_1_Services_ClientConfig/Hive/config directory.
    2. Open the hive-site.xml file and search for hive.metastore.uris; its value is the MetaStore URL. Then search for hive.server2.authentication.kerberos.principal; its value is the server Principal.
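
      A quick way to pull both values is sketched below; it assumes each property's <name> and <value> elements sit on consecutive lines in the file:

      # Print each property name and the value on the following line
      grep -A1 'hive.metastore.uris' hive-site.xml
      grep -A1 'hive.server2.authentication.kerberos.principal' hive-site.xml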

  4. Log in to FusionInsight Manager as a HetuEngine administrator and choose Cluster > Services > HetuEngine. The HetuEngine service page is displayed.
  5. In the Basic Information area on the Dashboard page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
  6. Choose Data Source and click Add Data Source. Configure parameters on the Add Data Source page.

    1. Configure parameters in the Basic Information area. For details, see Table 1.
      Table 1 Basic Information

      • Name: Name of the data source to be connected. The value can contain only letters, digits, and underscores (_) and must start with a letter. Example: hive_1
      • Data Source Type: Type of the data source to be connected. Select Hive. Example: Hive
      • Mode: Mode of the current cluster. The default value is Security Mode.
      • Description: Description of the data source. The value can contain only letters, digits, commas (,), periods (.), underscores (_), spaces, and line breaks.

    2. Configure parameters in the Hive Configuration area. For details, see Table 2.
      Table 2 Hive Configuration

      • Driver: The default value is fi-hive-hadoop. Example: fi-hive-hadoop
      • hdfs-site File: Select the hdfs-site.xml configuration file obtained in 1. The file name is fixed.
      • core-site File: Select the core-site.xml configuration file obtained in 1. The file name is fixed.
      • krb5 File: Configure this parameter when the security mode is enabled. It is the configuration file used for Kerberos authentication. Select the krb5.conf file obtained in 2. Example: krb5.conf
      • Enable Data Source Authentication: Whether to use the permission policy of the Hive data source for authentication. If Ranger is disabled for the HetuEngine service, select Yes; if Ranger is enabled, select No. Example: No

    3. Configure parameters in the MetaStore Configuration area. For details, see Table 3.
      Table 3 MetaStore Configuration

      • Metastore URL: URL of the MetaStore of the data source. For details, see 3. Example: thrift://10.92.8.42:21088,thrift://10.92.8.43:21088,thrift://10.92.8.44:21088
      • Security Authentication Mechanism: After the security mode is enabled, the default value is KERBEROS. Example: KERBEROS
      • Server Principal: Configure this parameter when the security mode is enabled. It specifies the username with domain name used by meta to access MetaStore. For details, see 3. Example: hive/hadoop.hadoop.com@HADOOP.COM
      • Client Principal: Configure this parameter when the security mode is enabled. The parameter format is: Username for accessing MetaStore@domain name (uppercase).COM, where Username for accessing MetaStore is the user to which the user.keytab file obtained in 2 belongs. Example: admintest@HADOOP.COM
      • Keytab File: Configure this parameter when the security mode is enabled. It specifies the keytab credential file of the MetaStore username. The file name is fixed. Select the user.keytab file obtained in 2. Example: user.keytab

    4. Configure parameters in the Connection Pool Configuration area. For details, see Table 4.
      Table 4 Connection Pool Configuration

      • Enable Connection Pool: Whether to enable the connection pool when accessing Hive MetaStore. Example: Yes/No
      • Maximum Connections: Maximum number of connections in the connection pool when accessing Hive MetaStore. Example: 50

    5. Configure parameters in Hive User Information Configuration. For details, see Table 5.
      Hive User Information Configuration and HetuEngine-Hive User Mapping Configuration must be used together. When HetuEngine connects to the Hive data source, user mapping gives a HetuEngine user the same permissions as the mapped Hive data source user. Multiple HetuEngine users can map to one Hive user.
      Table 5 Hive User Information Configuration

      • Data Source User: Data source user information. The value can contain only letters, digits, underscores (_), hyphens (-), and periods (.), and must start with a letter or underscore (_). The minimum length is 2 characters and the maximum length is 100 characters. If the data source user is set to hiveuser1, a HetuEngine user mapped to hiveuser1 must exist; for example, create hetuuser1 and map it to hiveuser1.
      • Keytab File: Obtain the authentication credential of the user corresponding to the data source. Example: hiveuser1.keytab

    6. Configure parameters in the HetuEngine-Hive User Mapping Configuration area. For details, see Table 6.
      Table 6 HetuEngine-Hive User Mapping Configuration

      • HetuEngine User: HetuEngine user information. The value can contain only letters, digits, underscores (_), hyphens (-), and periods (.), and must start with a letter or underscore (_). The minimum length is 2 characters and the maximum length is 100 characters. Example: hetuuser1
      • Data Source User: Data source user information. The value can contain only letters, digits, underscores (_), hyphens (-), and periods (.), and must start with a letter or underscore (_). The minimum length is 2 characters and the maximum length is 100 characters. Example: hiveuser1 (the data source user configured in Table 5)

    7. Modify custom configurations.
      • You can click Add to add custom configuration parameters by referring to Table 7.
        Table 7 Custom parameters

        • hive.metastore.connection.pool.maxTotal: Maximum number of connections in the connection pool. Example: 50 (value range: 0-200)
        • hive.metastore.connection.pool.maxIdle: Maximum number of idle threads in the connection pool. When the number of idle threads reaches the maximum, new threads are not released. Default value: 10. Example: 10 (the value ranges from 0 to 200 and cannot exceed the maximum number of connections)
        • hive.metastore.connection.pool.minIdle: Minimum number of idle threads in the connection pool. When the number of idle threads reaches the minimum, the thread pool does not create new threads. Default value: 10. Example: 10 (the value ranges from 0 to 200 and cannot exceed the value of hive.metastore.connection.pool.maxIdle)

      • You can click Delete to delete custom configuration parameters.
        NOTE:
        • You can add the prefixes coordinator. and worker. to the preceding custom configuration items to configure coordinators and workers separately. For example, if worker.hive.metastore.connection.pool.maxTotal is set to 50, a maximum of 50 connections is allowed for workers to access Hive MetaStore. If no prefix is added, the configuration item applies to both coordinators and workers (see the example after this note).
        • By default, the maximum number of connections for coordinators to access Hive MetaStore is 5, and the maximum and minimum numbers of idle data source connections are both 10. For workers, the maximum number of connections is 20, and the maximum and minimum numbers of idle data source connections are both 0.
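
        For example, entering the following parameter-value pairs on HSConsole (values illustrative) caps coordinator connections at 5 and worker connections at 50:

        coordinator.hive.metastore.connection.pool.maxTotal = 5
        worker.hive.metastore.connection.pool.maxTotal = 50
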
    8. Click OK.

  7. Log in to the node where the cluster client is located and run the following commands to switch to the client installation directory and authenticate the user:

    cd /opt/client

    source bigdata_env

    kinit <user performing HetuEngine operations> (If the cluster is in normal mode, skip this step.)

  8. Run the following command to log in to the catalog of the data source:

    hetu-cli --catalog <data source name> --schema default

    For example, run the following command:

    hetu-cli --catalog hive_1 --schema default

  9. Run the following command to view the tables in the database:

    show tables;
      Table  
    ---------
     hivetb   
    (1 rows)
    
    Query 20210730_084524_00023_u3sri@default@HetuEngine, FINISHED, 3 nodes
    Splits: 36 total, 36 done (100.00%)
    0:00 [2 rows, 47B] [7 rows/s, 167B/s]
