Updated on 2023-04-28 GMT+08:00

Configuring a Hudi Data Source

Scenario

HetuEngine can connect to Hudi data sources of MRS 3.1.1 or later clusters.

HetuEngine cannot read Hudi bootstrap tables.

Prerequisites

  • You have created the proxy user of the Hudi data source. The proxy user is a human-machine user and must belong to the hive group.
  • You have created a HetuEngine administrator by referring to Creating a HetuEngine User.

Procedure

  1. Obtain the hdfs-site.xml, core-site.xml, and yarn-site.xml configuration files of the Hive data source cluster.

    1. Log in to FusionInsight Manager of the cluster where the Hive data source is located.
    2. Choose Cluster > Dashboard.
    3. Choose More > Download Client and download the client file to the local computer.
    4. Decompress the downloaded client file package to obtain core-site.xml and hdfs-site.xml files in the FusionInsight_Cluster_1_Services_ClientConfig/HDFS/config directory as well as the yarn-site.xml file in the FusionInsight_Cluster_1_Services_ClientConfig/Yarn/config directory.
    5. Check whether the core-site.xml file contains the fs.trash.interval configuration item. If it does not, add the following configuration:
      <property>
        <name>fs.trash.interval</name>
        <value>2880</value>
      </property>
    6. Change the value of dfs.client.failover.proxy.provider.hacluster in the hdfs-site.xml file to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
      <property>
        <name>dfs.client.failover.proxy.provider.hacluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      • If HDFS has multiple NameServices, change the value of dfs.client.failover.proxy.provider.<NameService name> for each NameService in the hdfs-site.xml file to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
      • In addition, if the hdfs-site.xml file references the host name of a node outside the HetuEngine cluster, add the mapping between that host name and its IP address to the /etc/hosts file on each HetuEngine cluster node. Otherwise, HetuEngine cannot reach nodes outside this cluster by host name.
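      For example, such an /etc/hosts entry might look like the following; both the IP address and the host name are hypothetical placeholders:

      192.168.0.100 datanode01.hadoop.com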

      If the Hive data source to be interconnected is in the same Hadoop cluster as HetuEngine, you can log in to the HDFS client and run the following commands to obtain the hdfs-site.xml and core-site.xml configuration files. For details, see Using the HDFS Client.

      hdfs dfs -get /user/hetuserver/fiber/restcatalog/hive/core-site.xml

      hdfs dfs -get /user/hetuserver/fiber/restcatalog/hive/hdfs-site.xml

  2. Obtain the user.keytab and krb5.conf files of the proxy user of the Hive data source.

    1. Log in to FusionInsight Manager of the cluster where the Hive data source is located.
    2. Choose System > Permission > User.
    3. Locate the row that contains the target data source user, click More in the Operation column, and select Download Authentication Credential.
    4. Decompress the downloaded package to obtain the user.keytab and krb5.conf files.

      The proxy user of the Hive data source must be associated with at least the hive user group.
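      You can optionally verify the downloaded credential on any client node before using it; the user name hiveuser1 below is a hypothetical example:

      kinit -kt user.keytab hiveuser1
      klist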

  3. Obtain the MetaStore URL and the Principal of the server.

    1. Decompress the client package of the cluster where the Hive data source is located and obtain the hive-site.xml file from the FusionInsight_Cluster_1_Services_ClientConfig/Hive/config directory.
    2. Open the hive-site.xml file and search for hive.metastore.uris; its value is the MetaStore URL. Then search for hive.server2.authentication.kerberos.principal; its value is the server Principal.
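      Both values can also be pulled out quickly with grep; the path below assumes you are in the directory where the client package was decompressed:

      grep -A1 'hive.metastore.uris' FusionInsight_Cluster_1_Services_ClientConfig/Hive/config/hive-site.xml

      grep -A1 'hive.server2.authentication.kerberos.principal' FusionInsight_Cluster_1_Services_ClientConfig/Hive/config/hive-site.xml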

  4. Log in to FusionInsight Manager as a HetuEngine administrator and choose Cluster > Services > HetuEngine. The HetuEngine service page is displayed.
  5. In the Basic Information area on the Dashboard page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
  6. Choose Data Source and click Add Data Source. Configure parameters on the Add Data Source page.

    1. In the Basic Configuration area, configure Name and choose Hive for Data Source Type.
    2. Configure parameters in the Hive Configuration area. For details, see Table 1.
      Table 1 Hive configuration

      Driver
        Description: The default value is fi-hive-hadoop.
        Example value: fi-hive-hadoop

      hdfs-site File
        Description: Select the hdfs-site.xml configuration file obtained in 1. The file name is fixed.
        Example value: -

      core-site File
        Description: Select the core-site.xml configuration file obtained in 1. The file name is fixed.
        Example value: -

      yarn-site File
        Description: Select the yarn-site.xml configuration file obtained in 1. The file name is fixed.
        Example value: -

      krb5 File
        Description: Configuration file used for Kerberos authentication. Configure this parameter when the security mode is enabled. Select the krb5.conf file obtained in 2.
        Example value: krb5.conf

      Enable Data Source Authentication
        Description: Whether to use the permission policy of the Hive data source for authentication. If Ranger is disabled for the HetuEngine service, select Yes; if Ranger is enabled, select No.
        Example value: No

    3. Configure parameters in the MetaStore Configuration area. For details, see Table 2.
      Table 2 MetaStore configuration

      Metastore URL
        Description: URL of the MetaStore of the data source. For details, see 3.
        Example value: thrift://10.92.8.42:21088,thrift://10.92.8.43:21088,thrift://10.92.8.44:21088

      Security Authentication Mechanism
        Description: After the security mode is enabled, the default value is KERBEROS.
        Example value: KERBEROS

      Server Principal
        Description: Configure this parameter when the security mode is enabled. It specifies the principal (username with domain name) used by the MetaStore server. For details, see 3.
        Example value: hive/hadoop.hadoop.com@HADOOP.COM

      Client Principal
        Description: Configure this parameter when the security mode is enabled. The format is: Username for accessing MetaStore@Domain name (uppercase).COM, where the username is the user to which the user.keytab file obtained in 2 belongs.
        Example value: admintest@HADOOP.COM

      Keytab File
        Description: Configure this parameter when the security mode is enabled. It specifies the keytab credential file of the MetaStore username. The file name is fixed. Select the user.keytab file obtained in 2.
        Example value: user.keytab

    4. Configure parameters in the Connection Pool Configuration area. For details, see Table 3.
      Table 3 Connection pool configuration

      Enable Connection Pool
        Description: Whether to enable the connection pool for access to Hive MetaStore.
        Example value: Yes

      Maximum Connections
        Description: Maximum number of connections in the connection pool for access to Hive MetaStore.
        Example value: 50

    5. Configure parameters in Hive User Information Configuration. For details, see Table 4.
      Hive User Information Configuration and HetuEngine-Hive User Mapping Configuration must be used together. When HetuEngine is connected to the Hive data source, user mapping gives HetuEngine users the same permissions as the mapped Hive data source user. Multiple HetuEngine users can correspond to one Hive user.
      Table 4 Hive user information configuration

      Data Source User
        Description: Data source user information. If the data source user is set to hiveuser1, a HetuEngine user mapped to hiveuser1 must exist; for example, create hetuuser1 and map it to hiveuser1.

      Keytab File
        Description: Keytab authentication credential file of the data source user.

    6. Modify custom configurations. Parameters hive.parquet.use-column-names and hive.partition-use-column-names are mandatory.
      • You can click Add to add custom configuration parameters by referring to the following tables.
        Table 5 Custom configurations (mandatory)

        hive.parquet.use-column-names
          Description: If set to true, columns are accessed by the names recorded in Parquet files instead of by their positional order. Value: true or false.
          Example value: true

        hive.partition-use-column-names
          Description: If set to true, columns are accessed by the names recorded in partitions instead of by their positional order. Value: true or false.
          Example value: true

        Table 6 Custom configurations (optional)

        hive.metastore.connection.pool.maxTotal
          Description: Maximum number of connections in the connection pool.
          Example value: 50 (value range: 20–200)

        hive.metastore.connection.pool.maxIdle
          Description: Maximum number of idle threads in the connection pool. Once the number of idle threads reaches this value, newly idled threads are released rather than retained. Default value: 8.
          Example value: 8 (value range: 0–200; cannot exceed the maximum number of connections)

        hive.metastore.connection.pool.minIdle
          Description: Minimum number of idle threads in the connection pool. When the number of idle threads reaches this value, the thread pool does not create new threads. Default value: 0.
          Example value: 0 (value range: 0–200; cannot exceed the value of hive.metastore.connection.pool.maxIdle)

        hive.hdfs.wire-encryption.enabled
          Description: Add this parameter and set it to false if the hadoop.rpc.protection parameter of HDFS is set to authentication or integrity.
          Example value: false

      • You can click Delete to delete custom configuration parameters.
        • You can add the prefix coordinator. or worker. to the preceding custom configuration items to configure coordinators or workers separately. For example, prefixing worker. to hive.metastore.connection.pool.maxTotal yields worker.hive.metastore.connection.pool.maxTotal; setting it to 50 allows worker nodes at most 50 connections to Hive MetaStore. A configuration item without a prefix applies to both coordinators and workers.
        • By default, the maximum number of connections for coordinator nodes to access Hive MetaStore is 50, and the maximum and minimum numbers of idle data source connections are 8 and 0, respectively. The maximum number of connections for worker nodes to access Hive MetaStore is 20, and the maximum and minimum numbers of idle data source connections are both 0.
    7. Click OK.
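    For orientation only: HetuEngine is built on openLooKeng, and the settings above roughly correspond to open-source Hive connector catalog properties such as the following. HSConsole generates the actual catalog file; the property names below come from the open-source Presto/openLooKeng Hive connector, the mapping of the fi-hive-hadoop driver to connector.name is an assumption, and the values simply repeat the examples from the tables above.

      # Orientation sketch only; HSConsole generates the real catalog configuration.
      connector.name=fi-hive-hadoop
      hive.metastore.uri=thrift://10.92.8.42:21088,thrift://10.92.8.43:21088,thrift://10.92.8.44:21088
      hive.metastore.authentication.type=KERBEROS
      hive.metastore.service.principal=hive/hadoop.hadoop.com@HADOOP.COM
      hive.metastore.client.principal=admintest@HADOOP.COM
      hive.metastore.client.keytab=user.keytab
      hive.config.resources=core-site.xml,hdfs-site.xml
      hive.parquet.use-column-names=true
      hive.partition-use-column-names=true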

  7. Log in to the node where the cluster client is located and run the following commands to switch to the client installation directory and authenticate the user:

    cd /opt/client

    source bigdata_env

    kinit User performing HetuEngine operations (If the cluster is in normal mode, skip this step.)

  8. Run the following command to log in to the catalog of the data source:

    hetu-cli --catalog Data source name --schema Database name

    For example, run the following command:

    hetu-cli --catalog hudi_1 --schema default

  9. Run the following command. If the table information is displayed and no error is reported, the connection is successful.

    show tables;
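    You can then run a simple read-only query; the table name hudi_table1 below is a hypothetical placeholder, so substitute a table returned by show tables:

    select * from hudi_table1 limit 10;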

Data Type Mapping

Currently, Hudi data sources support the following data types: BOOLEAN, TINYINT, SMALLINT, INT, BIGINT, REAL, DOUBLE, DECIMAL, NUMERIC, DEC, VARCHAR, VARCHAR(X), CHAR, CHAR(X), STRING, DATE, TIMESTAMP, TIME WITH TIME ZONE, TIMESTAMP WITH TIME ZONE, TIME, ARRAY, MAP, UNIONTYPE, STRUCT, and ROW.
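To see how the columns of a specific Hudi table surface as HetuEngine types, you can describe the table from the CLI; the table name below is a hypothetical placeholder:

describe hudi_table1;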

Performance Optimization

  • Metadata caching

    Hudi connectors support metadata caching, which serves metadata requests for various operations faster. For details, see Adjusting Metadata Cache.

  • Cost-based Optimization (CBO)

    Periodically running the ANALYZE command to collect table statistics enables cost-based optimization for Hudi connectors.
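    For example, assuming the openLooKeng-style ANALYZE and SHOW STATS statements and a hypothetical table name:

    analyze hudi_table1;
    show stats for hudi_table1;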

  • Dynamic filtering

    Enabling dynamic filtering helps optimize the calculation of the Join operator of Hudi connectors. For details, see Enabling Dynamic Filtering.

  • Query with partition conditions

    Creating partitioned tables and querying with partition filter criteria allows non-matching partitions to be pruned, which improves performance.
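    For example, assuming a table partitioned by a dt column (the table and column names are hypothetical), a query such as the following reads only the matching partition:

    select id, price from hudi_table1 where dt = '2023-04-28';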

Constraints

Hudi data sources support only the QUERY operation.