Adding a GaussDB Data Source
Add a GaussDB JDBC data source on HSConsole.
Prerequisites for Adding a GaussDB Data Source
- The cluster where the data source is located and the HetuEngine cluster nodes can communicate with each other.
- In the /etc/hosts file on all nodes of the cluster where HetuEngine is located, add mappings between the host names and IP addresses of the cluster where the data source to be connected resides, and add an entry in the format IP address hadoop.System domain name (for example, 10.10.10.10 hadoop.hadoop.com). Otherwise, HetuEngine cannot connect to nodes outside the cluster by host name.
- A HetuEngine compute instance has been created.
Procedure for Adding a GaussDB Data Source
- Log in to FusionInsight Manager as a HetuEngine administrator and choose Cluster > Services > HetuEngine. The HetuEngine service page is displayed.
- In the Basic Information area on the Dashboard page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
- Choose Data Source and click Add Data Source. Configure parameters on the Add Data Source page.
- In the Basic Configuration area, configure Name and choose JDBC > GAUSSDB-A for Data Source Type.
- Configure parameters in the GAUSSDB-A Configuration area. For details, see Table 1.
Table 1 GAUSSDB-A Configuration

| Parameter | Description | Example Value |
|---|---|---|
| Driver | The default value is gaussdba. | gaussdba |
| JDBC URL | JDBC URL for connecting to GaussDB A, in the format: jdbc:postgresql://CN service IP address:Port number/Database name. NOTE: If SSL is enabled for the GaussDB database, add the following SSL-related parameters to the URL: ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory | - |
| Username | Username for connecting to the GaussDB data source. Set it to the username used to connect to the data source. | - |
| Password | Password for connecting to the GaussDB data source. Set it to the password of the username used to connect to the data source. | - |
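For reference, a complete JDBC URL with SSL enabled might look as follows (the IP address, port, and database name are placeholders; replace them with your CN service address and target database):

```
jdbc:postgresql://192.168.1.10:8000/testdb?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
```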
- (Optional) Configure GaussDB user information according to Table 2.
GaussDB User Information Configuration and HetuEngine-GaussDB User Mapping Configuration must be used together. When HetuEngine connects to the GaussDB data source, each HetuEngine user obtains, through the mapping, the permissions of the GaussDB user it is mapped to. Multiple HetuEngine users can map to one GaussDB user.
In the GaussDB database, a created username must comply with the identifier naming convention and contain a maximum of 63 characters. If a username contains uppercase letters, the database automatically converts them to lowercase. To create a username that preserves uppercase letters, enclose it in double quotation marks (""). When setting the Data Source User parameter, you must therefore use the username as stored by the database (that is, the converted name).
The examples are as follows:
- If the user name created in the GaussDB database is Gaussuser1, the value of Data Source User must be gaussuser1.
- If the user name created in the GaussDB database is "Gaussuser1", the value of Data Source User must be Gaussuser1.
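The case-conversion behavior above can be reproduced in GaussDB as follows (a sketch; the password value is a placeholder):

```sql
-- Unquoted identifier: the database stores it in lowercase as gaussuser1
CREATE USER Gaussuser1 PASSWORD 'placeholder';

-- Quoted identifier: case is preserved, stored as Gaussuser1
CREATE USER "Gaussuser1" PASSWORD 'placeholder';
```

In the first case, set Data Source User to gaussuser1; in the second, set it to Gaussuser1.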
Table 2 GaussDB User Information Configuration

| Parameter | Description |
|---|---|
| Data Source User | Data source username. If the data source user is set to gaussuser1, a HetuEngine user mapped to gaussuser1 must exist. For example, create hetuuser1 and map it to gaussuser1. |
| Password | Authentication password of the corresponding data source user. |
- (Optional) Configure HetuEngine-GaussDB user mapping according to Table 3.
HetuEngine users are mapped as key-value pairs of HetuEngine User and Data Source User, where each data source user is one of the users configured in the GaussDB User Information Configuration area. This allows different HetuEngine users to access GaussDB with different GaussDB usernames and passwords.
Table 3 HetuEngine-GaussDB User Mapping Configuration

| Parameter | Description |
|---|---|
| HetuEngine User | HetuEngine username. |
| Data Source User | Data source username, for example, gaussuser1 (a data source user configured in Table 2). |
- (Optional) Customize the configuration.
- You can click Add to add custom configuration parameters. Configure custom parameters of the GaussDB data source. For details, see Table 4.
Table 4 Custom parameters of the GaussDB data source

| Parameter | Description | Example Value |
|---|---|---|
| use-connection-pool | Whether to use the JDBC connection pool. | true |
| jdbc.connection.pool.maxTotal | Maximum number of connections in the JDBC connection pool. | 8 |
| jdbc.connection.pool.maxIdle | Maximum number of idle connections in the JDBC connection pool. | 8 |
| jdbc.connection.pool.minIdle | Minimum number of idle connections in the JDBC connection pool. | 0 |
| join-pushdown.enabled | true: JOIN statements are pushed down to the data source for execution. false: JOIN statements are not pushed down, which consumes more network and compute resources. | true |
| join-pushdown.strategy | JOIN pushdown strategy. Takes effect only when join-pushdown.enabled is set to true. AUTOMATIC: cost-based JOIN pushdown. EAGER: push down JOINs as much as possible. | AUTOMATIC |
| source-encoding | Encoding of the GaussDB data source. | UTF-8 |
| multiple-cnn-enabled | Whether to use the GaussDB multi-CN configuration. To use it, ensure that the JDBC connection pool is disabled and set the JDBC URL in the format: jdbc:postgresql://host:port/database,jdbc:postgresql://host:port/database,jdbc:postgresql://host:port/database. | false |
| parallel-read-enabled | Whether to read data in parallel. When enabled, the actual number of splits is determined by the node distribution and the value of max-splits. Parallel reads create multiple connections to the data source; ensure the data source can support the load. | false |
| split-type | Split type for parallel reads. NODE: the degree of parallelism (DOP) is based on the GaussDB data source DataNodes. PARTITION: the DOP is based on table partitions. INDEX: the DOP is based on table indexes. | NODE |
| max-splits | Maximum degree of parallelism. | 5 |
| use-copymanager-for-insert | Whether to use CopyManager for batch import. | false |
| unsupported-type-handling | How to handle data types that the connector does not support. CONVERT_TO_VARCHAR: data of the BIT VARYING, CIDR, MACADDR, INET, OID, REGTYPE, REGCONFIG, and POINT types is converted to VARCHAR during queries and is read-only. IGNORE (default): unsupported types are not displayed in the result. | CONVERT_TO_VARCHAR |
| max-bytes-in-a-batch-for-copymanager-in-mb | Maximum volume of data that CopyManager imports per batch, in MB. | 10 |
| decimal-mapping | By default, DECIMAL, NUMBER, or NUMERIC data whose precision is unspecified or exceeds the maximum of 38 digits is ignored. Setting decimal-mapping=allow_overflow maps such data to DECIMAL(38, x), where x is the value of decimal-default-scale. | allow_overflow |
| decimal-default-scale | Scale x used when DECIMAL, NUMBER, or NUMERIC data is mapped to DECIMAL(38, x). The value ranges from 0 to 38; the default is 0. | 0 |
| case-insensitive-name-matching | Whether schema and table names of the GaussDB data source are matched case-insensitively. false (default): only schemas and tables whose names contain only lowercase letters can be queried. true: schemas and tables can be queried if case-insensitive matching yields a unique name; otherwise, they cannot be queried. | false |
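Putting several of the parameters above together, a typical set of custom configurations that enables the connection pool and JOIN pushdown might look like this (values taken from the example column; shown here in key=value form for readability, though on HSConsole each pair is added as a separate custom parameter):

```
use-connection-pool=true
jdbc.connection.pool.maxTotal=8
join-pushdown.enabled=true
join-pushdown.strategy=AUTOMATIC
```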
- You can click Delete to delete custom configuration parameters.
- Click OK.
- Log in to the node where the cluster client is located and run the following commands to switch to the client installation directory and authenticate the user:
cd /opt/client
source bigdata_env
kinit User performing HetuEngine operations (Skip this step if the cluster is in normal mode.)
- Run the following command to log in to the catalog of the data source:
hetu-cli --catalog Data source name --schema Database name
For example, run the following command:
hetu-cli --catalog gaussdb_1 --schema admin
- Run the following command. If the database table information is displayed and no error is reported, the connection is successful:
show tables;
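Beyond show tables, you can run a simple query to confirm that data can be read through the catalog (the table name here is hypothetical; use a table that exists in your schema):

```sql
SELECT * FROM test_table LIMIT 10;
```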
GaussDB Data Type Mapping

| GaussDB Data Type | HetuEngine Data Type |
|---|---|
| BOOLEAN | BOOLEAN |
| TINYINT | TINYINT |
| SMALLINT | SMALLINT |
| INTEGER | INTEGER |
| BINARY_INTEGER | INTEGER |
| BIGINT | BIGINT |
| SMALLSERIAL | SMALLINT |
| SERIAL | INTEGER |
| BIGSERIAL | BIGINT |
| FLOAT4 (REAL) | REAL |
| FLOAT8 (DOUBLE PRECISION) | DOUBLE PRECISION |
| DECIMAL[p(,s)] | DECIMAL[p(,s)] |
| NUMERIC[p(,s)] | DECIMAL[p(,s)] |
| CHAR(n) | CHAR(n) |
| CHARACTER(n) | CHAR(n) |
| NCHAR(n) | CHAR(n) |
| VARCHAR(n) | VARCHAR(n) |
| CHARACTER VARYING(n) | VARCHAR(n) |
| VARCHAR2(n) | VARCHAR(n) |
| NVARCHAR2(n) | VARCHAR |
| TEXT (CLOB) | VARCHAR |
| DATE | TIMESTAMP |
| TIMESTAMP | TIMESTAMP |
| UUID | UUID |
| JSON | JSON |
Constraints on GaussDB Data Source
- The following syntaxes are not supported: GRANT, REVOKE, SHOW GRANTS, SHOW ROLES, and SHOW ROLE GRANTS.
- The UPDATE and DELETE syntaxes do not support filter clauses that contain cross-catalog conditions, for example, UPDATE mppdb.table SET column1=value WHERE column2 IN (SELECT column2 FROM hive.table).
- The UPDATE syntax cannot be used to update the DATE, TIMESTAMP, and VARBINARY fields.
- Queries whose WHERE condition compares against a REAL value are not supported, for example, SELECT * FROM mppdb.table WHERE column1 = REAL '1.1'.
- The DELETE syntax cannot be used to filter clauses containing subqueries, for example, DELETE FROM mppdb.table WHERE column IN (SELECT column FROM mppdb.table1).
- HetuEngine supports a maximum precision of 38 digits for GaussDB data sources, including the DECIMAL, NUMBER, and NUMERIC data types.
- If either side of a predicate contains a subquery, the predicate is not pushed down. In the following example, the equality predicate contains a subquery, so it is not pushed down, but the min function inside the subquery can still be pushed down:
select count(*) from item where i_current_price = (select min(i_current_price) from item);