Creating a Table and Associating It with DWS
Function
This statement is used to create a table and associate it with an existing DWS table.
Prerequisites
Before creating a table and associating it with DWS, you need to create a datasource connection first. For details about operations on the management console, see the instructions for creating an enhanced datasource connection.
Syntax
```sql
CREATE TABLE [IF NOT EXISTS] TABLE_NAME
  USING JDBC OPTIONS (
  'url'='xx',
  'dbtable'='db_name_in_DWS.table_name_in_DWS',
  'passwdauth' = 'xxx',
  'encryption' = 'true');
```
Keyword
| Parameter | Description |
|---|---|
| url | Before obtaining the DWS connection address, you need to create a datasource connection first. If you have created an enhanced datasource connection, you can use the JDBC connection string (intranet) provided by DWS, or the intranet IP address and port number, to access DWS. The format is protocol header://internal IP address:internal port/database name, for example, jdbc:postgresql://192.168.0.77:8000/postgres. NOTE: A domain-name address uses the same format, for example, jdbc:postgresql://to-dws-1174405119-ihlUr78j.datasource.com:8000/postgres. To connect to a database created in DWS, replace postgres in the connection string with the corresponding database name. |
| dbtable | Name of the DWS table to associate, in the format schema_name.table_name, for example, public.table_name. |
| user | (Deprecated) DWS username. |
| password | Password of the DWS cluster user. |
| passwdauth | Datasource password authentication name. For details about how to create datasource authentication, see the Data Lake Insight User Guide. |
| encryption | Set this parameter to true when datasource password authentication is used. |
| partitionColumn | Name of the numeric column used to partition concurrent reads. This parameter must be set together with lowerBound, upperBound, and numPartitions. |
| lowerBound | Minimum value of the column specified by partitionColumn. The value is included in the returned result. |
| upperBound | Maximum value of the column specified by partitionColumn. The value is not included in the returned result. |
| numPartitions | Number of concurrent read operations. NOTE: During reads, the range between lowerBound and upperBound is evenly divided among the concurrent tasks. For example, with 'partitionColumn'='id', 'lowerBound'='0', 'upperBound'='100', 'numPartitions'='2', DLI starts two concurrent tasks: one reads rows whose id is greater than or equal to 0 and less than 50, and the other reads rows whose id is greater than or equal to 50 and less than 100. |
| fetchsize | Number of records fetched per batch during reads. The default value is 1000. A larger value improves read performance but consumes more memory; an excessively large value may cause memory overflow. |
| batchsize | Number of records written per batch. The default value is 1000. A larger value improves write performance but consumes more memory; an excessively large value may cause memory overflow. |
| truncate | Whether to truncate the table, rather than delete it, when an overwrite operation is performed. The options are as follows: true: the existing table is truncated before being overwritten; false (default): the original table is deleted and a new table is created. |
| isolationLevel | Transaction isolation level. The options are as follows: NONE, READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE. The default value is READ_UNCOMMITTED. |
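As a sketch of how the read-partitioning parameters fit together, the following definition reads a DWS table with two concurrent tasks. The table name dli_to_dws_partitioned, the column id, and the connection details here are hypothetical, not part of the syntax above:

```sql
-- Hypothetical example: partitioned read of a DWS table.
-- 'id' is assumed to be a numeric column in the DWS table.
CREATE TABLE IF NOT EXISTS dli_to_dws_partitioned
  USING JDBC OPTIONS (
  'url'='jdbc:postgresql://192.168.0.77:8000/postgres',
  'dbtable'='public.table_name',
  'passwdauth' = 'xxx',
  'encryption' = 'true',
  'partitionColumn'='id',
  'lowerBound'='0',
  'upperBound'='100',
  'numPartitions'='2',
  'fetchsize'='1000');
```

With these options, each of the two tasks scans half of the id range [0, 100), fetching 1000 rows per batch.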
Precautions
When creating a table associated with DWS, you do not need to specify the Schema of the associated table. DLI automatically obtains the schema of the table in the dbtable parameter of DWS.
Example
```sql
CREATE TABLE IF NOT EXISTS dli_to_dws
  USING JDBC OPTIONS (
  'url'='jdbc:postgresql://to-dws-1174405119-ih1Ur78j.datasource.com:8000/postgres',
  'dbtable'='test_dws',
  'passwdauth' = 'xxx',
  'encryption' = 'true');
```
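Once created, the associated table can be queried and written to like a regular DLI table. The column values below are hypothetical; they assume the DWS table test_dws has an integer column and a string column, which is not stated in the example above:

```sql
-- Hypothetical usage: the two-column layout of test_dws is assumed.
INSERT INTO dli_to_dws VALUES (1, 'Jason');
SELECT * FROM dli_to_dws;
```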