Updated on 2023-07-26 GMT+08:00

Configuring Destination Information

Overview

This topic describes how to configure destination information for a data integration task. ROMA Connect writes data to the destination based on this information, which includes the data source and data storage details. The configuration items vary depending on the data source type.

During data migration, if a primary key conflict occurs at the destination, data is automatically updated based on the primary key.

Data Source Types Supported by Both Real-Time and Scheduled Integration Tasks

Data Source Types Supported by Only Real-Time Tasks

API

If Integration Mode is set to Scheduled or Real-Time, you can select API as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 1 API information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the API data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select API.

    Data Source Name

    Select the API data source that you configured in Connecting to Data Sources.

    Request Parameters

Construct the parameter definitions of an API request. The data to be integrated to the destination must be defined in Body. Set this parameter based on the definition of the API data source.

    • Params: parameters defined after the question mark (?) in the request URL. Only fixed values can be transferred. The method for setting Params is similar to that for setting Body in form-data format.
    • Headers: message headers of RESTful requests. Only fixed values can be transferred. The method for setting Headers is similar to that for setting Body in form-data format.
    • Body: bottom-layer parameters in the body of RESTful requests. This parameter and Data Root Field constitute the complete body of a request sent to the destination API. Source data is transferred to the destination API through parameters defined in Body. Body supports two modes: form-data and raw. For details, see Description on Body Parameter Configuration.

    Data Root Field

    This parameter specifies the path, in JSON format, of the common parent fields shared by all parameters in the body sent to the destination. Data Root Field and Body in Request Parameters together form the request body sent to the destination API.

    For example, if the body parameter is {"c":"xx","d":"xx"} and Data Root Field is set to a.b, the encapsulated request data is {"a":{"b":{"c":"xx","d":"xx"}}}.
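    The wrapping rule above can be sketched in Python; `build_request_body` is a hypothetical helper for illustration, not ROMA Connect's actual implementation:

```python
# Hypothetical sketch of the Data Root Field wrapping rule; an
# illustration only, not ROMA Connect's actual implementation.
import json

def build_request_body(body, data_root_field=""):
    """Nest the Body parameters under each segment of the dotted root path."""
    result = body
    if data_root_field:
        for key in reversed(data_root_field.split(".")):
            result = {key: result}
    return result

print(json.dumps(build_request_body({"c": "xx", "d": "xx"}, "a.b")))
# {"a": {"b": {"c": "xx", "d": "xx"}}}
```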

    Description on Body Parameter Configuration

    • form-data mode:

      Set Key to the parameter name defined by the API data source and leave Value unspecified. The key will be used as the destination field name in mapping information to map and transfer the value of the source field.

      Figure 1 form-data mode
    • raw mode:

      The raw mode supports the JSON, Array, and nested JSON formats. Enter an example body sent to the destination API in JSON format. ROMA Connect replaces the parameter values in the example based on the mapping configuration, and finally transfers the source data to the destination. The following is an example body in raw mode:

      • JSON format:
        {
        	"id": 1,
        	"name": "name1"
        }

        Enter the body content in JSON format, leave Data Root Field empty, and set the field names in Mapping Information.

      • Array format:
        {
        	"record":[
        		{
        			"id": 1,
        			"name": ""
        		}
        	]
        }

        Set Data Root Field to the name of the JSONArray object, for example, record. Enter the field names in Mapping Information.

      • Nested JSON format:
        {
        	"startDate":"",
        	"record":[
        		{
        			"id": 1,
        			"name": ""
        		}
        	]
        }

        Leave Data Root Field blank. In Mapping Information, set the JSON fields to their field names and set the JSON array fields to their specific paths, for example, record[0].id.
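    The raw-mode value replacement described above can be illustrated with a small Python sketch; `set_by_path` is a hypothetical helper showing how mapped paths such as record[0].name address locations in the example body:

```python
# Hypothetical helper (not ROMA Connect source) showing how raw-mode
# mapping paths such as "record[0].name" address values in the body.
import json
import re

def set_by_path(obj, path, value):
    """Set a value inside nested dicts/lists using a path like 'record[0].id'."""
    tokens = re.findall(r"([A-Za-z_]\w*)(?:\[(\d+)\])?", path)
    for i, (key, idx) in enumerate(tokens):
        last = i == len(tokens) - 1
        if idx:                            # array step, e.g. record[0]
            if last:
                obj[key][int(idx)] = value
            else:
                obj = obj[key][int(idx)]
        else:                              # plain object key
            if last:
                obj[key] = value
            else:
                obj = obj[key]

template = {"startDate": "", "record": [{"id": 1, "name": ""}]}
set_by_path(template, "startDate", "2023-07-26")
set_by_path(template, "record[0].name", "name1")
print(json.dumps(template))
# {"startDate": "2023-07-26", "record": [{"id": 1, "name": "name1"}]}
```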

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

ActiveMQ

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select ActiveMQ as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 2 ActiveMQ information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the ActiveMQ data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select ActiveMQ.

    Data Source Name

    Select the ActiveMQ data source that you configured in Connecting to Data Sources.

    Destination Type

    Select the message transfer model of the ActiveMQ data source. The value can be Topic or Queue.

    Destination Name

    Enter the name of a topic or queue to which to send data. Ensure that the topic or queue already exists.

    Metadata

    Define each underlying key-value data element to be written to the destination in JSON format. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.
    • Parsing Path: full path of metadata. For details, see Description on Metadata Parsing Path Configuration.

    Description on Metadata Parsing Path Configuration

    • Data in JSON format does not contain arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b.c, the complete path of element d is a.b.d, and elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b.c, and Parsing Path of element d must be set to a.b.d.

      {
         "a": {
            "b": {
               "c": "xx",
               "d": "xx"
            }
         }
      }
    • Data in JSON format contains arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b[i].c, and the complete path of element d is a.b[i].d. Elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b[i].c, and Parsing Path of element d must be set to a.b[i].d.

      {
         "a": {
            "b": [{
               "c": "xx",
               "d": "xx"
            },
            {
               "c": "yy",
               "d": "yy"
            }
            ]
         }
      }
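    Both path forms can be sketched in Python; `resolve` is an assumption about the semantics of parsing paths, not ROMA Connect's implementation:

```python
# Sketch (assumption about semantics, not ROMA Connect source) of resolving
# metadata parsing paths such as "a.b.c" and "a.b[i].c" against JSON data.

def resolve(data, path):
    """Return every value addressed by the parsing path."""
    results = [data]
    for part in path.split("."):
        if part.endswith("[i]"):          # array level, e.g. "b[i]"
            key = part[:-3]
            results = [item for node in results for item in node[key]]
        else:                             # plain object key
            results = [node[part] for node in results]
    return results

doc = {"a": {"b": [{"c": "xx", "d": "xx"}, {"c": "yy", "d": "yy"}]}}
print(resolve(doc, "a.b[i].c"))                      # ['xx', 'yy']
print(resolve({"a": {"b": {"c": "xx"}}}, "a.b.c"))   # ['xx']
```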

    The preceding JSON data that does not contain arrays is used as an example. The following describes the configuration when the destination is ActiveMQ:

    Figure 2 ActiveMQ configuration example
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

ArtemisMQ

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select ArtemisMQ as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 3 ArtemisMQ information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the ArtemisMQ data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select ArtemisMQ.

    Data Source Name

    Select the ArtemisMQ data source that you configured in Connecting to Data Sources.

    Destination Type

    Select the message transfer model of the ArtemisMQ data source. The value can be Topic or Queue.

    Destination Name

    Enter the name of a topic or queue to which to send data. Ensure that the topic or queue already exists.

    Metadata

    Define each underlying key-value data element to be written to the destination in JSON format. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.
    • Parsing Path: full path of metadata. For details, see Description on Metadata Parsing Path Configuration.

    Description on Metadata Parsing Path Configuration

    • Data in JSON format does not contain arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b.c, the complete path of element d is a.b.d, and elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b.c, and Parsing Path of element d must be set to a.b.d.

      {
         "a": {
            "b": {
               "c": "xx",
               "d": "xx"
            }
         }
      }
    • Data in JSON format contains arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b[i].c, and the complete path of element d is a.b[i].d. Elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b[i].c, and Parsing Path of element d must be set to a.b[i].d.

      {
         "a": {
            "b": [{
               "c": "xx",
               "d": "xx"
            },
            {
               "c": "yy",
               "d": "yy"
            }
            ]
         }
      }

    The configuration when the destination is ArtemisMQ is similar to that when the destination is ActiveMQ. For details, see ActiveMQ configuration example.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

DB2

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select DB2 as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 4 DB2 information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the DB2 data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select DB2.

    Data Source Name

    Select the DB2 data source that you configured in Connecting to Data Sources.

    Table

    Select the data table to which data will be written. Then, click Select Table Field and select only the column fields that you want to write.

    Batch Number Field

    In the destination table, select a field of the String type whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.
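    The batch number behavior described above can be sketched as follows; `new_batch_number` is a hypothetical helper that satisfies the stated constraints (a random String longer than 14 characters, shared by every row of one batch):

```python
# Hypothetical illustration of the batch number described above: a random
# String value longer than 14 characters, shared by all rows of one batch.
import secrets

def new_batch_number():
    return secrets.token_hex(8)          # 16 hex characters (> 14)

batch_no = new_batch_number()
rows = [{"id": 1}, {"id": 2}]
for row in rows:
    row["batch_no"] = batch_no           # same number for the whole batch
```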

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

DIS

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select DIS as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 5 DIS information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the DIS data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select DIS.

    Data Source Name

    Select the DIS data source that you configured in Connecting to Data Sources.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

DWS

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select DWS as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 6 DWS information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the DWS data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select DWS.

    Data Source Name

    Select the DWS data source that you configured in Connecting to Data Sources.

    Table

    Select the data table to which data will be written. Then, click Select Table Field and select only the table fields that you want to write.

    Batch Number Field

    In the destination table, select a field of the String type whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

    Clear Table

    This parameter specifies whether to clear the destination table each time a scheduled task starts.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

FTP

Back to Overview

If Integration Mode is set to Scheduled, you can select FTP as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 7 FTP information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the FTP data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select FTP.

    Data Source Name

    Select the FTP data source that you configured in Connecting to Data Sources.

    File Path

    Enter the path of the folder to be accessed on the FTP server, for example, /data/FDI.

    File Name Prefix

    Enter the prefix of the file name. This parameter is used together with File Name Extension to define the name of the file to be written to the FTP data source.

    File Name Extension

    Select a timestamp format for the file name extension. This parameter is used together with File Name Prefix to define the data file to be written to the FTP data source.

    File Content Type

    Select the content type of the data file to be written. Currently, Text file and Binary file are supported.

    File Type

    Select the format of the data file to be written. If File Content Type is set to Text file, CSV and TXT are available. If File Content Type is set to Binary file, XLS and XLSX are available.

    File Name Encoding

    Select the encoding mode of the data file name.

    File Content Encoding

    This parameter is mandatory only if File Content Type is set to Text file.

    Select the encoding format of the data file content.

    File Separator

    This parameter is mandatory only if File Content Type is set to Text file.

    Enter the field separator for the data file to distinguish different fields in each row of data. By default, the values are separated with commas (,).

    Space Format Character

    Specify the characters to be written as spaces in the data file.

    Write Mode

    Select the mode in which integration data is written to a file.

    • Truncate: Delete the file, re-create a file, and write data to the new file.
    • Append: Incrementally write data to an existing file.

    Add File Header

    Determine whether to add a file header.

    File Header

    This parameter is mandatory only if Add File Header is set to Yes.

    Enter the file header information. Use commas (,) to separate multiple file headers.

    Metadata

    Define the data fields to be written to the destination file. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The following figure shows a configuration example when the destination is FTP. id, name, and info are the data fields to be written to the FTP data source.

    Figure 3 FTP configuration example
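    A rough Python sketch of how the parameters above fit together; the function names and the timestamp format are assumptions for illustration only:

```python
# Rough sketch of how the FTP parameters combine; function names and the
# timestamp format are assumptions for illustration only.
from datetime import datetime

def build_file_name(prefix, ts_format="%Y%m%d%H%M%S", now=None):
    """File Name Prefix plus a timestamp File Name Extension."""
    now = now or datetime.now()
    return prefix + now.strftime(ts_format)

def render_rows(rows, header=None, sep=","):
    """Join fields with File Separator; optionally prepend File Header."""
    lines = [sep.join(header)] if header else []
    lines += [sep.join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

print(build_file_name("FDI_", now=datetime(2023, 7, 26, 12, 0, 0)))
# FDI_20230726120000
print(render_rows([[1, "a", "x"], [2, "b", "y"]], header=["id", "name", "info"]))
# id,name,info
# 1,a,x
# 2,b,y
```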
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

FI HDFS

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select FI HDFS as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 8 FI HDFS information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the FI HDFS data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select FI HDFS.

    Data Source Name

    Select the FI HDFS data source that you configured in Connecting to Data Sources.

    Separator

    Enter the field separator for the text data in the FI HDFS data source. The separator is used to distinguish different fields in each row of data.

    Storage Subpath

    Enter a subpath of the file, which is user-defined.

    Storage Block Size (M)

    Select the size of the data blocks that ROMA Connect writes to the FI HDFS data source each time.

    Storage Type

    Select Text file as the data storage type of the FI HDFS data source.

    Batch Number

    Enter a user-defined batch number. The batch number cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Metadata

    Define the data fields to be written to the destination text data. Separate different data fields with delimiters. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The configuration when the destination is FI HDFS is similar to that when the destination is MRS HDFS. For details, see MRS HDFS configuration example.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

FI Hive

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select FI Hive as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 9 FI Hive information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the FI Hive data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select FI Hive.

    Data Source Name

    Select the FI Hive data source that you configured in Connecting to Data Sources.

    Database Name

    Select the database to which data will be written.

    Table

    Select the data table to which data will be written.

    Separator

    Enter the field separator for the text data in the FI Hive data source. The separator is used to distinguish different fields in each row of data. The existing characters in the data to be integrated cannot be used as separators.

    Write Mode

    Select the mode in which integration data is written to a data table.

    • Truncate: Delete all data from the destination data table and then write data to the table.
    • Append: Incrementally write data to an existing data table.

    Batch Number

    Enter a user-defined batch number. The batch number cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Storage Type

    Select RCFile or Text file as the storage type for data to be written to the FI Hive data source.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

FI Kafka

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select FI Kafka as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 10 FI Kafka information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the FI Kafka data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select FI Kafka.

    Data Source Name

    Select the FI Kafka data source that you configured in Connecting to Data Sources.

    Topic Name

    Enter a topic name that is created in the FI Kafka service and starts with T_.

    Key

    Enter the key of a message so that the message is stored in the specified partition and the topic can be used as an ordered message queue. If this parameter is left unspecified, messages are stored in different partitions in a distributed manner.

    Metadata

    Define the data fields to be written to the destination Kafka. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The configuration when the destination is FI Kafka is similar to that when the destination is Kafka. For details, see Kafka configuration example.
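    The effect of the Key parameter described above can be sketched as follows; this is a generic illustration of keyed partitioning, not FI Kafka's actual partitioner:

```python
# Generic illustration of keyed partitioning (not FI Kafka's actual
# partitioner): messages with the same key land in the same partition,
# which is what preserves per-key ordering.
import random
import zlib

def choose_partition(key, num_partitions):
    if key is None:
        # no key: messages are distributed across partitions
        return random.randrange(num_partitions)
    return zlib.crc32(key.encode("utf-8")) % num_partitions

assert choose_partition("order-42", 3) == choose_partition("order-42", 3)
```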

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

GaussDB 100

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select GaussDB 100 as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 11 GaussDB 100 information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the GaussDB 100 data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select GaussDB 100.

    Data Source Name

    Select the GaussDB 100 data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    In the destination table, select a field of the String type whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

GaussDB 200

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select GaussDB 200 as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 12 GaussDB 200 information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the GaussDB 200 data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select GaussDB 200.

    Data Source Name

    Select the GaussDB 200 data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    In the destination table, select a field of the String type whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data inserted in the same batch. All data inserted in the same batch shares the same batch number, which can be used to locate the data or roll it back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

HL7

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select HL7 as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 13 HL7 information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the HL7 data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select HL7.

    Data Source Name

    Select the HL7 data source that you configured in Connecting to Data Sources.

    Encoding Format

    Select the encoding mode of data files in the HL7 data source. The value can be UTF-8 or GBK.

    Message Encoding Type

    Select the message type of the data to be integrated. This parameter defines the purpose and usage of messages. Set this parameter based on the definition in the HL7 protocol.

    Trigger Event Type

    Select the event type corresponding to the message type. Set this parameter based on the definition in the HL7 protocol.

    Protocol Version

    Select the HL7 protocol version used by the HL7 data source.

    Metadata

    Define the data fields to be written to the destination HL7. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata.
    • Parsing Path: location of metadata in HL7 messages. For details, see the metadata path configuration description in the subsequent section.

    Description on Metadata Path Configuration

    MSH|^~\\&|hl7Integration|hl7Integration|||||ADT^A01|||2.3|
    EVN|A01|20191212155644
    PID|||PATID1234^5^M11||FN^Patrick^^^MR||19700101|1|||xx Street^^NY^^Ox4DP|||||||
    NK1|1|FN^John^^^MR|Father||999-9999
    NK1|2|MN^Georgie^^^MSS|Mother||999-9999

    Set the metadata parsing path of HL7 messages according to the Terser syntax specifications. In the preceding example HL7 message, each line represents an information segment. Each information segment starts with a three-character paragraph symbol (such as MSH or NK1), which indicates the content of the information segment. The content of a segment is divided into fields by separators.

    • |: field separator, which divides an information segment into fields. The fields in a segment are numbered from 1 in order (the segment identifier is not counted).
    • ^: component separator, which divides the content of a field into components. Within a field, the position of a component is identified by a number, starting from 1.
    • ~: repetition separator, which separates repeated occurrences of a field. (In HL7, subcomponents within a component are further divided by the ampersand (&).)

    For example, in the PID information segment, the field position of 19700101 is 7, so its parsing path is /PID-7. The field position of xx Street is 11 and its component position within that field is 1, so its parsing path is /PID-11-1.

    If an HL7 message contains multiple information segments with the same segment identifier, the occurrences are distinguished by a number in parentheses appended to the identifier: the first occurrence is (0), the second is (1), and so on.

    For example, Father is located in the first NK1 information segment at field position 3, so its parsing path is /NK1(0)-3. Similarly, the parsing path of Mother is /NK1(1)-3.
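As a minimal illustration of these rules (a sketch only, not ROMA Connect's parser), the following Python snippet resolves Terser-style parsing paths such as /PID-7, /PID-11-1, and /NK1(1)-3 against the example message:

```python
# Minimal sketch of Terser-style path resolution (not ROMA Connect's parser).
import re

MESSAGE = """MSH|^~\\&|hl7Integration|hl7Integration|||||ADT^A01|||2.3|
EVN|A01|20191212155644
PID|||PATID1234^5^M11||FN^Patrick^^^MR||19700101|1|||xx Street^^NY^^Ox4DP|||||||
NK1|1|FN^John^^^MR|Father||999-9999
NK1|2|MN^Georgie^^^MSS|Mother||999-9999"""

def resolve(message: str, path: str) -> str:
    """Resolve a parsing path such as /PID-7, /PID-11-1, or /NK1(1)-3."""
    m = re.fullmatch(r"/?([A-Z0-9]{3})(?:\((\d+)\))?-(\d+)(?:-(\d+))?", path)
    seg_id, rep, field, comp = m.groups()
    # All segments with this identifier; (0) is the first occurrence, (1) the second.
    segments = [s for s in message.splitlines() if s.startswith(seg_id)]
    value = segments[int(rep or 0)].split("|")[int(field)]  # fields are numbered from 1 after the identifier
    if comp:
        value = value.split("^")[int(comp) - 1]             # components are numbered from 1
    return value

print(resolve(MESSAGE, "/PID-7"))      # 19700101
print(resolve(MESSAGE, "/PID-11-1"))   # xx Street
print(resolve(MESSAGE, "/NK1(1)-3"))   # Mother
```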

    The following shows the destination configuration for writing the 19700101 and xx Street fields of the preceding HL7 message:

    Figure 4 HL7 configuration example
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

HANA

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select HANA as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 14 HANA information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the HANA data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select HANA.

    Data Source Name

    Select the HANA data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    Select a field of type String with a length greater than 14 characters in the destination table as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.
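The mechanics can be sketched as follows (an illustration only: the exact number format ROMA Connect generates is not documented here, and the batch_no field name is hypothetical):

```python
# Illustration: one random batch number shared by every row in a batch.
# The batch_no field name and uuid-based format are assumptions, not
# ROMA Connect's documented behavior.
import uuid

def new_batch_number() -> str:
    return uuid.uuid4().hex          # 32 characters, satisfying the "> 14 characters" rule

batch = new_batch_number()
rows = [{"name": "a"}, {"name": "b"}]
for row in rows:
    row["batch_no"] = batch          # same value for the whole batch, enabling locate/rollback
```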

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

IBM MQ

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select IBM MQ as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 15 IBM MQ information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the IBM MQ data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select IBM MQ.

    Data Source Name

    Select the IBM MQ data source that you configured in Connecting to Data Sources.

    Destination Type

    Select the message transfer model of the IBM MQ data source. The value can be Topic or Queue.

    Destination Name

    Enter the name of a topic or queue to which to send data. Ensure that the topic or queue already exists.

    Metadata

    Define each underlying key-value data element to be written to the destination in JSON format. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.
    • Parsing Path: full path of metadata. For details, see Description on Metadata Parsing Path Configuration.

    Description on Metadata Parsing Path Configuration

    • Data in JSON format does not contain arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b.c, the complete path of element d is a.b.d, and elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b.c, and Parsing Path of element d must be set to a.b.d.

      {
         "a": {
            "b": {
               "c": "xx",
               "d": "xx"
            }
         }
      }
    • Data in JSON format contains arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b[i].c, and the complete path of element d is a.b[i].d. Elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b[i].c, and Parsing Path of element d must be set to a.b[i].d.

      {
         "a": {
            "b": [{
               "c": "xx",
               "d": "xx"
            },
            {
               "c": "yy",
               "d": "yy"
            }
            ]
         }
      }
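How such parsing paths map onto JSON data can be sketched as follows (an illustrative interpretation, where [i] is read as "for each element of the array under this key"):

```python
# Illustrative resolver for parsing paths such as a.b.c and a.b[i].c,
# where [i] fans out over every element of the array under that key.
def resolve_path(data, path):
    nodes = [data]
    for part in path.split("."):
        next_nodes = []
        for node in nodes:
            if part.endswith("[i]"):
                next_nodes.extend(node[part[:-3]])   # one result per array element
            else:
                next_nodes.append(node[part])
        nodes = next_nodes
    return nodes

flat = {"a": {"b": {"c": "xx", "d": "xx"}}}
nested = {"a": {"b": [{"c": "xx", "d": "xx"}, {"c": "yy", "d": "yy"}]}}

print(resolve_path(flat, "a.b.c"))       # ['xx']
print(resolve_path(nested, "a.b[i].c"))  # ['xx', 'yy']
```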

    The configuration when the destination is IBM MQ is similar to that when the destination is ActiveMQ. For details, see ActiveMQ configuration example.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

Kafka

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select Kafka as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 16 Kafka information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the Kafka data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select Kafka.

    Data Source Name

    Select the Kafka data source that you configured in Connecting to Data Sources.

    Topic Name

    Select the name of the topic to which data is to be written.

    Key

    Enter the key value of a message so that the message is stored in a specified partition. Messages with the same key are written to the same partition and therefore remain ordered. If this parameter is left unspecified, messages are distributed across partitions.

    Metadata

    Define the data fields to be written to the destination Kafka. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The following figure shows a configuration example when the destination is Kafka. id, name, and info are the data fields to be written to the Kafka data source.

    Figure 5 Kafka configuration example

    The structure of the message written to Kafka is {"id":"xx", "name":"yy", "info":"zz"}, where xx, yy, and zz are the data values transferred from the source.
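The message construction can be sketched as follows (the partition calculation is only a stand-in to illustrate why a fixed Key preserves ordering; Kafka's real partitioner differs in detail):

```python
# Sketch of the message written to Kafka for the example above, plus a
# stand-in partition calculation showing why a fixed Key keeps messages
# in order (Kafka's actual partitioner uses a different hash).
import json

def build_message(aliases, source_values):
    # Metadata aliases (id, name, info) paired with the values from the source.
    return json.dumps(dict(zip(aliases, source_values)), separators=(",", ":"))

def partition_for(key, num_partitions):
    # Same key -> same partition -> per-partition ordering is preserved.
    return sum(key.encode()) % num_partitions

msg = build_message(["id", "name", "info"], ["xx", "yy", "zz"])
print(msg)  # {"id":"xx","name":"yy","info":"zz"}
```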

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MySQL

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MySQL as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 17 MySQL information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MySQL data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MySQL.

    Data Source Name

    Select the MySQL data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    Select a field of type String with a length greater than 14 characters in the destination table as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all integrated synchronization fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MongoDB

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MongoDB as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 18 MongoDB information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MongoDB data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MongoDB.

    Data Source Name

    Select the MongoDB data source that you configured in Connecting to Data Sources.

    Destination Set

    Select the data set to be written to the MongoDB data source. (The data set is equivalent to a data table in a relational database.) Then, click Select Fields in Set and select only the column fields that you want to write.

    Upsert

    This parameter specifies whether to update or insert data at the destination, that is, whether to directly update existing data fields in the data set at the destination.

    Upsert key

    This parameter is mandatory only if Upsert is enabled.

    Select the data field to be used as the upsert key.
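The upsert behavior can be sketched in plain Python (an illustration of the semantics only, not ROMA Connect or MongoDB code):

```python
# Plain-Python illustration of Upsert semantics: match on the upsert
# key, update the existing document if found, insert a new one otherwise.
def upsert(collection, key, doc):
    for existing in collection:
        if existing.get(key) == doc.get(key):
            existing.update(doc)     # a matching document exists: update its fields
            return "updated"
    collection.append(dict(doc))     # no match: insert a new document
    return "inserted"

users = [{"id": 1, "name": "a"}]
print(upsert(users, "id", {"id": 1, "name": "b"}))  # updated
print(upsert(users, "id", {"id": 2, "name": "c"}))  # inserted
```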

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MRS Hive

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MRS Hive as the data source type at the destination.

  1. On the Create Task page, configure destination information.

    If the source data field contains special characters \r, \n, and \01, ROMA Connect deletes these special characters and then writes the data to MRS Hive.

    Table 19 MRS Hive information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MRS Hive data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MRS Hive.

    Data Source Name

    Select the MRS Hive data source that you configured in Connecting to Data Sources.

    Database Name

    Select the database to which data will be written.

    NOTE:

    You need to use a self-built database instead of the default database of MRS Hive.

    Table

    Select the data table to which data will be written.

    Separator

    Enter the field separator for the text data in the MRS Hive data source. The separator is used to distinguish different fields in each row of data.

    Write Mode

    Select the mode in which integration data is written to a data table.

    • Truncate: Delete all data from the destination data table and then write data to the table.
    • Append: Incrementally write data to an existing data table.
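In Hive SQL terms, the two write modes correspond roughly to the following statements (an assumption for illustration; the exact statements ROMA Connect issues are not documented here):

```python
# Assumed mapping of the two write modes onto Hive SQL statement forms.
def write_statement(mode, table):
    if mode == "Truncate":
        return f"INSERT OVERWRITE TABLE {table} SELECT ..."  # replaces all existing rows
    if mode == "Append":
        return f"INSERT INTO TABLE {table} SELECT ..."       # adds to existing rows
    raise ValueError(f"unknown write mode: {mode}")

print(write_statement("Truncate", "sales"))
print(write_statement("Append", "sales"))
```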

    Batch Number

    User-defined batch number. The batch number cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.

    Storage Type

    Select RCFile or Text file as the storage type for data to be written to the MRS Hive data source.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MRS HDFS

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MRS HDFS as the data source type at the destination.

  1. On the Create Task page, configure destination information.

    If the source data field contains special characters \r, \n, and \01, ROMA Connect deletes these special characters and then writes the data to MRS HDFS.

    Table 20 MRS HDFS information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MRS HDFS data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MRS HDFS.

    Data Source Name

    Select the MRS HDFS data source that you configured in Connecting to Data Sources.

    Separator

    Enter the field separator for the text data in the MRS HDFS data source. The separator is used to distinguish different fields in each row of data.

    Storage Subpath

    Enter the path of the data to be integrated in the hdfs:///hacluster directory of MRS HDFS.

    Storage Block Size (M)

    Select the size of data to be written each time ROMA Connect writes data to the MRS HDFS data source.

    Storage Type

    Select Text file as the data storage type of the MRS HDFS data source.

    Batch Number

    User-defined batch number. The batch number cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.

    Metadata

    Define the data fields to be written to the destination text data. Separate different data fields with delimiters. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The following figure shows a configuration example when the destination is MRS HDFS. id, name, and info are the data fields to be written to the MRS HDFS data source.

    Figure 6 MRS HDFS configuration example
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MRS HBase

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MRS HBase as the data source type at the destination.

  1. On the Create Task page, configure destination information.

    If the source data field contains special characters \r, \n, and \01, ROMA Connect deletes these special characters and then writes the data to MRS HBase.

    Table 21 MRS HBase information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MRS HBase data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MRS HBase.

    Data Source Name

    Select the MRS HBase data source that you configured in Connecting to Data Sources.

    Table

    Select the data table to which data will be written.

    Column Family

    Define the data column fields to be written to the destination data table. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    Field Name: user-defined name of a field in a data column.

    The following figure shows a configuration example when the destination is MRS HBase. id, name, and info are the data fields to be written to the MRS HBase data source.

    Figure 7 MRS HBase configuration example
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

MRS Kafka

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select MRS Kafka as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 22 MRS Kafka information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the MRS Kafka data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select MRS Kafka.

    Data Source Name

    Select the MRS Kafka data source that you configured in Connecting to Data Sources.

    Topic Name

    Enter a topic name that is created in the MRS Kafka service and starts with T_.

    Key

    Enter the key value of a message so that the message is stored in a specified partition. Messages with the same key are written to the same partition and therefore remain ordered. If this parameter is left unspecified, messages are distributed across partitions.

    Metadata

    Define the data fields to be written to the destination Kafka. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The configuration when the destination is MRS Kafka is similar to that when the destination is Kafka. For details, see Kafka configuration example.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

OBS

Back to Overview

If Integration Mode is set to Scheduled, you can select OBS as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 23 OBS information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the OBS data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select OBS.

    Data Source Name

    Select the OBS data source that you configured in Connecting to Data Sources.

    Object Type

    Select the type of the data file to be written to the OBS data source. Currently, Text file and Binary file are supported.

    Encoding Format

    This parameter is mandatory only if Object Type is set to Text file.

    Select the encoding mode of the data file to be written to the OBS data source. The value can be UTF-8 or GBK.

    Path

    Enter the name of the object to be written to the OBS data source. The value of Path cannot end with a slash (/).

    File Name Prefix

    Enter the prefix of the file name. This parameter is used together with Time Format to define the name of the file to be written to the OBS data source.

    Time Format

    Select the time format to be used in the file name. This parameter is used together with File Name Prefix to define the data file to be written to the OBS data source.

    File Type

    Select the format of the data file to be written to the OBS data source. A text file can be in TXT or CSV format, and a binary file can be in XLS or XLSX format.

    Advanced Attributes

    This parameter is mandatory only if File Type is set to CSV.

    Select whether to configure the advanced properties of the file.

    Newline

    This parameter is mandatory only if Advanced Attributes is set to Enable.

    Enter a newline character in the file content to distinguish different data lines in the file.

    Enclosure Character

    This parameter is mandatory only if Advanced Attributes is set to Enable.

    If you select Use, each data field in the data file is enclosed in double quotation marks ("). If a data field contains the same character as the separator or newline character, the field will not be split into two fields. For example, if the source data contains the field aa|bb and the vertical bar (|) is set as the separator, the destination data file stores the field as aa|bb instead of splitting it into aa and bb.
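The effect of the enclosure character can be demonstrated with Python's csv module (an illustration of the quoting behavior, not ROMA Connect code):

```python
# With QUOTE_ALL every field is wrapped in double quotation marks, so a
# field containing the separator (aa|bb) survives a round trip instead
# of being split into aa and bb.
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter="|", quoting=csv.QUOTE_ALL)
writer.writerow(["aa|bb", "second"])          # writes "aa|bb"|"second"

reader = csv.reader(io.StringIO(buf.getvalue()), delimiter="|")
print(next(reader))   # ['aa|bb', 'second'] -- not split into aa and bb
```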

    Field Separator

    This parameter is mandatory only if File Type is set to TXT or Advanced Attributes is set to Enable.

    Enter the field separator for the file contents to distinguish different fields in each row of data.

    Add File Header

    Determine whether to add a file header to the data file to be written. The file header is the first line or several lines at the beginning of a file, which helps identify and distinguish the file content.

    File Header

    This parameter is mandatory only if Add File Header is set to Yes.

    Enter the file header information. Use commas (,) to separate multiple file headers.

    Metadata

    Define the data fields to be written to the destination file. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    The following figure shows a configuration example when the destination is OBS. id, name, and info are the data fields to be written to the OBS data source.

    Figure 8 OBS configuration example
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

Oracle

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select Oracle as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 24 Oracle information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the Oracle data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select Oracle.

    Data Source Name

    Select the Oracle data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    Select a field of type String with a length greater than 14 characters in the destination table as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

PostgreSQL

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select PostgreSQL as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 25 PostgreSQL information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the PostgreSQL data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select PostgreSQL.

    Data Source Name

    Select the PostgreSQL data source that you configured in Connecting to Data Sources.

    Table

    Select the data table to which data is to be written and click Select Table Field to select the data column fields to be integrated.

    Batch Number Field

    Select a field of type String with a length greater than 14 characters in the destination table as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies the data of one batch: every record inserted in the same batch carries the same batch number, so the batch can later be located or rolled back.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

Redis

Back to Overview

If Integration Mode is set to Scheduled or Real-Time, you can select Redis as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 26 Redis information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the Redis data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select Redis.

    Data Source Name

    Select the Redis data source that you configured in Connecting to Data Sources.

    Key Prefix

    Enter the key name prefix of the data to be integrated in the Redis data source. The key name in the Redis data source consists of the key prefix, separator, and key suffix field. Each row of data is stored in the Redis as the key value. For details about the key format, see Description on Key and Value Formats.

    Key Suffix

    Select a field whose value is unique in the source data as the key suffix. The key name in the Redis data source consists of the key prefix, separator, and key suffix field. In this way, each row of data can be integrated into different keys of the Redis data source.

    If Data Type is set to List, Set, or ZSet, Key Suffix can be left blank. That is, only one key is generated based on the key prefix. In this case, all data rows are integrated to the same key of the Redis data source as elements.

    Separator

    This parameter is mandatory only if Key Suffix is not left blank.

    Enter the separator between Key Prefix and Key Suffix. The key name in the Redis data source consists of Key Prefix, Separator, and Key Suffix.

    Data Type

    Select the data type of the key in the Redis data source. Options are as follows:

    • String
    • List
    • Map
    • Set
    • ZSet

    List Appending Mode

    This parameter is mandatory only if Data Type is set to List.

    Select the data appending mode for the key of the list type.

    • lpush: The current data is inserted at the head of the list.
    • rpush: The current data is inserted at the tail of the list.

    sortColumn

    This parameter is mandatory only if Data Type is set to ZSet.

    Select the source data field for sorting data elements.

    Validity Period (s)

    Period for which a key in the Redis data source remains valid. The value 0 indicates that the key never expires.

    Write Format

    This parameter is mandatory only if Data Type is set to String, List, Set, or ZSet. If Data Type is set to Map, the JSON format is used by default.

    Select JSON or CUSTOMIZE as the format of the data to be written to the Redis data source.

    Metadata

    Define the format of the value to be written to the destination key. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    If Write Format is set to JSON, the metadata is stored as the key value in the Redis data source in JSON format. If Write Format is set to CUSTOMIZE, customize the format of the destination value. All metadata is combined with the prefix and suffix, and then stored as the key value in the Redis data source. For details about the value format, see Description on Key and Value Formats.

    Description on Key and Value Formats

    Assume the following destination settings: Key Prefix is roma; Key Suffix is the source field aaa, whose values are unique so that each key name is unique; Separator is the vertical bar (|), which separates the key prefix from the key suffix. The source data is as follows:

    +-------+-------+
    |  aaa  |  bbb  |
    +-------+-------+
    |   1   |   x   |
    |   2   |   y   |
    |   3   |   z   |
    +-------+-------+
    • If Data Type is set to String and Write Format is set to JSON, the keys and values written to the Redis data source are shown in Figure 9.
       key               value
      --------------------------------
      roma|1     "{"bbb":"x","aaa":1}"
      roma|2     "{"bbb":"y","aaa":2}"
      roma|3     "{"bbb":"z","aaa":3}"
      Figure 9 Metadata configuration (JSON)
    • If Data Type is set to String and Write Format is set to CUSTOMIZE, the keys and values written to the Redis data source are shown in Figure 10.
       key           value
      ------------------------
      roma|1     "bbb_x&aaa_1"
      roma|2     "bbb_y&aaa_2"
      roma|3     "bbb_z&aaa_3"
      Figure 10 Metadata configuration (CUSTOMIZE)
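    The composition rule above can be sketched in Python. This is an illustrative helper only (the function name and parameter defaults are assumptions for the example data; ROMA Connect applies the rule internally):

    ```python
    import json

    def build_kv(record, key_prefix="roma", key_suffix_field="aaa",
                 separator="|", write_format="JSON",
                 pair_sep="&", field_sep="_"):
        """Compose the Redis key and value as described above:
        key = prefix + separator + suffix field value; the value is
        either the record as JSON or custom "alias_value" pairs."""
        key = f"{key_prefix}{separator}{record[key_suffix_field]}"
        if write_format == "JSON":
            value = json.dumps(record, separators=(",", ":"))
        else:  # CUSTOMIZE: join every field as alias_value pairs
            value = pair_sep.join(f"{k}{field_sep}{v}" for k, v in record.items())
        return key, value
    ```

    For the first source row, build_kv({"bbb": "x", "aaa": 1}) yields the key roma|1 with the JSON value, and the same call with write_format="CUSTOMIZE" yields the custom value bbb_x&aaa_1, matching the figures above.
    
    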
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

RabbitMQ


If Integration Mode is set to Scheduled or Real-Time, you can select RabbitMQ as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 27 RabbitMQ information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the RabbitMQ data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select RabbitMQ.

    Data Source Name

    Select the RabbitMQ data source that you configured in Connecting to Data Sources.

    New Queue Creation

    Determine whether to create a queue in the RabbitMQ data source.

    • If you select Yes, a new queue is created and the data to be integrated is sent to the queue.
    • If you select No, the data to be integrated is sent to an existing queue.

    Exchange Mode

    Select a routing mode for the exchange in the RabbitMQ data source to forward messages to the queue. If New Queue Creation is set to Yes, select the routing mode for the new queue. If New Queue Creation is set to No, select the routing mode that is the same as that of the existing destination queue.

    • Direct: If the routing key in a message fully matches the queue, the message will be forwarded to the queue.
    • Topic: If the routing key in a message matches the queue's binding pattern (wildcard matching is supported), the message will be forwarded to the queue.
    • Fanout: All messages will be forwarded to the queue.
    • Headers: If the Headers attribute of a message fully matches the queue, the message will be forwarded to the queue.

    Exchange Name

    Enter the exchange name of the RabbitMQ data source. If New Queue Creation is set to Yes, the exchange name of the new queue is used. If New Queue Creation is set to No, configure the exchange name that is the same as that of the existing destination queue.

    Routing Key

    This parameter is mandatory only if Exchange Mode is set to Direct or Topic.

    RabbitMQ uses the routing key as the judgment condition. Messages that meet the condition will be forwarded to the queue. If New Queue Creation is set to Yes, enter the routing key of the new queue. If New Queue Creation is set to No, enter the routing key that is the same as that of the existing destination queue.

    Message Parameters

    This parameter is mandatory only if Exchange Mode is set to Headers.

    RabbitMQ uses Headers as the judgment condition. Messages that meet the condition will be forwarded to the queue. If New Queue Creation is set to Yes, enter the headers of the new queue. If New Queue Creation is set to No, enter the headers that are the same as those of the existing destination queue.

    Queue Name

    This parameter is mandatory only if New Queue Creation is set to Yes.

    Enter the name of a new queue.

    Automatic Deletion

    This parameter specifies whether a queue will be automatically deleted if no client is connected.

    Persistence

    This parameter specifies whether messages in a queue are stored permanently.

    Metadata

    Define each underlying key-value data element to be written to the destination in JSON format. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.
    • Parsing Path: full path of metadata. For details, see Description on Metadata Parsing Path Configuration.

    Description on Metadata Parsing Path Configuration

    • Data in JSON format does not contain arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b.c, the complete path of element d is a.b.d, and elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b.c, and Parsing Path of element d must be set to a.b.d.

      {
         "a": {
            "b": {
               "c": "xx",
               "d": "xx"
            }
         }
      }
    • Data in JSON format contains arrays:

      For example, in the following JSON data written to the destination, the complete path of element a is a, the complete path of element b is a.b, the complete path of element c is a.b[i].c, and the complete path of element d is a.b[i].d. Elements c and d are underlying data elements.

      In this scenario, Parsing Path of metadata c must be set to a.b[i].c, and Parsing Path of element d must be set to a.b[i].d.

      {
         "a": {
            "b": [{
               "c": "xx",
               "d": "xx"
            },
            {
               "c": "yy",
               "d": "yy"
            }
            ]
         }
      }
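      The parsing-path rule for both cases can be sketched as a small resolver. This is an illustrative sketch of the rule described above, not ROMA Connect code; `[i]` is treated as "walk every element of the array":

      ```python
      import re

      def resolve_path(data, path):
          """Resolve a metadata parsing path such as 'a.b.c' or
          'a.b[i].c' against parsed JSON and return the matched
          values (one per array element when '[i]' is used)."""
          values = [data]
          for part in path.split("."):
              m = re.fullmatch(r"(\w+)\[i\]", part)
              if m:  # array step: fan out over every element
                  values = [item for v in values for item in v[m.group(1)]]
              else:  # plain object step
                  values = [v[part] for v in values]
          return values
      ```

      Applied to the array example above, resolve_path(data, "a.b[i].c") returns ["xx", "yy"]; applied to the non-array example, resolve_path(data, "a.b.c") returns ["xx"].
      
      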

    The preceding JSON data that does not contain arrays is used as an example. The following describes the configuration when the destination is RabbitMQ:

    Figure 11 RabbitMQ configuration example
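    The Direct and Topic exchange modes differ only in how the routing key is matched: Direct requires an exact match, while Topic matches dot-separated patterns in which * stands for exactly one word and # for zero or more words. A minimal sketch of that matching rule (standard RabbitMQ topic semantics, written here for illustration):

    ```python
    def topic_matches(pattern, routing_key):
        """Return True if a Topic-exchange binding pattern matches a
        routing key: '*' matches one dot-separated word, '#' matches
        zero or more words. Direct mode is the special case of a
        pattern with no wildcards."""
        def match(p, k):
            if not p:
                return not k
            if p[0] == "#":  # '#' may consume zero or more words
                return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
            if not k:
                return False
            if p[0] == "*" or p[0] == k[0]:
                return match(p[1:], k[1:])
            return False
        return match(pattern.split("."), routing_key.split("."))
    ```

    For example, the pattern logs.*.error matches logs.app.error but not logs.app.db.error, whereas logs.# matches both.
    
    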
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

SQL Server


If Integration Mode is set to Scheduled or Real-Time, you can select SQL Server as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 28 SQL Server information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the SQL Server data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select SQL Server.

    Data Source Name

    Select the SQL Server data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    Select a field in the destination table whose type is String and whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies data inserted in the same batch. All rows inserted in one batch share the same batch number, so the batch can be located or rolled back as a unit.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.
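    The batch number mechanism can be sketched as follows. The table and column names (dest_table, batch_no) are hypothetical, and ROMA Connect generates the batch number itself; the sketch only illustrates why a shared random value per batch enables locating or rolling back that batch:

    ```python
    import secrets

    def new_batch_number():
        """Generate a random batch number longer than 14 characters
        (16 hex characters here), shared by all rows of one batch."""
        return secrets.token_hex(8)

    # Hypothetical statements: every row inserted in one batch carries
    # the same batch number, so one DELETE removes the whole batch.
    batch_no = new_batch_number()
    insert_sql = "INSERT INTO dest_table (col_a, col_b, batch_no) VALUES (?, ?, ?)"
    rollback_sql = "DELETE FROM dest_table WHERE batch_no = ?"
    ```
    
    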

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

TaurusDB


If Integration Mode is set to Scheduled or Real-Time, you can select TaurusDB as the data source type at the destination.

  1. On the Create Task page, configure destination information.
    Table 29 TaurusDB information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the TaurusDB data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select TaurusDB.

    Data Source Name

    Select the TaurusDB data source that you configured in Connecting to Data Sources.

    Table

    Select an existing table and click Select Table Field to select only the column fields that you want to integrate.

    Batch Number Field

    Select a field in the destination table whose type is String and whose length is greater than 14 characters as the batch number field. The batch number field cannot be the same as any destination field in the mapping information.

    The value of this field is a random number that identifies data inserted in the same batch. All rows inserted in one batch share the same batch number, so the batch can be located or rolled back as a unit.

    Update Changed Fields Only

    If this option is enabled, only the table fields whose values change are updated. If this option is disabled, all table fields are updated.

  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.

Custom Data Sources


If Integration Mode is set to Scheduled or Real-Time, you can select a custom data source at the destination.

  1. On the Create Task page, configure destination information.
    Table 30 Custom data source information at the destination

    Parameter

    Description

    Instance

    Set this parameter to the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.

    Integration Application

    Select the integration application to which the custom data source belongs. Ensure that the integration application has been configured in Connecting to Data Sources.

    Data Source Type

    Select a custom data source type.

    Data Source Name

    Select the custom data source that you configured in Connecting to Data Sources.

    Metadata

    Define each underlying key-value data element to be written to the destination in JSON format. The number of fields to be integrated at the source determines the number of metadata records defined on the destination.

    • Alias: user-defined metadata name.
    • Type: data type of metadata. The value must be the same as the data type of the corresponding parameter in the source data.

    In addition to the preceding parameters, different custom data sources define different writer parameters. Set the parameters based on the original definition specifications of the connector. You can locate the connector used by the custom data source on the Assets page of the ROMA Connect console and view the writer parameter definition of the connector.

    The following figure shows an example of configuring a custom data source for sending emails. The destination is the custom data source. The receiver and title parameters are the destination parameters defined in the connector. id, name, and info are the data fields to be written to the custom data source.

    Figure 12 Custom data source configuration example
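    For the email example above, the message handed to the connector's writer might look like the following. This is a hypothetical sketch: the receiver and title writer parameters and the id, name, and info metadata fields come from the example, but the actual payload layout is defined by each connector's writer specification:

    ```python
    import json

    # Hypothetical payload for a "send email" custom connector:
    # writer parameters (receiver, title) plus the mapped data fields.
    payload = {
        "receiver": "ops@example.com",   # assumed writer parameter value
        "title": "Integration task result",
        "data": {"id": 1, "name": "demo", "info": "hello"},
    }
    message = json.dumps(payload)
    ```
    
    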
  2. After configuring the destination information, proceed with Configuring a Data Mapping Rule.