Creating a Composite Task
Overview
You can create a composite task if you need to continuously synchronize real-time data. A composite task allows FDI to perform real-time, incremental synchronization of multiple data tables from the source to the destination, improving data integration and synchronization efficiency.
The composite task supports flexible mappings of fields between data tables. For example, multiple fields in one data table at the source can be mapped to different data tables at the destination, or fields in multiple data tables at the source can be mapped to one data table at the destination.
Prerequisites
- You have connected to data sources at the source and destination. For details, see Connecting to Data Sources.
In the data source configuration at the source, the value of Database must be the same as the actual database name (case-sensitive). Otherwise, data synchronization will fail.
- The CDC function has been enabled at the source. The CDC implementation mode varies depending on the data source type.
- The retention period of CDC archive logs at the source must cover the log positions that the integration task still needs to parse. Otherwise, the task cannot find the archive logs and incremental synchronization fails. Therefore, do not stop a data integration task for a long time, and retain archive logs for at least two days.
- Do not perform Data Definition Language (DDL) operations on the source database during the first data synchronization.
- Each composite task consumes resources on the database server and in the FDI plug-in process. Therefore, you are advised not to create too many composite tasks for one database.
- You can configure multiple database tables under multiple schemas in a single CDC task to implement unified collection for full or incremental data.
- While a composite task is running, you can add a table to it. Full or incremental collection of the new table starts after the task restarts.
- Synchronization is not supported for the following in Oracle data sources at the source:
  - Fields of the large text type and binary type
  - Data tables whose names contain lowercase letters
  - Data tables without primary keys
    If such a table contains only a small amount of data, you can instead collect full data once a day. For a PostgreSQL destination, the table data can be cleared before each write. If data is collected from an Oracle database and the table has no primary key, you can use the internal row ID of the Oracle database as the primary key; the row ID is an 18-character string of digits and letters.
  - Data tables or data fields whose names are reserved words in the database
  - Data deleted in truncate mode or by deleting an entire table
- For the MySQL data source at the source:
  If the MySQL database uses the MGR cluster mode, the source data source must connect directly to the active (primary) node rather than a route node.
  If the MySQL database contains a large amount of data, the database connection may time out during the first synchronization. You can increase the interactive_timeout and wait_timeout parameters of the MySQL database to avoid this problem. (Example checks appear in the SQL sketch after this list.)
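The following is a minimal SQL sketch of these source-side checks, not an authoritative procedure: it assumes you have the required database privileges, and the table name some_table and the timeout value 28800 are illustrative only.

-- MySQL source: verify that binlog-based CDC prerequisites are met
SHOW VARIABLES LIKE 'log_bin';        -- should be ON
SHOW VARIABLES LIKE 'binlog_format';  -- row-based logging is typically required for CDC
SHOW VARIABLES LIKE 'gtid_mode';      -- GTID mode avoids sync failures after a switchover

-- MySQL source: raise connection timeouts before the first full synchronization
SET GLOBAL interactive_timeout = 28800;
SET GLOBAL wait_timeout = 28800;

-- MySQL MGR cluster (MySQL 8.0): locate the primary node to connect to directly
SELECT member_host, member_port
FROM performance_schema.replication_group_members
WHERE member_role = 'PRIMARY';

-- Oracle source: check the oldest archived redo log still available
-- (retention should cover at least two days)
SELECT MIN(first_time) FROM v$archived_log WHERE deleted = 'NO';

-- Oracle source table without a primary key: the internal row ID can serve as one
SELECT t.ROWID AS row_id, t.* FROM some_table t;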
Procedure
- Log in to the ROMA Connect console. On the Instances page, click View Console next to a specific instance.
- In the navigation pane on the left, choose Fast Data Integration > Task Management. On the page displayed, click Create Composite Task.
- On the Create Composite Task page, configure basic task information.
Table 1 Basic task information

- Name: The task name cannot be modified after the task is created. Enter a name that follows your naming conventions to facilitate search.
- Integration Mode: Select the mode used for data integration.
  - Scheduled: The task is executed according to a schedule to integrate data from the source to the destination.
    NOTE:
    This mode applies only to composite tasks whose data source type is MySQL, Oracle, PostgreSQL, SQL Server, or HANA.
  - Real-Time: The task continuously detects data updates at the source and integrates them to the destination in real time.
  The supported data source types for each mode are listed in Table 2.
- Description: Add a description based on the actual task usage to differentiate tasks. The description can be modified after the task is created.
- Tag: Add or select an existing tag to classify tasks for quick search. New tags are saved when you save the task and can be selected directly when you create another task.
- Operation Types: Mandatory when Integration Mode is set to Real-Time. Select the operation types of database logs to capture: Insert, Delete, and Update. For example, if you select Insert and Update, only the logs for data inserts and updates in the database are obtained.
- Use Quartz Cron Expression: Mandatory when Integration Mode is set to Scheduled. Specifies whether to schedule the task using a Quartz cron expression.
- Period: Mandatory when Integration Mode is set to Scheduled. Set the task execution interval, from minutes to months. For example, if Unit is set to Day and Period is set to 1, the task is executed once every day.
- Expression: Mandatory when Use Quartz Cron Expression is set to Yes. Configure a Quartz cron expression for task scheduling. The Second field in the expression is fixed to 0 because ROMA Connect supports only down-to-the-minute schedules. For details, see Appendix: Quartz Cron Expression Configuration. For example, to execute a task every 15 minutes from 01:00 a.m. to 04:00 a.m. every day, use the following expression:
  0 0/15 1-4 * * ?
- Effective Time: Mandatory when Integration Mode is set to Scheduled. Start time of the task.
- Sync Existing Data: Available only after you click Edit for a task. This option takes effect when you add a table mapping to a composite task that has already started.
  - When enabled, the task first synchronizes all existing data of the newly added tables, and then synchronizes data in increments.
  - When disabled, the task synchronizes only data generated after the task starts.
- Configure a mapping between data sources at the source and destination.
Table 2 Source and destination configuration information

Source:
- Instance Name: Select the ROMA Connect instance that is being used.
- Integration Application: Select the integration application to which the data source at the source belongs.
- Data Source Type: Select the data source type at the source.
  - Scheduled: MySQL, Oracle, SQL Server, PostgreSQL, and HANA
  - Real-time: MySQL, Oracle, and SQL Server

Destination:
- Instance Name: Select the ROMA Connect instance that is being used. After the source instance is configured, the destination instance is automatically associated and does not need to be configured.
- Integration Application: Select the integration application to which the data source at the destination belongs.
- Data Source Type: Select the data source type at the destination.
  - Scheduled: MySQL, Oracle, PostgreSQL, SQL Server, and HANA
  - Real-time: MySQL, Oracle, PostgreSQL, Kafka, and SQL Server
- Configure data table mappings between the source and destination in manual or automatic mode.
NOTE:
- The length of a data field at the destination must be greater than or equal to that of the data field at the source. Otherwise, the synchronized data will be lost.
- A maximum of 1000 data tables can be synchronized in a task.
- If the data source type at the destination is Kafka, the table displayed on the destination is a virtual table. You only need to edit the field mappings in the table.
- For the Oracle destination data source, if the source primary key field is empty, the record is discarded by default and no scheduling log error code is generated.
- Automatic mapping
- Click Automatic Mapping. In the dialog box that is displayed, configure the mapping policy and range and then click Start Mapping. The mapping between data tables will be automatically generated.
- Click Edit to modify a mapping between data tables as required.
- Click View. In the dialog box displayed, modify the mappings between fields in the data tables as required or add new mappings.
The length of a data field at the destination must be greater than or equal to that of the data field at the source. Otherwise, the synchronized data will be lost.
- Manually adding a table mapping
- In the Table Mappings area, click Add to manually add a table mapping.
- Set Source Table Name and Destination Table Name for the table mapping.
- Click Map in the Operation column. In the window displayed, view or edit the mapped fields, delete unnecessary fields, or click Add to add a field mapping. Click the icon in the upper right corner to set the configuration items for the field mapping:
- Source Field: Select a field name in the source table, for example, ID.
- Destination Field: Select the corresponding field name in the destination table, for example, Name.
- Prefix: Enter the prefix of the synchronization field.
- Suffix: Enter the suffix of the synchronization field.
For example, if the field content is test, the prefix is tab1, and the suffix is 1, the field after synchronization is tab1test1.
Figure 1 Configuring field mappings
Function mapping is available only when PostgreSQL is set as the destination. Click Add Function Mapping to map functions.
- Mapping Function: Select a mapping relationship.
- Destination Field: Select a destination field to map, for example, Name.
- Click Save.
- Configure abnormal data storage.
NOTE:
This configuration is available only when the data source type at the destination is MySQL, Oracle, PostgreSQL, or SQL Server. Before the configuration, connect to the OBS data source. For details, see Connecting to an OBS Data Source.
During each task execution, if some data at the source meets the integration conditions but cannot be integrated to the destination due to network jitter or other exceptions, ROMA Connect stores the data in the OBS bucket as text files.

Table 3 Abnormal data storage information

- Source Data Type: This parameter can only be set to OBS.
- Integration Application: Select the required integration application.
- Name: Select the OBS data source that you configured.
- Path: Enter the object name of the OBS data source where abnormal data is to be stored. The value of Path cannot end with a slash (/).
- Set transaction thresholds.
NOTE:
Transaction thresholds are available only when the integration mode is set to Real-Time and the source data type is set to Oracle.
For Oracle sources, transactions whose size or duration exceeds the values set for the following two parameters are forcibly committed.

Table 4 Setting transaction thresholds

- Transaction Size: Default: 100,000
- Transaction Duration (min): Default: 250
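Before changing these defaults, it can help to look at how large and long the source database's transactions actually run. The following is a minimal Oracle sketch, assuming you have access to the v$ dynamic performance views:

-- Oracle: list open transactions with their start time and undo usage,
-- to gauge typical transaction size and duration before tuning the thresholds
SELECT t.start_time, t.status, t.used_ublk, t.used_urec, s.username
FROM v$transaction t
JOIN v$session s ON s.taddr = t.addr;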
- Click Create.
NOTE:
In the following scenarios, click Reset in the Operation column of a composite task to reset its synchronization.
A full reset deletes all synchronization progress information but does not remove any destination data. When the task is executed again, it first synchronizes all existing data again and then synchronizes increments in real time. If the database contains a large amount of data, this full resynchronization may take a long time, so exercise caution when using this function.
- The composite task needs to synchronize new data tables or data fields added at the source.
- The CDC archive logs at the source are cleared. As a result, the composite task fails to be synchronized.
- The MySQL database does not use the GTID mode, and an active/standby switchover occurs. As a result, the composite task fails to be synchronized.
You can reset the task only when Task Status is Stopped.
CAUTION:
For GaussDB and PostgreSQL data sources, after a composite task is deleted, run the SQL command select pg_drop_replication_slot('roma_fdi_{task_id}') in the database to delete the replication slot. Replace {task_id} with the actual task ID.
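To verify the cleanup, you can first list the existing slots. A minimal PostgreSQL sketch follows; note that a slot must be inactive before it can be dropped, and {task_id} stands for the actual task ID:

-- List replication slots; FDI slots are named roma_fdi_{task_id}
SELECT slot_name, active FROM pg_replication_slots;

-- Drop the slot of the deleted composite task
SELECT pg_drop_replication_slot('roma_fdi_{task_id}');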