Creating a Connection to a Target Component
To verify the consistency of data stored in big data components, you need to establish connections between MgC and those components.
The supported big data components include:
- Doris
- HBase
- ClickHouse
- Hive Metastore
Procedure
- Sign in to the MgC console.
- In the navigation pane on the left, choose Migrate > Big Data Verification. Select a migration project in the upper left corner of the page.
- In the Features area, click Connection Management.
- Click Create Connection in the upper right corner of the page.
- Select a big data component and click Next.
- Set parameters based on the big data component you selected.
- Parameters for creating a connection to Doris
- Parameters for creating a connection to HBase
- Parameters for creating a connection to ClickHouse
- Parameters for creating a connection to Hive Metastore
Table 1 Parameters for creating a connection to Doris

- Connection To: Select Target.
- Connection Name: The default name is Doris- followed by four random characters (letters and digits). You can also specify a custom name.
- Doris Credential: Select the target Doris credential added to the MgC Agent (formerly Edge). For details about how to add credentials, see "Big data - Doris" in Adding Resource Credentials.
- Database IP Address: Enter the IP address for accessing the target Doris cluster. To obtain the address, log in to FusionInsight Manager, choose Cluster > Services > Doris, and check Host Where Leader Locates.
- Database Port: Enter the port for accessing the target Doris cluster. To obtain the port, log in to FusionInsight Manager, choose Cluster > Services > Doris > Configurations, and search for query_port.
- Database Name: Enter the name of the target Doris database.
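When you click Test in a later step, MgC performs a reachability check like this for you. If you want to rule out network problems beforehand, a minimal TCP pre-check of the database IP address and port can be sketched as follows (the host and port values are placeholders for your own cluster; query_port is commonly 9030, but confirm it in FusionInsight Manager as described above):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address): check the Doris FE query port.
# port_reachable("192.168.0.10", 9030)
```

A False result usually points to a security group or firewall rule blocking the port rather than a problem in Doris itself.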
Table 2 Parameters for creating a connection to HBase

- Connection To: Select Target.
- Connection Name: The default name is HBase- followed by four random characters (letters and digits). You can also specify a custom name.
- HBase Credential: Select the target HBase credential added to the MgC Agent (formerly Edge). For details about how to add credentials, see "Big data - HBase" in Adding Resource Credentials.
- Secured Cluster: Specify whether the cluster is secured.
- ZooKeeper IP Address: Enter the IP address for connecting to the target ZooKeeper node. You can enter the public or private IP address of the ZooKeeper server.
- ZooKeeper Port: Enter the port for connecting to the target ZooKeeper node. The default value is 2181.
- HBase Version: Select the target HBase version.
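Before creating the HBase connection, you can optionally confirm that the ZooKeeper node is reachable. The sketch below sends ZooKeeper's built-in ruok health command, to which a healthy node replies imok. Note that on ZooKeeper 3.5 and later, four-letter commands must be whitelisted via 4lw.commands.whitelist, so a False result may only mean the command is disabled, not that the node is down. Host and port are placeholders:

```python
import socket

def zookeeper_ruok(host: str, port: int = 2181, timeout: float = 3.0) -> bool:
    """Send ZooKeeper's 'ruok' four-letter command; a healthy node replies 'imok'.

    On ZooKeeper 3.5+ the command must be allowed in 4lw.commands.whitelist,
    so False can also mean the command is disabled on the server.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            return sock.recv(4) == b"imok"
    except OSError:
        return False

# Example (placeholder address):
# zookeeper_ruok("192.168.0.20", 2181)
```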
Table 3 Parameters for creating a connection to ClickHouse

- Connection To: Select Target.
- Connection Name: The default name is ClickHouse- followed by four random characters (letters and digits). You can also specify a custom name.
- ClickHouse Credential (Optional): Select the target ClickHouse credential added to the MgC Agent (formerly Edge). For details about how to add credentials, see "Big data - ClickHouse" in Adding Resource Credentials.
- Secured Cluster: Specify whether the cluster is secured.
- ClickHouse IP Address: Enter the IP address of the target ClickHouse server.
- HTTP Port: If the ClickHouse cluster is unsecured, enter the HTTP port for communicating with the target ClickHouse server. The default value is 8123.
- HTTP SSL/TLS Port: If the ClickHouse cluster is secured, enter the HTTPS port for communicating with the target ClickHouse server.
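For an unsecured cluster, you can confirm the HTTP port before creating the connection: ClickHouse's HTTP interface answers "Ok." at its /ping endpoint. A minimal sketch (host and port are placeholders; for a secured cluster you would instead use the HTTPS port with use_tls=True):

```python
from urllib.request import urlopen
from urllib.error import URLError

def clickhouse_ping(host: str, port: int = 8123, use_tls: bool = False) -> bool:
    """Probe ClickHouse's HTTP interface, which answers 'Ok.' at /ping."""
    scheme = "https" if use_tls else "http"
    try:
        with urlopen(f"{scheme}://{host}:{port}/ping", timeout=3) as resp:
            return resp.read().strip() == b"Ok."
    except (URLError, OSError):
        return False

# Example (placeholder address):
# clickhouse_ping("192.168.0.30", 8123)
```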
Table 4 Parameters for creating a connection to Hive Metastore

- Connection To: Select Target.
- Connection Name: The default name is Hive-Metastore- followed by four random characters (letters and digits). You can also specify a custom name.
- Secure Connection: Specify whether to enable secure connection.
  - If Hive Metastore is deployed in an unsecured cluster, do not enable secure connection.
  - If Hive Metastore is deployed in a secured cluster, enable secure connection and provide access credentials. For details about how to obtain and add credentials to MgC, see "Big data - Hive Metastore" in Adding Resource Credentials.
- Hive Version: Select the target Hive version. CAUTION: Only version 3.x is available.
- Hive Metastore IP Address: Enter the IP address for connecting to the Hive Metastore node.
- Hive Metastore Thrift Port: Enter the port for connecting to the Hive Metastore Thrift service. The default value is 9083.
- Connect to Metadata Database: During an incremental data verification, querying more than 30,000 partitions through Hive Metastore may cause an out-of-memory (OOM) error, because all partition information is loaded into memory. Connecting to the MySQL metadata database directly avoids this issue.
  - If you disable this option, the system queries Hive table and partition information through Hive Metastore.
  - If you enable this option, the system queries Hive table and partition information through the MySQL database. You need to set the following parameters:
    - Metadata Database Type: Only MySQL is supported.
    - MySQL Credential: Select the credential for accessing the MySQL database. You need to add the credential to the MgC Agent (formerly Edge) and synchronize it to MgC. For details, see Adding Resource Credentials.
    - MySQL Node IP Address: Enter the IP address of the MySQL database server.
    - MySQL Port: Enter the port of the MySQL database service.
    - Database Name: Enter the name of the database that stores the Hive table metadata.
  - NOTE: Ensure that the MySQL credential, node IP address, service port, and database name you enter match the MySQL database used by Hive. Otherwise, data verification will fail.
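To illustrate why a direct metadata-database connection avoids the partition-loading problem: instead of fetching every partition object through the Metastore Thrift API, a targeted SQL query reads only the partition names for the table being verified. The sketch below uses the standard Hive metastore schema (the DBS, TBLS, and PARTITIONS tables), with sqlite3 standing in for MySQL purely for illustration; it is not MgC's actual implementation.

```python
import sqlite3

# In-memory stand-in for the Hive metastore database (schema names match
# the standard Hive metastore layout; sqlite3 replaces MySQL for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DBS (DB_ID INTEGER PRIMARY KEY, NAME TEXT);
CREATE TABLE TBLS (TBL_ID INTEGER PRIMARY KEY, DB_ID INTEGER, TBL_NAME TEXT);
CREATE TABLE PARTITIONS (PART_ID INTEGER PRIMARY KEY, TBL_ID INTEGER, PART_NAME TEXT);
INSERT INTO DBS VALUES (1, 'sales');
INSERT INTO TBLS VALUES (10, 1, 'orders');
INSERT INTO PARTITIONS VALUES (100, 10, 'dt=2024-01-01'), (101, 10, 'dt=2024-01-02');
""")

def list_partitions(db_name: str, table_name: str) -> list[str]:
    """Fetch only the partition names of one table with a targeted SQL query,
    rather than loading all partition objects into memory."""
    rows = conn.execute(
        """SELECT p.PART_NAME
           FROM PARTITIONS p
           JOIN TBLS t ON p.TBL_ID = t.TBL_ID
           JOIN DBS d ON t.DB_ID = d.DB_ID
           WHERE d.NAME = ? AND t.TBL_NAME = ?
           ORDER BY p.PART_NAME""",
        (db_name, table_name),
    ).fetchall()
    return [r[0] for r in rows]

print(list_partitions("sales", "orders"))  # ['dt=2024-01-01', 'dt=2024-01-02']
```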
- Click Test. MgC checks whether the component can be connected to using the provided configuration. If the test succeeds, the connection is available.
- After the connection test succeeds, click Confirm. The connection is created.
- On the Connection Management page, view the created connection and its basic information. To modify the connection settings, click Modify in the Operation column.