Link to HDFS

Updated on 2022-08-17 GMT+08:00
CDM supports the following HDFS data sources: MRS HDFS, FusionInsight HDFS, and Apache HDFS.

MRS HDFS

When connecting CDM to HDFS of MRS, configure the parameters as described in Table 1.

NOTE:
  • Before creating an MRS link, add a Kerberos-authenticated user on MRS, log in to the MRS management page, and change the user's initial password. Then use this new user to create the MRS link.
  • To connect to an MRS 2.x cluster, create a CDM cluster of version 2.x first. CDM 1.8.x clusters cannot connect to MRS 2.x clusters.
  • Ensure that the MRS cluster and the DataArts Studio instance can communicate with each other. Network interconnection requires the following (a minimal reachability check is sketched after this note):
    • If the CDM cluster in the DataArts Studio instance and the MRS cluster are in different regions, a public network or a dedicated connection is required. If they communicate over the Internet, ensure that an EIP has been bound to the CDM cluster, that the MRS cluster can access the Internet, and that the required port is allowed by the firewall rules.
    • If the CDM cluster in the DataArts Studio instance and the MRS cluster are in the same region, VPC, subnet, and security group, they can communicate with each other by default. If they are in the same VPC but in different subnets or security groups, configure routing rules and security group rules. For details about routing rules, see Custom Route in Region Type I > Adding Routes in the Virtual Private Cloud (VPC) Usage Guide; for details about security group rules, see Security Group > Adding a Security Group Rule in the same guide.
    • The MRS cluster and the DataArts Studio workspace must belong to the same enterprise project. If they do not, change the enterprise project of the workspace.
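Before creating the link, you can verify basic network reachability from a host in the same subnet as the CDM cluster (for example, an ECS in the same VPC). The following minimal sketch only checks TCP connectivity; the IP addresses are placeholders, the ports are taken from the example values on this page, and the ports that actually need to be opened depend on your MRS version and deployment.

```python
# connectivity_check.py -- minimal TCP reachability probe (illustrative only).
# The IP addresses below are placeholders; replace them with the MRS Manager
# floating IP and NameNode address. The ports are the example values used on
# this page and may differ in your deployment.
import socket

TARGETS = [
    ("192.168.0.10", 28443),  # placeholder: Manager address and example web port
    ("192.168.0.11", 8020),   # placeholder: NameNode address and example RPC port
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in TARGETS:
        status = "reachable" if reachable(host, port) else "NOT reachable"
        print(f"{host}:{port} is {status}")
```

If a port is reported as not reachable, review the EIP binding, firewall rules, routing rules, and security group rules described in the note above.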
Table 1 MRS HDFS link parameters

• Name
  Description: Link name. Define it based on the data source type so that the purpose of the link is easy to identify.
  Example value: mrs_hdfs_link

• Manager IP
  Description: Floating IP address of MRS Manager. Click Select next to the Manager IP text box to select an MRS cluster; CDM then fills in the authentication information automatically.
  Example value: 127.0.0.1

• Username
  Description: If Authentication Method is set to KERBEROS, provide the username and password used to log in to MRS Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.
  Do not use user admin to create a data connection for an MRS security cluster. The admin user is the default management page user and cannot be used as the authentication user of a security cluster. Instead, create an MRS user and set Username and Password to that user's credentials when creating the MRS data connection.
  NOTE:
    • If the CDM cluster version is 2.9.0 or later and the MRS cluster version is 3.1.0 or later, the created user must have the permissions of the Manager_viewer role to create links on CDM. To operate on the databases, tables, and data of a component, the user also needs the user group permissions of that component.
    • If the CDM cluster version is earlier than 2.9.0 or the MRS cluster version is earlier than 3.1.0, the created user must have the Manager_administrator or System_administrator permissions to create links on CDM.
    • A user with only the Manager_tenant or Manager_auditor permission cannot create links.
  Example value: cdm

• Password
  Description: Password used for logging in to MRS Manager.
  Example value: -

• Authentication Method
  Description: Authentication method used for accessing MRS:
    • SIMPLE: for non-security mode
    • KERBEROS: for security mode
  Example value: SIMPLE

• Run Mode
  Description: Run mode of the HDFS link:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) using both Kerberos and Simple authentication, select STANDALONE or configure different agents.
      Note: STANDALONE mode resolves version conflicts. If the source and destination connectors of the same link use different versions, their JAR files conflict; running the source or destination end in a STANDALONE process prevents the migration from failing because of this conflict.
    • Agent: The link instance runs on an agent.
  If no agent is used and the CDM cluster connects to two or more Kerberos-enabled clusters in the same realm, only one of the clusters can use EMBEDDED mode; the others must use STANDALONE mode.
  Example value: STANDALONE

• Agent
  Description: Click Select and select the agent created in Connecting to an Agent. This parameter is displayed only when Run Mode is set to Agent.
  Example value: -

• Use Cluster Config
  Description: You can use a cluster configuration to simplify the parameter settings for the Hadoop connection.
  Example value: No

• Cluster Config Name
  Description: This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created. For details, see Managing Cluster Configurations.
  Example value: hdfs_01

To add other client configuration attributes, click Show Advanced Attributes and then click Add. Configure a name and a value for each attribute. Click Delete to remove attributes that are no longer needed.
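Links are typically created on the CDM console, but CDM also provides a REST API for managing links, which is useful when link creation needs to be automated. The following sketch only outlines the general shape of such a request; the endpoint, connector name, and configuration key names are assumptions made for illustration, so check the CDM API Reference for the exact request schema before relying on it.

```python
# create_mrs_hdfs_link.py -- illustrative sketch of creating a CDM link via REST.
# The endpoint, connector name, and configuration key names below are
# assumptions; consult the CDM API Reference for the authoritative schema.
import requests

IAM_TOKEN = "<IAM token with CDM permissions>"              # obtain from IAM beforehand
ENDPOINT = "https://cdm.example-region.myhuaweicloud.com"   # placeholder endpoint
PROJECT_ID = "<project_id>"
CLUSTER_ID = "<cdm_cluster_id>"

link_payload = {
    "links": [
        {
            "name": "mrs_hdfs_link",
            "connector-name": "hdfs-connector",              # assumed connector name
            "link-config-values": {
                "configs": [
                    {
                        "name": "linkConfig",
                        "inputs": [                          # assumed key names
                            {"name": "linkConfig.hadoopType", "value": "MRS"},
                            {"name": "linkConfig.host", "value": "127.0.0.1"},
                            {"name": "linkConfig.user", "value": "cdm"},
                            {"name": "linkConfig.authType", "value": "SIMPLE"},
                            {"name": "linkConfig.runMode", "value": "STANDALONE"},
                        ],
                    }
                ]
            },
        }
    ]
}

# Assumed path shape: POST /v1.1/{project_id}/clusters/{cluster_id}/cdm/link
resp = requests.post(
    f"{ENDPOINT}/v1.1/{PROJECT_ID}/clusters/{CLUSTER_ID}/cdm/link",
    json=link_payload,
    headers={"X-Auth-Token": IAM_TOKEN, "Content-Type": "application/json"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```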

FusionInsight HDFS

When connecting CDM to HDFS of FusionInsight HD, configure the parameters as described in Table 2.

Table 2 FusionInsight HDFS link parameters

• Name
  Description: Link name. Define it based on the data source type so that the purpose of the link is easy to identify.
  Example value: FI_hdfs_link

• Manager IP
  Description: IP address of FusionInsight Manager.
  Example value: 127.0.0.1

• Manager Port
  Description: Port number of FusionInsight Manager.
  Example value: 28443

• CAS Server Port
  Description: Port number of the CAS server used to connect to FusionInsight.
  Example value: 20009

• Username
  Description: Username used for logging in to FusionInsight Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.
  Example value: cdm

• Password
  Description: Password used for logging in to FusionInsight Manager.
  Example value: -

• Authentication Method
  Description: Authentication method used for accessing the cluster:
    • SIMPLE: for non-security mode
    • KERBEROS: for security mode
  Example value: KERBEROS

• Run Mode
  Description: Run mode of the HDFS link:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) using both Kerberos and Simple authentication, select STANDALONE or configure different agents.
      Note: STANDALONE mode resolves version conflicts. If the source and destination connectors of the same link use different versions, their JAR files conflict; running the source or destination end in a STANDALONE process prevents the migration from failing because of this conflict.
    • Agent: The link instance runs on an agent.
  Example value: STANDALONE

• Agent
  Description: Click Select and select the agent created in Connecting to an Agent. This parameter is displayed only when Run Mode is set to Agent.
  Example value: -

• Use Cluster Config
  Description: You can use a cluster configuration to simplify the parameter settings for the Hadoop connection.
  Example value: No

• Cluster Config Name
  Description: This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created. For details, see Managing Cluster Configurations.
  Example value: hdfs_01

To add other client configuration attributes, click Show Advanced Attributes and then click Add. Configure a name and a value for each attribute. Click Delete to remove attributes that are no longer needed.

Apache HDFS

When connecting CDM to HDFS of Apache Hadoop, configure the parameters as described in Table 3.

Table 3 Apache HDFS link parameters

• Name
  Description: Link name. Define it based on the data source type so that the purpose of the link is easy to identify.
  Example value: hadoop_hdfs_link

• URI
  Description: NameNode URI. You can enter hdfs://<IP address of the NameNode instance>:8020.
  Example value: hdfs://IP:8020

• Authentication Method
  Description: Authentication method used for accessing the cluster:
    • SIMPLE: for non-security mode
    • KERBEROS: for security mode
  Example value: KERBEROS

• Principal
  Description: Mandatory when Authentication Method is set to KERBEROS. This is the username in the Kerberos security mode and can be obtained from the Hadoop administrator. Its value must match the one in the keytab file.
  Example value: -

• Keytab File
  Description: When Authentication Method is set to KERBEROS, a keytab file must be uploaded. The keytab file is an authentication credential and can be obtained from the Hadoop administrator. Before obtaining the keytab file, change the password of the user at least once in the cluster; otherwise, the downloaded keytab file may be unusable. After the user's password is changed, the exported keytab file becomes invalid and must be exported again.
  Example value: -

• Run Mode
  Description: Run mode of the HDFS link:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) using both Kerberos and Simple authentication, select STANDALONE or configure different agents.
      Note: STANDALONE mode resolves version conflicts. If the source and destination connectors of the same link use different versions, their JAR files conflict; running the source or destination end in a STANDALONE process prevents the migration from failing because of this conflict.
    • Agent: The link instance runs on an agent.
  Example value: STANDALONE

• IP and Host Name Mapping
  Description: Used only when Run Mode is set to EMBEDDED or STANDALONE. If the HDFS configuration file uses host names, configure the mapping between IP addresses and host names: separate an IP address from its host name with a space, and separate mappings with semicolons (;), carriage returns, or line feeds. A small parsing sketch follows this table.
  Example value:
    10.1.6.9 hostname01
    10.2.7.9 hostname02

• Agent
  Description: If Run Mode is set to Agent, click Select and select the agent created in Connecting to an Agent.
  Example value: -

• Use Cluster Config
  Description: You can use a cluster configuration to simplify the parameter settings for the Hadoop connection.
  Example value: No

• Cluster Config Name
  Description: This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created. For details, see Managing Cluster Configurations.
  Example value: hdfs_01
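If you keep the IP-to-host-name mappings in scripts or configuration files, it can help to validate the format before pasting the value into the link. The following sketch parses a mapping string using the separators described in Table 3 (a space between an IP address and its host name; semicolons or line breaks between mappings); the addresses are the placeholder values from the table.

```python
# parse_host_mappings.py -- validate the "IP hostname" mapping format described above.
import ipaddress

def parse_mappings(raw: str) -> dict[str, str]:
    """Parse 'IP hostname' pairs separated by ';', '\\r', or '\\n' into a dict."""
    mappings: dict[str, str] = {}
    # Normalize all allowed separators to newlines, then split into entries.
    for entry in raw.replace(";", "\n").replace("\r", "\n").splitlines():
        entry = entry.strip()
        if not entry:
            continue
        parts = entry.split()
        if len(parts) != 2:
            raise ValueError(f"Expected 'IP hostname', got: {entry!r}")
        ip, hostname = parts
        ipaddress.ip_address(ip)  # raises ValueError for malformed addresses
        mappings[ip] = hostname
    return mappings

if __name__ == "__main__":
    sample = "10.1.6.9 hostname01;10.2.7.9 hostname02"
    print(parse_mappings(sample))
    # {'10.1.6.9': 'hostname01', '10.2.7.9': 'hostname02'}
```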
