MapReduce Service
What's New
Function Overview
Service Overview
Infographics
What Is MRS?
Advantages of MRS Compared with Self-Built Hadoop
Application Scenarios
How Do I Select an MRS Version?
Components
List of MRS Component Versions
CarbonData
ClickHouse
Infographics for ClickHouse
ClickHouse
CDL
CDL Basic Principles
Relationship Between CDL and Other Components
DBService
DBService Basic Principles
Relationship Between DBService and Other Components
Flink
Flink Basic Principles
Flink HA Solution
Relationships Between Flink and Other Components
Flink Enhanced Open Source Features
Window
Job Pipeline
Stream SQL Join
Flink CEP in SQL
Flume
Flume Basic Principles
Relationships Between Flume and Other Components
Flume Enhanced Open Source Features
HBase
HBase Basic Principles
HBase HA Solution
Relationship with Other Components
HBase Enhanced Open Source Features
HDFS
HDFS Basic Principles
HDFS HA Solution
Relationship Between HDFS and Other Components
HDFS Enhanced Open Source Features
HetuEngine
HetuEngine Product Overview
Relationships Between HetuEngine and Other Components
Hive
Hive Basic Principles
Hive CBO Principles
Relationships Between Hive and Other Components
Enhanced Open Source Feature
Hudi
Hue
Hue Basic Principles
Relationships Between Hue and Other Components
Hue Enhanced Open Source Features
Impala
IoTDB
IoTDB Basic Principles
Relationships Between IoTDB and Other Components
IoTDB Enhanced Open Source Features
Kafka
Kafka Basic Principles
Relationships Between Kafka and Other Components
Kafka Enhanced Open Source Features
KafkaManager
KrbServer and LdapServer
KrbServer and LdapServer Principles
KrbServer and LdapServer Enhanced Open Source Features
Kudu
Loader
Loader Basic Principles
Relationship Between Loader and Other Components
Loader Enhanced Open Source Features
Manager
Manager Basic Principles
Manager Key Features
MapReduce
MapReduce Basic Principles
Relationship Between MapReduce and Other Components
MapReduce Enhanced Open Source Features
Oozie
Oozie Basic Principles
Oozie Enhanced Open Source Features
OpenTSDB
Presto
Ranger
Ranger Basic Principles
Relationships Between Ranger and Other Components
Spark
Spark Basic Principles
Spark HA Solution
Relationship Among Spark, HDFS, and Yarn
Spark Enhanced Open Source Feature: Optimized SQL Query of Cross-Source Data
Spark2x
Spark2x Basic Principles
Spark2x HA Solution
Spark2x Multi-active Instance
Spark2x Multi-tenant
Relationship Between Spark2x and Other Components
Spark2x Open Source New Features
Spark2x Enhanced Open Source Features
CarbonData Overview
Optimizing SQL Query of Data of Multiple Sources
Storm
Storm Basic Principles
Relationships Between Storm and Other Components
Storm Enhanced Open Source Features
Tez
YARN
YARN Basic Principles
YARN HA Solution
Relationships Between YARN and Other Components
YARN Enhanced Open Source Features
ZooKeeper
ZooKeeper Basic Principles
Relationships Between ZooKeeper and Other Components
ZooKeeper Enhanced Open Source Features
Functions
Multi-tenant
Security Hardening
Easy Access to Web UIs of Components
Reliability Enhancement
Job Management
Bootstrap Actions
Enterprise Project Management
Metadata
Cluster Management
Cluster Lifecycle Management
Cluster Scaling
Auto Scaling
Task Node Creation
Isolating a Host
Managing Tags
Cluster O&M
Message Notification
Constraints
Technical Support
Billing
Permissions Management
Related Services
Quota Description
Common Concepts
Getting Started
Buying and Using an MRS Cluster
How to Use MRS
Creating a Cluster
Uploading Data
Creating a Job
Terminating a Cluster
Installing and Using the Cluster Client
Using Clusters with Kerberos Authentication Enabled
Using Hadoop from Scratch
Using Kafka from Scratch
Using HBase from Scratch
Modifying MRS Configurations
Configuring Auto Scaling for an MRS Cluster
Configuring Hive with Storage and Compute Decoupled
Submitting Spark Tasks to New Task Nodes
User Guide
Preparing a User
Configuring Cloud Service Permissions
Creating an MRS User
Creating a Custom Policy
Synchronizing IAM Users to MRS
Configuring a Cluster
How to Buy an MRS Cluster
Quick Configuration
Quickly Buying a Hadoop Analysis Cluster
Quickly Buying an HBase Query Cluster
Quickly Buying a Kafka Streaming Cluster
Quickly Buying a ClickHouse Cluster
Quickly Buying a Real-time Analysis Cluster
Buying a Custom Cluster
Configuring Custom Topology
Adding a Tag to a Cluster/Node
Communication Security Authorization
Configuring Auto Scaling Rules
Overview
Configuring Auto Scaling During Cluster Creation
Creating an Auto Scaling Policy for an Existing Cluster
Scenario 1: Using Auto Scaling Rules Alone
Scenario 2: Using Resource Plans Alone
Scenario 3: Using Both Auto Scaling Rules and Resource Plans
Modifying an Auto Scaling Policy
Deleting an Auto Scaling Policy
Enabling or Disabling an Auto Scaling Policy
Viewing an Auto Scaling Policy
Configuring Automation Scripts
Configuring Auto Scaling Metrics
Managing Data Connections
Configuring Data Connections
Configuring an RDS Data Connection
Configuring a Ranger Data Connection
Configuring a Hive Data Connection
Installing Third-Party Software Using Bootstrap Actions
Viewing Failed MRS Tasks
Viewing Information of a Historical Cluster
Managing Clusters
Logging In to a Cluster
MRS Cluster Node Overview
Logging In to an ECS
Determining Active and Standby Management Nodes
Cluster Overview
Cluster List
Checking the Cluster Status
Viewing Basic Cluster Information
Viewing Cluster Patch Information
Managing Components and Monitoring Hosts
Viewing and Customizing Cluster Monitoring Metrics
Cluster O&M
Importing and Exporting Data
Changing the Subnet of a Cluster
Configuring Message Notification
Checking Health Status
Before You Start
Performing a Health Check
Viewing and Exporting a Health Check Report
Remote O&M
Authorizing O&M
Sharing Logs
Viewing MRS Operation Logs
Changing Billing Mode to Yearly/Monthly
Unsubscribing from a Cluster
Unsubscribing from a Specified Node in a Yearly/Monthly Cluster
Deleting a Cluster
Managing Nodes
Scaling Out a Cluster
Scaling In a Cluster
Removing ClickHouseServer Instance Nodes
Constraints on ClickHouseServer Scale-in
Scaling In ClickHouseServer Nodes
Managing a Host (Node)
Isolating a Host
Canceling Host Isolation
Scaling Up Master Node Specifications
Job Management
Introduction to MRS Jobs
Running a MapReduce Job
Running a SparkSubmit or Spark Job
Running a HiveSQL Job
Running a SparkSql Job
Running a Flink Job
Running a HadoopStreaming Job
Viewing Job Configuration and Logs
Stopping a Job
Deleting a Job
Using Encrypted OBS Data for Job Running
Configuring Job Notification Rules
Component Management
Object Management
Viewing Configuration
Managing Services
Configuring Service Parameters
Configuring Customized Service Parameters
Synchronizing Service Configuration
Managing Role Instances
Configuring Role Instance Parameters
Synchronizing Role Instance Configuration
Decommissioning and Recommissioning a Role Instance
Starting and Stopping a Cluster
Synchronizing Cluster Configuration
Exporting Cluster Configuration
Performing Rolling Restart
Alarm Management
Viewing the Alarm List
Viewing the Event List
Viewing and Manually Clearing an Alarm
Patch Management
Installing an Online Patch
Installing a Rolling Patch
Restoring Patches for the Isolated Hosts
MRS Patch Description
Fixed the Privilege Escalation Vulnerability of User omm
MRS 3.2.0-LTS.1 Patch Description
MRS 2.1.0.11 Patch Description
MRS 3.0.5.1 Patch Description
MRS 2.1.0.10 Patch Description
MRS 2.1.0.9 Patch Description
MRS 2.1.0.8 Patch Description
MRS 2.1.0.7 Patch Description
MRS 2.1.0.6 Patch Description
MRS 2.1.0.3 Patch Description
MRS 2.1.0.2 Patch Description
MRS 2.1.0.1 Patch Description
MRS 2.0.6.1 Patch Description
MRS 2.0.1.3 Patch Description
MRS 2.0.1.2 Patch Description
MRS 2.0.1.1 Patch Description
MRS 1.9.3.3 Patch Description
MRS 1.9.3.1 Patch Description
MRS 1.9.2.2 Patch Description
MRS 1.9.0.8, 1.9.0.9, and 1.9.0.10 Patch Description
MRS 1.9.0.7 Patch Description
MRS 1.9.0.6 Patch Description
MRS 1.9.0.5 Patch Description
MRS 1.8.10.1 Patch Description
Tenant Management
Before You Start
Overview
Creating a Tenant
Creating a Sub-tenant
Deleting a Tenant
Managing a Tenant Directory
Restoring Tenant Data
Creating a Resource Pool
Modifying a Resource Pool
Deleting a Resource Pool
Configuring a Queue
Configuring the Queue Capacity Policy of a Resource Pool
Clearing Configuration of a Queue
Bootstrap Actions
Introduction to Bootstrap Actions
Preparing the Bootstrap Action Script
Viewing Execution Records
Adding a Bootstrap Action
Modifying a Bootstrap Action
Deleting a Bootstrap Action
Using an MRS Client
Installing a Client
Installing a Client (MRS 3.x or Later)
Installing a Client (Versions Earlier Than 3.x)
Updating a Client
Updating a Client (Version 3.x or Later)
Updating a Client (Versions Earlier Than 3.x)
Using the Client of Each Component
Using a ClickHouse Client
Using a Flink Client
Using a Flume Client
Using an HBase Client
Using an HDFS Client
Using a Hive Client
Using an Impala Client
Using a Kafka Client
Using a Kudu Client
Using the Oozie Client
Using a Storm Client
Using a Yarn Client
Configuring a Cluster with Decoupled Storage and Compute
MRS Storage-Compute Decoupling
Interconnecting with OBS Using the Cluster Agency Mechanism
Configuring a Storage-Compute Decoupled Cluster (Agency)
Configuring a Storage-Compute Decoupled Cluster (AK/SK)
Configuring the Policy for Clearing Component Data in the Recycle Bin
Interconnecting MRS with OBS Using an Agency
Interconnecting Flink with OBS
Interconnecting Flume with OBS
Interconnecting HDFS with OBS
Interconnecting Hive with OBS
Interconnecting MapReduce with OBS
Interconnecting Spark2x with OBS
Interconnecting Sqoop with External Storage Systems
Interconnecting Hudi with OBS
Configuring Fine-Grained Permissions for MRS Multi-User Access to OBS
Accessing OBS from a Client on a Node Outside the Cluster
Interconnecting with OBS Using the Guardian Service
Scenarios
Interconnecting the Guardian Service with OBS
Interconnecting Components with OBS Using Guardian
Interconnecting Hive with OBS
Interconnecting Flink with OBS
Interconnecting Spark with OBS
Interconnecting Hudi with OBS
Interconnecting HetuEngine with OBS
Interconnecting HDFS with OBS
Interconnecting Yarn with OBS
Interconnecting MapReduce with OBS
Accessing Web Pages of Open Source Components Managed in MRS Clusters
Web UIs of Open Source Components
Common Ports of Components
Access Through Direct Connect
EIP-based Access
Access Using a Windows ECS
Creating an SSH Channel for Connecting to an MRS Cluster and Configuring the Browser
Accessing Manager
Accessing FusionInsight Manager (MRS 3.x or Later)
Accessing MRS Manager (MRS 2.x or Earlier)
FusionInsight Manager Operation Guide (Applicable to 3.x)
Homepage
Overview
Managing Monitoring Metric Reports
Querying the FusionInsight Manager Version
Cluster
Cluster Management
Overview
Performing a Rolling Restart of a Cluster
Managing Expired Configurations
Downloading the Client
Modifying Cluster Attributes
Managing Cluster Configurations
Managing Static Service Pools
Static Service Resources
Configuring Cluster Static Resources
Viewing Cluster Static Resources
Managing Clients
Managing a Client
Batch Upgrading Clients
Updating the hosts File in Batches
Managing a Service
Overview
Service Management Operations
Service Details Page
Performing Active/Standby Switchover of a Role Instance
Resource Monitoring
Collecting Stack Information
Switching Ranger Authentication
Service Configuration
Modifying Service Configuration Parameters
Modifying Custom Configuration Parameters of a Service
Instance Management
Overview
Decommissioning and Recommissioning an Instance
Managing Instance Configurations
Viewing the Instance Configuration File
Instance Group
Managing Instance Groups
Viewing Information About an Instance Group
Configuring Instance Group Parameters
Hosts
Host Management Page
Viewing the Host List
Viewing the Host Dashboard
Checking Host Processes and Resources
Host Maintenance Operations
Starting and Stopping All Instances on a Host
Performing a Host Health Check
Configuring Racks for Hosts
Isolating a Host
Exporting Host Information
Resource Overview
Distribution
Trend
Cluster
Host
O&M
Alarms
Overview of Alarms and Events
Alarm Threshold
Configuring the Alarm Masking Status
Log
Log Online Search
Log Download
Performing a Health Check
Viewing a Health Check Task
Managing Health Check Reports
Modifying Health Check Configuration
Configuring Backup and Backup Restoration
Creating a Backup Task
Creating a Backup Restoration Task
Managing Backup and Backup Restoration Tasks
Audit
Overview
Configuring Audit Log Dumping
Tenant Resources
Multi-Tenancy
Overview
Technical Principles
Multi-Tenant Management
Multi-Tenant Model
Resource Overview
Dynamic Resources
Storage Resources
Multi-Tenancy Usage
Overview
Process Overview
Using the Superior Scheduler
Creating Tenants
Adding a Tenant
Adding a Sub-Tenant
Adding a User and Binding the User to a Tenant Role
Managing Tenants
Managing Tenant Directories
Restoring Tenant Data
Deleting a Tenant
Managing Resources
Adding a Resource Pool
Modifying a Resource Pool
Deleting a Resource Pool
Modifying Queue Resources
Configuring the Queue Capacity Policy of a Resource Pool
Clearing Queue Configurations
Managing Global User Policies
Using the Capacity Scheduler
Creating Tenants
Adding a Tenant
Adding a Sub-Tenant
Adding a User and Binding the User to a Tenant Role
Managing Tenants
Managing Tenant Directories
Restoring Tenant Data
Deleting a Tenant
Clearing Non-associated Queues of a Tenant
Managing Resources
Adding a Resource Pool
Modifying a Resource Pool
Deleting a Resource Pool
Modifying Queue Resources
Configuring the Queue Capacity Policy of a Resource Pool
Clearing Queue Configurations
Switching the Scheduler
System
Configuring Permissions
Managing Users
Creating a User
Modifying User Information
Exporting User Information
Locking a User
Unlocking a User
Deleting a User
Changing a User Password
Initializing a Password
Exporting an Authentication Credential File
Managing User Groups
Managing Roles
Security Policies
Configuring Password Policies
Configuring the Independent Attribute
Configuring Interconnections
Configuring SNMP Northbound Parameters
Configuring Syslog Northbound Parameters
Configuring Monitoring Metric Dumping
Importing a Certificate
OMS Management
Overview of the OMS Page
Modifying OMS Service Configuration Parameters
Viewing Component Packages
Cluster Management
Cluster Mutual Trust Management
Overview of Mutual Trust Between Clusters
Changing Manager's Domain Name
Configuring Cross-Manager Mutual Trust Between Clusters
Assigning User Permissions After Cross-Cluster Mutual Trust Is Configured
Configuring Scheduled Backup of Alarm and Audit Information
Modifying the FusionInsight Manager Routing Table
Replacing the NTP Server for the Cluster
Switching to the Maintenance Mode
Routine Maintenance of Manager
Log Management
About Logs
Manager Log List
Configuring the Log Level and Log File Size
Configuring the Number of Local Audit Log Backups
Viewing Role Instance Logs
Backup and Recovery Management
Introduction
Backing Up Data
Backing Up Manager Data
Backing Up CDL Data
Backing Up ClickHouse Metadata
Backing Up ClickHouse Service Data
Backing Up DBService Data
Backing Up Flink Metadata
Backing Up HBase Metadata
Backing Up HBase Service Data
Backing Up NameNode Data
Backing Up HDFS Service Data
Backing Up Hive Service Data
Backing Up IoTDB Metadata
Backing Up IoTDB Service Data
Backing Up Kafka Metadata
Recovering Data
Restoring Manager Data
Restoring CDL Data
Restoring ClickHouse Metadata
Restoring ClickHouse Service Data
Restoring DBService Data
Restoring Flink Metadata
Restoring HBase Metadata
Restoring HBase Service Data
Restoring NameNode Data
Restoring HDFS Service Data
Restoring Hive Service Data
Restoring IoTDB Metadata
Restoring IoTDB Service Data
Restoring Kafka Metadata
Enabling Cross-Cluster Replication
Managing Local Quick Restoration Tasks
Modifying a Backup Task
Viewing Backup and Restoration Tasks
How Do I Configure the Environment When I Create a ClickHouse Backup Task on FusionInsight Manager and Set the Path Type to RemoteHDFS?
SQL Inspector
Overview
Adding an SQL Inspection
Configuring Hive SQL Inspection
Configuring ClickHouse SQL Inspection
Configuring HetuEngine SQL Inspection
Configuring Spark SQL Inspection
Security Management
Security Overview
Permission Model
Permission Mechanism
Authentication Policies
Permission Verification Policies
User Account List
Default Permission Information
FusionInsight Manager Security Functions
Account Management
Account Security Settings
Unlocking LDAP Users and Management Accounts
Unlocking an Internal System User
Enabling and Disabling Permission Verification on Cluster Components
Logging In to a Non-Cluster Node Using a Cluster User in Normal Mode
Changing the Password for a System User
Changing the Password for User admin
Changing the Password for an OS User
Changing the Password for a System Internal User
Changing the Password for the Kerberos Administrator
Changing the Password for the OMS Kerberos Administrator
Changing the Passwords of the LDAP Administrator and the LDAP User (Including OMS LDAP)
Changing the Password for the LDAP Administrator
Changing the Password for a Component Running User
Changing the Password for a Database User
Changing the Password of the OMS Database Administrator
Changing the Password for the Data Access User of the OMS Database
Changing the Password for a Component Database User
Resetting the Component Database User Password
Changing the Password for User omm in DBService
Changing the Password for User compdbuser of the DBService Database
Changing or Resetting the Password for User admin of Manager
Certificate Management
Replacing the CA Certificate
Replacing HA Certificates
Security Hardening
Hardening Policies
Configuring a Trusted IP Address to Access LDAP
HFile and WAL Encryption
Configuring Hadoop Security Parameters
Configuring an IP Address Whitelist for Modification Allowed by HBase
Updating a Key for a Cluster
Hardening the LDAP
Configuring Kafka Data Encryption During Transmission
Configuring HDFS Data Encryption During Transmission
Configuring Spark2x Data Encryption During Transmission
Configuring ZooKeeper SSL
Encrypting the Communication Between the Controller and the Agent
Updating SSH Keys for User omm
Changing the Timeout Duration of the Manager Page
Security Maintenance
Account Maintenance Suggestions
Password Maintenance Suggestions
Log Maintenance Suggestions
Security Statement
MRS Manager Operation Guide (Applicable to 2.x and Earlier Versions)
Introduction to MRS Manager
Checking Running Tasks
Monitoring Management
Dashboard
Managing Services and Monitoring Hosts
Managing Resource Distribution
Configuring Monitoring Metric Dumping
Alarm Management
Viewing and Manually Clearing an Alarm
Configuring an Alarm Threshold
Configuring Syslog Northbound Interface Parameters
Configuring SNMP Northbound Interface Parameters
Alarm Reference (Applicable to MRS 2.x and Earlier Versions)
ALM-12001 Audit Log Dump Failure (For MRS 2.x or Earlier)
ALM-12002 HA Resource Abnormal (For MRS 2.x or Earlier)
ALM-12004 OLdap Resource Abnormal (For MRS 2.x or Earlier)
ALM-12005 OKerberos Resource Abnormal (For MRS 2.x or Earlier)
ALM-12006 Node Fault (For MRS 2.x or Earlier)
ALM-12007 Process Fault (For MRS 2.x or Earlier)
ALM-12010 Manager Heartbeat Interruption Between the Active and Standby Nodes (For MRS 2.x or Earlier)
ALM-12011 Data Synchronization Exception Between the Active and Standby Manager Nodes (For MRS 2.x or Earlier)
ALM-12012 NTP Service Abnormal (For MRS 2.x or Earlier)
ALM-12014 Device Partition Lost (For MRS 2.x or Earlier)
ALM-12015 Device Partition File System Read-Only (For MRS 2.x or Earlier)
ALM-12016 CPU Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12017 Insufficient Disk Capacity (For MRS 2.x or Earlier)
ALM-12018 Memory Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12027 Host PID Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12028 Number of Processes in the D State on the Host Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12031 User omm or Password Is About to Expire (For MRS 2.x or Earlier)
ALM-12032 User ommdba or Password Is About to Expire (For MRS 2.x or Earlier)
ALM-12033 Slow Disk Fault (For MRS 2.x or Earlier)
ALM-12034 Periodic Backup Failure (For MRS 2.x or Earlier)
ALM-12035 Unknown Data Status After Recovery Task Failure (For MRS 2.x or Earlier)
ALM-12037 NTP Server Abnormal (For MRS 2.x or Earlier)
ALM-12038 Monitoring Indicator Dump Failure (For MRS 2.x or Earlier)
ALM-12039 GaussDB Data Is Not Synchronized (For MRS 2.x or Earlier)
ALM-12040 Insufficient System Entropy (For MRS 2.x or Earlier)
ALM-12041 Permission of Key Files Is Abnormal (For MRS 2.x or Earlier)
ALM-12042 Key File Configurations Are Abnormal (For MRS 2.x or Earlier)
ALM-12043 DNS Parsing Duration Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12045 Read Packet Dropped Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12046 Write Packet Dropped Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12047 Read Packet Error Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12048 Write Packet Error Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12049 Read Throughput Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12050 Write Throughput Rate Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12051 Disk Inode Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12052 Usage of Temporary TCP Ports Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12053 File Handle Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-12054 Invalid Certificate File (For MRS 2.x or Earlier)
ALM-12055 Certificate File Is About to Expire (For MRS 2.x or Earlier)
ALM-12180 Disk Card I/O (For MRS 2.x or Earlier)
ALM-12357 Failed to Export Audit Logs to OBS (For MRS 2.x or Earlier)
ALM-13000 ZooKeeper Service Unavailable (For MRS 2.x or Earlier)
ALM-13001 Available ZooKeeper Connections Are Insufficient (For MRS 2.x or Earlier)
ALM-13002 ZooKeeper Memory Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14000 HDFS Service Unavailable (For MRS 2.x or Earlier)
ALM-14001 HDFS Disk Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14002 DataNode Disk Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14003 Number of Lost HDFS Blocks Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14004 Number of Damaged HDFS Blocks Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14006 Number of HDFS Files Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14007 HDFS NameNode Memory Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14008 HDFS DataNode Memory Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14009 Number of Faulty DataNodes Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-14010 NameService Is Abnormal (For MRS 2.x or Earlier)
ALM-14011 HDFS DataNode Data Directory Is Not Configured Properly (For MRS 2.x or Earlier)
ALM-14012 HDFS Journalnode Data Is Not Synchronized (For MRS 2.x or Earlier)
ALM-16000 Percentage of Sessions Connected to the HiveServer to the Maximum Number Allowed Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-16001 Hive Warehouse Space Usage Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-16002 Hive SQL Execution Success Rate Is Lower Than the Threshold (For MRS 2.x or Earlier)
ALM-16004 Hive Service Unavailable (For MRS 2.x or Earlier)
ALM-16005 Number of Failed Hive SQL Executions in the Last Period Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18000 Yarn Service Unavailable (For MRS 2.x or Earlier)
ALM-18002 NodeManager Heartbeat Lost (For MRS 2.x or Earlier)
ALM-18003 NodeManager Unhealthy (For MRS 2.x or Earlier)
ALM-18004 NodeManager Disk Usability Ratio Is Lower Than the Threshold (For MRS 2.x or Earlier)
ALM-18006 MapReduce Job Execution Timeout (For MRS 2.x or Earlier)
ALM-18008 Heap Memory Usage of Yarn ResourceManager Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18009 Heap Memory Usage of MapReduce JobHistoryServer Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18010 Number of Pending Yarn Tasks Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18011 Memory of Pending Yarn Tasks Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18012 Number of Terminated Yarn Tasks in the Last Period Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-18013 Number of Failed Yarn Tasks in the Last Period Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-19000 HBase Service Unavailable (For MRS 2.x or Earlier)
ALM-19006 HBase Replication Sync Failed (For MRS 2.x or Earlier)
ALM-19007 HBase Merge Queue Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-20002 Hue Service Unavailable (For MRS 2.x or Earlier)
ALM-23001 Loader Service Unavailable (For MRS 2.x or Earlier)
ALM-24000 Flume Service Unavailable (For MRS 2.x or Earlier)
ALM-24001 Flume Agent Is Abnormal (For MRS 2.x or Earlier)
ALM-24003 Flume Client Connection Interrupted (For MRS 2.x or Earlier)
ALM-24004 Flume Fails to Read Data (For MRS 2.x or Earlier)
ALM-24005 Data Transmission by Flume Is Abnormal (For MRS 2.x or Earlier)
ALM-25000 LdapServer Service Unavailable (For MRS 2.x or Earlier)
ALM-25004 Abnormal LdapServer Data Synchronization (For MRS 2.x or Earlier)
ALM-25500 KrbServer Service Unavailable (For MRS 2.x or Earlier)
ALM-26051 Storm Service Unavailable (For MRS 2.x or Earlier)
ALM-26052 Number of Available Supervisors in Storm Is Lower Than the Threshold (For MRS 2.x or Earlier)
ALM-26053 Slot Usage of Storm Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-26054 Heap Memory Usage of Storm Nimbus Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-27001 DBService Unavailable (For MRS 2.x or Earlier)
ALM-27003 DBService Heartbeat Interruption Between the Active and Standby Nodes (For MRS 2.x or Earlier)
ALM-27004 Data Inconsistency Between Active and Standby DBServices (For MRS 2.x or Earlier)
ALM-28001 Spark Service Unavailable (For MRS 2.x or Earlier)
ALM-38000 Kafka Service Unavailable (For MRS 2.x or Earlier)
ALM-38001 Insufficient Kafka Disk Capacity (For MRS 2.x or Earlier)
ALM-38002 Heap Memory Usage of Kafka Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43001 Spark Service Unavailable (For MRS 2.x or Earlier)
ALM-43006 Heap Memory Usage of the JobHistory Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43007 Non-Heap Memory Usage of the JobHistory Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43008 Direct Memory Usage of the JobHistory Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43009 JobHistory GC Time Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43010 Heap Memory Usage of the JDBCServer Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43011 Non-Heap Memory Usage of the JDBCServer Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43012 Direct Memory Usage of the JDBCServer Process Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-43013 JDBCServer GC Time Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-44004 Presto Coordinator Resource Group Queuing Tasks Exceed the Threshold (For MRS 2.x or Earlier)
ALM-44005 Presto Coordinator Process GC Time Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-44006 Presto Worker Process GC Time Exceeds the Threshold (For MRS 2.x or Earlier)
ALM-45325 Presto Service Unavailable (For MRS 2.x or Earlier)
Object Management
Managing Objects
Viewing Configurations
Managing Services
Configuring Service Parameters
Configuring Customized Service Parameters
Synchronizing Service Configurations
Managing Role Instances
Configuring Role Instance Parameters
Synchronizing Role Instance Configuration
Decommissioning and Recommissioning a Role Instance
Managing a Host
Isolating a Host
Canceling Host Isolation
Starting or Stopping a Cluster
Synchronizing Cluster Configurations
Exporting Configuration Data of a Cluster
Log Management
About Logs
Manager Log List
Viewing and Exporting Audit Logs
Exporting Service Logs
Configuring Audit Log Exporting Parameters
Health Check Management
Performing a Health Check
Viewing and Exporting a Health Check Report
Configuring the Number of Health Check Reports to Be Reserved
Managing Health Check Reports
DBService Health Check Indicators
Flume Health Check Indicators
HBase Health Check Indicators
Host Health Check Indicators
HDFS Health Check Indicators
Hive Health Check Indicators
Kafka Health Check Indicators
KrbServer Health Check Indicators
LdapServer Health Check Indicators
Loader Health Check Indicators
MapReduce Health Check Indicators
OMS Health Check Indicators
Spark Health Check Indicators
Storm Health Check Indicators
Yarn Health Check Indicators
ZooKeeper Health Check Indicators
Static Service Pool Management
Viewing the Status of a Static Service Pool
Configuring a Static Service Pool
Tenant Management
Overview
Creating a Tenant
Creating a Sub-tenant
Deleting a Tenant
Managing a Tenant Directory
Restoring Tenant Data
Creating a Resource Pool
Modifying a Resource Pool
Deleting a Resource Pool
Configuring a Queue
Configuring the Queue Capacity Policy of a Resource Pool
Clearing Configuration of a Queue
Backup and Restoration
Introduction
Backing Up Metadata
Restoring Metadata
Modifying a Backup Task
Viewing Backup and Restoration Tasks
Security Management
Default Users of Clusters with Kerberos Authentication Disabled
Default Users of Clusters with Kerberos Authentication Enabled
Changing the Password of an OS User
Changing the Password of User admin
Changing the Password of the Kerberos Administrator
Changing the Passwords of the LDAP Administrator and the LDAP User
Changing the Password of a Component Running User
Changing the Password of the OMS Database Administrator
Changing the Password of the Data Access User of the OMS Database
Changing the Password of a Component Database User
Replacing the HA Certificate
Updating Cluster Keys
Permissions Management
Creating a Role
Creating a User Group
Creating a User
Modifying User Information
Locking a User
Unlocking a User
Deleting a User
Changing the Password of an Operation User
Initializing the Password of a System User
Downloading a User Authentication File
Modifying a Password Policy
MRS Multi-User Permission Management
Users and Permissions of MRS Clusters
Default Users of Clusters with Kerberos Authentication Enabled
Creating a Role
Creating a User Group
Creating a User
Modifying User Information
Locking a User
Unlocking a User
Deleting a User
Changing the Password of an Operation User
Initializing the Password of a System User
Downloading a User Authentication File
Modifying a Password Policy
Configuring Cross-Cluster Mutual Trust Relationships
Configuring Users to Access Resources of a Trusted Cluster
Patch Operation Guide
Patch Operation Guide for Versions
Supporting Rolling Patches
Restoring Patches for the Isolated Hosts
Rolling Restart
Alarm Reference (Applicable to MRS 3.x)
ALM-12001 Audit Log Dumping Failure
ALM-12004 OLdap Resource Abnormal
ALM-12005 OKerberos Resource Abnormal
ALM-12006 Node Fault
ALM-12007 Process Fault
ALM-12010 Manager Heartbeat Interruption Between the Active and Standby Nodes
ALM-12011 Manager Data Synchronization Exception Between the Active and Standby Nodes
ALM-12012 NTP Service Is Abnormal
ALM-12014 Partition Lost
ALM-12015 Partition Filesystem Readonly
ALM-12016 CPU Usage Exceeds the Threshold
ALM-12017 Insufficient Disk Capacity
ALM-12018 Memory Usage Exceeds the Threshold
ALM-12027 Host PID Usage Exceeds the Threshold
ALM-12028 Number of Processes in the D State and Z State on a Host Exceeds the Threshold
ALM-12033 Slow Disk Fault
ALM-12034 Periodical Backup Failure
ALM-12035 Unknown Data Status After Recovery Task Failure
ALM-12037 NTP Server Abnormal
ALM-12038 Monitoring Indicator Dumping Failure
ALM-12039 Active/Standby OMS Databases Not Synchronized
ALM-12040 Insufficient System Entropy
ALM-12041 Incorrect Permission on Key Files
ALM-12042 Incorrect Configuration of Key Files
ALM-12045 Read Packet Dropped Rate Exceeds the Threshold
ALM-12046 Write Packet Dropped Rate Exceeds the Threshold
ALM-12047 Read Packet Error Rate Exceeds the Threshold
ALM-12048 Write Packet Error Rate Exceeds the Threshold
ALM-12049 Network Read Throughput Rate Exceeds the Threshold
ALM-12050 Network Write Throughput Rate Exceeds the Threshold
ALM-12051 Disk Inode Usage Exceeds the Threshold
ALM-12052 TCP Temporary Port Usage Exceeds the Threshold
ALM-12053 Host File Handle Usage Exceeds the Threshold
ALM-12054 Invalid Certificate File
ALM-12055 Certificate File Is About to Expire
ALM-12057 Metadata Not Configured with the Task to Periodically Back Up Data to a Third-Party Server
ALM-12061 Process Usage Exceeds the Threshold
ALM-12062 OMS Parameter Configurations Mismatch with the Cluster Scale
ALM-12063 Unavailable Disk
ALM-12064 Host Random Port Range Conflicts with Cluster Used Port
ALM-12066 Trust Relationships Between Nodes Become Invalid
ALM-12067 Tomcat Resource Is Abnormal
ALM-12068 ACS Resource Exception
ALM-12069 AOS Resource Exception
ALM-12070 Controller Resource Is Abnormal
ALM-12071 Httpd Resource Is Abnormal
ALM-12072 FloatIP Resource Is Abnormal
ALM-12073 CEP Resource Is Abnormal
ALM-12074 FMS Resource Is Abnormal
ALM-12075 PMS Resource Is Abnormal
ALM-12076 GaussDB Resource Is Abnormal
ALM-12077 User omm Expired
ALM-12078 Password of User omm Expired
ALM-12079 User omm Is About to Expire
ALM-12080 Password of User omm Is About to Expire
ALM-12081 User ommdba Expired
ALM-12082 User ommdba Is About to Expire
ALM-12083 Password of User ommdba Is About to Expire
ALM-12084 Password of User ommdba Expired
ALM-12085 Service Audit Log Dump Failure
ALM-12087 System Is in the Upgrade Observation Period
ALM-12089 Inter-Node Network Is Abnormal
ALM-12091 Abnormal disaster Resources
ALM-12099 core dump Occurred
ALM-12100 AD Service Connection Failed
ALM-12101 AZ Unhealthy
ALM-12102 AZ HA Component Is Not Deployed Based on DR Requirements
ALM-12103 Executor Resource Exception
ALM-12104 Abnormal Knox Resources
ALM-12110 Failed to Get ECS Temporary AK/SK
ALM-12172 Failed to Report Metrics to Cloud Eye
ALM-12180 Suspended Disk I/O
ALM-12186 CGroup Task Usage Exceeds the Threshold
ALM-12187 Failed to Expand Disk Partition Capacity
ALM-12188 diskmgt Disk Monitoring Unavailable
ALM-12190 Number of Knox Connections Exceeds the Threshold
ALM-13000 ZooKeeper Service Unavailable
ALM-13001 Available ZooKeeper Connections Are Insufficient
ALM-13002 ZooKeeper Direct Memory Usage Exceeds the Threshold
ALM-13003 GC Duration of the ZooKeeper Process Exceeds the Threshold
ALM-13004 ZooKeeper Heap Memory Usage Exceeds the Threshold
ALM-13005 Failed to Set the Quota of Top Directories of ZooKeeper Components
ALM-13006 Znode Number or Capacity Exceeds the Threshold
ALM-13007 Available ZooKeeper Client Connections Are Insufficient
ALM-13008 ZooKeeper Znode Usage Exceeds the Threshold
ALM-13009 ZooKeeper Znode Capacity Usage Exceeds the Threshold
ALM-13010 Znode Usage of a Directory with Quota Configured Exceeds the Threshold
ALM-14000 HDFS Service Unavailable
ALM-14001 HDFS Disk Usage Exceeds the Threshold
ALM-14002 DataNode Disk Usage Exceeds the Threshold
ALM-14003 Number of Lost HDFS Blocks Exceeds the Threshold
ALM-14006 Number of HDFS Files Exceeds the Threshold
ALM-14007 NameNode Heap Memory Usage Exceeds the Threshold
ALM-14008 DataNode Heap Memory Usage Exceeds the Threshold
ALM-14009 Number of Dead DataNodes Exceeds the Threshold
ALM-14010 NameService Service Is Abnormal
ALM-14011 DataNode Data Directory Is Not Configured Properly
ALM-14012 JournalNode Is Out of Synchronization
ALM-14013 Failed to Update the NameNode FsImage File
ALM-14014 NameNode GC Time Exceeds the Threshold
ALM-14015 DataNode GC Time Exceeds the Threshold
ALM-14016 DataNode Direct Memory Usage Exceeds the Threshold
ALM-14017 NameNode Direct Memory Usage Exceeds the Threshold
ALM-14018 NameNode Non-heap Memory Usage Exceeds the Threshold
ALM-14019 DataNode Non-heap Memory Usage Exceeds the Threshold
ALM-14020 Number of Entries in the HDFS Directory Exceeds the Threshold
ALM-14021 NameNode Average RPC Processing Time Exceeds the Threshold
ALM-14022 NameNode Average RPC Queuing Time Exceeds the Threshold
ALM-14023 Percentage of Total Reserved Disk Space for Replicas Exceeds the Threshold
ALM-14024 Tenant Space Usage Exceeds the Threshold
ALM-14025 Tenant File Object Usage Exceeds the Threshold
ALM-14026 Blocks on DataNode Exceed the Threshold
ALM-14027 DataNode Disk Fault
ALM-14028 Number of Blocks to Be Supplemented Exceeds the Threshold
ALM-14029 Number of Blocks in a Replica Exceeds the Threshold
ALM-14030 HDFS Allows Write of Single-Replica Data
ALM-14031 DataNode Process Is Abnormal
ALM-14032 JournalNode Process Is Abnormal
ALM-14033 ZKFC Process Is Abnormal
ALM-14034 Router Process Is Abnormal
ALM-14035 HttpFS Process Is Abnormal
ALM-16000 Percentage of Sessions Connected to the HiveServer to Maximum Number Allowed Exceeds the Threshold
ALM-16001 Hive Warehouse Space Usage Exceeds the Threshold
ALM-16002 Hive SQL Execution Success Rate Is Lower Than the Threshold
ALM-16003 Background Thread Usage Exceeds the Threshold
ALM-16004 Hive Service Unavailable
ALM-16005 The Heap Memory Usage of the Hive Process Exceeds the Threshold
ALM-16006 The Direct Memory Usage of the Hive Process Exceeds the Threshold
ALM-16007 Hive GC Time Exceeds the Threshold
ALM-16008 Non-Heap Memory Usage of the Hive Process Exceeds the Threshold
ALM-16009 Map Number Exceeds the Threshold
ALM-16045 Hive Data Warehouse Is Deleted
ALM-16046 Hive Data Warehouse Permission Is Modified
ALM-16047 HiveServer Has Been Deregistered from ZooKeeper
ALM-16048 Tez or Spark Library Path Does Not Exist
ALM-17003 Oozie Service Unavailable
ALM-17004 Oozie Heap Memory Usage Exceeds the Threshold
ALM-17005 Oozie Non Heap Memory Usage Exceeds the Threshold
ALM-17006 Oozie Direct Memory Usage Exceeds the Threshold
ALM-17007 Garbage Collection (GC) Time of the Oozie Process Exceeds the Threshold
ALM-17008 Abnormal Connection Between Oozie and ZooKeeper
ALM-17009 Abnormal Connection Between Oozie and DBService
ALM-17010 Abnormal Connection Between Oozie and HDFS
ALM-17011 Abnormal Connection Between Oozie and Yarn
ALM-18000 Yarn Service Unavailable
ALM-18002 NodeManager Heartbeat Lost
ALM-18003 NodeManager Unhealthy
ALM-18008 Heap Memory Usage of ResourceManager Exceeds the Threshold
ALM-18009 Heap Memory Usage of JobHistoryServer Exceeds the Threshold
ALM-18010 ResourceManager GC Time Exceeds the Threshold
ALM-18011 NodeManager GC Time Exceeds the Threshold
ALM-18012 JobHistoryServer GC Time Exceeds the Threshold
ALM-18013 ResourceManager Direct Memory Usage Exceeds the Threshold
ALM-18014 NodeManager Direct Memory Usage Exceeds the Threshold
ALM-18015 JobHistoryServer Direct Memory Usage Exceeds the Threshold
ALM-18016 Non Heap Memory Usage of ResourceManager Exceeds the Threshold
ALM-18017 Non Heap Memory Usage of NodeManager Exceeds the Threshold
ALM-18018 NodeManager Heap Memory Usage Exceeds the Threshold
ALM-18019 Non Heap Memory Usage of JobHistoryServer Exceeds the Threshold
ALM-18020 Yarn Task Execution Timeout
ALM-18021 MapReduce Service Unavailable
ALM-18022 Insufficient Yarn Queue Resources
ALM-18023 Number of Pending Yarn Tasks Exceeds the Threshold
ALM-18024 Pending Yarn Memory Usage Exceeds the Threshold
ALM-18025 Number of Terminated Yarn Tasks Exceeds the Threshold
ALM-18026 Number of Failed Yarn Tasks Exceeds the Threshold
ALM-19000 HBase Service Unavailable
ALM-19006 HBase Replication Sync Failed
ALM-19007 HBase GC Time Exceeds the Threshold
ALM-19008 Heap Memory Usage of the HBase Process Exceeds the Threshold
ALM-19009 Direct Memory Usage of the HBase Process Exceeds the Threshold
ALM-19011 RegionServer Region Number Exceeds the Threshold
ALM-19012 HBase System Table Directory or File Lost
ALM-19013 Duration of Regions in Transition State Exceeds the Threshold
ALM-19014 Capacity Quota Usage on ZooKeeper Exceeds the Threshold Severely
ALM-19015 Quantity Quota Usage on ZooKeeper Exceeds the Threshold
ALM-19016 Quantity Quota Usage on ZooKeeper Exceeds the Threshold Severely
ALM-19017 Capacity Quota Usage on ZooKeeper Exceeds the Threshold
ALM-19018 HBase Compaction Queue Size Exceeds the Threshold
ALM-19019 Number of HBase HFiles to Be Synchronized Exceeds the Threshold
ALM-19020 Number of HBase WAL Files to Be Synchronized Exceeds the Threshold
ALM-19021 Handler Usage of RegionServer Exceeds the Threshold
ALM-19022 HBase Hotspot Detection Is Unavailable
ALM-19023 Region Traffic Restriction for HBase
ALM-19024 RPC Requests P99 Latency on RegionServer Exceeds the Threshold
ALM-19025 Damaged StoreFile in HBase
ALM-19026 Damaged WAL Files in HBase
ALM-20002 Hue Service Unavailable
ALM-23001 Loader Service Unavailable
ALM-23003 Loader Task Execution Failure
ALM-23004 Loader Heap Memory Usage Exceeds the Threshold
ALM-23005 Loader Non-Heap Memory Usage Exceeds the Threshold
ALM-23006 Loader Direct Memory Usage Exceeds the Threshold
ALM-23007 Garbage Collection (GC) Time of the Loader Process Exceeds the Threshold
ALM-24000 Flume Service Unavailable
ALM-24001 Flume Agent Exception
ALM-24003 Flume Client Connection Interrupted
ALM-24004 Exception Occurs When Flume Reads Data
ALM-24005 Exception Occurs When Flume Transmits Data
ALM-24006 Heap Memory Usage of Flume Server Exceeds the Threshold
ALM-24007 Flume Server Direct Memory Usage Exceeds the Threshold
ALM-24008 Flume Server Non Heap Memory Usage Exceeds the Threshold
ALM-24009 Flume Server Garbage Collection (GC) Time Exceeds the Threshold
ALM-24010 Flume Certificate File Is Invalid or Damaged
ALM-24011 Flume Certificate File Is About to Expire
ALM-24012 Flume Certificate File Has Expired
ALM-24013 Flume MonitorServer Certificate File Is Invalid or Damaged
ALM-24014 Flume MonitorServer Certificate Is About to Expire
ALM-24015 Flume MonitorServer Certificate File Has Expired
ALM-25000 LdapServer Service Unavailable
ALM-25004 Abnormal LdapServer Data Synchronization
ALM-25005 nscd Service Exception
ALM-25006 Sssd Service Exception
ALM-25007 Number of SlapdServer Connections Exceeds the Threshold
ALM-25008 SlapdServer CPU Usage Exceeds the Threshold
ALM-25500 KrbServer Service Unavailable
ALM-26051 Storm Service Unavailable
ALM-26052 Number of Available Supervisors of the Storm Service Is Less Than the Threshold
ALM-26053 Storm Slot Usage Exceeds the Threshold
ALM-26054 Nimbus Heap Memory Usage Exceeds the Threshold
ALM-27001 DBService Service Unavailable
ALM-27003 DBService Heartbeat Interruption Between the Active and Standby Nodes
ALM-27004 Data Inconsistency Between Active and Standby DBServices
ALM-27005 Database Connections Usage Exceeds the Threshold
ALM-27006 Disk Space Usage of the Data Directory Exceeds the Threshold
ALM-27007 Database Enters the Read-Only Mode
ALM-29000 Impala Service Unavailable
ALM-29004 Impalad Process Memory Usage Exceeds the Threshold
ALM-29005 Number of JDBC Connections to Impalad Exceeds the Threshold
ALM-29006 Number of ODBC Connections to Impalad Exceeds the Threshold
ALM-29007 Impalad Process Memory Usage Exceeds the Threshold
ALM-29008 Number of ODBC Connections to Impalad Exceeds the Threshold
ALM-29010 Number of Queries Being Submitted by Impalad Exceeds the Threshold
ALM-29011 Number of Queries Being Executed by Impalad Exceeds the Threshold
ALM-29012 Number of Queries Being Waited by Impalad Exceeds the Threshold
ALM-29013 Impalad FGC Time Exceeds the Threshold
ALM-29014 Catalog FGC Time Exceeds the Threshold
ALM-29015 Catalog Process Memory Usage Exceeds the Threshold
ALM-29016 Impalad Instance in the Sub-healthy State
ALM-29100 Kudu Service Unavailable
ALM-29104 Tserver Process Memory Usage Exceeds the Threshold
ALM-29106 Tserver Process CPU Usage Exceeds the Threshold
ALM-29107 Tserver Process Memory Usage Exceeds the Threshold
ALM-38000 Kafka Service Unavailable
ALM-38001 Insufficient Kafka Disk Capacity
ALM-38002 Kafka Heap Memory Usage Exceeds the Threshold
ALM-38004 Kafka Direct Memory Usage Exceeds the Threshold
ALM-38005 GC Duration of the Broker Process Exceeds the Threshold
ALM-38006 Percentage of Kafka Partitions That Are Not Completely Synchronized Exceeds the Threshold
ALM-38007 Status of Kafka Default User Is Abnormal
ALM-38008 Abnormal Kafka Data Directory Status
ALM-38009 Busy Broker Disk I/Os (Applicable to Versions Later Than MRS 3.1.0)
ALM-38009 Kafka Topic Overload (Applicable to MRS 3.1.0 and Earlier Versions)
ALM-38010 Topics with Single Replica
ALM-38011 User Connection Usage on Broker Exceeds the Threshold
ALM-43001 Spark2x Service Unavailable
ALM-43006 Heap Memory Usage of the JobHistory2x Process Exceeds the Threshold
ALM-43007 Non-Heap Memory Usage of the JobHistory2x Process Exceeds the Threshold
ALM-43008 The Direct Memory Usage of the JobHistory2x Process Exceeds the Threshold
ALM-43009 JobHistory2x Process GC Time Exceeds the Threshold
ALM-43010 Heap Memory Usage of the JDBCServer2x Process Exceeds the Threshold
ALM-43011 Non-Heap Memory Usage of the JDBCServer2x Process Exceeds the Threshold
ALM-43012 Direct Memory Usage of the JDBCServer2x Process Exceeds the Threshold
ALM-43013 JDBCServer2x Process GC Time Exceeds the Threshold
ALM-43017 JDBCServer2x Process Full GC Number Exceeds the Threshold
ALM-43018 JobHistory2x Process Full GC Number Exceeds the Threshold
ALM-43019 Heap Memory Usage of the IndexServer2x Process Exceeds the Threshold
ALM-43020 Non-Heap Memory Usage of the IndexServer2x Process Exceeds the Threshold
ALM-43021 Direct Memory Usage of the IndexServer2x Process Exceeds the Threshold
ALM-43022 IndexServer2x Process GC Time Exceeds the Threshold
ALM-43023 IndexServer2x Process Full GC Number Exceeds the Threshold
ALM-44000 Presto Service Unavailable
ALM-44004 Presto Coordinator Resource Group Queuing Tasks Exceed the Threshold
ALM-44005 Presto Coordinator Process GC Time Exceeds the Threshold
ALM-44006 Presto Worker Process GC Time Exceeds the Threshold
ALM-45000 HetuEngine Service Unavailable
ALM-45001 Faulty HetuEngine Compute Instances
ALM-45003 HetuEngine QAS Disk Capacity Is Insufficient
ALM-45175 Average Time for Calling OBS Metadata APIs Is Greater Than the Threshold
ALM-45176 Success Rate of Calling OBS Metadata APIs Is Lower Than the Threshold
ALM-45177 Success Rate of Calling OBS Data Read APIs Is Lower Than the Threshold
ALM-45178 Success Rate of Calling OBS Data Write APIs Is Lower Than the Threshold
ALM-45179 Number of Failed OBS readFully API Calls Exceeds the Threshold
ALM-45180 Number of Failed OBS read API Calls Exceeds the Threshold
ALM-45181 Number of Failed OBS write API Calls Exceeds the Threshold
ALM-45182 Number of Throttled OBS Operations Exceeds the Threshold
ALM-45275 Ranger Service Unavailable
ALM-45276 Abnormal RangerAdmin Status
ALM-45277 RangerAdmin Heap Memory Usage Exceeds the Threshold
ALM-45278 RangerAdmin Direct Memory Usage Exceeds the Threshold
ALM-45279 RangerAdmin Non Heap Memory Usage Exceeds the Threshold
ALM-45280 RangerAdmin GC Duration Exceeds the Threshold
ALM-45281 UserSync Heap Memory Usage Exceeds the Threshold
ALM-45282 UserSync Direct Memory Usage Exceeds the Threshold
ALM-45283 UserSync Non Heap Memory Usage Exceeds the Threshold
ALM-45284 UserSync Garbage Collection (GC) Time Exceeds the Threshold
ALM-45285 TagSync Heap Memory Usage Exceeds the Threshold
ALM-45286 TagSync Direct Memory Usage Exceeds the Threshold
ALM-45287 TagSync Non Heap Memory Usage Exceeds the Threshold
ALM-45288 TagSync Garbage Collection (GC) Time Exceeds the Threshold
ALM-45289 PolicySync Heap Memory Usage Exceeds the Threshold
ALM-45290 PolicySync Direct Memory Usage Exceeds the Threshold
ALM-45291 PolicySync Non-Heap Memory Usage Exceeds the Threshold
ALM-45292 PolicySync GC Duration Exceeds the Threshold
ALM-45325 Presto Service Unavailable
ALM-45326 Number of Presto Coordinator Threads Exceeds the Threshold
ALM-45327 Presto Coordinator Process GC Time Exceeds the Threshold
ALM-45328 Presto Worker Process GC Time Exceeds the Threshold
ALM-45329 Presto Coordinator Resource Group Queuing Tasks Exceed the Threshold
ALM-45330 Number of Presto Worker Threads Exceeds the Threshold
ALM-45331 Number of Presto Worker1 Threads Exceeds the Threshold
ALM-45332 Number of Presto Worker2 Threads Exceeds the Threshold
ALM-45333 Number of Presto Worker3 Threads Exceeds the Threshold
ALM-45334 Number of Presto Worker4 Threads Exceeds the Threshold
ALM-45335 Presto Worker1 Process GC Time Exceeds the Threshold
ALM-45336 Presto Worker2 Process GC Time Exceeds the Threshold
ALM-45337 Presto Worker3 Process GC Time Exceeds the Threshold
ALM-45338 Presto Worker4 Process GC Time Exceeds the Threshold
ALM-45425 ClickHouse Service Unavailable
ALM-45426 ClickHouse Service Quantity Quota Usage in ZooKeeper Exceeds the Threshold
ALM-45427 ClickHouse Service Capacity Quota Usage in ZooKeeper Exceeds the Threshold
ALM-45428 ClickHouse Disk I/O Exception
ALM-45429 Table Metadata Synchronization Failed on the Added ClickHouse Node
ALM-45430 Permission Metadata Synchronization Failed on the Added ClickHouse Node
ALM-45431 Improper ClickHouse Instance Distribution for Topology Allocation
ALM-45432 ClickHouse User Synchronization Process Fails
ALM-45433 ClickHouse AZ Topology Exception
ALM-45434 A Single Replica Exists in the ClickHouse Data Table
ALM-45435 Inconsistent Metadata of ClickHouse Tables
ALM-45436 Skewed ClickHouse Table Data
ALM-45437 Excessive Parts in the ClickHouse Table
ALM-45438 ClickHouse Disk Usage Exceeds 80%
ALM-45439 ClickHouse Node Enters the Read-Only Mode
ALM-45440 Inconsistency Between ClickHouse Replicas
ALM-45441 ZooKeeper Disconnected
ALM-45442 Too Many Concurrent SQL Statements
ALM-45443 Slow SQL Queries in the Cluster
ALM-45444 Abnormal ClickHouse Process
ALM-45475 A Single Replica Exists in the Kudu Data Table
ALM-45476 Kudu Failed to Enter the Maintenance Mode
ALM-45477 Failed to Restore Data After a Disk of Kudu Is Replaced
ALM-45478 Kudu Failed Data Balancing
ALM-45479 Number of Tablets of the Tserver Process Exceeds the Threshold
ALM-45480 Tablet Leaders of a Tserver Process Are Unevenly Distributed
ALM-45481 KuduTserver Has Full Disks
ALM-45585 IoTDB Service Unavailable
ALM-45586 IoTDBServer Heap Memory Usage Exceeds the Threshold
ALM-45587 IoTDBServer GC Duration Exceeds the Threshold
ALM-45588 IoTDBServer Direct Memory Usage Exceeds the Threshold
ALM-45589 ConfigNode Heap Memory Usage Exceeds the Threshold
ALM-45590 ConfigNode GC Duration Exceeds the Threshold
ALM-45591 ConfigNode Direct Memory Usage Exceeds the Threshold
ALM-45592 IoTDBServer RPC Execution Duration Exceeds the Threshold
ALM-45593 IoTDBServer Flush Execution Duration Exceeds the Threshold
ALM-45594 IoTDBServer Intra-Space Merge Duration Exceeds the Threshold
ALM-45595 IoTDBServer Cross-Space Merge Duration Exceeds the Threshold
ALM-45596 Procedure Execution Failed
ALM-45615 CDL Service Unavailable
ALM-45616 CDL Job Execution Exception
ALM-45617 Data Queued in the CDL Replication Slot Exceeds the Threshold
ALM-45635 FlinkServer Job Execution Failure
ALM-45636 FlinkServer Job Checkpoints Keep Failing
ALM-45636 Flink Job Checkpoints Keep Failing
ALM-45637 FlinkServer Task Is Continuously Under Back Pressure
ALM-45638 Number of Restarts After FlinkServer Job Failures Exceeds the Threshold
ALM-45638 Number of Restarts After Flink Job Failures Exceeds the Threshold
ALM-45639 Checkpointing of a Flink Job Times Out
ALM-45640 FlinkServer Heartbeat Interruption Between the Active and Standby Nodes
ALM-45641 Data Synchronization Exception Between the Active and Standby FlinkServer Nodes
ALM-45642 RocksDB Continuously Triggers Write Traffic Limiting
ALM-45643 MemTable Size of RocksDB Continuously Exceeds the Threshold
ALM-45644 Number of SST Files at Level 0 of RocksDB Continuously Exceeds the Threshold
ALM-45645 Pending Flush Size of RocksDB Continuously Exceeds the Threshold
ALM-45646 Pending Compaction Size of RocksDB Continuously Exceeds the Threshold
ALM-45647 Estimated Pending Compaction Size of RocksDB Continuously Exceeds the Threshold
ALM-45648 RocksDB Frequently Enters the Write-Stopped State
ALM-45649 P95 Latency of RocksDB Get Requests Continuously Exceeds the Threshold
ALM-45650 P95 Latency of RocksDB Write Requests Continuously Exceeds the Threshold
ALM-45652 Flink Service Unavailable
ALM-45653 Invalid Flink HA Certificate File
ALM-45654 Flink HA Certificate Is About to Expire
ALM-45655 Flink HA Certificate File Has Expired
ALM-45736 Guardian Service Unavailable
ALM-45737 TokenServer Heap Memory Usage Exceeds the Threshold
ALM-45738 TokenServer Direct Memory Usage Exceeds the Threshold
ALM-45739 TokenServer Non-Heap Memory Usage Exceeds the Threshold
ALM-45740 TokenServer GC Duration Exceeds the Threshold
ALM-45741 Failed to Call the ECS securitykey API
ALM-45742 Failed to Call the ECS Metadata API
ALM-45743 Failed to Call the IAM API
ALM-50201 Doris Service Unavailable
ALM-50202 FE CPU Usage Exceeds the Threshold
ALM-50203 FE Memory Usage Exceeds the Threshold
ALM-50205 BE CPU Usage Exceeds the Threshold
ALM-50206 BE Memory Usage Exceeds the Threshold
ALM-50207 Ratio of Connections to the FE MySQL Port to the Maximum Connections Allowed Exceeds the Threshold
ALM-50208 Failures to Clear Historical Metadata Image Files Exceed the Threshold
ALM-50209 Failures to Generate Metadata Image Files Exceed the Threshold
ALM-50210 Maximum Compaction Score of All BE Nodes Exceeds the Threshold
ALM-50211 FE Queue Length of BE Periodic Report Tasks Exceeds the Threshold
ALM-50212 Accumulated Old-Generation GC Duration of the FE Process Exceeds the Threshold
ALM-50213 Number of Tasks Queuing in the FE Thread Pool for Interacting with BE Exceeds the Threshold
ALM-50214 Number of Tasks Queuing in the FE Thread Pool for Task Processing Exceeds the Threshold
ALM-50215 Longest Duration of RPC Requests Received by Each FE Thrift Method Exceeds the Threshold
ALM-50216 Memory Usage of the FE Node Exceeds the Threshold
ALM-50217 Heap Memory Usage of the FE Node Exceeds the Threshold
ALM-50219 Length of the Queue in the Thread Pool for Query Execution Exceeds the Threshold
ALM-50220 Error Rate of TCP Packet Receiving Exceeds the Threshold
ALM-50221 BE Data Disk Usage Exceeds the Threshold
ALM-50222 Disk Status of a Specified Data Directory on BE Is Abnormal
ALM-50223 Maximum Memory Required by BE Is Greater Than the Remaining Memory of the Machine
ALM-50224 Failures of a Certain Task Type on BE Are Increasing
ALM-50225 FE Instance Fault
ALM-50226 BE Instance Fault
ALM-50401 Number of JobServer Jobs Waiting to Be Executed Exceeds the Threshold
ALM-50402 JobGateway Service Unavailable
Security Description
Security Configuration Suggestions for Clusters with Kerberos Authentication Disabled
Security Authentication Principles and Mechanisms
High-Risk Operations
Interconnecting Jupyter Notebook with MRS Using Custom Python
Overview
Installing a Client on a Node Outside the Cluster
Installing Python 3
Configuring the MRS Client
Installing Jupyter Notebook
Verifying that Jupyter Notebook Can Access MRS
FAQs
Appendix
ECS Specifications Used by MRS
BMS Specifications Used by MRS
A Defect Exists After Core Nodes in the MRS Cluster Are Added
Data Migration Solution
Making Preparations
Exporting Metadata
Copying Data
Restoring Data
Precautions for MRS 3.x
Installing the Flume Client
Installing the Flume Client on Clusters of Versions Earlier Than MRS 3.x
Installing the Flume Client on MRS 3.x or Later Clusters
Component Operation Guide (Normal)
Using Alluxio
Configuring an Underlying Storage System
Accessing Alluxio Using a Data Application
Common Operations of Alluxio
Using CarbonData (for Versions Earlier Than MRS 3.x)
Using CarbonData from Scratch
About CarbonData Table
Creating a CarbonData Table
Deleting a CarbonData Table
Using CarbonData (for MRS 3.x or Later)
Overview
CarbonData Overview
Main Specifications of CarbonData
Common CarbonData Parameters
CarbonData Operation Guide
CarbonData Quick Start
CarbonData Table Management
About CarbonData Table
Creating a CarbonData Table
Deleting a CarbonData Table
Modifying the CarbonData Table
CarbonData Table Data Management
Loading Data
Deleting Segments
Combining Segments
CarbonData Data Migration
Migrating Data on CarbonData from Spark 1.5 to Spark2x
CarbonData Performance Tuning
Tuning Guidelines
Suggestions for Creating CarbonData Tables
Configurations for Performance Tuning
CarbonData Access Control
CarbonData Syntax Reference
DDL
CREATE TABLE
CREATE TABLE AS SELECT
DROP TABLE
SHOW TABLES
ALTER TABLE COMPACTION
TABLE RENAME
ADD COLUMNS
DROP COLUMNS
CHANGE DATA TYPE
REFRESH TABLE
REGISTER INDEX TABLE
DML
LOAD DATA
UPDATE CARBON TABLE
DELETE RECORDS from CARBON TABLE
INSERT INTO CARBON TABLE
DELETE SEGMENT by ID
DELETE SEGMENT by DATE
SHOW SEGMENTS
CREATE SECONDARY INDEX
SHOW SECONDARY INDEXES
DROP SECONDARY INDEX
CLEAN FILES
SET/RESET
Operation Concurrent Execution
API
Spatial Indexes
CarbonData Troubleshooting
Filter Result Is Not Consistent with Hive When a Big Double Type Value Is Used in the Filter
Query Performance Deterioration
CarbonData FAQ
Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?
How to Avoid Minor Compaction for Historical Data?
How to Change the Default Group Name for CarbonData Data Loading?
Why Does INSERT INTO CARBON TABLE Command Fail?
Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?
Why Does Data Load Performance Decrease Due to Bad Records?
Why Is the INSERT INTO/LOAD DATA Task Distribution Incorrect and Why Are Fewer Tasks Opened Than the Available Executors When the Number of Initial Executors Is Zero?
Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?
Why Does Data Loading Fail When Off-Heap Memory Is Used?
Why Do I Fail to Create a Hive Table?
How Do I Logically Split Data Across Different Namespaces?
Why Is a Missing Privileges Exception Reported When I Perform a Drop Operation on Databases?
Why Cannot the UPDATE Command Be Executed in Spark Shell?
How Do I Configure Unsafe Memory in CarbonData?
Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the Storage Directory in HDFS?
Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed?
Why Do Files of a Carbon Table Exist in the Recycle Bin Even If the drop table Command Is Not Executed When Mis-deletion Prevention Is Enabled?
Using ClickHouse
Using ClickHouse from Scratch
ClickHouse Table Engine Overview
Creating a ClickHouse Table
ClickHouse Data Type
Configuring Interconnection Between ClickHouse and OBS
Enabling the mysql_port Configuration for ClickHouse
Common ClickHouse SQL Syntax
CREATE DATABASE: Creating a Database
CREATE TABLE: Creating a Table
INSERT INTO: Inserting Data into a Table
SELECT: Querying Table Data
ALTER TABLE: Modifying a Table Structure
ALTER TABLE: Modifying Table Data
DESC: Querying a Table Structure
DROP: Deleting a Table
SHOW: Displaying Information About Databases and Tables
Migrating ClickHouse Data
Accessing RDS MySQL Using ClickHouse
Importing DWS Data to a ClickHouse Table
Using ClickHouse to Import and Export Data
Synchronizing Kafka Data to ClickHouse
Using the ClickHouse Data Migration Tool
User Management and Authentication
ClickHouse User and Permission Management
Interconnecting ClickHouse with OpenLDAP for Authentication
ClickHouse Cluster Management
ClickHouse Cluster Configuration
Expanding the Data Disk Capacity of a ClickHouse Node
Adding a Disk to a ClickHouse Node
Accessing ClickHouse Through ELB
Backing Up and Restoring ClickHouse Data Using a Data File
ClickHouse Log Overview
ClickHouse Performance Tuning
Solution to the "Too many parts" Error in Data Tables
Accelerating Merge Operations
Accelerating TTL Operations
ClickHouse FAQ
What Do I Do If the Disk Status Displayed in the System.disks Table Is fault or abnormal?
How Do I Migrate Data from Hive/HDFS to ClickHouse?
How Do I Migrate Data from OBS/S3 to ClickHouse?
An Error Is Reported in Logs When the Auxiliary ZooKeeper or Replica Data Is Used to Synchronize Table Data
How Do I Grant the Select Permission at the Database Level to ClickHouse Users?
Using DBService
DBService Log Overview
Using Flink
Using Flink from Scratch
Viewing Flink Job Information
Configuring Flink Service Parameters
Configuring Flink Security Features
Security Features
Authentication and Encryption
Configuring Kafka
Configuring Pipeline
Configuring and Developing a Flink Visualization Job
Introduction to Flink Web UI
Flink Web UI Permission Management
Creating a FlinkServer Role
Accessing the Flink Web UI
Creating an Application
Creating a Cluster Connection
Creating a Data Connection
Creating a Stream Table
Creating a Job
Configuring and Managing UDFs
Flink Log Overview
Flink Performance Tuning
Memory Configuration Optimization
Configuring DOP
Configuring Process Parameters
Optimizing the Design of Partitioning Method
Configuring the Netty Network Communication
Experience Summary
Common Flink Shell Commands
Reference
Example of Issuing a Certificate
Flink Restart Policy
Using Flume
Using Flume from Scratch
Overview
Installing the Flume Client
Installing the Flume Client on Clusters of Versions Earlier Than MRS 3.x
Installing the Flume Client on MRS 3.x or Later Clusters
Viewing Flume Client Logs
Stopping or Uninstalling the Flume Client
Using the Encryption Tool of the Flume Client
Flume Service Configuration Guide
Flume Configuration Parameter Description
Using Environment Variables in the properties.properties File
Non-Encrypted Transmission
Configuring Non-encrypted Transmission
Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka
Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS
Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS
Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS
Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client
Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase
Encrypted Transmission
Configuring the Encrypted Transmission
Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS
Viewing Flume Client Monitoring Information
Connecting Flume to Kafka in Security Mode
Connecting Flume with Hive in Security Mode
Configuring the Flume Service Model
Overview
Service Model Configuration Guide
Introduction to Flume Logs
Flume Client Cgroup Usage Guide
Secondary Development Guide for Flume Third-Party Plug-ins
Configuring the Flume Customized Script
Common Issues About Flume
Using HBase
Using HBase from Scratch
Using an HBase Client
Creating HBase Roles
Configuring HBase Replication
Configuring HBase Parameters
Enabling Cross-Cluster Copy
Using the ReplicationSyncUp Tool
GeoMesa Command Line
Using HIndex
Introduction to HIndex
Loading Index Data in Batches
Using an Index Generation Tool
Migrating Index Data
Configuring an RSGroup
Configuring HBase DR
Configuring HBase Data Compression and Encoding
Performing an HBase DR Service Switchover
Performing an HBase DR Active/Standby Cluster Switchover
Community BulkLoad Tool
In-House Enhanced BulkLoad Tool
Importing Data in a Customized Manner
Importing Data in Batches
Combining Rowkeys
Implementing Custom RowKeys
Combining Fields
Specifying Field Data Types
Defining Inapplicable Data Rows
Importing Data with Indexes in a Customized Manner
Creating a Secondary Index When Importing Data In Batches
Combining Rowkeys
Implementing Custom RowKeys
Combining Fields
Specifying Field Data Type
Defining Inapplicable Data Row
Updating Rows in Batches
Deleting Rows in Batches
Collecting Statistics on Rows
Configuring the MOB
Configuring Secure HBase Replication
Configuring Region In Transition Recovery Chore Service
Using a Secondary Index
HBase Log Overview
HBase Performance Tuning
Improving the BulkLoad Efficiency
Improving Put Performance
Optimizing Put and Scan Performance
Improving Real-time Data Write Performance
Improving Real-time Data Read Performance
Optimizing JVM Parameters
Common Issues About HBase
Why Does a Client Keep Failing to Connect to a Server for a Long Time?
Operation Failures Occur When Stopping BulkLoad on the Client
Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?
Why Do Other Services Become Unstable If HBase Sets Up a Large Number of Connections over the Network Port?
Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?
How Do I Restore a Region in the RIT State for a Long Time?
Why Does HMaster Exit Due to Timeout When Waiting for the Namespace Table to Go Online?
Why Does SocketTimeoutException Occur When a Client Queries HBase?
Why Can Modified and Deleted Data Still Be Queried by Using the Scan Command?
Why Is a "java.lang.UnsatisfiedLinkError: Permission denied" Exception Thrown While Starting the HBase Shell?
When Are the RegionServers Listed Under "Dead Region Servers" on the HMaster WebUI Cleared?
Why Are Different Query Results Returned After I Use the Same Query Criteria to Query Data Successfully Imported by HBase bulkload?
What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?
How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?
Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?
Why Does HMaster Time Out While Waiting for the Namespace Table to Be Assigned After Meta Is Rebuilt Using the OfflineMetaRepair Tool and Startup Fails?
Why Are Messages Containing FileNotFoundException and no lease Frequently Displayed in the HMaster Logs During the WAL Splitting Process?
Insufficient Rights When a Tenant Accesses Phoenix
What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating "Rollback recovery failed"?
How Do I Fix Region Overlapping?
Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?
Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and "Permission denied" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?
Why Is the Error Message "import argparse" Displayed When the Phoenix sqlline Script Is Used?
How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?
Why Is a Message Indicating Insufficient Permission Displayed When CTBase Connects to the Ranger Plug-ins?
Using HDFS
Using Hadoop from Scratch
Configuring HDFS Parameters
Configuring Memory Management
Creating an HDFS Role
Using the HDFS Client
Running the DistCp Command
Overview of HDFS File System Directories
Changing the DataNode Storage Directory
Configuring HDFS Directory Permission
Configuring NFS
Planning HDFS Capacity
Configuring ulimit for HBase and HDFS
Balancing DataNode Capacity
Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes
Configuring the Number of Files in a Single HDFS Directory
Configuring the Recycle Bin Mechanism
Setting Permissions on Files and Directories
Setting the Maximum Lifetime and Renewal Interval of a Token
Configuring the Damaged Disk Volume
Configuring Encrypted Channels
Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable
Configuring the NameNode Blacklist
Optimizing HDFS NameNode RPC QoS
Optimizing HDFS DataNode RPC QoS
Configuring LZC Compression
Configuring Reserved Percentage of Disk Usage on DataNodes
Configuring HDFS NodeLabel
Configuring HDFS Mover
Using HDFS AZ Mover
Configuring HDFS DiskBalancer
Configuring the Observer NameNode to Process Read Requests
Performing Concurrent Operations on HDFS Files
Introduction to HDFS Logs
HDFS Performance Tuning
Improving Write Performance
Improving Read Performance Using Client Metadata Cache
Improving the Connection Between the Client and NameNode Using Current Active Cache
FAQ
NameNode Startup Is Slow
DataNode Is Normal but Cannot Report Data Blocks
HDFS WebUI Cannot Properly Update Information About Damaged Data
Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?
Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?
Failed to Calculate the Capacity of a DataNode When Multiple data.dir Directories Are Configured in a Disk Partition
Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage
Why Is Data in the Buffer Lost If a Power Outage Occurs During Storage of Small Files?
Why Does Array Border-crossing Occur During FileInputFormat Split?
Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?
The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time
Can I Delete or Modify the Data Storage Directory in DataNode?
Blocks Are Missing on the NameNode UI After a Successful Rollback
Why Is "java.net.SocketException: No buffer space available" Reported When Data Is Written to HDFS?
Why Are There Two Standby NameNodes After the Active NameNode Is Restarted?
When Does a Balance Process in HDFS Shut Down and Fail to Be Executed Again?
"This page can't be displayed" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI
NameNode Fails to Be Restarted Due to EditLog Discontinuity
Using Hive
Using Hive from Scratch
Configuring Hive Parameters
Hive SQL
Permission Management
Hive Permission
Creating a Hive Role
Configuring Permissions for Hive Tables, Columns, or Databases
Configuring Permissions to Use Other Components for Hive
Using a Hive Client
Using HDFS Colocation to Store Hive Tables
Using the Hive Column Encryption Function
Customizing Row Separators
Configuring Hive on HBase Across Clusters with Mutual Trust Enabled
Deleting Single-Row Records from Hive on HBase
Configuring HTTPS/HTTP-based REST APIs
Enabling or Disabling the Transform Function
Access Control of a Dynamic Table View on Hive
Specifying Whether the ADMIN Permission Is Required for Creating Temporary Functions
Using Hive to Read Data in a Relational Database
Supporting Traditional Relational Database Syntax in Hive
Creating User-Defined Hive Functions
Enhancing beeline Reliability
Viewing Table Structures Using the show create Statement as Users with the select Permission
Writing a Directory into Hive with the Old Data Removed to the Recycle Bin
Inserting Data to a Directory That Does Not Exist
Creating Databases and Tables in the Default Database Only as the Hive Administrator
Disabling Specification of the location Keyword When Creating an Internal Hive Table
Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read
Authorizing Over 32 Roles in Hive
Restricting the Maximum Number of Maps for Hive Tasks
HiveServer Lease Isolation
Hive Supporting Transactions
Switching the Hive Execution Engine to Tez
Hive Materialized View
Hive Supporting Cold and Hot Storage of Partitioned Metadata
Hive Supporting ZSTD Compression Formats
Hive Log Overview
Hive Performance Tuning
Creating Table Partitions
Optimizing Join
Optimizing Group By
Optimizing Data Storage
Optimizing SQL Statements
Optimizing the Query Function Using Hive CBO
Common Issues About Hive
How Do I Delete UDFs on Multiple HiveServers at the Same Time?
Why Cannot the DROP Operation Be Performed on a Backed-up Hive Table?
How to Perform Operations on Local Files with Hive User-Defined Functions
How Do I Forcibly Stop MapReduce Jobs Executed by Hive?
Table Creation Fails Because Hive Complex Fields' Names Contain Special Characters
How Do I Monitor the Hive Table Size?
How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?
Why Does a Hive on Spark Task Freeze When HBase Is Not Installed?
Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive
Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?
Description of Hive Table Location (Either an OBS or HDFS Path)
Why Cannot Data Be Queried After the Engine Is Switched Back to MapReduce Once the Tez Engine Has Been Used to Execute Union-related Statements?
Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?
Why Does Hive Not Support Vectorized Query?
Why Does Metadata Still Exist When the HDFS Data Directory of the Hive Table Is Deleted by Mistake?
How Do I Disable the Logging Function of Hive?
Why Do Hive Tables in the OBS Directory Fail to Be Deleted?
Hive Configuration Problems
Using Hudi
Getting Started
Basic Operations
Hudi Table Schema
Write
Batch Write
Stream Write
Synchronizing Hudi Table Data to Hive
Read
Overview
Reading COW Table Views
Reading MOR Table Views
Data Management and Maintenance
Clustering
Cleaning
Compaction
Savepoint
Single-Table Concurrency Control
Using the Hudi Client
Operating a Hudi Table Using hudi-cli.sh
Configuration Reference
Overview
Write Configuration
Configuration of Hive Table Synchronization
Index Configuration
Storage Configuration
Compaction and Cleaning Configurations
Single-Table Concurrency Control Configuration
Hudi Performance Tuning
Common Issues About Hudi
Data Write
Parquet/Avro schema Is Reported When Updated Data Is Written
UnsupportedOperationException Is Reported When Updated Data Is Written
SchemaCompatabilityException Is Reported When Updated Data Is Written
What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?
Hudi Fails to Write Decimal Data with Lower Precision
Data Collection
IllegalArgumentException Is Reported When Kafka Is Used to Collect Data
HoodieException Is Reported When Data Is Collected
HoodieKeyException Is Reported When Data Is Collected
Hive Synchronization
SQLException Is Reported During Hive Data Synchronization
HoodieHiveSyncException Is Reported During Hive Data Synchronization
SemanticException Is Reported During Hive Data Synchronization
Using Hue (Versions Earlier Than MRS 3.x)
Using Hue from Scratch
Accessing the Hue Web UI
Hue Common Parameters
Using HiveQL Editor on the Hue Web UI
Using the Metadata Browser on the Hue Web UI
Using File Browser on the Hue Web UI
Using Job Browser on the Hue Web UI
Using Hue (MRS 3.x or Later)
Using Hue from Scratch
Accessing the Hue Web UI
Hue Common Parameters
Using HiveQL Editor on the Hue Web UI
Using the SparkSql Editor on the Hue Web UI
Using the Metadata Browser on the Hue Web UI
Using File Browser on the Hue Web UI
Using Job Browser on the Hue Web UI
Using HBase on the Hue Web UI
Typical Scenarios
HDFS on Hue
Configuring HDFS Cold and Hot Data Migration
Hive on Hue
Oozie on Hue
Hue Log Overview
Common Issues About Hue
Why Do HQL Statements Fail to Execute in Hue Using Internet Explorer?
Why Does the use database Statement Become Invalid in Hive?
Why Do HDFS Files Fail to Be Accessed Through the Hue Web UI?
Why Do Large Files Fail to Be Uploaded on the Hue Page?
Why Cannot the Hue Native Page Be Properly Displayed If the Hive Service Is Not Installed in a Cluster?
Using Impala
Using Impala from Scratch
Common Impala Parameters
Accessing the Impala Web UI
Using Impala to Operate Kudu
Interconnecting Impala with External LDAP
Enabling and Configuring a Dynamic Resource Pool for Impala
Using Kafka
Using Kafka from Scratch
Managing Kafka Topics
Querying Kafka Topics
Managing Kafka User Permissions
Managing Messages in Kafka Topics
Synchronizing Binlog-based MySQL Data to the MRS Cluster
Creating a Kafka Role
Kafka Common Parameters
Safety Instructions on Using Kafka
Kafka Specifications
Using the Kafka Client
Configuring Kafka HA and High Reliability Parameters
Changing the Broker Storage Directory
Checking the Consumption Status of a Consumer Group
Kafka Balancing Tool Instructions
Balancing Data After Kafka Node Scale-Out
Kafka Token Authentication Mechanism Tool Usage
Using Kafka UI
Accessing Kafka UI
Kafka UI Overview
Creating a Topic on Kafka UI
Migrating a Partition on Kafka UI
Managing Topics on Kafka UI
Viewing Brokers on Kafka UI
Viewing a Consumer Group on Kafka UI
Introduction to Kafka Logs
Performance Tuning
Kafka Performance Tuning
Kafka Feature Description
Migrating Data Between Kafka Nodes
Common Issues About Kafka
How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?
Using KafkaManager
Introduction to KafkaManager
Accessing the KafkaManager Web UI
Managing Kafka Clusters
Kafka Cluster Monitoring Management
Using Loader
Using Loader from Scratch
How to Use Loader
Common Loader Parameters
Creating a Loader Role
Loader Link Configuration
Managing Loader Links (Versions Earlier Than MRS 3.x)
Managing Loader Links (MRS 3.x and Later Versions)
Source Link Configurations of Loader Jobs
Destination Link Configurations of Loader Jobs
Managing Loader Jobs
Preparing a Driver for MySQL Database Link
Importing Data
Overview
Importing Data Using Loader
Typical Scenario: Importing Data from an SFTP Server to HDFS or OBS
Typical Scenario: Importing Data from an SFTP Server to HBase
Typical Scenario: Importing Data from an SFTP Server to Hive
Typical Scenario: Importing Data from an FTP Server to HBase
Typical Scenario: Importing Data from a Relational Database to HDFS or OBS
Typical Scenario: Importing Data from a Relational Database to HBase
Typical Scenario: Importing Data from a Relational Database to Hive
Typical Scenario: Importing Data from HDFS or OBS to HBase
Typical Scenario: Importing Data from a Relational Database to ClickHouse
Typical Scenario: Importing Data from HDFS to ClickHouse
Exporting Data
Overview
Using Loader to Export Data
Typical Scenario: Exporting Data from HDFS or OBS to an SFTP Server
Typical Scenario: Exporting Data from HBase to an SFTP Server
Typical Scenario: Exporting Data from Hive to an SFTP Server
Typical Scenario: Exporting Data from HDFS or OBS to a Relational Database
Typical Scenario: Exporting Data from HBase to a Relational Database
Typical Scenario: Exporting Data from Hive to a Relational Database
Typical Scenario: Importing Data from HBase to HDFS or OBS
Managing Jobs
Migrating Loader Jobs in Batches
Deleting Loader Jobs in Batches
Importing Loader Jobs in Batches
Exporting Loader Jobs in Batches
Viewing Historical Job Information
Operator Help
Overview
Input Operators
CSV File Input
Fixed File Input
Table Input
HBase Input
HTML Input
Hive Input
Spark Input
Conversion Operators
Long Date Conversion
Null Value Conversion
Constant Field Addition
Random Value Conversion
Concat Fields
Extract Fields
Modulo Integer
String Cut
EL Operation
String Operations
String Reverse
String Trim
Filter Rows
Update Fields Operator
Output Operators
Hive Output
Spark Output
Table Output
File Output
HBase Output
ClickHouse Output
Associating, Editing, Importing, or Exporting the Field Configuration of an Operator
Using Macro Definitions in Configuration Items
Operator Data Processing Rules
Client Tools
Running a Loader Job by Using Commands
loader-tool Usage Guide
loader-tool Usage Example
schedule-tool Usage Guide
schedule-tool Usage Example
Using loader-backup to Back Up Job Data
Open Source sqoop-shell Tool Usage Guide
Example for Using the Open-Source sqoop-shell Tool (SFTP-HDFS)
Example for Using the Open-Source sqoop-shell Tool (Oracle-HBase)
Loader Log Overview
Example: Using Loader to Import Data from OBS to HDFS
Common Issues About Loader
Data Cannot Be Saved in Internet Explorer 10 or 11
Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to HDFS
Using Kudu
Using Kudu from Scratch
Accessing the Kudu Web UI
Using MapReduce
Configuring the Log Archiving and Clearing Mechanism
Reducing Client Application Failure Rate
Transmitting MapReduce Tasks from Windows to Linux
Configuring the Distributed Cache
Configuring the MapReduce Shuffle Address
Configuring the Cluster Administrator List
Introduction to MapReduce Logs
MapReduce Performance Tuning
Optimization Configuration for Multiple CPU Cores
Determining the Job Baseline
Streamlining Shuffle
AM Optimization for Big Tasks
Speculative Execution
Using Slow Start
Optimizing Performance for Committing MR Jobs
Common Issues About MapReduce
Why Does a MapReduce Task Stay Unchanged for a Long Time?
Why Does the Client Hang During Job Running?
Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?
How Do I Set the Task Priority When Submitting a MapReduce Task?
Why Does Physical Memory Overflow Occur If a MapReduce Task Fails?
After the Address of MapReduce JobHistoryServer Is Changed, Why Is the Wrong Page Displayed When I Click the Tracking URL on the ResourceManager WebUI?
MapReduce Job Fails in a Multi-NameService Environment
Why Is a Faulty MapReduce Node Not Blacklisted?
Using OpenTSDB
Using an MRS Client to Operate OpenTSDB Metric Data
Running the curl Command to Operate OpenTSDB
Using Oozie
Using Oozie from Scratch
Using the Oozie Client
Enabling Oozie High Availability (HA)
Using Oozie Client to Submit an Oozie Job
Submitting a Hive Job
Submitting a Spark2x Job
Submitting a Loader Job
Submitting a DistCp Job
Submitting Other Jobs
Using Hue to Submit an Oozie Job
Creating a Workflow
Submitting a Workflow Job
Submitting a Hive2 Job
Submitting a Spark2x Job
Submitting a Java Job
Submitting a Loader Job
Submitting a MapReduce Job
Submitting a Sub-workflow Job
Submitting a Shell Job
Submitting an HDFS Job
Submitting a Streaming Job
Submitting a DistCp Job
Example of Mutual Trust Operations
Submitting an SSH Job
Submitting a Hive Script
Submitting a Coordinator Periodic Scheduling Job
Submitting a Bundle Batch Processing Job
Querying Job Execution Results
Oozie Log Overview
Common Issues About Oozie
Oozie Scheduled Tasks Are Not Executed on Time
Why Does an Update of the share lib Directory of Oozie on HDFS Not Take Effect?
Common Oozie Troubleshooting Methods
Using Presto
Accessing the Presto Web UI
Using a Client to Execute Query Statements
Presto FAQ
How Do I Configure Multiple Hive Connections for Presto?
Using Ranger (MRS 1.9.2)
Creating a Ranger Cluster
Accessing the Ranger Web UI and Synchronizing Unix Users to the Ranger Web UI
Configuring Hive/Impala Access Permissions in Ranger
Configuring HBase Access Permissions in Ranger
Using Ranger (MRS 3.x)
Logging In to the Ranger Web UI
Enabling Ranger Authentication
Configuring Component Permission Policies
Viewing Ranger Audit Information
Configuring a Security Zone
Viewing Ranger Permission Information
Adding a Ranger Access Permission Policy for HDFS
Adding a Ranger Access Permission Policy for HBase
Adding a Ranger Access Permission Policy for Hive
Adding a Ranger Access Permission Policy for Yarn
Adding a Ranger Access Permission Policy for Spark2x
Adding a Ranger Access Permission Policy for Kafka
Adding a Ranger Access Permission Policy for Storm
Ranger Log Overview
Common Issues About Ranger
Why Does Ranger Startup Fail During the Cluster Installation?
How Do I Determine Whether the Ranger Authentication Is Used for a Service?
Why Cannot a New User Log In to Ranger After Changing the Password?
When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables
Using Spark
Precautions
Getting Started with Spark
Getting Started with Spark SQL
Using the Spark Client
Accessing the Spark Web UI
Interconnecting Spark with OpenTSDB
Creating a Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Modifying the Default Configuration Data
Using Spark2x
Precautions
Basic Operation
Getting Started
Configuring Parameters Rapidly
Common Parameters
Spark on HBase Overview and Basic Applications
Spark on HBase V2 Overview and Basic Applications
SparkSQL Permission Management (Security Mode)
Spark SQL Permissions
Creating a Spark SQL Role
Configuring Permissions for SparkSQL Tables, Columns, and Databases
Configuring Permissions for SparkSQL to Use Other Components
Configuring the Client and Server
Scenario-Specific Configuration
Configuring Multi-active Instance Mode
Configuring the Multi-tenant Mode
Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode
Configuring the Size of the Event Queue
Configuring Executor Off-Heap Memory
Enhancing Stability in a Limited Memory Condition
Viewing Aggregated Container Logs on the Web UI
Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes
Configuring the Default Number of Data Blocks Divided by SparkSQL
Configuring the Compression Format of a Parquet Table
Configuring the Number of Lost Executors Displayed in WebUI
Setting the Log Level Dynamically
Configuring Whether Spark Obtains HBase Tokens
Configuring LIFO for Kafka
Configuring Reliability for Connected Kafka
Configuring Streaming Reading of Driver Execution Results
Filtering Partitions without Paths in Partitioned Tables
Configuring Spark2x Web UI ACLs
Configuring Vector-based ORC Data Reading
Broadening Support for Hive Partition Pruning Predicate Pushdown
Hive Dynamic Partition Overwriting Syntax
Configuring the Column Statistics Histogram to Enhance the CBO Accuracy
Configuring Local Disk Cache for JobHistory
Configuring Spark SQL to Enable the Adaptive Execution Feature
Configuring Event Log Rollover
Adapting to the Third-party JDK When Ranger Is Used
Spark2x Logs
Obtaining Container Logs of a Running Spark Application
Small File Combination Tools
Using CarbonData for First Query
Spark2x Performance Tuning
Spark Core Tuning
Data Serialization
Optimizing Memory Configuration
Setting the DOP
Using Broadcast Variables
Using the External Shuffle Service to Improve Performance
Configuring Dynamic Resource Scheduling in Yarn Mode
Configuring Process Parameters
Designing the Directed Acyclic Graph (DAG)
Experience
Spark SQL and DataFrame Tuning
Optimizing the Spark SQL Join Operation
Improving Spark SQL Calculation Performance Under Data Skew
Optimizing Spark SQL Performance in the Small File Scenario
Optimizing the INSERT...SELECT Operation
Multiple JDBC Clients Concurrently Connecting to JDBCServer
Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables
Optimizing Small Files
Optimizing the Aggregate Algorithms
Optimizing Datasource Tables
Merging CBO
Optimizing SQL Query of Data of Multiple Sources
SQL Optimization for Multi-level Nesting and Hybrid Join
Spark Streaming Tuning
Common Issues About Spark2x
Spark Core
How Do I View Aggregated Spark Application Logs?
Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?
Why Cannot the Driver Process Exit?
Why Does FetchFailedException Occur When the Network Connection Times Out?
How to Configure the Event Queue Size If the Event Queue Overflows?
What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?
What Can I Do If "Connection to ip:port has been quiet for xxx ms while there are outstanding requests" Is Reported When Spark Executes an Application and the Application Ends?
Why Do Executors Fail to Be Removed After the NodeManager Is Shut Down?
What Can I Do If the Message "Password cannot be null if SASL is enabled" Is Displayed?
What Should I Do If the Message "Failed to CREATE_FILE" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?
Why Do Tasks Fail When Hash Shuffle Is Used?
What Can I Do If the Error Message "DNS query failed" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?
What Can I Do If Shuffle Fetch Fails Due to the "Timeout Waiting for Task" Exception?
Why Does the Stage Retry Due to the Crash of the Executor?
Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?
Why Does an Out-of-Memory Error Occur in NodeManager During the Execution of Spark Applications?
Why Does the Realm Information Fail to Be Obtained When SparkBench Is Run on HiBench for a Cluster in Security Mode?
Spark SQL and DataFrame
What Do I Have to Note When Using Spark SQL ROLLUP and CUBE?
Why Is Spark SQL Displayed as a Temporary Table in Different Databases?
How to Assign a Parameter Value in a Spark Command?
What Directory Permissions Do I Need to Create a Table Using SparkSQL?
Why Do I Fail to Delete the UDF Using Another Service?
Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?
How to Use Cache Table?
Why Are Some Partitions Empty During Repartition?
Why Do 16 Terabytes of Text Data Fail to Be Converted into 4 Terabytes of Parquet Data?
Why Does the Operation Fail When the Table Name Is TABLE?
Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?
If I Access a Parquet Table on Which I Do Not Have Permission, Why Is a Job Run Before "Missing Privileges" Is Displayed?
Why Do I Fail to Modify MetaData by Running the Hive Command?
Why Is "RejectedExecutionException" Displayed When I Exit Spark SQL?
What Do I Do If I Accidentally Kill the JDBCServer Process During Health Check?
Why Is No Result Found When 2016-6-30 Is Set in the Date Field as the Filter Condition?
Why Does the "--hivevar" Option I Specified in the Command for Starting spark-beeline Fail to Take Effect?
Why Is the "Code of method ... grows beyond 64 KB" Error Message Displayed When I Run Complex SQL Statements?
Why Is Memory Insufficient if 10 Terabytes of TPCDS Test Suites Are Consecutively Run in Beeline/JDBCServer Mode?
Why Cannot Functions Be Used When Different JDBCServers Are Connected?
Why Does an Exception Occur When I Drop Functions Created Using the Add Jar Statement?
Why Does Spark2x Have No Access to DataSource Tables Created by Spark1.5?
Why Does Spark-beeline Fail to Run and Error Message "Failed to create ThriftService instance" Is Displayed?
Why Cannot I Query Newly Inserted Data in an ORC Hive Table Using Spark SQL?
Spark Streaming
Same DAG Log Is Recorded Twice for a Streaming Task
What Can I Do If Spark Streaming Tasks Are Blocked?
What Should I Pay Attention to When Optimizing Spark Streaming Task Parameters?
Why Does the Spark Streaming Application Fail to Be Submitted After the Token Validity Period Expires?
Why Does a Spark Streaming Application Fail to Restart from a Checkpoint When It Creates an Input Stream Without Output Logic?
Why Is the Input Size Corresponding to Batch Time on the Web UI Set to 0 Records When Kafka Is Restarted During Spark Streaming Running?
Why Is the Job Information Obtained from the RESTful Interface of an Ended Spark Application Incorrect?
Why Cannot I Switch from the Yarn Web UI to the Spark Web UI?
What Can I Do If an Error Occurs when I Access the Application Page Because the Application Cached by HistoryServer Is Recycled?
Why Is an Application Not Displayed When I Run the Application with the Empty Part File?
Why Does Spark2x Fail to Export a Table with the Same Field Name?
Why Does a JRE Fatal Error Occur After Running a Spark Application Multiple Times?
Native Spark2x UI Fails to Be Accessed or Is Incorrectly Displayed when Internet Explorer Is Used for Access
How Does Spark2x Access External Cluster Components?
Why Does the Foreign Table Query Fail When Multiple Foreign Tables Are Created in the Same Directory?
Why Is the Native Page of an Application in Spark2x JobHistory Displayed Incorrectly?
Why Do I Fail to Create a Table in the Specified Location on OBS After Logging In to spark-beeline?
Spark Shuffle Exception Handling
Using Sqoop
Using Sqoop from Scratch
Adapting Sqoop 1.4.7 to MRS 3.x Clusters
Common Sqoop Commands and Parameters
Common Issues About Sqoop
What Should I Do If Class QueryProvider Is Unavailable?
What Should I Do If Method getHiveClient Does Not Exist?
What Do I Do If PostgreSQL or GaussDB Fails to Connect?
What Should I Do If Data Failed to Be Synchronized to a Hive Table on the OBS Using hive-table?
What Should I Do If Data Failed to Be Synchronized to an ORC or Parquet Table Using hive-table?
What Should I Do If Data Failed to Be Synchronized Using hive-table?
What Should I Do If Data Failed to Be Synchronized to a Hive Parquet Table Using HCatalog?
What Should I Do If the Data Type of Fields timestamp and data Is Incorrect During Data Synchronization Between Hive and MySQL?
Using Storm
Using Storm from Scratch
Using the Storm Client
Submitting Storm Topologies on the Client
Accessing the Storm Web UI
Managing Storm Topologies
Querying Storm Topology Logs
Storm Common Parameters
Configuring a Storm Service User Password Policy
Migrating Storm Services to Flink
Overview
Completely Migrating Storm Services
Performing Embedded Service Migration
Migrating Services of External Security Components Interconnected with Storm
Storm Log Introduction
Performance Tuning
Storm Performance Tuning
Using Tez
Precautions
Common Tez Parameters
Accessing TezUI
Log Overview
Common Issues
TezUI Cannot Display Tez Task Execution Details
Error Occurs When a User Switches to the Tez Web UI
Yarn Logs Cannot Be Viewed on the TezUI Page
Table Data Is Empty on the TezUI HiveQueries Page
Using YARN
Common YARN Parameters
Creating Yarn Roles
Using the YARN Client
Configuring Resources for a NodeManager Role Instance
Changing NodeManager Storage Directories
Configuring Strict Permission Control for Yarn
Configuring Container Log Aggregation
Using CGroups with YARN
Configuring the Number of ApplicationMaster Retries
Configuring the ApplicationMaster to Automatically Adjust the Allocated Memory
Configuring the Access Channel Protocol
Configuring Memory Usage Detection
Configuring the Additional Scheduler WebUI
Configuring Yarn Restart
Configuring ApplicationMaster Work Preserving
Configuring the Localized Log Levels
Configuring Users That Run Tasks
Yarn Log Overview
Yarn Performance Tuning
Preempting a Task
Setting the Task Priority
Optimizing Node Configuration
Common Issues About Yarn
Why Is the Mounted Directory for a Container Not Cleared After the Job Is Complete While Using CGroups?
Why Does the Job Fail with an HDFS_DELEGATION_TOKEN Expired Exception?
Why Are Local Logs Not Deleted After YARN Is Restarted?
Why Does the Task Not Fail Even Though AppAttempts Restarts More Than Two Times?
Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?
Why Does Yarn Not Release the Blacklist Even When All Nodes Are Added to the Blacklist?
Why Does the Switchover of ResourceManager Occur Continuously?
Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?
Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?
Why May a Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?
Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?
How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?
Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?
Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature Is Enabled?
Using ZooKeeper
Using ZooKeeper from Scratch
Common ZooKeeper Parameters
Using a ZooKeeper Client
Configuring the ZooKeeper Permissions
ZooKeeper Log Overview
Common Issues About ZooKeeper
Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?
Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?
Why Don't Four-Letter Commands Work with the Linux netcat Command When Secure Netty Configurations Are Enabled on the ZooKeeper Server?
How Do I Check Which ZooKeeper Instance Is a Leader?
Why Cannot the Client Connect to ZooKeeper Using the IBM JDK?
What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?
Why Is the Message "Node does not exist" Displayed When a Large Number of Znodes Are Deleted Using the deleteall Command?
Appendix
Modifying Cluster Service Configuration Parameters
Accessing Manager
Accessing MRS Manager (Versions Earlier Than MRS 3.x)
Accessing FusionInsight Manager (MRS 3.x or Later)
Using an MRS Client
Installing a Client (MRS 3.x or Later)
Installing a Client (Versions Earlier Than 3.x)
Updating a Client (Version 3.x or Later)
Updating a Client (Versions Earlier Than 3.x)
Component Operation Guide (LTS)
Using CarbonData
Overview
CarbonData Overview
Main Specifications of CarbonData
Configuration Reference
CarbonData Operation Guide
CarbonData Quick Start
CarbonData Table Management
About CarbonData Table
Creating a CarbonData Table
Deleting a CarbonData Table
Modifying the CarbonData Table
CarbonData Table Data Management
Loading Data
Deleting Segments
Combining Segments
CarbonData Data Migration
Migrating Data on CarbonData from Spark 1.5 to Spark2x
CarbonData Performance Tuning
Tuning Guidelines
Suggestions for Creating CarbonData Tables
Configurations for Performance Tuning
CarbonData Access Control
CarbonData Syntax Reference
DDL
CREATE TABLE
CREATE TABLE As SELECT
DROP TABLE
SHOW TABLES
ALTER TABLE COMPACTION
TABLE RENAME
ADD COLUMNS
DROP COLUMNS
CHANGE DATA TYPE
REFRESH TABLE
REGISTER INDEX TABLE
REFRESH INDEX
DML
LOAD DATA
UPDATE CARBON TABLE
DELETE RECORDS from CARBON TABLE
INSERT INTO CARBON TABLE
DELETE SEGMENT by ID
DELETE SEGMENT by DATE
SHOW SEGMENTS
CREATE SECONDARY INDEX
SHOW SECONDARY INDEXES
DROP SECONDARY INDEX
CLEAN FILES
SET/RESET
Operation Concurrent Execution
API
Spatial Indexes
CarbonData Troubleshooting
Filter Result Is Not Consistent with Hive When a Big Double Type Value Is Used in the Filter
Query Performance Deterioration
CarbonData FAQ
Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?
How to Avoid Minor Compaction for Historical Data?
How to Change the Default Group Name for CarbonData Data Loading?
Why Does INSERT INTO CARBON TABLE Command Fail?
Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?
Why Does Data Load Performance Decrease Due to Bad Records?
Why Is the INSERT INTO/LOAD DATA Task Distribution Incorrect and Why Are Fewer Tasks Opened Than the Available Executors When the Number of Initial Executors Is Zero?
Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?
Why Does Data Loading Fail When Off-Heap Memory Is Used?
Why Do I Fail to Create a Hive Table?
Why Do CarbonData Tables Created in V100R002C50RC1 Not Reflect the Privileges Provided in Hive Privileges for a Non-Owner?
How Do I Logically Split Data Across Different Namespaces?
Why Is a Missing Privileges Exception Reported When I Perform a Drop Operation on Databases?
Why Cannot the UPDATE Command Be Executed in Spark Shell?
How Do I Configure Unsafe Memory in CarbonData?
Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the Storage Directory in HDFS?
Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed?
Using ClickHouse
Using ClickHouse from Scratch
Common ClickHouse SQL Syntax
CREATE DATABASE: Creating a Database
CREATE TABLE: Creating a Table
INSERT INTO: Inserting Data into a Table
SELECT: Querying Table Data
ALTER TABLE: Modifying a Table Structure
DESC: Querying a Table Structure
DROP: Deleting a Table
SHOW: Displaying Information About Databases and Tables
Importing and Exporting File Data
User Management and Authentication
ClickHouse User and Permission Management
ClickHouse Table Engine Overview
Creating a ClickHouse Table
Using the ClickHouse Data Migration Tool
Monitoring of Slow ClickHouse Query Statements and Replication Table Data Synchronization
Slow Query Statement Monitoring
Replication Table Data Synchronization Monitoring
Adaptive MV Usage in ClickHouse
Enabling the mysql_port Configuration for ClickHouse
ClickHouse Log Overview
Using DBService
Configuring SSL for the HA Module
Restoring SSL for the HA Module
Configuring the Timeout Interval of DBService Backup Tasks
DBService Log Overview
Using Flink
Using Flink from Scratch
Viewing Flink Job Information
Flink Configuration Management
Configuring Parameter Paths
JobManager & TaskManager
Blob
Distributed Coordination (via Akka)
SSL
Network Communication (via Netty)
JobManager Web Frontend
File Systems
State Backend
Kerberos-based Security
HA
Environment
Yarn
Pipeline
Security Configuration
Security Features
Configuring Kafka
Configuring Pipeline
Security Hardening
Authentication and Encryption
ACL Control
Web Security
Security Statement
Using the Flink Web UI
Overview
Introduction to Flink Web UI
Flink Web UI Application Process
FlinkServer Permissions Management
Overview
Authentication Based on Users and Roles
Accessing the Flink Web UI
Creating an Application on the Flink Web UI
Creating a Cluster Connection on the Flink Web UI
Creating a Data Connection on the Flink Web UI
Managing Tables on the Flink Web UI
Managing Jobs on the Flink Web UI
Managing UDFs on the Flink Web UI
Managing UDFs on the Flink Web UI
UDF Java and SQL Examples
UDAF Java and SQL Examples
UDTF Java and SQL Examples
Interconnecting FlinkServer with External Components
Interconnecting FlinkServer with ClickHouse
Interconnecting FlinkServer with HBase
Interconnecting FlinkServer with HDFS
Interconnecting FlinkServer with Hive
Interconnecting FlinkServer with Hudi
Interconnecting FlinkServer with Kafka
Deleting Residual Information About Flink Tasks
Flink Log Overview
Flink Performance Tuning
Memory Configuration Optimization
Configuring DOP
Configuring Process Parameters
Optimizing the Design of Partitioning Method
Configuring the Netty Network Communication
Summarization
Common Flink Shell Commands
Using Flume
Using Flume from Scratch
Overview
Installing the Flume Client on Clusters
Viewing Flume Client Logs
Stopping or Uninstalling the Flume Client
Using the Encryption Tool of the Flume Client
Flume Service Configuration Guide
Flume Configuration Parameter Description
Using Environment Variables in the properties.properties File
Non-Encrypted Transmission
Configuring Non-encrypted Transmission
Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka
Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS
Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS
Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS
Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client
Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase
Encrypted Transmission
Configuring the Encrypted Transmission
Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS
Viewing Flume Client Monitoring Information
Connecting Flume to Kafka in Security Mode
Connecting Flume with Hive in Security Mode
Configuring the Flume Service Model
Overview
Service Model Configuration Guide
Introduction to Flume Logs
Flume Client Cgroup Usage Guide
Secondary Development Guide for Flume Third-Party Plug-ins
Configuring the Flume Customized Script
Common Issues About Flume
Using HBase
Using HBase from Scratch
Creating HBase Roles
Using an HBase Client
Configuring HBase Replication
Enabling Cross-Cluster Copy
Supporting Full-Text Index
Using the ReplicationSyncUp Tool
GeoMesa Command Line
Using HIndex
Introduction to HIndex
Loading Index Data in Batches
Using an Index Generation Tool
Configuring HBase DR
Performing an HBase DR Service Switchover
Configuring HBase Data Compression and Encoding
Performing an HBase DR Active/Standby Cluster Switchover
Community BulkLoad Tool
In-House Enhanced BulkLoad Tool
Importing Data in a Customized Manner
Importing Data in Batches
Combining Rowkeys
Implementing RowKeys in a Customized Manner
Combining Fields
Specifying Field Data Types
Defining Inapplicable Data Rows
Importing Data with Indexes in a Customized Manner
Creating a Secondary Index When Importing Data In Batches
Combining Rowkeys
Implementing RowKeys in a Customized Manner
Combining Fields
Specifying Field Data Type
Defining Inapplicable Data Row
Updating Rows in Batches
Deleting Rows in Batches
Obtaining Statistics of Rows
Configuring the MOB
Configuring Secure HBase Replication
Configuring Region In Transition Recovery Chore Service
Using a Secondary Index
HBase Log Overview
HBase Performance Tuning
Improving the BulkLoad Efficiency
Improving Put Performance
Optimizing Put and Scan Performance
Improving Real-time Data Write Performance
Improving Real-time Data Read Performance
Optimizing JVM Parameters
Common Issues About HBase
Why Does a Client Keep Failing to Connect to a Server for a Long Time?
Operation Failures Occur When Stopping BulkLoad on the Client
Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?
Why Do Other Services Become Unstable If HBase Sets Up a Large Number of Connections over the Network Port?
Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?
How Do I Restore a Region in the RIT State for a Long Time?
Why Does HMaster Exit Due to Timeout When Waiting for the Namespace Table to Go Online?
Why Does SocketTimeoutException Occur When a Client Queries HBase?
Why Can Modified and Deleted Data Still Be Queried by Using the Scan Command?
Why Is a "java.lang.UnsatisfiedLinkError: Permission denied" Exception Thrown While Starting the HBase Shell?
When Are the RegionServers Listed Under "Dead Region Servers" on the HMaster WebUI Cleared?
Why Are Different Query Results Returned After I Use the Same Query Criteria to Query Data Successfully Imported by HBase bulkload?
What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?
How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?
Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?
Why Does HMaster Time Out While Waiting for the Namespace Table to Be Assigned After Meta Is Rebuilt Using the OfflineMetaRepair Tool and Startup Fails?
Why Are Messages Containing FileNotFoundException and no lease Frequently Displayed in the HMaster Logs During the WAL Splitting Process?
Why Does the ImportTsv Tool Display "Permission denied" When the Same Linux User as the Region Server but a Different Kerberos User Is Used?
Insufficient Rights When a Tenant Accesses Phoenix
Insufficient Rights When a Tenant Uses the HBase Bulkload Function
What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating "Rollback recovery failed"?
How Do I Fix Region Overlapping?
Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?
Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and "Permission denied" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?
Why Is the Error Message "import argparse" Displayed When the Phoenix sqlline Script Is Used?
How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?
Why Is a Message Indicating Insufficient Permission Displayed When CTBase Connects to the Ranger Plug-ins?
Using HDFS
Configuring Memory Management
Creating an HDFS Role
Using the HDFS Client
Running the DistCp Command
Overview of HDFS File System Directories
Changing the DataNode Storage Directory
Configuring HDFS Directory Permission
Configuring NFS
Planning HDFS Capacity
Configuring ulimit for HBase and HDFS
Balancing DataNode Capacity
Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes
Configuring the Number of Files in a Single HDFS Directory
Configuring the Recycle Bin Mechanism
Setting Permissions on Files and Directories
Setting the Maximum Lifetime and Renewal Interval of a Token
Configuring the Damaged Disk Volume
Configuring Encrypted Channels
Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable
Configuring the NameNode Blacklist
Optimizing HDFS NameNode RPC QoS
Optimizing HDFS DataNode RPC QoS
Configuring LZC Compression
Configuring Reserved Percentage of Disk Usage on DataNodes
Configuring HDFS NodeLabel
Configuring HDFS DiskBalancer
Performing Concurrent Operations on HDFS Files
Introduction to HDFS Logs
HDFS Performance Tuning
Improving Write Performance
Improving Read Performance Using Client Metadata Cache
Improving the Connection Between the Client and NameNode Using Current Active Cache
FAQ
NameNode Startup Is Slow
Why Do MapReduce Tasks Fail in an Environment with Multiple NameServices?
DataNode Is Normal but Cannot Report Data Blocks
HDFS WebUI Cannot Properly Update Information About Damaged Data
Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?
Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?
Why Does an Error Occur During DataNode Capacity Calculation When Multiple data.dir Are Configured in a Partition?
Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage
Why Is Data in the Buffer Lost If a Power Outage Occurs During Storage of Small Files?
Why Does Array Border-crossing Occur During FileInputFormat Split?
Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?
The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time
Can I Delete or Modify the Data Storage Directory in DataNode?
Blocks Are Missing on the NameNode UI After a Successful Rollback
Why Is "java.net.SocketException: No buffer space available" Reported When Data Is Written to HDFS?
Why Are There Two Standby NameNodes After the Active NameNode Is Restarted?
When Does a Balance Process in HDFS Shut Down and Fail to Be Executed Again?
"This page can't be displayed" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI
NameNode Fails to Be Restarted Due to EditLog Discontinuity
Using HetuEngine
Using HetuEngine from Scratch
HetuEngine Permission Management
HetuEngine Permission Management Overview
Creating a HetuEngine User
HetuEngine Ranger-based Permission Control
HetuEngine MetaStore-based Permission Control
Overview
Creating a HetuEngine Role
Configuring Permissions for Tables, Columns, and Databases
Permission Principles and Constraints
Creating HetuEngine Compute Instances
Configuring Data Sources
Before You Start
Configuring a Hive Data Source
Configuring a Co-deployed Hive Data Source
Configuring a Traditional Data Source
Configuring a Hudi Data Source
Configuring an HBase Data Source
Configuring a GaussDB Data Source
Configuring a HetuEngine Data Source
Configuring a ClickHouse Data Source
Managing an External Data Source
Managing Compute Instances
Configuring Resource Groups
Adjusting the Number of Worker Nodes
Managing a HetuEngine Compute Instance
Importing and Exporting Compute Instance Configurations
Viewing the Instance Monitoring Page
Viewing Coordinator and Worker Logs
Using Resource Labels to Specify on Which Node Coordinators Should Run
Using the HetuEngine Client
Using the HetuEngine Cross-Source Function
Introduction to HetuEngine Cross-Source Function
Usage Guide of HetuEngine Cross-Source Function
Using HetuEngine Cross-Domain Function
Introduction to HetuEngine Cross-Domain Function
HetuEngine Cross-Domain Function Usage
HetuEngine Cross-Domain Rate Limit Function
Using a Third-Party Visualization Tool to Access HetuEngine
Using DBeaver to Access HetuEngine
Using Tableau to Access HetuEngine
Using Yonghong BI to Access HetuEngine
Function & UDF Development and Application
HetuEngine Function Plugin Development and Application
Hive UDF Development and Application
HetuEngine UDF Development and Application
Introduction to HetuEngine Logs
HetuEngine Performance Tuning
Adjusting the Yarn Service Configuration
Adjusting Cluster Node Resource Configurations
Adjusting Execution Plan Cache
Adjusting Metadata Cache
Modifying the CTE Configuration
Common Issues About HetuEngine
How Do I Perform Operations After the Domain Name Is Changed?
What Do I Do If Starting a Cluster on the Client Times Out?
How Do I Handle Data Source Loss?
How Do I Handle HetuEngine Alarms?
How Do I Do If Coordinators and Workers Cannot Be Started on the New Node?
HetuEngine SQL Syntax
Data Type
Data Types
Boolean
Integer
Fixed Precision
Float
Character
Time and Date Type
Complex Type
SQL Syntax
DDL Syntax
CREATE SCHEMA
CREATE VIRTUAL SCHEMA
CREATE TABLE
CREATE TABLE AS
CREATE TABLE LIKE
CREATE VIEW
CREATE FUNCTION
ALTER TABLE
ALTER VIEW
ALTER SCHEMA
DROP SCHEMA
DROP TABLE
DROP VIEW
DROP FUNCTION
TRUNCATE TABLE
COMMENT
VALUES
SHOW Syntax Overview
SHOW CATALOGS
SHOW SCHEMAS (DATABASES)
SHOW TABLES
SHOW TBLPROPERTIES TABLE|VIEW
SHOW TABLE/PARTITION EXTENDED
SHOW STATS
SHOW FUNCTIONS
SHOW SESSION
SHOW PARTITIONS
SHOW COLUMNS
SHOW CREATE TABLE
SHOW VIEWS
SHOW CREATE VIEW
DML Syntax
INSERT
DELETE
UPDATE
LOAD
TCL Syntax
START TRANSACTION
COMMIT
ROLLBACK
DQL Syntax
SELECT
WITH
GROUP BY
HAVING
UNION | INTERSECT | EXCEPT
ORDER BY
OFFSET
LIMIT | FETCH FIRST
TABLESAMPLE
UNNEST
JOINS
Subqueries
SELECT VIEW CONTENT
Auxiliary Command Syntax
USE
SET SESSION
RESET SESSION
DESCRIBE
DESCRIBE FORMATTED COLUMNS
DESCRIBE DATABASE|SCHEMA
DESCRIBE INPUT
DESCRIBE OUTPUT
EXPLAIN
EXPLAIN ANALYZE
REFRESH CATALOG
REFRESH SCHEMA
REFRESH TABLE
ANALYZE
CALL
PREPARE
DEALLOCATE PREPARE
EXECUTE
Reserved Keywords
SQL Functions and Operators
Logical Operators
Comparison Functions and Operators
Condition Expression
Conversion Function
Mathematical Functions and Operators
Bitwise Function
Regular Expressions
Binary Functions and Operators
JSON Functions and Operators
Date and Time Operators
Aggregate Functions
Window Functions
Array Functions and Operators
Map Functions and Operators
URL Functions
Geospatial Functions
HyperLogLog Functions
UUID Function
Color Functions
Session Information
Teradata Functions
Data Masking Functions
HetuEngine SQL Tutorial
Introduction to Implicit Data Type Conversion
Enabling/Disabling Implicit Conversion
Enabling Implicit Conversion
Disabling Implicit Conversion
Implicit Conversion Table
Appendix
Data Preparation for the Sample Table in This Document
Syntax Compatibility of Common Data Sources
Using Hive
Using Hive from Scratch
Configuring Hive Parameters
Hive SQL
Permission Management
Hive Permission
Creating a Hive Role
Configuring Permissions for Hive Tables, Columns, or Databases
Configuring Permissions to Use Other Components for Hive
Using a Hive Client
Using HDFS Colocation to Store Hive Tables
Using the Hive Column Encryption Function
Customizing Row Separators
Deleting Single-Row Records from Hive on HBase
Configuring HTTPS/HTTP-based REST APIs
Enabling or Disabling the Transform Function
Access Control of a Dynamic Table View on Hive
Specifying Whether the ADMIN Permission Is Required for Creating Temporary Functions
Using Hive to Read Data in a Relational Database
Supporting Traditional Relational Database Syntax in Hive
Creating User-Defined Hive Functions
Enhancing beeline Reliability
Viewing Table Structures Using the show create Statement as Users with the select Permission
Writing a Directory into Hive with the Old Data Removed to the Recycle Bin
Inserting Data to a Directory That Does Not Exist
Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator
Preventing the location Keyword from Being Specified When Creating an Internal Hive Table
Enabling the Creation of Foreign Tables in Read-Only Directories
Authorizing Over 32 Roles in Hive
Restricting the Maximum Number of Maps for Hive Tasks
HiveServer Lease Isolation
Hive Supporting Transactions
Switching the Hive Execution Engine to Tez
Interconnecting Hive with External Self-Built Relational Databases
Redis-based CacheStore of HiveMetaStore
Hive Materialized View
Hive Supporting Reading Hudi Tables
Hive Supporting Cold and Hot Storage of Partitioned Metadata
Hive Supporting ZSTD Compression Formats
Hive Log Overview
Hive Performance Tuning
Creating Table Partitions
Optimizing Join
Optimizing Group By
Optimizing Data Storage
Optimizing SQL Statements
Optimizing the Query Function Using Hive CBO
Common Issues About Hive
How Do I Delete UDFs on Multiple HiveServers at the Same Time?
Why Cannot the DROP Operation Be Performed on a Backed-up Hive Table?
How to Perform Operations on Local Files with Hive User-Defined Functions
How Do I Forcibly Stop MapReduce Jobs Executed by Hive?
Table Creation Fails Because Hive Complex Field Names Contain Special Characters
How Do I Monitor the Hive Table Size?
How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?
Why Is Hive on Spark Task Freezing When HBase Is Not Installed?
Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive
Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?
Description of Hive Table Location (Either an OBS or HDFS Path)
Why Cannot Data Be Queried After Union Statements Are Executed with the Tez Engine and the Engine Is Switched Back to MapReduce?
Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?
Why Does Hive Not Support Vectorized Query?
Hive Configuration Problems
Using Hudi
Quick Start
Basic Operations
Hudi Table Schema
Write
Batch Write
Stream Write
Bootstrapping
Synchronizing Hudi Table Data to Hive
Read
Reading COW Table Views
Reading MOR Table Views
Data Management and Maintenance
Metadata Table
Clustering
Cleaning
Compaction
Savepoint
Single-Table Concurrent Write
Using the Hudi Client
Operating a Hudi Table Using hudi-cli.sh
Configuration Reference
Write Configuration
Configuration of Hive Table Synchronization
Index Configuration
Storage Configuration
Compaction and Cleaning Configurations
Metadata Table Configuration
Single-Table Concurrent Write Configuration
Hudi Performance Tuning
Performance Tuning Methods
Recommended Resource Configuration
Hudi SQL Syntax Reference
Constraints
DDL
CREATE TABLE
CREATE TABLE AS SELECT
DROP TABLE
SHOW TABLE
ALTER RENAME TABLE
ALTER ADD COLUMNS
TRUNCATE TABLE
DML
INSERT INTO
MERGE INTO
UPDATE
DELETE
COMPACTION
SET/RESET
Common Issues About Hudi
Data Write
A Parquet/Avro Schema Exception Is Reported When Updated Data Is Written
UnsupportedOperationException Is Reported When Updated Data Is Written
SchemaCompatabilityException Is Reported When Updated Data Is Written
What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?
Data Collection
IllegalArgumentException Is Reported When Kafka Is Used to Collect Data
HoodieException Is Reported When Data Is Collected
HoodieKeyException Is Reported When Data Is Collected
Hive Synchronization
SQLException Is Reported During Hive Data Synchronization
HoodieHiveSyncException Is Reported During Hive Data Synchronization
SemanticException Is Reported During Hive Data Synchronization
Using Hue
Using Hue from Scratch
Accessing the Hue Web UI
Hue Common Parameters
Using HiveQL Editor on the Hue Web UI
Using the Metadata Browser on the Hue Web UI
Using File Browser on the Hue Web UI
Using Job Browser on the Hue Web UI
Using HBase on the Hue Web UI
Typical Scenarios
HDFS on Hue
Configuring HDFS Cold and Hot Data Migration
Hive on Hue
Oozie on Hue
Hue Log Overview
Common Issues About Hue
How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?
Why Does the use database Statement Become Invalid When Hive Is Used?
What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?
What Should I Do If a Large File Fails to Be Uploaded on the Hue Page?
Hue Page Cannot Be Displayed When the Hive Service Is Not Installed in a Cluster
How Do I Solve the Problem of Setting the Time Zone of the Oozie Editor on the Hue Web UI?
Using Kafka
Using Kafka from Scratch
Managing Kafka Topics
Querying Kafka Topics
Managing Kafka User Permissions
Managing Messages in Kafka Topics
Creating a Kafka Role
Kafka Common Parameters
Safety Instructions on Using Kafka
Kafka Specifications
Using the Kafka Client
Configuring Kafka HA and High Reliability Parameters
Changing the Broker Storage Directory
Checking the Consumption Status of a Consumer Group
Kafka Balancing Tool Instructions
Kafka Token Authentication Mechanism Tool Usage
Kafka Feature Description
Using Kafka UI
Accessing Kafka UI
Kafka UI Overview
Creating a Topic on Kafka UI
Migrating a Partition on Kafka UI
Managing Topics on Kafka UI
Viewing Brokers on Kafka UI
Viewing a Consumer Group on Kafka UI
Introduction to Kafka Logs
Performance Tuning
Kafka Performance Tuning
Common Issues About Kafka
How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?
Using Loader
Common Loader Parameters
Creating a Loader Role
Managing Loader Links
Importing Data
Overview
Importing Data Using Loader
Typical Scenario: Importing Data from an SFTP Server to HDFS or OBS
Typical Scenario: Importing Data from an SFTP Server to HBase
Typical Scenario: Importing Data from an SFTP Server to Hive
Typical Scenario: Importing Data from an SFTP Server to Spark
Typical Scenario: Importing Data from an FTP Server to HBase
Typical Scenario: Importing Data from a Relational Database to HDFS or OBS
Typical Scenario: Importing Data from a Relational Database to HBase
Typical Scenario: Importing Data from a Relational Database to Hive
Typical Scenario: Importing Data from a Relational Database to Spark
Typical Scenario: Importing Data from HDFS or OBS to HBase
Typical Scenario: Importing Data from a Relational Database to ClickHouse
Typical Scenario: Importing Data from HDFS to ClickHouse
Exporting Data
Overview
Using Loader to Export Data
Typical Scenario: Exporting Data from HDFS/OBS to an SFTP Server
Typical Scenario: Exporting Data from HBase to an SFTP Server
Typical Scenario: Exporting Data from Hive to an SFTP Server
Typical Scenario: Exporting Data from Spark to an SFTP Server
Typical Scenario: Exporting Data from HDFS/OBS to a Relational Database
Typical Scenario: Exporting Data from HBase to a Relational Database
Typical Scenario: Exporting Data from Hive to a Relational Database
Typical Scenario: Exporting Data from Spark to a Relational Database
Typical Scenario: Importing Data from HBase to HDFS/OBS
Job Management
Migrating Loader Jobs in Batches
Deleting Loader Jobs in Batches
Importing Loader Jobs in Batches
Exporting Loader Jobs in Batches
Viewing Historical Job Information
Operator Help
Overview
Input Operators
CSV File Input
Fixed File Input
Table Input
HBase Input
HTML Input
Hive Input
Spark Input
Conversion Operators
Long Date Conversion
Null Value Conversion
Constant Field Addition
Random Value Conversion
Concat Fields
Extract Fields
Modulo Integer
String Cut
EL Operation
String Operations
String Reverse
String Trim
Filter Rows
Update Fields Operator
Output Operators
Hive Output
Spark Output
Table Output
File Output
HBase Output
ClickHouse Output
Associating, Editing, Importing, or Exporting the Field Configuration of an Operator
Using Macro Definitions in Configuration Items
Operator Data Processing Rules
Client Tool Description
Running a Loader Job by Using Commands
loader-tool Usage Guide
loader-tool Usage Example
schedule-tool Usage Guide
schedule-tool Usage Example
Using loader-backup to Back Up Job Data
Open Source sqoop-shell Tool Usage Guide
Example for Using the Open-Source sqoop-shell Tool (SFTP-HDFS)
Example for Using the Open-Source sqoop-shell Tool (Oracle-HBase)
Loader Log Overview
Common Issues About Loader
How Do I Resolve the Failure to Save Data When Using Internet Explorer 10 or Internet Explorer 11?
Differences Among Connectors Used When Importing Data from an Oracle Database to HDFS
Using MapReduce
Converting MapReduce from the Single Instance Mode to the HA Mode
Configuring the Log Archiving and Clearing Mechanism
Reducing Client Application Failure Rate
Transmitting MapReduce Tasks from Windows to Linux
Configuring the Distributed Cache
Configuring the MapReduce Shuffle Address
Configuring the Cluster Administrator List
Introduction to MapReduce Logs
MapReduce Performance Tuning
Optimization Configuration for Multiple CPU Cores
Determining the Job Baseline
Streamlining Shuffle
AM Optimization for Big Tasks
Speculative Execution
Using Slow Start
Optimizing Performance for Committing MR Jobs
Common Issues About MapReduce
Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?
Why Does a MapReduce Task Stay Unchanged for a Long Time?
Why Does the Client Hang During Job Running?
Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?
How Do I Set the Task Priority When Submitting a MapReduce Task?
Why Does Physical Memory Overflow Occur If a MapReduce Task Fails?
Why Is the Wrong Page Displayed When I Click the Tracking URL on the ResourceManager WebUI After the Address of MapReduce JobHistoryServer Is Changed?
MapReduce Job Failed in Multiple NameService Environment
Why Is a Faulty MapReduce Node Not Blacklisted?
Using Oozie
Using Oozie from Scratch
Using the Oozie Client
Enabling Oozie High Availability (HA)
Using Oozie Client to Submit an Oozie Job
Submitting a Hive Job
Submitting a Spark2x Job
Submitting a Loader Job
Submitting a DistCp Job
Submitting Other Jobs
Using Hue to Submit an Oozie Job
Creating a Workflow
Submitting a Workflow Job
Submitting a Hive2 Job
Submitting a Spark2x Job
Submitting a Java Job
Submitting a Loader Job
Submitting a MapReduce Job
Submitting a Sub-workflow Job
Submitting a Shell Job
Submitting an HDFS Job
Submitting a DistCp Job
Example of Mutual Trust Operations
Submitting an SSH Job
Submitting a Hive Script
Submitting an Email Job
Submitting a Coordinator Periodic Scheduling Job
Submitting a Bundle Batch Processing Job
Querying the Operation Results
Oozie Log Overview
Common Issues About Oozie
How Do I Resolve the Problem that the Oozie Client Fails to Submit a MapReduce Job?
Oozie Scheduled Tasks Are Not Executed on Time
The Update of the share lib Directory of Oozie Does Not Take Effect
Using Ranger
Logging In to the Ranger Web UI
Enabling Ranger Authentication
Configuring Component Permission Policies
Viewing Ranger Audit Information
Configuring a Security Zone
Viewing Ranger Permission Information
Adding a Ranger Access Permission Policy for HDFS
Adding a Ranger Access Permission Policy for HBase
Adding a Ranger Access Permission Policy for Hive
Adding a Ranger Access Permission Policy for Yarn
Adding a Ranger Access Permission Policy for Spark2x
Adding a Ranger Access Permission Policy for Kafka
Adding a Ranger Access Permission Policy for HetuEngine
Ranger Log Overview
Common Issues About Ranger
Why Does Ranger Startup Fail During Cluster Installation?
How Do I Determine Whether the Ranger Authentication Is Used for a Service?
Why Cannot a New User Log In to Ranger After Changing the Password?
When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables
Using Spark2x
Basic Operation
Getting Started
Configuring Parameters Rapidly
Common Parameters
Spark on HBase Overview and Basic Applications
Spark on HBase V2 Overview and Basic Applications
SparkSQL Permission Management (Security Mode)
Spark SQL Permissions
Creating a Spark SQL Role
Configuring Permissions for SparkSQL Tables, Columns, and Databases
Configuring Permissions for SparkSQL to Use Other Components
Configuring the Client and Server
Scenario-Specific Configuration
Configuring Multi-active Instance Mode
Configuring the Multi-tenant Mode
Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode
Configuring the Size of the Event Queue
Configuring Executor Off-Heap Memory
Enhancing Stability in a Limited Memory Condition
Viewing Aggregated Container Logs on the Web UI
Configuring Whether to Display Spark SQL Statements Containing Sensitive Words
Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes
Configuring the Default Number of Data Blocks Divided by SparkSQL
Configuring the Compression Format of a Parquet Table
Configuring the Number of Lost Executors Displayed in WebUI
Setting the Log Level Dynamically
Configuring Whether Spark Obtains HBase Tokens
Configuring LIFO for Kafka
Configuring Reliability for Connected Kafka
Configuring Streaming Reading of Driver Execution Results
Filtering Partitions without Paths in Partitioned Tables
Configuring Spark2x Web UI ACLs
Configuring Vector-based ORC Data Reading
Broadening Support for Hive Partition Pruning Predicate Pushdown
Hive Dynamic Partition Overwriting Syntax
Configuring the Column Statistics Histogram to Enhance the CBO Accuracy
Configuring Local Disk Cache for JobHistory
Configuring Spark SQL to Enable the Adaptive Execution Feature
Configuring Event Log Rollover
Adapting to the Third-party JDK When Ranger Is Used
Spark2x Logs
Obtaining Container Logs of a Running Spark Application
Small File Combination Tools
Using CarbonData for First Query
Spark2x Performance Tuning
Spark Core Tuning
Data Serialization
Optimizing Memory Configuration
Setting the DOP
Using Broadcast Variables
Using the External Shuffle Service to Improve Performance
Configuring Dynamic Resource Scheduling in Yarn Mode
Configuring Process Parameters
Designing the Directed Acyclic Graph (DAG)
Experience
Spark SQL and DataFrame Tuning
Optimizing the Spark SQL Join Operation
Improving Spark SQL Calculation Performance Under Data Skew
Optimizing Spark SQL Performance in the Small File Scenario
Optimizing the INSERT...SELECT Operation
Multiple JDBC Clients Concurrently Connecting to JDBCServer
Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables
Optimizing Small Files
Optimizing the Aggregate Algorithms
Optimizing Datasource Tables
Merging CBO
Optimizing SQL Query of Data of Multiple Sources
SQL Optimization for Multi-level Nesting and Hybrid Join
Spark Streaming Tuning
Spark on OBS Tuning
Common Issues About Spark2x
Spark Core
How Do I View Aggregated Spark Application Logs?
Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?
Why Cannot the Driver Process Exit?
Why Does FetchFailedException Occur When the Network Connection Times Out?
How Do I Configure the Event Queue Size If the Event Queue Overflows?
What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?
What Can I Do If "Connection to ip:port has been quiet for xxx ms while there are outstanding requests" Is Reported When Spark Executes an Application and the Application Ends?
Why Do Executors Fail to Be Removed After the NodeManager Is Shut Down?
What Can I Do If the Message "Password cannot be null if SASL is enabled" Is Displayed?
What Should I Do If the Message "Failed to CREATE_FILE" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?
Why Do Tasks Fail When Hash Shuffle Is Used?
What Can I Do If the Error Message "DNS query failed" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?
What Can I Do If Shuffle Fetch Fails Due to the "Timeout Waiting for Task" Exception?
Why Does the Stage Retry Due to the Crash of the Executor?
Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?
Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications?
Why Does the Realm Information Fail to Be Obtained When SparkBench Is Run on HiBench for the Cluster in Security Mode?
Spark SQL and DataFrame
What Do I Have to Note When Using Spark SQL ROLLUP and CUBE?
Why Is Spark SQL Displayed as a Temporary Table in Different Databases?
How to Assign a Parameter Value in a Spark Command?
What Directory Permissions Do I Need to Create a Table Using SparkSQL?
Why Do I Fail to Delete the UDF Using Another Service?
Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?
How to Use Cache Table?
Why Are Some Partitions Empty During Repartition?
Why Does 16 Terabytes of Text Data Fail to Be Converted into 4 Terabytes of Parquet Data?
Why Does the Operation Fail When the Table Name Is TABLE?
Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?
If I Access a parquet Table on Which I Do Not Have Permission, Why Is a Job Run Before "Missing Privileges" Is Displayed?
Why Do I Fail to Modify MetaData by Running the Hive Command?
Why Is "RejectedExecutionException" Displayed When I Exit Spark SQL?
What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?
Why Is No Result Found When 2016-6-30 Is Set in the Date Field as the Filter Condition?
Why Does the "--hivevar" Option I Specified in the Command for Starting spark-beeline Fail to Take Effect?
Why Does the "Permission denied" Exception Occur When I Create a Temporary Table or View in Spark-beeline?
Why Is the "Code of method ... grows beyond 64 KB" Error Message Displayed When I Run Complex SQL Statements?
Why Is Memory Insufficient if 10 Terabytes of TPCDS Test Suites Are Consecutively Run in Beeline/JDBCServer Mode?
Why Are Some Functions Not Available when Another JDBCServer Is Connected?
Why Does an Exception Occur When I Drop Functions Created Using the Add Jar Statement?
Why Does Spark2x Have No Access to DataSource Tables Created by Spark1.5?
Why Does Spark-beeline Fail to Run and Error Message "Failed to create ThriftService instance" Is Displayed?
Spark Streaming
Streaming Task Prints the Same DAG Log Twice
What Can I Do If Spark Streaming Tasks Are Blocked?
What Should I Pay Attention to When Optimizing Spark Streaming Task Parameters?
Why Does the Spark Streaming Application Fail to Be Submitted After the Token Validity Period Expires?
Why does Spark Streaming Application Fail to Restart from Checkpoint When It Creates an Input Stream Without Output Logic?
Why Is the Input Size Corresponding to Batch Time on the Web UI Set to 0 Records When Kafka Is Restarted During Spark Streaming Running?
Why Is the Job Information Obtained from the RESTful Interface of an Ended Spark Application Incorrect?
Why Cannot I Switch from the Yarn Web UI to the Spark Web UI?
What Can I Do If an Error Occurs when I Access the Application Page Because the Application Cached by HistoryServer Is Recycled?
Why Is an Application Not Displayed When I Run the Application with an Empty Part File?
Why Does Spark2x Fail to Export a Table with the Same Field Name?
Why Does a JRE Fatal Error Occur After a Spark Application Is Run Multiple Times?
"This page can't be displayed" Is Displayed When Internet Explorer Fails to Access the Native Spark2x UI
How Does Spark2x Access External Cluster Components?
Why Does the Foreign Table Query Fail When Multiple Foreign Tables Are Created in the Same Directory?
What Should I Do If the Native Page of a Spark2x JobHistory Application Fails to Be Displayed?
Spark Shuffle Exception Handling
Using Tez
Common Tez Parameters
Accessing TezUI
Log Overview
Common Issues
TezUI Cannot Display Tez Task Execution Details
Error Occurs When a User Switches to the Tez Web UI
Yarn Logs Cannot Be Viewed on the TezUI Page
Table Data Is Empty on the TezUI HiveQueries Page
Using Yarn
Common Yarn Parameters
Creating Yarn Roles
Using the Yarn Client
Configuring Resources for a NodeManager Role Instance
Changing NodeManager Storage Directories
Configuring Strict Permission Control for Yarn
Configuring Container Log Aggregation
Using CGroups with YARN
Configuring the Number of ApplicationMaster Retries
Configuring the ApplicationMaster to Automatically Adjust the Allocated Memory
Configuring the Access Channel Protocol
Configuring Memory Usage Detection
Configuring the Additional Scheduler WebUI
Configuring Yarn Restart
Configuring ApplicationMaster Work Preserving
Configuring the Localized Log Levels
Configuring Users That Run Tasks
Yarn Log Overview
Yarn Performance Tuning
Preempting a Task
Setting the Task Priority
Optimizing Node Configuration
Common Issues About Yarn
Why Is the Mounted Directory for a Container Not Cleared After the Job Is Complete When CGroups Is Used?
Why Does the Job Fail with an HDFS_DELEGATION_TOKEN Expired Exception?
Why Are Local Logs Not Deleted After YARN Is Restarted?
Why Does the Task Not Fail Even Though AppAttempts Restarts More Than Two Times?
Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?
Why Does Yarn Not Release the Blacklist Even When All Nodes Are Added to the Blacklist?
Why Does the Switchover of ResourceManager Occur Continuously?
Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?
What Is the Queue Replacement Policy?
Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?
Why May a Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?
Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?
How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?
Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?
Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature Is Enabled?
Using ZooKeeper
Using ZooKeeper from Scratch
Common ZooKeeper Parameters
Using a ZooKeeper Client
Configuring the ZooKeeper Permissions
Changing the ZooKeeper Storage Directory
Configuring the ZooKeeper Connection
Configuring ZooKeeper Response Timeout Interval
Binding the Client to an IP Address
Configuring the Port Range Bound to the Client
Performing Special Configuration on ZooKeeper Clients in the Same JVM
Configuring a Quota for a Znode
ZooKeeper Log Overview
Common Issues About ZooKeeper
Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?
Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?
Why Don't Four-Letter Commands Work with the Linux netcat Command When Secure Netty Configurations Are Enabled on the ZooKeeper Server?
How Do I Check Which ZooKeeper Instance Is a Leader?
Why Cannot the Client Connect to ZooKeeper Using the IBM JDK?
What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?
Why Is the Message "Node does not exist" Displayed When a Large Number of Znodes Are Deleted Using the deleteall Command?
Appendix
Modifying Cluster Service Configuration Parameters
Accessing FusionInsight Manager
Using an MRS Client
Using an MRS Client on Nodes Inside an MRS Cluster
Using an MRS Client on Nodes Outside an MRS Cluster
Best Practices
Data Analytics
Using Spark2x to Analyze IoV Drivers' Driving Behavior
Using Hive to Load HDFS Data and Analyze Book Scores
Using Hive to Load OBS Data and Analyze Enterprise Employee Information
Using Flink Jobs to Process OBS Data
Consuming Kafka Data Using Spark Streaming Jobs
Using Flume to Collect Log Files from a Specified Directory to HDFS
Kafka-based WordCount Data Flow Statistics Case
Data Migration
Data Migration Solution
Making Preparations
Exporting Metadata
Copying Data
Restoring Data
Using BulkLoad to Import Data to HBase in Batches
Migrating Data from MySQL to an MRS Hive Partitioned Table
Migrating Data from MRS HDFS to OBS
Data Backup and Restoration
HDFS Data
Hive Metadata
Hive Data
HBase Data
Kafka Data
System Interconnection
Using DBeaver to Access Phoenix
Using DBeaver to Access HetuEngine
Using Tableau to Access HetuEngine
Using Yonghong BI to Access HetuEngine
Interconnecting Hive with External Self-Built Relational Databases
Interconnecting Hive with CSS
Interconnecting Hive with External LDAP
Developer Guide
Developer Guide (LTS)
Description
Obtaining Sample Projects from Huawei Mirrors
Using Open-source JAR File Conflict Lists
HBase
HDFS
Kafka
Spark2x
Mapping Between Maven Repository JAR Versions and MRS Component Versions
Security Authentication
Security Authentication Principles and Mechanisms
Preparing the Developer Account
Handling an Authentication Failure
ClickHouse Development Guide (Security Mode)
Overview
Introduction to ClickHouse
Basic Concepts
Development Process
Environment Preparations
Preparing the Development and Operating Environment
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Development Guideline
Sample Code
Setting Attributes
Establishing a Connection
Creating a Database
Creating a Table
Inserting Data
Querying Data
Deleting a Table
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
ClickHouse Development Guide (Normal Mode)
Overview
Introduction to ClickHouse
Basic Concepts
Development Process
Environment Preparations
Preparing the Development and Operating Environment
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Development Guideline
Sample Code
Setting Attributes
Establishing a Connection
Creating a Database
Creating a Table
Inserting Data
Querying Data
Deleting a Table
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Flink Development Guide (Security Mode)
Overview
Application Development
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Creating a Project (Optional)
Preparing for Security Authentication
Developing an Application
DataStream Application
Scenarios
Java Sample Code
Scala Sample Code
Interconnecting with Kafka
Scenarios
Java Sample Code
Scala Sample Code
Asynchronous Checkpoint Mechanism
Scenarios
Java Sample Code
Scala Sample Code
Job Pipeline Program
Scenario
Java Sample Code
Scala Sample Code
Stream SQL Join Program
Scenario
Java Sample Code
Debugging the Application
Compiling and Running the Application
Viewing the Debugging Result
More Information
Introduction to Common APIs
Java
Scala
Overview of RESTful APIs
Overview of Savepoints CLI
Introduction to Flink Client CLI
FAQ
Savepoints-related Problems
What If the Chrome Browser Cannot Display the Title
What If the Page Is Displayed Abnormally on Internet Explorer 10/11
What If Checkpoint Is Executed Slowly in RocksDBStateBackend Mode When the Data Amount Is Large
What If yarn-session Start Fails When blob.storage.directory Is Set to /home
Why Does Non-static KafkaPartitioner Class Object Fail to Construct FlinkKafkaProducer010?
When I Use a Newly Created Flink User to Submit Tasks, Why Does the Task Submission Fail and a Message Indicating Insufficient Permission on ZooKeeper Directory Is Displayed?
Why Cannot I Access the Apache Flink Dashboard?
How Do I View the Debugging Information Printed Using System.out.println or Export the Debugging Information to a Specified File?
Incorrect GLIBC Version
Flink Development Guide (Normal Mode)
Overview
Application Development
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Creating a Project (Optional)
Developing an Application
DataStream Application
Scenarios
Java Sample Code
Scala Sample Code
Interconnecting with Kafka
Scenarios
Java Sample Code
Scala Sample Code
Asynchronous Checkpoint Mechanism
Scenarios
Java Sample Code
Scala Sample Code
Job Pipeline Program
Scenario
Java Sample Code
Scala Sample Code
Stream SQL Join Program
Scenario
Java Sample Code
Debugging the Application
Compiling and Running the Application
Viewing the Debugging Result
More Information
Introduction to Common APIs
Java
Scala
Overview of RESTful APIs
Overview of Savepoints CLI
Introduction to Flink Client CLI
FAQ
Savepoints-related Problems
What If the Chrome Browser Cannot Display the Title
What If the Page Is Displayed Abnormally on Internet Explorer 10/11
What If Checkpoint Is Executed Slowly in RocksDBStateBackend Mode When the Data Amount Is Large
What If yarn-session Start Fails When blob.storage.directory Is Set to /home
Why Does Non-static KafkaPartitioner Class Object Fail to Construct FlinkKafkaProducer010?
When I Use a Newly Created Flink User to Submit Tasks, Why Does the Task Submission Fail and a Message Indicating Insufficient Permission on ZooKeeper Directory Is Displayed?
Why Cannot I Access the Apache Flink Dashboard?
How Do I View the Debugging Information Printed Using System.out.println or Export the Debugging Information to a Specified File?
Incorrect GLIBC Version
HBase Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Preparing for Security Authentication
Preparing Authentication Mechanism Code
Multi-Instance Authentication in Mutual Trust Scenarios
Authentication for Accessing the HBase REST Service
Authentication for Accessing the ThriftServer Service
Authentication for Accessing Multiple ZooKeepers
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Configuring Log4j Log Output
Creating Configuration
Creating Connection
Creating a Table
Deleting a Table
Modifying a Table
Inserting Data
Deleting Data
Reading Data Using Get
Reading Data Using Scan
Filtering Data
Creating a Secondary Index
Deleting an Index
Secondary Index-based Query
Multi-Point Region Division
Creating a Phoenix Table
Writing Data to the Phoenix Table
Reading the Phoenix Table
Accessing Multiple ZooKeepers
Querying Cluster Information Using REST
Obtaining All Tables Using REST
Operating Namespaces Using REST
Operating Tables Using REST
Accessing the ThriftServer Operation Table
Accessing ThriftServer to Write Data
Accessing ThriftServer to Read Data
Using HBase Dual-Read
Application Commissioning
Commissioning an Application in Windows
Compiling and Running an Application
Viewing Windows Commissioning Results
Commissioning an Application in Linux
Compiling and Running an Application When a Client Is Installed
Compiling and Running an Application When No Client Is Installed
Viewing Linux Commissioning Results
More Information
SQL Query
HBase Dual-Read Configuration Items
External Interfaces
Shell
Java API
Sqlline
JDBC APIs
WebUI
Phoenix Command Line
FAQs
How to Rectify the Fault When an Exception Occurs During the Running of an HBase-developed Application and "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory" Is Displayed in the Error Information?
What Are the Application Scenarios of the Bulkload and put Data-loading Modes?
An Error Occurred When Building a JAR Package
HBase Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Configuring Log4j Log Output
Creating Configuration
Creating Connection
Creating a Table
Deleting a Table
Modifying a Table
Inserting Data
Deleting Data
Reading Data Using Get
Reading Data Using Scan
Filtering Data
Creating a Secondary Index
Deleting an Index
Secondary Index-based Query
Multi-Point Region Division
Creating a Phoenix Table
Writing Data to the Phoenix Table
Reading the Phoenix Table
Accessing Multiple ZooKeepers
Querying Cluster Information Using REST
Obtaining All Tables Using REST
Operating Namespaces Using REST
Operating Tables Using REST
Accessing the ThriftServer Operation Table
Accessing ThriftServer to Write Data
Accessing ThriftServer to Read Data
Using HBase Dual-Read
Application Commissioning
Commissioning an Application in Windows
Compiling and Running an Application
Viewing Windows Commissioning Results
Commissioning an Application in Linux
Compiling and Running an Application When a Client Is Installed
Compiling and Running an Application When No Client Is Installed
Viewing Linux Commissioning Results
More Information
SQL Query
HBase Dual-Read Configuration Items
External Interfaces
Shell
Java APIs
Sqlline
JDBC APIs
WebUI
Phoenix Command Line
FAQs
How to Rectify the Fault When an Exception Occurs During the Running of an HBase-developed Application and "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory" Is Displayed in the Error Information?
What Are the Application Scenarios of the bulkload and put Data-loading Modes?
An Error Occurred When Building a JAR Package
HDFS Development Guide (Security Mode)
Overview
Introduction to HDFS
Basic Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing Sample Projects
Preparing the Authentication Mechanism
Developing the Project
Scenario
Development Idea
Example Code Description
Initializing the HDFS
Creating Directories
Writing Data into a File
Appending Data to a File
Reading Data from a File
Deleting a File
Deleting Directories
Multi-Thread Tasks
Operation on SmallFS
Setting Storage Policies
Colocation
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running an Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running an Application With the Client Installed
Compiling and Running an Application With the Client Not Installed
Checking the Commissioning Result
More Information
Common API Introduction
Java API
C API
HTTP REST API
Introduction to Shell Commands
HDFS Access Configuration on Windows Using EIPs
HDFS Development Guide (Normal Mode)
Overview
Introduction to HDFS
Basic Concepts
Development Process
Environment Preparation
Development and Operating Environment
Configuring and Importing Sample Projects
Developing the Project
Scenario
Development Idea
Example Code Description
Initializing the HDFS
Creating Directories
Writing Data into a File
Appending Data to a File
Reading Data from a File
Deleting a File
Deleting Directories
Multi-Thread Tasks
Setting Storage Policies
Operation on SmallFS
Colocation
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running an Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running an Application with the Client Installed
Compiling and Running an Application with the Client Not Installed
Checking the Commissioning Result
More Information
Common API Introduction
Java API
C API
HTTP REST API
Introduction to Shell Commands
HDFS Access Configuration on Windows Using EIPs
HetuEngine Development Guide (Security Mode)
Overview
Introduction to HetuEngine
Concepts
Development Process
Environment Preparation
Preparing Development and Running Environments
Configuring and Importing a Sample Project
Preparing for Security Authentication
KeyTab File Authentication
Username and Password Authentication Using ZooKeeper
Username and Password Authentication Using HSBroker
Application Development
Typical Application Scenario
Java Sample Code
KeyTab File Authentication
Username and Password Authentication Using ZooKeeper
Username and Password Authentication Using HSBroker
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
HetuEngine Development Guide (Normal Mode)
Overview
Introduction to HetuEngine
Concepts
Development Process
Environment Preparation
Preparing Development and Running Environments
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Java Sample Code
Accessing Hive Data Sources Using ZooKeeper
Accessing Hive Data Sources Using HSBroker
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Hive Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Required Permissions
Development Process
Preparing the Environment
Preparing Development and Operating Environment
Configuring the JDBC Sample Project
Configuring the HCatalog Sample Project
Configuring the Python Sample Project
Configuring the Python3 Sample Project
Developing an Application
Typical Scenario Description
Example Codes
Creating a Table
Loading Data
Querying Data
UDF
Example Program Guide
Accessing Multiple ZooKeepers
Commissioning Applications
Running JDBC and Viewing Results
Running HCatalog and Viewing Results
Running Python and Viewing Results
Running Python3 and Viewing Results
More Information
Interface Reference
JDBC
Hive SQL
WebHCat
Hive Access Configuration on Windows Using EIPs
FAQ
A Message Is Displayed Stating "Unable to read HiveServer2 configs from ZooKeeper" During the Use of the Secondary Development Program
"Problem performing GSS wrap" Message Is Displayed Due to IBM JDK Exceptions
Hive SQL Is Incompatible with SQL2003 Standards
Hive Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Preparing the Environment
Preparing Development and Operating Environment
Configuring the JDBC Sample Project
Configuring the HCatalog Sample Project
Configuring the Python Sample Project
Configuring the Python3 Sample Project
Developing an Application
Typical Scenario Description
Example Codes
Creating a Table
Loading Data
Querying Data
UDF
Example Program Guide
Accessing Multiple ZooKeepers
Commissioning Applications
Running JDBC and Viewing Results
Running HCatalog and Viewing Results
Running Python and Viewing Results
Running Python3 and Viewing Results
More Information
Interface Reference
JDBC
Hive SQL
WebHCat
Hive Access Configuration on Windows Using EIPs
FAQ
"Problem performing GSS wrap" Message Is Displayed Due to IBM JDK Exceptions
Kafka Development Guide (Security Mode)
Overview
Development Environment Preparation
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Preparing for Security Authentication
Developing an Application
Typical Scenario Description
Typical Scenario Sample Code Description
Producer API Usage Sample
Consumer API Usage Sample
Multi-thread Producer Sample
Multi-thread Consumer Sample
Kafka Streams Scenario Description
Kafka Streams Sample Code Description
High level KafkaStreams API Usage Sample
Low level KafkaStreams API Usage Sample
Kafka Token Authentication Mechanism Scenario
Sample Code of the Kafka Token Authentication Mechanism
Application Commissioning
Commissioning an Application in Windows
Commissioning an Application in Linux
Kafka Streams Sample Running Guide
High level Streams API Sample Usage Guide
Low level Streams API Sample Usage Guide
Running Guide of the Kafka Token Authentication Mechanism Sample Code
More Information
External Interfaces
Shell
Java API
Security Ports
SSL Encryption Function Used by a Client
Kafka Access Configuration on Windows Using EIPs
FAQ
Topic Authentication Fails During Sample Running and "example-metric1=TOPIC_AUTHORIZATION_FAILED" Is Displayed
Running the Producer.java Sample to Obtain Metadata Fails and "ERROR fetching topic metadata for topics..." Is Displayed, Even Though the User Has the Access Permission for the Related Topic
Kafka Development Guide (Normal Mode)
Overview
Development Environment Preparation
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Developing an Application
Typical Scenario Description
Example Code Description
Producer API Usage Sample
Consumer API Usage Sample
Multi-thread Producer Sample
Multi-thread Consumer Sample
Kafka Streams Scenario Description
Kafka Streams Sample Code Description
High level KafkaStreams API Usage Sample
Low level KafkaStreams API Usage Sample
Application Commissioning
Commissioning an Application in Windows
Commissioning an Application in Linux
Kafka Streams Sample Running Guide
High Level Kafka Streams API Sample Usage Guide
Low Level Kafka Streams API Sample Usage Guide
More Information
External Interfaces
Shell
Java API
Kafka Access Configuration on Windows Using EIPs
FAQ
Running the Producer.java Sample to Obtain Metadata Fails and "ERROR fetching topic metadata for topics..." Is Displayed, Even Though the User Has the Access Permission for the Related Topic
MapReduce Development Guide (Security Mode)
Overview
MapReduce Overview
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Preparing the Authentication Mechanism
Developing the Project
MapReduce Statistics Sample Project
Typical Scenarios
Example Code
MapReduce Accessing Multi-Component Example Project
Instance
Example Code
Commissioning the Application
Commissioning the Application in the Windows Environment
Compiling and Running the Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java API
REST API
FAQ
No Response from the Client When Submitting the MapReduce Application
When an Application Is Run, an Exception Occurs Due to Network Faults
How to Perform Remote Debugging During MapReduce Secondary Development?
MapReduce Development Guide (Normal Mode)
Overview
MapReduce Overview
Basic Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Developing the Project
MapReduce Statistics Sample Project
Typical Scenarios
Example Codes
MapReduce Accessing Multi-Component Example Project
Instance
Example Code
Commissioning the Application
Commissioning the Application in the Windows Environment
Compiling and Running the Application
Checking the Commissioning Result
Commissioning the Application in the Linux Environment
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java API
REST API
FAQ
No Response from the Client When Submitting the MapReduce Application
How to Perform Remote Debugging During MapReduce Secondary Development?
Oozie Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Downloading and Importing Sample Projects
Preparing Authentication Mechanism Code
Developing the Project
Development of Configuration Files
Description
Development Procedure
Example Codes
job.properties
workflow.xml
Start Action
End Action
Kill Action
FS Action
MapReduce Action
coordinator.xml
Development of Java
Description
Sample Code
Scheduling Spark2x to Access HBase and Hive Using Oozie
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running Applications
Checking the Commissioning Result
More Information
Introduction to Common APIs
Shell
Java
REST
Oozie Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Development and Operating Environment
Downloading and Importing Sample Projects
Developing the Project
Development of Configuration Files
Description
Development Procedure
Example Codes
job.properties
workflow.xml
Start Action
End Action
Kill Action
FS Action
MapReduce Action
coordinator.xml
Development of Java
Description
Sample Code
Scheduling Spark2x to Access HBase and Hive Using Oozie
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running Applications
Checking the Commissioning Result
More Information
Introduction to Common APIs
Shell
Java
REST
Spark2x Development Guide (Security Mode)
Overview
Application Development Overview
Basic Concepts
Development Process
Preparing for the Environment
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Preparing for Security Authentication
Configuring the Python3 Sample Project
Developing the Project
Spark Core Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Spark SQL Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Accessing Spark SQL Through JDBC
Instance
Java Example Code
Scala Example Code
Spark on HBase
Performing Operations on Data in Avro Format
Performing Operations on the HBase Data Source
Using the BulkPut Interface
Using the BulkGet Interface
Using the BulkDelete Interface
Using the BulkLoad Interface
Using the foreachPartition Interface
Distributedly Scanning HBase Tables
Using the mapPartition Interface
Writing Data to HBase Tables in Batches Using SparkStreaming
Reading Data from HBase and Writing It Back to HBase
Instance
Java Example Code
Scala Example Code
Python Example Code
Reading Data from Hive and Writing It to HBase
Instance
Java Example Code
Scala Example Code
Python Example Code
Streaming Connecting to Kafka0-10
Instance
Java Example Code
Scala Example Code
Structured Streaming Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Structured Streaming Stream-Stream Join
Overview
Scala Example Code
Structured Streaming Status Operation
Scenario
Scala Sample Code
Concurrent Access from Spark to HBase in Two Clusters
Scenario
Scala Sample Code
Synchronizing HBase Data from Spark to CarbonData
Instance
Java Example Code
Using Spark to Perform Basic Hudi Operations
Instance
Scala Example Code
Python Example Code
Java Example Code
Compiling User-defined Configuration Items for Hudi
HoodieDeltaStreamer
User-defined Partitioner
Commissioning the Application
Commissioning Applications on Windows
Spark Access Configuration on Windows Using EIPs
Compiling and Running Applications
Viewing Debugging Results
Commissioning an Application in Linux
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java
Scala
Python
REST API
Common CLIs
JDBCServer Interface
Structured Streaming Functions and Reliability
FAQ
How to Add a User-Defined Library
How to Automatically Load JAR Packages?
Why Is the "Class Does not Exist" Error Reported While the SparkStreamingKafka Project Is Running?
Privilege Control Mechanism of SparkSQL UDF Feature
Why Does Kafka Fail to Receive the Data Written Back by Spark Streaming?
Why Is a Spark Core Application Suspended Instead of Exiting When Driver Memory Is Insufficient to Store Collected Intensive Data?
Why Does the Name of the Spark Application Submitted in Yarn-Cluster Mode Not Take Effect?
How to Perform Remote Debugging Using IDEA?
How to Submit the Spark Application Using Java Commands?
A Message Stating "Problem performing GSS wrap" Is Displayed When IBM JDK Is Used
Application Fails When ApplicationManager Is Terminated During Data Processing in the Cluster Mode of Structured Streaming
Restrictions on Restoring the Spark Application from the checkpoint
Support for Third-party JAR Packages on x86 and TaiShan Platforms
What Should I Do If a Large Number of Directories Whose Names Start with blockmgr- or spark- Exist in the /tmp Directory on the Client Installation Node?
Error Code 139 Reported When Python Pipeline Runs in the ARM Environment
What Should I Do If the Structured Streaming Task Submission Method Is Changed?
Common JAR File Conflicts
Spark2x Development Guide (Normal Mode)
Overview
Application Development Overview
Basic Concepts
Development Process
Preparing for the Environment
Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Configuring the Python3 Sample Project
Developing the Project
Spark Core Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Spark SQL Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Accessing Spark SQL Through JDBC
Instance
Java Example Code
Scala Example Code
Spark on HBase
Performing Operations on Data in Avro Format
Performing Operations on the HBase Data Source
Using the BulkPut Interface
Using the BulkGet Interface
Using the BulkDelete Interface
Using the BulkLoad Interface
Using the foreachPartition Interface
Distributedly Scanning HBase Tables
Using the mapPartition Interface
Writing Data to HBase Tables in Batches Using SparkStreaming
Reading Data from HBase and Writing It Back to HBase
Instance
Java Example Code
Scala Example Code
Python Example Code
Reading Data from Hive and Writing It to HBase
Instance
Java Example Code
Scala Example Code
Python Example Code
Streaming Connecting to Kafka0-10
Instance
Java Example Code
Scala Example Code
Structured Streaming Project
Instance
Java Example Code
Scala Example Code
Python Example Code
Structured Streaming Stream-Stream Join
Overview
Scala Example Code
Structured Streaming Status Operation
Scenario
Scala Sample Code
Synchronizing HBase Data from Spark to CarbonData
Instance
Java Example Code
Using Spark to Perform Basic Hudi Operations
Instance
Scala Example Code
Python Example Code
Java Example Code
Compiling User-defined Configuration Items for Hudi
HoodieDeltaStreamer
User-defined Partitioner
Commissioning the Application
Commissioning Applications on Windows
Spark Access Configuration on Windows Using EIPs
Compiling and Running Applications
Viewing Debugging Results
Commissioning an Application in Linux
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java
Scala
Python
REST API
Common CLIs
JDBCServer Interface
Structured Streaming Functions and Reliability
FAQ
How to Add a User-Defined Library
How to Automatically Load JAR Packages?
Why Is the "Class Does not Exist" Error Reported While the SparkStreamingKafka Project Is Running?
Why Does Kafka Fail to Receive the Data Written Back by Spark Streaming?
Why Is a Spark Core Application Suspended Instead of Exiting When Driver Memory Is Insufficient to Store Collected Intensive Data?
Why Does the Name of the Spark Application Submitted in Yarn-Cluster Mode Not Take Effect?
How to Perform Remote Debugging Using IDEA?
How to Submit the Spark Application Using Java Commands?
A Message Stating "Problem performing GSS wrap" Is Displayed When IBM JDK Is Used
Application Fails When ApplicationManager Is Terminated During Data Processing in the Cluster Mode of Structured Streaming
Restrictions on Restoring the Spark Application from the checkpoint
Support for Third-party JAR Packages on x86 and TaiShan Platforms
What Should I Do If a Large Number of Directories Whose Names Start with blockmgr- or spark- Exist in the /tmp Directory on the Client Installation Node?
Error Code 139 Reported When Python Pipeline Runs in the ARM Environment
What Should I Do If the Structured Streaming Task Submission Method Is Changed?
Common JAR File Conflicts
YARN Development Guide (Security Mode)
Overview
Interfaces
Command
Java API
REST API
REST APIs of Superior Scheduler
YARN Development Guide (Normal Mode)
Overview
Interfaces
Command
Java API
REST API
REST APIs of Superior Scheduler
Development Specifications
Development Environment Construction
Rules
Security Authentication
Rules
Suggestions
ClickHouse
Rules
Suggestions
Flink
Applicable Scenarios
Rules
Suggestions
HBase
Application Scenarios
Rules
Suggestions
Examples
Appendix
HDFS
Application Scenarios
Rules
Suggestions
Hive
Application Scenarios
Rules
Suggestions
Examples
Kafka
Application Scenarios
Rules
Suggestions
MapReduce
Application Scenarios
Rules
Suggestions
Examples
Oozie
Application Scenarios
Rules
Suggestions
Spark2x
Application Scenarios
Rules
Suggestions
Yarn
Application Scenarios
Rules
Manager Management Development Guide
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing Development and Running Environments
Configuring and Importing a Sample Project
Developing an Application
Typical Scenario Description
Development Guideline
Example Code Description
Login Authentication
Adding Users
Searching for Users
Modifying Users
Deleting Users
Exporting a User List
Application Commissioning
Commissioning an Application in the Windows OS
Compiling and Running an Application
Viewing Windows Commissioning Results
More Information
External Interfaces
Java API
FAQ
JDK1.6 Fails to Connect to the FusionInsight System Using JDK1.8
An Operation Fails and "authorize failed" Is Displayed in Logs
An Operation Fails and "log4j:WARN No appenders could be found for logger(basicAuth.Main)" Is Displayed in Logs
An Operation Fails and "illegal character in path at index 57" Is Displayed in Logs
Running the curl Command to Access REST APIs
Developer Guide (Normal_3.x)
Description
Obtaining Sample Projects from Huawei Mirrors
Using Open-source JAR File Conflict Lists
HBase
HDFS
Kafka
Spark2x
Mapping Between Maven Repository JAR Versions and MRS Component Versions
Security Authentication
Security Authentication Principles and Mechanisms
Preparing a Developer Account
Handling an Authentication Failure
CQL Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Environment Preparation Overview
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Scenario
Example Codes
Running an Application
CQL Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Environment Preparation Overview
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Scenario
Example Codes
Running an Application
Debugging, Compiling, and Running an Application in Windows
Debugging and Running an Application in Linux
Viewing Commissioning Results
FAQ
"Application 'example' already exist" Is Reported When a Sample Application Is Run in Windows
ClickHouse Development Guide (Security Mode)
Overview
Introduction to ClickHouse
Basic Concepts
Development Process
Environment Preparations
Preparing the Development and Operating Environment
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Development Guideline
Sample Code
Setting Properties
Establishing a Connection
Creating a Database
Creating a Table
Inserting Data
Querying Data
Deleting a Table
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
ClickHouse Development Guide (Normal Mode)
Overview
Introduction to ClickHouse
Basic Concepts
Development Process
Environment Preparations
Preparing the Development and Operating Environment
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Development Guideline
Sample Code
Setting Properties
Establishing a Connection
Creating a Database
Creating a Table
Inserting Data
Querying Data
Deleting a Table
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Flink Development Guide (Security Mode)
Overview
Application Development
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Creating a Project (Optional)
Preparing for Security Authentication
Developing an Application
DataStream Application
Scenarios
Java Sample Code
Scala Sample Code
Interconnecting with Kafka
Scenario Description
Java Sample Code
Scala Sample Code
Asynchronous Checkpoint Mechanism
Scenarios
Java Sample Code
Scala Sample Code
Job Pipeline Program
Scenario
Java Sample Code
Scala Sample Code
Stream SQL Join Program
Scenario
Java Sample Code
Debugging the Application
Compiling and Running the Application
Viewing the Debugging Result
More Information
Introduction to Common APIs
Java
Scala
Overview of RESTful APIs
Overview of Savepoints CLI
Introduction to Flink Client CLI
FAQ
Savepoints-related Problems
What If the Chrome Browser Cannot Display the Title
What If the Page Is Displayed Abnormally on Internet Explorer 10/11
What If Checkpoint Is Executed Slowly in RocksDBStateBackend Mode When the Data Amount Is Large
What If yarn-session Start Fails When blob.storage.directory Is Set to /home
Why Does Non-static KafkaPartitioner Class Object Fail to Construct FlinkKafkaProducer010?
When I Use a Newly Created Flink User to Submit Tasks, Why Does the Task Submission Fail and a Message Indicating Insufficient Permission on ZooKeeper Directory Is Displayed?
Why Cannot I Access the Apache Flink Dashboard?
How Do I View the Debugging Information Printed Using System.out.println or Export the Debugging Information to a Specified File?
Incorrect GLIBC Version
Flink Development Guide (Normal Mode)
Overview
Application Development
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Creating a Project (Optional)
Developing an Application
DataStream Application
Scenarios
Java Sample Code
Scala Sample Code
Interconnecting with Kafka
Scenarios
Java Sample Code
Scala Sample Code
Asynchronous Checkpoint Mechanism
Scenarios
Java Sample Code
Scala Sample Code
Job Pipeline Program
Scenario
Java Sample Code
Scala Sample Code
Stream SQL Join Program
Scenario
Java Sample Code
Interconnecting Flink with Cloud Search Service
Scenario Description
Java Sample Code
Debugging the Application
Compiling and Running the Application
Viewing the Debugging Result
More Information
Introduction to Common APIs
Java
Scala
Overview of RESTful APIs
Overview of Savepoints CLI
Introduction to Flink Client CLI
FAQ
Savepoints-related Problems
What If the Chrome Browser Cannot Display the Title
What If the Page Is Displayed Abnormally on Internet Explorer 10/11
What If Checkpoint Is Executed Slowly in RocksDBStateBackend Mode When the Data Amount Is Large
What If yarn-session Start Fails When blob.storage.directory Is Set to /home
Why Does Non-static KafkaPartitioner Class Object Fail to Construct FlinkKafkaProducer010?
When I Use a Newly Created Flink User to Submit Tasks, Why Does the Task Submission Fail and a Message Indicating Insufficient Permission on ZooKeeper Directory Is Displayed?
Why Cannot I Access the Apache Flink Dashboard?
How Do I View the Debugging Information Printed Using System.out.println or Export the Debugging Information to a Specified File?
Incorrect GLIBC Version
HBase Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Preparing for Security Authentication
Preparing Authentication Mechanism Code
Multi-Instance Authentication in Mutual Trust Scenarios
Authentication for Accessing the HBase REST Service
Authentication for Accessing the ThriftServer Service
Authentication for Accessing Multiple ZooKeepers
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Configuring Log4j Log Output
Creating Configuration
Creating Connection
Creating a Table
Deleting a Table
Modifying a Table
Inserting Data
Deleting Data
Reading Data Using Get
Reading Data Using Scan
Filtering Data
Creating a Secondary Index
Deleting an Index
Secondary Index-based Query
Multi-Point Region Division
Creating a Phoenix Table
Writing Data to the PhoenixTable
Reading the PhoenixTable
Accessing Multiple ZooKeepers
Querying Cluster Information Using REST
Obtaining All Tables Using REST
Operating Namespaces Using REST
Operating Tables Using REST
Accessing ThriftServer to Operate Tables
Accessing ThriftServer to Write Data
Accessing ThriftServer to Read Data
Using HBase Dual-Read
Application Commissioning
Commissioning an Application in Windows
Compiling and Running an Application
Viewing Windows Commissioning Results
Commissioning an Application in Linux
Compiling and Running an Application When a Client Is Installed
Compiling and Running an Application When No Client Is Installed
Viewing Linux Commissioning Results
More Information
SQL Query
HBase Dual-Read Configuration Items
External Interfaces
Shell
Java API
SQLLine
JDBC APIs
WebUI
HBase Access Configuration on Windows Using EIPs
Phoenix Command Line
FAQs
How to Rectify the Fault When an Exception Occurs During the Running of an HBase-developed Application and "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory" Is Displayed in the Error Information?
What Are the Application Scenarios of the Bulkload and put Data-loading Modes?
An Error Occurred When Building a JAR Package
HBase Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Configuring Log4j Log Output
Creating Configuration
Creating Connection
Creating a Table
Deleting a Table
Modifying a Table
Inserting Data
Deleting Data
Reading Data Using Get
Reading Data Using Scan
Filtering Data
Creating a Secondary Index
Deleting an Index
Secondary Index-based Query
Multi-Point Region Division
Creating a Phoenix Table
Writing Data to the PhoenixTable
Reading the PhoenixTable
Accessing Multiple ZooKeepers
Querying Cluster Information Using REST
Obtaining All Tables Using REST
Operating Namespaces Using REST
Operating Tables Using REST
Accessing ThriftServer to Operate Tables
Accessing ThriftServer to Write Data
Accessing ThriftServer to Read Data
Using HBase Dual-Read
Application Commissioning
Commissioning an Application in Windows
Compiling and Running an Application
Viewing Windows Commissioning Results
Commissioning an Application in Linux
Compiling and Running an Application When a Client Is Installed
Compiling and Running an Application When No Client Is Installed
Viewing Linux Commissioning Results
More Information
SQL Query
HBase Dual-Read Configuration Items
External Interfaces
Shell
Java APIs
SQLLine
JDBC APIs
WebUI
HBase Access Configuration on Windows Using EIPs
Phoenix Command Line
FAQs
How to Rectify the Fault When an Exception Occurs During the Running of an HBase-developed Application and "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory" Is Displayed in the Error Information?
What Are the Application Scenarios of the bulkload and put Data-loading Modes?
An Error Occurred When Building a JAR Package
HDFS Development Guide (Security Mode)
Overview
Introduction to HDFS
Basic Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing Sample Projects
Preparing the Authentication Mechanism
Developing the Project
Scenario
Development Idea
Example Code Description
Initializing the HDFS
Creating Directories
Writing Data into a File
Appending Data to a File
Reading Data from a File
Deleting a File
Deleting Directories
Multi-Thread Tasks
Setting Storage Policies
Colocation
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running an Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running an Application With the Client Installed
Compiling and Running an Application With the Client Not Installed
Checking the Commissioning Result
More Information
Common API Introduction
Java API
C API
HTTP REST API
Shell Command Introduction
HDFS Access Configuration on Windows Using EIPs
HDFS Development Guide (Normal Mode)
Overview
Introduction to HDFS
Basic Concepts
Development Process
Environment Preparation
Development and Operating Environment
Configuring and Importing Sample Projects
Developing the Project
Scenario
Development Idea
Example Code Description
Initializing the HDFS
Creating Directories
Writing Data into a File
Appending Data to a File
Reading Data from a File
Deleting a File
Deleting Directories
Multi-Thread Tasks
Setting Storage Policies
Colocation
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running an Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running an Application with the Client Installed
Compiling and Running an Application with the Client Not Installed
Checking the Commissioning Result
More Information
Common API Introduction
Java API
C API
HTTP REST API
Shell Command Introduction
HDFS Access Configuration on Windows Using EIPs
Hive Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Required Permissions
Development Process
Preparing the Environment
Preparing Development and Operating Environment
Configuring the JDBC Sample Project
Configuring the Hcatalog Sample Project
Configuring the Python Sample Project
Configuring the Python3 Sample Project
Developing an Application
Typical Scenario Description
Example Codes
Creating a Table
Loading Data
Querying Data
UDF
Example Program Guide
Accessing Multiple ZooKeepers
Commissioning Applications
Running JDBC and Viewing Results
Running HCatalog and Viewing Results
Running Python and Viewing Results
Running Python3 and Viewing Results
More Information
Interface Reference
JDBC
Hive SQL
WebHCat
Hive Access Configuration on Windows Using EIPs
FAQ
A Message Is Displayed Stating "Unable to read HiveServer2 configs from ZooKeeper" During the Use of the Secondary Development Program
Problem performing GSS wrap Message Is Displayed Due to IBM JDK Exceptions
Hive SQL Is Incompatible with SQL2003 Standards
Hive Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Preparing the Environment
Preparing Development and Operating Environment
Configuring the JDBC Sample Project
Configuring the Hcatalog Sample Project
Configuring the Python Sample Project
Configuring the Python3 Sample Project
Developing an Application
Typical Scenario Description
Example Codes
Creating a Table
Loading Data
Querying Data
UDF
Example Program Guide
Accessing Multiple ZooKeepers
Commissioning Applications
Running JDBC and Viewing Results
Running HCatalog and Viewing Results
Running Python and Viewing Results
Running Python3 and Viewing Results
More Information
Interface Reference
JDBC
Hive SQL
WebHCat
Hive Access Configuration on Windows Using EIPs
FAQ
Problem performing GSS wrap Message Is Displayed Due to IBM JDK Exceptions
Impala Development Guide (Security Mode)
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environment
Application Development
Typical Application Scenario
Creating a Table
Loading Data
Querying Data
User-defined Functions
Sample Program Guide
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Impala APIs
JDBC
Impala SQL
Development Specifications
Rules
Suggestions
Examples
Impala Development Guide (Normal Mode)
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing a Sample Project
Application Development
Typical Application Scenario
Creating a Table
Loading Data
Querying Data
User-defined Functions
Sample Program Guide
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Impala APIs
JDBC
Impala SQL
Development Specifications
Rules
Suggestions
Examples
Kafka Development Guide (Security Mode)
Overview
Development Environment Preparation
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Preparing for Security Authentication
SASL Kerberos Authentication
Developing an Application
Typical Scenario Description
Typical Scenario Sample Code Description
Producer API Usage Sample
Consumer API Usage Sample
Multi-thread Producer Sample
Multi-thread Consumer Sample
Kafka Streams Scenario Description
Kafka Streams Sample Code Description
High level KafkaStreams API Usage Sample
Low level KafkaStreams API Usage Sample
Kafka Token Authentication Mechanism Scenario
Sample Code of the Kafka Token Authentication Mechanism
Application Commissioning
Commissioning an Application in Windows
Commissioning an Application in Linux
Kafka Streams Sample Running Guide
High level Streams API Sample Usage Guide
Low level Streams API Sample Usage Guide
Running Guide of the Kafka Token Authentication Mechanism Sample Code
More Information
External Interfaces
Shell
Java API
Security Ports
SSL Encryption Function Used by a Client
Kafka Access Configuration on Windows Using EIPs
FAQ
Topic Authentication Fails During Sample Running and "example-metric1=TOPIC_AUTHORIZATION_FAILED" Is Displayed
Running the Producer.java Sample to Obtain Metadata Fails and "ERROR fetching topic metadata for topics..." Is Displayed Even Though the Access Permission for the Related Topic Has Been Granted
Kafka Development Guide (Normal Mode)
Overview
Development Environment Preparation
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing a Sample Project
Developing an Application
Typical Scenario Description
Example Code Description
Producer API Usage Sample
Consumer API Usage Sample
Multi-thread Producer Sample
Multi-thread Consumer Sample
Kafka Streams Scenario Description
Kafka Streams Sample Code Description
High level KafkaStreams API Usage Sample
Low level KafkaStreams API Usage Sample
Application Commissioning
Commissioning an Application in Windows
Commissioning an Application in Linux
Kafka Streams Sample Running Guide
High Level Kafka Streams API Sample Usage Guide
Low level Kafka Streams API Sample Usage Guide
More Information
External Interfaces
Shell
Java API
Kafka Access Configuration on Windows Using EIPs
FAQ
Running the Producer.java Sample to Obtain Metadata Fails and "ERROR fetching topic metadata for topics..." Is Displayed Even Though the Access Permission for the Related Topic Has Been Granted
Kudu Development Guide (Security Mode)
Overview
Introduction to Kudu
Basic Concepts
Development Process
Environment Preparation
Preparing the Development and Running Environment
Preparing for Security Authentication
Developing an Application
Typical Application Scenario
Development Idea
Sample Code Description
Establishing Connections
Creating a Table
Opening a Table
Modifying a Table
Writing Data
Reading Data
Deleting a Table
Commissioning the Application
More Information
Common APIs
Java API
Kudu Development Guide (Normal Mode)
Overview
Introduction to Kudu
Basic Concepts
Development Process
Environment Preparation
Preparing the Development and Running Environment
Developing an Application
Typical Application Scenario
Development Idea
Sample Code Description
Establishing Connections
Creating a Table
Opening a Table
Modifying a Table
Writing Data
Reading Data
Deleting a Table
Commissioning the Application
More Information
Common APIs
Java API
MapReduce Development Guide (Security Mode)
Overview
MapReduce Overview
Basic Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Preparing the Authentication Mechanism
Developing the Project
MapReduce Statistics Sample Project
Typical Scenarios
Example Code
MapReduce Accessing Multi-Component Example Project
Typical Scenario
Example Code
Commissioning the Application
Commissioning the Application in the Windows Environment
Compiling and Running the Application
Checking the Commissioning Result
Commissioning an Application in the Linux Environment
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java API
REST API
FAQ
No Response from the Client When Submitting the MapReduce Application
When an Application Is Running, an Exception Occurs Due to Network Faults
How to Perform Remote Debugging During MapReduce Secondary Development?
MapReduce Development Guide (Normal Mode)
Overview
MapReduce Overview
Basic Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Developing the Project
MapReduce Statistics Sample Project
Typical Scenarios
Example Codes
MapReduce Accessing Multi-Component Example Project
Typical Scenario
Example Code
Commissioning the Application
Commissioning the Application in the Windows Environment
Compiling and Running the Application
Checking the Commissioning Result
Commissioning the Application in the Linux Environment
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java API
REST API
FAQ
No Response from the Client When Submitting the MapReduce Application
How to Perform Remote Debugging During MapReduce Secondary Development?
Oozie Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing Development and Operating Environment
Downloading and Importing Sample Projects
Preparing Authentication Mechanism Code
Developing the Project
Development of Configuration Files
Description
Development Procedure
Example Codes
job.properties
workflow.xml
Start Action
End Action
Kill Action
FS Action
MapReduce Action
coordinator.xml
Development of Java
Description
Sample Code
Scheduling Spark2x to Access HBase and Hive Using Oozie
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running Applications
Checking the Commissioning Result
More Information
Common API Introduction
Shell
Java
REST
Oozie Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing for Development and Operating Environment
Downloading and Importing Sample Projects
Developing the Project
Development of Configuration Files
Description
Development Procedure
Example Codes
job.properties
workflow.xml
Start Action
End Action
Kill Action
FS Action
MapReduce Action
coordinator.xml
Development of Java
Description
Sample Code
Scheduling Spark2x to Access HBase and Hive Using Oozie
Commissioning the Application
Commissioning an Application in the Windows Environment
Compiling and Running Applications
Checking the Commissioning Result
More Information
Common API Introduction
Shell
Java
REST
Presto Development Guide (Security Mode)
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing the JDBC API Sample Project
Configuring and Importing the HCatalog API Sample Project
Developing the Project
Typical Application Scenario
Sample Code Description
Commissioning the Application
Presto APIs
FAQs
No Certificate Is Available When PrestoJDBCExample Is Run on a Node Outside the Cluster
When a Node Outside a Cluster Is Connected to a Cluster with Kerberos Authentication Enabled, HTTP Cannot Find the Corresponding Record in the Kerberos Database
Presto Development Guide (Normal Mode)
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environment
Configuring and Importing the JDBC API Sample Project
Configuring and Importing the HCatalog API Sample Project
Developing the Project
Typical Application Scenario
Sample Code Description
Commissioning the Application
Presto APIs
Spark2x Development Guide (Security Mode)
Overview
Application Development Overview
Basic Concepts
Development Process
Preparing the Environment
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Preparing for Security Authentication
Configuring the Python3 Sample Project
Developing the Project
Spark Core Project
Overview
Java Sample Code
Scala Sample Code
Python Sample Code
Spark SQL Project
Overview
Java Sample Code
Scala Sample Code
Python Sample Code
Accessing the Spark SQL Through JDBC
Overview
Java Sample Code
Scala Sample Code
Spark on HBase
Performing Operations on Data in Avro Format
Performing Operations on the HBase Data Source
Using the BulkPut Interface
Using the BulkGet Interface
Using the BulkDelete Interface
Using the BulkLoad Interface
Using the foreachPartition Interface
Distributedly Scanning HBase Tables
Using the mapPartition Interface
Writing Data to HBase Tables In Batches Using SparkStreaming
Reading Data from HBase and Writing It Back to HBase
Overview
Java Example Code
Scala Example Code
Python Example Code
Reading Data from Hive and Writing It to HBase
Overview
Java Example Code
Scala Example Code
Python Example Code
Streaming Connecting to Kafka0-10
Overview
Java Example Code
Scala Example Code
Structured Streaming Project
Overview
Java Sample Code
Scala Sample Code
Python Sample Code
Structured Streaming Stream-Stream Join
Overview
Scala Example Code
Structured Streaming Status Operation
Overview
Scala Sample Code
Concurrent Access from Spark to HBase in Two Clusters
Overview
Scala Sample Code
Synchronizing HBase Data from Spark to CarbonData
Overview
Java Example Code
Using Spark to Perform Basic Hudi Operations
Overview
Java Example Code
Scala Example Code
Python Example Code
Compiling User-defined Configuration Items for Hudi
HoodieDeltaStreamer
User-defined Partitioner
Commissioning the Application
Commissioning Applications on Windows
Spark Access Configuration on Windows Using EIPs
Compiling and Running Applications
Viewing Debugging Results
Commissioning an Application in Linux
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java
Scala
Python
REST API
Common CLIs
JDBCServer Interface
Structured Streaming Functions and Reliability
FAQ
How to Add a User-Defined Library
How to Automatically Load Jars Packages?
Why the "Class Does not Exist" Error Is Reported While the SparkStresmingKafka Project Is Running?
Privilege Control Mechanism of SparkSQL UDF Feature
Why Does Kafka Fail to Receive the Data Written Back by Spark Streaming?
Why a Spark Core Application Is Suspended Instead of Being Exited When Driver Memory Is Insufficient to Store Collected Intensive Data?
Why the Name of the Spark Application Submitted in Yarn-Cluster Mode Does not Take Effect?
How to Perform Remote Debugging Using IDEA?
How to Submit the Spark Application Using Java Commands?
A Message Stating "Problem performing GSS wrap" Is Displayed When IBM JDK Is Used
Application Fails When ApplicationManager Is Terminated During Data Processing in the Cluster Mode of Structured Streaming
Restrictions on Restoring the Spark Application from the checkpoint
Support for Third-party JAR Packages on x86 and TaiShan Platforms
What Should I Do If a Large Number of Directories Whose Names Start with blockmgr- or spark- Exist in the /tmp Directory on the Client Installation Node?
Error Code 139 Reported When Python Pipeline Runs in the ARM Environment
What Should I Do If the Structured Streaming Task Submission Way Is Changed?
Common JAR File Conflicts
Spark2x Development Guide (Normal Mode)
Overview
Application Development Overview
Basic Concepts
Development Process
Preparing the Environment
Development and Operating Environment
Configuring and Importing Sample Projects
Creating a New Project (Optional)
Configuring the Python3 Sample Project
Developing the Project
Spark Core Project
Overview
Java Example Code
Scala Sample Code
Python Example Code
Spark SQL Project
Overview
Java Sample Code
Scala Sample Code
Python Sample Code
Accessing the Spark SQL Through JDBC
Overview
Java Example Code
Scala Example Code
Spark on HBase
Performing Operations on Data in Avro Format
Performing Operations on the HBase Data Source
Using the BulkPut Interface
Using the BulkGet Interface
Using the BulkDelete Interface
Using the BulkLoad Interface
Using the foreachPartition Interface
Distributedly Scanning HBase Tables
Using the mapPartition Interface
Writing Data to HBase Tables In Batches Using SparkStreaming
Reading Data from HBase and Writing It Back to HBase
Overview
Java Example Code
Scala Example Code
Python Example Code
Reading Data from Hive and Writing It to HBase
Overview
Java Example Code
Scala Example Code
Python Example Code
Streaming Connecting to Kafka0-10
Overview
Java Example Code
Scala Example Code
Structured Streaming Project
Overview
Java Sample Code
Scala Sample Code
Python Sample Code
Structured Streaming Stream-Stream Join
Overview
Scala Example Code
Structured Streaming Status Operation
Overview
Scala Sample Code
Synchronizing HBase Data from Spark to CarbonData
Overview
Java Example Code
Using Spark to Perform Basic Hudi Operations
Overview
Java Example Code
Scala Example Code
Python Sample Code
Compiling User-defined Configuration Items for Hudi
HoodieDeltaStreamer
User-defined Partitioner
Commissioning the Application
Commissioning Applications on Windows
Spark Access Configuration on Windows Using EIPs
Compiling and Running Applications
Viewing Debugging Results
Commissioning an Application in Linux
Compiling and Running the Application
Checking the Commissioning Result
More Information
Common APIs
Java
Scala
Python
Common CLIs
JDBCServer Interface
Structured Streaming Functions and Reliability
FAQ
How to Add a User-Defined Library
How to Automatically Load Jars Packages?
Why the "Class Does not Exist" Error Is Reported While the SparkStreamingKafka Project Is Running?
Why Does Kafka Fail to Receive the Data Written Back by Spark Streaming?
Why a Spark Core Application Is Suspended Instead of Being Exited When Driver Memory Is Insufficient to Store Collected Intensive Data?
Why the Name of the Spark Application Submitted in Yarn-Cluster Mode Does not Take Effect?
How to Perform Remote Debugging Using IDEA?
How to Submit the Spark Application Using Java Commands?
A Message Stating "Problem performing GSS wrap" Is Displayed When IBM JDK Is Used
Application Fails When ApplicationManager Is Terminated During Data Processing in the Cluster Mode of Structured Streaming
Restrictions on Restoring the Spark Application from the checkpoint
Support for Third-party JAR Packages on x86 and TaiShan Platforms
What Should I Do If a Large Number of Directories Whose Names Start with blockmgr- or spark- Exist in the /tmp Directory on the Client Installation Node?
Error Code 139 Reported When Python Pipeline Runs in the ARM Environment
What Should I Do If the Structured Streaming Task Submission Way Is Changed?
Common JAR File Conflicts
Storm Development Guide (Security Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Environment Preparation Overview
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Creating a Spout
Creating a Bolt
Creating a Topology
Running an Application
Packaging IntelliJ IDEA Code
Packaging Services
Overview
Packaging Services on a Linux OS
Packaging Services on a Windows OS
Submitting a Topology
Submitting a Topology When a Client Is Installed on a Linux OS
Submitting a Topology When No Client Is Installed on a Linux OS
Submitting a Topology in IntelliJ IDEA Remotely
Viewing Results
More Information
Storm-Kafka Development Guideline
Storm-JDBC Development Guideline
Storm-HDFS Development Guideline
Storm-HBase Development Guideline
Flux Development Guideline
External Interfaces
FAQ
How Do I Use IDEA to Remotely Debug Services?
How Do I Handle the Error "Command line is too long" Reported When Main Is Executed for Remote Topology Submission in IntelliJ IDEA
Storm Development Guide (Normal Mode)
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Environment Preparation Overview
Preparing for Development and Operating Environment
Configuring and Importing Sample Projects
Developing an Application
Typical Scenario Description
Development Idea
Example Code Description
Creating a Spout
Creating a Bolt
Creating a Topology
Running an Application
Packaging IntelliJ IDEA Code
Packaging Services
Overview
Packaging Services on a Linux OS
Packaging Services on a Windows OS
Submitting a Topology
Submitting a Topology When a Client Is Installed on a Linux OS
Submitting a Topology When No Client Is Installed on a Linux OS
Submitting a Topology in IntelliJ IDEA Remotely
Viewing Results
More Information
Storm-Kafka Development Guideline
Storm-JDBC Development Guideline
Storm-HDFS Development Guideline
Storm-HBase Development Guideline
Flux Development Guideline
External Interfaces
FAQ
How Do I Use IDEA to Remotely Debug Services?
How Do I Set the Offset Correctly When Using the Old Plug-in storm-kafka?
How Do I Handle the Error "Command line is too long" Reported When Main Is Executed for Remote Topology Submission in IntelliJ IDEA
YARN Development Guide (Security Mode)
Overview
Interfaces
Command
Java API
REST API
REST APIs of Superior Scheduler
YARN Development Guide (Normal Mode)
Overview
Interfaces
Command
Java API
REST API
REST APIs of Superior Scheduler
Development Specifications
Development Environment Construction
Rules
Security Authentication
Rules
Suggestions
ClickHouse
Rules
Suggestions
Flink
Applicable Scenarios
Rules
Suggestions
GraphBase
Rules
Recommendations
HBase
Application Scenarios
Rules
Suggestions
Examples
Appendix
HDFS
Application Scenarios
Rules
Suggestions
Hive
Application Scenarios
Rules
Suggestions
Examples
Kafka
Application Scenarios
Rules
Suggestions
MapReduce
Application Scenarios
Rules
Suggestions
Examples
Oozie
Application Scenarios
Rules
Suggestions
Spark2x
Application Scenarios
Rules
Suggestions
Yarn
Application Scenarios
Rules
Manager Management Development Guide
Overview
Application Development Overview
Common Concepts
Development Process
Environment Preparation
Preparing Development and Running Environments
Configuring and Importing a Sample Project
Developing an Application
Typical Scenario Description
Development Guideline
Example Code Description
Login Authentication
Adding Users
Searching for Users
Modifying Users
Deleting Users
Exporting a User List
Application Commissioning
Commissioning an Application in the Windows OS
Compiling and Running an Application
Viewing Windows Commissioning Results
More Information
External Interfaces
Java API
FAQ
JDK1.6 Fails to Connect to the FusionInsight System Using JDK1.8
An Operation Fails and "authorize failed" Is Displayed in Logs
An Operation Fails and "log4j:WARN No appenders could be found for logger(basicAuth.Main)" Is Displayed in Logs
An Operation Fails and "illegal character in path at index 57" Is Displayed in Logs
Run the curl Command to Access REST APIs
Developer Guide (Normal_Earlier Than 3.x)
Before You Start
Method of Building an MRS Sample Project
HBase Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environments
Preparing a Development User
Configuring and Importing a Sample Project
Application Development
Development Guidelines in Typical Scenarios
Creating the Configuration Object
Creating the Connection Object
Creating a Table
Deleting a Table
Modifying a Table
Inserting Data
Deleting Data
Reading Data Using Get
Reading Data Using Scan
Using a Filter
Adding a Secondary Index
Enabling/Disabling a Secondary Index
Querying a List of Secondary Indexes
Using a Secondary Index to Read Data
Deleting a Secondary Index
Writing Data into a MOB Table
Reading MOB Data
Multi-Point Region Splitting
ACL Security Configuration
Application Commissioning
Commissioning Applications on Windows
Compiling and Running Applications
Viewing Commissioning Results
Commissioning Applications on Linux
Compiling and Running an Application When a Client Is Installed
Compiling and Running an Application When No Client Is Installed
Viewing Commissioning Results
Commissioning HBase Phoenix Sample Code
Commissioning HBase Python Sample Code
More Information
SQL Query
HBase File Storage Configuration
HFS Java APIs
HBase APIs
Shell
Java APIs
Phoenix
REST
FAQs
HBase Application Running Exception
What Are Application Scenarios of the BulkLoad and Put Data Loading Methods?
Development Specifications
Rules
Suggestions
Examples
Appendix
Hive Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing an Environment
Preparing a Development User
Preparing a JDBC Client Development Environment
Preparing an HCatalog Development Environment
Application Development
Typical Application Scenario
Creating a Table
Loading Data
Querying Data
User-defined Functions
Sample Program Guide
Application Commissioning
Commissioning Applications on Windows
Running the JDBC Client and Viewing Results
Commissioning Applications on Linux
Running the JDBC Client and Viewing Results
Running HCatalog and Viewing Results
Hive APIs
JDBC
HiveQL
WebHCat
Development Specifications
Rules
Suggestions
Examples
MapReduce Application Development
Overview
MapReduce Introduction
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing a Development User
Preparing the Eclipse and JDK
Preparing a Linux Client Operating Environment
Obtaining and Importing a Sample Project
Preparing Kerberos Authentication
Application Development
MapReduce Statistics Sample Applications
Sample Applications About Multi-Component Access from MapReduce
Application Commissioning
Compiling and Running Applications
Viewing Commissioning Results
MapReduce APIs
Java API
FAQs
What Should I Do if the Client Has No Response after a MapReduce Job is Submitted?
Development Specifications
Rules
Suggestions
Examples
HDFS Application Development
Overview
Introduction to HDFS
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing a Development User
Preparing the Eclipse and JDK
Preparing a Linux Client Operating Environment
Obtaining and Importing a Sample Project
Application Development
Scenario Description and Development Guidelines
Initializing HDFS
Writing Data to a File
Appending File Content
Reading a File
Deleting a File
Colocation
Setting Storage Policies
Accessing OBS
Application Commissioning
Commissioning Applications on Linux
Compiling and Running an Application When a Client Is Installed
Viewing Commissioning Results
HDFS APIs
Java APIs
C APIs
HTTP REST APIs
Shell Commands
Development Specifications
Rules
Suggestions
Spark Application Development
Overview
Spark Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Environment Overview
Preparing a Development User
Preparing a Java Development Environment
Preparing a Scala Development Environment
Preparing a Python Development Environment
Preparing an Operating Environment
Downloading and Importing a Sample Project
(Optional) Creating a Project
Preparing the Authentication Mechanism Code
Application Development
Spark Core Application
Scenario Description
Java Sample Code
Scala Sample Code
Python Sample Code
Spark SQL Application
Scenario Description
Java Sample Code
Scala Sample Code
Spark Streaming Application
Scenario Description
Java Sample Code
Scala Sample Code
Application for Accessing Spark SQL Through JDBC
Scenario Description
Java Sample Code
Scala Sample Code
Python Sample Code
Spark on HBase Application
Scenario Description
Java Sample Code
Scala Sample Code
Reading Data from HBase and Writing Data Back to HBase
Scenario Description
Java Sample Code
Scala Sample Code
Reading Data from Hive and Writing Data to HBase
Scenario Description
Java Sample Code
Scala Sample Code
Using Streaming to Read Data from Kafka and Write Data to HBase
Scenario Description
Java Sample Code
Scala Sample Code
Application for Connecting Spark Streaming to Kafka0-10
Scenario Description
Java Sample Code
Scala Sample Code
Structured Streaming Application
Scenario Description
Java Sample Code
Scala Sample Code
Application Commissioning
Compiling and Running Applications
Viewing Commissioning Results
Application Tuning
Spark Core Tuning
Data Serialization
Memory Configuration Optimization
Setting a Degree of Parallelism
Using Broadcast Variables
Using the External Shuffle Service to Improve Performance
Configuring Dynamic Resource Scheduling in Yarn Mode
Configuring Process Parameters
Designing a Directed Acyclic Graph (DAG)
Experience Summary
SQL and DataFrame Tuning
Optimizing the Spark SQL Join Operation
Optimizing INSERT...SELECT Operation
Spark Streaming Tuning
Spark CBO Tuning
Spark APIs
Java
Scala
Python
REST API
ThriftServer APIs
Common Commands
FAQs
How Do I Add a Dependency Package with Customized Codes?
How Do I Handle the Dependency Package That Is Automatically Loaded?
Why the "Class Does not Exist" Error Is Reported While the SparkStreamingKafka Project Is Running?
Why a Spark Core Application Is Suspended Instead of Being Exited When Driver Memory Is Insufficient to Store Collected Intensive Data?
Why the Name of the Spark Application Submitted in Yarn-Cluster Mode Does not Take Effect?
How Do I Submit the Spark Application Using Java Commands?
How Does the Permission Control Mechanism Work for the UDF Function in SparkSQL?
Why Does Kafka Fail to Receive the Data Written Back by Spark Streaming?
How Do I Perform Remote Debugging Using IDEA?
A Message Stating "Problem performing GSS wrap" Is Displayed When IBM JDK Is Used
Why Does the ApplicationManager Fail to Be Terminated When Data Is Being Processed in the Structured Streaming Cluster Mode?
What Should I Do If FileNotFoundException Occurs When spark-submit Is Used to Submit a Job in Spark on Yarn Client Mode?
What Should I Do If the "had a not serializable result" Error Is Reported When a Spark Task Reads HBase Data?
How Do I Connect to Hive and HDFS of an MRS Cluster when the Spark Program Is Running on a Local Host?
Development Specifications
Rules
Suggestions
Storm Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Preparing the Linux Client
Preparing the Windows Development Environment
Development Environment Introduction
Preparing the Eclipse and JDK
Configuring and Importing a Project
Application Development
Typical Application Scenario
Creating a Spout
Creating a Bolt
Creating a Topology
Running an Application
Generating the JAR Package of the Sample Code
Submitting a Topology When a Client Is Installed on a Linux OS
Viewing Results
More Information
Storm-Kafka Development Guideline
Storm-JDBC Development Guideline
Storm-HDFS Development Guideline
Storm-OBS Development Guideline
Storm-HBase Development Guideline
Flux Development Guideline
External APIs
Development Specifications
Rules
Suggestions
Kafka Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing the Maven and JDK
Importing a Sample Project
Preparing for Security Authentication
Application Development
Typical Application Scenario
Old Producer API Usage Sample
Old Consumer API Usage Sample
Producer API Usage Sample
Consumer API Usage Sample
Multi-Thread Producer API Usage Sample
Multi-Thread Consumer API Usage Sample
SimpleConsumer API Usage Sample
Description of the Sample Project Configuration File
Application Commissioning
Commissioning Applications on Linux
Kafka APIs
Shell
Java APIs
Security APIs
FAQs
How Can I Address the Issue That Running the Producer.java Sample to Obtain Metadata Fails and "ERROR fetching topic metadata for topics..." Is Displayed?
Development Specifications
Rules
Suggestions
Presto Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing an Environment
Preparing a Development User
Preparing a JDBC Client Development Environment
Preparing an HCatalog Development Environment
Application Development
Typical Application Scenario
Sample Code Description
Application Commissioning
Commissioning Applications on Windows
Commissioning Applications on Linux
Presto APIs
FAQs
No Certificate Is Available When PrestoJDBCExample Is Run on a Node Outside the Cluster
When a Node Outside a Cluster Is Connected to a Cluster with Kerberos Authentication Enabled, HTTP Cannot Find the Corresponding Record in the Kerberos Database
OpenTSDB Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing an Environment
Preparing a Development User
Configuring and Importing a Sample Project
Application Development
Development Guidelines in Typical Scenarios
Configuring Parameters
Writing Data
Querying Data
Deleting Data
Application Commissioning
Commissioning Applications on Windows
Compiling and Running Applications
Viewing Commissioning Results
Commissioning Applications on Linux
Compiling and Running Applications
Viewing Commissioning Results
OpenTSDB APIs
CLI Tools
HTTP APIs
Flink Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Preparing Development and Operating Environments
Preparing a Development User
Installing a Client
Configuring and Importing a Sample Project
(Optional) Creating a Project
Preparing for Security Authentication
Application Development
DataStream Application
Scenario Description
Java Sample Code
Scala Sample Code
Application for Producing and Consuming Data in Kafka
Scenario Description
Java Sample Code
Scala Sample Code
Asynchronous Checkpoint Mechanism Application
Scenario Description
Java Sample Code
Scala Sample Code
Stream SQL Join Application
Scenario Description
Java Sample Code
Application Commissioning
Compiling and Running Applications
Viewing Commissioning Results
Performance Tuning
More Information
Savepoints CLI
Flink Client CLI
FAQs
Savepoints FAQs
What Should I Do If Running a Checkpoint Is Slow When RocksDBStateBackend is Set for the Checkpoint and a Large Amount of Data Exists?
What Should I Do If yarn-session Failed to Be Started When blob.storage.directory Is Set to /home?
Why Does Non-static KafkaPartitioner Class Object Fail to Construct FlinkKafkaProducer010?
When I Use the Newly-Created Flink User to Submit Tasks, Why Does the Task Submission Fail and a Message Indicating Insufficient Permission on ZooKeeper Directory Is Displayed?
Why Can't I Access the Flink Web Page?
Impala Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing an Environment
Preparing a Development User
Preparing a JDBC Client Development Environment
Application Development
Typical Application Scenario
Creating a Table
Loading Data
Querying Data
User-defined Functions
Sample Program Guide
Application Commissioning
Commissioning Applications on Windows
Running the JDBC Client and Viewing Results
Commissioning Applications on Linux
Running the JDBC Client and Viewing Results
Impala APIs
JDBC
Impala SQL
Development Specifications
Rules
Suggestions
Examples
Alluxio Application Development
Overview
Application Development Overview
Basic Concepts
Application Development Process
Environment Preparation
Development Environment Introduction
Preparing an Environment
Obtaining and Importing a Sample Project
Application Development
Scenario Description
Initializing Alluxio
Writing Data to a File
Reading a File
Application Commissioning
Alluxio API
Appendix
Logging In to MRS Manager
Downloading an MRS Client
Change History
API Reference
Before You Start
Overview
API Calling
Endpoints
Constraints
Concepts
Selecting an API Type
API Overview
Calling APIs
Making an API Request
Authentication
Response
Application Cases
Creating an MRS Cluster
Scaling Out a Cluster
Scaling in a Cluster
Creating a Job
Terminating a Job
Terminating a Cluster
API V2
Cluster Management APIs
Creating a Cluster
Job Object APIs
Adding and Executing a Job
Querying Information About a Job
Querying a List of Jobs
Terminating a Job
Deleting Jobs in Batches
Obtaining SQL Results
SQL APIs
Submitting a SQL Statement
Querying SQL Results
Canceling a SQL Execution Task
Cluster HDFS File API
Obtaining the List of Files from a Specified Directory
Agency Management
Querying the Mapping Between a User (Group) and an IAM Agency
Updating the Mapping Between a User (Group) and an IAM Agency
API V1.1
Cluster Management APIs
Creating a Cluster and Executing a Job
Resizing a Cluster
Querying a Cluster List
Terminating a Cluster
Querying Cluster Details
Querying a Host List
Auto Scaling APIs
Configuring an Auto Scaling Rule
Tag Management APIs
Adding Tags to a Specified Cluster
Deleting Tags from a Specified Cluster
Querying Tags of a Specified Cluster
Adding or Deleting Cluster Tags in Batches
Querying All Tags
Querying a List of Clusters with Specified Tags
Out-of-Date APIs
Job API Management (Deprecated)
Adding and Executing a Job (Deprecated)
Querying the exe Object List of Jobs (Deprecated)
Querying exe Object Details (Deprecated)
Deleting a Job Execution Object (Deprecated)
Permissions Policies and Supported Actions
Introduction
Appendix
ECS Specifications Used by MRS
BMS Specifications Used by MRS
Status Codes
Error Codes
Obtaining a Project ID
Obtaining an Account ID
Obtaining the MRS Cluster Information
Roles and Components Supported by MRS
SDK Reference
SDK Overview
FAQs
MRS Overview
What Is MRS Used For?
What Types of Distributed Storage Does MRS Support?
How Do I Create an MRS Cluster Using a Custom Security Group?
How Do I Use MRS?
Region and AZ
Can I Configure a Phoenix Connection Pool?
Does MRS Support Change of the Network Segment?
Can I Downgrade the Specifications of an MRS Cluster Node?
What Is the Relationship Between Hive and Other Components?
Does an MRS Cluster Support Hive on Spark?
What Are the Differences Between Hive Versions?
Which MRS Cluster Version Supports Hive Connection and User Synchronization?
What Are the Differences Between OBS and HDFS in Data Storage?
How Do I Obtain the Hadoop Pressure Test Tool?
What Is the Relationship Between Impala and Other Components?
Statement About the Public IP Addresses in the Open-Source Third-Party SDK Integrated by MRS
What Is the Relationship Between Kudu and HBase?
Does MRS Support Running Hive on Kudu?
What Are the Solutions for Processing 1 Billion Data Records?
Can I Change the IP Address of DBService?
Can I Clear MRS sudo Logs?
Is the Storm Log Also Limited to 20 GB in MRS Cluster 2.1.0?
What Is Spark ThriftServer?
What Access Protocols Are Supported by Kafka?
What If Error 408 Is Reported When an MRS Node Accesses OBS?
What Is the Compression Ratio of zstd?
Why Are the HDFS, YARN, and MapReduce Components Unavailable When an MRS Cluster Is Bought?
Why Is the ZooKeeper Component Unavailable When an MRS Cluster Is Bought?
Which Python Versions Are Supported by Spark Tasks in an MRS 3.1.0 Cluster?
How Do I Enable Different Service Programs to Use Different YARN Queues?
Differences and Relationships Between the MRS Management Console and Cluster Manager
Billing
How Is MRS Billed?
Why Is the Price Not Displayed During MRS Cluster Creation?
How Is Auto Scaling Billed for an MRS Cluster?
How Is MRS Renewed?
How Is the Task Node in an MRS Cluster Billed?
Why Does My Unsubscription from ECS Fail After I Unsubscribe from MRS?
Account and Password
What Is the Account for Logging In to Manager?
How Do I Query and Change the Password Validity Period of an Account?
Accounts and Permissions
Does an MRS Cluster Support Access Permission Control If Kerberos Authentication Is Not Enabled?
How Do I Assign Tenant Management Permission to a New Account?
How Do I Customize an MRS Policy?
Why Is the Manage User Function Unavailable on the System Page on MRS Manager?
Does Hue Support Account Permission Configuration?
Why Cannot I Submit Jobs on the Console After My IAM Account Is Assigned with Related Permissions?
How Do I Do If an Error Indicating Invalid Authentication Is Reported When I Submit an MRS Cluster Purchase Order?
Client Usage
How Do I Configure Environment Variables and Run Commands on a Component Client?
How Do I Disable ZooKeeper SASL Authentication?
An Error Is Reported When the kinit Command Is Executed on a Client Node Outside an MRS Cluster
Web Page Access
How Do I Change the Session Timeout Duration for an Open Source Component Web UI?
Why Cannot I Refresh the Dynamic Resource Plan Page on MRS Tenant Tab?
What Do I Do If the Kafka Topic Monitoring Tab Is Unavailable on Manager?
How Do I Do If an Error Is Reported or Some Functions Are Unavailable When I Access the Web UIs of HDFS, Hue, YARN, and Flink?
How Do I Access HDFS of the Cluster in Security Mode on Windows Using EIPs?
How Do I Access HDFS of the Cluster in Normal Mode on Windows Using EIPs?
How Do I Access Hive of the Cluster in Security Mode on Windows Using EIPs?
How Do I Access Hive of the Cluster in Normal Mode on Windows Using EIPs?
How Do I Access Kafka of the Cluster in Security Mode on Windows Using EIPs?
How Do I Access Kafka of the Cluster in Normal Mode on Windows Using EIPs?
How Do I Access Spark of the Cluster in Security Mode on Windows Using EIPs?
How Do I Access Spark of the Cluster in Normal Mode on Windows Using EIPs?
How Do I Access HBase of the Cluster in Security Mode on Windows Using EIPs?
How Do I Access HBase of the Cluster in Normal Mode on Windows Using EIPs?
How Do I Switch the Mode of Accessing MRS Manager?
Alarm Monitoring
In an MRS Streaming Cluster, Can the Kafka Topic Monitoring Function Send Alarm Notifications?
Where Can I View the Running Resource Queues When the Alarm "ALM-18022 Insufficient Yarn Queue Resources" Is Reported?
How Do I Understand the Multi-Level Chart Statistics in the HBase Operation Requests Metric?
Performance Tuning
Does an MRS Cluster Support System Reinstallation?
Can I Change the OS of an MRS Cluster?
How Do I Improve the Resource Utilization of Core Nodes in a Cluster?
How Do I Stop the Firewall Service?
Job Development
How Do I Get My Data into OBS or HDFS?
What Types of Spark Jobs Can Be Submitted in a Cluster?
Can I Run Multiple Spark Tasks at the Same Time After the Minimum Tenant Resources of an MRS Cluster Is Changed to 0?
How Do I Do If Job Parameters Separated By Spaces Cannot Be Identified?
What Are the Differences Between the Client Mode and Cluster Mode of Spark Jobs?
How Do I View MRS Job Logs?
How Do I Do If the Message "The current user does not exist on MRS Manager. Grant the user sufficient permissions on IAM and then perform IAM user synchronization on the Dashboard tab page." Is Displayed?
LauncherJob Execution Fails and the Error Message "jobPropertiesMap is null." Is Displayed
How Do I Do If the Flink Job Status on the MRS Console Is Inconsistent with That on Yarn?
How Do I Do If a SparkStreaming Job Fails After Being Executed Dozens of Hours and the OBS Access 403 Error Is Reported?
How Do I Do If an Alarm Is Reported Indicating that the Memory Is Insufficient When I Execute a SQL Statement on the ClickHouse Client?
How Do I Do If Error Message "java.io.IOException: Connection reset by peer" Is Displayed During the Execution of a Spark Job?
How Do I Do If Error Message "requestId=4971883851071737250" Is Displayed When a Spark Job Accesses OBS?
How Do I Do If the Spark Job Error "UnknownScannerExeception" Is Reported?
Why Does DataArtsStudio Occasionally Fail to Schedule Spark Jobs, and Why Does Rescheduling Also Fail?
How Do I Do If a Flink Job Fails to Execute and the Error Message "java.lang.NoSuchFieldError: SECURITY_SSL_ENCRYPT_ENABLED" Is Displayed?
Why Can't a Submitted Yarn Job Be Viewed on the Web UI?
How Do I Modify the HDFS NameSpace (fs.defaultFS) of an Existing Cluster?
How Do I Do If the launcher-job Queue Is Stopped by YARN due to Insufficient Heap Size When I Submit a Flink Job on the Management Plane?
How Do I Do If the Error Message "slot request timeout" Is Displayed When I Submit a Flink Job?
Data Import and Export of DistCP Jobs
How Do I View SQL Statements for Hive Jobs on the YARN Web UI?
Cluster Upgrade/Patching
Can I Upgrade an MRS Cluster?
Can I Change the MRS Cluster Version?
Peripheral Ecosystem Interconnection
Can MRS Be Used to Perform Read and Write Operations on DLI Tables?
Does OBS Support the ListObjectsV2 Protocol?
Can MRS Data Be Stored in a Parallel File System Provided by OBS?
Can the Crawler Service Be Deployed in MRS?
Do DWS and MRS Support Secure Deletion (Preventing Retrieval After Deletion)?
Why Is the Kerberos-Authenticated MRS Cluster Not Found When a Connection Is Set Up from DLF?
How Do I Use PySpark on an ECS to Connect to an MRS Spark Cluster with Kerberos Authentication Enabled, on the Intranet?
Why Do Mapped Fields Not Exist in the Database After HBase Synchronizes Data to CSS?
Can Flume Read Data from OBS?
Can MRS Connect to an External KDC?
How Do I Solve the Jetty Version Compatibility Issue in Open-Source Kylin 3.x and MRS 1.9.3 Interconnection?
What If Data Fails to Be Exported from MRS to an OBS Encrypted Bucket?
How Do I Install HSS on MRS Cluster Nodes?
Cluster Access
Can I Switch Between the Two Login Modes of MRS?
How Can I Obtain the IP Address and Port Number of a ZooKeeper Instance?
How Do I Access an MRS Cluster from a Node Outside the Cluster?
Big Data Service Development
Can MRS Run Multiple Flume Tasks at a Time?
How Do I Change FlumeClient Logs to Standard Logs?
Where Are the JAR Files and Environment Variables of Hadoop Stored?
What Compression Algorithms Does HBase Support?
Can MRS Write Data to HBase Through the HBase External Table of Hive?
How Do I View HBase Logs?
How Do I Set the TTL for an HBase Table?
How Do I Connect to HBase of MRS Through HappyBase?
How Do I Balance HDFS Data?
How Do I Change the Number of HDFS Replicas?
What Is the Port for Accessing HDFS Using Python?
How Do I Modify the HDFS Active/Standby Switchover Class?
What Is the Recommended Number Type of DynamoDB in Hive Tables?
Can the Hive Driver Be Interconnected with DBCP2?
How Do I View the Hive Table Created by Another User?
Where Can I Download the Dependency Package (com.huawei.gaussc10) in the Hive Sample Project?
Can I Export the Query Result of Hive Data?
How Do I Do If an Error Occurs When Hive Runs the beeline -e Command to Execute Multiple Statements?
How Do I Do If a "hivesql/hivescript" Job Fails to Submit After Hive Is Added?
What If an Excel File Downloaded on Hue Failed to Open?
How Do I Do If Sessions Are Not Released After Hue Connects to HiveServer and the Error Message "over max user connections" Is Displayed?
How Do I Reset Kafka Data?
How Do I Obtain the Client Version of MRS Kafka?
What Access Protocols Are Supported by Kafka?
How Do I Do If Error Message "Not Authorized to access group xxx" Is Displayed When a Kafka Topic Is Consumed?
What Compression Algorithms Does Kudu Support?
How Do I View Kudu Logs?
How Do I Handle the Kudu Service Exceptions Generated During Cluster Creation?
What Are the Differences Between Sample Project Building and Application Development? Is Python Code Supported?
Does OpenTSDB Support Python APIs?
How Do I Configure Other Data Sources on Presto?
How Do I Update the Ranger Certificate?
How Do I Connect to Spark Shell from MRS?
How Do I Connect to Spark Beeline from MRS?
Where Are the Execution Logs of Spark Jobs Stored?
How Do I Specify a Log Path When Submitting a Task in an MRS Storm Cluster?
How Do I Check Whether the ResourceManager Configuration of Yarn Is Correct?
How Do I Modify the allow_drop_detached Parameter of ClickHouse?
What Do I Do If an Alarm Indicating Insufficient Memory Is Reported During Spark Task Execution?
What Do I Do If ClickHouse Consumes Excessive CPU Resources?
How Do I Obtain a Spark JAR File?
Why Is an Alarm Generated When the NameNode Process Is Not Restarted After the hdfs-site.xml File Is Modified?
It Takes a Long Time for Spark SQL to Access Hive Partitioned Tables Before Job Startup
API
How Do I Configure the node_id Parameter When Using the API for Adjusting Cluster Nodes?
Cluster Management
How Do I View All Clusters?
How Do I View Log Information?
How Do I View Cluster Configuration Information?
How Do I Add Services to an MRS Cluster?
How Do I Install Kafka and Flume in an MRS Cluster?
How Do I Stop an MRS Cluster?
Do I Need to Shut Down a Master Node Before Upgrading Its Specifications?
Can I Expand Data Disk Capacity for MRS?
Can I Add Components to an Existing Cluster?
Can I Delete Components Installed in an MRS Cluster?
Can I Change MRS Cluster Nodes on the MRS Console?
How Do I Shield Cluster Alarm/Event Notifications?
Why Is the Resource Pool Memory Displayed in the MRS Cluster Smaller Than the Actual Cluster Memory?
How Do I Configure the Knox Memory?
What Is the Python Version Installed for an MRS Cluster?
How Do I View the Configuration File Directory of Each Component?
How Do I Upload a Local File to a Node Inside a Cluster?
What Do I Do If the Time on MRS Nodes Is Incorrect?
How Do I Query the Startup Time of an MRS Node?
What Do I Do If Trust Relationships Between Nodes Are Abnormal?
How Do I Adjust the Memory Size of the manager-executor Process?
Can I Modify a Master Node in an Existing MRS Cluster?
Kerberos Usage
How Do I Change the Kerberos Authentication Status of a Created MRS Cluster?
What Are the Ports of the Kerberos Authentication Service?
How Do I Deploy the Kerberos Service in a Running Cluster?
How Do I Access Hive in a Cluster with Kerberos Authentication Enabled?
How Do I Access Presto in a Cluster with Kerberos Authentication Enabled?
How Do I Access Spark in a Cluster with Kerberos Authentication Enabled?
How Do I Prevent Kerberos Authentication Expiration?
Metadata Management
Where Can I View Hive Metadata?
Troubleshooting
Account Passwords
Resetting or Changing the Password of Manager User admin
Failed to Download Authentication Credentials When the Username Is Too Long
Account Permissions
When a User Calls the API Using the AK/SK to Obtain MRS Cluster Hosts, Message "User do not have right to access cluster" Is Displayed
Failed to View Cluster Details
Common Exceptions in Logging In to the Cluster Manager
Failed to Access the Manager Page of MRS Cluster
Accessing the Web Pages
Error "502 Bad Gateway" Is Reported During the Access to MRS Manager
An Error Message Is Displayed Indicating That the VPC Request Is Incorrect During Access
Error 503 Is Reported When Manager Is Accessed Through Direct Connect
Error Message "You have no right to access this page." Is Displayed When Users log in to the Cluster Page
Failed to Log In to the Manager After Timeout
Failed to Log In to MRS Manager After the Python Upgrade
Failed to Log In to MRS Manager After Changing the Domain Name
Manager Page Is Blank After a Successful Login
Cluster Login Fails Because Native Kerberos Is Installed on Cluster Nodes
Using Google Chrome to Access MRS Manager on macOS
How Do I Unlock a User Who Logs in to Manager?
Why Does the Manager Page Freeze?
Common Exceptions in Accessing the MRS Web UI
What Do I Do If an Error Is Reported or Some Functions Are Unavailable When I Access the Web UIs of HDFS, Hue, YARN, HetuEngine, and Flink?
Error 500 Is Reported When a User Accesses the Component Web UI
[HBase WebUI] Users Cannot Switch from the HBase WebUI to the RegionServer WebUI
[HDFS WebUI] When Users Access the HDFS WebUI, an Error Message Is Displayed Indicating That the Number of Redirections Is Too Large
[HDFS WebUI] Failed to Access the HDFS WebUI Using Internet Explorer
[Hue Web UI] Failed to Access the Hue Web UI
[Hue WebUI] The Error "Proxy Error" Is Reported When a User Accesses the Hue WebUI
[Hue WebUI] Why Can't the Hue Native Page Be Properly Displayed If the Hive Service Is Not Installed in a Cluster?
Hue (Active) Cannot Open Web Pages
[Ranger WebUI] Why Can't a New User Log In to Ranger After Changing the Password?
[Tez WebUI] Error 404 Is Reported When Users Access the Tez WebUI
[Spark WebUI] Why Can't I Switch from the Yarn Web UI to the Spark Web UI?
[Spark WebUI] What Can I Do If an Error Occurs When I Access the Application Page Because the Application Cached by HistoryServer Is Recycled?
[Spark WebUI] Why Is the Native Page of an Application in Spark2x JobHistory Displayed Incorrectly?
[Spark WebUI] Failed to Access the Spark2x WebUI Using Internet Explorer
[Yarn Web UI] Failed to Access the Yarn Web UI
APIs
Failed to Call an API to Create a Cluster
Cluster Management
Failed to Reduce Task Nodes
OBS Certificate in a Cluster Expired
Adding a New Disk to an MRS Cluster
Replacing a Disk in an MRS Cluster (Applicable to 2.x and Earlier)
Replacing a Disk in an MRS Cluster (Applicable to 3.x)
MRS Backup Failure
Inconsistency Between df and du Command Output on the Core Node
Disassociating a Subnet from the ACL Network
MRS Becomes Abnormal After hostname Modification
DataNode Restarts Unexpectedly
Failed to Configure Cross-Cluster Mutual Trust
Network Is Unreachable When Using pip3 to Install the Python Package in an MRS Cluster
Connecting the Open-Source confluent-kafka-go to the Security Cluster of MRS
Failed to Periodically Back Up an MRS 1.7.2 Cluster
Failed to Download the MRS Cluster Client
An Error Is Reported When a Flink Job Is Submitted in a Cluster with Kerberos Authentication Enabled
Failed to Scale Out an MRS Cluster
Error Occurs When MRS Executes the Insert Command Using Beeline
How Do I Upgrade EulerOS to Fix Vulnerabilities in an MRS Cluster?
Using CDM to Migrate Data to HDFS
Alarms Are Frequently Generated in the MRS Cluster
Memory Usage of the PMS Process Is High
High Memory Usage of the Knox Process
It Takes a Long Time to Access HBase from a Client Installed on a Node Outside the Security Cluster
How Do I Locate a Job Submission Failure?
OS Disk Space Is Insufficient Due to Oversized HBase Log Files
OS Disk Space Is Insufficient Due to Oversized HDFS Log Files
An Exception Occurs During Specifications Upgrade of Master Nodes in an MRS Cluster
Failed to Delete a New Tenant on FusionInsight Manager
MRS Cluster Becomes Unavailable After the VPC Is Changed
Failed to Submit a Job on the MRS Console
Error "symbol BIO_new_dgram_sctp version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 ..." Is Displayed During HA Certificate Generation
Using Alluxio
Error Message "Does not contain a valid host:port authority" Is Reported When Alluxio Is in HA Mode
Using ClickHouse
ClickHouse Fails to Start Due to Incorrect Data in ZooKeeper
An Exception Occurs When ClickHouse Consumes Kafka Data
Using DBService
DBServer Instance Is in Abnormal Status
DBServer Instance Remains in the Restoring State
Default Port 20050 or 20051 Is Occupied
DBServer Instance Is Always in the Restoring State Because of the Incorrect /tmp Directory Permission
DBService Backup Failure
Components Failed to Connect to DBService in Normal State
DBServer Failed to Start
DBService Backup Failed Because the Floating IP Address Is Unreachable
DBService Failed to Start Due to the Loss of the DBService Configuration File
Using Flink
"IllegalConfigurationException: Error while parsing YAML configuration file: "security.kerberos.login.keytab" Is Displayed When a Command Is Executed on an Installed Client
"IllegalConfigurationException: Error while parsing YAML configuration file" Is Displayed When a Command Is Executed After Configurations of the Installed Client Are Changed
The yarn-session.sh Command Fails to Be Executed When the Flink Cluster Is Created
Failed to Create a Cluster by Executing the yarn-session Command When a Different User Is Used
Flink Service Program Fails to Read Files on the NFS Disk
Failed to Customize the Flink Log4j Log Level
Using Flume
Class Cannot Be Found After Flume Submits Jobs to Spark Streaming
Failed to Install a Flume Client
A Flume Client Cannot Connect to the Server
Flume Data Fails to Be Written to the Component
Flume Server Process Fault
Flume Data Collection Is Slow
Failed to Start Flume
Using HBase
Slow Response to HBase Connection
Failed to Authenticate the HBase User
RegionServer Failed to Start Because the Port Is Occupied
HBase Failed to Start Due to Insufficient Node Memory
HBase Service Unavailable Due to Poor HDFS Performance
HBase Failed to Start Due to Inappropriate Parameter Settings
RegionServer Failed to Start Due to Residual Processes
HBase Failed to Start Due to a Quota Set on HDFS
HBase Failed to Start Due to Corrupted Version Files
High CPU Usage Caused by Zero-Loaded RegionServer
HBase Failed to Start with "FileNotFoundException" in RegionServer Logs
The Number of RegionServers Displayed on the Native Page Is Greater Than the Actual Number After HBase Is Started
RegionServer Instance Is in the Restoring State
HBase Failed to Start in a Newly Installed Cluster
HBase Failed to Start Due to the Loss of the ACL Table Directory
HBase Failed to Start After the Cluster Is Powered Off and On
Failed to Import HBase Data Due to Oversized File Blocks
Failed to Load Data to the Index Table After an HBase Table Is Created Using Phoenix
Failed to Run the hbase shell Command on the MRS Cluster Client
Disordered Information Display on the HBase Shell Client Console Due to Printing of the INFO Information
HBase Failed to Start Due to Insufficient RegionServer Memory
Failed to Start HRegionServer on the Node Newly Added to the Cluster
Region in the RIT State for a Long Time Due to HBase File Loss
Using HDFS
All NameNodes Enter the Standby State After the NameNode RPC Port of HDFS Is Changed
An Error Is Reported When the HDFS Client Is Used After the Host Is Connected Using a Public Network IP Address
Failed to Use Python to Remotely Connect to the Port of HDFS
HDFS Capacity Usage Reaches 100%, Causing Unavailable Upper-layer Services Such as HBase and Spark
An Error Is Reported During HDFS and Yarn Startup
HDFS Permission Setting Error
A DataNode of HDFS Is Always in the Decommissioning State
HDFS or NameNode Failed to Start Due to Insufficient Memory
A Large Number of Blocks Are Lost in HDFS Due to the Time Change Using ntpdate
CPU Usage of a DataNode Reaches 100% Occasionally, Causing Node Loss (SSH Connection Is Slow or Fails)
Manually Performing Checkpoints When a NameNode Is Faulty for a Long Time
Common File Read/Write Faults
Maximum Number of File Handles Is Set Too Small, Causing File Read and Write Exceptions
A Client File Fails to Be Closed After Data Writing
File Fails to Be Uploaded to HDFS Due to File Errors
After dfs.blocksize Is Configured and Data Is Put, Block Size Remains Unchanged
Failed to Read Files, and "FileNotFoundException" Is Displayed
Failed to Write Files to HDFS, and "item limit of / is exceeded" Is Displayed
Adjusting the Log Level of the Shell Client
File Read Fails, and "No common protection layer" Is Displayed
Failed to Write Files Because the HDFS Directory Quota Is Insufficient
Balancing Fails, and "Source and target differ in block-size" Is Displayed
A File Fails to Be Queried or Deleted, and the File Can Be Viewed in the Parent Directory (Invisible Characters)
Uneven Data Distribution Due to Non-HDFS Data Residuals
Uneven Data Distribution Due to the Client Installation on the DataNode
Handling Unbalanced DataNode Disk Usage on Nodes
Locating Common Balance Problems
HDFS Displays Insufficient Disk Space But 10% Disk Space Remains
An Error Is Reported When the HDFS Client Is Installed on the Core Node in a Common Cluster
Client Installed on a Node Outside the Cluster Fails to Upload Files Using hdfs
Insufficient Number of Replicas Is Reported During High Concurrent HDFS Writes
HDFS Client Failed to Delete Overlong Directories
An Error Is Reported When a Node Outside the Cluster Accesses MRS HDFS
"ALM-12027 Host PID Usage Exceeds the Threshold" Is Generated for a NameNode
"ALM-14012 JournalNode Is Out of Synchronization" Is Generated in the Cluster
Failed to Decommission a DataNode Due to HDFS Block Loss
An Error Is Reported When DistCP Is Used to Copy an Empty Folder
Using Hive
Content Recorded in Hive Logs
Causes of Hive Startup Failure
"Cannot modify xxx at runtime" Is Reported When the set Command Is Executed in a Security Cluster
How to Specify a Queue When Hive Submits a Job
How to Set Map and Reduce Memory on the Client
Specifying the Output File Compression Format When Importing a Table
desc Table Cannot Be Completely Displayed
NULL Is Displayed When Data Is Inserted After the Partition Column Is Added
A Newly Created User Has No Query Permissions
An Error Is Reported When SQL Is Executed to Submit a Task to a Specified Queue
An Error Is Reported When the "load data inpath" Command Is Executed
An Error Is Reported When the "load data local inpath" Command Is Executed
An Error Is Reported When the "create external table" Command Is Executed
An Error Is Reported When the dfs -put Command Is Executed on the Beeline Client
Insufficient Permissions to Execute the set role admin Command
An Error Is Reported When UDF Is Created Using Beeline
Hive Status Is Bad
Hive Service Status Is Partially Healthy
Difference Between Hive Service Health Status and Hive Instance Health Status
Hive Alarms and Triggering Conditions
"authentication failed" Is Displayed During an Attempt to Connect to the Shell Client
Failed to Access ZooKeeper from the Client
"Invalid function" Is Displayed When a UDF Is Used
Hive Service Status Is Unknown
Health Status of a HiveServer or MetaStore Instance Is Unknown
Health Status of a HiveServer or MetaStore Instance Is Concerning
Garbled Characters Returned upon a select Query If Text Files Are Compressed Using ARC4
Hive Task Fails to Run on the Client But Succeeds on Yarn
An Error Is Reported When the select Statement Is Executed
Failed to Drop a Large Number of Partitions
Failed to Start a Local Task
Failed to Start WebHCat
Sample Code Error for Hive Secondary Development After Domain Switching
MetaStore Exception Occurs When the Number of DBService Connections Exceeds the Upper Limit
"Failed to execute session hooks: over max connections" Reported by Beeline
beeline Reports the "OutOfMemoryError" Error
Task Execution Fails Because the Input File Number Exceeds the Threshold
Task Execution Fails Because of Stack Memory Overflow
Task Failed Due to Concurrent Writes to One Table or Partition
Hive Task Failed Due to a Lack of HDFS Directory Permission
Failed to Load Data to Hive Tables
Failed to Run the Application Developed Based on the Hive JDBC Code Case
HiveServer and HiveHCat Process Faults
An Error Is Reported When MRS Hive Uses Code to Connect to ZooKeeper
An Error Occurs When the INSERT INTO Statement Is Executed on Hive But the Error Message Is Unclear
Timeout Reported When Adding the Hive Table Field
Failed to Restart the Hive Service
Hive Failed to Delete a Table
An Error Is Reported When msck repair table table_name Is Run on Hive
Insufficient User Permission for Running the insert into Command on Hive
How Do I Release Disk Space After Dropping a Table in Hive?
Abnormal Hive Query Due to Damaged Data in the JSON Table
Connection Timed Out During SQL Statement Execution on the Client
WebHCat Failed to Start Due to Abnormal Health Status
WebHCat Failed to Start Because the mapred-default.xml File Cannot Be Parsed
Using Hue
A Job Is Running on Hue
HQL Fails to Be Executed on Hue Using Internet Explorer
Failed to Access the Hue Web UI
HBase Tables Cannot Be Loaded on the Hue Web UI
Chinese Characters Entered in the Hue Text Box Are Displayed Incorrectly
An Error Is Reported If the Query Result of an Impala SQL Statement Executed on Hue Contains Chinese Characters
Using Impala
Failed to Connect to impala-shell
Failed to Create a Kudu Table
Failed to Log In to the Impala Client
Using Kafka
An Error Is Reported When Kafka Is Run to Obtain a Topic
How Do I Use Python 3.x to Connect to Kafka in a Security Cluster?
Flume Normally Connects to Kafka But Fails to Send Messages
Producer Failed to Send Data and Threw "NullPointerException"
Producer Fails to Send Data and "TOPIC_AUTHORIZATION_FAILED" Is Thrown
Producer Occasionally Fails to Send Data and the Log Displays "Too many open files in system"
Consumer Is Initialized Successfully, But the Specified Topic Message Cannot Be Obtained from Kafka
Consumer Fails to Consume Data and Remains in the Waiting State
SparkStreaming Fails to Consume Kafka Messages, and "Error getting partition metadata" Is Displayed
Consumer Fails to Consume Data in a Newly Created Cluster, and the Message "GROUP_COORDINATOR_NOT_AVAILABLE" Is Displayed
SparkStreaming Fails to Consume Kafka Messages, and the Message "Couldn't find leader offsets" Is Displayed
Consumer Fails to Consume Data and the Message "SchemaException: Error reading field 'brokers'" Is Displayed
Checking Whether Data Consumed by a Consumer Is Lost
Failed to Start the Component Due to Account Lockout
Kafka Broker Reports Abnormal Processes and the Log Shows "IllegalArgumentException"
Kafka Topics Cannot Be Deleted
Error "AdminOperationException" Is Displayed When a Kafka Topic Is Deleted
When a Kafka Topic Fails to Be Created, "NoAuthException" Is Displayed
Failed to Set an ACL for a Kafka Topic, and "NoAuthException" Is Displayed
When a Kafka Topic Fails to Be Created, "NoNode for /brokers/ids" Is Displayed
When a Kafka Topic Fails to Be Created, "replication factor larger than available brokers" Is Displayed
Consumer Repeatedly Consumes Data
Leader for the Created Kafka Topic Partition Is Displayed as none
Safety Instructions on Using Kafka
Obtaining Kafka Consumer Offset Information
Adding or Deleting Configurations for a Topic
Reading the Content of the __consumer_offsets Internal Topic
Configuring Logs for Shell Commands on the Client
Obtaining Topic Distribution Information
Kafka HA Usage Description
Failed to Manage a Kafka Cluster Using the Kafka Shell Command
Kafka Producer Writes Oversized Records
Kafka Consumer Reads Oversized Records
High Usage of Multiple Disks on a Kafka Cluster Node
Kafka Is Disconnected from the ZooKeeper Client
Using Oozie
Oozie Jobs Do Not Run When a Large Number of Jobs Are Submitted Concurrently
An Error Is Reported When Oozie Schedules HiveSQL Jobs
Oozie Tasks Cannot Be Submitted on a Client Outside the MRS Cluster or Can Be Submitted Only Two Hours Later
Using Presto
During sql-standard-with-group Configuration, a Schema Fails to Be Created and the Error Message "Access Denied" Is Displayed
The Presto Coordinator Cannot Be Started Properly
An Error Is Reported When Presto Is Used to Query a Kudu Table
No Data Is Found in the Hive Table Using Presto
What Do I Do If an Error Is Reported When an MRS Presto Query Statement Is Executed?
How Do I Access Presto from an MRS Cluster Through a Public Network?
Using Spark
An Error Occurs When the Split Size Is Changed in a Spark Application
An Error Is Reported When Spark Is Used
Spark, Hive, and Yarn Unavailable Due to Insufficient Disk Capacity
A Spark Job Fails to Run Due to Incorrect JAR File Import
Spark Job Suspended Due to Insufficient Memory or Lack of JAR Packages
An Error Is Reported During Spark Running
"Executor Memory Reaches the Threshold" Is Displayed in the Driver
Message "Can't get the Kerberos realm" Is Displayed in Yarn-cluster Mode
Failed to Start spark-sql and spark-shell Due to JDK Version Mismatch
ApplicationMaster Failed to Start Twice in Yarn-client Mode
Failed to Connect to ResourceManager When a Spark Task Is Submitted
DataArts Studio Failed to Schedule Spark Jobs
Submission Status of the Spark Job API Is Error
Alarm 43006 Is Repeatedly Generated in the Cluster
Failed to Create or Delete a Table in Spark Beeline
Failed to Connect to the Driver When a Node Outside the Cluster Submits a Spark Job to Yarn
A Large Number of Shuffle Results Are Lost During Spark Task Execution
Disk Space Is Insufficient Due to Long-Term Running of JDBCServer
Failed to Load Data to a Hive Table Across File Systems by Running SQL Statements Using Spark Shell
Spark Task Submission Failure
Spark Task Execution Failure
JDBCServer Connection Failure
Failed to View Spark Task Logs
Spark Streaming Task Issues
Authentication Fails When Spark Connects to Other Services
Authentication Fails When Spark Connects to Kafka
An Error Occurs When SparkSQL Reads the ORC Table
Failed to Switch to the Log Page from stderr and stdout on the Native Spark2x Page
An Error Is Reported When spark-beeline Is Used to Query a Hive View
Using Sqoop
Connecting Sqoop to MySQL
Failed to Find the HBaseAdmin.<init> Method When Sqoop Reads Data from the MySQL Database to HBase
Failed to Export HBase Data to HDFS Through Hue's Sqoop Task
A Format Error Is Reported When Sqoop Is Used to Export Data from Hive to MySQL 8.0
An Error Is Reported When sqoop import Is Executed to Import PostgreSQL Data to Hive
Sqoop Failed to Read Data from MySQL and Write Parquet Files to OBS
What Do I Do If an Error Is Reported During Database Data Migration Using Sqoop?
Using Storm
Invalid Hyperlink of Events on the Storm UI
Failed to Submit a Topology
Topology Submission Fails and the Message "Failed to check principle for keytab" Is Displayed
The Worker Log Is Empty After a Topology Is Submitted
Worker Runs Abnormally After a Topology Is Submitted and Error "Failed to bind to:host:ip" Is Displayed
"well-known file is not secure" Is Displayed When the jstack Command Is Used to Check the Process Stack
Data Cannot Be Written When the Storm-JDBC Plug-in Is Used to Develop Oracle Write Bolts
The GC Parameter Configured for the Service Topology Does Not Take Effect
Internal Server Error Is Displayed When the User Queries Information on the UI
Using Ranger
After Ranger Authentication Is Enabled for Hive, Unauthorized Tables and Databases Can Be Viewed on the Hue Page
Using Yarn
A Large Number of Jobs Are Found After Yarn Is Started
"GC overhead" Is Displayed on the Client When Tasks Are Submitted Using the Hadoop Jar Command
Disk Space Is Used Up Due to Oversized Aggregated Logs of Yarn
Temporary Files Are Not Deleted When an MR Job Is Abnormal
ResourceManager of Yarn (Port 8032) Throws Error "connection refused"
Failed to View Job Logs on the Yarn Web UI
An Error Is Reported When a Queue Name Is Clicked on the Yarn Page
Error 500 Is Reported When Job Logs Are Queried on the Yarn Web UI
An Error Is Reported When a Yarn Client Command Is Used to Query Historical Jobs
Number of Files in the TimelineServer Directory Reaches the Upper Limit
Using ZooKeeper
Accessing ZooKeeper from an MRS Cluster
ZooKeeper Is Unavailable Because of Non-synchronized Time Between Active and Standby Master Nodes
Accessing OBS
When Using the MRS Multi-user Access to OBS Function, a User Does Not Have the Permission to Access the /tmp Directory
When the Hadoop Client Is Used to Delete Data from OBS, It Does Not Have the Permission for the .Trash Directory
How Do I Change the NTP Server Address of a Cluster Node?