Updated on 2026-01-04 GMT+08:00
Obtaining MRS Cluster Information
Components Supported by MRS
- MRS 3.5.0-LTS supports the following components:
- A custom cluster can contain the following components: Hadoop, Spark, HBase, Hive, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, ClickHouse, Guardian, JobGateway, MemArtsCC, Sqoop, Kudu, and Impala.
- A Doris cluster can contain the following components:
- An analysis cluster contains the following components: Hadoop, Spark, HBase, Hive, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, JobGateway, Guardian, MemArtsCC, Sqoop, Kudu, and Impala.
- A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster contains the following components: Hadoop, Spark, HBase, Hive, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, Flume, JobGateway, Guardian, MemArtsCC, Sqoop, Kudu, and Impala.
- MRS 3.3.0-LTS supports the following components:
- A custom cluster can contain the following components: CDL, Hadoop, Spark, HBase, Hive, IoTDB, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, ClickHouse, Guardian, and JobGateway.
- A Doris cluster can contain the following components:
- An analysis cluster contains the following components: Hadoop, Spark, HBase, Hive, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, JobGateway, Guardian, and Doris.
- A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark, HBase, Hive, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, Flume, JobGateway, and Guardian.
- MRS 3.2.0-LTS.1 supports the following components:
- An analysis cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and Guardian.
- A streaming cluster can contain the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, Flume, and Guardian.
- A custom cluster can contain the following components: CDL, Hadoop, Spark2x, HBase, Hive, Hue, IoTDB, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, ClickHouse, and Guardian.
- MRS 3.1.5 supports the following components:
- An analysis cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, and Guardian.
- A streaming cluster can contain the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, Guardian, Kafka, and Flume.
- A custom cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, Sqoop, and Guardian.
- MRS 3.1.2-LTS.3 supports the following components:
- An analysis cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, and Tez.
- A streaming cluster can contain the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, and Flume.
- A custom cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and ClickHouse.
- MRS 3.1.0 supports the following components:
- An analysis cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, and Kudu.
- A streaming cluster can contain the following components: Kafka, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Kafka, and Flume.
- A custom cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, and Kudu.
- MRS 3.0.5 supports the following components:
- An analysis cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, and Alluxio.
- A streaming cluster can contain the following components: Kafka, Storm, Flume, ZooKeeper, and Ranger.
- A hybrid cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Alluxio, Kafka, Storm, and Flume.
- A custom cluster can contain the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Storm, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, and Alluxio.
- MRS 2.1.0 supports the following components:
- An analysis cluster can contain the following components: Presto, Hadoop, Spark, HBase, Hive, Hue, Loader, Tez, Impala, Kudu, and Flink.
- A streaming cluster can contain the following components: Kafka, Storm, and Flume.
- MRS 1.9.2 supports the following components:
- An analysis cluster can contain the following components: Presto, Hadoop, Spark, HBase, OpenTSDB, Hive, Hue, Loader, Tez, Flink, Alluxio, and Ranger.
- A streaming cluster can contain the following components: Kafka, KafkaManager, Storm, and Flume.
Obtaining a Cluster ID
A cluster ID (cluster_id) is required for some URLs when an API is called. To obtain a cluster ID, perform the following operations:
- Log in to the MRS management console.
- On the Active Clusters page, click the name of the target cluster. The cluster details page is displayed.
- Click the Dashboard tab and obtain the cluster ID in the Basic Information area.
Figure 1: Obtaining a cluster ID
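Once obtained, the cluster ID is substituted into the request URL of the API being called. The following sketch shows how that substitution might look; the endpoint host and path are illustrative assumptions, not a definitive reference, so check the MRS API documentation for the exact URL of the operation you need.

```python
# Illustrative sketch: building an MRS API request URL that contains a
# cluster ID. The host name and path version here are assumptions; consult
# the MRS API reference for the authoritative endpoint.
def cluster_details_url(region: str, project_id: str, cluster_id: str) -> str:
    """Return a cluster-details request URL with cluster_id filled in."""
    return (f"https://mrs.{region}.myhuaweicloud.com"
            f"/v1.1/{project_id}/clusters/{cluster_id}")

# Example with placeholder IDs (replace with values from the console):
print(cluster_details_url("cn-north-4", "my-project-id", "my-cluster-id"))
```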

Obtaining a Job ID
A job ID (job_execution_id) is required for some URLs when an API is called. To obtain a job ID, perform the following operations:
- Log in to the MRS management console.
- On the Active Clusters page, click the name of the target cluster. The cluster details page is displayed.
- Click the Jobs tab and obtain the ID of the target job from the job list.
Figure 2: Obtaining a job ID
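As with the cluster ID, the job ID (job_execution_id) is placed directly into the request URL. The sketch below shows the pattern under the same assumptions as before: the host and path are illustrative placeholders, and the real endpoint for a given job operation should be taken from the MRS API reference.

```python
# Illustrative sketch: building an MRS API request URL that contains a job
# execution ID. The host name and path segments are assumptions; consult
# the MRS API reference for the authoritative endpoint.
def job_details_url(region: str, project_id: str, job_execution_id: str) -> str:
    """Return a job-query request URL with job_execution_id filled in."""
    return (f"https://mrs.{region}.myhuaweicloud.com"
            f"/v1.1/{project_id}/job-exes/{job_execution_id}")

# Example with placeholder IDs (replace with values from the console):
print(job_details_url("cn-north-4", "my-project-id", "my-job-id"))
```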

Parent topic: Appendix
