Updated on 2025-08-09 GMT+08:00

Disk Partitions of an MRS Cluster Node

MRS clusters can be used immediately after being created. You do not need to plan disk partitions. Table 1 describes the OS disk partitions of a created cluster node.

Table 1 OS disk partitions of an MRS cluster node

| Partition Type | Partition Directory | Capacity | Usage |
|---|---|---|---|
| OS partition | / | 220 GB | OS root partition and program storage directory, including all directories except those listed below |
| | /tmp | 10 GB | Directory for storing temporary files |
| | /var | 10 GB | OS runtime directory |
| | /var/log | Remaining space of the OS disk | Default directory for storing MRS cluster logs |
| | /srv/BigData | 60 GB | Data directory of MRS Manager, which stores data such as ldapData, Manager, and metric_agent, and provides mount points for component data directories |

This section applies only to MRS 3.x or later. In MRS 3.3.1-LTS or later, the OS disk is not partitioned and has only the / directory.

After an MRS node is created, the non-OS disks of the node are mounted to /srv/BigData/dataN directories. For example, if the node has four data disks, the disk mount directories are /srv/BigData/data1, /srv/BigData/data2, /srv/BigData/data3, and /srv/BigData/data4.
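
As a quick check of this layout, the following minimal Python sketch lists the /srv/BigData/dataN mount directories that exist on a node. It is an illustration only; list_data_dirs is a hypothetical name, not an MRS tool.

```python
import glob
import os
import re


def list_data_dirs(base="/srv/BigData"):
    """Return the dataN mount directories under the base path, sorted by N."""
    found = []
    for path in glob.glob(os.path.join(base, "data*")):
        match = re.fullmatch(r"data(\d+)", os.path.basename(path))
        if match and os.path.isdir(path):
            found.append((int(match.group(1)), path))
    return [path for _, path in sorted(found)]


if __name__ == "__main__":
    # On a node with four data disks, this prints
    # /srv/BigData/data1 through /srv/BigData/data4.
    for data_dir in list_data_dirs():
        print(data_dir)
```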

The metadata and data directories of the components deployed on a node are allocated to different disk partitions based on the mapping rules described at the end of this section. Table 2 describes the data directories of each component.

Table 2 Non-OS disk partitions of an MRS cluster node

| Partition Type | Disk Partition Mount Directory | Data Directory | Usage |
|---|---|---|---|
| Metadata partition | /srv/BigData/data1 | dbdata_om | Stores OMS database data. If FusionInsight Manager is installed on two nodes, this directory exists on both OMS nodes. |
| | | LocalBackup | Stores cluster backup data by default when LocalDir is selected as the backup destination. If FusionInsight Manager is installed on two nodes, this directory exists on both nodes. |
| | | doris/fe | Stores Doris metadata. |
| | /srv/BigData/data2 | journalnode | Stores HDFS JournalNode metadata on nodes where the HDFS JournalNode role is deployed. |
| | | dbdata_service | Stores the DBService database on nodes where the DBService DBServer role is deployed. |
| | | iotdb/iotdbserver | Stores IoTDB metadata. |
| | | iotdb/confignode | Stores metadata of the IoTDB ConfigNode role. |
| | /srv/BigData/data3 | namenode | Stores NameNode data on nodes where the HDFS NameNode role is deployed. |
| | | iotdb/iotdbserver | Stores IoTDBServer log data. |
| | /srv/BigData/data4 | zookeeper | Stores ZooKeeper data on nodes where the ZooKeeper QuorumPeer role is deployed. |
| | | hetuengine/qas | Stores QAS data on nodes where the HetuEngine QAS role is deployed. |
| Service data partition | /srv/BigData/dataN | dn, nm | Store DataNode data and intermediate MapReduce data. |
| | | kafka-logs | Stores Kafka broker data. |
| | | clickhouse, clickhouse_path | Store ClickHouse database data. The clickhouse_path directory, which holds ClickHouse metadata, exists only under data1. |
| | | iotdb/iotdbserver | Stores IoTDB service data. |
| | | memartscc/data_N | Stores MemArtsCC cache data. |
| | | doris/be | Stores Doris database data. |
| | | kudu | Stores Kudu data. |
| | | impala | Stores Impala data. |

  • The metadata partition uses a maximum of four disks (data1 to data4). The metadata directories are mapped to /srv/BigData/data1 through /srv/BigData/data4 in sequence, as shown in Table 2. If only three data disks are mounted to the node, the directories under data4 are combined with those under data2. If only two data disks are mounted, the directories under data3 are combined with those under data1, and those under data4 with those under data2. This fold-down rule is illustrated in the first sketch after this list.

    For example, if the ZooKeeper node has four data disks, the ZooKeeper data directory is /srv/BigData/data4/zookeeper. If the node has only three data disks, the ZooKeeper data directory is /srv/BigData/data2/zookeeper.

  • The mapping rules for service data directories are as follows (see the second sketch after this list):

    For HDFS, Kafka, ClickHouse, and IoTDB, mount points that match the /srv/BigData/dataN pattern are automatically identified as data directories, based on the number of disks mounted to the node.

    For example, if disks are mounted to /srv/BigData/data1 through /srv/BigData/data3, the DataNode data directories are /srv/BigData/data1/dn, /srv/BigData/data2/dn, and /srv/BigData/data3/dn, and the Kafka data directories are /srv/BigData/data1/kafka-logs, /srv/BigData/data2/kafka-logs, and /srv/BigData/data3/kafka-logs.
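
The fold-down rule above can be expressed as a short sketch. This is an illustration only, assuming Python 3 and the four-disk layout from Table 2; METADATA_LAYOUT and metadata_dirs are hypothetical names, not part of MRS, and the single-disk case is not described by the rule above, so the sketch rejects it.

```python
# Hypothetical illustration of the metadata fold-down rule; not an MRS interface.
METADATA_LAYOUT = {
    "data1": ["dbdata_om", "LocalBackup", "doris/fe"],
    "data2": ["journalnode", "dbdata_service", "iotdb/iotdbserver", "iotdb/confignode"],
    "data3": ["namenode", "iotdb/iotdbserver"],
    "data4": ["zookeeper", "hetuengine/qas"],
}


def metadata_dirs(num_disks, base="/srv/BigData"):
    """Return the metadata directory paths for a node with 2 to 4 data disks."""
    if num_disks >= 4:
        target = {"data1": "data1", "data2": "data2", "data3": "data3", "data4": "data4"}
    elif num_disks == 3:
        # The data4 directories are combined with data2.
        target = {"data1": "data1", "data2": "data2", "data3": "data3", "data4": "data2"}
    elif num_disks == 2:
        # data3 combines with data1, and data4 combines with data2.
        target = {"data1": "data1", "data2": "data2", "data3": "data1", "data4": "data2"}
    else:
        raise ValueError("the rule described above covers nodes with 2 to 4 data disks")

    paths = []
    for slot, subdirs in METADATA_LAYOUT.items():
        paths.extend(f"{base}/{target[slot]}/{sub}" for sub in subdirs)
    return sorted(paths)


# With three data disks, ZooKeeper data lands under /srv/BigData/data2/zookeeper.
print([p for p in metadata_dirs(3) if p.endswith("/zookeeper")])
```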
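
Likewise, the service data directory rule can be sketched as follows, again only as an illustration: SERVICE_SUBDIRS repeats a few subdirectory names from Table 2, and service_data_dirs is a hypothetical helper rather than an MRS API.

```python
import os

# A few of the service data subdirectories listed in Table 2.
SERVICE_SUBDIRS = {
    "HDFS DataNode": "dn",
    "MapReduce intermediate data": "nm",
    "Kafka": "kafka-logs",
    "ClickHouse": "clickhouse",
    "IoTDBServer": "iotdb/iotdbserver",
}


def service_data_dirs(component, mounted_dirs):
    """Derive one data directory per mounted dataN disk for the given component."""
    return [os.path.join(d, SERVICE_SUBDIRS[component]) for d in mounted_dirs]


# With disks mounted at data1 to data3 (for example, as returned by the
# list_data_dirs() sketch shown earlier), the DataNode data directories are
# /srv/BigData/data1/dn, /srv/BigData/data2/dn, and /srv/BigData/data3/dn.
mounts = ["/srv/BigData/data1", "/srv/BigData/data2", "/srv/BigData/data3"]
print(service_data_dirs("HDFS DataNode", mounts))
print(service_data_dirs("Kafka", mounts))
```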