
Relationships Between ClickHouse and Other Components

ClickHouse depends on ZooKeeper for installation and deployment: ZooKeeper must be available in the cluster, and ClickHouse replicated tables store their replica coordination metadata in it.
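
The sketch below illustrates this dependency: a ReplicatedMergeTree table keeps its replica coordination metadata in ZooKeeper, so ZooKeeper must be running before such tables can be created. This is a minimal sketch, not MRS-specific code; it assumes the clickhouse-driver Python package, and the hostname, database, table, and ZooKeeper path are placeholder assumptions.

```python
# Minimal sketch (placeholders, not MRS defaults): creating a
# ReplicatedMergeTree table, whose replica coordination metadata is
# stored in ZooKeeper -- the reason ClickHouse requires ZooKeeper.
from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="clickhouse-node-1", user="default", password="")

client.execute("CREATE DATABASE IF NOT EXISTS demo")
client.execute("""
    CREATE TABLE IF NOT EXISTS demo.events
    (
        event_date Date,
        event_id   UInt64,
        payload    String
    )
    ENGINE = ReplicatedMergeTree(
        '/clickhouse/tables/{shard}/demo.events',  -- table's ZooKeeper path
        '{replica}'                                -- replica name macro
    )
    ORDER BY (event_date, event_id)
""")
```

In a real cluster, the {shard} and {replica} macros are defined in each server's configuration, so every replica of the table registers itself under the same ZooKeeper path.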

Flink stream computing applications generate common report data (detail flat wide tables) and write it to ClickHouse in near real time. Hive or Spark jobs generate the same kind of report data (detail flat wide tables) and import it into ClickHouse in batches.
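
The batch path can be made concrete with a short sketch: a Spark job reads a detail wide table from Hive and bulk-writes it into ClickHouse over JDBC. This is a hedged sketch, not the MRS tooling itself; the JDBC URL, driver class, table names, and credentials are assumptions, and the ClickHouse JDBC driver jar must be on the Spark classpath.

```python
# Sketch of a batch import: Spark reads report data from Hive and
# bulk-writes it into ClickHouse over JDBC. The URL, driver class,
# table names, and credentials below are placeholder assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-to-clickhouse-batch")
    .enableHiveSupport()          # lets Spark read the Hive warehouse
    .getOrCreate()
)

# Detail flat wide table produced by an upstream Hive/Spark job.
report_df = spark.sql("SELECT * FROM reports.daily_wide_table")

(
    report_df.write
    .format("jdbc")
    .option("url", "jdbc:clickhouse://clickhouse-node-1:8123/default")
    .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")
    .option("dbtable", "daily_wide_table")
    .option("user", "default")
    .option("password", "")
    .mode("append")               # batch-append into the ClickHouse table
    .save()
)
```

The streaming (Flink) path follows the same idea, except that rows arrive continuously and are written to ClickHouse in small batches rather than in one bulk load.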

Currently, ClickHouse does not support interconnection with Kafka in normal mode or with HDFS in security mode.