

Relationships Between ClickHouse and Other Components

Updated on 2025-01-22 GMT+08:00

ClickHouse depends on ZooKeeper for installation and deployment.
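This dependency shows up in ClickHouse's server configuration, where the ZooKeeper ensemble used for replica coordination is declared. A minimal sketch of such a `<zookeeper>` section follows; the hostnames are placeholders, and in an MRS cluster this configuration is generated during deployment rather than written by hand.

```xml
<!-- Illustrative ZooKeeper section of a ClickHouse config.xml.
     Hostnames are hypothetical; MRS provisions the real values. -->
<zookeeper>
    <node>
        <host>zk1.example.com</host>
        <port>2181</port>
    </node>
    <node>
        <host>zk2.example.com</host>
        <port>2181</port>
    </node>
    <node>
        <host>zk3.example.com</host>
        <port>2181</port>
    </node>
</zookeeper>
```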

Flink stream computing applications generate common report data (detailed wide tables) and write it to ClickHouse in near real time, while Hive/Spark jobs generate the same kind of report data and batch-import it into ClickHouse.
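Both paths ultimately issue batched `INSERT` statements against ClickHouse, since row-by-row inserts perform poorly there. The sketch below illustrates the batching idea only: it assembles one multi-row `INSERT ... VALUES` statement in plain Python. A real Flink or Spark job would use a ClickHouse connector or JDBC sink instead, and the table and column names here are hypothetical.

```python
def format_batch_insert(table, columns, rows):
    """Build one ClickHouse INSERT statement covering a whole batch of rows.

    Illustrative sketch only: production jobs should use a connector
    with parameterized inserts rather than hand-built SQL strings.
    """
    def literal(value):
        # Quote strings, escape embedded single quotes; pass numbers through.
        if isinstance(value, str):
            return "'" + value.replace("'", "\\'") + "'"
        return str(value)

    cols = ", ".join(columns)
    values = ", ".join(
        "(" + ", ".join(literal(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values}"


# Hypothetical wide-table batch: two rows inserted in a single statement.
sql = format_batch_insert(
    "report_wide_table",
    ["dt", "uid", "pv"],
    [("2025-01-22", 1001, 5), ("2025-01-22", 1002, 3)],
)
```

Grouping many rows into one statement is what makes the quasi-real-time (micro-batched) and batch-import paths efficient against ClickHouse's columnar storage.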

NOTE:

Currently, ClickHouse does not support interconnection with Kafka in normal mode or HDFS in security mode.
