Creating a ClickHouse Table

Updated on 2024-07-19 GMT+08:00

ClickHouse implements the replicated table mechanism based on the ReplicatedMergeTree engine and ZooKeeper. When creating a table, you can specify an engine to determine whether the table is highly available. Shards and replicas of each table are independent of each other.

ClickHouse also implements the distributed table mechanism based on the Distributed engine. A distributed table is a view over the local tables on all shards, which makes distributed queries easy to use. ClickHouse supports data sharding, a feature of distributed storage: data is read and written in parallel across shards to improve efficiency.

ClickHouse clusters that use the Kunpeng CPU architecture do not support the HDFS and Kafka table engines.

Viewing cluster and Other Environment Parameters of ClickHouse

  1. Use the ClickHouse client to connect to the ClickHouse server by referring to Using ClickHouse from Scratch.
  2. Query the cluster identifier and other environment parameters.

    select cluster,shard_num,replica_num,host_name from system.clusters;
    SELECT 
        cluster, 
        shard_num, 
        replica_num, 
        host_name
    FROM system.clusters
    
    ┌─cluster───────────┬─shard_num─┬─replica_num─┬─host_name────────────┐
    │ default_cluster_1 │         1 │           1 │ node-master1dOnG     │
    │ default_cluster_1 │         1 │           2 │ node-group-1tXED0001 │
    │ default_cluster_1 │         2 │           1 │ node-master2OXQS     │
    │ default_cluster_1 │         2 │           2 │ node-group-1tXED0002 │
    │ default_cluster_1 │         3 │           1 │ node-master3QsRI     │
    │ default_cluster_1 │         3 │           2 │ node-group-1tXED0003 │
    └───────────────────┴───────────┴─────────────┴──────────────────────┘
    
    6 rows in set. Elapsed: 0.001 sec. 

  3. Query the shard and replica identifiers.

    select * from system.macros;
    SELECT *
    FROM system.macros
    
    ┌─macro───┬─substitution─┐
    │ id      │ 76           │
    │ replica │ 2            │
    │ shard   │ 3            │
    └─────────┴──────────────┘
    
    3 rows in set. Elapsed: 0.001 sec. 
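
The {shard} and {replica} placeholders used in the table definitions below resolve to the values listed in system.macros for the node you are connected to. As a quick check, you can also read a single macro directly; this is a minimal sketch and assumes your ClickHouse version provides the getMacro function:

    -- Returns the macro values of the currently connected node. The same values are
    -- substituted into ReplicatedMergeTree paths such as /clickhouse/tables/{shard}/default/test.
    SELECT getMacro('shard') AS shard, getMacro('replica') AS replica;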

Creating a Local Replicated Table and a Distributed Table

  1. Log in to the ClickHouse node using the client, for example, clickhouse client --host node-master3QsRI --multiline --port 9440 --secure;

    NOTE:

    node-master3QsRI is the value of host_name obtained in step 2 of Viewing cluster and Other Environment Parameters of ClickHouse.

  2. Create a replicated table using the ReplicatedMergeTree engine.

    For example, run the following statement to create a ReplicatedMergeTree table named test in the default database on the default_cluster_1 cluster:

    CREATE TABLE default.test ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime,
        `id` UInt64
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/default/test', '{replica}')
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY id;

    The parameters are described as follows:

    • The ON CLUSTER syntax indicates distributed DDL: after the statement is executed once, the same local table is created on every instance in the cluster.
    • default_cluster_1 is the cluster identifier obtained in step 2 of Viewing cluster and Other Environment Parameters of ClickHouse.
      CAUTION:
      The ReplicatedMergeTree engine takes the following two parameters:
      • Storage path of the table data in ZooKeeper

        The path must be in the /clickhouse directory. Otherwise, data insertion may fail due to insufficient ZooKeeper quota.

        To avoid data conflicts between different tables in ZooKeeper, the directory must be in the following format:

        /clickhouse/tables/{shard}/default/test, in which /clickhouse/tables/{shard} is fixed, default indicates the database name, and test indicates the name of the created table. (A sketch for checking how the {shard} and {replica} macros were expanded on the current node appears at the end of this section.)

      • Replica name: Generally, {replica} is used.
    CREATE TABLE default.test ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime, 
        `id` UInt64
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/default/test', '{replica}')
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY id
    
    ┌─host─────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ node-group-1tXED0002 │ 9000 │      0 │       │                   5 │                3 │
    │ node-group-1tXED0003 │ 9000 │      0 │       │                   4 │                3 │
    │ node-master1dOnG     │ 9000 │      0 │       │                   3 │                3 │
    └──────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    ┌─host─────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ node-master3QsRI     │ 9000 │      0 │       │                   2 │                0 │
    │ node-group-1tXED0001 │ 9000 │      0 │       │                   1 │                0 │
    │ node-master2OXQS     │ 9000 │      0 │       │                   0 │                0 │
    └──────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    
    6 rows in set. Elapsed: 0.189 sec. 

  3. Create a distributed table using the Distributed engine.

    For example, run the following statement to create a distributed table named test_all in the default database on the default_cluster_1 cluster:

    CREATE TABLE default.test_all ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime,
        `id` UInt64
    )
    ENGINE = Distributed(default_cluster_1, default, test, rand());

    CREATE TABLE default.test_all ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime, 
        `id` UInt64
    )
    ENGINE = Distributed(default_cluster_1, default, test, rand())
    
    ┌─host─────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ node-group-1tXED0002 │ 9000 │      0 │       │                   5 │                0 │
    │ node-master3QsRI     │ 9000 │      0 │       │                   4 │                0 │
    │ node-group-1tXED0003 │ 9000 │      0 │       │                   3 │                0 │
    │ node-group-1tXED0001 │ 9000 │      0 │       │                   2 │                0 │
    │ node-master1dOnG     │ 9000 │      0 │       │                   1 │                0 │
    │ node-master2OXQS     │ 9000 │      0 │       │                   0 │                0 │
    └──────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    
    6 rows in set. Elapsed: 0.115 sec. 
    
    NOTE:

    The Distributed engine requires the following parameters:

    • default_cluster_1 is the cluster identifier obtained in step 2 of Viewing cluster and Other Environment Parameters of ClickHouse.
    • default indicates the name of the database where the local table is located.
    • test indicates the name of the local table. In this example, it is the name of the table created in step 2.
    • (Optional) Sharding key

      The sharding key, together with the weight configured in the config.xml file, determines the route for writing data to the distributed table, that is, which physical table each row is written to. The key can be the raw value of a column in the table (for example, site_id) or the result of a function call; rand() is used in the preceding SQL statement. Make sure data is evenly distributed across this key. Another common practice is to hash a column with high cardinality, for example, intHash64(user_id), as shown in the sketch at the end of this section.
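
For illustration, the following is a minimal, hypothetical sketch of a pair of tables sharded by the hash of a user ID instead of rand(). The table and column names (user_events, user_id) are placeholders and are not part of this example cluster:

    -- Hypothetical local table; one replica is created per node via ON CLUSTER.
    CREATE TABLE default.user_events ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime,
        `user_id` UInt64
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/default/user_events', '{replica}')
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY user_id;

    -- Distributed table that routes each row to a shard by hashing user_id,
    -- so all rows for the same user land on the same shard.
    CREATE TABLE default.user_events_all ON CLUSTER default_cluster_1
    (
        `EventDate` DateTime,
        `user_id` UInt64
    )
    ENGINE = Distributed(default_cluster_1, default, user_events, intHash64(user_id));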
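
To confirm how the {shard} and {replica} macros were expanded for the replicated table default.test created in step 2, you can query the system.replicas system table on the connected node; a minimal sketch:

    -- Shows the ZooKeeper path and replica name registered for default.test on this node.
    SELECT database, `table`, zookeeper_path, replica_name
    FROM system.replicas
    WHERE database = 'default' AND `table` = 'test';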

ClickHouse Table Data Operations

  1. Log in to a ClickHouse node using the client. Example:

    clickhouse client --host node-master3QsRI --multiline --port 9440 --secure;
    NOTE:

    node-master3QsRI is the value of host_name obtained in step 2 of Viewing cluster and Other Environment Parameters of ClickHouse.

  2. After creating a table by referring to Creating a Local Replicated Table and a Distributed Table, you can insert data into the local table. (A sketch of writing through the distributed table instead appears at the end of this section.)

    For example, run the following command to insert data into the local table test:

    insert into test values(toDateTime(now()), rand());

  3. Query the local table information.

    For example, run the following command to query data in the table test created in step 2:

    select * from test;

    SELECT *
    FROM test
    
    ┌───────────EventDate─┬─────────id─┐
    │ 2020-11-05 21:10:42 │ 1596238076 │
    └─────────────────────┴────────────┘
    
    1 rows in set. Elapsed: 0.002 sec. 
     

  4. Query the distributed table.

    For example, the distributed table test_all created in step 3 is based on table test. Therefore, the same data in table test can also be queried from table test_all.

    select * from test_all;

    SELECT *
    FROM test_all
    
    ┌───────────EventDate─┬─────────id─┐
    │ 2020-11-05 21:10:42 │ 1596238076 │
    └─────────────────────┴────────────┘
    
    1 rows in set. Elapsed: 0.004 sec. 

  5. Switch to a node in the same shard (same shard_num) and query the current table. The same table data is returned.

    For example, run the exit; command to exit the original node.

    Run the following command to switch to the node-group-1tXED0003 node:

    clickhouse client --host node-group-1tXED0003 --multiline --port 9440 --secure;

    NOTE:

    The shard_num values of node-group-1tXED0003 and node-master3QsRI are the same, as shown in the output of step 2 in Viewing cluster and Other Environment Parameters of ClickHouse.

    show tables;

    SHOW TABLES
    
    ┌─name─────┐
    │ test     │
    │ test_all │
    └──────────┘
     

  6. Query the local table data. For example, run the following command to query data in table test on the node-group-1tXED0003 node:

    select * from test;
    SELECT *
    FROM test
    
    ┌───────────EventDate─┬─────────id─┐
    │ 2020-11-05 21:10:42 │ 1596238076 │
    └─────────────────────┴────────────┘
    
    1 rows in set. Elapsed: 0.005 sec. 
    

  7. Switch to a node in a different shard (different shard_num value) and query the data of the created table.

    For example, run the following command to exit the node-group-1tXED0003 node:

    exit;

    Switch to the node-group-1tXED0001 node. The shard_num values of node-group-1tXED0001 and node-master3QsRI are different, as shown in the output of step 2 in Viewing cluster and Other Environment Parameters of ClickHouse.

    clickhouse client --host node-group-1tXED0001 --multiline --port 9440 --secure;

    Query the local table test. No data is returned because test is a local table and this node belongs to a different shard.

    select * from test;

    SELECT *
    FROM test
    
    Ok.

    Query data in the distributed table test_all. The data can be queried properly. (A sketch for checking how rows are spread across the shards appears at the end of this section.)

    select * from test_all;

    SELECT *
    FROM test_all
    
    ┌───────────EventDate─┬─────────id─┐
    │ 2020-11-05 21:12:19 │ 3686805070 │
    └─────────────────────┴────────────┘
    
    1 rows in set. Elapsed: 0.002 sec. 
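
The procedure above writes directly to the local table of one shard. Rows can also be written through the distributed table, in which case the Distributed engine routes each row to a shard according to the sharding key (rand() in this example). A minimal sketch, run from any node:

    -- Each inserted row is forwarded to the local table default.test on one of the shards.
    INSERT INTO test_all VALUES (toDateTime(now()), rand());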
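
To check how rows are spread across the shards, a query against the distributed table can group by the host that holds each row. A minimal sketch using the built-in hostName function:

    -- Counts the rows of the underlying local table default.test on each node
    -- that answers the distributed query (one replica per shard).
    SELECT hostName() AS host, count() AS cnt
    FROM test_all
    GROUP BY host
    ORDER BY host;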
     
