Updated on 2024-11-29 GMT+08:00

CarbonData Quick Start

This section describes how to create a CarbonData table, load data, and query data. This quick start is based on the Spark Beeline client; to use spark-shell instead, wrap each statement in spark.sql().

The following describes how to load data from a CSV file to a CarbonData table.

Table 1 CarbonData Quick Start

  Preparing a CSV File: Prepare the CSV file to be loaded to the CarbonData table.

  Connecting to CarbonData: Connect to CarbonData before performing any operations on it.

  Creating a CarbonData Table: Create a CarbonData table for loading data and performing queries.

  Loading Data to a CarbonData Table: Load data from the CSV file into the created table.

  Querying Data from a CarbonData Table: Perform query operations such as filters and group by.

Preparing a CSV File

  1. Prepare a CSV file named test.csv on the local PC. An example is as follows:
    13418592122,1001,MAC address,2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56
    13418592123,1002,MAC address,2017-10-23 16:32:30,2017-10-24 16:32:30,17.80,76.28
    13418592124,1003,MAC address,2017-10-23 17:32:30,2017-10-24 17:32:30,20.40,92.94
    13418592125,1004,MAC address,2017-10-23 18:32:30,2017-10-24 18:32:30,73.84,8.58
    13418592126,1005,MAC address,2017-10-23 19:32:30,2017-10-24 19:32:30,80.50,88.02
    13418592127,1006,MAC address,2017-10-23 20:32:30,2017-10-24 20:32:30,65.77,71.24
    13418592128,1007,MAC address,2017-10-23 21:32:30,2017-10-24 21:32:30,75.21,76.04
    13418592129,1008,MAC address,2017-10-23 22:32:30,2017-10-24 22:32:30,63.30,94.40
    13418592130,1009,MAC address,2017-10-23 23:32:30,2017-10-24 23:32:30,95.51,50.17
    13418592131,1010,MAC address,2017-10-24 00:32:30,2017-10-25 00:32:30,39.62,99.13
  2. Use WinSCP to import the CSV file to the directory of the node where the client is installed, for example, /opt.
  3. If Kerberos authentication is enabled for the cluster, log in to FusionInsight Manager, choose System > Permission > User, and create the human-machine user sparkuser with the user groups hadoop (primary group) and hive.
  4. Run the following commands to go to the client installation directory, load environment variables, and authenticate the user.

    cd Client installation directory

    source ./bigdata_env

    source ./Spark/component_env

    kinit sparkuser

  5. Run the following command to upload the CSV file to the /data directory in HDFS:

    hdfs dfs -put /opt/test.csv /data/
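Before uploading, it can help to sanity-check that every row has exactly seven comma-separated fields, since an inconsistent delimiter is a common cause of failed loads. A minimal sketch, assuming the file prepared in step 1 is at /opt/test.csv; if that path does not exist, the sketch falls back to a one-row sample so it can run anywhere:

```shell
# Path to the CSV prepared in step 1 (assumption: /opt/test.csv).
csv=/opt/test.csv
if [ ! -f "$csv" ]; then
  # Fall back to a one-row sample purely for illustration.
  csv=$(mktemp)
  printf '13418592122,1001,MAC address,2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56\n' > "$csv"
fi
# Count rows whose comma-separated field count is not 7.
bad=$(awk -F',' 'NF != 7 {n++} END {print n+0}' "$csv")
echo "malformed rows: $bad"
```

A result other than 0 usually points to a missing or extra comma in one of the rows.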

Connecting to CarbonData

  • Use Spark SQL or Spark shell to connect to Spark and run Spark SQL commands.
  • Start JDBCServer and use a JDBC client (for example, Spark Beeline) to connect to JDBCServer.

    Run the following commands:

    cd ./Spark/spark/bin

    ./spark-beeline

Creating a CarbonData Table

After connecting Spark Beeline to JDBCServer, create a CarbonData table to load data into and run queries against. Run the following command to create a simple table:

create table x1 (imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata TBLPROPERTIES ('SORT_COLUMNS'='imei,mac');

The command output is as follows:

+---------+
| Result  |
+---------+
+---------+
No rows selected (1.093 seconds)
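To confirm that the table was created with the expected schema, you can describe it. This is standard Spark SQL (the exact output layout varies by version), sketched here as a follow-up check rather than a required step:

```sql
-- List tables in the current database and inspect the schema of x1.
show tables;
describe formatted x1;
```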

Loading Data to a CarbonData Table

After you have created a CarbonData table, you can load the data from CSV to the created table.

Run the following command with the required parameters to load data from the CSV file. The column names of the CarbonData table must match those of the CSV file.

LOAD DATA inpath 'hdfs://hacluster/data/test.csv' into table x1 options('DELIMITER'=',', 'QUOTECHAR'='"','FILEHEADER'='imei, deviceinformationid,mac, productdate,updatetime, gamepointid,contractnumber');

test.csv is the CSV file prepared in Preparing a CSV File and x1 is the table name.


The command output is as follows:

+------------+
|Segment ID  |
+------------+
|0           |
+------------+
No rows selected (3.039 seconds)
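Each successful LOAD DATA run creates a new segment (the Segment ID in the output above). To verify the load or track segments afterward, CarbonData provides a segment listing command; a sketch, noting that the exact syntax can vary across CarbonData versions:

```sql
-- List the segments of table x1, including their status and load time.
SHOW SEGMENTS FOR TABLE x1;
```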

Querying Data from a CarbonData Table

After a CarbonData table is created and the data is loaded, you can perform query operations as required. Some query operations are provided as examples.

  • Obtaining the number of records

    Run the following command to obtain the number of records in the CarbonData table:

    select count(*) from x1;

  • Querying with the group by condition

    Run the following command to obtain the distinct deviceinformationid values in the CarbonData table:

    select deviceinformationid, count(distinct deviceinformationid) from x1 group by deviceinformationid;

  • Querying with Filter

    Run the following command to obtain specific deviceinformationid records:

    select * from x1 where deviceinformationid='1010';
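The filter and group by patterns above can also be combined in a single statement. A sketch that averages gamePointId per mac for a subset of devices, using only the columns defined in Creating a CarbonData Table:

```sql
-- Average game points per MAC address for devices with id 1005 or higher.
select mac, avg(gamePointId) as avg_game_points
from x1
where deviceinformationid >= 1005
group by mac;
```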

Using CarbonData on Spark-shell

To use CarbonData in spark-shell, create a CarbonData table, load data into it, and query the data as follows:

spark.sql("CREATE TABLE x2(imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata")
spark.sql("LOAD DATA inpath 'hdfs://hacluster/data/x1_without_header.csv' into table x2 options('DELIMITER'=',', 'QUOTECHAR'='\"','FILEHEADER'='imei, deviceinformationid,mac, productdate,updatetime, gamepointid,contractnumber')")
spark.sql("SELECT * FROM x2").show()