CarbonData Quick Start
This section describes how to create CarbonData tables, load data, and query data. This quick start provides operations based on the Spark Beeline client. If you want to use Spark shell, wrap the queries with spark.sql().
Operation | Description
---|---
Connecting to CarbonData | Connect to CarbonData before performing any operations on CarbonData.
Creating a CarbonData Table | Create a CarbonData table to load data and perform query operations.
Loading Data to a CarbonData Table | Load data from CSV to the created table.
Querying Data from a CarbonData Table | Perform query operations such as filters and group by.
Connecting to CarbonData
- Use Spark SQL or Spark shell to connect to Spark and run Spark SQL commands.
- Start the JDBCServer and use a JDBC client (for example, Spark Beeline) to connect to the JDBCServer.
A user must belong to the data loading group to perform data loading operations. The default name of the data loading group is ficommon.
Creating a CarbonData Table
After connecting Spark Beeline to the JDBCServer, create a CarbonData table to load data and perform query operations. Run the following command to create a simple table:
create table x1 (imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata TBLPROPERTIES ('SORT_COLUMNS'='imei,mac');
The command output is as follows:
+---------+--+
| result  |
+---------+--+
+---------+--+
No rows selected (1.551 seconds)
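To confirm that the table was created with the expected schema, you can optionally run standard Spark SQL statements such as the following (a simple check, not part of the original example):
show tables;
describe formatted x1;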
Loading Data to a CarbonData Table
After you have created a CarbonData table, you can load the data from CSV to the created table.
Loading data from a CSV file to a CarbonData table
Run the following command with the required options to load data from the CSV file. Because the example CSV file has no header row, the FILEHEADER option lists the column names in the order in which they appear in the file; these names must match the columns of the CarbonData table.
LOAD DATA inpath 'hdfs://hacluster/data/x1_without_header.csv' into table x1 options('DELIMITER'=',', 'QUOTECHAR'='"','FILEHEADER'='imei, deviceinformationid,mac, productdate,updatetime, gamepointid,contractnumber');
In the preceding command, x1_without_header.csv and x1 are used as examples.
The CSV example file is as follows:
13418592122,1001, MAC address, 2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56
13418592123,1002, MAC address, 2017-10-23 16:32:30,2017-10-24 16:32:30,17.80,76.28
13418592124,1003, MAC address, 2017-10-23 17:32:30,2017-10-24 17:32:30,20.40,92.94
13418592125,1004, MAC address, 2017-10-23 18:32:30,2017-10-24 18:32:30,73.84,8.58
13418592126,1005, MAC address, 2017-10-23 19:32:30,2017-10-24 19:32:30,80.50,88.02
13418592127,1006, MAC address, 2017-10-23 20:32:30,2017-10-24 20:32:30,65.77,71.24
13418592128,1007, MAC address, 2017-10-23 21:32:30,2017-10-24 21:32:30,75.21,76.04
13418592129,1008, MAC address, 2017-10-23 22:32:30,2017-10-24 22:32:30,63.30,94.40
13418592130,1009, MAC address, 2017-10-23 23:32:30,2017-10-24 23:32:30,95.51,50.17
13418592131,1010, MAC address, 2017-10-24 00:32:30,2017-10-25 00:32:30,39.62,99.13
The command output is as follows:
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (3.039 seconds)
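To verify that the load created a new segment, you can optionally list the segments of the table with the CarbonData segment command shown below; the exact output columns depend on the CarbonData version in use:
SHOW SEGMENTS FOR TABLE x1;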
Querying Data from a CarbonData Table
After a CarbonData table is created and the data is loaded, you can perform query operations as required. Some query operations are provided as examples.
- Obtaining the number of records
Run the following command to obtain the number of records in the CarbonData table:
select count(*) from x1;
- Querying with the groupby condition
Run the following command to obtain the distinct deviceinformationid values in the CarbonData table:
select deviceinformationid,count(distinct deviceinformationid) from x1 group by deviceinformationid;
- Querying with a filter condition
Run the following command to obtain specific deviceinformationid records:
select * from x1 where deviceinformationid='1010';
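Filter and group-by operations can also be combined in a single statement. The following query is an additional example not taken from the original walkthrough; it uses only columns defined in the example table and sums gamePointId for devices with deviceinformationid greater than 1005:
select deviceinformationid, sum(gamePointId) from x1 where deviceinformationid > 1005 group by deviceinformationid;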
Using CarbonData on Spark-shell
If you want to use CarbonData in spark-shell, create the CarbonData table, load data to the table, and query the data by wrapping the same SQL statements in spark.sql(), as follows:
spark.sql("CREATE TABLE x2(imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata") spark.sql("LOAD DATA inpath 'hdfs://hacluster/data/x1_without_header.csv' into table x2 options('DELIMITER'=',', 'QUOTECHAR'='\"','FILEHEADER'='imei, deviceinformationid,mac, productdate,updatetime, gamepointid,contractnumber')") spark.sql("SELECT * FROM x2").show()