Creating a CarbonData Table Using the Spark Client
This section describes how to create CarbonData tables, load data, and query data. This quick start provides operations based on the Spark Beeline client. If you want to use Spark shell, wrap the queries with spark.sql().
The following describes how to load data from a CSV file to a CarbonData table.
Operation | Description
---|---
Preparing a CSV File | Prepare the CSV file to be loaded to the CarbonData table.
Connecting to CarbonData | Connect to CarbonData before performing any operations on CarbonData.
Creating a CarbonData Table | Create a CarbonData table to load data and perform query operations.
Loading Data to a CarbonData Table | Load data from the CSV file to the created table.
Querying Data from a CarbonData Table | Perform query operations such as filters and group by.
Preparing a CSV File
- Prepare a CSV file named test.csv on the local PC. An example is as follows:
13418592122,1001,MAC address,2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56
13418592123,1002,MAC address,2017-10-23 16:32:30,2017-10-24 16:32:30,17.80,76.28
13418592124,1003,MAC address,2017-10-23 17:32:30,2017-10-24 17:32:30,20.40,92.94
13418592125,1004,MAC address,2017-10-23 18:32:30,2017-10-24 18:32:30,73.84,8.58
13418592126,1005,MAC address,2017-10-23 19:32:30,2017-10-24 19:32:30,80.50,88.02
13418592127,1006,MAC address,2017-10-23 20:32:30,2017-10-24 20:32:30,65.77,71.24
13418592128,1007,MAC address,2017-10-23 21:32:30,2017-10-24 21:32:30,75.21,76.04
13418592129,1008,MAC address,2017-10-23 22:32:30,2017-10-24 22:32:30,63.30,94.40
13418592130,1009,MAC address,2017-10-23 23:32:30,2017-10-24 23:32:30,95.51,50.17
13418592131,1010,MAC address,2017-10-24 00:32:30,2017-10-25 00:32:30,39.62,99.13
- Use WinSCP to import the CSV file to the directory of the node where the client is installed, for example, /opt.
- Log in to FusionInsight Manager and choose System. In the navigation pane on the left, choose Permission > User, click Create to create the human-machine user sparkuser, and add the user to the hadoop (primary group) and hive user groups.
- Run the following commands to go to the client installation directory, load environment variables, and authenticate the user:
cd /Client installation directory
source ./bigdata_env
source ./Spark2x/component_env
kinit sparkuser
In MRS 3.3.0-LTS and later versions, the Spark2x component is renamed Spark, and its role names are changed accordingly. For example, JobHistory2x is changed to JobHistory. Follow the component and role names that match your MRS version.
- Run the following command to upload the CSV file to the /data directory of HDFS:
hdfs dfs -put /opt/test.csv /data/
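The preparation steps above can be sketched locally. The snippet below is an illustrative Python sketch (not part of the MRS client) that writes the ten sample rows to test.csv, the file WinSCP would copy to /opt, and checks that every row has the seven fields the table schema created later expects. The HDFS upload with hdfs dfs -put must still be run separately.

```python
import csv

# The ten sample rows from this guide ("MAC address" is a placeholder value).
# The file has no header row; column names are supplied at load time
# through the FILEHEADER option.
SAMPLE_ROWS = """\
13418592122,1001,MAC address,2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56
13418592123,1002,MAC address,2017-10-23 16:32:30,2017-10-24 16:32:30,17.80,76.28
13418592124,1003,MAC address,2017-10-23 17:32:30,2017-10-24 17:32:30,20.40,92.94
13418592125,1004,MAC address,2017-10-23 18:32:30,2017-10-24 18:32:30,73.84,8.58
13418592126,1005,MAC address,2017-10-23 19:32:30,2017-10-24 19:32:30,80.50,88.02
13418592127,1006,MAC address,2017-10-23 20:32:30,2017-10-24 20:32:30,65.77,71.24
13418592128,1007,MAC address,2017-10-23 21:32:30,2017-10-24 21:32:30,75.21,76.04
13418592129,1008,MAC address,2017-10-23 22:32:30,2017-10-24 22:32:30,63.30,94.40
13418592130,1009,MAC address,2017-10-23 23:32:30,2017-10-24 23:32:30,95.51,50.17
13418592131,1010,MAC address,2017-10-24 00:32:30,2017-10-25 00:32:30,39.62,99.13
"""

with open("test.csv", "w") as f:
    f.write(SAMPLE_ROWS)

# Sanity check before uploading: every row must have exactly 7 fields
# to match the 7 columns of the CarbonData table created later.
with open("test.csv", newline="") as f:
    parsed = list(csv.reader(f))
assert len(parsed) == 10 and all(len(row) == 7 for row in parsed)
```

A malformed row (for example, a missing comma) would fail the field-count check here rather than surfacing later as a bad record during the CarbonData load.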
Creating a CarbonData Table
After connecting Spark Beeline to JDBCServer, create a CarbonData table to load data and perform query operations. Run the following command to create a simple table:
create table x1 (imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata TBLPROPERTIES ('SORT_COLUMNS'='imei,mac');
The command output is as follows:
+---------+
| Result  |
+---------+
+---------+
No rows selected (1.093 seconds)
Loading Data to a CarbonData Table
After you have created a CarbonData table, you can load the data from CSV to the created table.
Run the following command with the required parameters to load data from the CSV file. The column names of the CarbonData table must match the column names listed in the FILEHEADER option.
LOAD DATA INPATH 'hdfs://hacluster/data/test.csv' INTO TABLE x1 OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'FILEHEADER'='imei,deviceinformationid,mac,productdate,updatetime,gamepointid,contractnumber');
test.csv is the CSV file prepared in Preparing a CSV File and x1 is the table name.
The command output is as follows:
+------------+
| Segment ID |
+------------+
| 0          |
+------------+
No rows selected (3.039 seconds)
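Conceptually, the FILEHEADER option pairs each CSV field with a table column by name, and CarbonData converts each field to the column's declared type. The sketch below is an illustration of that mapping in plain Python (the real conversion happens inside CarbonData during the load); the converters mirror the types in the CREATE TABLE statement.

```python
import csv
from datetime import datetime
from io import StringIO

# Column order as given in the FILEHEADER load option.
FILEHEADER = "imei,deviceinformationid,mac,productdate,updatetime,gamepointid,contractnumber"

# One converter per column, matching the CREATE TABLE types:
# string, int, string, timestamp, timestamp, double, double.
def to_ts(s):
    return datetime.strptime(s.strip(), "%Y-%m-%d %H:%M:%S")

CONVERTERS = {
    "imei": str,
    "deviceinformationid": int,
    "mac": str,
    "productdate": to_ts,
    "updatetime": to_ts,
    "gamepointid": float,
    "contractnumber": float,
}

def parse_row(line):
    """Pair CSV fields with FILEHEADER column names and convert types."""
    fields = next(csv.reader(StringIO(line)))
    return {col: CONVERTERS[col](val)
            for col, val in zip(FILEHEADER.split(","), fields)}

record = parse_row(
    "13418592122,1001,MAC address,2017-10-23 15:32:30,2017-10-24 15:32:30,62.50,74.56"
)
```

A row whose field count or types do not line up with the header would raise here, which is the same class of mismatch the loader reports as a bad record.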
Querying Data from a CarbonData Table
After a CarbonData table is created and the data is loaded, you can perform query operations as required. Some query operations are provided as examples.
- Obtaining the number of records
Run the following command to obtain the number of records in the CarbonData table:
select count(*) from x1;
- Querying with the group by condition
Run the following command to obtain the distinct deviceinformationid values in the CarbonData table:
select deviceinformationid, count(distinct deviceinformationid) from x1 group by deviceinformationid;
- Querying with a filter condition
Run the following command to obtain specific deviceinformationid records:
select * from x1 where deviceinformationid='1010';
If the query result has Chinese or other non-English characters, the columns in the query result may not be aligned. This is because characters of different languages occupy different widths.
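The three example queries can be cross-checked against the sample data without a cluster. This plain-Python sketch (sample rows hardcoded for illustration, not a CarbonData API) reproduces the expected results of the count, group-by, and filter queries:

```python
# (imei, deviceinformationid) pairs from the ten sample rows.
rows = [
    ("13418592122", 1001), ("13418592123", 1002), ("13418592124", 1003),
    ("13418592125", 1004), ("13418592126", 1005), ("13418592127", 1006),
    ("13418592128", 1007), ("13418592129", 1008), ("13418592130", 1009),
    ("13418592131", 1010),
]

# select count(*) from x1;  -- total number of records
total = len(rows)

# select deviceinformationid, count(distinct deviceinformationid)
# from x1 group by deviceinformationid;
# Each group contains a single distinct id, so every count is 1.
groups = {dev: len({d for _, d in rows if d == dev}) for _, dev in rows}

# select * from x1 where deviceinformationid='1010';
matches = [r for r in rows if r[1] == 1010]
```

With this data set, total is 10, every group-by count is 1, and the filter returns the single row with imei 13418592131.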
Using CarbonData on Spark Shell
To use CarbonData in the spark-shell, create a CarbonData table, load data to that table, and query the data as follows:
spark.sql("CREATE TABLE x2(imei string, deviceInformationId int, mac string, productdate timestamp, updatetime timestamp, gamePointId double, contractNumber double) STORED AS carbondata")
spark.sql("LOAD DATA inpath 'hdfs://hacluster/data/x1_without_header.csv' into table x2 options('DELIMITER'=',', 'QUOTECHAR'='\"', 'FILEHEADER'='imei,deviceinformationid,mac,productdate,updatetime,gamepointid,contractnumber')")
spark.sql("SELECT * FROM x2").show()