Loading HBase Data in Batches and Generating Local Secondary Indexes
Scenarios
HBase provides the ImportTsv and LoadIncrementalHFiles tools to load user data in batches. HBase also provides the HIndexImportTsv tool to load both user data and index data in batches. HIndexImportTsv inherits all functions of the HBase batch data loading tool ImportTsv. If the target table does not exist before HIndexImportTsv is executed, the table and its index are created automatically, and the index data is generated together with the user data.
Prerequisites
- The client has been installed. For details, see Installing a Client.
- You have created a component service user with the required permissions. A machine-machine user needs to download the keytab file, and a human-machine user needs to change the password upon first login.
Using HIndexImportTsv to Generate HBase Local Secondary Index Data in Batches
- Log in to the node where the client is installed as the client installation user.
- Run the following commands to configure environment variables and authenticate the user:
cd Client installation directory
source bigdata_env
kinit Component service user (Skip this step for clusters with Kerberos authentication disabled.)
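For example, assuming the client is installed in /opt/client and the component service user is named hbaseuser (both names are placeholders; replace them with the values used in your environment), the commands would look similar to the following:
cd /opt/client
source bigdata_env
kinit hbaseuser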
- Run the following commands to import data to HDFS:
hdfs dfs -put <local_data_file> <inputdir>
For example, define the data file data.txt as follows:
12005000201,Zhang San,Male,19,City a, Province a
12005000202,Li Wanting,Female,23,City b, Province b
12005000203,Wang Ming,Male,26,City c, Province c
12005000204,Li Gang,Male,18,City d, Province d
12005000205,Zhao Enru,Female,21,City e, Province e
12005000206,Chen Long,Male,32,City f, Province f
12005000207,Zhou Wei,Female,29,City g, Province g
12005000208,Yang Yiwen,Female,30,City h, Province h
12005000209,Xu Bing,Male,26,City i, Province i
12005000210,Xiao Kai,Male,25,City j, Province j
Run the following commands:
hdfs dfs -mkdir /datadirImport
hdfs dfs -put data.txt /datadirImport
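Optionally, you can confirm that the file has been uploaded correctly, for example:
hdfs dfs -ls /datadirImport
hdfs dfs -cat /datadirImport/data.txt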
- Run the following command to create the bulkTable table:
hbase shell
create 'bulkTable', {NAME => 'info',COMPRESSION => 'SNAPPY', DATA_BLOCK_ENCODING => 'FAST_DIFF'},{NAME=>'address'}
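Optionally, run the following command in the HBase shell to confirm that the table and its column families were created as expected:
describe 'bulkTable'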
Run the quit command to exit the HBase shell.
- Run the following command to generate HFiles (StoreFiles):
hbase org.apache.hadoop.hbase.hindex.mapreduce.HIndexImportTsv -Dimporttsv.separator=<separator> -Dimporttsv.bulk.output=</path/for/output> -Dindexspecs.to.add=<indexspecs> -Dimporttsv.columns=<columns> <tablename> <inputdir>
- -Dimporttsv.separator: indicates the field separator, for example, -Dimporttsv.separator=','.
- -Dimporttsv.bulk.output=</path/for/output>: indicates the output path of the execution result. The specified path must not already exist.
- -Dimporttsv.columns=<columns>: indicates the mapping between the columns in the data file and the columns in the table, for example, -Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:gender,info:age,address:city,address:province.
- <tablename>: indicates the name of the target table.
- <inputdir>: indicates the directory where data is loaded in batches.
- -Dindexspecs.to.add=<indexspecs>: indicates the mapping between an index name and its columns, for example, -Dindexspecs.to.add='index_bulk=>info:[age->String]'. The structure is as follows:
indexNameN=>familyN:[columnQualifierN->columnQualifierDataType],[columnQualifierM->columnQualifierDataType];familyM:[columnQualifierO->columnQualifierDataType]#indexNameM=>familyM:[columnQualifierO->columnQualifierDataType]
- Column qualifiers are separated by commas (,). For example:
index1 => f1:[c1-> String],[c2-> String]
- Column families are separated by semicolons (;). For example:
index1 => f1:[c1-> String],[c2-> String];f2:[c3-> Long]
- Multiple indexes are separated by the number sign (#). For example:
index1 => f1:[c1-> String],[c2-> String];f2:[c3-> Long]#index2 => f2:[c3-> Long]
- Columns support the following data types:
STRING, INTEGER, FLOAT, LONG, DOUBLE, SHORT, BYTE, CHAR
Data types can also be specified in lowercase.
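As an illustration of the structure only (the index name index_name_city is an assumed example and not part of this procedure), an index covering columns from both column families of the sample bulkTable table could be specified as follows:
-Dindexspecs.to.add='index_name_city=>info:[name->String];address:[city->String]'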
For example, run the following command:
hbase org.apache.hadoop.hbase.hindex.mapreduce.HIndexImportTsv -Dimporttsv.separator=',' -Dimporttsv.bulk.output=/dataOutput -Dindexspecs.to.add='index_bulk=>info:[age->String]' -Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:gender,info:age,address:city,address:province bulkTable /datadirImport/data.txt
Command output:
[root@shap000000406 opt]# hbase org.apache.hadoop.hbase.hindex.mapreduce.HIndexImportTsv -Dimporttsv.separator=',' -Dimporttsv.bulk.output=/dataOutput -Dindexspecs.to.add='index_bulk=>info:[age->String]' -Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:gender,info:age,address:city,address:province bulkTable /datadirImport/data.txt
2018-05-08 21:29:16,059 INFO [main] mapreduce.HFileOutputFormat2: Incremental table bulkTable output configured.
2018-05-08 21:29:16,069 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2018-05-08 21:29:16,069 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x80007c2cb4fd5b4d
2018-05-08 21:29:16,072 INFO [main] zookeeper.ZooKeeper: Session: 0x80007c2cb4fd5b4d closed
2018-05-08 21:29:16,072 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x80007c2cb4fd5b4d
2018-05-08 21:29:16,379 INFO [main] client.ConfiguredRMFailoverProxyProvider: Failing over to 147
2018-05-08 21:29:17,328 INFO [main] input.FileInputFormat: Total input files to process : 1
2018-05-08 21:29:17,413 INFO [main] mapreduce.JobSubmitter: number of splits:1
2018-05-08 21:29:17,430 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2018-05-08 21:29:17,687 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1525338489458_0002
2018-05-08 21:29:18,100 INFO [main] impl.YarnClientImpl: Submitted application application_1525338489458_0002
2018-05-08 21:29:18,136 INFO [main] mapreduce.Job: The url to track the job: http://shap000000407:8088/proxy/application_1525338489458_0002/
2018-05-08 21:29:18,136 INFO [main] mapreduce.Job: Running job: job_1525338489458_0002
2018-05-08 21:29:28,248 INFO [main] mapreduce.Job: Job job_1525338489458_0002 running in uber mode : false
2018-05-08 21:29:28,249 INFO [main] mapreduce.Job: map 0% reduce 0%
2018-05-08 21:29:38,344 INFO [main] mapreduce.Job: map 100% reduce 0%
2018-05-08 21:29:51,421 INFO [main] mapreduce.Job: map 100% reduce 100%
2018-05-08 21:29:51,428 INFO [main] mapreduce.Job: Job job_1525338489458_0002 completed successfully
2018-05-08 21:29:51,523 INFO [main] mapreduce.Job: Counters: 50
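Before loading the generated files, you can optionally check that the HFiles were written to the output path, for example:
hdfs dfs -ls -R /dataOutput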
- Run the following command to import the generated HFile to HBase:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles </path/for/output> <tablename>
For example, run the following command:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /dataOutput bulkTable
Command output:
[root@shap000000406 opt]# hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /dataOutput bulkTable
2018-05-08 21:30:01,398 WARN [main] mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://hacluster/dataOutput/_SUCCESS
2018-05-08 21:30:02,006 INFO [LoadIncrementalHFiles-0] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2018-05-08 21:30:02,006 INFO [LoadIncrementalHFiles-2] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2018-05-08 21:30:02,006 INFO [LoadIncrementalHFiles-1] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2018-05-08 21:30:02,085 INFO [LoadIncrementalHFiles-2] compress.CodecPool: Got brand-new decompressor [.snappy]
2018-05-08 21:30:02,120 INFO [LoadIncrementalHFiles-0] mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://hacluster/dataOutput/address/042426c252f74e859858c7877b95e510 first=12005000201 last=12005000210
2018-05-08 21:30:02,120 INFO [LoadIncrementalHFiles-2] mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://hacluster/dataOutput/info/f3995920ae0247a88182f637aa031c49 first=12005000201 last=12005000210
2018-05-08 21:30:02,128 INFO [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://hacluster/dataOutput/d/c53b252248af42779f29442ab84f86b8 first=\x00index_bulk\x00\x00\x00\x00\x00\x00\x00\x0018\x00\x0012005000204 last=\x00index_bulk\x00\x00\x00\x00\x00\x00\x00\x0032\x00\x0012005000206
2018-05-08 21:30:02,231 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2018-05-08 21:30:02,231 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x81007c2cf0f55cc5
2018-05-08 21:30:02,235 INFO [main] zookeeper.ZooKeeper: Session: 0x81007c2cf0f55cc5 closed
2018-05-08 21:30:02,235 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x81007c2cf0f55cc5
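Optionally, verify that both the user data and the index data have been loaded. For example, in the HBase shell, scan the table and run a filtered scan on the indexed column. The filter below is only one illustrative way to query the info:age column; whether such a query is actually served by the index_bulk index depends on the HIndex configuration of your cluster.
hbase shell
scan 'bulkTable'
scan 'bulkTable', {FILTER => "SingleColumnValueFilter('info', 'age', =, 'binary:26')"}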