
Updating HBase Data in Batches Using BulkLoad

Scenario

Rows need to be updated in batches based on the row key naming rule, row key scope, field name, and field value.

Procedure

Run the following command to update the rows from row_start to row_stop and direct the output to /user/output/:

hbase com.huawei.hadoop.hbase.tools.bulkload.UpdateData -Dupdate.rowkey.start="row_start" -Dupdate.rowkey.stop="row_stop" -Dupdate.hfile.output=/user/output/ -Dupdate.qualifier=f1:c1,f2 -Dupdate.qualifier.new.value=0,a 'table1'
  • -Dupdate.rowkey.start="row_start": indicates that the start row key is row_start.
  • -Dupdate.rowkey.stop="row_stop": indicates that the end row key is row_stop.
  • -Dupdate.hfile.output=/user/output/: indicates that the output results are directed to /user/output/.
  • -Dupdate.qualifier=f1:c1,f2: indicates the columns to be updated, in this example column c1 in column family f1 and column family f2.
  • -Dupdate.qualifier.new.value=0,a: indicates the new values for the specified columns, in this example 0 for f1:c1 and a for f2.

After transparent encryption is configured for HBase, see 7 for precautions on batch updating.
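Before loading the generated HFiles, you can optionally check the current values of the columns that will be updated. The following HBase shell scan is a minimal sketch that assumes the row key range, columns, and table name from the example above:

scan 'table1', {STARTROW => 'row_start', STOPROW => 'row_stop', COLUMNS => ['f1:c1', 'f2']}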

Run the following command to load the generated HFiles into the table:

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles <path/for/output> <tablename>
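For example, assuming the HFiles were written directly under /user/output/ and the target table is table1, as in the earlier example, the command would look like this:

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/output/ table1

After the load completes, rerunning the scan shown above should show the updated values (0 for f1:c1 and a for f2) for the rows between row_start and row_stop.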

Precautions

  1. During a batch update, only the field values of rows that meet the specified conditions are updated.
  2. Batch updates cannot be performed on fields for which indexes have been created.
  3. If you do not specify an output directory for the result, the default is /tmp/updatedata/<table name>.