
Using the BulkLoad API

Overview

You can use HBaseContext to operate on HBase in Spark applications: construct the data to be inserted, including its rowkeys, as RDDs, and write the RDDs to HFiles through the BulkLoad API of HBaseContext. The generated HFiles are then imported into the HBase table with the following command, which is not described further in this section.

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles {hfilePath} {tableName}
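For example, with the HFile output path and table name used throughout this section, the import command would be:

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfile bulkload-table-test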

Preparing Data

  1. Run the hbase shell command on the client to go to the HBase command line.
  2. Create an HBase table.

    create 'bulkload-table-test','f1','f2'
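
After the sample application has generated the HFiles and the LoadIncrementalHFiles command shown above has imported them, you can verify the result from the same HBase shell, for example:

    scan 'bulkload-table-test'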

Development Guidelines

  1. Construct the data to be imported as an RDD. In the samples below, each record is a comma-separated string in the format rowkey,family,qualifier,value (for example, 1,f1,b,1).
  2. Operate on HBase through HBaseContext and write the RDD to HFiles using the BulkLoad API of HBaseContext.

Packaging the Project

  • Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
  • Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.

    To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services. (If the HBase service is uninstalled, change the value back to false.) Also set spark.inputFormat.cache.enabled to false, as shown below.
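
For reference, the two settings above take the following form in spark-defaults.conf (one key-value pair per line, key and value separated by whitespace):

    spark.yarn.security.credentials.hbase.enabled true
    spark.inputFormat.cache.enabled false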

Submitting Commands

Assume that the JAR package is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name is prefixed with Java (for example, JavaHBaseBulkLoadExample); for details, see the sample code.

  • yarn-client mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkLoadExample SparkOnHbaseJavaExample-1.0.jar /tmp/hfile bulkload-table-test

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkLoadExample.py /tmp/hfile bulkload-table-test

  • yarn-cluster mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkLoadExample SparkOnHbaseJavaExample-1.0.jar /tmp/hfile bulkload-table-test

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkLoadExample.py /tmp/hfile bulkload-table-test

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the JavaHBaseBulkLoadExample file in SparkOnHbaseJavaExample.

 public static void main(String[] args) throws IOException {
    if (args.length < 2) {
      System.out.println("JavaHBaseBulkLoadExample {outputPath} {tableName}");
      return;
    }
    String outputPath = args[0];
    String tableName = args[1];
    String columnFamily1 = "f1";
    String columnFamily2 = "f2";
    SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseBulkLoadExample " + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
      // Each record has the format "rowkey,family,qualifier,value".
      List<String> list = new ArrayList<String>();
      // row1
      list.add("1," + columnFamily1 + ",b,1");
      // row3
      list.add("3," + columnFamily1 + ",a,2");
      list.add("3," + columnFamily1 + ",b,1");
      list.add("3," + columnFamily2 + ",a,1");
      // row2
      list.add("2," + columnFamily2 + ",a,3");
      list.add("2," + columnFamily2 + ",b,3");
      JavaRDD<String> rdd = jsc.parallelize(list);
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      // Write the RDD to HFiles under outputPath through the BulkLoad API.
      hbaseContext.bulkLoad(rdd, TableName.valueOf(tableName), new BulkLoadFunction(), outputPath,
          new HashMap<byte[], FamilyHFileWriteOptions>(), false, HConstants.DEFAULT_MAX_FILE_SIZE);
    } finally {
      jsc.stop();
    }
  }
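
The BulkLoadFunction used above ships with the SparkOnHbaseJavaExample project and is not shown in this snippet. The following is a minimal hypothetical sketch of what such a mapping function could look like, assuming the standard JavaHBaseContext.bulkLoad signature (Function<String, Pair<KeyFamilyQualifier, byte[]>>); it mirrors the inline function in the Scala sample below by parsing one "rowkey,family,qualifier,value" record into the pair that the BulkLoad API sorts and writes:

 // Hypothetical sketch of BulkLoadFunction; see the SparkOnHbaseJavaExample
 // project for the actual implementation.
 import org.apache.hadoop.hbase.spark.KeyFamilyQualifier;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.spark.api.java.function.Function;

 public class BulkLoadFunction implements Function<String, Pair<KeyFamilyQualifier, byte[]>> {
   @Override
   public Pair<KeyFamilyQualifier, byte[]> call(String record) throws Exception {
     // Records are assumed to be well formed: "rowkey,family,qualifier,value".
     String[] parts = record.split(",");
     KeyFamilyQualifier kfq = new KeyFamilyQualifier(
         Bytes.toBytes(parts[0]),   // rowkey
         Bytes.toBytes(parts[1]),   // column family
         Bytes.toBytes(parts[2]));  // qualifier
     return new Pair<>(kfq, Bytes.toBytes(parts[3]));  // cell value
   }
 }

KeyFamilyQualifier bundles the rowkey, column family, and qualifier so that the BulkLoad API can sort the cells into the total order that HFiles require before writing them.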

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseBulkLoadExample file in SparkOnHbaseScalaExample.

  def main(args: Array[String]) {
    if (args.length < 2) {
      println("HBaseBulkLoadExample {outputPath} {tableName}")
      return
    }
    LoginUtil.loginWithUserKeytab()
    val Array(outputPath, tableName) = args
    val columnFamily1 = "f1"
    val columnFamily2 = "f2"
    val sparkConf = new SparkConf().setAppName("HBaseBulkLoadExample " + tableName)
    val sc = new SparkContext(sparkConf)
    try {
      // Each record has the format "rowkey,family,qualifier,value".
      val arr = Array("1," + columnFamily1 + ",b,1",
                      "2," + columnFamily1 + ",a,2",
                      "3," + columnFamily1 + ",b,1",
                      "3," + columnFamily2 + ",a,1",
                      "4," + columnFamily2 + ",a,3",
                      "5," + columnFamily2 + ",b,3")
      val rdd = sc.parallelize(arr)
      val config = HBaseConfiguration.create
      val hbaseContext = new HBaseContext(sc, config)
      // Map each record to a (KeyFamilyQualifier, value) pair and write it to
      // HFiles under outputPath through the BulkLoad API.
      hbaseContext.bulkLoad[String](rdd,
        TableName.valueOf(tableName),
        (putRecord) => {
          if (putRecord.length > 0) {
            val strArray = putRecord.split(",")
            val kfq = new KeyFamilyQualifier(Bytes.toBytes(strArray(0)), Bytes.toBytes(strArray(1)), Bytes.toBytes(strArray(2)))
            List((kfq, Bytes.toBytes(strArray(3)))).iterator
          } else {
            // Skip empty records; returning null here would fail inside flatMap.
            Iterator.empty
          }
        },
        outputPath)
    } finally {
      sc.stop()
    }
  }

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseBulkLoadPythonExample file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
PySpark does not provide HBase APIs. In this example, Python is used to invoke Java code to implement required operations.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseBulkLoadExample")\
        .getOrCreate()
# Import the required class to sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.HBaseBulkLoadPythonExample')
# Create a class instance, invoke the method, and pass the sc._jsc parameter.
spark._jvm.HBaseBulkLoadPythonExample().hbaseBulkLoad(spark._jsc, sys.argv[1], sys.argv[2])
# Stop SparkSession.
spark.stop()