
Writing Data to HBase Tables in Batches Using Spark Streaming

Overview

You can use HBaseContext to perform operations on HBase in Spark applications and write streaming data to HBase tables using the streamBulkPut API.

Preparing Data

  1. Run the hbase shell command on the client to go to the HBase command line.
  2. Create an HBase table.

    create 'streamingTable','cf1'

  3. In another client session, run the following Linux command to open a port for receiving data. The command may vary depending on the operating system of the server; on SLES, use netcat -lk 9999 instead.

    nc -lk 9999

    Netcat must be installed on the server where the client is located before you can open a port for receiving data.

Development Guidelines

  1. Use Spark Streaming to continuously read data from a specific port.
  2. Write the received DStream to the HBase table through the streamBulkPut API, as sketched below.
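
A minimal Java sketch of this flow, where jsc, hbaseContext, host, port, tableName, and putFunction stand in for values defined in the complete samples later on this page:

    // Create a streaming context that produces a batch every second.
    JavaStreamingContext jssc = new JavaStreamingContext(jsc, new Duration(1000));
    // Each line received on the socket becomes one record of the DStream.
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream(host, port);
    // streamBulkPut converts each record into a Put and writes it to the table.
    hbaseContext.streamBulkPut(lines, TableName.valueOf(tableName), putFunction);
    jssc.start();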

Packaging the Project

  • Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
  • Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.

    To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true (the default is false) in the spark-defaults.conf file on the Spark client. Changing this value does not affect existing services. (If you uninstall the HBase service, change the value back to false.) Also set spark.inputFormat.cache.enabled to false.
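
    For example, the two properties would appear as follows in spark-defaults.conf (keys and values are separated by whitespace; shown only as an illustration):

        spark.yarn.security.credentials.hbase.enabled true
        spark.inputFormat.cache.enabled false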

Submitting Commands

Assume that the JAR package is named SparkOnHbaseJavaExample-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name starts with Java; see the sample code for details.

  • yarn-client mode:

    Java/Scala version. (The class name must match the one in your code; the following is only an example.) ${ip} must be the IP address of the host where the nc -lk 9999 command is executed.

    bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.streaming.JavaHBaseStreamingBulkPutExample SparkOnHbaseJavaExample-1.0.jar ${ip} 9999 streamingTable cf1

    Python version. (The file name must match the one in your project; the following is only an example.)

    bin/spark-submit --master yarn --jars SparkOnHbaseJavaExample-1.0.jar HBaseStreamingBulkPutExample.py ${ip} 9999 streamingTable cf1

  • yarn-cluster mode:

    Java/Scala version. (The class name must match the one in your code; the following is only an example.) ${ip} must be the IP address of the host where the nc -lk 9999 command is executed.

    bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.streaming.JavaHBaseStreamingBulkPutExample SparkOnHbaseJavaExample-1.0.jar ${ip} 9999 streamingTable cf1

    Python version. (The file name must match the one in your project; the following is only an example.)

    bin/spark-submit --master yarn --deploy-mode cluster --jars SparkOnHbaseJavaExample-1.0.jar HBaseStreamingBulkPutExample.py ${ip} 9999 streamingTable cf1
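
After the job is submitted, type a few lines in the session where the nc -lk 9999 command is running, for example:

    hello
    world

Then, in the HBase command line opened in Preparing Data, scan the table to confirm that the records have been written:

    scan 'streamingTable'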

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the JavaHBaseStreamingBulkPutExample file in SparkOnHbaseJavaExample.

The awaitTerminationOrTimeout() method sets the maximum time (in milliseconds) that the task waits before terminating. You are advised to set this value based on the expected execution time of the task.

  public static void main(String[] args) throws IOException {
    if (args.length < 4) {
      System.out.println("JavaHBaseStreamingBulkPutExample " +
              "{host} {port} {tableName} {columnFamily}");
      return;
    }
    String host = args[0];
    String port = args[1];
    String tableName = args[2];
    String columnFamily = args[3];
    SparkConf sparkConf =
            new SparkConf().setAppName("JavaHBaseStreamingBulkPutExample " +
                    host + ":" + port + ":" + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
      // Create a streaming context with a 1-second batch interval.
      JavaStreamingContext jssc =
              new JavaStreamingContext(jsc, new Duration(1000));
      // Receive lines from the socket opened with nc -lk.
      JavaReceiverInputDStream<String> javaDstream =
              jssc.socketTextStream(host, Integer.parseInt(port));
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      // Convert each received line into a Put and write it to the HBase table.
      hbaseContext.streamBulkPut(javaDstream,
              TableName.valueOf(tableName),
              new PutFunction(columnFamily));
      jssc.start();
      // Wait at most 60 seconds for the streaming job to finish.
      jssc.awaitTerminationOrTimeout(60000);
      jssc.stop(false, true);
    } catch (InterruptedException e) {
      e.printStackTrace();
    } finally {
      jsc.stop();
    }
  }
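
The PutFunction referenced above is defined in the sample project. As a minimal sketch, assuming it mirrors the Scala sample (the received line becomes the row key and a fixed foo:bar cell is written), it may look like the following; the actual implementation in SparkOnHbaseJavaExample may differ:

  // Hypothetical sketch only; see the sample project for the real PutFunction.
  public static class PutFunction implements Function<String, Put> {
    private final String columnFamily;

    public PutFunction(String columnFamily) {
      this.columnFamily = columnFamily;
    }

    public Put call(String line) throws Exception {
      if (line.length() == 0) {
        // Returning null skips empty records, as in the Scala sample.
        return null;
      }
      Put put = new Put(Bytes.toBytes(line));
      put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes("foo"), Bytes.toBytes("bar"));
      return put;
    }
  }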

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseStreamingBulkPutExample file in SparkOnHbaseScalaExample.

The awaitTerminationOrTimeout() method sets the maximum time (in milliseconds) that the task waits before terminating. You are advised to set this value based on the expected execution time of the task.

  def main(args: Array[String]): Unit = {
    val host = args(0)
    val port = args(1)
    val tableName = args(2)
    val columnFamily = args(3)
    val conf = new SparkConf()
    conf.setAppName("HBase Streaming Bulk Put Example")
    val sc = new SparkContext(conf)
    try {
      val config = HBaseConfiguration.create()
      val hbaseContext = new HBaseContext(sc, config)
      // Create a streaming context with a 1-second batch interval.
      val ssc = new StreamingContext(sc, Seconds(1))
      // Receive lines from the socket opened with nc -lk.
      val lines = ssc.socketTextStream(host, port.toInt)
      // Convert each non-empty line into a Put; the line itself is used as the row key.
      hbaseContext.streamBulkPut[String](lines,
        TableName.valueOf(tableName),
        (putRecord) => {
          if (putRecord.length() > 0) {
            val put = new Put(Bytes.toBytes(putRecord))
            put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes("foo"), Bytes.toBytes("bar"))
            put
          } else {
            null
          }
        })
      ssc.start()
      // Wait at most 60 seconds for the streaming job to finish.
      ssc.awaitTerminationOrTimeout(60000)
      ssc.stop(stopSparkContext = false)
    } finally {
      sc.stop()
    }
  }

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseStreamingBulkPutExample file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
PySpark does not provide HBase APIs. In this example, Python is used to invoke Java code to implement required operations.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseStreamingBulkPutExample")\
        .getOrCreate()
# Import the required class to sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.streaming.JavaHBaseStreamingBulkPutExample')
# Create a class instance, invoke the method, and transfer the sc._jsc parameter.
spark._jvm.JavaHBaseStreamingBulkPutExample().execute(spark._jsc, sys.argv)
# Stop SparkSession.
spark.stop()