Updated on 2024-10-23 GMT+08:00

Using the BulkDelete Interface

Scenario

You can use HBaseContext to operate HBase in Spark applications: construct an RDD from the rowkeys of the data to be deleted, and then delete the data corresponding to those rowkeys from HBase tables through the BulkDelete interface of HBaseContext.

Data Planning

Use the HBase tables and the table data created in Using the BulkPut Interface.

Development Guideline

  1. Construct an RDD containing the rowkeys to be deleted.
  2. Operate HBase in HBaseContext mode and delete the data corresponding to those rowkeys from HBase tables through the BulkDelete interface of HBaseContext, as illustrated in the sketch after this list.
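
The two steps map directly onto the HBaseContext API. The following condensed Scala sketch illustrates them; the object name BulkDeleteSketch is hypothetical, the import paths assume the Apache hbase-spark module that the sample project is based on, and the table name bulktable and batch size 4 are taken from the samples below.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.Delete
    import org.apache.hadoop.hbase.spark.HBaseContext
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.{SparkConf, SparkContext}

    object BulkDeleteSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("BulkDeleteSketch"))
        try {
          // Step 1: build an RDD of the rowkeys to delete.
          val rowKeys = sc.parallelize(Seq(Bytes.toBytes("1"), Bytes.toBytes("2")))
          // Step 2: let HBaseContext map each rowkey to a Delete and
          // send the Deletes to the table in batches of 4.
          val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())
          hbaseContext.bulkDelete[Array[Byte]](rowKeys,
            TableName.valueOf("bulktable"),
            rowKey => new Delete(rowKey),
            4)
        } finally {
          sc.stop()
        }
      }
    }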

Packaging the Project

  • Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Writing and Running the Spark Program in the Linux Environment.
  • Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.

    To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services. (If the HBase service is uninstalled, change the value back to false.) In addition, set spark.inputFormat.cache.enabled to false.
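
    For reference, the two settings in spark-defaults.conf take the following form (spark-defaults.conf uses whitespace-separated key-value pairs):

    spark.yarn.security.credentials.hbase.enabled true
    spark.inputFormat.cache.enabled false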

Submitting Commands

Assume that the JAR package is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java interface, the class name is prefixed with Java (for example, JavaHBaseBulkDeleteExample). For details, see the sample code.

  • yarn-client mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkDeleteExample SparkOnHbaseJavaExample-1.0.jar bulktable

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkDeleteExample.py bulktable

  • yarn-cluster mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkDeleteExample SparkOnHbaseJavaExample-1.0.jar bulktable

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkDeleteExample.py bulktable
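
After a job completes, you can confirm the deletions from the HBase shell. This assumes the table name bulktable used in the commands above and the rowkeys 1 to 5 written by the BulkPut example:

    hbase shell
    scan 'bulktable'
    # Rowkeys 1 to 5 should no longer appear in the scan output.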

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseBulkDeleteExample file in SparkOnHbaseJavaExample.

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Delete;
  import org.apache.hadoop.hbase.spark.JavaHBaseContext;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;
  import org.apache.spark.api.java.function.Function;

  public static void main(String[] args) throws IOException {
    if (args.length < 1) {
      System.out.println("JavaHBaseBulkDeleteExample {tableName}");
      return;
    }
    String tableName = args[0];
    SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseBulkDeleteExample " + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
      // Rowkeys of the records to delete.
      List<byte[]> list = new ArrayList<byte[]>(5);
      list.add(Bytes.toBytes("1"));
      list.add(Bytes.toBytes("2"));
      list.add(Bytes.toBytes("3"));
      list.add(Bytes.toBytes("4"));
      list.add(Bytes.toBytes("5"));
      JavaRDD<byte[]> rdd = jsc.parallelize(list);
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      // Map each rowkey to a Delete and send the Deletes in batches of 4.
      hbaseContext.bulkDelete(rdd,
              TableName.valueOf(tableName), new DeleteFunction(), 4);
      System.out.println("Bulk delete completed successfully!");
    } finally {
      jsc.stop();
    }
  }

  // Builds one HBase Delete per rowkey in the RDD.
  public static class DeleteFunction implements Function<byte[], Delete> {
    private static final long serialVersionUID = 1L;
    public Delete call(byte[] v) throws Exception {
      return new Delete(v);
    }
  }

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseBulkDeleteExample file in SparkOnHbaseScalaExample.

  import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
  import org.apache.hadoop.hbase.client.Delete
  import org.apache.hadoop.hbase.spark.HBaseContext
  import org.apache.hadoop.hbase.util.Bytes
  import org.apache.spark.{SparkConf, SparkContext}

  def main(args: Array[String]) {
    if (args.length < 1) {
      println("HBaseBulkDeleteExample {tableName} missing an argument")
      return
    }
    val tableName = args(0)
    val sparkConf = new SparkConf().setAppName("HBaseBulkDeleteExample " + tableName)
    val sc = new SparkContext(sparkConf)
    try {
      // RDD[Array[Byte]] holding the rowkeys to delete.
      val rdd = sc.parallelize(Array(
        Bytes.toBytes("1"),
        Bytes.toBytes("2"),
        Bytes.toBytes("3"),
        Bytes.toBytes("4"),
        Bytes.toBytes("5")
      ))
      val conf = HBaseConfiguration.create()
      val hbaseContext = new HBaseContext(sc, conf)
      // Build one Delete per rowkey and send the Deletes in batches of 4.
      hbaseContext.bulkDelete[Array[Byte]](rdd,
        TableName.valueOf(tableName),
        putRecord => new Delete(putRecord),
        4)
    } finally {
      sc.stop()
    }
  }

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseBulkDeleteExample file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
PySpark does not provide HBase-related APIs. In this example, Python is used to invoke Java code to implement the required operations.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseBulkDeleteExample")\
        .getOrCreate()
# Import the required class into sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkDeleteExample')
# Create a class instance, invoke the method, and pass the sc._jsc parameter.
spark._jvm.JavaHBaseBulkDeleteExample().execute(spark._jsc, sys.argv)
# Stop the SparkSession instance.
spark.stop()