Updated on 2024-08-10 GMT+08:00

Distributedly Scanning HBase Tables

Overview

You can use HBaseContext to perform operations on HBase in Spark applications and use HBase RDDs to scan HBase tables based on specific rules.

Preparing Data

Use the HBase tables created in Operating Data in Avro Format.

Development Guidelines

  1. Set the scan rules, for example, setCaching (a short sketch follows this list).
  2. Use the specified rules to scan the HBase table.
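
The following is a minimal Java sketch of setting scan rules before handing the Scan object to HBaseContext. The class name ScanRuleSketch, the setBatch and addFamily calls, and the column family name info are illustrative assumptions; only setCaching appears in the sample code below.

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanRuleSketch {
        public static Scan buildScan() {
            Scan scan = new Scan();
            scan.setCaching(100);                   // rows fetched per RPC round trip
            scan.setBatch(10);                      // cap on columns returned per call, for wide rows
            scan.addFamily(Bytes.toBytes("info")); // hypothetical column family to restrict the scan
            return scan;
        }
    }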

Packaging the Project

To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services. (If the HBase service is uninstalled, change the value back to false.) In addition, set spark.inputFormat.cache.enabled to false.
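
For reference, the corresponding spark-defaults.conf entries would look as follows (a minimal sketch; verify the keys against your client configuration):

    spark.yarn.security.credentials.hbase.enabled true
    spark.inputFormat.cache.enabled false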

Submitting Commands

Assume that the JAR package is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name is prefixed with Java (for example, JavaHBaseDistributedScanExample); for details, see the sample code.

  • yarn-client mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseDistributedScanExample SparkOnHbaseJavaExample-1.0.jar ExampleAvrotable

    Python version (the file name must match the actual file; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample-1.0.jar HBaseDistributedScanExample.py ExampleAvrotable

  • yarn-cluster mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseDistributedScanExample SparkOnHbaseJavaExample-1.0.jar ExampleAvrotable

    Python version (the file name must match the actual file; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --jars SparkOnHbaseJavaExample-1.0.jar HBaseDistributedScanExample.py ExampleAvrotable

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the JavaHBaseDistributedScanExample file in SparkOnHbaseJavaExample.

 public static void main(String[] args) throws IOException {
    if (args.length < 1) {
      System.out.println("JavaHBaseDistributedScan {tableName}");
      return;
    }
    String tableName = args[0];
    SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseDistributedScan " + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      // Set the scan rules: fetch 100 rows per RPC round trip.
      Scan scan = new Scan();
      scan.setCaching(100);
      // Scan the table as a distributed RDD using the rules above.
      JavaRDD<Tuple2<ImmutableBytesWritable, Result>> javaRdd =
              hbaseContext.hbaseRDD(TableName.valueOf(tableName), scan);
      List<String> results = javaRdd.map(new ScanConvertFunction()).collect();
      System.out.println("Result Size: " + results.size());
    } finally {
      jsc.stop();
    }
  }
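
The sample maps the RDD through a helper, ScanConvertFunction, which is defined in the full JavaHBaseDistributedScanExample file. The following is a minimal sketch of such a function, assuming it converts each scanned row key to a String; refer to the sample file for the authoritative version.

 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.spark.api.java.function.Function;
 import scala.Tuple2;

 // Converts each (row key, Result) pair from the scan into the row key as a String.
 class ScanConvertFunction implements Function<Tuple2<ImmutableBytesWritable, Result>, String> {
    @Override
    public String call(Tuple2<ImmutableBytesWritable, Result> v1) throws Exception {
      return Bytes.toString(v1._1().copyBytes());
    }
 }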

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseDistributedScanExample file in SparkOnHbaseScalaExample.

  def main(args: Array[String]) {
    if (args.length < 1) {
      println("HBaseDistributedScanExample {tableName} missing an argument")
      return
    }
    val tableName = args(0)
    val sparkConf = new SparkConf().setAppName("HBaseDistributedScanExample " + tableName)
    val sc = new SparkContext(sparkConf)
    try {
      val conf = HBaseConfiguration.create()
      val hbaseContext = new HBaseContext(sc, conf)
      // Set the scan rules: fetch 100 rows per RPC round trip.
      val scan = new Scan()
      scan.setCaching(100)
      // Scan the table as a distributed RDD using the rules above.
      val getRdd = hbaseContext.hbaseRDD(TableName.valueOf(tableName), scan)
      // Print each row key, then count the scanned rows.
      getRdd.foreach(v => println(Bytes.toString(v._1.get())))
      println("Length: " + getRdd.map(r => r._1.copyBytes()).collect().length)
    } finally {
      sc.stop()
    }
  }

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseDistributedScanExample file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
PySpark does not provide HBase APIs. In this example, Python is used to invoke Java code to implement required operations.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseDistributedScan")\
        .getOrCreate()
# Import the required class into spark._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseDistributedScanExample')
# Create a class instance, invoke the method, and pass spark._jsc and the command-line arguments.
spark._jvm.JavaHBaseDistributedScanExample().execute(spark._jsc, sys.argv)
# Stop SparkSession.
spark.stop()