Updated on 2024-08-10 GMT+08:00

Using the foreachPartition API

Overview

You can use HBaseContext to perform operations on HBase in a Spark application, construct the rowkeys of the data to be inserted into RDDs, and write the RDDs to HBase tables through the foreachPartition API of HBaseContext.

Preparing Data

  1. Run the hbase shell command on the client to go to the HBase command line.
  2. Create an HBase table.

    create 'table2','cf1'
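
    To verify that the table and column family were created, you can run the following command in the HBase shell:

    describe 'table2'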

Development Guidelines

  1. Construct the data to be imported into RDDs.
  2. Perform operations on HBase in HBaseContext mode and concurrently write data to HBase through the foreachPartition API of HBaseContext.

Preparations

For clusters with the security mode enabled, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal on the FusionInsight Manager page. The user in the sample code is super; change it to the development user name you have prepared.

Packaging the Project

  • Use the Maven tool provided by IDEA to package the project and generate the JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
  • Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.
  • Upload the user.keytab and krb5.conf files to the server where the client is installed (the upload path must be the same directory as the generated JAR file).

    To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled (false by default) to true in the spark-defaults.conf file on the Spark client. Changing this value does not affect existing services. (If the HBase service is uninstalled, change the value back to false.) In addition, set the configuration item spark.inputFormat.cache.enabled to false.
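
    For reference, the corresponding entries in spark-defaults.conf (which uses whitespace-separated key-value pairs) would look as follows:

    spark.yarn.security.credentials.hbase.enabled true
    spark.inputFormat.cache.enabled false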

Submitting Commands

Assume that the JAR package name is spark-hbaseContext-test-1.0.jar and that it is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name is prefixed with Java; for details, see the sample code.

  • yarn-client mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseForEachPartitionExample SparkOnHbaseJavaExample.jar table2 cf1

    Python version (the file name must match the actual code; the following is only an example). Assume that the JAR package of the corresponding Java code is SparkOnHbaseJavaExample.jar and that it is saved in the current directory:

    bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample.jar HBaseForEachPartitionExample.py table2 cf1

  • yarn-cluster mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseForEachPartitionExample --files /opt/user.keytab,/opt/krb5.conf SparkOnHbaseJavaExample.jar table2 cf1

    Python version (the file name must match the actual code; the following is only an example). Assume that the JAR package of the corresponding Java code is SparkOnHbaseJavaExample.jar and that it is saved in the current directory:

    bin/spark-submit --master yarn --deploy-mode cluster --files /opt/user.keytab,/opt/krb5.conf --jars SparkOnHbaseJavaExample.jar HBaseForEachPartitionExample.py table2 cf1

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the JavaHBaseForEachPartitionExample file in SparkOnHbaseJavaExample.

public static void main(String[] args) throws IOException {
    // Both the table name and the column family are required arguments.
    if (args.length < 2) {
      System.out.println("JavaHBaseForEachPartitionExample {tableName} {columnFamily}");
      return;
    }
    LoginUtil.loginWithUserKeytab();
    final String tableName = args[0];
    final String columnFamily = args[1];
    SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseForEachPartitionExample " + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
      List<byte[]> list = new ArrayList<byte[]>(5);
      list.add(Bytes.toBytes("1"));
      list.add(Bytes.toBytes("2"));
      list.add(Bytes.toBytes("3"));
      list.add(Bytes.toBytes("4"));
      list.add(Bytes.toBytes("5"));
      JavaRDD<byte[]> rdd = jsc.parallelize(list);
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      hbaseContext.foreachPartition(rdd,
              new VoidFunction<Tuple2<Iterator<byte[]>, Connection>>() {
          public void call(Tuple2<Iterator<byte[]>, Connection> t)
                  throws Exception {
            Connection con = t._2();
            Iterator<byte[]> it = t._1();
            BufferedMutator buf = con.getBufferedMutator(TableName.valueOf(tableName));
            while (it.hasNext()) {
              byte[] b = it.next();
              Put put = new Put(b);
              // Write the rowkey back as the value of the cid column.
              put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes("cid"), b);
              buf.mutate(put);
            }
            buf.flush();
            buf.close();

          }
        });
    } finally {
      jsc.stop();
    }
  }
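
After the Java sample finishes, you can verify the result in the HBase shell. Given the data constructed in this sample, rows 1 to 5 should each contain a cf1:cid column whose value equals the rowkey:

scan 'table2'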

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseForEachPartitionExample file in SparkOnHbaseScalaExample.

def main(args: Array[String]) {
    if (args.length < 2) {
      println("HBaseForeachPartitionExample {tableName} {columnFamily} are missing an arguments")
      return
    }
    LoginUtil.loginWithUserKeytab()
    val tableName = args(0)
    val columnFamily = args(1)
    val sparkConf = new SparkConf().setAppName("HBaseForeachPartitionExample " +
      tableName + " " + columnFamily)
    val sc = new SparkContext(sparkConf)
    try {
      //[(Array[Byte], Array[(Array[Byte], Array[Byte], Array[Byte])])]
      val rdd = sc.parallelize(Array(
        (Bytes.toBytes("1"),
          Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("1")))),
        (Bytes.toBytes("2"),
          Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("2")))),
        (Bytes.toBytes("3"),
          Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("3")))),
        (Bytes.toBytes("4"),
          Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("4")))),
        (Bytes.toBytes("5"),
          Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("5"))))
      ))
      val conf = HBaseConfiguration.create()
      val hbaseContext = new HBaseContext(sc, conf)
      rdd.hbaseForeachPartition(hbaseContext,
        (it, connection) => {
          val m = connection.getBufferedMutator(TableName.valueOf(tableName))
          it.foreach(r => {
            val put = new Put(r._1)
            r._2.foreach((putValue) =>
              put.addColumn(putValue._1, putValue._2, putValue._3))
            m.mutate(put)
          })
          m.flush()
          m.close()
        })
    } finally {
      sc.stop()
    }
  }

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the HBaseForEachPartitionExample file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
1. PySpark does not provide HBase APIs. In this example, Python is used to invoke Java code to implement the required operations.
2. If the yarn-client mode is used, ensure that the spark.yarn.security.credentials.hbase.enabled parameter in the spark-defaults.conf file under Spark2x/spark/conf/ on the Spark2x client is set to true.
"""
import sys
from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseForEachPartitionExample")\
        .getOrCreate()
# Import the required class to sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseForEachPartitionExample')
# Create a class instance, invoke the method, and transfer the sc._jsc parameter.
spark._jvm.JavaHBaseForEachPartitionExample().execute(spark._jsc, sys.argv)
# Stop SparkSession.
spark.stop()