
Performing Operations on Data in Avro Format

Scenario

You can use HBase as a data source in Spark applications. In this example, data is stored in HBase in Avro format; the data is read from HBase and then filtered.

Data Planning

On the client, run the hbase shell command to enter the HBase command line, and then run the following commands to create the HBase tables used in the sample code:

create 'ExampleAvrotable','rowkey','cf1' (If the table already exists, run truncate 'ExampleAvrotable' to clear the data in the table before running the commands provided in Submitting Commands.)

create 'ExampleAvrotableInsert','rowkey','cf1' (If the table already exists, run truncate 'ExampleAvrotableInsert' to clear the data in the table before running the commands provided in Submitting Commands.)
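
To confirm that both tables exist before running the sample, you can check them in the same hbase shell session (an optional sanity check, not part of the sample code):

list
describe 'ExampleAvrotable'
describe 'ExampleAvrotableInsert'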

Development Guideline

  1. Create an RDD.
  2. Use HBase as the data source and write the generated RDD into the HBase table. (The write and the subsequent reads are driven by table catalogs; see the sketch after this list.)
  3. Read data from the HBase table and perform simple operations on the data.
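
The writes and reads in the sample code are driven by HBaseTableCatalog JSON definitions (the catalog, avroCatalog, and avroCatalogInsert values referenced in the snippets below), which the snippets themselves do not show. The following Scala sketch is modeled on the upstream hbase-spark AvroSource example; treat the exact layout as an assumption rather than the sample's verbatim code:

// Hypothetical catalog definitions, assumed from the upstream hbase-spark
// AvroSource example rather than copied from the sample file.
// Write-side catalog: col0 maps to the row key; col1 stores the serialized
// Avro record as raw bytes in column family cf1.
val catalog = """{
                |  "table":{"namespace":"default", "name":"ExampleAvrotable"},
                |  "rowkey":"key",
                |  "columns":{
                |    "col0":{"cf":"rowkey", "col":"key", "type":"string"},
                |    "col1":{"cf":"cf1", "col":"cf1", "type":"binary"}
                |  }
                |}""".stripMargin

// Read-side catalog: identical layout, but col1 is declared as an Avro field,
// so the connector deserializes it with the schema passed through the
// "avroSchema" option and exposes nested fields such as col1.favorite_array.
val avroCatalog = """{
                    |  "table":{"namespace":"default", "name":"ExampleAvrotable"},
                    |  "rowkey":"key",
                    |  "columns":{
                    |    "col0":{"cf":"rowkey", "col":"key", "type":"string"},
                    |    "col1":{"cf":"cf1", "col":"cf1", "avro":"avroSchema"}
                    |  }
                    |}""".stripMargin

// Same Avro mapping, targeting the second table.
val avroCatalogInsert = avroCatalog.replace("ExampleAvrotable", "ExampleAvrotableInsert")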

Packaging the Project

  • Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Compiling and Running the Application.
  • Upload the JAR file to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.

    To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default is false). Changing this value does not affect existing services. (If you uninstall the HBase service, change the value back to false.) Also set spark.inputFormat.cache.enabled to false.
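
    In spark-defaults.conf, these are plain space-separated key-value lines; after the change the two entries look like this:

    spark.yarn.security.credentials.hbase.enabled true
    spark.inputFormat.cache.enabled false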

Submitting Commands

Assume that the JAR file of the sample code is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory.

  • yarn-client mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --jars /opt/female/protobuf-java-2.5.0.jar --conf spark.yarn.user.classpath.first=true --class com.huawei.bigdata.spark.examples.datasources.AvroSource SparkOnHbaseJavaExample-1.0.jar

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode client --conf spark.yarn.user.classpath.first=true --jars SparkOnHbaseJavaExample-1.0.jar,/opt/female/protobuf-java-2.5.0.jar AvroSource.py

  • yarn-cluster mode:

    Java/Scala version (the class name must match the actual code; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --jars /opt/female/protobuf-java-2.5.0.jar --conf spark.yarn.user.classpath.first=true --class com.huawei.bigdata.spark.examples.datasources.AvroSource SparkOnHbaseJavaExample-1.0.jar

    Python version (the file name must match the actual one; the following is only an example):

    bin/spark-submit --master yarn --deploy-mode cluster --conf spark.yarn.user.classpath.first=true --jars SparkOnHbaseJavaExample-1.0.jar,/opt/female/protobuf-java-2.5.0.jar AvroSource.py
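
After a run completes, you can optionally verify the results from the hbase shell. The following quick check (not part of the sample) confirms that the source table holds data and that the second write populated ExampleAvrotableInsert:

scan 'ExampleAvrotable', {LIMIT => 5}
count 'ExampleAvrotableInsert'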

Java Sample Code

The following code snippet is only for demonstration. For details about the code, see the AvroSource file in SparkOnHbaseJavaExample.

// Entry logic of the sample; the Python sample invokes this method through
// Py4J as AvroSource().execute(jsc). (A Java main entry point must take
// String[]; in the full sample, main creates the JavaSparkContext.)
public void execute(JavaSparkContext jsc) throws IOException {
        SQLContext sqlContext = new SQLContext(jsc);
        Configuration hbaseconf = HBaseConfiguration.create();
        JavaHBaseContext hBaseContext = new JavaHBaseContext(jsc, hbaseconf);
        // Generate 256 Avro records to write into HBase.
        List<AvroHBaseRecord> list = new ArrayList<>();
        for (int i = 0; i <= 255; ++i) {
            list.add(AvroHBaseRecord.apply(i));
        }
        try {
            // Write the records into ExampleAvrotable; newTable("5") creates
            // the table with 5 regions if it does not exist yet.
            Map<String, String> map = new HashMap<>();
            map.put(HBaseTableCatalog.tableCatalog(), catalog);
            map.put(HBaseTableCatalog.newTable(), "5");
            sqlContext.createDataFrame(list, AvroHBaseRecord.class).write().options(map).format("org.apache.hadoop.hbase.spark").save();
            // Read the table back; withCatalog is a read helper defined in the
            // full AvroSource file.
            Dataset<Row> ds = withCatalog(sqlContext, catalog);
            ds.show();
            ds.printSchema();
            ds.registerTempTable("ExampleAvrotable");
            Dataset<Row> c = sqlContext.sql("select count(1) from ExampleAvrotable");
            c.show();
            // Filter one row and verify the deserialized Avro array values;
            // fail if they are not the expected "number1"/"number2".
            Dataset<Row> filtered = ds.select("col0", "col1.favorite_array").where("col0 = 'name1'");
            filtered.show();
            java.util.List<Row> collected = filtered.collectAsList();
            java.util.List<String> favorites = collected.get(0).getList(1);
            if (!favorites.get(0).equals("number1")) {
                throw new UserCustomizedSampleException("value invalid", new Throwable());
            }
            if (!favorites.get(1).equals("number2")) {
                throw new UserCustomizedSampleException("value invalid", new Throwable());
            }
            // Copy the data into ExampleAvrotableInsert and verify the row count.
            Map<String, String> avroCatalogInsertMap = new HashMap<>();
            avroCatalogInsertMap.put("avroSchema", AvroHBaseRecord.schemaString);
            avroCatalogInsertMap.put(HBaseTableCatalog.tableCatalog(), avroCatalogInsert);
            ds.write().options(avroCatalogInsertMap).format("org.apache.hadoop.hbase.spark").save();
            Dataset<Row> newDS = withCatalog(sqlContext, avroCatalogInsert);
            newDS.show();
            newDS.printSchema();
            if (newDS.count() != 256) {
                throw new UserCustomizedSampleException("value invalid", new Throwable());
            }
            // Filter on nested Avro fields; "or" is the logical operator in
            // SQL expression strings.
            ds.filter("col1.name = 'name5' or col1.name <= 'name5'").select("col0", "col1.favorite_color", "col1.favorite_number").show();
        } finally {
            jsc.stop();
        }
    }

Scala Sample Code

The following code snippet is only for demonstration. For details about the code, see the AvroSource file in SparkOnHbaseScalaExample.

def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("AvroSourceExample")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)
    val hbaseConf = HBaseConfiguration.create()
    val hbaseContext = new HBaseContext(sc, hbaseConf)
    import sqlContext.implicits._
    // Read helper: loads the table described by the catalog passed in; an Avro
    // catalog exposes col1's nested fields such as col1.favorite_array.
    def withCatalog(cat: String): DataFrame = {
      sqlContext
        .read
        .options(Map("avroSchema" -> AvroHBaseRecord.schemaString, HBaseTableCatalog.tableCatalog -> cat))
        .format("org.apache.hadoop.hbase.spark")
        .load()
    }
    // Generate 256 Avro records to write into HBase.
    val data = (0 to 255).map { i =>
      AvroHBaseRecord(i)
    }
    try {
      // Write the records into ExampleAvrotable; newTable -> "5" creates the
      // table with 5 regions if it does not exist yet.
      sc.parallelize(data).toDF.write.options(
        Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
        .format("org.apache.hadoop.hbase.spark")
        .save()

      val df = withCatalog(avroCatalog)
      df.show()
      df.printSchema()
      df.registerTempTable("ExampleAvrotable")
      val c = sqlContext.sql("select count(1) from ExampleAvrotable")
      c.show()

      // Filter one row and verify the deserialized Avro array values.
      val filtered = df.select($"col0", $"col1.favorite_array").where($"col0" === "name001")
      filtered.show()
      val collected = filtered.collect()
      if (collected(0).getSeq[String](1)(0) != "number1") {
        throw new UserCustomizedSampleException("value invalid")
      }
      if (collected(0).getSeq[String](1)(1) != "number2") {
        throw new UserCustomizedSampleException("value invalid")
      }

      // Copy the data into ExampleAvrotableInsert and verify the row count.
      df.write.options(
        Map("avroSchema" -> AvroHBaseRecord.schemaString, HBaseTableCatalog.tableCatalog -> avroCatalogInsert,
          HBaseTableCatalog.newTable -> "5"))
        .format("org.apache.hadoop.hbase.spark")
        .save()
      val newDF = withCatalog(avroCatalogInsert)
      newDF.show()
      newDF.printSchema()
      if (newDF.count() != 256) {
        throw new UserCustomizedSampleException("value invalid")
      }
      // Filter on nested Avro fields.
      df.filter($"col1.name" === "name005" || $"col1.name" <= "name005")
        .select("col0", "col1.favorite_color", "col1.favorite_number")
        .show()
    } finally {
      sc.stop()
    }
  }
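
The snippets above rely on the AvroHBaseRecord case class and its Avro schema, which are defined in the full sample file rather than shown here. The following is a plausible sketch modeled on the upstream hbase-spark AvroSource example; the field names and value formats match what the assertions above check, but treat the exact implementation (including the AvroSerdes helper from the hbase-spark module) as an assumption:

import org.apache.avro.Schema
import org.apache.avro.generic.GenericData
import org.apache.hadoop.hbase.spark.AvroSerdes

// Hypothetical reconstruction of AvroHBaseRecord; the real sample file may
// differ in detail. col0 becomes the row key, col1 the serialized record.
case class AvroHBaseRecord(col0: String, col1: Array[Byte])

object AvroHBaseRecord {
  // Schema with the fields queried above: name, favorite_number,
  // favorite_color, favorite_array.
  val schemaString =
    """{"namespace": "example.avro",
      |  "type": "record", "name": "User",
      |  "fields": [
      |    {"name": "name", "type": "string"},
      |    {"name": "favorite_number", "type": ["int", "null"]},
      |    {"name": "favorite_color", "type": ["string", "null"]},
      |    {"name": "favorite_array", "type": {"type": "array", "items": "string"}}
      |  ]}""".stripMargin

  val avroSchema: Schema = new Schema.Parser().parse(schemaString)

  def apply(i: Int): AvroHBaseRecord = {
    // Build one Avro record; "name001", "number1", "number2", and so on are
    // exactly the values the assertions in the main flow check.
    val user = new GenericData.Record(avroSchema)
    user.put("name", s"name${"%03d".format(i)}")
    user.put("favorite_number", i)
    user.put("favorite_color", s"color${"%03d".format(i)}")
    val favoriteArray =
      new GenericData.Array[String](2, avroSchema.getField("favorite_array").schema())
    favoriteArray.add(s"number$i")
    favoriteArray.add(s"number${i + 1}")
    user.put("favorite_array", favoriteArray)
    // AvroSerdes (assumed from the hbase-spark module) serializes the record
    // into the byte array stored in column family cf1.
    AvroHBaseRecord(s"name${"%03d".format(i)}", AvroSerdes.serialize(user, avroSchema))
  }
}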

Python Sample Code

The following code snippet is only for demonstration. For details about the code, see the AvroSource file in SparkOnHbasePythonExample.

# -*- coding:utf-8 -*-
"""
[Note]
PySpark does not provide HBase-related APIs. In this example, Python is used to invoke Java code to implement the required operations.
"""
from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("AvroSourceExample")\
        .getOrCreate()
# Import the required Java class into spark._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.datasources.AvroSource')
# Create a class instance, invoke its execute method, and pass spark._jsc
# (the underlying JavaSparkContext).
spark._jvm.AvroSource().execute(spark._jsc)
# Stop the SparkSession.
spark.stop()