Using the mapPartition Interface
Scenario
Users can use HBaseContext to perform operations on HBase in Spark applications and use the mapPartition interface to traverse the data in an HBase table in parallel.
Data Planning
Use the HBase table created in 3.5.7 Using the foreachPartition Interface.
Development Guideline
- Construct an RDD of the rowkeys in the HBase table to be traversed.
- Use the mapPartition interface to traverse the data corresponding to these rowkeys and perform simple operations on it.
Configuration Operations Before Running
In security mode, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal on the FusionInsight Manager page. The user in the sample code is super; change it to the development user name you have prepared.
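For reference, an explicit keytab login might look like the following minimal sketch. The login(...) overload, paths, and principal shown here are assumptions for illustration; the sample code below simply calls the project's LoginUtil.loginWithUserKeytab() wrapper.
// Hypothetical sketch of an explicit keytab login; the LoginUtil.login overload,
// paths, and principal are placeholders -- adapt them to your environment.
String userPrincipal = "super";                // the prepared development user
String userKeytabPath = "/opt/user.keytab";    // uploaded authentication file
String krb5ConfPath = "/opt/krb5.conf";        // uploaded Kerberos configuration
Configuration hadoopConf = new Configuration();
LoginUtil.login(userPrincipal, userKeytabPath, krb5ConfPath, hadoopConf);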
Packaging the Project
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Compiling and Running the Application.
- Upload the JAR file to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.
- Upload the user.keytab and krb5.conf files to the server where the client is installed (the upload path must be the same as that of the generated JAR file).
To run the Spark on HBase sample program, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services. (If the HBase service is uninstalled, change the value back to false.) In addition, set spark.inputFormat.cache.enabled to false.
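That is, spark-defaults.conf on the client should contain the following two lines:
spark.yarn.security.credentials.hbase.enabled true
spark.inputFormat.cache.enabled false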
Submitting Commands
- yarn-client mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseMapPartitionExample SparkOnHbaseJavaExample.jar table2
Python version (the file name must match the actual one; the following is only an example). Assume that the corresponding Java code is packaged as SparkOnHbaseJavaExample.jar and saved in the current directory.
bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample.jar HBaseMapPartitionExample.py table2
- yarn-cluster mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseMapPartitionExample --files /opt/user.keytab,/opt/krb5.conf SparkOnHbaseJavaExample.jar table2
Python version (the file name must match the actual one; the following is only an example). Assume that the corresponding Java code is packaged as SparkOnHbaseJavaExample.jar and saved in the current directory.
bin/spark-submit --master yarn --deploy-mode cluster --files /opt/user.keytab,/opt/krb5.conf --jars SparkOnHbaseJavaExample.jar HBaseMapPartitionExample.py table2
Java Sample Code
The following code snippet is only for demonstration. For details about the code, see the JavaHBaseMapPartitionExample file in SparkOnHbaseJavaExample.
public static void main(String[] args) throws IOException {
    if (args.length < 1) {
        System.out.println("JavaHBaseMapPartitionExample {tableName} is missing an argument");
        return;
    }
    LoginUtil.loginWithUserKeytab();
    final String tableName = args[0];
    SparkConf sparkConf = new SparkConf().setAppName("HBaseMapPartitionExample " + tableName);
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    try {
        // Build an RDD of the rowkeys to look up.
        List<byte[]> list = new ArrayList<>();
        list.add(Bytes.toBytes("1"));
        list.add(Bytes.toBytes("2"));
        list.add(Bytes.toBytes("3"));
        list.add(Bytes.toBytes("4"));
        list.add(Bytes.toBytes("5"));
        JavaRDD<byte[]> rdd = jsc.parallelize(list);
        Configuration hbaseconf = HBaseConfiguration.create();
        JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, hbaseconf);
        // Traverse each partition; the Connection in the tuple is managed by HBaseContext.
        JavaRDD<String> getRdd = hbaseContext.mapPartitions(rdd,
            new FlatMapFunction<Tuple2<Iterator<byte[]>, Connection>, String>() {
                public Iterator<String> call(Tuple2<Iterator<byte[]>, Connection> t) throws Exception {
                    Table table = t._2.getTable(TableName.valueOf(tableName));
                    // Go through the rowkeys of this partition.
                    List<String> list = new ArrayList<>();
                    while (t._1.hasNext()) {
                        byte[] bytes = t._1.next();
                        Result result = table.get(new Get(bytes));
                        Iterator<Cell> it = result.listCells().iterator();
                        StringBuilder sb = new StringBuilder();
                        sb.append(Bytes.toString(result.getRow())).append(":");
                        while (it.hasNext()) {
                            Cell cell = it.next();
                            // Copy exactly the qualifier/value bytes; getQualifierArray()
                            // and getValueArray() return the whole backing array.
                            String column = Bytes.toString(CellUtil.cloneQualifier(cell));
                            if (column.equals("counter")) {
                                sb.append("(").append(column).append(",")
                                  .append(Bytes.toLong(CellUtil.cloneValue(cell))).append(")");
                            } else {
                                sb.append("(").append(column).append(",")
                                  .append(Bytes.toString(CellUtil.cloneValue(cell))).append(")");
                            }
                        }
                        list.add(sb.toString());
                    }
                    return list.iterator();
                }
            });
        List<String> resultList = getRdd.collect();
        if (resultList == null || resultList.isEmpty()) {
            System.out.println("Nothing matches!");
        } else {
            for (String s : resultList) {
                System.out.println(s);
            }
        }
    } finally {
        jsc.stop();
    }
}
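Issuing one table.get(...) per rowkey means one RPC per record. A minimal sketch of a batched variant of the partition body, using the standard Table.get(List<Get>) API (the variables t and table are those of the sample above; this sketch is an illustration, not part of the shipped example):
// Hypothetical batched variant: collect the Gets for this partition first,
// then fetch them with a single batched call instead of one RPC per rowkey.
List<Get> gets = new ArrayList<>();
while (t._1.hasNext()) {
    gets.add(new Get(t._1.next()));
}
Result[] results = table.get(gets); // one batched request for the whole partition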
Scala Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseMapPartitionExample file in SparkOnHbaseScalaExample.
def main(args: Array[String]) {
  if (args.length < 1) {
    println("HBaseMapPartitionExample {tableName} is missing an argument")
    return
  }
  LoginUtil.loginWithUserKeytab()
  val tableName = args(0)
  val sparkConf = new SparkConf().setAppName("HBaseMapPartitionExample " + tableName)
  val sc = new SparkContext(sparkConf)
  try {
    // [(Array[Byte])] -- the rowkeys to look up
    val rdd = sc.parallelize(Array(
      Bytes.toBytes("1"),
      Bytes.toBytes("2"),
      Bytes.toBytes("3"),
      Bytes.toBytes("4"),
      Bytes.toBytes("5")))
    val conf = HBaseConfiguration.create()
    val hbaseContext = new HBaseContext(sc, conf)
    val getRdd = rdd.hbaseMapPartitions[String](hbaseContext, (it, connection) => {
      val table = connection.getTable(TableName.valueOf(tableName))
      it.map { r =>
        // batching would be faster. This is just an example
        val result = table.get(new Get(r))
        val cells = result.listCells().iterator()
        // Create the StringBuilder per record so rows do not accumulate into one string.
        val b = new StringBuilder
        b.append(Bytes.toString(result.getRow) + ":")
        while (cells.hasNext) {
          val cell = cells.next()
          // Copy exactly the qualifier/value bytes rather than the whole backing array.
          val q = Bytes.toString(CellUtil.cloneQualifier(cell))
          if (q.equals("counter")) {
            b.append("(" + q + "," + Bytes.toLong(CellUtil.cloneValue(cell)) + ")")
          } else {
            b.append("(" + q + "," + Bytes.toString(CellUtil.cloneValue(cell)) + ")")
          }
        }
        b.toString()
      }
    })
    getRdd.collect().foreach(v => println(v))
  } finally {
    sc.stop()
  }
}
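Note that hbaseMapPartitions is not a native RDD method; it is added by the implicit conversions of the HBase-Spark integration (in the open-source hbase-spark connector this is import org.apache.hadoop.hbase.spark.HBaseRDDFunctions._; the exact import used by SparkOnHbaseScalaExample may differ).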
Python Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseMapPartitionExample file in SparkOnHbasePythonExample.
# -*- coding:utf-8 -*-
"""
[Note]
(1) PySpark does not provide HBase-related APIs. In this example, Python is used to invoke Java code.
(2) If yarn-client mode is used, ensure that the spark.yarn.security.credentials.hbase.enabled parameter
    in the spark-defaults.conf file under Spark2x/spark/conf/ is set to true on the Spark2x client.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession

# Create a SparkSession instance.
spark = SparkSession\
    .builder\
    .appName("JavaHBaseMapPartitionExample")\
    .getOrCreate()

# Import the required class into sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseMapPartitionExample')

# Create a class instance, invoke the method, and pass sc._jsc and the command-line arguments.
spark._jvm.JavaHBaseMapPartitionExample().execute(spark._jsc, sys.argv)

# Stop the SparkSession instance.
spark.stop()
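When the script is submitted as shown in Submitting Commands, sys.argv carries the script name followed by the table name (for example, table2); the Java execute(...) method is assumed to read the table name from these arguments in the same way main(...) does in the Java sample.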