Using the BulkPut API
Overview
You can use HBaseContext to operate on HBase in Spark applications and write a constructed RDD into an HBase table.
Preparing Data
On the client, run the hbase shell command to access the HBase command line, and then run the following command to create the HBase table used by the sample code:
create 'bulktable','cf1'
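Optionally, confirm that the table exists before running the sample. The following hbase shell command is a quick check; the exact output depends on your cluster:
describe 'bulktable'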
Development Guidelines
- Create an RDD that contains the data to be written.
- Operate on HBase through HBaseContext and write the RDD into the HBase table, as shown in the condensed sketch after this list.
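The following condensed Java sketch illustrates these two steps. It is only a sketch: the class name BulkPutSketch is hypothetical, it assumes the bulktable table created above, and the JavaHBaseContext package path may differ in your sample project. The complete samples appear in the sections below.
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.spark.JavaHBaseContext;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class BulkPutSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc = new JavaSparkContext(new SparkConf().setAppName("BulkPutSketch"));
    try {
      // Step 1: create an RDD holding the records to write (rowKey,columnFamily,qualifier,value).
      JavaRDD<String> rdd = jsc.parallelize(Arrays.asList("1,cf1,q1,v1", "2,cf1,q1,v2"));

      // Step 2: write the RDD into the HBase table through JavaHBaseContext.
      Configuration conf = HBaseConfiguration.create();
      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
      hbaseContext.bulkPut(rdd, TableName.valueOf("bulktable"), record -> {
        // Convert one record into an HBase Put.
        String[] cells = record.split(",");
        Put put = new Put(Bytes.toBytes(cells[0]));
        put.addColumn(Bytes.toBytes(cells[1]), Bytes.toBytes(cells[2]), Bytes.toBytes(cells[3]));
        return put;
      });
    } finally {
      jsc.stop();
    }
  }
}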
Packaging the Project
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
- Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.
To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services, but if you uninstall the HBase service, change the value back to false. In addition, set spark.inputFormat.cache.enabled to false.
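For reference, the two entries in spark-defaults.conf would look as follows (the file uses whitespace-separated key-value pairs):
spark.yarn.security.credentials.hbase.enabled  true
spark.inputFormat.cache.enabled  false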
Submitting Commands
Assume that the JAR package is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name is prefixed with Java (for example, JavaHBaseBulkPutExample). For details, see the sample code.
- yarn-client mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkPutExample SparkOnHbaseJavaExample-1.0.jar bulktable cf1
Python version (the file name must match the actual one; the following is only an example):
bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkPutExample.py bulktable cf1
- yarn-cluster mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkPutExample SparkOnHbaseJavaExample-1.0.jar bulktable cf1
Python version (the file name must match the actual one; the following is only an example):
bin/spark-submit --master yarn --deploy-mode cluster --jars SparkOnHbaseJavaExample-1.0.jar HBaseBulkPutExample.py bulktable cf1
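After the job finishes, you can verify the writes from the HBase command line. With the sample data, the following scan should return rows 1 through 10 in column family cf1:
scan 'bulktable'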
Java Sample Code
The following code snippet is only for demonstration. For details about the code, see the JavaHBaseBulkPutExample file in SparkOnHbaseJavaExample.
public static void main(String[] args) throws Exception {
  if (args.length < 2) {
    System.out.println("JavaHBaseBulkPutExample {tableName} {columnFamily}");
    return;
  }
  String tableName = args[0];
  String columnFamily = args[1];
  SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseBulkPutExample " + tableName);
  JavaSparkContext jsc = new JavaSparkContext(sparkConf);
  try {
    // Build the records to write, in the format rowKey,columnFamily,qualifier,value.
    List<String> list = new ArrayList<String>(10);
    list.add("1," + columnFamily + ",1,1");
    list.add("2," + columnFamily + ",1,2");
    list.add("3," + columnFamily + ",1,3");
    list.add("4," + columnFamily + ",1,4");
    list.add("5," + columnFamily + ",1,5");
    list.add("6," + columnFamily + ",1,6");
    list.add("7," + columnFamily + ",1,7");
    list.add("8," + columnFamily + ",1,8");
    list.add("9," + columnFamily + ",1,9");
    list.add("10," + columnFamily + ",1,10");
    JavaRDD<String> rdd = jsc.parallelize(list);

    // Write the RDD into the HBase table through JavaHBaseContext.
    Configuration conf = HBaseConfiguration.create();
    JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
    hbaseContext.bulkPut(rdd, TableName.valueOf(tableName), new PutFunction());
    System.out.println("Bulk put into HBase successfully!");
  } finally {
    jsc.stop();
  }
}
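The sample references a PutFunction that converts each comma-separated record into an HBase Put. The following is a minimal sketch consistent with the rowKey,columnFamily,qualifier,value format used above; the authoritative version ships with the SparkOnHbaseJavaExample project.
// Minimal sketch; Function is org.apache.spark.api.java.function.Function.
public static class PutFunction implements Function<String, Put> {
  @Override
  public Put call(String record) throws Exception {
    // Record format: rowKey,columnFamily,qualifier,value
    String[] cells = record.split(",");
    Put put = new Put(Bytes.toBytes(cells[0]));
    put.addColumn(Bytes.toBytes(cells[1]), Bytes.toBytes(cells[2]), Bytes.toBytes(cells[3]));
    return put;
  }
}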
Scala Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseBulkPutExample file in SparkOnHbaseScalaExample.
def main(args: Array[String]) {
  if (args.length < 2) {
    System.out.println("HBaseBulkPutTimestampExample {tableName} {columnFamily}")
    return
  }
  val tableName = args(0)
  val columnFamily = args(1)
  val sparkConf = new SparkConf().setAppName("HBaseBulkPutTimestampExample " + tableName + " " + columnFamily)
  val sc = new SparkContext(sparkConf)
  try {
    // Each element is (rowKey, Array((columnFamily, qualifier, value))).
    val rdd = sc.parallelize(Array(
      (Bytes.toBytes("1"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("1")))),
      (Bytes.toBytes("2"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("2")))),
      (Bytes.toBytes("3"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("3")))),
      (Bytes.toBytes("4"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("4")))),
      (Bytes.toBytes("5"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("5")))),
      (Bytes.toBytes("6"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("6")))),
      (Bytes.toBytes("7"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("7")))),
      (Bytes.toBytes("8"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("8")))),
      (Bytes.toBytes("9"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("9")))),
      (Bytes.toBytes("10"), Array((Bytes.toBytes(columnFamily), Bytes.toBytes("1"), Bytes.toBytes("10"))))))
    val conf = HBaseConfiguration.create()
    val timeStamp = System.currentTimeMillis()
    val hbaseContext = new HBaseContext(sc, conf)
    // Convert each record into a Put; every cell is written with the same timestamp.
    hbaseContext.bulkPut[(Array[Byte], Array[(Array[Byte], Array[Byte], Array[Byte])])](rdd,
      TableName.valueOf(tableName),
      (putRecord) => {
        val put = new Put(putRecord._1)
        putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, timeStamp, putValue._3))
        put
      })
  } finally {
    sc.stop()
  }
}
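Note that this Scala variant stamps every cell with an explicit timestamp taken once on the driver, so all written cells share the same version timestamp. If you call addColumn without a timestamp, as in the Java sample, HBase assigns the server's current time when the cell is written.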
Python Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseBulkPutExample file in SparkOnHbasePythonExample.
# -*- coding:utf-8 -*-
"""
[Note] PySpark does not provide HBase APIs. In this example, Python is used to
invoke Java code to implement the required operations.
"""
import sys

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession

# Create a SparkSession instance.
spark = SparkSession\
        .builder\
        .appName("JavaHBaseBulkPutExample")\
        .getOrCreate()

# Import the required Java class into the JVM gateway.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkPutExample')

# Create a class instance, invoke its execute method, and pass spark._jsc and the command-line arguments.
spark._jvm.JavaHBaseBulkPutExample().execute(spark._jsc, sys.argv)

# Stop the SparkSession.
spark.stop()