Using the BulkLoad Interface
Scenario
You can use HBaseContext to operate HBase in Spark applications: construct the rowkeys of the data to be inserted into RDDs, then write the RDDs into HFiles through the BulkLoad interface of HBaseContext. The following command imports the generated HFiles into the HBase table; it is not described further in this section.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles {hfilePath} {tableName}
Data Planning
Development Guideline
- Construct the data to be imported into RDDs
- Perform operations on HBase in HBaseContext mode and write RDDs into HFiles through the BulkLoad interface of HBaseContext.
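The samples below feed the bulk load records in the form "rowkey,family,qualifier,value". As a minimal sketch (plain Python, no Spark or HBase required), the per-record transform that the bulkLoad interface expects looks like this; the function name `to_key_family_qualifier` is illustrative, not part of the API:

```python
def to_key_family_qualifier(record):
    """Split one 'rowkey,family,qualifier,value' record into
    a ((rowkey, family, qualifier), value) pair, the shape the
    bulk-load transform in the samples produces."""
    row_key, family, qualifier, value = record.split(",")
    return ((row_key, family, qualifier), value)

# A few demo records in the same format as the samples below.
records = ["1,f1,b,1", "3,f1,a,2", "2,f2,a,3"]
pairs = [to_key_family_qualifier(r) for r in records]

# HFiles store cells in sorted key order; the real bulkLoad API
# handles this sorting internally, shown here only for illustration.
pairs.sort(key=lambda kv: kv[0])
print(pairs[0])  # (('1', 'f1', 'b'), '1')
```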
Configuration Operations Before Running
In security mode, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal from FusionInsight Manager. The user in the sample code is super; change it to the name of the development user you prepared.
Packaging the Project
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Writing and Running the Spark Program in the Linux Environment.
- Upload the JAR file to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.
- Upload the user.keytab and krb5.conf files to the server where the client is installed, to the same directory as the uploaded JAR file.
Submitting Commands
Assume that the JAR package is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory. For the Java API, the class name is prefixed with Java (for example, JavaHBaseBulkLoadExample). For details, see the sample code.
- yarn-client mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode client --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkLoadExample SparkOnHbaseJavaExample.jar /tmp/hfile bulkload-table-test
Python version (the file name must match the actual one; the following is only an example). Assume that the corresponding Java code is packaged as SparkOnHbaseJavaExample.jar and saved in the current directory.
bin/spark-submit --master yarn --deploy-mode client --jars SparkOnHbaseJavaExample.jar HBaseBulkLoadExample.py /tmp/hfile bulkload-table-test
- yarn-cluster mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode cluster --class com.huawei.bigdata.spark.examples.hbasecontext.JavaHBaseBulkLoadExample --files /opt/user.keytab,/opt/krb5.conf SparkOnHbaseJavaExample.jar /tmp/hfile bulkload-table-test
Python version (the file name must match the actual one; the following is only an example). Assume that the corresponding Java code is packaged as SparkOnHbaseJavaExample.jar and saved in the current directory.
bin/spark-submit --master yarn --deploy-mode cluster --files /opt/user.keytab,/opt/krb5.conf --jars SparkOnHbaseJavaExample.jar HBaseBulkLoadExample.py /tmp/hfile bulkload-table-test
Java Sample Code
The following code snippet is only for demonstration. For details about the code, see the JavaHBaseBulkLoadExample file in SparkOnHbaseJavaExample.
public static void main(String[] args) throws IOException {
  if (args.length < 2) {
    System.out.println("JavaHBaseBulkLoadExample {outputPath} {tableName}");
    return;
  }
  LoginUtil.loginWithUserKeytab();
  String outputPath = args[0];
  String tableName = args[1];
  String columnFamily1 = "f1";
  String columnFamily2 = "f2";
  SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseBulkLoadExample " + tableName);
  JavaSparkContext jsc = new JavaSparkContext(sparkConf);
  try {
    List<String> list = new ArrayList<String>();
    // row1
    list.add("1," + columnFamily1 + ",b,1");
    // row3
    list.add("3," + columnFamily1 + ",a,2");
    list.add("3," + columnFamily1 + ",b,1");
    list.add("3," + columnFamily2 + ",a,1");
    // row2
    list.add("2," + columnFamily2 + ",a,3");
    list.add("2," + columnFamily2 + ",b,3");
    JavaRDD<String> rdd = jsc.parallelize(list);
    Configuration conf = HBaseConfiguration.create();
    JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
    hbaseContext.bulkLoad(rdd, TableName.valueOf(tableName), new BulkLoadFunction(),
        outputPath, new HashMap<byte[], FamilyHFileWriteOptions>(), false,
        HConstants.DEFAULT_MAX_FILE_SIZE);
  } finally {
    jsc.stop();
  }
}
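Note that the sample adds rows 1, 3, and 2 out of order: HFiles store cells sorted by (rowkey, family, qualifier), and the bulk-load path imposes that ordering regardless of input order. The following plain-Python sketch (illustrative only, using the demo records from the sample above) shows the order in which the cells end up:

```python
# Demo records from the Java sample, deliberately listed out of row order.
records = [
    "1,f1,b,1",
    "3,f1,a,2",
    "3,f1,b,1",
    "3,f2,a,1",
    "2,f2,a,3",
    "2,f2,b,3",
]
cells = [tuple(r.split(",")) for r in records]
# Sort by (rowkey, family, qualifier), the order cells occupy in an HFile.
cells.sort(key=lambda c: (c[0], c[1], c[2]))
for row_key, family, qualifier, value in cells:
    print(row_key, family + ":" + qualifier, "=", value)
```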
Scala Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseBulkLoadExample file in SparkOnHbaseScalaExample.
def main(args: Array[String]) {
  if (args.length < 2) {
    println("HBaseBulkLoadExample {outputPath} {tableName}")
    return
  }
  LoginUtil.loginWithUserKeytab()
  val Array(outputPath, tableName) = args
  val columnFamily1 = "f1"
  val columnFamily2 = "f2"
  val sparkConf = new SparkConf().setAppName("JavaHBaseBulkLoadExample " + tableName)
  val sc = new SparkContext(sparkConf)
  try {
    val arr = Array("1," + columnFamily1 + ",b,1",
      "2," + columnFamily1 + ",a,2",
      "3," + columnFamily1 + ",b,1",
      "3," + columnFamily2 + ",a,1",
      "4," + columnFamily2 + ",a,3",
      "5," + columnFamily2 + ",b,3")
    val rdd = sc.parallelize(arr)
    val config = HBaseConfiguration.create
    val hbaseContext = new HBaseContext(sc, config)
    hbaseContext.bulkLoad[String](rdd,
      TableName.valueOf(tableName),
      (putRecord) => {
        if (putRecord.length > 0) {
          val strArray = putRecord.split(",")
          val kfq = new KeyFamilyQualifier(Bytes.toBytes(strArray(0)),
            Bytes.toBytes(strArray(1)), Bytes.toBytes(strArray(2)))
          val ite = (kfq, Bytes.toBytes(strArray(3)))
          val itea = List(ite).iterator
          itea
        } else {
          null
        }
      },
      outputPath)
  } finally {
    sc.stop()
  }
}
Python Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseBulkLoadPythonExample file in SparkOnHbasePythonExample.
# -*- coding:utf-8 -*-
"""
[Note]
(1) PySpark does not provide HBase-related APIs. In this example, Python is
    used to invoke Java code to implement the required operations.
(2) If the yarn-client mode is used, ensure that
    spark.yarn.security.credentials.hbase.enabled is set to true in the
    spark-defaults.conf file under Spark2x/spark/conf/ on the Spark2x client.
"""
import sys
from py4j.java_gateway import java_import
from pyspark.sql import SparkSession
# Create a SparkSession instance.
spark = SparkSession\
    .builder\
    .appName("JavaHBaseBulkLoadExample")\
    .getOrCreate()
# Import the required class into sc._jvm.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.HBaseBulkLoadPythonExample')
# Create a class instance and invoke the method, passing sc._jsc as a parameter.
spark._jvm.HBaseBulkLoadPythonExample().hbaseBulkLoad(spark._jsc, sys.argv[1], sys.argv[2])
# Stop the SparkSession instance.
spark.stop()