Performing Operations on the HBase Data Source
Scenario
You can use HBase as a data source in Spark applications: write a DataFrame to HBase, read data from HBase, and filter the read data.
Data Planning
On the client, run the hbase shell command to open the HBase command line, and then run the following command to create the HBase table used in the sample code:
create 'HBaseSourceExampleTable','rowkey','cf1','cf2','cf3','cf4','cf5','cf6','cf7', 'cf8'
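The command creates the table with nine column families (rowkey and cf1 through cf8). You can verify the result with the describe command:
describe 'HBaseSourceExampleTable'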
Development Guideline
- Create an RDD.
- Treat HBase as the data source and write the generated RDD into the HBase table.
- Read data from the HBase table and perform simple operations on the data.
Configuration Operations Before Running
In security mode, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal on the FusionInsight Manager page. The user in the sample code is super; change it to the development user name you prepared.
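For reference, a minimal sketch of the login step in security mode, assuming the sample project's LoginUtil helper also exposes a login(principal, keytab, krb5, conf) overload in addition to loginWithUserKeytab(), and using placeholder paths:
// Kerberos login sketch; the principal and file paths below are placeholders.
String userPrincipal = "super";               // replace with your development user
String userKeytabPath = "/opt/user.keytab";   // keytab downloaded from FusionInsight Manager
String krb5ConfPath = "/opt/krb5.conf";       // krb5.conf downloaded from FusionInsight Manager
Configuration hadoopConf = new Configuration();
LoginUtil.login(userPrincipal, userKeytabPath, krb5ConfPath, hadoopConf);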
Packaging the Project
- Use the Maven tool provided by IDEA to package the project and generate a JAR file; an equivalent command-line build is sketched after this list. For details, see Writing and Running the Spark Program in the Linux Environment.
- Upload the JAR package to any directory (for example, $SPARK_HOME) on the server where the Spark client is located.
- Upload the user.keytab and krb5.conf files to the server where the client is installed (the upload path must be the same as the directory of the generated JAR file).
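If you build from the command line instead of the IDEA Maven tool, a standard Maven build usually suffices (the output JAR name depends on your pom.xml):
mvn clean package -DskipTests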
To run the Spark on HBase sample project, set spark.yarn.security.credentials.hbase.enabled to true in the spark-defaults.conf file on the Spark client (the default value is false). Changing this value does not affect existing services. (If you uninstall the HBase service, change the value back to false.) In addition, set spark.inputFormat.cache.enabled to false.
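The two entries in spark-defaults.conf then read:
spark.yarn.security.credentials.hbase.enabled true
spark.inputFormat.cache.enabled false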
Submitting Commands
Assume that the JAR package of the sample code is named spark-hbaseContext-test-1.0.jar and is stored in the $SPARK_HOME directory on the client. Run the following commands in the $SPARK_HOME directory.
- yarn-client mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode client --jars /opt/female/protobuf-java-2.5.0.jar --conf spark.yarn.user.classpath.first=true --class com.huawei.bigdata.spark.examples.datasources.HBaseSource SparkOnHbaseJavaExample.jar
Python version (the file name must match the actual one; the following is only an example). Assume that the JAR package of the corresponding Java code is SparkOnHbaseJavaExample.jar and that it is saved in the current directory.
bin/spark-submit --master yarn --deploy-mode client --conf spark.yarn.user.classpath.first=true --jars SparkOnHbaseJavaExample.jar,/opt/female/protobuf-java-2.5.0.jar HBaseSource.py
- yarn-cluster mode:
Java/Scala version (the class name must match the actual code; the following is only an example):
bin/spark-submit --master yarn --deploy-mode cluster --jars /opt/female/protobuf-java-2.5.0.jar --conf spark.yarn.user.classpath.first=true --class com.huawei.bigdata.spark.examples.datasources.HBaseSource --files /opt/user.keytab,/opt/krb5.conf SparkOnHbaseJavaExample.jar
Python version (the file name must match the actual one; the following is only an example). Assume that the JAR package of the corresponding Java code is SparkOnHbaseJavaExample.jar and that it is saved in the current directory.
bin/spark-submit --master yarn --files /opt/user.keytab,/opt/krb5.conf --conf spark.yarn.user.classpath.first=true --jars SparkOnHbaseJavaExample.jar,/opt/female/protobuf-java-2.5.0.jar HBaseSource.py
Java Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseSource file in SparkOnHbaseJavaExample.
public static void main(String[] args) throws IOException {
    LoginUtil.loginWithUserKeytab();
    SparkConf sparkConf = new SparkConf().setAppName("HBaseSourceExample");
    JavaSparkContext jsc = new JavaSparkContext(sparkConf);
    SQLContext sqlContext = new SQLContext(jsc);
    Configuration conf = HBaseConfiguration.create();
    JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
    try {
        // Build 256 demo records; HBaseRecord is defined in the sample file.
        List<HBaseRecord> list = new ArrayList<HBaseRecord>();
        for (int i = 0; i < 256; i++) {
            list.add(new HBaseRecord(i));
        }
        // 'cat' is the table catalog JSON defined in the sample file; newTable()
        // asks the connector to create the table with 5 regions if it does not exist.
        Map<String, String> map = new HashMap<String, String>();
        map.put(HBaseTableCatalog.tableCatalog(), cat);
        map.put(HBaseTableCatalog.newTable(), "5");
        System.out.println("Before insert data into hbase table");
        // Write the DataFrame to HBase through the hbase-spark connector.
        sqlContext.createDataFrame(list, HBaseRecord.class)
            .write().options(map)
            .format("org.apache.hadoop.hbase.spark")
            .save();
        // Read the data back through the same catalog.
        Dataset<Row> ds = withCatalog(sqlContext, cat);
        System.out.println("After insert data into hbase table");
        ds.printSchema();
        ds.show();
        // Filter on the row key and project two columns.
        ds.filter("key <= 'row5'").select("key", "col1").show();
        ds.registerTempTable("table1");
        Dataset<Row> tempDS = sqlContext.sql("select count(col1) from table1 where key < 'row5'");
        tempDS.show();
    } finally {
        jsc.stop();
    }
}
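The snippet relies on two definitions from the HBaseSource sample file: the catalog string cat, which maps DataFrame columns to the HBase row key and column families, and the HBaseRecord bean backing the DataFrame. Their exact contents are in the sample project; a hypothetical minimal sketch of their shape could look like this:
// Hypothetical catalog; the authoritative definition is in the sample file.
// It maps the DataFrame column "key" to the HBase row key and "col1" to family cf1.
private static String cat = "{"
    + "\"table\":{\"namespace\":\"default\", \"name\":\"HBaseSourceExampleTable\"},"
    + "\"rowkey\":\"key\","
    + "\"columns\":{"
    + "\"key\":{\"cf\":\"rowkey\", \"col\":\"key\", \"type\":\"string\"},"
    + "\"col1\":{\"cf\":\"cf1\", \"col\":\"col1\", \"type\":\"string\"}"
    + "}}";

// Hypothetical bean; createDataFrame(list, HBaseRecord.class) reads it via getters.
public static class HBaseRecord implements Serializable {
    private String key;
    private String col1;
    public HBaseRecord(int i) {
        this.key = "row" + i;
        this.col1 = "value" + i;
    }
    public String getKey() { return key; }
    public void setKey(String key) { this.key = key; }
    public String getCol1() { return col1; }
    public void setCol1(String col1) { this.col1 = col1; }
}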
Scala Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseSource file in SparkOnHbaseScalaExample.
def main(args: Array[String]) {
  LoginUtil.loginWithUserKeytab()
  val sparkConf = new SparkConf().setAppName("HBaseSourceExample")
  val sc = new SparkContext(sparkConf)
  val sqlContext = new SQLContext(sc)
  val conf = HBaseConfiguration.create()
  val hbaseContext = new HBaseContext(sc, conf)
  import sqlContext.implicits._

  // Read a DataFrame back through the catalog 'cat' defined in the sample file.
  def withCatalog(cat: String): DataFrame = {
    sqlContext
      .read
      .options(Map(HBaseTableCatalog.tableCatalog -> cat))
      .format("org.apache.hadoop.hbase.spark")
      .load()
  }

  // Build 256 demo records; HBaseRecord is a case class defined in the sample file.
  val data = (0 to 255).map { i => HBaseRecord(i) }
  try {
    // Write the records to HBase; newTable -> "5" creates the table with 5 regions if needed.
    sc.parallelize(data).toDF.write.options(
      Map(HBaseTableCatalog.tableCatalog -> cat, HBaseTableCatalog.newTable -> "5"))
      .format("org.apache.hadoop.hbase.spark")
      .save()
    val df = withCatalog(cat)
    df.show()
    // Filter on the row key column and project two columns.
    df.filter($"col0" <= "row005")
      .select($"col0", $"col1").show
    df.registerTempTable("table1")
    val c = sqlContext.sql("select count(col1) from table1 where col0 < 'row050'")
    c.show()
  } finally {
    sc.stop()
  }
}
Python Sample Code
The following code snippet is only for demonstration. For details about the code, see the HBaseSource file in SparkOnHbasePythonExample.
# -*- coding:utf-8 -*-
"""
[Note]
(1) PySpark does not provide HBase-related APIs. In this example, Python is used to
    invoke Java code to implement the required operations.
(2) In yarn-client mode, ensure that spark.yarn.security.credentials.hbase.enabled is
    set to true in the spark-defaults.conf file under Spark2x/spark/conf/ on the
    Spark2x client.
"""
from py4j.java_gateway import java_import
from pyspark.sql import SparkSession

# Create a SparkSession instance.
spark = SparkSession\
    .builder\
    .appName("HBaseSourceExample")\
    .getOrCreate()

# Import the required Java class into the JVM gateway.
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.datasources.HBaseSource')

# Create a class instance and invoke its method, passing the spark._jsc parameter.
spark._jvm.HBaseSource().execute(spark._jsc)

# Stop the SparkSession instance.
spark.stop()