Implementing Data Transition Between Hive and HBase (Python)
Function
In a Spark application, you can call Hive APIs through Spark to operate a Hive table, and then write the analysis result of the Hive table data to an HBase table.
Sample Code
PySpark does not provide HBase APIs. Therefore, this sample uses Python to invoke Java code through the py4j gateway.
The following code snippet is an example. For the complete code, see SparkHivetoHbasePythonExample:
# -*- coding:utf-8 -*-
from py4j.java_gateway import java_import
from pyspark.sql import SparkSession

# Create a SparkSession instance.
spark = SparkSession \
    .builder \
    .appName("SparkHivetoHbase") \
    .getOrCreate()

# Import the class to be run into the JVM gateway (spark._jvm).
java_import(spark._jvm, 'com.huawei.bigdata.spark.examples.SparkHivetoHbase')

# Create a class instance and invoke the method.
spark._jvm.SparkHivetoHbase().hivetohbase(spark._jsc)

# Stop the SparkSession.
spark.stop()
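For the JVM call above to resolve, the JAR containing com.huawei.bigdata.spark.examples.SparkHivetoHbase must be on the driver and executor classpaths when the Python script is submitted. A minimal submit command might look like the following; the JAR and script file names here are assumptions for illustration, not taken from the source:

```shell
# Submit the PySpark script together with the JAR that provides
# com.huawei.bigdata.spark.examples.SparkHivetoHbase.
# File names below are hypothetical placeholders.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --jars SparkHivetoHbasePythonExample.jar \
  SparkHivetoHbase.py
```

The --jars option ships the JAR to the cluster and adds it to both the driver and executor classpaths, which is what allows java_import and spark._jvm to find the class at runtime.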