Accessing Hive Data Sources Using HSFabric
This section describes how to use HSFabric to connect to HetuEngine, assemble SQL statements, and send them to HetuEngine for execution, so as to add, delete, modify, and query data in Hive data sources.
```python
import jaydebeapi

driver = "io.XXX.jdbc.XXXDriver"  # Change the value based on the cluster information.
url = "jdbc:XXX://192.168.37.61:29903,192.168.37.61:29903/hive/default?serviceDiscoveryMode=hsfabric"
user = "YourUserName"
tenant = "YourTenant"
jdbc_location = "Your file path of the jdbc jar"
sql = "show catalogs"

if __name__ == '__main__':
    conn = jaydebeapi.connect(driver, url,
                              {"user": user, "SSL": "false", "tenant": tenant},
                              [jdbc_location])
    curs = conn.cursor()
    curs.execute(sql)
    result = curs.fetchall()
    print(result)
    curs.close()
    conn.close()
```
The following table describes the parameters in the preceding code.
| Parameter | Description |
|---|---|
| url | `jdbc:XXX://HSFabric1_IP:HSFabric1_Port,HSFabric2_IP:HSFabric2_Port,HSFabric3_IP:HSFabric3_Port/catalog/schema?serviceDiscoveryMode=hsfabric` |
| user | Username for accessing HetuServer, that is, the username of the human-machine user created in the cluster. |
| tenant | Tenant resource queue for accessing HetuEngine compute instances. |
| jdbc_location | Full path of the hetu-jdbc-XXX.jar package obtained in Configuring the Python3 Sample Project. |
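Building on the connection pattern above, the add/delete/modify/query operations mentioned in the introduction can be sketched as a list of SQL statements executed over the same jaydebeapi connection. This is a minimal illustration, not part of the official sample: the table name `example_tbl`, its columns, and the helper function names are assumptions, and the `DELETE` statement succeeds only if the target Hive table format supports row-level deletes.

```python
def crud_statements(table="example_tbl"):
    # Illustrative CRUD statements for a Hive data source; the table name
    # and columns are assumptions for this sketch.
    return [
        ("create", f"CREATE TABLE IF NOT EXISTS {table} (id INT, name VARCHAR(64))"),
        ("insert", f"INSERT INTO {table} VALUES (1, 'alice')"),
        ("select", f"SELECT id, name FROM {table}"),
        # DELETE requires a table format that supports row-level deletes.
        ("delete", f"DELETE FROM {table} WHERE id = 1"),
        ("drop",   f"DROP TABLE IF EXISTS {table}"),
    ]

def run_statements(conn, statements):
    # Execute each statement on one cursor; collect rows for SELECT queries.
    results = {}
    curs = conn.cursor()
    try:
        for name, sql in statements:
            curs.execute(sql)
            if sql.lstrip().upper().startswith("SELECT"):
                results[name] = curs.fetchall()
    finally:
        curs.close()
    return results

if __name__ == '__main__':
    import jaydebeapi
    # Cluster-specific values, as in the preceding sample.
    driver = "io.XXX.jdbc.XXXDriver"
    url = "jdbc:XXX://192.168.37.61:29903,192.168.37.61:29903/hive/default?serviceDiscoveryMode=hsfabric"
    user = "YourUserName"
    tenant = "YourTenant"
    jdbc_location = "Your file path of the jdbc jar"
    conn = jaydebeapi.connect(driver, url,
                              {"user": user, "SSL": "false", "tenant": tenant},
                              [jdbc_location])
    try:
        print(run_statements(conn, crud_statements()))
    finally:
        conn.close()
```

Keeping the statements in a list makes it easy to reorder or extend the workflow, while `run_statements` ensures the cursor is always closed even if one statement fails.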