How Do I Configure Other Data Sources on Presto?
In this section, MySQL is used as an example.
- For MRS 1.x and 3.x clusters, do the following:
- Log in to the MRS management console.
- Click the name of the cluster to go to its details page.
- Click the Components tab and then Presto in the component list. On the page that is displayed, click the Configurations tab then the All Configurations sub-tab.
- On the Presto configuration page that is displayed, find the connector-customize parameter.
- Set Name and Value as follows:
Name: mysql.connector.name
Value: mysql
- Click the plus sign (+) to add three more parameters and set Name and Value according to the table below. Then click Save.
| Name | Value | Description |
| --- | --- | --- |
| mysql.connection-url | jdbc:mysql://xxx.xxx.xxx.xxx:3306 | JDBC connection URL of the database |
| mysql.connection-user | xxxx | Database username |
| mysql.connection-password | xxxx | Database password |
- Restart the Presto service.
- Run the following command to connect to the Presto Server of the cluster:
presto_cli.sh --krb5-config-path {krb5.conf path} --krb5-principal {User principal} --krb5-keytab-path {user.keytab path} --user {presto username}
- Log in to Presto and run the show catalogs command to check whether the mysql catalog appears in the data source list.
Run the show schemas from mysql command to query the databases in MySQL.
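The verification steps above can be sketched as the following SQL statements run in the Presto CLI. The table name mysql.testdb.test_table is a hypothetical example; replace it with a schema and table that exist in your MySQL instance.

```sql
-- List catalogs; mysql should appear alongside the built-in catalogs (e.g., hive, system).
SHOW CATALOGS;

-- List the databases (schemas) exposed through the mysql catalog.
SHOW SCHEMAS FROM mysql;

-- Hypothetical example: query a table through the mysql catalog.
SELECT * FROM mysql.testdb.test_table LIMIT 10;
```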
- For MRS 2.x clusters, do the following:
- Create the mysql.properties configuration file containing the following content:
connector.name=mysql
connection-url=jdbc:mysql://mysqlIp:3306
connection-user=Username
connection-password=Password
- mysqlIp indicates the IP address of the MySQL instance, which must be reachable from the MRS cluster network.
- The username and password are those used to log in to the MySQL database.
- Upload the configuration file to the /opt/Bigdata/MRS_Current/1_14_Coordinator/etc/catalog/ directory on the master node (where the Coordinator instance resides) and to the /opt/Bigdata/MRS_Current/1_14_Worker/etc/catalog/ directory on the core nodes (adjust the paths to match your cluster), and change the file's owner and group to omm:wheel.
- Restart the Presto service.
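The MRS 2.x steps above can be sketched as the following shell commands. The IP address, username, and password are placeholder assumptions; the copy and ownership commands are shown as comments because they must run against your actual cluster nodes.

```shell
#!/bin/sh
# Sketch: generate mysql.properties locally before distributing it to the
# Presto catalog directories. All values below are example placeholders.
MYSQL_IP="192.168.0.100"      # assumed MySQL IP; must be reachable from MRS
MYSQL_USER="dbuser"           # assumed database username
MYSQL_PASSWORD="dbpassword"   # assumed database password

# Write the catalog configuration file described in the steps above.
cat > mysql.properties <<EOF
connector.name=mysql
connection-url=jdbc:mysql://${MYSQL_IP}:3306
connection-user=${MYSQL_USER}
connection-password=${MYSQL_PASSWORD}
EOF

# On the cluster you would then copy the file to the catalog directories
# and fix its ownership, for example:
#   scp mysql.properties omm@<master>:/opt/Bigdata/MRS_Current/1_14_Coordinator/etc/catalog/
#   chown omm:wheel /opt/Bigdata/MRS_Current/1_14_Coordinator/etc/catalog/mysql.properties
# (repeat for the Worker catalog directory on the core nodes)

# Show the generated file for a quick sanity check.
cat mysql.properties
```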