How Do I Set the AK/SK for a Queue to Operate an OBS Table?
Using a temporary AK/SK is recommended. For details about how to obtain one, see Obtaining a Temporary Access Key and Security Token in the Identity and Access Management API Reference.
- If only the AK and SK are obtained (no security token), set the parameters as follows:
  - Create SparkContext using code:

    ```scala
    val sc: SparkContext = new SparkContext()
    sc.hadoopConfiguration.set("fs.obs.access.key", ak)
    sc.hadoopConfiguration.set("fs.obs.secret.key", sk)
    ```
  - Create SparkSession using code:

    ```scala
    val sparkSession: SparkSession = SparkSession
      .builder()
      .config("spark.hadoop.fs.obs.access.key", ak)
      .config("spark.hadoop.fs.obs.secret.key", sk)
      .enableHiveSupport()
      .getOrCreate()
    ```
- If the AK, SK, and security token are obtained, the temporary AK/SK and security token must all be used together during authentication. Set the parameters as follows (a usage sketch follows this list):
  - Create SparkContext using code:

    ```scala
    val sc: SparkContext = new SparkContext()
    sc.hadoopConfiguration.set("fs.obs.access.key", ak)
    sc.hadoopConfiguration.set("fs.obs.secret.key", sk)
    sc.hadoopConfiguration.set("fs.obs.session.token", sts)
    ```
  - Create SparkSession using code:

    ```scala
    val sparkSession: SparkSession = SparkSession
      .builder()
      .config("spark.hadoop.fs.obs.access.key", ak)
      .config("spark.hadoop.fs.obs.secret.key", sk)
      .config("spark.hadoop.fs.obs.session.token", sts)
      .enableHiveSupport()
      .getOrCreate()
    ```
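Once the credentials are configured as shown above, the same SparkSession can read from and write to OBS paths directly. The following is a minimal sketch assuming the temporary credentials are passed in as job arguments; the `ObsReadWriteSketch` object, the bucket name, and the directory names are placeholders, not part of the DLI documentation.

```scala
import org.apache.spark.sql.SparkSession

object ObsReadWriteSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder: pass the temporary AK, SK, and security token in securely
    // (for example, as job parameters); do not hard-code them in the source.
    val Array(ak, sk, sts) = args.take(3)

    // Supply the OBS credentials through the Spark/Hadoop configuration.
    val sparkSession = SparkSession.builder()
      .config("spark.hadoop.fs.obs.access.key", ak)
      .config("spark.hadoop.fs.obs.secret.key", sk)
      .config("spark.hadoop.fs.obs.session.token", sts)
      .enableHiveSupport()
      .getOrCreate()

    // "dli-demo-bucket" and the directory names are placeholders.
    val orders = sparkSession.read.parquet("obs://dli-demo-bucket/input/orders")
    orders.filter("amount > 100")
      .write
      .mode("overwrite")
      .parquet("obs://dli-demo-bucket/output/large_orders")

    sparkSession.stop()
  }
}
```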
For security purposes, you are advised not to include the AK and SK in the OBS path. In addition, if a table is created in an OBS directory, the OBS path specified in the Path field must not contain the AK and SK.
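As an illustration, the table location in the sketch below references only the bucket and directory; the credentials are supplied through the fs.obs.* configuration shown earlier and are never embedded in the path. The table, bucket, and directory names are placeholders, and the exact DDL depends on the table format you use.

```scala
// Assumes sparkSession was created with the fs.obs.* credentials shown above.
// The LOCATION contains only the bucket and directory, with no AK/SK in it.
sparkSession.sql(
  """CREATE TABLE IF NOT EXISTS demo_sales (id BIGINT, amount DOUBLE)
    |STORED AS PARQUET
    |LOCATION 'obs://dli-demo-bucket/warehouse/demo_sales'""".stripMargin)
```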