How Do I Use Spark to Write Data into a DLI Table?

Updated on 2023-03-21 GMT+08:00

To use Spark to write data into a DLI table, configure the following OBS access parameters in the Spark (Hadoop) configuration:

  • fs.obs.access.key
  • fs.obs.secret.key
  • fs.obs.impl
  • fs.obs.endpoint

The following is an example:

import logging
from operator import add
from pyspark import SparkContext

logging.basicConfig(format='%(message)s', level=logging.INFO)

# local input file and output directory paths
test_file_name = "D://test-data_1.txt"
out_file_name = "D://test-data_result_1"

sc = SparkContext("local", "wordcount app")
sc._jsc.hadoopConfiguration().set("fs.obs.access.key", "myak")
sc._jsc.hadoopConfiguration().set("fs.obs.secret.key", "mysk")
sc._jsc.hadoopConfiguration().set("fs.obs.impl", "org.apache.hadoop.fs.obs.OBSFileSystem")
sc._jsc.hadoopConfiguration().set("fs.obs.endpoint", "myendpoint")

# read: text_file RDD object
text_file = sc.textFile(test_file_name)

# count the occurrences of each word
counts = text_file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
# write the result to the output path
counts.saveAsTextFile(out_file_name)
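
Because the fs.obs.* parameters are set on the Hadoop configuration, the same job can also write its result directly to OBS, where DLI table data is stored. The following is a minimal sketch of that variant; the OBS path obs://your-bucket/output/wordcount is a placeholder and must be replaced with a bucket and prefix in your own account.

# hypothetical OBS output path; replace the bucket and prefix with your own
obs_out_file_name = "obs://your-bucket/output/wordcount"

# write the word-count result to OBS using the fs.obs.* settings configured above
counts.saveAsTextFile(obs_out_file_name)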