Creating a Spark Job
DLI provides fully managed Spark computing services, allowing you to execute Spark jobs.
On the Overview page, click Create Job in the upper right corner of the Spark Jobs tab, or click Create Job in the upper right corner of the Spark Jobs page. The Spark job editing page is displayed.
On the Spark job editing page, a message indicates that a temporary DLI data bucket will be created. The bucket stores temporary data generated by DLI, such as job logs and job results. If you choose not to create the bucket, you cannot view job logs. You can configure a lifecycle rule to periodically delete objects in the bucket or transition objects between storage classes. The bucket is created with the default bucket name.
If you do not need to create a DLI temporary data bucket and do not want to receive this message, select Do not show again and click Cancel.
Prerequisites
- You have uploaded the dependencies to the corresponding OBS bucket on the Data Management > Package Management page. For details, see Creating a Package.
- Before creating a Spark job to access other external data sources, such as OpenTSDB, HBase, Kafka, GaussDB(DWS), RDS, CSS, CloudTable, DCS Redis, and DDS, you need to create a datasource connection to enable the network between the job running queue and external data sources.
- For details about the external data sources that can be accessed by Spark jobs, see Common Development Methods for DLI Cross-Source Analysis.
- For how to create a datasource connection, see Configuring the Network Connection Between DLI and Data Sources (Enhanced Datasource Connection).
On the Resources > Queue Management page, locate the queue you have created, click More in the Operation column, and select Test Address Connectivity to check if the network connection between the queue and the data source is normal. For details, see Testing Address Connectivity.
Procedure
- In the left navigation pane of the DLI management console, choose Spark Jobs. The Spark Jobs page is displayed.
Click Create Job in the upper right corner. In the job editing window, you can set parameters in Fill Form mode or Write API mode.
The following uses Fill Form mode as an example. For parameter settings in Write API mode, refer to the Data Lake Insight API Reference.
- Select a queue.
Select the queue you want to use from the drop-down list box.
- Select a Spark version. Select a supported Spark version from the drop-down list. The latest version is recommended.
You are advised not to run Spark/Flink engines of mixed versions for an extended period:
- Doing so can lead to code incompatibility, which can negatively impact job execution efficiency.
- Doing so may result in job execution failures due to dependency conflicts, as jobs rely on specific versions of libraries or components.
- Configure the job.
Configure job parameters by referring to Table 1.
Figure 1 Spark job configuration
Table 1 Job configuration parameters
Job Name (--name)
Set a job name.
Application
Select the package to be executed. The value can be .jar or .py.
You can select the name of a JAR or pyFile package that has been uploaded to the DLI resource management system. You can also specify an OBS path, for example, obs://Bucket name/Package name.
Spark 3.3.x or later supports only packages in OBS paths.
Main Class (--class)
Enter the name of the main class. When the application type is .jar, the main class name cannot be empty.
Application Parameters
User-defined parameters. Press Enter to separate multiple parameters.
These parameters can be replaced with global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} to replace a parameter with this variable after the job is submitted.
Spark Arguments (--conf)
Enter a parameter in the format of key=value. Press Enter to separate multiple key-value pairs.
These parameters can be replaced with global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted.
NOTE:
- The JVM garbage collection algorithm cannot be customized for Spark jobs.
- If the Spark version is 3.1.1, configure Spark parameters (--conf) to select a dependent module. For an example configuration, see Table 2.
Job Type
This parameter is available only when you select a CCE queue. It specifies the type of Spark image used by the job. The options are as follows:
- Basic: Base images provided by DLI. Select this option for non-AI jobs.
- Image: Custom Spark images. Select an existing image name and version on SWR.
JAR Package Dependencies (--jars)
JAR file on which the Spark job depends. You can enter the JAR file name or the OBS path of the JAR file in the format of obs://Bucket name/Folder path/JAR file name.
Python File Dependencies (--py-files)
py-files on which the Spark job depends. You can enter the Python file name or the corresponding OBS path of the Python file. The format is as follows: obs://Bucket name/Folder name/File name.
Other Dependencies (--files)
Other files on which the Spark job depends. You can enter the name of the dependency file or the corresponding OBS path of the dependency file. The format is as follows: obs://Bucket name/Folder name/File name.
Group Name
If you select a group when creating a package, you can select all the packages and files in the group. For how to create a package, see Creating a Package.
Access Metadata
Whether to access metadata through Spark jobs. For details, see the Data Lake Insight Developer Guide.
Retry upon Failure
Indicates whether to retry a failed job.
If you select Yes, you need to set the following parameters:
Maximum Retries: Maximum number of retries. The value can be up to 100.
Advanced Settings
Table 2 Spark parameter (--conf) configuration by data source
CSS
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
GaussDB(DWS)
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/*
HBase
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/*
OpenTSDB
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/*
RDS
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
Redis
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/*
Mongo
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/mongo/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/mongo/*
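The console options above correspond to the familiar spark-submit flags shown in parentheses (--name, --class, --conf, --jars, and so on). As an illustrative sketch only, with placeholder bucket, file, and class names (actual DLI submissions go through the console or the DLI REST API, not a local spark-submit), an equivalent command line might look like:

```shell
# Illustrative only: all names are placeholders; DLI jobs are normally
# submitted via the console or the DLI batch-job REST API.
spark-submit \
  --name my_batch_job \
  --class com.example.MainClass \
  --conf spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/* \
  --conf spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/* \
  --jars obs://my-bucket/deps/dep1.jar \
  --files obs://my-bucket/conf/app.conf \
  obs://my-bucket/jobs/my_job.jar
```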
Figure 2 Creating a Spark job - advanced settings
- Set the following parameters in advanced settings:
- Select Dependency Resources: For details about the parameters, see Table 3.
- Configure Resources: For details about the parameters, see Table 4.
The parallelism degree of Spark resources is jointly determined by the number of Executors and the number of Executor CPU cores.
Maximum number of tasks that can be concurrently executed = Number of Executors x Number of Executor CPU cores
You can properly plan compute resource specifications based on the compute CUs of the queue you have purchased.
Note that a Spark job is executed jointly by multiple roles, such as the driver and executors. The number of Executors multiplied by the number of Executor CPU cores must therefore be less than the number of compute CUs of the queue; otherwise, other roles may fail to start. For more information about Spark roles, see the Apache Spark documentation.
Calculation formula for Spark job parameters:
- CUs = Driver Cores + Executors x Executor Cores
- Memory = Driver Memory + (Executors x Executor Memory)
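The formulas above can be checked with a short calculation. This is a plain arithmetic sketch; the example numbers are illustrative, not DLI defaults:

```python
def spark_job_cus(driver_cores: int, executors: int, executor_cores: int) -> int:
    # CUs = Driver Cores + Executors x Executor Cores
    return driver_cores + executors * executor_cores

def spark_job_memory_gb(driver_memory_gb: int, executors: int,
                        executor_memory_gb: int) -> int:
    # Memory = Driver Memory + (Executors x Executor Memory)
    return driver_memory_gb + executors * executor_memory_gb

# Example: 1 driver core and 4 executors with 2 cores / 8 GB each
print(spark_job_cus(driver_cores=1, executors=4, executor_cores=2))  # 9 CUs
print(spark_job_memory_gb(driver_memory_gb=4, executors=4,
                          executor_memory_gb=8))                     # 36 GB
```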
Table 3 Parameters for selecting dependency resources
modules
If the Spark version is 3.1.1, you do not need to select a module. Configure Spark parameters (--conf).
Dependency modules provided by DLI for executing datasource connection jobs. To access different services, you need to select different modules.
- CloudTable/MRS HBase: sys.datasource.hbase
- DDS: sys.datasource.mongo
- CloudTable/MRS OpenTSDB: sys.datasource.opentsdb
- DWS: sys.datasource.dws
- RDS MySQL: sys.datasource.rds
- RDS PostgreSQL: sys.datasource.rds
- DCS: sys.datasource.redis
- CSS: sys.datasource.css
DLI internal modules include:
- sys.res.dli-v2
- sys.res.dli
- sys.datasource.dli-inner-table
Resource Package
JAR package on which the Spark job depends.
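In Write API mode, the same module selection is expressed in the batch-job request body. The sketch below is hypothetical: the field names follow the general shape of the DLI batch-job API but should be verified against the Data Lake Insight API Reference, and the bucket, file, and class names are placeholders.

```python
import json

# Hypothetical request body for a DLI batch (Spark) job; verify field
# names against the Data Lake Insight API Reference before use.
payload = {
    "file": "obs://my-bucket/jobs/my_job.jar",   # placeholder application path
    "className": "com.example.MainClass",        # placeholder main class
    "modules": ["sys.datasource.hbase"],         # dependency module from Table 3
    "jars": ["obs://my-bucket/deps/dep1.jar"],   # placeholder JAR dependency
}

print(json.dumps(payload, indent=2))
```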
Table 4 Resource specification parameters
Resource Specifications
Select a resource specification from the drop-down list box. The system provides three resource specification options for you to choose from.
Resource specifications involve the following parameters:
- Executor Memory
- Executor Cores
- Executors
- Driver Cores
- Driver Memory
If you modify these items, your modified settings are used.
Executor Memory
Customize the configuration item based on the selected resource specifications.
Memory of each Executor. It is recommended that the ratio of Executor CPU cores to Executor memory be 1:4.
Executor Cores
Number of CPU cores of each Executor applied for by Spark jobs, which determines the capability of each Executor to execute tasks concurrently.
Executors
Number of Executors applied for by a Spark job.
Driver Cores
Number of CPU cores of the driver.
Driver Memory
Driver memory size. It is recommended that the ratio of the number of driver CPU cores to the driver memory be 1:4.
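A chosen specification can be sanity-checked against the guidance above: total parallelism plus the driver must fit within the queue's compute CUs, and a 1:4 core-to-memory ratio is recommended for executors. This is an illustrative helper, not a DLI API:

```python
def check_spec(queue_cus: int, executors: int, executor_cores: int,
               executor_memory_gb: int, driver_cores: int) -> list:
    """Return warnings for a Spark resource specification (illustrative only)."""
    warnings = []
    # Executors x Executor cores must stay below the queue's compute CUs,
    # leaving room for the driver and other roles to start.
    if executors * executor_cores + driver_cores > queue_cus:
        warnings.append("parallelism exceeds queue CUs")
    # Recommended ratio of executor CPU cores to executor memory is 1:4.
    if executor_memory_gb != executor_cores * 4:
        warnings.append("executor core:memory ratio is not 1:4")
    return warnings

print(check_spec(queue_cus=16, executors=4, executor_cores=2,
                 executor_memory_gb=8, driver_cores=1))  # []
```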
- Click Execute in the upper right corner of the Spark job editing page.
After the message "Batch processing job submitted successfully" is displayed, you can view the status and logs of the submitted job on the Spark Jobs page.