Updated on 2022-02-22 GMT+08:00

Developing a Job

DLF allows you to develop existing jobs.

Prerequisites

You have created a job. For details about how to create a job, see Creating a Job.

Compiling Job Nodes

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Data Development > Develop Job.
  3. In the job directory, double-click a job that you want to develop. The job development page is displayed.
  4. Drag the desired node to the canvas, move the mouse pointer over the node, and then drag the connection icon that appears to another node to connect the two nodes.

    Each job can contain a maximum of 200 nodes.
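
On the canvas, a job is essentially a directed graph of nodes and connections. As a purely illustrative model (a Python sketch, not a DLF API; the class and names are hypothetical), the canvas can be thought of as follows, including the 200-node limit:

  # Conceptual sketch only: a job canvas as a directed graph of nodes and connections.
  MAX_NODES_PER_JOB = 200

  class JobCanvas:
      def __init__(self):
          self.nodes = {}          # node name -> node configuration
          self.connections = []    # (upstream node, downstream node) pairs

      def add_node(self, name, config=None):
          # Mirrors dragging a node onto the canvas.
          if len(self.nodes) >= MAX_NODES_PER_JOB:
              raise ValueError("A job can contain at most 200 nodes.")
          self.nodes[name] = config or {}

      def connect(self, upstream, downstream):
          # Mirrors dragging the connection icon from one node to another.
          if upstream not in self.nodes or downstream not in self.nodes:
              raise KeyError("Both nodes must be on the canvas before connecting them.")
          self.connections.append((upstream, downstream))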

Configuring Basic Job Information

After you configure the owner and priority for a job, you can search for the job by the owner and priority. The procedure is as follows:

Select a job. On the job development page, click the Basic Job Information tab. On the displayed page, configure parameters. Table 1 describes the parameters.

Table 1 Basic job information

Parameter

Description

Owner

The owner configured during job creation is automatically used. You can change the value.

Executor

User that executes the job. If an executor is specified, the job is executed by that user. If this parameter is left unspecified, the job is executed by the user who submits the job for startup.

Priority

The priority configured during job creation is automatically used. You can change the value.

Execution Timeout

Timeout of the job instance. If this parameter is set to 0 or left empty, it does not take effect. If the notification function is enabled for the job and the execution time of the job instance exceeds this value, the system sends the specified notification.

Custom Parameter

Set the name and value of the parameter.
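
As a rough summary of how these fields relate to one another, the following sketch models them as a plain Python structure. It is illustrative only; the class and field names are assumptions, not part of DLF:

  # Illustrative only: field names are assumptions, not the DLF API.
  from dataclasses import dataclass, field
  from typing import Optional

  @dataclass
  class BasicJobInfo:
      owner: str                        # defaults to the owner set at job creation
      priority: int                     # defaults to the priority set at job creation
      executor: Optional[str] = None    # if unset, the submitting user executes the job
      execution_timeout_min: int = 0    # 0 or unset means the timeout does not take effect
      custom_params: dict = field(default_factory=dict)

      def effective_executor(self, submitting_user: str) -> str:
          # The executor runs the job when specified; otherwise the submitter does.
          return self.executor or submitting_user

      def timeout_enabled(self) -> bool:
          return self.execution_timeout_min > 0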

Configuring Job Parameters

Job parameters can be used globally by any node in a job. The procedure is as follows:

Select a job. On the job development page, click the Job Parameter Setup tab. On the displayed page, configure parameters. Table 2 describes the parameters.

Table 2 Job parameter setup

Function

Description

Variable Parameter

Add

Click Add and enter the variable parameter name and parameter value in the text boxes.

  • Parameter name

    The parameter name must be unique, consist of 1 to 64 characters, and contain only letters, digits, underscores (_), hyphens (-), less-than signs (<), and greater-than signs (>).

  • Parameter Value
    • A function-type parameter value starts with a dollar sign ($). Example: $getCurrentTime(@@yyyyMMdd@@,0)
    • A string-type parameter value is a character string. Example: str1

      When a character string and a function are used together, enclose the character string with @@ and connect it to the function with +. Example: @@str1@@+$getCurrentTime(@@yyyyMMdd@@,0)

    • A numeric-type parameter value is a number or an arithmetic expression.

After the parameter is configured, reference it in the job in the format ${parameter name} (see the sketch after this table).

Modify

Modify the parameter name and parameter value in text boxes and save the modifications.

Save

Click Save to save the settings.

Delete

Click the delete icon next to the parameter value text box to delete the job parameter.

Constant Parameter

Add

Click Add and enter the constant parameter name and parameter value in the text boxes.

  • Parameter name

    The parameter name must be unique, consist of 1 to 64 characters, and contain only letters, digits, underscores (_), hyphens (-), less-than signs (<), and greater-than signs (>).

  • Parameter Value
    • A function-type parameter value starts with a dollar sign ($). Example: $getCurrentTime(@@yyyyMMdd@@,0)
    • A string-type parameter value is a character string. Example: str1

      When a character string and a function are used together, enclose the character string with @@ and connect it to the function with +. Example: @@str1@@+$getCurrentTime(@@yyyyMMdd@@,0)

    • A numeric-type parameter value is a number or an arithmetic expression.

After the parameter is configured, reference it in the job in the format ${parameter name}.

Modify

Modify the parameter name and parameter value in text boxes and save the modifications.

Save

Click Save to save the settings.

Delete

Click the delete icon next to the parameter value text box to delete the job constant.
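
To make the parameter syntax above concrete, the following Python sketch resolves a value written in the @@string@@+$function style and substitutes ${parameter name} references in a node statement. The helper names are hypothetical, and $getCurrentTime is only emulated here; the real function is evaluated by Data Development when the job runs:

  import re
  from datetime import datetime, timedelta

  def get_current_time(fmt: str, day_offset: int) -> str:
      # Assumed meaning of $getCurrentTime(@@yyyyMMdd@@,n): the current date shifted by n days.
      java_to_py = {"yyyyMMdd": "%Y%m%d", "yyyy-MM-dd": "%Y-%m-%d"}
      when = datetime.now() + timedelta(days=day_offset)
      return when.strftime(java_to_py.get(fmt, "%Y%m%d"))

  def resolve_value(expr: str) -> str:
      # Splits expressions such as @@str1@@+$getCurrentTime(@@yyyyMMdd@@,0) on "+".
      parts = []
      for piece in expr.split("+"):
          piece = piece.strip()
          if piece.startswith("@@") and piece.endswith("@@"):
              parts.append(piece[2:-2])                      # literal character string
          elif piece.startswith("$getCurrentTime("):
              fmt, offset = piece[len("$getCurrentTime("):-1].split(",")
              parts.append(get_current_time(fmt.strip("@ "), int(offset)))
          else:
              parts.append(piece)                            # numeric value or expression
      return "".join(parts)

  def substitute(statement: str, params: dict) -> str:
      # Replaces ${parameter name} references with resolved parameter values.
      return re.sub(r"\$\{([\w<>-]+)\}",
                    lambda m: resolve_value(params[m.group(1)]), statement)

  params = {"dt": "$getCurrentTime(@@yyyyMMdd@@,0)"}
  print(substitute("SELECT * FROM sales WHERE dt = '${dt}'", params))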

Configuring Job Scheduling Tasks

You can configure job scheduling tasks for batch jobs. There are three scheduling types available: Run once, Run periodically, and Event-driven. The procedure is as follows:

Select a job. On the job development page, click the Scheduling Parameter Setup tab. On the displayed page, configure parameters. Table 3 describes the parameters.

Table 3 Scheduling parameter setup

Parameter

Description

Schedule Type

Job schedule type. Possible values:

  • Run once: The job will be run only once.
  • Run periodically: The job will be run periodically.
  • Event-driven: The job will be run when certain external conditions are met.

Parameters for Run periodically

Effective Time

Period during which a job runs.

Schedule Cycle

Frequency at which a job is run. The job can be run once every:

  • Minute
  • Hour
  • Day
  • Week
  • Month

Dependency Job

Job that the current job depends on. The constraints are as follows (see the sketch after this table):

  • A short-cycle job cannot depend on a long-cycle job.
  • A job whose schedule cycle is Week cannot depend on a job whose schedule cycle is Minute.
  • A job whose schedule cycle is Week cannot depend on or be depended on by another job.
  • A job whose schedule cycle is Month can depend only on the job whose schedule cycle is Day.

Action After Dependency Job Failure

Action that will be performed on the current job if the dependency job fails to be executed. Possible values:

  • Suspend

    The current job will be suspended. A suspended job will block the execution of the subsequent jobs. You can manually force the dependency job to succeed to solve the blocking problem. For details, see Monitoring a Batch Job.

  • Continue

    The current job will continue to be performed.

  • Terminate

    The current job will be terminated. Then the status of the current job will become Canceled.

Run upon completion of the dependency job's last schedule cycle.

Specifies whether the current job can be executed only after the last schedule cycle of the dependency job is complete.

Cross-Cycle Dependency

Dependency between instances in the job. Possible values:

  • Independent of the previous schedule cycle
  • Self-dependent (The current job can continue to run only after the previous schedule cycle ends.)

Parameters for Event-driven

Triggering Event Type

Type of the event that triggers job running.

DIS Stream Name

Name of the DIS stream. When a new message is sent to the specified DIS stream, Data Development transfers the new message to the job to trigger the job running.

Concurrent Events

Number of jobs that can be concurrently processed. The maximum number of concurrent events is 128.

Event Detection Interval

Interval at which the system detects the DIS stream or OBS path for new messages. The unit of the interval can be set to second or minute.

Access Policy

Select the location where data is to be read:

  • Access from the previous location: For the first access, data is accessed from the most recently recorded location. For subsequent accesses, data is accessed from the previously recorded location.
  • Access from the latest location: Data is accessed from the most recently recorded location each time.

Failure Policy

Select the policy to be applied after scheduling fails.

  • Stop scheduling
  • Ignore failure and proceed
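
The dependency constraints above can be summarized as a simple check. The following sketch is a hypothetical helper, not a DLF API; the cycle ranking is an assumption used only to compare shorter and longer cycles:

  # Ranks are assumptions used only to express "shorter" versus "longer" cycles.
  CYCLE_RANK = {"Minute": 0, "Hour": 1, "Day": 2, "Week": 3, "Month": 4}

  def can_depend_on(job_cycle: str, dependency_cycle: str) -> bool:
      # A short-cycle job cannot depend on a long-cycle job.
      if CYCLE_RANK[job_cycle] < CYCLE_RANK[dependency_cycle]:
          return False
      # A weekly job cannot take part in dependencies in either direction.
      if "Week" in (job_cycle, dependency_cycle):
          return False
      # A monthly job can depend only on a daily job.
      if job_cycle == "Month" and dependency_cycle != "Day":
          return False
      return True

  assert can_depend_on("Day", "Hour")        # a shorter-cycle dependency is allowed
  assert not can_depend_on("Hour", "Day")    # a short-cycle job cannot depend on a long-cycle job
  assert not can_depend_on("Month", "Hour")  # a monthly job may depend only on a daily job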

Configuring Node Properties

Click a node in the canvas. On the displayed Node Properties page, configure node properties. For details, see Node Overview.

Configuring Node Scheduling Tasks

You can configure node scheduling tasks for real-time jobs. There are three scheduling types available: Run once, Run periodically, and Event-driven. The procedure is as follows:

Select a node. On the node development page, click the Scheduling Parameter Setup tab. On the displayed page, configure parameters. Table 4 describes the parameters.

Table 4 Node scheduling setup

Parameter

Description

Schedule Type

Node schedule type. Possible values:

  • Run once: The node will be run only once.
  • Run periodically: The node will be run periodically.
  • Event-driven: The node will be run when certain external conditions are met.

Parameters for Run periodically

Effective Time

Period during which the node runs.

Schedule Cycle

Frequency at which the node is run. The node can be run once every:

  • Minute
  • Hour
  • Day
  • Week
  • Month

Cross-Cycle Dependency

Dependency between instances of the node. Possible values:

  • Independent of the previous schedule cycle
  • Self-dependent (The current node can continue to run only after the previous schedule cycle ends.)

Parameters for Event-driven

Triggering Event Type

Type of the event that triggers the node running.

DIS Stream Name

Name of the DIS stream. When a new message is sent to the specified DIS stream, Data Development transfers the new message to the node to trigger the node running.

Concurrent Events

Number of jobs that can be concurrently processed. The maximum number of concurrent events is 10.

Event Detection Interval

Interval at which the system detects the DIS stream or OBS path for new messages. The unit of the interval can be set to second or minute.

Failure Policy

Select the policy to be applied after scheduling fails.

  • Stop scheduling
  • Ignore failure and proceed
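
Taken together, the event-driven parameters describe a polling loop: the DIS stream is checked at the event detection interval, each new message triggers a run, and the failure policy decides whether scheduling stops after a failed run. The following conceptual sketch (Python, not the DIS SDK; read_messages and trigger_node are hypothetical callbacks) puts them together:

  import time

  def run_event_driven(read_messages, trigger_node, interval_s=10,
                       max_concurrent_events=10, failure_policy="stop"):
      offset = 0
      while True:
          messages, offset = read_messages(offset)     # hypothetical stream reader
          # Crude stand-in for the concurrent-event limit.
          for message in messages[:max_concurrent_events]:
              try:
                  trigger_node(message)                # each new message triggers a run
              except Exception:
                  if failure_policy == "stop":         # Stop scheduling
                      return
                  # Ignore failure and proceed
          time.sleep(interval_s)                       # event detection interval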

More Node Functions

To obtain more node functions, right-click the node icon in the canvas and choose a function as required. Table 5 describes the node functions.

Table 5 More node functions

Function

Description

Configure

Goes to the Node Properties page of the node.

Delete

Deletes one or more nodes at the same time.

  • Deleting one node: Right-click the node icon in the canvas and choose Delete, or press the Delete key.
  • Deleting multiple nodes: Click the icons of the nodes to be deleted in the canvas while holding down Ctrl. Then right-click the blank area of the current job canvas and choose Delete, or press the Delete key.

Copy

Copies one or more nodes to any job.

  • Single-node copy: Right-click the node icon in the canvas, choose Copy, and paste the node to a target location; alternatively, click the node icon and press Ctrl+C and then Ctrl+V to paste the node to a target location. The copied node carries the configuration information of the original node.
  • Multi-node copy: Click the icons of the nodes to be copied in the canvas while holding down Ctrl. Then right-click the blank area of the canvas, choose Copy, and paste the nodes to a target location; alternatively, press Ctrl+C and then Ctrl+V to paste the nodes to a target location. The copied nodes carry the configuration information of the original nodes, but not the connection relationships between them (see the sketch after this table).

Test Run

Runs the node for a test.

Upload Simulation Data

Uploads the simulation data to the DIS Stream node. The following types of simulation data are provided:

  • Simulation data from track analysis
  • Simulation data from boiler abnormality monitoring

Add Note

Adds a note to the node. Each node can have multiple notes.

Edit Script

Goes to the script editing page and edits the associated script. This option is available only for the node associated with a script.
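
As noted for the Copy function above, pasted nodes keep their configuration but not the connections between them. A minimal sketch of that behavior (a hypothetical helper, not a DLF API):

  import copy

  def copy_nodes(canvas_nodes, selected):
      # Pasted nodes keep the configuration of the originals.
      pasted = {name + "_copy": copy.deepcopy(cfg)
                for name, cfg in canvas_nodes.items() if name in selected}
      # Connections are not carried over; redraw them in the target job.
      return pasted, []

  nodes = {"read_dis": {"type": "DIS Stream"}, "transform": {"type": "CDM Job"}}
  print(copy_nodes(nodes, {"read_dis", "transform"}))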

Saving and Starting Jobs

After a job is configured, complete the following operations:

Batch processing job

  1. Click the test icon to test the job.
  2. After the test is complete, click the save icon to save the job configuration.
  3. Click the start icon to start the job.

Real-time processing job

  1. Click the save icon to save the job configuration.
  2. Click the submit icon to submit and start the job.

    DLF allows you to modify a running real-time processing job. After the job is modified, click Save. On the page that is displayed, click OK to save the job.