Monitoring a Batch Job

Updated on 2022-02-22 GMT+08:00

Batch Processing: Scheduling Jobs

After developing a job, you can manage its scheduling on the Monitor Job page: run, pause, restore, or stop the schedule.

Figure 1 Scheduling a job

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. In the Operation column of the job, click Run/Pause/Restore/Stop.
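
If you manage scheduling for many jobs, you can script the same operations over the REST API. The following is a minimal sketch, assuming the data development endpoints POST /v1/{project_id}/jobs/{job_name}/start and POST /v1/{project_id}/jobs/{job_name}/stop, IAM token authentication, and placeholder endpoint and IDs; verify the exact paths and headers in the API reference for your region.

    # Minimal sketch, not an official sample. The endpoint host, project ID,
    # and token are placeholders; the start/stop paths are assumptions to
    # verify against the Data Development API reference for your region.
    import requests

    ENDPOINT = "https://dayu-dlf.ap-southeast-1.myhuaweicloud.com"  # assumed regional endpoint
    PROJECT_ID = "your-project-id"  # placeholder
    TOKEN = "your-iam-token"        # placeholder IAM token

    def set_job_scheduling(job_name: str, action: str) -> None:
        """Start or stop scheduling for a batch job. action is 'start' or 'stop'."""
        url = f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/{job_name}/{action}"
        resp = requests.post(url, headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()

    set_job_scheduling("job_batch_demo", "start")  # begin periodic scheduling
    set_job_scheduling("job_batch_demo", "stop")   # stop scheduling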

Batch Processing: Scheduling Dependent Jobs

You can configure whether to start a batch job's dependent jobs when you schedule it on the job monitoring page. For details about how to configure job dependencies, see Configuring Job Scheduling Tasks.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab and select a job that has dependent jobs.
  4. In the Operation column of the job, click Schedule.

    When scheduling the job, you can start only the current job or also start its dependent jobs at the same time.
    Figure 2 Starting a job

Batch Processing: Notification Settings

You can configure DLF to notify you when a job succeeds or fails. The following describes how to configure a notification task. You can also configure a notification task on the Monitoring page; for details, see Managing a Notification.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. In the Operation column of the job, choose More > Set Notification. In the displayed dialog box, configure the notification parameters.
  5. Click OK.
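
Set Notification publishes through an SMN topic, so a topic with at least one subscription is typically needed first. The following is a minimal sketch for preparing one; it assumes the SMN topic and subscription endpoints shown, plus placeholder endpoint, project ID, token, and email address, and is not an official sample.

    # Minimal sketch: create an SMN topic and email subscription that a DLF
    # notification task can target. Paths follow the SMN API as understood
    # here; verify against the SMN API reference for your region.
    import requests

    ENDPOINT = "https://smn.ap-southeast-1.myhuaweicloud.com"  # assumed regional endpoint
    PROJECT_ID = "your-project-id"  # placeholder
    HEADERS = {"X-Auth-Token": "your-iam-token"}  # placeholder IAM token

    # 1. Create a topic for job success/failure alerts.
    topic = requests.post(
        f"{ENDPOINT}/v2/{PROJECT_ID}/notifications/topics",
        headers=HEADERS,
        json={"name": "dlf_job_alerts", "display_name": "DLF job alerts"},
    ).json()

    # 2. Subscribe an operator email address ("topic_urn" response field
    # is assumed per the SMN docs).
    requests.post(
        f"{ENDPOINT}/v2/{PROJECT_ID}/notifications/topics/{topic['topic_urn']}/subscriptions",
        headers=HEADERS,
        json={"protocol": "email", "endpoint": "ops@example.com"},  # placeholder address
    ).raise_for_status()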

Batch Processing: Instance Monitoring

You can view the running records of all instances of a job on the Monitor Instance page.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. In the Operation column of a job, choose More > Monitor Instance to view the running records of all instances of the job.

    • For details about the Operation column of the instance, see Table 2.
    • For details about the Operation column of the node, see Table 3.
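
The same records can also be pulled programmatically, for example to feed an external dashboard. The sketch below is illustrative only: the endpoint path, query parameters, and response fields are assumptions, so check the instance-list API in the Data Development API reference for your region.

    # Minimal sketch: list a job's instance run records over REST. The path
    # "/jobs/instances/detail", the query parameters, and the response fields
    # are assumptions for illustration, not a confirmed contract.
    import requests

    ENDPOINT = "https://dayu-dlf.ap-southeast-1.myhuaweicloud.com"  # assumed regional endpoint
    PROJECT_ID = "your-project-id"  # placeholder

    resp = requests.get(
        f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/instances/detail",  # assumed path
        headers={"X-Auth-Token": "your-iam-token"},           # placeholder token
        params={"jobName": "job_batch_demo", "limit": 10, "offset": 0},
    )
    resp.raise_for_status()
    for inst in resp.json().get("instances", []):  # field names are assumptions
        print(inst.get("instanceId"), inst.get("status"), inst.get("planTime"))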

Batch Processing: Scheduling Configuration

You can perform the following steps to go to the development page of a specific job.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. In the Operation column of a job, choose More > Configure Scheduling.

Batch Processing: Job Dependency View

You can view the dependencies between jobs.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. Click the job name and click the Job Dependencies tab. View the dependencies between jobs.

    Figure 3 Job dependencies view

    Click a job in the view. The development page of the job will be displayed.

Batch Processing: PatchData

When a job's scheduling task runs, it generates a series of instances over a period of time; this series of instances is called PatchData. PatchData can be used to fix job instances that have data errors in the historical records or to build job records for debugging programs.

Only periodically scheduled jobs support PatchData. For details about the execution records of PatchData, see PatchData Monitoring.

NOTE:

Do not modify the job configuration when PatchData is being performed. Otherwise, job instances generated during PatchData will be affected.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. In the Operation column of the job, choose More > PatchData.
  5. Configure the PatchData parameters. Table 1 describes the parameters.

    Figure 4 PatchData parameters
    Table 1 Parameter description

    PatchData Name: Name of the automatically generated PatchData task. The value can be modified.

    Job Name: Name of the job that requires PatchData.

    Date: Period of time for which PatchData is required.

    Parallel Instances: Number of instances to run at the same time. A maximum of five instances can run concurrently.

    Downstream Job Requiring PatchData: Downstream jobs (jobs that depend on the current job) that require PatchData. You can select more than one downstream job.

  6. Click OK. The system starts to perform PatchData and the PatchData Monitoring page is displayed.
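
For recurring backfills, submitting PatchData programmatically can save console round trips. The sketch below is hypothetical: the endpoint path and request fields simply mirror the Table 1 parameters, so if your region exposes a PatchData (supplement data) API, follow its reference instead.

    # Hypothetical sketch only: "/patch-data" and the field names below mirror
    # Table 1 rather than a confirmed API contract.
    import requests

    ENDPOINT = "https://dayu-dlf.ap-southeast-1.myhuaweicloud.com"  # assumed regional endpoint
    PROJECT_ID = "your-project-id"  # placeholder

    body = {
        "patch_data_name": "P_job_batch_demo_001",   # PatchData Name
        "job_name": "job_batch_demo",                # Job Name
        "start_date": "2022-02-01T00:00:00+08:00",   # Date: start of the range
        "end_date": "2022-02-21T23:59:59+08:00",     # Date: end of the range
        "parallel_instances": 3,                     # at most 5 run concurrently
        "downstream_jobs": ["job_downstream_demo"],  # Downstream Job Requiring PatchData
    }
    resp = requests.post(
        f"{ENDPOINT}/v1/{PROJECT_ID}/patch-data",    # hypothetical path
        headers={"X-Auth-Token": "your-iam-token"},  # placeholder token
        json=body,
    )
    resp.raise_for_status()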

Batch Processing: Batch Processing Jobs

You can schedule and stop jobs and configure notification tasks in batches.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. Select the jobs and click Schedule/Stop/Configure Notification to process the jobs in batches.

Batch Processing: Viewing Latest Instances

This function enables you to view information about the latest five instances of a job.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. Click the expand icon in front of the job name. The latest instances are displayed, and you can view details about the nodes contained in them.

Batch Processing: Viewing All Instances

You can view all running records of a job on the Running History page and perform more operations on instances or nodes based on site requirements.

  1. Log in to the DLF console.
  2. In the navigation tree of the Data Development console, choose Monitoring > Monitor Job.
  3. Click the Batch Job Monitor tab.
  4. Click a job name. The Running History page is displayed.

    You can stop, rerun, continue to run, or forcibly run jobs in batches. For details, see Table 2.

    When multiple instances are rerun in batches, the sequence is as follows:

    • If a job does not depend on its previous schedule cycle, its instances rerun concurrently.
    • If a job depends on its own previous schedule cycle, its instances rerun in serial mode: the instance that finished first in the previous schedule cycle is the first one to rerun. (A minimal sketch of this ordering follows Table 3.)
    Figure 5 Batch operations

  5. View actions in the Operation column of an instance. Table 2 describes the actions that can be performed on the instance.

    Table 2 Actions for an instance

    Stop: Stops an instance that is in the Waiting, Running, or Abnormal state.

    Rerun: Reruns an instance that is in the Succeeded or Canceled state.

    View Waiting Job Instance: When an instance is in the Waiting state, views the job instances it is waiting for.

    Continue: If an instance is in the Abnormal state, you can click Continue to begin running the subsequent nodes in the instance.
    NOTE: This operation can be performed only when Failure Policy is set to Suspend the current job execution plan. To view the current failure policy, click a node and then click Advanced Settings on the Node Properties page.

    Succeed: Forcibly changes the status of an instance from Abnormal, Canceled, or Failed to Succeed.

    View: Goes to the job development page to view job information.

  6. Click the expand icon in front of an instance. The running records of all nodes in the instance are displayed.
  7. View actions in the Operation column of a node. Table 3 describes the actions that can be performed on the node.

    Table 3 Actions for a node

    View Log: Views the log information of the node.

    Manual Retry: To run a node again after it fails, click Retry.
    NOTE: This operation can be performed only when Failure Policy is set to Suspend the current job execution plan. To view the current failure policy, click a node and then click Advanced Settings on the Node Properties page.

    Succeed: To change the status of a failed node to Succeed, click Succeed.
    NOTE: This operation can be performed only when Failure Policy is set to Suspend the current job execution plan. To view the current failure policy, click a node and then click Advanced Settings on the Node Properties page.

    Skip: To skip a node that is to be run or that has been paused, click Skip.

    Pause: To pause a node that is to be run, click Pause. Nodes queued after the paused node will be blocked.

    Resume: To resume a paused node, click Resume.
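
The rerun ordering described in step 4 can be captured in a few lines. The following is a minimal sketch of that logic, not DLF source code: a self-dependent job reruns its instances serially, ordered by when they finished in the previous schedule cycle, while a job without that dependency reruns its instances concurrently.

    # Minimal sketch of the documented batch-rerun ordering (illustrative,
    # not DLF internals).
    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass

    @dataclass
    class Instance:
        instance_id: str
        prev_finish_time: float  # finish time in the previous schedule cycle

    def rerun(instance: Instance) -> None:
        print(f"rerunning {instance.instance_id}")

    def batch_rerun(instances: list[Instance], self_dependent: bool) -> None:
        if self_dependent:
            # Serial: the instance that finished first in the previous cycle
            # reruns first.
            for inst in sorted(instances, key=lambda i: i.prev_finish_time):
                rerun(inst)
        else:
            # No dependency on the previous cycle: rerun concurrently.
            with ThreadPoolExecutor() as pool:
                list(pool.map(rerun, instances))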
