Updated on 2022-12-08 GMT+08:00

How Do I Locate a Job Submission Failure?

Symptom

A user cannot submit jobs through DGC or on the MRS console.

Impact

Jobs cannot be submitted, and services are interrupted.

Introduction to the Operation Process

  1. All requests pass through APIG and are subject to the flow control configured on APIG.
  2. APIG forwards the request to the API gateway of the MRS management plane.
  3. The API node on the MRS management plane polls the Knox instances on the active and standby OMS nodes to determine which Knox is on the active OMS node.
  4. The API node on the MRS management plane submits the task to Knox on the active OMS node.
  5. Knox forwards the request to the Executor process on the same node.
  6. The Executor process submits the task to Yarn.
Figure 1 Job process
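
For reference, submitting a job by calling the MRS API directly follows the same path. The following is a minimal sketch, assuming the MRS v2 job-execution endpoint and an illustrative SparkSubmit payload; the region, project ID, cluster ID, token, and JAR path are placeholders, and the exact URI and fields should be checked against the API reference:

  # Job submission request that enters the flow at step 1 (APIG).
  curl -X POST "https://mrs.<region>.myhuaweicloud.com/v2/<project_id>/clusters/<cluster_id>/job-executions" \
       -H "X-Auth-Token: ${TOKEN}" \
       -H "Content-Type: application/json" \
       -d '{
             "job_name": "test_submit",
             "job_type": "SparkSubmit",
             "arguments": ["--class", "org.apache.spark.examples.SparkPi",
                           "/opt/client/Spark/spark/examples/jars/spark-examples.jar", "10"]
           }'

If such a request fails with an error code starting with "APIGW", the failure occurred before the request reached MRS (see step 1 of the procedure below).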

Procedure

Make preparations:

  • Check whether the job is submitted through DGC or on the MRS console.
  • Prepare the information listed in Table 1.
    Table 1 Items to be prepared before troubleshooting

    No. | Item                                  | How to Prepare
    ----|---------------------------------------|----------------------------------------------------------------------
    1   | Cluster account information           | Apply for the password of user admin in the cluster.
    2   | Node account information              | Apply for the passwords of users omm and root of the cluster nodes.
    3   | Secure Shell (SSH) remote login tool  | Prepare a tool such as PuTTY or SecureCRT.
    4   | Client                                | Install the cluster client.

  1. Locate the cause of the exception.

    Check the error code returned in the job log to determine whether the error was reported by APIG or by MRS.

    • If the error code is a public APIG error code (starting with "APIGW"), contact public APIG maintenance personnel.
    • If an error occurs on MRS, go to the next step.

  2. Check the running status of services and processes.

    1. Log in to Manager and check whether any service fault has occurred. If a job-related service or an underlying basic service is faulty, rectify that fault first.
    2. Check whether a critical alarm is generated.
    3. Log in to the active Master node.
    4. Run the following command to check whether the OMS status is normal and whether the Executor and Knox processes on the active OMS node are running properly (Knox runs in active-active mode; the Executor runs in single-active mode):

      /opt/Bigdata/om-0.0.1/sbin/status-oms.sh
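
      The output format varies by version; to focus on the two processes in question, the output can be filtered, for example:

      /opt/Bigdata/om-0.0.1/sbin/status-oms.sh | grep -iE "knox|executor"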

    5. Run the jmap -heap PID command as user omm to check the memory usage of the Knox and Executor processes. If the old-generation memory usage reaches 99.9%, a memory overflow has occurred.

      Run the netstat -anp | grep 8181 | grep LISTEN command to query the PID of the Executor process.

      Run the ps -ef | grep knox | grep -v grep command to query the PID of the Knox process.

      If a memory overflow has occurred, run the jmap -dump:format=b,file=/home/omm/temp.bin PID command to export the memory information, and then restart the process.
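
      The checks in this step can be combined into one hedged shell sketch; port 8181 and the "knox" keyword come from the commands above, and if ps returns several Knox PIDs, inspect each of them:

      # Find the Executor PID from its listening port and the Knox PID by name.
      exec_pid=$(netstat -anp | grep 8181 | grep LISTEN | awk '{print $7}' | cut -d/ -f1)
      knox_pid=$(ps -ef | grep knox | grep -v grep | awk '{print $2}' | head -1)
      # Check the old-generation usage of both processes (run as user omm).
      jmap -heap "$exec_pid"
      jmap -heap "$knox_pid"
      # If old-generation usage is about 99.9%, export the heap before restarting:
      # jmap -dump:format=b,file=/home/omm/temp.bin "$exec_pid"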

    6. View the native Yarn page to check the queue resource usage and whether the task has been submitted to Yarn.
      To open the native Yarn page, choose Components > Yarn > ResourceManager WebUI > ResourceManager (Active).
      Figure 2 Queue resource usage on the Yarn page
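
      If the client listed in Table 1 is installed, the same check can be made from the command line. A sketch, assuming the client is installed in /opt/client and the job runs in the default queue (run kinit first in a security cluster):

      source /opt/client/bigdata_env
      # Queue resource usage:
      yarn queue -status default
      # Check whether the task has reached Yarn:
      yarn application -list -appStates ALL | grep <job name or ID>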

  3. Locate the fault causing the task submission failure.

    1. Log in to the MRS management console and click the cluster name to go to the cluster details page.
    2. On the Jobs tab page, locate the row that contains the target job and click View Log in the Operation column.
      Figure 3 View the logs
    3. If there is no log or the log information is not detailed, copy the job ID in the Name/ID column.
    4. Run the following command on the active OMS node to check whether the task request reached Knox. If the request did not reach Knox, Knox may be faulty; in that case, restart Knox to rectify the fault.

      grep "mrsjob" /var/log/Bigdata/knox/logs/gateway-audit.log | tail -10

    5. Search for the job ID in the Executor log and view the error information.

      Log file path: /var/log/Bigdata/executor/logs/exe.log
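
      Both logs can be searched for one job ID at a time with a hedged sketch such as the following; replace the placeholder with the ID copied from the Name/ID column:

      job_id="<job ID>"
      grep "$job_id" /var/log/Bigdata/knox/logs/gateway-audit.log | tail -10
      grep "$job_id" /var/log/Bigdata/executor/logs/exe.log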

    6. Modify the /opt/executor/webapps/executor/WEB-INF/classes/log4j.properties file to enable DEBUG logging for the Executor (see the illustrative snippet below). Then submit a test task and check the Executor log to confirm the error reported during job submission.

      Log file path: /var/log/Bigdata/executor/logs/exe.log
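
      The logger names in that file are version-specific; as an illustration only, raising the root logger level is usually sufficient ("RFA" below is a placeholder appender name, so keep whatever appender list the file already has):

      # Before:
      # log4j.rootLogger=INFO, RFA
      # After (revert to INFO once the failure has been reproduced):
      log4j.rootLogger=DEBUG, RFA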

    7. If an error occurs in the Executor, run the following command to print the jstack information of the Executor process and check the current execution status of its threads:

      jstack PID > xxx.log
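
      A single snapshot can be misleading; the following sketch captures several snapshots for comparison (the file names and interval are examples):

      for i in 1 2 3; do
        jstack PID > /tmp/executor_jstack_${i}.log
        sleep 10
      done
      # Threads that remain in the same stack across snapshots deserve attention.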

    8. On the cluster details page, click the Jobs tab. Locate the row that contains the target job, and click View Details in the Operation column to obtain the actual job ID (applicationID).
    9. On the cluster details page, choose Components > Yarn > ResourceManager WebUI > ResourceManager (Active). On the native Yarn page that is displayed, click applicationID.
      Figure 4 Yarn applications
    10. View logs on the task details page.
      Figure 5 Task logs
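
      If the web UI is not accessible, the same task logs can be pulled with the Yarn CLI from the client, assuming the /opt/client installation path:

      source /opt/client/bigdata_env
      yarn logs -applicationId <applicationID>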