Updated on 2022-12-08 GMT+08:00

Temporary Files Are Not Deleted When an MR Job Is Abnormal

Issue

Temporary files are not deleted when an MR job is abnormal.

Symptom

There are too many files in the HDFS temporary directory, occupying excessive storage space.

Cause Analysis

When an MR job is submitted, its configuration files, JAR files, and any files added with the -files option are stored in a temporary directory on HDFS so that the launched containers can obtain them. The storage path is specified by the configuration item yarn.app.mapreduce.am.staging-dir. The default value is /tmp/hadoop-yarn/staging.

After an MR job completes normally, its temporary files are deleted. However, when the Yarn task corresponding to the job exits abnormally, the temporary files are left behind. As a result, the number of files in the temporary directory grows over time, occupying more and more storage space.
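If a non-default staging path is needed, the property above can be set explicitly in mapred-site.xml. A minimal fragment, shown here with the default path, might look like:

```xml
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
</property>
```

Whatever path is configured here is the directory the cleanup procedure below must target.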

Procedure

  1. Log in to a cluster.

    1. Log in to any master node as user root. The user password is the one defined during cluster creation.
    2. If Kerberos authentication is enabled for the cluster, run the following commands to go to the client installation directory and configure environment variables. Then, authenticate the user and enter the password as prompted. Obtain the password from an administrator.

      cd Client installation directory

      source bigdata_env

      kinit hdfs

    3. If Kerberos authentication is not enabled for the cluster, run the following commands to switch to user omm and go to the client installation directory to configure environment variables:

      su - omm

      cd Client installation directory

      source bigdata_env

  2. Obtain the file list.

    hdfs dfs -ls /tmp/hadoop-yarn/staging/*/.staging/ | grep "^drwx" | awk '{print $8}' > job_file_list

    The job_file_list file contains the folder list of all jobs. The following shows an example of the file content:

/tmp/hadoop-yarn/staging/omm/.staging/job_<Timestamp>_<ID>

  3. Collect statistics on running jobs.

    mapred job -list 2>/dev/null | grep job_ | awk '{print $1}' > run_job_list

    The run_job_list file contains the IDs of running jobs. The content format is as follows:

    job_<Timestamp>_<ID>

  4. Remove running jobs from the job_file_list file. This ensures that data of running jobs is not deleted by mistake when expired data is deleted in the next step.

    cat run_job_list | while read line; do sed -i "/$line/d" job_file_list; done

  5. Delete expired data.

    cat job_file_list | while read line; do hdfs dfs -rm -r "$line"; done

  6. Delete temporary files.

    rm -rf run_job_list job_file_list
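The per-line sed loop in step 4 can also be written as a single grep pass over the two lists. A minimal sketch with hypothetical job IDs (the hdfs commands in steps 2 and 3 would produce the real lists):

```shell
# Sample data mimicking job_file_list and run_job_list (hypothetical IDs)
printf '%s\n' \
  /tmp/hadoop-yarn/staging/omm/.staging/job_1650000000000_0001 \
  /tmp/hadoop-yarn/staging/omm/.staging/job_1650000000000_0002 \
  /tmp/hadoop-yarn/staging/omm/.staging/job_1650000000000_0003 \
  > job_file_list
printf 'job_1650000000000_0002\n' > run_job_list

# One-pass alternative to the per-line sed loop in step 4:
# -v inverts the match, -F treats patterns as fixed strings,
# -f reads one pattern per line from run_job_list
grep -vFf run_job_list job_file_list > expired_job_list

cat expired_job_list
```

Here expired_job_list keeps only the staging folders of jobs 0001 and 0003 (job 0002 is running) and would feed the deletion loop in step 5; the sed loop in the procedure is equivalent but invokes sed once per running job.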