
Number of Files in the TimelineServer Directory Reaches the Upper Limit

Symptom

In an MRS 3.x cluster, ResourceManager logs show that the number of items in the TimelineServer data directory has reached the upper limit, and a large number of error logs are printed.

The exception log is as follows:

The directory item limit of /tmp/hadoop-omm/yarn/timeline/generic-history/ApplicationHistoryDataRoot is exceeded: limit=1048576 items=1048576

Cause Analysis

In MRS 3.x, TimelineServer uses an HDFS directory (for example, /tmp/hadoop-omm/yarn/timeline/generic-history/ApplicationHistoryDataRoot in the preceding error message) to store historical task information. Over time, files accumulate in this directory until the number of items reaches the upper limit configured in HDFS (the default value of dfs.namenode.fs-limits.max-directory-items is 1048576).
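
To confirm the problem, you can compare the configured limit with the current number of items in the directory. The following commands are an example check, assuming an HDFS client with sufficient permissions and the directory path taken from the preceding error message:

hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items

hdfs dfs -count /tmp/hadoop-omm/yarn/timeline/generic-history/ApplicationHistoryDataRoot

The first command prints the configured maximum number of items per directory, and the second prints the directory count, file count, and total size under the TimelineServer data directory.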

The yarn.timeline-service.generic-application-history.enabled parameter specifies whether the client obtains application task data from TimelineServer. In this case, set it to false so that application task data is obtained from ResourceManager instead.
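
To check whether this parameter has already been customized on a cluster client, you can search the client's yarn-site.xml. The client installation path below is only an example and may differ in your environment:

grep -A 1 "yarn.timeline-service.generic-application-history.enabled" /opt/client/Yarn/config/yarn-site.xml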

Procedure

  1. Log in to FusionInsight Manager and choose Cluster > Services > Yarn > Configurations > All Configurations.
  2. In the navigation pane on the left, choose Yarn(Service) > Customization. Locate the yarn.yarn-site.customized.configs parameter in the right pane, set the parameter name to yarn.timeline-service.generic-application-history.enabled and its value to false, and click Save.
  3. Perform a rolling restart of the ResourceManager and TimelineServer instances.

    Click the Instance tab of the Yarn service, select all ResourceManager and TimelineServer instances, click More, and select Instance Rolling Restart.

  4. (Optional) Perform a rolling restart of the NodeManager instances during off-peak hours based on service requirements.
  5. After the instances are restarted, delete the directory for storing historical task information from HDFS, for example, /tmp/hadoop-omm/yarn/timeline/generic-history/ApplicationHistoryDataRoot.

    1. Log in to the node where the client is installed as the client installation user, go to the client installation directory, and configure environment variables.

      cd Client installation directory

      source bigdata_env

    2. Run the following command to authenticate the user (skip this step if Kerberos authentication is disabled for the cluster):

      kinit Service user

    3. Run the following command to delete the directory from HDFS:

      hdfs dfs -rm -r /tmp/hadoop-omm/yarn/timeline/generic-history/ApplicationHistoryDataRoot/
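
    (Optional) Run the following command to confirm that the data directory has been removed:

      hdfs dfs -ls /tmp/hadoop-omm/yarn/timeline/generic-history/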