Updated on 2025-12-10 GMT+08:00

Restoring Manager Data (MRS 2.x and Earlier)

Scenarios

Metadata restoration is required in the following scenarios:

  • Data is unexpectedly modified or deleted and needs to be retrieved.
  • A major metadata operation (such as an upgrade or a significant adjustment) causes an exception in system data or fails to achieve the expected result.
  • All modules fail and become unavailable.
  • Data is migrated to a new cluster.

This section describes how to create metadata restoration tasks on MRS Manager. The system supports manual data restoration only.

MRS clusters support the following data path types for restoring Manager data:

  • LocalDir: indicates that data is restored from the local disk of the active management node.
  • LocalHDFS: indicates that data is restored from the HDFS directory of the current cluster.

Note the following before restoring data:

  • Data can be restored only when the system version is the same as the version used during data backup.
  • To restore data when services are running properly, manually back up the latest management data before performing the restoration. Otherwise, the data generated after the data backup and before the data restoration will be lost.
  • Restore data using the OMS data and LdapServer data backed up at the same time. Otherwise, services and operations may fail.
  • By default, MRS clusters use DBService to store Hive metadata.

Impact on the System

  • After the data is restored, the data generated after the data backup and before the data restoration is lost.
  • After the data is restored, the configurations of the components that depend on DBService may expire and these components need to be restarted.

Prerequisites

  • The OMS and LdapServer backup files to be used were created at the same time.
  • The status of the OMS resources and the LdapServer instances is normal. If the status is abnormal, data restoration cannot be performed.
  • The status of the cluster hosts and services is normal. If the status is abnormal, data restoration cannot be performed.
  • The cluster host topologies during data restoration and data backup are the same. If the topologies are different, data restoration cannot be performed and you need to back up data again.
  • The services added to the cluster during data restoration and data backup are the same. If the services are different, data restoration cannot be performed and you need to back up data again.
  • The status of the active and standby DBService instances is normal. If the status is abnormal, data restoration cannot be performed.
  • The upper-layer applications that depend on the MRS cluster have been stopped.
  • On MRS Manager, you have stopped all the NameNode role instances whose data is to be recovered. Other HDFS role instances are running properly. After data is recovered, the NameNode role instances need to be restarted and cannot be accessed before the restart.
  • You have checked that NameNode backup files are stored in the Data save path/LocalBackup/ directory on the active management node.
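The last prerequisite above can be checked with a quick sketch. The function below is our own suggestion, not part of the product; the argument is your cluster's actual Data save path, and the example path in the comment is hypothetical.

```shell
# Sketch: check whether NameNode backup archives exist anywhere under
# the <Data save path>/LocalBackup/ directory on the active management
# node. The path passed in the usage example is a made-up placeholder.
has_namenode_backup() {
  find "$1/LocalBackup" -type f -name '*NameNode*.tar.gz' 2>/dev/null | grep -q .
}

# Usage (replace /srv/BigData with your cluster's Data save path):
# has_namenode_backup /srv/BigData && echo "NameNode backup found"
```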

Restoring Manager Data

  1. Check the location of backup data.

    1. On MRS Manager, choose System > Back Up Data.
    2. In the row containing the specified backup task, choose More > View History in the Operation column to display the task's historical execution records. In the displayed window, locate a successful record and click View in the Backup Path column to display the task's backup path information and obtain the following details:
      • Backup Object: indicates the backup data source.
      • Backup Path: indicates the full path where the backup files are stored.
    3. Locate the correct path, and manually copy the full path of the backup files from the Backup Path column.

  2. Create a restoration task.

    1. On MRS Manager, choose System > Recovery Management.
    2. On the page that is displayed, click Create Restoration Task.
    3. Set Task Name to the name of the restoration task.

  3. Select restoration sources.

    In Configuration, select the metadata component whose data is to be restored.

  4. Set the restoration parameters.

    1. Select a backup directory type for Path Type.
      Table 1 Path for data restoration

      • LocalDir
        - Source Path: full path of the directory storing backup files. Path format: Data storage path/LocalBackup/Backup task name_Task creation time/Data source_Task execution time/Version number_Data source_Task execution time.tar.gz
      • LocalHDFS
        - Source Path: full path of the HDFS directory storing backup files. Path format: Backup path/Backup task name_Task creation time/Version_Data source_Task execution time.tar.gz
        - Source Instance Name: NameService name of the backup directory during restoration task execution. The default value is hacluster.

    2. Click OK.
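The LocalDir path format in Table 1 can be sanity-checked before the task is created. The sketch below merely assembles a path in that format; every concrete value in the example call (save path, task name, timestamps, version) is hypothetical. For LocalHDFS, you would instead verify the path exists with `hdfs dfs -test -e <path>`.

```shell
# Sketch: assemble a LocalDir backup path in the Table 1 format:
#   <Data storage path>/LocalBackup/<Task name>_<Creation time>/
#     <Data source>_<Execution time>/<Version>_<Data source>_<Execution time>.tar.gz
# All values in the example call below are made-up placeholders.
build_localdir_path() {
  save_path=$1; task=$2; created=$3; source=$4; executed=$5; version=$6
  printf '%s/LocalBackup/%s_%s/%s_%s/%s_%s_%s.tar.gz\n' \
    "$save_path" "$task" "$created" \
    "$source" "$executed" "$version" "$source" "$executed"
}

build_localdir_path /srv/BigData default 20251210 OMS 20251211 2.1.0
# → /srv/BigData/LocalBackup/default_20251210/OMS_20251211/2.1.0_OMS_20251211.tar.gz
```

With the assembled path in hand, `ls -l <path>` on the active management node confirms the backup archive actually exists before you paste the path into Source Path.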

  5. Execute the restoration task.

    In the restoration task list, locate the row containing the created task, and click Start in the Operation column to execute the restoration task.

    • After the restoration is successful, the progress bar is displayed in green.
    • After the restoration is successful, the restoration task cannot be executed again.
    • If the restoration task fails during the first execution, rectify the fault and try to execute the task again by clicking Start.

  6. Determine what metadata has been restored.

    • If the OMS and LdapServer metadata is restored, go to Step 7.
    • If DBService data is restored, no further action is required.
    • If NameNode data is restored, choose Services > HDFS > More > Restart Service on MRS Manager. After HDFS is restarted, the task is complete.

  7. Restart Manager for the restored data to take effect.

    1. On MRS Manager, choose LdapServer > More > Restart Service and click OK. Wait until the LdapServer service is restarted successfully.
    2. Log in to the active management node. For details, see Checking MRS Active/Standby Management Nodes.
    3. Run the following command to restart OMS:

      sh ${BIGDATA_HOME}/om-0.0.1/sbin/restart-oms.sh

      The command is successfully executed if the following information is displayed:

      start HA successfully.
    4. On MRS Manager, choose KrbServer > More > Synchronize Configuration. Do not select Restart the services and instances whose configuration has expired. Click OK and wait until the KrbServer service configuration is synchronized and restarted successfully.
    5. Choose Services > More > Synchronize Configuration. Do not select Restart the services and instances whose configuration has expired. Click OK and wait until the cluster is configured and synchronized successfully.
    6. Choose Services > More > Stop Cluster. After the cluster is stopped, choose Services > More > Start Cluster.
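The OMS restart in substep 3 above reports "start HA successfully." when it completes. In a maintenance script, that check can be automated with a small wrapper; the wrapper itself is our own sketch, not part of the product, and the commented-out invocation quotes the restart command from this page.

```shell
# Sketch: run a command and confirm its output contains the OMS restart
# success message quoted in this section. Returns 0 on success.
restart_oms_and_check() {
  "$@" 2>&1 | grep -q 'start HA successfully'
}

# Usage on the active management node:
# restart_oms_and_check sh ${BIGDATA_HOME}/om-0.0.1/sbin/restart-oms.sh \
#   && echo "OMS restarted" || echo "OMS restart failed"
```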