Backing Up Elasticsearch Service Data
Scenario
To ensure data security, back up Elasticsearch service data routinely or before a major operation on Elasticsearch (such as an upgrade or migration). The backup data can be used to restore the system if an exception occurs or an operation does not achieve the expected result, minimizing the adverse impact on services.
You can create a backup task on FusionInsight Manager to back up Elasticsearch service data. Both automatic and manual backup tasks are supported.
- During snapshot creation, the search and query functions are not affected. Data written after the snapshot creation process starts is not included in the snapshot. Only one snapshot can be created at a time.
- When a backup task is created, only indexes that are enabled in the cluster are displayed as backup objects. Disabled indexes are not displayed on the GUI and therefore are not backed up.
- If some indexes selected for a backup task are disabled before the task starts, those indexes are skipped and only the enabled indexes are backed up. If all selected indexes are disabled, the backup task fails.
- Elasticsearch service data backup invokes the snapshot interface through the EsNode1 instances. Therefore, ensure that all EsNode1 instances in the cluster are healthy and can receive requests normally. To ensure a successful backup, do not add, delete, stop, or restart Elasticsearch instances, stop or restart the Elasticsearch service, or stop or restart the cluster during the backup.
- If a large amount of data needs to be backed up in the cluster, back up data at the index level in batches. Otherwise, the backup takes a long time.
- To prevent a large amount of data from being fully backed up each time, create a periodic backup task when creating an index. In this case, data is fully backed up in the first backup task, and incremental backup is performed in subsequent periodic backup tasks.
- If a backup task fails, log in to the backup directory on the target (the value of Target Path for a backup to remote HDFS, or the value of Server Shared Path for a backup to NFS) and delete the subdirectory named Backup task name_Data source_Task creation time to remove the data that failed to be backed up.
- Before the backup, check that every index to be backed up is in the green state and that no shard is lost. Otherwise, the backup fails.
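The pre-backup health check above can be sketched as follows. This minimal example parses the text output of Elasticsearch's _cat/indices API (requested with the h=health,index columns) and lists any index that is not green; the endpoint, authentication, and sample index names are assumptions, so adapt them to your cluster.

```python
def non_green_indices(cat_indices_output: str) -> list[str]:
    """Return the names of indexes whose health column is not 'green'.

    Expects one index per line in the form "<health> <index>", as returned
    by GET _cat/indices?h=health,index (format is an assumption).
    """
    bad = []
    for line in cat_indices_output.strip().splitlines():
        health, index = line.split(maxsplit=1)
        if health != "green":
            bad.append(index.strip())
    return bad

# Hypothetical _cat/indices output used only for illustration.
sample = """\
green  logs-2024.01
yellow logs-2024.02
red    metrics-2024.01
"""
print(non_green_indices(sample))  # indexes that would cause the backup to fail
```

Run a check like this (or inspect the cluster on the GUI) before starting the backup task, and repair any yellow or red index first.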
Prerequisites
- If data needs to be backed up to the remote HDFS, you have prepared a standby cluster for data backup. The authentication mode of the standby cluster is the same as that of the active cluster. For other backup modes, you do not need to prepare the standby cluster.
- The HDFS and Yarn services have been installed if data needs to be backed up to HDFS. Service data of an Elasticsearch cluster in normal mode cannot be backed up to HDFS in a cluster in security mode.
- The HDFS service has been installed if data needs to be backed up to the NAS.
- If the active cluster is deployed in security mode and the active and standby clusters are not managed by the same FusionInsight Manager, mutual trust has been configured. For details, see Configuring Cross-Manager Mutual Trust Between Clusters. If the active cluster is deployed in normal mode, no mutual trust is required.
- The time on the active and standby clusters is consistent, and the NTP services on both clusters use the same time source.
- The HDFS and NAS clients in the standby cluster have sufficient space. You are advised to save backup files in a custom directory.
- When backing up the Elasticsearch service data to the NAS (NFS), you have deployed the NAS server and performed the following operations:
After the NAS is started and a shared path is created, create a local repository path and mount it to the shared path of the NAS.
- Create a shared path of the NAS and change its owner and permission. For example, the shared path is /var/nfs.
- Run mkdir /var/nfs to create a path.
- Run chown 65534:65534 /var/nfs to change the owner.
- Run chmod 777 /var/nfs to change the permission.
- On each server, run the following command to mount the local repository path to the shared path of the NAS:
mount ip:/var/nfs /Data storage path/elasticsearch/nas
In the command, ip indicates the IP address of the NAS server. For example:
mount ip:/var/nfs /srv/BigData/elasticsearch/nas
Procedure
- On FusionInsight Manager, choose O&M > Backup and Restoration > Backup Management.
- Click Create.
- Set Name to the name of the backup task.
- Select the desired cluster from Backup Object.
- Set Mode to the type of the backup task. Periodic indicates that the backup task is periodically executed. Manual indicates that the backup task is manually executed.
To create a periodic backup task, set the following parameters:
- Started: indicates the time when the task is started for the first time.
- Period: indicates the task execution interval. The options include Hours and Days.
- Backup Policy: indicates the volume of data to be backed up in each task execution. Only Full backup at the first time and incremental backup subsequently is supported.
- In Configuration, choose Elasticsearch > Elasticsearch under Service data.
- Set Path Type of Elasticsearch to a backup directory type. Elasticsearch data cannot be backed up to a directory encrypted using RangerKMS.
The following backup directory types are supported:
- RemoteHDFS: indicates that the backup files are stored in the HDFS directory of the standby cluster.
If you select this option, set the following parameters:
- IP Mode: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, IPv4 or IPv6.
- Destination Hadoop RPC Mode: indicates the value of hadoop.rpc.protection in the HDFS basic configuration of the destination cluster.
- Destination Active NameNode IP Address: indicates the service plane IP address of the active NameNode in the destination cluster.
- Destination Standby NameNode IP Address: indicates the service plane IP address of the standby NameNode in the destination cluster.
- Destination NameNode RPC Port: indicates the value of dfs.namenode.rpc.port in the HDFS basic configuration of the destination cluster.
- Target Path: indicates the HDFS directory for storing destination cluster backup data. The path cannot be an HDFS hidden directory, such as snapshot or recycle bin directory, or a default system directory.
- NFS: indicates that backup files are stored in the NAS using the NFS protocol.
If you select this option, set the following parameters:
- IP Mode: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, IPv4 or IPv6.
- Server IP Address: indicates the IP address of the NAS server.
- Backup Speed of a Single Instance (MB/s): indicates the speed of backing up data for a single instance. The default value is 50 MB/s. Change the backup speed based on the actual volume of backup data.
- Restoration Speed of a Single Instance (MB/s): indicates the speed of restoring data for a single instance. The default value is 50 MB/s. Change the restoration speed based on the actual volume of backup data.
- Server Shared Path: indicates the shared directory of the NAS server. (The shared path of the server cannot be set to the root directory, and the user group and owner group of the shared path must be nobody:nobody.)
- Set Maximum Number of Recovery Points to any value from 1 to 1000. This parameter is not used by Elasticsearch.
- Set Backup Content to one or more indexes to be backed up.
You can select backup data using either of the following methods:
- Adding a backup data file
- Click Add.
- Select the index to be backed up under File Directory, and click Add to add it to Backup Content.
- Click OK.
- Selecting using regular expressions
- Click Query Regular Expression.
- Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get indexes containing es, enter .*es.*. To get indexes starting with es, enter es.*. To get indexes ending with es, enter .*es.
- Click Refresh to view the matched indexes in Directory Name.
- Click Synchronize to save the result.
- When entering regular expressions, click the add or delete button to add or delete an expression.
- If a selected index or directory is incorrect, click Clear Selected Node to deselect it.
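The three matching rules described above can be verified with a quick sketch. The index names below are hypothetical, and the example assumes the GUI matches a pattern against the full index name (which is why .*es.* is needed for "containing"):

```python
import re

# Hypothetical index names used only to illustrate the matching rules.
indexes = ["es_logs", "logs_es", "my_es_data", "metrics"]

def select(pattern: str) -> list[str]:
    """Return the indexes whose full name matches the regular expression."""
    return [i for i in indexes if re.fullmatch(pattern, i)]

print(select(r".*es.*"))  # indexes containing "es"
print(select(r"es.*"))    # indexes starting with "es"
print(select(r".*es"))    # indexes ending with "es"
```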
- Click Verify to check whether the backup task is configured correctly.
The possible causes of the verification failure are as follows:
- The destination active or standby NameNode IP address or port number is incorrect.
- The name of the index to be backed up does not exist in the Elasticsearch cluster.
- Click OK.
- In the Operation column of the created task in the backup task list, click More and select Back Up Now to execute the backup task.
After the backup task is executed, the system automatically creates a subdirectory for each backup task in the backup directory. The subdirectory name is in the format Backup task name_Data source_Task creation time, and the subdirectory saves the latest backup files of the data source. All the backup file sets are stored in the related snapshot directories.
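As a rough sketch, the subdirectory name can be assembled as below. The exact timestamp format is an assumption, so check an actual subdirectory on your backup target (this is also the name to look for when cleaning up a failed backup):

```python
from datetime import datetime

def backup_subdir(task_name: str, data_source: str, created: datetime) -> str:
    """Build the backup subdirectory name by joining the task name, data
    source, and task creation time with underscores. The timestamp format
    used here is an assumption for illustration."""
    return f"{task_name}_{data_source}_{created.strftime('%Y%m%d%H%M%S')}"

name = backup_subdir("es_backup", "Elasticsearch", datetime(2024, 1, 15, 10, 30, 0))
print(name)  # es_backup_Elasticsearch_20240115103000
```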