Exporting Metadata
To keep data properties and permissions consistent between the source and destination clusters, export the metadata of the source cluster so that it can be restored after the data migration.
The metadata to be exported includes the owner, group, and permission information of HDFS files, as well as the Hive table descriptions.
Exporting HDFS Metadata
The metadata to be exported includes the permission, owner, and group information of files and folders. Run the following command on the HDFS client to export it:
$HADOOP_HOME/bin/hdfs dfs -ls -R <migrating_path> > /tmp/hdfs_meta.txt
- $HADOOP_HOME: installation directory of the Hadoop client in the source cluster
- <migrating_path>: HDFS data directory to be migrated
- /tmp/hdfs_meta.txt: local path for storing the exported metadata
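For example, if the Hadoop client is installed in /opt/client and the directory to be migrated is /user/hive/warehouse (both sample values, not taken from your environment), the command would be:
/opt/client/bin/hdfs dfs -ls -R /user/hive/warehouse > /tmp/hdfs_meta.txt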
If the source cluster can communicate with the destination cluster and you run the hadoop distcp command as an administrator to copy data, you can add the -p parameter so that DistCp restores the metadata of the copied files in the destination cluster while copying the data. In this case, you can skip this step.
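For reference, such a DistCp invocation might look like the following sketch; the NameNode addresses, port, and paths are placeholders that you must replace with your own values:
hadoop distcp -p hdfs://<source_namenode_ip>:<port>/<migrating_path> hdfs://<destination_namenode_ip>:<port>/<migrating_path>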
Exporting Hive Metadata
Hive table data is stored in HDFS and is therefore migrated together with the HDFS directories that contain it. Hive table metadata, however, is stored in a relational database (such as MySQL, PostgreSQL, or Oracle), depending on the cluster configuration.
In this document, the Hive metadata to be exported refers to the table descriptions stored in that relational database.
Mainstream big data distributions support installing Sqoop. For on-premises clusters built on community releases, you can download and install the community version of Sqoop. Sqoop decouples the exported metadata from the relational database: use it to export the Hive metadata to HDFS, and then migrate the metadata together with the table data and restore it in the destination cluster.
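If you are not sure which relational database stores the Hive metadata, you can usually find the connection URL in hive-site.xml on the source cluster. The configuration file path below is an assumption; adjust it to match your installation:
grep -A1 "javax.jdo.option.ConnectionURL" /etc/hive/conf/hive-site.xml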
The procedure is as follows:
- Download the Sqoop tool and install it on the source cluster.
For details, see http://sqoop.apache.org/.
- Download the JDBC driver for the relational database and place it in the $Sqoop_Home/lib directory.
- Run the following command to export the Hive metadata tables (run it once for each table to be exported; an example with sample values follows the parameter descriptions):
All exported data is stored in the /user/<user_name>/<table_name> directory on HDFS.
$Sqoop_Home/bin/sqoop import --connect jdbc:<driver_type>://<ip>:<port>/<database> --table <table_name> --username <user> --password <passwd> -m 1
- $Sqoop_Home: Sqoop installation directory
- <driver_type>: Database type
- <ip>: IP address of the database in the source cluster
- <port>: Port number of the database in the source cluster
- <database>: Name of the database that stores the Hive metadata
- <table_name>: Name of the metadata table to be exported
- <user>: Username for connecting to the database
- <passwd>: Password of the database user
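For example, for a MySQL-based metastore, the following command exports the TBLS metadata table. The IP address, port, database name, and credentials are sample values only; repeat the command for each metadata table to be exported:
$Sqoop_Home/bin/sqoop import --connect jdbc:mysql://192.168.0.10:3306/hivemeta --table TBLS --username hive --password xxxx -m 1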
Commands that carry authentication passwords pose security risks. Disable command history recording before running such commands to prevent information leakage.
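If your Sqoop version supports it, a safer alternative is the -P option, which reads the password from the console instead of taking it on the command line:
$Sqoop_Home/bin/sqoop import --connect jdbc:<driver_type>://<ip>:<port>/<database> --table <table_name> --username <user> -P -m 1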