Creating and Running a Data Lake Metadata Collection Task
You can create metadata collection tasks for Delta Lake (with metadata) and Hudi (with metadata). For Delta Lake (without metadata) and Hudi (without metadata), you can manually add information about database tables one by one or import them in a batch. For details, see Viewing Metadata.
Prerequisites
You have created a metadata connection to Delta Lake (with metadata) or Hudi (with metadata). For details, see the parts about Delta Lake (with Metadata) and Hudi (with Metadata) in Creating a Connection to a Source Component.
Procedure
- Sign in to the MgC console. In the navigation pane, under Project, select your big data migration project from the drop-down list.
- In the navigation pane, choose Migration Preparations.
- Choose Metadata Management and click Create Data Lake Metadata Collection Task.
Figure 1 Create Data Lake Metadata Collection Task
- Set task parameters based on Table 1.
Table 1 Parameters for configuring a metadata collection task
- Task Name: The default name is Data-Lake-Metadata-Collection-Task- followed by 4 random characters (letters and digits). You can also customize a name.
- Metadata Connection: Select the created connection to Delta Lake (with metadata) or Hudi (with metadata).
- Databases: Enter the names of the databases whose metadata needs to be collected.
- Concurrent Threads: Set the maximum number of threads for executing the collection. The default value is 3, and the value ranges from 1 to 10. More concurrent threads make collection more efficient but consume more connection and MgC Agent (formerly Edge) resources.
- Custom Parameters: You can customize parameters to specify the tables and partitions to collect or set criteria to filter tables and partitions. (For how the EMR-specific parameters below map to standard Spark settings, see the sketch after this list.)
- If the metadata source is Alibaba Cloud EMR, add the following parameter:
- Parameter: conf
- Value: spark.sql.catalogImplementation=hive
- If the source is Alibaba Cloud EMR Delta Lake 2.2 and is accessed through Delta Lake 2.3 dependencies, add the following parameter:
- Parameter: master
- Value: local
- If you are creating a verification task for an Alibaba Cloud EMR Delta Lake 2.1.0 cluster that uses Spark 2.4.8, add the following parameter:
- Parameter: mgc.delta.spark.version
- Value: 2
- If the source is Alibaba Cloud EMR and is configured with Spark 3 to process Delta Lake data, add the following parameter:
- Parameter: jars
- Value: '/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-core_2.12-*.jar,/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-storage-*.jar'
CAUTION:
Replace the jar paths with the actual directory and Delta Lake version in your environment.
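For context, the custom parameters above (conf, master, and jars) correspond to standard Spark launch options. The following is a minimal, hypothetical PySpark sketch, not MgC's actual internal invocation, showing how equivalent settings would look if you built a Spark session yourself; the jar paths and the 2.3.0 version are placeholders for your environment (see the CAUTION above):

```python
# Hypothetical illustration only: MgC passes these values as Spark launch
# options internally. This sketch shows the equivalent SparkSession settings.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # "master" = local: run the collection driver locally
    .master("local")
    # "conf" = spark.sql.catalogImplementation=hive: read table metadata
    # from the Hive metastore instead of Spark's in-memory catalog
    .config("spark.sql.catalogImplementation", "hive")
    # "jars": Delta Lake dependencies. Replace the directory and version
    # with the actual values from your EMR environment.
    .config("spark.jars",
            "/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-core_2.12-2.3.0.jar,"
            "/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-storage-2.3.0.jar")
    .enableHiveSupport()
    .getOrCreate()
)
```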
- Click Confirm. The metadata collection task is created.
- Under Tasks, review the created metadata collection task and its settings. You can modify the task by choosing More > Modify in the Operation column.
- Click Execute Task in the Operation column to run the task. Each time the task is executed, a task execution is generated.
- Click View Executions in the Operation column. Under Task Executions, you can view the task's execution records, including the status and collection results of each execution. When an execution reaches the Completed status and its collection results are displayed, you can view the databases and tables extracted from the collected metadata on the Tables tab.
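If you want to cross-check the collection results against the source cluster, one quick way (assuming you can open a PySpark session on the cluster, for example the one built in the earlier sketch) is to list the databases and tables the metastore actually holds:

```python
# Sanity check on the source cluster: compare what the metastore reports
# with the databases and tables shown on the Tables tab in MgC.
spark.sql("SHOW DATABASES").show(truncate=False)
# "your_database" is a placeholder for a database you configured the task to collect.
spark.sql("SHOW TABLES IN your_database").show(truncate=False)
```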