Creating a Data Lake Metadata Collection Task
Use the metadata connection you created to create a collection task that gathers details about source databases, tables, and fields into MgC. Only Delta Lakes with metadata are supported.
For Delta Lakes without metadata, you can manually add database and table information one record at a time or import it in a batch. For details, see Viewing Metadata.
Prerequisites
You have created a connection to Delta Lake (with metadata).
Procedure
- Sign in to the MgC console.
- In the navigation pane on the left, choose Research > Data Lineage. Select a migration project in the upper left corner of the page.
- In the Metadata Collection area, choose Create Task > Data Lake Metadata Collection.
- Set task parameters based on Table 1.
Table 1 Parameters for configuring a metadata collection task
- Task Name: The default name is Data-Lake-Metadata-Sync-Task- followed by 4 random characters (letters and digits). You can also specify a name.
- Metadata Connection: Select the connection you created to Delta Lake (with metadata).
- Databases: Enter the names of the databases whose metadata you want to collect.
- Concurrent Threads: Set the maximum number of threads used to run the collection. The default value is 3, and the value ranges from 1 to 10. More concurrent threads make collection faster but consume more resources on the connection and the Edge device.
- Custom Parameters: Configure custom parameters to specify the tables and partitions to collect, or to set criteria for filtering tables and partitions. Depending on the source, add the parameters below (an illustrative sketch of what these Spark options mean follows this procedure).
  - If the metadata source is Alibaba Cloud EMR, add the following parameter:
    - Parameter: conf
    - Value: spark.sql.catalogImplementation=hive
  - If the source is Alibaba Cloud EMR Delta Lake 2.2 and it is accessed through Delta Lake 2.3, add the following parameter:
    - Parameter: master
    - Value: local
  - If the source is Alibaba Cloud EMR Delta Lake 2.1.0 and Spark 2.4.8 is used to process Delta Lake data, add the following parameter:
    - Parameter: mgc.delta.spark.version
    - Value: 2
  - If the source is Alibaba Cloud EMR and Spark 3 is used to process Delta Lake data, add the following parameter:
    - Parameter: jars
    - Value: '/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-core_2.12-*.jar,/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-storage-*.jar'
    - CAUTION: Replace the parameter value with the actual directory and Delta Lake version in your environment.
- Click Confirm. The metadata collection task is created.
- Click Collection tasks. Under Tasks, you can review the created metadata collection task and its settings. You can modify the task by choosing More > Modify in the Operation column.
- Click Execute Task in the Operation column to run the task. Each time the task is executed, a task execution is generated.
- Click View Executions in the Operation column. Under Task Executions, you can view the task's execution records, along with the status and collection results of each execution. When an execution reaches the Completed status and its collection results are displayed, you can view the databases and tables extracted from the collected metadata on the Tables tab.
- Locate a table and click Collect in the Lineage column to create a lineage collection task.
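For reference only, the sketch below shows roughly what the custom Spark parameters described above (master, conf, and jars) mean. MgC Edge applies these settings internally when it runs the collection, so you never write this code yourself. The application name, the PySpark session setup, the concrete jar file versions, and the catalog listing loop are assumptions added purely for illustration, and mgc.delta.spark.version is an MgC-specific parameter with no direct Spark equivalent shown here.

```python
# Illustration only: an approximate PySpark equivalent of the custom parameters
# described above. MgC Edge configures Spark itself; this is not MgC code.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-metadata-collection-sketch")       # hypothetical name
    .master("local")                                    # custom parameter: master = local
    .config("spark.sql.catalogImplementation", "hive")  # custom parameter: conf = spark.sql.catalogImplementation=hive
    # Custom parameter: jars. The file names below are placeholders; use the
    # directory and Delta Lake version that actually exist in your environment.
    .config(
        "spark.jars",
        "/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-core_2.12-2.2.0.jar,"
        "/opt/apps/DELTALAKE/deltalake-current/spark3-delta/delta-storage-2.2.0.jar",
    )
    .getOrCreate()
)

# With the Hive catalog enabled, databases and tables can be enumerated,
# which is the kind of information the collection task gathers.
for db in spark.catalog.listDatabases():
    for table in spark.catalog.listTables(db.name):
        print(db.name, table.name)
```

Running a comparable listing on the source cluster can be a quick way to sanity-check the jar paths and catalog setting before you execute the collection task.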