
What Should I Do If a Hudi Read Job Fails to Be Executed Because the Field Mapping Page Has an Extra Column?

Cause: When Spark SQL writes data to a Hudi table, a column named col of type array<string> is automatically added to the table schema. This column then appears as an extra field on the field mapping page and causes the read job to fail.
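
For example, you can confirm that the extra column exists by checking the table schema in Spark SQL. The table name below is hypothetical:

-- Inspect the Hudi table schema. In addition to the Hudi metadata
-- columns, the output includes the automatically added column
-- "col" of type array<string>.
DESCRIBE TABLE hudi_db.demo_table;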

Solution: Delete this column from the field mapping. If automatic table creation is enabled, also delete this column from the generated create-table SQL statement.
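
For reference, the following is a minimal sketch of a Spark SQL create-table statement with the automatically added column removed; the table name, columns, and table properties are hypothetical:

-- Hypothetical create-table statement for the Hudi table.
-- The automatically added column is shown commented out below
-- and should be deleted before the statement is executed.
CREATE TABLE hudi_db.demo_table (
  id    INT,
  name  STRING,
  price DOUBLE
  -- col ARRAY<STRING>  -- delete this automatically added column
) USING hudi
TBLPROPERTIES (
  primaryKey = 'id',
  type = 'cow'
);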