Migrating Data Using DSC
Precautions
- Before starting DSC, specify the output folder path. When entering the command, separate the input folder path, output folder path, and log path with spaces. The input folder path itself cannot contain spaces; a space in the path causes an error when DSC migrates data. For details, see Troubleshooting.
- If the output folder contains subfolders or files, DSC deletes or overwrites them before the migration, depending on the parameter settings in the application.properties configuration file in the config folder. Subfolders and files that are deleted or overwritten cannot be restored using DSC.
- If migration tasks run concurrently on the same server (started by the same or different DSC installations), each task must use its own output folder path and log path, as shown in the sketch after this list.
- You can specify a log path using the optional --log-folder parameter. If no log path is specified, DSC automatically creates a log folder under TOOL_HOME. For details, see Log Reference.
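For example, here is a minimal sketch of two migrations running concurrently on one Linux server. The source databases and folder paths are placeholders chosen for illustration; only flags that appear in Table 1 below are used.

```bash
# Sketch only: two concurrent DSC runs on the same server.
# Each run gets its own --output-folder and --log-folder so the tasks do not interfere.
# All folder paths below are illustrative placeholders.
./runDSC.sh --source-db Teradata --input-folder /data/td/input \
            --output-folder /data/td/output --log-folder /data/td/log &
./runDSC.sh --source-db Oracle --input-folder /data/ora/input \
            --output-folder /data/ora/output --log-folder /data/ora/log &
wait
```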
Migration Methods
You can run runDSC.sh (Linux) or runDSC.bat (Windows) to perform a migration task. For details, see Table 1; a sample invocation follows the table.
Scenario | CLI Command
---|---
Teradata SQL migration | Linux: `./runDSC.sh --source-db Teradata [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db Teradata [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`
Oracle SQL migration | Linux: `./runDSC.sh --source-db Oracle [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db Oracle [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`
Teradata Perl migration | Linux: `./runDSC.sh --source-db Teradata [--application-lang Perl] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db Teradata [--application-lang Perl] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`
MySQL SQL migration | Linux: `./runDSC.sh --source-db MySQL [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db MySQL [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`
Netezza SQL migration | Linux: `./runDSC.sh --source-db Netezza [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db Netezza [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--target-db/-T]`
DB2 SQL migration | Linux: `./runDSC.sh --source-db db2 [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`<br>Windows: `runDSC.bat --source-db db2 [--application-lang SQL] [--input-folder <input-script-path>] [--output-folder <output-script-path>] [--log-folder <log-path>] [--conversion-type <Conversion-Type-BulkOrBlogic>] [--target-db/-T]`
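For example, the Teradata SQL scenario from the table can be invoked on Linux as follows; the folder paths are illustrative placeholders.

```bash
# Sketch only: Teradata SQL migration on Linux. Folder paths are placeholders.
./runDSC.sh --source-db Teradata --application-lang SQL \
            --input-folder /data/td/input \
            --output-folder /data/td/output \
            --log-folder /data/td/log
```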
- The CLI parameters are described as follows:
- source-db specifies the source database. As shown in Table 1, the value can be Teradata, Oracle, Netezza, MySQL, or DB2, and is case-insensitive.
- conversion-type specifies the migration type. This parameter is optional. DSC supports the following migration types:
  - Bulk: migrates DML and DDL scripts.
  - BLogic: migrates business logic, such as stored procedures and functions. BLogic is applicable to Oracle PL/SQL and Netezza. (A sample BLogic invocation follows this list.)
- target-db specifies the target database. The value is GaussDB(DWS).
- Command output description:
Migration process start time indicates the migration start time, and Migration process end time indicates the migration end time. Total process time indicates the total migration duration, in milliseconds. The total number of migrated files, the total number of processors, the number of used processors, the log file path, and the error log file path are also displayed on the console.
- For details about CLI parameters, see Database Schema Conversion.
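For example, here is a sketch of a BLogic migration of Oracle stored procedures, assembled from the Oracle command format in Table 1; the folder paths are illustrative placeholders.

```bash
# Sketch only: migrate Oracle business logic (stored procedures and functions).
# --conversion-type BLogic selects business-logic migration; folder paths are placeholders.
./runDSC.sh --source-db Oracle --application-lang SQL \
            --input-folder /data/ora/plsql/input \
            --output-folder /data/ora/plsql/output \
            --log-folder /data/ora/plsql/log \
            --conversion-type BLogic
```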
Task Example
- Example 1: Run the following command to migrate Oracle SQL files to GaussDB(DWS) SQL scripts on Linux:
./runDSC.sh --source-db Oracle --input-folder /test/conversion/input --output-folder /test/conversion/output --log-folder /test/conversion/log --conversion-type bulk --target-db gaussdbA
- Example 2: Run the following command to migrate Oracle SQL files to GaussDB(DWS) SQL scripts on Windows:
runDSC.bat --source-db Oracle --input-folder D:\test\conversion\input --output-folder D:\test\conversion\output --log-folder D:\test\conversion\log --conversion-type bulk --target-db gaussdbA
Migration details are displayed on the console (including the progress and completion status):
```
********************** Schema Conversion Started *************************
DSC process start time : Mon Jan 20 17:24:49 IST 2020
Statement count progress 100% completed [FILE(1/1)]
Schema Conversion Progress 100% completed
**************************************************************************
Total number of files in input folder : 1
Total number of valid files in input folder : 1
**************************************************************************
Log file path :....../DSC/DSC/log/dsc.log
Error Log file :
DSC process end time : Mon Jan 20 17:24:49 IST 2020
DSC total process time : 0 seconds
********************* Schema Conversion Completed ************************
```
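After the run completes, the converted scripts can be reviewed in the folder passed via --output-folder, and the run details are in the file the console prints as Log file path. A quick check on Linux, assuming the folders from Example 1:

```bash
# Sketch only: inspect the migration results after a Linux run.
# The output folder is the one passed via --output-folder in Example 1;
# the exact log file location is whatever the console prints as "Log file path".
ls /test/conversion/output
```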