Preparing Predictive Analysis Data
Before using ModelArts to build a predictive analytics model, upload your data to OBS. The OBS bucket and ModelArts must be in the same region. For example, if the OBS bucket is in the CN North-Beijing4 region, ensure that the ModelArts management console is also in the CN North-Beijing4 region. Otherwise, ModelArts cannot access the data.
Requirements on Datasets
The dataset used in a predictive analytics project must be a table dataset in .csv format. For details about table datasets, see Creating a Dataset.
To convert data from .xlsx to .csv, perform the following operations:
Open the original .xlsx file and choose File > Save As. Select a save location, set Save as type to CSV (Comma delimited), and click Save. Then, click OK in the displayed dialog box.
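If you would rather script the conversion than click through Excel, the same result can be produced with pandas. This is a minimal sketch, assuming pandas and the openpyxl engine are installed; the file names are placeholders:

```python
# Convert an .xlsx worksheet to the comma-delimited .csv format
# required by ModelArts. File names below are placeholders.
import pandas as pd


def xlsx_to_csv(xlsx_path: str, csv_path: str) -> None:
    # Read the first worksheet (pandas uses openpyxl for .xlsx files).
    df = pd.read_excel(xlsx_path)
    # Write plain comma-delimited CSV without the pandas row index.
    df.to_csv(csv_path, index=False)
```

The conversion keeps the header row intact, so the resulting file can be used directly with Contain Table Header enabled when the dataset is created.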
- Every row in the training data must have the same number of columns, and there must be at least 100 data records (rows with different feature values are counted as different records).
- The training columns cannot contain timestamp data (such as yy-mm-dd or yyyy-mm-dd).
- If a column has only one value, the column is considered invalid. Ensure that there are at least two values in the label column and no data is missing.
The label column is the training target specified in a training task. It is the output (prediction item) for the model trained using the dataset.
- In addition to the label column, the dataset must contain at least two valid feature columns. Ensure that each feature column has at least two distinct values and that less than 10% of its data is missing.
- Due to a limitation of the feature filtering algorithm, place the label (prediction) column last. Otherwise, training may fail.
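The rules above can be checked locally before uploading. The following is a sketch using pandas; it assumes, as the rules require, that the label column is the last column, and the function name is illustrative:

```python
# Sanity-check a .csv table against the dataset rules listed above.
# Assumes the label (prediction) column is the last column.
import pandas as pd


def check_dataset(csv_path: str) -> list:
    problems = []
    df = pd.read_csv(csv_path)

    # At least 100 data records.
    if len(df) < 100:
        problems.append(f"only {len(df)} records; at least 100 required")

    # Label column: at least two distinct values and no missing data.
    label = df.columns[-1]
    if df[label].nunique(dropna=True) < 2:
        problems.append("label column has fewer than two distinct values")
    if df[label].isna().any():
        problems.append("label column contains missing values")

    # Feature columns: at least two valid ones besides the label, each
    # with at least two distinct values and under 10% missing data.
    valid = [c for c in df.columns[:-1] if df[c].nunique(dropna=True) >= 2]
    if len(valid) < 2:
        problems.append("fewer than two valid feature columns")
    for c in df.columns[:-1]:
        if df[c].isna().mean() >= 0.10:
            problems.append(f"column {c!r} has 10% or more missing data")
    return problems
```

An empty result means the table passes these checks; otherwise the returned list names each violated rule.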
Example of a table dataset:
The following table uses a bank deposit prediction dataset as an example. The features include age, occupation, marital status, education level, and whether the customer has a mortgage or a personal loan.
| Field | Meaning | Type | Description |
|---|---|---|---|
| attr_1 | Age | Int | Age of the customer |
| attr_2 | Occupation | String | Occupation of the customer |
| attr_3 | Marital status | String | Marital status of the customer |
| attr_4 | Education status | String | Education status of the customer |
| attr_5 | Real estate | String | Housing situation of the customer |
| attr_6 | Loan | String | Loan of the customer |
| attr_7 | Deposit | String | Deposit of the customer |
| attr_1 | attr_2 | attr_3 | attr_4 | attr_5 | attr_6 | attr_7 |
|---|---|---|---|---|---|---|
| 31 | blue-collar | married | secondary | yes | no | no |
| 41 | management | married | tertiary | yes | yes | no |
| 38 | technician | single | secondary | yes | no | no |
| 39 | technician | single | secondary | yes | no | yes |
| 39 | blue-collar | married | secondary | yes | no | no |
| 39 | services | single | unknown | yes | no | no |
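A file with the shape shown above can be produced with the standard library alone. This sketch writes only the six sample rows to illustrate the layout (a real dataset needs at least 100 records); the file name is a placeholder:

```python
# Write the sample bank deposit rows above as a ModelArts-style .csv.
# The label column (attr_7, Deposit) is placed last, as required.
# Note: only 6 sample rows; a real dataset needs at least 100.
import csv

header = ["attr_1", "attr_2", "attr_3", "attr_4", "attr_5", "attr_6", "attr_7"]
rows = [
    [31, "blue-collar", "married", "secondary", "yes", "no", "no"],
    [41, "management", "married", "tertiary", "yes", "yes", "no"],
    [38, "technician", "single", "secondary", "yes", "no", "no"],
    [39, "technician", "single", "secondary", "yes", "no", "yes"],
    [39, "blue-collar", "married", "secondary", "yes", "no", "no"],
    [39, "services", "single", "unknown", "yes", "no", "no"],
]

with open("input.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```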
Uploading Data to OBS
In this section, the OBS console is used to upload data.
Upload files to OBS according to the following specifications:
The OBS path of the predictive analytics projects must comply with the following rules:
- The OBS path of the input data must point to the data file. The data file must be stored in a folder in an OBS bucket rather than in the bucket's root directory, for example, /obs-xxx/data/input.csv.
- The .csv file must contain at least 100 lines of valid data, have no more than 200 columns, and be smaller than 100 MB in total.
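These path and size limits can also be verified locally before uploading. The following is a sketch; the function names are illustrative and the limits mirror the rules above:

```python
# Verify the OBS object path and the .csv size limits described above.
import csv
import os


def check_obs_object_key(object_key: str) -> bool:
    # The file must sit inside a folder, not in the bucket root:
    # "data/input.csv" is valid, "input.csv" is not.
    return "/" in object_key.strip("/")


def check_csv_limits(csv_path: str) -> list:
    problems = []
    # Total size must be smaller than 100 MB.
    if os.path.getsize(csv_path) >= 100 * 1024 * 1024:
        problems.append("file is 100 MB or larger")
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    # No more than 200 columns; at least 100 lines of valid data.
    if rows and len(rows[0]) > 200:
        problems.append("more than 200 columns")
    if len(rows) - 1 < 100:  # exclude the header row
        problems.append("fewer than 100 data lines")
    return problems
```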
Procedure for uploading data to OBS:
Perform the following operations to import data to the dataset for model training and building.
- Log in to the OBS console and create a bucket in the same region as ModelArts. If an available bucket exists, ensure that the OBS bucket and ModelArts are in the same region.
- Upload the local data to the OBS bucket. If you have a large amount of data, use OBS Browser+ to upload data or folders. The uploaded data must meet the dataset requirements of the ExeML project.
Upload data to an unencrypted bucket. Otherwise, training will fail because the data cannot be decrypted.
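Besides the console and OBS Browser+, the upload can be scripted. This is a minimal sketch assuming the `esdk-obs-python` SDK (`pip install esdk-obs-python`); the endpoint, bucket name, and credential environment variables are placeholders you must replace with your own:

```python
# Upload a local .csv to OBS under a folder (not the bucket root).
# Bucket name, endpoint, and credentials below are placeholders.
import os


def build_object_key(folder: str, local_path: str) -> str:
    # e.g. ("data", "/tmp/input.csv") -> "data/input.csv"
    return f"{folder.strip('/')}/{os.path.basename(local_path)}"


def upload_to_obs(bucket: str, folder: str, local_path: str) -> None:
    # Imported lazily so build_object_key stays usable without the SDK.
    from obs import ObsClient  # pip install esdk-obs-python
    client = ObsClient(
        access_key_id=os.environ["OBS_AK"],          # placeholder variable names
        secret_access_key=os.environ["OBS_SK"],
        server="https://obs.cn-north-4.myhuaweicloud.com",  # placeholder endpoint
    )
    try:
        resp = client.putFile(bucket, build_object_key(folder, local_path), local_path)
        if resp.status >= 300:
            raise RuntimeError(f"upload failed: {resp.errorMessage}")
    finally:
        client.close()
```

The endpoint must match the region of both the bucket and ModelArts, as noted at the top of this page.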
Creating a Dataset
After the data is prepared, create a proper dataset. For details, see Creating a Dataset.
FAQs
How do I process Schema information when creating a table dataset using data selected from OBS?
- If the original table contains a table header, enable Contain Table Header. The first row of the file will be used as column names. You do not need to modify the Schema information.
- If the original table does not contain a table header, disable Contain Table Header. After the data is selected from OBS, the first row of the table is treated as column names by default. Change the column names to attr_1, attr_2, ..., attr_n, where attr_n is the prediction column and must be placed last.
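The two FAQ cases map directly onto how the file is read. The following sketch shows both with pandas; the inline CSV strings are illustrative:

```python
# Reading a table with and without a header row, as in the FAQ above.
import io

import pandas as pd

csv_with_header = "attr_1,attr_2,attr_3\n31,blue-collar,no\n41,management,no\n"
csv_without_header = "31,blue-collar,no\n41,management,no\n"

# Case 1: the table contains a header -> the first row becomes the
# column names (the default), so no schema changes are needed.
df1 = pd.read_csv(io.StringIO(csv_with_header))

# Case 2: no header -> read headerless, then assign attr_1 ... attr_n,
# with attr_n as the prediction column placed last.
df2 = pd.read_csv(io.StringIO(csv_without_header), header=None)
df2.columns = [f"attr_{i + 1}" for i in range(df2.shape[1])]
```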