Deploying a Predictive Analytics Service
Deploying a Service
You can deploy a trained model as a real-time service that provides a test UI and monitoring capabilities. After the model is trained, you can deploy a successful version that reaches your target accuracy as a service. The procedure is as follows:
- On the phase execution page, after the service deployment status changes to Awaiting input, double-click Deploy Service and configure the resource parameters on the configuration details page.
- On the service deployment page, select the resource specifications used for service deployment.
- AI Application Source: defaults to the generated AI application.
- AI Application and Version: The current AI application version is selected automatically; you can change it.
- Resource Pool: defaults to public resource pools.
- Traffic Ratio: defaults to 100 and supports a value range of 0 to 100.
- Specifications: Select available specifications from the list displayed on the console. Specifications shown in gray cannot be used in the current environment. If no specifications are displayed after you select a public resource pool, no public resource pool is available in the current environment. In this case, use a dedicated resource pool or contact the administrator to create a public resource pool.
- Compute Nodes: an integer ranging from 1 to 5. The default value is 1.
- Auto Stop: enables the service to stop automatically at a specified time. If this function is disabled, the real-time service keeps running and continues to incur charges. Auto stop is enabled by default, and the default value is 1 hour later.
The options are 1 hour later, 2 hours later, 4 hours later, 6 hours later, and Custom. If you select Custom, enter an integer from 1 to 24 (hours) in the text box on the right.
When selecting specifications, you can choose a package that you have already bought. On the fee configuration tab, you can view your remaining package quota and the price of any usage beyond it.
- After configuring resources, click Next and confirm the operation. Wait until the status changes to Executed, which means the AI application has been deployed as a real-time service.
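The parameter constraints above (a traffic ratio of 0 to 100, 1 to 5 compute nodes, a custom auto stop of 1 to 24 hours) can be checked before you submit a deployment. The following is a minimal illustrative sketch; the field names are assumptions for the example, not the console's or any API's actual schema.

```python
def validate_deploy_config(cfg: dict) -> list:
    """Return a list of problems; an empty list means the config is valid."""
    errors = []
    if not 0 <= cfg.get("traffic_ratio", 100) <= 100:
        errors.append("Traffic Ratio must be between 0 and 100")
    if not 1 <= cfg.get("compute_nodes", 1) <= 5:
        errors.append("Compute Nodes must be an integer from 1 to 5")
    # Custom auto stop accepts 1 to 24 hours; the default is 1 hour later.
    if cfg.get("auto_stop_enabled", True):
        if not 1 <= cfg.get("auto_stop_hours", 1) <= 24:
            errors.append("Custom Auto Stop must be 1 to 24 hours")
    return errors

# Defaults as described above: generated AI application, public resource
# pool, traffic ratio 100, one compute node, auto stop after 1 hour.
config = {
    "ai_application_source": "generated",
    "resource_pool": "public",
    "traffic_ratio": 100,
    "compute_nodes": 1,
    "auto_stop_enabled": True,
    "auto_stop_hours": 1,
}
print(validate_deploy_config(config))  # → []
```

Catching an out-of-range value locally, before the console rejects it, makes the constraints explicit and easy to test.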
Testing the Service
- After the model is deployed, you can test the model using code. In ExeML, click Instance Details on the Deploy Service page to go to the real-time service page. On the Prediction tab page, enter the debugging code in the Inference Code area.
- Click Predict to perform the test. After the prediction is complete, the result is displayed in the Test Result pane on the right. If the model accuracy does not meet your expectations, train and deploy the model again on the Label Data tab page. If you are satisfied with the prediction result, call the API to access the real-time service as prompted. For details, see Accessing Real-Time Services.
- In the input code, the label column of a predictive analytics database must be named class. Otherwise, the prediction will fail.
{
    "data": {
        "req_data": [{
            "attr_1": "34",
            "attr_2": "blue-collar",
            "attr_3": "single",
            "attr_4": "tertiary",
            "attr_5": "no",
            "attr_6": "tertiary"
        }]
    }
}
- In the response returned for the preceding request, the predict field contains the inference result for the label column.
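The request body above can also be assembled and sent programmatically once the service is running. The sketch below builds the same req_data envelope; the endpoint URL, auth token, and the commented-out requests call are placeholders you must replace with the actual values described in Accessing Real-Time Services.

```python
import json

def build_request(attributes: dict) -> str:
    """Wrap feature attributes in the req_data envelope expected by the service."""
    return json.dumps({"data": {"req_data": [attributes]}})

# Same sample record as the snippet above.
payload = build_request({
    "attr_1": "34",
    "attr_2": "blue-collar",
    "attr_3": "single",
    "attr_4": "tertiary",
    "attr_5": "no",
    "attr_6": "tertiary",
})
print(payload)

# To call the deployed service (API_URL and TOKEN are placeholders):
# import requests
# resp = requests.post(
#     API_URL,
#     headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
#     data=payload,
# )
# The response's predict field holds the inference result for the label column.
```

Keeping the payload construction separate from the HTTP call makes it easy to unit-test the request format without a live service.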
Figure 2 Prediction result
- A running real-time service continuously consumes resources. If you do not need to use the real-time service, stop the service to stop billing. To do so, click Stop in the More drop-down list in the Operation column. If you want to use the service again, click Start.
- If you enable auto stop, the service stops automatically at the specified time and no further fees are generated.