What Do I Do If a Conflict Occurs in the Python Dependency Package of a Custom Prediction Script When I Deploy a Real-Time Service?
Before importing a model, save the inference code and configuration file in the model folder. When writing the inference code in Python, import your custom modules using relative imports.
If a custom module shares a name with a package in the ModelArts inference framework code and is imported with an absolute import instead of a relative one, a name conflict occurs, causing service deployment or prediction to fail.
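The effect of the conflict can be reproduced with a small, self-contained sketch (all module and attribute names here are illustrative, not part of ModelArts): a package plays the role of the model folder, and a same-named top-level module plays the role of a framework package. An absolute import picks up the framework copy; a relative import always resolves to the package's own copy.

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway directory tree:
#   root/utils.py        <- stands in for a framework module named "utils"
#   root/model/utils.py  <- the custom module with the same name
#   root/model/service.py<- inference code importing "utils" both ways
root = tempfile.mkdtemp()

with open(os.path.join(root, "utils.py"), "w") as f:
    f.write("SOURCE = 'framework'\n")

pkg = os.path.join(root, "model")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "utils.py"), "w") as f:
    f.write("SOURCE = 'custom'\n")
with open(os.path.join(pkg, "service.py"), "w") as f:
    f.write(textwrap.dedent("""
        import utils as absolute_utils         # absolute: resolves to the framework copy
        from . import utils as relative_utils  # relative: resolves to the local copy
    """))

sys.path.insert(0, root)
from model import service

print(service.absolute_utils.SOURCE)  # prints "framework" -- the conflict
print(service.relative_utils.SOURCE)  # prints "custom" -- the safe import
```

Because Python 3 uses absolute imports by default, `import utils` inside the package silently binds to the framework's copy, which is exactly the failure mode this FAQ describes; `from . import utils` avoids it.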
Real-Time Services FAQs
- What Do I Do If a Conflict Occurs in the Python Dependency Package of a Custom Prediction Script When I Deploy a Real-Time Service?
- How Do I Speed Up Real-Time Prediction?
- Can a New-Version AI Application Still Use the Original API?
- What Is the Format of a Real-Time Service API?
- How Do I Check Whether an Error Is Caused by a Model When a Real-Time Service Is Running But Prediction Failed?
- How Do I Fill in the Request Header and Request Body of an Inference Request When a Real-Time Service Is Running?
- Why Cannot I Access the Obtained Inference Request Address from the Initiator Client?
- What Do I Do If Deploying a Service Failed Due to Insufficient Quota?
- Why Did My Service Deployment Fail with Proper Deployment Timeout Configured?