Large Model Inference Process
DataArtsFabric provides the entire AI development process, from data preparation to model deployment, in serverless mode. You can also use each stage of the process independently. This section describes the DataArtsFabric usage process; select the method that fits your AI development scenario.
| Process | Description | Reference |
|---|---|---|
| Creating a workspace | Create a workspace. All subsequent operations are performed in the workspace. | |
| Creating an endpoint | Create endpoints of different types based on your service type. | |
| Registering a model | Register a fine-tuned model file stored in OBS as your model on the model management page. | |
| Deploying a service | Deploy a model that has been fine-tuned from a base model as an inference service. | |
| Accessing the service | After the fine-tuned model is deployed, call the inference API provided by DataArtsFabric to perform inference. | |
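The last step above, calling a deployed inference service, can be sketched as an authenticated HTTPS POST. The endpoint URL, payload schema, and `X-Auth-Token` header below are assumptions for illustration only; consult the DataArtsFabric API reference for the actual request format and authentication method.

```python
# Minimal sketch of calling a deployed model's inference API over HTTPS.
# The URL, payload fields, and auth header are hypothetical placeholders,
# not the documented DataArtsFabric request schema.
import json
import urllib.request


def build_inference_request(endpoint_url: str, token: str, prompt: str,
                            max_tokens: int = 256) -> urllib.request.Request:
    """Assemble an HTTPS POST request for a deployed inference service."""
    payload = {"prompt": prompt, "max_tokens": max_tokens}  # assumed schema
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": token,  # token obtained beforehand (assumption)
        },
        method="POST",
    )


req = build_inference_request(
    "https://example-endpoint.example.com/v1/infer",  # placeholder URL
    token="<your-auth-token>",
    prompt="Summarize the benefits of serverless inference.",
)
# To actually send the request (requires a real endpoint and valid token):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The request is only constructed here, not sent, so the sketch runs without a live endpoint; in practice you would issue it with `urllib.request.urlopen` or an HTTP client of your choice.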