Updated on 2025-07-08 GMT+08:00

Introduction to Large Model Inference Scenarios

Common large models include large language models (LLMs), multimodal large models, and text-to-image large models. LLMs generate text and perform inference based on your prompts, and can be widely used in the following fields:

  • Q&A systems: LLMs can process natural language, understand your intent, and answer your questions.
  • Content production: LLMs can generate coherent articles, stories, and dialogues based on given texts or topics.
  • Text summarization: LLMs can summarize long texts and extract key information, helping you quickly understand text content.
  • Machine translation: LLMs can handle translation tasks between multiple languages, enabling cross-language communication.

Currently, DataArts Fabric provides the following two inference methods:

  • Using a Public Inference Service for Inference: DataArts Fabric provides public inference services based on open-source LLMs (such as Qwen2 and GLM4). You can view inference endpoints on the Inference Endpoint page and enable the endpoint you need. Then you can use the public inference service in the playground. With this method, common open-source large models are ready for inference once enabled, and you do not need to deploy them yourself.
  • Creating My Inference Service for Inference: DataArts Fabric allows you to create your own inference services. You can upload your own LLMs, or use public LLMs, to deploy an inference service. Models created on the Model page of DataArts Fabric are visible only to you. You can view and delete models, and manage model versions, including adding, viewing, and deleting versions.
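Once an inference endpoint is enabled, a client typically calls it by POSTing a JSON request body. The sketch below shows how such a request body might be assembled; the endpoint URL, model name, and OpenAI-compatible chat-completions payload shape are assumptions for illustration (many open-source LLM servers expose this format), not the documented DataArts Fabric API. Check your endpoint's details on the Inference Endpoint page for the actual URL and schema.

```python
import json

# Hypothetical endpoint URL -- replace with the URL shown for your
# enabled endpoint on the Inference Endpoint page (assumption, not
# the documented DataArts Fabric address).
ENDPOINT = "https://example.com/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completion request body in the common
    OpenAI-compatible format (assumed here for illustration)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# Example: ask an open-source model to summarize a passage.
body = build_chat_request("qwen2", "Summarize the key points of this text: ...")
print(json.dumps(body, ensure_ascii=False))
```

In practice you would send this body as an HTTP POST with your authentication headers (for example, via `urllib.request` or any HTTP client) and read the generated text from the response.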