Event Notifications (EventMap)
This section describes the EventMap events of the web SDK.
| API | Description | Supported by a Third-Party Large Model | Supported by a Non-Third-Party Large Model |
|---|---|---|---|
| error | Error event. | √ | √ |
| enterActive | A virtual avatar is activated. | × | √ |
| enterSleep | A virtual avatar automatically hibernates. | × | √ |
| jobInfoChange | Interaction task information changes. | √ | √ |
|  | A virtual avatar starts speaking. | × | √ |
|  | A virtual avatar stops speaking. | × | √ |
| speechRecognized | A speech question is converted to text by ASR. | × | √ |
| semanticRecognized | The large language model (LLM) performs semantic recognition on a question and outputs the answer text. speechRecognized and semanticRecognized carry the question and answer, respectively, and share one chatId in each round of Q&A. | × | √ |
error
[Event Description]
Error event returned when a service exception occurs.
[Callback Parameters]
| Parameter | Type | Description |
|---|---|---|
| code | String | Error code. See Error Codes (ICSError). |
| message | String | Error message. |
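As a hypothetical sketch of wiring up this event: the client object, its `on(event, handler)` registration method, and the error code used below are assumptions for illustration; only the event name `error` and the `{ code, message }` payload shape come from the table above.

```typescript
// Payload shape of the `error` event, per the table above.
interface ICSError {
  code: string;    // error code; see Error Codes (ICSError)
  message: string; // error message
}

type ErrorHandler = (err: ICSError) => void;

// Minimal EventMap-style registry standing in for the real SDK client.
class DemoClient {
  private errorHandlers: ErrorHandler[] = [];

  on(event: 'error', handler: ErrorHandler): void {
    this.errorHandlers.push(handler);
  }

  // Simulates the SDK dispatching an error event to subscribers.
  emitError(err: ICSError): void {
    this.errorHandlers.forEach((h) => h(err));
  }
}

const client = new DemoClient();
let lastError: ICSError | undefined;

client.on('error', (err) => {
  lastError = err;
  console.error(`ICS error ${err.code}: ${err.message}`);
});

// Simulated service exception (the code here is illustrative, not real).
client.emitError({ code: 'ICS.DEMO', message: 'demo exception' });
```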
enterActive
[Event Description]
A virtual avatar is activated. This event is triggered when any of the following occurs:
- The startChat API is called.
- The button for starting a dialog on the web SDK page is clicked.
- Voice wakeup is used (see Web Voice Wakeup).
[Callback Parameters]
None.
enterSleep
[Event Description]
A virtual avatar automatically hibernates.
[Callback Parameters]
None.
jobInfoChange
[Event Description]
Interaction task information changes. When the status of an interaction task changes, this event notifies the user. If the interaction task is ready, jobId in jobInfo is the ID of the ongoing task; if it is not ready, jobId is an empty string.
[Callback Parameters]
jobInfo: JobInfo type. Table 3 describes the fields.
| Parameter | Type | Description |
|---|---|---|
| jobId | String | Task ID. |
| websocketAddr | String \| undefined | WebSocket address of the intelligent interaction server. This address is used to assemble a WebSocket connection address when the virtual avatar is equipped with a third-party large model. NOTICE: By default, the returned address does not contain the wss:// prefix; you need to add it. For example, if the returned content is metastudio-api.cn-north-4.myhuaweicloud.com:443, the address should be wss://metastudio-api.cn-north-4.myhuaweicloud.com:443. |
| isReady | Boolean | Whether the task is ready. |
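The prefix handling described in the NOTICE can be sketched as follows. The `buildWsUrl` helper is hypothetical; the JobInfo field names and the example address come from the table above.

```typescript
// Fields of the jobInfo payload, per the table above.
interface JobInfo {
  jobId: string;
  websocketAddr?: string;
  isReady: boolean;
}

// Assemble the WebSocket connection URL from jobInfo.websocketAddr,
// which is returned without the wss:// prefix by default.
function buildWsUrl(info: JobInfo): string | undefined {
  if (!info.isReady || !info.websocketAddr) {
    return undefined; // task not ready: no address to connect to
  }
  // Add the wss:// scheme unless the backend already included one.
  return info.websocketAddr.startsWith('wss://')
    ? info.websocketAddr
    : `wss://${info.websocketAddr}`;
}

const url = buildWsUrl({
  jobId: 'job-1',
  websocketAddr: 'metastudio-api.cn-north-4.myhuaweicloud.com:443',
  isReady: true,
});
console.log(url); // wss://metastudio-api.cn-north-4.myhuaweicloud.com:443
```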
languageInfoChange
[Event Description]
The language information of an interaction task changes. This event is triggered when a language setting is added or deleted in the backend, or when the language is changed using the changeLanguage API.
[Callback Parameters]
languageInfo: Table 4 describes the fields.
speechRecognized
[Event Description]
A speech question is converted to text after ASR.
[Callback Parameters]
question: SpeechRecognitionInfo type. Table 5 describes the fields.
| Parameter | Type | Description |
|---|---|---|
| text | String | Recognition result. |
| resultId | Number | Sequence number of each packet returned in a streaming response. |
| isLast | Boolean | Whether this is the last recognition result. |
| chatId | String | Dialog ID, which is unique in each round of Q&A. |

Note: The streaming response of speechRecognized is different from that of semanticRecognized. For details, see What Are the Differences Between the Streaming Response of speechRecognized and That of semanticRecognized?.
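As a hypothetical sketch of consuming these streaming packets: the reassembly policy (whether each packet carries the full text so far or only an increment) is not specified in this section, so this sketch simply keeps the packet with the highest resultId per chatId and reports the text once isLast arrives. The handler name and sample texts are illustrative.

```typescript
// Payload shape of the speechRecognized event, per the table above.
interface SpeechRecognitionInfo {
  text: string;
  resultId: number; // sequence number of each streaming packet
  isLast: boolean;  // whether this is the last recognition result
  chatId: string;   // unique per round of Q&A
}

// Latest packet seen for each dialog round, keyed by chatId.
const latest = new Map<string, SpeechRecognitionInfo>();

// Returns the final question text once the last packet is seen,
// and undefined while the stream is still in progress.
function onSpeechRecognized(q: SpeechRecognitionInfo): string | undefined {
  const prev = latest.get(q.chatId);
  if (!prev || q.resultId >= prev.resultId) {
    latest.set(q.chatId, q); // keep the most recent packet only
  }
  return q.isLast ? latest.get(q.chatId)!.text : undefined;
}

onSpeechRecognized({ text: 'What is', resultId: 1, isLast: false, chatId: 'c1' });
const finalText = onSpeechRecognized({
  text: 'What is the weather',
  resultId: 2,
  isLast: true,
  chatId: 'c1',
});
console.log(finalText); // What is the weather
```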
semanticRecognized
[Event Description]
The large language model (LLM) performs semantic recognition on a question and outputs the answer text. speechRecognized and semanticRecognized carry the question and answer, respectively, and share one chatId in each round of Q&A.
[Callback Parameters]
answer: SemanticRecognitionInfo type. Table 6 describes the fields.
| Parameter | Type | Description |
|---|---|---|
| text | String | Recognition result. |
| questionText | String | Question text. |
| resultId | Number | Sequence number of each packet returned in a streaming response. |
| isLast | Boolean | Whether this is the last recognition result. |
| chatId | String | Dialog ID, which is unique in each round of Q&A. |

Note: The streaming response of semanticRecognized is different from that of speechRecognized. For details, see What Are the Differences Between the Streaming Response of speechRecognized and That of semanticRecognized?.
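Since the question and answer of one Q&A round share a chatId, a consumer can pair them as sketched below. The helper names and sample strings are hypothetical; only the shared-chatId behavior comes from the event descriptions above.

```typescript
// One round of Q&A, assembled from the two events sharing a chatId.
interface QARound {
  question?: string; // from speechRecognized
  answer?: string;   // from semanticRecognized
}

const rounds = new Map<string, QARound>();

// Record the final question text for a chatId (speechRecognized).
function recordQuestion(chatId: string, text: string): void {
  const round = rounds.get(chatId) ?? {};
  round.question = text;
  rounds.set(chatId, round);
}

// Record the final answer text for the same chatId (semanticRecognized).
function recordAnswer(chatId: string, text: string): void {
  const round = rounds.get(chatId) ?? {};
  round.answer = text;
  rounds.set(chatId, round);
}

recordQuestion('chat-42', 'What can you do?');
recordAnswer('chat-42', 'I can answer your questions.');
const round = rounds.get('chat-42');
console.log(round); // { question: 'What can you do?', answer: 'I can answer your questions.' }
```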