
Event Notifications (EventMap)

This section describes the EventMap events of the web SDK.

Table 1 Notifications

| API | Description | Supported by a Third-Party Large Model | Supported by a Non-Third-Party Large Model |
| --- | --- | --- | --- |
| error | Error event. | √ | √ |
| enterActive | A virtual avatar is activated. | × | √ |
| enterSleep | A virtual avatar automatically enters hibernation. | × | √ |
| jobInfoChange | Interaction task information changes. | √ | √ |
| languageInfoChange | The language information of an interaction task changes. | √ | √ |
| speakingStart | A virtual avatar starts speaking. | × | √ |
| speakingStop | A virtual avatar stops speaking. | × | √ |
| speechRecognized | A speech question is converted to text by ASR. | × | √ |
| semanticRecognized | The large language model (LLM) performs semantic recognition on a question and outputs the answer text. speechRecognized and semanticRecognized carry the question and answer, respectively, and share one chatId in each round of Q&A. | × | √ |

error

[Event Description]

Error event returned when a service exception occurs.

[Callback Parameters]

icsError: ICSError type. Table 2 describes the fields.

Table 2 ICSError

| Parameter | Type | Description |
| --- | --- | --- |
| code | string | Error code. See Error Codes (ICSError). |
| message | string | Error message. |
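For illustration, the following TypeScript sketch subscribes to the error event. The sdk instance and its on(...) registration method are assumptions for this sketch; substitute the registration API your SDK version actually exposes.

```typescript
// Minimal typings assumed for this sketch; the real SDK ships its own.
interface ICSError {
  code: string;    // Error code, see Error Codes (ICSError)
  message: string; // Error message
}

// Assumed event-registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'error', listener: (icsError: ICSError) => void): void;
};

// Log service exceptions so they can be surfaced to the user.
sdk.on('error', (icsError) => {
  console.error(`ICS error ${icsError.code}: ${icsError.message}`);
});
```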

enterActive

[Event Description]

A virtual avatar is activated. This event is triggered when any of the following occurs:

  • The API startChat is called.
  • The dialog start button on the web SDK page is clicked.
  • Voice wakeup is used (see Web Voice Wakeup).

[Callback Parameters]

None.

enterSleep

[Event Description]

A virtual avatar automatically enters hibernation.

[Callback Parameters]

None.
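As a sketch of how these two lifecycle events pair up, the handler below tracks an active/asleep flag. The sdk declaration is the same assumed registration shape as in the error example.

```typescript
// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'enterActive' | 'enterSleep', listener: () => void): void;
};

// Track whether the avatar is currently active.
let avatarActive = false;

sdk.on('enterActive', () => {
  avatarActive = true;  // startChat, the dialog button, or voice wakeup
});

sdk.on('enterSleep', () => {
  avatarActive = false; // automatic hibernation
});
```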

jobInfoChange

[Event Description]

Interaction task information changes. This event notifies the user whenever the status of an interaction task changes. If the interaction task is ready, jobId in jobInfo is the ID of the ongoing task; if it is not ready, jobId is an empty string.

[Callback Parameters]

jobInfo: JobInfo type. Table 3 describes the fields.

Table 3 JobInfo

| Parameter | Type | Description |
| --- | --- | --- |
| jobId | string | Task ID. |
| websocketAddr | string \| undefined | WebSocket address of the intelligent interaction server, used to assemble the WebSocket connection address when the virtual avatar is equipped with a third-party large model. NOTICE: The returned address does not contain the wss:// prefix; you need to add it. For example, if metastudio-api.cn-north-4.myhuaweicloud.com:443 is returned, connect to wss://metastudio-api.cn-north-4.myhuaweicloud.com:443. |
| isReady | boolean | Whether the task is ready. |
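The NOTICE above implies a small assembly step before connecting. A hedged sketch, assuming the same on(...) registration shape as earlier:

```typescript
interface JobInfo {
  jobId: string;                     // empty string while the task is not ready
  websocketAddr: string | undefined; // address without the wss:// prefix
  isReady: boolean;
}

// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'jobInfoChange', listener: (jobInfo: JobInfo) => void): void;
};

sdk.on('jobInfoChange', (jobInfo) => {
  if (!jobInfo.isReady || !jobInfo.websocketAddr) {
    return; // task not ready yet
  }
  // Prepend the wss:// prefix before opening the connection.
  const url = `wss://${jobInfo.websocketAddr}`;
  const socket = new WebSocket(url);
  socket.onopen = () => console.log(`Connected for job ${jobInfo.jobId}`);
});
```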

languageInfoChange

[Event Description]

The language information of an interaction task changes. This event is triggered when a language setting is added or deleted in the backend, or when the language is changed using the API changeLanguage.

[Callback Parameters]

languageInfo: LanguageInfo type. Table 4 describes the fields.

Table 4 LanguageInfo

| Parameter | Type | Description |
| --- | --- | --- |
| languageList | LanguageInfo[] | Language list. |
| currentLanguage | 'zh_CN' \| 'en_US' | Language of the current interaction task. |
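A sketch of reacting to language changes, for example to refresh a language picker. The element type of languageList is given as LanguageInfo[] above; the narrower Language alias below is an assumption made for readability.

```typescript
type Language = 'zh_CN' | 'en_US';

interface LanguageInfo {
  languageList: Language[]; // assumed element type for this sketch
  currentLanguage: Language;
}

// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'languageInfoChange', listener: (info: LanguageInfo) => void): void;
};

sdk.on('languageInfoChange', (info) => {
  console.log('Available languages:', info.languageList.join(', '));
  console.log('Current language:', info.currentLanguage);
});
```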

speakingStart

[Event Description]

A virtual avatar starts speaking.

[Callback Parameters]

None.

speakingStop

[Event Description]

A virtual avatar stops speaking.

[Callback Parameters]

None.
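These two events are commonly used to gate UI state while the avatar talks; a minimal sketch with the same assumed sdk shape (the #mic selector is hypothetical):

```typescript
// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'speakingStart' | 'speakingStop', listener: () => void): void;
};

// Hypothetical microphone button; disable it while the avatar is speaking.
const micButton = document.querySelector<HTMLButtonElement>('#mic');

sdk.on('speakingStart', () => {
  if (micButton) micButton.disabled = true;
});

sdk.on('speakingStop', () => {
  if (micButton) micButton.disabled = false;
});
```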

speechRecognized

[Event Description]

A speech question is converted to text by ASR.

[Callback Parameters]

question: SpeechRecognitionInfo type. Table 5 describes the fields.

Table 5 SpeechRecognitionInfo

| Parameter | Type | Description |
| --- | --- | --- |
| text | string | Recognition result. |
| resultId | number | Sequence number of each packet returned in a streaming response. |
| isLast | boolean | Whether this is the last recognition result. |
| chatId | string | Dialog ID, which is unique in each round of Q&A. |

Note: The streaming response of speechRecognized differs from that of semanticRecognized. For details, see What Are the Differences Between the Streaming Response of speechRecognized and That of semanticRecognized?
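A sketch that renders interim recognition packets and commits the text once isLast is true. Whether each packet carries the full text so far or only a fragment is covered by the FAQ linked in the note; this sketch only displays each packet as received.

```typescript
interface SpeechRecognitionInfo {
  text: string;     // recognition result in this packet
  resultId: number; // sequence number within the streaming response
  isLast: boolean;  // true for the last recognition result
  chatId: string;   // unique per round of Q&A
}

// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'speechRecognized', listener: (q: SpeechRecognitionInfo) => void): void;
};

sdk.on('speechRecognized', (question) => {
  // Interim display of each streamed packet.
  console.log(`[${question.chatId}#${question.resultId}] ${question.text}`);
  if (question.isLast) {
    console.log(`Final question for chat ${question.chatId}: ${question.text}`);
  }
});
```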

semanticRecognized

[Event Description]

The large language model (LLM) performs semantic recognition on a question and outputs the answer text. speechRecognized and semanticRecognized carry the question and answer, respectively, and share one chatId in each round of Q&A.

[Callback Parameters]

answer: SemanticRecognitionInfo type. Table 6 describes the fields.

Table 6 SemanticRecognitionInfo

| Parameter | Type | Description |
| --- | --- | --- |
| text | string | Recognition result (answer text). |
| questionText | string | Question text. |
| resultId | number | Sequence number of each packet returned in a streaming response. |
| isLast | boolean | Whether this is the last recognition result. |
| chatId | string | Dialog ID, which is unique in each round of Q&A. |

Note: The streaming response of semanticRecognized differs from that of speechRecognized. For details, see What Are the Differences Between the Streaming Response of speechRecognized and That of semanticRecognized?
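To close the loop, answer packets can be grouped by chatId so each round's question and answer display together. This sketch assumes each packet carries an incremental text fragment to concatenate; confirm the actual streaming semantics in the FAQ referenced in the note above.

```typescript
interface SemanticRecognitionInfo {
  text: string;         // answer text in this packet
  questionText: string; // question text
  resultId: number;     // sequence number within the streaming response
  isLast: boolean;      // true for the last recognition result
  chatId: string;       // matches the speechRecognized chatId of the same round
}

// Assumed registration shape; replace with the actual SDK instance.
declare const sdk: {
  on(event: 'semanticRecognized', listener: (a: SemanticRecognitionInfo) => void): void;
};

// Accumulate answer fragments per chatId (assumed fragment semantics).
const answers = new Map<string, string[]>();

sdk.on('semanticRecognized', (answer) => {
  const parts = answers.get(answer.chatId) ?? [];
  parts.push(answer.text);
  answers.set(answer.chatId, parts);
  if (answer.isLast) {
    console.log(`Q (${answer.chatId}): ${answer.questionText}`);
    console.log(`A (${answer.chatId}): ${parts.join('')}`);
    answers.delete(answer.chatId); // free memory once the round completes
  }
});
```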