
Configuring and Using Custom Word Dictionaries for an OpenSearch Cluster

Prerequisites

An OpenSearch cluster and the custom word dictionaries you plan to configure for it have been prepared, and the word dictionary files have been uploaded to an OBS bucket.
  • The cluster and word dictionary files meet the requirements described in Constraints.
  • The OBS bucket to which data is uploaded must be in the same region as the cluster. For details about how to upload a word dictionary file to an OBS bucket, see Uploading an Object.

Configuring Custom Word Dictionaries

  1. Log in to the CSS management console.
  2. In the navigation tree on the left, choose Clusters > OpenSearch. The cluster list is displayed.
  3. On the Clusters page, click the name of the target cluster.
  4. Click the Word Dictionaries tab.
  5. On the Word Dictionaries page, configure custom word dictionaries for the cluster or modify preset ones.
    1. To configure custom word dictionaries, see Table 1.
      Table 1 Configuring custom word dictionaries

      OBS Bucket

      Select the OBS bucket where the word dictionary files are stored.

      You can click Create Bucket to create an OBS bucket. The new OBS bucket must be in the same region as the cluster, and its Default Storage Class must be Standard or Infrequent Access.

      Main Word Dictionary

      A custom word dictionary. It is initially empty. No Update is selected by default, which means this word dictionary is not configured.

      • To add a custom main word dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.

      Stop Word Dictionary

      A custom word dictionary. It is initially empty. No Update is selected by default, which means this word dictionary is not configured.

      • To add a custom stop word dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.

      Synonym Dictionary

      A custom word dictionary. It is initially empty. No Update is selected by default, which means this word dictionary is not configured.

      • To add a custom synonym dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.
    2. To modify the preset word dictionaries, toggle on Modify Preset Word Dictionary, and then modify them as needed.

      If the four preset word dictionaries (static main word, static stop word, extra main word, and extra stop word) are not displayed, the current cluster version does not support modifying or deleting them. To use this function, you are advised to upgrade the cluster version, or create a new cluster and migrate data to it.

      Table 2 Configuring preset word dictionaries

      Static Main Word Dictionary

      A preset collection of common main words. No Update is selected by default, which means the preset word dictionary is used as is.

      • To modify Static Main Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Static Main Word Dictionary, click Do Not Use.

      Static Stop Word Dictionary

      A preset collection of common stop words. No Update is selected by default, which means the preset word dictionary is used as is.

      • To modify Static Stop Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Static Stop Word Dictionary, click Do Not Use.

      Extra Main Word Dictionary

      A preset collection of uncommon main words. No Update is selected by default, which means the preset word dictionary is used as is.

      • To modify Extra Main Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Extra Main Word Dictionary, click Do Not Use.

      Extra Stop Word Dictionary

      A preset collection of uncommon stop words. No Update is selected by default, which means the preset word dictionary is used as is.

      • To modify Extra Stop Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Extra Stop Word Dictionary, click Do Not Use.
  6. Click Save. In the displayed dialog box, click OK. The word dictionary information is displayed in the lower part of the page with Word Dictionary Status set to Updating. The configuration takes about one minute, after which the status changes to Successful.
  7. Deleting or updating any of the four preset word dictionaries (static main word, static stop word, extra main word, and extra stop word) takes effect only after the cluster is restarted. Updates to the other word dictionaries take effect dynamically and do not require a cluster restart. For details about how to restart a cluster, see Restarting an OpenSearch Cluster.

Example

Configure custom word dictionaries for the cluster to set main words, stop words, and synonyms. Then search the target text by keyword and by synonym, and check the search results.

  1. Configure custom word dictionaries and check the word segmentation results. If the preset word dictionaries already meet your needs and no custom word dictionary is required, skip this step.

    1. Prepare the word dictionary files (text files encoded in UTF-8 without BOM) and upload them to the target OBS path.

      You need a main word dictionary file, a stop word dictionary file, and a synonym dictionary file.

      The built-in static stop word dictionary already contains common stop words such as are and the. If the built-in stop word dictionary has not been deleted or updated, you do not need to include such stop words in your custom file.
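
      For reference, here is a minimal sketch of what such files might contain. The file names and entries are hypothetical placeholders: a main or stop word dictionary lists one word per line, and a synonym dictionary typically lists a group of equivalent words on one line, separated by commas.

      main.txt (main word dictionary):
      word-1
      word-2

      stopword.txt (stop word dictionary):
      stopword-1

      synonym.txt (synonym dictionary):
      word-1,word-A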

    2. Configure the word dictionary by referring to Table 1.
    3. After the word dictionaries take effect, return to the cluster list. Locate the target cluster and click Dashboard in the Operation column to access the cluster.
    4. On the OpenSearch Dashboards page, click Dev Tools in the navigation tree on the left. The operation page is displayed.
    5. Run the following commands to check how the ik_smart and ik_max_word word segmentation policies split the target text.
      • Use the ik_smart word segmentation policy to split a piece of target text.
        Example code:
        POST /_analyze
        {
          "analyzer":"ik_smart",
          "text":"Text used for word segmentation"
        }

        After the operation is completed, view the word segmentation result.

        {
          "tokens": [
            {
              "token": "word-1",
              "start_offset": 0,
              "end_offset": 4,
              "type": "CN_WORD",
              "position": 0
            },
            {
              "token": "word-2",
              "start_offset": 5,
              "end_offset": 8,
              "type": "CN_WORD",
              "position": 1
            }
          ]
        }
      • Use the ik_max_word word segmentation policy to split a piece of target text.

        Example code:

        POST /_analyze
        {
          "analyzer":"ik_max_word",
          "text":"Text used for word segmentation"
        }

        After the operation is completed, view the word segmentation result.

        {
          "tokens" : [
            {
              "token": "word-1",
              "start_offset" : 0,
              "end_offset" : 4,
              "type" : "CN_WORD",
              "position" : 0
            },
            {
              "token" : "word-3",
              "start_offset" : 0,
              "end_offset" : 2,
              "type" : "CN_WORD",
              "position" : 1
            },
            {
              "token" : "word-4",
              "start_offset" : 0,
              "end_offset" : 1,
              "type" : "CN_WORD",
              "position" : 2
            },
            {
              "token" : "word-5",
              "start_offset" : 1,
              "end_offset" : 3,
              "type" : "CN_WORD",
              "position" : 3
            },
            {
              "token" : "word-6",
              "start_offset" : 2,
              "end_offset" : 4,
              "type" : "CN_WORD",
              "position" : 4
            },
            {
              "token" : "word-7",
              "start_offset" : 3,
              "end_offset" : 4,
              "type" : "CN_WORD",
              "position" : 5
            },
            {
              "token" : "word-2",
              "start_offset" : 5,
              "end_offset" : 8,
              "type" : "CN_WORD",
              "position" : 6
            },
            {
              "token" : "word-8",
              "start_offset" : 5,
              "end_offset" : 7,
              "type" : "CN_WORD",
              "position" : 7
            },
            {
              "token" : "word-9",
              "start_offset" : 6,
              "end_offset" : 8,
              "type" : "CN_WORD",
              "position" : 8
            },
            {
              "token" : "word-10",
              "start_offset" : 7,
              "end_offset" : 8,
              "type" : "CN_WORD",
              "position" : 9
            }
          ]
        }

  2. Create an index and configure a word segmentation policy. After data is imported, search for data by keyword.

    The commands for Elasticsearch 7.x and later (whose syntax OpenSearch clusters also use) differ from those for earlier Elasticsearch versions.
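
    The following is a minimal sketch of this step, run in Dev Tools and using the Elasticsearch 7.x and later (typeless) syntax. It assumes the IK analyzers shown earlier are available on the cluster; the index name keyword_demo, the field name desc, the sample text, and the keyword word-1 are hypothetical placeholders, with word-1 standing for one of the terms produced by the segmentation above.

    # Create an index whose desc field is segmented with ik_max_word at indexing
    # time and with ik_smart at search time.
    PUT /keyword_demo
    {
      "mappings": {
        "properties": {
          "desc": {
            "type": "text",
            "analyzer": "ik_max_word",
            "search_analyzer": "ik_smart"
          }
        }
      }
    }

    # Import a sample document.
    POST /keyword_demo/_doc/1
    {
      "desc": "Text used for word segmentation"
    }

    # Search the desc field by keyword; documents containing that segmented term are returned.
    GET /keyword_demo/_search
    {
      "query": {
        "match": {
          "desc": "word-1"
        }
      }
    }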

  3. Create an index and configure a synonym policy. After data is imported, search for data by synonyms.

    The commands for Elasticsearch 7.x and later (whose syntax OpenSearch clusters also use) differ from those for earlier Elasticsearch versions.
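
    A minimal sketch of this step follows, again using the Elasticsearch 7.x and later syntax in Dev Tools. Because this sketch does not show how to reference the synonym dictionary configured on the console, it defines the synonyms inline with the standard synonym token filter instead; the index name synonym_demo, the field name desc, and the synonym pair movie/film are hypothetical placeholders.

    # Create an index whose desc field applies a synonym filter at search time.
    # The synonyms are defined inline for illustration only.
    PUT /synonym_demo
    {
      "settings": {
        "analysis": {
          "filter": {
            "demo_synonyms": {
              "type": "synonym",
              "synonyms": ["movie, film"]
            }
          },
          "analyzer": {
            "demo_synonym_analyzer": {
              "type": "custom",
              "tokenizer": "ik_smart",
              "filter": ["demo_synonyms"]
            }
          }
        }
      },
      "mappings": {
        "properties": {
          "desc": {
            "type": "text",
            "analyzer": "ik_max_word",
            "search_analyzer": "demo_synonym_analyzer"
          }
        }
      }
    }

    # Import a document that contains the word movie.
    POST /synonym_demo/_doc/1
    {
      "desc": "this movie is wonderful"
    }

    # Searching for the synonym film also matches the document containing movie.
    GET /synonym_demo/_search
    {
      "query": {
        "match": {
          "desc": "film"
        }
      }
    }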