Configuring and Using Custom Word Dictionaries for an OpenSearch Cluster

Updated on 2025-01-23 GMT+08:00

Prerequisites

An OpenSearch cluster and the custom word dictionaries you plan to configure for it have been prepared, and the word dictionary files have been uploaded to an OBS bucket.
  • The cluster and word dictionary files meet the requirements described in Constraints.
  • The OBS bucket to which data is uploaded must be in the same region as the cluster. For details about how to upload a word dictionary file to an OBS bucket, see Uploading an Object.

Configuring Custom Word Dictionaries

  1. Log in to the CSS management console.
  2. In the navigation tree on the left, choose Clusters > OpenSearch. The cluster list is displayed.
  3. On the Clusters page, click the name of the target cluster.
  4. Click the Word Dictionaries tab.
  5. On the Word Dictionaries page, configure custom word dictionaries for the cluster or modify preset ones.
    1. To configure custom word dictionaries, see Table 1.
      Table 1 Configuring custom word dictionaries

      OBS Bucket

      Select the OBS bucket that stores the word dictionary files.

      You can click Create Bucket to create an OBS bucket. The new OBS bucket must be in the same region as the cluster, and its Default Storage Class must be Standard or Infrequent Access.

      Main Word Dictionary

      A custom word dictionary that is initially empty. By default, No Update is selected, meaning this word dictionary is not configured.

      • To add a custom main word dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.

      Stop Word Dictionary

      A custom word dictionary that is initially empty. By default, No Update is selected, meaning this word dictionary is not configured.

      • To add a custom stop word dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.

      Synonym Dictionary

      A custom word dictionary that is initially empty. By default, No Update is selected, meaning this word dictionary is not configured.

      • To add a custom synonym dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use this word dictionary, click Do Not Use.
    2. To modify a preset word dictionary, toggle on Modify Preset Word Dictionary, and then modify that word dictionary as needed.
      NOTE:

      If the four preset word dictionaries (static main word, static stop word, extra main word, and extra stop word) are not displayed, the current cluster version does not support deleting or modifying them. To use this function, upgrade the cluster, or create a new cluster and migrate data to it.

      Table 2 Configuring preset word dictionaries

      Static Main Word Dictionary

      A preset collection of common main words. By default, No Update is selected, meaning the preset dictionary is used as is.

      • To modify Static Main Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Static Main Word Dictionary, click Do Not Use.

      Static Stop Word Dictionary

      A preset collection of common stop words. By default, No Update is selected, meaning the preset dictionary is used as is.

      • To modify Static Stop Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Static Stop Word Dictionary, click Do Not Use.

      Extra Main Word Dictionary

      A preset collection of uncommon main words. By default, No Update is selected, meaning the preset dictionary is used as is.

      • To modify Extra Main Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Extra Main Word Dictionary, click Do Not Use.

      Extra Stop Word Dictionary

      A preset collection of uncommon stop words. By default, No Update is selected, meaning the preset dictionary is used as is.

      • To modify Extra Stop Word Dictionary, click Update and select a .txt word dictionary file.
      • If you do not want to use Extra Stop Word Dictionary, click Do Not Use.
  6. Click Save. In the displayed dialog box, click OK. The word dictionary information is displayed in the lower part of the page, with Word Dictionary Status shown as Updating. Configuration takes about 1 minute, after which Word Dictionary Status changes to Successful.
  7. Deleting or updating either of the two static word dictionaries requires a cluster restart to take effect. Updates to the other word dictionaries take effect dynamically, without a restart. For details about how to restart a cluster, see Restarting an OpenSearch Cluster.

Example

Configure a custom word dictionary for the cluster, setting main words, stop words, and synonyms. Then search the target text by keyword and by synonym, and check the search results.

  1. Configure a custom word dictionary and check the word segmentation result. If the preset word dictionaries are already sufficient, skip this step.

    1. Prepare a word dictionary file (a UTF-8-encoded text file without a BOM) and upload it to the target OBS path.

      Set the main word dictionary file, stop word dictionary file, and synonym word dictionary file.

      NOTE:

      The built-in static stop word dictionary contains common stop words such as are and the. If the built-in stop word dictionary has never been deleted or updated, you do not need to upload such stop words.
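
      A word dictionary file is a plain-text list with one entry per line. For the synonym dictionary, equivalent terms are typically written on one line, separated by commas (the Solr synonym format used by Elasticsearch). The file names and entries below are illustrative only, not names from this document:

        main.txt (main words, one per line):
        cloudsearch
        tokenizer

        stopword.txt (stop words, one per line):
        however
        whatever

        synonym.txt (comma-separated equivalent terms, one group per line):
        laptop, notebook
        mobile phone, cellphone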

    2. Configure the word dictionary by referring to Configuring Custom Word Dictionaries.
    3. After the word dictionary takes effect, return to the cluster list. Locate the target cluster and click Kibana in the Operation column to access the cluster.
    4. On the Kibana page, click Dev Tools in the navigation tree on the left. The operation page is displayed.
    5. Run the following commands to compare the behavior of the ik_smart and ik_max_word word segmentation policies.
      • Use the ik_smart word segmentation policy to split a piece of target text.
        Example code:
        POST /_analyze
        {
          "analyzer":"ik_smart",
          "text":"Text used for word segmentation"
        }

        After the operation is completed, view the word segmentation result.

        {
          "tokens": [
            {
              "token": "word-1",
              "start_offset": 0,
              "end_offset": 4,
              "type": "CN_WORD",
              "position": 0
            },
            {
              "token": "word-2",
              "start_offset": 5,
              "end_offset": 8,
              "type": "CN_WORD",
              "position": 1
            }
          ]
        }
      • Use the ik_max_word word segmentation policy to split a piece of target text.

        Example code:

        POST /_analyze
        {
          "analyzer":"ik_max_word",
          "text":"Text used for word segmentation"
        }

        After the operation is completed, view the word segmentation result.

        {
          "tokens": [
            {
              "token": "word-1",
              "start_offset": 0,
              "end_offset": 4,
              "type": "CN_WORD",
              "position": 0
            },
            {
              "token": "word-3",
              "start_offset": 0,
              "end_offset": 2,
              "type": "CN_WORD",
              "position": 1
            },
            {
              "token": "word-4",
              "start_offset": 0,
              "end_offset": 1,
              "type": "CN_WORD",
              "position": 2
            },
            {
              "token": "word-5",
              "start_offset": 1,
              "end_offset": 3,
              "type": "CN_WORD",
              "position": 3
            },
            {
              "token": "word-6",
              "start_offset": 2,
              "end_offset": 4,
              "type": "CN_WORD",
              "position": 4
            },
            {
              "token": "word-7",
              "start_offset": 3,
              "end_offset": 4,
              "type": "CN_WORD",
              "position": 5
            },
            {
              "token": "word-2",
              "start_offset": 5,
              "end_offset": 8,
              "type": "CN_WORD",
              "position": 6
            },
            {
              "token": "word-8",
              "start_offset": 5,
              "end_offset": 7,
              "type": "CN_WORD",
              "position": 7
            },
            {
              "token": "word-9",
              "start_offset": 6,
              "end_offset": 8,
              "type": "CN_WORD",
              "position": 8
            },
            {
              "token": "word-10",
              "start_offset": 7,
              "end_offset": 8,
              "type": "CN_WORD",
              "position": 9
            }
          ]
        }

  2. Create an index and configure a word segmentation policy. After data is imported, search for data by keyword.

    The commands for Elasticsearch 7.x and later differ from those for earlier versions.
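
    For Elasticsearch 7.x and later, the index, analyzer, and search can be sketched as follows. The index name my_index and the field name content are illustrative assumptions, not names from this document; versions earlier than 7.x additionally require a mapping type name in the mappings block.

      Example code:
      PUT /my_index
      {
        "settings": {
          "analysis": {
            "analyzer": {
              "my_analyzer": {
                "type": "custom",
                "tokenizer": "ik_max_word"
              }
            }
          }
        },
        "mappings": {
          "properties": {
            "content": {
              "type": "text",
              "analyzer": "ik_max_word",
              "search_analyzer": "ik_smart"
            }
          }
        }
      }

      POST /my_index/_doc
      {
        "content": "Text to be searched"
      }

      GET /my_index/_search
      {
        "query": {
          "match": {
            "content": "keyword"
          }
        }
      }

    Using ik_max_word at index time and ik_smart at search time is a common pairing: the index stores fine-grained tokens, while queries are segmented coarsely for higher precision.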

  3. Create an index and configure a synonym policy. After data is imported, search for data by synonyms.

    The commands for Elasticsearch 7.x and later differ from those for earlier versions.
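
    For Elasticsearch 7.x and later, a synonym-aware analyzer can be sketched as follows. The index name my_index, the filter name my_synonym, and the inline synonym pair are illustrative assumptions; this sketch uses the standard synonym token filter with an inline synonym list rather than the cluster's configured synonym dictionary, so treat it only as an illustration of the query behavior.

      Example code:
      PUT /my_index
      {
        "settings": {
          "analysis": {
            "filter": {
              "my_synonym": {
                "type": "synonym",
                "synonyms": ["laptop, notebook"]
              }
            },
            "analyzer": {
              "ik_synonym": {
                "type": "custom",
                "tokenizer": "ik_max_word",
                "filter": ["my_synonym"]
              }
            }
          }
        },
        "mappings": {
          "properties": {
            "content": {
              "type": "text",
              "analyzer": "ik_synonym"
            }
          }
        }
      }

      With this mapping, a document containing "notebook" also matches a search for "laptop":

      GET /my_index/_search
      {
        "query": {
          "match": {
            "content": "laptop"
          }
        }
      }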
