Updated on 2025-08-11 GMT+08:00

Basic Concepts

Log Management

Table 1 Log management terms

LTS

Log Tank Service (LTS) collects, analyzes, and stores logs. You can use LTS for efficient device O&M, service trend analysis, security audits, and monitoring.

For more information, see What Is LTS?

Log

A log is a record with a timestamp generated during the running of a computer system, application, or service. Logs capture system internal statuses, user operations, key events, or exceptions. Common examples include user operation logs, API access logs, and system error logs. Logs are typically stored as structured text or binary data on the host where the application is located. A system running record can be logged as either single-line or multi-line text.

LTS collects logs in several ways: through ICAgent, through self-built software or APIs, or directly from cloud services.

Log group

A log group is the basic unit for LTS to manage logs. It comprises log streams and categorizes them, but does not store any log data itself.

For more information, see Managing Log Groups.

Log stream

A log stream is the basic unit for reading and writing logs. Collected logs of different types are classified and stored in different log streams for easier log management. If there are a large number of logs, you can create multiple log streams and name them for quick log search.

For more information, see Managing Log Streams.

Tag

A tag is a piece of metadata used to classify, mark, or describe data, objects, or content. Each tag consists of a key and a value. Tags help you quickly identify, find, and manage objects.

LTS allows you to add tags to log groups, log streams, log ingestion configurations, host groups, and alarm rules.
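
For example, adding the tag key env with the value production to all log groups of your production environment lets you filter those groups out at a glance.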

Endpoint

An endpoint is the request address for calling an API. Endpoints vary depending on services and regions.

You can query the endpoints of LTS in different regions from Regions and Endpoints.
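
As an illustration only, LTS endpoints typically follow the pattern lts.{region-id}.myhuaweicloud.com, such as lts.cn-north-4.myhuaweicloud.com; always verify the exact address in Regions and Endpoints.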

Access key

An access key comprises an access key ID (AK) and a secret access key (SK) and serves as a long-term identity credential for signing your requests to Huawei Cloud APIs. The AK and SK are used together to cryptographically sign each request, ensuring its confidentiality, integrity, and authenticity.

For more information, see Access Keys.

Region

Regions are defined by their geographical location and network latency. Public services, such as Elastic Cloud Server (ECS), Elastic Volume Service (EVS), Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within the same region. Regions are classified into universal regions and dedicated regions. A universal region provides universal cloud services for common tenants. A dedicated region provides specific services for specific tenants.

A region is the geographical location of a data center. Data cannot be transmitted between regions over the intranet. Because LTS runs in regional data centers, choosing the region closest to your services and log data minimizes reporting and access latency.

Standard storage

Standard storage stores data with high availability and offers fast read/write capabilities to ensure quick responses and meet real-time data access needs. It is suitable for scenarios that require frequent and real-time data access.

Cold storage

Cold storage is designed for the long-term storage of data that is not frequently accessed. It has lower read/write performance and storage costs than standard storage. This makes it ideal for cost-sensitive scenarios with infrequent access, such as historical data archiving, long-term video surveillance storage, and scientific data backups.

For more information, see Intelligent Cold Storage.

Shard

Shards are used to store data and manage the read/write capacity of log streams. Each shard has a limit of 5 MB/s for writes and 10 MB/s for reads. When the log read/write traffic exceeds these limits, LTS automatically adds more shards. You can also manually add shards.
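
For example, a log stream that sustains 12 MB/s of write traffic needs at least three shards, because two shards allow at most 2 × 5 MB/s = 10 MB/s of writes.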

For more information, see Managing Log Streams.

Log Ingestion

Table 2 Log ingestion terms

ICAgent

ICAgent is a log collection tool for LTS. It runs on hosts where logs need to be collected.

UniAgent

UniAgent centrally manages the lifecycle of collection plug-ins and delivers instructions, such as for script delivery and execution. UniAgent does not collect data itself; O&M data is collected by collection plug-ins. You can install collection plug-ins and create collection tasks in the ingestion center to collect metrics.

ICAgent structuring parsing configuration

ICAgent structuring parsing is performed on the collection side and supports combined plug-ins for parsing. You can set multiple collection configurations with different structuring parsing rules for a single log stream.

For more information, see Configuring ICAgent Structuring Parsing.

Cloud structuring parsing configuration

Leveraging the computing power of LTS, cloud structuring parsing structures logs in log streams using various log extraction methods. In the future, it will incur log processing traffic fees based on the log volume.

For more information, see Setting Cloud Structuring Parsing.

Host

A host is a physical or virtual device that can compute independently and has unique network identifiers. It runs software and serves as the fundamental component for network communication. For LTS, hosts are classified into intra-region hosts and extra-region hosts.

  • An intra-region host is in the same region as your LTS console, for example, a Huawei Cloud ECS.

    For more information, see Installing ICAgent (Intra-Region Hosts).

  • An extra-region host refers to a host located outside the current Huawei Cloud region or a non-Huawei Cloud host. This category includes hosts in self-built Internet Data Centers (IDCs), those provided by third parties, and those in other Huawei Cloud regions.

    For more information, see Installing ICAgent (Extra-Region Hosts).

Host group

A host group organizes multiple hosts. By adding hosts to a group and associating that group with log ingestion configurations, you can easily apply the same collection settings to all hosts within the group. This simplifies both host management and log collection.

For more information, see Managing Host Groups.

Log Search and Analysis

Table 3 Log search and analysis terms

Log search

You can use LTS search syntax to specify filter rules and retrieve logs that meet the search criteria.

For more information, see Using Search Syntax.

Log analysis

This process uses SQL analysis syntax to analyze logs filtered by a search statement and displays the analysis results in statistical charts or dashboards.

For more information, see Log Visualization Overview.

Search and analysis statement

A statement that uses the pipe character (|) to combine search with analysis, following the format Search statement | Analysis statement. A search statement can be used independently, but an analysis statement must be used together with a search statement. This means the analysis function operates on either the search results or all data.

NOTE:

The search and analysis statements are based on the pipe character function. This function is available only to whitelisted users. To use it, submit a service ticket.
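
For example, a statement of the following shape (the fields status and host are hypothetical, not predefined by LTS) first filters logs and then aggregates the matches:

    status:500 | SELECT host, COUNT(*) AS error_count GROUP BY host

Here, status:500 is the search statement, and everything after the pipe character is the analysis statement that runs on the search results.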

Index

Similar to a data directory, an index is a data storage structure that consists of keywords and logical pointers pointing to actual data. Indexes make log queries faster. You must configure indexing before you can query or analyze logs. Different index settings generate different query and analysis results. Configure index settings to fit your service requirements.

For more information, see Configuring Log Indexing.

LTS provides two indexing types:

  • Full-text indexing: creates indexes by splitting entire logs into words based on the delimiters you set. Field names (keys) and field values (values) are queried as common text.
  • Field indexing: allows you to specify field names and field values (key:value pairs) to narrow down the query scope, as illustrated below.
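
For example, with a hypothetical status field:

  • A full-text query such as 404 matches any log in which the word 404 appears, in any field.
  • A field query such as status:404 matches only logs whose status field equals 404.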

Delimit

This process splits log texts into meaningful words.

Log texts are long and must be split into independent words to be searchable. Delimiters determine the positions at which a log text is split. Each resulting segment is called a word, and this process is called delimiting.
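
For example, if the space character is set as a delimiter, the hypothetical log text GET /index.html 404 is split into three words (GET, /index.html, and 404), each of which can be searched independently.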

For more information, see Configuring Log Content Delimiters.

SQL analysis syntax

This syntax is used to analyze logs and display the results in statistical charts or dashboards.
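
A minimal sketch, assuming a structured field named status has been extracted and indexed for analysis (the exact syntax is described in Using SQL Analysis Syntax):

    SELECT status, COUNT(*) AS request_count GROUP BY status

This counts the logs under each status value; the result can then be rendered as a statistical chart.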

For more information, see Using SQL Analysis Syntax.

SQL analysis syntax (pipe character)

This pipe-character-based SQL analysis syntax is used to analyze logs and display the results in statistical charts or dashboards.

For more information, see SQL Functions.

Log Alarms

Table 4 Log alarm terms

Alarm

Alarm reporting is a notification mechanism that is automatically triggered when an exception or potential problem is detected or a preset condition is met during system, device, or service operation. It is a key part of monitoring and helps O&M personnel locate and rectify faults.

Alarm rule

An alarm rule defines the conditions that trigger alarms, including the query condition, detection rule, statistical period, notification frequency, and notification channel.

For more information, see Configuring Log Alarm Rules.

Message template

A message template is a text template for alarm notifications. It uses variables to reference alarm attributes. When using a message template to send alarm notifications, the system automatically replaces the template variables with the content in the alarm rule.
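
A minimal sketch of a template body, in which the variable names are hypothetical placeholders rather than the exact LTS variables:

    Alarm rule: ${rule_name}
    Severity: ${severity}
    Triggered at: ${occur_time}

When a notification is sent, each placeholder is replaced with the corresponding attribute of the triggered alarm.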

For more information, see Creating a Message Template on the LTS Console.

Alarm notification rule

An alarm notification rule links an SMN topic and a message template. It is also a parameter in an alarm rule. When an alarm is triggered, SMN automatically sends an alarm notification via channels like SMS or email, based on the message template.

An SMN topic is used to publish messages and subscribe to notifications.

For more information, see Creating an Alarm Notification Rule.

Log Consumption and Processing

Table 5 Log consumption and processing terms

Metric

Metrics measure the status of systems, services, or resources, such as CPU usage, memory usage, and access throughput. They are usually stored in time series for monitoring, analysis, and alarm reporting, providing insight into system statuses and performance.

Metric generation rule

A rule that uses a single filter or multiple filters (including associations and groups) to dynamically collect statistics on structured logs within a specific time range. The statistics are then displayed in AOM Prometheus instances.
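
For example, a rule could filter structured logs whose (hypothetical) level field equals ERROR, count the matches every minute, and report that count as a metric to an AOM Prometheus instance.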

For more information, see Generating Metrics from Logs.

SQL scheduled job

These jobs use standard SQL syntax to periodically analyze logs in the source log streams based on scheduling rules. They transform, aggregate, and visualize log data. The analysis results are stored in LTS log streams or AOM Prometheus instances.
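
A minimal sketch, assuming the source log stream has a structured level field (the field name is illustrative):

    SELECT level, COUNT(*) AS log_count GROUP BY level

Scheduled to run periodically, such a job writes the per-level counts to a target log stream or an AOM Prometheus instance for trend analysis.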

For more information, see Processing Logs with SQL Scheduled Jobs.

Log processing with functions

This method uses function templates provided by FunctionGraph or your custom functions to process logs.

For more information, see Processing Logs with FunctionGraph Function Templates.