Ingesting ServiceStage Cloud Host Logs to LTS

Updated on 2025-01-24 GMT+08:00

LTS collects log data from cloud hosts of ServiceStage. By processing a massive number of logs efficiently, securely, and in real time, LTS provides useful insights for you to optimize the availability and performance of cloud services and applications. It also helps you efficiently perform real-time decision-making, device O&M management, and service trend analysis.

Perform the following steps to complete the ingestion configuration:

  1. Step 1: Select a Log Stream
  2. Step 2: (Optional) Select a Host Group
  3. Step 3: Configure the Collection
  4. Step 4: Configure Indexing
  5. Step 5: Complete the Ingestion Configuration

To collect logs from multiple scenarios, set multiple ingestion configurations in a batch.

NOTE:

Currently, this function is available only to whitelisted users. To use it, submit a service ticket.

Prerequisites

Step 1: Select a Log Stream

  1. Log in to the LTS console.
  2. Choose Log Ingestion > Ingestion Center in the navigation pane and click ServiceStage - Cloud Host Logs.

    Alternatively, choose Log Ingestion > Ingestion Management in the navigation pane, and click Ingest Log > ServiceStage - Cloud Host Logs.

    Alternatively, choose Log Management in the navigation pane and click the target log stream to access its details page. Click in the upper right corner. On the displayed page, click the Log Ingestion tab and click Ingest Log. In the displayed dialog box, click ServiceStage - Cloud Host Logs.

  3. In the Select Log Stream step, set the following parameters:
    1. Select a ServiceStage application and ServiceStage environment.
    2. Select a log group from the Log Group drop-down list. If there are no desired log groups, click Create Log Group to create one.
    3. Select a log stream from the Log Stream drop-down list. If there are no desired log streams, click Create Log Stream to create one.
  4. Click Next: (Optional) Select Host Group.

Step 2: (Optional) Select a Host Group

  1. Select one or more host groups from which you want to collect logs. If there are no desired host groups, click Create above the host group list to create one.
    NOTE:
    You can skip this step and configure host groups after the ingestion configuration is complete. There are two ways to do this:
    • Choose Host Management > Host Groups in the navigation pane and associate host groups with ingestion configurations.
    • Choose Log Ingestion > Ingestion Management in the navigation pane. In the ingestion configuration list, click Modify in the Operation column. On the page displayed, select required host groups.
  2. Click Next: Configurations.

Step 3: Configure the Collection

Perform the following steps to configure the collection:

  1. Collection Configuration Name: Enter 1 to 64 characters. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. Do not start with a period or underscore, or end with a period.
  2. Collection Paths: Add one or more host paths. LTS will collect logs from these paths. The rules for setting collection paths are as follows:
    • Logs can be collected recursively. A double asterisk (**) can represent up to 5 directory levels in a path.

      For example, /var/logs/**/a.log will match the following logs:

      /var/logs/a.log
      /var/logs/1/a.log 
      /var/logs/1/2/a.log
      /var/logs/1/2/3/a.log
      /var/logs/1/2/3/4/a.log
      /var/logs/1/2/3/4/5/a.log
      NOTE:
      • /1/2/3/4/5/ indicates the 5 levels of directories under the /var/logs directory. All the a.log files found in all these levels of directories will be collected.
      • Only one double asterisk (**) can be contained in a collection path. For example, /var/logs/**/a.log is acceptable but /opt/test/**/log/** is not.
      • A collection path cannot begin with a double asterisk (**), such as /**/test, to avoid collecting system files.
    • You can use an asterisk (*) as a wildcard for fuzzy match. The wildcard (*) can represent one or more characters of a directory or file name.
      NOTE:

      If a log collection path is similar to C:\windows\system32 but logs cannot be collected, enable WAF and configure the path again.

      • Example 1: /var/logs/*/a.log will match all a.log files found in all directories under the /var/logs/ directory:

        /var/logs/1/a.log

        /var/logs/2/a.log

      • Example 2: /var/logs/service-*/a.log will match files as follows:

        /var/logs/service-1/a.log

        /var/logs/service-2/a.log

      • Example 3: /var/logs/service/a*.log will match files as follows:

        /var/logs/service/a1.log

        /var/logs/service/a2.log

    • If the collection path is set to a file name, the corresponding file is collected. Only text files can be collected.
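The wildcard rules above can be sketched in code. The following is an illustrative approximation of the documented matching rules only, not ICAgent's actual matcher: one double asterisk spans up to five directory levels, and a single asterisk matches one or more characters within one directory or file name.

```python
import re

def to_regex(collection_path: str) -> "re.Pattern":
    """Translate an LTS-style collection path into a regex.

    Sketch of the documented rules (not ICAgent's real matcher):
    '**/' spans 0-5 directory levels, '*' matches one or more
    characters within a single path segment, only one '**' is
    allowed, and a path may not begin with '**'.
    """
    if collection_path.count("**") > 1:
        raise ValueError("only one '**' is allowed in a collection path")
    if collection_path.startswith("/**"):
        raise ValueError("a collection path cannot begin with '**'")
    out, i = "", 0
    while i < len(collection_path):
        if collection_path.startswith("**/", i):
            out += r"(?:[^/]+/){0,5}"   # up to 5 directory levels
            i += 3
        elif collection_path[i] == "*":
            out += r"[^/]+"             # fuzzy match within one segment
            i += 1
        else:
            out += re.escape(collection_path[i])
            i += 1
    return re.compile("^" + out + "$")

def path_matches(collection_path: str, log_path: str) -> bool:
    """Return True if log_path would be collected under collection_path."""
    return to_regex(collection_path).match(log_path) is not None
```

For example, `path_matches("/var/logs/**/a.log", "/var/logs/1/2/a.log")` is true, while a path six directory levels below /var/logs/ no longer matches.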
  3. Allow Repeated File Collection (not available on Windows)

    After you enable this function, one host log file can be collected to multiple log streams. This function is available only for certain ICAgent versions. For details, see Checking the ICAgent Version Description.

    After you disable this function, each collection path must be unique. That is, the same log file on the same host cannot be collected to different log streams.

  4. Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Blacklist filters can be exact matches or wildcard pattern matches. For details, see Collection Paths.

    NOTE:

    If you blacklist a file or directory that has been set as a collection path in the previous step, the blacklist settings will be used and the file or files in the directory will be filtered out.
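The blacklist behavior described above can be sketched as follows. This is a sketch under assumed semantics, not ICAgent's implementation: a blacklisted file pattern (exact or wildcard) filters matching files, and a blacklisted directory filters out every file under it.

```python
import fnmatch
import posixpath

def is_filtered_out(log_path: str, blacklist) -> bool:
    """Sketch of the documented collection-filter behavior (assumed
    semantics, not ICAgent's code): a blacklisted file pattern filters
    matching files; a blacklisted directory filters all files under it."""
    for entry in blacklist:
        entry = entry.rstrip("/")
        # Exact or wildcard match against the full file path.
        if fnmatch.fnmatchcase(log_path, entry):
            return True
        # A blacklisted directory filters out the files it contains...
        if fnmatch.fnmatchcase(posixpath.dirname(log_path), entry):
            return True
        # ...including files in its subdirectories.
        if log_path.startswith(entry + "/"):
            return True
    return False
```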

  5. Collect Windows Event Logs: To collect logs from Windows hosts, enable this option and set the following parameters.
    Table 1 Parameters for collecting Windows event logs

    Log Type: The log types are System, Application, Security, and Startup.

    First Collection Time Offset: If you set this parameter to 7, logs generated within the 7 days before the collection start time are collected. The offset takes effect only for the first collection, ensuring that logs are not collected repeatedly. The maximum value is 7 days.

    Event Level: You can filter Windows events for collection by severity: information, warning, error, critical, or verbose. This function is available only for Windows Vista or later.

  6. Set the ServiceStage matching rule by selecting the corresponding component.
  7. Enable structuring parsing. For details, see Setting ICAgent Structuring Parsing Rules.

    LTS enables combined parsing, allowing you to create different structuring parsing rules for each collection configuration of a log stream.

    If you have configured cloud structuring parsing, delete its configurations before configuring ICAgent structuring parsing.

    Figure 1 ICAgent structuring parsing configuration
  8. Set other configurations.
    Table 2 Other configurations

    Max Directory Depth: The maximum directory depth is 20 levels. Collection paths can use double asterisks (**) for multi-level fuzzy match; specify the maximum directory depth in the text box. For example, if your log path is /var/logs/department/app/a.log and your collection path is /var/logs/**/a.log, logs will not be collected when this parameter is set to 1, but will be collected when it is set to 2 or greater.

    Split Logs:
    • If log splitting is enabled, logs exceeding the specified size will be split into multiple logs for collection. Specify a size from 500 KB to 1,024 KB. For example, if you set the size to 500 KB, a 600 KB log will be split into a 500 KB log and a 100 KB log. This restriction applies only to single-line logs, not multi-line logs.
    • If log splitting is disabled, the part of a log beyond 500 KB will be truncated and discarded.

    Collect Binary Files: LTS can collect binary files. Run the file -i <file_name> command to view the file type; charset=binary indicates a binary file. If this option is enabled, binary log files will be collected, but only UTF-8 strings are supported; other strings will be garbled on the LTS console. If this option is disabled, binary log files will not be collected.

    Log File Code: The log file encoding can be UTF-8 or GBK (GBK is not available on Windows). UTF-8 is a variable-length encoding of the Unicode character set. GBK, short for Chinese Internal Code Extension Specification, is a Chinese character encoding standard that extends both the ASCII and GB2312 encoding systems.

    Collection Policy: Select Incremental or All.
    • Incremental: When collecting a new file, ICAgent reads it from the end.
    • All: When collecting a new file, ICAgent reads it from the beginning.

    Custom Metadata:
    • If this option is disabled, ICAgent reports logs to LTS with the default system fields. You do not need to (and cannot) configure the fields.
    • If this option is enabled, ICAgent reports logs with your selected built-in fields and with fields created from custom key-value pairs.

      Built-in Fields: Select built-in fields as required.

      Custom Key-Value Pairs: Click Add and set a key and value.
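The `file -i` check for binary files described above can be approximated in code. A minimal sketch, assuming "binary" means "contains NUL bytes or is not valid UTF-8", which is roughly what charset=binary reports; the real `file` command uses more signals.

```python
def looks_binary(path: str, probe_size: int = 8192) -> bool:
    """Rough stand-in for checking `file -i` output for charset=binary:
    treat a file as binary if its first bytes contain NUL or do not
    decode as UTF-8. Heuristic only; it may misjudge a file whose
    probe window cuts a multi-byte character in half.
    """
    with open(path, "rb") as f:
        chunk = f.read(probe_size)
    if b"\x00" in chunk:
        return True
    try:
        chunk.decode("utf-8")
    except UnicodeDecodeError:
        return True
    return False
```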

  9. Configure the log format and time by referring to Table 3.
    Table 3 Log collection settings

    Log Format:
    • Single-line: Each log line is displayed as a single log event.
    • Multi-line: Multiple lines of an exception log can be displayed as a single log event. This is helpful when you check logs to locate problems.

    Log Time: System time is used by default; it is the log collection time and is displayed at the beginning of each log event.
    NOTE:
    • Log collection time is the time when ICAgent collects logs and sends them to LTS.
    • Log printing time is the time when logs are printed. ICAgent collects and sends logs to LTS at an interval of 1 second.
    • Restriction on log collection time: Logs are collected within 24 hours before or after the system time.

    Time wildcard: You can set a time wildcard so that ICAgent looks for the log printing time as the beginning of a log event.

    • If the time format in a log event is 2019-01-01 23:59:59.011, set the time wildcard to YYYY-MM-DD hh:mm:ss.SSS.
    • If the time format in a log event is 19-1-1 23:59:59.011, set the time wildcard to YY-M-D hh:mm:ss.SSS.
    NOTE:

    If a log event does not contain year information, ICAgent regards it as printed in the current year.

    Example:

    YY         - year (19)
    YYYY       - year (2019)
    M          - month (1)
    MM         - month (01)
    D          - day (1)
    DD         - day (01)
    hh         - hours (23)
    mm         - minutes (59)
    ss         - seconds (59)
    SSS        - milliseconds (999)
    hpm        - hours (03PM)
    h:mmpm     - hours:minutes (03:04PM)
    h:mm:sspm  - hours:minutes:seconds (03:04:05PM)
    hh:mm:ss ZZZZ - numeric time zone (16:05:06 +0100)
    hh:mm:ss ZZZ  - time zone abbreviation (16:05:06 CET)
    hh:mm:ss ZZ   - time zone with colon (16:05:06 +01:00)

    Log Segmentation: Specify this parameter if Log Format is set to Multi-line. By generation time uses a time wildcard to detect log boundaries, whereas By regular expression uses a regular expression.

    By regular expression: Set a regular expression that matches a specific pattern marking the beginning of a log event. Specify this parameter when you select Multi-line for Log Format and By regular expression for Log Segmentation.
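The time wildcard tokens map naturally onto strptime-style directives. The following sketch covers only the date/time tokens listed above (not the hpm or ZZ/ZZZ/ZZZZ forms) and is an assumed mapping for illustration, not ICAgent's parser:

```python
from datetime import datetime

# Longest tokens first so e.g. 'YYYY' is not consumed as two 'YY's.
_TOKENS = [
    ("YYYY", "%Y"), ("SSS", "%f"), ("YY", "%y"),
    ("MM", "%m"), ("DD", "%d"), ("hh", "%H"),
    ("mm", "%M"), ("ss", "%S"), ("M", "%m"), ("D", "%d"),
]

def wildcard_to_strptime(wildcard: str) -> str:
    """Translate an LTS time wildcard (e.g. 'YYYY-MM-DD hh:mm:ss.SSS')
    into a strptime format string. Illustrative mapping for the
    date/time tokens only."""
    out, i = "", 0
    while i < len(wildcard):
        for token, directive in _TOKENS:
            if wildcard.startswith(token, i):
                out += directive
                i += len(token)
                break
        else:
            out += wildcard[i]   # literal separator such as '-' or ':'
            i += 1
    return out

def parse_log_time(wildcard: str, text: str) -> datetime:
    """Parse a log timestamp using an LTS-style time wildcard."""
    return datetime.strptime(text, wildcard_to_strptime(wildcard))
```

For example, the wildcard YY-M-D hh:mm:ss.SSS parses 19-1-1 23:59:59.011 to the same instant as 2019-01-01 23:59:59.011 parsed with YYYY-MM-DD hh:mm:ss.SSS.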

  10. Click Next: Index Settings.
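Segmentation by regular expression, as described for Log Segmentation above, can be sketched as follows: lines matching a start-of-event pattern open a new log event, and non-matching lines (such as stack-trace continuations) are appended to the current one. The timestamp pattern in the usage example is hypothetical.

```python
import re

def split_multiline(lines, start_pattern):
    """Group raw lines into multi-line log events: a line matching
    start_pattern begins a new event; other lines are appended to the
    current event. Sketch of the documented behavior, not ICAgent."""
    start = re.compile(start_pattern)
    events, current = [], []
    for line in lines:
        if start.match(line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events
```

With a start pattern such as r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", the stack-trace lines of a Java exception stay attached to the timestamped line that precedes them instead of becoming separate log events.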

Step 4: Configure Indexing

  1. (Optional) Configure indexing. For details, see Setting Indexes.
  2. Click Submit.

Step 5: Complete the Ingestion Configuration

The created ingestion configuration will be displayed.
  • Click its name to view its details.
  • Click Modify in the Operation column to modify the ingestion configuration.
  • Click Configure Tag in the Operation column to add a tag.
  • Click More > Copy in the Operation column to copy the ingestion configuration.
  • Click More > Delete in the Operation column to delete the ingestion configuration.
    NOTE:

    Deleting an ingestion configuration may lead to log collection failures, potentially resulting in service exceptions related to user logs. In addition, the deleted ingestion configuration cannot be restored. Exercise caution when performing this operation.

  • To stop log collection for an ingestion configuration, toggle off the switch in the Status column to disable it. To restart log collection, toggle the switch back on.
    NOTE:

    Disabling an ingestion configuration may lead to log collection failures, potentially resulting in service exceptions related to user logs. Exercise caution when performing this operation.

  • Click More > ICAgent Collect Diagnosis in the Operation column of the ingestion configuration to monitor the exceptions, overall status, and collection status of ICAgent.

Setting Multiple Ingestion Configurations in a Batch

You can set multiple ingestion configurations for multiple scenarios in a batch, avoiding repetitive setups.

  1. On the Ingestion Management page, click Batch Ingest to go to the details page. For details, see Table 4.

    Table 4 Adding configurations in batches

    Basic Settings
    • Ingestion Type: Select ServiceStage - Cloud Host Logs.
    • Configurations to Add: Enter the number of ingestion configurations in the text box and click Add. A maximum of 100 ingestion configurations can be added, including the one that already exists under Ingestion Settings by default, so you can add up to 99 more.

    Ingestion Settings
    • Configuration List:
      1. The ingestion configurations are displayed on the left. You can add up to 99 more configurations.
      2. The details of each ingestion configuration are displayed on the right. Set them by referring to Step 3: Configure the Collection.
      3. After completing an ingestion configuration, you can click Apply to Other Configurations to copy its settings to other configurations.

  2. Click Check Parameters. After the check is successful, click Submit.

    The added ingestion configurations will be displayed on the Ingestion Management page after the batch creation is successful.

  3. (Optional) Perform the following operations on ingestion configurations:

    • Select multiple existing ingestion configurations and click Edit. On the displayed page, select an ingestion type to modify the corresponding ingestion configurations.
    • Select multiple existing ingestion configurations and click Enable or Disable. If you toggle off the switch in the Status column of an ingestion configuration, logs will not be collected for this configuration.
    • Select multiple existing ingestion configurations and click Delete.
