Updated on 2024-03-05 GMT+08:00

What Are the Recommended Scenarios for Using LTS?

Cloud Host Application Logs

Scenario description: The following suggestions are applicable when a user application system is deployed on cloud hosts and LTS is used to centrally collect and search for logs. Generally, a user application system consists of multiple components (microservices). Each component is deployed on at least two cloud hosts.

Suggestions:

  • Log collection: The log collector ICAgent is recommended. Install ICAgent on the cloud hosts and configure the log collection path by referring to Collecting Logs from ECS. ICAgent is completely decoupled from application systems and does not require code modification. You are not advised to use SDKs or APIs to collect logs: this mode is more complex, and poorly written reporting code may affect the stability of the application system.
  • Log group planning: Place the logs of an application system in a log group. The name of the log group can be the same as that of the application system.
  • Log stream planning:
    • If your logs are irregular, you can collect the logs of similar components (for example, Java, PHP, and Python components) into the same log stream. This reduces the number of log streams and simplifies management. If the number of components is small (for example, fewer than 20), you can collect the logs of each component into its own log stream.
    • For logs that support structured parsing, such as Nginx gateway logs, collect logs of the same format into the same log stream. A unified log format within a log stream lets you use SQL analysis to generate visualized charts.
  • Permission isolation: LTS log streams support enterprise project isolation. By setting enterprise projects for log streams, you can set different log stream access permissions for different IAM users.
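One way to keep a unified log format within a stream, as suggested above, is to have every component emit one JSON object per line. The following sketch shows a minimal JSON formatter for Python's standard logging module; the field names (ts, service, level, message) are an illustrative schema, not an LTS requirement.

```python
import json
import logging
import sys

class JsonLineFormatter(logging.Formatter):
    """Emit one JSON object per line so every record in the stream shares a schema."""
    def format(self, record):
        return json.dumps({
            "ts": int(record.created * 1000),   # epoch milliseconds
            "service": record.name,             # component name, e.g. "order-service"
            "level": record.levelname,
            "message": record.getMessage(),
        }, ensure_ascii=False)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonLineFormatter())
logger = logging.getLogger("order-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created, amount=%s", 42)
```

Because every line parses as JSON with the same keys, a single structuring rule covers all components that share the stream.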

Containerized Application Logs

Scenario description: The following suggestions are applicable when a user application system is deployed on Kubernetes clusters and LTS is used to centrally collect and search for logs. A user application system consists of multiple workloads, each with at least two instances.

Suggestions:

  • Log collection:
    • ICAgent is recommended. You can configure the log collection path by referring to Collecting Logs from CCE. ICAgent is completely decoupled from application systems and does not require code modification. You are not advised to use SDKs or APIs to collect logs: this mode is more complex, and poorly written reporting code may affect the stability of the application system.
    • Containerized application logs can be collected from container standard output, container files, node files, and Kubernetes events. Container files are recommended. Unlike container standard output, container files can be persistently mounted to hosts, and users control what is written to them. Unlike node files, container files carry metadata such as the namespace, workload, and pod, which facilitates log search.
  • Log group planning: Place all logs of a CCE cluster in one log group. The log group alias (modifiable) can be the same as the CCE cluster name. The recommended original log group name (non-modifiable) is k8s-log-{cluster ID}.
  • Log stream planning:
    • If your logs are irregular, you can collect the logs of similar components (for example, Java, PHP, and Python components) into the same log stream. This reduces the number of log streams and simplifies management. If the number of components is small (for example, fewer than 20), you can collect the logs of each component into its own log stream.
    • For logs that support structured parsing, such as Nginx gateway logs, collect logs of the same format into the same log stream. A unified log format within a log stream lets you use SQL analysis to generate visualized charts.
  • Permission isolation: LTS log streams support enterprise project isolation. By setting enterprise projects for log streams, you can set different log stream access permissions for different IAM users.
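Since container files are the recommended collection mode, the application should write to a file path that the ICAgent collection rule points at. The sketch below uses Python's standard RotatingFileHandler; the directory and file name are assumptions for illustration (in a real CCE setup you would use the container path configured in your collection rule, e.g. a pattern like /var/log/app/*.log).

```python
import logging
import os
from logging.handlers import RotatingFileHandler

# Assumed log directory; overridable so the sketch runs anywhere.
LOG_DIR = os.environ.get("APP_LOG_DIR", "/tmp/app-logs")
os.makedirs(LOG_DIR, exist_ok=True)
log_path = os.path.join(LOG_DIR, "app.log")

# Rotate files so the container's writable layer does not fill up;
# the collector reads the active file from the configured path.
handler = RotatingFileHandler(log_path, maxBytes=10 * 1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger = logging.getLogger("payment-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment accepted, order_id=%s", "A1001")
```

Rotation keeps disk usage bounded on the node while the collector forwards new lines as they are written.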

Cloud Service Log Analysis

  • Ingesting cloud service logs to LTS: LTS can collect logs from cloud services. Enable the log function on the console of the corresponding cloud service to collect its logs to a specified log group and log stream.
  • Structured parsing: Many cloud service logs support structured parsing. You can configure structured parsing rules for them on the Log Structuring page. For details, see Log Structuring. After parsing, you can use SQL statements to analyze the logs in a visualized manner.
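To make the effect of structured parsing concrete, the sketch below parses a simplified Nginx-style access log line into named fields with a regular expression. The field names mirror common Nginx variables (remote_addr, status, and so on); they are illustrative, not a fixed LTS schema.

```python
import re

# A simplified Nginx access log line (sample data for illustration).
LINE = '192.168.1.10 - - [05/Mar/2024:10:15:32 +0800] "GET /api/orders HTTP/1.1" 502 169'

PATTERN = re.compile(
    r'(?P<remote_addr>\S+) \S+ \S+ \[(?P<time_local>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

# After parsing, each named group becomes a queryable field, so an analysis
# such as "count requests grouped by status" is a plain aggregation.
fields = PATTERN.match(LINE).groupdict()
```

This is the kind of field extraction a structuring rule performs once, server-side, so that SQL queries can then reference the fields directly.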

Application Monitoring Alarms

Scenario description: The following suggestions are applicable when logs are used to monitor application systems in real time and detect system faults in advance.

Suggestions:

  • Alarm statistics mode: LTS supports keyword alarms and SQL alarms. For irregular logs, such as the run logs of Java programs, keyword alarms are applicable. For regular logs, such as Nginx gateway logs, SQL alarms are applicable: you can use SQL statements to analyze structured logs and obtain the metrics needed to configure alarms.
  • Alarm rule configuration: Generally, alarms need to be triggered as soon as possible. The recommended alarm rule statistics period is 1 minute. You can use the default message templates provided by LTS to send alarms. If you have personalized requirements, you can modify the default templates and save them as message templates for sending alarms.
  • Configuring log alarms for key cloud services, such as ELB and APIG: ELB is often used as the entry of application systems. To detect system faults in a timely manner, enable ELB logs, collect them to LTS, and configure ELB 5XX status code alarms. In addition, you can use the out-of-the-box ELB dashboard to observe the overall success rate of an application system.
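The metric behind a 5XX status code alarm is essentially a per-window aggregation over structured logs. The following sketch computes a per-minute 5XX error rate in plain Python, mimicking what a SQL alarm with a 1-minute statistics period would evaluate; the records, threshold, and field layout are toy assumptions.

```python
from collections import defaultdict

# Toy records standing in for structured gateway logs: (epoch_seconds, status).
records = [
    (60, 200), (65, 200), (70, 502), (75, 200),
    (121, 503), (125, 500), (130, 200), (135, 504),
]

def error_rate_per_minute(rows):
    """Fraction of requests per 1-minute window whose status is >= 500."""
    totals, errors = defaultdict(int), defaultdict(int)
    for ts, status in rows:
        minute = ts // 60          # bucket by 1-minute window
        totals[minute] += 1
        if status >= 500:
            errors[minute] += 1
    return {m: errors[m] / totals[m] for m in totals}

rates = error_rate_per_minute(records)
ALARM_THRESHOLD = 0.5  # hypothetical: alarm when over half the requests fail
alarming = [m for m, r in rates.items() if r >= ALARM_THRESHOLD]
```

A SQL alarm expresses the same aggregation declaratively (count of status >= 500 over total, grouped by the statistics period) and fires when the result crosses the configured threshold.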

Service Operation Analysis

Scenario description: The following suggestions are applicable when you print service logs, such as the transaction amount, customer, and product information, in an application system and then output visualized charts and dashboards using the SQL analysis function of LTS.

Suggestions:

  • Log collection mode: You are advised to use ICAgent to collect service logs and to write them to dedicated log files. Do not mix them with the run logs of applications. You are not advised to use SDKs or APIs to report logs.
  • Log structuring parsing: You are advised to separate service log fields with spaces or to use the JSON format, so that structured parsing rules can be configured quickly.
  • Log visualization: You can create a custom dashboard and use SQL-like syntax to analyze service logs that have been structured. You can add multiple charts or filters to a custom dashboard to achieve a BI-like display effect. Using LTS for service analysis eliminates the procurement costs of data warehouses and BI systems and is easier to get started with.
  • Log processing: In certain cases, service logs to be analyzed are mixed with run logs, sensitive data in service logs needs to be deleted, or logs lack multi-dimensional data. To address this, you can use the DSL processing function (in open beta since September 30, 2023) to normalize, enrich, transfer, anonymize, and filter logs.
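As a concrete picture of the anonymization step mentioned above, the sketch below masks a phone number in a JSON service log record before it is analyzed, keeping only the last four digits. The record layout and field names (customer_phone, amount) are hypothetical; a real DSL processing rule would express the same transformation declaratively.

```python
import json
import re

# Hypothetical service log record; field names are illustrative.
raw = {"ts": 1709604932000, "product": "widget", "amount": 19.9,
       "customer_phone": "13812345678"}

def mask_phone(value):
    """Mask every digit that still has at least 4 digits after it."""
    return re.sub(r'\d(?=\d{4})', '*', value)

def anonymize(record):
    """Return a copy of the record with sensitive fields masked."""
    out = dict(record)
    if "customer_phone" in out:
        out["customer_phone"] = mask_phone(out["customer_phone"])
    return out

# The masked line is what would be stored and analyzed downstream.
safe_line = json.dumps(anonymize(raw), ensure_ascii=False)
```

Masking before storage means SQL analysis and dashboards never see the raw sensitive value, while non-sensitive fields such as the transaction amount remain fully queryable.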

DJCP (MLPS) Compliance

Scenario description: According to the Cybersecurity Law of the People's Republic of China, listed companies and financial enterprises need to store key system logs for at least 180 days. LTS can centrally collect and store such logs.

Suggestions:

  • Log collection:
    • For cloud host and container logs, you are advised to use ICAgent to collect them by following the log ingestion wizard for ECS or CCE.
    • For logs of cloud services such as ELB and Virtual Private Cloud (VPC), enable the function of collecting logs to LTS on the cloud service page.
  • Log storage:
    • By default, LTS stores logs for up to 365 days. You can change the storage duration. To store logs for a longer period (up to three years), submit a service ticket.
    • Lower storage costs: Transferring logs to OBS costs less, but the contents of transferred historical logs cannot be searched in LTS.
