Updated on 2024-06-25 GMT+08:00

What Are the Recommended Scenarios for Using LTS?

Cloud Host Application Logs

Scenario description: The following suggestions are applicable when a user application system is deployed on cloud hosts and LTS is used to collect and search for logs. Generally, a user application system consists of multiple components (microservices). Each component is deployed on at least two cloud hosts.

Suggestions:

  • Log collection: The log collector ICAgent is recommended. Install ICAgent on the cloud hosts and configure the log collection path by referring to Collecting Logs from ECS. ICAgent is completely decoupled from application systems and requires no code changes. Collecting logs through SDKs or APIs is not advised: this mode is more complex, and improper coding may affect the stability of the application system.
  • Log group planning: Place all logs of an application system in one log group. The log group can be named after the application system.
  • Log stream planning:
    • If your logs are irregular, you can collect logs of similar components (for example, Java, PHP, and Python components) into the same log stream. This reduces the number of log streams and simplifies management. If you have only a few components (for example, fewer than 20), you can collect each component's logs into a separate log stream.
    • For logs that support structuring parsing, such as Nginx gateway logs, collect logs of the same format into the same log stream. A unified log format within a log stream lets you use SQL analysis to generate visualized charts (see the naming sketch after this list).
  • Permission isolation: LTS log streams support enterprise project isolation. By setting enterprise projects for log streams, you can grant different IAM users different access permissions to log streams.
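
The naming plan above can be sketched in a few lines. The following minimal Python sketch assumes a hypothetical application named "mall" with Java, PHP, and Nginx components; the group and stream names are illustrative conventions, not values required by LTS.

  # Illustrative log group/stream plan for a cloud host application.
  # The application name and component names are hypothetical examples.
  APP_NAME = "mall"  # log group named after the application system

  # Irregular run logs from similar components share a stream to keep the
  # stream count low; structured Nginx access logs get their own stream so
  # that one SQL-friendly format applies to the whole stream.
  LOG_STREAM_PLAN = {
      f"{APP_NAME}-java-run": ["order-service", "payment-service"],  # Java components
      f"{APP_NAME}-php-run": ["cms-frontend"],                       # PHP components
      f"{APP_NAME}-nginx-access": ["nginx-gateway"],                 # structured access logs
  }

  if __name__ == "__main__":
      print(f"Log group: {APP_NAME}")
      for stream, components in LOG_STREAM_PLAN.items():
          print(f"  stream {stream}: collects from {', '.join(components)}")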

Containerized Application Logs

Scenario description: The following suggestions are applicable when a user application system is deployed on Kubernetes clusters and LTS is used to centrally collect and search for logs. A user application system consists of multiple workloads, each with at least two instances.

Suggestions:

  • Log collection:
    • ICAgent is recommended. You can configure the log collection path by referring to Collecting Logs from CCE. ICAgent is completely decoupled from application systems and requires no code changes. Collecting logs through SDKs or APIs is not advised: this mode is more complex, and improper coding may affect the stability of the application system.
    • Containerized application logs can be collected as container standard output, container files, node files, or Kubernetes events. Container files are recommended. Compared with container standard output, container files can be persistently mounted to hosts and you control what is written to them. Compared with node files, container file collection also gathers metadata such as the namespace, workload, and pod, which makes log search easier.
  • Log group planning: Place all logs of a CCE cluster in one log group. The log group alias (modifiable) can be the same as the CCE cluster name; the recommended original log group name (non-modifiable) is k8s-log-{cluster ID} (see the naming sketch after this list).
  • Log stream planning:
    • If your logs are irregular, you can collect logs of similar components (for example, Java, PHP, and Python components) into the same log stream. This reduces the number of log streams and simplifies management. If you have only a few components (for example, fewer than 20), you can collect each component's logs into a separate log stream.
    • For logs that support structuring parsing, such as Nginx gateway logs, collect logs of the same format into the same log stream. A unified log format within a log stream lets you use SQL analysis to generate visualized charts.
  • Permission isolation: LTS log streams support enterprise project isolation. By setting enterprise projects for log streams, you can grant different IAM users different access permissions to log streams.
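
As in the previous scenario, the layout can be sketched briefly. The sketch below assumes a hypothetical CCE cluster ID, alias, and workload names; it only visualizes the recommended k8s-log-{cluster ID} naming and the Kubernetes metadata that container file collection attaches.

  # Illustrative log group/stream layout for a CCE cluster.
  # The cluster ID, alias, workload names, and metadata keys are
  # hypothetical examples used to visualize the naming convention.
  CLUSTER_ID = "0123abcd-example-cluster-id"   # placeholder cluster ID
  CLUSTER_ALIAS = "prod-cce"                   # log group alias (modifiable)

  log_group_name = f"k8s-log-{CLUSTER_ID}"     # recommended original name (non-modifiable)

  # Container file collection also attaches Kubernetes metadata,
  # which is what makes search by namespace/workload/pod possible.
  ATTACHED_METADATA = ["namespace", "workload", "pod"]

  WORKLOAD_STREAMS = {
      "java-backend-run": ["order-deploy", "payment-deploy"],
      "nginx-ingress-access": ["nginx-ingress"],
  }

  if __name__ == "__main__":
      print(f"Log group: {log_group_name} (alias: {CLUSTER_ALIAS})")
      print(f"Metadata attached to each log line: {ATTACHED_METADATA}")
      for stream, workloads in WORKLOAD_STREAMS.items():
          print(f"  stream {stream}: collects from {', '.join(workloads)}")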

Cloud Service Log Analysis

  • Ingesting cloud service logs to LTS: LTS can collect logs from cloud services. Enable the log function on the corresponding cloud service console and specify the log group and log stream to collect the logs into.
  • Recommended practice: Many cloud service logs support structuring parsing. You can configure structuring parsing rules for them on the log structuring page. For details, see Log Structuring. After structuring parsing, you can use SQL statements to analyze the logs and visualize the results (see the sketch after this list).
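
To illustrate what structuring parsing provides, the minimal sketch below parses a few sample access-log lines into named fields and aggregates status codes locally. The log format, field names, and sample data are assumptions; in LTS, the equivalent aggregation would be done with SQL over the structured fields.

  import re
  from collections import Counter

  # A minimal sketch of what structuring parsing achieves: raw access-log
  # lines become named fields that grouped queries can work with.
  # The log format, field names, and sample lines are hypothetical.
  SAMPLE_LINES = [
      "192.0.2.10 GET /api/orders 200 12ms",
      "192.0.2.11 GET /api/orders 502 3ms",
      "192.0.2.12 POST /api/pay 200 45ms",
  ]

  PATTERN = re.compile(
      r"(?P<client_ip>\S+) (?P<method>\S+) (?P<path>\S+) "
      r"(?P<status>\d{3}) (?P<latency>\d+)ms"
  )

  status_counts = Counter()
  for line in SAMPLE_LINES:
      match = PATTERN.match(line)
      if match:
          status_counts[match.group("status")] += 1

  # Roughly what a grouped SQL query over the structured "status" field
  # would return once the same parsing rules are configured in LTS.
  print(dict(status_counts))  # {'200': 2, '502': 1}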

Application Monitoring Alarms

Scenario description: The following suggestions are applicable when logs are used to monitor application systems in real time and detect system faults early.

Suggestions:

  • Alarm statistics mode: LTS supports keyword alarms and SQL alarms. Keyword alarms are suitable for irregular logs, such as the run logs of Java programs. SQL alarms are suitable for regular logs, such as Nginx gateway logs: you can use SQL statements to analyze the structured logs and obtain the metrics needed to configure alarms.
  • Alarm rule configuration: Alarms generally need to be triggered as soon as possible, so a statistics period of 1 minute is recommended for alarm rules. You can use the default LTS message templates to send alarms, or, if you have personalized requirements, modify the default templates and save them as custom message templates.
  • Configuring log alarms for key cloud services, such as ELB and APIG: ELB is often the entry point of an application system. To detect system faults in a timely manner, enable ELB logs, collect them to LTS, and configure alarms on ELB 5XX status codes (see the sketch after this list). In addition, you can use the out-of-the-box ELB dashboard to observe the overall success rate of an application system.
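
The metric behind an ELB 5XX alarm can be pictured with a short sketch. The following minimal Python example, assuming a hypothetical record layout and a 5% threshold, computes the share of 5XX responses within one 1-minute statistics period; it illustrates the condition an SQL alarm would evaluate, not the LTS alarm API itself.

  # Minimal sketch of the metric a 5XX alarm watches over a 1-minute
  # statistics period. Record layout and threshold are assumptions.
  RECENT_MINUTE_RECORDS = [
      {"status": 200}, {"status": 200}, {"status": 503},
      {"status": 200}, {"status": 500},
  ]

  ALARM_THRESHOLD = 0.05  # alarm if more than 5% of requests return 5XX

  total = len(RECENT_MINUTE_RECORDS)
  errors = sum(1 for r in RECENT_MINUTE_RECORDS if 500 <= r["status"] <= 599)
  error_rate = errors / total if total else 0.0

  if error_rate > ALARM_THRESHOLD:
      print(f"ALARM: 5XX rate {error_rate:.0%} exceeds {ALARM_THRESHOLD:.0%}")
  else:
      print(f"OK: 5XX rate {error_rate:.0%}")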

Service Operation Analysis

Scenario description: The following suggestions are applicable when your application system prints service logs (such as transaction amount, customer, and product information) and you use the SQL analysis function of LTS to output visualized charts and dashboards.

Suggestions:

  • Log collection mode: You are advised to use ICAgent to collect logs and to write service logs to dedicated log files rather than mixing them with application run logs. You are not advised to use SDKs or APIs to report logs.
  • Log structuring parsing: You are advised to separate service log fields with spaces or print the logs in JSON format so that structuring parsing rules can be configured quickly (see the sketch after this list).
  • Log visualization: You can create a custom dashboard and use SQL-like syntax to analyze service logs that have been structured. Add multiple charts or filters to a custom dashboard and use LTS to analyze your services. This reduces the need to purchase data warehouses and simplifies usage.
  • Log processing: In certain cases, service logs to be analyzed are mixed with run logs, sensitive data in service logs needs to be deleted, or logs lack multi-dimensional data. To address this, you can use the Domain Specific Language (DSL) processing function (closed beta test started on September 30, 2023) to normalize, enrich, transfer, anonymize, and filter logs.
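
The JSON option in the structuring suggestion above can be sketched with the standard library alone. The example below, assuming a hypothetical file name and field set, writes one JSON record per business event to a dedicated service-log file that ICAgent could then collect.

  import json
  import logging

  # Minimal sketch: print service logs as JSON, one record per line, to a
  # dedicated file kept separate from application run logs. The file name
  # and field names are hypothetical examples.
  service_logger = logging.getLogger("service")
  service_logger.setLevel(logging.INFO)
  handler = logging.FileHandler("service_ops.log")        # file ICAgent would collect
  handler.setFormatter(logging.Formatter("%(message)s"))  # write the raw JSON line only
  service_logger.addHandler(handler)

  def log_transaction(customer_id: str, product: str, amount: float) -> None:
      """Write one JSON service-log record per business transaction."""
      record = {
          "event": "transaction",
          "customer_id": customer_id,
          "product": product,
          "amount": amount,
      }
      service_logger.info(json.dumps(record, ensure_ascii=False))

  log_transaction("c-1001", "annual-plan", 199.0)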
