Updated: 2025-09-08 GMT+08:00

LTS Dashboard Templates

The LTS dashboard templates mainly cover the Log-to-Metric Task Monitoring Center, the OBS Import Task Monitoring Dashboard, the Resource Usage Details Dashboard, the Top Usage Dashboard, and the DSL Processing Task Monitoring Center.

  • Log-to-Metric Task Monitoring Center: On the LTS console, you only need to create metric rules that match your business needs to generate your own statistical reports. You can set a single log filter condition, or multiple conditions by adding association relationships and groups, so that only matching logs are retained. LTS dynamically aggregates the structured logs within the specified time range and reports the results to a Prometheus instance in AOM. The operation is simple and the capability is powerful.
  • OBS Import Task Monitoring Dashboard: Shows the details of tasks that import data from OBS to LTS, including key metrics such as the task ID, task name, number of successful lines, number of failed lines, and processing rate. It helps you monitor the import status in real time, quickly locate anomalies, and ensure that data is imported into LTS efficiently and completely.
  • Resource Usage Details Dashboard: Requires the "Observability for LTS" function to be enabled; for details, see Setting LTS Log Collection Quotas and Usage Alerts. You can filter by time range, log group ID, or log stream ID to view per-stream resource usage details (such as raw log traffic, index traffic, and storage), helping you quickly identify resource consumption, detect abnormal traffic in time, optimize resource allocation, and improve resource utilization.
  • Top Usage Dashboard: Requires the "Observability for LTS" function to be enabled; for details, see Setting LTS Log Collection Quotas and Usage Alerts. You can filter by time range, log group ID, or log stream ID to view the resource usage (such as raw log traffic, index traffic, and storage) of the top N log groups or log streams, helping you quickly identify heavy consumers, detect abnormal traffic in time, refine log collection policies and resource allocation, and reduce unnecessary cost.
  • DSL Processing Task Monitoring Center: DSL processing is a one-stop log processing platform provided by LTS. Based on a domain-specific scripting language and more than 200 built-in functions, you can complete end-to-end log normalization, enrichment, forwarding, masking, and filtering tasks on the LTS console. The monitoring center mainly displays information such as the processing task ID, task name, input lines, and output lines.

The pipe-operator search mode must be enabled before you can use the OBS Import Task Monitoring Dashboard, the Resource Usage Details Dashboard, or the Top Usage Dashboard. If needed, submit a service ticket to request enablement.

Log-to-Metric Task Monitoring Center

  1. Log in to the LTS console and go to the "Log Management" page.
  2. In the navigation pane on the left, choose "Dashboards".
  3. Under the dashboard templates, choose "METRIC Dashboard Templates > Log-to-Metric Task Monitoring Center" to view the chart details.

    • Rule ID filter; the associated query and analysis statement is as follows:
      * | select distinct(task_set)
    • The query and analysis statement associated with the Input Lines chart is as follows:
      * | SELECT CASE   WHEN  "input" < 1000 THEN concat( cast( "input" AS VARCHAR ), '行' )  WHEN "input" < 1000 * 1000 THEN concat( cast( round( "input"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "input" < 1000000000.0 THEN concat( cast( round( "input"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "input"/ 1000.0 < 1000000000.0 THEN concat( cast( round( "input"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "input"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("input") as "input")
    • The query and analysis statement associated with the Output Lines chart is as follows:
      * | SELECT CASE   WHEN  "output" < 1000 THEN concat( cast( "output" AS VARCHAR ), '行' )  WHEN "output" < 1000 * 1000 THEN concat( cast( round( "output"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "output" < 1000000000 THEN concat( cast( round( "output"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "output"/ 1000.0 < 1000000000 THEN concat( cast( round( "output"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "output"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("output") as "output")
    • The query and analysis statement associated with the Lines Matching Filter Conditions chart is as follows:
      * | SELECT CASE   WHEN  "filters" < 1000 THEN concat( cast( "filters" AS VARCHAR ), '行' )  WHEN "filters" < 1000 * 1000 THEN concat( cast( round( "filters"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "filters" < 1000000000 THEN concat( cast( round( "filters"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "filters"/ 1000.0 < 1000000000 THEN concat( cast( round( "filters"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "filters"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("filters") as "filters")
    • The query and analysis statement associated with the Lines Not Matching Filter Conditions chart is as follows:
      * | SELECT CASE   WHEN  "filter_drops" < 1000 THEN concat( cast( "filter_drops" AS VARCHAR ), '行' )  WHEN "filter_drops" < 1000 * 1000 THEN concat( cast( round( "filter_drops"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "filter_drops" < 1000000000 THEN concat( cast( round( "filter_drops"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "filter_drops"/ 1000.0 < 1000000000 THEN concat( cast( round( "filter_drops"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "filter_drops"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("filter_drops") as "filter_drops")
    • The query and analysis statement associated with the Sampled Lines chart is as follows:
      * | SELECT CASE   WHEN  "samples" < 1000 THEN concat( cast( "samples" AS VARCHAR ), '行' )  WHEN "samples" < 1000 * 1000 THEN concat( cast( round( "samples"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "samples" < 1000000000 THEN concat( cast( round( "samples"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "samples"/ 1000.0 < 1000000000 THEN concat( cast( round( "samples"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "samples"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("samples") as "samples")
    • The query and analysis statement associated with the Lines Dropped by Sampling chart is as follows:
      * | SELECT CASE   WHEN  "sample_drops" < 1000 THEN concat( cast( "sample_drops" AS VARCHAR ), '行' )  WHEN "sample_drops" < 1000 * 1000 THEN concat( cast( round( "sample_drops"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "sample_drops" < 1000000000 THEN concat( cast( round( "sample_drops"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "sample_drops"/ 1000.0 < 1000000000 THEN concat( cast( round( "sample_drops"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "sample_drops"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum( "sample_drops" ) as "sample_drops")
    • The query and analysis statement associated with the Lines Exceeding the Log Time Range chart is as follows:
      * | SELECT CASE   WHEN  "out_of_bounds" < 1000 THEN concat( cast( "out_of_bounds" AS VARCHAR ), '行' )  WHEN "out_of_bounds" < 1000 * 1000 THEN concat( cast( round( "out_of_bounds"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "out_of_bounds" < 1000000000 THEN concat( cast( round( "out_of_bounds"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "out_of_bounds"/ 1000.0 < 1000000000 THEN concat( cast( round( "out_of_bounds"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "out_of_bounds"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("out_of_bounds") as "out_of_bounds")
    • The query and analysis statement associated with the Execution Records chart is as follows:
      * | select from_unixtime( "__time") as "统计时间", sum("input") as "输入行数",sum("output") as "输出行数",sum("filters") as "满足过滤条件行数",sum("filter_drops") as "不满足过滤条件行数",sum("samples") as "采样行数",sum("sample_drops") as "采样丢弃行数",sum("out_of_bounds") as "超过日志时间范围行数" group by "统计时间" order by "统计时间" desc limit 1000
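The CASE expressions above all implement the same tiering logic: a raw line count is bucketed into 行 (lines), 千行 (thousand lines), 百万行 (million lines), 十亿行 (billion lines), or 万亿行 (trillion lines). As a reading aid only (this is a sketch, not product code), the logic can be expressed in Python:

```python
def format_rows(n):
    """Mirror the dashboard CASE expression: tier a raw row count
    into 行 / 千行 / 百万行 / 十亿行 / 万亿行 buckets."""
    if n < 1_000:
        return f"{int(n)}行"
    if n < 1_000_000:
        return f"{round(n / 1_000.0, 1)}千行"
    if n < 1_000_000_000:
        return f"{round(n / 1_000_000.0, 1)}百万行"
    if n / 1_000.0 < 1_000_000_000:
        return f"{round(n / 1_000.0 / 1_000_000.0, 1)}十亿行"
    return f"{round(n / 1_000.0 / 1_000 / 1_000 / 1_000, 1)}万亿行"
```

Note that the queries divide by 1000.0 (a float) in the fractional branches; this keeps the one-decimal precision that `round(..., 1)` is meant to produce.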

OBS Import Task Monitoring Dashboard

  1. Under the dashboard templates, choose "LTS Dashboard Templates".
  2. On the dashboard list page, click the "OBS Import Task Monitoring Dashboard" name to view the chart details.

    • Task ID filter; the associated query and analysis statement is as follows:
      * | select task_set_id group by task_set_id
    • Task name filter; the associated query and analysis statement is as follows:
      * | select task_set_name group by task_set_name
    • The query and analysis statement associated with the Successful Lines chart is as follows:
      * | select sum(max_read_line) as "成功条数" from (select max_by(read_lines,__time) as max_read_line group by task_set_id,period_times limit 10000)
    • The query and analysis statement associated with the Failed Lines chart is as follows:
      * | select sum(max_failure_lines) as "失败条数" from (select max_by(failure_lines,__time) as max_failure_lines group by task_set_id,period_times limit 10000)
    • The query and analysis statement associated with the Dropped Lines chart is as follows:
      * | select sum(max_ignore_lines) as "丢弃条数" from (select max_by(ignore_lines,__time) as max_ignore_lines group by task_set_id,period_times limit 10000)
    • The query and analysis statement associated with the Read Traffic chart is as follows:
      * | select sum(max_in_flow)/1024.0/1024 as "读流量" from (select max_by(in_flow,__time) as max_in_flow group by task_set_id,period_times limit 10000)
    • The query and analysis statement associated with the Write Traffic chart is as follows:
      * | select sum(max_out_flow)/1024.0/1024 as "写流量" from (select max_by(out_flow,__time) as max_out_flow group by task_set_id,period_times limit 10000)
    • The query and analysis statement associated with the Import Progress chart is as follows:
      * | select max_by(case when total_files = 0 or total_files is null then 'NA' else cast(complete_files*1.0/total_files*100 as VARCHAR) end ,__time ) as progress
    • The query and analysis statement associated with the Processing Rate chart is as follows:
      * | SELECT t AS "统计时间", ROUND((  CASE WHEN diff [ 1 ] IS NOT NULL    AND diff [ 2 ] IS NOT NULL AND diff [ 1 ] - diff [ 2 ] > 0 THEN     diff [ 1 ] - diff [ 2 ]     WHEN diff [ 1 ] IS NOT NULL     AND diff [ 2 ] IS NULL THEN diff [ 1 ] ELSE 0     END     ) / 300,0) AS "成功条数/s"    FROM     (     SELECT t, ts_compare ( max_read_lines, 300 ) AS diff     FROM ( SELECT t, sum( max_read_lines ) AS max_read_lines FROM ( SELECT from_unixtime( __time - __time % 300000 ) AS t, max_by ( read_lines, __time ) AS max_read_lines, task_set_id, period_times GROUP BY t, task_set_id, period_times LIMIT 10000 ) GROUP BY t ORDER BY t )     GROUP BY t     ORDER BY t     LIMIT 10000  ) LIMIT 10000
    • The query and analysis statement associated with the Abnormal Tasks chart is as follows:
      * | select  task_set_name as "任务名称",task_set_id as "任务ID" , max_by(task_set_state,__time ) as "任务状态" , from_unixtime(max(__time)) as "采集时间" , max_by(failure_lines,__time) as "失败条数"  group by task_set_id,task_set_name HAVING "任务状态" = 'UNHEALTHY' OR "任务状态" = 'FAILURE' limit 10000
    • The query and analysis statement associated with the Running Status chart is as follows:
      * | select task_set_name as "任务名称",task_set_id as "任务ID" , max_by(task_set_state,__time ) as "任务状态" , max_by(read_lines,__time)  as "成功条数" ,max_by(failure_lines,__time) as "失败条数" , max_by(in_flow,__time)/1024/1024 as "读流量 MB" , max_by(out_flow,__time)/1024/1024 as "写流量 MB",period_times as "周期" group by task_set_id,task_set_name,period_times order by max(__time),task_set_id,period_times limit 10000
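The Processing Rate query above buckets the cumulative read_lines counter into 5-minute windows (`__time - __time % 300000`), keeps the latest value per window with max_by, diffs consecutive windows with ts_compare, and divides by 300 seconds. A minimal Python sketch of that computation (illustrative only; the function name is ours, the field semantics follow the query):

```python
def rate_per_second(samples, window_ms=300_000):
    """Sketch of the Processing Rate chart: bucket a cumulative
    line counter into 5-minute windows, diff consecutive windows
    (like ts_compare(..., 300)), and divide by the window length."""
    buckets = {}
    for ts_ms, value in samples:          # (epoch ms, cumulative read_lines)
        t = ts_ms - ts_ms % window_ms     # same bucketing as __time - __time % 300000
        buckets[t] = max(buckets.get(t, 0), value)  # latest cumulative value wins
    rates, prev = [], None
    for t in sorted(buckets):
        cur = buckets[t]
        if prev is None:
            diff = cur                    # no earlier window: use the value itself
        else:
            diff = cur - prev if cur - prev > 0 else 0
        rates.append((t, round(diff / (window_ms / 1000))))
        prev = cur
    return rates
```

Diffing a cumulative counter rather than summing raw events is what makes the chart robust to tasks that report totals repeatedly within a period.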

Resource Usage Details Dashboard

  1. Under the dashboard templates, choose "LTS Dashboard Templates".
  2. On the dashboard list page, click the "Resource Usage Details" dashboard name to view the chart details.

    If some charts on the dashboard do not display properly, click "Auto Configure" on the index configuration page to update the index fields; for details, see Creating an LTS Log Index. After the index is updated, return to the dashboard page and verify that the charts display correctly.

    • Log group ID filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_group_id from log group by log_group_id limit 100000
    • Log group name filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_group_name_alias from log group by log_group_name_alias limit 100000
    • Log stream ID filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_stream_id from log group by log_stream_id limit 100000
    • Log stream name filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_stream_name_alias from log group by log_stream_name_alias limit 100000
    • The query and analysis statement associated with the Raw Log Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage_traffic" | select sum(counter)/1024/1024 as "原始日志流量/MB"
    • The query and analysis statement associated with the Read Traffic (Compressed)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"read" | select sum(counter)/1024/1024/5 as "读流量/MB"
    • The query and analysis statement associated with the Write Traffic (Compressed)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"write_traffic" | select sum(counter)/1024/1024 as "写流量/MB"
    • The query and analysis statement associated with the Index Traffic (Standard)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"index_traffic" | select sum(counter)/1024/1024 as "索引流量/MB"
    • The query and analysis statement associated with the Index Traffic (Search)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"search_index_traffic" | select sum(counter)/1024/1024 as "索引流量/MB"
    • The query and analysis statement associated with the Log DSL Processing Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"dsl_traffic" && counter:* | select sum(counter)/1024/1024 as "日志DSL加工流量/MB"
    • The query and analysis statement associated with the Log-to-Metric Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"log2Metric" | select sum(counter)/1024/1024 as "日志转指标流量/MB"
    • The query and analysis statement associated with the Standard Storage (Latest)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage" && counter:* | select sum(max_storage) as "标准存储量(最新值)/MB" from (select max_by(counter,__time)/1024/1024 as max_storage group by log_stream_id limit 10000 )
    • The query and analysis statement associated with the Cold Storage (Latest)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"cold_storage" && counter:* | select sum(max_cold_storage) as "冷存储量(最新值)/MB" from (select max_by(counter,__time)/1024/1024 as max_cold_storage group by log_stream_id limit 10000 )
    • The query and analysis statement associated with the Basic Transfer Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"obs_transfer" or metric_name:"dms_transfer" or metric_name:"dis_transfer") | select sum(counter)/1024.0/1024 as "基础转储流量/MB"
    • The query and analysis statement associated with the Advanced Transfer Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"dws_transfer" or metric_name:"dli_transfer" or metric_name:"obs_orc_transfer") | select sum(counter)/1024/1024 as "高级转储流量/MB"
    • The query and analysis statement associated with the Raw Log Traffic chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage_traffic" | select from_unixtime(__time - __time%60000) as "时间",sum(counter/1024/1024) as "原始日志流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Read Traffic (Compressed) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"read" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "读流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Write Traffic (Compressed) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"write_traffic" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "写流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Index Traffic (Standard) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"index_traffic" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "索引流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Index Traffic (Search) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"search_index_traffic" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "索引流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Log DSL Processing Traffic chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"dsl_traffic" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "日志DSL加工流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Log-to-Metric Traffic chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"log2Metric" | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "日志转指标流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Standard Storage (Latest) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage" | select from_unixtime(__time - __time%60000) as "时间",max_by(counter,__time)/1024/1024 as "标准存储量(最新值)/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Cold Storage (Latest) chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"cold_storage" | select from_unixtime(__time - __time%60000) as "时间",max_by(counter,__time)/1024/1024 as "冷存储量(最新值)/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Basic Transfer Traffic chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"obs_transfer" or metric_name:"dms_transfer" or metric_name:"dis_transfer") | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "基础转储流量/MB" group by "时间" order by "时间" limit 10000
    • The query and analysis statement associated with the Advanced Transfer Traffic chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"dws_transfer" or metric_name:"dli_transfer" or metric_name:"obs_orc_transfer") | select from_unixtime(__time - __time%60000) as "时间",sum(counter)/1024/1024 as "高级转储流量/MB" group by "时间" order by "时间" limit 10000
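All the traffic charts above share two conventions: counter values are bytes and are converted to MB by dividing by 1024 twice, and the time-series variants bucket samples to the minute via `__time - __time % 60000` before summing. A small illustrative Python sketch of that pattern (the function name is ours, not product code):

```python
def minute_traffic_mb(samples):
    """Sketch of the per-minute traffic charts: group counter values
    (bytes) into 1-minute buckets, sum each bucket, and convert the
    totals to MB by dividing by 1024*1024."""
    buckets = {}
    for ts_ms, counter in samples:     # (epoch ms, bytes)
        t = ts_ms - ts_ms % 60_000     # same bucketing as __time - __time % 60000
        buckets[t] = buckets.get(t, 0) + counter
    return {t: buckets[t] / 1024 / 1024 for t in sorted(buckets)}
```

The "(Latest)" storage charts differ only in the aggregate: they take the newest sample per bucket (max_by on `__time`) instead of a sum, because storage is a gauge rather than a cumulative traffic counter.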

Top Usage Dashboard

  1. Under the dashboard templates, choose "LTS Dashboard Templates".
  2. On the dashboard list page, click the "Top Usage" dashboard name to view the chart details.

    If some charts on the dashboard do not display properly, click "Auto Configure" on the index configuration page to update the index fields; for details, see Creating an LTS Log Index. After the index is updated, return to the dashboard page and verify that the charts display correctly.

    • Log group ID filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*"  | select log_group_id from log group by log_group_id limit 100000
    • Log group name filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_group_name_alias from log group by log_group_name_alias limit 100000
    • Log stream ID filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*"  | select log_stream_id from log group by log_stream_id limit 100000
    • Log stream name filter; the associated query and analysis statement is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" | select log_stream_name_alias from log group by log_stream_name_alias limit 100000
    • The query and analysis statement associated with the Raw Log Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage_traffic" | select sum(counter)/1024/1024 as "原始日志流量/MB"
    • The query and analysis statement associated with the Read Traffic (Compressed)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"read" | select sum(counter)/1024/1024 as "读流量/MB"
    • The query and analysis statement associated with the Write Traffic (Compressed)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"write_traffic" | select sum(counter)/1024/1024 as "写流量/MB"
    • The query and analysis statement associated with the Index Traffic (Standard)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"index_traffic" | select sum(counter)/1024/1024 as "索引流量/MB"
    • The query and analysis statement associated with the Index Traffic (Search)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"search_index_traffic" | select sum(counter)/1024/1024 as "索引流量/MB"
    • The query and analysis statement associated with the Log DSL Processing Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"dsl_traffic" && counter:* | select sum(counter)/1024/1024 as "日志DSL加工流量/MB"
    • The query and analysis statement associated with the Log-to-Metric Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"log2Metric" | select sum(counter)/1024/1024 as "日志转指标流量/MB"
    • The query and analysis statement associated with the Standard Storage (Latest)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"storage" && counter:* | select sum(max_storage) as "标准存储量(最新值)/MB" from (select max_by(counter,__time)/1024/1024 as max_storage group by log_stream_id limit 10000 )
    • The query and analysis statement associated with the Cold Storage (Latest)/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and metric_name:"cold_storage" && counter:* | select sum(max_cold_storage) as "冷存储量(最新值)/MB" from (select max_by(counter,__time)/1024/1024 as max_cold_storage group by log_stream_id limit 10000 )
    • The query and analysis statement associated with the Basic Transfer Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"obs_transfer" or metric_name:"dms_transfer" or metric_name:"dis_transfer") | select sum(counter)/1024/1024 as "基础转储流量/MB"
    • The query and analysis statement associated with the Advanced Transfer Traffic/MB chart is as follows:
      not log_group_id:"__default__*" and not log_group_name_alias:"__default__*" and (metric_name:"dws_transfer" or metric_name:"dli_transfer" or metric_name:"obs_orc_transfer") | select sum(counter)/1024/1024 as "高级转储流量/MB"
    • The query and analysis statement associated with the TOP Log Group Usage chart is as follows:
      not log_group_id:"__default__*"  | select  log_group_id as "日志组ID", sum(case when metric_name = 'storage_traffic' then counter else 0 end)*1.0/1024/1024 as "原始日志流量/MB", sum(case when metric_name = 'index_traffic' then counter else 0 end)*1.0/1024/1024 as "索引流量(标准型)/MB", sum(case when metric_name = 'search_index_traffic' then counter else 0 end)*1.0/1024/1024 as "索引流量(搜索型)/MB", sum(case when metric_name = 'write_traffic' then counter else 0 end)*1.0/1024/1024 as "写流量(压缩后)/MB", sum(case when metric_name = 'storage' then counter else 0 end)*1.0/1024/1024 as "标准存储量(最新值)/MB", sum(case when metric_name = 'cold_storage' then counter else 0 end)*1.0/1024/1024 as "冷存储量(最新值)/MB" group by log_group_id order by "原始日志流量/MB" desc
    • The query and analysis statement associated with the TOP Log Stream Usage chart is as follows:
      not log_group_id:"__default__*"  | select  log_stream_id as "日志流ID", sum(case when metric_name = 'storage_traffic' then counter else 0 end)*1.0/1024/1024 as "原始日志流量/MB", sum(case when metric_name = 'index_traffic' then counter else 0 end)*1.0/1024/1024 as "索引流量(标准型)/MB", sum(case when metric_name = 'search_index_traffic' then counter else 0 end)*1.0/1024/1024 as "索引流量(搜索型)/MB", sum(case when metric_name = 'write_traffic' then counter else 0 end)*1.0/1024/1024 as "写流量(压缩后)/MB",sum(case when metric_name = 'storage' then counter else 0 end)*1.0/1024/1024 as "标准存储量(最新值)/MB",sum(case when metric_name = 'cold_storage' then counter else 0 end)*1.0/1024/1024 as "冷存储量(最新值)/MB" group by log_stream_id order by "原始日志流量/MB" desc
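The two TOP charts pivot rows of (metric_name, counter) into one column per metric using the `sum(case when ...)` idiom, then sort by raw log traffic in descending order. A rough Python equivalent of that pivot (illustrative only; the function name is ours, the field names follow the query):

```python
def top_usage(rows, key="log_group_id"):
    """Sketch of the TOP usage charts: pivot metric_name rows into
    per-group metric columns (the sum(case when ...) idiom) and
    sort groups by raw log traffic, largest first."""
    metrics = ("storage_traffic", "index_traffic", "search_index_traffic",
               "write_traffic", "storage", "cold_storage")
    agg = {}
    for r in rows:
        g = agg.setdefault(r[key], {m: 0.0 for m in metrics})
        if r["metric_name"] in g:
            g[r["metric_name"]] += r["counter"] / 1024 / 1024  # bytes -> MB
    return sorted(agg.items(), key=lambda kv: kv[1]["storage_traffic"], reverse=True)
```

Passing `key="log_stream_id"` would correspond to the TOP Log Stream Usage variant of the query.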

DSL Processing Task Monitoring Center

  1. Under the dashboard templates, choose "DSL Dashboard Templates > DSL Processing Task Monitoring Center" to view the chart details.

    • Processing task ID filter; the associated query and analysis statement is as follows:
      * | select distinct(task_id)
    • Processing task name filter; the associated query and analysis statement is as follows:
      * | select distinct(task_name)
    • The query and analysis statement associated with the Input Lines chart is as follows:
      * | SELECT CASE   WHEN  "input" < 1000 THEN concat( cast( "input" AS VARCHAR ), '行' )  WHEN "input" < 1000 * 1000 THEN concat( cast( round( "input"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "input" < 1000000000 THEN concat( cast( round( "input"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "input"/ 1000.0 < 1000000000 THEN concat( cast( round( "input"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "input"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("process.accept") as "input")
    • The query and analysis statement associated with the Output Lines chart is as follows:
      * | SELECT CASE   WHEN  "delivered" < 1000 THEN concat( cast( "delivered" AS VARCHAR ), '行' )  WHEN "delivered" < 1000 * 1000 THEN concat( cast( round( "delivered"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "delivered" < 1000000000 THEN concat( cast( round( "delivered"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "delivered"/ 1000.0 < 1000000000 THEN concat( cast( round( "delivered"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "delivered"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("process.delivered") as "delivered")
    • The query and analysis statement associated with the Filtered Lines chart is as follows:
      * | SELECT CASE   WHEN  "drop" < 1000 THEN concat( cast( "drop" AS VARCHAR ), '行' )  WHEN "drop" < 1000 * 1000 THEN concat( cast( round( "drop"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "drop" < 1000000000 THEN concat( cast( round( "drop"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "drop"/ 1000.0 < 1000000000 THEN concat( cast( round( "drop"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "drop"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("process.drop") as "drop")
    • The query and analysis statement associated with the Failed Lines chart is as follows:
      * | SELECT CASE   WHEN  "failed" < 1000 THEN concat( cast( "failed" AS VARCHAR ), '行' )  WHEN "failed" < 1000 * 1000 THEN concat( cast( round( "failed"/ 1000.0, 1 ) AS VARCHAR ), '千行' ) WHEN "failed" < 1000000000 THEN concat( cast( round( "failed"/ 1000000.0, 1 ) AS VARCHAR ), '百万行' )  WHEN "failed"/ 1000.0 < 1000000000 THEN concat( cast( round( "failed"/ 1000.0 / 1000000.0, 1 ) AS VARCHAR ), '十亿行' ) ELSE concat( cast( round( "failed"/ 1000.0 / 1000 / 1000 / 1000, 1 ) AS VARCHAR ), '万亿行' )  END AS  "total"  from (select sum("process.failed") as "failed")
    • The query and analysis statement associated with the Execution Records chart is as follows:
      * | select MILLIS_TO_TIMESTAMP("start") as "统计开始时间", MILLIS_TO_TIMESTAMP("end") as "统计结束时间", "process.accept" as "输入行数", "process.delivered" as "输出行数", "process.drop" as "过滤行数", "process.failed" as "失败行数" limit 1000
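The Execution Records query converts the epoch-millisecond fields "start" and "end" to timestamps with MILLIS_TO_TIMESTAMP. That conversion is the usual epoch-millis-to-datetime step, sketched here in Python (the console renders values in the dashboard's time zone; UTC is assumed in this sketch):

```python
from datetime import datetime, timezone

def millis_to_timestamp(ms):
    """Sketch of MILLIS_TO_TIMESTAMP: convert an epoch value in
    milliseconds to a formatted UTC timestamp string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
```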

Related Documents