Updated on 2025-03-24 GMT+08:00

ELB Dashboard Templates (Dedicated)

ELB automatically distributes incoming traffic across multiple backend servers based on configured routing rules. By distributing traffic, it expands the service capacity of application systems and improves their availability by eliminating single points of failure (SPOFs). ELB allows you to view and analyze detailed access logs of HTTP and HTTPS requests for layer-7 load balancing, including the request time, client IP address, request path, and server response.

ELB dashboard templates support Viewing ELB Layer-7 Access Center, Viewing ELB Layer-7 Monitoring Center, and Viewing ELB Layer-7 Monitoring by the Second.

  • upstream_addr_priv (applicable to shared load balancers) indicates the IP address and port number of the backend host. There may be multiple values separated by commas and spaces, and each value is in the format of {IP address}:{Port number} or -.
  • upstream_addr (applicable to dedicated load balancers) indicates the IP address and port number of the backend host. There may be multiple values separated by commas and spaces, and each value is in the format of {IP address}:{Port number} or -.
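When post-processing exported logs outside LTS, the multi-value upstream_addr field can be split into its {IP address}:{Port number} parts. The sketch below is illustrative only (IPv4 addresses assumed); parse_upstream_addr is a hypothetical helper, not part of ELB or LTS:

```python
def parse_upstream_addr(field):
    """Split an upstream_addr log field into (ip, port) tuples.

    Multiple values are separated by ", "; a bare "-" means the
    request was not forwarded to any backend for that attempt.
    """
    results = []
    for value in field.split(", "):
        if value == "-":
            results.append(None)  # no backend handled this attempt
        else:
            ip, _, port = value.rpartition(":")
            results.append((ip, int(port)))
    return results
```

For example, parse_upstream_addr("192.168.0.5:8080, -") yields one (IP, port) tuple followed by None for the unforwarded attempt.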

Prerequisites

Viewing ELB Layer-7 Access Center

  1. Log in to the LTS console. In the navigation pane, choose Log Management.
  2. In the Log Applications area, click ELB Log Center and choose Go to the Dashboard.
  3. In the dashboard list, choose ELB dashboard templates under Dashboard Templates and click ELB layer-7 access center to view the chart details.

    • Filter all load balancers. The associated query and analysis statement is:
      select distinct(lb_name)
    • Filter all client IP addresses. The associated query and analysis statement is:
      select distinct(remote_addr)
    • Filter all backend servers' IP addresses. The associated query and analysis statement is:
      select distinct(upstream_addr)
    • Filter all elastic IP addresses. The associated query and analysis statement is:
      select distinct(eip_address) 

      Select a time range in the upper part of the page. A relative time range (based on the log reporting time) of more than 30 minutes is recommended. Because the service time (field time_iso8601) differs from the log reporting time, the data points at both ends of a line chart sorted by service time are incomplete and should not be used for reference.

      • From now: queries log data generated in a time range that ends with the current time, such as the previous 1, 5, or 15 minutes. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from now, the charts on the dashboard display the log data that is generated from 18:20:31 to 19:20:31.
      • From last: queries log data generated in a whole-unit time range immediately before the current one, such as the previous whole hour or whole day. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from last, the charts on the dashboard display the log data that is generated from 18:00:00 to 19:00:00.
      • Specified: queries log data that is generated in a specified time range.
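The difference between the two relative modes can be sketched as follows. This is an illustrative helper (from_now and from_last are hypothetical names), assuming whole-hour units for "From last":

```python
from datetime import datetime, timedelta

def from_now(now, hours=1):
    """'From now': a sliding window ending at the current instant."""
    return now - timedelta(hours=hours), now

def from_last(now, hours=1):
    """'From last': the previous whole hour(s), aligned to hour boundaries."""
    end = now.replace(minute=0, second=0, microsecond=0)
    return end - timedelta(hours=hours), end
```

With the current time 19:20:31, from_now returns 18:20:31-19:20:31 while from_last returns 18:00:00-19:00:00, matching the two examples above.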
    • Day-over-day PV Change. The associated query and analysis statement is:
      select diff[1] as "total", round((diff[1] - diff[2]) / diff[2] * 100, 2) as inc from(select compare( "pv" , 86400) as diff from (select count(1) as "pv" from log))
    • Week-on-week PV Change. The associated query and analysis statement is:
      select diff[1] as "total", round((diff[1] - diff[2]) / diff[2] * 100, 2) as inc from(select compare( "pv" , 604800) as diff from (select count(1) as "pv" from log))
    • Day-over-day UV Change. The associated query and analysis statement is:
      select diff[1] as "total", round((diff[1] - diff[2]) / diff[2] * 100, 2) as inc from(select compare( "uv" , 86400) as diff from (select APPROX_COUNT_DISTINCT(remote_addr) as "uv" from log))
    • Week-on-week UV Change. The associated query and analysis statement is:
      select diff[1] as "total", round((diff[1] - diff[2]) / diff[2] * 100, 2) as inc from(select compare( "uv" , 604800) as diff from (select APPROX_COUNT_DISTINCT(remote_addr) as "uv" from log))
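The four statements above share one pattern: compare("pv", 86400) (or 604800 for a week) returns the current value and the value one period earlier, and the outer query computes the rounded percent change. A minimal sketch of that arithmetic, assuming the two values are already known (period_over_period is a hypothetical helper):

```python
def period_over_period(current, previous):
    """Mimic the statement's arithmetic: the current total plus the
    percent change versus the prior period, rounded to two decimals,
    as round((diff[1] - diff[2]) / diff[2] * 100, 2) would produce."""
    return current, round((current - previous) / previous * 100, 2)
```

For example, a PV of 1200 today versus 1000 yesterday gives a day-over-day increase of 20.0 percent.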
    • PV Distribution (China). The associated query and analysis statement is:
      select ip_to_province(remote_addr) as province, count(1) as pv where IP_TO_COUNTRY (remote_addr) = 'China'  group by province HAVING province not in ('','Reserved address','*')
    • PV Distribution (Global). The associated query and analysis statement is:
      SELECT ip_to_country(remote_addr) as country,COUNT(1) as PV GROUP BY country HAVING country not in ('','Reserved address','*')
    • UV Distribution (China). The associated query and analysis statement is:
      select ip_to_province(remote_addr) as province, APPROX_COUNT_DISTINCT(remote_addr) as uv where IP_TO_COUNTRY (remote_addr) = 'China' group by province HAVING province not in ('','Reserved address','*')
    • UV Distribution (Global). The associated query and analysis statement is:
      select ip_to_country(remote_addr) as country, APPROX_COUNT_DISTINCT(remote_addr) as uv group by country HAVING country not in ('','Reserved address','*')
    • Average Latency Distribution (China). The associated query and analysis statement is:
      SELECT province, round( CASE WHEN "Average latency (ms)" > 0 THEN "Average latency (ms)" ELSE 0 END, 3 ) AS "Average latency (ms)" FROM (SELECT ip_to_province(remote_addr) as province, avg(request_time) * 1000 AS "Average latency (ms)" WHERE IP_TO_COUNTRY(remote_addr) = 'China' GROUP BY province HAVING province not in ('','Reserved address','*'))
    • Average Latency Distribution (Global). The associated query and analysis statement is:
      SELECT country, round( CASE WHEN "Average latency (ms)" > 0 THEN "Average latency (ms)" ELSE 0 END, 2 ) AS "Average latency (ms)" FROM (SELECT ip_to_country(remote_addr) as country, avg(request_time) * 1000 AS "Average latency (ms)" GROUP BY country HAVING country not in ('','Reserved address','*'))
    • PV/UV Today. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,PV,UV FROM (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_ , count(1) as PV,  APPROX_COUNT_DISTINCT(remote_addr) as UV from log WHERE TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ') <= CURRENT_TIMESTAMP  and TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ') >= DATE_TRUNC( 'DAY',(CURRENT_TIMESTAMP + INTERVAL '8' HOUR)) - INTERVAL '8' HOUR group by _time_ order by _time_)
    • PV/UV in 7 Days. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,PV,UV FROM (select  TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_ , count(1) as PV,  APPROX_COUNT_DISTINCT(remote_addr) as UV from log WHERE TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ') <= CURRENT_TIMESTAMP  and TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ') >= DATE_TRUNC( 'DAY',(CURRENT_TIMESTAMP + INTERVAL '8' HOUR)) - INTERVAL '8' HOUR - INTERVAL '7' DAY group by _time_ order by _time_ )
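The PV/UV statements above all parse the bracketed time_iso8601 field with SUBSTRING(time_iso8601, 2, 25) and then round it up into 10-minute buckets with TIME_CEIL(..., 'PT600S'). A sketch of that transformation outside LTS (ceil_to_bucket is a hypothetical helper, assuming a value such as "[2024-05-01T19:20:31+08:00]"):

```python
from datetime import datetime

def ceil_to_bucket(raw, bucket_seconds=600):
    """Emulate SUBSTRING(time_iso8601, 2, 25) + TIME_PARSE + TIME_CEIL:
    strip the surrounding brackets, parse the ISO-8601 timestamp, and
    round it up to the next bucket boundary (600 s = PT600S)."""
    ts = datetime.fromisoformat(raw[1:26])  # drop '[' and keep the next 25 chars
    epoch = ts.timestamp()
    ceiled = -(-epoch // bucket_seconds) * bucket_seconds  # ceiling division
    return datetime.fromtimestamp(ceiled, tz=ts.tzinfo)
```

A timestamp of 19:20:31 therefore lands in the 19:30:00 bucket, which is how the dashboard aligns PV/UV points on the time axis.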
    • Top 10 Provinces by Visits. The associated query and analysis statement is:
      select ip_to_province(remote_addr) as "province", count(1) as "Visits" group by "province" HAVING "province" <> '-1' order by "Visits" desc limit 10
    • Top 10 Cities by Visits. The associated query and analysis statement is:
      select ip_to_city(remote_addr) as "city", count(1) as "Visits" group by "city" HAVING  "city" <> '-1' order by "Visits" desc  limit 10
    • Top 10 Hosts by Visits. The associated query and analysis statement is:
      select  host as "Host", count(1) as "PV" group by "Host" order by "PV" desc limit 10
    • Top 10 UserAgents by Visits. The associated query and analysis statement is:
      select http_user_agent as "UserAgent", count(1) as "PV" group by "UserAgent" order by "PV" desc limit 10
    • Device Distribution by Type. The associated query and analysis statement is:
      select case when regexp_like(lower(http_user_agent), 'iphone|ipod|android|ios') then 'Mobile' else 'PC' end as type , count(1) as total group by  type
    • Device Distribution by System. The associated query and analysis statement is:
      select case when regexp_like(lower(http_user_agent), 'iphone|ipod|ios') then 'IOS' when regexp_like(lower(http_user_agent), 'android') then 'Android' else 'other' end as type , count(1) as total group by  type HAVING type != 'other'
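The two device-distribution statements classify requests by matching keywords in the lowercased http_user_agent field. The same CASE logic can be sketched in Python (classify_device is a hypothetical helper mirroring the two regexp_like expressions above):

```python
import re

MOBILE_RE = re.compile(r"iphone|ipod|android|ios")
IOS_RE = re.compile(r"iphone|ipod|ios")

def classify_device(user_agent):
    """Mirror the two CASE expressions: device type, then OS family."""
    ua = user_agent.lower()
    device = "Mobile" if MOBILE_RE.search(ua) else "PC"
    if IOS_RE.search(ua):
        os_family = "IOS"
    elif "android" in ua:
        os_family = "Android"
    else:
        os_family = "other"
    return device, os_family
```

Note that, as in the SQL, the classification is keyword-based only; unusual UserAgent strings fall back to PC/other, and the system chart drops the 'other' group via HAVING.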
    • TOP URL. The associated query and analysis statement is:
      select router_request_uri , count(1) as pv, APPROX_COUNT_DISTINCT(remote_addr)  as uv, round(sum( case when status < 400 then 1 else 0 end   )  * 100.0 / count(1), 2) as "Access success" group by router_request_uri ORDER by pv desc
    • Top IP Addresses by Visits. The associated query and analysis statement is:
      select remote_addr as "Source IP Address",ip_to_country(remote_addr) as "Country/Region",ip_to_province(remote_addr) as "Province",ip_to_city(remote_addr) as "City",ip_to_provider(remote_addr) as "Carrier",count(1) as "PV",http_user_agent as "UserAgent Sampling",router_request_uri as "URL Sampling" group by remote_addr,http_user_agent,router_request_uri ORDER by "PV" desc

Viewing ELB Layer-7 Monitoring Center

  1. Log in to the LTS console. In the navigation pane, choose Log Management.
  2. In the Log Applications area, click ELB Log Center and choose Go to the Dashboard.
  3. In the dashboard list, choose ELB dashboard templates under Dashboard Templates and click ELB layer-7 monitoring center to view the chart details.

    • Filter all load balancers. The associated query and analysis statement is:
      select distinct(lb_name)
    • Filter all client IP addresses. The associated query and analysis statement is:
      select distinct(remote_addr)
    • Filter all backend servers' IP addresses. The associated query and analysis statement is:
      select distinct(upstream_addr)
    • Filter all elastic IP addresses. The associated query and analysis statement is:
      select distinct(eip_address) 

      Select a time range in the upper part of the page. A relative time range (based on the log reporting time) of more than 30 minutes is recommended. Because the service time (field time_iso8601) differs from the log reporting time, the data points at both ends of a line chart sorted by service time are incomplete and should not be used for reference.

      • From now: queries log data generated in a time range that ends with the current time, such as the previous 1, 5, or 15 minutes. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from now, the charts on the dashboard display the log data that is generated from 18:20:31 to 19:20:31.
      • From last: queries log data generated in a whole-unit time range immediately before the current one, such as the previous whole hour or whole day. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from last, the charts on the dashboard display the log data that is generated from 18:00:00 to 19:00:00.
      • Specified: queries log data that is generated in a specified time range.
    • PV. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,PV FROM (select  TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_ , count(1) as PV from log group by _time_ order by _time_ )
    • Request Success Rate. The associated query and analysis statement is:
      select ROUND(sum(case when status < 400 then 1 else 0 end) * 100.0 / count(1),2) as cnt
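The success-rate statement counts every request with a status code below 400 as a success. The same computation, sketched outside LTS over a list of status codes (success_rate is a hypothetical helper):

```python
def success_rate(status_codes):
    """Requests with status < 400 count as successes; the result is a
    percentage rounded to two decimals, matching ROUND(..., 2) above."""
    ok = sum(1 for s in status_codes if s < 400)
    return round(ok * 100.0 / len(status_codes), 2)
```

For example, three successful requests out of four (one 404) yield a success rate of 75.0; the Access Failure Rate chart below is simply the complement of this value.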
    • 4xx Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" >= 400 and "status" < 500
    • 499 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 499
    • Average Latency. The associated query and analysis statement is:
      select round(avg(request_time) * 1000, 3) as cnt
    • 404 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 404
    • 504 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 504
    • 5xx Requests. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,cnt FROM ( SELECT TIME_CEIL ( TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT300S' ) AS _time_, count( 1 ) AS cnt FROM log where "status" >= 500 GROUP BY _time_ )
    • Status Code Distribution. The associated query and analysis statement is:
      SELECT status, COUNT(1) AS rm GROUP BY status
    • UV. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,UV FROM (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_ , APPROX_COUNT_DISTINCT(remote_addr) as UV  from log group by _time_)
    • Traffic. The associated query and analysis statement is:
      select TIME_FORMAT(_time_,'yyyy-MM-dd HH:mm:ss','+08:00') AS _time_,round( CASE WHEN "Inbound" > 0 THEN "Inbound" ELSE 0 END, 2 ) AS "Inbound",round( CASE WHEN "Outbound" > 0 THEN "Outbound" ELSE 0 END, 2 ) AS "Outbound" FROM (SELECT TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_,sum(request_length) / 1024.0 AS "Inbound",sum(bytes_sent) / 1024.0 AS "Outbound" group by  _time_)
    • Access Failure Rate. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,round( CASE WHEN "Failure rate" > 0 THEN "Failure rate" ELSE 0 END, 2 ) AS "Failure rate",round( CASE WHEN "5XX" > 0 THEN "5XX" ELSE 0 END, 2 ) AS "5XX" from (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') AS _time_,sum(case when status >= 400 then 1 else 0 end) * 100.0 / count(1) as 'Failure rate' , sum(case when status >=500 THEN 1 ELSE 0 END)*100.0/COUNT(1) as '5XX' group by  _time_)
    • Latency. The associated query and analysis statement is:
      select TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss', '+08:00' ) as _time_,round( CASE WHEN "Avg." > 0 THEN "Avg." ELSE 0 END, 2 ) AS "Avg.",round( CASE WHEN "P50" > 0 THEN "P50" ELSE 0 END, 2 ) AS "P50",round( CASE WHEN "P90" > 0 THEN "P90" ELSE 0 END, 2 ) AS "P90",round( CASE WHEN "P99" > 0 THEN "P99" ELSE 0 END, 2 ) AS "P99",round( CASE WHEN "P9999" > 0 THEN "P9999" ELSE 0 END, 2 ) AS "P9999" from (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT600S') as _time_,avg(request_time) * 1000 as "Avg.", APPROX_QUANTILE_DS("request_time", 0.50)*1000 as "P50", APPROX_QUANTILE_DS("request_time", 0.90)*1000 as "P90" ,APPROX_QUANTILE_DS("request_time", 0.99)*1000 as 'P99',APPROX_QUANTILE_DS("request_time", 0.9999)*1000 as 'P9999' group by  _time_)
    • Top Host Requests. The associated query and analysis statement is:
      SELECT host, pv, uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT "host", count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( http_x_forwarded_for ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( body_bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY "host" ) ORDER BY pv DESC
    • Top Host Latencies. The associated query and analysis statement is:
      SELECT "host", pv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM ( SELECT "host", count( 1 ) AS pv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY "host" ) ORDER BY "Average Latency (ms)" desc
    • Top Host Failure Rates. The associated query and analysis statement is:
      SELECT "host", pv,round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)"  FROM ( SELECT "host", count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY "host"  ) ORDER BY "Access Failure Rate (%)" desc
    • Top URL Requests. The associated query and analysis statement is:
      SELECT router_request_uri, pv,uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT router_request_uri, count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( http_x_forwarded_for ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( body_bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY router_request_uri  ) ORDER BY pv desc
    • Top URL Latencies. The associated query and analysis statement is:
      SELECT router_request_uri, pv,round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)",round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)",round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)",round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM (SELECT router_request_uri, count( 1 ) AS pv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY router_request_uri  ) ORDER BY "Average Latency (ms)" desc
    • Top URL Failure Rates. The associated query and analysis statement is:
      SELECT router_request_uri, pv, round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM( SELECT router_request_uri, count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != '' GROUP BY router_request_uri) ORDER BY "Access Failure Rate (%)" desc
    • Top Backend Requests. The associated query and analysis statement is:
      SELECT addr, pv, uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT upstream_addr as addr, count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( http_x_forwarded_for ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( body_bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY addr  having length(upstream_addr) > 2) ORDER BY "pv" desc
    • Top Backend Latencies. The associated query and analysis statement is:
      SELECT addr,pv,round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)",round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)",round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)",round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM (SELECT upstream_addr as addr,count( 1 ) AS pv,sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)",avg( request_time ) * 1000 AS "Average Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != '' GROUP BY addr having length(upstream_addr) > 2) ORDER BY "Average Latency (ms)" desc
    • Top Backend Failure Rates. The associated query and analysis statement is:
      SELECT addr, pv, round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)"  FROM ( SELECT upstream_addr as addr, count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY addr having length(upstream_addr) > 2) ORDER BY "Access Failure Rate (%)" desc

Viewing ELB Layer-7 Monitoring by the Second

  1. Log in to the LTS console. In the navigation pane, choose Log Management.
  2. In the Log Applications area, click ELB Log Center and choose Go to the Dashboard.
  3. Choose ELB dashboard templates under Dashboard Templates and click ELB layer-7 monitoring by the second to view the chart details.

    • Filter all load balancers. The associated query and analysis statement is:
      select distinct(lb_name)
    • Filter all client IP addresses. The associated query and analysis statement is:
      select distinct(remote_addr)
    • Filter all backend servers' IP addresses. The associated query and analysis statement is:
      select distinct(upstream_addr)
    • Filter all elastic IP addresses. The associated query and analysis statement is:
      select distinct(eip_address) 

      Select a time range in the upper part of the page. A relative time range (based on the log reporting time) of more than 30 minutes is recommended. Because the service time (field time_iso8601) differs from the log reporting time, the data points at both ends of a line chart sorted by service time are incomplete and should not be used for reference.

      • From now: queries log data generated in a time range that ends with the current time, such as the previous 1, 5, or 15 minutes. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from now, the charts on the dashboard display the log data that is generated from 18:20:31 to 19:20:31.
      • From last: queries log data generated in a whole-unit time range immediately before the current one, such as the previous whole hour or whole day. For example, if the current time is 19:20:31 and 1 hour is selected as the relative time from last, the charts on the dashboard display the log data that is generated from 18:00:00 to 19:00:00.
      • Specified: queries log data that is generated in a specified time range.
    • The QPS chart displays the QPS changes. The associated query and analysis statement is:
      select TIME_FORMAT(TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT1S'),'yyyy-MM-dd HH:mm:ss','+08:00') AS _time_ , COUNT(*) as QPS from log group by _time_
    • The Success Rate chart displays the success rate changes. The associated query and analysis statement is:
      select __time,round(CASE WHEN "Success Rate" > 0 THEN "Success Rate" else 0 end,2) as "Success Rate" from (select TIME_FORMAT(TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S'),'yyyy-MM-dd HH:mm:ss','+08:00') as __time, sum(case when status < 400 then 1 else 0 end) * 100.0 / count(1) as 'Success Rate' from log group by __time)
    • The Latency chart displays the latency changes. The associated query and analysis statement is:
      select __time,round(CASE WHEN "Access latency" > 0 THEN "Access latency" else 0 end,2) as "Access latency",round(CASE WHEN "Upstream latency" > 0 THEN "Upstream latency" else 0 end,2) as "Upstream latency" from (select TIME_FORMAT(TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S'),'yyyy-MM-dd HH:mm:ss','+08:00') as __time, avg(request_time)* 1000 as 'Access latency',avg(upstream_response_time)* 1000 as 'Upstream latency' from log group by __time)
    • The Traffic chart displays the traffic changes. The associated query and analysis statement is:
      select TIME_FORMAT(__time, 'yyyy-MM-dd HH:mm:ss', '+08:00') AS __time, round(CASE WHEN "Incoming" > 0 THEN "Incoming" ELSE 0 END, 2) AS "Incoming",round(CASE WHEN "Outgoing body" > 0 THEN "Outgoing body" ELSE 0 END, 2) AS "Outgoing body" from (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as __time,          sum("request_length") / 1024.0 as "Incoming",          sum("body_bytes_sent") / 1024.0 as "Outgoing body"        group by          __time      )
    • The Status Codes chart displays the changes of status codes. The associated query and analysis statement is:
      select TIME_FORMAT(t.t, 'yyyy-MM-dd HH:mm:ss', '+08:00') as "time", CASE WHEN a."2XX" IS NOT NULL THEN CAST(a."2XX" AS BIGINT) ELSE 0 END as "2XX", CASE WHEN b."3XX" IS NOT NULL THEN CAST(b."3XX" AS BIGINT) ELSE 0 END as "3XX", CASE WHEN c."4XX" IS NOT NULL THEN CAST(c."4XX" AS BIGINT) ELSE 0 END as "4XX", CASE WHEN d."5XX" IS NOT NULL THEN CAST(d."5XX" AS BIGINT) ELSE 0 END as "5XX", CASE WHEN e."Other" IS NOT NULL THEN CAST(e."Other" AS BIGINT) ELSE 0 END as "Other" from (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t from log group by t order by t asc) t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t, CAST(COUNT(1) as VARCHAR) as "2XX" from log WHERE "status" >= 200 and "status" < 300 group by t order by t asc) a on t.t = a.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t, CAST(COUNT(1) as VARCHAR) as "3XX" from log WHERE "status" >= 300 and "status" < 400 group by t order by t asc) b on t.t = b.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t, CAST(COUNT(1) as VARCHAR) as "4XX" from log WHERE "status" >= 400 and "status" < 500 group by t order by t asc) c on t.t = c.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t, CAST(COUNT(1) as VARCHAR) as "5XX" from log WHERE "status" >= 500 and "status" < 600 group by t order by t asc) d on t.t = d.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25), 'yyyy-MM-dd''T''HH:mm:ssZZ'), 'PT5S') as t, CAST(COUNT(1) as VARCHAR) as "Other" from log WHERE "status" < 200 or "status" >= 600 group by t order by t asc) e on t.t = e.t
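Conceptually, the statement builds a base series of 5-second buckets and left-joins one count per status class so that buckets with no requests in a class still show 0. The same pivot can be sketched in a single pass (status_class and pivot_status_codes are hypothetical helpers, with pre-bucketed rows assumed):

```python
from collections import defaultdict

SERIES = ("2XX", "3XX", "4XX", "5XX", "Other")

def status_class(status):
    """Map a status code to the chart's series names."""
    if 200 <= status < 300: return "2XX"
    if 300 <= status < 400: return "3XX"
    if 400 <= status < 500: return "4XX"
    if 500 <= status < 600: return "5XX"
    return "Other"  # status < 200 or >= 600

def pivot_status_codes(rows):
    """One-pass equivalent of the five LEFT JOINs: rows are
    (time_bucket, status) pairs; the result maps each bucket to a
    count per series, with absent classes filled with 0."""
    table = defaultdict(lambda: {k: 0 for k in SERIES})
    for bucket, status in rows:
        table[bucket][status_class(status)] += 1
    return dict(table)
```

The zero-filling mirrors the CASE WHEN ... IS NOT NULL ... ELSE 0 wrappers in the SQL, which turn missing join matches into 0 so every series is continuous on the chart.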
    • The Backend Response Codes chart displays the changes of backend response codes. The associated query and analysis statement is:
      select TIME_FORMAT(t.t, 'yyyy-MM-dd HH:mm:ss', '+08:00') as "time",CASE WHEN a."2XX" IS NOT NULL THEN  CAST(a."2XX" AS BIGINT) ELSE 0 END as "2XX",CASE WHEN b."3XX" IS NOT NULL THEN  CAST(b."3XX" AS BIGINT) ELSE 0 END as "3XX",CASE WHEN c."4XX" IS NOT NULL THEN  CAST(c."4XX" AS BIGINT) ELSE 0 END as "4XX",CASE WHEN d."5XX" IS NOT NULL THEN  CAST(d."5XX" AS BIGINT) ELSE 0 END as "5XX",CASE WHEN e."Other" IS NOT NULL THEN  e."Other"  ELSE 0 END as "Other" from (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t from log group by t order by t asc) t left join(select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t , CAST(COUNT(1) as VARCHAR) as "2XX" from log WHERE "upstream_status" >= 200 and "upstream_status" < 300 group by t order by t asc) a on t.t = a.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t , CAST(COUNT(1) as VARCHAR) as "3XX" from log WHERE "upstream_status" >= 300 and "upstream_status" < 400 group by t order by t asc) b on t.t =b.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t , CAST(COUNT(1) as VARCHAR) as "4XX" from log WHERE "upstream_status" >= 400 and "upstream_status" < 500 group by t order by t asc) c on t.t =c.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t , CAST(COUNT(1) as VARCHAR) as "5XX" from log WHERE "upstream_status" >= 500 and "upstream_status" < 600 group by t order by t asc) d on t.t =d.t left join (select TIME_CEIL(TIME_PARSE(SUBSTRING(time_iso8601, 2, 25) ,'yyyy-MM-dd''T''HH:mm:ssZZ'),'PT5S') as t , CAST(COUNT(1) as VARCHAR) as "Other" from log WHERE "upstream_status" < 200 or "upstream_status" >= 600 group by t order by t asc) e on t.t =e.t