APIG Dashboard Templates

Updated on 2024-11-18 GMT+08:00

APIG is a fully managed API hosting service that delivers high performance, availability, and security. With APIG, you can build, manage, and deploy APIs at any scale to package your capabilities and sell them in KooGallery. With just a few clicks, you can integrate your internal systems and selectively expose and monetize your service capabilities at minimal cost and risk. APIG lowers R&D costs and improves operational efficiency, freeing you to focus on your core business.

APIG dashboard templates provide three dashboards, described in the following sections: Viewing APIG Access Center, Viewing APIG Monitoring Center, and Viewing APIG Monitoring by the Second.

Prerequisites

Viewing APIG Access Center

  1. Log in to the LTS console. In the navigation pane, choose Dashboards.
  2. Choose APIG dashboard templates under Dashboard Templates and click APIG access center to view the chart details.

    • Filter by requested domain name. The associated query and analysis statement is:
      select distinct(host)
    • Filter by application ID. The associated query and analysis statement is:
      select distinct(app_id)
    • PV Distribution (Global). The associated query and analysis statement is:
      SELECT ip_to_country(my_remote_addr) as country,sum(ori_pv) as PV from (select my_remote_addr, count(1) as ori_pv 
        group by my_remote_addr  
        ORDER BY ori_pv desc 
        LIMIT 10000) GROUP BY country HAVING country not in ('','Reserved address','*')
    • Average Latency Distribution (China). The associated query and analysis statement is:
      SELECT province,round( CASE WHEN "Average latency (ms)" > 0 THEN "Average latency (ms)" ELSE 0 END, 3 ) AS "Average latency (ms)" FROM (SELECT ip_to_province(my_remote_addr) as province,sum(rt)/sum(ori_pv) * 1000 AS "Average latency (ms)" from (select my_remote_addr, sum(request_time) as rt,count(1) as ori_pv 
        group by my_remote_addr  
        ORDER BY ori_pv desc 
        LIMIT 10000) WHERE  IP_TO_COUNTRY (my_remote_addr) = 'China' GROUP BY province )
        where province not in ('','Reserved address','*')
    • Average Latency Distribution (Global). The associated query and analysis statement is:
      SELECT country,round( CASE WHEN "Average latency (ms)" > 0 THEN "Average latency (ms)" ELSE 0 END, 2 ) AS "Average latency (ms)" FROM (SELECT ip_to_country(my_remote_addr) as country,sum(rt)/sum(ori_pv)  * 1000 AS "Average latency (ms)" from (select my_remote_addr, sum(request_time) as rt,count(1) as ori_pv 
        group by my_remote_addr  
        ORDER BY ori_pv desc 
        LIMIT 10000) GROUP BY country )
      where  country not in ('','Reserved address','*')
    • PV/UV Today. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss' ) as _time_,PV,UV FROM (select TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT600S') AS _time_ , count(1) as PV,  APPROX_COUNT_DISTINCT(my_remote_addr) as UV from log WHERE __time <= CURRENT_TIMESTAMP  and __time >= DATE_TRUNC( 'DAY',(CURRENT_TIMESTAMP + INTERVAL '8' HOUR)) - INTERVAL '8' HOUR group by _time_ order by _time_)
    • Top 10 Provinces by Visits. The associated query and analysis statement is:
      select ip_to_province(my_remote_addr) as "province", sum(ori_pv) as "Visits" from(select my_remote_addr, count(1) as ori_pv 
        group by my_remote_addr  
        ORDER BY ori_pv desc 
        LIMIT 10000)group by "province" HAVING "province" <> '-1' order by "Visits" desc limit 10
    • Top 10 Cities by Visits. The associated query and analysis statement is:
      select ip_to_city(my_remote_addr) as "city", sum(ori_pv) as "Visits" from(select my_remote_addr, count(1) as ori_pv 
        group by my_remote_addr  
        ORDER BY ori_pv desc 
        LIMIT 10000) group by "city" HAVING  "city" <> '-1' order by "Visits" desc  limit 10
    • Top 10 Hosts by Visits. The associated query and analysis statement is:
      select  host as "Host", count(1) as "PV" group by "Host" order by "PV" desc limit 10
    • Top 10 UserAgents by Visits. The associated query and analysis statement is:
      select http_user_agent as "UserAgent", count(1) as "PV" group by "UserAgent" order by "PV" desc limit 10
    • Device Distribution by Type. The associated query and analysis statement is:
      select case when regexp_like(lower(http_user_agent), 'iphone|ipod|android|ios') then 'Mobile' else 'PC' end as type , count(1) as total group by  type
    • Device Distribution by System. The associated query and analysis statement is:
      select case when regexp_like(lower(http_user_agent), 'iphone|ipod|ios') then 'IOS' when regexp_like(lower(http_user_agent), 'android') then 'Android' else 'other' end as type , count(1) as total group by  type HAVING type != 'other'
    • TOP URL. The associated query and analysis statement is:
      select router_uri , count(1) as pv, APPROX_COUNT_DISTINCT(my_remote_addr) as UV, round(sum( case when status < 400 then 1 else 0 end   )  * 100.0 / count(1), 2) as "Access Success Rate" group by router_uri ORDER by pv desc
    • Top IP Addresses by Visits. The associated query and analysis statement is:
      select my_remote_addr as "Source IP Address",ip_to_country(my_remote_addr) as "Country/Region",ip_to_province(my_remote_addr) as "Province",ip_to_city(my_remote_addr) as "City",ip_to_provider(my_remote_addr) as "Carrier",count(1) as "PV" group by my_remote_addr ORDER by "PV" desc limit 100
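
    The PV/UV Today chart above works by rounding each log timestamp up to a 10-minute boundary with TIME_CEIL(..., 'PT600S'), then counting all requests (PV) and distinct client addresses (UV) per bucket. The same bucketing logic can be sketched locally in Python; this is a simplified, hypothetical stand-in for illustration (LTS evaluates the SQL itself, and uses an approximate distinct count rather than an exact set):

    ```python
    from collections import defaultdict
    from datetime import datetime, timezone

    def ceil_to_bucket(ts: datetime, seconds: int = 600) -> datetime:
        """Mimic TIME_CEIL(..., 'PT600S'): round a timestamp up to the next bucket boundary."""
        epoch = ts.timestamp()
        bucket = -(-epoch // seconds) * seconds  # ceiling division on the epoch second
        return datetime.fromtimestamp(bucket, tz=timezone.utc)

    def pv_uv_by_bucket(logs):
        """logs: iterable of (timestamp, remote_addr) pairs.
        Returns {bucket_end: (pv, uv)} -- exact UV via a set, where the
        dashboard uses APPROX_COUNT_DISTINCT (an approximate sketch)."""
        buckets = defaultdict(lambda: [0, set()])
        for ts, addr in logs:
            b = ceil_to_bucket(ts)
            buckets[b][0] += 1          # every request counts toward PV
            buckets[b][1].add(addr)     # distinct client addresses count toward UV
        return {b: (pv, len(ips)) for b, (pv, ips) in sorted(buckets.items())}
    ```

    For example, two requests from the same address at 00:03 and 00:07 fall into the bucket ending 00:10 with PV 2 and UV 1.
    
    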

Viewing APIG Monitoring Center

  1. Log in to the LTS console. In the navigation pane, choose Dashboards.
  2. Choose APIG dashboard templates under Dashboard Templates and click APIG monitoring center to view the chart details.

    • Filter by requested domain name. The associated query and analysis statement is:
      select distinct(host)
    • Filter by application ID. The associated query and analysis statement is:
      select distinct(app_id)
    • PV. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss' ) as _time_,PV FROM ( SELECT TIME_CEIL ( TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'), 'PT300S' ) AS _time_, count( 1 ) AS PV FROM log GROUP BY _time_ )
    • Request Success Rate. The associated query and analysis statement is:
      select ROUND(sum(case when status < 400 then 1 else 0 end) * 100.0 / count(1),2) as cnt
    • Average Latency. The associated query and analysis statement is:
      select round(avg(request_time) * 1000, 3) as cnt
    • 4xx Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" >= 400 and "status" < 500
    • 404 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 404
    • 429 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 429
    • 504 Requests. The associated query and analysis statement is:
      SELECT COUNT(1) as cnt WHERE "status" = 504
    • 5xx Requests. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss') as _time_,cnt FROM ( SELECT TIME_CEIL ( TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'), 'PT300S' ) AS _time_, count( 1 ) AS cnt FROM log where "status" >= 500 GROUP BY _time_ )
    • Status Code Distribution. The associated query and analysis statement is:
      SELECT status, COUNT(1) AS rm GROUP BY status
    • UV. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss' ) as _time_,UV FROM (select TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT600S') AS _time_ , APPROX_COUNT_DISTINCT(my_remote_addr) as UV  from log group by _time_)
    • Traffic. The associated query and analysis statement is:
      select TIME_FORMAT(_time_,'yyyy-MM-dd HH:mm:ss') AS _time_,round( CASE WHEN "Inbound" > 0 THEN "Inbound" ELSE 0 END, 2 ) AS "Inbound",round( CASE WHEN "Outbound" > 0 THEN "Outbound" ELSE 0 END, 2 ) AS "Outbound" FROM (SELECT TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT600S') AS _time_,sum(request_length) / 1024.0 AS "Inbound",sum(bytes_sent) / 1024.0 AS "Outbound" group by  _time_)
    • Access Failure Rate. The associated query and analysis statement is:
      SELECT TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss') as _time_,round( CASE WHEN "Failure rate" > 0 THEN "Failure rate" ELSE 0 END, 2 ) AS "Failure rate",round( CASE WHEN "5xx Requests" > 0 THEN "5xx Requests" ELSE 0 END, 2 ) AS "5xx Requests" from (select TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT600S') AS _time_,sum(case when status >= 400 then 1 else 0 end) * 100.0 / count(1) as 'Failure rate' , sum(case when status >=500 THEN 1 ELSE 0 END)*100.0/COUNT(1) as '5xx Requests' group by  _time_)
    • Latency. The associated query and analysis statement is:
      select TIME_FORMAT( _time_, 'yyyy-MM-dd HH:mm:ss') as _time_,round( CASE WHEN "Avg." > 0 THEN "Avg." ELSE 0 END, 2 ) AS "Avg.",round( CASE WHEN "P50" > 0 THEN "P50" ELSE 0 END, 2 ) AS "P50",round( CASE WHEN "P90" > 0 THEN "P90" ELSE 0 END, 2 ) AS "P90",round( CASE WHEN "P99" > 0 THEN "P99" ELSE 0 END, 2 ) AS "P99",round( CASE WHEN "P9999" > 0 THEN "P9999" ELSE 0 END, 2 ) AS "P9999" from (select TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT600S') as _time_,avg(request_time) * 1000 as "Avg.", APPROX_QUANTILE_DS("request_time", 0.50)*1000 as "P50", APPROX_QUANTILE_DS("request_time", 0.90)*1000 as "P90" ,APPROX_QUANTILE_DS("request_time", 0.99)*1000 as 'P99',APPROX_QUANTILE_DS("request_time", 0.9999)*1000 as 'P9999' group by  _time_)
    • Top Host Requests. The associated query and analysis statement is:
      SELECT "host", pv, uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT "host", count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( my_remote_addr ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY "host" ) ORDER BY pv DESC
    • Top Host Latencies. The associated query and analysis statement is:
      SELECT "host", pv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM ( SELECT "host", count( 1 ) AS pv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY "host" ) ORDER BY "Average Latency (ms)" desc
    • Top Host Failure Rates. The associated query and analysis statement is:
      SELECT "host", pv,round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)"  FROM ( SELECT "host", count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY "host"  ) ORDER BY "Access Failure Rate (%)" desc
    • Top URL Requests. The associated query and analysis statement is:
      SELECT upstream_uri, pv,uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT upstream_uri, count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( my_remote_addr ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY upstream_uri  ) ORDER BY pv desc
    • Top URL Failure Rates. The associated query and analysis statement is:
      SELECT upstream_uri, pv, round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM( SELECT upstream_uri, count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != '' GROUP BY upstream_uri  ) ORDER BY "Access Failure Rate (%)" desc
    • Top Backend Requests. The associated query and analysis statement is:
      SELECT addr, pv, uv, round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "Inbound (KB)" > 0 THEN "Inbound (KB)" ELSE 0 END, 3 ) AS "Inbound (KB)", round( CASE WHEN "Outbound (KB)" > 0 THEN "Outbound (KB)" ELSE 0 END, 3 ) AS "Outbound (KB)"  FROM ( SELECT my_remote_addr as addr, count( 1 ) AS pv, APPROX_COUNT_DISTINCT ( my_remote_addr ) AS uv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", sum( request_length ) / 1024.0 AS "Inbound (KB)", sum( bytes_sent ) / 1024.0 AS "Outbound (KB)"  WHERE "host" != ''  GROUP BY addr  having length(my_remote_addr) > 2) ORDER BY "pv" desc
    • Top Backend Latencies. The associated query and analysis statement is:
      SELECT addr,pv,round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)",round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)",round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)",round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM (SELECT my_remote_addr as addr,count( 1 ) AS pv,sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)",avg( request_time ) * 1000 AS "Average Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)",APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != '' and "my_remote_addr" != '-' GROUP BY addr ) ORDER BY "Average Latency (ms)" desc
    • Top Backend Failure Rates. The associated query and analysis statement is:
      SELECT addr, pv, round( CASE WHEN "Access Failure Rate (%)" > 0 THEN "Access Failure Rate (%)" ELSE 0 END, 2 ) AS "Access Failure Rate (%)", round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)", round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)", round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)"  FROM ( SELECT my_remote_addr as addr, count( 1 ) AS pv, sum( CASE WHEN "status" >= 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Failure Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != '' and "my_remote_addr" != '-' GROUP BY addr) ORDER BY "Access Failure Rate (%)" desc
    • Top URL Latencies. The associated query and analysis statement is:
      SELECT upstream_uri, pv,round( CASE WHEN "Access Success Rate (%)" > 0 THEN "Access Success Rate (%)" ELSE 0 END, 2 ) AS "Access Success Rate (%)",round( CASE WHEN "Average Latency (ms)" > 0 THEN "Average Latency (ms)" ELSE 0 END, 3 ) AS "Average Latency (ms)",round( CASE WHEN "P90 Latency (ms)" > 0 THEN "P90 Latency (ms)" ELSE 0 END, 3 ) AS "P90 Latency (ms)",round( CASE WHEN "P99 Latency (ms)" > 0 THEN "P99 Latency (ms)" ELSE 0 END, 3 ) AS "P99 Latency (ms)" FROM (SELECT upstream_uri, count( 1 ) AS pv, sum( CASE WHEN "status" < 400 THEN 1 ELSE 0 END ) * 100.0 / count( 1 ) AS "Access Success Rate (%)", avg( request_time ) * 1000 AS "Average Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.9) * 1000 AS "P90 Latency (ms)", APPROX_QUANTILE_DS(request_time, 0.99) * 1000 AS "P99 Latency (ms)" WHERE "host" != ''  GROUP BY upstream_uri  ) ORDER BY "Average Latency (ms)" desc
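
    Several of the latency charts above rely on APPROX_QUANTILE_DS to estimate P50/P90/P99 over request_time (logged in seconds) and scale the result by 1000 to get milliseconds. The computation can be sketched with an exact nearest-rank quantile; this is a hypothetical local stand-in for illustration, not the sketch-based estimator the SQL function actually uses:

    ```python
    import math

    def nearest_rank_quantile(samples, q):
        """Exact nearest-rank quantile over a small sample; a simplified
        stand-in for APPROX_QUANTILE_DS, which approximates this at scale."""
        xs = sorted(samples)
        idx = max(0, math.ceil(q * len(xs)) - 1)
        return xs[idx]

    # request_time values in seconds; the dashboard multiplies by 1000 for ms.
    request_times = [0.010, 0.012, 0.015, 0.020, 0.050,
                     0.080, 0.120, 0.300, 0.450, 1.200]
    p50_ms = nearest_rank_quantile(request_times, 0.50) * 1000
    p90_ms = nearest_rank_quantile(request_times, 0.90) * 1000
    p99_ms = nearest_rank_quantile(request_times, 0.99) * 1000
    ```

    On this sample the P90 picks the 9th of 10 sorted values, so a single slow outlier (1.2 s) shows up only in P99, which is why the charts plot several percentiles side by side.
    
    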

Viewing APIG Monitoring by the Second

  1. Log in to the LTS console. In the navigation pane, choose Dashboards.
  2. Choose APIG dashboard templates under Dashboard Templates and click APIG monitoring by the second to view the chart details.

    • Filter by requested domain name. The associated query and analysis statement is:
      select distinct(host)
    • Filter by application ID. The associated query and analysis statement is:
      select distinct(app_id)
    • QPS. The associated query and analysis statement is:
      SELECT TIME_FORMAT(TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT1S'),'yyyy-MM-dd HH:mm:ss') AS _time_ , COUNT(*) as QPS from log group by _time_
    • Success Rate. The associated query and analysis statement is:
      select __time,round(CASE WHEN "Success rate" > 0 THEN "Success rate" else 0 end,2) as "Success rate" from (select TIME_FORMAT(TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT5S'),'yyyy-MM-dd HH:mm:ss') as __time, sum(case when status < 400 then 1 else 0 end) * 100.0 / count(1) as 'Success rate' from log group by __time)
    • Latency. The associated query and analysis statement is:
      select __time,round(CASE WHEN "Access latency" > 0 THEN "Access latency" else 0 end,2) as "Access latency",round(CASE WHEN "Upstream latency" > 0 THEN "Upstream latency" else 0 end,2) as "Upstream latency" from (select TIME_FORMAT(TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT5S'),'yyyy-MM-dd HH:mm:ss') as __time, avg(request_time)* 1000 as 'Access latency',avg(upstream_response_time)* 1000 as 'Upstream latency' from log group by __time)
    • Traffic. The associated query and analysis statement is:
      select __time,round( CASE WHEN "Incoming" > 0 THEN "Incoming" ELSE 0 END, 3 ) AS "Incoming",round( CASE WHEN "Outgoing body" > 0 THEN "Outgoing body" ELSE 0 END, 3 ) AS "Outgoing body" from (select TIME_FORMAT(TIME_CEIL(TIME_PARSE(time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ'),'PT5S'),'yyyy-MM-dd HH:mm:ss') as __time , sum("request_length") / 1024.0 as "Incoming", sum("body_bytes_sent") / 1024.0 as "Outgoing body" group by __time)
    • Status Codes. The associated query and analysis statement is:
      SELECT TIME_CEIL ( TIME_PARSE ( time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ' ), 'PT5S' ) AS "time", SUM( CASE WHEN "status" >= 200 AND "status" < 300 THEN 1 ELSE 0 END ) AS "2XX", SUM( CASE WHEN "status" >= 300 AND "status" < 400 THEN 1 ELSE 0 END ) AS "3XX", SUM( CASE WHEN "status" >= 400 AND "status" < 500 THEN 1 ELSE 0 END ) AS "4XX", SUM( CASE WHEN "status" >= 500 AND "status" < 600 THEN 1 ELSE 0 END ) AS "5XX", SUM( CASE WHEN "status" < 200 OR "status" >= 600 THEN 1 ELSE 0 END ) AS "Other" FROM log  WHERE TIME_PARSE ( time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ' ) IS NOT NULL GROUP BY "time"  ORDER BY "time" ASC LIMIT 100000
    • Backend Response Codes. The associated query and analysis statement is:
      SELECT TIME_CEIL ( TIME_PARSE ( time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ' ), 'PT5S' ) AS "time", SUM( CASE WHEN "upstream_status" >= 200 AND "upstream_status" < 300 THEN 1 ELSE 0 END ) AS "2XX", SUM( CASE WHEN "upstream_status" >= 300 AND "upstream_status" < 400 THEN 1 ELSE 0 END ) AS "3XX", SUM( CASE WHEN "upstream_status" >= 400 AND "upstream_status" < 500 THEN 1 ELSE 0 END ) AS "4XX", SUM( CASE WHEN "upstream_status" >= 500 AND "upstream_status" < 600 THEN 1 ELSE 0 END ) AS "5XX", SUM( CASE WHEN "upstream_status" < 200 OR "upstream_status" >= 600 THEN 1 ELSE 0 END ) AS "Other" FROM log  WHERE TIME_PARSE ( time_local, 'dd/MMM/yyyy:HH:mm:ss ZZ' ) IS NOT NULL GROUP BY "time"  ORDER BY "time" ASC LIMIT 100000
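
    The Status Codes and Backend Response Codes charts above both bucket each code into 2XX/3XX/4XX/5XX classes, with an Other bucket for anything outside 200-599. The grouping logic amounts to the following sketch (a simplified illustration; the dashboard additionally buckets by 5-second window):

    ```python
    from collections import Counter

    def bucket_status(codes):
        """Group HTTP status codes the way the Status Codes chart does:
        2XX, 3XX, 4XX, 5XX, and Other for anything outside 200-599."""
        counts = Counter()
        for s in codes:
            if 200 <= s < 300:
                counts["2XX"] += 1
            elif 300 <= s < 400:
                counts["3XX"] += 1
            elif 400 <= s < 500:
                counts["4XX"] += 1
            elif 500 <= s < 600:
                counts["5XX"] += 1
            else:
                counts["Other"] += 1
        return dict(counts)
    ```

    The Other bucket matters for upstream_status in particular, since a failed backend call can log a non-numeric or out-of-range value that would otherwise be silently dropped.
    
    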
