Updated on 2023-07-04 GMT+08:00

Configuring a Traffic Policy

  1. Log in to the ASM console and click the target mesh to go to its details page.
  2. In the navigation pane on the left, choose Service Management. In the upper right corner of the list, select the namespace to which the service belongs.
  3. Select a service, click Manage Traffic in the Operation column, and configure retry, timeout, connection pool, outlier detection, load balancing, HTTP header, and fault injection policies on the right.

    Retry

    Automatic retries upon service access failures improve access quality and the access success rate.

    On the Retry page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 1 Retry parameters

    Parameter | Description | Value Range
    Retries | Maximum number of retries allowed for a single request. The default retry interval is 25 ms. The actual number of retries also depends on the configured timeout period and retry timeout period. | 1-2147483647
    Retry Timeout (s) | Timeout period of an initial or retry request. The default value is the same as the timeout period configured in the Timeout area below. | 0.001-2592000
    Retry Condition | Conditions that trigger a retry. For details, see Retry Policies and gRPC Retry Policies. | -
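
    For reference, these retry settings roughly correspond to the retry fields of an Istio VirtualService, on which ASM traffic policies are based. The following is a minimal sketch with illustrative values; the service name reviews and all figures are placeholders rather than output generated by the console:

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: reviews
      spec:
        hosts:
          - reviews
        http:
          - route:
              - destination:
                  host: reviews
            retries:
              attempts: 3                  # Retries
              perTryTimeout: 2s            # Retry Timeout (s)
              retryOn: 5xx,gateway-error   # Retry Condition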

    Timeout

    When service access times out, requests are handled automatically and fail fast, which prevents resources from being locked and requests from hanging.

    On the Timeout page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 2 Timeout parameters

    Parameter | Description | Value Range
    Timeout (s) | Timeout period for HTTP requests. | 0.001-2592000
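
    In Istio terms, this setting roughly maps to the route-level timeout of a VirtualService. A minimal sketch, again using the placeholder service reviews and an illustrative value:

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: reviews
      spec:
        hosts:
          - reviews
        http:
          - route:
              - destination:
                  host: reviews
            timeout: 5s   # Timeout (s): requests taking longer than 5s fail fast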

    Connection Pool

    Connections and requests that exceed the thresholds are cut off to protect target services. Connection pool settings are applied to each pod of the upstream service at the TCP and HTTP levels. For details, see Circuit Breaker.

    On the Connection Pool page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 3 TCP settings parameters

    Parameter | Description | Value Range
    Maximum Number of Connections | Maximum number of HTTP/TCP connections to the target service. The default value is 2^32-1. | 1-2147483647
    Maximum Number of Non-responses | Maximum number of keepalive probes to be sent before the connection is determined to be invalid. By default, the OS-level configuration is used (9 for Linux). | 1-2147483647
    Health Check Interval (s) | Time interval between two keepalive probes. By default, the OS-level configuration is used (75s for Linux). | 0.001-2592000
    Connection Timeout (s) | TCP connection timeout period. The default value is 10s. | 0.001-2592000
    Minimum Idle Period (s) | Duration for which a connection remains idle before a keepalive probe is sent. By default, the OS-level configuration is used (7200s for Linux, namely, 2 hours). | 0.001-2592000

    Table 4 HTTP settings parameters

    Parameter | Description | Value Range
    Maximum Number of Requests | Maximum number of requests that can be forwarded to a single service pod. The default value is 2^32-1. | 1-2147483647
    Maximum Number of Pending Requests | Maximum number of HTTP requests that can be forwarded to the target service for processing. The default value is 2^32-1. | 1-2147483647
    Maximum Connection Idle Period (s) | Timeout period of an idle upstream service connection. If there is no active request within this period, the connection will be closed. The default value is 1 hour. | 0.001-2592000
    Maximum Retries | Maximum number of retries of all service pods within a specified period. The default value is 2^32-1. | 1-2147483647
    Maximum Number of Requests Per Connection | Maximum number of requests for each connection to the backend. If this parameter is set to 1, the keepalive function is disabled. The default value is 0, indicating no limit. The maximum value is 2^29. | 1-536870912
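
    These TCP and HTTP settings roughly correspond to the connectionPool block of an Istio DestinationRule. A sketch with illustrative values; the host reviews and all numbers are placeholders:

      apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        name: reviews
      spec:
        host: reviews
        trafficPolicy:
          connectionPool:
            tcp:
              maxConnections: 100      # Maximum Number of Connections
              connectTimeout: 10s      # Connection Timeout (s)
              tcpKeepalive:
                probes: 9              # Maximum Number of Non-responses
                time: 7200s            # Minimum Idle Period (s)
                interval: 75s          # Health Check Interval (s)
            http:
              http2MaxRequests: 1000            # Maximum Number of Requests
              http1MaxPendingRequests: 100      # Maximum Number of Pending Requests
              idleTimeout: 1h                   # Maximum Connection Idle Period (s)
              maxRetries: 3                     # Maximum Retries
              maxRequestsPerConnection: 10      # Maximum Number of Requests Per Connection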

    Outlier Detection

    Unhealthy pods are automatically isolated to improve the overall access success rate.

    The traffic status of service pods is traced to determine whether the pods are healthy. Unhealthy pods will be ejected from the connection pool to improve the overall access success rate. Outlier detection can be configured for HTTP and TCP services. For HTTP services, pods that continuously return 5xx errors are considered unhealthy. For TCP services, pods whose connections time out or fail are considered unhealthy. For details, see Outlier Detection.

    On the Outlier Detection page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 5 Outlier detection parameters

    Parameter | Description | Value Range
    Consecutive Errors | Number of consecutive errors in a specified time period. If the number of consecutive errors exceeds this threshold, the pod will be ejected. The default value is 5. To disable this function, set it to 0. | 1-2147483647
    Base Ejection Time (s) | Base ejection time of a service pod that meets the outlier detection conditions. Actual ejection time of a service pod = Base ejection time x Number of ejections. The value must be greater than or equal to 1 ms. The default value is 30s. | 0.001-2592000
    Inspection Interval (s) | If the number of errors reaches the threshold within this interval, the pod will be ejected. The value must be greater than or equal to 0.001s. The default value is 10s. | 0.001-2592000
    Maximum Percentage of Ejected Pods (%) | Maximum percentage of service pods that can be ejected. The default value is 10%. | 1-100
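
    These settings roughly correspond to the outlierDetection block of an Istio DestinationRule. A sketch with illustrative values; note that the exact field name for consecutive errors may vary by Istio version (for example, consecutive5xxErrors versus the older consecutiveErrors):

      apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        name: reviews
      spec:
        host: reviews
        trafficPolicy:
          outlierDetection:
            consecutive5xxErrors: 5    # Consecutive Errors
            interval: 10s              # Inspection Interval (s)
            baseEjectionTime: 30s      # Base Ejection Time (s)
            maxEjectionPercent: 10     # Maximum Percentage of Ejected Pods (%)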

    Load Balancing

    You can customize the load balancing policy used to distribute requests among backend service pods.

    On the Load Balancing page, click Configure now. In the displayed dialog box, select one of the following load balancing algorithms. A configuration sketch is provided after the list.

    • Round robin: The default load balancing algorithm. Each service pod in the pool gets a request in turn.
    • Least connection: Requests are forwarded to the pod with fewer connections among two randomly selected healthy pods.
    • Random: Requests are forwarded to a randomly selected healthy pod.
    • Consistent hashing: includes four types, as described in Table 6.
      Table 6 Consistent hashing algorithm types

      Type | Description
      Based on HTTP header | The hash value is calculated using the specified header of the HTTP request. Requests with the same hash value are forwarded to the same pod.
      Based on cookie | The hash value is calculated using the specified cookie key of the HTTP request. Requests with the same hash value are forwarded to the same pod.
      Based on source IP | The hash value is calculated based on the source IP address of the request. Requests with the same hash value are forwarded to the same pod.
      Based on query parameter | The hash value is calculated using the specified query parameter of the HTTP request. Requests with the same hash value are forwarded to the same pod.
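
    These options roughly correspond to the loadBalancer block of an Istio DestinationRule. A sketch using the "Based on HTTP header" consistent hashing type; the header name x-user-id and the host reviews are placeholders, and only one hash key can be set at a time:

      apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        name: reviews
      spec:
        host: reviews
        trafficPolicy:
          loadBalancer:
            # For the simple algorithms, use instead:
            # simple: ROUND_ROBIN   # or LEAST_CONN / RANDOM
            consistentHash:
              httpHeaderName: x-user-id            # Based on HTTP header
              # httpCookie: {name: user, ttl: 0s}  # Based on cookie
              # useSourceIp: true                  # Based on source IP
              # httpQueryParameterName: uid        # Based on query parameter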

    HTTP Header

    You can flexibly add, modify, and delete specified HTTP headers to manage request content in a non-intrusive manner.

    On the HTTP Header page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 7 Operations on the HTTP headers before the request is forwarded to the destination service

    Parameter | Description
    Add request headers | To add a request header, set the key and value. You can add multiple request headers.
    Edit request headers | To edit an existing request header, set the key and the new value. You can edit multiple request headers.
    Remove request headers | To remove an existing request header, set the key. You can remove multiple request headers.

    Table 8 Operations on the HTTP headers before the response is returned to the client

    Parameter | Description
    Add response headers | To add a response header, set the key and value. You can add multiple response headers.
    Edit response headers | To edit an existing response header, set the key and the new value. You can edit multiple response headers.
    Remove response headers | To remove an existing response header, set the key. You can remove multiple response headers.
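
    These operations roughly correspond to the headers field of an Istio VirtualService route. A sketch with placeholder header names and values:

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: reviews
      spec:
        hosts:
          - reviews
        http:
          - route:
              - destination:
                  host: reviews
            headers:
              request:
                add:
                  x-env: test                # Add request headers
                set:
                  user-agent: asm-demo       # Edit (overwrite) request headers
                remove:
                  - x-debug                  # Remove request headers
              response:
                add:
                  x-served-by: mesh          # Add response headers
                set:
                  cache-control: no-store    # Edit (overwrite) response headers
                remove:
                  - server                   # Remove response headers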

    Fault Injection

    You can inject faults into the system to check whether it can tolerate and recover from faults.

    On the Fault Injection page, click Configure now. In the displayed dialog box, set the parameters listed in the table below.

    Table 9 Fault injection parameters

    Parameter | Description | Value Range
    Fault Type | Type of the fault to inject. Delay: service requests are delayed. Abort: the service is aborted and the preset status code is returned. | Delay or Abort
    Delay (s) | This parameter needs to be set when Fault Type is set to Delay. A request is delayed for this period of time before it is forwarded. | 0.001-2592000
    HTTP Status Code | This parameter needs to be set when Fault Type is set to Abort. HTTP status code returned when an abort fault occurs. The default value is 500. | 200-599
    Fault Percentage (%) | Percentage of requests for which the delay or abort fault is injected. | 1-100
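
    These settings roughly correspond to the fault block of an Istio VirtualService route. A sketch that injects a 5s delay into half of the requests; the commented abort block shows the alternative fault type, and all values are placeholders:

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: reviews
      spec:
        hosts:
          - reviews
        http:
          - route:
              - destination:
                  host: reviews
            fault:
              delay:
                fixedDelay: 5s     # Delay (s)
                percentage:
                  value: 50        # Fault Percentage (%)
              # abort:
              #   httpStatus: 500  # HTTP Status Code
              #   percentage:
              #     value: 50      # Fault Percentage (%)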