Updated on 2023-06-25 GMT+08:00

Governing Microservices

After a microservice is deployed, you can govern it based on its running statuses.

Prerequisites

  • You can create a microservice in Microservice List of Service Catalog and start it. After the microservice starts, its service instance is registered under the corresponding service based on the configurations in the .yaml file.
  • If the microservice is not created in advance or has been deleted, the microservice is automatically created when the service instance is registered.
  • After a microservice is created, its service instance must be registered before you can perform governance operations on it.

Governance Policies

You can configure policies such as load balancing, rate limiting, fault tolerance, service degradation, circuit breaker, and fault injection. These policies are described below.

Load Balancing

When the access traffic is heavy and a single server cannot handle the load, you can configure load balancing to distribute traffic across multiple servers. This reduces response time and prevents server overload.

You can configure load balancing policies by adding a rule. The available policies are Round Robin, Random, Response Time Weight, and Session Stickiness.

Rate Limiting

Rate limiting controls how traffic is distributed across microservices, ensuring that each microservice runs in its own resource pool without affecting others.

  • When the number of requests sent by the rate limiting object to the current service instance exceeds the specified value, the current service instance no longer accepts requests from the rate limiting object.
  • Common detection methods include request timeout and excessive traffic.
  • The parameters include Flow Control Object and QPS.

Service Degradation

Service degradation is a special form of fault tolerance. When service throughput is high and resources are insufficient, you can degrade unimportant or poorly performing services so that they do not occupy resources, ensuring that the main services run properly.

Fault Tolerance

Fault tolerance handles exceptions that occur when a service instance is accessed. Based on the configured policy, the request can be retried on the same instance or routed to another instance.

Circuit Breaker

If the service is overloaded, you can use circuit breaker to protect the system from breaking down.

Circuit breaker is triggered when a service request is handled abnormally. After circuit breaker is triggered, Hystrix considers that the requested service cannot process requests, so it immediately rejects requests and returns an error message to the caller.

Hystrix attempts to access the backend services at a specified interval. If the services have recovered, they exit the circuit breaker state and resume accepting requests.

Fault Injection

Fault injection is used to test the fault tolerance capability of microservices by injecting latency or faults. This helps you determine whether the system runs properly when such failures occur.

Routing Policy

Based on the public key authentication mechanism, CSE provides the blacklist and whitelist functions to control the services that can access microservices.

The blacklist and whitelist take effect only after public key authentication is enabled. For details, see Configuring Public Key Authentication.

Configuring Load Balancing

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Load Balancing.
  6. Click Add. Select the microservices to be governed and select a proper load balancing policy from the following:

    • Round Robin: Routes requests to service instances in turn based on their location information.
    • Random: Routes requests to service instances randomly.
    • Response Time Weight: Routes requests by response-time weight (minimum number of active requests). Instances that process requests slowly receive fewer requests, which prevents the system from becoming unresponsive. This policy is suitable for applications with low and stable request volumes.
    • Session Stickiness: Within the specified session stickiness duration, the load balancer routes requests from the same user to the same instance.
      • Session Stickiness Duration: session hold time, in seconds. The value ranges from 0 to 86400.
      • Failures: number of access failures. The value ranges from 0 to 10. If the number of failures or the session stickiness duration exceeds the specified value, the microservice stops accessing this instance.

  7. Click OK to save the settings.
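
If you maintain governance rules in the microservice's configuration file rather than on the console, a load balancing policy can be declared there as well. The snippet below is a minimal sketch based on ServiceComb Java Chassis conventions; the property names and values are assumptions to verify against the chassis version you use.

    servicecomb:
      loadbalance:
        strategy:
          # Typical values include RoundRobin, Random,
          # WeightedResponse, and SessionStickiness.
          name: SessionStickiness
        SessionStickinessRule:
          # Session hold time, in seconds (console: Session Stickiness Duration).
          sessionTimeoutInSeconds: 30
          # Stop accessing an instance after this many failures (console: Failures).
          successiveFailedTimes: 5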

Configuring Rate Limiting

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Rate Limiting.
  6. Click Add. The configuration items of rate limiting are as follows:

    • Object: Other microservices that access the current microservice. Select the object from the drop-down list next to Object.
    • QPS: Requests generated per second. When the number of requests sent by the rate limiting object to the current service instance exceeds this value, the instance no longer accepts requests from that object. The value is an integer ranging from 0 to 99999.

    For example, if a microservice has three instances and the rate limit of each instance is set to 2700 QPS, the total QPS is 8100, and rate limiting is triggered only when the total QPS exceeds 8100.

  7. Click OK to save the settings.
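
The same kind of limit can be sketched in the provider's configuration file. The example below assumes the ServiceComb Java Chassis QPS flow-control handler and its property names; the consumer name order-service is a hypothetical placeholder.

    servicecomb:
      handler:
        chain:
          Provider:
            # Enable the provider-side QPS flow-control handler.
            default: qps-flowcontrol-provider
      flowcontrol:
        Provider:
          qps:
            enabled: true
            limit:
              # Each provider instance accepts at most 2700 requests per second
              # from the consumer microservice named "order-service" (hypothetical).
              order-service: 2700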

Configuring Service Degradation

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Service Degradation.
  6. Click Add and select a proper policy. The configuration items of service degradation are as follows:

    • Object: Microservice to be degraded and the corresponding degradation method.
    • Policy: Enable or Disable.

  7. Click OK to save the settings.
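
For reference, degradation can also be declared in the consumer's configuration file. The sketch below assumes ServiceComb Java Chassis fallback properties; the key names and value semantics should be confirmed against your chassis version.

    servicecomb:
      fallback:
        Consumer:
          # Whether to degrade calls made by this consumer.
          enabled: true
      fallbackpolicy:
        Consumer:
          # returnnull: degraded calls return null immediately;
          # throwexception: degraded calls throw an exception to the caller.
          policy: returnnull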

Configuring Fault Tolerance

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Fault Tolerance.
  6. Click Add and select a proper policy. The configuration items of fault tolerance are as follows:

    • Object: Microservice or method that the application depends on. Select it from the drop-down list.
    • Degradation Mode:
      • Enable: When a request sent to the fault tolerance object encounters an error, the system processes the request based on the selected fault tolerance policy.
      • Disable: Even if the request fails, the system waits until the timeout interval expires and then returns a failure result.
    • Policy: Set this parameter when Degradation Mode is set to Enable.
      • Failover: The system attempts to reestablish connections on different servers.
      • Failfast: The system does not attempt to reestablish a connection. After a request fails, a failure result is returned immediately.
      • Failback: The system attempts to reestablish connections on the same server.
      • Custom: Specify the number of attempts to reestablish connections on the same server and the number of attempts on new servers.

  7. Click OK to save the settings.
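
The retry behavior above can also be sketched in the consumer's configuration file, assuming ServiceComb Java Chassis load-balance retry properties (the exact key names differ between chassis versions, so treat them as illustrative):

    servicecomb:
      loadbalance:
        # Retry failed requests instead of failing fast.
        retryEnabled: true
        # Custom policy: number of retries on the same server.
        retryOnSame: 1
        # Custom policy: number of retries on new servers (Failover-style behavior).
        retryOnNext: 2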

Configuring Circuit Breaker

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Circuit Breaker.
  6. Click Add and select a proper policy. The configuration items of circuit breaker are as follows:

    • Object: Microservice or method invoked by the application.
    • Trigger Condition:
      • Manual: Circuit breaker is triggered immediately, and microservice instances are not called.
      • Cancel: The circuit breaker on the microservice instance is canceled, and the instance can be called again.
      • Automatic: Circuit breaker is triggered only when both Failure Rate and Window Requests reach their thresholds.
        • Circuit Breaker Time Window: circuit breaker duration. Requests are not processed within this window.
        • Failure Rate: failure rate of requests in the window, which is a trigger condition.
        • Window Requests: number of requests received in the window, which is a trigger condition.

  7. Click OK to save the settings.
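
Because circuit breaking is implemented with Hystrix (see Governance Policies), the automatic trigger conditions map onto Hystrix-style thresholds. The sketch below assumes the ServiceComb Java Chassis bizkeeper handler and its configuration keys; treat both as assumptions to verify.

    servicecomb:
      handler:
        chain:
          Consumer:
            # Enable the consumer-side circuit breaker handler.
            default: bizkeeper-consumer
      circuitBreaker:
        Consumer:
          enabled: true
          # Window Requests: minimum number of requests in the window before
          # the circuit breaker can be triggered.
          requestVolumeThreshold: 20
          # Failure Rate: percentage of failed requests that triggers the breaker.
          errorThresholdPercentage: 50
          # Circuit Breaker Time Window: how long requests are rejected before
          # the breaker probes the backend again, in milliseconds.
          sleepWindowInMilliseconds: 15000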

Configuring Fault Injection

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Fault Injection.
  6. Click Add and select a proper policy. The configuration items of fault injection are as follows:

    • Object: Microservice into which faults are injected. You can specify a method for this configuration item.
    • Type: Type of the fault injected into the microservice, either Latency or Error.
    • Protocol: Protocol used to access the microservice when the latency or fault occurs, either Rest or Highway.
    • Latency: Latency for accessing the microservice. This parameter is required when Type is set to Latency.
    • HTTP Error Code: HTTP error code returned during microservice access. This parameter is required when Type is set to Error.
    • Trigger Probability: Probability that the latency or fault occurs.

  7. Click OK to save the settings.
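
Fault injection can also be sketched in the consumer's configuration file. The example below assumes ServiceComb Java Chassis fault-injection properties applied globally to the Rest protocol; the property names and values are illustrative assumptions.

    servicecomb:
      governance:
        Consumer:
          _global:
            policy:
              fault:
                protocols:
                  rest:
                    delay:
                      # Type = Latency: add a 5000 ms delay.
                      fixedDelay: 5000
                      # Trigger Probability: 10% of requests are delayed.
                      percent: 10
                    abort:
                      # Type = Error: return HTTP error code 421.
                      httpStatus: 421
                      # Trigger Probability: 10% of requests are aborted.
                      percent: 10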

Configuring Blacklist and Whitelist

Based on the public key authentication mechanism, CSE provides the blacklist and whitelist functions. The blacklist and whitelist can be used to control which services can access the current microservice.

The blacklist and whitelist take effect only after public key authentication is enabled. For details, see Configuring Public Key Authentication.

  1. Log in to ServiceStage and choose Infrastructure > Cloud Service Engines.
  2. Click Console of a microservice engine.
  3. Choose Service Governance.
  4. Click the microservice to be governed.
  5. Click Blacklist and Whitelist.
  6. Click Add to add a blacklist or whitelist for the application. The configuration items are as follows:

    • Type:
      • Blacklist: Microservices matching the rule are not allowed to access the current service.
      • Whitelist: Microservices matching the rule are allowed to access the current service.
    • Matching Rule: A regular expression. For example, if Matching Rule is set to data*, services whose names start with data cannot access the current service when the rule is in the blacklist, and can access it when the rule is in the whitelist.

  7. Click OK to save the settings.
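
If black/white list rules are kept in the provider's configuration file rather than on the console, a sketch along the following lines is possible, assuming ServiceComb's public-key access control properties; the keys, the list names list01 and list02, and the rules are assumptions for illustration.

    servicecomb:
      publicKey:
        accessControl:
          black:
            list01:
              # Match on the consumer's service name.
              category: property
              propertyName: ServiceName
              # Deny consumers whose service names start with "hacker".
              rule: hacker*
          white:
            list02:
              category: property
              propertyName: ServiceName
              # Allow consumers whose service names start with "data".
              rule: data*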

Configuring Public Key Authentication

Public key authentication is a simple and efficient authentication mechanism between microservices provided by CSE. Its security relies on reliable interaction between microservices and the service center; that is, the authentication mechanism must be enabled between microservices and the service center. The mechanism works as follows:

  1. When a microservice starts, a key pair is generated and the public key is registered with the service center.
  2. Before accessing the provider, the consumer uses its own private key to sign a message.
  3. The provider obtains the public key of the consumer from the service center and verifies the signed message.

To enable public key authentication, perform the following steps:

  1. Enable public key authentication for both the consumer and provider.
    servicecomb:
      handler:
        chain:
          Consumer:
            default: auth-consumer
          Provider:
            default: auth-provider
  2. Add the following dependency to the pom.xml file:
    <dependency>
        <groupId>org.apache.servicecomb</groupId>
        <artifactId>handler-publickey-auth</artifactId>
    </dependency>