Configuring Kafka Log Push
Kafka log push policies push API call logs to Kafka for analysis.
If your gateway does not support this policy, contact technical support to upgrade the gateway to the latest version.
Policy parameters are stored in plaintext. To prevent information leakage, do not include sensitive information in these parameters.
Usage Guidelines
- A maximum of five Kafka log push policies can be created for a gateway.
- Binding a Kafka log push policy to an API degrades the API's performance by 30%.
- The maximum size of a log to be pushed is 4 KB, and the maximum size of a request body or response body to be pushed is 1 KB. The excess part will be truncated.
- An API can be bound with only one policy of the same type.
- Policies are independent of APIs. A policy takes effect for an API only after they are bound to each other. When binding a policy to an API, you must specify an environment where the API has been published. The policy takes effect for the API only in the specified environment.
- You do not need to publish an API again after binding a policy to it, unbinding a policy from it, or updating the policy.
- Taking an API offline does not affect the policies bound to it. The policies are still bound to the API if the API is published again.
- Policies that have been bound to APIs cannot be deleted.
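The size limits above amount to a simple truncation step. The following is an illustrative sketch only (assuming byte lengths; it is not the gateway's actual implementation):

```python
MAX_LOG_BYTES = 4 * 1024    # maximum size of a pushed log (4 KB)
MAX_BODY_BYTES = 1 * 1024   # maximum size of a pushed request/response body (1 KB)

def truncate(data: bytes, limit: int) -> bytes:
    """Keep the payload up to the limit; the excess part is dropped."""
    return data if len(data) <= limit else data[:limit]

body = truncate(b"x" * 2048, MAX_BODY_BYTES)  # 2 KB body truncated to 1 KB
log = truncate(b"y" * 3000, MAX_LOG_BYTES)    # under 4 KB, kept as-is
```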
Creating a Kafka Log Push Policy
- Go to the APIG console.
- Select a dedicated gateway at the top of the navigation pane.
- In the navigation pane, choose API Management > API Policies.
- On the Policies tab, click Create Policy.
- On the Select Policy Type page, select Kafka Log Push in the Plug-ins area.
- Set the policy information.
Table 1 Kafka log push parameters
Name
Enter a policy name. Using a consistent naming convention makes policies easier to find later.
Type
Fixed as Kafka Log Push.
Description
Description of the plug-in.
Policy Content
Content of the plug-in, which can be configured in a form or using a script.
Policy Information
Broker Address
Connection address of the target Kafka. Separate multiple addresses with commas (,).
Topic
Topic of the target Kafka to report logs to.
Key
Message key that determines the Kafka partition where logs are stored as an ordered message queue. If this parameter is left blank, logs are distributed across different partitions.
Retry
Configuration for retrying when logs fail to be pushed to Kafka.
- Retry Times: the number of retry attempts after a failure. Value range: 0 to 5.
- Retry Interval: the interval between retry attempts, in seconds. Value range: 1 to 10.
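The retry parameters above behave like a simple retry loop. The following is an illustrative sketch only (the gateway's internal logic is not published); send stands for any push function that raises an exception on failure:

```python
import time

def push_with_retry(send, payload, retry_times=3, retry_interval=2):
    """Push a log payload to Kafka, retrying on failure.

    retry_times: extra attempts after a failure (policy range: 0 to 5).
    retry_interval: seconds between attempts (policy range: 1 to 10).
    Returns True if the payload was eventually pushed, else False.
    """
    for attempt in range(retry_times + 1):
        try:
            send(payload)
            return True
        except Exception:
            if attempt < retry_times:
                time.sleep(retry_interval)
    return False
```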
SASL Configuration
Security Protocol
Protocol used for connecting to the target Kafka.
- PLAINTEXT: user authentication protocol of the default access point
- SASL_PLAINTEXT: SASL user authentication protocol
- SASL_SSL: SSL user authentication protocol
Message Tx/Rx Mechanism
Message transmission and receiving mechanism of the target Kafka. The default value is PLAIN.
SASL Username
This parameter is available only if Security Protocol is set to SASL_PLAINTEXT or SASL_SSL.
Username used for SASL or SSL authentication.
SASL Password
This parameter is available only if Security Protocol is set to SASL_PLAINTEXT or SASL_SSL.
User password used for SASL or SSL authentication.
Confirm SASL Password
This parameter is available only if Security Protocol is set to SASL_PLAINTEXT or SASL_SSL.
Enter the SASL password again.
Certificate Content
This parameter is available only if Security Protocol is set to SASL_SSL.
CA certificate used for SSL authentication.
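For reference, these SASL settings map onto a typical Kafka client configuration. Below is a minimal sketch using the parameter names of the kafka-python client; the broker addresses, credentials, and certificate path are placeholders, not values from your gateway:

```python
# Client-side view of the policy's SASL_SSL settings.
# Parameter names follow the kafka-python client; all values are placeholders.
kafka_client_config = {
    "bootstrap_servers": [                # Broker Address (comma-separated in the policy)
        "broker1.example.com:9093",
        "broker2.example.com:9093",
    ],
    "security_protocol": "SASL_SSL",      # Security Protocol
    "sasl_mechanism": "PLAIN",            # Message Tx/Rx Mechanism (default: PLAIN)
    "sasl_plain_username": "log-user",    # SASL Username
    "sasl_plain_password": "********",    # SASL Password
    "ssl_cafile": "/path/to/ca.pem",      # Certificate Content (CA certificate)
}
```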
Metadata Configuration
System Metadata
System fields that need to be included in pushed logs.
By default, the start_time, request_id, client_ip, request_time, http_status, scheme, request_method, host, uri, upstream_addr, upstream_status, upstream_response_time, http_x_forwarded_for, http_user_agent, and error_type fields are carried in logs. You can also specify other system fields that need to be included.
Request Data
API request information that needs to be included in pushed logs.
- The log contains the request header: Specify a header that needs to be included. Separate multiple headers with commas (,). The asterisk (*) can be used as a wildcard.
- The log contains the request QueryString: Specify a query string that needs to be included. Separate multiple query strings with commas (,). The asterisk (*) can be used as a wildcard.
- The log contains the request body: If this option is selected, logs will contain the body of API requests.
Response Data
API response information that needs to be included in pushed logs.
- The log contains the response header: Specify a header that needs to be included. Separate multiple headers with commas (,). The asterisk (*) can be used as a wildcard.
- The log contains the response body: If this option is selected, logs will contain the body of API request responses.
Customized Authentication
Custom authentication information that needs to be included in pushed logs.
- Frontend: Enter a response field of frontend authentication that needs to be included. Separate multiple fields with commas (,).
- Backend: Enter a response field of backend authentication that needs to be included. Separate multiple fields with commas (,).
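To give a sense of what a pushed record contains, the following is a hypothetical example built from the default system metadata fields listed above. The field names come from this document; the values are invented for illustration, and the actual payload format may differ:

```python
# Hypothetical pushed log record; values are invented for illustration.
sample_log = {
    "start_time": "2024-01-01T08:00:00Z",
    "request_id": "f3a1c2d4-0000-0000-0000-000000000000",
    "client_ip": "198.51.100.10",
    "request_time": 0.042,
    "http_status": 200,
    "scheme": "https",
    "request_method": "GET",
    "host": "api.example.com",
    "uri": "/v1/orders",
    "upstream_addr": "10.0.0.5:8080",
    "upstream_status": 200,
    "upstream_response_time": 0.031,
    "http_x_forwarded_for": "198.51.100.10",
    "http_user_agent": "curl/8.4.0",
    "error_type": "",
}
```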
- Click OK.
- To clone this policy, click Clone in the Operation column.
The name of a cloned policy cannot be the same as that of any existing policy.
- After the policy is created, perform the operations described in Binding the Policy to APIs for the policy to take effect.
Binding the Policy to APIs
- Click a policy name to go to the policy details page.
- Select an environment and click Select APIs.
- Select the API group, environment, and required APIs.
APIs can be filtered by API name or tag. The tag is defined during API creation.
- Click OK.
- If an API no longer needs this policy, click Unbind in the row that contains the API.
- If there are multiple APIs that no longer need this policy, select these APIs, and click Unbind above the API list. You can unbind a policy from a maximum of 1000 APIs at a time.