Configuring the Platform

After the deployment is finished, certain functions or configurations on the platform may not match your project's requirements. If this occurs, refer to this section to configure the platform accordingly.
Modifying Component Software Configurations

To ensure your services can handle the expected scale and pressure, make sure your hardware configuration meets the requirements outlined in "Capacity Planning" of the Alauda Cloud Native Success Platform Installation Guide. Additionally, adjust the software configuration of your components based on this section.
Modify the log collection scope and the storage duration of logs, audit data, and events.
- Log Collection Scope
It can be modified on the platform UI. For details, see the Alauda Cloud Native Success Platform User Guides.
- Log Retention Period
It can be modified on the platform UI. For details, see the Alauda Cloud Native Success Platform User Guides.
- Monitoring Data Retention Period
It can be modified on the platform UI. For details, see the Alauda Cloud Native Success Platform User Guides.
- Limit Modification
Run the following command on the first master node to search for the component to be modified and find the Apollo resource name:

```shell
kubectl get deploy,sts,ds -A | grep apoll
```

Then run the following command to make changes:

```shell
kubectl edit -n cpaas-system deployment.apps/apollo
```
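Inside the opened editor, the fields to adjust are typically the container's resource requests and limits. A minimal sketch of what that section may look like (the container name and the values below are illustrative assumptions, not platform defaults):

```yaml
spec:
  template:
    spec:
      containers:
        - name: apollo            # container name is an assumption
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"            # raise these values to match your capacity plan
              memory: 1Gi
```

Saving the editor applies the change, and Kubernetes rolls out the Deployment with the new limits.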
- apollo --es-enable-alias Modification
Setting this parameter to false improves log query speed; false is the default value. Setting it to true allows Elasticsearch index names that do not follow the naming specification (for example, log-workload-20230208) and supports log queries based on aliases in customers' on-premises Elasticsearch instances.

Run the following command on the first master node:

```shell
kubectl edit prdb base
```

Search for valuesOverride in .spec. If valuesOverride is not present, add the key and then add the following content:

```yaml
valuesOverride:
  ait/chart-alauda-base:
    logging:
      esEnableAliases: false  # Boolean parameter: true or false
```
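Conversely, if you need alias-based queries against an on-premises Elasticsearch whose index names do not follow the platform's naming specification, the same key is set to true. A sketch under the same assumptions (the chart path and key name are taken from this section):

```yaml
valuesOverride:
  ait/chart-alauda-base:
    logging:
      esEnableAliases: true   # enables alias-based log queries; may reduce query speed
```

Note the trade-off stated above: enabling aliases supports non-conforming index names at the cost of slower log queries.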
- Changing the Number of Elasticsearch Shards (ALAUDA_ES_SHARDING)
For details, see "Platform Center" > "Platform management" > "Clusters" > "Clusters" > "Plugin management" > "Deploying the Log Storage Component" in the Alauda Cloud Native Success Platform User Guides.
- Changing the Number of Nodes Where Elasticsearch Runs
You can modify the configuration on the UI. Log in to the platform as an administrator, choose Platform Management > Clusters > Clusters > Plugins > ElasticSearch Log Center, click the actions button, and select Update from the drop-down list.
Elasticsearch can run on one node or three nodes; the UI does not support increasing the node count beyond three. To run Elasticsearch on more than three nodes, you must modify the parameters manually:
- Run the following command to find the moduleinfo resource for your cluster, then open it and locate the spec.config.components.elasticsearch.nodes field:

```shell
kubectl get moduleinfo | grep logcenter | grep <cluster-name>
```

- Add the names of the required nodes under that field. (Run kubectl get nodes to obtain the node names.)
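Putting the two steps above together, the edited section of the moduleinfo resource might look like the following; the node names are illustrative and must be replaced with real names from the output of kubectl get nodes:

```yaml
spec:
  config:
    components:
      elasticsearch:
        nodes:                      # nodes on which Elasticsearch runs
          - node-01.example.com
          - node-02.example.com
          - node-03.example.com
          - node-04.example.com     # added node beyond the original three
```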