Planning Node Specifications and Capacity
This topic provides suggestions on selecting node specifications and configuring the storage type, storage capacity, and node quantity for a Logstash cluster, helping you properly plan your cluster's capacity.
Node Configuration Suggestions
Parameter | Configuration Suggestions
---|---
Node Specifications | In the node flavor list, vCPUs \| Memory indicates the number of vCPUs and the memory capacity of each flavor, and Recommended Storage indicates the supported storage capacity range. Select node specifications based on the core metrics of your Logstash cluster, such as CPU load, memory requirements, and I/O characteristics. Logstash Node Specifications describes the application scenarios and core features of each node specification to help you plan your cluster. For more information about node specifications, see ECS Types.
Node Storage Type and Capacity | Select an appropriate storage type and capacity for cluster nodes.
Nodes | A Logstash cluster can contain 1 to 100 nodes. Logstash nodes ingest, parse, process, and transfer data, so the number of nodes determines the data migration speed. Select the number of nodes based on your service requirements. When a Logstash cluster has two or more nodes, all nodes use the same configuration files; this mode is suitable when Logstash acts as a consumer of Kafka data.
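Because every node in a multi-node cluster runs the same configuration files, the Kafka-consumer pattern scales naturally: each node joins the same consumer group, and Kafka distributes topic partitions across the nodes. The following is a minimal pipeline sketch of this pattern; the broker address, topic, group ID, and Elasticsearch endpoint are placeholder assumptions to be replaced with your own values.

```
input {
  kafka {
    bootstrap_servers => "kafka-broker:9092"   # placeholder broker address
    topics            => ["app-logs"]          # placeholder topic
    group_id          => "logstash-migration"  # same group on every node, so Kafka
                                               # splits partitions across the cluster
    consumer_threads  => 2
  }
}
filter {
  json {
    source => "message"                        # assumes JSON payloads; adjust to your data
  }
}
output {
  elasticsearch {
    hosts => ["http://es-node:9200"]           # placeholder destination cluster
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

Because the same group_id is used on all nodes, adding nodes increases consumption parallelism without any per-node configuration changes.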
Logstash Node Specifications
Logstash nodes support EVS disks only. Elastic Volume Service (EVS) provides virtual block storage that is independent of ECSs. It offers high reliability and fast elastic scaling, making it suitable for workloads that require high data reliability and scalable storage capacity.
CPU Architecture | Node Flavor | Description
---|---|---
x86 | Compute-intensive | Core advantages<br>Application scenarios<br>Precautions
x86 | General computing-plus (AC) | Core advantages<br>Application scenarios<br>Precautions: If tasks mainly involve high network I/O, you are advised to use ultra-high I/O disks.
x86 | General computing | Core advantages: Balanced configuration. These are the default specifications, suitable for medium-scale data processing tasks.<br>Application scenarios<br>Memory usage evaluation: Estimate the required memory using this formula: average size of each piece of data processed by Logstash x (pipeline.workers x pipeline.batch.size). For example, if the average data size is 1 KB, pipeline.workers = 4, and pipeline.batch.size = 1000, the estimated memory usage is about 4 MB.
x86 | Memory-optimized | Core advantages<br>Application scenarios<br>Precautions
Kunpeng | Kunpeng general computing | Core advantages<br>Application scenarios<br>Precautions: Make sure your Logstash plugins (including the Java virtual machine and any third-party plugins) are compatible with the Arm architecture.
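The memory estimate in the table above can be wrapped in a small helper for quick what-if planning. This is a sketch; the function name and units are my own, and the formula is the one given for general computing nodes (average event size x pipeline.workers x pipeline.batch.size).

```python
def estimate_pipeline_memory_mb(avg_event_kb: float, workers: int, batch_size: int) -> float:
    """Estimate in-flight batch memory (MB) for a Logstash pipeline.

    Formula from the specifications table:
    average event size x (pipeline.workers x pipeline.batch.size)
    """
    return avg_event_kb * workers * batch_size / 1024  # KB -> MB

# Example from the table: 1 KB events, 4 workers, batch size 1000
print(round(estimate_pipeline_memory_mb(1, 4, 1000), 1))  # prints 3.9 (~4 MB)
```

Note that this covers only in-flight event batches; the JVM heap also needs headroom for plugins and buffers, so treat the result as a lower bound when sizing node memory.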