CSS provides a variety of node specifications to meet diverse needs. For more information about different node specifications, see ECS Types.
The following describes the use cases and core features of the different node specifications.
Elasticsearch and OpenSearch Node Specifications
Table 1 Comparing different node specifications
CPU Architecture: x86

Node Flavor: Compute-intensive
Core advantages
- High-performance CPU: designed for high computational load and low latency, ideal for real-time search and complex queries.
- High reliability: Local NVMe disks enhance cluster stability.
Application scenarios
- E-commerce search and recommendations: real-time responses to search requests, complex filtering and ranking supported.
- App search service: quick search on mobile apps, such as social apps and content recommendations.
- Database acceleration: low latency for complex OLAP queries.
Precautions
- This flavor is suitable for performance-demanding workloads, but it comes at a higher cost. Be sure to balance your performance needs with budget considerations.
- Use ultra-high I/O disks with this node flavor so that data read/write performance matches the CPU performance.

Node Flavor: General computing-plus - AC
Core advantages
- Dedicated CPU: no resource contention between different instances, stable performance at a relatively low cost.
- Low latency: ideal for latency-sensitive workloads.
Application scenarios
- High-concurrency search: e-commerce search, social platform search, etc., where fast responses are required.
- Real-time recommendation system: recommendations are generated based on real-time user behavior.
- OLTP scenarios: transactional data processing and real-time queries.

Node Flavor: General computing
Core advantages
- Balanced configuration: default specifications, suitable for medium-scale workloads.
- Cost-effective: able to meet the needs of most general search and analytical applications.
Application scenarios
This flavor meets the needs of general use cases through a standard deployment, without the need for special tuning.
- Content search: medium-scale search and analytical workloads. (The data volume per node ranges from 100 GB to 1,000 GB.)
- Log analytics: real-time query and analysis of medium-scale log data.

Node Flavor: Memory-optimized
Core advantages
- Large memory capacity: ideal for memory-intensive tasks, such as complex aggregation and caching.
- High throughput: fast processing of large data volumes.
Application scenarios
Use this flavor for search and analytical workloads where the per-node data volume ranges from 100 GB to 2,000 GB.
- Large-scale data analytics, such as user behavior analysis and statistics.
- OLAP scenarios: complex queries and multi-dimensional aggregation (see the example after this list).
- Cache-intensive workloads, such as real-time reporting, where data needs to be frequently loaded into memory.
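For a concrete sense of the multi-dimensional aggregations these nodes handle, here is a minimal sketch assuming the official Elasticsearch Python client (elasticsearch-py 8.x); the cluster address, index name, and field names are hypothetical.

```python
# Hypothetical example: a two-level (multi-dimensional) aggregation, the kind of
# memory-heavy query that benefits from a memory-optimized flavor.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://<cluster-address>:9200", basic_auth=("user", "password"))

resp = es.search(
    index="logs-app",   # hypothetical index name
    size=0,             # aggregation only, no hits returned
    aggs={
        "by_region": {
            "terms": {"field": "region", "size": 50},
            "aggs": {
                "avg_latency": {"avg": {"field": "response_time"}},
            },
        }
    },
)

# Print the average latency per region bucket.
for bucket in resp["aggregations"]["by_region"]["buckets"]:
    print(bucket["key"], bucket["avg_latency"]["value"])
```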
Precautions
- Avoid using this flavor for latency-sensitive workloads.
- This flavor comes at a relatively high cost, primarily due to its large memory capacity. Be sure to balance your performance needs with budget considerations.

Node Flavor: Ultra-high I/O
Core advantages
- Local NVMe SSDs: impressively high disk I/O performance, ideal for workloads with high-concurrency reads/writes.
- Low latency: Compared with cloud-based SSDs, local NVMe SSDs have lower latency and better performance.
Application scenarios
- Real-time public sentiment analysis: frequent writes and fast queries.
- Patent search: quick search and matching of large volumes of text data.
- Database acceleration: read/write splitting architecture for MySQL/PostgreSQL.
Precautions
- In terms of data loss prevention, local disks are less reliable than cloud disks. When using local disks, be sure to enable data replicas (see the example after this list).
- NVMe SSDs are expensive. Be sure to balance performance and costs.
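For example, you can keep at least one replica per index so that data on a failed local disk can be recovered from another node. The sketch below assumes the official Elasticsearch Python client (elasticsearch-py 8.x); the cluster address and index names are hypothetical.

```python
# Hypothetical example: make sure indices keep at least one replica shard when
# the cluster runs on local NVMe disks. Index names are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://<cluster-address>:9200", basic_auth=("user", "password"))

# Set the replica count on an existing index.
es.indices.put_settings(
    index="my-index",
    settings={"index": {"number_of_replicas": 1}},
)

# Or create a new index with one replica from the start.
es.indices.create(
    index="my-new-index",
    settings={"index": {"number_of_shards": 3, "number_of_replicas": 1}},
)
```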

CPU Architecture: Kunpeng

Node Flavor: Kunpeng general computing
Core advantages
Cost-effective: In general, the Arm architecture is more cost-effective than x86.
Application scenarios
- Cost-sensitive workloads, such as search services for small and medium enterprises and testing environments.
- Arm ecosystem compatibility: use this flavor when your search workloads must run on the Arm architecture.
Precautions
Ecosystem compatibility: Make sure your applications (including the Java virtual machine and third-party plugins) are compatible with the Arm architecture.
Logstash Node Specifications
Table 2 Comparing different node specifications
CPU Architecture: x86

Node Flavor: Compute-intensive
Core advantages
- High-performance CPU: designed for high computational load, ideal for CPU-intensive tasks.
- Optimized network I/O: high throughput in both the inbound and outbound directions, which benefits network-based plugins.
Application scenarios
- CPU-intensive plugins: plugins that involve heavy CPU computation, such as grok (regular expression parsing) and dissect (structured log parsing). See the illustration after this list.
- Hybrid load tasks: tasks that involve both heavy CPU computation and network I/O loads, such as real-time log ingestion and data cleaning.
- Large-scale data processing at high speed, such as log aggregation and event stream processing.
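Grok patterns compile down to regular expressions that run against every event, which is what makes such filters CPU-bound. The hypothetical Python snippet below mimics that per-event parsing work purely for illustration; the log format and pattern are made up, and this is not Logstash's actual implementation.

```python
# Hypothetical illustration of the per-event regular-expression work a grok
# filter performs. The log line format and the pattern are made up.
import re

# Roughly comparable to a grok pattern such as
# "%{IP:client} %{WORD:method} %{NOTSPACE:path} %{NUMBER:status}"
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>[A-Z]+) "
    r"(?P<path>\S+) "
    r"(?P<status>\d{3})"
)

def parse(line):
    """Run the regex once per event; this CPU cost scales with the event rate."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

print(parse("203.0.113.10 GET /search?q=css 200"))
# {'client': '203.0.113.10', 'method': 'GET', 'path': '/search?q=css', 'status': '200'}
```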
Precautions
- Set the number of task threads (pipeline.workers) to equal the number of vCPUs. This optimizes CPU utilization while avoiding resource contention. See the example after this list.
- Watch for I/O performance bottlenecks. If network plugins (such as beats and http) carry heavy traffic, make sure there is sufficient network bandwidth.
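As a quick illustration of this sizing rule, the following hypothetical Python helper reads the host's vCPU count and prints the matching logstash.yml settings. pipeline.workers and pipeline.batch.size are real Logstash settings; the helper itself and the batch size shown are only a sketch.

```python
# Hypothetical helper: derive pipeline.workers from the node's vCPU count,
# following the guideline above (workers == vCPUs).
import os

vcpus = os.cpu_count() or 1

# Lines you would place in logstash.yml.
print(f"pipeline.workers: {vcpus}")
print("pipeline.batch.size: 125   # Logstash default; tune for your workload")
```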

Node Flavor: General computing-plus - AC
Core advantages
- Dedicated CPU: no resource contention between different instances, stable performance at a relatively low cost, ideal for high-priority tasks.
- Low latency: guaranteed efficiency for CPU-intensive plugins.
Application scenarios
- High CPU-load tasks: real-time log parsing and complex field extraction (grok and ruby plugins).
- Mission-critical processes: workloads that require reliable performance, such as financial transaction log processing.
- Multi-thread processing: high-concurrency tasks.
Precautions
If tasks mainly involve high network I/O, you are advised to use ultra-high I/O disks.

Node Flavor: General computing
Core advantages
Balanced configuration: default specifications, suitable for medium-scale data processing tasks.
Application scenarios
- Medium-scale log processing, such as enterprise log ingestion and monitoring data aggregation.
- Low CPU-load tasks: mainly network I/O (such as file and kafka plugins).
- Standard deployment: meets the needs of general use cases, no need for special tuning.
Memory usage evaluation
Estimate the needed memory capacity using this formula: Average size of each piece of data processed by Logstash x (pipeline.workers x pipeline.batch.size)
Example: If the average data size is 1 KB, pipeline.workers = 4, and pipeline.batch.size = 1000, the memory size is ~4 MB.
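The worked example above can be reproduced with a short calculation; the minimal Python sketch below simply restates those numbers.

```python
# Reproduce the memory estimate from the formula above.
avg_event_size_kb = 1        # average size of each piece of data, in KB
pipeline_workers = 4         # logstash.yml: pipeline.workers
pipeline_batch_size = 1000   # logstash.yml: pipeline.batch.size

in_flight_memory_kb = avg_event_size_kb * pipeline_workers * pipeline_batch_size
print(f"~{in_flight_memory_kb / 1024:.1f} MB of in-flight event data")  # ~3.9 MB (~4 MB)
```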

Node Flavor: Memory-optimized
Core advantages
- Large memory capacity: suitable for memory-intensive tasks; reduces disk I/O pressure.
- Optimized memory queues: data cached in memory queues, more efficient data processing.
Application scenarios
- Large-scale log aggregation: log analytics platform, security information and event management (SIEM), etc.
- Complex data transformation: tasks that involve temporary storage of large amounts of data (for example, the aggregate plugin).
Precautions
- Monitor memory usage in real time to avoid out-of-memory (OOM) errors (see the example after this list).
- This flavor comes at a relatively high cost. Reserve it for high-priority, memory-intensive tasks.
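One way to watch memory usage is Logstash's monitoring API, which listens on port 9600 by default. The sketch below polls the JVM heap statistics; the 85% warning threshold is an arbitrary example value.

```python
# Poll the Logstash monitoring API for JVM heap usage so that memory-intensive
# pipelines can be caught before they hit an OOM error.
import json
import urllib.request

STATS_URL = "http://localhost:9600/_node/stats/jvm"  # default monitoring API port

with urllib.request.urlopen(STATS_URL) as resp:
    stats = json.load(resp)

heap_used_percent = stats["jvm"]["mem"]["heap_used_percent"]
print(f"Logstash heap usage: {heap_used_percent}%")
if heap_used_percent > 85:  # arbitrary example threshold
    print("Warning: heap usage is high; consider a larger memory-optimized flavor.")
```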

CPU Architecture: Kunpeng

Node Flavor: Kunpeng general computing
Core advantages
- Cost-effective: In general, the Arm architecture is more cost-effective than x86, with lower power consumption.
- Memory-efficient: Use this flavor for workloads that are cost-sensitive and require large memory capacities.
Application scenarios
- Cost-sensitive workloads, such as log ingestion for small and medium enterprises and testing environments.
- Arm ecosystem compatibility: use this flavor when your workloads must run on Kunpeng (Arm) servers.
Precautions
Make sure your Logstash deployment (including the Java virtual machine and third-party plugins) is compatible with the Arm architecture.