
Different Ways to Ingest Data into an Elasticsearch Cluster

Introduction

Elasticsearch clusters support multiple data ingestion methods, as listed in Table 1. Select the one that best fits your needs. Before ingesting data, determine whether you need to enhance the cluster's data ingestion performance. For details, see Enhancing the Data Ingestion Performance of Elasticsearch Clusters.

Table 1 Different ways to ingest data into an Elasticsearch cluster

| Data Ingestion Method | Scenario | Supported Data Formats/Sources | Details |
|---|---|---|---|
| CSS Logstash | Use CSS Logstash to ingest data from multiple sources, such as relational databases, Kafka, and OBS, into Elasticsearch clusters. | RDS MySQL, Kafka, and OBS data | Using Logstash to Synchronize Data to Elasticsearch |
| Open-source Logstash | Open-source Logstash provides a server-side, real-time data processing pipeline that ingests data from multiple sources. It can handle various types of data, such as logs, monitoring data, and metrics. | JSON, CSV, and text | Using In-house Built Logstash to Import Data to Elasticsearch |
| Open-source Elasticsearch API | Open-source Elasticsearch APIs can be used to ingest data. This method is flexible because you write your own application code (see the sketch after this table). | JSON | Using Open Source Elasticsearch APIs to Import Data to Elasticsearch |
| Cloud Data Migration (CDM) | Use CDM for batch data migration, for example, when the source data is stored in OBS or an Oracle database. | JSON | Using CDM to Import Data to Elasticsearch |
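
The open-source Elasticsearch API method in the table above can be used with any Elasticsearch client. The following is a minimal sketch using the Python elasticsearch client's bulk helper. The cluster address, the index name my_index, and the sample documents are assumptions for illustration; replace them with your cluster's access address and your own data.

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

# Assumed cluster access address; replace with your Elasticsearch cluster's
# endpoint (and add authentication if security mode is enabled).
es = Elasticsearch("http://127.0.0.1:9200")

# Sample JSON documents to ingest (hypothetical fields for illustration).
docs = [
    {"title": "doc-1", "message": "first sample log line"},
    {"title": "doc-2", "message": "second sample log line"},
]

# Wrap each document in a bulk action targeting the assumed index "my_index".
actions = ({"_index": "my_index", "_source": doc} for doc in docs)

# Send all documents in one bulk request and report the outcome.
success, errors = bulk(es, actions, raise_on_error=False)
print(f"Indexed {success} documents; errors: {errors}")
```

Bulk requests are generally preferred over indexing documents one at a time because they reduce per-request overhead; for large imports, sending documents in batches is a common starting point.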