Importing Logs of Self-built ELK to LTS
Solution Overview
ELK is an acronym that stands for Elasticsearch, Logstash, and Kibana. Together, these three tools provide one of the most commonly used log analysis and visualization solutions in the industry.
- Elasticsearch is an open-source, distributed, and RESTful search and analysis engine based on Lucene.
- Logstash is an open-source data processing pipeline on the server side. It allows you to collect and transform data from multiple sources in real time, and then send the data to your repository. It is usually used to collect, filter, and forward logs.
- Kibana is an open-source platform for data analysis and visualization, enabling you to create dashboards and search and query data. It is usually used together with Elasticsearch.
LTS outperforms the ELK solution in terms of functionality, cost, and performance. For an in-depth comparison, see What Are the Advantages of LTS Compared with Self-built ELK Stack? This section describes how to use custom Python scripts and ICAgent to migrate logs from Elasticsearch to LTS.
ICAgent can be installed on ECSs to collect their log files. You can flush Elasticsearch data to log files on an ECS using Python scripts, and then collect the flushed log files to LTS through its log ingestion function.
Importing Logs of Self-built ELK to LTS
- Log in to the LTS console.
- Install ICAgent on the ECS.
- Configure ECS log ingestion on the LTS console. For details, see Ingesting ECS Text Logs to LTS.
- Prepare for script execution. The following examples are for reference only. Replace them with your actual information.
- If this is your first time using Python, install a Python environment first.
- If this is your first time using Elasticsearch, install the Python client package matching your Elasticsearch version. Elasticsearch 7.10.1 is used in this solution test (a connectivity check sketch follows this list):
pip install elasticsearch==7.10.1
- The Elasticsearch cluster used in this solution test was created with Huawei Cloud Search Service (CSS).
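Before running the scripts below, it may help to verify that the client package is installed and the cluster is reachable. This is a minimal sketch; http://127.0.0.1:9200 is the example endpoint used throughout this section.
from elasticsearch import Elasticsearch

# Connect to the example endpoint and print cluster information.
# A successful call confirms both the package installation and connectivity.
es = Elasticsearch("http://127.0.0.1:9200")
print(es.info())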
- Run the Python script for constructing index data. If the index already contains data, skip this step and go to 6.
The Python script must be executed on the ECS and named xxx.py. The following is an example of constructing data:
Modify the following fields as required. In this example, 1,000 data records with the content This is a test log,Hello world!!!\n are inserted.
- index: name of the index to be created. It is test in this example.
- es: URL for accessing Elasticsearch. It is http://127.0.0.1:9200 in this example.
from elasticsearch import Elasticsearch

def createIndex(index):
    # Define a single text field named "content" for the index.
    mappings = {
        "properties": {
            "content": {
                "type": "text"
            }
        }
    }
    # With the 7.x Python client, mappings are passed inside the request body.
    es.indices.create(index=index, body={"mappings": mappings})

def reportLog(index):
    # Insert 1,000 test log records.
    i = 0
    while i < 1000:
        i = i + 1
        body = {"content": "This is a test log,Hello world!!!\n"}
        es.index(index=index, body=body)

if __name__ == '__main__':
    # Index name
    index = 'test'
    # Link to Elasticsearch
    es = Elasticsearch("http://127.0.0.1:9200")
    createIndex(index)
    reportLog(index)
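To confirm that the test data was indexed, you can run a quick count check. This is a minimal sketch using the example index name test and endpoint from above:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://127.0.0.1:9200")
# Refresh so the newly indexed documents become searchable immediately.
es.indices.refresh(index="test")
# Expect a response like {'count': 1000, ...}.
print(es.count(index="test"))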
- Construct the Python read and write script to write Elasticsearch data to the disk. The output file path must be the same as that configured in the log ingestion rule.
The script must be executed on the ECS and named xxx.py. The following is an example of the script for writing data to the disk:
Modify the following fields as required.
- index: index name. It is test in this example.
- pathFile: absolute path for writing data to the disk. It is /tmp/test.log in this example.
- scroll_size: size of the index rolling query. It is 100 in this example.
- es: URL for accessing Elasticsearch. It is http://127.0.0.1:9200 in this example.
from elasticsearch import Elasticsearch

def writeLog(res, pathFile):
    data = res.get('hits').get('hits')
    # Append the "content" field of each hit to the output file.
    with open(pathFile, 'a', encoding='UTF-8') as file:
        for hit in data:
            file.write(hit.get('_source').get('content'))

if __name__ == '__main__':
    # Index name
    index = 'test'
    # Output file path
    pathFile = '/tmp/' + index + '.log'
    # Size of the scrolling query. The default value is 100.
    scroll_size = 100
    # Link to Elasticsearch
    es = Elasticsearch("http://127.0.0.1:9200")
    init = True
    while True:
        if init:
            # The first request opens a scroll context kept alive for 1 minute.
            res = es.search(index=index, scroll="1m", body={"size": scroll_size})
            init = False
        else:
            # Subsequent requests page through the results using the scroll ID.
            scroll_id = res.get("_scroll_id")
            res = es.scroll(scroll="1m", scroll_id=scroll_id)
        if not res.get('hits').get('hits'):
            break
        writeLog(res, pathFile)
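After the loop exits, the server-side scroll context expires on its own once the 1m keep-alive lapses. If you prefer to release it explicitly, a small optional addition after the while loop could look like this (a sketch, not part of the original script):
# Optional: explicitly release the scroll context instead of waiting
# for the 1m keep-alive to expire.
scroll_id = res.get("_scroll_id")
if scroll_id:
    es.clear_scroll(scroll_id=scroll_id)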
- Ensure that Python has been installed and run the following command on the ECS to write the Elasticsearch index data to the disk:
python xxx.py
- Check whether the data was successfully queried and written to the disk.
In this example, the path for writing data to the disk is /tmp/test.log. Replace it with your actual path. Run the following command to check whether the data has been written to the disk:
tail -f /tmp/test.log
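With the example data constructed earlier, each output line should look like the following (assuming the sample content was not modified):
This is a test log,Hello world!!!
This is a test log,Hello world!!!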
- Log in to the LTS console. On the Log Management page, click the target log stream to go to its details page. If log data is displayed on the Log Search tab page, log collection is successful.