Logstash is an open-source log processing tool that collects data from multiple sources, filters it, and then stores it for other uses.
Logstash is highly flexible and has powerful parsing capabilities. With a variety of plugins, it supports many types of inputs and outputs. In addition, as a horizontally scalable data pipeline, it provides powerful log collection and retrieval features when used with Elasticsearch and Kibana.
The Logstash data processing pipeline can be divided into three stages: inputs → filters → outputs.
In addition, Logstash supports encoding and decoding data, so you can specify data formats on the input and output ends.
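To illustrate, a minimal pipeline sketch covering all three stages plus codecs might look like the following (the stdin/stdout plugins and the added field are illustrative choices, not part of this guide's setup):
input {
    stdin { codec => json }                             # decode each incoming line as JSON
}
filter {
    mutate { add_field => { "source" => "stdin" } }     # enrich the event with a custom field
}
output {
    stdout { codec => rubydebug }                       # pretty-print the processed event
}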
Note: Logstash consumes resources when processing data. If you deploy Logstash on a production server, the performance of the server may be affected.
Before you start, create a topic named logstash_test in your CKafka instance; it is used as the data source in the steps below.
Note: The following directions describe how to use CKafka as the Logstash input; a sketch of using CKafka as the output is given at the end of this section.
Run bin/logstash-plugin list to check whether logstash-input-kafka is included in the list of supported plugins.
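For example, you can narrow the list down to Kafka-related plugins. Note that on newer Logstash versions the Kafka input and output may be bundled together as logstash-integration-kafka instead:
# List only the Kafka-related plugins
bin/logstash-plugin list | grep kafka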
Write the configuration file input.conf in the bin/ directory.
In the following example, CKafka is used as the data source, and standard output (stdout) is used as the data destination.
input {
    kafka {
        bootstrap_servers => "xx.xx.xx.xx:xxxx"   # CKafka instance access address
        group_id => "logstash_group"              # CKafka consumer group ID
        topics => ["logstash_test"]               # CKafka topic name
        consumer_threads => 3                     # Number of consumer threads, generally equal to the number of partitions in the topic
        auto_offset_reset => "earliest"           # Start from the earliest offset when no committed offset exists
    }
}
output {
    stdout { codec => rubydebug }
}
Run the following command to start Logstash and consume messages.
./logstash -f input.conf
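If the topic is empty, you can send a test message from another terminal, for example with the console producer shipped with Apache Kafka (the broker address below is a placeholder, and older Kafka releases use --broker-list instead of --bootstrap-server):
# Send test messages to the logstash_test topic (assumes the Apache Kafka CLI tools are installed)
bin/kafka-console-producer.sh --bootstrap-server xx.xx.xx.xx:xxxx --topic logstash_test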
If consumption succeeds, the messages in the topic are printed to the console in rubydebug format, which shows that the data in the topic above has been consumed.
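As mentioned in the note above, CKafka can also serve as the Logstash output. A minimal sketch of such a configuration, assuming the same placeholder access address and topic name, might look like this:
output {
    kafka {
        bootstrap_servers => "xx.xx.xx.xx:xxxx"   # CKafka instance access address
        topic_id => "logstash_test"               # CKafka topic to write events to
    }
}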