ES allows you to access your cluster through a private VIP within your VPC. You can write code to access the cluster through the Elasticsearch REST client and import your data, or you can ingest data through Elasticsearch's official components such as Logstash and Beats.
This document takes the official components Logstash and Beats as examples to describe how to connect data sources of different types to ES.
Preparations
You need to create a CVM instance or a Docker cluster in the same VPC as the ES cluster, because the ES cluster can only be accessed from within its VPC.
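Before configuring any components, you can verify that the instance can reach the cluster. The address below is a placeholder; replace it with your cluster's private VIP shown in the ES console:
curl http://<ES_VIP>:9200
A JSON response containing the cluster name and version confirms connectivity.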
Using Logstash to Access ES Cluster
Accessing ES cluster from CVM
1. Install and deploy Logstash and Java 8.
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.4.tar.gz
tar xvf logstash-5.6.4.tar.gz
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y
Note:
The Logstash version should be the same as the Elasticsearch version.
2. Customize the *.conf configuration file based on the data source type. For details, see Configuration file description below.
3. Run Logstash.
nohup ./bin/logstash -f ~/*.conf >/dev/null 2>&1 &
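If you want to confirm that Logstash itself works before connecting a real data source, you can run a minimal stdin-to-stdout pipeline in the foreground (a quick sanity check, not one of the required steps):
./bin/logstash -e 'input { stdin { } } output { stdout { } }'
Type any line and press Enter; Logstash should print it back as an event. Press Ctrl+C to exit.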
Accessing ES cluster from Docker
Creating Docker cluster
1. Pull the official image of Logstash.
docker pull docker.elastic.co/logstash/logstash:5.6.9
2. Customize the *.conf configuration file based on the data source type and place it in the /usr/share/logstash/pipeline/ directory (the directory path can be customized).
3. Run Logstash.
docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.9
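The --rm -it flags run the container interactively and remove it on exit, which is convenient for testing. For long-running collection, you could instead start the container detached using standard Docker flags (a sketch; adjust the host path as needed):
docker run -d --name logstash -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.9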
Using TKE
Tencent Cloud Docker clusters run on CVM instances, so you need to create a CVM cluster in the TKE Console first.
1. Log in to the TKE Console and select Cluster > Create on the left sidebar to create a cluster.
2. Select Service on the left sidebar and click Create to create a service.
3. Select the official image of Logstash.
In this example, the Logstash image provided by the TencentHub image registry is used. You can also build your own Logstash image.
4. Create a data volume.
Create a data volume to store the Logstash configuration file. In this example, a configuration file named logstash.conf is added to the /data/config directory on the CVM instance and mounted to the /data directory of the container, so that logstash.conf can be read when the container starts.
5. Configure the execution parameters.
6. Configure the service parameters and create a service as needed.
Configuration file description
File data sources
input {
  file {
    path => "/var/log/nginx/access.log"  # Path of the log file to collect
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["http://172.16.0.89:9200"]  # Private address of the ES cluster
    index => "nginx_access-%{+YYYY.MM.dd}"  # Index name; a new index is created each day
  }
}
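After Logstash has been running for a while, you can confirm that the daily index is being created by listing the indices on the cluster (the address is the sample VIP used above):
curl "http://172.16.0.89:9200/_cat/indices?v"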
Kafka data sources
input {
  kafka {
    bootstrap_servers => ["172.16.16.22:9092"]  # Kafka broker address
    client_id => "test"
    group_id => "test"  # Consumer group ID
    auto_offset_reset => "latest"  # Start consuming from the latest offset
    consumer_threads => 5
    decorate_events => true  # Attach Kafka metadata (topic, offset, etc.) to each event
    topics => ["test1","test2"]  # Topics to consume
    type => "test"
  }
}
output {
  elasticsearch {
    hosts => ["http://172.16.0.89:9200"]
    index => "test_kafka"
  }
}
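To produce a few test messages, you can use the console producer that ships with Kafka, assuming its command-line tools are available on a machine in the same VPC (broker address and topic taken from the sample above):
./bin/kafka-console-producer.sh --broker-list 172.16.16.22:9092 --topic test1
Each line you type is sent as one message and should show up in the test_kafka index shortly afterwards.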
Database data sources connected with JDBC
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://172.16.32.14:3306/test"  # MySQL connection string
    jdbc_user => "root"
    jdbc_password => "Elastic123"
    jdbc_driver_library => "/usr/local/services/logstash-5.6.4/lib/mysql-connector-java-5.1.40.jar"  # Path to the JDBC driver
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    statement => "select * from test_es"  # SQL statement used to pull data
    schedule => "* * * * *"  # Cron-style schedule: run every minute
    type => "jdbc"
  }
}
output {
  elasticsearch {
    hosts => ["http://172.16.0.30:9200"]
    index => "test_mysql"
    document_id => "%{id}"  # Use the id column as the document ID
  }
}
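Since document_id is mapped to the table's id column, re-running the scheduled query updates existing documents instead of creating duplicates. To verify the sync, you can compare the row count in MySQL with the document count in ES (address taken from the sample above):
curl "http://172.16.0.30:9200/test_mysql/_count"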
Using Beats to Access ES Cluster
Beats contains a variety of single-purpose collectors. These collectors are relatively lightweight and can be deployed and run on servers to collect data such as logs and monitoring information. Beats consumes fewer system resources than Logstash does.
Beats includes Filebeat for collecting file-type data, Metricbeat for collecting monitoring metric data, Packetbeat for collecting network packet data, and so on. You can also develop your own Beats components based on the official libbeat library as needed.
Accessing ES cluster from CVM
1. Install and deploy Filebeat.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.4-linux-x86_64.tar.gz
tar xvf filebeat-5.6.4-linux-x86_64.tar.gz
2. Configure the filebeat.yml file. For details, see Configuration file description below.
3. Run Filebeat.
nohup ./filebeat >/dev/null 2>&1 &
Accessing ES cluster from Docker
Creating Docker cluster
1. Pull the official image of Filebeat.
docker pull docker.elastic.co/beats/filebeat:5.6.9
2. Customize the filebeat.yml configuration file based on the data source type. The official image reads its configuration from /usr/share/filebeat/filebeat.yml (see the mount example after the run command below).
3. Run Filebeat.
docker run docker.elastic.co/beats/filebeat:5.6.9
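If you keep the customized filebeat.yml on the host, you can mount it over the image's default configuration file (the host path here is an example; adjust it to wherever your file lives):
docker run -v ~/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:5.6.9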
Using TKE
The deployment method of Filebeat through TKE is similar to that of Logstash, and you can use the Filebeat image provided by Tencent Cloud.
Configuration file description
Configure the filebeat.yml file as follows:
# Input source configuration
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/services/testlogs/*.log
# Output to ES
output.elasticsearch:
  hosts: ["172.16.0.39:9200"]
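To verify the setup end to end, you can append a line to a file under the monitored path and then check the cluster for the new index (paths and address come from the sample configuration above; Filebeat 5.x writes to an index named filebeat-YYYY.MM.dd by default):
echo "test log entry" >> /usr/local/services/testlogs/test.log
curl "http://172.16.0.39:9200/_cat/indices?v"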