This is the 6th blog in a series on the Elastic product stack. This blog covers how Beats fits into the mix with Logstash, Kibana and Elasticsearch.
The series covers:
1. Elasticsearch and Oracle Middleware – is there an opportunity?
2. Installation of Elasticsearch: installation and the indexing of – human generated – documents
3. Elasticsearch and Oracle ACM data: example with ACM data
4. Kibana for ACM dashboards: an example of dashboards with ACM data
5. Logstash and Fusion Middleware: how to get log file data into Elasticsearch
6. Beats and Fusion Middleware: a more advanced way to handle log files
On the elastic.co website, Beats are described as ‘Lightweight Data Shippers’. In more detail: ‘Beats is the platform for single-purpose data shippers. They install as lightweight agents and send data from hundreds or thousands of machines to Logstash or Elasticsearch’.
The Beats product family consists of agents that send data from a server into Logstash or Elasticsearch (a minimal configuration example follows the list):
- Filebeat: handles data from (log) files
- Metricbeat: handles system data like memory and CPU
- Packetbeat: handles network traffic data for various products like HTTP, DNS, MySQL, Redis, Cassandra, …
- Winlogbeat: handles Windows event logs
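To give an impression of how little configuration such an agent needs, here is a minimal sketch of a Metricbeat configuration (metricbeat.yml) that ships CPU and memory metrics straight into a local Elasticsearch node. This is an illustrative example, not part of the setup in this blog:

metricbeat.modules:
- module: system
  metricsets: ["cpu", "memory"]   # system data like memory and cpu
  period: 10s                     # collect every 10 seconds
output.elasticsearch:
  hosts: ["localhost:9200"]       # ship directly into Elasticsearch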
Full-blown log file solution
A full-blown log file solution would:
- use lightweight Filebeat agents to capture log files on the source systems
- let the Filebeat agents send the data to a Logstash server
- use Logstash to do data processing like filtering and transformations
- let Logstash forward the data to an Elasticsearch cluster
- use Kibana to access the data in the Elasticsearch cluster
That solution would look as shown below:
This solution would make sense when:
- there are many data sources (servers with log files)
- data processing is needed
- the data processing is CPU intensive and needs to be offloaded/centralized to another server, i.e. a separate Logstash server
But Logstash can also pick up log files itself, as shown in the previous blog, and Filebeat can also insert data directly into Elasticsearch. This makes other, simpler solutions possible:
- Logstash & Elasticsearch: in case Logstash can have direct access to the source log files
- Filebeat & Elasticsearch: if no data processing is required (a minimal sketch follows below)
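For that second variant, the Filebeat configuration would be a sketch like the one below: the output.logstash section used later in this blog is replaced by an output.elasticsearch section, so the log lines bypass Logstash entirely (and therefore get no filtering or transformation). The log file path is a placeholder to adjust to your own system:

filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/your/logfile.log   # placeholder: your own log file location
output.elasticsearch:
  hosts: ["localhost:9200"]       # send events directly to Elasticsearch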
This blog continues where the previous blog left off, and it ends with:
- Filebeat picking up log lines from the domain log file
- Filebeat sending the data to Logstash
- Logstash doing a data transformation
- Logstash sending the data to Elasticsearch.
Filebeat installation
Let’s get started with the installation of Filebeat. Version 5.0.0 was released on October 26th 2016:
- download the tar.gz file (there are other options): https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.0.0-linux-x86_64.tar.gz
Now install Filebeat:
[developer@localhost beats]$ pwd
/home/developer/beats
[developer@localhost beats]$ ls
filebeat-5.0.0-linux-x86_64.tar.gz
[developer@localhost beats]$ tar xzf filebeat-5.0.0-linux-x86_64.tar.gz
[developer@localhost beats]$
Configuration will be done in 3 steps:
1. Make Filebeat pick up the domain log file
2. Make Filebeat send the log lines to Logstash
3. Make Logstash listen to Filebeat
Steps 1 and 2 are done by editing the filebeat.yml configuration file.
#----------------------------- Filebeat prospectors --------------------------------
filebeat.prospectors:
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/developer/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/servers/AdminServer/logs/base_domain.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
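One thing to be aware of: WebLogic domain log entries can span multiple lines (stack traces, for example), and the Logstash filter used below simply drops lines that do not start with ####. If you want to keep those continuation lines, Filebeat can glue them to the preceding #### line with its multiline settings. A sketch, assuming every log entry starts with the #### marker (these lines go under the prospector definition above, at the same level as paths):

  multiline.pattern: '^####'   # a new log entry starts with ####
  multiline.negate: true       # lines NOT matching the pattern ...
  multiline.match: after       # ... are appended to the previous entry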
Step 3 is done by adding a Logstash configuration file to the setup from the previous blog. Add the file logstash-filebeat.conf in your Logstash home installation directory (in my case: ‘/home/developer/logstash/logstash-5.0.0’).
The file logstash-filebeat.conf looks like:
input {
  beats {
    port => 5044
  }
}
filter {
  if [message] !~ /^####/ {
    drop { }
  }
  grok {
    match => { "message" => "\#\#\#\#\<%{DATA:msg_timestamp}\> \<%{DATA:msg_severity}\> \<%{DATA:msg_subsystem}\>%{GREEDYDATA:msg_details}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
The only difference with respect to the Logstash configuration file from the previous blog is a changed input section.
For more information on Logstash, please refer to blog 5.
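To see what the grok pattern does, consider an illustrative domain log line (the exact content will differ on your system):

####<Nov 28, 2016 7:05:09 AM CET> <Notice> <WebLogicServer> <localhost> <AdminServer> … <Server state changed to RUNNING.>

The grok match splits this into msg_timestamp (‘Nov 28, 2016 7:05:09 AM CET’), msg_severity (‘Notice’), msg_subsystem (‘WebLogicServer’) and msg_details (everything that follows). Lines without the leading #### marker are dropped by the filter before grok runs.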
Check the Filebeat configuration:
[developer@localhost filebeat-5.0.0-linux-x86_64]$ ./filebeat -configtest -e
2016/11/28 06:05:09.254613 beat.go:264: INFO Home path: [/home/developer/beats/filebeat-5.0.0-linux-x86_64] Config path: [/home/developer/beats/filebeat-5.0.0-linux-x86_64] Data path: [/home/developer/beats/filebeat-5.0.0-linux-x86_64/data] Logs path: [/home/developer/beats/filebeat-5.0.0-linux-x86_64/logs]
2016/11/28 06:05:09.254641 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.0
2016/11/28 06:05:09.254746 logstash.go:90: INFO Max Retries set to: 3
2016/11/28 06:05:09.254826 outputs.go:106: INFO Activated logstash as output plugin.
2016/11/28 06:05:09.254915 publish.go:291: INFO Publisher name: localhost.localdomain
2016/11/28 06:05:09.255153 logp.go:219: INFO Metrics logging every 30s
2016/11/28 06:05:09.255657 async.go:63: INFO Flush Interval set to: 1s
2016/11/28 06:05:09.255666 async.go:64: INFO Max Bulk Size set to: 2048
Config OK
[developer@localhost filebeat-5.0.0-linux-x86_64]$
Start it all
Elasticsearch:
[developer@localhost bin]$ pwd
/home/developer/elastic/elasticsearch-5.0.0/bin
[developer@localhost bin]$ ./elasticsearch
Filebeat:
[developer@localhost filebeat-5.0.0-linux-x86_64]$ pwd
/home/developer/beats/filebeat-5.0.0-linux-x86_64
[developer@localhost filebeat-5.0.0-linux-x86_64]$ ./filebeat -e -c filebeat.yml -d "publish"
Logstash:
[developer@localhost logstash-5.0.0]$ pwd
/home/developer/logstash/logstash-5.0.0
[developer@localhost logstash-5.0.0]$ bin/logstash -f logstash-filebeat.conf --config.reload.automatic
Kibana:
[developer@localhost bin]$ pwd
/home/developer/kibana/kibana-5.0.0-linux-x86_64/bin
[developer@localhost bin]$ ./kibana
If you have your FMW system generating some log lines, you should now see them coming into your Kibana dashboard, just like in the previous blog:
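If Kibana does not show anything, you can also check Elasticsearch directly. A quick sanity check with curl, assuming the Logstash elasticsearch output uses its default daily logstash-* indices:

curl 'http://localhost:9200/logstash-*/_search?q=msg_severity:Notice&pretty'

Any hits returned confirm that the whole chain from Filebeat via Logstash into Elasticsearch is working.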
Summary
This blog completes a series of 6 blogs on the combination of Oracle Fusion Middleware and Elasticsearch, and it was a lot of fun to do.
The blog series shows that there are use cases for the combination of Oracle Fusion Middleware and Elasticsearch. Of course, you have to carefully design your solution to meet your specific requirements. But the Elasticsearch product seems mature to the level where it can be considered at enterprise scale. Remember, though, that even though all blogs show simple setups that work, building a production system will be more complicated: aspects like performance, availability, recovery and data loading have to be taken into account. There (still) are no silver bullets…