This is the 5th blog in a series on the Elastic product stack. It covers how Logstash can pick up Fusion Middleware log files and get the log lines into Elasticsearch.
The series covers:
1. Elasticsearch and Oracle Middleware – is there an opportunity?
2. Installation of Elasticsearch: installation and the indexing of – human generated – documents
3. Elasticsearch and Oracle ACM data: example with ACM data
4. Kibana for ACM dashboards: an example of dashboards with ACM data
5. Logstash and Fusion Middleware: how to get log file data into Elasticsearch
6. Beats and Fusion Middleware: a more advanced way to handle log files
Logstash is described on the elastic.co site as ‘an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” (Ours is Elasticsearch, naturally.)’.
This article builds on the solution from the previous blogs, i.e. Kibana and Elasticsearch are already installed. Now, Logstash will be added to the mix:
Blog number 6 will describe how Beats (Filebeat) can also be part of this solution. To pick the solution that best fits your requirements, please refer to that blog.
Logstash installation
Version 5.0.0 was released on October 26th 2016:
- download the tar.gz file (there are other options): https://artifacts.elastic.co/downloads/logstash/logstash-5.0.0.tar.gz
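For example, fetching it from the command line (assuming wget is available; downloading via the browser works just as well):

[developer@localhost ~]$ wget -P ~/Downloads https://artifacts.elastic.co/downloads/logstash/logstash-5.0.0.tar.gz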
[developer@localhost ~]$ cd
[developer@localhost ~]$ mkdir logstash
[developer@localhost ~]$ cd logstash
[developer@localhost logstash]$ mv ~/Downloads/logstash-5.0.0.tar.gz .
[developer@localhost logstash]$ tar -xzf logstash-5.0.0.tar.gz
[developer@localhost logstash]$
Logstash configuration
Now, create a Logstash configuration file:
[developer@localhost logstash]$ pwd
/home/developer/logstash
[developer@localhost logstash]$ cd logstash-5.0.0/
[developer@localhost logstash-5.0.0]$ vi logstash-domain_log.conf
Then edit the configuration file to look like this:
#
# specify the location of the log file, and
# give it a type (here: domain_log)
#
input {
  file {
    path => "/home/developer/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/servers/AdminServer/logs/base_domain.log"
    type => "domain_log"
  }
}
#
# define a filter:
# 1. drop all message lines that do not start with ####
# 2. use a grok filter to parse the message and extract
#    msg_timestamp
#    msg_severity
#    msg_subsystem
#
filter {
  if [message] !~ /^####/ {
    drop { }
  }
  grok {
    match => { "message" => "\#\#\#\#\<%{DATA:msg_timestamp}\> \<%{DATA:msg_severity}\> \<%{DATA:msg_subsystem}\>%{GREEDYDATA:msg_details}" }
  }
}
#
# send the output to Elasticsearch
#
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
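To try out the filter logic without touching the actual log file, a minimal test pipeline can be handy. This is just a sketch: it reads lines pasted on stdin, runs them through the same filter, and prints the parsed events to stdout:

#
# test pipeline: paste log lines on stdin, inspect the parsed events on stdout
#
input { stdin { } }
filter {
  if [message] !~ /^####/ { drop { } }
  grok {
    match => { "message" => "\#\#\#\#\<%{DATA:msg_timestamp}\> \<%{DATA:msg_severity}\> \<%{DATA:msg_subsystem}\>%{GREEDYDATA:msg_details}" }
  }
}
output { stdout { codec => rubydebug } }

Save this as, say, test-domain_log.conf, run it with bin/logstash -f test-domain_log.conf, paste a #### line from the domain log, and the rubydebug codec shows exactly which fields were extracted.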
Note that the grok filter is based on the definition of the log file format. For our 12c instance, that information was found here:
https://docs.oracle.com/middleware/1221/wls/WLLOG/logging_services.htm#WLLOG130
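As an illustration, take a (made-up, abbreviated) log line in this format:

####<Nov 27, 2016 4:43:36 PM CET> <Notice> <StdErr> <localhost> <AdminServer> <[ACTIVE] ExecuteThread: '0'> <<WLS Kernel>> <> <> <1480261416333> <BEA-000000> <some message text>

The grok pattern above extracts msg_timestamp = ‘Nov 27, 2016 4:43:36 PM CET’, msg_severity = ‘Notice’ and msg_subsystem = ‘StdErr’; the remainder of the line ends up in msg_details.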
Check your config file:
[developer@localhost logstash-5.0.0]$ pwd
/home/developer/logstash/logstash-5.0.0
[developer@localhost logstash-5.0.0]$ ls
bin           CONTRIBUTORS  Gemfile.jruby-1.9.lock  logstash-core             logstash-domain_log.conf
CHANGELOG.md  data          lib                     logstash-core-event-java  NOTICE.TXT
config        Gemfile       LICENSE                 logstash-core-plugin-api  vendor
[developer@localhost logstash-5.0.0]$ bin/logstash -f logstash-domain_log.conf --config.test_and_exit
Sending Logstash logs to /home/developer/logstash/logstash-5.0.0/logs which is now configured via log4j2.properties.
Configuration OK
[2016-11-27T16:16:44,015][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[developer@localhost logstash-5.0.0]$
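As an aside: during development it can be convenient to let Logstash watch the configuration file for changes, so you do not have to restart it after every edit. Logstash 5.x supports this via a command line flag:

[developer@localhost logstash-5.0.0]$ bin/logstash -f logstash-domain_log.conf --config.reload.automatic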
Start Logstash
Ensure that Elasticsearch, Kibana and your FMW instance are running. Now, start Logstash:
[developer@localhost logstash-5.0.0]$ pwd
/home/developer/logstash/logstash-5.0.0
[developer@localhost logstash-5.0.0]$ bin/logstash -f logstash-domain_log.conf
Sending Logstash logs to /home/developer/logstash/logstash-5.0.0/logs which is now configured via log4j2.properties.
[2016-11-27T16:43:36,333][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
[2016-11-27T16:43:36,344][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-11-27T16:43:36,617][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2016-11-27T16:43:36,630][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
[2016-11-27T16:43:36,696][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>750}
[2016-11-27T16:43:36,703][INFO ][logstash.pipeline        ] Pipeline main started
[2016-11-27T16:43:36,785][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
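One thing worth knowing: by default, the file input behaves like ‘tail -f’ and only picks up lines that are appended to the log file after Logstash has started. To also index the lines that are already in the file, the input from the configuration above can be extended with the start_position option (this only applies to files Logstash has not seen before):

input {
  file {
    path => "/home/developer/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/servers/AdminServer/logs/base_domain.log"
    type => "domain_log"
    # read the file from the beginning on first encounter, instead of tailing it
    start_position => "beginning"
  }
}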
Ensure that some activity on your FMW system generates log lines in the domain.log. Now, check the indexes in Elasticsearch:
[developer@localhost logstash-5.0.0]$ curl 'localhost:9200/_cat/indices?v'
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   casemilestones      jU2LVuiiSBeA30PsZnZuGQ   5   1      16060            0      4.1mb          4.1mb
yellow open   caseactivities      TupoWoM-TUm3gbRdK1fuKw   5   1       3387            0      1.8mb          1.8mb
yellow open   .kibana             NpqhxkHCQg26GjHFa_LaOQ   1   1          7            0     42.2kb         42.2kb
yellow open   logstash-2016.11.27 0piXs6jzQE6uI-j1U8Cnog   5   1         40            0    171.8kb        171.8kb
yellow open   casedata            feHn0AxLQ8Sg50a0SKDcpg   5   1      16060            0      5.7mb          5.7mb
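The logstash-2016.11.27 index is there, with the first 40 log lines. The fields that grok extracted can now be queried directly; for example, to find log lines with a specific severity (the field value and hit count will of course depend on your own log file):

[developer@localhost logstash-5.0.0]$ curl 'localhost:9200/logstash-*/_search?q=msg_severity:Notice&pretty'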
Log data in Kibana
Fire up your browser, go to Kibana and select the Management tab:
Click on ‘Index patterns’:
Click on ‘+ Add New’ and complete the form as shown below:
Click ‘Create’. This will give the result as shown below:
Now, click the ‘Discover’ tab, and (1) select the logstash-* index and (2) set the right time range:
Which results in:
And a couple of clicks further, there’s the dashboard:
Summary
Configuring Logstash to get log lines from a file into Elasticsearch is easy. The log line format determines how easy it is to process and transform the log data; with the well-defined format of Fusion Middleware log files, that turns out to be straightforward…
The other task is to define meaningful visualizations and dashboards in Kibana, which is also … easy 🙂
Comments
Hi Luc,
Do you happen to know what happens if Elasticsearch is not running when Logstash tries to execute a pipeline that is supposed to feed data into one of its indexes?
kind regards,
Lucas
Hi Lucas,
It will pick it up later, when Elasticsearch is running again. Logstash keeps track of which log lines have been processed; it will detect when your Elasticsearch is up again, and will then continue processing the log files.
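For the curious: the file input does this bookkeeping in a so-called sincedb file, which records how far into each file Logstash has read. Its location can be pinned down explicitly, for example (the path below is just an illustration, any writable location works):

file {
  path => "/home/developer/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/servers/AdminServer/logs/base_domain.log"
  type => "domain_log"
  sincedb_path => "/home/developer/logstash/sincedb-domain_log"
}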
The only thing I haven’t tried yet is how it handles log file rotations …
Best regards,
Luc