This is the 4th blog in a series on the Elastic product stack. This blog will cover the Kibana product.
The series covers:
1. Elasticsearch and Oracle Middleware – is there an opportunity?
2. Installation of Elasticsearch: installation and the indexing of – human generated – documents
3. Elasticsearch and Oracle ACM data: example with ACM data
4. Kibana for ACM dashboards: an example of dashboards with ACM data
5. Logstash and Fusion Middleware: how to get log file data into Elasticsearch
6. Beats and Fusion Middleware: a more advanced way to handle log files
Elastic advertises Kibana as ‘Your Window into the Elastic Stack’. In more detail: ‘Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack’.
Kibana can best be described as the BI tool for accessing data in Elasticsearch. Let's not dwell too long on big promises; let's get started!
This blog continues with the data that was put into Elasticsearch in the previous blog. Now, first install Kibana:
Installation & Start
Version 5.0.0 was released on October 26th 2016:
- download the tar.gz archive (there are other options): https://artifacts.elastic.co/downloads/kibana/kibana-5.0.0-linux-x86_64.tar.gz
[developer@localhost ~]$ mkdir kibana
[developer@localhost ~]$ cd kibana/
[developer@localhost kibana]$ mv ~/Downloads/kibana-5.0.0-linux-x86_64.tar.gz .
[developer@localhost kibana]$ tar -xzf kibana-5.0.0-linux-x86_64.tar.gz
[developer@localhost kibana]$
Start Kibana:
[developer@localhost ~]$ cd
[developer@localhost ~]$ cd kibana/kibana-5.0.0-linux-x86_64/
[developer@localhost kibana-5.0.0-linux-x86_64]$ ls
bin     data         node          optimize      plugins     src
config  LICENSE.txt  node_modules  package.json  README.txt  webpackShims
[developer@localhost kibana-5.0.0-linux-x86_64]$ cd bin
[developer@localhost bin]$ ./kibana
log   [15:21:17.204] [info][status][plugin:kibana@5.0.0] Status changed from uninitialized to green - Ready
log   [15:21:17.259] [info][status][plugin:elasticsearch@5.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log   [15:21:17.290] [info][status][plugin:console@5.0.0] Status changed from uninitialized to green - Ready
log   [15:21:17.466] [info][status][plugin:timelion@5.0.0] Status changed from uninitialized to green - Ready
log   [15:21:17.470] [info][listening] Server running at http://localhost:5601
log   [15:21:17.471] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
log   [15:21:22.484] [info][status][plugin:elasticsearch@5.0.0] Status changed from yellow to yellow - No existing Kibana index found
log   [15:21:22.558] [info][status][plugin:elasticsearch@5.0.0] Status changed from yellow to green - Kibana index ready
log   [15:21:22.559] [info][status][ui settings] Status changed from yellow to green - Ready
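By default, Kibana serves its UI on port 5601 and expects Elasticsearch on localhost:9200, which matches the setup from the previous blog. If your cluster runs elsewhere, adjust config/kibana.yml before starting; the snippet below shows the two relevant settings with their 5.0 default values:

```yaml
# config/kibana.yml - relevant defaults for Kibana 5.0
server.port: 5601
elasticsearch.url: "http://localhost:9200"
```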
First searches
Now, point your browser to http://localhost:5601
The opening page should look something like this:
On this page you are prompted for an index pattern: the pattern identifies the Elasticsearch indices that you want to access from Kibana. With the data set from the previous blog, the index pattern that we will use is 'case*'. Un-check the checkbox 'Index contains time-based events', like below:
The result will look like:
You can check the status page: http://localhost:5601/status.
Now, Kibana is ready to explore the data set that came from the ACM example. Click the ‘Discover’ tab.
First, we will do the same search that we did in the previous blog, but now using Kibana. There, we looked for the word 'Katwijk' using a curl command:
curl 'localhost:9200/casedata/_search?q=Katwijk&pretty'
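The `q=Katwijk` parameter is shorthand for a query_string query that searches all fields. The same search can be written out in the query DSL; a sketch, assuming the Elasticsearch instance and 'casedata' index from the previous blog:

```shell
# Build the query DSL body equivalent to ?q=Katwijk (query_string searches
# all fields by default, just like the URI shorthand).
SEARCH_QUERY='{ "query": { "query_string": { "query": "Katwijk" } } }'
echo "$SEARCH_QUERY"
# Against the live cluster from the previous blog, you would run:
# curl 'localhost:9200/casedata/_search?pretty' -d "$SEARCH_QUERY"
```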
In Kibana, that search would look like:
Note that we did not specify the index in the search: it is applied implicitly, because we selected an index pattern to work with.
Next, paste the caseId '69ebc23b-1e19-4da7-a8ed-544417b8bbd3' into the search field and observe that 12 search results (JSON documents) are found:
Data modelling
The above search example shows a sensible search strategy:
- first, find one document that concerns the case you are looking for
- then, search for all documents that concern that specific case
In order to make that search strategy work, all documents of a specific case must be relatable to each other. For the case example, this is done with the caseId: all document types have the caseId attribute included!
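The second step of the strategy (fetch every document of one case) can also be expressed directly in the query DSL, using the caseId found in the first step. A sketch, with the same assumptions as before about the cluster and index:

```shell
# Match all documents that carry the caseId found in step one; because every
# document type includes the caseId attribute, this returns the complete case.
CASE_QUERY='{ "query": { "match": { "caseId": "69ebc23b-1e19-4da7-a8ed-544417b8bbd3" } } }'
echo "$CASE_QUERY"
# Against the live cluster:
# curl 'localhost:9200/casedata/_search?pretty' -d "$CASE_QUERY"
```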
Visualizing data – completed case activities
Now, let’s move on to the Visualization part of Kibana. We will create a diagram that shows how many case activities have been completed each hour.
First, click the ‘Visualize’ tab:
Click ‘Area chart’:
Click ‘case*’:
And watch the following screen appear:
For 'buckets', under 'Select buckets type', click 'X-Axis':
Configure your buckets like shown below:
To view the result, press the ‘play’ icon:
The result looks like shown below:
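Under the hood, Kibana translates this visualization into an Elasticsearch date_histogram aggregation. A sketch of such a request; note that the timestamp field name (here 'endTime') is an assumption, since the actual field name depends on how the ACM documents were indexed:

```shell
# Count completed case activities per hour via a date_histogram aggregation.
# NOTE: the field name 'endTime' is hypothetical; use the timestamp field of
# your own activity documents.
HOUR_AGG='{ "size": 0, "aggs": { "per_hour": { "date_histogram": { "field": "endTime", "interval": "hour" } } } }'
echo "$HOUR_AGG"
# Against the live cluster:
# curl 'localhost:9200/casedata/_search?pretty' -d "$HOUR_AGG"
```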
Save the visualization by clicking on Save in the upper right corner and giving it a name:
You may notice a few things about the result:
- case activities have been completed over a period of about 2.5 days, including the weekend
- apart from the start, the number of completed case activities per hour is more or less constant
The reason for this is that the case data was generated using test scripts.
Visualizing data – completed case activities per activity type
Next, we will make a visualization that shows ‘completed case activities per hour and per activity type’.
We start with the previous visualization:
Here, click 'Add sub-buckets' and then click 'Split Area':
Now, scroll down and complete like shown below and click ‘play’ again:
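The split-area chart corresponds to a terms sub-aggregation nested inside the hourly date_histogram. Again a sketch; both field names ('endTime', 'activityName') are assumptions that depend on your document mapping:

```shell
# Per hour, split the count of completed activities by activity type.
# NOTE: 'endTime' and 'activityName' are hypothetical field names.
SPLIT_AGG='{ "size": 0, "aggs": { "per_hour": {
  "date_histogram": { "field": "endTime", "interval": "hour" },
  "aggs": { "per_activity": { "terms": { "field": "activityName" } } } } } }'
echo "$SPLIT_AGG"
# Against the live cluster:
# curl 'localhost:9200/casedata/_search?pretty' -d "$SPLIT_AGG"
```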
Summary
The first steps with Kibana are promising: easy and quick. Of course, the real proof of the pudding is in the eating, but so far, so good. What I didn't show in this blog is the dashboard option. You can create dashboards really quickly; it even put a big grin on my face. And then you'll discover the drill-down functionality. And then … oh well, you should just fiddle around with it yourself.
I do think that a critical success factor lies in the modelling of the data: 'better' data modelling makes for easier queries and better, quicker visualization results.