Using ElasticSearch, Fluentd and Kibana (for log aggregation)

In my last article I described how I installed ElasticSearch, Fluentd and Kibana (EFK).
[https://technology.amis.nl/2019/04/23/using-vagrant-and-shell-scripts-to-further-automate-setting-up-my-demo-environment-from-scratch-including-elasticsearch-fluentd-and-kibana-efk-within-minikube/]

In this article I will dive into using ElasticSearch, Fluentd and Kibana. Besides log aggregation (getting log information available at a centralized location), I will also describe how I created some visualizations within a dashboard.

Kibana 7.0 has a new sleek design, streamlined navigation, and more for an extra delightful user experience.
[https://www.elastic.co/products/kibana]

Elasticsearch

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time.
[https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html]

Elasticsearch Basic Concepts

Elasticsearch is a near-realtime search platform. What this means is there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.

An index is a collection of documents that have somewhat similar characteristics. For example, you can have an index for customer data, another index for a product catalog, and yet another index for order data. An index is identified by a name (that must be all lowercase) and this name is used to refer to the index when performing indexing, search, update, and delete operations against the documents in it.

In a single cluster, you can define as many indexes as you want.
[https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-concepts.html]

A document is a basic unit of information that can be indexed. For example, you can have a document for a single customer, another document for a single product, and yet another for a single order. This document is expressed in JSON (JavaScript Object Notation) which is a ubiquitous internet data interchange format. Within an index, you can store as many documents as you want.

An index can potentially store a large amount of data that can exceed the hardware limits of a single node. For example, a single index of a billion documents taking up 1TB of disk space may not fit on the disk of a single node or may be too slow to serve search requests from a single node alone.

To solve this problem, Elasticsearch provides the ability to subdivide your index into multiple pieces called shards. When you create an index, you can simply define the number of shards that you want. Each shard is in itself a fully-functional and independent “index” that can be hosted on any node in the cluster.

Sharding is important for two primary reasons:

  • It allows you to horizontally split/scale your content volume
  • It allows you to distribute and parallelize operations across shards (potentially on multiple nodes) thus increasing performance/throughput

The mechanics of how a shard is distributed and also how its documents are aggregated back into search requests are completely managed by Elasticsearch and are transparent to you as the user.
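
As a quick, hedged illustration (the index name “customer” and the numbers below are just examples, not something used in this demo environment), the number of shards is simply specified when creating an index:

curl -X PUT "http://localhost:9200/customer?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "settings": {
          "number_of_shards": 3,
          "number_of_replicas": 1
        }
      }'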

For more information about the basic concepts please see: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-concepts.html

On my Windows laptop, after my demo environment is set up, in a Web Browser I can use: http://localhost:9200/_count?pretty

With, for example, the following result:

[Screenshot: result of the _count query]
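
The exact numbers depend on what has been indexed so far, but the response of the _count API has roughly the following shape (illustrative values, not my actual output):

{
  "count" : 2550,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  }
}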

Fluentd

Fluentd is an open source data collector, which lets you unify the data collection and consumption for a better use and understanding of data.
[https://www.fluentd.org/]

Elasticsearch index

As you may remember from my previous article, in the vagrant directory I created a subdirectory structure configmaps/configmap-fluentd with a file fluentd.conf.

[Screenshot: the configmaps/configmap-fluentd subdirectory with the file fluentd.conf]

In this file there is a part specifying the parameters for the Elasticsearch output plugin that Fluentd will be using.

…

<match **>
   @type elasticsearch
   @id out_es
   @log_level info
   …
   logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
   logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
   index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
   type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
   <buffer>
     …
   </buffer>
</match>

Remark about index_name:

An index pattern identifies one or more Elasticsearch indices that you want to explore with Kibana. Kibana looks for index names that match the specified pattern.

By default, Kibana guesses that you’re working with log data being fed into Elasticsearch by Logstash.
[https://www.elastic.co/guide/en/kibana/current/index-patterns.html]

So, in the file fluentd.conf the value for index_name defaulted to ‘logstash’.

According to the Elasticsearch Output Plugin documentation, the parameter index_name (optional), is the index name to write events to (default: fluentd).
[https://docs.fluentd.org/v1.0/articles/out_elasticsearch#index_name-(optional)]

So, I changed the value to ‘fluentd’, because my demo environment uses Fluentd and not Logstash.

index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'fluentd'}"

You can read more about this later on.

Kibana

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack.
[https://www.elastic.co/products/kibana]

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.

Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.
[https://www.elastic.co/guide/en/kibana/current/introduction.html]

Kibana Dashboard

On my Windows laptop, after my demo environment is set up, in a Web Browser I can start the Kibana Dashboard via: http://localhost:5601/app/kibana

[Screenshot: Kibana start page]

Elasticsearch Index Management

In the Kibana Dashboard, on the left, you can navigate to Management.

[Screenshot: Kibana navigation menu, Management]

Here you can see the Elasticsearch Index Management page:

[Screenshot: Elasticsearch Index Management]

The index part of the changed file fluentd.conf looks like:

logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'fluentd'}"

In order to let the changed file fluentd.conf take effect, I entered the following commands from the Linux Command Prompt (via ssh) to delete the fluentd release and reinstall it:

kubectl delete configmaps fluentd-configmap --namespace nl-amis-logging

configmap "fluentd-configmap" deleted

helm del --purge fluentd-release

release "fluentd-release" deleted

Linux Command Prompt: cd /vagrant

Linux Command Prompt: cd scripts

Linux Command Prompt: ./fluentd.sh

With the following output:

**** Begin installing Fluentd

**** End installing Fluentd

In the Kibana Dashboard, I clicked on button “Reload indices”, and as you can see, a new index named “logstash-2019.04.27” was created.

[Screenshot: Index Management showing the new index logstash-2019.04.27]

What I wanted was “fluentd-2019.04.27”, so changing the parameter index_name in the file fluentd.conf didn’t have the desired effect. Therefore, I checked the Fluentd documentation.

logstash_format (optional)
With this option set true, Fluentd uses the conventional index name format logstash-%Y.%m.%d (default: false). This option supersedes the index_name option.
[https://docs.fluentd.org/v1.0/articles/out_elasticsearch#logstash_format-(optional)]

logstash_prefix (optional)
The logstash prefix index name to write events when specifying logstash_format as true (default: logstash).
[https://docs.fluentd.org/v1.0/articles/out_elasticsearch#logstash_prefix-(optional)]

Because the parameter logstash_format (which defaulted to ‘true’) superseded the parameter index_name in the file fluentd.conf, the Elasticsearch index name didn’t change to fluentd.

In the Kibana Dashboard, I deleted the newly created index by clicking on the index name and choosing Manage | Delete index.

[Screenshot: Manage index | Delete index]

Next, I tried:

logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'false'}"
index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'fluentd'}"

With the following result (after sorting on Name):

[Screenshot: Index Management showing the index fluentd]

With these parameter settings, the Elasticsearch index name did change to fluentd.
However, I also wanted a date notation as part of the index name.

So, I tried:

logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'false'}"
index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || fluentd-%Y.%m.%d}"

With the following output from fluentd.sh:

**** List pods with namespace nl-amis-logging
NAME                             READY   STATUS             RESTARTS   AGE
elasticsearch-6b46c44f7c-2d67j   1/1     Running            0          5d16h
fluentd-wnh2n                    0/1     CrashLoopBackOff   3          61s
kibana-6f96d679c4-2jjl7          1/1     Running            0          5d16h
**** End installing Fluentd

Here you can see that the fluentd pod gives an error (STATUS = CrashLoopBackOff).
So, these parameter settings didn’t work.
[https://github.com/uken/fluent-plugin-elasticsearch/issues/449]

Next, I tried:

logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'false'}"
index_name fluentd-%Y.%m.%d

With the following result:

[Screenshot: Index Management showing an index with an incorrectly formatted date part]

With these parameter settings, the date part of the Elasticsearch index name wasn’t presented correctly.

Finally, I tried:

logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'fluentd'}"
logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'not_used'}"

With the following result:

[Screenshot: Index Management showing the index fluentd-2019.04.27]

These parameter settings did the job: the Elasticsearch index name fluentd-2019.04.27 was created.
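
As a hedged sketch (the date suffix will of course differ per day), the presence of the index can also be verified from the command line via the _cat API:

curl "http://localhost:9200/_cat/indices/fluentd-*?v"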

Log aggregation

In my previous article I talked about the Fluentd configuration file fluentd.conf that I set up to get the logging from the booksservice containers.

In a containerized environment like Kubernetes, Pods and the containers within them can be created and deleted automatically, for example via ReplicaSets. So, it’s not always easy to know where in your environment you can find the log file you need to analyze a problem that occurred in a particular application. Via log aggregation, the log information becomes available at a centralized location.
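
To illustrate the difference: without log aggregation you would first have to locate the right Pod and then fetch its log, for example (a sketch; the generated Pod name suffix differs per environment):

# List the Pods, then fetch logs via a label selector
kubectl get pods --namespace nl-amis-testing
kubectl logs --namespace nl-amis-testing -l app=booksservice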

In the table below, you can see an overview of the booksservice Pods that are present in the demo environment, including the labels that are used:

Environment  Database      Pod                  Namespace            app           version  environment
DEV          H2 in memory  booksservice-v1.0-*  nl-amis-development  booksservice  1.0      development
DEV          H2 in memory  booksservice-v2.0-*  nl-amis-development  booksservice  2.0      development
TST          MySQL         booksservice-v1.0-*  nl-amis-testing      booksservice  1.0      testing

Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each Key must be unique for a given object.
[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/]
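
For example (a sketch based on the labels in the table above), the booksservice Pods of version 1.0 can be selected across all namespaces with a label selector:

kubectl get pods -l app=booksservice,version=1.0 --all-namespaces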

Elasticsearch index

In the Kibana Dashboard via Management | Kibana | Index Patterns you can create an index pattern.
Kibana uses index patterns to retrieve data from Elasticsearch indices for things like visualizations.
[http://localhost:5601/app/kibana#/management/kibana/index_pattern?_g=()]

[Screenshot: Create index pattern]

In the field “Index pattern” I entered fluentd*. The index pattern matched 1 index. Next, I clicked on button “Next step”.

[Screenshot: index pattern fluentd* matching 1 index]

In the field “Time Filter field name” I entered @timestamp.
The Time Filter will use this field to filter your data by time.
You can choose not to have a time field, but you will not be able to narrow down your data by a time range.
[http://localhost:5601/app/kibana#/management/kibana/index_pattern?_g=()]

[Screenshot: Time Filter field name @timestamp]

The Kibana index pattern fluentd* was created, with 11 fields.
This page lists every field in the fluentd* index and the field’s associated core type as recorded by Elasticsearch. To change a field type, use the Elasticsearch Mapping API.

[Screenshot: index pattern fluentd* with 11 fields]
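
As a side note, such an index pattern can also be created without the UI, via Kibana’s saved objects API (a hedged sketch for Kibana 7; the kbn-xsrf header is required for write operations):

curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "fluentd*", "timeFieldName": "@timestamp"}}'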

Postman

Remember that on my Windows laptop I also wanted to be able to use Postman (for sending requests); this was made possible via port forwarding.
So, I used Postman to add books to and retrieve books from the book catalog. I did this for version 1.0 and 2.0 of the BooksService application.

From Postman I invoked a request named “GetAllBooksRequest” (with method “GET” and URL “http://localhost:9010/books”).
This concerns version 1.0 in the DEV environment.
A response with “Status 200 OK” was shown (with 2 books being retrieved):

[Screenshot: Postman response with 2 books]

From Postman I invoked a request named “GetAllBooksRequest” (with method “GET” and URL http://localhost:9020/books).
This concerns version 2.0 in the DEV environment.
A response with “Status 200 OK” was shown (with 3 books being retrieved):

[Screenshot: Postman response with 3 books]

From Postman I invoked a request named “GetAllBooksRequest” (with method “GET” and URL “http://localhost:9110/books”).
This concerns version 1.0 in the TST environment.
A response with “Status 200 OK” was shown (with 4 books being retrieved):

[Screenshot: Postman response with 4 books]

Remember, each time the getAllBooks method is called, this becomes visible in the container log file.

Kibana Dashboard, Discover

In the Kibana Dashboard via Discover you can see the log files. In my case, these were Fluentd log files (warning and info) and the aggregated log files from the booksservice containers.

[Screenshot: Kibana Discover]

Based on the “Available fields” list, I could see that only a small number of fields were shown.

In the Kibana Dashboard via Management | Kibana | Index Patterns, I clicked the “Refresh fields list” icon.

[Screenshot: index pattern fluentd* with the “Refresh fields list” icon]

In the pop-up “Refresh field list?”, I clicked on button “Refresh”.

The Kibana index pattern fluentd* was recreated, with 90 fields. These include the Kubernetes metadata fields (because of my setup of fluentd.conf), such as docker.container_id and kubernetes.namespace_name.
Remember, this time I recreated the Kibana index pattern after the aggregated log files from the booksservice containers already contained information with regard to calling the getAllBooks method.
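
The fields (and their types) that Elasticsearch recorded can also be inspected outside Kibana, via the Elasticsearch mapping API (a sketch):

curl "http://localhost:9200/fluentd-*/_mapping?pretty"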

[Screenshot: index pattern fluentd* with 90 fields]

Kibana Dashboard, Discover, creating search booksservice_begin_logging

Of course, I wanted to focus on the log files from the booksservice containers, so I added a filter.

[Screenshot: Kibana Discover, Add filter]

I clicked on button “Add filter” and in the field “Field” I entered kubernetes.labels.app.keyword, as “Operator” I chose is and as “Value” I chose booksservice.

[Screenshot: filter on kubernetes.labels.app.keyword]

Then I clicked again on button “Add filter” and in the field “Field” I entered log.keyword, as “Operator” I chose is and as “Value” I chose ----Begin logging BookService.getAllBooks----.

This filtering resulted in 3 hits.

[Screenshot: Discover showing 3 hits]
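
Behind the scenes, such filters translate into term clauses inside a bool query. A hedged sketch of the (roughly) equivalent Elasticsearch query:

curl -X GET "http://localhost:9200/fluentd-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "query": {
          "bool": {
            "filter": [
              { "term": { "kubernetes.labels.app.keyword": "booksservice" } },
              { "term": { "log.keyword": "----Begin logging BookService.getAllBooks----" } }
            ]
          }
        }
      }'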

Then I saved this Search, via a click on button “Save”.

[Screenshot: Save search pop-up]

In the pop-up “Save search”, in the field “Title” I entered booksservice_begin_logging. Next, I clicked on button “Confirm Save”.
In the left top of the screen this title then becomes visible.

[Screenshot: saved search title booksservice_begin_logging]

Remark:
All the Saved Objects can be seen in the Kibana Dashboard via Management | Kibana | Saved Objects.

Let’s briefly focus on the first hit.

[Screenshot: first hit in Discover]

Via a click on icon “>”, the document is expanded.

[Screenshot: expanded document]

In this expanded document you can see, for example, the labels I configured earlier in the deployment artifact.

Label key    Label value   Field
app          booksservice  kubernetes.labels.app
version      1.0           kubernetes.labels.version
environment  testing       kubernetes.labels.environment

Generating some data (request logging) via Postman

In order to have some more logging data, from Postman I repeatedly invoked the request named “GetAllBooksRequest” for version 1.0 and 2.0 of the BooksService application.

Environment  Database      Pod                  Namespace            app           version  environment  Request count
DEV          H2 in memory  booksservice-v1.0-*  nl-amis-development  booksservice  1.0      development  3
DEV          H2 in memory  booksservice-v2.0-*  nl-amis-development  booksservice  2.0      development  5
TST          MySQL         booksservice-v1.0-*  nl-amis-testing      booksservice  1.0      testing      2

The total number of requests was 10.

So, the filtering resulted in 10 hits.

[Screenshot: Discover showing 10 hits]

Kibana Dashboard, Visualize, creating visualization booksservice_visualization_1

In the Kibana Dashboard via Visualize you can create a visualization.

[Screenshot: Kibana Visualize]

I clicked on button “Create a visualization” and selected “Pie” as the type for the visualization.

[Screenshot: New Visualization, type Pie]

As a source I chose the Saved search I created earlier.

[Screenshot: Choose a source]

After I selected Saved search booksservice_begin_logging the following became visible.

[Screenshot: new Pie visualization]

In tab “Data”, in the Split Slices part, in the field “Aggregation” I selected Terms and in “Field” I selected kubernetes.labels.environment.keyword and left the other default settings as they were.
Then I clicked on the icon “Apply changes”, with the following result:

[Screenshot: pie chart split by environment]

In tab “Options”, I selected the checkbox “Show Labels” and left the other default settings as they were.
Then I clicked on the icon “Apply changes”, with the following result:

[Screenshot: pie chart with labels shown]

In tab “Data”, I clicked on button “Add sub-buckets” and in the Split Slices part, in the field “Sub Aggregation” I selected Terms and in “Field” I selected kubernetes.labels.version.keyword and left the other default settings as they were.
Then I clicked on the icon “Apply changes”, with the following result:

[Screenshot: pie chart with version sub-buckets]
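
Under the hood this pie chart boils down to a nested terms aggregation (a hedged sketch; the filters from the saved search are omitted here for brevity):

curl -X GET "http://localhost:9200/fluentd-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "size": 0,
        "aggs": {
          "environments": {
            "terms": { "field": "kubernetes.labels.environment.keyword" },
            "aggs": {
              "versions": {
                "terms": { "field": "kubernetes.labels.version.keyword" }
              }
            }
          }
        }
      }'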

Of course, this matched the actual numbers:

Split by environment:

app           version  environment  Request count  Share (environment)
booksservice  1.0      development  3              80%
booksservice  2.0      development  5
booksservice  1.0      testing      2              20%

Split by version:

app           version  environment  Request count  Share (version)
booksservice  1.0      development  3              50%
booksservice  1.0      testing      2
booksservice  2.0      development  5              50%

The total number of requests was 10.

Then I saved this Visualization, via a click on button “Save”.

[Screenshot: Save visualization pop-up]

In the pop-up “Save visualization”, in the field “Title” I entered booksservice_visualization_1. Next, I clicked on button “Confirm Save”.
In the left top of the screen this title then becomes visible.

[Screenshot: saved visualization title booksservice_visualization_1]

Remark:
All the Saved Objects can be seen in the Kibana Dashboard via Management | Kibana | Saved Objects.

Kibana Dashboard, Dashboard, creating dashboard booksservice_dashboard_1

In the Kibana Dashboard via Dashboard you can combine data views from any Kibana app into one dashboard and see everything in one place.

[Screenshot: Kibana Dashboard]

I clicked on button “Create new dashboard”.

[Screenshot: Create new dashboard]

I clicked on button “Add” and selected visualization booksservice_visualization_1.

[Screenshot: Add panel booksservice_visualization_1]

Based on the data from the “Last 15 minutes”, this was the result:

[Screenshot: dashboard based on the last 15 minutes]

Of course, you can change the date and time period of the data you want to use.

[Screenshot: time picker]

I clicked on button “Show dates” and selected the date and time period I wanted.

[Screenshot: selected date and time period]

Next, I clicked on button “Update”, with the following result:

[Screenshot: updated dashboard]

Then I saved this Dashboard, via a click on button “Save”.

[Screenshot: Save dashboard pop-up]

In the pop-up “Save dashboard”, in the field “Title” I entered booksservice_dashboard_1. Next, I clicked on button “Confirm Save”.
In the left top of the screen this title then becomes visible.

[Screenshot: saved dashboard title booksservice_dashboard_1]

Remark:
All the Saved Objects can be seen in the Kibana Dashboard via Management | Kibana | Saved Objects.

Generating some data (request logging) via a Request generator shell script

In order to expand the dashboard with other visualizations, I first wanted to generate some extra data (request logging).

As described in a previous article, I created a subdirectory named env on my Windows laptop.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]

In the scripts subdirectory I therefore created the file requestgenerator.sh with the following content:

#!/bin/bash
echo "**** Begin request generator"

while true; do
  # Pick a random interval between 1 and 10 seconds
  requestinterval=$(( ( RANDOM % 10 ) + 1 ))

  # Depending on the interval, call one or more of the booksservice endpoints
  if [[ $(($requestinterval % 3)) == 0 ]]; then
    # TST environment, version 1.0
    curl http://localhost:9110/books
  fi
  if [[ $(($requestinterval % 4)) == 0 ]]; then
    # DEV environment, version 2.0
    curl http://localhost:9020/books
  fi
  if [[ $(($requestinterval % 5)) == 0 ]]; then
    # DEV environment, version 1.0
    curl http://localhost:9010/books
  fi
  echo ""
  # Wait random seconds
  echo "**** Waiting $requestinterval seconds ..."
  sleep $requestinterval
done

echo "**** End request generator"

This script randomly calls one or more of the booksservice endpoints.

I went to the env directory and opened a Windows Command Prompt (cmd) to access Linux (within the VirtualBox Appliance) via ssh: vagrant ssh

Linux Command Prompt: cd /vagrant

Linux Command Prompt: cd scripts

Linux Command Prompt: ./requestgenerator.sh

With the following output:

**** Begin request generator

[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"},{"id":"3","title":"Five Presidents: My Extraordinary Journey with Eisenhower, Kennedy, Johnson, Nixon, and Ford","author":"Clint Hill, Lisa McCubbin","type":"Paperback","price":11.09,"numOfPages":464,"language":"English","isbn13":"978-1476794143"},{"id":"4","title":"Where the Crawdads Sing","author":"Delia Owens","type":"Hardcover","price":16.2,"numOfPages":384,"language":"English","isbn13":"978-0735219090"}]

**** Waiting 3 seconds …

**** Waiting 1 seconds …

[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","publishDate":"2019-02-19T00:00:00.000+0000","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"publisher":"St. Martin's Press","language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","publishDate":"2018-11-13T00:00:00.000+0000","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"publisher":"Crown Publishing Group; First Edition edition","language":"English","isbn13":"978-1524763138"},{"id":"3","title":"Five Presidents: My Extraordinary Journey with Eisenhower, Kennedy, Johnson, Nixon, and Ford","publishDate":"2017-05-02T00:00:00.000+0000","author":"Clint Hill, Lisa McCubbin","type":"Paperback","price":11.09,"numOfPages":464,"publisher":"Gallery Books; Reprint edition","language":"English","isbn13":"978-1476794143"}]

**** Waiting 8 seconds …

[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","publishDate":"2019-02-19T00:00:00.000+0000","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"publisher":"St. Martin's Press","language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","publishDate":"2018-11-13T00:00:00.000+0000","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"publisher":"Crown Publishing Group; First Edition edition","language":"English","isbn13":"978-1524763138"},{"id":"3","title":"Five Presidents: My Extraordinary Journey with Eisenhower, Kennedy, Johnson, Nixon, and Ford","publishDate":"2017-05-02T00:00:00.000+0000","author":"Clint Hill, Lisa McCubbin","type":"Paperback","price":11.09,"numOfPages":464,"publisher":"Gallery Books; Reprint edition","language":"English","isbn13":"978-1476794143"}]

**** Waiting 8 seconds …

[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"},{"id":"3","title":"Five Presidents: My Extraordinary Journey with Eisenhower, Kennedy, Johnson, Nixon, and Ford","author":"Clint Hill, Lisa McCubbin","type":"Paperback","price":11.09,"numOfPages":464,"language":"English","isbn13":"978-1476794143"},{"id":"4","title":"Where the Crawdads Sing","author":"Delia Owens","type":"Hardcover","price":16.2,"numOfPages":384,"language":"English","isbn13":"978-0735219090"}]

**** Waiting 6 seconds …

^C

As you can see, after some time, I terminated the shell script.

Kibana Dashboard, Visualize, creating visualization booksservice_visualization_2

In the Kibana Dashboard via Visualize, I clicked on button “+” and selected “Data Table” as the type for the new visualization.

As a source I chose the Saved search booksservice_begin_logging and “Split Rows” as the bucket type.
Again, in the field “Aggregation”, I selected Terms and I used sub-buckets and left the other default settings as they were.

As term fields I used, among others, the Kubernetes Pod name and the Docker Container id (as can be seen in the result below).

And the result was:

[Screenshot: Data Table visualization]

Then I saved this Visualization as booksservice_visualization_2.

Remark:
All the Saved Objects can be seen in the Kibana Dashboard via Management | Kibana | Saved Objects.

Kibana Dashboard, Dashboard, editing dashboard booksservice_dashboard_1

I opened the dashboard I created earlier and clicked on button “Edit”.

[Screenshot: dashboard Edit button]

Next, I clicked on button “Add”.

[Screenshot: dashboard Add button]

In the pop-up “Add Panels”, I selected booksservice_visualization_2.

[Screenshot: Add Panels pop-up]

With the following result:

[Screenshot: dashboard with the added panel]

Then I rearranged the panels and clicked on button “Save”.

[Screenshot: rearranged dashboard]

Next, I clicked on button “Full screen”.

[Screenshot: Full screen button]

With the following result:

[Screenshot: dashboard in full screen]

In order to explain what is shown in the bottom panel (Data Table), for your convenience, below you can see a list of all the Pods within Minikube in my demo environment:

[Screenshot: list of all the Pods within Minikube]

So, now it becomes clear that the request calls made to the endpoints of the booksservice-* Services were distributed to the Pods (and the Docker Containers in them) related to those Services.

Environment  Database      Service endpoint             Pod                  Namespace            app           version  environment
DEV          H2 in memory  http://localhost:9010/books  booksservice-v1.0-*  nl-amis-development  booksservice  1.0      development
DEV          H2 in memory  http://localhost:9020/books  booksservice-v2.0-*  nl-amis-development  booksservice  2.0      development
TST          MySQL         http://localhost:9110/books  booksservice-v1.0-*  nl-amis-testing      booksservice  1.0      testing

For example, the 32 request calls to endpoint http://localhost:9110/books were distributed to the following Pods:

  • booksservice-v1.0-5bcd5fddbd-x5w9t (17 requests)
  • booksservice-v1.0-5bcd5fddbd-5sqxb (15 requests)

Besides the Kubernetes Pod name, also the Docker Container id is presented (in my setup, there is only 1 Container per Pod).

Kibana Dashboard, Management, Kibana, Saved Objects

In the Kibana Dashboard via Management | Kibana | Saved Objects you can see the Saved Objects.

[Screenshot: Saved Objects]

So now it’s time to conclude this article. I tried out some of the functionality of ElasticSearch, Fluentd and Kibana. Besides log aggregation (getting log information available at a centralized location), I also described how I created some visualizations within a dashboard.
I only covered a very small part of the things you can do with Kibana (version 7.0.0). For more information I kindly refer you to the Kibana User Guide.
[https://www.elastic.co/guide/en/kibana/current/index.html]