
Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube

For training and demo purposes, on my Windows laptop, I needed an environment with a guest Operating System, Docker and Minikube available within an Oracle VirtualBox appliance.

So, in a series of previous articles, I described how I set up that environment step by step.

In this article I will describe how I installed ElasticSearch, Fluentd and Kibana (EFK) in order to do log aggregation later on (described in another article).

Also, I wanted it to be possible to set up my demo environment from scratch, including some data (books in the catalog), without the manual steps I had to do up till now.

For example, the manual steps for:

  • installing Helm, Tiller and socat.
  • creating Docker images.
  • setting up a MySQL database named test.
  • installing several Helm charts.

Besides already being able to use the Kubernetes Dashboard (in a Web Browser) on my Windows laptop (via port forwarding), I also wanted to be able to use Postman (for sending requests) and the Kibana Dashboard on my Windows laptop, whilst all the software is running in the VirtualBox appliance (created with Vagrant).

So, in this article I will also describe how I used Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana.

Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.
[https://www.elastic.co/products/elasticsearch]

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time.
[https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html]

Fluentd

Fluentd is an open source data collector, which lets you unify the data collection and consumption for a better use and understanding of data.
[https://www.fluentd.org/]

Fluentd Events

Fluentd's input sources (where all the data comes from) are enabled by selecting and configuring the desired input plugins using source directives.

The source submits events into Fluentd's routing engine. An event consists of three entities: tag, time and record.

  • The tag is a string separated by ‘.’s (e.g. myapp.access), and is used as the directions for Fluentd’s internal routing engine. → Where an event comes from
  • The time field is specified by input plugins, and it must be in the Unix time format. → When an event happens.
  • The record is a JSON object. → Actual log content.

The input plugin is responsible for generating a Fluentd event from data sources.

Here is a brief overview of the life of a Fluentd event:
[https://docs.fluentd.org/v1.0/articles/config-file#introduction:-the-life-of-a-fluentd-event]

[Series of screenshots from the Fluentd documentation, illustrating the life of a Fluentd event]

A configuration file allows the user to control the input and output behavior of Fluentd by (1) selecting input and output plugins and (2) specifying the plugin parameters. The file is required for Fluentd to operate properly.

If you’re using the Docker container, the default location is located at /fluentd/etc/fluent.conf.
[https://docs.fluentd.org/v1.0/articles/config-file#config-file-location]

Kibana

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack.
[https://www.elastic.co/products/kibana]

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.

Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.
[https://www.elastic.co/guide/en/kibana/current/introduction.html]

Adding logging to the BooksService application

In order to be able to, later on, demonstrate log aggregation via ElasticSearch, Fluentd and Kibana (EFK), I added some extra logging to BookService.java.

package nl.amis.demo.services.books_service.application;

import nl.amis.demo.services.books_service.BooksServiceApplication;
import nl.amis.demo.services.books_service.domain.Book;
import nl.amis.demo.services.books_service.infrastructure.persistence.BookRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

@Service
public class BookService {

    private static final Logger logger = LoggerFactory.getLogger(BookService.class);

    @Autowired
    private BookRepository bookRepository;

    public List<Book> getAllBooks() {
        logger.info("\n----Begin logging BookService.getAllBooks----");
        List<Book> books = new ArrayList<>();
        bookRepository.findAll().forEach(books::add);
        logger.info("\n---- " + books.size() + " books found ----");
        logger.info("\n----End logging BookService.getAllBooks----");
        return books;
    }

    public Book getBook(String id) {
        return bookRepository.findById(id).orElseGet(Book::new);
    }

    public void addBook(Book whiskey) {
        bookRepository.save(whiskey);
    }

    public void updateBook(String id, Book whiskey) {
        bookRepository.save(whiskey);
    }

    public void deleteBook(String id) {
        bookRepository.deleteById(id);
    }

}

So, each time the getAllBooks method is called, this becomes visible in the log file.

Changing the testing profile

As you may remember, in the testing profile of the BooksService application, the connection to the external MySQL database was configured.
By setting the spring.jpa.hibernate.ddl-auto property to the value create, the database schema is first dropped and then recreated by Hibernate at application startup.
[https://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html]
[https://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/Hibernate_User_Guide.html#domain-model, see paragraph “Automatic schema generation”]

Of course, this automatic schema generation is very convenient in a development environment. In my case the test database and the book table are automatically created.
Unfortunately, this has the side effect that after stopping the container that runs the BooksService application and starting a new one, the previously inserted data (books in the catalog) is gone.

In order to confirm what actually happens, I used a mysql-client.

kubectl --namespace=nl-amis-testing run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-service.nl-amis-testing -ppassword

If you don't see a command prompt, try pressing enter.

I had to press Enter.

mysql> show databases;

With the following output:

[Screenshot: output of the show databases command]

mysql> select create_time from information_schema.tables where table_schema = 'test';

With the following output:

[Screenshot: output of the create_time query for the test schema]

mysql> select create_time from information_schema.tables where table_schema = 'test' and table_name = 'book';

With the following output:

[Screenshot: output of the create_time query for the book table]

mysql> select * from book;

With the following output:

[Screenshot: the rows in the book table]

Next, via the Minikube Dashboard I deleted one of the booksservice-v1.0 pods in the nl-amis-testing namespace. The ReplicaSet ensures that 2 pod replicas are running at any given time. So, a new pod is automatically created and the BooksService application started.

After that I repeated the SQL statements via mysql-client.

mysql> select create_time from information_schema.tables where table_schema = 'test';

With the following output:

[Screenshot: output of the create_time query for the test schema, after the pod was recreated]

mysql> select create_time from information_schema.tables where table_schema = 'test' and table_name = 'book';

With the following output:

[Screenshot: output of the create_time query for the book table, after the pod was recreated]

mysql> select * from book;

With the following output:

Empty set (0.00 sec)

This confirms that when the BooksService application is started (in a newly created pod), the test database and the book table are automatically created and therefore the previously inserted data is gone. To avoid this behavior, I chose to use the value none for the spring.jpa.hibernate.ddl-auto property.

Because later on I had to create the book table myself, I wanted to know its metadata.

mysql> desc book;

With the following output:

[Screenshot: output of desc book]

mysql> show create table book;

With the following output:

[Screenshot: output of show create table book]

mysql> exit

Bye

pod "mysql-client" deleted

I changed the content of application-testing.properties to:

logging.level.root=INFO
server.port=9091
nl.amis.environment=testing
spring.datasource.url=jdbc:mysql://localhost:3306/test?allowPublicKeyRetrieval=true&useSSL=false
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto=none

With the changed logging and testing profile in place, within IntelliJ IDEA, I created a new Executable jar for each of the two versions of the BooksService application.

When the new jar was used in Minikube and a REST call was made to get all books, this was part of the log from the booksservice-v1-0-container:


[ main] n.a.d.s.b.BooksServiceApplication : Started BooksServiceApplication in 68.099 seconds (JVM running for 74.064)
[ main] n.a.d.s.b.BooksServiceApplication : ----Begin logging BooksServiceApplication----
[ main] n.a.d.s.b.BooksServiceApplication : ----System Properties from VM Arguments----
[ main] n.a.d.s.b.BooksServiceApplication : server.port: null
[ main] n.a.d.s.b.BooksServiceApplication : ----Program Arguments----
[ main] n.a.d.s.b.BooksServiceApplication : Currently active profile - testing
[ main] n.a.d.s.b.BooksServiceApplication : ----Environment Properties----
[ main] n.a.d.s.b.BooksServiceApplication : server.port: 9091
[ main] n.a.d.s.b.BooksServiceApplication : nl.amis.environment: testing
[ main] n.a.d.s.b.BooksServiceApplication : spring.datasource.url: jdbc:mysql://mysql-service.nl-amis-testing/test?allowPublicKeyRetrieval=true&useSSL=false
[ main] n.a.d.s.b.BooksServiceApplication : spring.datasource.username: root
[ main] n.a.d.s.b.BooksServiceApplication : spring.datasource.password: password
[ main] n.a.d.s.b.BooksServiceApplication : spring.jpa.database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
[ main] n.a.d.s.b.BooksServiceApplication : spring.jpa.hibernate.ddl-auto: none
[ main] n.a.d.s.b.BooksServiceApplication : ----End logging BooksServiceApplication----
[nio-9091-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
[nio-9091-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
[nio-9091-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 52 ms
[nio-9091-exec-1] n.a.d.s.b.application.BookService :
----Begin logging BookService.getAllBooks----
[nio-9091-exec-1] o.h.h.i.QueryTranslatorFactoryInitiator : HHH000397: Using ASTQueryTranslatorFactory
[nio-9091-exec-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1146, SQLState: 42S02
[nio-9091-exec-1] o.h.engine.jdbc.spi.SqlExceptionHelper : Table 'test.book' doesn't exist
[nio-9091-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet] with root cause
java.sql.SQLSyntaxErrorException: Table 'test.book' doesn't exist
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120) ~[mysql-connector-java-8.0.15.jar!/:8.0.15]

So now, because of the changed testing profile, the book table isn’t automatically generated anymore, hence the SQL error message “Table ‘test.book’ doesn’t exist”.

In the scripts directory I created a file mysql.sh (see later on in this article) where I create the book table myself.
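
The exact content of mysql.sh follows later in this article; purely as an illustration, here is a minimal sketch of the mechanism it could use (the CREATE TABLE columns are left out and should match the show create table book output above):

#!/bin/bash
# Sketch only: run a one-off mysql client pod and feed it the DDL, assuming the
# mysql-service in namespace nl-amis-testing and the root password used earlier.
kubectl --namespace=nl-amis-testing run -i --rm --image=mysql:5.6 --restart=Never mysql-client -- \
  mysql -h mysql-service.nl-amis-testing -ppassword \
  -e "CREATE DATABASE IF NOT EXISTS test; /* CREATE TABLE test.book (...); */"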

Vagrantfile

To further automate setting up my demo environment, I changed the content of Vagrantfile to:
[in bold, I highlighted the changes]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  
  config.vm.define "ubuntu_minikube_helm_efk" do |ubuntu_minikube_helm_efk|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 5601,
      host:  5601,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9200,
      host:  9200,
      auto_correct: true  
      
    config.vm.network "forwarded_port",
      guest: 9010,
      host:  9010,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9020,
      host:  9020,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9110,
      host:  9110,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu Minikube Helm EFK"
        vb.memory = "8192"
        vb.cpus = "1"
        
    args = []
    config.vm.provision "shell",
        path: "scripts/docker.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/minikube.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/kubectl.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/helm.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/namespaces.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/elasticsearch.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/kibana.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/fluentd.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/mysql.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/booksservices.sh",
        args: args
    end
    
  end

end

As you can see in this file, I used extra shell scripts to install the software needed for my demo environment. Besides already being able to use the Kubernetes Dashboard (in a Web Browser) on my Windows laptop (via port forwarding), I also wanted to be able to use Postman (for sending requests) and the Kibana Dashboard on my Windows laptop. So again, I used the forwarded_port configuration option to forward a port on my host (Windows) to a port on my guest (Ubuntu).

Vagrant forwarded ports allow you to access a port on your host machine and have all data forwarded to a port on the guest machine, over either TCP or UDP.
[https://www.vagrantup.com/docs/networking/forwarded_ports.html]

From the subdirectory named env on my Windows laptop, I opened a Windows Command Prompt (cmd) and typed: vagrant up

This command creates and configures guest machines according to your Vagrantfile.
[https://www.vagrantup.com/docs/cli/up.html]

With the following output:

Bringing machine ‘ubuntu_minikube_helm_efk’ up with ‘virtualbox’ provider…
==> ubuntu_minikube_helm_efk: Importing base box ‘ubuntu/xenial64’…
==> ubuntu_minikube_helm_efk: Matching MAC address for NAT networking…
==> ubuntu_minikube_helm_efk: Checking if box ‘ubuntu/xenial64’ version ‘20190215.0.0’ is up to date…
==> ubuntu_minikube_helm_efk: A newer version of the box ‘ubuntu/xenial64’ for provider ‘virtualbox’ is
==> ubuntu_minikube_helm_efk: available! You currently have version ‘20190215.0.0’. The latest is version
==> ubuntu_minikube_helm_efk: ‘20190406.0.0’. Run `vagrant box update` to update.
==> ubuntu_minikube_helm_efk: Setting the name of the VM: Ubuntu Minikube Helm EFK
==> ubuntu_minikube_helm_efk: Clearing any previously set network interfaces…
==> ubuntu_minikube_helm_efk: Preparing network interfaces based on configuration…
ubuntu_minikube_helm_efk: Adapter 1: nat
==> ubuntu_minikube_helm_efk: Forwarding ports…
ubuntu_minikube_helm_efk: 8001 (guest) => 8001 (host) (adapter 1)
ubuntu_minikube_helm_efk: 5601 (guest) => 5601 (host) (adapter 1)
ubuntu_minikube_helm_efk: 9200 (guest) => 9200 (host) (adapter 1)
ubuntu_minikube_helm_efk: 9010 (guest) => 9010 (host) (adapter 1)
ubuntu_minikube_helm_efk: 9020 (guest) => 9020 (host) (adapter 1)
ubuntu_minikube_helm_efk: 9110 (guest) => 9110 (host) (adapter 1)
ubuntu_minikube_helm_efk: 22 (guest) => 2222 (host) (adapter 1)
==> ubuntu_minikube_helm_efk: Running ‘pre-boot’ VM customizations…
==> ubuntu_minikube_helm_efk: Booting VM…

ubuntu_minikube_helm_efk: **** End installing Helm
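
After the provisioning has finished, the ports that were actually forwarded (Vagrant may auto-correct them when there is a collision) can be listed from the same env directory; for example:

vagrant port ubuntu_minikube_helm_efk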

In the table below, you can see an overview of the Pods in the demo environment that this Vagrantfile will create:

Environment | Database     | Pod                 | Namespace           | nodePort | port | targetPort
DEV         | H2 in memory | booksservice-v1.0-* | nl-amis-development | 30010    | 9190 | 9090
DEV         | H2 in memory | booksservice-v2.0-* | nl-amis-development | 30020    | 9190 | 9090
TST         | MySQL        | booksservice-v1.0-* | nl-amis-testing     | 30110    | 9191 | 9091
TST         | MySQL        | mysql-*             | nl-amis-testing     |          | 3306 |
            |              | elasticsearch-*     | nl-amis-logging     | 30200    | 9200 | 9200
            |              | fluentd-*           | nl-amis-logging     |          |      |
            |              | kibana-*            | nl-amis-logging     | 30601    | 5601 | 5601

Now I am going to describe the content of the extra shell scripts. They are mainly used to automate the manual steps I already described in my previous articles.

You will find that I used the sleep command in order to wait for a Pod to reach the "Running" status. Again, I used Helm, the package manager for Kubernetes, to install Kubernetes Objects.
[https://technology.amis.nl/2019/03/12/using-helm-the-package-manager-for-kubernetes-to-install-two-versions-of-a-restful-web-service-spring-boot-application-within-minikube/]
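
As an alternative to a fixed sleep, the readiness of the pods could also be polled explicitly. A sketch, assuming the app labels used in this article:

# Block until all pods labelled app=elasticsearch in nl-amis-logging are Ready (or the timeout expires)
kubectl wait --for=condition=Ready pod -l app=elasticsearch --namespace nl-amis-logging --timeout=300s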

Besides using kubectl proxy (for exposing the Kubernetes Dashboard), I used socat to forward a local port to a port on the minikube node.

Remark:
Using kubectl port-forward for this wasn't an option, because it only works on Pods.
[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward]


Installing helm

In the scripts directory I created a file helm.sh with the following content:

 
#!/bin/bash
echo "**** Begin installing Helm"

#Install socat
sudo apt-get install socat

#Install Helm client-side
sudo snap install helm --classic

#Install Tiller (Helm server-side)
helm init

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List pods
echo "**** List pods with namespace kube-system"
kubectl get pods --namespace kube-system

echo "**** End installing Helm"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing Helm

ubuntu_minikube_helm_efk: Setting up socat (1.7.3.1-1) …
ubuntu_minikube_helm_efk: helm 2.13.1 from ‘snapcrafters’ installed

ubuntu_minikube_helm_efk: Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

ubuntu_minikube_helm_efk: Happy Helming!

ubuntu_minikube_helm_efk: **** End installing Helm
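
Instead of (or in addition to) the fixed wait in helm.sh, the Tiller deployment created by helm init could be checked explicitly. A sketch, assuming the default deployment name tiller-deploy:

# Wait for the Tiller deployment to finish rolling out, then verify the client and server versions
kubectl --namespace kube-system rollout status deployment/tiller-deploy
helm version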

Installing namespaces

In my previous article I already described the use of a namespace-chart helm chart.
[https://technology.amis.nl/2019/03/12/using-helm-the-package-manager-for-kubernetes-to-install-two-versions-of-a-restful-web-service-spring-boot-application-within-minikube/]

I did need another namespace, so I added to the yaml directory a file namespace-logging.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: "nl-amis-logging"
  labels:
    name: "nl-amis-logging"

In the scripts directory I created a file namespaces.sh with the following content:

#!/bin/bash
echo "**** Begin installing namespaces"

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/namespace-chart/*
helm create namespace-chart

rm -rf /vagrant/helmcharts/namespace-chart/templates/*
cp /vagrant/yaml/namespace*.yaml /vagrant/helmcharts/namespace-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart namespace-chart"
helm install ./namespace-chart --name namespace-release

# Wait 30 seconds
echo "**** Waiting 30 seconds ..."
sleep 30

#List helm releases
echo "**** List helm releases"
helm list -d

#List namespaces
echo "**** List namespaces"
kubectl get namespaces

echo "**** End installing namespaces"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing namespaces

ubuntu_minikube_helm_efk: **** List namespaces
ubuntu_minikube_helm_efk: NAME STATUS AGE
ubuntu_minikube_helm_efk: default Active 4m20s
ubuntu_minikube_helm_efk: kube-public Active 4m16s
ubuntu_minikube_helm_efk: kube-system Active 4m20s
ubuntu_minikube_helm_efk: nl-amis-development Active 30s
ubuntu_minikube_helm_efk: nl-amis-logging Active 30s
ubuntu_minikube_helm_efk: nl-amis-testing Active 30s
ubuntu_minikube_helm_efk: **** End installing namespaces

Installing elasticsearch

I added to the yaml directory a file deployment-elasticsearch.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: nl-amis-logging
  labels:
    app: elasticsearch
    version: "7.0.0"
    environment: logging
spec:
  selector:
    matchLabels:
      app: elasticsearch
      version: "7.0.0"
      environment: logging
  template:
    metadata:
      labels:
        app: elasticsearch
        version: "7.0.0"
        environment: logging
    spec:
      containers:
      - name: elasticsearch-container
        image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
        volumeMounts:
          - name: storage
            mountPath: /data
      volumes:
        - name: storage
          emptyDir: {}

Remark:
This deployment is based on the following command line:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.0.0

[https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, see Running Elasticsearch from the command line, Development mode]

The replicas are set to 1, so the ReplicaSet ensures that 1 pod replica is running at any given time.
The ReplicaSet manages all the pods with labels that match the selector. In my case these labels are:

Label key   | Label value
app         | elasticsearch
version     | 7.0.0
environment | logging

I added to the yaml directory a file service-elasticsearch.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-service
  namespace: nl-amis-logging
  labels:
    app: elasticsearch
    version: "7.0.0"
    environment: logging
spec:
  type: NodePort
  selector:
    app: elasticsearch
    version: "7.0.0"
    environment: logging
  ports:
  - nodePort: 30200
    port: 9200
    targetPort: 9200
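
With this NodePort service in place, Elasticsearch should be reachable on port 30200 of the minikube node. A quick check from within the guest, assuming minikube ip returns the node address:

curl http://$(minikube ip):30200/_count?pretty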

In the scripts directory I created a file elasticsearch.sh with the following content:

#!/bin/bash
echo "**** Begin installing Elasticsearch"

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/elasticsearch-chart/*
helm create elasticsearch-chart

rm -rf /vagrant/helmcharts/elasticsearch-chart/templates/*
cp /vagrant/yaml/*elasticsearch.yaml /vagrant/helmcharts/elasticsearch-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart elasticsearch-chart"
helm install ./elasticsearch-chart --name elasticsearch-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-logging"
kubectl get pods --namespace nl-amis-logging

#List services
echo "**** List services with namespace nl-amis-logging"
kubectl get service --namespace nl-amis-logging

echo "**** Determine the IP of the minikube node"
nodeIP=$(kubectl get node minikube -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"

echo "**** Via socat forward local port 9200 to port 30200 on the minikube node ($nodeIP)"
socat tcp-listen:9200,fork tcp:$nodeIP:30200 &

echo "**** Send a request to Elasticsearch"
curl -XGET http://localhost:9200/_count?pretty

echo "**** End installing Elasticsearch"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing Elasticsearch

ubuntu_minikube_helm_efk: **** Determine the IP of the minikube node
ubuntu_minikube_helm_efk: ---10.0.2.15---
ubuntu_minikube_helm_efk: **** Via socat forward local port 9200 to port 30200 on the minikube node (10.0.2.15)
ubuntu_minikube_helm_efk: **** Send a request to Elasticsearch

ubuntu_minikube_helm_efk: {
ubuntu_minikube_helm_efk:   "count" : 0,
ubuntu_minikube_helm_efk:   "_shards" : {
ubuntu_minikube_helm_efk:     "total" : 0,
ubuntu_minikube_helm_efk:     "successful" : 0,
ubuntu_minikube_helm_efk:     "skipped" : 0,
ubuntu_minikube_helm_efk:     "failed" : 0
ubuntu_minikube_helm_efk:   }
ubuntu_minikube_helm_efk: }

ubuntu_minikube_helm_efk: **** End installing Elasticsearch

Remark:
On my Windows laptop, after my demo environment is set up, in a Web Browser I can use: http://localhost:9200/_count?pretty

With for example the following result:

[Screenshot: the JSON response from http://localhost:9200/_count?pretty in a web browser]

Installing kibana

I added to the yaml directory a file deployment-kibana.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: nl-amis-logging
spec:
  selector:
    matchLabels:
      app: kibana
      version: "7.0.0"
      environment: logging
  template:
    metadata:
      labels:
        app: kibana
        version: "7.0.0"
        environment: logging
    spec:
      containers:
      - name: kibana-container
        image: docker.elastic.co/kibana/kibana:7.0.0
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-service.nl-amis-logging:9200
        ports:
        - containerPort: 5601

Remark:
This deployment is based on the following command line:

docker pull docker.elastic.co/kibana/kibana:7.0.0

[https://www.elastic.co/guide/en/kibana/current/docker.html#pull-image]

The default settings configure Kibana to run on localhost:5601.
[https://www.elastic.co/guide/en/kibana/7.0/settings.html#settings]

Under Docker, Kibana can be configured via environment variables.
[https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config]

The following settings have different default values when using the Docker images:

Environment Variable | Kibana Setting | Default value
SERVER_NAME | server.name | kibana
SERVER_HOST | server.host | "0"
ELASTICSEARCH_HOSTS | elasticsearch.hosts | http://elasticsearch:9200
XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED | xpack.monitoring.ui.container.elasticsearch.enabled | true

These settings are defined in the default kibana.yml. They can be overridden with a custom kibana.yml or via environment variables.
[https://www.elastic.co/guide/en/kibana/current/docker.html#docker-defaults]

I changed the value of the environment variable ELASTICSEARCH_HOSTS to: http://elasticsearch-service.nl-amis-logging:9200
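
To verify that this service name resolves from inside the cluster, a throwaway pod can be used. A sketch, assuming the busybox image:

kubectl run -i --rm busybox-test --image=busybox --restart=Never --namespace=nl-amis-logging -- \
  wget -qO- http://elasticsearch-service.nl-amis-logging:9200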

The replicas are set to 1, so the ReplicaSet ensures that 1 pod replica is running at any given time.
The ReplicaSet manages all the pods with labels that match the selector. In my case these labels are:

Label key   | Label value
app         | kibana
version     | 7.0.0
environment | logging

I added to the yaml directory a file service-kibana.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: kibana-service
  namespace: nl-amis-logging
  labels:
    app: kibana
    version: "7.0.0"
    environment: logging
spec:
  type: NodePort
  selector:
    app: kibana
    version: "7.0.0"
    environment: logging
  ports:
  - nodePort: 30601
    port: 5601
    targetPort: 5601

In the scripts directory I created a file kibana.sh with the following content:

#!/bin/bash
echo "**** Begin installing Kibana"

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/kibana-chart/*
helm create kibana-chart

rm -rf /vagrant/helmcharts/kibana-chart/templates/*
cp /vagrant/yaml/*kibana.yaml /vagrant/helmcharts/kibana-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart kibana-chart"
helm install ./kibana-chart --name kibana-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-logging"
kubectl get pods --namespace nl-amis-logging

#List services
echo "**** List services with namespace nl-amis-logging"
kubectl get service --namespace nl-amis-logging

echo "**** Determine the IP of the minikube node"
nodeIP=$(kubectl get node minikube -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"

echo "**** Via socat forward local port 5601 to port 30601 on the minikube node ($nodeIP)"
socat tcp-listen:5601,fork tcp:$nodeIP:30601 &

echo "**** End installing Kibana"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing Kibana

ubuntu_minikube_helm_efk: **** Determine the IP of the minikube node
ubuntu_minikube_helm_efk: ---10.0.2.15---
ubuntu_minikube_helm_efk: **** Via socat forward local port 5601 to port 30601 on the minikube node (10.0.2.15)
ubuntu_minikube_helm_efk: **** End installing Kibana

Remark:
On my Windows laptop, after my demo environment is set up, in a Web Browser I can start the Kibana Dashboard via: http://localhost:5601/app/kibana

[Screenshot: the Kibana Dashboard in a web browser]
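
Besides opening the dashboard, Kibana's status API offers a quick health check, both inside the guest and (via the forwarded port) from the host; for example:

curl http://localhost:5601/api/status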

Installing fluentd

I had a look at “Kubernetes Logging with Fluentd”, which has a reference to the following Git repository:
[https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd]

[https://github.com/fluent/fluentd-kubernetes-daemonset]

From this “Fluentd daemonset for Kubernetes and it’s Docker image” Git repository, I extracted the file fluentd-daemonset-elasticsearch.yaml with the following content:
[https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml]

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-logging"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          # Option to configure elasticsearch plugin with self signed certs
          # ================================================================
          - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
            value: "true"
          # X-Pack Authentication
          # =====================
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "changeme"
          # Logz.io Authentication
          # ======================
          - name: LOGZIO_TOKEN
            value: "ThisIsASuperLongToken"
          - name: LOGZIO_LOGTYPE
            value: "kubernetes"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

From this “Fluentd daemonset for Kubernetes and it’s Docker image” Git repository, I also extracted the file fluentd-daemonset-elasticsearch-rbac.yaml with the following content:
[https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch-rbac.yaml]
[In bold, I highlighted the differences with file fluentd-daemonset-elasticsearch.yaml]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-logging"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          # X-Pack Authentication
          # =====================
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

This file uses Role-based access control (RBAC). For more information, see for example:

https://kubernetes.io/docs/reference/access-authn-authz/authorization/

Based on the extracted file fluentd-daemonset-elasticsearch-rbac.yaml, I created my own yaml files for installing fluentd.
I used nl-amis-logging as the namespace.

I added to the yaml directory a file serviceaccount-fluentd.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-serviceaccount
  namespace: nl-amis-logging

I added to the yaml directory a file clusterrole-fluentd.yaml with the following content:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd-clusterrole
  namespace: nl-amis-logging
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

I added to the yaml directory a file clusterrolebinding-fluentd.yaml with the following content:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd-clusterrolebinding
  namespace: nl-amis-logging
roleRef:
  kind: ClusterRole
  name: fluentd-clusterrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd-serviceaccount
  namespace: nl-amis-logging

I added to the yaml directory a file daemonset-fluentd.yaml with the following content:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: nl-amis-logging
  labels:
    app: fluentd
    version: "1.0"
    environment: logging
spec:
  template:
    metadata:
      labels:
        app: fluentd
        version: "1.0"
        environment: logging
    spec:
      serviceAccount: fluentd-serviceaccount
      serviceAccountName: fluentd-serviceaccount
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-container
        image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-elasticsearch-1.7
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-service.nl-amis-logging"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: "disable"
          - name: FLUENTD_CONF
            value: "custom-config/fluentd.conf"
          # X-Pack Authentication
          # =====================
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config-volume
          mountPath: /fluentd/etc/custom-config
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config-volume
        configMap:
          name: fluentd-configmap

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
A typical use of a DaemonSet is running a logs collection daemon on every node, such as fluentd or logstash.
[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/]
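
Once the fluentd chart is installed (see fluentd.sh below), the DaemonSet and the pod it schedules per node can be checked with, for example:

kubectl get daemonset fluentd --namespace nl-amis-logging
kubectl get pods --namespace nl-amis-logging -l app=fluentd -o wide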

Remark about the image:
As mentioned earlier, from the Git repository, I extracted the file fluentd-daemonset-elasticsearch-rbac.yaml which uses the following image:

fluent/fluentd-kubernetes-daemonset:elasticsearch

After I created the fluentd DaemonSet, the log file from the fluentd-container Container gave the following output:


[info]: reading config file path="/fluentd/etc/fluent.conf"
[info]: starting fluentd-0.12.43

So, the version is: 0.12.43

In the Git repository README.md, the section about “Image versions” mentions:

The following repository expose images based on Alpine Linux and Debian. For production environments we strongly suggest to use Debian images.

Fluentd versioning is as follows:

Series | Description
v1.x   | stable
v0.12  | Old stable, no longer updated

[https://github.com/fluent/fluentd-kubernetes-daemonset]

Of course, I wanted to use a more recent version. In the Git repository README.md, the section "Supported tags and respective Dockerfile links" mentions, among others, the tags:

v1.3.3-debian-elasticsearch-1.7,v1.3-debian-elasticsearch-1

[Screenshot: the image tags in the README.md of the Git repository]

So, I chose the image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-elasticsearch-1.7

Remark about the env FLUENT_ELASTICSEARCH_*:
Because Fluentd has to be able to send events to Elasticsearch, it has to know its URL, in the format <SCHEME>://<HOST>:<PORT>.

Remark about volumes and volumeMounts:
To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
A process in a container sees a filesystem view composed from their Docker image and volumes. The Docker image is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image.

A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
[https://kubernetes.io/docs/concepts/storage/volumes/#hostpath]

Remark about systemd:
If you don’t setup systemd in the container, fluentd shows following messages by default configuration.

[warn]: #0 [in_systemd_bootkube] Systemd::JournalError: No such file or directory retrying in 1s
[warn]: #0 [in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in 1s
[warn]: #0 [in_systemd_docker] Systemd::JournalError: No such file or directory retrying in 1s

You can suppress these messages by setting disable to FLUENTD_SYSTEMD_CONF environment variable in your kubernetes configuration.
[https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/README.md]

Remark about the fluentd config file:
If you’re using the Docker container, the default location is located at /fluentd/etc/fluent.conf.

You can change default configuration file location via FLUENT_CONF.
[https://docs.fluentd.org/v1.0/articles/config-file#config-file-location]

As you can see, I changed the default configuration file location (via FLUENT_CONF) to: /fluentd/etc/custom-config/fluentd.conf

Therefore, in the vagrant directory I created a subdirectory structure configmaps/configmap-fluentd with a file fluentd.conf with the following content:

<source>
  @type tail
  @id in_tail_booksservice_logs
  path "/var/log/containers/booksservice*.log"
  pos_file "/var/log/fluentd-booksservice.log.pos"
  tag "kubernetes.*"
  read_from_head true
  <parse>
    @type "json"
    time_format "%Y-%m-%dT%H:%M:%S.%NZ"
    time_type string
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata
  @id filter_kube_metadata
</filter>

<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{2}-\d{2}/
    format1 /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s+(?<level>[^\s]+)\s+(\[(?<service>[^,]*),(?<trace>[^,]*),(?<span>[^,]*),(?<exportable>[^\]]*)\]\s+)?(?<pid>\d+)\s+---\s+\[(?<thread>[^\]]+)\]\s+(?<source>[^\s]+)\s*:\s+(?<message>.*)/
    time_format %Y-%m-%d %H:%M:%S.%N
  </parse>
</filter>

<match **>
   @type elasticsearch
   @id out_es
   @log_level info
   include_tag_key true
   host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
   port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
   path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
   scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
   ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
   ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1'}"
   user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
   password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
   reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
   reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
   reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
   logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
   logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
   index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
   type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
   <buffer>
     flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
     flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
     chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
     queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
     retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
     retry_forever true
   </buffer>
</match>

Remark:
As a base for this file, I used the information mentioned in the article from my colleague from First8 (a sister company within the Conclusion ecosystem).
[https://technology.first8.nl/kubernetes-logging/]
I used parts of the content of the default configuration file /fluentd/etc/fluent.conf (see later on in this article) and also the match part from the file fluent.conf that I extracted from the "Fluentd daemonset for Kubernetes and it's Docker image" Git repository mentioned and used earlier.
[https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.3/debian-elasticsearch/conf/fluent.conf]

I only wanted to get the logging from the booksservice containers, so I made some changes, which I highlighted in bold in the file above.

Remark about using a ConfigMap:
By using a ConfigMap, you can provide configuration data to an application without storing it in the container image or hardcoding it into the pod specification.
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/]

I created a ConfigMap that holds the fluentd config file using:

kubectl create configmap fluentd-configmap --from-file=/vagrant/configmaps/configmap-fluentd --namespace nl-amis-logging

Next, I added labels to the ConfigMap using:

kubectl label configmap fluentd-configmap --namespace nl-amis-logging app=fluentd
kubectl label configmap fluentd-configmap --namespace nl-amis-logging version="1.0"
kubectl label configmap fluentd-configmap --namespace nl-amis-logging environment=logging
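
The labels can be checked afterwards with, for example:

kubectl get configmap fluentd-configmap --namespace nl-amis-logging --show-labels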

A ConfigMap can be created via a yaml file, but not if you want to use the from-file option, because kubernetes isn’t aware of the local file’s path.
[https://stackoverflow.com/questions/51268488/kubernetes-configmap-set-from-file-in-yaml-configuration]

You must create a ConfigMap before referencing it in a Pod specification (unless you mark the ConfigMap as “optional”). If you reference a ConfigMap that doesn’t exist, the Pod won’t start.
ConfigMaps reside in a specific namespace. A ConfigMap can only be referenced by pods residing in the same namespace.
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/]

When you create a ConfigMap using --from-file, the filename becomes a key stored in the data section of the ConfigMap. The file contents become the key's value.
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume]
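
Because the filename becomes the key, that single key can also be read back directly. A sketch using jsonpath (the dot in the key name has to be escaped):

kubectl get configmap fluentd-configmap --namespace nl-amis-logging -o jsonpath='{.data.fluentd\.conf}'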

In the scripts directory I created a file fluentd.sh with the following content:

#!/bin/bash
echo "**** Begin installing Fluentd"

#Create ConfigMap before creating DaemonSet
kubectl create configmap fluentd-configmap --from-file=/vagrant/configmaps/configmap-fluentd --namespace nl-amis-logging

#Label ConfigMap
kubectl label configmap fluentd-configmap --namespace nl-amis-logging app=fluentd
kubectl label configmap fluentd-configmap --namespace nl-amis-logging version="1.0"
kubectl label configmap fluentd-configmap --namespace nl-amis-logging environment=logging

#List configmaps
echo "**** List configmap fluentd-configmap with namespace nl-amis-logging"
kubectl get configmaps fluentd-configmap --namespace nl-amis-logging -o yaml

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/fluentd-chart/*
helm create fluentd-chart

rm -rf /vagrant/helmcharts/fluentd-chart/templates/*
cp /vagrant/yaml/*fluentd.yaml /vagrant/helmcharts/fluentd-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart fluentd-chart"
helm install ./fluentd-chart --name fluentd-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

echo "**** Check if a certain action (list) on a resource (pods) is allowed for a specific user (system:serviceaccount:nl-amis-logging:fluentd-serviceaccount) ****"
kubectl auth can-i list pods --as="system:serviceaccount:nl-amis-logging:fluentd-serviceaccount" --namespace nl-amis-logging

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-logging"
kubectl get pods --namespace nl-amis-logging

echo "**** End installing Fluentd"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing Fluentd

ubuntu_minikube_helm_efk: **** List configmap fluentd-configmap with namespace nl-amis-logging
ubuntu_minikube_helm_efk: apiVersion: v1
ubuntu_minikube_helm_efk: data:
ubuntu_minikube_helm_efk:   fluentd.conf: |-
ubuntu_minikube_helm_efk:     <source>
ubuntu_minikube_helm_efk:       @type tail
ubuntu_minikube_helm_efk:       @id in_tail_booksservice_logs
ubuntu_minikube_helm_efk:       path “/var/log/containers/booksservice*.log”
ubuntu_minikube_helm_efk:       pos_file “/var/log/fluentd-booksservice.log.pos”
ubuntu_minikube_helm_efk:       tag “kubernetes.*”
ubuntu_minikube_helm_efk:       read_from_head true
ubuntu_minikube_helm_efk:       <parse>
ubuntu_minikube_helm_efk:         @type “json”
ubuntu_minikube_helm_efk:         time_format “%Y-%m-%dT%H:%M:%S.%NZ”
ubuntu_minikube_helm_efk:         time_type string
ubuntu_minikube_helm_efk:       </parse>
ubuntu_minikube_helm_efk:     </source>
ubuntu_minikube_helm_efk:
ubuntu_minikube_helm_efk:     <filter kubernetes.**>
ubuntu_minikube_helm_efk:       @type kubernetes_metadata
ubuntu_minikube_helm_efk:       @id filter_kube_metadata
ubuntu_minikube_helm_efk:     </filter>
ubuntu_minikube_helm_efk:
ubuntu_minikube_helm_efk:     <filter kubernetes.**>
ubuntu_minikube_helm_efk:       @type parser
ubuntu_minikube_helm_efk:       key_name log
ubuntu_minikube_helm_efk:       reserve_data true
ubuntu_minikube_helm_efk:       <parse>
ubuntu_minikube_helm_efk:         @type multiline
ubuntu_minikube_helm_efk:         format_firstline /\d{4}-\d{2}-\d{2}/
ubuntu_minikube_helm_efk:         format1 /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s+(?<level>[^\s]+)\s+(\[(?<service>[^,]*),(?<trace>[^,]*),(?<span>[^,]*),(?<exportable>[^\]]*)\]\s+)?(?<pid>\d+)\s+—\s+\[(?<thread>[^\]]+)\]\s+(?<source>[^\s]+)\s*:\s+(?<message>.*)/
ubuntu_minikube_helm_efk:         time_format %Y-%m-%d %H:%M:%S.%N
ubuntu_minikube_helm_efk:       </parse>
ubuntu_minikube_helm_efk:     </filter>
ubuntu_minikube_helm_efk:
ubuntu_minikube_helm_efk:     <match **>
ubuntu_minikube_helm_efk:        @type elasticsearch
ubuntu_minikube_helm_efk:        @id out_es
ubuntu_minikube_helm_efk:        @log_level info
ubuntu_minikube_helm_efk:        include_tag_key true
ubuntu_minikube_helm_efk:        host “#{ENV[‘FLUENT_ELASTICSEARCH_HOST’]}”
ubuntu_minikube_helm_efk:        port “#{ENV[‘FLUENT_ELASTICSEARCH_PORT’]}”
ubuntu_minikube_helm_efk:        path “#{ENV[‘FLUENT_ELASTICSEARCH_PATH’]}”
ubuntu_minikube_helm_efk:        scheme “#{ENV[‘FLUENT_ELASTICSEARCH_SCHEME’] || ‘http’}”
ubuntu_minikube_helm_efk:        ssl_verify “#{ENV[‘FLUENT_ELASTICSEARCH_SSL_VERIFY’] || ‘true’}”
ubuntu_minikube_helm_efk:        ssl_version “#{ENV[‘FLUENT_ELASTICSEARCH_SSL_VERSION’] || ‘TLSv1’}”
ubuntu_minikube_helm_efk:        user “#{ENV[‘FLUENT_ELASTICSEARCH_USER’]}”
ubuntu_minikube_helm_efk:        password “#{ENV[‘FLUENT_ELASTICSEARCH_PASSWORD’]}”
ubuntu_minikube_helm_efk:        reload_connections “#{ENV[‘FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS’] || ‘false’}”
ubuntu_minikube_helm_efk:        reconnect_on_error “#{ENV[‘FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR’] || ‘true’}”
ubuntu_minikube_helm_efk:        reload_on_failure “#{ENV[‘FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE’] || ‘true’}”
ubuntu_minikube_helm_efk:        logstash_prefix “#{ENV[‘FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX’] || ‘logstash’}”
ubuntu_minikube_helm_efk:        logstash_format “#{ENV[‘FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT’] || ‘true’}”
ubuntu_minikube_helm_efk:        index_name “#{ENV[‘FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME’] || ‘logstash’}”
ubuntu_minikube_helm_efk:        type_name “#{ENV[‘FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME’] || ‘fluentd’}”
ubuntu_minikube_helm_efk:        <buffer>
ubuntu_minikube_helm_efk:          flush_thread_count “#{ENV[‘FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT’] || ‘8’}”
ubuntu_minikube_helm_efk:          flush_interval “#{ENV[‘FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL’] || ‘5s’}”
ubuntu_minikube_helm_efk:          chunk_limit_size “#{ENV[‘FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE’] || ‘2M’}”
ubuntu_minikube_helm_efk:          queue_limit_length “#{ENV[‘FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH’] || ’32’}”
ubuntu_minikube_helm_efk:          retry_max_interval “#{ENV[‘FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL’] || ’30’}”
ubuntu_minikube_helm_efk:          retry_forever true
ubuntu_minikube_helm_efk:        </buffer>
ubuntu_minikube_helm_efk:     </match>
ubuntu_minikube_helm_efk: kind: ConfigMap
ubuntu_minikube_helm_efk: metadata:
ubuntu_minikube_helm_efk:   creationTimestamp: “2019-04-21T19:04:57Z”
ubuntu_minikube_helm_efk:   labels:
ubuntu_minikube_helm_efk:     app: fluentd
ubuntu_minikube_helm_efk:     environment: logging
ubuntu_minikube_helm_efk:     version: “1.0”
ubuntu_minikube_helm_efk:   name: fluentd-configmap
ubuntu_minikube_helm_efk:   namespace: nl-amis-logging
ubuntu_minikube_helm_efk:   resourceVersion: “1178”
ubuntu_minikube_helm_efk:   selfLink: /api/v1/namespaces/nl-amis-logging/configmaps/fluentd-configmap
ubuntu_minikube_helm_efk:   uid: 5ae864b5-6468-11e9-a88b-023e591c269a

ubuntu_minikube_helm_efk: **** Check if a certain action (list) on a resource (pods) is allowed for a specific user (system:serviceaccount:nl-amis-logging:fluentd-serviceaccount) ****
ubuntu_minikube_helm_efk: yes

ubuntu_minikube_helm_efk: **** End installing Fluentd

Remark:
You can list the configmap using:

kubectl get configmaps fluentd-configmap --namespace nl-amis-logging -o yaml

With the following output:

apiVersion: v1
data:
  fluentd.conf: |-
    <source>

So here you can see that the filename is fluentd.conf.
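If you are only interested in the file names (the data keys) of the configmap, you can, for example, use a go-template output format (just a convenience command, not part of my scripts):

kubectl get configmap fluentd-configmap --namespace nl-amis-logging -o go-template='{{range $key, $value := .data}}{{$key}}{{"\n"}}{{end}}'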

Getting a shell to the running container gave the following output:

vagrant@ubuntu-xenial:/vagrant$ kubectl exec -it fluentd-zv5w5 --namespace nl-amis-logging -- ls -latr /fluentd/etc
total 32
-rw-r--r-- 1 root root 1199 Apr 12 18:03 systemd.conf
-rw-r--r-- 1 root root 421 Apr 12 18:03 prometheus.conf
-rw-r--r-- 1 root root 4848 Apr 12 18:03 kubernetes.conf
-rw-r--r-- 1 root root 1871 Apr 12 18:03 fluent.conf
-rw-r--r-- 1 root root 0 Apr 12 18:08 disable.conf
drwxr-xr-x 1 fluent fluent 4096 Apr 12 18:08 ..
drwxrwxrwx 3 root root 4096 Apr 21 19:04 custom-config
drwxr-xr-x 1 fluent fluent 4096 Apr 21 19:05 .
vagrant@ubuntu-xenial:/vagrant$

Remark:
The double dash symbol “--” is used to separate the arguments you want to pass to the command from the kubectl arguments.
[https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/]
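In the same way you can, for example, display the content of the custom configuration file inside the container (assuming, as the directory listing above suggests, that the configmap is mounted under /fluentd/etc/custom-config):

# Everything after the double dash is executed inside the container
kubectl exec -it fluentd-zv5w5 --namespace nl-amis-logging -- cat /fluentd/etc/custom-config/fluentd.conf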

Before setting the FLUENT_CONF variable, and after I created the fluentd DaemonSet, the log of the fluentd-container container showed the following output:


[info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
[info]: using configuration file: <ROOT>
  <source>
    @type prometheus
    bind "0.0.0.0"
    port 24231
    metrics_path "/metrics"
  </source>
  <source>
    @type prometheus_output_monitor
  </source>
  <match fluent.**>
    @type null
  </match>
  <source>
    @type tail
    @id in_tail_container_logs
    path "/var/log/containers/*.log"
    pos_file "/var/log/fluentd-containers.log.pos"
    tag "kubernetes.*"
    read_from_head true
    <parse>
      @type "json"
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      time_type string
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_minion
    path "/var/log/salt/minion"
    pos_file "/var/log/fluentd-salt.pos"
    tag "salt"
    <parse>
      @type "regexp"
      expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format "%Y-%m-%d %H:%M:%S"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_startupscript
    path "/var/log/startupscript.log"
    pos_file "/var/log/fluentd-startupscript.log.pos"
    tag "startupscript"
    <parse>
      @type "syslog"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_docker
    path "/var/log/docker.log"
    pos_file "/var/log/fluentd-docker.log.pos"
    tag "docker"
    <parse>
      @type "regexp"
      expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_etcd
    path "/var/log/etcd.log"
    pos_file "/var/log/fluentd-etcd.log.pos"
    tag "etcd"
    <parse>
      @type "none"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kubelet
    multiline_flush_interval 5s
    path "/var/log/kubelet.log"
    pos_file "/var/log/fluentd-kubelet.log.pos"
    tag "kubelet"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kube_proxy
    multiline_flush_interval 5s
    path "/var/log/kube-proxy.log"
    pos_file "/var/log/fluentd-kube-proxy.log.pos"
    tag "kube-proxy"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kube_apiserver
    multiline_flush_interval 5s
    path "/var/log/kube-apiserver.log"
    pos_file "/var/log/fluentd-kube-apiserver.log.pos"
    tag "kube-apiserver"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kube_controller_manager
    multiline_flush_interval 5s
    path "/var/log/kube-controller-manager.log"
    pos_file "/var/log/fluentd-kube-controller-manager.log.pos"
    tag "kube-controller-manager"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kube_scheduler
    multiline_flush_interval 5s
    path "/var/log/kube-scheduler.log"
    pos_file "/var/log/fluentd-kube-scheduler.log.pos"
    tag "kube-scheduler"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_rescheduler
    multiline_flush_interval 5s
    path "/var/log/rescheduler.log"
    pos_file "/var/log/fluentd-rescheduler.log.pos"
    tag "rescheduler"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_glbc
    multiline_flush_interval 5s
    path "/var/log/glbc.log"
    pos_file "/var/log/fluentd-glbc.log.pos"
    tag "glbc"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_cluster_autoscaler
    multiline_flush_interval 5s
    path "/var/log/cluster-autoscaler.log"
    pos_file "/var/log/fluentd-cluster-autoscaler.log.pos"
    tag "cluster-autoscaler"
    <parse>
      @type "kubernetes"
      expression /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m
      time_format "%m%d %H:%M:%S.%N"
    </parse>
  </source>
  <source>
    @type tail
    @id in_tail_kube_apiserver_audit
    multiline_flush_interval 5s
    path "/var/log/kubernetes/kube-apiserver-audit.log"
    pos_file "/var/log/kube-apiserver-audit.log.pos"
    tag "kube-apiserver-audit"
    <parse>
      @type "multiline"
      format_firstline "/^\\S+\\s+AUDIT:/"
      format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
      time_format "%Y-%m-%dT%T.%L%Z"
    </parse>
  </source>
  <filter kubernetes.**>
    @type kubernetes_metadata
    @id filter_kube_metadata
  </filter>
  <match **>
    @type elasticsearch
    @id out_es
    @log_level "info"
    include_tag_key true
    host "elasticsearch-service.nl-amis-logging"
    port 9200
    path ""
    scheme http
    ssl_verify true
    ssl_version TLSv1
    user "elastic"
    password xxxxxx
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    logstash_prefix "logstash"
    logstash_format true
    index_name "logstash"
    type_name "fluentd"
    <buffer>
      flush_thread_count 8
      flush_interval 5s
      chunk_limit_size 2M
      queue_limit_length 32
      retry_max_interval 30
      retry_forever true
    </buffer>
  </match>
</ROOT>
[info]: starting fluentd-1.3.3 pid=6 ruby="2.3.3"

Installing mysql

In my previous article I already described setting up the following yaml files:

  • persistent-volume-mysql.yaml
  • persistent-volume-claim-mysql.yaml
  • deployment-mysql.yaml
  • service-mysql.yaml

[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]

Instead of manually running the kubectl commands as described in that article, I automated this step: in the scripts directory I created a file mysql.sh with the following content:

#!/bin/bash
echo "**** Begin installing MySQL"

#Install mysql-client
sudo apt-get install -y mysql-client

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/mysql-chart/*
helm create mysql-chart

rm -rf /vagrant/helmcharts/mysql-chart/templates/*
cp /vagrant/yaml/*mysql.yaml /vagrant/helmcharts/mysql-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart mysql-chart"
helm install ./mysql-chart --name mysql-release

# Wait 1 minute
echo "**** Waiting 1 minute ..."
sleep 60

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-testing"
kubectl get pods --namespace nl-amis-testing

#List services
echo "**** List services with namespace nl-amis-testing"
kubectl get service --namespace nl-amis-testing

echo "**** Begin preparing mysql database 'test'"

echo "**** Forward local port 3306 to port 3306 on the mysql-service service"
#/bin/bash -c "kubectl port-forward service/mysql-service 3306 --namespace nl-amis-testing &"
kubectl port-forward service/mysql-service 3306 --namespace nl-amis-testing </dev/null &>/dev/null &

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

mysql -h 127.0.0.1 -uroot -ppassword -e "create database if not exists test"
mysql -h 127.0.0.1 -uroot -ppassword -e "show databases;"

echo "**** Creating table book"
mysql -h 127.0.0.1 -uroot -ppassword -e "use test; CREATE TABLE book (
  id varchar(255) NOT NULL,
  author varchar(255) DEFAULT NULL,
  isbn13 varchar(255) DEFAULT NULL,
  language varchar(255) DEFAULT NULL,
  num_of_pages int(11) NOT NULL,
  price double NOT NULL,
  title varchar(255) DEFAULT NULL,
  type varchar(255) DEFAULT NULL,
  PRIMARY KEY (id)
);"
mysql -h 127.0.0.1 -uroot -ppassword -e "use test; desc book;"
echo "**** End preparing mysql database 'test'"

echo "**** End installing MySQL"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing MySQL

ubuntu_minikube_helm_efk: **** Begin preparing mysql database 'test'
ubuntu_minikube_helm_efk: **** Forward local port 3306 to port 3306 on the mysql-service service

ubuntu_minikube_helm_efk: **** Creating table book
ubuntu_minikube_helm_efk: mysql:
ubuntu_minikube_helm_efk: [Warning] Using a password on the command line interface can be insecure.
ubuntu_minikube_helm_efk: mysql:
ubuntu_minikube_helm_efk: [Warning] Using a password on the command line interface can be insecure.
ubuntu_minikube_helm_efk: Field Type Null Key Default Extra
ubuntu_minikube_helm_efk: id varchar(255) NO PRI NULL
ubuntu_minikube_helm_efk: author varchar(255) YES NULL
ubuntu_minikube_helm_efk: isbn13 varchar(255) YES NULL
ubuntu_minikube_helm_efk: language varchar(255) YES NULL
ubuntu_minikube_helm_efk: num_of_pages int(11) NO NULL
ubuntu_minikube_helm_efk: price double NO NULL
ubuntu_minikube_helm_efk: title varchar(255) YES NULL
ubuntu_minikube_helm_efk: type varchar(255) YES NULL
ubuntu_minikube_helm_efk: **** End preparing mysql database 'test'
ubuntu_minikube_helm_efk: **** End installing MySQL

Remark about the database:
Here you can see that I created the test database with the book table myself (based on the metadata I got earlier).

I installed the mysql-client on the guest operating system for this, and used the kubectl port-forward option to forward a local port on the guest operating system to the MySQL service port in the Kubernetes cluster.
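Before running the DDL statements, a quick way to verify that the port forward actually works is to query the server version through the forwarded port (a small check I show here for completeness; it is not part of mysql.sh):

mysql -h 127.0.0.1 -P 3306 -uroot -ppassword -e "select version();"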

Remark about </dev/null &>/dev/null &:
There are, by default, three “standard” files open when you run a program, standard input (stdin), standard output (stdout), and standard error (stderr).
In Unix, those are associated with “file descriptors” (stdin = 0, stdout = 1, stderr = 2).
The shell gives you the ability to “redirect” file descriptors:

  • The > operator redirects output.
  • The < operator redirects input.

/dev/null is the so-called null device, a special device that discards the information written to it.
[https://askubuntu.com/questions/12098/what-does-outputting-to-dev-null-accomplish-in-bash-scripts]

program_name </dev/null &>/dev/null &

The above command line means:

  • run program_name.
  • redirect standard input from /dev/null (</dev/null).
  • redirect both file descriptors 1 and 2 (stdout and stderr) to /dev/null (&>/dev/null).
  • run the program in the background (&).

[https://unix.stackexchange.com/questions/497207/difference-between-dev-null-21-and-dev-null-dev-null]
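The following small examples illustrate these redirections (illustration only, they are not part of the provisioning scripts):

# stdout is discarded, so nothing is printed
echo "hidden" >/dev/null
# stderr is discarded, so the error message is suppressed
ls /nonexistent 2>/dev/null
# runs in the background, detached from stdin, stdout and stderr
sleep 300 </dev/null &>/dev/null &
# list the background jobs of the current shell
jobs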

Installing booksservices

In my previous article I already described setting up the following yaml files:

  • deployment-booksservice-dev-v1.0.yaml
  • deployment-booksservice-dev-v2.0.yaml
  • deployment-booksservice-tst-v1.0.yaml
  • service-booksservice-dev-v1.0.yaml
  • service-booksservice-dev-v2.0.yaml
  • service-booksservice-tst-v1.0.yaml

[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]

Instead of manually running the kubectl commands as described in that article, I automated this step: in the scripts directory I created a file booksservices.sh with the following content:

#!/bin/bash
echo "**** Begin installing booksservices"

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/booksservice-chart/*
helm create booksservice-chart

rm -rf /vagrant/helmcharts/booksservice-chart/templates/*
cp /vagrant/yaml/*booksservice*.yaml /vagrant/helmcharts/booksservice-chart/templates

# Create Docker images
echo "**** Docker images"
cd /vagrant
cd applications
cd books_service_1.0
docker build -t booksservice:v1.0 .

cd ..
cd books_service_2.0
docker build -t booksservice:v2.0 .

# Wait 30 seconds
echo "**** Waiting 30 seconds ..."
sleep 30

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart booksservice-chart"
helm install ./booksservice-chart --name booksservice-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-testing"
kubectl get pods --namespace nl-amis-testing

#List services
echo "**** List services with namespace nl-amis-testing"
kubectl get service --namespace nl-amis-testing

echo "**** Determine the IP of the minikube node"
nodeIP=$(kubectl get node minikube -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"

echo "**** Via socat forward local port 9010 to port 30010 on the minikube node ($nodeIP)"
socat tcp-listen:9010,fork tcp:$nodeIP:30010 &

echo "**** Via socat forward local port 9020 to port 30020 on the minikube node ($nodeIP)"
socat tcp-listen:9020,fork tcp:$nodeIP:30020 &

echo "**** Via socat forward local port 9110 to port 30110 on the minikube node ($nodeIP)"
socat tcp-listen:9110,fork tcp:$nodeIP:30110 &

echo "**** Add books"
curl --header "Content-Type: application/json" --request POST --data '{"id": 1, "title": "The Threat: How the FBI Protects America in the Age of Terror and Trump", "author": "Andrew G. McCabe", "type": "Hardcover", "price": 17.99, "numOfPages": 288, "language": "English", "isbn13": "978-1250207579"}' http://localhost:9110/books

curl --header "Content-Type: application/json" --request POST --data '{"id": 2, "title": "Becoming", "publishDate": "2018-11-13", "author": "Michelle Obama", "type": "Hardcover", "price": 17.88, "numOfPages": 448, "publisher": "Crown Publishing Group; First Edition edition", "language": "English", "isbn13": "978-1524763138"}' http://localhost:9110/books

echo "**** Get books"
curl http://localhost:9110/books

echo ""
echo "**** List the books in the database"
mysql -h 127.0.0.1 -uroot -ppassword -e "show databases;"
mysql -h 127.0.0.1 -uroot -ppassword -e "use test; select * from book;"

echo "**** End installing booksservices"

With the following output:

ubuntu_minikube_helm_efk: **** Begin installing booksservices

ubuntu_minikube_helm_efk: ---10.0.2.15---
ubuntu_minikube_helm_efk: **** Via socat forward local port 9010 to port 30010 on the minikube node (10.0.2.15)
ubuntu_minikube_helm_efk: **** Via socat forward local port 9020 to port 30020 on the minikube node (10.0.2.15)
ubuntu_minikube_helm_efk: **** Via socat forward local port 9110 to port 30110 on the minikube node (10.0.2.15)

ubuntu_minikube_helm_efk: **** Get books

ubuntu_minikube_helm_efk: [{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"}]
ubuntu_minikube_helm_efk: **** List the books in the database

ubuntu_minikube_helm_efk: mysql:
ubuntu_minikube_helm_efk: [Warning] Using a password on the command line interface can be insecure.
ubuntu_minikube_helm_efk: id author isbn13 language num_of_pages price title type
ubuntu_minikube_helm_efk: 1 Andrew G. McCabe 978-1250207579 English 288 17.99 The Threat: How the FBI Protects America in the Age of Terror and Trump Hardcover
ubuntu_minikube_helm_efk: 2 Michelle Obama 978-1524763138 English 448 17.88 Becoming Hardcover
ubuntu_minikube_helm_efk: **** End installing booksservices

Remark:
Here you can see that, via curl commands, I added two books to the book catalog.
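If you want to check the socat forwards yourself from within the guest operating system, you can, for example, list the listening ports and retrieve the catalog through one of them (assuming the socat processes started in booksservices.sh are still running):

# show the listeners created by socat
sudo ss -tlnp | grep -E '9010|9020|9110'
# retrieve the current book catalog via the forwarded port
curl http://localhost:9110/books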

Postman

Remember that on my Windows laptop, I also wanted to be able to use Postman (for sending requests); port forwarding made this possible.
So, I used Postman to add two extra books to the book catalog.
From Postman I invoked a request named “PostBook3Request” (with method “POST” and URL “http://localhost:9110/books”) and a response with “Status 200 OK” was shown:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 17

Next, from Postman I invoked a request named “PostBook4Request” (with method “POST” and URL “http://localhost:9110/books”) and a response with “Status 200 OK” was shown:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 18

Then I wanted to check if the books were added to the catalog.
From Postman I invoked a request named “GetAllBooksRequest” (with method “GET” and URL “http://localhost:9110/books”) and a response with “Status 200 OK” was shown:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 19

As you can see, both extra books that I added were also returned from the catalog.
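Of course, the same kind of request can also be sent from the guest operating system with curl. For example (the payload below is only an illustration; the actual books 3 and 4 I added via Postman contain different data):

curl --header "Content-Type: application/json" --request POST --data '{"id": 3, "title": "Some title", "author": "Some author", "type": "Paperback", "price": 9.99, "numOfPages": 100, "language": "English", "isbn13": "978-0000000000"}' http://localhost:9110/books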

Kubernetes Dashboard

On my Windows laptop, after my demo environment is set up, I can open the Kubernetes Dashboard in a web browser via: http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=_all
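Behind the scenes this relies on kubectl proxy running inside the guest operating system and on Vagrant forwarding guest port 8001 to the Windows host. A minimal sketch of such a proxy command (the exact invocation in my provisioning scripts may differ):

kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001 &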

Check via the Kubernetes Web UI (Dashboard) that the deployment is created (in the nl-amis-logging namespace):

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/pod?namespace=nl-amis-logging

Navigate to Workloads | Pods:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 20

Check via the Kubernetes Web UI (Dashboard) that the deployment is created (in the nl-amis-development namespace):

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/pod?namespace=nl-amis-development

Navigate to Workloads | Pods:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 21

Check via the Kubernetes Web UI (Dashboard) that the deployment is created (in the nl-amis-testing namespace):

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/pod?namespace=nl-amis-testing

Navigate to Workloads | Pods:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 22

Kibana Dashboard

On my Windows laptop, after my demo environment is set up, I can open the Kibana Dashboard in a web browser via: http://localhost:5601/app/kibana
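The Kibana UI is reachable on the Windows laptop in a similar way: a forward inside the guest operating system for port 5601, combined with a Vagrant forwarded port. A minimal sketch, assuming the Kibana service is named kibana-service in the nl-amis-logging namespace (the actual commands are in kibana.sh):

kubectl port-forward service/kibana-service 5601 --namespace nl-amis-logging </dev/null &>/dev/null &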

I quickly checked if I could see some logging from the booksservice containers, and this was the case.

The example below shows some of this logging:

Using Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana (EFK) within Minikube lameriks 201904 23

In a next article I will dive into using Kibana.

To conclude this blog, I will sum up the script files I used to get it all working:

  • Vagrantfile
  • docker.sh
  • minikube.sh
  • kubectl.sh
  • helm.sh
  • namespaces.sh
  • elasticsearch.sh
  • kibana.sh
  • fluentd.sh
  • mysql.sh
  • booksservices.sh

In this article I described how I used Vagrant and shell scripts to further automate setting up my demo environment from scratch, including ElasticSearch, Fluentd and Kibana within Minikube.
In a next article I will dive into using ElasticSearch, Fluentd and Kibana in order to do, for example, log aggregation.