Using Elastic Stack, Filebeat and Logstash (for log aggregation)

In a previous article I described how I used Elasticsearch, Filebeat and Kibana for log aggregation (making log information available at a centralized location).
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]

In this article I will talk about the installation and use of Filebeat in combination with Logstash (from the Elastic Stack).

EFK

One popular centralized logging solution is the Elasticsearch, Fluentd, and Kibana (EFK) stack.

Fluentd

Fluentd is an open source data collector, which lets you unify the data collection and consumption for a better use and understanding of data.
[https://www.fluentd.org/]

ELK Stack

“ELK” is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.
[https://www.elastic.co/what-is/elk-stack]

In a previous article I already spoke about Elasticsearch (a search and analytics engine) <Store, Search, and Analyze> and Kibana (which lets users visualize data with charts and graphs in Elasticsearch) <Explore, Visualize, and Share>.
[https://technology.amis.nl/2019/05/06/using-elasticsearch-fluentd-and-kibana-for-log-aggregation/]

Elastic Stack

The Elastic Stack is the next evolution of the ELK Stack.
[https://www.elastic.co/what-is/elk-stack]

Logstash

Logstash <Collect, Enrich, and Transport> is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch.
[https://www.elastic.co/what-is/elk-stack]

Beats

In 2015, a family of lightweight, single-purpose data shippers were introduced into the ELK Stack equation. They are called Beats <Collect, Parse, and Ship>.
[https://www.elastic.co/what-is/elk-stack]

Filebeat

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to either Elasticsearch or Logstash for indexing.
[https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html]

I leave it up to you to decide which product is most suitable for (log) data collection in your situation.

In a previous series of articles, I talked about the environment I prepared on my Windows laptop: a guest operating system with Docker and Minikube, available within an Oracle VirtualBox appliance set up with the help of Vagrant. I will be using that environment again now.

Log aggregation

In a containerized environment like Kubernetes, Pods, and the containers within them, can be created and deleted automatically via ReplicaSets. So, it's not always easy to know where in your environment you can find the log file that you need to analyze a problem that occurred in a particular application. Via log aggregation, the log information becomes available at a centralized location.

In the table below, you can see an overview of the booksservice Pods that are present in the demo environment, including the labels that are used:

| Environment | Database     | Pod                 | Namespace           | app (label)  | version (label) | environment (label) |
|-------------|--------------|---------------------|---------------------|--------------|-----------------|---------------------|
| DEV         | H2 in memory | booksservice-v1.0-* | nl-amis-development | booksservice | 1.0             | development         |
| DEV         | H2 in memory | booksservice-v2.0-* | nl-amis-development | booksservice | 2.0             | development         |
| TST         | MySQL        | booksservice-v1.0-* | nl-amis-testing     | booksservice | 1.0             | testing             |

Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each Key must be unique for a given object.
[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/]
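To make the selection mechanism concrete, here is a small sketch (plain Python, not Kubernetes code; the pod name suffixes are made up) of equality-based selection over objects with labels like those of the booksservice Pods above:

```python
# Illustrative only: a toy version of Kubernetes equality-based label
# selection, applied to pods resembling the booksservice Pods above.
pods = [
    {"name": "booksservice-v1.0-dev", "labels": {"app": "booksservice", "version": "1.0", "environment": "development"}},
    {"name": "booksservice-v2.0-dev", "labels": {"app": "booksservice", "version": "2.0", "environment": "development"}},
    {"name": "booksservice-v1.0-tst", "labels": {"app": "booksservice", "version": "1.0", "environment": "testing"}},
]

def select(objects, **selector):
    """Return the names of objects whose labels match all selector key/value pairs."""
    return [o["name"] for o in objects
            if all(o["labels"].get(k) == v for k, v in selector.items())]

print(select(pods, environment="development"))
# ['booksservice-v1.0-dev', 'booksservice-v2.0-dev']
```

This mirrors what a selector like environment=development does when, for example, a Service or a kubectl query picks out a subset of Pods.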

Elastic Stack installation order

Install the Elastic Stack products you want to use in the following order: Elasticsearch, Kibana, Logstash and Beats (followed by, if used, APM Server and Elasticsearch Hadoop).

[https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html]

When installing Filebeat, installing Logstash (for parsing and enhancing the data) is optional.

In a previous article, I started with the installation of Filebeat (without Logstash).
But this time I want to use Logstash.

Logstash event processing pipeline

The Logstash event processing pipeline has three stages: inputs → filters → outputs. Inputs generate events, filters modify them, and outputs ship them elsewhere. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter. Filters are optional.
[https://www.elastic.co/guide/en/logstash/current/pipeline.html]


[https://www.elastic.co/guide/en/logstash/current/first-event.html]
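As a mental model of the three stages described above, you can think of inputs, filters and outputs as functions chained over events. The sketch below is plain Python, not Logstash code; all names are made up for illustration:

```python
# Toy model of the Logstash event processing pipeline:
# inputs generate events, filters modify them, outputs ship them elsewhere.
def beats_input():
    # stand-in for an input plugin producing events
    yield {"message": "2019-11-05 19:01:27.312  INFO 1 --- BookService"}

def mutate_filter(event):
    # stand-in for an (optional) filter stage enriching the event
    event["my_custom_field6"] = "new_static_value"
    return event

def elasticsearch_output(event, stash):
    # stand-in for an output plugin sending the event to a "stash"
    stash.append(event)

stash = []
for event in beats_input():             # inputs
    event = mutate_filter(event)        # filters
    elasticsearch_output(event, stash)  # outputs

print(stash[0]["my_custom_field6"])
# new_static_value
```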

Logstash Configuration Files

Logstash has two types of configuration files:

  • pipeline configuration files, which define the Logstash processing pipeline. You create pipeline configuration files when you define the stages of your Logstash processing pipeline. To configure Logstash, you create a config file that specifies which plugins you want to use and settings for each plugin. You can reference event fields in a configuration and use conditionals to process events when they meet certain criteria.
    Logstash tries to load only files with .conf extension in the /etc/logstash/conf.d directory and ignores all other files.
    A Logstash config file has a separate section for each type of plugin you want to add to the event processing pipeline. For example:

    # This is a comment. You should use comments to describe
    # parts of your configuration.
    input {

    }

    filter {

    }


    output {

    }

  • settings files, which specify options that control Logstash startup and execution
    The settings files are already defined in the Logstash installation. Among others, Logstash includes the following settings file:

    • logstash.yml
      You can set options in the Logstash settings file, logstash.yml, to control Logstash execution. For example, you can specify pipeline settings, the location of configuration files, logging options, and other settings. Most of the settings in the logstash.yml file are also available as command-line flags when you run Logstash. Any flags that you set at the command line override the corresponding settings in the logstash.yml file.
      For more information about the logstash.yml file, please see:
      https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html

    For more information about settings files, please see:
    https://www.elastic.co/guide/en/logstash/current/config-setting-files.html

Docker Image for Logstash

Docker images for Logstash are available from the Elastic Docker registry. Obtaining Logstash for Docker is as simple as issuing a docker pull command against the Elastic Docker registry.
[https://www.elastic.co/guide/en/logstash/current/docker.html#docker]

docker pull docker.elastic.co/logstash/logstash:7.4.1

Directory Layout of Docker Images for Logstash

The Docker images have the following directory layout:

| Type     | Description                                                                                                    | Default Location             | Setting (in file logstash.yml) |
|----------|----------------------------------------------------------------------------------------------------------------|------------------------------|--------------------------------|
| home     | Home directory of the Logstash installation                                                                    | /usr/share/logstash          |                                |
| bin      | Binary scripts, including logstash to start Logstash and logstash-plugin to install plugins                    | /usr/share/logstash/bin      |                                |
| settings | Configuration files, including logstash.yml and jvm.options                                                    | /usr/share/logstash/config   | path.settings                  |
| conf     | Logstash pipeline configuration files                                                                          | /usr/share/logstash/pipeline | path.config                    |
| plugins  | Local, non Ruby-Gem plugin files. Each plugin is contained in a subdirectory. Recommended for development only | /usr/share/logstash/plugins  | path.plugins                   |
| data     | Data files used by Logstash and its plugins for any persistence needs                                          | /usr/share/logstash/data     | path.data                      |

[https://www.elastic.co/guide/en/logstash/current/dir-layout.html#docker-layout]

Installing Logstash

I wanted to set up Logstash in my Kubernetes demo environment, with a pipeline configuration file named pipeline.conf and a settings file named logstash.yml. On the internet I found some example Kubernetes manifest files.

In line with how I previously set up my environment, I created the following manifest files from these examples (with my own namespace and labels):

  • configmap-logstash.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: logstash-configmap
      namespace: nl-amis-logging
      labels:
        app: logstash
        version: "1.0"
        environment: logging
    data:
      logstash.yml: |-
        http.host: "0.0.0.0"
        path.config: /usr/share/logstash/pipeline
      pipeline.conf: |-
        input {
          beats {
            port => 5044
          }
        }
        output {
          elasticsearch {
            hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
          }
        }
    

    Remark about part logstash.yml:

    | Setting     | Description                                                                                                                                                    | Value                        |
    |-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
    | http.host   | The bind address for the metrics REST endpoint.                                                                                                                | "0.0.0.0"                    |
    | path.config | The path to the Logstash config for the main pipeline. If you specify a directory or wildcard, config files are read from the directory in alphabetical order. | /usr/share/logstash/pipeline |

    [https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html]

    Remark about part pipeline.conf:
    The above configuration tells Logstash to listen on port 5044 for incoming Beats connections (like Filebeat) and to index into Elasticsearch.
    [https://www.elastic.co/guide/en/logstash/7.4/plugins-inputs-beats.html#_description_2]

    Remember that for Filebeat the configuration in filebeat.yml for the output was:

        output.elasticsearch:
          hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
          username: ${ELASTICSEARCH_USERNAME}
          password: ${ELASTICSEARCH_PASSWORD}
    

    Later on, I will change the Filebeat output to be Logstash. But in line with the above configuration, I set up the output plugin of the pipeline configuration file to be Elasticsearch.

    …
    output {
      elasticsearch {
        hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      }
    }
    

    Because this configuration uses environment variable references, the variables have to be set; I did so in the Logstash Deployment manifest file (see: deployment-logstash.yaml).

    Under Docker, Logstash settings can be configured via environment variables. When the container starts, a helper process checks the environment for variables that can be mapped to Logstash settings.
    [https://www.elastic.co/guide/en/logstash/current/docker-config.html]

    Note that the ${VAR_NAME:default_value} notation is supported.
    [https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html]
    [https://www.elastic.co/guide/en/logstash/current/environment-variables.html]
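To illustrate the ${VAR_NAME:default_value} notation, here is a small Python sketch that approximates the documented substitution rule (my own approximation for illustration, not Logstash's actual implementation):

```python
import re

def substitute(value, env):
    """Replace ${NAME:default} references: use env[NAME] if set, else the default."""
    def repl(match):
        name, _, default = match.group(1).partition(":")
        return env.get(name, default)
    return re.sub(r"\$\{([^}]*)\}", repl, value)

hosts = "${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}"

# without the environment variables set, the defaults are used
print(substitute(hosts, env={}))
# elasticsearch:9200

# with ELASTICSEARCH_HOST set (as in deployment-logstash.yaml), it wins
print(substitute(hosts, env={"ELASTICSEARCH_HOST": "elasticsearch-service.nl-amis-logging"}))
# elasticsearch-service.nl-amis-logging:9200
```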

  • deployment-logstash.yaml
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: logstash
      namespace: nl-amis-logging
      labels:
        app: logstash
        version: "1.0"
        environment: logging
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: logstash
            version: "1.0"
            environment: logging
        spec:
          containers:
          - name: logstash-container
            image: docker.elastic.co/logstash/logstash:7.4.1
            env:
              - name:  ELASTICSEARCH_HOST
                value: "elasticsearch-service.nl-amis-logging"
              - name:  ELASTICSEARCH_PORT
                value: "9200"
            ports:
            - containerPort: 5044
            volumeMounts:
              - name: logstash-settings-config-volume
                mountPath: /usr/share/logstash/config
              - name: logstash-pipeline-config-volume
                mountPath: /usr/share/logstash/pipeline
          volumes:
          - name: logstash-settings-config-volume
            configMap:
              name: logstash-configmap
              items:
                - key: logstash.yml
                  path: logstash.yml
          - name: logstash-pipeline-config-volume
            configMap:
              name: logstash-configmap
              items:
                - key: pipeline.conf
                  path: pipeline.conf
    
  • service-logstash.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: logstash-service
      namespace: nl-amis-logging
      labels:
        app: logstash
        version: "1.0"
        environment: logging
    spec:
      type: ClusterIP
      selector:
        app: logstash
        version: "1.0"
        environment: logging
      ports:
      - port: 5044
        targetPort: 5044
    

Changing the Filebeat configuration

If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash.
[https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html]

In a previous article I described how I used Filebeat, including the enhancement of the exported log files, by adding some extra fields in order to add additional information to the output.
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]

In order to output to Logstash instead of Elasticsearch, I changed the content of filebeat.yml to:

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/mysql*.log
    - /var/log/containers/booksservice*.log
  fields:
    my_custom_field1: 'value_of_my_custom_field1'
  fields_under_root: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - add_fields:
        target: my-custom-sub-dictionary1
        fields:
          my_custom_field2: 'value_of_my_custom_field2'
          my_custom_field3: 'value_of_my_custom_field3'
    - add_fields:
        target: my-custom-sub-dictionary2
        fields:
          my_custom_field4: 'value_of_my_custom_field4'
    - add_fields:
        fields:
          my_custom_field5: 'value_of_my_custom_field5'
        

# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      host: ${NODE_NAME}
#      hints.enabled: true
#      hints.default_config:
#        type: container
#        paths:
#          - /var/log/containers/*${data.kubernetes.container.id}.log

processors:
  - add_cloud_metadata:
  - add_host_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

#output.elasticsearch:
#  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
#  username: ${ELASTICSEARCH_USERNAME}
#  password: ${ELASTICSEARCH_PASSWORD}

output.logstash:
  hosts: ['${LOGSTASH_HOST:logstash}:${LOGSTASH_PORT:5044}']

In this configuration I use environment variable references. The variables have to be set, and therefore I added them to the content of the Filebeat DaemonSet manifest file (daemonset-filebeat.yaml):

…
    spec:
      serviceAccountName: filebeat-serviceaccount
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.3.1
        args: [
          "-c", "/etc/custom-config/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch-service.nl-amis-logging"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: LOGSTASH_HOST
          value: "logstash-service.nl-amis-logging"
        - name: LOGSTASH_PORT
          value: "5044"

Vagrantfile

I changed the content of Vagrantfile to:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  
  config.vm.define "ubuntu_minikube_helm_elastic" do |ubuntu_minikube_helm_elastic|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 5601,
      host:  5601,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9200,
      host:  9200,
      auto_correct: true  
      
    config.vm.network "forwarded_port",
      guest: 9010,
      host:  9010,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9020,
      host:  9020,
      auto_correct: true
      
    config.vm.network "forwarded_port",
      guest: 9110,
      host:  9110,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu Minikube Helm Elastic Stack"
        vb.memory = "8192"
        vb.cpus = "1"
        
    args = []
    config.vm.provision "shell",
        path: "scripts/docker.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/minikube.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/kubectl.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/helm.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/namespaces.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/elasticsearch.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/kibana.sh",
        args: args
     
    args = []
    config.vm.provision "shell",
        path: "scripts/logstash.sh",
        args: args
 
    args = []
    config.vm.provision "shell",
        path: "scripts/filebeat.sh",
        args: args

    args = []
    config.vm.provision "shell",
        path: "scripts/mysql.sh",
        args: args
        
    args = []
    config.vm.provision "shell",
        path: "scripts/booksservices.sh",
        args: args
    end
    
  end

end

In the scripts directory I created a file logstash.sh with the following content:

#!/bin/bash
echo "**** Begin installing Logstash"

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/logstash-chart/*
helm create logstash-chart

rm -rf /vagrant/helmcharts/logstash-chart/templates/*
cp /vagrant/yaml/*logstash.yaml /vagrant/helmcharts/logstash-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart logstash-chart"
helm install ./logstash-chart --name logstash-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-logging"
kubectl get pods --namespace nl-amis-logging

#List services
echo "**** List services with namespace nl-amis-logging"
kubectl get service --namespace nl-amis-logging

echo "**** End installing Logstash"

From the subdirectory named env on my Windows laptop, I opened a Windows Command Prompt (cmd) and typed: vagrant up

This command creates and configures guest machines according to your Vagrantfile.
[https://www.vagrantup.com/docs/cli/up.html]

With the following output (only showing the part about Logstash):

ubuntu_minikube_helm_elastic: **** Begin installing Logstash
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating logstash-chart
ubuntu_minikube_helm_elastic: **** Install Helm chart logstash-chart
ubuntu_minikube_helm_elastic: NAME: logstash-release
ubuntu_minikube_helm_elastic: LAST DEPLOYED: Tue Nov 5 18:15:25 2019
ubuntu_minikube_helm_elastic: NAMESPACE: default
ubuntu_minikube_helm_elastic: STATUS: DEPLOYED
ubuntu_minikube_helm_elastic: RESOURCES:
ubuntu_minikube_helm_elastic: ==> v1/ConfigMap
ubuntu_minikube_helm_elastic: NAME                DATA  AGE
ubuntu_minikube_helm_elastic: logstash-configmap  2     0s
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: ==> v1/Pod(related)
ubuntu_minikube_helm_elastic: NAME                       READY  STATUS             RESTARTS  AGE
ubuntu_minikube_helm_elastic: logstash-59bcc854b7-k9qdp  0/1    ContainerCreating  0         0s
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: ==> v1/Service
ubuntu_minikube_helm_elastic: NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
ubuntu_minikube_helm_elastic: logstash-service  ClusterIP  10.101.208.199  <none>       5044/TCP  0s
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: ==> v1beta1/Deployment
ubuntu_minikube_helm_elastic: NAME      READY  UP-TO-DATE  AVAILABLE  AGE
ubuntu_minikube_helm_elastic: logstash  0/1    1           0          0s
ubuntu_minikube_helm_elastic: **** Waiting 2,5 minute …
ubuntu_minikube_helm_elastic: **** List helm releases
ubuntu_minikube_helm_elastic: NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
ubuntu_minikube_helm_elastic: namespace-release 1 Tue Nov 5 18:09:49 2019 DEPLOYED namespace-chart-0.1.0 1.0 default
ubuntu_minikube_helm_elastic: elasticsearch-release 1 Tue Nov 5 18:10:20 2019 DEPLOYED elasticsearch-chart-0.1.0 1.0 default
ubuntu_minikube_helm_elastic: kibana-release 1 Tue Nov 5 18:12:54 2019 DEPLOYED kibana-chart-0.1.0 1.0 default
ubuntu_minikube_helm_elastic: logstash-release 1 Tue Nov 5 18:15:25 2019 DEPLOYED logstash-chart-0.1.0 1.0 default
ubuntu_minikube_helm_elastic: **** List pods with namespace nl-amis-logging
ubuntu_minikube_helm_elastic: NAME                            READY  STATUS             RESTARTS  AGE
ubuntu_minikube_helm_elastic: elasticsearch-6b46c44f7c-jbswp  1/1    Running            0         7m35s
ubuntu_minikube_helm_elastic: kibana-6f96d679c4-ph5nt         1/1    Running            0         5m2s
ubuntu_minikube_helm_elastic: logstash-59bcc854b7-k9qdp       0/1    ContainerCreating  0         2m31s
ubuntu_minikube_helm_elastic: **** List services with namespace nl-amis-logging
ubuntu_minikube_helm_elastic: NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
ubuntu_minikube_helm_elastic: elasticsearch-service  NodePort   10.107.215.66   <none>       9200:30200/TCP  7m35s
ubuntu_minikube_helm_elastic: kibana-service         NodePort   10.102.145.233  <none>       5601:30601/TCP  5m2s
ubuntu_minikube_helm_elastic: logstash-service       ClusterIP  10.101.208.199  <none>       5044/TCP        2m31s
ubuntu_minikube_helm_elastic: **** End installing Logstash

My demo environment now looks like:

Via the Kubernetes Web UI (Dashboard) I checked that the Logstash components were created (in the nl-amis-logging namespace):

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/pod?namespace=nl-amis-logging

  • configmap-logstash.yaml

Navigate to Config and Storage | Config Maps:

  • deployment-logstash.yaml

Navigate to Workloads | Deployments:

  • service-logstash.yaml

Navigate to Discovery and Load Balancing | Services:

Of course, I first wanted to check whether the log files are forwarded from Filebeat to Logstash and, in the end, are visible in Elasticsearch.

Postman

As I described in a previous article, I used Postman to add books to and retrieve books from the book catalog. I did this for version 1.0 and 2.0 of the BooksService application.
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]

From Postman I invoked a request named “GetAllBooksRequest” (with method “POST” and URL “http://localhost:9010/books”).
This concerns version 1.0 in the DEV environment.
A response with “Status 200 OK” was shown (with 4 books being retrieved).

Elasticsearch index

In the Kibana Dashboard via Management | Kibana | Index Patterns you can create an index pattern.
Kibana uses index patterns to retrieve data from Elasticsearch indices for things like visualizations.
[http://localhost:5601/app/kibana#/management/kibana/index_pattern?_g=()]

In the field “Index pattern” I entered logstash*. The index pattern matched 1 index. Next, I clicked on button “Next step”.

In the field “Time Filter field name” I entered @timestamp.
The Time Filter will use this field to filter your data by time.
You can choose not to have a time field, but you will not be able to narrow down your data by a time range.
[http://localhost:5601/app/kibana#/management/kibana/index_pattern?_g=()]

Next, I clicked on button “Create index pattern”.

The Kibana index pattern logstash* was created, with 90 fields.
This page lists every field in the logstash* index and the field’s associated core type as recorded by Elasticsearch. To change a field type, use the Elasticsearch Mapping API.

Kibana Dashboard, Discover

In the Kibana Dashboard via Discover you can see the log files.

Let’s briefly focus on the first hit.

Via a click on icon “>”, the document is expanded.

You can also choose to view the expanded document in JSON:

{
  "_index": "logstash-2019.11.05-000001",
  "_type": "_doc",
  "_id": "A6TxPG4BcI8lvItte9Xk",
  "_score": 1,
  "_source": {
    "log": {
      "offset": 11473,
      "file": {
        "path": "/var/log/containers/booksservice-v1.0-68785bc6ff-rxx84_nl-amis-development_booksservice-v1-0-container-5f19bff55b42c637ca566455885cbe64fafa9842c6a4df076975ccd3915d6bf1.log"
      }
    },
    "fields": {
      "my_custom_field5": "value_of_my_custom_field5"
    },
    "host": {
      "architecture": "x86_64",
      "name": "ubuntu-xenial",
      "containerized": false,
      "hostname": "ubuntu-xenial",
      "os": {
        "version": "7 (Core)",
        "kernel": "4.4.0-142-generic",
        "family": "redhat",
        "name": "CentOS Linux",
        "platform": "centos",
        "codename": "Core"
      }
    },
    "@timestamp": "2019-11-05T19:01:27.313Z",
    "kubernetes": {
      "node": {
        "name": "minikube"
      },
      "replicaset": {
        "name": "booksservice-v1.0-68785bc6ff"
      },
      "container": {
        "name": "booksservice-v1-0-container"
      },
      "pod": {
        "name": "booksservice-v1.0-68785bc6ff-rxx84",
        "uid": "6b854938-fff9-11e9-982d-023e591c269a"
      },
      "labels": {
        "environment": "development",
        "app": "booksservice",
        "pod-template-hash": "68785bc6ff",
        "version": "1.0"
      },
      "namespace": "nl-amis-development"
    },
    "ecs": {
      "version": "1.0.1"
    },
    "agent": {
      "id": "a0d8d0dd-4f02-4ec7-b582-af252604560a",
      "type": "filebeat",
      "hostname": "ubuntu-xenial",
      "version": "7.3.1",
      "ephemeral_id": "d48eaab2-5510-4c60-b133-dbdb366da8de"
    },
    "my_custom_field1": "value_of_my_custom_field1",
    "stream": "stdout",
    "message": "2019-11-05 19:01:27.312  INFO 1 --- [nio-9090-exec-5] n.a.d.s.b.application.BookService        : ",
    "my-custom-sub-dictionary2": {
      "my_custom_field4": "value_of_my_custom_field4"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "my-custom-sub-dictionary1": {
      "my_custom_field3": "value_of_my_custom_field3",
      "my_custom_field2": "value_of_my_custom_field2"
    },
    "input": {
      "type": "container"
    },
    "@version": "1"
  },
  "fields": {
    "@timestamp": [
      "2019-11-05T19:01:27.313Z"
    ]
  }
}


Changing the Logstash configuration

As I mentioned before, in a previous article about using Filebeat, I tried out some form of enhancing the exported log files. So, I added some extra fields in order to add additional information to the output.
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]

These custom fields can be seen in the Expanded document above:

| Field name       | Field value               | fields_under_root | target                    |
|------------------|---------------------------|-------------------|---------------------------|
| my_custom_field1 | value_of_my_custom_field1 | true              |                           |
| my_custom_field2 | value_of_my_custom_field2 |                   | my-custom-sub-dictionary1 |
| my_custom_field3 | value_of_my_custom_field3 |                   | my-custom-sub-dictionary1 |
| my_custom_field4 | value_of_my_custom_field4 |                   | my-custom-sub-dictionary2 |
| my_custom_field5 | value_of_my_custom_field5 |                   |                           |

As I did with Filebeat, I also wanted to try out some form of enriching the exported log data with Logstash. So, here also, I added some extra fields in order to add additional information to the output.

I want to create extra custom fields, some with a value based on other fields:

| Field name       | Field value                                                  | target                    |
|------------------|--------------------------------------------------------------|---------------------------|
| my_custom_field1 | VALUE_OF_MY_CUSTOM_FIELD1                                    |                           |
| my_custom_field2 | value_of_my_custom_field2                                    | my-custom-sub-dictionary1 |
| my_custom_field3 | value_of_my_custom_field3                                    | my-custom-sub-dictionary1 |
| my_custom_field4 | value_of_my_custom_field4                                    | my-custom-sub-dictionary2 |
| my_custom_field5 | value_of_my_custom_field5                                    |                           |
| my_custom_field6 | new_static_value                                             |                           |
| my_custom_field7 | <copy value from my_custom_field1>                           |                           |
| my_custom_field8 | <copy value from my-custom-sub-dictionary1.my_custom_field2> |                           |
| my_custom_field9 | <copy value from my-custom-sub-dictionary2.my_custom_field4> | my-custom-sub-dictionary3 |

I also wanted to use a custom index, instead of using the default index.
So, I changed the index to:

logstash-via-filebeat-%{+YYYY.MM.dd}
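The %{+YYYY.MM.dd} part is a Logstash sprintf date reference (Joda-style tokens), rendered from the event's @timestamp. In Python strftime terms, the resulting index name looks like this (an illustrative equivalent, not Logstash code):

```python
from datetime import datetime, timezone

def index_name(prefix, timestamp):
    # YYYY.MM.dd in Joda-style tokens corresponds to %Y.%m.%d in strftime
    return prefix + timestamp.strftime("%Y.%m.%d")

# example event timestamp matching the log documents shown later
event_timestamp = datetime(2019, 11, 5, 19, 1, 27, tzinfo=timezone.utc)
print(index_name("logstash-via-filebeat-", event_timestamp))
# logstash-via-filebeat-2019.11.05
```

So events are written to a daily index, which makes it easy to manage (and delete) log data per day.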

In the yaml directory I changed the file configmap-logstash.yaml to have the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: nl-amis-logging
  labels:
    app: logstash
    version: "1.0"
    environment: logging
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  pipeline.conf: |-
    input {
      beats {
        port => 5044
      }
    }
    filter {
      mutate {
        uppercase => [ "my_custom_field1" ]
        add_field => {
          "my_custom_field6" => "new_static_value"
          "my_custom_field7" => "%{my_custom_field1}"
          "my_custom_field8" => "%{[my-custom-sub-dictionary1][my_custom_field2]}"
          "[my-custom-sub-dictionary3][my_custom_field9]" => "%{[my-custom-sub-dictionary2][my_custom_field4]}"
        }
      }
    }
    output {
      elasticsearch {
        hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
        index => "logstash-via-filebeat-%{+YYYY.MM.dd}"
      }
    }

For information about how to use the filter stage, I kindly refer you to the Logstash documentation.
[https://www.elastic.co/guide/en/logstash/current/filter-plugins.html]
[https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html]
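To see what the mutate filter in the pipeline configuration does to an event, here is a toy Python re-implementation of just the two operations used, uppercase and add_field with %{...} field references. This is my own approximation of the documented behaviour; note that mutate applies uppercase before the add_field common option, which is why my_custom_field7 receives the uppercased value:

```python
import re

def resolve(ref, event):
    """Resolve a Logstash-style field reference: 'field' or '[outer][inner]'."""
    path = re.findall(r"\[([^\]]+)\]", ref) or [ref]
    value = event
    for key in path:
        value = value[key]
    return value

def mutate(event):
    # uppercase => [ "my_custom_field1" ] runs before add_field
    event["my_custom_field1"] = event["my_custom_field1"].upper()
    # add_field with static values and %{...} references
    event["my_custom_field6"] = "new_static_value"
    event["my_custom_field7"] = resolve("my_custom_field1", event)
    event["my_custom_field8"] = resolve("[my-custom-sub-dictionary1][my_custom_field2]", event)
    event.setdefault("my-custom-sub-dictionary3", {})["my_custom_field9"] = \
        resolve("[my-custom-sub-dictionary2][my_custom_field4]", event)
    return event

event = mutate({
    "my_custom_field1": "value_of_my_custom_field1",
    "my-custom-sub-dictionary1": {"my_custom_field2": "value_of_my_custom_field2"},
    "my-custom-sub-dictionary2": {"my_custom_field4": "value_of_my_custom_field4"},
})
print(event["my_custom_field7"])
# VALUE_OF_MY_CUSTOM_FIELD1
print(event["my-custom-sub-dictionary3"]["my_custom_field9"])
# value_of_my_custom_field4
```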

So, because I made some changes, I deleted and re-installed the release of Logstash (via Helm, the package manager for Kubernetes).

helm del --purge logstash-release

release "logstash-release" deleted

Next, I started the shell script:

cd /vagrant/scripts
./logstash.sh

With the following output:

**** Begin installing Logstash
**** Create Helm chart
Creating logstash-chart
**** Install Helm chart logstash-chart
NAME: logstash-release
LAST DEPLOYED: Tue Nov 5 20:24:34 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
logstash-configmap 2 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
logstash-59bcc854b7-7kdrj 0/1 ContainerCreating 0 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
logstash-service ClusterIP 10.106.90.34 5044/TCP 0s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
logstash 0/1 1 0 0s

**** Waiting 2,5 minute ...
**** List helm releases
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
namespace-release 1 Tue Nov 5 18:09:49 2019 DEPLOYED namespace-chart-0.1.0 1.0 default
elasticsearch-release 1 Tue Nov 5 18:10:20 2019 DEPLOYED elasticsearch-chart-0.1.0 1.0 default
kibana-release 1 Tue Nov 5 18:12:54 2019 DEPLOYED kibana-chart-0.1.0 1.0 default
filebeat-release 1 Tue Nov 5 18:17:59 2019 DEPLOYED filebeat-chart-0.1.0 1.0 default
mysql-release 1 Tue Nov 5 18:19:14 2019 DEPLOYED mysql-chart-0.1.0 1.0 default
booksservice-release 1 Tue Nov 5 18:23:52 2019 DEPLOYED booksservice-chart-0.1.0 1.0 default
logstash-release 1 Tue Nov 5 20:24:34 2019 DEPLOYED logstash-chart-0.1.0 1.0 default
**** List pods with namespace nl-amis-logging
NAME READY STATUS RESTARTS AGE
elasticsearch-6b46c44f7c-jbswp 1/1 Running 0 136m
filebeat-daemonset-4vjwr 1/1 Running 0 129m
kibana-6f96d679c4-ph5nt 1/1 Running 0 134m
logstash-59bcc854b7-7kdrj 1/1 Running 0 2m30s
**** List services with namespace nl-amis-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-service NodePort 10.107.215.66 9200:30200/TCP 136m
kibana-service NodePort 10.102.145.233 5601:30601/TCP 134m
logstash-service ClusterIP 10.106.90.34 5044/TCP 2m31s
**** End installing Logstash

After that, from Postman I invoked a request named “GetAllBooksRequest” (with method “POST” and URL “http://localhost:9010/books”).
This concerns version 1.0 in the DEV environment.
A response with “Status 200 OK” was shown (with 4 books being retrieved).

In the Kibana Dashboard via Discover I refreshed the log files.

Let’s briefly focus on the first hit.

Via a click on icon “>”, the document is expanded.

You can also choose to view the expanded document in JSON:

{
  "_index": "logstash-via-filebeat-2019.11.05",
  "_type": "_doc",
  "_id": "E6REPW4BcI8lvIttKdUG",
  "_version": 1,
  "_score": null,
  "_source": {
    "fields": {
      "my_custom_field5": "value_of_my_custom_field5"
    },
    "host": {
      "name": "ubuntu-xenial",
      "os": {
        "kernel": "4.4.0-142-generic",
        "version": "7 (Core)",
        "platform": "centos",
        "family": "redhat",
        "codename": "Core",
        "name": "CentOS Linux"
      },
      "architecture": "x86_64",
      "containerized": false,
      "hostname": "ubuntu-xenial"
    },
    "my-custom-sub-dictionary2": {
      "my_custom_field4": "value_of_my_custom_field4"
    },
    "my_custom_field1": "VALUE_OF_MY_CUSTOM_FIELD1",
    "my_custom_field8": "value_of_my_custom_field2",
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "ecs": {
      "version": "1.0.1"
    },
    "kubernetes": {
      "replicaset": {
        "name": "booksservice-v1.0-68785bc6ff"
      },
      "node": {
        "name": "minikube"
      },
      "namespace": "nl-amis-development",
      "labels": {
        "version": "1.0",
        "pod-template-hash": "68785bc6ff",
        "app": "booksservice",
        "environment": "development"
      },
      "pod": {
        "name": "booksservice-v1.0-68785bc6ff-lxnhh",
        "uid": "6b8724a9-fff9-11e9-982d-023e591c269a"
      },
      "container": {
        "name": "booksservice-v1-0-container"
      }
    },
    "my_custom_field6": "new_static_value",
    "@timestamp": "2019-11-05T20:31:43.587Z",
    "message": "2019-11-05 20:31:43.586  INFO 1 --- [nio-9090-exec-6] n.a.d.s.b.application.BookService        : ",
    "stream": "stdout",
    "@version": "1",
    "my-custom-sub-dictionary1": {
      "my_custom_field2": "value_of_my_custom_field2",
      "my_custom_field3": "value_of_my_custom_field3"
    },
    "log": {
      "offset": 12293,
      "file": {
        "path": "/var/log/containers/booksservice-v1.0-68785bc6ff-lxnhh_nl-amis-development_booksservice-v1-0-container-cd88fc43cd96277e9887bb71da6c4d819bb18f60a60b573719c72f21d809091f.log"
      }
    },
    "input": {
      "type": "container"
    },
    "agent": {
      "hostname": "ubuntu-xenial",
      "version": "7.3.1",
      "type": "filebeat",
      "id": "a0d8d0dd-4f02-4ec7-b582-af252604560a",
      "ephemeral_id": "d48eaab2-5510-4c60-b133-dbdb366da8de"
    },
    "my_custom_field7": "VALUE_OF_MY_CUSTOM_FIELD1",
    "my-custom-sub-dictionary3": {
      "my_custom_field9": "value_of_my_custom_field4"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-11-05T20:31:43.587Z"
    ]
  },
  "sort": [
    1572985903587
  ]
}


So, my configuration changes (via the filter stage) were successfully applied as can be seen in the expanded document above.

As I did with Filebeat, with Logstash I also wanted to try out filtering.

I wanted to change the message content based on a certain condition.

In the yaml directory I changed the file configmap-logstash.yaml to have the following content:
[in bold, I highlighted the changes]

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: nl-amis-logging
  labels:
    app: logstash
    version: "1.0"
    environment: logging
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  pipeline.conf: |-
    input {
      beats {
        port => 5044
      }
    }
    filter {
      if "----Begin logging BookService.getAllBooks----" in [message] {
        mutate {
          replace => {"message" => "My new message. The old message was: %{message}"}
        }
      }
      mutate {
        uppercase => [ "my_custom_field1" ]
        add_field => {
          "my_custom_field6" => "new_static_value"
          "my_custom_field7" => "%{my_custom_field1}"
          "my_custom_field8" => "%{[my-custom-sub-dictionary1][my_custom_field2]}"
          "[my-custom-sub-dictionary3][my_custom_field9]" => "%{[my-custom-sub-dictionary2][my_custom_field4]}"
        }
      }
    }
    output {
      elasticsearch {
        hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
        index => "logstash-via-filebeat-%{+YYYY.MM.dd}"
      }
    }
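The conditional in the filter above only rewrites messages that contain the marker string, and %{message} splices the old message into the new one. A plain-Python illustration of that behavior (apply_replace is a hypothetical helper, not Logstash code):

```python
# Only messages containing the marker are rewritten; %{message} in the
# Logstash replace corresponds to appending the old message here.
MARKER = "----Begin logging BookService.getAllBooks----"

def apply_replace(message):
    if MARKER in message:
        return "My new message. The old message was: " + message
    return message

print(apply_replace(MARKER))
print(apply_replace("some other log line"))  # left unchanged
```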

So, because I made some changes, I deleted and re-installed the release of Logstash (via Helm, the package manager for Kubernetes) and started the shell script logstash.sh as described earlier.

In the Kibana Dashboard via Discover I refreshed the log files.

In the list above, I searched for a specific hit (by looking at the message field).

I found it:

Via a click on icon “>”, the document is expanded.

You can also choose to view the expanded document in JSON:

{
  "_index": "logstash-via-filebeat-2019.11.05",
  "_type": "_doc",
  "_id": "IaRnPW4BcI8lvItto9W0",
  "_version": 1,
  "_score": null,
  "_source": {
    "my_custom_field1": "VALUE_OF_MY_CUSTOM_FIELD1",
    "my-custom-sub-dictionary3": {
      "my_custom_field9": "value_of_my_custom_field4"
    },
    "ecs": {
      "version": "1.0.1"
    },
    "stream": "stdout",
    "my_custom_field7": "VALUE_OF_MY_CUSTOM_FIELD1",
    "my_custom_field8": "value_of_my_custom_field2",
    "log": {
      "file": {
        "path": "/var/log/containers/booksservice-v1.0-68785bc6ff-lxnhh_nl-amis-development_booksservice-v1-0-container-cd88fc43cd96277e9887bb71da6c4d819bb18f60a60b573719c72f21d809091f.log"
      },
      "offset": 12743
    },
    "@version": "1",
    "my-custom-sub-dictionary1": {
      "my_custom_field2": "value_of_my_custom_field2",
      "my_custom_field3": "value_of_my_custom_field3"
    },
    "agent": {
      "id": "a0d8d0dd-4f02-4ec7-b582-af252604560a",
      "ephemeral_id": "d48eaab2-5510-4c60-b133-dbdb366da8de",
      "version": "7.3.1",
      "type": "filebeat",
      "hostname": "ubuntu-xenial"
    },
    "message": "My new message. The old message was: ----Begin logging BookService.getAllBooks----",
    "@timestamp": "2019-11-05T21:10:22.903Z",
    "fields": {
      "my_custom_field5": "value_of_my_custom_field5"
    },
    "kubernetes": {
      "container": {
        "name": "booksservice-v1-0-container"
      },
      "pod": {
        "name": "booksservice-v1.0-68785bc6ff-lxnhh",
        "uid": "6b8724a9-fff9-11e9-982d-023e591c269a"
      },
      "node": {
        "name": "minikube"
      },
      "namespace": "nl-amis-development",
      "labels": {
        "environment": "development",
        "app": "booksservice",
        "pod-template-hash": "68785bc6ff",
        "version": "1.0"
      },
      "replicaset": {
        "name": "booksservice-v1.0-68785bc6ff"
      }
    },
    "my-custom-sub-dictionary2": {
      "my_custom_field4": "value_of_my_custom_field4"
    },
    "my_custom_field6": "new_static_value",
    "input": {
      "type": "container"
    },
    "host": {
      "name": "ubuntu-xenial",
      "os": {
        "platform": "centos",
        "version": "7 (Core)",
        "family": "redhat",
        "name": "CentOS Linux",
        "codename": "Core",
        "kernel": "4.4.0-142-generic"
      },
      "hostname": "ubuntu-xenial",
      "architecture": "x86_64",
      "containerized": false
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      "2019-11-05T21:10:22.903Z"
    ]
  },
  "sort": [
    1572988222903
  ]
}

So again, my configuration changes (via the filter stage) were successfully applied as can be seen in the expanded document above.

Creating the ConfigMap from a file

In line with what I did when I used Filebeat (see previous article), I wanted to create a ConfigMap from multiple files in the same directory.
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]

The content of the files is based on the file configmap-logstash.yaml that I created earlier.

Therefore, in the vagrant directory I created a subdirectory structure configmaps/configmap-logstash, with a file logstash.yml with the following content:

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

And a file pipeline.conf with the following content:

input {
  beats {
    port => 5044
  }
}
filter {
  if "----Begin logging BookService.getAllBooks----" in [message] {
    mutate {
      replace => {"message" => "My new message. The old message was: %{message}"}
    }
  }
  mutate {
    uppercase => [ "my_custom_field1" ]
    add_field => {
      "my_custom_field6" => "new_static_value"
      "my_custom_field7" => "%{my_custom_field1}"
      "my_custom_field8" => "%{[my-custom-sub-dictionary1][my_custom_field2]}"
      "[my-custom-sub-dictionary3][my_custom_field9]" => "%{[my-custom-sub-dictionary2][my_custom_field4]}"
    }
  }
}
output {
  elasticsearch {
    hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    index => "logstash-via-filebeat-%{+YYYY.MM.dd}"
  }
}
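The hosts setting uses Logstash's ${VAR:default} environment-variable substitution: when ELASTICSEARCH_HOST or ELASTICSEARCH_PORT is not set, the defaults elasticsearch and 9200 are used. In Python terms (resolve is a hypothetical helper):

```python
import os

# Logstash's ${VAR:default} falls back to the default
# when the environment variable is unset.
def resolve(var, default):
    return os.environ.get(var, default)

host = resolve("ELASTICSEARCH_HOST", "elasticsearch")
port = resolve("ELASTICSEARCH_PORT", "9200")
print(host + ":" + port)
```

In my Kubernetes setup these defaults resolve to the elasticsearch-service in the nl-amis-logging namespace.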

I created a ConfigMap that holds these Logstash config files using:

kubectl create configmap logstash-configmap --from-file=/vagrant/configmaps/configmap-logstash --namespace nl-amis-logging

Next, I added labels to the ConfigMap using:

kubectl label configmap logstash-configmap --namespace nl-amis-logging app=logstash
kubectl label configmap logstash-configmap --namespace nl-amis-logging version="1.0"
kubectl label configmap logstash-configmap --namespace nl-amis-logging environment=logging

A ConfigMap can be created via a yaml file, but not if you want to use the --from-file option, because Kubernetes isn’t aware of the local file’s path.
[https://stackoverflow.com/questions/51268488/kubernetes-configmap-set-from-file-in-yaml-configuration]

You must create a ConfigMap before referencing it in a Pod specification (unless you mark the ConfigMap as “optional”). If you reference a ConfigMap that doesn’t exist, the Pod won’t start.
ConfigMaps reside in a specific namespace. A ConfigMap can only be referenced by pods residing in the same namespace.
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/]

When you create a ConfigMap using --from-file, the filename becomes a key stored in the data section of the ConfigMap. The file contents become the key’s value.
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume]
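That key naming can be sketched as follows (configmap_data_from_dir is a hypothetical helper that mimics what kubectl does; the demo uses a throwaway directory instead of /vagrant/configmaps/configmap-logstash):

```python
import os
import tempfile

# Mimics how "kubectl create configmap --from-file=<dir>" fills the data
# section: each file name becomes a key, the file contents its value.
def configmap_data_from_dir(path):
    data = {}
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            with open(full) as f:
                data[name] = f.read()
    return data

# Throwaway directory standing in for the configmap-logstash directory:
with tempfile.TemporaryDirectory() as d:
    for name in ("logstash.yml", "pipeline.conf"):
        with open(os.path.join(d, name), "w") as f:
            f.write("# contents of " + name + "\n")
    print(sorted(configmap_data_from_dir(d)))  # ['logstash.yml', 'pipeline.conf']
```

This is why, in the kubectl get output shown below, the data section contains exactly the keys logstash.yml and pipeline.conf.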

On my Windows laptop, in the yaml directory, I deleted the file configmap-logstash.yaml.

In the scripts directory I changed the file logstash.sh to have the following content:

#!/bin/bash
echo "**** Begin installing Logstash"

#Create ConfigMap before creating Deployment
kubectl create configmap logstash-configmap --from-file=/vagrant/configmaps/configmap-logstash --namespace nl-amis-logging

#Label ConfigMap
kubectl label configmap logstash-configmap --namespace nl-amis-logging app=logstash
kubectl label configmap logstash-configmap --namespace nl-amis-logging version="1.0"
kubectl label configmap logstash-configmap --namespace nl-amis-logging environment=logging

#List configmaps
echo "**** List configmap logstash-configmap with namespace nl-amis-logging"
#kubectl describe configmaps logstash-configmap --namespace nl-amis-logging
kubectl get configmaps logstash-configmap --namespace nl-amis-logging -o yaml

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/logstash-chart/*
helm create logstash-chart

rm -rf /vagrant/helmcharts/logstash-chart/templates/*
cp /vagrant/yaml/*logstash.yaml /vagrant/helmcharts/logstash-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart logstash-chart"
helm install ./logstash-chart --name logstash-release

# Wait 2,5 minute
echo "**** Waiting 2,5 minute ..."
sleep 150

#List helm releases
echo "**** List helm releases"
helm list -d

#List pods
echo "**** List pods with namespace nl-amis-logging"
kubectl get pods --namespace nl-amis-logging

#List services
echo "**** List services with namespace nl-amis-logging"
kubectl get service --namespace nl-amis-logging

echo "**** End installing Logstash"

So, because I made some changes, I deleted and re-installed the release of Logstash (via Helm, the package manager for Kubernetes).

helm del --purge logstash-release

release "logstash-release" deleted
kubectl --namespace=nl-amis-logging delete configmap logstash-configmap

configmap "logstash-configmap" deleted

Next, I started the shell script:

cd /vagrant/scripts
./logstash.sh

With the following output:

**** Begin installing Logstash
configmap/logstash-configmap created
configmap/logstash-configmap labeled
configmap/logstash-configmap labeled
configmap/logstash-configmap labeled
**** List configmap logstash-configmap with namespace nl-amis-logging
apiVersion: v1
data:
  logstash.yml: "http.host: \"0.0.0.0\"\r\npath.config: /usr/share/logstash/pipeline"
  pipeline.conf: "input {\r\n beats {\r\n port => 5044\r\n }\r\n}\r\nfilter {\r\n
    \ if \"----Begin logging BookService.getAllBooks----\" in [message] {\r\n mutate
    {\r\n replace => {\"message\" => \"My new message. The old message was: %{message}\"}\r\n
    \ }\r\n }\r\n mutate {\r\n uppercase => [ \"my_custom_field1\" ]\r\n add_field
    => {\r\n \"my_custom_field6\" => \"new_static_value\"\r\n \"my_custom_field7\"
    => \"%{my_custom_field1}\"\r\n \"my_custom_field8\" => \"%{[my-custom-sub-dictionary1][my_custom_field2]}\"\r\n
    \ \"[my-custom-sub-dictionary3][my_custom_field9]\" => \"%{[my-custom-sub-dictionary2][my_custom_field4]}\"\r\n
    \ }\r\n }\r\n}\r\noutput {\r\n elasticsearch {\r\n hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']\r\n
    \ index => \"logstash-via-filebeat-%{+YYYY.MM.dd}\"\r\n }\r\n}"
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-07T19:56:52Z"
  labels:
    app: logstash
    environment: logging
    version: "1.0"
  name: logstash-configmap
  namespace: nl-amis-logging
  resourceVersion: "17502"
  selfLink: /api/v1/namespaces/nl-amis-logging/configmaps/logstash-configmap
  uid: be12cf33-0198-11ea-982d-023e591c269a
**** Create Helm chart
Creating logstash-chart
**** Install Helm chart logstash-chart
NAME: logstash-release
LAST DEPLOYED: Thu Nov 7 19:56:53 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
logstash-59bcc854b7-qcf68 0/1 ContainerCreating 0 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
logstash-service ClusterIP 10.96.65.0 5044/TCP 0s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
logstash 0/1 1 0 0s


**** Waiting 2,5 minute ...
**** List helm releases
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
namespace-release 1 Tue Nov 5 18:09:49 2019 DEPLOYED namespace-chart-0.1.0 1.0 default
elasticsearch-release 1 Tue Nov 5 18:10:20 2019 DEPLOYED elasticsearch-chart-0.1.0 1.0 default
kibana-release 1 Tue Nov 5 18:12:54 2019 DEPLOYED kibana-chart-0.1.0 1.0 default
filebeat-release 1 Tue Nov 5 18:17:59 2019 DEPLOYED filebeat-chart-0.1.0 1.0 default
mysql-release 1 Tue Nov 5 18:19:14 2019 DEPLOYED mysql-chart-0.1.0 1.0 default
booksservice-release 1 Tue Nov 5 18:23:52 2019 DEPLOYED booksservice-chart-0.1.0 1.0 default
logstash-release 1 Thu Nov 7 19:56:53 2019 DEPLOYED logstash-chart-0.1.0 1.0 default
**** List pods with namespace nl-amis-logging
NAME READY STATUS RESTARTS AGE
elasticsearch-6b46c44f7c-jbswp 1/1 Running 0 2d1h
filebeat-daemonset-4vjwr 1/1 Running 0 2d1h
kibana-6f96d679c4-ph5nt 1/1 Running 0 2d1h
logstash-59bcc854b7-qcf68 1/1 Running 0 2m30s
**** List services with namespace nl-amis-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-service NodePort 10.107.215.66 9200:30200/TCP 2d1h
kibana-service NodePort 10.102.145.233 5601:30601/TCP 2d1h
logstash-service ClusterIP 10.96.65.0 5044/TCP 2m30s
**** End installing Logstash

Via the Kubernetes Web UI (Dashboard) I checked that the Logstash Config Map component was created (in the nl-amis-logging namespace):

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/pod?namespace=nl-amis-logging

Navigate to Config and Storage | Config Maps:

Directory layout of my Logstash Pod

I used the following command to list all Pods:

kubectl get pods --namespace nl-amis-logging

With the following output:

NAME                             READY   STATUS    RESTARTS   AGE
elasticsearch-6b46c44f7c-jbswp   1/1     Running   0          2d1h
filebeat-daemonset-4vjwr         1/1     Running   0          2d1h
kibana-6f96d679c4-ph5nt          1/1     Running   0          2d1h
logstash-59bcc854b7-qcf68        1/1     Running   0          7m21s

I started a shell to the running container with the following command:

kubectl exec -it logstash-59bcc854b7-qcf68 --namespace nl-amis-logging -- /bin/bash

I used the following command to list the directory contents:

ls -latr

With the following output:

total 896
-rw-rw-r-- 1 logstash root 808305 Oct 22 19:33 NOTICE.TXT
-rw-rw-r-- 1 logstash root 2276 Oct 22 19:33 CONTRIBUTORS
-rw-rw-r-- 1 logstash root 13675 Oct 22 19:33 LICENSE.txt
-rw-rw-r-- 1 logstash root 4144 Oct 22 19:35 Gemfile
-rw-rw-r-- 1 logstash root 23111 Oct 22 19:36 Gemfile.lock
drwxrwsr-x 4 logstash root 4096 Oct 22 19:45 modules
drwxrwsr-x 6 logstash root 4096 Oct 22 19:45 lib
drwxrwsr-x 2 logstash root 4096 Oct 22 19:45 bin
drwxrwsr-x 3 logstash root 4096 Oct 22 19:45 tools
drwxrwsr-x 3 logstash root 4096 Oct 22 19:45 logstash-core-plugin-api
drwxrwsr-x 4 logstash root 4096 Oct 22 19:45 logstash-core
drwxrwsr-x 9 logstash root 4096 Oct 22 19:45 x-pack
drwxrwsr-x 4 logstash root 4096 Oct 22 19:45 vendor
drwxr-xr-x 1 root root 4096 Oct 22 19:45 ..
drwxrwsr-x 1 logstash root 4096 Oct 22 19:45 .
drwxrwxrwx 3 root root 4096 Nov 7 19:56 pipeline
drwxrwxrwx 3 root root 4096 Nov 7 19:56 config
drwxrwsr-x 1 logstash root 4096 Nov 7 19:57 data

I used the following commands to list the config directory contents:

cd config
ls -latr

With the following output:

total 12
drwxrwsr-x 1 logstash root 4096 Oct 22 19:45 ..
lrwxrwxrwx 1 root root 19 Nov 7 19:56 logstash.yml -> ..data/logstash.yml
lrwxrwxrwx 1 root root 31 Nov 7 19:56 ..data -> ..2019_11_07_19_56_54.843326173
drwxr-xr-x 2 root root 4096 Nov 7 19:56 ..2019_11_07_19_56_54.843326173
drwxrwxrwx 3 root root 4096 Nov 7 19:56 .
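The ..data entry in this listing shows how Kubernetes updates mounted ConfigMap files atomically: the visible file names are symlinks into a timestamped directory, and an update only repoints the ..data symlink. A rough Python sketch of that pattern (publish and the directory names are hypothetical; this is not the kubelet's actual code):

```python
import os
import tempfile

# Sketch of the "..data" indirection: files are symlinks via ..data into
# a timestamped directory; an update writes a new timestamped directory
# and repoints ..data with a single atomic rename.
root = tempfile.mkdtemp()

def publish(version_dir, content):
    d = os.path.join(root, version_dir)
    os.makedirs(d)
    with open(os.path.join(d, "logstash.yml"), "w") as f:
        f.write(content)
    tmp = os.path.join(root, "..data_tmp")
    os.symlink(version_dir, tmp)
    os.replace(tmp, os.path.join(root, "..data"))  # atomic swap

publish("..2019_11_07_19_56_54", 'http.host: "0.0.0.0"\n')
os.symlink("..data/logstash.yml", os.path.join(root, "logstash.yml"))
print(open(os.path.join(root, "logstash.yml")).read())

# A ConfigMap update repoints ..data; the path readers use stays the same.
publish("..2019_11_07_20_00_00", 'http.host: "127.0.0.1"\n')
print(open(os.path.join(root, "logstash.yml")).read())
```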

I used the following command to check the content of the configuration file:

cat logstash.yml

With the following output:

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

I used the following commands to list the pipeline directory contents:

cd ..
cd pipeline
ls -latr

With the following output:

total 12
lrwxrwxrwx 1 root root 20 Nov 7 19:56 pipeline.conf -> ..data/pipeline.conf
lrwxrwxrwx 1 root root 31 Nov 7 19:56 ..data -> ..2019_11_07_19_56_54.185834998
drwxr-xr-x 2 root root 4096 Nov 7 19:56 ..2019_11_07_19_56_54.185834998
drwxrwxrwx 3 root root 4096 Nov 7 19:56 .
drwxrwsr-x 1 logstash root 4096 Nov 7 20:07 ..

I used the following command to check the content of the configuration file:

cat pipeline.conf

With the following output:

input {
  beats {
    port => 5044
  }
}
filter {
  if "----Begin logging BookService.getAllBooks----" in [message] {
    mutate {
      replace => {"message" => "My new message. The old message was: %{message}"}
    }
  }
  mutate {
    uppercase => [ "my_custom_field1" ]
    add_field => {
      "my_custom_field6" => "new_static_value"
      "my_custom_field7" => "%{my_custom_field1}"
      "my_custom_field8" => "%{[my-custom-sub-dictionary1][my_custom_field2]}"
      "[my-custom-sub-dictionary3][my_custom_field9]" => "%{[my-custom-sub-dictionary2][my_custom_field4]}"
    }
  }
}
output {
  elasticsearch {
    hosts => ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    index => "logstash-via-filebeat-%{+YYYY.MM.dd}"
  }
}

Of course, this shows the same content as can be seen in the Kubernetes Web UI (Dashboard).

I closed the shell to the running container with the following command:

exit

Kibana Dashboard

After I deleted and re-installed the release of Logstash (via Helm, the package manager for Kubernetes) because I created the ConfigMap from a file, I invoked from Postman a request named “GetAllBooksRequest” (with method “POST” and URL “http://localhost:9010/books”).
This concerns version 1.0 in the DEV environment.
A response with “Status 200 OK” was shown (with 4 books being retrieved).

In the Kibana Dashboard via Discover I refreshed the log files.

In the list above, I searched for a specific hit (by looking at the message field) and I found it:

Via a click on icon “>”, the document is expanded.

You can also choose to view the expanded document in JSON:

{
  "_index": "logstash-via-filebeat-2019.11.07",
  "_type": "_doc",
  "_id": "LaR_R24BcI8lvItthtWs",
  "_score": 1,
  "_source": {
    "my-custom-sub-dictionary3": {
      "my_custom_field9": "value_of_my_custom_field4"
    },
    "log": {
      "offset": 14404,
      "file": {
        "path": "/var/log/containers/booksservice-v1.0-68785bc6ff-rxx84_nl-amis-development_booksservice-v1-0-container-5f19bff55b42c637ca566455885cbe64fafa9842c6a4df076975ccd3915d6bf1.log"
      }
    },
    "my-custom-sub-dictionary2": {
      "my_custom_field4": "value_of_my_custom_field4"
    },
    "@timestamp": "2019-11-07T20:12:44.855Z",
    "agent": {
      "type": "filebeat",
      "version": "7.3.1",
      "ephemeral_id": "d48eaab2-5510-4c60-b133-dbdb366da8de",
      "hostname": "ubuntu-xenial",
      "id": "a0d8d0dd-4f02-4ec7-b582-af252604560a"
    },
    "kubernetes": {
      "container": {
        "name": "booksservice-v1-0-container"
      },
      "node": {
        "name": "minikube"
      },
      "namespace": "nl-amis-development",
      "labels": {
        "pod-template-hash": "68785bc6ff",
        "environment": "development",
        "version": "1.0",
        "app": "booksservice"
      },
      "replicaset": {
        "name": "booksservice-v1.0-68785bc6ff"
      },
      "pod": {
        "uid": "6b854938-fff9-11e9-982d-023e591c269a",
        "name": "booksservice-v1.0-68785bc6ff-rxx84"
      }
    },
    "ecs": {
      "version": "1.0.1"
    },
    "host": {
      "name": "ubuntu-xenial",
      "architecture": "x86_64",
      "hostname": "ubuntu-xenial",
      "containerized": false,
      "os": {
        "platform": "centos",
        "version": "7 (Core)",
        "kernel": "4.4.0-142-generic",
        "family": "redhat",
        "name": "CentOS Linux",
        "codename": "Core"
      }
    },
    "my_custom_field7": "VALUE_OF_MY_CUSTOM_FIELD1",
    "my-custom-sub-dictionary1": {
      "my_custom_field2": "value_of_my_custom_field2",
      "my_custom_field3": "value_of_my_custom_field3"
    },
    "fields": {
      "my_custom_field5": "value_of_my_custom_field5"
    },
    "stream": "stdout",
    "my_custom_field8": "value_of_my_custom_field2",
    "input": {
      "type": "container"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "message": "My new message. The old message was: ----Begin logging BookService.getAllBooks----",
    "my_custom_field6": "new_static_value",
    "my_custom_field1": "VALUE_OF_MY_CUSTOM_FIELD1",
    "@version": "1"
  },
  "fields": {
    "@timestamp": [
      "2019-11-07T20:12:44.855Z"
    ]
  }
}

Based on this content, I could conclude that after creating the ConfigMap from a file, and restarting Logstash, everything worked as expected.

So now it’s time to conclude this article. I tried out some of the functionality of Elastic Filebeat in combination with Logstash. Besides log aggregation (getting log information available at a centralized location), I also described how I used Logstash to filter and enrich the exported log data.

About Author

Marc, active in IT (and with Oracle) since 1995, is a Principal Oracle SOA Consultant with focus on Oracle Cloud, Oracle Service Bus, Oracle SOA Suite, Oracle Database (SQL & PL/SQL) and Java, Docker, Kubernetes, Minikube and Helm. He is an Oracle SOA Suite 12c Certified Implementation Specialist. Over the past 20 years he has worked for several customers in the Netherlands. Marc likes to share his knowledge through publications, blogs and presentations.
