I wanted to set up a demo environment with Apache Kafka (an open-source distributed event streaming platform) and an Oracle Database, all within containers. For my purpose I opted for Oracle Database XE.
[https://kafka.apache.org/]
In my previous article, I described the steps I took to set up a demo environment with an Oracle Database 21c XE, within an Oracle VirtualBox appliance, with the help of Vagrant.
In this article, the focus is on installing Apache Kafka, trying out a quick start example, and the problems I encountered in my demo environment when using an open-source web GUI for Apache Kafka.
Apache Kafka
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
[https://kafka.apache.org/]
Kafka combines three key capabilities for event streaming:
- Publish (write) and subscribe to (read) streams of events, including continuous import/export of your data from other systems.
- Store streams of events durably and reliably for as long as you want.
- Process streams of events as they occur or retrospectively.
[https://kafka.apache.org/documentation/#intro_platform]
I won’t dive deep into Kafka’s functionality, but here is how Kafka works in a nutshell:
Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premise as well as cloud environments.
Servers: Kafka is run as a cluster of one or more servers that can span multiple datacenters or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run Kafka Connect to continuously import and export data as event streams to integrate Kafka with your existing systems such as relational databases as well as other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable and fault-tolerant: if any of its servers fails, the other servers will take over their work to ensure continuous operations without any data loss.
Clients: They allow you to write distributed applications and microservices that read, write, and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network problems or machine failures. Kafka ships with some such clients included, which are augmented by dozens of clients provided by the Kafka community: clients are available for Java and Scala including the higher-level Kafka Streams library, for Go, Python, C/C++, and many other programming languages as well as REST APIs.
[https://kafka.apache.org/documentation/#intro_nutshell]
I will also briefly explain some main concepts and terminology:
- An event records the fact that “something happened” in the world or in your business. It is also called record or message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers.
- Producers are those client applications that publish (write) events to Kafka, and consumers are those that subscribe to (read and process) these events.
- Events are organized and durably stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.
- Topics are partitioned, meaning a topic is spread over a number of “buckets” located on different Kafka brokers.
[https://kafka.apache.org/documentation/#intro_concepts_and_terms]
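To make these concepts a bit more concrete, here is a small Python sketch of my own (not Kafka client code) that models an event and shows how a key can be mapped to a partition. Note that the real Kafka default partitioner uses a murmur2 hash; the md5 hash here is just a stand-in for illustration.

```python
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    # Conceptually, a Kafka event has a key, a value, a timestamp
    # and optional metadata headers.
    key: str
    value: str
    timestamp: float = field(default_factory=time.time)
    headers: dict = field(default_factory=dict)

def partition_for(key: str, num_partitions: int) -> int:
    # Events with the same key always land in the same partition,
    # which is what gives per-key ordering.
    # (Real Kafka producers use murmur2; md5 is just a stand-in.)
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

event = Event(key="customer-42", value="order placed")
p1 = partition_for(event.key, 6)
p2 = partition_for(event.key, 6)
print(p1 == p2)  # prints True: same key -> same partition ("bucket")
```

This is only a conceptual model; in practice the broker assigns offsets and stores the events durably.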
Confluent Developer, Apache Kafka® Quick Start
Because my demo environment already had Docker installed, I wanted to install Kafka via a Docker image.
Of course, there are several images to choose from.
I went for Confluent Community Docker Image for Apache Kafka.
[https://hub.docker.com/r/confluentinc/cp-kafka/]
I wanted to start simple, so I followed the Apache Kafka® Quick Start (the guide demonstrating how to quickly get started with Apache Kafka) from the Confluent Developer website. In this Quick Start we will connect to a broker, create a topic, produce some messages, and consume them.
[https://developer.confluent.io/quickstart/kafka-docker/]
As you can read in the Quick Start, for the setup of a Kafka broker, a Docker Compose file is used.
Installing Docker Compose
I did not have Docker Compose running on my VM, so I had to install it.
A prerequisite for installing Docker Compose is Docker Engine for your OS.
[https://docs.docker.com/compose/install/#prerequisites]
I did have Docker Engine already running on my VM.
Remark:
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
[https://docs.docker.com/compose/]
From the subdirectory named env on my Windows laptop, I navigated to the scripts directory, where I created a file docker-compose.sh with the following content (following the instructions in https://docs.docker.com/compose/install/):
#!/bin/bash
echo "**** Begin installing Docker Compose"
# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Apply executable permissions to the binary
sudo chmod +x /usr/local/bin/docker-compose
# Test the installation
docker-compose --version
echo "**** End installing Docker Compose"
I changed the content of the Vagrantfile:
[in bold, I highlighted the changes]
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "ubuntu_docker" do |ubuntu_docker|
    config.vm.network "forwarded_port", guest: 1521, host: 1521, auto_correct: true
    config.vm.network "forwarded_port", guest: 5500, host: 5500, auto_correct: true

    config.vm.provider "virtualbox" do |vb|
      vb.name = "Ubuntu Docker"
      vb.memory = "8192"
      vb.cpus = "1"
    end

    args = []
    config.vm.provision "docker shell script", type: "shell",
      path: "scripts/docker.sh",
      args: args

    args = []
    config.vm.provision "docker-compose shell script", type: "shell",
      path: "scripts/docker-compose.sh",
      args: args

    # args = []
    # config.vm.provision "oracle-database-xe-21c shell script", type: "shell",
    #   path: "scripts/oracle-database-xe-21c.sh",
    #   args: args
  end
end
Remark:
Because in this series of articles the focus is on Apache Kafka, I commented out the lines of code to install Oracle Database 21c XE (21.3.0).
In order to test this new setup, I used vagrant destroy and vagrant up.
With the following output:
ubuntu_docker: **** Begin installing Docker Compose
ubuntu_docker:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_docker:                                  Dload  Upload   Total   Spent    Left  Speed
100 12.1M  100 12.1M    0     0  7847k      0  0:00:01  0:00:01 --:--:-- 9430k
ubuntu_docker: docker-compose version 1.29.2, build 5becea4c
ubuntu_docker: **** End installing Docker Compose
Setting up and starting a Kafka broker
As I mentioned above, for the setup of a Kafka broker, a Docker Compose file is used.
[https://developer.confluent.io/quickstart/kafka-docker/]
So, on my Windows laptop, I added to the env directory a subdirectory called docker-compose, where I created a file docker-compose.yml with the following content (copied from the Quick Start website):
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.2.0
    container_name: broker
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
As you can see, this Docker Compose file defines two services: zookeeper (Apache ZooKeeper) and broker (a Kafka broker).
For information about Apache ZooKeeper please see: https://zookeeper.apache.org/doc/current/index.html
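One thing worth noting in this Compose file is the KAFKA_ADVERTISED_LISTENERS setting: the broker advertises two addresses, one for clients on the host (localhost:9092, reachable via the published port) and one for other containers on the Docker network (broker:29092). The small Python sketch below (my own helper, not part of any Kafka library) just parses such a listener string to show the two addresses side by side; this distinction becomes important later in this article.

```python
def parse_advertised_listeners(value: str) -> dict:
    """Parse a KAFKA_ADVERTISED_LISTENERS-style string into
    {listener_name: (host, port)}."""
    result = {}
    for entry in value.split(","):
        name, address = entry.split("://", 1)
        host, port = address.rsplit(":", 1)
        result[name] = (host, int(port))
    return result

listeners = parse_advertised_listeners(
    "PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092"
)
print(listeners["PLAINTEXT"])            # ('localhost', 9092) -> for clients on the host
print(listeners["PLAINTEXT_INTERNAL"])   # ('broker', 29092)   -> for other containers
```

A client must use the address that is reachable from where it runs: another container should bootstrap via broker:29092, not localhost:9092.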
From the subdirectory named env on my Windows laptop, I navigated to the scripts directory, where I created a file kafka.sh with the following content (following the instructions in https://developer.confluent.io/quickstart/kafka-docker/):
#!/bin/bash
echo "**** Begin installing Kafka"
cd /vagrant/docker-compose
docker-compose up -d
echo "**** End installing Kafka"
Remark about shared folder:
Mounting shared folders...
    ubuntu_docker: /vagrant => C:/My/AMIS/env
Remember, via the shared folder, the copied files are also available from within the Oracle VirtualBox appliance.
Next, I changed the content of the Vagrantfile:
[in bold, I highlighted the changes]
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "ubuntu_docker" do |ubuntu_docker|
    config.vm.network "forwarded_port", guest: 1521, host: 1521, auto_correct: true
    config.vm.network "forwarded_port", guest: 5500, host: 5500, auto_correct: true

    config.vm.provider "virtualbox" do |vb|
      vb.name = "Ubuntu Docker"
      vb.memory = "8192"
      vb.cpus = "1"
    end

    args = []
    config.vm.provision "docker shell script", type: "shell",
      path: "scripts/docker.sh",
      args: args

    args = []
    config.vm.provision "docker-compose shell script", type: "shell",
      path: "scripts/docker-compose.sh",
      args: args

    # args = []
    # config.vm.provision "oracle-database-xe-21c shell script", type: "shell",
    #   path: "scripts/oracle-database-xe-21c.sh",
    #   args: args

    args = []
    config.vm.provision "kafka shell script", type: "shell",
      path: "scripts/kafka.sh",
      args: args
  end
end
In order to test this new setup, I used vagrant destroy and vagrant up.
With the following output:
ubuntu_docker: **** Begin installing Kafka
ubuntu_docker: Creating network "docker-compose_default" with the default driver
ubuntu_docker: Pulling zookeeper (confluentinc/cp-zookeeper:6.2.0)...
ubuntu_docker: 6.2.0: Pulling from confluentinc/cp-zookeeper
ubuntu_docker: Digest: sha256:9a69c03fd1757c3154e4f64450d0d27a6decb0dc3a1e401e8fc38e5cea881847
ubuntu_docker: Status: Downloaded newer image for confluentinc/cp-zookeeper:6.2.0
ubuntu_docker: Pulling broker (confluentinc/cp-kafka:6.2.0)...
ubuntu_docker: 6.2.0: Pulling from confluentinc/cp-kafka
ubuntu_docker: Digest: sha256:97f572d93c6b2d388c5dadd644a90990ec29e42e5652c550c84d1a9be9d6dcbd
ubuntu_docker: Status: Downloaded newer image for confluentinc/cp-kafka:6.2.0
ubuntu_docker: Creating zookeeper ...
ubuntu_docker: Creating zookeeper ... done
ubuntu_docker: Creating broker ...
ubuntu_docker: Creating broker ... done
ubuntu_docker: CONTAINER ID   IMAGE                             COMMAND                  CREATED              STATUS                          PORTS                                       NAMES
ubuntu_docker: 0be292c1abbe   confluentinc/cp-kafka:6.2.0       "/etc/confluent/dock…"   2 seconds ago        Up Less than a second           0.0.0.0:9092->9092/tcp, :::9092->9092/tcp   broker
ubuntu_docker: 56c589565120   confluentinc/cp-zookeeper:6.2.0   "/etc/confluent/dock…"   3 seconds ago        Up 1 second                     2181/tcp, 2888/tcp, 3888/tcp                zookeeper
ubuntu_docker: edd09e8d6321   hello-world                       "/hello"                 About a minute ago   Exited (0) About a minute ago                                               strange_shockley
ubuntu_docker: **** End installing Kafka
As you can see, the Docker containers named broker and zookeeper are up and running.
I closed the Windows Command Prompt.
Creating a topic and writing and reading messages
With the Kafka broker in place, I then continued with the instructions from the Quick Start.
[https://developer.confluent.io/quickstart/kafka-docker/]
I used vagrant ssh to connect to the running VM. Next, in order to create a topic, I used the following command on the Linux Command Prompt:
docker exec broker \
  kafka-topics --bootstrap-server broker:9092 \
  --create \
  --topic quickstart
With the following output:
Created topic quickstart.
Then, in order to write messages to the topic, I used the following command on the Linux Command Prompt:
docker exec --interactive --tty broker \
  kafka-console-producer --bootstrap-server broker:9092 \
  --topic quickstart
Next, I typed in some lines of text. Each line is a new message.
this is my first kafka message
hello world!
this is my third kafka message. I’m on a roll :-D
Then, to exit the producer, I typed Ctrl-D to return to my Linux Command Prompt.
Next, in order to read messages from the topic, I used the following command on the Linux Command Prompt:
docker exec --interactive --tty broker \
  kafka-console-consumer --bootstrap-server broker:9092 \
  --topic quickstart \
  --from-beginning
With the following output:
this is my first kafka message
hello world!
this is my third kafka message. I’m on a roll :-D
Then, to stop the consumer, I typed Ctrl-C to return to my Linux Command Prompt.
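The reason --from-beginning replays all the messages is that a topic is an append-only log: reading does not remove anything, and each consumer simply chooses an offset to start from. Here is a deliberately simplified in-memory Python sketch of that idea (my own simulation of a single-partition topic, not the Kafka client API):

```python
class TopicLog:
    """Very simplified model of a single-partition Kafka topic:
    an append-only log where each message gets a sequential offset."""

    def __init__(self):
        self.messages = []

    def produce(self, message: str) -> int:
        self.messages.append(message)
        return len(self.messages) - 1  # the message's offset

    def consume(self, from_offset: int = 0):
        # Reading does not remove messages; consumers just track an offset.
        return self.messages[from_offset:]

topic = TopicLog()
for line in ["this is my first kafka message",
             "hello world!",
             "this is my third kafka message. I'm on a roll :-D"]:
    topic.produce(line)

# Like kafka-console-consumer --from-beginning: replay from offset 0.
print(topic.consume(from_offset=0))
# A consumer that starts "at the end" sees nothing until new messages arrive.
print(topic.consume(from_offset=len(topic.messages)))  # []
```

In real Kafka the broker also persists the log durably and shares it across consumer groups, but the offset mechanics are the same in spirit.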
So, Kafka was working on my demo environment 😊.
I closed the Windows Command Prompt.
UI for Apache Kafka
I also wanted to have some kind of open-source web GUI for Apache Kafka in my demo environment.
After some research on the internet, I opted for UI for Apache Kafka.
UI for Apache Kafka is a free, open-source web UI to monitor and manage Apache Kafka clusters.
UI for Apache Kafka is a simple tool that makes your data flows observable, helps find and troubleshoot issues faster and deliver optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters – Brokers, Topics, Partitions, Production, and Consumption.
Set up UI for Apache Kafka with just a couple of easy commands to visualize your Kafka data in a comprehensible way. You can run the tool locally or in the cloud.
[https://github.com/provectus/kafka-ui/blob/master/README.md]
From the subdirectory named env on my Windows laptop, I navigated to the scripts directory, where I created a file ui-for-apache-kafka.sh with the following content (following the instructions in https://github.com/provectus/kafka-ui/blob/master/README.md#running-from-docker-image):
#!/bin/bash
echo "**** Begin installing UI for Apache Kafka"
docker pull provectuslabs/kafka-ui
docker run -p 8080:8080 \
  -e KAFKA_CLUSTERS_0_NAME=local \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
  -d provectuslabs/kafka-ui:latest
docker container ls -a
echo "**** End installing UI for Apache Kafka"
Remark:
I also added a command to get the list of all the Docker containers.
Because the web UI is made available at http://localhost:8080 and I wanted to use a Web Browser on my Windows laptop, I used the forwarded_port configuration option to forward port 8080 on my host (Windows) to port 8080 on my guest (Ubuntu).
Vagrant forwarded ports allow you to access a port on your host machine and have all data forwarded to a port on the guest machine, over either TCP or UDP.
[https://www.vagrantup.com/docs/networking/forwarded_ports.html]
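To illustrate what such a port mapping does, here is a small self-contained Python sketch of my own (a toy simulation, not Vagrant or VirtualBox internals): a forwarder listens on one "host" port and pipes a TCP connection through to a "guest" service on another port. All names and ports here are made up for the demonstration.

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes from src to dst until src stops sending.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_one_forward(listen_sock: socket.socket, target_addr) -> None:
    # Accept a single connection and pipe it, both ways, to target_addr:
    # roughly the role Vagrant's forwarded_port plays between host and guest.
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

# Demo: an echo service plays the role of the service on the guest's port.
echo_srv = socket.socket()
echo_srv.bind(("127.0.0.1", 0))  # port 0 = pick any free port
echo_srv.listen(1)
echo_port = echo_srv.getsockname()[1]

def echo_once():
    conn, _ = echo_srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

# The forwarder plays the role of the forwarded_port mapping.
fwd_srv = socket.socket()
fwd_srv.bind(("127.0.0.1", 0))
fwd_srv.listen(1)
fwd_port = fwd_srv.getsockname()[1]
threading.Thread(target=serve_one_forward,
                 args=(fwd_srv, ("127.0.0.1", echo_port)), daemon=True).start()

# The "host" connects to the forwarded port and reaches the "guest" service.
with socket.create_connection(("127.0.0.1", fwd_port)) as s:
    s.sendall(b"hello through the forwarded port")
    received = s.recv(4096)
print(received)
```

Vagrant of course does this at the hypervisor level for every connection, but the principle is the same: traffic to the host port is transparently delivered to the guest port.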
Next, I changed the content of the Vagrantfile:
[in bold, I highlighted the changes]
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "ubuntu_docker" do |ubuntu_docker|
    config.vm.network "forwarded_port", guest: 1521, host: 1521, auto_correct: true
    config.vm.network "forwarded_port", guest: 5500, host: 5500, auto_correct: true
    config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct: true

    config.vm.provider "virtualbox" do |vb|
      vb.name = "Ubuntu Docker"
      vb.memory = "8192"
      vb.cpus = "1"
    end

    args = []
    config.vm.provision "docker shell script", type: "shell",
      path: "scripts/docker.sh",
      args: args

    args = []
    config.vm.provision "docker-compose shell script", type: "shell",
      path: "scripts/docker-compose.sh",
      args: args

    # args = []
    # config.vm.provision "oracle-database-xe-21c shell script", type: "shell",
    #   path: "scripts/oracle-database-xe-21c.sh",
    #   args: args

    args = []
    config.vm.provision "kafka shell script", type: "shell",
      path: "scripts/kafka.sh",
      args: args

    args = []
    config.vm.provision "ui-for-apache-kafka shell script", type: "shell",
      path: "scripts/ui-for-apache-kafka.sh",
      args: args
  end
end
In order to test this new setup, I used vagrant destroy and vagrant up.
With the following output:
ubuntu_docker: **** Begin installing UI for Apache Kafka
ubuntu_docker: Using default tag: latest
ubuntu_docker: latest: Pulling from provectuslabs/kafka-ui
…
ubuntu_docker: c631eae86696: Pull complete
ubuntu_docker: Digest: sha256:69d612ddf1ce38e9c5f59c47a8430247aa40da1822c32c6857cad48570bad791
ubuntu_docker: Status: Downloaded newer image for provectuslabs/kafka-ui:latest
ubuntu_docker: docker.io/provectuslabs/kafka-ui:latest
ubuntu_docker: 7eb53c0bb75aa5dfa8e1e3fcf6b0e33e83c466a489dd0744b40d707d06ebeecc
ubuntu_docker: CONTAINER ID   IMAGE                             COMMAND                  CREATED              STATUS                          PORTS                                       NAMES
ubuntu_docker: 7eb53c0bb75a   provectuslabs/kafka-ui:latest     "/bin/sh -c 'java $J…"   2 seconds ago        Up Less than a second           0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   competent_khorana
ubuntu_docker: b7c477be5799   confluentinc/cp-kafka:6.2.0       "/etc/confluent/dock…"   30 seconds ago       Up 29 seconds                   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp   broker
ubuntu_docker: afe900c6f5ec   confluentinc/cp-zookeeper:6.2.0   "/etc/confluent/dock…"   32 seconds ago       Up 30 seconds                   2181/tcp, 2888/tcp, 3888/tcp                zookeeper
ubuntu_docker: 5e4a03590e6e   hello-world                       "/hello"                 About a minute ago   Exited (0) About a minute ago                                               focused_ardinghelli
ubuntu_docker: **** End installing UI for Apache Kafka
On my Windows laptop, in a Web Browser, I entered the URL: http://localhost:8080
The web UI started.
But unfortunately, the Kafka broker could not be found.
I had a closer look at the following part in my file ui-for-apache-kafka.sh:
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
Then, I looked at the README.md in the part about Environment Variables.
[https://github.com/provectus/kafka-ui#readme]
[https://github.com/provectus/kafka-ui#env_variables]
In the example from the Quick Start, a bootstrap server was also mentioned.
docker exec broker \ kafka-topics --bootstrap-server broker:9092 \ --create \ --topic quickstart
I noticed the difference in hosts. Of course, using kafka as the host wasn’t going to work, because in my setup no container has that name. It should be broker.
There was also an Environment Variable for the ZooKeeper service address.
So, I tried the following settings:
[in bold, I highlighted the changes]
docker run -p 8080:8080 \
  -e KAFKA_CLUSTERS_0_NAME=local \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:9092 \
  -e KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181 \
  -d provectuslabs/kafka-ui:latest
But I still did not get a correct result.
Just to be sure I also tried another Web GUI for Apache Kafka.
This time I opted for Kafdrop.
Kafdrop – Kafka Web UI
Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.
[https://github.com/obsidiandynamics/kafdrop]
You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.
For running it with Docker, the images are hosted at hub.docker.com/r/obsidiandynamics/kafdrop.
And to launch the container in the background, the following command can be used:
docker run -d --rm -p 9000:9000 \
  -e KAFKA_BROKERCONNECT=<host:port,host:port> \
  -e JVM_OPTS="-Xms32M -Xmx64M" \
  -e SERVER_SERVLET_CONTEXTPATH="/" \
  obsidiandynamics/kafdrop
[https://github.com/obsidiandynamics/kafdrop#running-with-docker]
Using this, I still had to work out the correct value of the Environment Variable KAFKA_BROKERCONNECT for my demo environment.
Luckily for me, there also was a docker-compose.yaml file that bundles a Kafka/ZooKeeper instance with Kafdrop.
[https://github.com/obsidiandynamics/kafdrop#docker-compose]
So, I had a look at it:
version: "2"
services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - "kafka"
  kafka:
    image: obsidiandynamics/kafka
    restart: "no"
    ports:
      - "2181:2181"
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: "INTERNAL://:29092,EXTERNAL://:9092"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:29092,EXTERNAL://localhost:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
      KAFKA_ZOOKEEPER_SESSION_TIMEOUT: "6000"
      KAFKA_RESTART_ATTEMPTS: "10"
      KAFKA_RESTART_DELAY: "5"
      ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL: "0"
Based on this example, I changed the content of file docker-compose.yml, I used before:
[in bold, I highlighted the changes]
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.2.0
    container_name: broker
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1

  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "broker:29092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - broker
Remark:
Of course, using kafka wasn’t going to work, because in my setup the Kafka container is named broker. So, I changed the value of KAFKA_BROKERCONNECT to broker:29092, the internal listener address.
Because the web UI is made available at http://localhost:9000 and I wanted to use a Web Browser on my Windows laptop, I used the forwarded_port configuration option to forward port 9000 on my host (Windows) to port 9000 on my guest (Ubuntu).
Next, I changed the content of the Vagrantfile:
[in bold, I highlighted the changes]
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "ubuntu_docker" do |ubuntu_docker|
    config.vm.network "forwarded_port", guest: 1521, host: 1521, auto_correct: true
    config.vm.network "forwarded_port", guest: 5500, host: 5500, auto_correct: true
    config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct: true
    config.vm.network "forwarded_port", guest: 9000, host: 9000, auto_correct: true

    config.vm.provider "virtualbox" do |vb|
      vb.name = "Ubuntu Docker"
      vb.memory = "8192"
      vb.cpus = "1"
    end

    args = []
    config.vm.provision "docker shell script", type: "shell",
      path: "scripts/docker.sh",
      args: args

    args = []
    config.vm.provision "docker-compose shell script", type: "shell",
      path: "scripts/docker-compose.sh",
      args: args

    # args = []
    # config.vm.provision "oracle-database-xe-21c shell script", type: "shell",
    #   path: "scripts/oracle-database-xe-21c.sh",
    #   args: args

    args = []
    config.vm.provision "kafka shell script", type: "shell",
      path: "scripts/kafka.sh",
      args: args

    args = []
    config.vm.provision "ui-for-apache-kafka shell script", type: "shell",
      path: "scripts/ui-for-apache-kafka.sh",
      args: args
  end
end
In order to test this new setup, I used vagrant destroy and vagrant up.
With the following output:
ubuntu_docker: **** Begin installing Kafka
ubuntu_docker: Creating network "docker-compose_default" with the default driver
ubuntu_docker: Pulling zookeeper (confluentinc/cp-zookeeper:6.2.0)...
ubuntu_docker: 6.2.0: Pulling from confluentinc/cp-zookeeper
ubuntu_docker: Digest: sha256:9a69c03fd1757c3154e4f64450d0d27a6decb0dc3a1e401e8fc38e5cea881847
ubuntu_docker: Status: Downloaded newer image for confluentinc/cp-zookeeper:6.2.0
ubuntu_docker: Pulling broker (confluentinc/cp-kafka:6.2.0)...
ubuntu_docker: 6.2.0: Pulling from confluentinc/cp-kafka
ubuntu_docker: Digest: sha256:97f572d93c6b2d388c5dadd644a90990ec29e42e5652c550c84d1a9be9d6dcbd
ubuntu_docker: Status: Downloaded newer image for confluentinc/cp-kafka:6.2.0
ubuntu_docker: Pulling kafdrop (obsidiandynamics/kafdrop:)...
ubuntu_docker: latest: Pulling from obsidiandynamics/kafdrop
ubuntu_docker: Digest: sha256:b7ba8577ce395b1975b0ed98bb53cb6b13e7d32d5442188da1ce41c0838d1ce9
ubuntu_docker: Status: Downloaded newer image for obsidiandynamics/kafdrop:latest
ubuntu_docker: Creating zookeeper ...
ubuntu_docker: Creating zookeeper ... done
ubuntu_docker: Creating broker ...
ubuntu_docker: Creating broker ... done
ubuntu_docker: Creating docker-compose_kafdrop_1 ...
ubuntu_docker: Creating docker-compose_kafdrop_1 ... done
ubuntu_docker: CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS                     PORTS                                       NAMES
ubuntu_docker: 46cba830184c   obsidiandynamics/kafdrop          "/kafdrop.sh"            1 second ago    Up Less than a second      0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   docker-compose_kafdrop_1
ubuntu_docker: 625d446a5eb0   confluentinc/cp-kafka:6.2.0       "/etc/confluent/dock…"   2 seconds ago   Up 1 second                0.0.0.0:9092->9092/tcp, :::9092->9092/tcp   broker
ubuntu_docker: a8980809b060   confluentinc/cp-zookeeper:6.2.0   "/etc/confluent/dock…"   5 seconds ago   Up 2 seconds               2181/tcp, 2888/tcp, 3888/tcp                zookeeper
ubuntu_docker: d9fe3cfdb718   hello-world                       "/hello"                 6 minutes ago   Exited (0) 6 minutes ago                                               loving_rhodes
ubuntu_docker: **** End installing Kafka
On my Windows laptop, in a Web Browser, I entered the URL: http://localhost:9000
The web UI started.
And the broker was recognized.
I closed the Windows Command Prompt.
Now, I wanted to create a topic and write messages to the topic, just to see if these were picked up by the web UI. So, I repeated some of the steps from the Quick Start.
I used vagrant ssh to connect to the running VM. Next, in order to create a topic, I used the following command on the Linux Command Prompt:
docker exec broker \
  kafka-topics --bootstrap-server broker:9092 \
  --create \
  --topic quickstart
Then, in order to write messages to the topic, I used the following command on the Linux Command Prompt:
docker exec --interactive --tty broker \
  kafka-console-producer --bootstrap-server broker:9092 \
  --topic quickstart
Next, I typed in some lines of text. Each line is a new message.
this is my first kafka message
hello world!
this is my third kafka message. I’m on a roll :-D
Then, to exit the producer, I typed Ctrl-D to return to my Linux Command Prompt.
On my Windows laptop, I refreshed the Web Browser.
The quickstart topic became visible.
Then I clicked on the View Messages button.
Next, I clicked on the Search and View Messages button.
The 3 messages were found.
So, I finally got the result I was looking for.
Now it was time to understand why the other web UI (UI for Apache Kafka) wasn’t working.
You can read more about the way forward in my next article. But I can already say that I got it working. And of course, in hindsight it all makes sense.