[Image: lameriks 2025 08 1f]

Quarkus – Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s

In my previous article, I shared the steps I took to further automate setting up my demo environment and to implement a Service of service type NodePort.
[https://technology.amis.nl/software-development/java/quarkus-supersonic-subatomic-java-trying-out-quarkus-guide-quarkus-kubernetes-extension-reinvestigated-part-4-using-vagrant-and-shell-scripts-to-further-automate-setti/]

Some years ago, I also wrote articles about the Quarkus Kubernetes Extension.

In this article, you can read more about the steps I took to further automate setting up my demo environment and to implement a Service with service type LoadBalancer.

To start the demo environment, I opened a Windows Command Prompt (cmd) in the directory named env on my Windows laptop and typed: vagrant up. Once the VM was running, I used vagrant ssh to connect to it for the manual steps described later.

Services in Kubernetes

I had a look at the “Kubernetes” documentation, “Service” section.

The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network. Each Service object defines a logical set of endpoints (usually these endpoints are Pods) along with a policy about how to make those pods accessible.

For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.

The Service abstraction enables this decoupling.

The set of Pods targeted by a Service is usually determined by a selector that you define. To learn about other ways to define Service endpoints, see Services without selectors.
[https://kubernetes.io/docs/concepts/services-networking/service/#services-in-kubernetes]
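As a minimal sketch of the selector mechanism (the name, label value and ports below are hypothetical examples), a selector-based Service looks like this:

```yaml
# Sketch: a default (ClusterIP) Service that targets Pods by label
# (name, label value and ports are hypothetical examples)
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  selector:
    app.kubernetes.io/name: my-backend   # all Pods with this label become endpoints
  ports:
    - protocol: TCP
      port: 80          # port the Service exposes inside the cluster
      targetPort: 8080  # port the selected Pods listen on
```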

Customizing the service type

The type of service that will be generated for the application can be set by applying the following configuration:
[https://quarkus.io/guides/all-config]

quarkus.kubernetes.service-type

For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that’s accessible from outside of your cluster.

Kubernetes Service types allow you to specify what kind of Service you want.
[https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types]

The available type values and their behaviors are:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don’t explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.
  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
  • LoadBalancer: Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.
  • ExternalName: Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster’s DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.

[https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types]

Remark about Ingress:
If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic reaches that workload. Ingress is not a Service type, but it acts as the entry point for your cluster. An Ingress lets you consolidate your routing rules into a single resource, so that you can expose multiple components of your workload, running separately in your cluster, behind a single listener.
[https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types]
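To give an idea of how such routing rules look (a sketch; the names, host and ports are hypothetical examples), an Ingress that exposes two components behind one listener:

```yaml
# Sketch: an Ingress routing two paths to two Services
# (names, host and ports are hypothetical examples)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /backend
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80
```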

Remark about Gateway:
The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You can add Gateway to your cluster – it is a family of extension APIs, implemented using CustomResourceDefinitions – and then use these to configure access to network services that are running in your cluster.
[https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types]

While using a Service with service type NodePort was okay for my demo environment, I also wanted to try out using a Service with service type LoadBalancer.

Be aware that a Service of type NodePort exposes one or more node IP addresses directly. So, this type of service exposure is not very secure, because external clients basically have direct access to the worker nodes. NodePort is mainly recommended for demos and test environments. Do not use this service type in production.

Service with service type NodePort

Up till now, the generated Kubernetes service object was of service type NodePort, as you may remember.

[Image: lameriks 2025 08 2]

Below, you can see the content again of the target/kubernetes/kubernetes.yml Kubernetes manifests, provided by the Quarkus project packaging:

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/quarkus-version: 3.24.1
    app.quarkus.io/build-timestamp: 2025-07-06 - 14:37:48 +0000
  labels:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: kubernetes-quickstart
  namespace: nl-amis-development
spec:
  ports:
    - name: ports
      nodePort: 30010
      port: 8180
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/quarkus-version: 3.24.1
    app.quarkus.io/build-timestamp: 2025-07-06 - 14:37:48 +0000
  labels:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: kubernetes-quickstart
  namespace: nl-amis-development
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/version: 1.0.0-SNAPSHOT
      app.kubernetes.io/name: kubernetes-quickstart
  template:
    metadata:
      annotations:
        app.quarkus.io/quarkus-version: 3.24.1
        app.quarkus.io/build-timestamp: 2025-07-06 - 14:37:48 +0000
      labels:
        app.kubernetes.io/managed-by: quarkus
        app.kubernetes.io/version: 1.0.0-SNAPSHOT
        app.kubernetes.io/name: kubernetes-quickstart
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: localhost:8443/quarkus/kubernetes-quickstart:1.0.0-SNAPSHOT
          imagePullPolicy: Always
          name: kubernetes-quickstart
          ports:
            - containerPort: 8080
              name: ports
              protocol: TCP

Below, you see an overview of my Kubernetes cluster at this moment:

[Image: lameriks 2025 08 3]

Service with service type: LoadBalancer

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service’s .status.loadBalancer field. For example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127

Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.

To implement a Service of type: LoadBalancer, Kubernetes typically starts off by making the changes that are equivalent to you requesting a Service of type: NodePort. The cloud-controller-manager component then configures the external load balancer to forward traffic to that assigned node port.

You can configure a load balanced Service to omit assigning a node port, provided that the cloud provider implementation supports this.

Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the load balancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadBalancerIP field that you set is ignored.
[https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer]
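Both options above can be sketched in one manifest (reusing the example Service from the Kubernetes documentation; whether each field is honored depends on the cloud provider or load balancer implementation, and loadBalancerIP is deprecated in recent Kubernetes versions in favour of provider-specific annotations):

```yaml
# Sketch combining the two options above (hypothetical values; both are
# only honored if the cloud provider / load balancer implementation
# supports them)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # skip the node port allocation
  loadBalancerIP: 192.0.2.127            # user-specified IP (deprecated in newer versions)
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```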

Next, on my Windows laptop, in my shared folder, I navigated to kubernetes-quickstart\src\main\resources and changed the content of file application.properties to:
[in bold, I highlighted the changes]

quarkus.container-image.registry=localhost:8443
quarkus.container-image.username=mylocregusername
quarkus.container-image.password=mylocregpassword
quarkus.container-image.group=quarkus
quarkus.kubernetes.namespace=nl-amis-development
quarkus.kubernetes.ports."ports".container-port=8080
quarkus.kubernetes.ports."ports".host-port=8180
quarkus.kubernetes.ports."ports".node-port=30010
quarkus.kubernetes.replicas=3
quarkus.kubernetes.service-type=load-balancer

So, as you can see, I removed some lines.

Remark about quarkus.kubernetes.ports."ports".container-port:

Property: quarkus.kubernetes.ports."ports".container-port
The port number. Refers to the container port.
Environment variable: QUARKUS_KUBERNETES_PORTS__PORTS__CONTAINER_PORT
Type: int

[https://quarkus.io/guides/all-config]

Remark about quarkus.kubernetes.ports."ports".host-port:

Property: quarkus.kubernetes.ports."ports".host-port
The host port.
Environment variable: QUARKUS_KUBERNETES_PORTS__PORTS__HOST_PORT
Type: int

[https://quarkus.io/guides/all-config]

Remark about quarkus.kubernetes.ports."ports".node-port:

Property: quarkus.kubernetes.ports."ports".node-port
The nodePort to which this port should be mapped to. This only takes effect when the serviceType is set to node-port.
Environment variable: QUARKUS_KUBERNETES_PORTS__PORTS__NODE_PORT
Type: int

[https://quarkus.io/guides/all-config]

Remark about quarkus.kubernetes.service-type:

Property: quarkus.kubernetes.service-type
The type of service that will be generated for the application.
Environment variable: QUARKUS_KUBERNETES_SERVICE_TYPE
Type: cluster-ip, node-port, load-balancer, external-name
Default: cluster-ip

[https://quarkus.io/guides/all-config]

In order to recreate the Kubernetes manifests, I used the following commands on the Linux Command Prompt:

cd /mnt/mysharedfolder/kubernetes-quickstart

mvn clean install

Below, you can see the content of the target/kubernetes/kubernetes.yml Kubernetes manifests, provided by the Quarkus project packaging:
[in bold, I highlighted the changes (except app.quarkus.io/build-timestamp)]

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/quarkus-version: 3.24.2
    app.quarkus.io/build-timestamp: 2025-07-08 - 16:32:26 +0000
  labels:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: kubernetes-quickstart
  namespace: nl-amis-development
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/quarkus-version: 3.24.2
    app.quarkus.io/build-timestamp: 2025-07-08 - 16:32:26 +0000
  labels:
    app.kubernetes.io/name: kubernetes-quickstart
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: kubernetes-quickstart
  namespace: nl-amis-development
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-quickstart
      app.kubernetes.io/version: 1.0.0-SNAPSHOT
  template:
    metadata:
      annotations:
        app.quarkus.io/quarkus-version: 3.24.2
        app.quarkus.io/build-timestamp: 2025-07-08 - 16:32:26 +0000
      labels:
        app.kubernetes.io/managed-by: quarkus
        app.kubernetes.io/name: kubernetes-quickstart
        app.kubernetes.io/version: 1.0.0-SNAPSHOT
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: localhost:8443/quarkus/kubernetes-quickstart:1.0.0-SNAPSHOT
          imagePullPolicy: Always
          name: kubernetes-quickstart
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP

So, this was what I expected 😊.

Be aware that service type LoadBalancer is mostly used in combination with a cloud provider and its cloud environment (such as Azure, Oracle Cloud, AWS or Google Cloud). The downside is that you probably have to pay for a load balancer per exposed service, which can get expensive!

In on-premises environments or private clouds, the LoadBalancer service type can still be used, but it requires additional configuration. In those situations, MetalLB, for example, is a popular solution for providing load balancer functionality.
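As an illustration (a sketch, assuming MetalLB is installed in its default metallb-system namespace; the address range is a hypothetical example), MetalLB in layer 2 mode is configured with two custom resources:

```yaml
# Sketch: MetalLB layer 2 configuration
# (assumes MetalLB is installed; the address range is a hypothetical example)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.240-192.168.56.250   # IPs MetalLB may hand out to LoadBalancer Services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - demo-pool
```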

NodePort versus LoadBalancer service type

[Image: lameriks 2025 08 4]
In Kubernetes, both NodePort and LoadBalancer services expose applications to external traffic, but they differ in their functionality and implementation. NodePort exposes a service on a static port on each node’s IP, while LoadBalancer provisions an external load balancer (e.g., from a cloud provider) to distribute traffic. NodePort is simpler for basic access, while LoadBalancer offers more advanced features like high availability and scalability.
 
Here’s a more detailed breakdown: 

NodePort:

  • Mechanism: Exposes a service on a specific port (within the range 30000-32767 by default) on each node in the cluster. 
  • Access: Clients access the service using the node’s IP address and the allocated NodePort. 
  • Simplicity: Easy to set up and use, suitable for development and testing. 
  • Limitations: Requires knowing node IPs and managing firewall rules. Not ideal for production due to potential limitations and management overhead. 

LoadBalancer:

  • Mechanism: Provisions an external load balancer (e.g., from a cloud provider like AWS, GCP, or Azure) and assigns it a public IP address. 
  • Access: Clients access the service through the load balancer’s public IP. 
  • Scalability and Availability: Offers better scalability and high availability as the load balancer distributes traffic across multiple nodes and handles node failures. 
  • Cost: May incur additional costs from the cloud provider for the load balancer. 
  • Complexity: Requires cloud provider support and may involve more complex configuration. 

Key Differences:

[Image: lameriks 2025 08 5]

In essence:

  • NodePort is a basic mechanism for exposing services externally, suitable for simple scenarios and testing. 
  • LoadBalancer provides a more robust solution for production environments, offering scalability, high availability, and easier management of external access. 

In addition to the core functionality of LoadBalancer and NodePort, Kubernetes also offers Ingress, which provides another layer of abstraction for managing external access. Ingress can leverage either a NodePort or LoadBalancer service as its underlying mechanism to route traffic to different services based on hostnames or paths. 

When to choose which: 

  • NodePort: For development, testing, or when you need a quick and simple way to access a service from outside the cluster without needing a load balancer.

  • LoadBalancer: For production environments where you need scalability, high availability, and a more robust solution for managing external access to your application.
[AI overview]

K3s Service Load Balancer

Any LoadBalancer controller can be deployed to your K3s cluster. By default, K3s provides a load balancer known as ServiceLB (formerly Klipper LoadBalancer) that uses available host ports.

Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn’t include a default load balancer implementation, so these services will remain pending until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.
[https://docs.k3s.io/networking/networking-services#service-load-balancer]

How ServiceLB Works

The ServiceLB controller watches Kubernetes Services with the spec.type field set to LoadBalancer.

For each LoadBalancer Service, a DaemonSet is created in the kube-system namespace. This DaemonSet in turn creates ServiceLB Pods with a svclb- prefix, on each node. These pods leverage hostPort using the service port, hence they will only be deployed on nodes that have that port available. If there aren’t any nodes with that port available, the LB will remain Pending. Note that it is possible to expose multiple Services on the same node, as long as they use different ports.

When the ServiceLB Pod runs on a node that has an external IP configured, the node’s external IP is populated into the Service’s status.loadBalancer.ingress address list with ipMode: VIP. Otherwise, the node’s internal IP is used.

If the traffic to the external IP is subject to Network Address Translation (NAT) – for example in public clouds when using the public IP of the node as external IP – the traffic is routed into the ServiceLB pod via the hostPort. The pod then uses iptables to forward traffic to the Service’s ClusterIP address and port. If the traffic is not subject to NAT and instead arrives with destination address matching the LoadBalancer address, traffic is intercepted (normally by kube-proxy iptables chains or ipvs) and forwarded to the Service’s ClusterIP address and port.
[https://docs.k3s.io/networking/networking-services#how-servicelb-works]

So, for each Service of service type LoadBalancer, the ServiceLB controller creates the following Kubernetes objects (in the kube-system namespace):

  • DaemonSet
  • ServiceLB Pod (with a svclb- prefix)

Removing the created Kubernetes objects

So, as I mentioned in my previous article, the Quarkus-generated manifest kubernetes.yml was applied to the Kubernetes cluster and created the following Kubernetes objects (in my custom nl-amis-development namespace):

  • Service
  • Deployment
  • Replica Set
  • Pod

Because I wanted to create the Kubernetes objects again, first I had to delete the existing ones. I repeated some of the steps I already described in my previous article.
[https://technology.amis.nl/software-development/java/quarkus-supersonic-subatomic-java-trying-out-quarkus-guide-quarkus-kubernetes-extension-reinvestigated-part-4-using-vagrant-and-shell-scripts-to-further-automate-setti/]

In order to delete the Replica Set and all of the dependent Pods (in the nl-amis-development namespace), I used the following command on the Linux Command Prompt:

kubectl delete -n nl-amis-development replicaset $(kubectl get replicasets -n nl-amis-development -o=jsonpath='{range .items..metadata}{.name}{"\n"}{end}' | grep kubernetes-quickstart- | awk '{print $1}')

With the following output:

replicaset.apps "kubernetes-quickstart-85b9dc865d" deleted

To delete a ReplicaSet and all of its Pods, use kubectl delete. The Garbage collector automatically deletes all of the dependent Pods by default.
[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#deleting-a-replicaset-and-its-pods]
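As a side note, the grep/awk filter in the delete command above can be tried out in isolation on captured output (a sketch; the names below are sample ReplicaSet names taken from this article):

```shell
# Simulated output of the jsonpath query: one ReplicaSet name per line
# (sample names; in the real command this comes from kubectl)
names='coredns-697968c856
kubernetes-quickstart-85b9dc865d
traefik-c98fdf6fb'

# Keep only the application's ReplicaSets, as in the delete command above
echo "$names" | grep kubernetes-quickstart- | awk '{print $1}'
```

This prints only kubernetes-quickstart-85b9dc865d, the name that is then passed to kubectl delete.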

In order to delete the Deployment, I used the following command on the Linux Command Prompt:

kubectl delete -n nl-amis-development deployment kubernetes-quickstart

With the following output:

deployment.apps "kubernetes-quickstart" deleted

In order to delete the Service, I used the following command on the Linux Command Prompt:

kubectl delete -n nl-amis-development service kubernetes-quickstart

With the following output:

service "kubernetes-quickstart" deleted

Recreating the Kubernetes objects in the nl-amis-development namespace, including a service with service type LoadBalancer

Next, in order to recreate the Kubernetes objects (in the nl-amis-development namespace), I used the following commands on the Linux Command Prompt:

cd /mnt/mysharedfolder/kubernetes-quickstart

kubectl apply -f target/kubernetes/kubernetes.yml

With the following output:

service/kubernetes-quickstart created
deployment.apps/kubernetes-quickstart created

Remark:
Be aware that I already created the nl-amis-development namespace object.

Then, I quickly checked whether the Pods were running successfully, via a series of commands on the Linux Command Prompt.

kubectl get services --all-namespaces

With the following output:

NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default                kubernetes                  ClusterIP      10.43.0.1       <none>        443/TCP                      12d
kube-system            kube-dns                    ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       12d
kube-system            metrics-server              ClusterIP      10.43.125.144   <none>        443/TCP                      12d
kube-system            traefik                     LoadBalancer   10.43.61.157    10.0.2.15     80:30907/TCP,443:30294/TCP   12d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.43.252.155   <none>        8000/TCP                     12d
kubernetes-dashboard   kubernetes-dashboard        ClusterIP      10.43.220.43    <none>        443/TCP                      12d
nl-amis-development    kubernetes-quickstart       LoadBalancer   10.43.29.200    <pending>     80:31863/TCP                 11s

In the output above, you can see that a Service of service type LoadBalancer was created (with its external IP still pending). You can also see that a random nodePort (within the 30000-32767 range), with value 31863, was allocated.
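The allocated nodePort can also be extracted from the PORT(S) column; a sketch, using the captured Service line from the output above:

```shell
# The Service line as printed by 'kubectl get services' above
line='nl-amis-development kubernetes-quickstart LoadBalancer 10.43.29.200 <pending> 80:31863/TCP 11s'

# PORT(S) is the sixth column; the nodePort sits between ':' and '/'
node_port=$(echo "$line" | awk '{print $6}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"
```

This prints 31863, matching the PORT(S) value 80:31863/TCP.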

kubectl get replicasets --all-namespaces

With the following output:

NAMESPACE              NAME                                   DESIRED   CURRENT   READY   AGE
kube-system            coredns-697968c856                     1         1         1       12d
kube-system            local-path-provisioner-774c6665dc      1         1         1       12d
kube-system            metrics-server-6f4c6675d5              1         1         1       12d
kube-system            traefik-c98fdf6fb                      1         1         1       12d
kubernetes-dashboard   dashboard-metrics-scraper-749c668b7f   1         1         1       12d
kubernetes-dashboard   kubernetes-dashboard-76b75d676c        1         1         1       12d
nl-amis-development    kubernetes-quickstart-847654c577       3         3         3       18s

kubectl get daemonset --all-namespaces

With the following output:

NAMESPACE     NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   svclb-kubernetes-quickstart-c47e3530   1         1         0       1            0           <none>          47s
kube-system   svclb-traefik-a86d68ac                 1         1         1       1            1           <none>          12d

In the output above, as mentioned in the K3s documentation, you can see that for each LoadBalancer Service, a DaemonSet is created in the kube-system namespace.

kubectl get pods --all-namespaces

With the following output:

NAMESPACE              NAME                                         READY   STATUS      RESTARTS        AGE
kube-system            coredns-697968c856-72ml9                     1/1     Running     0               12d
kube-system            helm-install-traefik-crd-fvqvg               0/1     Completed   0               12d
kube-system            helm-install-traefik-md9j8                   0/1     Completed   1               12d
kube-system            local-path-provisioner-774c6665dc-h5j22      1/1     Running     0               12d
kube-system            metrics-server-6f4c6675d5-4h64s              1/1     Running     0               12d
kube-system            svclb-kubernetes-quickstart-c47e3530-sq4r5   0/1     Pending     0               25s
kube-system            svclb-traefik-a86d68ac-4zxjr                 2/2     Running     0               12d
kube-system            traefik-c98fdf6fb-csd67                      1/1     Running     0               12d
kubernetes-dashboard   dashboard-metrics-scraper-749c668b7f-hlpr6   1/1     Running     0               12d
kubernetes-dashboard   kubernetes-dashboard-76b75d676c-ghdpr        1/1     Running     2 (4d17h ago)   12d
nl-amis-development    kubernetes-quickstart-847654c577-9x6sl       1/1     Running     0               25s
nl-amis-development    kubernetes-quickstart-847654c577-brkpc       1/1     Running     0               25s
nl-amis-development    kubernetes-quickstart-847654c577-svqmx       1/1     Running     0               25s

In the output above, you can see that this DaemonSet in turn created a ServiceLB Pod with a svclb- prefix (as mentioned in the K3s documentation, one on each node).

kubectl get endpoints --all-namespaces

With the following output:

NAMESPACE              NAME                        ENDPOINTS                                         AGE
default                kubernetes                  10.0.2.15:6443                                    12d
kube-system            kube-dns                    10.42.0.3:53,10.42.0.3:53,10.42.0.3:9153          12d
kube-system            metrics-server              10.42.0.2:10250                                   12d
kube-system            traefik                     10.42.0.8:8000,10.42.0.8:8443                     12d
kubernetes-dashboard   dashboard-metrics-scraper   10.42.0.10:8000                                   12d
kubernetes-dashboard   kubernetes-dashboard        10.42.0.9:8443                                    12d
nl-amis-development    kubernetes-quickstart       10.42.0.25:8080,10.42.0.26:8080,10.42.0.27:8080   38s

kubectl get nodes

With the following output:

NAME                     STATUS   ROLES                  AGE     VERSION
ubuntu2204.localdomain   Ready    control-plane,master   6d23h   v1.32.5+k3s1

In order to determine the IP of the K3s node, I used the following commands on the Linux Command Prompt, as I described in a previous article:
[https://technology.amis.nl/2020/04/30/creating-a-re-usable-vagrant-box-from-an-existing-vm-with-ubuntu-and-k3s-with-the-kubernetes-dashboard-and-adding-mysql-using-vagrant-and-oracle-virtualbox/]

nodeIP=$(kubectl get node ubuntu2204.localdomain -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"

With the following output:

---10.0.2.15---

Below, you see an overview of my Kubernetes cluster at this moment:

[Image: lameriks 2025 08 6]

As you can see, the svclb-kubernetes-quickstart-c47e3530-sq4r5 ServiceLB Pod in the kube-system namespace was still pending.

Remember the K3s documentation:

These pods leverage hostPort using the service port, hence they will only be deployed on nodes that have that port available. If there aren’t any nodes with that port available, the LB will remain Pending. Note that it is possible to expose multiple Services on the same node, as long as they use different ports.
[https://docs.k3s.io/networking/networking-services#how-servicelb-works]

So, in order to get more information, I used the following command on the Linux Command Prompt:

kubectl describe -n kube-system pods svclb-kubernetes-quickstart-c47e3530-sq4r5

With the following output:

Name:                 svclb-kubernetes-quickstart-c47e3530-sq4r5
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      svclb
Node:                 <none>
Labels:               app=svclb-kubernetes-quickstart-c47e3530
                      controller-revision-hash=58ff599ccf
                      pod-template-generation=1
                      svccontroller.k3s.cattle.io/svcname=kubernetes-quickstart
                      svccontroller.k3s.cattle.io/svcnamespace=nl-amis-development
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        DaemonSet/svclb-kubernetes-quickstart-c47e3530
Containers:
  lb-tcp-80:
    Image:      rancher/klipper-lb:v0.4.13
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IPS:    10.43.29.200
    Mounts:        <none>
Conditions:
  Type           Status
  PodScheduled   False
Volumes:         <none>
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly op=Exists
                 node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                 node-role.kubernetes.io/master:NoSchedule op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  11m                 default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  61s (x2 over 6m1s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

So, apparently, on my node (and I only have one) the port requested for the ServiceLB Pod (port 80) was already in use.

As it turned out, the Traefik LoadBalancer Service and its svclb-traefik-a86d68ac-4zxjr ServiceLB Pod were using this port.

So, in order to get more information, I used the following command on the Linux Command Prompt:

kubectl describe -n kube-system pods svclb-traefik-a86d68ac-4zxjr

With the following output:

Name:                 svclb-traefik-a86d68ac-4zxjr
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      svclb
Node:                 ubuntu2204.localdomain/10.0.2.15
Start Time:           Sun, 29 Jun 2025 18:08:15 +0000
Labels:               app=svclb-traefik-a86d68ac
                      controller-revision-hash=5d9b5544b
                      pod-template-generation=1
                      svccontroller.k3s.cattle.io/svcname=traefik
                      svccontroller.k3s.cattle.io/svcnamespace=kube-system
Annotations:          <none>
Status:               Running
IP:                   10.42.0.7
IPs:
  IP:           10.42.0.7
Controlled By:  DaemonSet/svclb-traefik-a86d68ac
Containers:
  lb-tcp-80:
    Container ID:   containerd://8608329e0ee43f68798ba873f3450c96569fcacbb2fa4037338c02f567af89e2
    Image:          rancher/klipper-lb:v0.4.13
    Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    Port:           80/TCP
    Host Port:      80/TCP
    State:          Running
      Started:      Sun, 29 Jun 2025 18:08:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      SRC_PORT:    80
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IPS:    10.43.61.157
    Mounts:        <none>
  lb-tcp-443:
    Container ID:   containerd://ef14b8124c15cf8367448e7e34b097c91afc20c1ec69c6a88c2b60d4d7cada1f
    Image:          rancher/klipper-lb:v0.4.13
    Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    Port:           443/TCP
    Host Port:      443/TCP
    State:          Running
      Started:      Sun, 29 Jun 2025 18:08:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      SRC_PORT:    443
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  TCP
      DEST_PORT:   443
      DEST_IPS:    10.43.61.157
    Mounts:        <none>
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:                      <none>
QoS Class:                    BestEffort
Node-Selectors:               <none>
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                              node-role.kubernetes.io/master:NoSchedule op=Exists
                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists
                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                              node.kubernetes.io/unreachable:NoExecute op=Exists
                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                       <none>

Traefik Ingress Controller

Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications.

The Traefik ingress controller deploys a LoadBalancer Service that uses ports 80 and 443, and advertises the LoadBalancer Service’s External IPs in the Status of Ingress resources it manages.

By default, ServiceLB will use all nodes in the cluster to host the Traefik LoadBalancer Service, meaning ports 80 and 443 will not be usable for other HostPort or NodePort pods, and Ingress resources’ Status will show all cluster members’ node IPs.

To restrict the nodes used by Traefik, and by extension the node IPs advertised in the Ingress Status, you can follow the instructions in the Controlling ServiceLB Node Selection section below to limit what nodes ServiceLB runs on, or by adding some nodes to a LoadBalancer pool and restricting the Traefik Service to that pool by setting matching labels in the Traefik HelmChartConfig.

Traefik is deployed by default when starting the server. The default chart values can be found in /var/lib/rancher/k3s/server/manifests/traefik.yaml, but this file should not be edited manually, as K3s will replace the file with defaults at startup. Instead, you should customize Traefik by creating an additional HelmChartConfig manifest in /var/lib/rancher/k3s/server/manifests. For more details and an example see Customizing Packaged Components with HelmChartConfig. For more information on the possible configuration values, refer to values.yaml of the Traefik Helm Chart included with your version of K3s.

To remove Traefik from your cluster, start all servers with the --disable=traefik flag. For more information, see Managing Packaged Components.
[https://docs.k3s.io/networking/networking-services?_highlight=tra#traefik-ingress-controller]

Removing the Traefik Kubernetes objects

Of course, I wanted to get the ServiceLB Pod running. One way to achieve this was to free up port 80 on my node, so I opted for removing the Traefik LoadBalancer Service.

In order to delete the Service, I used the following command on the Linux Command Prompt:

kubectl delete -n kube-system service traefik

With the following output:

service "traefik" deleted

In order to delete the Replica Set and all of the dependent Pods (in the kube-system namespace), I used the following command on the Linux Command Prompt:

kubectl delete -n kube-system replicaset $(kubectl get replicasets -n kube-system -o=jsonpath='{range .items..metadata}{.name}{"\n"}{end}' | grep traefik- | awk '{print $1}')

With the following output:

replicaset.apps "traefik-c98fdf6fb" deleted

To delete a ReplicaSet and all of its Pods, use kubectl delete. The Garbage collector automatically deletes all of the dependent Pods by default.
[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#deleting-a-replicaset-and-its-pods]

In order to delete the Deployment, I used the following command on the Linux Command Prompt:

kubectl delete -n kube-system deployment traefik

With the following output:

deployment.apps "traefik" deleted

With the Traefik LoadBalancer Service now deleted, I wanted to have a look again at the ServiceLB Pod. So, in order to get more information, I used the following command on the Linux Command Prompt:

kubectl describe -n kube-system pods svclb-kubernetes-quickstart-c47e3530-sq4r5

With the following output:

Name:                 svclb-kubernetes-quickstart-c47e3530-sq4r5
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      svclb
Node:                 ubuntu2204.localdomain/10.0.2.15
Start Time:           Sat, 12 Jul 2025 12:37:55 +0000
Labels:               app=svclb-kubernetes-quickstart-c47e3530
                      controller-revision-hash=58ff599ccf
                      pod-template-generation=1
                      svccontroller.k3s.cattle.io/svcname=kubernetes-quickstart
                      svccontroller.k3s.cattle.io/svcnamespace=nl-amis-development
Annotations:          <none>
Status:               Running
IP:                   10.42.0.28
IPs:
  IP:           10.42.0.28
Controlled By:  DaemonSet/svclb-kubernetes-quickstart-c47e3530
Containers:
  lb-tcp-80:
    Container ID:   containerd://391cec1803dfa222afd6dbbafbc46a183300648afc29aac54af7c5500519b3b4
    Image:          rancher/klipper-lb:v0.4.13
    Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    Port:           80/TCP
    Host Port:      80/TCP
    State:          Running
      Started:      Sat, 12 Jul 2025 12:37:55 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      SRC_PORT:    80
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IPS:    10.43.29.200
    Mounts:        <none>
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:                      <none>
QoS Class:                    BestEffort
Node-Selectors:               <none>
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                              node-role.kubernetes.io/master:NoSchedule op=Exists
                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists
                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                              node.kubernetes.io/unreachable:NoExecute op=Exists
                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  36m                default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  26m (x2 over 31m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  13m                default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  7m29s              default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Normal   Scheduled         6m14s              default-scheduler  Successfully assigned kube-system/svclb-kubernetes-quickstart-c47e3530-sq4r5 to ubuntu2204.localdomain
  Normal   Pulled            6m14s              kubelet            Container image "rancher/klipper-lb:v0.4.13" already present on machine
  Normal   Created           6m14s              kubelet            Created container: lb-tcp-80
  Normal   Started           6m14s              kubelet            Started container lb-tcp-80

So, removing the Traefik LoadBalancer Service (and thereby freeing up the port) worked 😊.

Then, I quickly checked whether the Pods were running successfully, via a series of commands on the Linux Command Prompt.

kubectl get services --all-namespaces

With the following output:

NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP      10.43.0.1       <none>        443/TCP                  12d
kube-system            kube-dns                    ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP   12d
kube-system            metrics-server              ClusterIP      10.43.125.144   <none>        443/TCP                  12d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.43.252.155   <none>        8000/TCP                 12d
kubernetes-dashboard   kubernetes-dashboard        ClusterIP      10.43.220.43    <none>        443/TCP                  12d
nl-amis-development    kubernetes-quickstart       LoadBalancer   10.43.29.200    10.0.2.15     80:31863/TCP             39m

In the output above, you can see that the Service of service type LoadBalancer is no longer pending, but running (with an External IP).

kubectl get replicasets --all-namespaces

With the following output:

NAMESPACE              NAME                                   DESIRED   CURRENT   READY   AGE
kube-system            coredns-697968c856                     1         1         1       12d
kube-system            local-path-provisioner-774c6665dc      1         1         1       12d
kube-system            metrics-server-6f4c6675d5              1         1         1       12d
kubernetes-dashboard   dashboard-metrics-scraper-749c668b7f   1         1         1       12d
kubernetes-dashboard   kubernetes-dashboard-76b75d676c        1         1         1       12d
nl-amis-development    kubernetes-quickstart-847654c577       3         3         3       39m

kubectl get daemonset --all-namespaces

With the following output:

NAMESPACE     NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   svclb-kubernetes-quickstart-c47e3530   1         1         1       1            1           <none>          39m

In the output above, as mentioned in the K3s documentation, you can see that for each LoadBalancer Service, a DaemonSet is created in the kube-system namespace.

kubectl get pods --all-namespaces

With the following output:

NAMESPACE              NAME                                         READY   STATUS      RESTARTS        AGE
kube-system            coredns-697968c856-72ml9                     1/1     Running     0               12d
kube-system            helm-install-traefik-crd-fvqvg               0/1     Completed   0               12d
kube-system            helm-install-traefik-md9j8                   0/1     Completed   1               12d
kube-system            local-path-provisioner-774c6665dc-h5j22      1/1     Running     0               12d
kube-system            metrics-server-6f4c6675d5-4h64s              1/1     Running     0               12d
kube-system            svclb-kubernetes-quickstart-c47e3530-sq4r5   1/1     Running     0               39m
kubernetes-dashboard   dashboard-metrics-scraper-749c668b7f-hlpr6   1/1     Running     0               12d
kubernetes-dashboard   kubernetes-dashboard-76b75d676c-ghdpr        1/1     Running     2 (4d18h ago)   12d
nl-amis-development    kubernetes-quickstart-847654c577-9x6sl       1/1     Running     0               39m
nl-amis-development    kubernetes-quickstart-847654c577-brkpc       1/1     Running     0               39m
nl-amis-development    kubernetes-quickstart-847654c577-svqmx       1/1     Running     0               39m

In the output above, you can see that this DaemonSet in turn created a ServiceLB Pod with a svclb- prefix on each node (in my case, a single node).
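Since all ServiceLB Pod names carry the svclb- prefix, they can be picked out of the kubectl get pods output with a simple awk filter, in the same style as the grep/awk pipelines used elsewhere in this article (a sketch; the svclb_pods helper is my own):

```shell
# Print only the names (column 1) of Pods whose name starts with svclb-.
svclb_pods() {
  awk '$1 ~ /^svclb-/ {print $1}'
}

# Usage against a live cluster:
# kubectl get pods -n kube-system | svclb_pods
```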

kubectl get endpoints --all-namespaces

With the following output:

NAMESPACE              NAME                        ENDPOINTS                                         AGE
default                kubernetes                  10.0.2.15:6443                                    12d
kube-system            kube-dns                    10.42.0.3:53,10.42.0.3:53,10.42.0.3:9153          12d
kube-system            metrics-server              10.42.0.2:10250                                   12d
kubernetes-dashboard   dashboard-metrics-scraper   10.42.0.10:8000                                   12d
kubernetes-dashboard   kubernetes-dashboard        10.42.0.9:8443                                    12d
nl-amis-development    kubernetes-quickstart       10.42.0.25:8080,10.42.0.26:8080,10.42.0.27:8080   39m

So, the kubernetes-quickstart LoadBalancer Service was running 😊.

By the way, another way to achieve more or less the same result is to disable Traefik when starting the K3s server.

As you may remember from the “Traefik Ingress Controller” documentation shown above:

To remove Traefik from your cluster, start all servers with the --disable=traefik flag. For more information, see Managing Packaged Components.
[https://docs.k3s.io/networking/networking-services?_highlight=tra#traefik-ingress-controller]

I tried this and indeed it worked. For this, in the env/scripts directory on my Windows laptop, I temporarily changed file k3s.sh to the following content:
[in bold, I highlighted the changes]

#!/bin/bash
echo "**** Begin installing k3s"

sudo ufw disable

#Install
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--disable=traefik" sh -

echo "****** Status k3s.service"
sudo systemctl status k3s.service

# Wait 2 minutes
echo "****** Waiting 2 minutes ..."
sleep 120

#List nodes and pods
echo "****** List nodes"
kubectl get nodes

echo "****** List pods"
kubectl get pods --all-namespaces

echo "**** End installing k3s"

In the end, I opted for yet another solution. In file application.properties, I restored the container-port (8080) and host-port (8180) settings that I had removed earlier. This way, the ServiceLB Pod does not use port 80, but port 8180 instead.

You can read more about this in the “Further automating all the manual steps” part of this article.

Kubernetes Dashboard

Next, in order to check the generated objects in Kubernetes, in the Web Browser on my Windows laptop, I started the Kubernetes Dashboard in my demo environment, via:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 7

I used the token that was visible in the output of my script, retrieved via:

kubectl describe -n kubernetes-dashboard secret $(kubectl get secret -n kubernetes-dashboard | grep admin-user | awk '{print $1}')

The Kubernetes Dashboard was opened with the default namespace selected. So, I selected the nl-amis-development namespace.

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 8

Then, I navigated to the Services:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 9

Next, I navigated to the Deployments:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 10

Next, I navigated to the Replica Sets:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 11

Next, I navigated to the Pods:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 12

In order to get the endpoint of the Pod, I navigated to the Services | kubernetes-quickstart:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 13

Next, I selected the kube-system namespace.

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 14

Next, I navigated to the Daemon Sets:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 15

Next, I navigated to the Pods:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 16

Then, I opened the Logs from the Pod:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 17

So, this confirmed to me that everything was up and running.

Below, again you see an overview of my Kubernetes cluster at this moment:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 18

For each of the 3 Pods, I quickly checked whether it worked. Below you see the check I did for the first Pod (using the endpoint, including the targetPort, we saw earlier).

I used the following command on the Linux Command Prompt:

curl http://10.42.0.25:8080/hello

With the following output:

Hello from Quarkus REST

For the other two Pods, I did the same, using the host 10.42.0.26 and 10.42.0.27.
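The three per-Pod checks can also be collapsed into a small loop. A sketch (the hello_urls helper is hypothetical, not part of the project):

```shell
# Build the /hello URL for each Pod endpoint IP passed as an argument.
hello_urls() {
  for ip in "$@"; do
    printf 'http://%s:8080/hello\n' "$ip"
  done
}

# Usage against a live cluster, with the Pod endpoints we saw earlier:
# for url in $(hello_urls 10.42.0.25 10.42.0.26 10.42.0.27); do
#   curl "$url"; echo
# done
```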

Next, in order to quickly check if the Service worked (via the Cluster IP and port we saw earlier), I used the following command on the Linux Command Prompt:

curl http://10.43.29.200:80/hello

With the following output:

Hello from Quarkus REST

Then, in order to quickly check if the Service worked (via the Node IP and nodePort we saw earlier), I used the following command on the Linux Command Prompt:

curl http://10.0.2.15:31863/hello

With the following output:

Hello from Quarkus REST

Besides already being able to use the Kubernetes Dashboard (in a Web Browser) on my Windows laptop (via port forwarding), I also wanted to be able to use a Web Browser on my Windows laptop, for sending requests to the Kubernetes kubernetes-quickstart Service on my guest (Ubuntu).

In order to forward local port 8090 to port 31863 on the K3s node ($nodeIP), I used the following command on the Linux Command Prompt, as I described in a previous article:
[https://technology.amis.nl/2020/04/30/creating-a-re-usable-vagrant-box-from-an-existing-vm-with-ubuntu-and-k3s-with-the-kubernetes-dashboard-and-adding-mysql-using-vagrant-and-oracle-virtualbox/]

socat tcp-listen:8090,fork tcp:10.0.2.15:31863 &

With the following output:

[1] 95661

Then, in the Web Browser on my Windows laptop, I entered the URL: http://localhost:8090/hello

And I got the following result:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 19

Further automating all the manual steps

To further automate all the manual steps mentioned in this article, in the env/scripts directory on my Windows laptop, I changed file quarkus-kubernetes-quickstart.sh to the following content:
[in bold, I highlighted the changes]

#!/bin/bash
echo "**** Begin installing Quarkus kubernetes-quickstart"

rm -rf /mnt/mysharedfolder/kubernetes-quickstart

echo "****** Create Quarkus kubernetes-quickstart"
# Create Quarkus kubernetes-quickstart
cd /mnt/mysharedfolder/
mvn io.quarkus.platform:quarkus-maven-plugin:3.21.2:create -DprojectGroupId=org.acme -DprojectArtifactId=kubernetes-quickstart -Dextensions='rest,kubernetes,jib' --no-transfer-progress

#Create file application.properties
cd /mnt/mysharedfolder/kubernetes-quickstart/src/main/resources
sudo printf "quarkus.container-image.registry=localhost:8443
quarkus.container-image.username=mylocregusername
quarkus.container-image.password=mylocregpassword
quarkus.container-image.group=quarkus
quarkus.kubernetes.namespace=nl-amis-development
quarkus.kubernetes.ports."ports".container-port=8080
quarkus.kubernetes.ports."ports".host-port=8180
quarkus.kubernetes.ports."ports".node-port=30010
quarkus.kubernetes.replicas=3
quarkus.kubernetes.service-type=load-balancer\n" > application.properties

echo "****** Output from: cat application.properties"
cat application.properties

echo "****** Add self-signed certificate (local-registry.pem) into file cacerts, the default Java truststore, in an entry with an alias of mylocregcert"
#Import the "Server"s self-signed certificate (local-registry.pem) into file cacerts, the default Java truststore, in an entry with an alias of mylocregcert
cd /mnt/mysharedfolder
sudo "$JAVA_HOME/bin/keytool" -noprompt -keystore "$JAVA_HOME/lib/security/cacerts" -importcert -alias mylocregcert -file local-registry.pem

#Print the contents of the truststore entry identified by alias mylocregcert
keytool -list -keystore "$JAVA_HOME/lib/security/cacerts" -alias mylocregcert

echo "****** Set access rights for /tmp/jib-core-application-layers-cache/tmp"
cd /tmp/jib-core-application-layers-cache/tmp
ls -latr
sudo chmod ugo+w /tmp/jib-core-application-layers-cache/tmp
ls -latr

echo "****** Generate Kubernetes manifest (kubernetes.yml)"
cd /mnt/mysharedfolder/kubernetes-quickstart
mvn clean install -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true --no-transfer-progress

echo "****** Output from: cat kubernetes.yml"
cd /mnt/mysharedfolder/kubernetes-quickstart/target/kubernetes
cat kubernetes.yml

echo "****** List of all the docker images inside my secured local private registry"
#Get a list of all the docker images inside my secured local private registry
cd /mnt/mysharedfolder

#curl --cacert local-registry.pem --user mylocregusername https://localhost:8443/v2/_catalog
curl --cacert local-registry.pem https://localhost:8443/v2/_catalog -K- <<< "--user mylocregusername:mylocregpassword"

echo "****** List of all the tags of docker image nginx inside my secured local private registry"
#Get a list of all the tags of docker image nginx inside my secured local private registry
cd /mnt/mysharedfolder

#curl --cacert local-registry.pem --user mylocregusername https://localhost:8443/v2/quarkus/kubernetes-quickstart/tags/list
curl --cacert local-registry.pem https://localhost:8443/v2/nginx/tags/list -K- <<< "--user mylocregusername:mylocregpassword"

echo "****** List of all the tags of docker image quarkus/kubernetes-quickstart inside my secured local private registry"
#Get a list of all the tags of docker image quarkus/kubernetes-quickstart inside my secured local private registry
cd /mnt/mysharedfolder

#curl --cacert local-registry.pem --user mylocregusername https://localhost:8443/v2/quarkus/kubernetes-quickstart/tags/list
curl --cacert local-registry.pem https://localhost:8443/v2/quarkus/kubernetes-quickstart/tags/list -K- <<< "--user mylocregusername:mylocregpassword"

#Apply the generated manifest namespace-development.yaml to create the namespace nl-amis-development
cd /mnt/mysharedfolder

kubectl apply -f yaml/namespace-development.yaml

#Apply the generated manifest kubernetes.yml to the Kubernetes cluster from the project root
cd /mnt/mysharedfolder/kubernetes-quickstart

kubectl apply -f target/kubernetes/kubernetes.yml

# Wait 30 seconds
echo "****** Waiting 30 seconds ..."
sleep 30

echo "****** List Services (in the nl-amis-development namespace)"
kubectl get services -n nl-amis-development

echo "****** List Replica Sets (in the nl-amis-development namespace)"
kubectl get replicasets -n nl-amis-development

echo "****** List Pods (in the nl-amis-development namespace)"
kubectl get pods -n nl-amis-development

echo "****** List Endpoints (in the nl-amis-development namespace)"
kubectl get endpoints -n nl-amis-development

echo "****** List Nodes"
kubectl get nodes

echo "****** List Daemon Sets (in the kube-system namespace)"
kubectl get daemonset -n kube-system

echo "****** List Pods (in the kube-system namespace)"
kubectl get pods -n kube-system

echo "****** List Endpoints (in the kube-system namespace)"
kubectl get endpoints -n kube-system

echo "****** List Pod svclb-kubernetes-quickstart (in the kube-system namespace)"
kubectl describe -n kube-system pods $(kubectl get pods -n kube-system | grep svclb-kubernetes-quickstart | awk '{print $1}')

echo "****** List Pod svclb-traefik (in the kube-system namespace)"
kubectl describe -n kube-system pods $(kubectl get pods -n kube-system | grep svclb-traefik | awk '{print $1}')

echo "**** Determine the IP of the ubuntu2204.localdomain node"
nodeIP=$(kubectl get node ubuntu2204.localdomain -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"

echo "**** Determine the nodePort of the kubernetes-quickstart service"
nodePort=$(kubectl get service kubernetes-quickstart -n nl-amis-development -o yaml | grep nodePort: | grep -E -o "[0-9]{5}")
echo "---$nodePort---"

echo "**** Via socat forward local port 8090 to port $nodePort on the ubuntu2204.localdomain node ($nodeIP)"
socat tcp-listen:8090,fork tcp:$nodeIP:$nodePort &

echo "**** End installing Quarkus kubernetes-quickstart"

Remark about the Maven --no-transfer-progress flag:
mvn --no-transfer-progress ….
Maven now has an option to suppress the transfer progress when downloading/uploading in interactive mode.
[https://maven.apache.org/docs/3.6.1/release-notes.html]
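As an aside, the nodeIP and nodePort extraction near the end of the script relies on two grep -E patterns. Their behavior can be verified offline against sample fragments of the kubectl YAML output (a sketch; the sample values below are hypothetical):

```shell
# Sample fragment of `kubectl get node ubuntu2204.localdomain -o yaml`:
node_yaml='    - address: 10.0.2.15
      type: InternalIP'
# Keep the address line, then extract the first dotted-quad IP from it.
nodeIP=$(echo "$node_yaml" | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")

# Sample fragment of `kubectl get service kubernetes-quickstart -o yaml`:
svc_yaml='    - nodePort: 30522
      port: 8180'
# Keep the nodePort line, then extract the 5-digit port value from it.
nodePort=$(echo "$svc_yaml" | grep nodePort: | grep -E -o "[0-9]{5}")

echo "$nodeIP $nodePort"   # 10.0.2.15 30522
```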

For the demo environment to start, from the directory named env on my Windows laptop, I opened a Windows Command Prompt (cmd) and typed: vagrant up. Once the VM was running, for executing later manual steps, I used vagrant ssh to connect into the running VM.

With the following output (only showing the last part):

ubuntu_quarkus_k3s: ****** Output from: cat kubernetes.yml
    ubuntu_quarkus_k3s: ---
    ubuntu_quarkus_k3s: apiVersion: v1
    ubuntu_quarkus_k3s: kind: Service
    ubuntu_quarkus_k3s: metadata:
    ubuntu_quarkus_k3s:   annotations:
    ubuntu_quarkus_k3s:     app.quarkus.io/quarkus-version: 3.25.2
    ubuntu_quarkus_k3s:     app.quarkus.io/build-timestamp: 2025-08-13 - 09:00:59 +0000
    ubuntu_quarkus_k3s:   labels:
    ubuntu_quarkus_k3s:     app.kubernetes.io/name: kubernetes-quickstart
    ubuntu_quarkus_k3s:     app.kubernetes.io/version: 1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:     app.kubernetes.io/managed-by: quarkus
    ubuntu_quarkus_k3s:   name: kubernetes-quickstart
    ubuntu_quarkus_k3s:   namespace: nl-amis-development
    ubuntu_quarkus_k3s: spec:
    ubuntu_quarkus_k3s:   ports:
    ubuntu_quarkus_k3s:     - name: ports
    ubuntu_quarkus_k3s:       port: 8180
    ubuntu_quarkus_k3s:       protocol: TCP
    ubuntu_quarkus_k3s:       targetPort: 8080
    ubuntu_quarkus_k3s:   selector:
    ubuntu_quarkus_k3s:     app.kubernetes.io/name: kubernetes-quickstart
    ubuntu_quarkus_k3s:     app.kubernetes.io/version: 1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:   type: LoadBalancer
    ubuntu_quarkus_k3s: ---
    ubuntu_quarkus_k3s: apiVersion: apps/v1
    ubuntu_quarkus_k3s: kind: Deployment
    ubuntu_quarkus_k3s: metadata:
    ubuntu_quarkus_k3s:   annotations:
    ubuntu_quarkus_k3s:     app.quarkus.io/quarkus-version: 3.25.2
    ubuntu_quarkus_k3s:     app.quarkus.io/build-timestamp: 2025-08-13 - 09:00:59 +0000
    ubuntu_quarkus_k3s:   labels:
    ubuntu_quarkus_k3s:     app.kubernetes.io/name: kubernetes-quickstart
    ubuntu_quarkus_k3s:     app.kubernetes.io/version: 1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:     app.kubernetes.io/managed-by: quarkus
    ubuntu_quarkus_k3s:   name: kubernetes-quickstart
    ubuntu_quarkus_k3s:   namespace: nl-amis-development
    ubuntu_quarkus_k3s: spec:
    ubuntu_quarkus_k3s:   replicas: 3
    ubuntu_quarkus_k3s:   selector:
    ubuntu_quarkus_k3s:     matchLabels:
    ubuntu_quarkus_k3s:       app.kubernetes.io/version: 1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:       app.kubernetes.io/name: kubernetes-quickstart
    ubuntu_quarkus_k3s:   template:
    ubuntu_quarkus_k3s:     metadata:
    ubuntu_quarkus_k3s:       annotations:
    ubuntu_quarkus_k3s:         app.quarkus.io/quarkus-version: 3.25.2
    ubuntu_quarkus_k3s:         app.quarkus.io/build-timestamp: 2025-08-13 - 09:00:59 +0000
    ubuntu_quarkus_k3s:       labels:
    ubuntu_quarkus_k3s:         app.kubernetes.io/managed-by: quarkus
    ubuntu_quarkus_k3s:         app.kubernetes.io/version: 1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:         app.kubernetes.io/name: kubernetes-quickstart
    ubuntu_quarkus_k3s:     spec:
    ubuntu_quarkus_k3s:       containers:
    ubuntu_quarkus_k3s:         - env:
    ubuntu_quarkus_k3s:             - name: KUBERNETES_NAMESPACE
    ubuntu_quarkus_k3s:               valueFrom:
    ubuntu_quarkus_k3s:                 fieldRef:
    ubuntu_quarkus_k3s:                   fieldPath: metadata.namespace
    ubuntu_quarkus_k3s:           image: localhost:8443/quarkus/kubernetes-quickstart:1.0.0-SNAPSHOT
    ubuntu_quarkus_k3s:           imagePullPolicy: Always
    ubuntu_quarkus_k3s:           name: kubernetes-quickstart
    ubuntu_quarkus_k3s:           ports:
    ubuntu_quarkus_k3s:             - containerPort: 8080
    ubuntu_quarkus_k3s:               name: ports
    ubuntu_quarkus_k3s:               protocol: TCP
    ubuntu_quarkus_k3s: ****** List of all the docker images inside my secured local private registry
    ubuntu_quarkus_k3s:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    ubuntu_quarkus_k3s:                                  Dload  Upload   Total   Spent    Left  Speed
100    59  100    59    0     0   2108      0 --:--:-- --:--:-- --:--:--  2185
    ubuntu_quarkus_k3s: {"repositories":["nginx","quarkus/kubernetes-quickstart"]}
    ubuntu_quarkus_k3s: ****** List of all the tags of docker image nginx inside my secured local private registry
    ubuntu_quarkus_k3s:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    ubuntu_quarkus_k3s:                                  Dload  Upload   Total   Spent    Left  Speed
100    35  100    35    0     0   1091      0 --:--:-- --:--:-- --:--:--  1129
    ubuntu_quarkus_k3s: {"name":"nginx","tags":["latest"]}
    ubuntu_quarkus_k3s: ****** List of all the tags of docker image quarkus/kubernetes-quickstart inside my secured local private registry
    ubuntu_quarkus_k3s:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    ubuntu_quarkus_k3s:                                  Dload  Upload   Total   Spent    Left  Speed
    ubuntu_quarkus_k3s: {"name":"quarkus/kubernetes-quickstart","tags":["1.0.0-SNAPSHOT"]}
100    67  100    67    0     0   1788      0 --:--:-- --:--:-- --:--:--  1810
    ubuntu_quarkus_k3s: namespace/nl-amis-development created
    ubuntu_quarkus_k3s: service/kubernetes-quickstart created
    ubuntu_quarkus_k3s: deployment.apps/kubernetes-quickstart created
    ubuntu_quarkus_k3s: ****** Waiting 30 seconds ...
    ubuntu_quarkus_k3s: ****** List Services (in the nl-amis-development namespace)
    ubuntu_quarkus_k3s: NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    ubuntu_quarkus_k3s: kubernetes-quickstart   LoadBalancer   10.43.109.160   10.0.2.15     8180:30522/TCP   31s
    ubuntu_quarkus_k3s: ****** List Replica Sets (in the nl-amis-development namespace)
    ubuntu_quarkus_k3s: NAME                               DESIRED   CURRENT   READY   AGE
    ubuntu_quarkus_k3s: kubernetes-quickstart-6d8f86d78f   3         3         3       32s
    ubuntu_quarkus_k3s: ****** List Pods (in the nl-amis-development namespace)
    ubuntu_quarkus_k3s: NAME                                     READY   STATUS    RESTARTS   AGE
    ubuntu_quarkus_k3s: kubernetes-quickstart-6d8f86d78f-226kx   1/1     Running   0          32s
    ubuntu_quarkus_k3s: kubernetes-quickstart-6d8f86d78f-5dcrv   1/1     Running   0          32s
    ubuntu_quarkus_k3s: kubernetes-quickstart-6d8f86d78f-snf4x   1/1     Running   0          32s
    ubuntu_quarkus_k3s: ****** List Endpoints (in the nl-amis-development namespace)
    ubuntu_quarkus_k3s: Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
    ubuntu_quarkus_k3s: NAME                    ENDPOINTS                                         AGE
    ubuntu_quarkus_k3s: kubernetes-quickstart   10.42.0.12:8080,10.42.0.13:8080,10.42.0.14:8080   33s
    ubuntu_quarkus_k3s: ****** List Nodes
    ubuntu_quarkus_k3s: NAME                     STATUS   ROLES                  AGE   VERSION
    ubuntu_quarkus_k3s: ubuntu2204.localdomain   Ready    control-plane,master   12m   v1.33.3+k3s1
    ubuntu_quarkus_k3s: ****** List Daemon Sets (in the kube-system namespace)
    ubuntu_quarkus_k3s: NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    ubuntu_quarkus_k3s: svclb-kubernetes-quickstart-4085f5bf   1         1         1       1            1           <none>          34s
    ubuntu_quarkus_k3s: svclb-traefik-dee09411                 1         1         1       1            1           <none>          10m
    ubuntu_quarkus_k3s: ****** List Pods (in the kube-system namespace)
    ubuntu_quarkus_k3s: NAME                                         READY   STATUS      RESTARTS   AGE
    ubuntu_quarkus_k3s: coredns-5688667fd4-2hqfj                     1/1     Running     0          12m
    ubuntu_quarkus_k3s: helm-install-traefik-crd-kp2hl               0/1     Completed   0          12m
    ubuntu_quarkus_k3s: helm-install-traefik-vl7x8                   0/1     Completed   2          12m
    ubuntu_quarkus_k3s: local-path-provisioner-774c6665dc-2b9pq      1/1     Running     0          12m
    ubuntu_quarkus_k3s: metrics-server-6f4c6675d5-lljlv              1/1     Running     0          12m
    ubuntu_quarkus_k3s: svclb-kubernetes-quickstart-4085f5bf-r8rpz   1/1     Running     0          35s
    ubuntu_quarkus_k3s: svclb-traefik-dee09411-c7lzh                 2/2     Running     0          10m
    ubuntu_quarkus_k3s: traefik-c98fdf6fb-stkt7                      1/1     Running     0          10m
    ubuntu_quarkus_k3s: ****** List Endpoints (in the kube-system)
    ubuntu_quarkus_k3s: Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
    ubuntu_quarkus_k3s: NAME             ENDPOINTS                                  AGE
    ubuntu_quarkus_k3s: kube-dns         10.42.0.3:53,10.42.0.3:53,10.42.0.3:9153   12m
    ubuntu_quarkus_k3s: metrics-server   10.42.0.2:10250                            12m
    ubuntu_quarkus_k3s: traefik          10.42.0.8:8000,10.42.0.8:8443              10m
    ubuntu_quarkus_k3s: ****** List Pod svclb-kubernetes-quickstart (in the kube-system namespace)
    ubuntu_quarkus_k3s: Name:                 svclb-kubernetes-quickstart-4085f5bf-r8rpz
    ubuntu_quarkus_k3s: Namespace:            kube-system
    ubuntu_quarkus_k3s: Priority:             2000001000
    ubuntu_quarkus_k3s: Priority Class Name:  system-node-critical
    ubuntu_quarkus_k3s: Service Account:      svclb
    ubuntu_quarkus_k3s: Node:                 ubuntu2204.localdomain/10.0.2.15
    ubuntu_quarkus_k3s: Start Time:           Wed, 13 Aug 2025 09:01:45 +0000
    ubuntu_quarkus_k3s: Labels:               app=svclb-kubernetes-quickstart-4085f5bf
    ubuntu_quarkus_k3s:                       controller-revision-hash=864579dbbc
    ubuntu_quarkus_k3s:                       pod-template-generation=1
    ubuntu_quarkus_k3s:                       svccontroller.k3s.cattle.io/svcname=kubernetes-quickstart
    ubuntu_quarkus_k3s:                       svccontroller.k3s.cattle.io/svcnamespace=nl-amis-development
    ubuntu_quarkus_k3s: Annotations:          <none>
    ubuntu_quarkus_k3s: Status:               Running
    ubuntu_quarkus_k3s: IP:                   10.42.0.11
    ubuntu_quarkus_k3s: IPs:
    ubuntu_quarkus_k3s:   IP:           10.42.0.11
    ubuntu_quarkus_k3s: Controlled By:  DaemonSet/svclb-kubernetes-quickstart-4085f5bf
    ubuntu_quarkus_k3s: Containers:
    ubuntu_quarkus_k3s:   lb-tcp-8180:
    ubuntu_quarkus_k3s:     Container ID:   containerd://85a589a72add0c2a2e4564d2df8a2e08f44ce79ae7cee6c3c583e3722cb5fc0e
    ubuntu_quarkus_k3s:     Image:          rancher/klipper-lb:v0.4.13
    ubuntu_quarkus_k3s:     Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    ubuntu_quarkus_k3s:     Port:           8180/TCP
    ubuntu_quarkus_k3s:     Host Port:      8180/TCP
    ubuntu_quarkus_k3s:     State:          Running
    ubuntu_quarkus_k3s:       Started:      Wed, 13 Aug 2025 09:01:49 +0000
    ubuntu_quarkus_k3s:     Ready:          True
    ubuntu_quarkus_k3s:     Restart Count:  0
    ubuntu_quarkus_k3s:     Environment:
    ubuntu_quarkus_k3s:       SRC_PORT:    8180
    ubuntu_quarkus_k3s:       SRC_RANGES:  0.0.0.0/0
    ubuntu_quarkus_k3s:       DEST_PROTO:  TCP
    ubuntu_quarkus_k3s:       DEST_PORT:   8180
    ubuntu_quarkus_k3s:       DEST_IPS:    10.43.109.160
    ubuntu_quarkus_k3s:     Mounts:        <none>
    ubuntu_quarkus_k3s: Conditions:
    ubuntu_quarkus_k3s:   Type                        Status
    ubuntu_quarkus_k3s:   PodReadyToStartContainers   True
    ubuntu_quarkus_k3s:   Initialized                 True
    ubuntu_quarkus_k3s:   Ready                       True
    ubuntu_quarkus_k3s:   ContainersReady             True
    ubuntu_quarkus_k3s:   PodScheduled                True
    ubuntu_quarkus_k3s: Volumes:                      <none>
    ubuntu_quarkus_k3s: QoS Class:                    BestEffort
    ubuntu_quarkus_k3s: Node-Selectors:               <none>
    ubuntu_quarkus_k3s: Tolerations:                  CriticalAddonsOnly op=Exists
    ubuntu_quarkus_k3s:                               node-role.kubernetes.io/control-plane:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node-role.kubernetes.io/master:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/disk-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/memory-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/not-ready:NoExecute op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/pid-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/unreachable:NoExecute op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/unschedulable:NoSchedule op=Exists
    ubuntu_quarkus_k3s: Events:
    ubuntu_quarkus_k3s:   Type    Reason     Age   From               Message
    ubuntu_quarkus_k3s:   ----    ------     ----  ----               -------
    ubuntu_quarkus_k3s:   Normal  Scheduled  37s   default-scheduler  Successfully assigned kube-system/svclb-kubernetes-quickstart-4085f5bf-r8rpz to ubuntu2204.localdomain
    ubuntu_quarkus_k3s:   Normal  Pulled     35s   kubelet            Container image "rancher/klipper-lb:v0.4.13" already present on machine
    ubuntu_quarkus_k3s:   Normal  Created    34s   kubelet            Created container: lb-tcp-8180
    ubuntu_quarkus_k3s:   Normal  Started    33s   kubelet            Started container lb-tcp-8180
    ubuntu_quarkus_k3s: ****** List Pod svclb-traefik (in the kube-system namespace)
    ubuntu_quarkus_k3s: Name:                 svclb-traefik-dee09411-c7lzh
    ubuntu_quarkus_k3s: Namespace:            kube-system
    ubuntu_quarkus_k3s: Priority:             2000001000
    ubuntu_quarkus_k3s: Priority Class Name:  system-node-critical
    ubuntu_quarkus_k3s: Service Account:      svclb
    ubuntu_quarkus_k3s: Node:                 ubuntu2204.localdomain/10.0.2.15
    ubuntu_quarkus_k3s: Start Time:           Wed, 13 Aug 2025 08:51:36 +0000
    ubuntu_quarkus_k3s: Labels:               app=svclb-traefik-dee09411
    ubuntu_quarkus_k3s:                       controller-revision-hash=648fbc89f8
    ubuntu_quarkus_k3s:                       pod-template-generation=1
    ubuntu_quarkus_k3s:                       svccontroller.k3s.cattle.io/svcname=traefik
    ubuntu_quarkus_k3s:                       svccontroller.k3s.cattle.io/svcnamespace=kube-system
    ubuntu_quarkus_k3s: Annotations:          <none>
    ubuntu_quarkus_k3s: Status:               Running
    ubuntu_quarkus_k3s: IP:                   10.42.0.7
    ubuntu_quarkus_k3s: IPs:
    ubuntu_quarkus_k3s:   IP:           10.42.0.7
    ubuntu_quarkus_k3s: Controlled By:  DaemonSet/svclb-traefik-dee09411
    ubuntu_quarkus_k3s: Containers:
    ubuntu_quarkus_k3s:   lb-tcp-80:
    ubuntu_quarkus_k3s:     Container ID:   containerd://eea37eefbed7b9763d75351f97be65b8d737609f8dda53a1010c9e0799836bac
    ubuntu_quarkus_k3s:     Image:          rancher/klipper-lb:v0.4.13
    ubuntu_quarkus_k3s:     Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    ubuntu_quarkus_k3s:     Port:           80/TCP
    ubuntu_quarkus_k3s:     Host Port:      80/TCP
    ubuntu_quarkus_k3s:     State:          Running
    ubuntu_quarkus_k3s:       Started:      Wed, 13 Aug 2025 08:51:43 +0000
    ubuntu_quarkus_k3s:     Ready:          True
    ubuntu_quarkus_k3s:     Restart Count:  0
    ubuntu_quarkus_k3s:     Environment:
    ubuntu_quarkus_k3s:       SRC_PORT:    80
    ubuntu_quarkus_k3s:       SRC_RANGES:  0.0.0.0/0
    ubuntu_quarkus_k3s:       DEST_PROTO:  TCP
    ubuntu_quarkus_k3s:       DEST_PORT:   80
    ubuntu_quarkus_k3s:       DEST_IPS:    10.43.141.14
    ubuntu_quarkus_k3s:     Mounts:        <none>
    ubuntu_quarkus_k3s:   lb-tcp-443:
    ubuntu_quarkus_k3s:     Container ID:   containerd://fc565599de1f4dbbcd3ae9b0ee307713654f2dff0714e8def4dee2281ab3c6b4
    ubuntu_quarkus_k3s:     Image:          rancher/klipper-lb:v0.4.13
    ubuntu_quarkus_k3s:     Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    ubuntu_quarkus_k3s:     Port:           443/TCP
    ubuntu_quarkus_k3s:     Host Port:      443/TCP
    ubuntu_quarkus_k3s:     State:          Running
    ubuntu_quarkus_k3s:       Started:      Wed, 13 Aug 2025 08:51:43 +0000
    ubuntu_quarkus_k3s:     Ready:          True
    ubuntu_quarkus_k3s:     Restart Count:  0
    ubuntu_quarkus_k3s:     Environment:
    ubuntu_quarkus_k3s:       SRC_PORT:    443
    ubuntu_quarkus_k3s:       SRC_RANGES:  0.0.0.0/0
    ubuntu_quarkus_k3s:       DEST_PROTO:  TCP
    ubuntu_quarkus_k3s:       DEST_PORT:   443
    ubuntu_quarkus_k3s:       DEST_IPS:    10.43.141.14
    ubuntu_quarkus_k3s:     Mounts:        <none>
    ubuntu_quarkus_k3s: Conditions:
    ubuntu_quarkus_k3s:   Type                        Status
    ubuntu_quarkus_k3s:   PodReadyToStartContainers   True
    ubuntu_quarkus_k3s:   Initialized                 True
    ubuntu_quarkus_k3s:   Ready                       True
    ubuntu_quarkus_k3s:   ContainersReady             True
    ubuntu_quarkus_k3s:   PodScheduled                True
    ubuntu_quarkus_k3s: Volumes:                      <none>
    ubuntu_quarkus_k3s: QoS Class:                    BestEffort
    ubuntu_quarkus_k3s: Node-Selectors:               <none>
    ubuntu_quarkus_k3s: Tolerations:                  CriticalAddonsOnly op=Exists
    ubuntu_quarkus_k3s:                               node-role.kubernetes.io/control-plane:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node-role.kubernetes.io/master:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/disk-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/memory-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/not-ready:NoExecute op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/pid-pressure:NoSchedule op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/unreachable:NoExecute op=Exists
    ubuntu_quarkus_k3s:                               node.kubernetes.io/unschedulable:NoSchedule op=Exists
    ubuntu_quarkus_k3s: Events:
    ubuntu_quarkus_k3s:   Type    Reason     Age   From               Message
    ubuntu_quarkus_k3s:   ----    ------     ----  ----               -------
    ubuntu_quarkus_k3s:   Normal  Scheduled  10m   default-scheduler  Successfully assigned kube-system/svclb-traefik-dee09411-c7lzh to ubuntu2204.localdomain
    ubuntu_quarkus_k3s:   Normal  Pulling    10m   kubelet            Pulling image "rancher/klipper-lb:v0.4.13"
    ubuntu_quarkus_k3s:   Normal  Pulled     10m   kubelet            Successfully pulled image "rancher/klipper-lb:v0.4.13" in 4.45s (4.45s including waiting). Image size: 5020426 bytes.
    ubuntu_quarkus_k3s:   Normal  Created    10m   kubelet            Created container: lb-tcp-80
    ubuntu_quarkus_k3s:   Normal  Started    10m   kubelet            Started container lb-tcp-80
    ubuntu_quarkus_k3s:   Normal  Pulled     10m   kubelet            Container image "rancher/klipper-lb:v0.4.13" already present on machine
    ubuntu_quarkus_k3s:   Normal  Created    10m   kubelet            Created container: lb-tcp-443
    ubuntu_quarkus_k3s:   Normal  Started    10m   kubelet            Started container lb-tcp-443
    ubuntu_quarkus_k3s: **** Determine the IP of the ubuntu2204.localdomain node
    ubuntu_quarkus_k3s: ---10.0.2.15---
    ubuntu_quarkus_k3s: **** Determine the nodePort of the kubernetes-quickstart service
    ubuntu_quarkus_k3s: ---30522---
    ubuntu_quarkus_k3s: **** Via socat forward local port 8090 to port 30522 on the ubuntu2204.localdomain node (10.0.2.15)
    ubuntu_quarkus_k3s: **** End installing Quarkus kubernetes-quickstart

In the output above, you can see:

    ubuntu_quarkus_k3s: ****** List Services (in the nl-amis-development namespace)
    ubuntu_quarkus_k3s: NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    ubuntu_quarkus_k3s: kubernetes-quickstart   LoadBalancer   10.43.109.160   10.0.2.15     8180:30522/TCP   31s

In the output above, you can see that a Service of service type LoadBalancer was created and is running. You can also see that a random nodePort (within the default 30000-32767 range), here with value 30522, was allocated.
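For reference, the service type is selected via the Quarkus Kubernetes extension configuration in application.properties. A minimal sketch (property name as documented in the Quarkus Kubernetes guide):

```properties
# Have the Quarkus Kubernetes extension generate a Service of type LoadBalancer
# (the default is cluster-ip; node-port was used earlier in this series)
quarkus.kubernetes.service-type=load-balancer
```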

So, when using my scripts as well, the kubernetes-quickstart LoadBalancer Service was up and running 😊.

As you can also see, the file quarkus-kubernetes-quickstart.sh takes care of port forwarding.
To forward local port 8090 to port 30522 ($nodePort) on the K3s node ($nodeIP), it builds the following command, as I described in a previous article:
[https://technology.amis.nl/2020/04/30/creating-a-re-usable-vagrant-box-from-an-existing-vm-with-ubuntu-and-k3s-with-the-kubernetes-dashboard-and-adding-mysql-using-vagrant-and-oracle-virtualbox/]

socat tcp-listen:8090,fork tcp:10.0.2.15:30522 &
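As a sketch of how such a forwarding command can be assembled: the nodeIP and nodePort values below are hardcoded from the log output above; in the actual script they would be queried from the cluster, for example with kubectl's jsonpath output (the exact queries shown in the comments are an assumption, not taken from the script itself):

```shell
# In the script, the values would be queried from the cluster, for example:
#   nodeIP=$(kubectl get node ubuntu2204.localdomain \
#     -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
#   nodePort=$(kubectl -n nl-amis-development get service kubernetes-quickstart \
#     -o jsonpath='{.spec.ports[0].nodePort}')
# Here they are hardcoded from the log output above:
nodeIP=10.0.2.15
nodePort=30522
# Assemble the socat command that forwards local port 8090 to the nodePort
forwardCmd="socat tcp-listen:8090,fork tcp:${nodeIP}:${nodePort} &"
echo "${forwardCmd}"
```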

Next, in the web browser on my Windows laptop, I entered the URL: http://localhost:8090/hello

And I got the following result:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 20

Below, again you see an overview of my Kubernetes cluster at this moment:

Quarkus - Kubernetes extension (reinvestigated, part 5), implementing a Service of service type LoadBalancer with K3s lameriks 2025 08 21

This concludes this article.

In it, I described the steps I took to further automate setting up my demo environment, along with the extra changes I made to have the Quarkus quickstart application use a Service with service type LoadBalancer.

There will also be a next article in this series about Quarkus.

Feel free to also read my other articles on this subject:
