End of day two. So many topics were already presented, demonstrated, discussed, and tried out, to and by me. This article is just a very brief overview of what caught my attention, and of what I will have to look into in more detail (or, in some cases, not).
Containers and Kubernetes
Running containers should be trivial for any software engineer. Whether to provide local instances of platform components during development, to help execute integration tests, to run the build pipeline, or to experiment with new technology: containers are really mainstream. Docker Desktop may have become less accessible with its license changes, but Rancher Desktop and (soon) Podman Desktop make containers readily available on Windows; on Linux and macOS there never was an issue.
Kubernetes is rapidly becoming the next-generation application server (with containers as the applications to be run) – the omnipresent (and omnipotent) deployment platform. Applications (microservices please?!) are built and delivered as container images. Deploying means creating a Kubernetes Deployment that instantiates Pods based on those container images. Kubernetes, locally (minikube, k3s, ...) or in the cloud, is quickly becoming the core focus of platform engineers – who deal with a wide range of tools for all kinds of tasks. Monitoring, service mesh, tracing, logging, volume mapping, release management and GitOps are just some of the areas where tools are available.
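The "deployment equals a Kubernetes Deployment that instantiates Pods" idea can be sketched with a minimal manifest; all names and the image reference below are hypothetical placeholders, not from a real project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                    # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: my-service
  template:                      # Pod template: each Pod runs the container image
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f` is essentially what "deployment" means on this platform: declare the desired state and let the cluster reconcile towards it.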
And then Kubernetes is used as a control plane for resources running entirely outside of the Kubernetes cluster. With Custom Resource Definitions, the avalanche of Operators, and tools such as Crossplane, it has become a thing to use Kubernetes' capability of observing resources and continuously reconciling their actual state with the defined state – even for resources that do not run on Kubernetes at all, but for example on some remote public cloud.
At Devoxx, some great talks discussed the security of the Kubernetes cluster – now becoming the hotspot of your application landscape. From container image scanning (Clair or Trivy) and cluster policies (OPA or Kyverno) to observing suspicious activities in the cluster (Falco) and applying network policies in response (Argo CD). Add inspecting image layers (Dive) and using signing strategies (Notary Project). Other tools mentioned include Kubescape, kube-hunter and kube-bench.
Service Mesh seems the logical next thing to discuss once you have embraced the K8S cluster as the runtime platform for applications – especially when these applications are created microservice-style. The service mesh (pattern) is about having the mesh handle outgoing and incoming network calls – through the use of a proxy or sidecar that is configured from, and interacts with, a central control plane. The proxy – in conjunction with other proxies, through the control plane – can enforce important policies (mTLS, routing, load balancing, authentication) and help collect runtime data (logging, monitoring metrics, trace data).
Istio seems the de facto standard for K8S service mesh – although Kong (Mesh) and Dapr also have a claim (both very much absent from Devoxx). For monitoring Istio, a commonly presented tool is Kiali.
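As a hedged illustration of what "the mesh enforces mTLS" can look like in Istio: a PeerAuthentication resource that requires mutual TLS for all workloads in a namespace (the namespace name below is a placeholder):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace      # hypothetical namespace
spec:
  mtls:
    mode: STRICT               # sidecars only accept mutually-authenticated TLS traffic
```

The application code is unaware of any of this; the sidecar proxies handle the TLS handshakes, which is exactly the appeal of the pattern.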
Gitpod – Ephemeral, quick-start, cloud-based development environments
I have blogged before about Gitpod. It is a SaaS that provides a cloud-based development environment – a Linux VM with VS Code, 12 language runtimes and 25 additional tools pre-installed. It offers a maximum of 12 GB RAM and plenty of disk space. It is by far the quickest way to get started exploring any technology or running a tutorial or workshop – especially when you are constrained by the conference wifi. Gitpod accounts are created using a GitLab or GitHub account. The free plan has 50 hours of Gitpod workspace usage per month – which is pretty substantial. I have run most of the hands-on workshops at Devoxx on Gitpod – and was able to get going much faster than most of the other attendees who were doing local installations. The ability to start with a clean environment whenever I want, just by spinning up a new workspace, is liberating. Trying out new things becomes a breeze.
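A workspace is defined by a `.gitpod.yml` file in the repository root; a minimal sketch of such a file (the Maven commands and port are just an example for a hypothetical Java project, not from an actual repository):

```yaml
# .gitpod.yml - defines what a fresh workspace should do on startup
image: gitpod/workspace-full          # default image with many runtimes pre-installed
tasks:
  - init: ./mvnw dependency:go-offline   # runs once, when the workspace is first built
    command: ./mvnw spring-boot:run      # runs on every workspace start
ports:
  - port: 8080
    onOpen: open-preview                 # preview the app inside the IDE
```

Because the `init` step is baked into a prebuild, every new workspace starts with dependencies already downloaded, which is what makes the "clean environment every time" workflow so fast.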
MicroStream
Presented as the great new alternative for data persistence in Java applications, MicroStream apparently is a smart way to serialize Java object graphs (to file) and to deserialize them again – applying ACID constraints to the data manipulations. For querying, it is suggested that the Java Collection Streams API is applied – as we are dealing with plain Java objects. I have not seen it in action, but it seems not really comparable to relational databases – it does not scale horizontally (only one application instance can interact with the serialized data files) and I cannot really see how it could support very large numbers of objects or complex nested, joining, aggregating queries. However, I am sure there are some great use cases.
Note that the name MicroStream seems to cause quite some confusion. As far as I can tell, there is nothing like an event stream involved, yet everyone I talked to about MicroStream had that immediate association.
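MicroStream itself is a Java technology, so the following is only a language-agnostic sketch of the pattern described above – persist an object graph to a file, load it back, and query it in memory with ordinary collection operations (the Java Streams role is played here by a list comprehension). The classes and data are invented for illustration; this is not MicroStream's API, and MicroStream stores changes incrementally rather than rewriting everything:

```python
import os
import pickle
import tempfile

class Customer:
    def __init__(self, name, orders):
        self.name = name
        self.orders = orders  # list of (item, amount) tuples

# Build an in-memory object graph (the "root" of the data).
root = [
    Customer("Alice", [("book", 25), ("pen", 3)]),
    Customer("Bob", [("laptop", 1200)]),
]

# Persist the whole graph to a file (naively, in one go).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    pickle.dump(root, f)

# Load it back and query with plain collection operations,
# analogous to querying with Java Streams over the object graph.
with open(path, "rb") as f:
    loaded = pickle.load(f)

big_spenders = [
    c.name for c in loaded
    if sum(amount for _, amount in c.orders) > 100
]
print(big_spenders)  # ['Bob']
```

The sketch also shows why horizontal scaling is hard in this model: the file is the database, and whichever process holds it owns the data.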
GraalVM Native Image
In 5 years’ time, more than 50% of Java applications will run as Native Image.
That is not a quote from anyone with special insights. It is a statement that may provoke – or at least stir – you a little. What does it mean, and why would anyone say something like that?
Another major component under the GraalVM umbrella is the native image generator. It is so cool that it is hard to explain how important this capability is. It is the secret sauce under Quarkus (and Helidon and Micronaut), it is the holy grail the Spring Framework is hard at work to realize. It is the enabler of serverless Java (functions). It will drive down the costs of running Java applications and it will reduce the carbon emissions that result from running Java applications. It will win the Nobel Prize.
Okay, perhaps I am overdoing this. However, the importance of native image is hard to overestimate.
Take a Java application. Compile it into a runnable jar. Then let the native image generator do its magic. The result is an executable. A file that can be executed. A file with no dependencies other than the operating system it was created for. An executable that, when executed, starts up blazingly fast and uses far less memory than the corresponding Java application on a JVM would. In most instances the performance of the application is (initially) better than Java with the JIT compiler (regular JVM). For complex Java applications that the JVM has had a long time to optimize (C2 optimizations), the JVM at present outperforms the native image, but the gap is narrowing rapidly. The recent insight at Oracle is that in the near future, (optimized) native images will run faster than even long-running Java applications on the JVM.
The workshop demonstrated how native images can be created for Java applications – even applications that use reflection, such as Spring Boot applications. Find the workshop on Oracle's Luna platform (with free online tutorial environments) here: https://t.co/ysywf70dQV or use the resources on GitHub to run it in your own environment.
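The reason reflection needs special treatment: the native image builder performs closed-world analysis at build time, so classes reached only via reflection must be declared up front in reachability metadata. A sketch of such a `reflect-config.json` entry (the class name is a hypothetical example):

```json
[
  {
    "name": "com.example.demo.GreetingController",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

Frameworks like Spring Boot (via its AOT engine) and Quarkus generate this kind of metadata for you, which is a large part of what their native image support actually does.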
With native image, what is running at runtime is machine code – using internally concepts such as the Substrate VM and (some form of) a (Java) heap. Whether it is really a Java application is a matter for (philosophical) debate. What is clear is that it is simple (no Java runtime environment needed) and small (still no Java runtime, and an executable that is typically just a few tens of MB), as well as fast – both in startup time and in execution. That means that applications programmed in Java, when turned into a native image, are great for containerization (small, distroless) and also great for serverless environments (quick startup and low resource usage are perfect qualities for serverless components).
Dynamic Data Orchestration – Apache Airflow & CrateDB
A nice, sober, clean, straightforward presentation on Apache Airflow and CrateDB was one of the last of the day. Apache Airflow is a workflow orchestration tool that comes with many operators out of the box, which make it easy to extract, process and load data from and into many different sources. As such it is widely used for ETL processes, data pipelines that integrate systems, feeding data lakes, and the like (somewhat similar to Azure Data Factory, for example). Note that Airflow can orchestrate ADF pipelines (and perhaps vice versa).
An Airflow process is a DAG (directed acyclic graph) described in a Python program. In the DAG, operators are connected in logical flows, handing (references to) data to operate on to each other.
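The DAG idea can be sketched in pure Python, without Airflow itself: tasks are functions, edges say "run after", and each task hands its result to the tasks downstream (Airflow passes such data as XComs; all task names and functions below are invented for illustration):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Three toy "operators" forming an extract -> transform -> load flow.
def extract():
    return [3, 1, 2]

def transform(data):
    return sorted(data)

def load(data):
    return f"loaded {len(data)} rows"

# The DAG: each task mapped to the set of tasks it depends on
# (Airflow expresses the same edges with its >> operator).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}
tasks = {"extract": extract, "transform": transform, "load": load}

# Execute in topological order, handing upstream results downstream.
results = {}
for name in TopologicalSorter(dag).static_order():
    upstream = [results[u] for u in dag[name]]
    results[name] = tasks[name](*upstream)

print(results["load"])  # loaded 3 rows
```

Real Airflow adds scheduling, retries, backfills and a UI on top of this core idea, but the mental model of "a Python program describing an acyclic graph of operators" is the same.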
In the session I attended, CrateDB was demonstrated alongside Apache Airflow. I am not sure exactly why CrateDB was plugged. Its main characteristic seemed to be that it is (almost) PostgreSQL compatible. Why not just use PostgreSQL, then? That was not convincingly made clear. According to the CrateDB website, it handles very large numbers of records, can easily scale and has a built-in Lucene engine that allows searching unstructured data. Processing data from IoT networks seems to be one of its core strengths. Well, perhaps it is nice to have a closer look at CrateDB.
Java Application Frameworks
When you start developing a new Java-based application in green-field surroundings, where you can make all choices independently of legacy and existing obligations, what would you pick? There are some contenders in this arena – although it seems they are not all in the same weight category.
The two major players seem to be Quarkus and Spring Boot/Spring Framework. The ability of the Spring project to embrace native images (and get rid of the excessive reflection that inhibits the generation of native images) would seem crucial for its success in green-field environments in the future.
Jakarta EE is still very much alive and is seeing substantial evolution suggesting a comeback of sorts. Micronaut and Helidon are both frameworks sponsored by Oracle. Exactly what their niche is, is not yet clear to me.
JHipster is an interesting project that allows out-of-the-box generation/configuration of an end-to-end web application with a Java backend service layer. At present, it supports various client-side frameworks (Angular, React, Vue) and a Spring Boot based backend with various Spring subprojects included. The next-generation JHipster (JHipster Lite) adopts the hexagonal (microservices) architecture style that is centered around (business) domain definitions. It still uses Spring Boot as the foundation for the backend of the application.
It seems that among the parties not or hardly present at Devoxx are IBM and Oracle. Among the technologies I had expected in the program but did not find are Flutter and Dapr.io (the one open source project I made a substantial contribution to). I am sure there are more absentees I would have expected, so perhaps I will extend this list at a later moment.