If your tool is a hammer, every problem looks like a nail. Many of us have seen situations where this tunnel vision took hold over time – and for many of us involved in integration, it has happened as well. In this article I want to briefly draw your attention to changing views regarding integration and the technology for realizing it. Important triggers for these changing views include cloud, web scale, new types of user interaction, IoT and real time, serverless – and real-life experiences with enterprise integration.
From the early 2000s, when we started doing enterprise integration in earnest, we talked about the many integration patterns – synchronous and asynchronous, batch and trickle feed and many more – and primarily the ESB pattern. The enterprise service bus: that magic black box with all its connectors that you could simply plug into, and that made integrating any system with any other system seem a simple goal to achieve. From that somewhat theoretical approach we then got real ESB products – tools that fulfilled the role of connecting any system to any system. Not always as magically as theory had suggested, but still, most of the time we got it to work. These were frequently based on XML and SOAP plus WS-* Web Services, with complex products running on massive application servers. In my case the primary technology was Oracle SOA Suite and Oracle Service Bus; comparable products were available from IBM, Microsoft, Tibco, SAP, JBoss, MuleSoft and others. SOA was the architecture style we embraced – with decoupling as the holy grail and important tenets like encapsulation, autonomy, abstraction, statelessness, reusability and the standardized service contract.
And with the integration platform in our hands, almost any data flow seemed a challenge we could nail. The capability to quickly implement a flow from A to B through the ESB product lured us into implementing many different kinds of flows on that platform. Our hammer struck again and again. From “a simple UI needs some data elements from a backend database” to “documents arrive on an FTP endpoint and have to be stored in a document management system” – any arrow between two blocks on a diagram became a job for the ESB.
Through this way of thinking and working, we have achieved a lot. Thinking in terms of standardized, common metadata and canonical models across enterprises, for example – which in turn is a cornerstone of the domain-driven design that inspires microservices. And achieving decoupling between systems, as well as a structured way of approaching data flows. However, in many cases the ESB platforms became very heavy: because of the many different services and flows running on them, the many instances of each that had to be processed, and the way these platforms ran on their underlying application server. The logical decoupling we achieved between services and their consumers was not matched by physical decoupling: the integration product generally had to serve all integrations from a shared set of physical resources, so heavy traffic in one integration flow impacted the others. Additionally, managing these complex, overweight platforms became a huge issue. From patching and upgrading the platform, rolling out changed or new services, scaling under peak load, monitoring and handling errors to achieving long-term stability and the required performance – the jack-of-all-trades integration platform has become very hard to manage.
A New World in enterprise integration
The world has changed. Scale has increased further, because of the real-time and web-scale demands introduced by IoT and the continued growth in the use of apps. Security requirements have tightened, change cycles have accelerated, and the use of standard applications (primarily SaaS) has grown. The advent of cloud has inspired new architecture patterns and insights such as domain-driven design, microservices, CQRS, event sourcing and “dumb pipes, smart endpoints” – combined with advances in technology, for example containers, serverless, REST and API technology. Building on experience and concepts that have proven their worth, and leveraging new insights and technologies, we are moving towards new ways of approaching different types of data flow. We may still call them integration – but we recognize that they are vastly different, and should be implemented in very different ways. We relinquish our hammer and replace it with a more varied toolset.
One important data flow that we should identify is what I have labelled vertical integration: the synchronous interactions between consumers and one or more back end systems or services, as shown in the next figure.
This data flow tends to be executed at high volumes, so it has to be very scalable; it has to complete quickly (no more than 200 ms) and its implementation is usually not only stateless but relatively simple. The API Gateway is almost a lightweight ESB product, taking on responsibilities such as load balancing, routing, monitoring, analytics, throttling, authentication, authorization, circuit breaking and controlled release (A/B testing, canary release). It typically does not transform payloads or interact with backend systems beyond simple HTTP calls. The API Gateway invokes an API implementation – typically over HTTP/REST with a JSON payload (although GraphQL seems ready to take over from REST in many UI situations). The implementation of the API could still be done with a traditional service bus product, but that is very unlikely and undesirable. It can far more easily be implemented using generic technology such as NodeJS or Java (for example SpringBoot based) and delivered in a container that can easily be scaled on a container platform such as Kubernetes. Or, given the stateless nature and the potential need to handle high peaks, it would be attractive to adopt a serverless implementation for the API.
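To make two of these gateway responsibilities concrete, here is a minimal sketch in TypeScript – purely illustrative and not tied to any gateway product; all class and route names are invented – of prefix-based routing combined with a naive circuit breaker that guards the backend handlers:

```typescript
// Minimal API-gateway sketch: prefix routing plus a circuit breaker.
type Handler = (path: string) => string;

class CircuitBreaker {
  private failures = 0;
  constructor(private threshold: number) {}
  call(fn: () => string): string {
    // Once the failure threshold is reached, stop calling the backend.
    // A real breaker would also reset ("half-open") after a timeout.
    if (this.failures >= this.threshold) throw new Error("circuit open");
    try {
      const result = fn();
      this.failures = 0; // success resets the counter
      return result;
    } catch (e) {
      this.failures++;
      throw e;
    }
  }
}

class Gateway {
  private routes = new Map<string, Handler>();
  private breaker = new CircuitBreaker(3);
  register(prefix: string, handler: Handler) {
    this.routes.set(prefix, handler);
  }
  handle(path: string): string {
    for (const [prefix, handler] of this.routes) {
      if (path.startsWith(prefix)) {
        // The breaker wraps every call into the backend handler.
        return this.breaker.call(() => handler(path));
      }
    }
    return "404"; // no route matched
  }
}

// Usage: two "backend" handlers registered under path prefixes.
const gw = new Gateway();
gw.register("/orders", p => `orders-service handled ${p}`);
gw.register("/customers", p => `customers-service handled ${p}`);
```

A real gateway would of course speak HTTP and add authentication, throttling and analytics around the same routing core.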
The next figure illustrates the evolution of the implementation of the ‘vertical integration flow’.
Clearly, this evolution leads to increased scalability and reduced overhead in administration of the runtime platform. Compared to the frequently declarative style of development on the ESB platform, there may be some initial loss in development productivity. That is easily offset by the much better tool support for automated CI and CD, and by the much wider availability of developers able and willing to work with modern, generic technologies compared with vendor-specific proprietary platform products.
Many interactions are not synchronous, do not involve consumers with an urgent request, and may be triggered by system events or simply by time. These interactions typically take place in the background in an asynchronous fashion – and precisely because they run in the background, they have to be observed. Do they complete successfully? Can they survive the temporary unavailability of the target system? Will processing time stay acceptable when the volume increases?
Critical in the implementation of these interactions is the use of a message queue or an event bus – a decoupling mechanism that provides a bridge across domains, (versions of) systems, locations (cloud and on-premises), technologies and time. This event bus also allows load balancing across multiple processors, as well as throttling and circuit breaking in case of limited downstream capacity, and it is a crucial element in governance.
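The load-balancing role of the bus can be sketched in a few lines of TypeScript – an in-memory toy, not a real broker, with invented names – where each message on a topic is handed round-robin to one of the subscribed consumers, much like a consumer group on a real messaging platform:

```typescript
// In-memory event-bus sketch that load-balances each topic's messages
// round-robin across the consumers subscribed to that topic.
type Consumer = (msg: string) => void;

class EventBus {
  private groups = new Map<string, { consumers: Consumer[]; next: number }>();
  subscribe(topic: string, consumer: Consumer) {
    const g = this.groups.get(topic) ?? { consumers: [], next: 0 };
    g.consumers.push(consumer);
    this.groups.set(topic, g);
  }
  publish(topic: string, msg: string) {
    const g = this.groups.get(topic);
    if (!g || g.consumers.length === 0) return; // a real bus would buffer instead
    g.consumers[g.next % g.consumers.length](msg); // pick the next consumer in turn
    g.next++;
  }
}

// Usage: two competing consumers share the load on one topic.
const bus = new EventBus();
const seenByA: string[] = [];
const seenByB: string[] = [];
bus.subscribe("orders", m => seenByA.push(m));
bus.subscribe("orders", m => seenByB.push(m));
["o1", "o2", "o3", "o4"].forEach(m => bus.publish("orders", m));
// o1 and o3 go to the first consumer, o2 and o4 to the second
```

What this toy omits – persistence, acknowledgement, redelivery, bridging across locations – is exactly what a production queue or event bus provides.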
By identifying these horizontal integrations, it becomes clearer that patterns such as CQRS (Command Query Responsibility Segregation), event sourcing and microservice interaction and choreography, as well as data pipelines for IoT, data warehouses and data lakes, are all on the same spectrum of data integration – differing primarily in the purpose and immediate trigger of the data movement (and perhaps in volume and time criticality), but not in any fundamental way.
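The event sourcing and CQRS end of that spectrum can be illustrated with a minimal TypeScript sketch (the account domain and all names are invented for the example): the command side only appends events to a log, and the query side derives its read model by folding over that log.

```typescript
// Event sourcing with a CQRS-style split: commands append facts to an
// append-only log; queries fold the log into a current-state projection.
type AccountEvent = { type: "Deposited" | "Withdrawn"; amount: number };

const log: AccountEvent[] = []; // the append-only event store

// Command side: validate, then record what happened.
function deposit(amount: number) {
  log.push({ type: "Deposited", amount });
}
function withdraw(amount: number) {
  if (balance() < amount) throw new Error("insufficient funds");
  log.push({ type: "Withdrawn", amount });
}

// Query side: the balance projection is derived purely from the events,
// so alternative read models can be built from the same log later.
function balance(): number {
  return log.reduce(
    (b, e) => (e.type === "Deposited" ? b + e.amount : b - e.amount),
    0,
  );
}

// Usage: three commands leave three events; the projection follows.
deposit(100);
withdraw(30);
deposit(5);
```

The same append-then-project shape underlies an IoT pipeline feeding a data lake: events flow in on one side, and any number of views are derived on the other.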
Key challenges in all horizontal integration cases are:
- when: identify the trigger for executing an integration action
- what: extract the data that is required
- how: publish the data in a way that makes it accessible to potential consumers (location, common format, time, technology, scale)
- monitor: handle the non-happy flow
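The four challenges above can be sketched as the skeleton of a single hypothetical integration action in TypeScript (the record shape, function names and JSON format are all invented for illustration):

```typescript
// Skeleton of one horizontal integration action, mapping onto the four
// challenges: a trigger (when), extraction (what), publication (how)
// and error handling (monitor).
type SourceRecord = { id: number; changed: boolean };

function runIntegration(
  source: SourceRecord[],
  publish: (payload: string) => void,
  onError: (e: Error) => void,
) {
  // when: only changed records trigger an integration action
  for (const rec of source.filter(r => r.changed)) {
    try {
      // what: extract the required data and map it to a common format
      const payload = JSON.stringify({ id: rec.id });
      // how: hand the payload off to the queue/bus for consumers
      publish(payload);
    } catch (e) {
      // monitor: route the non-happy flow somewhere observable
      onError(e as Error);
    }
  }
}

// Usage: two of three records have changed and get published.
const published: string[] = [];
runIntegration(
  [{ id: 1, changed: true }, { id: 2, changed: false }, { id: 3, changed: true }],
  p => published.push(p),
  e => console.error("integration failed:", e.message),
);
```

In practice the trigger might be a polling schedule, a database change feed or an adapter callback, but the when/what/how/monitor decomposition stays the same.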
Some use cases have to deal with high volumes of fast data that need to be processed in near-real time. This requires special streaming analytics capabilities as the first port of call for messages arriving, for example, from IoT devices or social media platforms.
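A tumbling-window aggregation is a typical example of such a computation. The TypeScript toy below – illustrative only; a streaming engine would do this incrementally over an unbounded stream rather than over an array – averages sensor readings per fixed time window:

```typescript
// Toy tumbling-window aggregation: average reading value per fixed window.
type Reading = { timestamp: number; value: number };

function windowAverages(
  readings: Reading[],
  windowMs: number,
): Map<number, number> {
  const sums = new Map<number, { sum: number; count: number }>();
  for (const r of readings) {
    // Assign each reading to the window containing its timestamp.
    const windowStart = Math.floor(r.timestamp / windowMs) * windowMs;
    const s = sums.get(windowStart) ?? { sum: 0, count: 0 };
    s.sum += r.value;
    s.count++;
    sums.set(windowStart, s);
  }
  // Reduce each window's running sum to an average.
  const averages = new Map<number, number>();
  for (const [w, s] of sums) averages.set(w, s.sum / s.count);
  return averages;
}

// Usage: three readings, 1-second windows.
const averages = windowAverages(
  [
    { timestamp: 0, value: 2 },
    { timestamp: 500, value: 4 },
    { timestamp: 1000, value: 10 },
  ],
  1000,
);
```

Placing this kind of computation first in the pipeline means only the aggregates – not every raw message – need to travel onwards to storage or downstream consumers.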
Connectors or adapters can play a role in spotting events in standard applications and SaaS services and extracting the relevant data, as well as in delivering events to target systems. Mature iPaaS offerings deliver value by providing a scalable platform with adapters, management of the execution of integration flows, capabilities for spotting and handling exceptions, and agent-based bridges across clouds and between cloud and the on-premises environment. An iPaaS platform will collect analytics and provide job scheduling and management. It will typically provide declarative (and proprietary) development facilities.
It is important to recognize that, on the one hand, there are many data flows in our enterprise IT landscapes that are really very similar – in essence bringing data from A to B. A lot can be gained by approaching all these flows as part of the same spectrum. On the other hand, the various shades of data flow on the integration spectrum require different approaches as well. Common data models, domain-driven design and a desire for decoupling, statelessness and dynamic scalability are shared, but the implementation technologies will differ. No longer should we treat the ESB platform as the hammer that turns every data integration challenge into a nail. As a first step, let’s liberate synchronous interactions supporting online clients such as web portals and mobile apps from the ESB. Let’s embrace the API Gateway plus a scalable API implementation – perhaps first through containers and eventually using a serverless implementation. And let’s adopt a message queue or event bus that is scalable, accessible across technologies and distributed – reliable and spanning clouds and on-premises environments.
OMESA (Open Modern Enterprise Software Architecture): http://omesa.io/ : The Open Modern Enterprise Software Architecture (OMESA) project was born with the purpose of bringing architectural best practices back into modern architectures, whilst keeping in mind that the “new” must co-exist with the “old”. It provides reference architectures and guiding principles to help architects from any organisation realise the benefits that modern technologies and architectures can bring to the business, whilst avoiding the creation of “micro-silos” or ad-hoc solutions. On the OMESA website, you will find several high-level design overviews and proposals for notation styles that will help design modern integrations.