If you haven’t been paying attention to the world of enterprise IT infrastructure, you may have missed the sudden rise of Kubernetes to a position of absolute domination.
It seems like only yesterday that containers themselves were still wet behind the ears, yet at the Cloud Native Computing Foundation’s KubeCon + CloudNativeCon in Barcelona last month, it was patently obvious that containers are here to stay and that Kubernetes has handily won the container orchestrator wars.
Such rapid dominance is unusual. Gray-hairs like me will recall the Internet protocol wars of the early nineties, when the battles among contenders like NetWare and Token Ring dragged on for years before TCP/IP finally won out.
And let us not forget the UNIX wars of the dot-com era, as vendors positioned one flavor over another until eventually the open source dark horse, Linux, surprisingly came to dominate.
The main reason TCP/IP, Linux, and now Kubernetes won their respective battles is that widespread agreement on foundational infrastructure technology is good for everyone. But the business advantages of picking a winner don’t explain the remarkable velocity Kubernetes exhibited on its way to the container orchestrator brass ring.
A Happy Convergence
We can attribute this rapid ascent, in fact, to a confluence of trends. Perhaps the most predictable of these is the maturation of the public cloud – not simply the market dominance of the big cloud players, but also the widespread acceptance and understanding of core cloud best practices, including horizontal scalability, resilience, and self-service configurability via declarative representations and APIs.
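To make that declarative idea concrete, here’s a minimal sketch of a Kubernetes Deployment for a hypothetical ‘orders’ service (the name and image are illustrative). The operator declares the desired state – three replicas of a given container image – and the platform continually reconciles the cluster toward that declaration, replacing failed instances along the way.

```yaml
# Hypothetical example: declare three replicas of an "orders" service.
# Kubernetes keeps reality matching this declaration, restarting or
# rescheduling containers as needed -- resilience and horizontal
# scalability expressed as configuration rather than procedure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                    # scale horizontally by changing one number
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.4.2   # illustrative image reference
          ports:
            - containerPort: 8080
```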
The second trend that contributed to Kubernetes’ victory: DevOps. There are, in fact, two sides to DevOps: first, the organizational transformation as technical teams learn better ways to collaborate in order to deliver and run better software faster than previously possible.
The second: a broad set of tooling that automates many of the tasks that app dev and ops teams must conduct – tooling that itself participates in the same API-centric, declarative configurability that it inherits from the cloud.
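As a rough illustration of that second side of DevOps, here’s a hypothetical continuous-delivery step, sketched in GitHub Actions syntax. It assumes the manifests live in a k8s/ directory and that cluster credentials are already configured on the runner; the pipeline simply pushes the declared state to the cluster’s API on every commit to main.

```yaml
# Hypothetical sketch: DevOps tooling driving the same declarative API.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Apply the declared state; assumes kubectl is authenticated to the cluster.
      - run: kubectl apply -f k8s/
```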
Cloud-Native as New Architectural Paradigm
Bridging the maturation of cloud best practice and the dual roles of DevOps is perhaps the most important trend of all: cloud-native architecture. Cloud-native architecture builds on both cloud and DevOps best practices, taking them beyond the cloud itself to all of enterprise IT.
As it turns out, the best way to get started with cloud-native architecture happens to be implementing Kubernetes – although cloud-native covers the gamut from traditional virtualization to containers to serverless computing.
In fact, cloud-native is more than an architectural approach. It represents a lens through which we can see the entirety of enterprise IT in a new light. For this reason, I consider it to be a new architectural paradigm.
The Precursors to Cloud-Native Architecture
Cloud-native architecture didn’t spring forth fully formed out of nothing, of course. Many architectural trends that came before helped teach us the lessons we needed to learn in order to make cloud-native a reality.
In the 2000s we deployed service-oriented architecture (SOA), whose implementations typically depended on sophisticated middleware: enterprise service buses (ESBs). These ESBs handled a variety of tasks – integration, routing, data transformation, security, and more – while typically exposing application functionality as Web Services.
SOA was therefore able to expose lightweight, language-independent service endpoints by shifting the intelligence to the middleware – a pattern we now like to call ‘smart pipes, dumb endpoints.’
With the rise of the cloud transforming the role and nature of middleware, coupled with the rise of containers and microservices, SOA eventually gave way to microservice architecture.
Unlike Web Services that were little more than ‘dumb’ XML-based endpoints, microservices are cohesive, parsimonious units of execution – little packages of goodness that only do one or two things, but do them well.
In common parlance, we refer to microservices architecture as ‘smart endpoints, dumb pipes.’ The microservices are their own mini-programs, with all the smarts we can cram into them. But to integrate them, we typically use nothing more intelligent than HTTP-based RESTful interactions or lightweight, open source queuing technology.
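In Kubernetes terms, such a ‘dumb pipe’ often amounts to nothing more than a Service that gives a microservice a stable DNS name. A minimal sketch for the hypothetical ‘orders’ service might look like the following, after which other microservices in the same namespace simply call it over plain HTTP at http://orders.

```yaml
# Hypothetical 'dumb pipe': a stable name and port, nothing smarter.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders        # route to any pod carrying this label
  ports:
    - port: 80         # the port callers use
      targetPort: 8080 # the port the container listens on
```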
Cloud-Native Architecture: Beyond ‘Smart Endpoints, Dumb Pipes’
Replacing ESBs with ‘dumb pipes’ made sense as part of the paradigm shift from SOA’s on-premises world to the cloud-centric world of microservices architecture, but implementation, scalability, and agility challenges remained.
These shortcomings of microservices architecture provided the perfect breeding ground for Kubernetes. In the Kubernetes-fueled cloud-native architecture paradigm, we have ‘smart endpoints, smart service meshes.’
Service meshes introduce a new approach to integrating microservice endpoints that is entirely cloud-native. Service meshes like the open source Istio (along with its counterpart, the Envoy service proxy) also enable the discoverability and observability of containers and their microservices.
As a result, service meshes in conjunction with Kubernetes allow the full dynamic and ephemeral nature of containers to support core enterprise concerns of security, management, and integration – benefits of ESBs in the SOA days, now brought forward to a fully cloud-native architectural paradigm.
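To give a flavor of what a ‘smart mesh’ looks like in practice, here’s a hedged sketch of an Istio VirtualService that splits traffic between two versions of the hypothetical ‘orders’ service. It assumes a companion DestinationRule defines the v1 and v2 subsets; the point is that the routing logic lives in the mesh’s declarative configuration, not in any endpoint’s code.

```yaml
# Illustrative traffic split handled by the mesh, not the endpoints.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1      # assumes a DestinationRule defines this subset
          weight: 90
        - destination:
            host: orders
            subset: v2
          weight: 10
```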
What Cloud-Native Architectures are Missing
Ironically, the best way to understand the paradigm-shifting power of cloud-native architecture is to highlight what’s absent from it: cloud-native is codeless, stateless, and trustless.
That’s not to say that we never write code or deal with state information, or that we can’t trust anything. Rather, these three ‘lesses’ characterize core cloud-native principles.
By codeless I mean that Kubernetes is configurable and extensible, but there’s no call to customize it – no one needs to modify its code to make it fit their environment.
Operators handle configuration via YAML files (among other declarative techniques), giving vendors plenty of opportunity to build user-friendly configuration tooling. Even the various ‘flavors’ of Kubernetes – and there are several – all share a single code base.
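A minimal sketch of that codeless configurability: a hypothetical ConfigMap whose declared settings a container can consume as environment variables, changing the application’s behavior without anyone touching code.

```yaml
# Illustrative configuration-as-data; names and values are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
data:
  LOG_LEVEL: "info"              # tune behavior via declared settings...
  ENABLE_RECOMMENDATIONS: "true" # ...not by modifying anyone's code base
```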
Containers are also inherently stateless, a necessary side effect of their ephemerality. After all, you wouldn’t want to store data in a container that could disappear at a moment’s notice.
Kubernetes must nevertheless handle state information – both persistent data in databases and file systems, and more transient (but still persistent) application state in caches.
To accomplish such state management in a stateless environment, Kubernetes follows cloud-native architectural principles by abstracting storage via codeless principles and exposing such stateful resources via APIs. This approach allows for whatever availability and resilience the organization requires from its persistence tier without requiring the containers themselves to be stateful.
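One hedged example of that abstraction is a PersistentVolumeClaim: a workload declares the storage it needs through the Kubernetes API, and the cluster (backed by whatever storage class the organization chooses) supplies a volume that outlives any individual container.

```yaml
# Illustrative claim: declare what storage is needed, not how it's provisioned.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes:
    - ReadWriteOnce        # one node may mount the volume read-write
  resources:
    requests:
      storage: 10Gi        # the persistence tier behind it is abstracted away
```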
The third of the ‘lesses’ – trustlessness – is an essential characteristic of modern cybersecurity. We can no longer rely upon perimeter security to provide trusted environments. Instead, we must assume all parts of our network are untrusted, and every endpoint must establish its own trust.
You shouldn’t be surprised that Kubernetes calls for trustless interactions. Microservice endpoints are dynamic, and service meshes abstract them – so it’s essential for such abstracted endpoints to take care of their own security. Trustlessness, in fact, is one of the main reasons why service meshes are so important to cloud-native architectures.
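As a sketch of what trustlessness can look like in mesh configuration, Istio lets operators require mutual TLS for service-to-service traffic with a single declarative policy. The example below applies it mesh-wide, assuming istio-system is the mesh’s root namespace; every workload must then prove its identity to its peers rather than relying on a trusted network.

```yaml
# Illustrative mesh-wide policy: no plaintext service-to-service traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh's root namespace in a default install
spec:
  mtls:
    mode: STRICT            # require mutual TLS for all workload-to-workload calls
```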
Key Takeaways
Cloud-native architectures leverage cloud and DevOps best practices to deliver codeless, stateless, and trustless infrastructure that supports the full breadth of modern enterprise infrastructure requirements – and Kubernetes is at the center of the story. It’s no wonder it has become the central technology of the cloud-native architecture paradigm.
Infrastructure engineers should understand the importance of architecture to the Kubernetes story. Without that architectural context, the entire Kubernetes landscape looks like a mélange of miscellaneous projects and components.
IT and business executives need not concern themselves with the trees, but must certainly understand the forest that is cloud-native architecture. Enterprise IT is undergoing a top-to-bottom transformation, and leaders won’t be able to meet the challenges of digital transformation unless they properly support the bedrock such transformation rests upon.
And for you architects: you’re every bit as important as ever, perhaps even more so. The challenge for you is coordinating all the architecture efforts in your organization. Cloud-native architecture is essentially infrastructure architecture, but application, solution, and enterprise architecture must all work together for your organization to achieve success with cloud-native in today’s digital era.