
Climbing the Ladder of Abstraction

Jonas Bonér.
CEO/CTO, Lightbend.
  • 28 Mar 2023
  • 5 minute read

Today’s cloud infrastructure is fantastic. It’s easy to take for granted the richness and power of our cloud-native ecosystem around Kubernetes, and hard to remember the world before containers and Kubernetes and how difficult it was to get anything useful done. We have come a long way from the beginnings of cloud, multi-node, and multi-core development.

But this richness has come with a price. We are drowning in complexity, faced with too many products, decisions, and worries. What products to pick? How to use them individually? How to compose them into a single cohesive system? How to guarantee overall system correctness across product boundaries? How to provide observability of the system as a whole? How to evolve the system over time?

At the same time, users, competition, and new opportunities create new business requirements and a need to move and innovate faster while gracefully managing ever-increasing volumes of users and data. And we can’t just move fast and break things; we have to do it with predictability, repeatability, and reusability.

Function-as-a-Service (FaaS) was born to help solve these issues, providing faster time-to-market at a lower cost. FaaS supports iterative development methodologies, enabling applications to get up and running sooner and allowing modifications to be made more easily. Serverless solutions are an essential part of this trend.

Serverless is very promising as a general developer experience (DX) for cloud and edge development; it is too important to be constrained to FaaS. Many cloud products, e.g., databases, message brokers, and caches, have embraced it and provide “serverless APIs.” That is a step in the right direction, but it leaves developers with a complex integration project when trying to compose them into a single functioning system.

The FaaS approach does have some downsides. For example, the vast majority of today’s FaaS offerings are stateless: the state is stored elsewhere, most often in a database. That means the application state is always somewhere other than where you need it to be, externalized from your application, its processing, and the end user, since you always have to write it to and fetch it from that external store. This adds latency, lowers performance, and can compromise availability, since it puts you at the mercy of the availability of the database.
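To make that cost concrete, here is a minimal sketch in plain Java with JDBC (the handler signature, database URL, and counters table are assumptions for illustration, not tied to any particular FaaS product) of a stateless function that has to fetch and write back its state on every single invocation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical stateless handler: every invocation pays network round trips
// to the database because the function itself holds no state.
public class CounterFunction {

  // Assumed endpoint; in a real deployment this would come from configuration.
  private static final String DB_URL = "jdbc:postgresql://db.example.com/counters";

  // Entry point a FaaS runtime might call; the signature is illustrative only.
  public long handle(String counterId) throws Exception {
    try (Connection conn = DriverManager.getConnection(DB_URL)) {
      // 1. Fetch the current state from "somewhere else".
      long current = 0;
      try (PreparedStatement read =
               conn.prepareStatement("SELECT value FROM counters WHERE id = ?")) {
        read.setString(1, counterId);
        try (ResultSet rs = read.executeQuery()) {
          if (rs.next()) current = rs.getLong("value");
        }
      }

      // 2. Apply the business logic (trivial here).
      long updated = current + 1;

      // 3. Write the new state back out, paying another round trip.
      try (PreparedStatement write =
               conn.prepareStatement(
                   "INSERT INTO counters (id, value) VALUES (?, ?) "
                       + "ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value")) {
        write.setString(1, counterId);
        write.setLong(2, updated);
        write.executeUpdate();
      }
      return updated;
    }
  }
}
```

Every call opens a connection and pays at least two network round trips before it can return, which is exactly the latency, performance, and availability coupling described above.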

FaaS is undoubtedly an excellent choice for stateless use cases like data pipelining, integration, and processing-centric “embarrassingly parallel” workloads. But FaaS is not enough; it is limited in the types of use cases it can support efficiently, which makes it hard to build general-purpose cloud-native applications. For those, you need more tools, in particular efficient and reliable management of distributed state, consistency à la carte (since one size does not fit all), and options for physical co-location of state, processing, and end user.
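As a sketch of what consistency à la carte could mean in practice (the ReadConsistency enum, KeyValueStore interface, and ProfileService below are hypothetical, not an existing API), the idea is to let each operation pick the guarantee it actually needs instead of one global setting for the whole system:

```java
// Hypothetical sketch of "consistency a la carte": each read declares the
// guarantee it needs, and the platform routes it accordingly.
enum ReadConsistency { EVENTUAL, CAUSAL, STRONG }

interface KeyValueStore {
  // EVENTUAL might be served from a nearby replica; STRONG might require a quorum.
  String read(String key, ReadConsistency level);

  void write(String key, String value);
}

class ProfileService {
  private final KeyValueStore store;

  ProfileService(KeyValueStore store) {
    this.store = store;
  }

  // A display name may be slightly stale: favor latency.
  String displayName(String userId) {
    return store.read("name:" + userId, ReadConsistency.EVENTUAL);
  }

  // An account balance must be current: favor correctness.
  String balance(String accountId) {
    return store.read("balance:" + accountId, ReadConsistency.STRONG);
  }
}
```

The point is that the platform, not the application code, decides how to satisfy each level, for example by serving eventual reads from a replica co-located with the end user.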

Can we do better? Most definitely. Alfred North Whitehead famously said, “Civilization advances by extending the number of important operations which we can perform without thinking of them.”

Wisdom that very much applies to our industry. We need to climb the ladder of abstraction and reach a bit higher. But this requires that we, as developers, learn to let go of control, accept that we don’t need every knob, and understand that delegating frees us up to focus on more important things: building core business value.

Another solution that is currently “en vogue” is Platform Engineering as a means to improve the developer experience and mask complexity, shielding developer teams from it through high-level APIs and abstractions. This approach also has its drawbacks. Companies that take on building a custom Internal Developer Platform (IDP) are in for a daunting task, particularly if multi-cloud support is needed. There are a lot of cloud-vendor-specific details that need to be understood and abstracted away, ideally including Kubernetes itself and the tools around it. The challenge is to find the right abstraction level for your developers to work at, without leaking implementation or underlying infrastructure details, so that the IDP is free to evolve internally without breaking the APIs exposed to developers. This can be mitigated by using a pre-packaged multi-cloud PaaS that takes care of all these details for you, providing a high-level DX and abstracting away the nitty-gritty details.

As Timothy Keller said, “Freedom is not so much the absence of restrictions as finding the right ones, the liberating restrictions.” We need to learn to embrace the constraints. But what are these liberating constraints, the liberating abstractions? There are three things we, as developers, can never delegate to a product or platform:

  1. Data model: How do I model the business data and the domain? This includes its structure, constraints, guarantees, storage, and query model.
  2. API definition: How do I want the service to present itself to the outside world? How should it communicate and coordinate with other services? What data does it expose, and under what guarantees?
  3. Business logic: The logic that drives the business value. How do I act and operate on the data: store, query, transform, downsample, and relay it, mine it for intelligence, and trigger side effects?

The first two, the data and API definitions, can be fully declaratively configured, leaving the developer with only the business logic: the function itself. The rest can and should be fully delegated to the underlying platform to manage and automate. In this model, all infrastructure product decisions (e.g., database, message broker, service mesh, cache, API gateway), data storage/query models, and distributed systems patterns around orchestration, resilience, replication, sharding, caching, scheduling, communication, back-pressure, and much more, are inferred from the code, leaving the developer to focus solely on delivering business value.
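As a rough illustration of that split, the sketch below uses hypothetical annotations (declared inline so the example stands alone; they are not any specific product’s SDK) to state the data model and the API declaratively, leaving only the command handler as hand-written business logic:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Hypothetical platform annotations, declared here only to keep the sketch
// self-contained; a real platform would ship these as part of its SDK.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
@interface Service { String name(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Command { String route(); }

// 1. Data model: structure and constraints declared up front; storage,
//    replication, and sharding are left for the platform to infer and manage.
record ShoppingCart(String cartId, Map<String, Integer> items) {}

// 2. API definition: how the service presents itself to the outside world.
@Service(name = "carts")
class CartService {

  // 3. Business logic: the only part the developer hand-writes.
  @Command(route = "POST /carts/{cartId}/items")
  public ShoppingCart addItem(ShoppingCart state, String productId, int quantity) {
    var items = new HashMap<>(state.items());
    items.merge(productId, quantity, Integer::sum);
    return new ShoppingCart(state.cartId(), items);
  }
}
```

Everything outside the addItem body, such as persistence, consistency, routing, and scaling, is the platform’s responsibility to infer from these declarations.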
