A Diverse API Toolbox Driving Hybrid Integrations Across An Event-Driven Landscape

I’m heading to Vegas in the morning to spend two days in conversations with folks about APIs. I am not there for AWS re:Invent, or the Gartner thingy, but I guess in a way I am, because there are people there for those events who want to talk to me about the API landscape. Folks looking to swap stories about how enterprises are investing in a diverse API toolbox for driving hybrid integrations across an event-driven landscape. I’m not giving any formal talks, but as with any engagement, I’m brushing up on the words I use to describe what I’m seeing across the space when it comes to the enterprise API lifecycle.

The Core Is All About Doing Resource Based, Request And Response APIs Well
I’m definitely biased, but I do not subscribe to popular notions that at some point REST, RESTful, web, and HTTP APIs will have to go away. We will be using web technology to provide simple, precise, useful access to data, content, and algorithms for some time to come, despite the API sector’s continued evolution and investment trends coming and going. Sorry, it is simple, low-cost, and something a wide audience gets from both a provider and consumer perspective. It gets the job done. Sure, there are many, many areas where web APIs fall short, but that won’t stop success from continuing to be defined by the enterprise organizations who can do microservices well at scale. Despite relentless assaults by each wave of entrepreneurs, simple HTTP APIs driving microservices will stick around for some time to come.

API Discovery And Knowing Where All Of Your APIs Resources Actually Are
API discovery means a lot of things to a lot of people, but when it comes to delivering APIs well at scale in a multi-cloud, event-driven world, I’m simply talking about knowing where all of your API resources are. Meaning, if I walked into your company tomorrow, could you show me a complete list of every API or web service in use, and what enterprise capabilities they enable? If you can’t, then I’m guessing you aren’t going to be all that agile, efficient, and ultimately effective at doing APIs at scale, able to orchestrate much, or able to identify the most meaningful events occurring across the enterprise landscape. I’m not even getting to the point of service mesh, and other API discovery wet dreams, I’m simply talking about being able to coherently articulate what your enterprise digital capabilities are.
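To make that baseline concrete, here is a minimal sketch, assuming your teams keep OpenAPI definitions somewhere in their repositories, that walks a directory and prints an inventory of every operation it finds. The file layout and naming convention are assumptions, not a standard.

```python
# Minimal sketch: build an inventory of API capabilities from OpenAPI files.
# Assumes definitions live under ./repos as *.openapi.yaml -- adjust to taste.
import glob
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def inventory(root="./repos"):
    for path in glob.glob(f"{root}/**/*.openapi.yaml", recursive=True):
        with open(path) as f:
            spec = yaml.safe_load(f)
        title = spec.get("info", {}).get("title", path)
        for route, item in spec.get("paths", {}).items():
            for method, operation in item.items():
                if method in HTTP_METHODS:
                    summary = operation.get("summary", "")
                    print(f"{title}: {method.upper()} {route} - {summary}")

if __name__ == "__main__":
    inventory()
```

Even something this simple forces the conversation about where definitions live, and which capabilities have none at all.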

Always Be Leveraging The Web As Part Of Your Diverse API Toolbox
Technologists, especially venture-fueled technologists, love to take the web for granted. Nothing does web scale better than the web. Understand the objectives behind your APIs, and consider how you are leveraging the web: negotiate content, cache, and build on the strengths of the web. Use the right media type for the job, and understand the tradeoffs of HTML, CSV, XML, JSON, YAML, and other media types. Understand when hypermedia media types might be more suitable for document, media, and other content-focused API resources. Simple web APIs make a huge difference when they speak to their intended audience and allow them to easily translate an API call into a workable spreadsheet, or navigate to the previous or next episode, installment, or other logical grouping with sensible hypermedia controls. Good API design is more about having a robust and diverse API design toolbox to choose from than it is ever about the dogma that exists around any specific approach, philosophy, protocol, or venture capital fueled trend.
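As a rough illustration of what those hypermedia controls can look like, here is a minimal sketch of an episode resource rendered as HAL-style JSON, with previous and next links a client can follow. The resource names and paths are hypothetical.

```python
# Minimal sketch of a HAL-style hypermedia response for an episodic resource.
# The API paths and fields are hypothetical, just to show prev/next controls.
import json

def episode_representation(show_id: str, episode: int, total: int) -> dict:
    links = {"self": {"href": f"/shows/{show_id}/episodes/{episode}"}}
    if episode > 1:
        links["prev"] = {"href": f"/shows/{show_id}/episodes/{episode - 1}"}
    if episode < total:
        links["next"] = {"href": f"/shows/{show_id}/episodes/{episode + 1}"}
    return {
        "_links": links,
        "title": f"Episode {episode}",
    }

print(json.dumps(episode_representation("api-evangelist", 2, 10), indent=2))
```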

Have A Clear Vision For Who Will Be Using Your APIs
One significant mistake that API designers, developers, and architects make over and over again is not having a clear vision of who will be using the APIs they are building. They define, design, deliver, and operate APIs based upon what the provider wants rather than what consumers need. They use protocols, ignore existing patterns, and adopt the latest trends that have nothing to do with what API consumers will be needing or capable of putting to work. Make sure you know your consumers, and consider giving them more control with query languages like GraphQL and Falcor, allowing them to define the type of experience they want. Work to have a clear vision of who will be consuming an API, even if you don’t know who they are yet. Start simple with basic web APIs that easily on-board new users who are unfamiliar with the domain and schema, while also allowing for evolution that gives power users who are in the know more access, more control, and a stronger voice in the vision of what your APIs deliver, or do not.
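To make the consumer-control point tangible, here is a minimal sketch of a GraphQL request made over HTTP, where the consumer decides exactly which fields come back. The endpoint URL and the orders schema are hypothetical.

```python
# Minimal sketch: a consumer shapes its own response with a GraphQL query.
# The endpoint URL and the orders schema are hypothetical assumptions.
import requests  # pip install requests

query = """
{
  orders(last: 5) {
    id
    status
    total
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",
    json={"query": query},
    timeout=10,
)
print(response.json())
```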

Responding In Real Time, Not Just Upon Request
A well-oiled request and response API infrastructure is a critical base for any enterprise organization, however, a sign of more mature, scalable API operations is the presence of event-driven infrastructure, including webhooks, streaming solutions, and multi-protocol, multi-message approaches to moving data and content around, and responding algorithmically to real-time events occurring across domains. Investing in event-driven infrastructure is not simply about using Kafka, it is about having a well-defined, well-honed web API base, with a suite of event-driven approaches in one’s toolbox for also providing access to internal, partner, and last-mile public and 3rd party resources using an appropriate set of protocols and message formats. That might be as simple as a webhook subscription that delivers an HTTP push when something changes, or maintaining persistent HTTP connections for streaming updates as they occur, all the way to high-volume HTTP and TCP connections to a variety of topical channels using Kafka, or other industrial-grade API-driven solutions like gRPC, and beyond.
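On the simplest end of that spectrum, a webhook is just an HTTP push to a URL you register with the provider. Here is a minimal sketch of a receiver using Flask; the route and event payload fields are assumptions.

```python
# Minimal sketch of a webhook receiver: the provider POSTs an event payload
# to this URL whenever something changes. Payload fields are assumptions.
from flask import Flask, request, jsonify  # pip install flask

app = Flask(__name__)

@app.route("/webhooks/orders", methods=["POST"])
def order_changed():
    event = request.get_json(force=True)
    # React to the change -- queue it, fan it out, or update a local cache.
    print(f"received {event.get('type', 'unknown')} for order {event.get('orderId')}")
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```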

Have A Reason For When You Switch Protocols
There are a number of reasons why we switch protocols, moving off HTTP towards a TCP way of getting things done, with most of the reasoning being more emotional than it is ever technical. When I ask people why they went from HTTP APIs to Kafka, or WebSockets, there is rarely a protocol-based response. They did it because they needed things done in real time, through specific channels, or simply because Kafka is how you do big data, or WebSockets is how you do real-time data. There is rarely much scrutiny of who the consumers are, what was gained by moving to TCP, and what was lost by moving off HTTP. There is little awareness of the work Google has done around gRPC and HTTP/2, or what has happened recently around HTTP/3, which builds on QUIC (Quick UDP Internet Connections). I’m no protocol expert, but I do grasp the role that these protocols play, and understand that the fundamental foundation of APIs is the web, and the importance of having a well thought out strategy when it comes to using the Internet for delivering on the API vision across the enterprise.

Ensuring All Your API Infrastructure Is Reliable
It doesn’t matter what your API design processes are, or what tools you are using, if you cannot operate them reliably. If you aren’t monitoring, testing, securing, and understanding performance, consumption, and limitations across ALL of your API infrastructure, then no approach will ever be the right API solution. Web APIs, hypermedia, GraphQL, Webhooks, Server-Sent Events, WebSockets, Kafka, gRPC, and any other approach will always be inadequate if you cannot reliably operate them. Every tool within your API design toolbox should be able to be effectively deployed, thoughtfully managed, and coherently monitored, tested, secured, and delivered as a reliable service. If you don’t understand what is happening under the hood with any of your API infrastructure, whether it is out of your league technically, or you are kept in the dark by vendor magic, it should NOT be a tool in your toolbox. It should stay in the R&D lab until you can prove that you can reliably deliver, support, scale, and evolve it, in alignment with, and with purpose augmenting, your existing API infrastructure.
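For a sense of what the monitoring baseline can look like, here is a minimal sketch of a reliability probe that checks status and latency for each API in your inventory. The endpoints, health path, and threshold are hypothetical assumptions.

```python
# Minimal sketch of a reliability probe: check status and latency per API.
# The endpoints and the latency threshold are hypothetical assumptions.
import time
import requests  # pip install requests

ENDPOINTS = [
    "https://api.example.com/health",
    "https://events.example.com/health",
]
MAX_LATENCY_SECONDS = 1.0

def probe(url: str) -> None:
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=5)
        elapsed = time.monotonic() - start
        ok = response.status_code == 200 and elapsed <= MAX_LATENCY_SECONDS
        print(f"{url}: status={response.status_code} latency={elapsed:.2f}s ok={ok}")
    except requests.RequestException as error:
        print(f"{url}: unreachable ({error})")

for endpoint in ENDPOINTS:
    probe(endpoint)
```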

Be Able To Deliver, Operate, And Scale Your APIs Anywhere They Are Needed
One increasingly critical aspect of any tool in our API design toolbox is whether or not we can deploy and operate it within multiple environments, or find that we are limited to just a single on-premise or cloud location. Can your request and response web API infrastructure operate within the AWS, Google, or Azure clouds? Does it operate on-premise within your datacenter, locally for development, and within sandbox environments for partners and 3rd party developers? Where your APIs are deployed will have just as big an impact on reliability and performance as your approach to design and the protocols you are using. Regulatory and other regional-level concerns may have a bigger impact on your API infrastructure than using REST, GraphQL, Webhooks, Server-Sent Events, or Kafka. Being able to ensure you can deliver, operate, and scale APIs anywhere they are needed is fast becoming a defining characteristic of the tools that we possess in our API toolboxes.

Making Sure All Your Enterprise Capabilities Are Well Defined
The final, and most critical, element of any enterprise API toolbox is ensuring that all of your enterprise capabilities are defined as machine-readable API contracts, using OpenAPI, AsyncAPI, JSON Schema, and other formats. API definitions should provide human- and machine-readable contracts for all enterprise capabilities that are in play. These contracts contribute to every stop along the API lifecycle, and help both API providers and consumers realize everything I have discussed in this post. OpenAPI provides what we need to define our request and response capabilities using HTTP, and AsyncAPI provides what we need to define our event-driven capabilities, providing the basis for understanding what we are capable of delivering using our API toolboxes, and responding to via the hybrid integration solutions we’ve engineered and automated using our event-driven solutions. These contracts define not just the surface area of our API infrastructure, but also the API operations that surround the enterprise capabilities we are enabling internally, with partners, and publicly via our enterprise API efforts.
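A contract for a single capability does not have to be big. Here is a minimal sketch of an OpenAPI 3.0 definition, kept as a Python dictionary so it can be rendered to YAML or JSON; the capability, path, and schema are hypothetical.

```python
# Minimal sketch of an OpenAPI 3.0 contract for a single enterprise capability.
# The capability, path, and schema are hypothetical assumptions.
import yaml  # pip install pyyaml

contract = {
    "openapi": "3.0.3",
    "info": {"title": "Order Status", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}/status": {
            "get": {
                "summary": "Retrieve the current status of an order",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {
                        "description": "Current order status",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {"status": {"type": "string"}},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

print(yaml.safe_dump(contract, sort_keys=False))
```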