API Is Not Just REST

This is one of my talks from APIDays Paris 2018. Here is the abstract: The modern API toolbox includes a variety of standards and methodologies that center around REST, but also include Hypermedia, GraphQL, real-time streaming, event-driven architecture, and gRPC. API design has pushed beyond just basic access to data, and can also be about querying complex data structures, providing experience-rich APIs, delivering real-time data streams with Kafka and other standards, and leveraging the latest algorithms and providing access to machine learning models. The biggest mistake any company, organization, or government agency can make is limiting their API toolbox to just REST. Learn about a robust and modern API toolbox from the API Evangelist, Kin Lane.

Diverse Toolbox
After eight years of evangelizing APIs, I find that in many API conversations some people still assume that, as the API Evangelist, I'm exclusively talking about REST–when in reality I am simply talking about APIs that leverage the web. Sure, REST is a dominant design pattern I shine a light on, and it has enjoyed a significant amount of the spotlight over the last decade, but on the ground at companies, organizations, institutions, and government agencies of all shapes and sizes, I find a much more robust API toolbox is required to get the job done. REST is just one tool in my robust and diverse toolbox, and I wanted to share with you what I am using in 2018.

The toolbox I'm referring to isn't just about what is needed to equip an API architect to build out the perfect vision of the future. This is a toolbox that is equipped to get us from the present day into the future, acknowledging all of the technical debt that exists within most organizations, which many are looking to evolve beyond as part of their larger digital transformation. My toolbox is increasingly pushing the boundaries of what I've historically defined as an API, and I'm hoping that my experiences will also push the boundaries of what you define as an API, making you ready for what you will encounter on the ground within the organizations you are delivering APIs within.

Application Programming Interface
API is an acronym standing for application programming interface. I do not limit the scope of application in this context to just web or mobile applications. I don't even limit it to the growing number of device-based applications I'm seeing emerge. For me, application is about applying the digital resources made available via a programmatic interface. I'm looking to take the data, content, media, and algorithms being made available via APIs and apply them anywhere they are needed on the web, within mobile and device applications, or on the desktop, via spreadsheets, digital signage, or anywhere else that is relevant and sensible in 2018.

API does not mean REST. I'm really unsure how it got this dogmatic association, nor do I care. It is an unproductive legacy of the API sector, and one I'm looking to move beyond. Application programming interfaces aren't the solution to every digital problem we face. They are about understanding a variety of protocols and messaging formats, and finding the best path forward depending on your application of the digital resources you are making accessible. My API toolbox reflects this view of the API landscape. It has evolved significantly over the last decade of my career, and it will continue to evolve, defined by what I am seeing on the ground within the companies, organizations, institutions, and government agencies I am working with.

SOAP
I have been working with databases since 1987, so I fully experienced the web services evolution of our industry. During the early years of the web, there was a significant amount of investment into thinking about how we exchanged data across many industries, as well as within individual companies, when it came to building out the infrastructure to deliver upon this vision. The web was new, but we did the hard work to understand how we could achieve data interoperability in a machine-readable way, with an emphasis on the messages we were exchanging. Looking back, I wish we had spent more time thinking about how we were using the web as a transport, as well as the influence of industry and investment interests, but maybe it wasn't possible while the web was still so new.

While web services provided a good foundation for delivering application programming interfaces, they may have underinvested in their usage of the web as a transport, and became a victim of the commercial success of the web. The need to deliver web applications more efficiently, and a desire to hastily use the low-cost web as a transport, quickly bastardized and cannibalized web services into a variety of experiments and approaches that would get the job done with a lot less overhead and friction, introducing efficiencies along the way, but also fragmenting our enterprise toolbox in ways we are still putting back together.

XML & JSON RPC
One of the more fractious aspects of the web API evolution has been the pushback when API providers call their XML or JSON remote procedure call (RPC) APIs RESTful, RESTish, or some other mix of philosophy and ideology, which has proven to be a dogma-stimulating event. RESTafarians prefer that API providers properly define their approach, while many RPC providers couldn't care less about labels, and are looking to just get the job done. This makes XML and JSON RPC a very viable approach to doing APIs, one that still persists almost 20 years later.

Amazon Web Services, Flickr, Slack, and other RPC APIs are doing just fine when it comes to getting the job done, despite the frustration, ranting, and shaming by the RESTafarians. It isn't an ideal approach to delivering programmatic interfaces using the web, but it reflects its web roots, and gets the job done with low-cost web infrastructure. RPC leaves a lot of room for improvement, but is a tool that has to remain in the toolbox. Not because I am designing new RPC APIs, but because there is no doubt that at some point I will have to integrate with an RPC API to get done what I need in my regular work.
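To make the distinction concrete, here is a minimal sketch in Python contrasting an RPC-style call, where the method name rides along in the request, with a resource-oriented call. The endpoints and parameters are hypothetical, purely for illustration, and do not reflect any specific provider's API.

```python
# A sketch contrasting an RPC-style call with a resource-oriented one.
# The endpoints and parameters below are hypothetical examples.
import requests

# RPC style: the method name travels in the request, and everything else
# rides along as arguments.
rpc_response = requests.get(
    "https://api.example.com/services/rest/",
    params={"method": "photos.search", "tags": "api", "format": "json"},
)

# Resource style: the URL names the resource, and the HTTP verb carries the intent.
rest_response = requests.get(
    "https://api.example.com/photos",
    params={"tags": "api"},
)

print(rpc_response.status_code, rest_response.status_code)
```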

REST at Center
Roy Fielding's dissertation on representational state transfer, often referred to simply as REST, is an amazing piece of work. It makes a lot of sense, and I feel it is one of the most thorough looks at how to use the web for making data, content, media, and algorithms accessible in a machine-readable way. I get why so many folks feel it is the RIGHT WAY to do things, which is one of the reasons it is the default approach for many API designers and architects–myself included. However, REST is a philosophy, and much like microservices, provides us with a framework to think about how we put our API toolbox to work, but isn't something that should blind us from the other tools we have within our reach.

REST is where I begin most conversations about APIs, but it doesn't entirely encompass what I mean every time I use the phrase API. I feel REST has given me an excellent base for thinking about how I deliver APIs, but it will slow my effectiveness if I leave my REST blinders on, and let dogma control the scope of my toolbox. REST has shown me the importance of the web when talking about APIs, and will continue to drive how I deliver APIs for many years. It has shown me how to structure, standardize, and simplify how I do APIs, and to help my applications reach as wide an audience as possible, using commonly understood infrastructure.
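As a rough illustration of the structure REST encourages, here is a minimal sketch using Flask and a hypothetical /photos resource, where the URL names the resource and the HTTP verbs carry the intent. It is a sketch of the pattern, not a complete API.

```python
# A minimal sketch of a resource-oriented API, assuming Flask and a
# hypothetical /photos resource held in memory.
from flask import Flask, jsonify, request

app = Flask(__name__)
PHOTOS = []

@app.route("/photos", methods=["GET"])
def list_photos():
    # GET returns the collection.
    return jsonify(PHOTOS)

@app.route("/photos", methods=["POST"])
def add_photo():
    # POST adds a new member to the collection.
    photo = request.get_json()
    PHOTOS.append(photo)
    return jsonify(photo), 201

if __name__ == "__main__":
    app.run()
```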

Negotiating CSV
As the API Evangelist, I work with a lot of government and business users. One thing I've learned working with this group is the power of using comma separated values (CSV) as a media type. I know that we developers and database folks enjoy a lot more structure in our lives, but I have found that allowing for the negotiation of CSV responses from APIs can move mountains when it comes to helping onboard business users and decision makers to the potential of APIs–even if the data format doesn't represent the full potential of an API. CSV responses are the low bar I set for my APIs, making them accessible to a very wide business audience.

CSV as a data format represents an anchor for the lowest common denominator for API access. As a developer, it won't be the data format I personally negotiate, but for a business user, it very well could mean the difference between using an API or not, allowing them to take API responses and work with them in their native environment, whether that is an Excel spreadsheet or Google Sheets. As I am designing my APIs, I'm always thinking about how I can make my resources available to the masses, and enabling the negotiation of CSV responses whenever possible helps me achieve my wider objectives.
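Here is a small sketch of what negotiating CSV looks like from the consumer side, assuming a hypothetical /reports endpoint that honors the Accept header, using the Python requests and csv libraries.

```python
# A sketch of negotiating CSV from an API, assuming a hypothetical
# /reports endpoint that honors the Accept header.
import csv
import io
import requests

response = requests.get(
    "https://api.example.com/reports",
    headers={"Accept": "text/csv"},
)

# Parse the CSV body so it can be handed to a spreadsheet, or worked with directly.
reader = csv.DictReader(io.StringIO(response.text))
for row in reader:
    print(row)
```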

Negotiating XML
I remember making the transition from XML to JSON in 2009. At first I was uncomfortable with the data format, and resisted using it over my more proven XML. However, I quickly saw the potential for the scrappy format while developing JavaScript applications, and when developing mobile applications. While JSON is my preferred, and default format for API design, I am still using XML on a regular basis while working with legacy APIs, as well as allowing for XML to be negotiated by the APIs I’m developing for wider consumption beyond the startup community. Some developers are just more comfortable using XML over JSON, and who knows, maybe by extending an XML olive branch, I might help developers begin to evolve in how they consume APIs.

Similar to CSV, XML represents support for a wider audience. JSON has definitely shifted the landscape, but there are still many developers out there who haven't made the shift. Whether we are consumers of their APIs, or providing APIs that target these developers, XML needs to be on the radar. Our toolbox still needs to allow us to provide, consume, validate, and transform XML. If you aren't working with XML at all in your job, consider yourself privileged, but also know that you exist within a siloed world of development, and you don't receive much exposure to the many systems that are the backbone of government and business.

Negotiating JSON
I think about my career evolution, and the different data formats I’ve used in 30 years. It helps me see JSON as the default reality, not the default solution. It is what is working now, and reflects not just the technology, but also the business and politics of doing APIs in a mobile era, where JavaScript is widely used for delivering responsive solutions via multiple digital channels. JSON speaks to a wide number of developers, but we can’t forget that it is mostly comprised of developers who have entered the sector in the last decade.

JSON is the default media type I use for any API I'm developing today, no matter what my backend data source is. However, it is just one of several data formats I will potentially open up for negotiation. I feel like plain JSON is lazy, and whenever possible I should be thinking about a wider audience by providing CSV and XML representations, but I should also be getting more structured and standardized in how I handle the requests and responses for my API. While I want my APIs to reach as wide an audience as possible, I also want them to deliver rich results that best represent the data, content, media, algorithms, and other digital resources I'm serving up.
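As a sketch of what this looks like from the provider side, here is a minimal Flask handler for a hypothetical /contacts resource that defaults to JSON but lets consumers negotiate CSV via the Accept header. The same pattern extends to XML or any other representation you choose to support.

```python
# A minimal provider-side sketch, assuming Flask: JSON is the default,
# but CSV can be negotiated. The /contacts resource and its fields are hypothetical.
import csv
import io
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

CONTACTS = [{"name": "Ada", "email": "ada@example.com"}]

@app.route("/contacts")
def contacts():
    accept = request.headers.get("Accept", "application/json")
    if "text/csv" in accept:
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=["name", "email"])
        writer.writeheader()
        writer.writerows(CONTACTS)
        return Response(buffer.getvalue(), mimetype="text/csv")
    # JSON stays the default for anyone who does not ask for something else.
    return jsonify(CONTACTS)

if __name__ == "__main__":
    app.run()
```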

Hypermedia Media Types
Taking for granted the affordances present when humans engage with the web via browsers is one of the most common mistakes I make as an API designer, developer, and architect. This is a shortcoming I am regularly trying to make up for by getting more sophisticated in my usage of existing media types, and allowing consumers to negotiate exactly the content they are looking for, achieving a heightened experience consuming any API that I deliver. Hypermedia media types provide a wealth of ways to deliver consistent experiences that help me deliver many of the affordances we expect as we make use of data, content, media, and algorithms via the web.

Using media types like HAL, Siren, JSON API, Collection+JSON, and JSON-LD is allowing me to deliver a much more robust API experience to a variety of API clients. Hypermedia reflects where I want to be when it comes to API design and architecture that leverages the web, but it is a reflection I often have to think deeply about as I still work to reach out to a wide audience, forcing me to make it one of several types of experience my consumers can negotiate. While I wish everyone saw the benefits, sometimes I need to make sure CSV, XML, and simpler JSON are also on the menu, ensuring I don't leave anyone behind as I work to bridge where we are with where I'd like to go.
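To give a sense of what hypermedia adds, here is a sketch of a HAL-style response body for a hypothetical /orders resource, where the _links object hands the client its next affordances instead of forcing it to hard-code URL patterns.

```python
# A sketch of a HAL-style response body for a hypothetical /orders resource.
hal_order = {
    "id": "order-123",
    "status": "processing",
    "_links": {
        "self": {"href": "/orders/order-123"},
        "cancel": {"href": "/orders/order-123/cancellation"},
        "customer": {"href": "/customers/42"},
    },
}

# A hypermedia-aware client follows the links it is given rather than
# hard-coding URL patterns.
next_steps = list(hal_order["_links"].keys())
print(next_steps)  # ['self', 'cancel', 'customer']
```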

API Query Layers
Knowing my API consumers is an important aspect of how I use my API toolbox. Depending on who I’m targeting with my APIs, I will make different decisions regarding the design pattern(s) I put to work. While I prefer investing resources into the design of my APIs, and crafting the URLs, requests, and responses my consumers will receive, in some situations my consumers might also need much more control over crafting the responses they are getting back. This is when I look to existing API query languages like Falcor or GraphQL to give my API consumers more of a voice in what their API responses will look like.

API query layers are never a replacement for more RESTful or hypermedia approaches to delivering web APIs, but they can provide a very robust way to hand over control to consumers. API design is important for providers to understand and define the resources they are making available, but a query language can be very powerful when it comes to making very complex data and content resources available via a single API URL. Of course, as with each tool present in this API toolbox, there are trade-offs with deciding to use an API query language, but in some situations it can make the development of clients much more efficient and agile, depending on who your audience is, and the resources you are looking to make available.
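Here is a sketch of what handing over that control can look like with GraphQL, assuming a hypothetical /graphql endpoint and schema, where the consumer asks for exactly the fields it wants across nested resources in a single request.

```python
# A sketch of a GraphQL request, assuming a hypothetical /graphql endpoint
# and schema. The consumer decides which fields come back.
import requests

query = """
{
  author(id: "42") {
    name
    posts(limit: 2) {
      title
      publishedAt
    }
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",
    json={"query": query},
)
print(response.json())
```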

Webhooks
In my world APIs are rarely a one-way street. My APIs don't just allow API consumers to poll for data, content, and updates. I'm looking to define and respond to events, allowing data and content to be pushed to consumers. I'm increasingly using webhooks as a way to help my clients make their APIs a two-way street, and limit the amount of resources it takes to make digital assets available via APIs. I work with them to define the meaningful events that occur across their platforms, and allow API consumers to subscribe to these events via webhooks, opening the door for API providers to deliver a more event-driven approach to doing APIs.

Webhooks are the 101 level of event-driven API architecture for API providers. They are where you get started trying to understand the meaningful events that are occurring via any platform. Webhooks are how I am helping API providers understand what is possible, but also how I'm training API consumers in a variety of API communities about how they can deliver better experiences with their applications. I see webhooks, alongside API design and management, as a way to help API providers and consumers better understand how API resources are being used, developing a wider awareness around which resources actually matter, and which ones do not.
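Here is a minimal sketch of the consumer side of a webhook, assuming Flask, a hypothetical order.shipped event, and a shared-secret signature header the provider includes so payloads can be verified before they are trusted.

```python
# A minimal webhook receiver sketch, assuming Flask. The event name and the
# X-Signature header are hypothetical illustrations of a common pattern.
import hashlib
import hmac
from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = b"replace-me"

@app.route("/webhooks/orders", methods=["POST"])
def handle_order_event():
    # Verify the payload came from the provider before trusting it.
    signature = request.headers.get("X-Signature", "")
    expected = hmac.new(SHARED_SECRET, request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        abort(401)

    event = request.get_json()
    if event.get("type") == "order.shipped":
        print("order shipped:", event.get("order_id"))
    return "", 204

if __name__ == "__main__":
    app.run()
```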

Websub
In 2018, I am investing more time in putting Websub, formerly known by the name none of us could actually pronounce, PubSubHubbub, to work. This approach to making content available by subscription as things change has finally matured into a standard, and in my opinion reflects the evolution of how we deliver APIs. I am using Websub to help me understand not just the event-driven nature of the APIs I'm delivering, but also that intersection of how we make API infrastructure more efficient and precise in doing what it does. It helps us develop meaningful subscriptions to data and content, adding another dimension to the API design and query conversation.

Websub represents the many ways we can orchestrate our API implementations using a variety of content types, and push and pull mechanisms, all leveraging the web as the transport. I'm intrigued by the distributed aspect of API implementations using Websub, and the discovery that is built into the approach. The remaining pieces are pretty standard API stuff, using GETs, POSTs, and content negotiation to get the job done. While not an approach I will be using by default, for specific use cases, delivering data and content to known consumers, I am beginning to put Websub to work alongside API query languages and other event-driven architectural approaches. Now that Websub has matured as a standard, I'm even more interested in leveraging it as part of my diverse API toolbox.
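Here is a sketch of the subscription half of WebSub, where the subscriber POSTs its intent to the hub it discovered for a topic. The hub, topic, and callback URLs are hypothetical, and the hub will verify the callback before it starts delivering content.

```python
# A sketch of a WebSub subscription request. The hub, topic, and callback
# URLs are hypothetical; the hub verifies the callback before delivering content.
import requests

response = requests.post(
    "https://hub.example.com/",
    data={
        "hub.mode": "subscribe",
        "hub.topic": "https://publisher.example.com/feed",
        "hub.callback": "https://consumer.example.com/websub/callback",
    },
)
# A 202 means the hub accepted the request and will verify the callback.
print(response.status_code)
```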

Server Sent Events (SSE)
I consider webhooks to be the gateway drug for API event-driven architecture, making API integrations a two-way street, while also making them more efficient, and potentially real time. After webhooks, the next tool in my toolbox for making API consumption more efficient and real time is server-sent events (SSE). Server-sent events is a technology where a browser receives automatic updates from a server via a sustained HTTP connection, which has been standardized as part of HTML5 by the W3C. The approach is primarily used to establish a sustained connection between a server and the browser, but can just as easily be used server to server.

Server-sent events (SSE) deliver one-way streaming APIs which can be used to send regular, sustained updates, and can be more efficient than repeatedly polling an API. SSE is an efficient way to begin going beyond the basics of the client-server request and response model and pushing the boundaries of what APIs can do. I am using SSE to make APIs much more real time, while also getting more precise with the delivery of data and content, leveraging other standards like JSON Patch to only provide what has changed, rather than sending the same data out over the pipes again, making API communication much more efficient.
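Here is a minimal sketch of the provider side of SSE, assuming Flask, where the server holds the connection open and pushes data: frames over the text/event-stream content type.

```python
# A minimal server-sent events sketch, assuming Flask: the server holds the
# connection open and pushes "data:" frames over text/event-stream.
import json
import time
from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    counter = 0
    while True:
        counter += 1
        payload = json.dumps({"tick": counter})
        # Each SSE message is one or more "data:" lines followed by a blank line.
        yield f"data: {payload}\n\n"
        time.sleep(1)

@app.route("/stream")
def stream():
    return Response(event_stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run()
```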

Websockets
Shifting things further into real time, WebSockets are what I'm using to deliver two-way API streams that require data to be both sent and received, providing full-duplex communication channels over a single TCP connection. WebSocket is a different TCP protocol from HTTP, but is designed to work over HTTP ports 80 and 443 as well as to support HTTP proxies and intermediaries, making it compatible with the HTTP protocol. To further achieve compatibility, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol, pushing the boundaries of APIs beyond HTTP in a very seamless way.

SSE is all about the one-way efficiency, and websockets is about two-way efficiency. I prefer keeping things within the realm of HTTP with SSE, unless I absolutely need the two-way, full-duplex communication channel. As you’ll see, I’m fine with pushing the definition of API out of the HTTP realm, but I’d prefer to keep things within bounds, as I feel it is best to embrace HTTP when doing business on the web. I can accomplish a number of objectives for data, content, media, and algorithmic access using the HTTP tools in my toolbox, leaving me to be pretty selective when I push things out of this context.
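Here is a sketch of a two-way WebSocket client, assuming the Python websockets package and a hypothetical wss://stream.example.com/updates endpoint, where the client both sends a subscription message and keeps receiving updates over the same connection.

```python
# A sketch of a two-way WebSocket client, assuming the `websockets` package
# and a hypothetical streaming endpoint.
import asyncio
import websockets

async def listen():
    async with websockets.connect("wss://stream.example.com/updates") as ws:
        # Full duplex: send a subscription message and keep receiving.
        await ws.send('{"action": "subscribe", "channel": "orders"}')
        async for message in ws:
            print("received:", message)

if __name__ == "__main__":
    asyncio.run(listen())
```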

gRPC Using HTTP/2
While I am forced to use WebSockets for some existing integrations, such as with Twitter and other legacy implementations, it isn't my choice for next generation projects. I'm opting to keep things within the HTTP realm, embracing the next evolution of the protocol, and following Google's lead with gRPC. As with other RPC approaches, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. gRPC embraces HTTP/2 as its next generation transport protocol, while also employing Protocol Buffers, Google's open source mechanism for the serialization of structured data.

At Google, I am seeing Protocol Buffers used in parallel with OpenAPI for defining JSON APIs, providing two-speed APIs using HTTP/1.1 and HTTP/2. I am also seeing Protocol Buffers used with HTTP/1.1 as a transport, making it something I have had to integrate with alongside SOAP and other web APIs. While I am integrating with APIs that use Protocol Buffers, I am most interested in the usage of HTTP/2 as a transport for APIs, and I am investing more time learning about the next generation headers in use, and the variety of approaches in which HTTP/2 is used as a transport for traditional APIs, as well as multi-directional, streaming APIs.
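Here is a sketch of what calling a gRPC service from Python can look like, assuming stubs have been generated with protoc from a hypothetical greeter.proto, along the lines of gRPC's own hello world example. The greeter_pb2 and greeter_pb2_grpc modules are the generated code, not something you write by hand.

```python
# A sketch of a gRPC client call, assuming greeter_pb2 / greeter_pb2_grpc were
# generated from a hypothetical greeter.proto with protoc. gRPC rides on HTTP/2
# and serializes messages as Protocol Buffers.
import grpc
import greeter_pb2
import greeter_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = greeter_pb2_grpc.GreeterStub(channel)

# The request and response types come from the .proto definition.
reply = stub.SayHello(greeter_pb2.HelloRequest(name="Kin"))
print(reply.message)
```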

Apache Kafka
Another shift I could not ignore across the API landscape in 2017 was the growth in adoption of Kafka as a distributed streaming API platform. Kafka focuses on enabling providers to read and write streams of data like a messaging system, develop applications that react to events in real time, and store data safely in a distributed, replicated, fault-tolerant cluster. Kafka was originally developed at LinkedIn, but is now an Apache open source project that is in use across a number of very interesting companies, many of which have been sharing their stories of how efficient it is for developing internal data pipelines. I've been studying Kafka throughout 2017, and I have added it to my toolbox, despite it pushing the boundaries of my definition of what an API is beyond the HTTP realm.

Kafka has moved out of the realm of HTTP, using a binary protocol over TCP, defining all APIs as request-response message pairs, using its own messaging format. Each client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response messages–no handshake is required on connection or disconnection. TCP is much more efficient than HTTP here because it allows you to maintain persistent connections used for many requests. This takes streaming APIs to new levels, providing a super fast set of open source tools you can use internally to deliver the big data pipelines you need to get the job done. My mission is to understand how these pipelines are changing the landscape and which tools in my toolbox can help augment Kafka and deliver the last mile of connectivity to partners and public applications.
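Here is a sketch of what reading and writing one of these streams can look like, assuming the kafka-python package, a local broker, and a hypothetical page-views topic.

```python
# A sketch of producing to and consuming from a Kafka stream, assuming the
# kafka-python package, a local broker, and a hypothetical "page-views" topic.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("page-views", {"path": "/pricing", "user": "42"})
producer.flush()

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.value)
```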

Message Queuing Telemetry Transport (MQTT)
Continuing to round off my API toolbox in a way that pushes the definition of APIs beyond HTTP, and helping me understand how APIs are being used to drive Internet-connected devices, I've added Message Queuing Telemetry Transport (MQTT), an ISO-standard publish-subscribe messaging protocol, to my toolbox. The protocol works on top of the TCP/IP protocol, and is designed for connections with remote locations where a light footprint is required because compute, storage, or network capacity is limited. This makes MQTT worth considering when you are connecting devices to the Internet and are unsure of the reliability of your connection.

Both Kafka and MQTT have shown me, over the last couple of years, the limitations of HTTP when it comes to the high and low volume aspects of moving data around over networks. I don't see this as a threat to APIs that leverage HTTP as a transport, I just see them as additional tools in my toolbox, for projects that meet these requirements. This isn't a failure of HTTP, this is simply a limitation, and when I'm working on API projects involving internet connected devices I'm going to weigh the pros and cons of using simple HTTP APIs alongside using MQTT, and be a little more considerate about the messages I'm sending back and forth between devices and the cloud over the network I have in place. MQTT reflects my robust and diverse API toolbox, one that gives me a wide variety of tools I'm familiar with and can use in different environments.
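Here is a sketch of a lightweight MQTT subscriber, assuming the paho-mqtt package (1.x API), a hypothetical broker, and a devices/+/temperature topic, where the + wildcard subscribes across every device.

```python
# A sketch of a lightweight MQTT subscriber, assuming the paho-mqtt package
# (1.x API) and a hypothetical broker and topic.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe on (re)connect so the subscription survives reconnects.
    client.subscribe("devices/+/temperature")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode("utf-8"))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.example.com", 1883)
client.loop_forever()
```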

Mastering My Usage Of Headers
One thing I've learned over the years while building my API toolbox is the importance of headers, and not just HTTP headers, but how headers are used more generally across the network. I have to admit that I understood the role of headers in the API conversation, but had not fully understood the scope of their importance when it comes to taking control over how your APIs operate within a distributed environment. Knowing which headers are required to consume APIs is essential to delivering stable integrations, and providing clear guidance on headers from a provider standpoint is essential to APIs operating as expected on the open web.

Content negotiation was the doorway I walked through that demonstrated the importance of HTTP headers when it comes to delivering meaningful API experiences, allowing me to negotiate CSV, XML, and JSON message formats, as well as engage with my digital resources in a deeper way using hypermedia media types. My mastery of headers is allowing me to better orchestrate an event-driven experience via webhooks, and long-running HTTP connections via server-sent events. They are also taking me into the next generation of connectivity using HTTP/2, making them a critical aspect of my API toolbox. Historically, headers have often been hidden in the background of my API work, but increasingly they are front and center, and essential to me getting the results I'm looking for.

Standardizing My Messaging
I have to admit I had taken the strength of message formats present in my web service days for granted. While I still think they are bloated and too complex, I feel like we threw out a lot of benefits when we made the switch to more RESTful APIs. Overall I think the benefits of the evolution were positive, and media types provide us with some strong ways to standardize the messages we pass back and forth. I’m fine operating in a chaotic world of message formats and schema that are developed in the moment, but I’m a big fan of all roads leading to standardization and reuse of meaningful formats, so that we can try to speak with each other via APIs in more common formats.

I do not feel that there is one message format to rule them all, or even one for each industry. I think innovation at the message layer is important, but I also feel like we should be leveraging JSON Schema to help tame things whenever possible, and standardize them as media types. Whenever possible, reusing existing standards from day one is preferred, but I get that this isn't always the reality, and in many cases we are handed the equivalent of a filing cabinet filled with handwritten notes. In my world, there will always be a mix of known and unknown message formats, something I will always work to tame, increasingly applying machine learning models to help me identify, evolve, and make sense of them–standardizing things in any way I possibly can.
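Here is a sketch of what taming things with JSON Schema can look like, assuming the Python jsonschema package and a hypothetical contact message definition, validating messages against a shared schema before they flow any further.

```python
# A sketch of validating messages against a shared JSON Schema, assuming the
# jsonschema package and a hypothetical "contact" message definition.
from jsonschema import validate, ValidationError

contact_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
    },
    "required": ["name", "email"],
}

message = {"name": "Ada", "email": "ada@example.com"}

try:
    validate(instance=message, schema=contact_schema)
    print("message matches the shared schema")
except ValidationError as err:
    print("message drifted from the schema:", err.message)
```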

Knowing (Potential) Clients
I am developing APIs for a wide variety of clients. Some are designed for web applications, others are mobile applications, and some are devices. They could be spreadsheets, widgets, documents, and machine learning models. The tables could be flipped, and the APIs exist on device, and the cloud becomes the client. Sometimes the clients are known, other times they are unknown, and I am looking to attract new types of clients I never envisioned. I am always working to understand what types of clients I am looking to serve with my APIs, but the most important aspect of this process is understanding when there will be unknown clients.

When I have a tightly controlled group of target clients, my world is much easier. When I do not know who will be developing against an API, and I am looking to encourage wider participation, this is when my toolbox comes into action. This is when I keep the bar as low as possible regarding the design of my APIs, the protocols I use, and the types of data formats and messages I use. When I do not know my API consumers and the clients they will be developing, I invest more in API design, and keep my default requests and responses as simple as possible. Then I also allow for the negotiation of the more complex, higher-speed, more controlled aspects of my APIs by consumers who are in the know, targeting more specific client scenarios.

Using The Right Tools For The Job
API is not REST. It is one tool in my toolbox. API deployment and integration is about having the right tool for the job. It is a waste of my time to demand that everyone understand one way of doing APIs, or my way of doing APIs. Sure, I wish people would study and learn about common API patterns, but in reality, on the ground in companies, organizations, institutions, and government agencies, this is not the state of things. Of course, I’ll spend time educating and training folks wherever I can, but my role is always more about delivering APIs, and integrating with existing APIs, and my API toolbox reflects this reality. I do not shame API providers for their lack of knowledge and available resources, I roll up my sleeves, put my API toolbox on the table, and get to work improving any situation that I can.

My API toolbox is crafted for the world we have, as well as the world I’d like to see. I rarely get what I want on the ground deploying and integrating with APIs. I don’t let this stop me. I just keep refining my awareness and knowledge by watching, studying, and learning from what others are doing. I often find that when someone is in the business of shutting down a particular approach, or being dogmatic about a single approach, it is usually because they aren’t on the ground working with average businesses, organizations, and government agencies–they enjoy a pretty isolated, privileged existence. My toolbox is almost always open, constantly evolving, and perpetually being refined based upon the reality I experience on the ground, learning from people doing the hard work to keep critical services up and running, not simply dreaming about what should be.