The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.

Delivering Large API Responses As Efficiently As Possible

I’m participating in a meeting today where one of the agenda items will be discussing the different ways in which the team can deal with increasingly large API responses. I don’t feel there is a perfect response to this question, and the answer should depend on a variety of considerations, but I wanted to think through some of the possibilities, and make sure the answers were on the tip of my tongue. It helps to exercise these things regularly in my storytelling so when I need to recall them, they are just beneath the surface, ready to bring forward.

Reduce Size Using Pagination
I'd say the most common approach to sending over large amounts of data is to break things down into smaller chunks, based upon the rows being sent over, and paginate the responses. Send over a default amount (ie. 10, 25, 100), and require consumers to ask for larger amounts, as well as request each additional page of results in a separate API request. Provide consumers with a running count of how many pages there are, what the current page is, and the ability to paginate forward and backward through the pages with each request. It is common to send pagination parameters through the query parameters, but some providers prefer to handle it through headers (ie. Github).
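To make this a little more tangible, here is a quick Python sketch of what the server side of pagination can look like. The default page size and the response field names are my own illustrative choices, not any particular provider's convention:

```python
import math

def paginate(items, page=1, per_page=10):
    """Return one page of results plus the pagination metadata a
    consumer needs to navigate forward and backward through pages."""
    total = len(items)
    total_pages = max(1, math.ceil(total / per_page))
    start = (page - 1) * per_page
    return {
        "results": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": total,
        "total_pages": total_pages,
        "next_page": page + 1 if page < total_pages else None,
        "prev_page": page - 1 if page > 1 else None,
    }

# First page of a 25-record collection, 10 records at a time
page_one = paginate(list(range(25)), page=1, per_page=10)
```

The same values could just as easily be returned as headers instead of body fields, which is the Github-style variation mentioned above.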

Organizing Using Hypermedia
Another approach, which usually augments and extends pagination, is using common hypermedia formats for messaging. To paginate results, many API providers use hypermedia media types as a message format, because the media types allow for easy linking to paginated results, as well as providing the relevant parameters in the body of the response. Additionally, hypermedia will further allow you to intelligently break down large responses into different collections, beyond just simple pagination. Then use the linking that is native to hypermedia to provide meaningful links with relations to the different collections of potential responses, allowing API consumers to obtain all, or just the portions of information they are looking for.
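As a sketch of what this looks like, here is a small Python function that builds the navigation links for a collection, using a HAL-style `_links` structure (one of several hypermedia media types you could choose here), with the `/images` path being purely hypothetical:

```python
def link_collection(base_url, page, total_pages):
    """Build the hypermedia links that let a consumer navigate a
    paginated collection without computing any URLs themselves."""
    links = {
        "self": {"href": f"{base_url}?page={page}"},
        "first": {"href": f"{base_url}?page=1"},
        "last": {"href": f"{base_url}?page={total_pages}"},
    }
    # Only advertise next/prev relations when they actually exist
    if page < total_pages:
        links["next"] = {"href": f"{base_url}?page={page + 1}"}
    if page > 1:
        links["prev"] = {"href": f"{base_url}?page={page - 1}"}
    return {"_links": links}
```

The same linking mechanism scales beyond pagination, pointing consumers at entirely separate sub-collections of a large response.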

Exactly What They Need With Schema Filtering
Another way to break things down, while putting the control into the API consumers' hands, is to allow for schema filtering. Provide parameters and other ways for API consumers to dictate which fields they would like returned with each API response, reducing or expanding the schema that gets returned based upon which fields are selected by the consumer. There are simpler examples of doing this with a fields parameter, all the way to more holistic approaches using query languages like GraphQL that let you provide a schema of the response you want returned via the URL or the parameters of each API request, providing a very consumer-centric approach to ensuring only what is needed is transmitted via each API call.
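The simpler fields-parameter end of that spectrum can be sketched in a few lines of Python. The record shape here is made up for illustration:

```python
def filter_fields(record, fields_param=None):
    """Trim a response record down to just the fields the consumer
    asked for via a ?fields= query parameter."""
    if not fields_param:
        return record  # no filter requested, return the full schema
    wanted = {f.strip() for f in fields_param.split(",")}
    return {k: v for k, v in record.items() if k in wanted}

record = {"id": 1, "name": "api-evangelist", "tags": ["apis"], "created": "2018-07-02"}
slim = filter_fields(record, "id,name")
```

A query language like GraphQL takes the same idea much further, letting the consumer shape nested structures, not just pick top-level fields.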

Defining Specific Responses Using The Prefer Header
Another lesser known approach is to use the Prefer header to allow API consumers to request which representation of a resource they would prefer, based upon some pre-determined definitions. Each API request would pass in a value for the Prefer header, referencing definitions like simple, complete, or other variations of response defined by the API provider. This keeps API responses based upon known schema and scopes defined by the API provider, but still allows API consumers to select the type of response they would like to receive, balancing the scope of API responses between the needs of both API providers and API consumers.
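The Prefer header itself is a standard (RFC 7240), but the representation names are whatever the provider defines. Here is a sketch of the provider side, with the `simple` and `complete` definitions and their field lists being my own illustrative examples:

```python
# Pre-defined representations the provider is willing to return
REPRESENTATIONS = {
    "simple": ["id", "name"],
    "complete": ["id", "name", "tags", "created", "updated"],
}

def select_representation(headers):
    """Pick a response schema based on a Prefer: return=<name> header,
    falling back to the complete representation when none matches."""
    prefer = headers.get("Prefer", "")
    for token in prefer.split(";"):
        key, _, value = token.strip().partition("=")
        if key == "return" and value in REPRESENTATIONS:
            return REPRESENTATIONS[value]
    return REPRESENTATIONS["complete"]
```

A consumer would simply send `Prefer: return=simple` to receive the lighter weight schema.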

Using Caching To Make Responses More Efficient
Beyond breaking up API calls into different types of responses, we can start focusing on the performance of the API, and making sure HTTP caching is used. Keeping API responses as standardized as possible while leveraging CDN, web server, and HTTP to ensure that each response is being cached as much as it makes sense. Ensuring that an API provider is thinking about, measuring, and responding to how frequently or infrequently data and content is changing. Helping make each API response as fast as it possibly can be by leveraging the web as a transport.
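At the HTTP level this mostly comes down to freshness and validation headers. Here is a minimal Python sketch of an ETag plus Cache-Control flow, with the max-age value and short hash being arbitrary illustrative choices:

```python
import hashlib

def cache_headers(body, max_age=300):
    """Compute the validation and freshness headers for a response body."""
    etag = '"%s"' % hashlib.sha256(body.encode()).hexdigest()[:16]
    return {"ETag": etag, "Cache-Control": f"public, max-age={max_age}"}

def respond(body, request_headers):
    """Return a 304 with no body when the consumer's cached copy is current,
    sparing the wire the full response."""
    headers = cache_headers(body)
    if request_headers.get("If-None-Match") == headers["ETag"]:
        return 304, headers, ""
    return 200, headers, body

status, headers, _ = respond('{"results": []}', {})
revalidated = respond('{"results": []}', {"If-None-Match": headers["ETag"]})
```

CDN and web server layers build on exactly these same headers, so getting them right benefits the whole chain.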

More Efficiency Through Compression
Beyond caching, HTTP compression can be used to further reduce the size of API responses. DEFLATE and GZIP are the two most common approaches to compression, helping make sure API responses are as efficient as they possibly can be. Compressing each API call makes sure the least amount of bytes possible is transmitted across the wire.
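Because JSON responses tend to be highly repetitive, the savings are usually dramatic. A quick Python demonstration using the standard library's gzip module, on a made-up payload:

```python
import gzip
import json

# A repetitive JSON payload, the kind of API response that compresses well
payload = json.dumps([{"id": i, "status": "active"} for i in range(500)]).encode()
compressed = gzip.compress(payload)

# The provider sends Content-Encoding: gzip alongside the compressed body,
# in response to the consumer's Accept-Encoding: gzip request header
ratio = len(compressed) / len(payload)
```

In HTTP terms the negotiation is just the consumer sending `Accept-Encoding: gzip` and the provider answering with `Content-Encoding: gzip`.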

Breaking Things Down With Chunked Responses
Moving beyond breaking things down, and efficiency approaches using the web, we can start looking at different HTTP standards for making the transport of API responses more efficient. Chunked transfers are one way to deliver an API response not as a single response, but broken down into an appropriate number of chunks, sent in order. API consumers can make a request and receive large volumes of data in separate chunks that are reassembled on the client side. Leveraging HTTP to make sure large amounts of data are able to be received in smaller, more efficient API calls.
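The mechanics are simple to sketch: the server emits the body as an ordered series of pieces, and the client joins them back together. The 1 KB chunk size here is an arbitrary example:

```python
def chunk_response(body, chunk_size=1024):
    """Yield a large response body as an ordered series of chunks, the
    way a chunked transfer encoding sends it over the wire."""
    for start in range(0, len(body), chunk_size):
        yield body[start:start + chunk_size]

body = b"x" * 4500
chunks = list(chunk_response(body))
reassembled = b"".join(chunks)  # the client puts the pieces back together
```

With `Transfer-Encoding: chunked`, HTTP clients and servers handle this framing for you, so neither side has to know the total size up front.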

Switch To Providing More Streaming Responses
Beyond chunked responses, streaming using HTTP with standards like Server-Sent Events (SSE) can be used to deliver large volumes of data as streams. Volumes of data and content can be streamed in logical series, with API consumers receiving individual updates in a continuous, long-running HTTP request. Consumers only receive the data that has been requested, as well as any incremental updates, changes, or other events that have been subscribed to as part of the establishment of the stream, providing a much more real time approach to making sure large amounts of data can be sent as efficiently as possible to API consumers.
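The SSE wire format itself is refreshingly plain text: a few named fields per message, terminated by a blank line. Here is a Python sketch of formatting one event, with the event name and payload invented for illustration:

```python
import json

def sse_event(data, event=None, event_id=None):
    """Format one Server-Sent Events message: id, event, and data
    fields each on their own line, terminated by a blank line."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

update = sse_event({"status": "shipped"}, event="update", event_id=42)
```

A provider streams a sequence of these down a long-lived response with `Content-Type: text/event-stream`, and browsers can consume them natively with EventSource.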

Moving Forward With HTTP/2
Everything we’ve discussed until now leverages the HTTP 1.1 standard, but there are also increased efficiencies available with the HTTP/2 release of the standard. HTTP/2 has delivered a number of improvements on caching and streaming, and handles the threading of multiple messages within a single API connection, allowing for single, or bi-directional API requests and responses to exist simultaneously. When combined with existing approaches to pagination, hypermedia, and query languages, or using serialization formats like Protocol Buffers, further efficiency gains can be realized, while staying within the HTTP realm, but moving forward to use the latest version of the standard.

This is just a summary look at the different ways to help deliver large amounts of data using APIs. Depending on the conversations I have today I may dive deeper into all of these approaches and provide more examples of how this can be done. I might do some round ups of other stories and tutorials I’ve curated on these subjects, aggregating the advice of other API leaders. I just wanted to spend a few moments thinking through possibilities so I can facilitate some intelligent conversations around the different approaches out there. While also sharing with my readers, helping them understand what is possible when it comes to making large amounts of data available via APIs.

Machine Readable API Regions For Use At Discovery And Runtime

I wrote about Werner Vogels of Amazon’s post considering the impact of cloud regions a couple weeks back. I feel that his post captured an aspect of doing business in the cloud that isn’t discussed enough, and one that will continue to drive not just the business of APIs, but also increasingly the politics of APIs. Amidst increasing digital nationalism, and growing regulation of not just the pipes, but also platforms, understanding where your APIs are operating, and what networks you are using will become very important to doing business at a global scale.

It is an area I’m adding to my list of machine readable API definitions I’d like to see added to the APIs.json stack. The goal with APIs.json is to provide a single index where we can link to all the essential building blocks of an API’s operations, with OpenAPI being the first URI, providing a machine readable definition of the surface area of the API. Shortly after establishing the APIs.json specification, we also created API Commons, which is designed to be a machine readable specification for describing the licensing applied to an API, in response to the Oracle v Google API copyright case. Beyond that, there haven’t been many other machine readable resources, beyond some existing API driven solutions used as part of API operations like Github and Twitter. There are other API definitions like Postman Collections and API Blueprint that I reference, but they are in the same silo that OpenAPI operates within.

Most of the resources we link to are still human-centered URLs like documentation, pricing, terms of service, support, and other essential building blocks of API operations. However, the goal is to evolve as many of these as possible towards being more machine readable. I’d like to see pricing, terms of service, and aspects of support become machine readable, allowing them to become more automated and understood not just at discovery, but also at runtime. I’m envisioning that regions should be added to this list of currently human readable building blocks that should eventually become machine readable. I feel like we are going to need to make runtime decisions regarding API regions, and we will need major cloud providers like Amazon, Azure, and Google to describe their regions in a machine readable way–something that API providers can reflect in their own API definitions, depending on which regions they operate in.

At the application and client level, we are going to need to be able to quantify, articulate, and switch between different regions depending on the user, type of resources being consumed, and business being conducted. While this can continue being manual for a while, at some point we are going to need it to become machine readable so it can become part of the API discovery, as well as integration layers. I do not know what this machine readable schema will look like, but I’m sure it will be defined based upon what AWS, Azure, and Google are already up to. However, it will quickly need to become a standard that is owned by some governing body, and overseen by the community and not just vendors. I just wanted to plant the seed, and it is something I’m hoping will grow over time, but I’m sure it will take 5-10 years before something takes root, based upon my experience with OpenAPI, APIs.json, and API Commons.

REST and gRPC Side by Side In New Google Endpoints Documentation

Google has really been moving forward with their development, and storytelling around gRPC, their high speed approach to doing APIs that uses HTTP/2 as a transport, and protocol buffers (ProtoBuf) as its serialized message format. Even with all this forward motion they aren’t leaving everyone doing basic web APIs behind, and are actively supporting both approaches across all new Google APIs, as well as in their services and tooling for deploying APIs in the Google Cloud–supporting two-speed APIs side by side, across their platform.

When you are using Google Cloud Endpoints to deploy and manage your APIs, you can choose to offer a more RESTful edition, as well as a more advanced gRPC edition. They’ve continued to support this approach across their service features and tooling, now extending it to documenting your APIs. As part of their rollout of a supporting API portal and documentation for your Google Cloud Endpoints, you can automatically document both flavors of your APIs. Making a strong case for considering offering both types of APIs, depending on the types of use cases you are looking to solve, and the types of developers you are catering to.

In my experience, simpler web APIs are ideal for people just getting going on their API journey, and will accommodate 60-75% of the API deployment needs out there. Whereas some organizations further along in their API journey, and those providing B2B solutions, will potentially need higher performance, higher volume, gRPC APIs. Making what Google is offering with their cloud API infrastructure a pretty compelling option for helping mature API providers shift gears, or even helping folks understand that they’ll be able to shift gears down the road. You get an API deployment and management solution that simultaneously supports both speeds, with the other supporting features, services, and tooling like documentation delivered at both speeds as well.

Normally I am pretty skeptical of single provider / community approaches to delivering alternative approaches to APIs. It is one of the reasons I still hold reservations around GraphQL. However, with Google and gRPC, they have put HTTP/2 to work, and the messaging format is open source. While the approach is definitely all Google, they have embraced the web, which I don’t see out of the Facebook-led GraphQL community. I still question Google’s motives, not because they are up to anything shady, but I’m just skeptical of EVERY company’s motivations when it comes to APIs. After eight years of doing this I don’t trust anyone to not be completely self-serving. However, I’ve been tuned into gRPC for some time now and I haven’t seen any signs that make me nervous, and they keep delivering beneficial features like they did with this documentation, keeping me writing stories and showcasing what they are doing.

OpenAPI Makes Me Feel Like I Have A Handle On What An API Does

APIs are hard to talk about across large groups of people, while ensuring everyone is on the same page. APIs don’t have much of a visual side to them, providing a tangible reference for everyone to use by default. This is where OpenAPI comes in, helping us “see” an API, and establish a human and machine readable document that we can produce, pass around, and use as a reference to what an API does. OpenAPI makes me feel like I have a handle on what an API does, in a way that I can actually have a conversation around with other people–without it, things are much fuzzier.

Many folks associate OpenAPI with documentation, code generation, or some other tooling or service that uses the specification–putting their emphasis on the tangible thing, over the definition. While working on projects, I spend a lot of time educating folks about what OpenAPI is, what it is not, and how it can facilitate communication across teams and API stakeholders. While this work can be time consuming, and a little frustrating sometimes, it is worth it. A little education, and OpenAPI adoption can go a long way towards moving projects along, because (almost) everyone involved is able to actively participate in moving API operations forward.

Without OpenAPI it is hard to consistently design API paths, as well as articulate the headers, parameters, status codes, and responses being applied across many APIs and teams. If I ask, “are we using the sort parameter across APIs?”, and there is no OpenAPI, I can’t get an immediate or timely answer–it is something that might not be reliably answered at all. That makes OpenAPI a pretty critical conversation and collaboration driver across the API projects I’m working on. I am not even getting to the part where we are deploying, managing, documenting, or testing APIs. I’m just talking about APIs in general, and making sure everyone involved in a meeting is on the same page when we are talking about one or many APIs.

Almost every API I’m working on starts as a basic OpenAPI, even with just a title and description, published to a Github, Gitlab, Bitbucket, or other repository. Then I usually add schema definitions, which drive conversations about how the schema will be accessed, as we add paths, parameters, and other details of the requests and responses for each API. OpenAPI acts as the guide throughout the process, ensuring we are all on the same page, and that all stakeholders, as well as myself, have a handle on what is going on with each API being developed. Without OpenAPI, we can never quite be sure we are all talking about the same thing, let alone have a machine readable definition that we can all take back to our corners to get work done.
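To show how little it takes to get that seed planted, here is what one of these starter definitions might look like in OpenAPI 3.0 YAML. The API name, path, and sort parameter are entirely hypothetical, just enough to anchor a conversation:

```yaml
openapi: 3.0.0
info:
  title: Example Images API
  description: A bare-bones definition that exists to anchor conversations.
  version: 0.1.0
paths:
  /images:
    get:
      summary: Retrieve a list of images.
      parameters:
        - name: sort
          in: query
          description: Field to sort the collection by.
          schema:
            type: string
      responses:
        '200':
          description: A paginated collection of images.
```

From here, schema definitions get added under a components section, and each new path, parameter, and response becomes something the whole team can point at.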

My Response To The VA Microconsulting Work Statement On API Outreach

The Department of Veterans Affairs (VA) is listening to my advice around how to execute their API strategy, adopting a micro-approach to not just delivering services, but also to the business of moving the platform forward at the federal agency. I’ve responded to round one and round two of the RFIs, and now they have published a handful of work statements on Github, so I wanted to provide an official response, share my thoughts on each of the work statements, and actually bid for the work.

First, Here is The VA Background
The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem.

Next, What The VA Considers The Play
As the Lighthouse Product Owner, I must have a repeatable process to communicate with internal and external stakeholders the availability of existing, new, and future APIs so that the information can be consumed for the benefit of the Veteran. This outreach/evangelism effort may be different depending on the API type.

Then, What The VA Considers The Deliverable
A defined and repeatable strategy/process to evangelize existing, new, and future APIs to VA’s stakeholders as products. This may be in the form of charts, graphics, narrative, or combination thereof. Ultimately, VA wants the best format to accurately communicate the process/strategy. This strategy/process may be unique to each type of API.

Here Is My Official Response To The Statement
API evangelism is always more about the people than it is about the technology, and should always be something that speaks to not just developers, being inclusive of all stakeholders involved in, and being served by, a platform. Evangelism is all about striking the right balance around storytelling about what is happening across a platform, providing a perfect blend of sales and marketing that expands the reach of the platform, while also properly showcasing the technical value APIs bring to the table.

For the scope of this micro engagement around the VA API platform, I recommend focusing on delivering a handful of initiatives involved with getting the word out about the API platform, while also encouraging feedback, all in an easily measurable way:

  • Blogging - No better way to get the word out than an active, informative, and relevant blog with an RSS feed that can be subscribed to.
  • Twitter - Augmenting the platform blog with a Twitter account that can help amplify platform storytelling in a way that can directly engage with users.
  • Github - Make sure the platform is publishing code, content, and other resources to Github repositories for the platform, and engaging with the Github community around these repos.
  • Newsletter - Publishing a weekly newsletter that includes blog posts, other relevant links to things happening on a platform, as well as in the wider community.
  • Feedback Loop - Aggregating, responding to, supporting, and organizing email, Twitter, Github, and other feedback from the platform, reporting back to stakeholders, as well as incorporating it into regular storytelling.

For the scope of this project, there really isn’t room to do much else. Daily storytelling, Twitter, and Github engagement, as well as a weekly newsletter would absorb the scope of the project for a 30-60 day period. There would be no completion date for this type of work, as it is something that should go on in perpetuity. However, the number of blog posts, tweets, repos, followers, likes, subscribers, and other metrics could be tracked and reported upon weekly, providing a clear definition of what work has been accomplished, and the value of the overall effort over any timeframe.

Evangelism isn’t rocket science. It just takes someone who cares about the platform’s mission, and the developers and end-users being served by the platform. With a little passion, and technical curiosity, a platform can become alive with activity. A little search engine and social media activity can go a long way towards bringing in new users, keeping existing users engaged, and encouraging increased levels of activity, internally and externally, around platform operations. With simple KPIs in place, and a simple reporting process in operation, the evangelism around a platform can be executed, quantified, and scaled across several individuals, as well as rolling teams of contractors.

Some Closing Thoughts On This Project
That concludes my response to the work statement. Evangelism is something I know. I’ve been studying and doing it for 8 years straight. Simple, consistent, informative evangelism is why I’m in a position to respond to this project out of the VA. It is how many of these ideas have been planted at the VA, through storytelling I’ve been investing in since I worked at the VA in 2013. A single page response doesn’t allow much space to cover all the details, but an active blog, Twitter, Github, and newsletter with a structured feedback loop in place can provide the proper scaffolding needed to not just execute a single cycle of evangelism, but guide many, hopefully rolling contracts to deliver evangelism for the platform in an ongoing fashion. Hopefully bringing fresh ideas, individuals, storytelling and outreach to the platform, which can significantly amplify awareness around the APIs being exposed by the agency, helping the platform better deliver on the mission to serve veterans.

My Response To The VA Microconsulting Work Statement On API Landscape Analysis and Near-term Roadmap

The Department of Veterans Affairs (VA) is listening to my advice around how to execute their API strategy, adopting a micro-approach to not just delivering services, but also to the business of moving the platform forward at the federal agency. I’ve responded to round one and round two of the RFIs, and now they have published a handful of work statements on Github, so I wanted to provide an official response, share my thoughts on each of the work statements, and actually bid for the work.

First, Here is The VA Background
The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem.

Next, What The VA Considers The Play
As the VA Lighthouse Product Owner I have identified two repositories of APIs and publicly available datasets containing information with varying degrees of usefulness to the public. If there are other publicly facing datasets or APIs that would be useful they can be included. I need an evaluation performed to identify and roadmap the five best candidates for public facing APIs.

The two known sources of data are:

  • - and
  • -

Then, What The VA Considers The Deliverable
Rubric/logical analysis for determining prioritization of APIs. Detail surrounding how the top five were prioritized based on the rubric/analysis (not to exceed 2 pages unless approved by the Government). Deliverables shall be submitted to VA’s GitHub Repo.

Here Is My Official Response To The Statement
To describe my approach to this work statement, and describe what I’ve already been doing in this area for the last five years, I’m going to go with a more logical analysis, as opposed to a rubric. There is already too much structured data in all of these efforts, and I’m a big fan of making the process more human. I’ve been working to identify the most valuable data, content, and other resources across the VA, as well as other federal agencies, since I worked in DC back in 2013, something that has inspired my Knight Foundation funded work with Adopta.Agency.

Always Start With Low Hanging Fruit
When it comes to where you should start with understanding what data and content should be turned into APIs, you always begin with the public website for a government agency. If the data and content is already published to the website, it means that there has been demand for accessing the information, and it has already been through a rigorous process to make publicly available. My well developed low hanging fruit process involves spidering all VA web properties, identifying any page containing links to XML, CSV, and other machine readable formats, as well as any page with a table containing over 25 rows, and any forms providing search, filtering, and access to data. The low hanging fruit portion of this will help identify potentially valuable sources of data beyond what is already available at,, and on Github–expanding the catalog of APIs that we should be targeting for publishing.
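The link-harvesting part of that spidering process is straightforward to sketch with Python's standard library HTML parser. This is just the format-detection piece, not the full spider, and the file formats and sample page are illustrative:

```python
from html.parser import HTMLParser

class DataLinkFinder(HTMLParser):
    """Collect links to machine readable formats while spidering a page."""
    FORMATS = (".csv", ".xml", ".json", ".xls", ".xlsx")

    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().endswith(self.FORMATS):
                self.found.append(href)

finder = DataLinkFinder()
finder.feed('<a href="/data/facilities.csv">Facilities</a> <a href="/about">About</a>')
```

A fuller version would also count table rows and flag search forms, per the process described above, then feed each hit into the prioritization work.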

Tap Existing Government Metrics
To help understand what data is available at,, on Github, as well as across all VA web properties, you need to reach out to the existing players who are working to track activity across these properties. is a great place to start to understand what people are looking at and searching for across the existing VA web properties. While talking with the folks at the GSA, I would also ask about any available statistics for relevant data on, understanding what is being downloaded and accessed when it comes to VA data, but also any other related data from other agencies. Then I’d stop and look at the metrics available on Github for any data stored within repositories, taking advantage of the metrics built into the social coding platform. Tapping into as many of the existing metrics being tracked across the VA, as well as other federal agencies.

Talk To Veterans And Their Supporters
Next, I’d spend time on Twitter, Facebook, Github, and Linkedin talking to veterans, veteran service organizations (VSO), and people who advocate for veterans. This work would go a long way towards driving other platform outreach and evangelism described in the other statement of work I responded to. I’d spend a significant amount of time asking veterans and their supporters what data, content, and other resources they use most, and what would be most valuable if it were made more available in web, mobile, and other applications. Social media would provide a great start to help understand what information should be made available as APIs, then you could also spend time reviewing the web and mobile properties of organizations that support veterans, taking note of what statistics, data, and other information they are publishing to help them achieve their mission, and better serve veterans.

Start Doing The Work
Lastly, I’d start doing the work. Based upon what I’ve learned from my Adopta.Agency work, for each dataset I identified I’d publish my profiling to an individual Github repository, with a README file describing the data, where it is located, and a quick snapshot of what is needed to turn the dataset into an API. No matter where the data ends up in the list of priorities, there would be a seed planted, which might attract its own audience, stewards, and interested parties who might want to move the conversation forward–internally, or externally to VA operations. EVERY public dataset at the VA should have its own Github repository seed, and then eventually its corresponding API to help make sure the latest version of the data is available for integration in applications. There is no reason the work to deploy APIs cannot be started as part of identifying and prioritizing data across the VA. Let’s plant the seeds to ensure all data at the VA eventually becomes an API, no matter what its priority level is this year.

Some Closing Thoughts On This Project
It is tough to understand where to begin with an API effort at any large institution. Trust me, I tried to do this at the VA. I was the person who originally set up the VA Github organization, and began taking inventory of the data now available on,, and on Github. To get the job done you have to do a lot of legwork to identify where all the valuable data is. Then you have to do the hard work of talking to internal data stewards, as well as individuals and organizations across the public landscape who are serving veterans. This is a learning process unto itself, and will go a long way towards helping you understand which datasets truly matter, and should be a priority for API deployment, helping the VA better achieve its mission in serving veterans.

If Github Pages Could Be Turned On By Default I Could Provide A Run On Github Button

My world runs on Github. 100% of my public website projects run on Github Pages, and about 75% of my public web applications run on Github Pages. The remaining 25% of it all is my API infrastructure. However, I’m increasingly pushing my data and content APIs to run entirely on Github, with Github Pages as the frontend, the Github repo as the backend, and the Github API as the transport. I’d rather be serving up static JSON and YAML from repositories, and building JavaScript web applications that run using Jekyll, than dynamic server-side web applications.

It is pretty straightforward to engineer HTML, CSS, and JavaScript applications that run entirely on Github Pages, and leverage JSON and YAML stored in the underlying Github repo as the database backend, using the Github API as the API backend. Your Github OAuth token becomes your API key, and the Github organization and user structure becomes the authentication and access management layer. These apps can run publicly, and when someone wants to write data to the application, as well as read it, they just need to authenticate using Github–if they have permission, then they get access, and can write JSON or YAML data to the underlying repo via the API.
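Here is a sketch of the write path, assembling a call to the Github contents API (PUT /repos/{owner}/{repo}/contents/{path}) in Python. The owner, repo, file path, and token are all hypothetical placeholders, and actually sending the request is left to the caller; note that updating an existing file also requires passing the file's current blob sha, which I've left out to keep the sketch short:

```python
import base64
import json

def build_write_request(owner, repo, path, data, token, message="update data"):
    """Assemble the Github contents-API call that writes a JSON file
    to a repo, which is the 'repo as database' write path. Returns the
    URL, headers, and request body rather than sending anything."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    request_body = {
        "message": message,
        # The contents API expects the file body base64 encoded
        "content": base64.b64encode(json.dumps(data).encode()).decode(),
    }
    headers = {"Authorization": f"token {token}"}
    return url, headers, request_body

url, headers, request_body = build_write_request(
    "example-user", "micro-app", "_data/records.json", [{"id": 1}], "OAUTH_TOKEN")
```

The Jekyll frontend then reads the same `_data` file back, closing the loop between the static site and its repo-backed data.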

I’m a big fan of building portable micro applications using this approach. The only roadblock to making them easier for other users to use is the ability to fork them into their own Github account with a single click. I have a Github OAuth token for any authenticated user, and with the right scopes I can easily fork a self-contained web application into their account or organization, giving them full control over their own version of the application. The problem is that each user has to enable Github Pages for the repository before they can view the application. It is a simple, but pretty significant hurdle for many users, and is something that prevents me from developing more applications that work like this.

If Github would allow for the enabling of Github Pages by default, or via the API, I could create a Run in Github button that would allow my micro apps to be forked and run within each user’s own Github account. Maybe I’m missing something, but I haven’t been able to make Github Pages enabled by default for any repo, or via the API. It seems like a simple feature to add to the road map, which is something that would have profound implications on how we deploy applications, turning Github into a JavaScript serverless environment, where applications can be forked and operated at scale. It is an approach I will keep using, but I am afraid it won’t catch on properly until users are able to deploy an application on Github with a single click. All I need is for Github to allow me to create a new repository using the Github API, with Github Pages turned on by default when it is created–then my Run in Github button will become a reality.

My GraphQL Thoughts After Almost Two Years

It has been almost a year and a half since I first wrote my post questioning whether GraphQL folks wanted to do the hard work of API design, in which I also clarified that I was keeping my mind open regarding the approach to delivering APIs. I’ve covered several GraphQL implementations since then, and published my post on waiting out the GraphQL assault, to which I received a stupid amount of, “dude you just don’t get it!”, and “why you gotta be so mean?” responses.

GraphQL Is A Tool In My Toolbox
I’ll start with my official stance on GraphQL. It is a tool in my API toolbox. When evaluating projects, and making recommendations regarding what technology to use, it exists alongside REST, Hypermedia, gRPC, Server-Sent Events (SSE), Websockets, Kafka, and other tools. I’m actively discussing the options with my clients, helping them understand the pros and cons of each approach, and working to help them define their client use cases, and when it makes sense I’m recommending a query language layer like GraphQL. When there are a large number of data points, and a knowledgeable, known group of developers being targeted for building web, mobile, and other client applications, it can make sense. I’m finding GraphQL to be a great option to augment a full stack of web APIs, and in many cases a streaming API, and other event-driven architectural considerations.

GraphQL Does Not Replace REST
Despite what many of the pundits and startups would like to convince everyone of, GraphQL will not replace REST. Sorry, it doesn’t reach as wide an audience as REST does, and it still keeps APIs in the database and data-literate club. It makes some very complex things simpler, but it also makes some very simple things more complex. I’ve reviewed almost five brand new GraphQL APIs lately where the on-boarding time was in the 1-2 hour range, rather than minutes, as with many more RESTful APIs. GraphQL augments RESTful approaches, and in some cases it can replace them, but it will not entirely do away with the simplicity of REST. Sorry, I know you don’t want to understand the benefits REST brings to the table, and desperately want there to be a one size fits all solution, but that just isn’t the reality on the ground. Insisting that GraphQL will replace REST does everyone more harm than good, hurts the overall API community, and will impede GraphQL adoption–regardless of what you want to believe.
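The trade-off is easy to see side by side. Here is a sketch of fetching just a user's name and email two ways, with entirely illustrative endpoints and field names (not any real API): the GraphQL request asks for exactly the fields it wants in a query body, while the REST request leans on a simple GET, here with a sparse fieldset parameter.

```python
import json

# GraphQL: one endpoint, a query document in the POST body.
graphql_request = {
    "url": "https://example.com/graphql",
    "method": "POST",
    "body": json.dumps({"query": "{ user(id: 123) { name email } }"}),
}

# REST: a resource URL anyone can paste into a browser.
rest_request = {
    "url": "https://example.com/users/123?fields=name,email",
    "method": "GET",
    "body": None,
}

print(graphql_request["body"])
print(rest_request["url"])
```

The query language buys precision for data-literate consumers; the plain GET buys reach with everyone else. That is the toolbox decision in miniature.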

GraphQL Should Embrace The Web
One of my biggest concerns around the future of GraphQL is its lack of focus on the web. If you want to see the adoption you envision, it has to be bigger than Facebook, Apollo, and the handful of cornerstone implementations like Github, Pinterest, and others. If you want to convert others into believers, then propose the query language to become part of the web, and improve web caching so that it rises to the level of the promises being made about GraphQL. I know that underneath the Facebook umbrella of GraphQL and React it seems like everything revolves around you, but there really is a bigger world out there. There are more mountains to conquer, and Facebook won’t be around forever. Many of the folks I’m trying to educate about GraphQL can’t help but see the shadow of Facebook, and its lack of respect for the web. GraphQL may move fast, and break some things, but it won’t have the longevity all y’all believe so passionately in, if you don’t change your tune. I’ve been doing this 30 years now, and have seen a lot of technology come and go. My recommendation is to embrace the web, bake GraphQL into what is already happening, and we’ll all be better off for it.

GraphQL Does Not Do What OpenAPI Does
I hear a lot of pushback on my OpenAPI work, telling me that GraphQL does everything that OpenAPI does, and more! No, no it doesn’t. By saying that you are declaring that you don’t have a clue what OpenAPI does, that you are just pushing a trend, and that you have very little interest in the viability of the APIs I’m building, or my client’s needs. There is an overlap in what GraphQL delivers and what OpenAPI does, but they both have their advantages, and honestly OpenAPI has a lot more tooling, wider adoption, and enjoys a diverse community of support. I know that in your bubble GraphQL and React dominate the landscape, but on my landscape it is just a blip, and there are other tools in the toolbox to consider. Along with the web, the GraphQL community should be embracing the OpenAPI community, understanding the overlap, and reaching out to the OAI to help define the GraphQL layer for the specification. Both communities would benefit from this work, rather than one trying to replace something it doesn’t actually understand.

GraphQL Dogma Continues To Get In The Way
I am writing this post because I had another GraphQL believer mess up its chances of making it into a roadmap, because of their overzealous beliefs, allowing dogma to drive their contribution to the conversation. This is the 3rd time I’ve had this happen on high profile projects, and while the GraphQL conversation hasn’t been ruled out, it has been slowed, pushed back, and taken off the table, due to pushing the topic in unnecessary ways. The conversation unfortunately wasn’t about why GraphQL was a good option; the conversation was dominated by GraphQL being THE SOLUTION, and how it was going to fix every web service and API that came before it. This, combined with inadequately addressed questions regarding IP concerns, and GraphQL being a “Facebook solution”, set back the conversations. In each of these situations I stepped in to help regulate the conversations, and answer questions, but the damage was already done, and leadership was turned off to the concept of GraphQL because of overzealous, dogmatic beliefs in this relevant technology. Which brings me back to why I’m pushing back on GraphQL–not because I do not get the value it brings, but because of the rhetoric around how it is being sold and pushed.

No, I Do Get What GraphQL Does
This post will result in the usual number of Tweets, DMs, and emails telling me I just don’t get GraphQL. I do. I’ve installed and played with it, and hacked against several implementations. I get it. It makes sense. I see why folks feel like it is useful. The database guy in me gets why it is a viable solution. It has a place in the API toolbox. However, the rhetoric around it is preventing me from putting it to use in some relevant projects. You don’t need to convince me of its usefulness. I see it. I’m also not interested in convincing y’all of the merits of REST, Hypermedia, gRPC, or the other tools in the toolbox. I’m interested in applying the tools as part of my work, and the tone around GraphQL over the last year or more hasn’t been helping the cause. So, please don’t tell me I don’t get what GraphQL does, I do. I don’t think y’all get the big picture of the API space, and how to help ensure GraphQL is in it for the long haul, and achieving the big picture adoption y’all envision.

Let’s Tone Down The Rhetoric
I’m writing about this because the GraphQL rhetoric got in my way recently. I’m still waiting out the GraphQL assault. I’m not looking to get on the bandwagon, or argue with the GraphQL cult, I’m just looking to deliver on the projects I’m working on. I don’t need to be schooled on the benefits of GraphQL, and what the future will hold. I’m just looking to get business done, and support the needs of my clients. I know that the whole aggressive trends stuff works in Silicon Valley, and the startup community. But in government, and some of the financial, insurance, and healthcare spaces where I’ve been putting GraphQL on the table, the aggressive rhetoric isn’t working. It is working against the adoption of the query language. Honestly, it isn’t just the rhetoric; I don’t feel the community is doing enough to address the wider IP concerns, and the Facebook connection, to help make the sale. Anyways, I’m just looking to vent, and push back on the rhetoric around GraphQL replacing REST. After a year and a half I’m convinced of the utility of GraphQL, however the wider tooling, and messaging around it, still hasn’t won me over.

Sharing API Definition Patterns Across Teams By Default

This is a story in a series I’m doing as part of the version 3.0 release of the platform. They are one of the few API service providers I’m excited about when it comes to what they are delivering in the API space, so I jumped at the opportunity to do some paid work for them. As I do, I’m working to make these stories about the solutions they provide, and refrain from focusing on just their product, hopefully maintaining my independent stance as the API Evangelist, and keeping some of the credibility I’ve established.

One of the things I’m seeing emerge from the API providers who are further along in their API journey is a more shared experience, not just in the API design process, but in action throughout the API lifecycle. You can see this reality playing out with API definitions, and the move from Swagger 2.0 to OpenAPI 3.0, where there is a more modular approach to defining an API, encouraging reuse of common patterns across API paths within a single definition, as well as across many different API definitions. Teams are expanding on how API definitions can move an API through the lifecycle, beginning to apply that across projects and teams, and moving APIs through the lifecycle much more quickly and consistently.
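The modularity shows up in OpenAPI 3.0's components object. An illustrative fragment (the Address and Customer models are hypothetical) where one shared schema is defined once and referenced wherever it is needed, within this definition or, via remote `$ref`s, from other teams' definitions:

```yaml
openapi: 3.0.0
info:
  title: Example API
  version: 1.0.0
paths:
  /customers/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: {type: string}
      responses:
        '200':
          description: A single customer
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Customer'
components:
  schemas:
    Address:
      type: object
      properties:
        street: {type: string}
        city: {type: string}
    Customer:
      type: object
      properties:
        name: {type: string}
        address:
          $ref: '#/components/schemas/Address'
```

The Address model defined under components only has to be designed once, then reused by every model and path that needs it, which is exactly the shared pattern behavior this post is describing.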

Emulating the Models We Know
We hear a lot about healthy API design practices, following RESTful philosophies, and learning from the API design guides published by leading API providers. While all of these areas have a significant influence on what happens during the API design and development process, the most significant influence on what developers know is simply what they are exposed to in their own experience. The models we are exposed to often become the models we employ, demonstrating the importance of adopting a shared experience when it comes to defining, designing, and putting common API models to work for us in our API journeys.

A Schema Vision Reflects Maturity
The other side of this shared API definition reality is the reuse and sharing of schema. The paths, parameters, and models for the surface area of our APIs are essential, but the underlying schema we use as part of requests and responses will eliminate or introduce the most friction when it comes to API integration. Reinventing the wheel with schema creates friction throughout the API lifecycle, from design to documentation, and will make each stop significantly more costly, costs that will ultimately be passed on to consumers. Sharing, reusing, and being organized with the schema used across the development of APIs is something you see present within the practices of the API teams who are realizing more streamlined, mature API delivery pipelines.

Sharing Across Teams Is How It's Done
Using common models and schema is important, but none of it matters if you aren’t sharing them across teams. Leveraging Git, as well as its social, team, and collaborative features, to share and reuse common models and schema is what helps reduce friction, and introduces efficiency into both the deployment and integration of APIs. API design doesn’t occur in isolation anymore. Gone are the pizza in the basement days of developing APIs; the API lifecycle is a social, collaborative affair. Teams that have a mature, consistent, and reproducible API lifecycle are sharing definitions across teams, and actively employing the most mature patterns across the APIs they deliver. Without a team mentality throughout the API lifecycle, APIs will never truly become the interoperable, reusable, and consumable set of services we expect.

Without API Definitions There Is No Discovery
Ask many large organizations who are doing APIs without API definitions like OpenAPI, and who haven’t adopted a lifecycle approach, where their APIs are, and they will just give you a blank stare. APIs regularly get lost in the shuffle of busy organizations, and can often be hidden in the shadows of the mobile applications they serve. However, when you adopt an API definition by default, as well as an API definition-driven life cycle, discovery is baked in. You can search Github, reverse engineer schema from application logs, and follow the code pipeline to find APIs. API definitions leave a trail, and API documentation, testing, and other resources all point to where the APIs are. API discovery isn’t a problem in an API definition-driven world; it is baked in by default.

API Governance Depends on API Definitions
I get a regular stream of requests from organizations who have already embarked on their API journey, asking how they can more consistently deliver APIs, and institute a governance program to help them measure progress across teams. The first question I always ask is whether or not they use OpenAPI, and the second is what their API lifecycle and Git usage looks like. If they aren’t using OpenAPI, and haven’t adopted a Git workflow across their teams, API governance will rarely ever be realized. Without definitions you can’t measure. Without a Git workflow, you can’t find the artifacts you need to measure. API governance can’t exist in isolation, without relevant definitions in play.
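"Without definitions you can't measure" can be made concrete with a few lines. A sketch, under the assumption that teams commit JSON definitions named `openapi.json` or `swagger.json` into their Git checkouts; a real governance pass would also parse YAML and validate against the spec, but even this crude walk yields a number you simply cannot produce without definitions in the repos:

```python
import json
import os
import tempfile

def measure_definitions(root):
    """Walk a checkout and tally how many paths each OpenAPI (JSON)
    definition declares -- a first, crude governance measurement."""
    tally = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name in ("openapi.json", "swagger.json"):
                with open(os.path.join(dirpath, name)) as f:
                    doc = json.load(f)
                tally[os.path.join(dirpath, name)] = len(doc.get("paths", {}))
    return tally

# Simulate a repo checkout containing a single definition.
with tempfile.TemporaryDirectory() as repo:
    definition = {"openapi": "3.0.0", "paths": {"/users": {}, "/orders": {}}}
    with open(os.path.join(repo, "openapi.json"), "w") as f:
        json.dump(definition, f)
    results = measure_definitions(repo)
    print(sum(results.values()))  # total paths discovered across the checkout
```

Point the same walk at every team's repository and you have the beginnings of a consistent, reportable governance metric.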

OpenAPI has significantly contributed to the evolution and maturity of what we know as the API lifecycle. Without API definitions we can’t share common models and schema. We can’t work together and learn from each other. Without tooling and services in place that allow us to share, collaborate, mock, and communicate around the API design and development process, the API process will never mature. It will not be consistent. It will not move forward and reach a production line level of efficiency. To realize this we have to share definitions across teams by default–without that, we aren’t really doing APIs.

P.S. Thanks for the support!

The ClosedAPI Specification

You’ve heard of OpenAPI, right? It is the API specification for defining the surface area of your web API, and the schema you employ–making your public API more discoverable, and consumable in a variety of tools and services. OpenAPI is the API definition for documenting your API when you are just getting started with your platform, and you are looking to maximize the availability and access of your platform API(s). After you’ve acquired all the users, content, investment, and other value, ClosedAPI is the format you will want to switch to, abandoning OpenAPI for something a little more discreet.

Collect As Much Data As You Possibly Can

Early on you wanted to be defining the schema for your platform using OpenAPI, and even offering up a GraphQL layer, allowing your data model to rapidly scale, adding as many data points as you possibly can. You really want to just ingest any data you can get your hands on from the browser, mobile phones, and any other devices you come into contact with. You can just dump it all into a big data lake, and sort it out later. Adding to your platform schema when possible, and continuing to establish new data points that can be used in advertising and targeting of your platform users.

Turn The Firehose On To Drive Activity

Early on you wanted your APIs to be 100% open. You’ve provided a firehose to partners. You’ve made your garden hose free to EVERYONE. OpenAPI was all about providing scalable access to as many users as you can through streaming APIs, as well as the lower volume transactional APIs you offer. Don’t rate limit too heavily. Just keep the APIs operating at full capacity, generating data and value for the platform. ClosedAPI is for defining your API as you begin to turn off this firehose, and begin restricting access to your garden hose APIs. You’ve built up the capacity of the platform, and you really don’t need your digital sharecroppers anymore. They were necessary early on in your business, but they are no longer needed when it comes to squeezing as much revenue as you can from your platform.

The ClosedAPI Specification

We’ve kept the specification as simple as possible. Allowing you to still say you have API(s), but also helping make sure you do not disclose too much about what you actually have going on. Providing you the following fields to describe your APIs:

  • Name
  • Description
  • Email

That is it. You can still have hundreds of APIs. Issue press releases. Everyone will just have to email you to get access to your APIs. It is up to you to decide who actually gets access to your APIs, which emails you respond to, or whether the email account is ever even checked in the first place. The objective is just to appear as if you have APIs, and will entertain requests to access them.
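In keeping with the satire, a complete ClosedAPI validator fits in a few lines, since anything beyond the three fields above would disclose too much (all documents here are, of course, made up):

```python
# The entire ClosedAPI specification: these are the ONLY fields allowed.
ALLOWED_FIELDS = {"name", "description", "email"}

def validate_closedapi(doc):
    """Return a list of problems; an empty list means the doc is valid."""
    problems = [f"disallowed field: {k}" for k in doc if k not in ALLOWED_FIELDS]
    problems += [f"missing field: {k}" for k in ALLOWED_FIELDS if k not in doc]
    return problems

good = {"name": "Platform API", "description": "Email us.", "email": "api@example.com"}
bad = dict(good, servers=["https://api.example.com"])  # discloses too much!

print(validate_closedapi(good))  # []
print(validate_closedapi(bad))
```

Note that listing an actual server URL is a validation error, which is rather the point.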

Maintain Control Over Your Platform

You’ve worked hard to get your platform to where it is. Well, not really, but you’ve worked hard to ensure that others do the work for you. You’ve managed to convince a bunch of developers to work for free building out the applications and features of your platform. You’ve managed to get the users of those applications to populate your platform with a wealth of data, making your platform exponentially more valuable than you could have made it on your own. Now that you’ve achieved your vision, and people are increasingly using your APIs to extract value that belongs to you, you need to turn off the fire hose, the garden hose, and kill off applications that you do not directly control.

The ClosedAPI specification will allow you to still say that you have APIs, but no longer have to actually be responsible for your APIs being publicly available. Now all you have to do is worry about generating as much revenue as you possibly can from the data you have. You might lose some of your users because you do not have publicly available APIs anymore, as well as losing some of your applications, but that is ok. Most of your users are now trapped, locked-in, and dependent on your platform–continuing to generate data, content, and value for your platform. Stay in tune with the specification using the road map below.


  • Remove Description – The description field seems extraneous.

I Will Be Speaking At The API Conference In London This Week

I am hitting the road this week heading to London to speak at the API Conference. I will be giving a keynote on Thursday afternoon, and conducting an all day workshop on Friday. Both of my talks are a continuation of my API life cycle work, and pushing forward my use of a transit map to help me make sense of the API life cycle. My keynote will be covering the big picture of why I think the transit model works for making sense of complex infrastructure, and my workshop is going to get down in the weeds with it all.

My keynote is titled, “Looking At The API Life Cycle Through The Lens Of A Municipal Transit System”, with the following abstract, “As we move beyond a world of using just a handful of internal and external APIs, to a reality where we operate thousands of microservices, and depend on hundreds of 3rd party APIs, modern API infrastructure begins to look as complex as a municipal transit system. Realizing that API operations is anything but a linear life cycle, let’s begin to consider that all APIs are in transit, evolving from design to deprecation, while still also existing to move our value bits and bytes from one place to another. I would like to share with you a look at how API operations can be mapped using an API Transit map, and explored, managed, and understood through the lens of a modern, Internet enabled “transit system”.”

My workshop is titled, “Beyond The API Lifecycle And Beginning To Establish An API Transit System”, with the following abstract, “Come explore 100 stops along the modern API life cycle, from definition to deprecation. Taking the learnings from eight years of API industry research, extracted from the leading API management pioneers, I would like to guide you through each stop that a modern API should pass across, mapped out using a familiar transit map format, providing an API experience I have dubbed API Transit. This new approach to mapping out, and understanding the API life cycle as seen through the lens of a modern municipal transit system allows us to explore and learn healthy patterns for API design, deployment, management, and other stops along an APIs journey. Which then becomes a framework for executing and certifying that each API is following the common patterns which each developer has already learned, leading us to a measurable and quantifiable API governance strategy that can be reported upon consistently. This workshop will go into detail on almost 100 stops, and provide a forkable blueprint that can be taken home and applied within any API operations, no matter how large, or small.”

I will also be spending some time meeting with a variety of API service providers, and looking to meet up with some friends while in London. If you are around, ping me. I’m not sure how much time I will have, but I’m always game to try and connect. I’m eager to talk with folks about banking activity in the UK, as well as about the event-driven architecture and API life cycle / governance work I’m doing. I’ll be at the event most of the day Thursday, and all day Friday, but both evenings I should have some time available to connect. See you in London!

Details Of The API Evangelist Partner Program

I’ve been retooling the partner program for API Evangelist. There are many reasons for this, and you can read the full backstory in the narrative I have written about these changes to the way in which I partner. I need to make a living, and my readers expect me to share relevant stories from across the sector on my blog. I’m also tired of meaningless partner arrangements that never go anywhere, and I’m looking to incentivize my partners to engage with me, and the API community, in an impactful way. I’ve crafted a program that I think will do that.

While there will still be four logo slots available for partnership, the rest of the references to API service providers and the solutions they provide will be driven by my partner program. If you want to be involved, you need to partner with me. All that takes is emailing me to say you want to be involved. It doesn’t cost you a dime; all you have to do is reach out, let me know you are interested, and be willing to play the part of an active API service provider. If you are making an impact on the API space, then you’ll enjoy the following exposure across my work:

  • API Lifecycle - Your services and tooling will be listed as part of my API lifecycle research, and displayed on the home page of my website, and across my network of sites.
  • Short Form Storytelling - Being referenced as part of my stories when I talk about the areas of the API sector in which your services and tooling provides solutions.
  • Long Form Storytelling - Similar to short form, when I’m writing white papers and guides, I will use your products and services as examples to highlight the solutions I’m talking about.
  • Story Ideas - You will have access to a list of story ideas I’m working through, and be able to add to the list, as well as extract ideas from it to seed your own content teams.
  • In-Person Talks - When I’m giving talks at conferences I will be including your products and services into my slides, and storytelling, using them as a cornerstone for my in-person talks.
  • Workshops - Similar to talks, I will be working my partners into the workshops I give, and when I walk through my API lifecycle and governance work, I will be referencing the tools and services you provide.
  • Consulting - When working with clients, helping them develop their API lifecycle and governance strategy I will be recommending specific services and tooling that are offered by my partners.

I will be doing this for the partners who have the highest ranking. I won’t be doing this in exchange for money. Sure, some of these partners will be buying logos on the site, and paying me for referrals, and deals they land. However, that is just one aspect of how I fund what I do. It won’t change my approach to research and storytelling. I think I’ve done a good job of demonstrating my ability to stay neutral when it comes to my work, and it is something my work will continue to demonstrate. If you question how partner relationships will affect my work, then you probably don’t know me very well.

I won’t be taking money to write stories or white papers anymore. It never works, and always gets in the way of me producing stories at my regular pace. My money will come from consulting, speaking, and the referrals I send to my partners. Anytime I have taken money from a partner, I will disclose it on my site, and acknowledge the potential influence on my work. However, in the end, nothing will change. I’ll still be writing as I always have, and remain as critical as I always have been, even of my partners. The partners who make the biggest impact on the space will rise to the top, and the ones who do not will stay at the bottom of the list, and rarely show up in my work. If you’d like to partner, just drop me an email, and we’ll see what we can make happen working together.

Crafting A Productive API Industry Partner Program

I struggle with partner relationships. I’ve had a lot of partners operating API Evangelist over the years. Some good. Some not so good. And some amazing! You know who you are. It’s tough to fund what I do as the API Evangelist. It’s even harder to fund who I am as Kin Lane. I’ve revamped my approach to partnering several times now trying to find the right formula for me, my partners, and for my readers. As the partner requests pile up, and I fall short for some of my existing partners, while delivering as expected for others, it is time for me to take another crack at shaping my partner program.

A Strong Partnership Base
A cornerstone of my new approach to partnering is my relationship with my primary partner and supporter of API Evangelist. They have not just helped provide me with the financial base I need to live and operate API Evangelist, they are investing in, and helping grow, my existing API lifecycle and API Stack work. We are also working in concert to formalize my API lifecycle work into a growing consultancy, pushing forward my API Stack research, syndicating it as the API Gallery, and refining my ranking system for both API providers and API service providers. Without this latest round of support, API Evangelist wouldn’t be happening.

Difficulties With The Current Mode Of Partnering
One of the biggest challenges I have right now with partnering is that my partners want me to produce content about them. Writing stories for pay just isn’t a good idea for their brand, or for mine. I know it is what people want, but it just doesn’t work. The other challenge I have is people tend to want predictable stories on a schedule. I know it seems like I’m a regular machine, churning out content, but honestly when it flows, it all works. When it doesn’t flow, it doesn’t work. I schedule things out ahead enough that the bumps are smoothed out. I love writing. The anxiety I get from writing on a deadline, or for expected topics isn’t conducive to producing good content, and storytelling. This applies to both my short form (blog posts), and my long form (white papers). I’ll still be producing both of these things, but I can’t do it for money any longer.

Difficulties With Incentivizing Partner Behavior
I have no shortage of people who’d like to partner and get exposure from what I do, but incentivizing what I’d like to see from these partners is difficult. I’m not just looking for money from them, I’m looking to incentivize companies to build interesting products and services, tell stories that are valuable to the community, and engage with the API space in a meaningful way. I want to leverage my partners to behave as good citizens, give back to the space, and yes, get new users and generate revenue from their activity. I’ve seen too many partnerships be exclusively about the money involved, or just about the press release, with no meaningful substance actually achieved in between. This type of hollow, meaningless, partnership masturbation does no good for anyone, and honestly is a waste of my time. I don’t expect ALL partnerships to bear fruit, but there should be a high bar for defining what partnership means, and we should be working to make it truly matter for everyone involved.

There Are Three Dimensions To Creating Partner Value
As I see it, there are three main dimensions to establishing a productive API industry partnership program. There is partner A, and partner B, but then there is the potential customer. If a partnership isn’t benefiting all three of these actors equally, then it just won’t work. I understand that as a company you are looking to maximize the value generation and extraction for your business, but there is enough to go around, and one of the core tenets of partnership is that this value is shared, with the customer and community in mind. Not all companies get this. It is my role to make sure they are reminded of it, and push for balanced partnerships that are healthy, active, and fruitful, but also benefit the community. We will all benefit from this, despite the many shortsighted, self-centered approaches to doing business in API-land that I’ve encountered over my eight years operating as the API Evangelist. To help me balance my API partnership program, I’m going to be applying mechanisms I’ve used for years to help me define and understand who my true partners are, and whoever rises to the top will see the most benefits from the program.

Tuning Into API Service Provider Partner Signals
When it comes to quantifying my partners, who they are, and what they do, I’m looking at a handful of signals to make sure they are contributing not just to the partnership, but to the wider community. I’m looking to incentivize API service providers to deliver as much value to the community as they are extracting as part of the partnership. Here is what I’m looking for to help quantify the participation of any partner at the table:

  • Blog - The presence of an active blog with an RSS / Atom feed. Allowing me to tune into what is happening, and help share the interesting things that are happening via their platform and community. If something good is being delivered, and the story isn’t told, it didn’t happen. An active blog is one of the more important signals I can tune into as the API Evangelist – also I need a feed!!!
  • Github - The presence of an active Github account. Possessing a robust number of repositories for a variety of API ecosystem projects from API definitions to SDKs. I’m tuning into all the Github signals including number of repos, stars, commits, forks, and other chatter to understand the activity and engagement levels occurring within a community. Github is the social network for API providers–make sure you are being active, observable, and engaging.
  • Twitter - The presence of an active, dedicated Twitter account. Because of its public nature, Twitter provides a wealth of signals regarding what is happening via any platform, and provides one of the most important ways in which a service provider can be engaging with their customers, and the wider API space. I understand that not everyone excels at Twitter engagements, but using it as an information and update sharing channel, and a general support and feedback loop, is within the realm of operation for EVERYONE.
  • Search - I’m always looking for a general, and hopefully wide search engine presence for API service providers. I have alerts setup for all the API service providers I monitor, which surfaces relevant stories, conversations, and information published by each platform. SEO isn’t hard, but takes a regular, concerted effort, and it is easy to understand how much a service provider is investing in their presence, or not.
  • Business Model - The presence, or lack, of a business model, as well as investment, is an important signal I keep an eye on, trying to understand where each API service provider is in their evolution. How new a company is, how far along their runway they are, and what the exit and long term strategy for an API service provider is. Keeping an eye on Crunchbase and other investment, pricing plans, and revenue signals coming out of a platform will play a significant role in understanding the value an API service provider is bringing to the table.
  • Integrations - I’m also tracking on the integrations any service provider offers, ensuring that API service providers are investing in, and encouraging interoperability with other platforms by default. API service providers that do not play well with others often do not make good partners, insisting on all roads leading to their platform. I’m always on the hunt for a dedicated integrations and plugin page for any API service provider I’m partnering with.
  • Partnerships - Beyond integrations I want to see all the other partnerships one of my partners is engaging with. The relationships they are engaging in tell a lot about how well they will partner with me, and define what signals they are looking to send to the community. Partnerships tell a lot about the motivations behind a company’s own partner program, and how it will benefit and impact my own partner program.
  • API - I am always looking for whether or not an API service provider has an API. If a company is selling services, products, and tooling to the API sector, but doesn’t have a public facing API, I’m always skeptical of what they are up to. I get it, operating your own API program can often not be a priority, but in reality, can anyone trust an API service provider to help deliver on an API program, if they don’t have the experience of operating their own? This is one of my biggest pet peeves with API service providers, and a very telling sign about what is happening behind the scenes.
  • API Definitions - If an API service provider has an API, then they should have an OpenAPI definition and Postman Collection available for it. The presence of API definitions, as well as a robust portal, docs, and other API building blocks is essential for any API service provider.
  • Story Ideas - I’m very interested in the number of story ideas submitted by API service providers, pointing me to interesting stories I should cover, as well as the interesting things that are occurring via their platform. Ideally, these are referenced by public blog posts, Tweets, and other signals sent by the API service provider, but they also count when received via direct message, email, and carrier pigeon (bonus points for pigeon).

There are other signals I’m looking for from my partners, but that is the core of what I’m looking for. I know it sounds like a lot, but it really isn’t. It is the bar that defines a quality API service provider, based upon eight years of tracking on them, and watching them come and go. The API service providers who are delivering in these areas will float up in my automated ranking system, and enjoy more prominence in my storytelling, and across the API Evangelist network.

Tuning Into The API Community Signals
Beyond the API service providers themselves I’m always tuned into what the community is saying about companies, products, services, and tooling. While there is always a certain level of hype around what is happening in the API sector, you can keep your finger on the pulse of what is going on through a handful of channels. Sure, many of the signals present on these channels can be gamed by the API service providers, but others are much more difficult, and will take a lot of work to game. Which can be a signal on its own. Here is what I’m looking for to understand what the API community is tuning into:

  • Blogs - I’m tuning into what people across the space are writing about on their personal, and company blogs. I’m regularly curating stories that reference tools, services, and the products offered by API service providers. If people are writing about specific companies, I want to know about it.
  • Twitter - Twitter provides a number of signals to understand what the community thinks about an API service provider, including follows, and retweets. The conversation volume initiated and amplified by the community tells a significant story about what an API service provider is up to.
  • Github - Github also provides one of the more meaningful platforms for harvesting signals about what API service providers are up to. The stars, forks, and discussion initiated and amplified by the community on the Github repos and organizations of API services providers tell an important story.
  • Search - Using the Google and Talkwalker alerts setup for each API service provider, I curate the Reddit, Hacker News, and other search indexed conversations occurring about providers, tracking on the overall scope of conversation around each API service provider.

There are other signals I’m developing to understand the general tone of the API community, but these reflect the proven ones I’ve been using for years, and provide a reliable measure of the impact an API service provider is making, or not. One important aspect of this search and social media exhaust is in regards to sentiment, which can be good or bad, depending on the tone the API service provider is setting within the community.

Generating My Own Signals About What Is Happening
This is the portion of the partner relationship where I’m held accountable. These are the things that I am doing that deliver value to my partners, as well as the overall API community. These items are the signals partners are looking for, but also the things I’m measuring to understand the role an API service provider plays in my storytelling, speaking, consulting, and research. Each of these areas reflects how relevant an API service, their products, services, and tooling is to the overall API landscape as it pertains to my work. Here is what I’m tracking on:

  • Short Form Storytelling - Tracking on how much I write about an API service provider in my blog posts on API Evangelist, and other places I’m syndicated like DZone. If I’m talking about a company, they are doing interesting and relevant things, and I want to be showcasing their work. I can’t just write about someone because I’m paid, it is because they are relevant to the story I’m telling.
  • Long Form Storytelling - Understanding how API service providers are referenced in my guides and white papers. Similar to the short form references, I’m only referencing companies when they are doing relevant things to the stories I’m telling. My guides used to be comprehensive when it came to mentioning ALL API service providers, but going forward they will only reference those that float to the top, and rank high in my overall API partner ranking.
  • Story Ideas - I’m regularly writing down story ideas, and aggregating them into lists. Not every idea turns into a story, but the idea still demonstrates something that grabbed my attention. I tend to also share story ideas with other content producers, and publications, providing the seeds for interesting stories that I may not have the time to write about myself. Providing rich, and relevant materials for others to work from.
  • My In-Person Talks - Throughout the talks I give at conferences, Meetups, and in person within companies, organizations, institutions, and government I am referencing tools and services to accomplish a specific API related task–I need the best solutions to reference as part of these talks. I carefully think about which providers I will reference, and I keep track of which ones I reference.
  • API Lifecycle - My API lifecycle work is built upon my research across the API stack, what I learn from the public API providers I study, as well as what I learn as part of my consulting work. All of this knowledge and research goes back into my API lifecycle and governance strategy, and becomes part of my outreach and strategy which gets implemented on the ground as part of my consulting. Across the 68+ stops along the API lifecycle there are always API service providers I need to reference, and refer to as part of my work.
  • API Stack - My API Stack work is all about profiling publicly available APIs out there and understanding the best practices at work when it comes to operating their APIs. When I notice that a service or tool is being put to use I take note of it. I use these references as part of my overall tracking and understanding of what is being put to use across the industry.
  • Website Logos - There are four logos for partners to sponsor across the network of API Evangelist sites. While I’m not weighting these in my ranking, when someone is present on the site like that they are part of my consciousness. I recognize this, and acknowledge that it does influence my overall view of the API sector.
  • Conversations - I’m regularly engaging in conversations with my partners, learning about their roadmaps, and staying in tune with what they are working on. These conversations have a significant impact on my view of the space, and help me understand the value that API service providers bring to the table.
  • Referrals - This is where we start getting to the money part of this conversation. When I refer clients to some of my partners, there are some revenue opportunities available for me. Not every referral is done because I’m getting paid, but there are partners who do pay me when I send business their way. This influences the way that I see the landscape, not because I’m getting paid, but because someone I’m talking to chose to use a service, and this will influence how I see the space.
  • Deals Made - I do make deals, and bring in revenue based upon the business I send to my partners. This isn’t why I do what I do, but it does pay the bills. I’m not writing and telling stories about companies because I have relationships with them. I write and tell stories about them because they do interesting things, and provide solutions for my audience. I’m happy to be transparent about this side of my business, and always work to keep things out in the open.

This is the core of the signals I’m generating that tell me which API service providers are making an impact on the API sector. Everything I do reinforces who the most relevant API service providers are. If I’m telling stories about them, then I’m most likely to keep telling more stories. If I’m referring API service providers to my readers and clients, then it just pushes me to read more about what they are doing, and tell more stories about them. The more an API service provider floats up in my world, the more likely they will stay there.

Partnerships Balanced Across Three Dimensions
My goal with all of this is to continue applying my ranking of API service providers across three main dimensions. Things that are within their control. Things that are within the community’s control. Then everything weighted and influenced by my opinion. While money does influence this, that isn’t what exclusively drives which API service providers show up across my work. If I take money to write content, then it’s hard to say that content is independent. The content is what drives the popularity and readership of API Evangelist, so I don’t want to negatively impact this work. It is easy to say that my storytelling and research is influenced by referral fees and deals I make with partners, but I find this irrelevant if my list of partners is ranked by a number of other elements, some within the API service provider’s control, but also due to signals that are not.

The Most Relevant Partners Rise To The Top
Everything on API Evangelist is YAML driven. The stories, the APIs, API service providers, and the partners that show up across the stories I tell, and the talks I give. I used to drive the listings of APIs and API service providers using my ranking, floating the most relevant to the top. I’m going to start doing that again. I’ve been slowly turning back on my automated monitoring, and updating the ranking for APIs and API service providers. When an API service provider shows up on the list of services or tools I showcase, only the highest ranked will float to the top. If API service providers are active, the community is talking about them, and I’m tuned into what they are doing, then they’ll show up in my work more frequently. This keeps the most relevant services and tooling across the API space, and my research, showcased in my storytelling, talks, and consulting.

My objective with this redefining of my partner program is to take what I’ve learned over the years, and retool my partnerships to deliver more value, help keep my website up and running, but most importantly get out of the way of what I do best–research and storytelling around APIs. I can’t have my partnerships slow or redirect my storytelling. I can’t let my partnerships dilute or discredit my brand. However, I still need to make a living. I still need partners. They still need value to be generated by me, and I want to help ensure that they bring value to the table for me, my readers, as well as the wider API community. This is my attempt to do that. We’ll see how it goes. I will update it each month and see what I can do to continue dialing it in. If you have any comments, questions, concerns, or would like to talk about partnering–let me know.

Nest Branding And Marketing Guidelines

I’m always looking out for examples of API providers who have invested energy into formalizing process around the business and politics of API operations. I’m hoping to gather a variety of approaches that I can aggregate into a single blueprint that I can use in my API storytelling and consulting. The more I can help API providers standardize what they do, the better off the API sector will be, so I’m always investing in the work that API providers should be doing, but that doesn’t always get prioritized.

The other day while profiling the way that Nest uses Server-Sent Events (SSE) to stream activity via thermostats, cameras, and the other devices they provide, I stumbled across their branding policies. It provides a pretty nice set of guidance for Nest developers with respect to the platform’s brand, and something you don’t see with many other API providers. I always say that branding is the biggest concern for new API providers, but also the one that is almost never addressed by API providers who are in operation–which doesn’t make a whole lot of sense to me. If I had a major corporate brand, I’d work to protect it, and help developers understand what is important.

The Nest marketing program is intended to qualify applications, and then allow them to use the “Works with Nest” branding in their marketing and social media. To get approved you have to submit your product or service for review, and the review process verifies that you are in compliance with all of their branding policies.

To apply for the program you have to email them with all the following details regarding the marketing efforts of your product or service where you will be using the “Works with Nest” branding:

  • Description of your marketing program
  • Description of your intended audience
  • Planned communication (list all that apply): Print, Radio, TV, Digital Advertising, OOH, Event, Email, Website, Blog Post, Social Post, Online Video, PR, Sales, Packaging, Spec Sheet, Direct Mail, App Store Submission, or other (please specify)
  • High resolution assets for the planned communication (please provide URL for file download); images should be at least 1000px wide
  • Planned geography (list all that apply): Global, US, Canada, UK, or other (please specify)
  • Estimated reach: 1k - 50k, 50k- 100k, 100k - 500k, or 500k+
  • Contact Information: First name, last name, email, company, product name, and phone number

Once submitted, Nest says they’ll provide feedback or approval of your request within 5 business days, and if all is well, they’ll approve your marketing plan for that Works with Nest product. If they find any submission is not in compliance with their branding policies, they’ll ask you to make corrections to your marketing, so you can submit again. I don’t have too many examples of marketing and branding submission processes as part of my API branding research. I have the user interface guide, and trademark policy as building blocks, but the user experience guide, and the details of the submission process are all new entries to my research.

I feel like API providers should be able to defend the use of their brand. I do not feel API providers can penalize and cut off API consumers’ access unless there are clear guidelines and expectations presented regarding what is healthy, and unhealthy behavior. If you are concerned about how your API consumers reflect your brand, then take the time to put together a program like Nest has. You can look at my wider API branding research if you need other examples, or I’m happy to dive into the subject more as part of a consulting arrangement, and help craft an API branding strategy for you. Just let me know.

Disclosure: I do not “Work with Nest” – I am just showcasing the process. ;-)

Seamless API Lifecycle Integration With Github, Gitlab, And BitBucket

This is a story in a series of stories that I’m doing as part of the version 3.0 release of the platform. They are one of the few API service providers I’m excited about when it comes to what they are delivering in the API space, so I jumped at the opportunity to do some paid work for them. As I do, I’m working to make these stories about the solutions they provide, and refrain from focusing on just their product, hopefully maintaining my independent stance as the API Evangelist, and keeping some of the credibility I’ve established over the years.

Github, Gitlab, and Bitbucket have taken up a central role in the delivery of the valuable API resources we are using across our web, mobile, and device-based applications. These platforms have become integral parts of our development, and software build processes, with Github being the most prominent player when it comes to defining how we deliver not just applications, but increasingly API-driven applications on the web, on our mobile phones, and via the common Internet connected objects becoming more ubiquitous across our physical world.

A Lifecycle Directory Of Groups And Users
These social coding platforms come with the ability to manage different groups, projects, as well as the users involved with moving projects forward. While version control isn’t new, Github is credited with making this aspect of managing code a very social endeavor, which can be done publicly, or privately within select groups. This provides a rich environment for defining who is involved with each microservice being moved along the API lifecycle, allowing teams to leverage the existing organization and user structure provided by these platforms as part of their API lifecycle organizational structure.

Repositories As A Wrapper For Each Service
Five years ago you would most likely find just code within a Github repository. In 2018, you will find schema, documentation, API definitions, and numerous artifacts stored and evolved within individual repositories. The role of the code repository in the API lifecycle, when it comes to helping move an API forward from design, to mocking, to documentation, testing, and other stops along the lifecycle, can’t be overstated. The individual repository has become synonymous with a single unit of value in microservices, containerization, and continuous deployment and integration conversations, making it the logical vehicle for the API lifecycle.

Knowing The Past And Seeing The Future Of The API Lifecycle
The API lifecycle has a beginning, an end, as well as many stops along the way. Increasingly the birth of an API begins in a repository, with a simple README file explaining its purpose. Then the lifecycle becomes defined through a series of user-driven submissions via issues, comments, and commits to the repository, which are then curated, organized, and molded into a road map, a list of currently known issues, and a resulting change log of everything that has occurred. This makes seamless integration for individual repositories, as well as across all of an organization’s API-centric repositories, pretty critical to moving along the API lifecycle in an efficient manner.

Providing A Single Voice And Source Of Truth
With the introduction of Jekyll, and other static page solutions to the social coding realm, the ability for a repository to become a central source of information, communications, notifications, and truth has been dramatically elevated. The API lifecycle is being coordinated, debated, and published using the underlying Git repository, becoming a platform for delivering the code, definitions, and other more technical elements. We are increasingly publishing documentation alongside code and OpenAPI definition updates, and aggregating issues, milestones, and other streams generated at the repository level to provide a voice for the API lifecycle.

Seamless Integration With Lifecycle Tooling Is Essential
Version control in its current form is the movement we experience across the API lifecycle. The other supporting elements for organizational, project, and user management provide the participation layer throughout the lifecycle. The communication and notification layers give the lifecycle a voice, keeping everyone in sync and moving forward in concert. For any tool or service to contribute to the API lifecycle in a meaningful way, it has to seamlessly integrate with Git, as well as the underlying API for these software development, version control, devops, and social coding platforms. Successful API service providers will integrate with our existing software lifecycle(s), serving one or many stops, and helping us all move beyond a single, walled garden platform mentality.

Helping Get The Word Out About Version 3.0

I’ve been telling stories about what the team has been building for a couple of years now. They are one of the few API service provider startups left that are doing things that interest me, and really delivering value to their API consumers. In the last couple of years, as things have consolidated, and funding cycles have shifted, there just hasn’t been the same amount of investment in interesting API solutions. So when they approached me to do some storytelling around their version 3.0 release, I was all in. Not just because I’m getting paid, but because they are doing interesting things that I feel are worth talking about.

I’ve always categorized them as an API design solution, but as they’ve iterated upon the last couple of versions, I feel they’ve managed to find their footing, and are maturing to become one of the few truly API lifecycle solutions available out there. They don’t serve every stop along the API lifecycle, but they do focus on a handful of the most valuable stops, and most importantly, they have adopted OpenAPI as the core of what they do, allowing API providers to put them to work, as well as any other solutions that support OpenAPI at the core.

As far as the stops along the API lifecycle that they service, here is how I break them down:

  • Definitions - An OpenAPI driven way of delivering APIs that goes beyond just a single definition, allowing you to manage your API definitions at scale, across many teams, and services.
  • Design - One of the most advanced API design GUI solutions out there, helping you craft and evolve your APIs using the GUI, or working directly with the raw JSON or YAML.
  • Virtualization - Enabling the mocking and virtualization of your APIs, allowing you to share, consume, and iterate on your interfaces long before you have to deliver more costly code.
  • Testing - Provides the ability to not just test your individual APIs, but to define and automate detailed tests, assertions, and a variety of scenarios to ensure APIs are doing what they should be doing.
  • Documentation - Allows for the publishing of simple, clean, but interactive documentation that is OpenAPI driven, which you can share with your team, and your API community through a central portal.
  • Discovery - Tightly integrated with Github, and maximizing an OpenAPI definition in a way that makes the entire API lifecycle discoverable by default.
  • Governance - Allows teams to get a handle on the API design and delivery lifecycle, while working to define and enforce API design standards, and ensure a certain quality of service across the lifecycle.

They may describe themselves a little differently, but in terms of the way that I tag API service providers, these are the stops they service along the API lifecycle. They have a robust API definition and design core, with an attractive, easy to use interface, which allows you to define, design, virtualize, document, test, and collaborate with your team, community, and other stakeholders. That makes them a full API lifecycle service provider in my book, because they focus on serving multiple stops, and they are OpenAPI driven, which allows every other stop to also be addressed using any other tools and services that support OpenAPI–which is how you do business with APIs in 2018.

I’ve added API governance to what they do, because they are beginning to build in much of what API teams are going to need to begin delivering APIs at scale across large organizations. Not just design governance, but the model and schema management you’ll need, combined with mocking, testing, documentation, and the discovery that comes along with delivering APIs like they do. They reflect not just where the API space is headed with delivering APIs at scale, but what organizations need when it comes to bringing order to their API-driven, software development lifecycle in a microservices reality.

I have five separate posts that I will be publishing over the next couple weeks as they release version 3.0 of their API development suite. Per my style I won’t always write directly about their product, but I’ll be talking about the solutions it delivers, though occasionally you’ll hear me mention them directly, because I can’t help it. Thanks to them for supporting what I do, and thanks to you, my readers, for checking out what they bring to the table. I think you are going to dig what they are up to.

OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, oftentimes using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy, by helping them better understand how they can work in concert by focusing in on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract
There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and either begin handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
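To make this concrete, a minimal handwritten contract might start out like the sketch below. The service, paths, and schema are hypothetical, purely for illustration:

```yaml
openapi: 3.0.0
info:
  title: Users API
  version: 0.1.0
paths:
  /users:
    get:
      summary: List users
      responses:
        '200':
          description: A list of users
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/User'
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
```

A handful of lines like this is already enough to drive mocks, documentation, and conversations with consumers, long before any code exists.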

Mocking Of Each Microservice To Hammer Out Contract
Each OpenAPI can be used to generate a mock representation of the service using Postman, or another OpenAPI-driven mocking solution. There are a number of services and tooling available that take an OpenAPI, and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
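The core idea behind this kind of mocking can be sketched in a few lines. Assuming nothing more than a parsed OpenAPI document (the inline spec fragment here is hypothetical), a mock simply echoes back the example responses defined in the contract:

```python
# Sketch of OpenAPI-driven mocking: look up the example response defined
# for a path/method in a parsed OpenAPI document, and return it exactly
# as a mock server would. The spec below is a made-up inline fragment,
# not a real service.

spec = {
    "paths": {
        "/users": {
            "get": {
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {
                                "example": [{"id": 1, "name": "Jane"}]
                            }
                        }
                    }
                }
            }
        }
    }
}

def mock_response(spec, path, method, status="200"):
    """Return the canned example for a path/method, as a mock API would."""
    operation = spec["paths"][path][method]
    return operation["responses"][status]["content"]["application/json"]["example"]

print(mock_response(spec, "/users", "get"))
```

Real tooling layers routing, schema-driven data generation, and hosting on top of this, but the contract's examples remain the source of every mocked response.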

OpenAPI Documentation Always Available In Repository
Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI has changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like Github Pages, or available running locally through the localhost. API documentation isn’t just for the microservices owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.

Bringing An API To Life Using Its OpenAPI Contract
Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out as part of an API gateway solution, or common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code while an API will still change is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance
A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.
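As a rough sketch of how assertions fall out of the contract, every response code documented for every path can become an expected status that a monitor executes against the live API. The spec fragment here is hypothetical:

```python
# Sketch: derive monitoring assertions from an OpenAPI contract. Every
# documented response code becomes an expected status to assert against
# the running service. The spec is a made-up fragment for illustration.

spec = {
    "paths": {
        "/users": {
            "get": {"responses": {"200": {"description": "OK"}}},
            "post": {"responses": {"201": {"description": "Created"}}},
        }
    }
}

def contract_assertions(spec):
    """Yield (method, path, expected_status) tuples from the contract."""
    for path, operations in spec["paths"].items():
        for method, operation in operations.items():
            for status in operation["responses"]:
                yield (method.upper(), path, int(status))

checks = list(contract_assertions(spec))
print(checks)  # each tuple can drive a test runner or uptime monitor
```

Commercial monitoring and testing tools do considerably more (schema validation, response body assertions, SLA timing), but they are all walking this same structure.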

Empowering Security To Be Directed By The OpenAPI Contract
An OpenAPI provides the entire details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There are a growing number of services and tooling emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts. Taking the paths, parameters, definitions, security, and authentication and using them as actionable details for ensuring security across not just an individual service, but potentially hundreds, or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.
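One way to sketch this, assuming a scanner has only the parsed contract to work from: every declared parameter becomes a fuzzing target, with its location and type guiding which malformed inputs to try. The spec fragment is hypothetical:

```python
# Sketch: use the parameters declared in an OpenAPI contract to
# enumerate fuzzing targets. A real scanner would feed malformed values
# into each parameter it discovers; here we just list them. The spec
# fragment is made up for illustration.

spec = {
    "paths": {
        "/users/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "schema": {"type": "integer"}},
                    {"name": "verbose", "in": "query", "schema": {"type": "boolean"}},
                ]
            }
        }
    }
}

def fuzz_targets(spec):
    """List every (path, method, parameter, location) worth fuzzing."""
    targets = []
    for path, operations in spec["paths"].items():
        for method, operation in operations.items():
            for param in operation.get("parameters", []):
                targets.append((path, method, param["name"], param["in"]))
    return targets

for target in fuzz_targets(spec):
    print(target)
```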

OpenAPI Provides API Discovery By Default
An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage for all interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across the team's Github, Gitlab, Bitbucket, or Team Foundation Server (TFS), and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.
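A rough sketch of the discovery idea, assuming a hypothetical catalog built by indexing the OpenAPI definitions found across team repositories (the repos, titles, and tags are made up):

```python
# A catalog as it might look after indexing OpenAPI files across repositories.
catalog = [
    {"repo": "team-a/patient-service", "title": "Patient API", "tags": ["patients", "fhir"]},
    {"repo": "team-b/billing-service", "title": "Billing API", "tags": ["invoices", "payments"]},
]

def search(catalog, keyword):
    """Return the repos whose title or tags match a keyword."""
    keyword = keyword.lower()
    return [
        entry["repo"]
        for entry in catalog
        if keyword in entry["title"].lower() or keyword in entry["tags"]
    ]

print(search(catalog, "patients"))  # ['team-a/patient-service']
```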

OpenAPI Delivers The Integration Contract For Clients
OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for quick setup of environments, and collaboration around integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens, or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams
Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performant, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and at scale becomes the measure for how well each service is delivering across not just tens, but hundreds, or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is also the bottom up contract that allows the service owners / stewards who are delivering quality services on the ground to inform governance, and lead efforts across many teams.

I can’t overstate the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, and don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, spend time editing it by hand, and load it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.

Github Is Your Asynchronous Microservice Showroom

When you spend time looking at a lot of microservices, across many different organizations, you really begin to get a feel for the ones whose owners / stewards are thinking about the bigger picture. When people are just focused on what a service does, and not how the service will actually be used, the Github repos tend to be cryptic, out of sync, and don’t really tell a story about what is happening. Github is often just seen as a vehicle for the code to participate in a pipeline, and not a way of speaking to the rest of the humans and systems involved in the overall microservices concert that is occurring.

Github Is Your Showroom
Each microservice is self-contained within a Github repository, making it the showcase for the service. Remember, the service isn’t just the code and other artifacts buried away in folders, understandable only if you know how to operate the service or continuously deploy the code. It is a service. The service is part of a larger suite of services, and is meant to be understood and reused by other human beings in the future, potentially long after you are gone, and aren’t present to give a 15 minute presentation in a meeting. Github is your asynchronous microservices showroom, where ANYONE should be able to land, and understand what is happening.

README Is Your Menu
The README is the centerpiece of your showroom. ANYONE should be able to land on the README for your service, and immediately get up to speed on what the service does, and where it is in its lifecycle. The README should not be written just for other developers, it should be written for other humans. It should have a title, and a short, concise, plain language description of what the service does, as well as any other relevant details about what the service delivers. The README for each service should be a snapshot of the service at any point in time, demonstrating what it delivers currently, what the road map contains, and the entire history of the service throughout its lifecycle. Every artifact, piece of documentation, and relevant element of a service should be available, and linked to, via the README for the service.

An OpenAPI Contract
Your OpenAPI (fka Swagger) file is the central contract for your service, and the latest version(s) should be linked to prominently from the README for your service. This JSON or YAML definition isn’t just some output as exhaust from your code, it is the contract that defines the inputs and outputs of your service. This contract will be used to generate code, mocks, sample data, tests, documentation, and drive API governance and orchestration across not just your individual service, but potentially hundreds or thousands of services. An up-to-date, static representation of your service’s API contract should always be prominently featured off the README for a service, and ideally located in a consistent folder across services, as service architects, designers, coaches, and governance people will potentially be looking at many different OpenAPI definitions at any given moment.

Issues-Driven Conversation
Github Issues aren’t just for issues. They are a robust, taggable, conversational thread around each individual service. Github Issues should be the self-contained conversation that occurs throughout the lifecycle of a service, providing details of what has happened, what the current issues are, and what the future road map will hold. Github Issues are designed to help organize a robust number of conversational threads, and allow for exploration and organization using tags, allowing any human to get up to speed quickly on what is going on. Github issues should be an active journal for the work being done on a service, through a monologue by the service owner / steward, and act as the feedback loop with ALL OTHER stakeholders around a service. Github issues for each service will be how the anthropologists decipher the work you did long after you are gone, and should articulate the entire life of each individual service.

Services Working In Concert
Each microservice is a self-contained unit of value in a larger orchestra. Each microservice should do one thing and do it well. The state of each service’s Github repository, README, OpenAPI contract, and the feedback loop around it will impact the overall production. While a service may be delivered to meet a specific application need in its early moments, the README, OpenAPI contract, and feedback loop should attract and speak to any potential future application. A service should be able to be reused, and remixed, by any application developer building internal, partner, or someday public applications. Not everyone landing on your README will have been in the meetings where you presented your service. Github is your service’s showroom, and where you will be making your first, and ongoing, impression on other developers, as well as executives who are poking around.

Leading Through API Governance
Your Github repo, README, and OpenAPI contract are being used by overall microservices governance operations to understand how you are designing your services, crafting your schema, and delivering your service. Without an OpenAPI and README, your service does nothing in the context of API governance, feeding into the bigger picture, and helping define overall governance. Governance isn’t scripture coming off the mountain telling you how to operate, it is gathered, extracted, and organized from existing best practices, and leadership across teams. Sure, we bring in outside leadership to help round off the governance guidance, but without a README, OpenAPI, and active feedback loop around each service, your service isn’t participating in the governance lifecycle. It is just an island, doing one thing, and nobody will ever know if it is doing it well.

Making Your Github Presence Count
Hopefully this post helps you see your own microservice Github repository through an external lens. Hopefully it will help you shift from Github being just about code, for coders, to something that is part of a larger conversation. If you care about doing microservices well, and you care about the role your service will play in the larger application production, you should be investing in your Github repository being the showroom for your service. Remember, this is a service. Not just in a technical sense, but in a business sense. Think about what good service is to you. Think about the services you use as a human being each day. How you like being treated, and how informed you like to be about the service details, pricing, and benefits. Now, visit the Github repositories for your services, and think about the people who will be consuming them in their applications. Does it meet your expectations for a quality level of service? Will someone brand new land on your repo and understand what your service does? Does your microservice do one thing, and does it do it well?

An OpenAPI Service Dependency Vendor Extension

I’m working on a healthcare related microservice project, and I’m looking for a way to help developers express their service dependencies within the OpenAPI or some other artifact. At this point I’m feeling like the OpenAPI is the place to articulate this, adding a vendor extension to the specification that allows for referencing one or more other services that any particular service is dependent on, helping make service dependencies more machine readable at discovery and runtime.

To help not reinvent the wheel, I am looking at using the WebAPI type, including the extensions put forth by Mike Ralphson and team. I’d like the x-api-dependencies collection to adopt a standardized schema that is flexible enough to reference different types of other services. I’d like to see the following elements be present for each dependency:

  • versions (OPTIONAL array of Thing -> Property -> softwareVersion). It is RECOMMENDED that APIs be versioned using [semver]
  • entryPoints (OPTIONAL array of Thing -> Intangible -> EntryPoint)
  • license (OPTIONAL, CreativeWork or URL) - the license for the design/signature of the API
  • transport (enumerated Text: HTTP, HTTPS, SMTP, MQTT, WS, WSS etc)
  • apiProtocol (OPTIONAL, enumerated Text: SOAP, GraphQL, gRPC, Hydra, JSON API, XML-RPC, JSON-RPC etc)
  • webApiDefinitions (OPTIONAL array of EntryPoints) containing links to machine-readable API definitions
  • webApiActions (OPTIONAL array of potential Actions)

Using the WebAPI type would allow for a pretty robust way to reference dependencies between services in a machine readable way, that can be indexed, and even visualized in services and tooling. When it comes to evolving and moving forward services, having dependency details baked in by default makes a lot of sense, and ideally each dependency definition would have all the details of the dependency, as well as potential contact information, to make sure everyone is connected regarding the service road map. Anytime a service is being deprecated, versioned, or impacted in any way, we have all the dependency details needed to make an educated decision regarding how to progress with as little friction as possible.
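To make the idea more tangible, here is a hypothetical sketch of what the proposed vendor extension might look like inside an OpenAPI definition, expressed as a Python dict using the WebAPI fields listed above; all service names and URLs are made up:

```python
# Hypothetical OpenAPI fragment carrying a draft x-service-dependencies extension.
openapi_fragment = {
    "info": {"title": "Patient Records API", "version": "1.2.0"},
    "x-service-dependencies": [
        {
            "name": "Provider Directory API",
            "versions": ["2.0.1"],
            "entryPoints": ["https://directory.example.com/api"],
            "transport": "HTTPS",
            "apiProtocol": "JSON API",
            "webApiDefinitions": ["https://directory.example.com/openapi.yaml"],
        }
    ],
}

def dependency_names(spec):
    """List the names of services this service depends on."""
    return [dep["name"] for dep in spec.get("x-service-dependencies", [])]

print(dependency_names(openapi_fragment))  # ['Provider Directory API']
```

Tooling could walk these entries at discovery or deprecation time to build a dependency graph across services.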

I’m going to go ahead and create a draft OpenAPI vendor extension specification for x-service-dependencies, and use the WebAPI type, complete with the added extensions. Once I start using it, and have successfully implemented it for a handful of services I will publish and share a working example. I’m also on the hunt for other examples of how teams are doing this. I’m not looking for code dependency management solutions, I am specifically looking for API dependency management solutions, and how teams are making these dependencies discoverable in a machine readable way. If you know of any interesting approaches, please let me know, I’d like to hear more about it.

The API Stack Profiling Checklist

I just finished a narrative around my API Stack profiling, telling the entire story around the profiling of APIs for inclusion in the stack. To help encourage folks to get involved, I wanted to help distill down the process into a single checklist that could be implemented by anyone.

The Github Base
Everything begins as a Github repository, and it can exist in any user or organization. Once ready, I can fork and publish it as part of the API Stack, or sync it with an existing repository project.

  • Create Repo - Create a single repository with the name of the API provider in plain language.
  • Create README - Add a README for the project, articulating who the target audience is, and who the author is.

OpenAPI Definition
Profiling the API surface area using OpenAPI, providing a definition of the request and response structure for all APIs. Head over to their repository if you need to learn more about OpenAPI. Ideally, there is an existing OpenAPI you can start with, or another machine readable definition you can use as a base–look around within their developer portal, because sometimes you can find an existing definition to start with. Next look on Github, as you never know where there might be something existing that will save you time and energy. However you approach it, I’m looking for complete details on the following:

  • info - Provide as much information about the API as possible.
  • host - Provide a host, or variables describing host.
  • basePath - Document the basePath for the API.
  • schemes - Provide any schemes that the API uses.
  • produces - Document which media types the API uses.
  • paths - Detail the paths including methods, parameters, enums, responses, and tags.
  • definitions - Provide schema definitions used in all requests and responses.
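A quick way to sanity check a profile against the list above is a small completeness linter; this sketch assumes an OpenAPI 2.0 definition already loaded as a Python dict:

```python
# The top-level OpenAPI 2.0 sections the checklist above asks for.
REQUIRED_SECTIONS = ["info", "host", "basePath", "schemes", "produces", "paths", "definitions"]

def missing_sections(spec):
    """Return the checklist sections absent from an OpenAPI dict."""
    return [section for section in REQUIRED_SECTIONS if section not in spec]

# A hypothetical draft profile that is only partially complete.
draft = {
    "info": {"title": "Example API"},
    "host": "api.example.com",
    "paths": {"/items": {"get": {"responses": {"200": {"description": "OK"}}}}},
}

print(missing_sections(draft))  # ['basePath', 'schemes', 'produces', 'definitions']
```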

To help accomplish this, I often scrape, and use any existing artifacts I can possibly find. Then you just have to roll up your sleeves and begin copying and pasting from the existing API documentation, until you have a complete definition. There is never any definitive way to make sure you’ve profiled the entire API, but do your best to profile what is available, including all the detail the provider has shared. There will always be more we can do later, as the API gets used more, and integrated by more providers and consumers.

Postman Collection
Once you have an OpenAPI definition available for the API, import it into Postman. Make sure you have a key, and the relevant authentication environment settings you need. Then begin making API calls to each individual API path, making sure your API definition is as complete as it possibly can be. This can be the quickest, or the most time consuming part of the profiling, depending on the complexity and design of the API. The goal is to certify each API path, and make sure it truly reflects what has been documented. Once you are done, export a Postman Collection for the API, complementing the existing OpenAPI, but with a more run-time ready API definition.

Merging the Two Definitions
Depending on how many changes occurred within the Postman portion of the profiling, you will have to sync things up with the OpenAPI. Sometimes it is a matter of making minor adjustments, and sometimes you are better off generating an entirely new OpenAPI from the Postman Collection using APIMATIC’s API Transformer. The goal is to make sure the OpenAPI and Postman are in sync, and working the same way as expected. Once they are in sync they can be uploaded to the Github repository for the project.
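The sync check can be sketched as a simple diff between the paths in each definition; the paths and the collection structure here are simplified and hypothetical:

```python
# Paths documented in a hypothetical OpenAPI definition.
openapi_paths = {"/patients", "/patients/{id}", "/appointments"}

# A simplified stand-in for an exported Postman Collection.
postman_collection = {
    "item": [
        {"name": "List patients", "request": {"url": {"path": "/patients"}}},
        {"name": "Get patient", "request": {"url": {"path": "/patients/{id}"}}},
    ]
}

def out_of_sync(openapi_paths, collection):
    """Return (paths missing from Postman, paths missing from OpenAPI)."""
    collected = {item["request"]["url"]["path"] for item in collection["item"]}
    return sorted(openapi_paths - collected), sorted(collected - openapi_paths)

missing_from_postman, missing_from_openapi = out_of_sync(openapi_paths, postman_collection)
print(missing_from_postman)  # ['/appointments']
```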

Managing the Unknown Unknowns
There will be a lot of unknowns along the way. A lot of compromises, and shortcuts that can be taken. Not every definition will be perfect, and sometimes it will require making multiple definitions because of the way an API provider has designed their API and used multiple subdomains. Document it all as Github issues in the repository. Use the Github issues for each API as the journal for what happened, and where you document any open questions, or unfinished work. Making the repository the central truth for the API definition, as well as the conversation around the profiling process.

Providing Central APIs.json Index
The final step of the process is to create an APIs.json index for the API. You can find a sample one over at the specification website. When I profile an API using APIs.json I am always looking for as much detail as I possibly can, but for the purposes of API Stack profiling, I’m looking for these essential building blocks:

  • Website - The primary website for an entity owning the API.
  • Portal - The URL to the developer portal for an API.
  • Documentation - The direct link to the API documentation.
  • OpenAPI - The link to the OpenAPI I created on Github.
  • Postman - The link to the Postman Collection I created on Github.
  • Sign Up - Where to sign up for an API.
  • Pricing - A link to the plans, tiers, and pricing for an API.
  • Terms of Service - A URL to the terms of service for an API.
  • Twitter - The Twitter account for the API provider – ideally, API specific.
  • Github - The Github account or organization for the API provider.

If you create multiple OpenAPIs, and Postman Collections, you can add an entry for each API. If you break a larger API provider into several entity provider repositories, you can link them together using the include property of the APIs.json file. I know the name of the specification is JSON, but feel free to do them in YAML if you feel more comfortable–I do. ;-) The goal of the APIs.json is to provide a complete profile of the API operations, going beyond what OpenAPI and Postman Collections deliver.
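Pulling the building blocks together, here is a hedged sketch of what such an APIs.json index might look like, rendered as a Python dict with entirely hypothetical names and URLs:

```python
# A hypothetical APIs.json index covering the essential building blocks above.
apis_json = {
    "name": "Example API Provider",
    "url": "https://example.com/apis.json",
    "apis": [
        {
            "name": "Example API",
            "humanURL": "https://developer.example.com",
            "properties": [
                {"type": "Documentation", "url": "https://developer.example.com/docs"},
                {"type": "OpenAPI", "url": "https://github.com/example/api/openapi.yaml"},
                {"type": "Postman", "url": "https://github.com/example/api/postman.json"},
                {"type": "TermsOfService", "url": "https://example.com/terms"},
            ],
        }
    ],
    # Larger providers can be linked together with the include property.
    "include": [
        {"name": "Example Billing APIs", "url": "https://example.com/billing/apis.json"}
    ],
}

def property_urls(index, property_type):
    """Pull a given property type (e.g. OpenAPI) from every indexed API."""
    return [
        prop["url"]
        for api in index["apis"]
        for prop in api["properties"]
        if prop["type"] == property_type
    ]

print(property_urls(apis_json, "OpenAPI"))
```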

Including In The API Stack
You should keep all your work in your own Github organization or user account. Once you’ve created a repository you would like to include in the API Stack, and syndicate the work to the API Gallery,,, Postman Network, and others, just submit it as a Github issue on the main repository. I’m still working on the details of how to keep repositories in sync with contributors, then reward and recognize them for their work, but for now I’m relying on Github to track all contributions, and we’ll figure this part out later. The API Stack is just the workbench for all of this, and I’m using it as a place to merge the work of many partners, from many sources, and then work to sensibly syndicate out validated API profiles to all the partner galleries, directories, and search engines.

My API Stack Profiling Process

I am escalating the conversations I’m having with folks regarding how I profile and understand APIs as part of my API Stack work. It is something I’ve been evolving for 2-3 years, as I struggle with different ways to attack and provide solutions within the area of API discovery. I want to understand what APIs exist out there as part of my industry research, but along the way I also want to help others find the APIs they are looking for, while also helping API providers get their APIs found in the first place. All of this has led me to establish a pretty robust process for profiling API providers, and documenting what they are doing.

As I shift my API Stack work into 2nd gear, I need to further formalize this process, articulate to partners, and execute at scale. Here is my current snapshot of what is happening whenever I’m profiling an API and adding it to my API Stack organization, and hopefully moving it forward along this API discovery pipeline in a meaningful way. It is something I’m continuing to set into motion all by myself, but I am also looking to bring in other API service providers, API providers, and API consumers to help me realize this massive vision.

Github as a Base
Every new “entity” I am profiling starts with a Github repository and a README, within a single API Stack organization for providers. Right now this is all automated, as I’m transitioning to this way of doing things at scale. Once in motion, each provider I’m profiling will become pretty manual. What I consider an “entity” is fluid. Most smaller API providers have a single repository, while other larger ones are broken down by line of business, to help reduce API definitions and the conversation around APIs down to the smallest possible unit we can. Every API service provider I’m profiling gets its own Github repository, subdomain, set of definitions, and conversation around what is happening. Here are the base elements of what is contained within each Github repo.

  • OpenAPI - A JSON or YAML definition for the surface area of an API, including the requests, responses, and underlying schema used as part of both.
  • Postman Collection - A JSON Postman Collection that has been exported from the Postman client after importing the complete OpenAPI definition, and fully testing it.
  • APIs.json - A YAML definition of the surface area of an API’s operation, including home page, documentation, code samples, sign up and login, terms of service, OpenAPI, Postman Collection, and other relevant details that will matter to API consumers.

The objective is to establish a single repository that can be continuously updated, and integrated into other API discovery pipelines. The goal is to establish a complete (enough) index of the APIs using OpenAPI, make it available in a more runtime fashion using Postman, and then index the entire API operations, including the two machine readable definitions of the API(s), using APIs.json. Providing a single URL which you can ingest and use to integrate with an API, and to make sense of API resources at scale.

Everything Is OpenAPI Driven
My definition still involves OpenAPI 2.0, but it is something I’m looking to shift towards 3.0 immediately. Each entity will contain one or many OpenAPI definitions located in the underscore openapi folder. I’m still thinking about the naming and versioning convention I’m using, but I’m hoping to work this out over time. Each OpenAPI should contain complete details for the following areas:

  • info - Provide as much information about the API as possible.
  • host - Provide a host, or variables describing host.
  • basePath - Document the basePath for the API.
  • schemes - Provide any schemes that the API uses.
  • produces - Document which media types the API uses.
  • paths - Detail the paths including methods, parameters, enums, responses, and tags.
  • definitions - Provide schema definitions used in all requests and responses.

While it is tough to define what is complete when crafting an OpenAPI definition, every path should be represented, with as much detail as possible about the surface area of requests and responses. It is the detail that really matters, ensuring all parameters and enums are present, and paths are properly tagged based upon the resources being made available. The more OpenAPI definitions you create, the more you get a feel for what a “complete” OpenAPI definition is. Something that becomes real once you create a Postman Collection for the API definition, and begin to actually try to use the API.

Validating With A Postman Collection
Once you have a complete OpenAPI definition defined, the next step is to validate your work in Postman. You simply import the OpenAPI into the client, and get to work validating that each API path actually works. You make sure you have an API key, and any other authentication requirements, and work your way through each path, validating that the OpenAPI is indeed complete. If I’m missing sample responses, the Postman portion of this API discovery journey is where I get them. Grabbing validated responses, anonymizing them, and pasting them back into the OpenAPI definition. Once I’ve worked my way through every API path, and verified it works, I then export a final Postman Collection for the API I’m profiling.

Indexing API Operations With APIs.json
I upload my OpenAPI, and Postman Collection to the Github repository for the API entity I am targeting. Then I get to work on an APIs.json for the API. This portion of the process is about documenting the OpenAPI and Postman I just created, but then also doing the hard work of identifying everything else an API provider offers to support their operations, and will be needed to get up and running with API integration. Here are a handful of the essential building blocks I’m documenting:

  • Website - The primary website for an entity owning the API.
  • Portal - The URL to the developer portal for an API.
  • Documentation - The direct link to the API documentation.
  • OpenAPI - The link to the OpenAPI I created on Github.
  • Postman - The link to the Postman Collection I created on Github.
  • Sign Up - Where to sign up for an API.
  • Pricing - A link to the plans, tiers, and pricing for an API.
  • Terms of Service - A URL to the terms of service for an API.
  • Twitter - The Twitter account for the API provider – ideally, API specific.
  • Github - The Github account or organization for the API provider.

This list represents the essential things I’m looking for. I have almost 200 other building blocks I document as part of API operations, ranging from SDKs to training videos. The presence of these building blocks tells me a lot about the motivations behind an API platform, and sends off a wealth of signals that can be tuned into by humans, or programmatically, to understand the viability and health of an API.

Continuously Integratable API Definition
Now we should have the entire surface area of an API, and its operations, defined in a machine readable format. The APIs.json, OpenAPI, and Postman can be used to on-board new developers, and used by API service providers looking to provide valuable services to the space. Ideally, each API provider is involved in this process, and once I have a repository setup, I will be reaching out to each provider as part of the API profiling process. Even when API providers are involved, it doesn’t always mean you get a better quality API definition. In my experience API providers often don’t care about the detail as much as some consumers and API service providers will. In the end, I think this is all going to be a community effort, relying on many actors to be involved along the way.

Self-Contained Conversations About Each API
With each entity being profiled living in its own Github repository, it opens up the opportunity to keep the API conversation localized to the repository. I’m working to establish any work in progress, and conversations around each API’s definitions, within the Github issues for the repository. There will be ad-hoc conversations and issues submitted, but I’ll also be adding path specific threads that are meant to be ongoing threads dedicated to profiling each path. I’ll be incorporating these threads into tooling and other services I’m working with, encouraging a centralized, but potentially also distributed, conversation around each unique API path, and its evolution. Keeping everything discoverable as part of an API’s definition, and publicly available so anyone can join in.

Making Profiling A Community Effort
I will repeat this process over and over for each entity I’m profiling. Each entity lives in its own Github repository, with the definition and conversation localized. Ideally API providers are joining in, as well as API service providers, and motivated API consumers. I have a lot of energy for profiling APIs, and can cover some serious ground when I’m in the right mode, but ultimately there is more work here than any single person could ever accomplish. I’m looking to standardize the process, distribute the crowdsourcing of this across many different Github repositories, and involve many different API service providers I’m working with, to ensure the most meaningful APIs are being profiled and maintained. This highlights another important aspect of all of this: the challenge of keeping API definitions evolving, and soliciting many contributions, while keeping them certified, rated, and available for production usage.

Ranking APIs Using Their Definitions
Next, I’m ranking APIs based upon a number of criteria. It is a ranking system I’ve slowly developed over the last five years, and is something I’m looking to continue to refine as part of this API Stack profiling work. Using the API definitions, I’m looking to rate each API based upon a number of elements that exist within three main buckets:

  • API Provider - Elements within the control of the API provider like the design of their API, sharing a road map, providing a status dashboard, being active on Twitter and Github, and other signals you see from healthy API providers.
  • API Community - Elements within the control of the API community, and not always controlled by the API provider themselves. Things like forum activity, Twitter responses, retweets, and sentiment, and number of stars, forks, and overall Github activity.
  • API Analyst - The last area is getting the involvement of API analysts, and pundits, and having them provide feedback on APIs, and get involved in the process. Tracking on which APIs are being talked about by analysts, and storytellers in the space.
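A minimal sketch of how scores from the three buckets might combine into a single rank; the weights here are purely illustrative, not the actual ranking system:

```python
# Illustrative weights for the three ranking buckets described above.
WEIGHTS = {"provider": 0.5, "community": 0.3, "analyst": 0.2}

def api_rank(signals):
    """Combine per-bucket scores (0-100) into a single weighted rank."""
    return round(sum(WEIGHTS[bucket] * signals.get(bucket, 0) for bucket in WEIGHTS), 1)

# Hypothetical scores gathered from provider signals, community activity,
# and analyst feedback.
signals = {"provider": 80, "community": 60, "analyst": 40}
print(api_rank(signals))  # 0.5*80 + 0.3*60 + 0.2*40 = 66.0
```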

Most of this ranking I can automate when there is a complete APIs.json and OpenAPI definition, with machine readable sources of information present in those indexes. Signals I can tap into using blog RSS, the Twitter API, Github API, Stack Exchange API, and others. However, some of it will be manual, as well as require the involvement of API service providers. Helping me add to my own ranking information, but also bringing their own ranking systems, and algorithms to the table. In the end, I’m hoping the activity around each API’s Github repository also becomes its own signal, one consumers in the community can tune into, weight, or exclude based upon the signals that matter most to them.

Stream Rank
Adding to my ranking system, I am applying my partner’s ranking system for APIs, looking to determine the activity surrounding an API, and how much opportunity exists to turn an API into a real time stream, as well as identify the meaningful events that are occurring. Using the OpenAPI definitions generated for each API being profiled, I begin profiling for Stream Rank by capturing the following:

  • Download Size - What is the API response download size when I poll an API?
  • Download Time - What is the API response download time when I poll an API?
  • Did It Change - Did the response change from the previous response when I poll an API?

I poll an API for at least 24-48 hours, and up to a week or two if possible. I’m looking to understand how large API responses are, and how often they change–taking notice of daily or weekly trends in activity along the way. The result is a set of scores that help me identify:

  • Bandwidth Savings - How much bandwidth can be saved?
  • Processor Savings - How much processor time can be saved?
  • Real Time Savings - How real time an API resource is, which amplifies the other savings.
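A minimal sketch of the polling pass behind these signals might look like the following. The URL, interval, and number of polls are placeholders, and a real profiling run would span days rather than a handful of iterations:

```python
import hashlib
import time
import urllib.request

def signal(body, elapsed, last_hash):
    """Turn one polled response into the three profiling signals."""
    digest = hashlib.sha256(body).hexdigest()
    return {
        "size": len(body),      # Download Size
        "time": elapsed,        # Download Time
        "changed": last_hash is not None and digest != last_hash,  # Did It Change
    }, digest

def profile(url, polls=5, interval=60):
    """Poll an API endpoint repeatedly, collecting the signals above."""
    results, last_hash = [], None
    for _ in range(polls):
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        record, last_hash = signal(body, time.monotonic() - start, last_hash)
        results.append(record)
        time.sleep(interval)
    return results
```

Hashing the body is just one way to answer "did it change"; a smarter pass would diff the parsed payload to spot which records changed, feeding the event detection described above.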

All of which go into what we are calling a Stream Rank, which ultimately helps us understand the real time opportunity around an API. When you have data on how often an API changes, and then you combine this with the OpenAPI definition, you have a pretty good map for how valuable a set of resources is, based upon the ranking of the overall API provider, but also the value that lies within each endpoint's definition. Providing me with another pretty compelling dimension to the overall ranking for each of the APIs I'm profiling.

APIMetrics CASC Score
Next I'm partnering with APIMetrics to bring in their CASC Scoring when it makes sense. They have been monitoring many of the top API providers, and applying their CASC score to overall API availability. I'd like to pay them to do this for ALL APIs in my API Stack work. I'd like to have every public API actively monitored from many different regions, and possess active, as well as historical data that can be used to help rank each API provider. Like the other work here, this type of monitoring isn't easy, or cheap, so it will take some prioritization, and investment to properly bring in APIMetrics, as well as other API service providers to help me rank things.

Event-Driven Opportunity With AsyncAPI
Something I've only just begun doing, but I wanted to include on this list, is adding a fourth definition to the stack when it makes sense. If any of these APIs possess a real time, event-driven, or message oriented API, I want to make sure it is profiled using the AsyncAPI specification. The specification is just getting going, but I'm eager to help push forward the conversation as part of my API Stack work. The format is a great sister specification for OpenAPI, and something I feel is relevant for any API that I profile with a high enough Stream Rank. Once an API possesses a certain level of activity, and has a certain level of richness regarding parameters and the request surface area, the makings for an event-driven API are there–the provider just hasn't realized it. The map for the event-driven API exists within the OpenAPI parameter enums, and tagging for each API path; all you need is the required webhooks or streaming layer to realize an existing API's event-driven potential–this is where a streaming provider comes in. ;-)

One Stack, Many Discovery Opportunities
I am doing this work under my API Stack research, stimulated by my work with Postman, the OpenAPI Initiative, APIs.json, API Deck, and beyond. The goal is to turn API Stack into a workbench, where anyone can contribute, as well as fork and integrate API definitions into their existing pipelines. All of this will take years, and a considerable amount of investment, but it is something I've already invested several years into. I'm also seeing the API discovery conversation reaching a critical mass thanks to the growth in the number of APIs, as well as the adoption of OpenAPI and Postman, making API discovery a more urgent problem. The challenge will be that most people want to extract value from these kinds of efforts, and not give back. While I feel strongly that API definitions should be open and freely accessible, the nature of the industry doesn't always reflect my vision, so I'll have to get creative about how I incentivize folks, and reward or punish certain behaviors along the way.

The OpenAPI 3.0 GUI

I have been editing my OpenAPI definitions manually for some time now, as I just haven't found a GUI editor that works well with my workflow. Swagger Editor really isn't a GUI solution, and while I enjoy services like APIMATIC, and others, I'm very picky about what I adopt within my world. One aspect of this is that I've been holding out for a solution that I can run 100% on Github using Github Pages. I've delusionally thought that someday I would craft a slick JavaScript solution that I could run using only Github Pages, but in reality I've never had the time to pull it together. So, I just kept manually editing my OpenAPI definitions on the desktop using Atom, and publishing to Github as needed.

Well, now I can stop being so dumb, because my friend Mike Ralphson (@permittedsoc) has created my new favorite OpenAPI GUI, that is OpenAPI 3.0 compliant by default. As Mike defines it, “OpenAPI-GUI is a GUI for creating and editing OpenAPI version 3.0.x JSON/YAML definitions. In its current form it is most useful as a tool for starting off and editing simple OpenAPI definitions. Imported OpenAPI 2.0 definitions are automatically converted to v3.0.”

He provides a pretty simple way to get up and running with OpenAPI GUI because, “OpenAPI-GUI runs entirely client-side using a number of Javascript frameworks including Vue.JS, jQuery and Bulma for CSS. To get the app up and running just browse to the live version on GitHub pages, deploy a clone to GitHub pages, deploy to Heroku using the button below, or clone the repo and point a browser at index.html or host it yourself - couldn’t be simpler.” – “You only need to npm install the Node.js modules if you wish to use the openapi-gui embedded web server (i.e. not if you are running your own web-server), otherwise they are only there for PaaS deployments.”

Portable, Forkable API Design GUI

The forkability of OpenAPI GUI is what attracts me to it. The client-side JavaScript approach makes it portable and forkable, allowing it to run wherever it is needed. The ability to quickly publish using Github Pages, or another static environment, makes it attractive. I'd like to work on a one click way of deploying locally, so I could make it available to remote microservices teams to manage their OpenAPI definitions. I think it is important to have local, as well as cloud-based API design GUI locations, with the cloud-based solutions being more shared, possessing a team component, as well as a governance layer. However, I really like the thought of each OpenAPI having its own default editor, where you edit the definition within the repo it is located in. I could see two editions of OpenAPI GUI: an isolated edition, and a more distributed team and definition approach. IDK, just thinking out loud.

Setting the OpenAPI 3.0 Stage

The next thing I love about OpenAPI GUI is that it is all about OpenAPI 3.0, which I just haven't had the time to fully migrate to. It has automatic conversion of OpenAPI definitions, and gives you all the GUI elements you need to manage the robust features present in 3.0 of the OpenAPI specification. I'm itching to get ALL of my API definitions migrated from 2.0 to 3.0. The modular and reusable nature of OpenAPI 3.0, enabled by OpenAPI GUI, is a formula for a rich API design experience. There are a lot of features built into the expand and collapse GUI, which I could see becoming pretty useful once you get the hang of working in there. Adding, duplicating, copying, and reusing your way to a powerful, API-first way of getting things done. I like.

Bringing In The Magic With Wizards

Ok, now I'm gonna go all road map on you Mike. You know you don't have to listen to everything I ask for, but cherry pick the things that make sense. I see what you are doing in the wizard section. I like it. I want a wizard marketplace. Go all Harry Potter and shit. Create a plugin framework and schema so we can all create OpenAPI driven wizards that do amazing things, and users can plug in and remove as they desire. Require each plugin to be its own repo, which can be connected and disconnected easily via the wizards tab. I'd even consider opening up the ability to link to wizards from other locations in the editor, and via the navigation. Then we are talking about paid wizards. You know, of the API validation varietal. Free validations, and then more advanced paid validations. ($$)

Branding And Navigation Customization

I would love to be able to name, organize, and add my own buttons. ;-) Let's add a page that allows for managing the navigation bars, and even the icons that show within the page. I know it would be a lot of work, but a great thing to have in the road map. You could easily turn on the ability to add a logo, change the color palette, and some other basics that allow companies to easily brand it, and make it their space. Don't get me wrong. It looks great now. It is clean, simple, uncluttered. I've just seen enough re-skinned Swagger UI and editors to know that people are going to want to mod the hell out of their GUI API design tooling. Oh yeah, what about drag and drop–I'd like to reorder my paths. ;-) Ok, I'll shut up now.

OpenAPI GUI In A Microservices World

As I said before, I like the idea of one editor == one OpenAPI. Something that can be isolated within the repo of a service. Don't make things too complicated and busy. In my API design editor I want to be able to focus on the task at hand. I don't want to be distracted by other services. I'd like this GUI to be the self-contained editor accessible via the Jekyll, Github Pages layer for each of my microservices. Imagine a slightly altered API design experience for each service. I could have the workspace configurable and personalized for exactly what I need to deliver a specific service. I would have a base API design editor for jumpstarting a new service, but I'd like to be able to make the OpenAPI GUI fit each service like a glove, even making the About tab a README for the service, rather than for the editor–making it a microservice dashboard landing page for each one of my services.

Making API Design a Shareable Team Sport

Ok, contradicting my anti-social, microservices view of things, I also want the OpenAPI GUI to be a shareable team environment. Where each team member possesses their own GUI, and the world comes to them. I want a shared catalog of OpenAPI definitions that I can discover widely, and check out or subscribe to within my OpenAPI GUI. I want all the reusable components to have a sharing layer, allowing me to share my components, work with the shared components of other developers, and work from a central repository of components. I want a discovery layer across the users, environments, components, OpenAPI definitions, and wizards. I want to be able to load my API design editor with all of the things I need to deliver consistent API design at scale. I want to be able to reuse the healthiest patterns in use across teams, and possibly lead by helping define, and refine what reusable resources are available for development, staging, and production environments.

Github, Gitlab, or BitBucket As A Backend For The OpenAPI GUI

I want it to run on Github, Gitlab, or BitBucket environments, as a static site–no backend. Everything backend should be an API, and added as part of the wizard plugin (aka magic) environment. I want the OpenAPI, components, and any other artifacts to live as JSON or YAML within a Github, Gitlab, or BitBucket repository. If it is an isolated deployment, everything is self-contained within a single repository. If it is a distributed deployment, then you can subscribe to shared environments, catalogs, components, and wizards via many organizations and repositories. This is the first modification I'm going to make to my version: instead of the save button saving to the vue.js store, I'm going to have it write to Github using the Github API. I'm going to add a Github OAuth icon, which will give me a token, and then just read / write to the underlying Github repository. I'm going to begin with an isolated version, but then expand to search, discovery, read and write to many different Github repositories across the organizations I operate across.
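The save-to-Github modification could be sketched like this. The repository coordinates and token are placeholders, and the payload shape follows GitHub's create-or-update file contents endpoint, which expects base64 encoded content and, for updates, the existing file's sha:

```python
import base64
import json
import urllib.request

def build_payload(definition, message, sha=None):
    """Build the JSON body for a PUT to /repos/{owner}/{repo}/contents/{path}."""
    body = {
        "message": message,
        # GitHub's contents API requires the file body to be base64 encoded
        "content": base64.b64encode(definition.encode("utf-8")).decode("ascii"),
    }
    if sha:  # required when updating an existing file, omitted when creating
        body["sha"] = sha
    return body

def save_openapi(owner, repo, path, token, definition, message, sha=None):
    """Commit an OpenAPI definition to a Github repository via the API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(definition, message, sha)).encode(),
        method="PUT",
        headers={"Authorization": f"token {token}"},
    )
    return urllib.request.urlopen(req)
```

The token from the OAuth flow slots straight into the Authorization header, so the editor itself stays a static, backend-free app.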

Annotation And Discussion Throughout

While OpenAPI GUI is a full featured, and potentially busy environment, I'd like to be able to annotate, share, and discuss within the API design environment. I'd like to be able to create Github issues at the path level, and establish conversations about the design of an API. I'd like to be able to discuss paths, schema, tags, and security elements as individual threads. Providing a conversational layer within the GUI, but one that spans many different Github repositories, organizations, and users. Adding to the coupling with the underlying repository. Baking a conversational layer into the GUI, but also into the API design process in a way that provides a history of the design lifecycle of any particular service. Continuing to make API design a team sport, and allowing teams to work together, rather than in isolation.

API Governance Across Distributed OpenAPI GUIs

With everything stored on Github, Gitlab, or Bitbucket, and shareable across teams using organizations and repositories, I can envision some pretty interesting API governance possibilities. I can envision a distributed API catalog coming into focus. Indexing and aggregating all services, their definitions, and conversations across teams. You can see the seeds for this under the wizard tab with validation, visualization, and other extensions. I can see there being an API design governance framework present, where all API definitions have to meet a certain number of validations before they can move forward. I can envision a whole suite of free and premium governance services present in a distributed OpenAPI GUI environment. With an underlying Github, BitBucket, and Gitlab base, and the OpenAPI GUI environment, you have the makings for some pretty sweet API governance possibilities.
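A toy version of that validation gate might look like the following; the rules here are purely illustrative, not an actual governance framework:

```python
# Minimal sketch of a design-governance gate: an OpenAPI definition must
# pass a set of validations before the service moves forward. The specific
# rules (descriptions, summaries, tags) are hypothetical examples.
def validate(openapi):
    """Return a list of violations for a parsed OpenAPI 3.0 dict."""
    problems = []
    if not openapi.get("info", {}).get("description"):
        problems.append("info.description is missing")
    for path, operations in openapi.get("paths", {}).items():
        for method, op in operations.items():
            if not op.get("summary"):
                problems.append(f"{method.upper()} {path} has no summary")
            if not op.get("tags"):
                problems.append(f"{method.upper()} {path} has no tags")
    return problems

def passes_governance(openapi, max_violations=0):
    """The gate: a definition only moves forward when it is clean enough."""
    return len(validate(openapi)) <= max_violations
```

Running checks like these across every repository in an organization is where the distributed catalog and the governance layer start to reinforce each other.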

Lots Of Potential With OpenAPI GUI

I'll pause there. I have a lot of other ideas for the API design editor. I've been thinking about this stuff for a long time. I'm stoked to adopt what you've built Mike. It's got a lot of potential, and a nice clean start. I'll begin to put it to use immediately across my API Evangelist, API Stack, and other API consulting work. With several different deployments, I will have a lot more feedback. It's an interesting leap forward in the API design tooling conversation, building on what is already going on with API service providers, and some of the other API design tooling like Apicurio, and pushing the API design conversation forward. I'm looking forward to baking OpenAPI GUI into operations a little more–it is lightweight, open source, and extensible enough to get me pretty excited about the possibilities.

Obtaining A Scalable API Definition Driven API Design Workflow On Github Complete With API Governance

There is a lot of talk of an API design first way of doing things when it comes to delivering microservices. I'm seeing a lot of organizations make significant strides towards truly decoupling how they deliver the APIs they depend on, and continue to streamline their API lifecycle, as well as governance. However, if you are down in the weeds, doing this work at scale, you know how truly hard it is to actually get everything working in concert across many different teams. While I feel like I have many of the answers, actually achieving this reality, and assembling the right tools and services for the job, is much harder than it is in theory.

There was a question posted on API Craft recently that I think gets at the challenges we all face:

We plan to use Open API 3 specification for designing API’s that are required to build our enterprise web application. These API’s are being developed to integrate the backend with frontend. They are initially planned to be internal/private. To roll out an API First strategy across multiple teams (~ 30) in our organization we want to recommend and centrally deploy a standard set of tools that could be used by teams to design and document API’s.

I am new to the swagger tool set. I understand that there is a swagger-editor tool that can help in API design while swagger-ui could help in API documentation. Trying them I realized a few problems

1. How would teams save their API’s centrally on a server? Swagger editor does not provide a way to centrally store them.
2. How can we get a directory look up that displays all the designed API’s?
3. How can we integrate the API design and API documentation tool?
4. How can the API specifications be linked with the implementation (java) to keep them up-to-date?
5. How do we show API dependencies when one api uses the other one?

We basically need to think about the end-to-end work-flow that helps teams in their SDLC to design API’s. For the start I am trying to see what can we achieve with free tools. Can someone share their thoughts based on their experience?

This reflects what I’m hearing from the numerous projects I’m consulting with. Some of the OpenAPI (Swagger) related questions can be addressed using Github, but the code integration, dependency, and other aspects of this question are much harder to answer, let alone solve with a single tool. Each team has their existing workflows, pipelines, and software delivery processes in place, so helping articulate how you should do this at scale gets pretty difficult very quickly.
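As a sketch of the central directory question (number 2 above), a small script can crawl repository checkouts and index any OpenAPI definitions it finds; the openapi.* file naming convention, and the assumption of one service per folder, are both hypothetical:

```python
import pathlib

def build_catalog(root):
    """Index every openapi.* file under root into a simple directory listing."""
    catalog = []
    for match in pathlib.Path(root).rglob("openapi.*"):
        if match.suffix not in (".json", ".yaml", ".yml"):
            continue
        catalog.append({
            "service": match.parent.name,  # assume one service per folder/repo
            "definition": str(match),
        })
    return sorted(catalog, key=lambda entry: entry["service"])
```

Point it at a directory of cloned repositories and you have a rough directory look up; publishing the resulting list as JSON on Github Pages gets you a shared, zero-backend catalog.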

Github, Gitlab, or BitBucket At Core

The first place you want to start is with Github, Gitlab, or Bitbucket at the core. If you aren't managing your services, and supporting API definitions as individual repositories within a version control system, achieving this at scale will be difficult. These software development platforms are where your APIs should begin, live, and evolve, as well as experience their end of life. Making not just the code accessible throughout the lifecycle, but also the API definitions and other artifacts that are essential throughout the API lifecycle.

Full API Lifecycle Service Providers

There are a handful of API service providers out there who will help you get pretty far down this road. Evolving towards being what I consider to be full lifecycle API service providers. Not just helping you with design, or any other individual stop along the API lifecycle, but emerging to help you with as much of what you'll need to get the job done. Here are just a handful of the leading ones out there that I track on:

  • Postman - Focusing on the client, but bringing in a full stack of lifecycle development tools to support.
  • Stoplight - Beginning with API design, but then expanding quickly to help you manage and govern the lifecycle.
  • APIMATIC - Emphasis on quality SDK generation, but approaching lifecycle deployment as a pipeline.
  • Swagger - Born out of the API definition, but quickly introducing other lifecycle solutions as part of the package.

I can’t believe I’m talking about Swagger, but I’ll vent about that in another post. Anyways, these are what I’d consider to be the leading API lifecycle development solutions out there. They aren’t going to get you all the way towards everything listed above, but they will get you pretty far down the road. Another thing to note is that they ALL support OpenAPI definitions, which means you can actually use several of the providers, as long as your central truth is an OpenAPI available on a Github repository.

No single one of these services will address the questions posted above, but collectively they could. It shows how fragmented services in the space are, and how many different interpretations of the API lifecycle there are. However, it also shows the importance of an API definition-driven approach to the lifecycle, and how using OpenAPI and Postman Collections can help bridge our usage of multiple API service providers to get the job done. Next, I’ll work on a post outlining some of the open source tooling I’d suggest to help solve some of these problems, and then work to connect the dots more on how you could actually do this in practice on the ground within your organization.

Using Postman to Explore the Triathlon API

I'm taking time to showcase any API provider I come across who has published their OpenAPI definitions to Github, like the New York Times, Box, Stripe, SendGrid, Nexmo, and others have. I'm also taking the time to publish stories showcasing any API provider who similarly publishes Postman Collections as part of their API documentation. Next up on my list is the Triathlon API, which provides a pretty sophisticated API stack for managing triathlons around the world, complete with a list of Postman Collections for exploring and getting up and running with their API.

Much like Okta, which I wrote about last week, the Triathlon API has broken their Postman Collections into individual service collections, and provides a nice list of them for easy access. Making it quick and easy to get up and running making calls to the API. Something that ALL API providers should be doing. Sorry, but Postman Collections should be a default part of your API documentation, just like OpenAPI definition should be as the driver of your interactive API docs, and the rest of your API lifecycle.

Every provider should be maintaining their OpenAPI definitions, as well as Postman Collections, on Github, and baking them into their API documentation. Your OpenAPI should be the central truth for your API operations, and then you can easily import it, and generate Postman Collections as you design, test, and evolve your API using the Postman development suite. I know there are many API providers who haven't caught up to this approach to delivering API resources, but it is something they need to tune into, making the necessary shift in how they are delivering their resources.
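As a rough illustration of generating Postman Collections from that central OpenAPI truth (Postman's own importer does far more), a sketch that emits one request per path and method, assuming a parsed OpenAPI 3.0 dict as input:

```python
def to_postman(openapi, base_url="{{baseUrl}}"):
    """Produce a minimal Postman Collection v2.1 dict from an OpenAPI dict."""
    items = []
    for path, operations in openapi.get("paths", {}).items():
        for method, op in operations.items():
            items.append({
                "name": op.get("summary", f"{method.upper()} {path}"),
                "request": {
                    "method": method.upper(),
                    "url": base_url + path,  # Postman variable for the host
                },
            })
    return {
        "info": {
            "name": openapi.get("info", {}).get("title", "API"),
            "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
        },
        "item": items,
    }
```

A fuller version would also carry over parameters, auth, and example bodies, but even this skeleton shows why keeping the OpenAPI as the source of truth makes the Postman Collection a cheap byproduct.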

In addition to regular stories like this on the blog, you will find me reaching out to individual API providers asking if they have an OpenAPI and / or Postman Collections. I'm personally invested in getting API providers to adopt these API definition formats. I want to see their APIs present in my API Stack work, as well as other API discovery projects I'm contributing to, like Postman Network, APIs.Guru, and others. Making sure your APIs are discoverable, and making sure you are getting out of the way of your developers by baking API definitions into your API operations–it is just what you do in 2018.

The Impact Of Availability Zones, Regions, And API Deployment Around The Globe

Werner Vogels shared a great story looking back at 10 years of compartmentalization at AWS, where he talks about the impact Amazon has made on the landscape by allowing for the deployment of resources into different cloud regions, zones, and jurisdictions. I agree with him regarding the significant impact this has had on how we deliver infrastructure, and honestly isn’t something that gets as much recognition and discussion as it should. I think this is partly due to the fact that many companies, organizations, institutions, and governments are still making their way to the cloud, and aren’t far enough in their journeys to be able to sufficiently take advantage of the different availability zones.

In Werner's piece he focuses on the availability, scalability, and redundancy benefits of operating in different zones. Which gets at the technical benefits of operating infrastructure in the cloud, but there are also significant business, and even political, considerations at play here. As the web matures, the business and political implications of being able to operate precisely within a specific region, zone, and jurisdiction are becoming increasingly important. Sure, you want your API infrastructure to be reliable, redundant, and able to fail over when there has been an outage in a specific region, but increasingly clients are asking for APIs to be delivered close to where business occurs, and regulatory bodies are beginning to mandate that digital business gets done within specific borders as well.

Regions have become a top level priority for Amazon, Azure, and Google. Clearly, they are also becoming a top level priority for their customers who operate within their clouds. It is one of those things I notice evolving across the technology landscape and have felt the need to pay attention to more as I see more activity and chatter. I’ve begun documenting which regions each of the cloud providers are operating in, and have been increasing the number of stories I’m writing about the potential for API providers, as well as API service providers. So it was good to see Werner reflecting on the significant role regions have played in the evolution of the cloud, and backing up what I’m already feeling and seeing across the sector.

While Werner focused on the technical benefits, I think the political, legal, and regulatory benefits will soon dwarf the technical ones. While the web has enjoyed a borderless existence for the last 25 years, I think we are going to start seeing things change in the next decade. Making cloud regions more about maintaining control over your country's digital assets, how you generate tax revenue, and how you defend the critical digital infrastructure of your nation from your enemies. The cloud providers who are empowering companies, organizations, institutions, and government agencies to securely, but flexibly, operate in multiple regions are going to be in a good position. Similarly, the API providers, and service providers who behave in a similar way, delivering API resources in a multi-cloud way, are going to emerge as the strongest players in the API economy.

Sharing The APIs.json Vision Again

I know many folks in the API sector don't know about APIs.json, and if they do they often think it is yet another API definition format, competing with OpenAPI, Postman Collections, and others. So, I want to take a moment to share the vision again, and maybe convert one or two more folks to the possibilities around having a machine readable format for entire API operations. This is where APIs.json elevates the conversation: it isn't just about defining an API, it is about defining API operations, looking to make things more discoverable, and executable by default.

APIs.json is a JSON format, as the name implies (admittedly a bad name, as I've already started creating APIs.yaml editions), which defines entire API operations, not just any single API. It starts with the basics of the API provider, with: name, description, image, tags, created, and modified. Then it has a collection for defining one or many actual APIs. Repeating the need for a name, description, image, and tags, but this time for each API–admittedly this might be redundant in many cases. Each API also has a humanURL and baseURL, providing two key links that developers will need. After that, we start getting to the meat of APIs.json, with the properties collection.

The properties collection is an array of URLs and types, which can point to many different building blocks of API operations. Common elements that should be present with every API provider like:

  • Documentation - URL to the API documentation.
  • Plans & Pricing - Landing page for the API tiers.
  • Terms of Service - Where do you find the legal department.
  • Sign Up - How do you sign up for an API.

These elements are oftentimes meant for humans, but the goal of APIs.json is to also provide machine readable URLs for common building blocks, including API definition formats like OpenAPI and Postman Collections.

APIs.json isn’t meant to compete with these existing API definition formats, it is designed to index them. It is meant to provide a complete index of the human, and machine readable components required to operate an API platform. The properties collection is meant to index all of these, but also work to move the human elements towards being more machine readable, as we’ve seen happen with documentation and OpenAPI, as well as what we achieved with licensing and API Commons. Ultimately we’d like to see all the essential aspects of API integration become machine readable–elements like:

  • Pricing - A machine readable breakdown of pricing, tiers, and limits.
  • Terms of Service - Breaking down the TOS so it can be interpreted by a machine.
  • Signup - Making signup for APIs something that is automated.
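Pulling the human and machine readable properties together, a minimal APIs.json index might look like the following; the URLs are placeholders, and some of the property type labels are illustrative rather than taken from the format's registry:

```python
import json

# A minimal, hypothetical APIs.json index for a single provider and API.
apis_json = {
    "name": "Example API Provider",
    "description": "Demonstration APIs.json index",
    "url": "https://example.com/apis.json",
    "apis": [{
        "name": "Example API",
        "humanURL": "https://example.com/api",       # for humans
        "baseURL": "https://api.example.com",        # for machines
        "properties": [
            {"type": "Documentation", "url": "https://example.com/docs"},
            {"type": "Pricing", "url": "https://example.com/pricing"},
            {"type": "TermsOfService", "url": "https://example.com/tos"},
            {"type": "SignUp", "url": "https://example.com/signup"},
            {"type": "x-openapi", "url": "https://example.com/openapi.json"},
        ],
    }],
}

print(json.dumps(apis_json, indent=2))
```

The properties array is the extensible part: as pricing, terms of service, and signup become machine readable, they simply become more property types pointing at structured files instead of landing pages.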

Yes, this will all take a significant amount of work, just like indexing APIs, and making them more discoverable, but it has to be done. Currently, I’m just looking to keep APIs.json, OpenAPI, and Postman Collection definitions up to date across thousands of Github repositories included in my API Stack work. As this work stabilizes, and continues to come into focus, I’ll be pushing forward with better defining signup and pricing across API providers, and then get to work on terms of service. We need all of these elements of API operations to become machine readable and executable as part of the discovery process, otherwise this shit will never scale.

That is a fresh look at the APIs.json vision. There is more to it than just indexing API operations and making them more discoverable. I am also using it to build different topical, and industry level collections, which you can reference as part of the APIs.json includes property. But, we'll save that for another story. In the meantime, I'm going to get back to work profiling APIs using APIs.json, publishing them to my API Stack organization(s), and syndicating them out to other partners like Postman Network, and beyond. If you'd like to know more about APIs.json, feel free to reach out, and I'll make sure to work on more stories around how I am using the API discovery format.

A Decade Later Twitter's API Plan Slowly Begins To Take Shape

It's been a long, long road for Twitter when it comes to realizing the importance of having a plan when it comes to their API management strategy. Aside from monetizing the firehose through former partner, and now acquired solution, Gnip, Twitter has never had any sort of plan when it came to providing access tiers for their API. I try to revisit the Twitter developer portal a couple of times a year, but I'm going to have to increase the number of visits, as there seem to be more rapid shifts towards getting control over their API management layer in recent months.

The API plan matrix that has emerged as part of Twitter’s pricing page provides a view of the unfolding plan for the social media platform.

The Twitter API plan doesn't show the entire scope, as it doesn't cover the streaming layer of the API, but it does provide an important first step towards bringing coherence to how developers can access the API, and pay for premium levels of access. This conversation isn't just about Twitter making money or Twitter charging for access to our data; it is about Twitter taking control over who has access to our data, and the platform. It is up to Twitter whether they focus on revenue, or the needs of end-users, something that we will only realize as their API plan continues to unfold.

This is the layer where Twitter will begin to rein in bots, and other malicious activity, but it is something that won't be easy, and will undoubtedly cause a lot of pain and frustration for developers, and end-users. It is something they should have done a long time ago, but as we are seeing play out with Facebook, to incentivize the growth their investors and shareholders desired, these platforms needed to look the other way at the API management layer. To achieve the eyeballs, clicks, and engagements they desired, it required them to ignore the harassment, bots, and other forms of abuse that occur. Twitter is different than Facebook, because it is primarily a public platform, but the lack of a plan at the API management layer has had a similar impact.

I'll keep an eye on where Twitter is headed with all of this. While I'm concerned about how they'll prioritize the needs of end-users and developers, it makes me happy to see them putting a plan in place at the API management level. Twitter has operated for too long without a plan, and without a clear path for serious applications built on the platform to grow and evolve. I'll keep an eye out for continued examples of Twitter restricting access to our own data, and call them out when they lean too heavily towards revenue generation. While it is a decade too late, I have to commend them for putting their API plan in place, and beginning the messy work of reining in API access to the platform.

Data Driven Apps At Scale Using Github And Jekyll

I’ve pushed my Github driven, Jekyll fueled, application delivery process to new levels over the last couple of weeks. I’ve been investing more cycles into my API Stack work, as I build out an API Gallery. All of my work as the API Evangelist has long lived as independent Github repositories, with a JSON or YAML core that drives the static Jekyll UI for my websites, applications, and API developer portals. They are designed to be independent, data and API driven micro-solutions to some related aspect of my API research. I’m just turning up the volume on what already exists.

I’m taking this to the next level with my approach to API discovery. API Stack used to be a single Github repository with a huge number of APIs.json and OpenAPI definitions, but after hitting the ceiling with the size of the repository, I’ve begun to shard projects by entity, topic, and collection. So far I have 285 entities, with 9850 API paths, spanning 397 topics. Each entity and topic exists within its own Github repository, acting as an individual set of API definitions, including an OpenAPI, Postman Collection, and APIs.json index. This allows me to move each API definition for an entity forward independently, and begin aggregating APIs into interesting collections by topic, industry, or other relevant areas.

There is no backend database for these projects. Well, there is a central system I use to help me orchestrate things, but each project is standalone, with everything it needs stored in the Github repository. It is something that took some engineering, and time to set up at first, but now that it is set up, managing it is actually pretty easy. The approach has also allowed me to scale it without any additional performance issues. Each entity I add becomes its own repo, and unless I hit some magic number for the number of repositories I can have within an organization, I should be able to scale to several thousand repos pretty quickly. I am doing the same thing for topics, and other types of collections I’m aggregating APIs into.

Each entity repository has the OpenAPI, and Postman Collection for its provider. I’m also breaking out each individual API path into its own OpenAPI definition, and considering doing the same for Postman Collections–distilling things down to the smallest possible unit of compute. The APIs.json for each repository indexes the OpenAPI and Postman Collection, as well as the other elements of API operations like documentation, blog, Twitter, and Github accounts. In the future I will be publishing other machine readable artifacts here, including details on plans and pricing, terms of service, monitoring, performance, and other aspects of the ranking and profiling of the API sector that I do.
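To make this more concrete, here is a rough sketch of the kind of APIs.json index an entity repository might carry. The names, URLs, and property types below are placeholders for illustration, not the actual contents of any of my repositories:

```python
import json

# A minimal, hypothetical APIs.json index for a single entity repository,
# pointing at the OpenAPI, Postman Collection, and other operational elements.
entity_index = {
    "name": "Example Entity",
    "description": "API definitions for the Example Entity provider.",
    "url": "https://example.github.io/example-entity/apis.json",
    "apis": [
        {
            "name": "Example API",
            "humanURL": "https://developer.example.com",
            "baseURL": "https://api.example.com",
            "properties": [
                {"type": "x-openapi", "url": "openapi.yaml"},
                {"type": "x-postman-collection", "url": "postman.json"},
                {"type": "x-documentation", "url": "https://developer.example.com/docs"},
                {"type": "x-blog", "url": "https://developer.example.com/blog"},
            ],
        }
    ],
}

# The index is what makes the repo discoverable and aggregatable.
print(json.dumps(entity_index, indent=2))
```

Because each repository carries an index like this, a collection is just another repo whose APIs.json points at the entity indexes it aggregates.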

I’m getting ready to softly launch the API Gallery, but my API Stack work will continue to be my workbench for this type of API discovery work. I’ll keep harvesting, scraping, and profiling APIs and their operations, and as they come into focus I’ll sync with the API Gallery, and look at how I can help Postman with their API Network and their directory, as well as other API aggregation and integration platforms like AnyAPI, APIDeck, and others. I’m just getting to the point where I feel like things are working as I want them to, and are ready for me to shift into a higher gear. There are still a lot of issues with individual entities, and topics, but the process is working well–I just need to spend more time refining the API definitions themselves, and work on syncing with other efforts in the space.

Right now I’m heavily using the Github API to make all this work, but I’m probably going to switch to relying on Git for the heaviest lifting. However, as a 30 year database person, I have to admit that I am rather enjoying delivering JSON and YAML fueled, API-driven continuous deployment and integration projects like this, which have a machine readable layer, but also a UI layer for humans to put to use. It provides me with a very API way of driving how I profile APIs, quantify what it is they do, and begin to rank and make sense of the growing API landscape. I am continuing with my way of doing things out in the open, while hopefully providing more value to the API discovery conversation, in an OpenAPI and Postman Collection driven API universe.

OAuth Has Many Flaws But It is The Best We Have At The Moment

I was enjoying the REST API Notes newsletter from my friend Matthew Reinbold (@libel_vox) today, and wanted to share my thoughts on his mention of my work while it was top of my mind. I always enjoy what Matt has to say, and regularly encourage him to keep writing on his blog, and to keep publishing his extremely thoughtful newsletter. It is important for the API sector to have many thoughtful, intelligent voices breaking down what is going on. I recommend subscribing to REST API Notes if you haven’t, you won’t regret it.

Anyways, Matt replied with the following about my Facebook response:

Kin Lane did a fine job identifying the mechanisms most companies already have in place to mitigate the kind of bad behavior displayed by Cambridge Analytica. It is a good starting point. However, I do want to challenge one of his assertions. Kin implies that OAuth consent is a sufficient control for folks to manage their data. While better than nothing, I maintain that most consumers are incapable of making informed decisions. It’s not a question of their intelligence. It is a question of complexity and incentives for the business to be deliberately opaque.

I agree. To quote my other friend Mehdi Medjaoui (@medjawii), “OAuth is flawed, but it is the best we have”. I would say that “consumers are incapable of making informed decisions” because we’ve crafted the world this way, and our profit margins depend on customers not being able to make informed decisions. It is how markets work things out, and “smart” people get ahead, and all that bullshit. You see this same thing playing out at the terms of service level as well. As a consumer of online services, I am regularly incapable of making informed decisions around how my data is being used in exchange for using a free web application. Is it because I’m not smart? No, it is because terms of service are purposefully confusing, verbose, legalese, and mind numbing. Why? So I don’t read them, understand them, and run the other way.

This is intentional. You see this in the Tweet responses from Facebook engineers as they call all of us ignorant fools, and say that we agreed to all of this. The problem isn’t OAuth. The problem is intentional deceit and manipulation by corporations, and for some reason we keep showcasing this type of behavior as being savvy, smart, and just doing business. There is no reason that terms of service or OAuth flows can’t be helpful, informative, and in plain language. Platforms just choose not to invest in these areas, because an uninformed user means more profit for the platform, its investors, and shareholders. The lack of available startups, services, and tooling at the terms of service and privacy layer of the technology sector demonstrates what a scam all of this has been. If startups cared about their users, we would have seen investment in tools and services that help us all make sense of things. Period.

I’ve sat in multi-day OAuth negotiation sessions on behalf of the White House, negotiating OAuth scopes on behalf of power customers. I’ve seen how strong armed the corporations can be, and how little defense the average consumer has. I’ve seen technology platforms intentionally complexify things over and over. I’ve also seen plain language terms of service that actually make sense–read Slack’s for an example. I’ve seen OAuth flows that protect a user’s interests, however I haven’t seen ANY investment in actually educating end-users about what OAuth is, and why it matters. You know, like the personal finance classes we all get in high school? (I wrote that shit in 2013!!!) Why aren’t we educating the average consumer about managing their personal data and privacy? Why aren’t we setting standards for OAuth scopes, and flows for technology platforms? Good question!

I’ve spent the last eight years trying to encourage platforms to be better citizens using their APIs. I’ve been aggregating and showcasing the healthy building blocks they can use to better serve their customers, and provide more observability into how they operate. At this point, I feel like the tech sector has had their chance. Y’all chose to willfully ignore the interests of end-users, and get rich on all of our data. I know you all will cry foul now that the regulatory winds are starting to blow. Too bad. You had your chance. I’m going to focus all my energy and resources into educating policy and lawmakers about how they can use APIs, OAuth, and other building blocks already in use to put y’all into check. There is no reason the average consumer can’t make an informed decision around the terms of service of the platforms they depend on, and intelligently participate in conversations around access to their data using OAuth. As Matt said, “it is a question of complexity and incentives for the business to be deliberately opaque.”–regulation and policy are how we shift that.

Sharing Your API First Principles

I’ve been publishing regular posts from the API design guides of API providers I’ve been studying. API providers who publish their API design guides tend to be further along in their API journey. These API providers tend to have more experience and insight, and are often worth studying further, and learning from. I’ve been getting a wealth of valuable information from the German fashion and technology company Zalando, who has shared some pretty valuable API First principles.

In a nutshell, Zalando’s API First principles focus on two areas:

  • define APIs outside the code first using a standard specification language
  • get early review feedback from peers and client developers

By defining APIs outside the code, Zalando wants to facilitate early review feedback, and also a development discipline that focuses service interface design on:

  • profound understanding of the domain and required functionality
  • generalized business entities / resources, i.e. avoidance of use case specific APIs
  • clear separation of WHAT vs. HOW concerns, i.e. abstraction from implementation aspects — APIs should be stable even if we replace complete service implementation including its underlying technology stack

Moreover, API definitions with standardized specification format also facilitate:

  • single source of truth for the API specification; it is a crucial part of a contract between service provider and client users
  • infrastructure tooling for API discovery, API GUIs, API documents, automated quality checks

Their guidance continues by stating:

An element of API First is also a review process to get early review feedback from peers and client developers. Peer review is important for us to get high quality APIs, to enable architectural and design alignment, and to support development of client applications decoupled from the service provider engineering life cycle.

It is important to learn that API First is not in conflict with the agile development principles that we love. Service applications should evolve incrementally — and so should their APIs.

Of course, our API specification will and should evolve iteratively in different cycles; however, each starts with draft status and early team and peer review feedback. An API may change and profit from implementation concerns and automated testing feedback. API evolution during the development life cycle may include breaking changes for features that are not yet productive, as long as we have aligned the changes with the clients. Hence, API First does not mean that you must have 100% domain and requirement understanding, and can never produce code before you have defined the complete API and gotten it confirmed by peer review.

On the other hand, API First obviously is in conflict with the bad practice of publishing an API definition and asking for peer review after the service integration, or even productive operation of the service, has started. It is crucial to request and get early feedback — as early as possible, but not before the API changes are comprehensive, with focus on the next evolution step, and have a certain quality (including API Guideline compliance), already confirmed via team internal reviews.

Zalando gets at one of the reasons I’m a big advocate for a design, mock, and document API lifecycle, forcing developers to realize their API vision, get feedback, and iterate upon their designs, before they ever consider moving into a production environment. I feel like there are many views of what API First means, and many do not focus on the process, and obsess too much over just API design. I think Zalando’s API design guide reflects this, where the guide is more about API guidance than it is just about the design of an API. It provides a wealth of wisdom and knowledge, but is still labeled as being about API design.
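To illustrate the “define APIs outside the code first” principle, here is a minimal sketch. The API, its fields, and the review check are all hypothetical, but they show the shape of a contract that peers can review before any implementation exists:

```python
import json

# A minimal, hypothetical OpenAPI 3.0 definition authored as data before any
# implementation code is written -- the artifact early peer review happens
# against. The resource and descriptions are made up for illustration.
api_definition = {
    "openapi": "3.0.0",
    "info": {"title": "Article API", "version": "1.0.0-draft"},
    "paths": {
        "/articles": {
            "get": {
                "summary": "List articles",
                "responses": {"200": {"description": "A page of articles"}},
            }
        }
    },
}

# A simple lint a review pipeline might run before a line of service code exists:
# flag any operation that lacks a summary for reviewers to read.
undocumented = [
    path for path, ops in api_definition["paths"].items()
    if any("summary" not in op for op in ops.values())
]
print(json.dumps(undocumented))  # [] -- every operation carries a summary
```

The point is that the contract is reviewable, mockable, and testable as data, long before the service behind it exists.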

Similar to pushing companies, organizations, institutions, and government agencies to do APIs in the first place, I’m encouraging more of them to begin publishing their API design guides as part of their journey. It seems like an important part of being able to articulate not just the API design guidelines, but also a place to articulate the overall reasons behind doing APIs in the first place, and the principles, ethics, and processes surrounding how we are doing APIs across the disparate teams that make things go around in our worlds.

Government Has Benefitted From Lack Of Oversight At The Social API Management Layers As Well

There are many actors who have benefitted from Facebook not properly managing their API platform, collecting so many data points, tracking so many users, and looking the other way as 3rd party developers put them to use in a variety of applications. Facebook did what was expected of them, and focused on generating advertising revenue by allowing their customers to target the many data points that are generated by the web and mobile applications developed on top of their platform.

I hear a lot of complaining that this is just about Democrats being upset that it was the Republicans who were doing this, when in reality there are so many camps that are complicit in this game that if you are focusing in on any single camp, you are probably looking to shift attention and blame from your own camp, in my opinion. I’m all for calling everyone out, even ourselves, for being so fucking complicit in this game where we allow ourselves to be turned into a surveillance marketplace. Something that benefits the Mark Zuckerbergs, Cambridge Analyticas, and the surveillance capitalist platforms, while also providing a pipeline to government, and law enforcement.

If government is going to jump on the bandwagon calling for change at Facebook, they need to acknowledge that they’ve been benefitting from the data free-for-all that is the Facebook API. We’ve heard stories about 3rd party developers building applications for law enforcement to target Black Lives Matter, all the way up to Peter Thiel’s (a Facebook investor) Palantir, who actively works with the NSA, and other law enforcement. If we are going to shine a light on what is happening on Facebook, let’s make it a spotlight, and call out ALL of the actors who have been sucking users’ data out of the API, and better understand what they have been doing. I’m all for a complete audit of what the RNC, DNC, and EVERY other election, policing, and government entity has been doing with the social data they have access to.

Let’s bring all of the activity out of the shadows. Let’s take EVERY Facebook application and publish it as part of a public database, with an API for making sense of who is behind each one. Let’s require every application to share the names of the individuals and business interests behind it, as well as its data retention, storage, and sharing practices. Let’s make the application database observable by the public, journalists, and researchers. If we are going to understand what is going on with our data, we need to turn the lights on in the application warehouse that is built on the Facebook API. Let’s not just point the finger at one group, let’s shine a spotlight on EVERYONE, and make sure we are actually being honest at this point in time, or we are going to see this same scenario play out every five years or so.

Acknowledging That Why I Do APIs Is Very Different Than Why Others Are Doing APIs

When I started doing API Evangelist in 2010 I was still very, very, very naive about why people were doing APIs. While I do not always expect everyone doing APIs to be ethical and sensible about their reasons for doing APIs, I have been regularly surprised at how many different views there are out there regarding why we should be doing APIs in the first place. The lesson for me is to never assume that someone is doing APIs for the same reasons I am doing APIs, because that will rarely ever be the case–hopefully minimizing the chances I’ll continue to be surprised down the road.

After watching Salesforce, then Amazon, Flickr, Twitter, Facebook, and Google Maps doing APIs, I saw the potential for APIs to open up access to resources like never before. I also saw the potential for not just opening up these new opportunities to developers, but OAuth was beginning to give end-users a voice in how their data and content could be put to use. I saw the opportunity for a new type of partnership between platforms, developers, and end-users emerging that could change how we do business online, how government works, and much, much more. Sadly my white male privilege, and powers of denial prevented me from seeing the many other ways in which APIs were being seen as an opportunity.

I find that the reasons people publish regarding why they are doing APIs rarely line up with the real reasons why they are actually doing APIs. Sometimes you can read between the lines by evaluating their overall business model, or lack of one. Other times you can find some telling signs present in the design of their API, as the paths, parameters, and technical details tend to tell a more accidentally honest view of what is happening behind the curtain. However, when it comes to the theater of “open APIs”, everything quickly becomes a funhouse of mirrors when it comes to trying to understand why someone is doing APIs, with the only constant being that nothing lasts forever–all APIs will eventually change, and go away, no matter what the reasons are behind them.

The reasons I recommend doing APIs centers around opening up access to valuable data, content, and algorithms in a secure and observable way, that protects the privacy of everyone involved. I’m an advocate for API literacy amongst everyone involved in the API conversation, even the non-technical end-user who is most likely being impacted by their existence. I’m team API because they can bring observability into some very black box algorithms, and closed door platforms, that are increasingly governing our lives. My belief in APIs is not purely based on their technical merit, but in that some proven processes involving them have been established, which introduce some healthy balance into how we deliver web, mobile, and device-based applications. It isn’t the technology, it is how us humans are using the technology.

Like other web-based technology, APIs have been weaponized, and identified as a valuable tool for exploitation. Platforms like Facebook and Twitter have demonstrated how they can help a platform grow and evolve, but when this growth is in the service of an advertising-focused business, they can be used to influence, nudge, and harass people in some very damaging ways. With all the exploitation, bad behavior, and looking the other way that occurs around API operations, I find it very difficult to consider representing the sector, but I do find it important that I continue making sense of what is happening. I just need to make sure that I remember that the reasons I’m doing APIs will almost always be different than why others are doing APIs, and I should never assume folks have the healthiest, and most meaningful intent behind operating their API platforms, and why they are reaching out to me.

Nexmo Manages Their OpenAPI 3.0 Definition Using Github

I’m big on supporting API providers that publish their OpenAPI definitions to Github. It is important for the wider API community that ALL API definitions are machine readable, and available in a way that can be forked, and integrated into continuous integration pipelines. I’m not even talking about the benefits to the API providers when it comes to managing their own API lifecycle. I’m just focusing on the benefits to API consumers, and helping make on-boarding, integration, and keeping in sync with the road map as frictionless as possible.

To help incentivize API providers doing this, I’m committed to writing up stories for each API provider that publishes their OpenAPI, APIs.json, or Postman Collections to Github. Bonus points if you are doing it in an interesting way that further benefits your operations, as well as your community. Today’s API provider to showcase is the SMS, voice, and phone verification API provider Nexmo, who tweeted at me the Github repository which contains the OpenAPI definition for their APIs. As they say, it is a work in progress, but it provides a damn good start for a machine readable definition of their API(s), and I mean c’mon, aren’t all of our APIs a work in progress?

Nexmo uses their API definition as “a single point of truth that can be used end-to-end”, which can be used for:

  • Planning: shared during product discussions for planning API functionality
  • Implementation: informs engineering during development
  • Testing: as the basis for testing or mocking API endpoints
  • Documentation: for producing thorough and interactive documentation
  • Tooling: to generate server stubs and client SDKs

Nexmo has adopted version 3.0 of the OpenAPI specification, which is forward leaning for an API provider. They also provide a list of resources and tooling, and make their API definitions available as Ruby packages for easier integration. Another thing they do that I think is interesting is that they list the owner and contributors for each API definition they have published, or are working on. This is a concept I fully support, and would like to bake into my own API Stack work. Working on API definitions is hard work, and we should be showcasing and supporting anyone who steps up to do the hard work, as well as providing a point of contact for each available definition.
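As a rough sketch of what profiling a definition like this can look like, here is how a discovery pipeline might count the paths and operations in an OpenAPI document. The definition below is a made-up stand-in, not Nexmo’s actual API:

```python
import json

# Hypothetical stand-in for an OpenAPI 3.0 document fetched from a provider's
# Github repository (here serialized as JSON for simplicity).
raw = json.dumps({
    "openapi": "3.0.0",
    "info": {"title": "SMS API", "version": "1.0.0"},
    "paths": {
        "/sms": {"post": {"summary": "Send an SMS"}},
        "/sms/{id}": {"get": {"summary": "Get an SMS"}},
    },
})

# Profile the definition: how many paths, and how many operations across them.
definition = json.loads(raw)
paths = definition.get("paths", {})
operations = sum(len(methods) for methods in paths.values())
print(f"{definition['info']['title']}: {len(paths)} paths, {operations} operations")
# -> SMS API: 2 paths, 2 operations
```

Run across hundreds of repositories, this kind of simple walk is what produces the path counts I track across entities and topics.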

I am going to spend some time going through Nexmo’s API definition, and learning more about OpenAPI 3.0. I will also be spending time going through their API developer portal, creating an APIs.json and Postman Collection for their API. I’m looking to add their API to the Postman API Network, as well as my API Stack, API Gallery, and other API aggregation, integration, and discovery projects I am working on. Thanks for bringing this to my attention Nexmo, and keep up the good work. I’ll keep paying attention to what you are doing, with Github and your OpenAPI definition being a conduit for guiding my attention.

A Regulatory Framework For Facebook And Other Platforms Is Already In Place

There is lots of talk this week about regulating Facebook after the Cambridge Analytica story broke. Individuals, businesses, lawmakers, and even Facebook are talking about how we begin to better regulate not just Facebook, but the entire data industry. From my perspective as the API Evangelist, the mechanisms are already in place, they just aren’t being used to their fullest by providers, with no sufficient policy in place at the federal level to incentivize healthy behavior by API providers, data brokers, and 3rd party application developers.

API Management Is Already In Place
Modern API platforms, Facebook included, leverage what is called an API management layer, to help manage the applications being developed on their platforms. Every developer who wants to access platform APIs has to sign up, submit the name and details of their application(s), and then receive an API key which they have to include with each call they make to a platform’s API when requesting ANY data, content, or usage of an algorithm. This means that all the platforms should already be in tune with every application that is using their platform, unless they allow internal groups, and specific partners to bypass the API management layer.
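A minimal sketch of this API management check, with hypothetical application records and key names, might look like:

```python
# Hypothetical sketch of the API management layer described above: every
# request must carry a registered application's key before any data flows.
registered_apps = {
    "key-abc123": {"app": "Quiz App", "developer": "dev@example.com"},
}

def authorize(request_headers):
    """Look up the API key sent with a request; reject unknown applications."""
    key = request_headers.get("X-Api-Key")
    app = registered_apps.get(key)
    if app is None:
        return 401, None  # unknown or missing key -- no access
    return 200, app       # a known, registered application

status, app = authorize({"X-Api-Key": "key-abc123"})
print(status, app["app"])          # 200 Quiz App
status, _ = authorize({})
print(status)                      # 401
```

The point is that the platform always knows which application is behind each request, because nothing moves without a key.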

An Application Review Process
Every application applying for API keys has to go through some sort of review process. The bar for this review process might be as low as requiring you to have an email address, or as high as submitting drivers and business licenses, and verifying how you are storing your data. Facebook has an application review process in place, and it is something they are promising to tighten up a bit in response to Cambridge Analytica. How loose or tight a platform is with their application review process is always in alignment with the platform’s business model, and how they value their users’ privacy and security.

Governing Data Access Using OAuth Scopes
Most platforms use a standard called OAuth for enabling 3rd party developers to access a platform user’s data. Each quiz, game, and application has to ask a user to authorize access to their data, and OAuth scopes govern how much, or how little, data can be accessed. OAuth scopes are defined by the platform, and negotiated by 3rd party developers, and end-users. Facebook has tightened their OAuth scopes in recent years, but has more work ahead of them to make sure they protect end-users’ interests, that 3rd party developers are properly following healthy OAuth practices, and that end-users are aware of what they are doing when participating in OAuth flows.
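A hypothetical sketch of how scope enforcement works at this layer; the scope names are made up for illustration:

```python
# The scopes a user actually granted when they authorized the application --
# the token the 3rd party developer holds carries only these.
granted_scopes = {"profile:read", "posts:read"}

def allowed(required_scope):
    """Check an API call's required scope against what the user granted."""
    return required_scope in granted_scopes

print(allowed("profile:read"))   # True -- the user agreed to this
print(allowed("friends:read"))   # False -- never granted, so no access
```

Everything hinges on the scope definitions themselves being granular and understandable, which is exactly where platforms have more work to do.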

Logging Of Every API Access
When it comes to API access, everything is logged, and you know exactly who has accessed what, and when. Everything an application does with Facebook’s, or any other platform’s, data via their API is logged. This is standard practice across the industry. Since every application passes their API key with each call, you know the details of what each application is doing in real time. Saying you aren’t aware of what an application is doing with users’ data doesn’t hold up in an API-driven world, it just means that you aren’t actively paying attention to, and responding to, log activity in real time.
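A bare-bones sketch of this kind of per-request logging, with made-up keys and paths:

```python
import json
import time

# Hypothetical sketch of per-request API logging: every call is attributable
# to an application key, a method, a path, and a timestamp.
access_log = []

def log_request(api_key, method, path, status):
    """Append one fully attributable entry per API call."""
    access_log.append({
        "timestamp": time.time(),
        "api_key": api_key,
        "method": method,
        "path": path,
        "status": status,
    })

log_request("key-abc123", "GET", "/users/42/friends", 200)
log_request("key-abc123", "GET", "/users/42/photos", 200)

# Who accessed what, and when, is always answerable from the log.
print(json.dumps([entry["path"] for entry in access_log]))
```

With a log like this, "we didn't know what the application was doing" is a choice, not a technical limitation.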

Analysis And Reporting Of Usage
Detailed analysis and reporting is baked into API management, providing visual dashboards for API platforms, application developers, and end-users to understand how data is being accessed and put to use by applications. Understanding activity at the API management layer is one of the fundamental elements of striking a balance between access and control over platform data consumption. APIs aren’t secure if nobody is paying attention to what is happening, and actively responding to bad behavior, and undesirable access. Platforms, 3rd party developers, as well as end-users should all be equipped, and educated regarding the analysis and reporting that is available via any platform on which they operate.
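A quick sketch of the kind of aggregation this reporting layer performs, using a made-up access log:

```python
from collections import Counter

# Hypothetical access log entries, as produced at the logging layer.
access_log = [
    {"api_key": "key-abc123", "path": "/users"},
    {"api_key": "key-abc123", "path": "/users"},
    {"api_key": "key-xyz789", "path": "/photos"},
]

# Aggregate by application and by resource, so unusual consumption stands out
# on a dashboard rather than staying buried in raw logs.
by_app = Counter(entry["api_key"] for entry in access_log)
by_path = Counter(entry["path"] for entry in access_log)

print(by_app.most_common(1))   # [('key-abc123', 2)]
print(by_path.most_common(1))  # [('/users', 2)]
```

The same counts can drive dashboards for the platform, the developer, and ideally the end-user whose data is being consumed.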

Application Directory And Showcase
Many API platforms actively showcase the applications that are built upon their APIs, sharing the interesting applications that are available, and allowing consumers to put them to use. When it comes to regulation, this existing practice just needs to be expanded upon, requiring ALL applications to be registered, whether for internal, partner, or external use. Then also expanding upon the data that is required and published as part of the application submission, review, and operation practice. Additionally, the application directory and showcase should be operated and audited by external entities, who aren’t beholden to platform interests, providing a more transparent, observable, and accountable approach to understanding what an application’s motivations are, and fairly reporting upon all platform usage.

Recurring Application Auditing Practices
Building on the existing application review process, and in conjunction with the application directory, all active applications should be subject to a recurring review process triggered by time, and by different levels of consumption. You already see some of this from providers like Twitter, who limit applications to 1M OAuth tokens, after which you are subject to further scrutiny. This is something that should happen regularly for all applications, pushing for an annual review and audit of their operations, as well as at the 10K, 100K, and 1M OAuth token levels, pushing to further understand the data gathering, storage, funding, and partner relationships in play behind the scenes with each application built on top of a platform’s API.
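A sketch of how these threshold-triggered audits might be implemented, using the 10K, 100K, and 1M token levels mentioned above; the function and its shape are illustrative, not any platform’s actual mechanism:

```python
# OAuth token count levels at which an application gets flagged for another
# round of review (hypothetical implementation of the thresholds above).
AUDIT_THRESHOLDS = [10_000, 100_000, 1_000_000]

def audits_due(token_count, last_audited_at=0):
    """Return the thresholds crossed since the application was last audited."""
    return [t for t in AUDIT_THRESHOLDS if last_audited_at < t <= token_count]

print(audits_due(150_000))          # [10000, 100000] -- two reviews overdue
print(audits_due(150_000, 10_000))  # [100000] -- one new threshold crossed
print(audits_due(5_000))            # [] -- still below the first level
```

Pairing this with a time-based annual trigger covers both fast-growing and long-dormant applications.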

Breach Notifications And Awareness
Every platform should be required to communicate with users, and the public, regarding any breach, within a 72 hour period. We see this practice taking hold in Europe around the GDPR regulation, and it is something that needs to be adopted throughout the United States. This shouldn’t be limited to large scale and malicious breaches, but should also include any terms of service, privacy, or security violations incurred by individual applications. Publishing and sharing the details of the incident as part of the application directory and showcase. Highlighting not just the healthy behavior on the platform, but being open and transparent regarding the bad behavior that occurs. Bringing bad actors out of the shadows, and holding platforms, as well as 3rd party developers, accountable for what goes on behind the scenes.

API Access For API Management Layer
All aspects of a platform should have APIs. Every feature available in the UI of a web or mobile application should have a corresponding API. This applies to the platform as well, requiring that the entire API management, logging, and application directory layers outlined above all have APIs. Providing visibility into all the applications that are accessing a platform, as well as an understanding of how they are putting API resources to work, and consuming valuable data and content. Making the platform more observable and accountable, allowing it to be audited by 3rd party researchers, journalists, regulators, and other industry level actors. Of course, not everyone would gain access to this layer by default, requiring different tiers of access, but still holding ALL API consumers to the same levels of accountability, no matter which tier they are working at.

Service Composition For Different Types Of Users
Another fundamental aspect of API management that exists across the tech sector, but isn’t always discussed or understood, is the different tiers of access available depending on who you are. API service composition at the API management layer is what dictates that new API consumers only get access to a limited amount of resources, while more trusted partner and internal groups get access to higher volumes, as well as a greater number of API resources. Service composition is what limits the scope of API terms of service, privacy, and security breaches, and also allows for the trusted auditing, reporting, and analysis of how a platform operates by researchers, regulators, and other 3rd party actors. Every platform has different levels of API access as defined through their service composition at the API management layer. Sometimes this is reflected on their plans and pricing pages, but often times when there is no clear business model, these levels of access operate in the shadows.
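A sketch of what service composition tiers might look like in practice; the tier names, rate limits, and resources are illustrative, not any specific platform’s plans:

```python
# Hypothetical service composition at the API management layer: each tier of
# consumer gets a different rate limit and a different slice of resources.
tiers = {
    "public":   {"rate_limit_per_hour": 1_000,
                 "resources": {"search"}},
    "partner":  {"rate_limit_per_hour": 50_000,
                 "resources": {"search", "users"}},
    "internal": {"rate_limit_per_hour": 1_000_000,
                 "resources": {"search", "users", "admin"}},
}

def can_access(tier, resource):
    """Does this tier of consumer get access to this resource at all?"""
    return resource in tiers[tier]["resources"]

print(can_access("public", "users"))   # False -- new consumers start limited
print(can_access("partner", "users"))  # True -- trusted partners get more
```

Because the composition is just configuration at the management layer, it could also define a regulator or researcher tier with read access to the reporting APIs.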

Looking At Existing Solutions When it Comes To Platform Regulations
The cat is out of the bag. APIs have been used to measure, understand, and police access to data, content, and algorithms for over a decade. Nothing I’ve outlined is groundbreaking, or suggests anything new be developed. The only changes required are at the government policy level, and a change in behavior at the platform level. The tooling and services to implement the regulation of platforms already exist, and in most cases are already in place–it just happens that they are serving the platform masters, and have not been enlisted to serve end-users, and most definitely not at the regulatory level. API management is how platforms like Facebook, Twitter, and others have gotten to where they are, by incentivizing development and user growth via API integrations. The problem is this often involves turning the API faucet completely on, not limiting access based upon privacy or security, and only acting in service of the business model when it makes sense.

Facebook has the ability to understand what Cambridge Analytica was doing. They can identify healthy and not so healthy practices via API consumption long before they become a problem. The problem is that their business model hasn’t been in alignment with taking control of things at the API management level–it encourages bad behavior, rather than limiting it. There also isn’t any API access to this API management layer, and no observability into the applications that are built on top of existing APIs. There is no understanding at the regulatory level that these things are possible, and there are no policies in place requiring platforms to have APIs, or to provide industry level regulatory access to their API management layers. You can see the beginnings of this emerge with banking regulatory efforts out of the UK, which require the API management layer be operated by a 3rd party entity, with all actors registered in the API management directory, and accountable to the community and the industry. Setting a precedent for regulatory change at the API management layer, and providing a working blueprint for how this can work.

API management is over a decade old, and something that is baked into the Amazon, Azure, and Google clouds. It is standard practice across all leading API providers like Facebook, Twitter, Google, Amazon, and others. The concept of regulating data, content, and algorithmic access and usage at the API layer isn’t anything new. The only thing missing is the policy, and the regulatory mandate to make it so. Americans are acting like this is something new, but we see a precedent already being set in Europe through GDPR, and PSD2 in banking. The CFPB in DC was already working on moving this conversation forward until the Trump White House began unraveling things at the agency. Regulating our personal data won’t be a technical or business challenge to solve; it will ultimately be about the politics of how we operate our platforms in a more sustainable way, and truly begin protecting the privacy and security of users (or not).

Investment In API Security Will Continue To Fall Short While There Is No Breach Accountability

I have a lot of conversations with folks down in the trenches about API security, and what they are doing to be proactive when it comes to keeping their API infrastructure secure. The will and the desire amongst the folks I talk to regarding API security is present. They want to do what it takes to keep their APIs secure, but many have their hands tied because of a lack of resources to actually do what is needed. Every API team I know is short-handed, and doing the best they can with what they have available to them. A lack of investment in API security isn’t always intentional; it ends up just being a reality of the priorities on the ground within the organizations they work in.

While I’m sure leaders within these companies are concerned about breaches within their API infrastructure, the urgency to invest in this area isn’t always a priority. Despite an increase in high profile, often API-induced breaches, IT and API groups are still not given the amount of resources they need to do something about potential security incidents. Other than the stress and bad press of a breach, there really are no consequences in the United States. We have seen this play out over and over, and when high profile breaches like Equifax go unpunished, other corporate leaders fully understand that there will be no consequences, so why invest in preventative measures–we will just respond to it “if” it happens.

This is why GDPR, and other similar legislation, will become important to the API security industry. Without real civil or criminal penalties for breaches, and even heavier penalties for poorly handled breaches, companies just aren’t going to care. Data is just a replaceable commodity, and a company can recover from the hit to their brand when a breach does occur. Making the investment in proactive API security training, staffing, services, and processes seem unnecessary. Reflecting how health care is handled in this country, with 95% of the investment going to treating things after they happen, and about 5% going to preventative care. Hoping all along that you don’t get sick, or have a breach.

I can talk until I am blue in the face to business leaders about API security, and make them aware of healthy practices, but if there isn’t an incentive to invest in API security, it will never happen. At this point I feel that API security is more a reflection of a wider systemic illness around how we view data, and that country and industry level policy is where change needs to occur. I will keep showcasing specific building blocks of an API security strategy, as well as the services and tools to help you implement your strategy, but I feel like the most meaningful change will have to occur at the policy level. Otherwise business leaders will never prioritize API security, leaving all of OUR data vulnerable to exploitation–it is just a cost of doing business at this point.

Facebook's Business Model Is Out Of Alignment With Their API Management Layer

All the controls for Facebook to have done something about the Cambridge Analytica situation are already in place. The Facebook API management layer has everything needed to have prevented the type of user data abuse we’ve experienced, and honestly the user data abuse that happens in many other applications, and will continue to occur. The cause of this behavior is rooted in Facebook’s business model, and a wider culture that is fueled by venture capital investment, and backdoor data brokering. It is just how the big data / app economy is funded, and when your business model is all based upon advertising, user engagement to generate data / signals, and leveraging that behavior for targeting–you are never going to rein in your API management layer.

In alignment with my other story today around the lack of investment in security, Facebook just doesn’t have the incentive to invest in policing their API management layer. There is no motivation to thoroughly vet new applications, let alone regularly review and audit what each application is doing as they approach 10K, 100K, or 270K tokens. Sure, you can’t get at all the friends data anymore, so they did work to tighten things down at the API management schema layer, but most action we’ve seen has been in response to bad situations, and not preventative. If you are properly managing access to your APIs, and have the resources to monitor and respond to activity in real time, you are going to see the bad actors, and respond accordingly. Minimizing the damage that occurs to end-users, developing a robust set of patterns to keep an eye out for, and just getting better at keeping your platform data consumption tight.
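The review-as-tokens-accumulate idea can be sketched in a few lines. The thresholds below echo the token counts mentioned above, but the app names and data are entirely made up – this is an illustration of the pattern, not anyone's actual process.

```python
# Hypothetical token-count thresholds at which an application should get a
# fresh human review at the API management layer.
REVIEW_THRESHOLDS = [10_000, 100_000, 270_000]

def apps_due_for_review(apps: dict) -> list:
    """Return (app, highest_threshold_crossed) pairs for apps needing review."""
    flagged = []
    for app, token_count in apps.items():
        crossed = [t for t in REVIEW_THRESHOLDS if token_count >= t]
        if crossed:
            flagged.append((app, max(crossed)))
    return flagged

# Illustrative token counts pulled from a platform's management layer.
apps = {"quiz-app": 270_000, "todo-app": 1_200, "analytics-app": 15_000}
print(apps_due_for_review(apps))
```

The point is that the data needed to trigger these reviews already exists in the API management layer; all that is missing is the will to act on it.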

The problem is when your primary business model is centered around advertising, and providing a wealth of controls for targeting users based upon the data points they provide, you want as many apps, as many data points, and as many users as you possibly can. You have no incentive to police what your applications are doing, as long as it drives the bottom line. You won’t invest in properly mapping out and restricting your schema, understanding what applications are doing, and sorting out the good from the bad. If it is fueling the delivery of advertising, allowing for more data points to target users with, driving clicking, sharing, and eyeballs on advertising, then it is by definition good. API management becomes an extra cost that doesn’t merit more investment, especially when it will ultimately hurt the bottom line.

We see this same behavior playing out via Twitter, Google, and other platforms. They will only manage their APIs if something directly competes with them, or makes for bad publicity. This is why Twitter didn’t rein in their bot networks until recently, and why Google doesn’t rein in the fake news, conspiracy, and propaganda networks. When advertising is your business model, you want the API faucet open wide, with very little filter on what flows. It isn’t a question of what mechanisms we can put in place to bring some balance–these are already in place in the form of API management layers. The challenge is bringing business models into alignment with this layer, and incentivizing platforms to behave differently. Use Amazon as an example of this. Imagine if Amazon EC2 or S3 were advertising-driven–what type of bad behavior would we see? I’m not saying bad things don’t happen on these platforms, but they are managed much better, with different incentive models in play.

This conversation reflects one of the reasons I do API Evangelist. I don’t believe everyone should be doing APIs. However, the cat is out of the bag. The APIs are in place, and driving the web, mobile, and device applications across our Internet connected lives. The mechanisms are in place to monitor, limit, control, and understand what applications are doing with platform data. We just have to push for more observability at the API layer, and acknowledge that the business model for these platforms, and the wider technology sector, is out of balance. The biggest challenge in changing all of this behavior is that entrepreneurs and investors have gotten a taste of the value that can be generated at this layer, and it won’t be something they give up easily. There is a lot of money to be made via platforms when you can easily look the other way and say, “I didn’t know that was happening, so I shouldn’t be responsible.”

SendGrid Managing Their OpenAPI Using Github

This is a post that has been in my API notebook for quite a while. I feel it is important to keep showcasing the growing number of API providers who are not just using OpenAPI, but also managing their definitions on Github, so I had to make the time to talk about the email API provider SendGrid managing their OpenAPI using Github. Adding to the stack of top tier API providers managing their API definitions in this way.

SendGrid is managing announcements around their OpenAPI definition using Github, allowing developers to sign up for email notifications around releases and breaking changes. You can use the Github repository to stay in tune with the API roadmap, and contribute feature requests, submit bug reports, and even submit a pull request with the changes you’d like to see. Going beyond the API definition just being about API documentation, and actually driving the API roadmap and feedback loop.

This approach to managing an API definition, while also making it a centerpiece of the feedback loop with your community, is why I keep beating this drum, and showcasing API providers who are doing this. It is a way to manage the central API contract, where the platform provider and API consumers both have a voice, producing a machine readable artifact that can then be used across the API lifecycle, from design to deprecation. Elevating the API definition beyond just a byproduct of creating API docs, and making it a central actor in everything that occurs as part of API operations.
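To make the “machine readable artifact” point concrete, here is a minimal sketch of walking an OpenAPI definition once it has been pulled down from a Github repository. The definition fragment below is invented for illustration – it is not SendGrid's actual file.

```python
# A made-up OpenAPI 3.0 fragment, standing in for a definition checked
# out of an API provider's Github repository.
import json

openapi = json.loads("""
{
  "openapi": "3.0.0",
  "info": {"title": "Example Email API", "version": "1.0.0"},
  "paths": {
    "/mail/send": {"post": {"summary": "Send an email"}},
    "/templates": {"get": {"summary": "List templates"}}
  }
}
""")

# Enumerate the surface area described by the contract -- the same walk
# could feed documentation, test generation, monitoring, or discovery.
for path, methods in openapi["paths"].items():
    for method, operation in methods.items():
        print(f"{method.upper()} {path} - {operation['summary']}")
```

Once the definition is treated as data like this, every stop along the lifecycle can consume the same contract, which is what makes managing it on Github as a shared artifact so valuable.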

I’m still struggling to convince API providers that they should be adopting OpenAPI, and that it is much more than an artifact driving interactive documentation. Showcasing companies like SendGrid using OpenAPI, as well as their usage of Github, is an important part of convincing other API providers to do the same. If you want me to write about your API, and what you are working to accomplish with your API resources, then publish your OpenAPI definition to Github, and engage with your community there. It may take me a few months, but eventually I will get around to writing it up, and incorporating your story into my toolbox for changing behavior in our API community.

Breaking Down Your Postman API Collections Into Meaningful Units Of Compute

I’m fascinated with the unit of compute as defined by a microservice, OpenAPI definition, Postman Collection, or other way of quantifying an API-driven resource. Asking the question, “how big or how small is an API?”, and working to define the smallest unit of compute needed at runtime. I do not feel there is a perfect answer to any of these questions, but that doesn’t mean we shouldn’t be asking them, and packaging up our API definitions in a more meaningful way.

As I was profiling APIs, and creating Postman Collections, the Okta team tweeted at me their own approach to delivering their APIs. They tactically place Run in Postman buttons throughout their API documentation, as well as provide a complete listing of all the Postman Collections they have. Showcasing that they have broken up their Postman Collections along what I’d consider to be service lines. Providing small, meaningful collections for each of their user authentication and authorization APIs:

  • Authentication
  • API Access Management (OAuth 2.0)
  • OpenID Connect
  • Client Registration
  • Sessions
  • Apps
  • Events
  • Factors
  • Groups
  • Identity Providers (IdP)
  • Logs
  • Admin Roles
  • Schemas
  • Users
  • Custom SMS Templates

Okta’s approach delivers a pretty coherent, microservices approach to crafting their Postman Collections, providing separate API runtimes for each service they bring to the table. Which I think gets at what I’m looking to understand when it comes to defining and presenting our APIs. It can be a lot more work to create your Postman Collections like this, rather than creating one single collection with all API paths, but from an API consumer standpoint, I’d rather have them broken down like this. I may not care about all of the APIs, and may just be looking to get my hands on a couple of services–why make me wade through everything?

I have imported the Postman Collections for the Okta API, and added them to my API Stack research. I’m going to convert them into OpenAPI definitions so I can use them beyond just Postman. I will end up merging them all back into a single OpenAPI definition, and a single Postman Collection for all the API paths. However, I will also be exploding them into individual OpenAPIs and Postman Collections for each individual API path, going well beyond what Okta has done. Further distilling down each unit of compute, allowing it to be profiled, executed, streamed, or otherwise acted upon in isolation, without the constraints of the other services surrounding it.
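The exploding step described above can be sketched simply. This is my own rough splitting strategy, not an official tool, and the example definition below is a made-up fragment rather than Okta's actual API; the field names follow OpenAPI 3.0.

```python
# Split one OpenAPI definition into a standalone definition per path,
# each carrying a copy of the shared top-level metadata.
import copy

def explode_by_path(definition: dict) -> dict:
    """Return a map of path -> standalone OpenAPI definition for that path."""
    exploded = {}
    for path, operations in definition.get("paths", {}).items():
        single = copy.deepcopy(definition)
        single["paths"] = {path: operations}
        exploded[path] = single
    return exploded

# Illustrative two-path definition standing in for a real service.
definition = {
    "openapi": "3.0.0",
    "info": {"title": "Users API", "version": "1.0.0"},
    "paths": {
        "/users": {"get": {"summary": "List users"}},
        "/users/{id}": {"get": {"summary": "Get a user"}},
    },
}

units = explode_by_path(definition)
print(sorted(units))  # ['/users', '/users/{id}']
```

A real version would also need to carry along only the components and security schemes each path references, but even this naive split yields units of compute that can be profiled and executed in isolation.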

General Data Protection Regulation (GDPR) Forcing Us To Ask Questions About Our Data

I’ve been learning more about the EU General Data Protection Regulation (GDPR) recently, and have been having conversations about compliance with companies in the EU, as well as the US. In short, GDPR requires anyone working with personal data to be up front about the data they collect, to make sure what they do with that data is observable to end-users, and to take a privacy and security by design approach when it comes to working with all personal data. While the regulation seems heavy handed and unrealistic to many, it really reflects a healthy view of what personal data is, and what a sustainable digital future will look like.

The biggest challenge with becoming GDPR compliant is the data mess most companies operate in. Most companies collect huge amounts of data, believing it is essential to the value they bring to the table, with no real understanding of everything that is being collected, or the logical reasons why it is gathered, stored, and kept around. A “gather it all”, big data mentality has dominated the last decade of doing business online. Database groups within organizations hold a lot of power and control because of the data they possess. There is a lot of money to be made when it comes to data access, aggregation, and brokering. It won’t be easy to unwind and change the data-driven culture that has emerged and flourished in the Internet age.

I regularly work with companies who do not have coherent maps of all the data they possess. If you asked them for details on what they track about any given customer, very few would be able to give you a consistent answer. Doing web APIs has forced many organizations to think more deeply about the data they possess, and how they can make it more discoverable, accessible, and usable across systems, web, mobile, and device applications. Even with this opportunity, most large organizations are still struggling with what data they have, where it is stored, and how to access it in a consistent, and meaningful way. Database culture within most organizations is just a mess, which contributes to why so many are freaking out about GDPR.

I’m guessing many companies are worried about complying with GDPR, as well as being able to even respond to any sort of regulatory policing event that may occur. This fear is going to force data stewards to begin thinking about the data they have on hand. I’ve already had conversations with some banks who are working on PSD2 compliant APIs, and are working in tandem on GDPR compliance efforts. Both are making them think deeply about what data they collect, where it is stored, and whether or not it has any value. Something I’m hoping will force some companies to stop collecting some of this data altogether, because it just won’t be worth justifying its existence in the current cyber(in)secure, and increasingly accountable regulatory environment.

Doing APIs and becoming GDPR compliant go hand in hand. To do APIs you need to map out the data landscape across your organization, something that will contribute to GDPR compliance. To respond to GDPR events, you will need APIs that provide access to end-users’ data, and that leverage API authentication protocols like OAuth to ensure partnerships and 3rd party access to end-users’ data are accountable. I’m optimistic that GDPR will continue to push forward healthy, transparent, and observable conversations around our personal data. Ones that focus on, and include, the end-users whose data we are collecting, storing, and oftentimes selling. I’m hopeful that the stakes become higher regarding the penalties for breaches, and the shady brokering of personal data, and that GDPR becomes the normal mode of doing business online in the EU, and beyond.
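The “APIs that provide access to end-users’ data” piece can be sketched as a simple subject access report, assembled from the internal APIs that front each system of record. Everything here is hypothetical – the system names, lookup functions, and fields are illustrative, not a prescribed GDPR implementation.

```python
# Assemble a GDPR-style subject access report by asking each internal
# system (fronted by an API) what it holds about a user.
def subject_access_report(user_id: str, systems: dict) -> dict:
    """Gather everything each internal system of record holds about a user."""
    report = {"user_id": user_id, "systems": {}}
    for name, lookup in systems.items():
        record = lookup(user_id)
        if record:  # only include systems that actually hold data on this user
            report["systems"][name] = record
    return report

# Stand-ins for internal API calls -- real systems would be HTTP requests
# authenticated on behalf of the end-user via OAuth.
systems = {
    "crm": lambda uid: {"email": "user@example.com"} if uid == "u-1" else None,
    "billing": lambda uid: None,  # nothing stored for this user
}
print(subject_access_report("u-1", systems))
```

Having this map of systems is exactly the data landscape work that doing APIs forces on an organization, which is why the two efforts reinforce each other.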

Facebook, Cambridge Analytica, And Knowing What API Consumers Are Doing With Our Data

I’m processing the recent announcement by Facebook to shut off Cambridge Analytica’s access to its valuable social data. The story emphasizes the importance of having real time awareness of, and responses to, API consumers at the API management level, as well as the difficulty in ensuring that API consumers are doing what they should be with the data and content being made available via APIs. Managing access to platforms using APIs is more art than science, but there are some proven ways to help mitigate serious abuses, identify the bad actors early on, and prevent their operation within the community.

While I applaud Facebook’s response, I’m guessing they could have taken more action earlier on. Their response is more about damage control to their reputation, after the fact, than it is about preventing the problem from happening. Facebook most likely had plenty of warning signs regarding what Aleksandr Kogan, Strategic Communication Laboratories (SCL), and their political data analytics firm, Cambridge Analytica, were up to. If they didn’t, then that is a problem in itself, and Facebook should be investing in more policing of their API consumers’ activity, as they claim they are doing in their release.

If Aleksandr Kogan had that many OAuth tokens for Facebook users, then Facebook should have been up in his business, better understanding what he was doing, where his money came from, and who his partners were. I’m guessing Facebook probably had more knowledge, but because it drove traffic, generated ad revenue, and was in alignment with their business model, it wasn’t a problem. They were willing to look the other way with the data sharing that was occurring, until it became a wider problem for the election, our democracy, and in the press. Facebook should have more awareness, oversight, and enforcement at the API management layer of their platform.

This situation highlights another problem of doing APIs: ensuring API consumers are behaving appropriately with the data, content, and algorithms they are accessing. It can be tough to police what a developer does with data once they’ve pulled it from an API–where they store it, and who they share it with. You just can’t trust that all developers will have the platform’s, as well as the end user’s, best interest in mind. Once the data has left the nest, you really don’t have much control over what happens with it. There are ways you can identify unhealthy patterns of consumption via the API management layer, but Aleksandr Kogan’s quizzes probably appeared as a normal application pattern, with no clear signs of the relationships, and data sharing going on behind the scenes.

While I sympathize with Facebook’s struggle to police what people do with their data, I also know they haven’t invested in API management as much as they should have, and they are more than willing to overlook bad behavior when it supports their bottom line. The culture of the tech space supports and incentivizes this type of bad behavior from platforms, as well as from consumers like Cambridge Analytica. This is something that regulations like GDPR out of the EU are looking to correct, but the culture in the United States is all about exploitation at this level–that is, until it becomes front page news, then of course you act concerned, and begin acting accordingly. The app, big data, and API economy runs on the generating, consuming, buying, and selling of people’s data, and this type of practice isn’t going to go away anytime soon.

As Facebook states, they are taking measures to rein in bad actors in their developer community by being more strict in their application review process. I agree, a healthy application review process is an important aspect of API management. However, this does not address the regular review of application usage at the API management level, assessing their consumption as they accumulate access tokens to more users’ data, and go viral. I’d like to have more visibility into how Facebook will be regularly reviewing, assessing, and auditing applications. I’d even go so far as requiring more observability into ALL applications that are using the Facebook API, providing a community directory that will encourage transparency around what people are building. I know that sounds crazy from a platform perspective, but it isn’t, and it would actually force Facebook to know their customers.

If platforms truly want to address this problem they will embrace more observability around what is happening in their API communities. They would allow certified and verified researchers and auditors to get at the application level consumption data available at the API management layer. I’m sorry y’all, self-regulation isn’t going to cut it here. We need independent 3rd party access at the platform API level to better understand what is happening, otherwise we’ll only see platform action after problems occur, and when major news stories are published. This is the beauty / ugliness of APIs. The cat is out of the bag, and platforms need them to innovate, deliver resources to web, mobile, and device applications, as well as remain competitive. APIs also provide the opportunity to peek behind the curtain, better understand what is happening, and profile the good and the bad actors within each ecosystem–let’s take advantage of the good here, to help regulate the bad.

68 Stops In A Comprehensive API Strategy Transit Map

I am refining my comprehensive list of stops that I highlight as part of the API lifecycle, or what I call API Transit–the stops that each of the services we deliver will “pass through” at some point. I’m sharing this list with other team members as part of existing consulting arrangements I’m engaged in, as well as baking it into my API Transit work for some workshops I’m doing in April. All of these are available in the API lifecycle section of API Evangelist, but I will be pushing them to become a first class citizen on the home page of API Evangelist once again.

Here are 68 potential stops I’d consider for any microservice to be exposed to, as part of a larger API transit strategy map:

  • Definitions - From the simplest definition of what a service does, all the way to the complete OpenAPI definition for each service.
  • Design - The consistent design of each service, leveraging existing patterns, as well as the web, to deliver each service.
  • Versioning - Plan for handling changes, and the inevitable evolution of each service, its definitions, and supporting code.
  • DNS - The domain addressing being used for routing, management and auditing of all service traffic.
  • Database - The backend database schema, infrastructure, and other relevant information regarding how data is managed.
  • Compute - How the underlying compute is delivered for each service, using containers, serverless, cloud, and on-premise infrastructure.
  • Storage - How storage of all heavy objects, media, and other assets involved with the delivery of services is handled.
  • Deployment - Details on how services are developed and deployed, including frameworks and other libraries being used.
  • Orchestration - Defining what the build, syndication, and orchestration of service deployments and integrations look like, and how they are executed.
  • Dependencies - Identifying all backend service, code and infrastructure dependencies present for each service.
  • Search - Strategy for understanding what search will look like for each individual service, and how it will play a role in larger, more comprehensive search across services.
  • Proxy - What proxies are in place to translate, cache, stream, and make services more efficient and secure?
  • Gateway - What gateways are in place to defend, protect, route, translate, and act as gatekeeper for service operations?
  • Virtualization - Details on how services and their underlying data are mocked, virtualized, sandboxed, and made available for non-production usage.
  • Authentication - What is involved with authentication across all services, including key-based approaches, basic authentication, JWT, and OAuth solutions.
  • Management - Information about how services are composed, limited, measured, reported upon, and managed consistently across all services.
  • Logging - What is being logged as part of service operations, and what is required to participate in overall logging strategy?
  • Portal - The strategy for how all services are required to be published to one or many public, private, and partner developer portals.
  • Documentation - The requirements for what documentation is expected as part of each service presence, defining what the services delivers.
  • Support - Relevant support channels, points of contact, and best practices for receiving support as an API consumer.
  • Communications - Information about the communication strategy around each service, and how blogs, social, and other channels should be leveraged.
  • Road Map - Details on a services road map, and what the future will hold, providing individual service, as well as larger organizational details on what is next.
  • Issues - Expectations, communications, and transparency around what the current bugs, issues, and other active problems exist on a platform.
  • Change Log - Translating the road map, and issues into a log of activity that has occurred for each service, providing a history of the service.
  • Monitoring - What is required when it comes to monitoring the availability and activity of each individual service.
  • Testing - The tactical, as well as strategic testing that is involved with each service, ensuring it is meeting service level agreements.
  • Performance - How the performance of each service is measured and reported upon, providing a benchmark for the quality of service.
  • Observability - What does observability of the service look like, providing transparency and accountability using all of its existing outputs?
  • Caching - What caching exists at the server, CDN, and DNS layers for each service to provide a higher level of performance?
  • Encryption - Details regarding what is expected regarding encryption on disk, as well as in transport for each service.
  • Security - Detailed strategy and processes regarding how each service is secured on a regular basis as part of operations.
  • Terms of Service (TOS) - Considerations, and legal requirements applied to each service as part of the overall, or individual terms of services.
  • Privacy - Details regarding the privacy of platform owners, developers, and end users as it pertains to service usage.
  • Service Level Agreements (SLA) - Details regarding what service levels are required to be met when it comes to partner and public engagements.
  • Licensing - Information about backend server code, API surface area, data, and client licensing in play.
  • Branding - What branding requirements are in place for the platforms and its partners when it comes to the service.
  • Regulation - Information regarding regulations in place that affect service operations, and are required as part of its usage.
  • Discovery - How are services catalogued and made discoverable, making them accessible to other systems, developers, as well as to internal, partner, or public groups.
  • Client - Information about clients being used to interface with and work with the service, allowing it to be put to use without code.
  • Command Line Interface - The command line interface (CLI) tooling being used to develop or consume a service.
  • SDKs - What software development kits (SDK) are generated, or developed and maintained as part of a service’s operation?
  • Plugin - What platform plugins are developed and maintained as part of a service’s operations, allowing it to work with other platforms and services.
  • IDE - Are there integrated development environment (IDE) integrations, libraries, plugins, or availability considerations for each service?
  • Browsers - Are there any browser plugins, add-ons, bookmarklets and integrations used as part of each service’s operation?
  • Embeddable - Information about any embeddable badges, buttons, widgets, and other JavaScript enabled solutions built on top of a service.
  • Bots - What type of automation and bot implementations are in development or being supported as part of a service’s operation?
  • Visualization - Are there specific visualizations that exist to help present the resources available within any service?
  • Analysis - How are logs, APIs, and other aspects of a service being used as part of wider analysis, and analytics strategy?
  • Aggregation - Are there any aggregation solutions in place that involve the service, and depend on its availability?
  • Integration - What 3rd party integrations are available for working with a service in existing integration platforms like IFTTT and Zapier?
  • Network - Information about networks that are setup, used, and allocated for each service governing how it can be accessed and consumed.
  • Regions - Which regions does a service operate within, making it available in specific regions, and jurisdictions–also which regions is it not allowed to operate within.
  • Webhooks - Are there webhooks employed to respond to events that occur via the service, pushing data, and notification externally to consumers?
  • Migration - What solutions are available for migrating an API between regions, cloud environments, and on-premise?
  • Backup - What types of backups are in place to bring redundancy to the database, server, storage, and other essential aspects of a service’s operations?
  • Real Time - Are there Server-Sent Events (SSE), Websocket, Kafka, and other real time aspects of a service’s delivery?
  • Voice - Are there any voice or speech enablement features, integrating services with Alexa, Siri, Google Home, or other conversational interfaces?
  • Spreadsheets - What types of spreadsheet integrations, connectors, and publishing solutions are available for a service?
  • Investment - Where do the funds for operating a service come from within a company, from external organizations, or VC funds?
  • Monetization - What does it cost to operate a service, breaking down the one time and recurring costs involved with delivering the service.
  • Plans - Outline of plans involved with defining, measuring, reporting, and quantifying value generated around a service’s operation.
  • Partners - An overview of partner program involved with a service’s operation, and how a wider partner strategy affects a service’s access and consumption.
  • Certification - Are there certification channels available for applications and developers, defining the knowledge required to competently operate and integrate a service.
  • Evangelism - What does internal, public, and partner evangelism efforts and requirements look like for a service, and its overall presence?
  • Showcase - How are developers and applications using a service showcased, and presented to the community, demonstrating the value around a service?
  • Deprecation - What are the plans for deprecating a service, including the road map and communication around the individual and overall deprecation of service(s)?
  • Training - What training materials need to be developed, or already exist to support the service.
  • Governance - How are all steps measured, quantified, aggregated, reported upon, and audited as part of a larger quality of service effort for each service.

I struggle calling this a lifecycle, as most of these do not occur in any sort of linear fashion. This is why I’m resorting to using the API Transit approach, as it is something that reflects the ever-changing, but also hopefully eventually more consistent nature of delivering microservices. Ideally, every microservice passes through each of these stops in a consistent fashion, making sure that they operate in concert with a larger API platform strategy. However, I know what the reality on the ground is, and know that not all of these are relevant to each organization I am talking with, requiring me to trim down and change the order in which I present these.

If you’d like to talk about how these stops apply to your API operations, feel free to reach out. I’m doing more API lifecycle and governance strategy consulting with my partners, and I’m happy to help your company, organization, institution, or government agency develop your own API transit map, which can be used across your API operations.

A Visual View Of API Responses Within Our Documentation

Interactive API documentation is nothing new. We’ve had Swagger UI, and other incarnations for over five years now. We also have API explorers, and full API lifecycle client solutions like Postman to help us engage with APIs, and be able to quickly see responses from the APIs we are consuming. In my effort to keep pushing forward the API documentation conversation I’ve been beating the drum for more visual solutions to be baked into our interactive documentation for a while now, encouraging providers to make the responses we receive much more meaningful, and intuitive for consumers.

To help drum up awareness of this aspect of API documentation I’m always on the lookout for any interesting examples of it in the wild. There was the interesting approach out of the Food and Drug Administration (FDA), and now I’ve seen one from a web data feeds API provider. When you are making API calls in their interactive dashboard you get a JSON response for things like news articles on the left hand side, but you also get an interesting slider that will show a visual representation of the JSON response on the right side–making it much more readable to non-developers.

It provides a nice way to quickly make sense of API responses. Making them more accessible. Making it something that even non-developers can do. Essentially providing a reverse view source (view results?) for API responses. Taking the raw JSON, and providing an HTML lens for the humans trying to make decisions around the usage of a particular API. View source is how I learned HTML back in the mid 1990s, and I can see visualization tools for API responses helping average business users learn JSON, or at least make it a little less intimidating, and something they feel like they can put to work for them.

I really feel like more visualizations baked into API documentation are the future of interactive API docs. Being able to see API responses rendered as HTML, or as graphs, charts, and other visualizations, makes a lot of sense. APIs are an abstract thing, and even as a developer, I have a hard time understanding what is contained within each API response. I think having visual API responses will help us craft more meaningful API requests, making our API consumption much more precise, and impactful. If you see any interesting visualization layers to your favorite API’s documentation, please drop me a line, I’d like to add it to my list of interesting approaches.
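The “HTML lens” idea above is easy to sketch. Here is a minimal, hypothetical example of rendering a raw JSON response as nested HTML lists so a business user can scan it (the response data is made up for illustration, not from any particular provider):

```python
import html
import json

def json_to_html(value):
    """Recursively render a decoded JSON value as nested HTML lists,
    giving non-developers a readable view of a raw API response."""
    if isinstance(value, dict):
        items = "".join(
            f"<li><strong>{html.escape(str(key))}</strong>: {json_to_html(item)}</li>"
            for key, item in value.items()
        )
        return f"<ul>{items}</ul>"
    if isinstance(value, list):
        items = "".join(f"<li>{json_to_html(item)}</li>" for item in value)
        return f"<ol>{items}</ol>"
    # Scalars (strings, numbers, booleans, null) are shown as JSON literals.
    return html.escape(json.dumps(value))

# Hypothetical news article response, similar to what a web data feeds API might return.
response = {"articles": [{"title": "API News", "published": "2018-03-01"}]}
print(json_to_html(response))
```

Rendering the HTML side by side with the raw JSON, like the slider described above, would give consumers both views of the same response.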

LinkedIn Account Validation For New API Consumers

I was on-boarding with the data marketplace Dawex the other day, and I thought their on-boarding process was interesting. It is pretty rigid, requiring users to validate themselves in multiple ways, but it provides some interesting approaches to knowing more about who your API developers are. Dawex has the basic level email and phone number validation, but they have added a 3rd dimension, validating who you are using your LinkedIn profile.

After signing up, I was required to validate my email account–pretty standard stuff. Then I was asked to enter a code sent to my cell phone via SMS–not as common, but increasingly a way that platforms are using to verify you. Then I was asked to OAuth and connect my LinkedIn account to my Dawex profile. I’ve seen sign up and login using LinkedIn, but never using it as a 3rd type of validation of who you are, and that you are truly a legitimate business user. I haven’t been verified yet, as the notification at the top of my dashboard says it will take up to 72 hours, but it caught my attention.

Dawex also has a complete Know Your Customer (KYC) process, which takes the validation to another level, and something I’ll write about separately. I think the social profile validation provides an interesting look at how platforms validate who we are. In an environment where API developers will often sign up for multiple accounts, and do other shady things with anonymous accounts, I can get behind requiring consumers to prove who they are. I also think that providing robust, active, verifiable social media accounts using Github, Twitter, LinkedIn, and Facebook makes a lot of sense. I’d say that all four of these profiles represent who I am as a person, as well as a business.

I have talked about using Github to validate and rate API consumers before. I could see a world where I get validated upon signup using my social profile(s), and the amount of access to an API I am entitled to varies depending on how complete, robust, and verifiable my presence is. I’m not keen on giving up all my data to every API I sign up for, but providing access to my profile so they can validate and rank who I am, and then get out of my way when it comes to using their resources, is something I can get behind. I could even envision where you throw Paypal into the mix, and my billing profile is further rounded out, allowing me to consume whatever I desire, and pay for what I used.

An OpenAPI Vendor Extension For Defining Your API Audience

The clothing marketplace Zalando has an interesting approach to classifying their APIs based upon who is consuming them. It isn’t just about APIs being published publicly, or privately, they actually have standardized their definition, and have established an OpenAPI vendor extension, so that the definition is machine readable and available via their OpenAPI.

According to the Zalando API design guide, “each API must be classified with respect to the intended target audience supposed to consume the API, to facilitate differentiated standards on APIs for discoverability, changeability, quality of design and documentation, as well as permission granting. We differentiate the following API audience groups with clear organisational and legal boundaries.”

  • component-internal - The API consumers with this audience are restricted to applications of the same functional component. All services of a functional component are owned by specific dedicated owner and engineering team. Typical examples are APIs being used by internal helper and worker services or that support service operation.
  • business-unit-internal - The API consumers with this audience are restricted to applications of a specific product portfolio owned by the same business unit.
  • company-internal - The API consumers with this audience are restricted to applications owned by the business units of the same company (e.g. Zalando company with Zalando SE, Zalando Payments SE & Co. KG. etc.)
  • external-partner - The API consumers with this audience are restricted to applications of business partners of the company owning the API and the company itself.
  • external-public - APIs with this audience can be accessed by anyone with Internet access.

Note: a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. The API audience is provided as API meta information in the info-block of the Open API specification and must conform to the following specification:

  type: string
  x-extensible-enum:
    - component-internal
    - business-unit-internal
    - company-internal
    - external-partner
    - external-public
  description: |
    Intended target audience of the API. Relevant for standards around
    quality of design and documentation, reviews, discoverability,
    changeability, and permission granting.

Note: Exactly one audience per API specification is allowed. For this reason a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. If parts of your API have a different target audience, we recommend to split API specifications along the target audience — even if this creates redundancies.

Here is an example of the OpenAPI vendor extension in action, as part of the info block:

swagger: '2.0'
info:
  x-audience: company-internal
  title: Parcel Helper Service API
  description: API for <...>
  version: 1.2.4

Providing a pretty interesting way of establishing the scope and reach of each API in a way that makes each API owner think deeply about who they are / should be targeting with the service. Done in a way that makes the audience focus machine readable, and available as part of its OpenAPI definition, which can then be used across discovery, documentation, and through API governance and security.
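Because x-audience lives in the info block, any governance or discovery tooling that can parse an API definition can act on it. Here is a minimal sketch of the idea, assuming definitions have already been parsed into dictionaries (the catalog entries are hypothetical):

```python
# Allowed audience values from Zalando's x-audience vendor extension.
ALLOWED_AUDIENCES = {
    "component-internal",
    "business-unit-internal",
    "company-internal",
    "external-partner",
    "external-public",
}

def audience_of(openapi_doc):
    """Return the declared audience of a parsed OpenAPI/Swagger document,
    raising if the x-audience extension is missing or not a known value."""
    audience = openapi_doc.get("info", {}).get("x-audience")
    if audience not in ALLOWED_AUDIENCES:
        raise ValueError(f"invalid or missing x-audience: {audience!r}")
    return audience

def filter_catalog(docs, audience):
    """Filter a catalog of parsed API definitions down to one audience,
    e.g. to build a partner-facing discovery listing."""
    return [doc for doc in docs if audience_of(doc) == audience]

catalog = [
    {"info": {"title": "Parcel Helper Service API", "x-audience": "company-internal"}},
    {"info": {"title": "Checkout API", "x-audience": "external-public"}},
]
print([doc["info"]["title"] for doc in filter_catalog(catalog, "external-public")])
# → ['Checkout API']
```

The same check could run as part of governance tooling, flagging any definition that is missing its audience declaration.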

I like the multiple views of who the audience could be, going beyond just public and private APIs. I like that it is an OpenAPI vendor extension. I like that they even have a schema crafted for the vendor extension–another interesting concept I’d like to see more of. Overall, making for a pretty compelling approach to define the reach of our APIs, and quantifying the audience we are looking to reach with each API we publish.

Rules for Extending Your API With Each Version

I’m spending time learning from the API design guides of other leading API providers, absorbing their healthy practices, and assimilating them into my own consulting and storytelling. One API design guide I am learning a lot from is out of the Adidas Group, which contains a wealth of wisdom regarding not just the design of your API, but also the deployment and evolution of the API resources we are publishing.

One particularly interesting piece of advice I found within the Adidas API design guidance was their rules for extending an API, which I think is pretty healthy advice for any API developer to think about.

Any modification to an existing API MUST avoid breaking changes and MUST maintain backward compatibility.

In particular, any change to an API MUST follow the following Rules for Extending:

  • You MUST NOT take anything away (related: Minimal Surface Principle, Robustness Principle)
  • You MUST NOT change processing rules
  • You MUST NOT make optional things required
  • Anything you add MUST be optional (related: Robustness Principle)

NOTE: These rules cover also renaming and change to identifiers (URIs). Names and identifiers should be stable over the time including their semantics.
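Rules like these lend themselves to automated checks. Here is a minimal sketch of a checker that flags two of the violations above (removed fields, and optional fields made required), using simplified JSON-Schema-style dictionaries as an assumption for illustration, not a full compatibility tool:

```python
def breaking_changes(old_schema, new_schema):
    """Flag changes that violate the rules for extending: taking
    something away, or making previously optional things required.
    Schemas are simplified JSON-Schema-style dicts for illustration."""
    problems = []
    old_props = set(old_schema.get("properties", {}))
    new_props = set(new_schema.get("properties", {}))
    for removed in old_props - new_props:
        problems.append(f"removed property: {removed}")
    old_required = set(old_schema.get("required", []))
    new_required = set(new_schema.get("required", []))
    for now_required in new_required - old_required:
        problems.append(f"optional made required: {now_required}")
    return problems

old = {"properties": {"id": {}, "name": {}}, "required": ["id"]}
new = {"properties": {"id": {}, "name": {}, "nickname": {}},
       "required": ["id", "name"]}
print(breaking_changes(old, new))
# → ['optional made required: name']
```

Note that adding the optional nickname property raises no flag, which is exactly what the rules allow.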

First of all, I think many API developers aren’t even thinking about what constitutes a breaking change most of the time. So having any guidance that makes them pause and think about this topic is a good thing. Second, I think we should be sharing more stories about when things break, helping folks think more about these elements–the problem is that many folks are embarrassed they introduced a breaking change, and would rather not talk about it, let alone make it publicly known.

I am working my way through the API Stylebook, learning from all the API design guides it has aggregated. There is a wealth of knowledge in there to learn from, and it contains topics that make for great stories here on the blog. I wish more API providers would actively publish their API governance strategy, so that we can keep aggregating, and learning from each other. Making the wider API space more consistent, and hopefully more reliable along the way.

Seeing Messy API Design Practices As An Opportunity

I’m generating OpenAPI definitions for a wide variety of APIs currently, and I’m regularly stumbling on the messiness of the API design practices being deployed. When you are exposed to a large number of different APIs it is easy to get frustrated, begin ranting, and bitching about how ignorant people are of healthy, sensible API practices. This is the easy route. Making sense of it all, finding the interesting signals and patterns, and extracting where the opportunities are takes a significant amount of effort (so does biting your tongue).

After profiling almost 500 API providers, I have almost 25K separate API paths indexed using OpenAPI and APIs.json. I’ve tagged each API path using its OpenAPI definition. Pulling words from the path name, and any summary and description provided within the API documentation. Doing my best to describe the value contained within each API resource. Then I started grouping by these tags, to see what it produced. Sometimes the API paths it lists make total sense, but most of the time it makes no sense at all. Then, I started noticing interesting patterns in how people describe their resources. Grouping things like “favorites” across all types of APIs, revealing some pretty interesting perspectives, and honest views of the resources being exposed.
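The tagging approach I describe can be sketched pretty simply. Here is a hypothetical illustration of deriving tags from path names and summaries, and then grouping paths by tag (the stop word list and example paths are my own assumptions):

```python
import re
from collections import defaultdict

# Noise words to drop when tokenizing paths and summaries (an assumption
# for illustration; a real list would be tuned against the full index).
STOP_WORDS = {"api", "v1", "v2", "get", "list", "a", "an", "the", "of", "and", "s", "id"}

def tags_from_path(path, summary=""):
    """Derive candidate tags for an API path by tokenizing the path
    segments and summary, dropping noise words and {parameters}."""
    words = re.findall(r"[a-z]+", (path + " " + summary).lower())
    return sorted(set(words) - STOP_WORDS)

def group_by_tag(paths):
    """Group paths by shared tags, surfacing common patterns such as
    'favorites' showing up across otherwise unrelated APIs."""
    groups = defaultdict(list)
    for path, summary in paths:
        for tag in tags_from_path(path, summary):
            groups[tag].append(path)
    return dict(groups)

paths = [
    ("/users/{id}/favorites", "List a user's favorite items"),
    ("/teams/favorites", "Get favorite teams"),
]
print(group_by_tag(paths)["favorites"])
# → ['/users/{id}/favorites', '/teams/favorites']
```

Even something this naive starts to reveal the shared vocabulary across providers, which is where the patterns begin to emerge.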

Something as simple as “activities” can mean 30 or 40 different things when applied across CRM, storage, DNS, travel or sports APIs. At first, I’m like this shit is broken. The more time I spend with the mess, the more I’m starting to think there is more to this mess than meets the eye. I could be wrong. I often am. It is likely just my contrarian view of things, and my unique view of the API landscape in action. Where many people see duplicative work, I see common patterns. I just see the landscape differently than people who are just looking to get their work done, sell a product or service, and find an exit for their startup. I’m not looking for any solution, doorway, or exit. I just see all of this as a journey, and I’m fascinated by how people view their API resources, then describe, tag and bag them.

I’m not sure where all of this API indexing and tagging will lead me. I don’t feel compelled to sort out the mess, and convert any of these API providers to a more sensible API religion. I’m just looking to understand more about the motivations behind why they did what they did. Provide commentary on where they fit into the bigger picture, and if they are delivering interesting and valuable API resources, help make them more discoverable, and executable when it matters. As I have been doing for the last eight years, I’m just looking to learn from others, and stay in tune with where things are headed, no matter how messy it might be. I see APIs as a kind of controlled chaos, which are increasingly driving our already chaotic lives.

Google Releases a Protocol Buffer Implementation of the Fast Healthcare Interoperability Resources (FHIR) Standard

Google is renewing its interest in the healthcare space by releasing a protocol buffer implementation of the Fast Healthcare Interoperability Resources (FHIR) standard. Protocol buffers are “Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler”. It’s the core of the next generation of APIs at Google, often using HTTP/2 as a transport, while also living side by side with RESTful APIs, which use OpenAPI as the definition, in parallel to what protocol buffers deliver.

It’s a smart move by Google. Providing a gateway for healthcare data to find its way to their data platform products like Google Cloud BigQuery, and their machine learning solutions built on Tensorflow. They want to empower healthcare providers with powerful solutions that help onboard their data, and be able to connect the dots, and make sense of it at scale. However, I wouldn’t stop with protocol buffers. I would also make sure they invest in API infrastructure on the RESTful side of the equation, developing OpenAPI specs alongside the protocol buffers, and providing translation between, and tooling for both realms.

While I am a big supporter of gRPC and protocol buffers, I’m skeptical of the complexity they bring in exchange for higher performance. Part of making sense of health care data will require not just technical folks being able to make sense of what is going on, but also business folks, and protocol buffer and gRPC solutions will be out of reach for these users. Web APIs, combined with YAML OpenAPI definitions, have begun to make the API contracts involved in all of this much more accessible to business users, putting everything within their reach. In our technological push forward, let’s not forget the simplicity of web APIs, and exclude business healthcare users as IT has done historically.

I’m happy to see more FHIR-compliant APIs emerging on the landscape. PSD2 for banking, and FHIR for healthcare are the two best examples we have of industry specific API standards. So it is important that the definitions proliferate, and the services and tooling emerge and evolve. I’m hoping we see even more movement on this front in 2018, but I have to say I’m skeptical of Google’s role, as they’ve come and gone within this arena before, and are exceptional at making sure all roads lead to their solutions, without many roads leading back to enrich the data owners, and stewards. If we can keep API definitions simple, accessible, and usable by everyone, not just developers and IT folks, we can help empower healthcare data practitioners, and not limit, or restrict them, when it is most important.

Multi-Lingual Machine Learning API Deployments

Machine learning API provider ParallelDots has a story on launching their APIs in multiple languages. Allowing them to “serve a truly global customer base with following language options for our key APIs (Sentiment Analysis, Emotion Analysis, and Keyword generator)”. Something that I think more API providers are going to have to think about in coming years, as the need for API resources expands around the globe.

I’m tracking on the localization efforts of different API providers, and I’d say that having service availability in different regions, and multi-lingual support are the top two areas I’m seeing movement. I see two driving forces behind this, 1) the customers are demanding localization for their businesses, and 2) the governments in those countries are imposing regulations and other laws that dictate where resources can be stored and operate. I guess, something that can be seen as markets working things out, if you also believe in the value of regulations.

I also see a negative in all of this, specifically in this case, the imperialistic aspects of artificial intelligence and machine learning. Meaning, the models are trained here in western countries, but then being applied, injected, and imposed upon people in other countries. Selling the service as some sort of truth, watermark, and organic solution to sentiment, emotion, and intelligence without actually localizing the models. I’m not making any assumptions around ParallelDots motivations, just pointing out that this will be a problem, and should not be ignored–if it makes you mad, then you are probably part of the problem.

As I conduct my API research, I will keep tracking on localization efforts like this. When I have the bandwidth I will dive in deeper and better understand how providers are defining these localization layers to their APIs in their API design, deployment, documentation, and other elements. I’ll be keeping an eye on which industries are moving fastest when it comes to API localization, and try to understand where it is deemed the most important by providers, consumers, and regulators. Lots to keep an eye on, and understand in 2018 when it comes to the expansion of the world of APIs.

API As A Product Principles From Zalando

As I’m working through the API design guides from API leaders, looking for useful practices that I can include in my own API guidance, I’m finding electronic commerce company Zalando’s API design guide full of some pretty interesting advice. I wanted to showcase the section about their API as a product principles, which I think reflects what I hear many companies striving for when they do APIs.

From the Zalando API design guide principles:

Zalando is transforming from an online shop into an expansive fashion platform comprising a rich set of products following a Software as a Platform (SaaP) model for our business partners. As a company we want to deliver products to our (internal and external) customers which can be consumed like a service.

Platform products provide their functionality via (public) APIs; hence, the design of our APIs should be based on the API as a Product principle:

  • Treat your API as product and act like a product owner
  • Put yourself into the place of your customers; be an advocate for their needs
  • Emphasize simplicity, comprehensibility, and usability of APIs to make them irresistible for client engineers
  • Actively improve and maintain API consistency over the long term
  • Make use of customer feedback and provide service level support

RESTful API as a Product makes the difference between enterprise integration business and agile, innovative product service business built on a platform of APIs.

Based on your concrete customer use cases, you should carefully check the trade-offs of API design variants and avoid short-term server side implementation optimizations at the expense of unnecessary client side obligations and have a high attention on API quality and client developer experience.

API as a Product is closely related to our API First principle which is more focused on how we engineer high quality APIs.

Zalando provides a pretty coherent vision for how we all should be doing APIs. I like this guidance because it helps quantify something we hear a lot–APIs as a product. However, it also focuses in on what is expected of the product owners. It also gets at why companies should be doing APIs in the first place, talking about the benefits they bring to the table.

I’m enjoying the principles section of Zalando’s API design guide. It goes well beyond just API design, and reflects what I consider to be principles for wider API governance. Many companies are still considering this API design guidance, but I find that companies who are publishing these documents publicly are often maturing and moving beyond just thinking deeply about design–providing a wealth of other wisdom when it comes to doing APIs right.

The Importance Of Tags In OpenAPI Definitions For Machine Learning APIs

I am profiling APIs as part of a partnership, and my continued API Stack work. As part of my work, I am creating OpenAPI, Postman Collections, and APIs.json indexes for APIs in a variety of business sectors, and as I’m finishing up the profile for ParallelDots machine learning APIs, I am struck (again) by the importance of tags within OpenAPI definitions when it comes to defining what any API does, and something that will have significant effects on the growing machine learning and artificial intelligence space.

While profiling ParallelDots, I had to generate the OpenAPI definition from the Postman Collection they provide, which was void of any tags. I went through the handful of API paths, manually adding tags for each of the machine learning resources. I added tags like sentiment, emotions, semantics, taxonomy, and classification to each path. Trying to capture what resources were available, allowing for the discovery, filtering, and execution of each individual machine learning model being exposed using a simple web API. While the summary and description explain to developers what each API does, the tags really provide the precise meaning in a machine readable context.

In the fast moving, abstract realm of machine learning, and artificial intelligence it can be difficult to truly understand what each API does, or doesn’t do. I struggle with it, and I’m pretty well versed in what is possible and not possible with machine learning. Precise tagging provides us with a single definition of what each machine learning API does, setting a baseline of understanding for putting ML APIs to work. Something that if I consistently apply across all of the machine learning APIs I’m profiling, I can begin honing and dialing in my catalog of valuable API resources, and begin creating a coherent map of what is possible with machine learning APIs–helping ground us in this often hyped realm.
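Once each path carries precise tags, building that kind of map becomes a simple aggregation. Here is a minimal sketch, using simplified OpenAPI-style operation dictionaries with hypothetical operation IDs:

```python
def capability_map(operations):
    """Build a map from tag to operations, turning per-path tags
    (sentiment, emotions, taxonomy, ...) into a catalog of what is
    possible across machine learning APIs."""
    capabilities = {}
    for operation in operations:
        for tag in operation.get("tags", []):
            capabilities.setdefault(tag, []).append(operation["operationId"])
    return capabilities

# Hypothetical operations, simplified from what an OpenAPI definition would contain.
ops = [
    {"operationId": "analyzeSentiment", "tags": ["sentiment"]},
    {"operationId": "detectEmotion", "tags": ["emotions", "sentiment"]},
    {"operationId": "classifyText", "tags": ["classification", "taxonomy"]},
]
print(capability_map(ops)["sentiment"])
# → ['analyzeSentiment', 'detectEmotion']
```

The resulting map is exactly the kind of machine readable index that discovery and runtime tooling could lean on.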

Once I have a map of the machine learning landscape established, I want to continue evolving my API ranking strategy to apply specifically to machine learning models being exposed via APIs. Not just understanding what they do, but also the quality of what they do. Are the machine learning algorithms delivering as advertised? Which APIs have a consistent track record in not just the reliability of the APIs, but also the reliability of the responses? Further bringing clarity to the fast moving, venture capital fueled, magical, mystical realm of artificial intelligence. Helping average business consumers better understand which APIs they can depend on, and which ones they might want to avoid.

I’m hoping my API Stack profiling will encourage more API providers to begin doing the heavy lifting of creating OpenAPI definitions, complete with tags themselves. We are always going to need the participation of the community to help make sure they are complete, and as meaningful as they can be, but API providers will need to step up and invest in the process whenever possible. As the machine learning and artificial intelligence realms mature, we are going to need a meaningful vocabulary for describing what it is they do, and a ranking system for sorting out the good, the mediocre, and the bad. We’ll need all of this to be contained within the machine readable definitions we are using for discovery, and at runtime, allowing us to automate more efficiently using machine learning APIs.
