The API Evangelist Blog

This blog is dedicated to understanding the world of APIs, exploring a wide range of topics from design to deprecation, and spanning the technology, business, and politics of APIs. All of this runs on GitHub, so if you see a mistake, you can either fix it by submitting a pull request, or let us know by submitting a GitHub issue on the repository.


The Many Ways In Which APIs Are Taken Away

APIs are notorious for going away. So many APIs disappear that I stopped tracking it as a data point. I used to keep track of APIs that were shuttered so that I could play a role in the waves of disgruntled pitchfork mobs rising up in their wake—it used to be a great way to build up your Hacker News points! But, after riding a couple hundred waves of APIs shuttering, you stop giving a shit—growing numb to it all. API deprecation grew so frequent, I wondered why anyone would make the claim that once you start an API you have to maintain it forever. Nope, you can shut down anytime. Clearly.

In the real world, APIs going away is a fact of life, but it is rarely a boolean value, or black and white. There have been many high profile API disappearances and uprisings, but there are also numerous ways in which API providers giveth, and then taketh away from API consumers:

  • Deprecation - APIs are deprecated regularly, sometimes well communicated, and sometimes not so well, leaving consumers scratching their heads.
  • Disappear - Companies regularly disappear specific API endpoints, acting like they were never there in the first place.
  • Acquisition - This is probably one of the most common ways in which high profile, successful APIs disappear.
  • Rate Limits - You can always rate limit users into oblivion, making an API inaccessible or barely usable, essentially making it go away.
  • Error Rates - Injecting elevated error rates, intentionally or unintentionally, can make an API unusable for everyone, or for a select audience.
  • Pricing Tiers - You can easily be priced out of access to an API, which acts just like deprecation for a specific group of consumers.
  • Versions - With rapid versioning usually comes rapid deprecation of earlier versions, moving faster than some consumers can handle.
  • Enterprise - APIs moving from a free or paid tier into the magical enterprise, “call us” tier is a common way in which APIs go away.
  • Dumb - The API should not have existed in the first place, some people just take a while to realize it, and then they shut down the API.

I’d say Facebook, Twitter, and Google shutting down APIs, or playing games in any of these areas, have been some of the highest profile examples. The sneaky shuttering of the Facebook Audience Insights API was one example that didn’t get much attention. I’d say that Parse is one that Facebook handled pretty well. Google did it with Google+ and Google Translate, but then brought Translate back with a paid tier. LinkedIn regularly locks down and disappears its APIs. Twitter has also received a lot of flak for limiting, restricting, and playing games with their APIs. In the end, many other APIs shutter after waves of acquisitions, leaving us with, as my ex-wife says, “nothing nice”!

Face.com, Netflix, 23andMe, Google Search, ESPN, and others have provided us with lots of good API deprecation stories, but in reality, most APIs go away without much fanfare. You are more likely to get squeezed out by rate limits, errors, pricing, and other ways of consciously making an API unusable. If you really want to understand the scope of API deprecation, visit the leading deprecated API directory, ProgrammableWeb—they have thousands of APIs listed that do not exist anymore. In the end, it is very difficult to successfully operate an API, and most API providers really aren’t in it for the long haul. The reasons APIs stay in existence are rarely a direct result of them being financially viable. Developers squawk pretty loudly when the free API they’ve been mooching off of disappears, but there are many, many, many other APIs that go away and nobody notices, or nobody talks about it. In the world of APIs there are very few things you can count on being around for very long, and you should always have a plan B and C for every API you depend on.


Paying for API Access

APIs that I can’t pay for more access to grind my gears. I am looking at you, GitHub, Twitter, Facebook, and a few others. I spend $250.00 to $1,500.00 a month on my Amazon bill, depending on what I’m looking to get done. I know I’m not the target audience for all of these platforms, but I’m guessing there is a lot more money on the table than is being acknowledged. I’m guessing that the reason companies don’t cater to this is that there are larger buckets of money involved in what they are chasing behind the scenes. Regardless, there isn’t enough money coming my way to keep my mouth shut, so I will keep bitching about this one, alongside the inaccessible pricing tiers startups like to employ as well. I’m going to keep kvetching about API pricing until we are all plugged into the matrix—unless the right PayPal payment lands in my account, then I’ll shut up. ;-)

I know. I know. I’m not playing in all your reindeer startup games, and I don’t understand the masterful business strategy you’ve all crafted to get rich. I’m just trying to do something simple like publish data to GitHub, or do some basic research on an industry using Twitter. I know there are plenty of bad actors out there who also want access to your data, but it is all something that could be remedied with a little pay as you go pricing, applied to some base unit of cost for your resources. If I could pay for more Twitter and GitHub requests without having to be in the enterprise club, I’d be very appreciative. I know that Twitter has begun expanding into this area, but it is priced out of my reach, and not the simple pay as you go pricing I prefer with AWS, Bing, and other APIs I happily spend money on.

If you can’t apply a unit of value to your API resources, and make them available to the masses in a straightforward way—I immediately assume you are up to shady things. Your business model is much loftier than a mere mortal like me can grasp, let alone afford. I am just an insignificant raw material in your supply chain—I should just be quiet! However, this isn’t rocket science. I can’t pay for Google searches, but I can pay for Bing searches. I can’t pay for my GitHub API calls. I’m guessing at some point I’ll see Bing pricing go out of reach as Microsoft continues to realize the importance of investing at scale in the surveillance economy—it is how you play in the big leagues. Or maybe they’ll be like Amazon, and realize they can still lead in the surveillance game while also selling things to us lower level doozers who are actually building things. You can still mine data on what we are doing and establish your behavioral models for use in your road map, while still generating revenue by selling us lower level services.

The problem here ultimately isn’t these platforms. It is me. Why the hell do I keep insisting on using these platforms? I can always extricate myself from them. I just have to do it. I’d much rather just pay for my API calls, and still give up my behavioral data, over straight extraction and not getting what I need to run my business each day. I feel like the free model, with no way to pay for more API calls, is the sign of a rookie operation. If you really want to operate at scale, you should be obfuscating your true intentions with a real paid service. If you are still hiding behind the free model, you are just getting going. The grownups are already selling services for a fair price as a front, while still exploiting us at levels we can’t see, so ultimately they can compete with all of us down the road. Ok, in all seriousness, why can’t we get on a common model for defining API access, and setting pricing? I’m really tired of all the games. It would really simplify my life if I could pay for what I use of any resource. I’m happy to pay a premium for this model, just make sure it is within my reach. Thanks.


Imperative, Declarative, and Workflow APIs

At every turn in my API work I come across folks who claim that declarative API solutions are superior to imperative ones. They want comprehensive, single implementation, do it all their way approaches, over granular, multiple implementation API calls that are well defined by the platform. Declarative calls allow you to define a single JSON or YAML declaration that can then be consumed to accomplish many things, abstracting away the complexity of doing those many things, and just getting it done. Imperative API interfaces require many individual API calls to tweak each and every knob or dial on the system, which is often viewed as more cumbersome by a seasoned integrator, but for newer, and lower level integrators a well designed imperative API can be an important lifeline.
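
To make the distinction concrete, here is a minimal sketch, in Python against a hypothetical infrastructure platform (every endpoint and field name here is made up for illustration), of the same outcome achieved imperatively with many granular calls, and declaratively with a single document:

```python
import requests
import yaml

API = "https://api.example.com"  # hypothetical platform

def create_server_imperative(token):
    """Imperative: many granular calls, each turning one knob or dial."""
    headers = {"Authorization": f"Bearer {token}"}
    server = requests.post(f"{API}/servers", json={"size": "small"}, headers=headers).json()
    requests.put(f"{API}/servers/{server['id']}/dns", json={"name": "www.example.com"}, headers=headers)
    requests.put(f"{API}/servers/{server['id']}/firewall", json={"allow": [80, 443]}, headers=headers)
    return server["id"]

# Declarative: one document describing the desired end state, with the
# platform abstracting away the individual steps required to get there.
DECLARATION = """
server:
  size: small
  dns: www.example.com
  firewall:
    allow: [80, 443]
"""

def create_server_declarative(token):
    headers = {"Authorization": f"Bearer {token}"}
    desired = yaml.safe_load(DECLARATION)
    return requests.post(f"{API}/declarations", json=desired, headers=headers).json()
```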

Declarative APIs are almost always positioned against imperative APIs. Savvier, more experienced developers almost always want declarations, while newer developers, and those without a full view of the landscape, often prefer well designed imperative APIs that do one thing well. From my experience, I always try to frame the debate as imperative and declarative, where the most vocal developers on the subject prefer to frame it as declarative vs imperative. I regularly have seasoned API developers “declare” that I am crazy for defining every knob and dial of an API resource, without any regard for use cases beyond what they see. They know the landscape, and don’t want to be burdened with having to pull every knob and dial—just give them one interface to declare everything they desire. A single endpoint accepting a massive JSON or YAML POST, or abstract it all away with an SDK, Ansible, GraphQL, or a Terraform solution. They demand that a platform meet their needs, without ever considering how more advanced solutions are delivered and executed, or the lower level folks who are onboarding with a platform, and may not have the same view of what is required to operate at that scale.

I am all for providing declarative API solutions for advanced users, however, not at the expense of the new developers, or the lower level ones who spend their day wiring together each individual knob or dial, so it can be used individually, or abstracted away as part of a more declarative solution. I find myself regularly being the voice for these people, even though I myself prefer a declarative solution. I see the need for both imperative and declarative, and understand the importance of good imperative API design to drive and orchestrate quality declarative API implementations, and even more flexible workflow solutions, which in my experience are often what process, break down, and execute most declarations that are fed into a system. The challenge in these discussions is that the people in the know, who want the declarative solutions, are always the loudest, and usually higher up on the food chain when it comes to getting things done. They can usually rock the boat, command the room, and dictate how things will get delivered, rarely ever taking into consideration the full scope of the landscape, especially what lower level people encounter in their daily work.

For me, the quality of an API always starts with the imperative design, and how well you think through the developer experience around every dial and knob, which will ultimately work to serve (or not) a more declarative and workflow approach. They all work together. If you dismiss the imperative, and bury it within an SDK, Ansible, or Terraform solution, you will end up with an inferior end solution. They all have to work in concert, and at some point you will have to be down in the weeds turning knobs and dials, understanding what is possible, to orchestrate the overall solution we want. It is all about ensuring there is observability and awareness in how our API solutions work, while providing very granular approaches to putting them to work, while also ensuring there are simple, usable, comprehensive approaches to moving mountains with hundreds or thousands of APIs. To do this properly, you can’t dismiss the importance of imperative API design in the service of your declarative—if you do, you will cannibalize the developer experience at the lower levels, and eventually your declarations will become incomplete and clunky, and will fail to deliver the “big picture” vision of the landscape you will need.

When talking API strategy with folks, I can always tell how isolated someone is based upon whether they see it as declarative vs imperative, or declarative and imperative. If it is the former, they aren’t very concerned with others’ needs. Something that will undoubtedly introduce friction as the API solutions are delivered, because they aren’t concerned with the finer details of API design, or the perspectives of the more junior level, or newer, developers in the conversation. They see these workers the same way they see imperative APIs, as something that should be abstracted away, and is of no concern to them. Just make it work. Something that will get us to the same place our earlier command and control, waterfall, monolith software development practices have left us—with massive, immovable, monolith solutions that are comprehensive and known by a few, but ultimately cannot pivot, change, evolve, or be used in new ways, because you have declared the one or two ways in which the platform should be used. Rather than making things more modular, flexible, and orchestrate-able, where anyone can craft (and declare) a new way of stitching things together, painting an entirely new picture of the landscape with the same knobs and dials used before, but done in a different way than was ever conceived by previous API architects.


Hoping For More Investment In API Design Tooling

I was conducting an accounting of my API design toolbox, and realized it hasn’t changed much lately. It is still a very manual suite of tooling, and sometimes services, that help me craft my APIs. There are some areas I am actively investing in when it comes to API design, but honestly there really isn’t that much new out there to use. To help me remember how we got here, I wanted to take a quick walk through the history of API design, and check in on what is currently available when it comes to investing in your API design process.

API design has historically meant REST. Many folks still see it this way. While there have been plenty of books and articles on API design for over a decade, I attribute the birth of API design to Jakub and Z at Apiary (https://apiary.io). I believe they first cracked open the seed of API design, and the concept of API design first. Which is one of the reasons I was so distraught when Oracle bought them. But we won’t go there. The scars run deep, and where has it gotten us? Hmm? Hmm?? Anyways, they set into motion an API design renaissance which has brought us a lot of interesting thinking on API design, a handful of tooling and services, some expansion of what API design means, but ultimately not a significant amount of movement overall.

Take a look at what AP+I Design (https://www.apidesign.com/) and API Plus (https://apiplus.com/) have done for architecture, what API has done for the oil and gas industry (https://www.api.org/), and what API4Design has done for the packaging industry (http://api4design.com/)—I am kidding. Don’t get me wrong, good things have happened. I am just saying we can do more. The brightest spot that represents the future for me is over at:

  • Stoplight.io - They are not just moving API design forward, they are also investing in the full API lifecycle, including governance. I rarely plug startups, unless they are doing meaningful things, and Stoplight.io is.

After Stoplight.io, I’d say some of the open source tooling that still exists, and has been developed over the last five years, gives me the most hope that I will be able to efficiently design APIs at scale across teams:

  • Swagger Editor - I still find myself using this tool for most of my quick API design work.
  • API Stylebook - What Arnaud has done reflects what should be standard across all industries.
  • OpenAPI GUI - A very useful and progressive OpenAPI GUI editor.
  • Apicurio - Another useful GUI OpenAPI editor which shouldn’t be abandoned (cough cough).
  • Web Concepts - The building blocks of every API design effort are located here.

I use these tools in my daily work, and think they reflect what I like to see when it comes to API design tooling investment. I would be neglectful if I didn’t give a shout out to a handful of companies doing good work in this area:

  • Postman - While not coming from a pure API design vantage point, you can do some serious design work within Postman.
  • APIMATIC - It takes good API design to deliver usable SDKs, and APIMATIC provides a nice set of design tooling for their services.
  • Reprezen - They have invested heavily in their overall API design workflow and are a key player in the OpenAPI conversation.
  • Restlet - My friends over at Restlet are still up to good things, even though they are part of Talend.

While I am not quite ready to showcase and highlight these companies, because they don’t always reflect the positive API community influence I’d like to see, I don’t want to leave them out, given what they bring to the table:

  • SwaggerHub - They are doing interesting things, even if I’m still bummed over the whole Swagger -> OpenAPI bullshit, which I will never forget!
  • Mulesoft - Their AnyPoint Designer is worth noting in this discussion. I will leave it there.

While writing this, and taking a fresh look at the search results for API design, editors, and tooling, and looking through my archives, I came across a couple players I have never seen before, either because I haven’t been tuned in, or because they are new:

  • OpenAPI Designer - An interesting new player to the OpenAPI editor game.
  • Visual Paradigm - Another interesting approach to delivering APIs—we may have to test drive it.

I do not want to stop there. Maybe there are other API design toolbox forces at play, influencing, shifting, or directing the API design conversation in other ways, using different protocols and approaches:

  • GraphQL - Maybe the GraphQL believers are right? Maybe it is the true solution to designing our APIs? What if? OMG
  • Kafka - I think an event-driven approach has shifted the conversation for many, moving classic API design into the background.
  • gRPC - Maybe we are moving towards a more HTTP/2 RPC way of delivering APIs and API design is becoming irrelevant.

It is possible that these solutions are siphoning off the conversation in new directions. Maybe the lack of investment is due to other influences that go well beyond the technology, or any specific approach to defining and designing APIs. What might be some of the out of the box reasons the API design conversation hasn’t moved forward:

  • Investment - Most of the movement I’ve seen has occurred, as well as stagnated, because of the direction of venture capital.
  • Hard - Maybe it is because it is hard, and nobody wants to do it, making it something that is difficult to monetize.
  • Expertise - We need more training and the development of API design expertise to help lead the way, and show us how it is done.
  • Bullshit - Maybe API design is bullshit, and we are delusional to think anything will ever become of API design in the first place.
  • My Vision - Maybe my vision of API design tooling and services is too high of a bar, or unrealistic in some way.

Ultimately, I think API design is difficult. I think we need more investment in small open source API design tools that do one thing well. I think other API paradigms will continue to distract us, but also potentially enrich us. I think my vision of API design is obtainable, but out of view of the current investment crowd. I think API design vision is either technical, or it is business. There is very little in between. This is why I highlight Stoplight.io. I feel they are critically thinking about not just API design, but also the rest of the lifecycle. I’d throw Postman into this mix, but they are more API lifecycle than pure design, though I do think they reflect more of the type of services and tooling I’d like to see.

I do not think resource centered web API design is going anywhere. From what I’m seeing, it is going mainstream. It is simple. Low cost. It gets the job done with minimal investment. I think we should invest more into open source API design solutions. I think we need continued investment in API design services like Stoplight.io, Postman, Reprezen, Restlet, and others. I think we need to shift the conversation to also include GraphQL, Kafka, and gRPC. I think investors can do a better job, but I’m not going to hold my breath there. In the end, I go back to my hand-crafted, artisanal API design workbench out back, where I have cobbled together a few open source tools on top of a Git foundation. Honestly, it is all I can afford, but I’ll keep playing with other tools I have access to, to see if something will shift my approach.


What Is An API Contract?

I am big on regularly interrogating what I mean when I use certain phrases. I’ve caught myself repeating and reusing many hollow, empty, and meaningless phrases over my decade as the API Evangelist. One of these phrases is “an API contract”. I use it a lot. I hear it a lot. What exactly do we mean by it? What is an API contract, and how is it different or similar to our beliefs and understanding around other types of contracts? Is it truth, or is it just a way to convince people that what we are doing is just as legitimate as what came before? Maybe it is even more legitimate, like in a blockchain kind of way? It is an irreversible, unbreakable, digital contract—think blockchain, bro!

If I were to break down what I mean when I say API contract, I’d start with being able to establish a shared articulation of what an API does. We have an OpenAPI definition which describes the surface area of the request and response of each individual API method being offered. It is available in a machine and human readable format for both of us to agree upon. It is something that both API provider and API consumer can agree upon, and get to work developing and delivering, and then integrating and consuming. An API contract is a shared understanding of the capabilities of a digital interface, allowing for applications to be programmed on top of it.
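
As a concrete anchor for this shared articulation, here is a minimal sketch of what such a contract looks like as an OpenAPI definition (a hypothetical single-method API, with all names made up):

```yaml
openapi: 3.0.0
info:
  title: Example Contacts API   # hypothetical API
  version: 1.2.0                # the semantic version of the contract
paths:
  /contacts:
    get:
      summary: List contacts
      operationId: listContacts
      parameters:
        - name: limit
          in: query
          schema:
            type: integer
      responses:
        '200':
          description: A page of contacts
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Contact'
components:
  schemas:
    Contact:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
```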

After an API contract establishes a shared understanding, I’d say that an API contract helps mitigate change, or at least communicates it—again, in a human and machine readable way. It is common practice to semantically version your API contracts, ensuring that you won’t remove or rename anything within a minor or patch release, committing to only changing things in a big way with each major release. Providing an OpenAPI definition of each version ahead of time allows consumers to review the new version of an API contract before they ever commit to integrating and moving to the next version. Helping reduce the amount of uncertainty that inevitably exists when an API changes, and consumers have to respond with changes in their client API integrations.
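
A minimal sketch of what honoring that commitment might look like in code, assuming two OpenAPI definitions already parsed into plain Python dictionaries, and treating any removed path or method as a change that should force a major version bump:

```python
def breaking_changes(old_api: dict, new_api: dict) -> list:
    """Naive diff: any path or method removed between two OpenAPI
    definitions is flagged as a breaking change."""
    changes = []
    old_paths = old_api.get("paths", {})
    new_paths = new_api.get("paths", {})
    for path, methods in old_paths.items():
        if path not in new_paths:
            changes.append(f"removed path: {path}")
            continue
        for method in methods:
            if method not in new_paths[path]:
                changes.append(f"removed method: {method.upper()} {path}")
    return changes

# Under semantic versioning, a non-empty result should only ever
# accompany a major release of the contract.
```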

Then I’d say an API contract moves into service level agreement (SLA) territory, and helps provide some guarantees around API reliability and stability. Moving beyond any single API, and also speaking to wider operations. An API contract represents a commitment to offering a reliable and stable service that is secure and observable, where the provider has consumers’ best interests in mind. A contract should reflect a balance between the provider’s and the consumer’s interests, and provide a machine and human readable agreement that reflects the shared understanding of what an API delivers—for an agreed upon price. An API contract reflects the technical and business details of us doing business in this digital world.

Sadly, an API contract is often wielded in the name of all of these things, but there really is very little accountability or enforcement when it comes to API contracts. It is 100% up to the API provider to follow through and live up to the contract, with very little an API consumer can do if the contract isn’t met. Resulting in many badly behaved API providers, as well as monstrous API consumers. Right now, “API contract” is thrown around by executives, evangelists, analysts, and pundits more than it is ever actually used to govern what happens on the ground of API operations. Only time will tell if API contracts are just another buzzword that comes and goes, or if they become commonplace when it comes to doing business online in a digital world.


My Primary API Search Engines

I am building out several prototypes for the moving parts of an API search engine I want to build, pushing my usage of APIs.json and OpenAPI, but also trying to improve how I define, store, index, and retrieve valuable data about thousands of APIs through a simple search interface. I’m breaking out the actual indexing and search into their own areas, with the rating system being another separate dimension, but even before I get there I have to actually develop the primary engines for my search prototypes, feeding the indexes with fresh signals of where APIs exist across the online landscape. There isn’t an adequate API search engine out there, so I’m determined to jumpstart the conversation with an API search engine of my own. Something that is different from what web search engines do, and tailored to the unique mess we’ve created within the API industry.

My index of APIs.json and OpenAPI definitions, even with a slick search interface, is just a catalog, directory, or static index of a small piece of the APIs that are out there. I see a true API search engine as three parts:

  1. The Humans Searching for APIs - Providing humans with a web application to search for new and familiar APIs.
  2. The Search Engine Searching For APIs - Ensuring that the search engine is regularly searching for new APIs.
  3. Other Systems Searching For APIs - Providing an API for other systems to search for new and familiar APIs.

Without the ability for the search engine to actually seek out new APIs, it isn’t a search engine in my opinion—it is a search application. Without an API for searching for APIs, in my opinion, it isn’t an API search engine. It takes all three of these areas to make an application a true API search engine, otherwise we just have another catalog, directory, marketplace, or whatever you want to call it.
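
As a sketch of that third part, here is what a minimal systems-facing search API might look like, using Flask for illustration (the index structure and field names are my own assumptions):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# A tiny stand-in for the real APIs.json / OpenAPI backed index.
INDEX = [
    {"name": "Example Weather API", "tags": ["weather", "forecast"]},
    {"name": "Example Contacts API", "tags": ["crm", "contacts"]},
]

@app.route("/apis/search")
def search_apis():
    """Keyword search endpoint, so other systems can search for APIs."""
    q = request.args.get("q", "").lower()
    results = [
        api for api in INDEX
        if q in api["name"].lower() or any(q in tag for tag in api["tags"])
    ]
    return jsonify({"query": q, "count": len(results), "results": results})

if __name__ == "__main__":
    app.run()
```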

To help me put the engine into my API search engine, I’m starting with a handful of sources I’ve cultivated over the last five years studying the API industry, providing me with some seriously rich sources of information when it comes to identifying new APIs (a sketch of the GitHub engine follows the list):

  • GitHub Code Search API - I have a vocabulary I use for uncovering artifacts that provide clues to where APIs exist. I can also expand this search to topics, repos, and other dimensions of GitHub search, but I’m going to make sure I’m exhausting and optimizing core search for all I can before I move on. GitHub provides me with a handful of nuggets when it comes to finding APIs:
    • Swagger - Machine readable JSON and YAML API definitions.
    • OpenAPI - Machine readable JSON and YAML API definitions.
    • Postman - Machine readable JSON API definitions.
    • API Blueprint - Machine readable markdown definitions.
    • RAML - Machine readable YAML definitions.
    • HAR - Machine readable traffic snapshots.
    • Domains - Domains doing interesting things with APIs.
    • People - People doing interesting things with APIs.
  • Bing Web Search API - I use my vocabulary to uncover domains and artifacts that provide clues to where APIs exist. Unlike GitHub, I have to pay for these API calls, so I’m being much more careful about how I spider, and coherently defining the vocabulary I use to uncover this landscape. Bing provides me with a handful of nuggets when it comes to finding APIs:
    • Domains - A look into many different domains who are talking APIs in specific verticals.
    • GitHub - I find that Bing has some interesting indexes of GitHub — wondering how this will evolve.
  • Twitter - I use my vocabulary to identify new domains where people are talking about APIs, and additional signs of APIs.
    • Domains - A look into many different domains who are talking APIs in specific verticals.
    • People - People doing interesting things with APIs.
  • Domain - I have an exhaustive list of domains to spider for API artifacts, autogenerating an OpenAPI index along the way.
    • URLs - I look for a handful of valuable URLs for use as part of my index.
      • Twitter - Their Twitter accounts.
      • GitHub - Their GitHub accounts.
      • LinkedIn - Their LinkedIn accounts.
      • Feeds - Their Atom and RSS feeds.
      • Definitions - Any API definitions I can find.
      • Documentation - Where their documentation is.
      • Other Links - I have a long list of other links I look for.
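
Here is a minimal sketch of the GitHub engine referenced above, using the GitHub code search API with a made up slice of my vocabulary (the query terms and token handling are assumptions for illustration):

```python
import requests

GITHUB_SEARCH = "https://api.github.com/search/code"

# A made up slice of the vocabulary used to uncover API artifacts.
VOCABULARY = [
    '"openapi: 3" extension:yaml',
    '"swagger" extension:json',
    'filename:apis.json',
]

def harvest(token: str):
    """Search GitHub code for API definition artifacts, yielding the
    repository and file path of each potential API signal."""
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github+json",
    }
    for query in VOCABULARY:
        response = requests.get(GITHUB_SEARCH, params={"q": query}, headers=headers)
        response.raise_for_status()
        for item in response.json().get("items", []):
            yield item["repository"]["full_name"], item["path"]
```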

These four areas represent the primary engines for my API search engine. I currently have these engines running on AWS, processing GitHub and Bing searches, and refining my existing Twitter and relevant domain harvesting engines. While Bing and GitHub are harvesting API signals, and indexing API artifacts like OpenAPI, Postman, and others, I’m going to overhaul my Twitter and domain approaches. Twitter has always been a treasure trove of API signals, but like GitHub, it is getting harder to obtain these API signals at scale—as their value increases, things are getting tighter with API access. Also, running a proper domain harvesting campaign across thousands of domains isn’t easy, and will require some refactoring to do at the scale I need for this type of effort.

While I have two of these up and running, indexing new APIs, I still have a significant amount of work to invest in each engine. What I have now is purely a prototype. It will take several cycles until I get each engine performing as desired, and then I’m expecting ongoing tweaks, adjustments, and refinements to be made daily, weekly, and monthly to get the results I’m looking for. There are many areas of deficiency in the API sector that bother me, but not having a simple way to search for new and existing APIs is one area I cannot tolerate any longer. I am happy that ProgrammableWeb has been around all these years, but they haven’t moved the needle in the right way. I also get why people do API marketplaces, but I’m afraid they aren’t moving the needle in a positive direction either. I’d say that APIs.Guru (https://apis.guru/openapi-directory/) is the most progressive vision when it comes to API search in the last decade—with all the innovation supposedly going on, that is just sad. I am guessing that venture capital does not always mean the meaningful things we need will get built.


Taking A Fresh Look At The Nuance Of API Search

I have a mess of APIs.json and OpenAPI definitions I need to make sense of. I could easily fire up an ElasticSearch instance, point it at my API “data lake”, and begin defining facets and angles for making sense of what is in there. I’ve done this with other datasets, but this round I’m going to go a more manual route—take my time to actually understand the nuance of API search over other types of search, and take a fresh look at how I define and store API definitions, but also how I search across a large volume of data to find exactly the API I am looking for. I may end up going back to a more industrial grade solution in the end, but I am guessing I will at least learn a lot along the way.

I am using API standards as the core of my API index—APIs.json and OpenAPI. I’m importing other formats like API Blueprint, Postman, RAML, HAR, and others, but the core of my index will be APIs.json and OpenAPI. This is where I feel solutions like ElasticSearch might overlook some of the semantics of each standard, and I may not immediately be able to dial in on the preciseness of the APIs.json schema when it comes to searching API operations, and the OpenAPI schema when it comes to searching the granular details of what each individual API method delivers. While this process may not get me to my end solution, I feel like it will allow me to more intimately understand each data point within my API index in a way that helps me dial in exactly the type of search I envision.

The first dimensions of my API search index are derived from the APIs.json schema properties I use to define every entity within my API search index (a minimal APIs.json sketch follows the list):

  • Name - The name of a company, organization, institution, or government agency.
  • Description - The details of what a particular entity brings to the table.
  • Tags - Specific tags applied to an entity, or even a collection of entities.
  • Kin Rank - What Kin thinks of the entity being indexed with APIs.json.
  • Alexa Rank - What Alexa thinks of the entity being indexed with APIs.json.
  • Common Properties - Using common properties like blog, Twitter, and GitHub.
  • Included - Other related APIs that are included within the index.
  • Maintainers - Details about who is the maintainer of the API definition.
  • API Name - The name of a specific API program or project that an entity possesses.
  • API Description - The details of a specific API program or project that an entity possesses.
  • API Tags - How the individual API program or project is tagged for organization.
  • API Properties - The details of specific properties of an API like documentation, pricing, etc.

After indexing the 100K view with APIs.json, providing references to the different layers of API operations, I’m indexing the following OpenAPI schema properties (a sketch of how I flatten these into search records follows the list):

  • Title - The title of an individual API program or project that an entity possesses.
  • Description - The description of an individual API program or project that an entity possesses.
  • Domain - The subdomain, or top level domain that an API operates within.
  • Version - The version of each individual API.
  • Tags - The tags that are applied to a specific API program or project that an entity possesses.
  • API Path - The actual path of each API.
  • API Method Summary - The summary for an individual API method.
  • API Method Description - The description for an individual API method.
  • API Method Operation ID - The operation id for an individual API method.
  • API Method Query Parameters - The query parameters for an individual API method.
  • API Method Headers - The headers for an individual API method.
  • API Method Body - The body of an individual API method.
  • API Method Tags - The tags applied to each individual API method.
  • Schema Object Name - The name of each of the schema objects.
  • Schema Object Description - The description of each of the schema objects.
  • Schema Properties - The properties of each of the schema objects.
  • Schema Tags - The tags of each of the schema objects.
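
Here is a minimal sketch of how those OpenAPI properties might be flattened into search records, assuming each OpenAPI definition is already parsed into a plain Python dictionary (the record field names are my own):

```python
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def flatten_openapi(api: dict):
    """Flatten an OpenAPI definition into per-method search records,
    capturing the long tail fields described in the list above."""
    info = api.get("info", {})
    base = {
        "title": info.get("title", ""),
        "description": info.get("description", ""),
        "version": info.get("version", ""),
    }
    for path, methods in api.get("paths", {}).items():
        for method, details in methods.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level keys like "parameters"
            yield {
                **base,
                "path": path,
                "method": method.upper(),
                "summary": details.get("summary", ""),
                "operation_id": details.get("operationId", ""),
                "tags": details.get("tags", []),
                "parameters": [p.get("name") for p in details.get("parameters", [])],
            }
```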

These details, provided by the OpenAPI definition for each API, give me the long tail of my search, going beyond just the names and descriptions of each API, allowing me to turn different facets of the OpenAPI specification on or off when indexing, and when delivering search results. My biggest challenges in building this index center around:

  • Completeness - I struggle with being able to invest the resources to properly complete the profile for each API.
  • Inconsistency - Navigating the inconsistency of APIs, trying to nail down a single definition across thousands of them, is hard.
  • Performance - The performance of basic JavaScript search against such a large set of YAML / JSON documents isn’t optimal.
  • Accuracy - The accuracy of API methods is difficult to ascertain without actually getting a key and firing up Postman, or other script.
  • Up to Date - Understanding when information has become out of date, obsolete, or deprecated is a huge challenge with search.

Right now I have about 2K APIs defined with APIs.json, with a variety of OpenAPI artifacts to support them. With more coming in each day through my search engine spiders, trolling GitHub and the open web for signs of API life. I’m working to refine my current index of APIs, making sure they are complete enough to make available publicly. Then I want to be able to provide a basic keyword search tool, then slowly add each of these individual data points to some sort of advanced filter setting for this search tool. I’m not convinced I’ll end up with a usable solution in the end, but I am convinced that I will flesh out more of the valuable data points that exist within an APIs.json and OpenAPI index.
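
A minimal sketch of that basic keyword search, with the facets of the index able to be turned on and off per query (the facet names come from the flattened records sketched earlier, and are my own assumptions):

```python
def search(records, keyword, facets=("title", "description", "summary", "path")):
    """Keyword search over flattened API records, with searchable
    facets that can be toggled on or off for each query."""
    keyword = keyword.lower()
    for record in records:
        for facet in facets:
            value = record.get(facet, "")
            if isinstance(value, str) and keyword in value.lower():
                yield record
                break

# Example: only search the granular, method level fields.
# hits = list(search(index, "contacts", facets=("summary", "operation_id")))
```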

This prototype will at least give me something to play with when it comes to crafting a JavaScript interface for the YAML API index I am publishing to GitHub. I feel like these API search knobs will help me better define my search index, and craft cleaner OpenAPI definitions for use in this API search index. As the index grows I can dial in the search filters, and look for the truly interesting patterns that exist across the API landscape. Then I’m hoping to add an API ratings layer to further help me cut through the noise, and identify the truly interesting APIs amidst the chaos and trash. Not all APIs are created equal and I will need a way to better index, rank, and then ultimately search for the APIs I need. While also helping me more easily discover entirely new types of APIs that I may not notice in my insanely busy world.


Navigating API Rate Limit Differences Between Platforms

I always find an API provider’s business model to be very telling about the company’s overall strategy when it comes to APIs. I’m currently navigating the differences between two big API providers, trying to balance my needs across very different approaches to offering up API resources. I’m working to evolve and refine my API search algorithms, and I find myself having to do a significant amount of work due to the differences between GitHub search and Bing search. Ironically, they are both owned by the same company, but we all know their business models are seeking alignment as we speak, and I suspect my challenges with the GitHub API are probably a result of this alignment.

The challenges with integrating with the GitHub and Microsoft APIs are pretty straightforward, and something I find myself battling regularly when integrating with many different APIs. My use of each platform is pretty simple. I am looking for APIs. The solutions are pretty simple, and robust. I can search for code using the GitHub Search API, and I can search for websites using the Bing Search API. Both produce different types of results, but what both produce is of value to me. The challenge comes in when I can pay for each API call with Bing, but do not have that same option with GitHub. I am also noticing much tighter restrictions on how many calls I can make to the GitHub APIs. With Bing I can burst, depending on how much money I want to spend, but with GitHub I have no relief valve—I can only make X calls a minute, per IP, per user.

This is a common disconnect in the world of APIs, and something I’ve written a lot about. GitHub (Microsoft) has a more “elevated” business model, with the APIs being just an enabler of that business model. Whereas Bing (Microsoft) goes with a much more straightforward API monetization strategy—pay for what you use. In this comparison my needs are pretty straightforward—both providers have data I want, and I’m willing to pay for it. However, there is an additional challenge. I’m also using GitHub to manage the underlying application for my project. Meaning, after I pull search results from GitHub and Bing, and run them through my super top secret, magical, and proprietary refinement algorithm, I publish the refined results to a GitHub repository, and manage the application in real time using Git and the GitHub APIs—which counts against my API usage.

I used to manage all my static sites and applications 100% on GitHub, using the APIs to orchestrate the data behind each Jekyll-driven site. For the last five years I’ve run API Evangelist, and waves of simple data-driven static applications, on GitHub like this. It has been a good ride. A free ride. One I fear is coming to a close. I can no longer deploy static data-driven Jekyll apps on the platform and confidently manage them using the GitHub API. It is something I do not expect to continue getting for free. I’d be happy to pay for my account on a per organization, per repo, and per API call basis. In the end, I’ll probably begin just relying on Git for bulk builds of each application I run on GitHub, and eventually begin migrating them to my own servers, running Jekyll on my own, and custom developing an API for managing the more granular changes across hundreds of micro applications that run on Jekyll using YAML data. It would be nice for GitHub to recognize this type of application development as part of their business model, but I’m guessing it isn’t mainstream enough for folks to adopt, and GitHub to cater to.

Getting back to the search portion of this post. I am finding myself writing a scheduling algorithm that spreads out my API calls across a 24 hour period. I guess I can also leverage the different GitHub accounts I have access to, and maybe spread the harvesting across a couple of EC2 instances, but I’d rather just do what Bing offers me, and put in my credit card. I am sure there are other ways I can find to circumvent the GitHub API rate limits, but why? I would rather just be above board and put in my credit card to be able to scale how I’m using the platform. One of the biggest challenges to API integration at scale in the future will be API providers who do not offer relief valves for their consumers. Significantly increasing the investment required to integrate with an API in a meaningful way, making it much more difficult to seamlessly use just a handful of APIs, let alone hundreds or thousands of them. This challenge is nothing new, and is just one example of how the business of APIs can get in the way of the technology of APIs—slowing things down along the way.
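
A minimal sketch of the kind of scheduling I mean, spreading a known queue of calls evenly across a 24 hour period, while also backing off when the rate limit headers GitHub returns start running low (the pacing choices are arbitrary):

```python
import time
import requests

def spread_calls(urls, token, period_seconds=86400):
    """Spread a queue of API calls evenly across a period (24 hours by
    default), honoring GitHub's X-RateLimit-Remaining header."""
    interval = period_seconds / max(len(urls), 1)
    headers = {"Authorization": f"token {token}"}
    for url in urls:
        response = requests.get(url, headers=headers)
        if int(response.headers.get("X-RateLimit-Remaining", 1)) == 0:
            # Sleep until the limit resets, using the header GitHub provides.
            reset_at = int(response.headers.get("X-RateLimit-Reset", time.time() + 60))
            time.sleep(max(reset_at - time.time(), 0))
        yield response
        time.sleep(interval)
```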


The JSON Schema Tooling In My Life

I am always pushing for more schema order in my life. I spend way too much time talking about APIs, when a significant portion of the API foundation is schema. I don’t have as many tools to help me make sense of my schemas, and to improve them as definitions of meaningful objects. I don’t have the ability to properly manage and contain the growing number of schema objects that pop up in my world on a daily basis, and this is a problem. There is no reason I should be making schema objects available to other consumers if I do not have a full handle on what schema objects exist, let alone a full awareness of everything that has been defined when it comes to the role each schema object plays in my operations.

To help me better understand the landscape when it comes to JSON Schema tooling, I wanted to take a moment and inventory the tools I have bookmarked and regularly use as part of my daily work with JSON Schema:

  • JSON Schema Editor - https://json-schema-editor.tangramjs.com/ - An editor for JSON Schema.
  • JSON Schema Generator - https://github.com/jackwootton/json-schema - Generates JSON Schema from JSON.
  • JSON Editor - https://json-editor.github.io/json-editor/ - Generates a form and JSON from JSON Schema.
  • JSON Editor Online - https://github.com/josdejong/jsoneditor/ - Allows me to work with JSON in a web interface.
  • Another JSON Schema Validator (AJV) - https://github.com/epoberezkin/ajv - Validates my JSON using JSON Schema.

I am going to spend some time consolidating these tools into a single interface. They are all open source, and there is no reason I shouldn’t be localizing their operation, and maybe even evolving and contributing back. This helps me understand some of my existing needs and behavior when it comes to working with JSON Schema, which I’m going to use to seed a list of my JSON Schema needs, and drive a road map for things I’d like to see developed—getting a little more structure regarding how I work with JSON Schema (a sketch covering a few of these needs follows the list).

  • Visual Editor - Being able to visually render and edit JSON Schema in the browser.
  • YAML / JSON Editor - Being able to edit JSON Schema in YAML or JSON.
  • YAML to JSON Converter - Converting my YAML JSON Schema into JSON.
  • JSON to YAML Converter - Converting my JSON JSON Schema into YAML.
  • JSON to JSON Schema Generator - Generate JSON Schema from JSON object.
  • JSON Schema to JSON Generator - Generate a JSON object from JSON Schema.
  • JSON Validation Using JSON Schema - Validate my JSON using JSON Schema.
  • Enumerators - Help me manage enumerators used across many objects.
  • Search - Help me search across my JSON Schema objects, wherever they are.
  • Guidance - Help me create better JSON Schema objects with standard guidelines.
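
A few of these needs are already within reach by stitching together small pieces; here is a minimal sketch in Python, with the PyYAML and jsonschema packages standing in for the JavaScript tooling above, covering conversion, validation, and naive schema generation:

```python
import json
import yaml
from jsonschema import validate, ValidationError

def yaml_to_json(text: str) -> str:
    """YAML to JSON converter."""
    return json.dumps(yaml.safe_load(text), indent=2)

def json_to_yaml(text: str) -> str:
    """JSON to YAML converter."""
    return yaml.safe_dump(json.loads(text))

def generate_schema(obj) -> dict:
    """Naive JSON to JSON Schema generator: types only, with no
    formats, enumerators, or required fields."""
    if isinstance(obj, dict):
        return {"type": "object",
                "properties": {k: generate_schema(v) for k, v in obj.items()}}
    if isinstance(obj, list):
        return {"type": "array", "items": generate_schema(obj[0]) if obj else {}}
    if isinstance(obj, bool):  # bool before int, since bool subclasses int
        return {"type": "boolean"}
    if isinstance(obj, int):
        return {"type": "integer"}
    if isinstance(obj, float):
        return {"type": "number"}
    return {"type": "string"}

def is_valid(instance, schema) -> bool:
    """JSON validation using JSON Schema."""
    try:
        validate(instance=instance, schema=schema)
        return True
    except ValidationError:
        return False
```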

This is a good start. If I can bring some clarity and coherence to these areas, I’m going to be able to step up my API design and development game. If I can’t, I’m afraid I’m going to be laying a poor foundation for any API I design in this environment. I mean, how can I consciously provide access to any schema object that I don’t have properly defined, indexed, versioned, and managed? If I don’t fully grasp my schema objects, my API design is going to be off kilter, and most likely causing friction with my consumers. Granted, I could offload the responsibility for making sense of my schema to my consumers using a GraphQL solution, but I’m more in the business of doing the heavy lifting in this area as it pertains to my business—I’m the one who should know what is going on with each and every object that passes through my business’s servers.

I wish there was a schema tool out there to help me do everything that I need. Unfortunately I haven’t seen it. The tooling that has risen up around the OpenAPI specification helps us better invest in schema objects when they are in the service of our API contracts, but nothing exists just for the sake of schema management. I will keep taking inventory of what tooling is available, as well as what I am needing when it comes to JSON Schema management. Who knows, something might pop up out there on the landscape. Or, more realistically, I’m hoping little individual open source solutions keep popping up, allowing me to stitch them together and create the experience I’m looking for. I’m a big fan of this approach, rather than one service provider swooping in and providing the one tool to rule them all, only to get acquired and then shut down—breaking my heart all over again.


The Details Of My API Rating Formula

Last week I put some thoughts down about the basics of my API rating system. This week I want to go through each of those basics, and try to flesh out the details of how I would gather the actual data needed to rank API providers. This is a task I’ve been through with several different companies, only to have it abandoned, and then operated on my own for about three years, only to abandon it once I ran low on resources. I’m working to invest more cycles into actually defining my API rating in a transparent and organic way, then applying it in a way that allows me to continue evolving it, while also using it to make sense of the APIs I am rapidly indexing.

First, I want to look at the API-centric elements I will be considering when looking at a company, organization, institution, government agency, or other entity, and trying to establish some sort of simple rating for how well they are doing APIs. I’ll be the first to admit that rating systems are kind of bullshit, and are definitely biased, holding all kinds of opportunity for gaming, but I need something. I need a way to articulate in real time how good of an API citizen an API provider is. I need a way to rank the searches for the growing number of APIs in my API search index. I need a list of questions I can ask about an API in both a manual, and hopefully automated, way:

  • Active / Inactive - APIs that have no sign of life need a lower rating.
    • HTTP Status Code - Do I get a positive HTTP status code back when I ping their URL(s)?
    • Active Blog - Does their blog have regular activity on it, with relevant and engaging content?
    • Active Twitter - Is there a Twitter account designated for the API, and is it playing an active role in its operations?
    • Active GitHub - Is there a GitHub account designated for the API, and is it playing an active role in its operations?
    • Manual Visit - There will always be a need for a regular visit to an API to make sure someone is still home.
  • Free / Paid - What something costs impacts our decision to use it or not.
    • Manual Visit - There is no automated way to understand API pricing.
  • Openness - Is an API available to everyone, or is it a private thing.
    • Manual Review - This will always be somewhat derived from a manual visit by an analyst to the API.
    • Sentiment Analysis - Some sentiment about the openness could be established from analyzing Twitter, Blogs, and Stack Exchange.
  • Reliability - Can you depend on the API being up and available.
    • Manual Review - Regularly check in on an API to see what the state of things are.
    • Sentiment Analysis - Some sentiment about the reliability of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
    • Monitoring Feed / Data - For some APIs, monitoring could be setup, but will cost resources to be able to do accurately.
  • Fast Changing - Does the API change a lot, or remain relatively stable.
    • Manual Review - Regularly check in on an API to see how often things have changed.
    • Change Log Feed - Tune into a change log feed to see how often changes are made.
    • Sentiment Analysis - Some sentiment about the changes to an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
  • Social Good - Does the API benefit a local, regional, or wider community.
    • Manual Review - It will take the eye of an analyst to truly understand the social impact of an API.
  • Exploitative - Does the API exploit its users’ data, or allow others to do so.
    • Manual Review - It will take a regular analyst review to understand whether an API has become exploitative.
    • Sentiment Analysis - Some sentiment about the exploitative nature of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
  • Secure - Does an API adequately secure its resources and those who use it.
    • Manual Review - Regularly check in on an API to see how secure things are.
    • Sentiment Analysis - Some sentiment about the security of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
    • Monitoring Feed / Data - For some APIs, monitoring could be setup, but will cost resources to be able to do accurately, unless provided by provider.
  • Privacy - Does an API respect privacy, and have a strategy for platform privacy.
    • Manual Review - Regularly check in on an API to see how privacy is addressed, and what steps the platform has been taking to address.
    • Sentiment Analysis - Some sentiment about the privacy of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
  • Monitoring - Does a platform actively monitor its APIs, and allow others to as well.
    • Manual Review - Regularly check in on an API to see how things are being monitored.
    • Sentiment Analysis - Some sentiment about the monitoring of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
    • Monitoring Feed / Data - For some APIs, monitoring could be setup, but will cost resources to be able to do accurately, unless provided by provider.
  • Observability - Is there visibility into API platform operations, and its processes.
    • Manual Review - It will always take an analyst to understand observability until there are feeds regarding every aspect of operations.
  • Environment - What is the environmental footprint or impact of API operations.
    • Manual Review - This would take a significant amount of research into where APIs are hosted, and disclosure regarding the environmental impact of data centers, and the regions they operate in.
  • Popular - Is an API popular, and something that gets a large amount of attention.
    • Manual Review - Analysts can easily provide a review of an API to better understand an APIs popularity.
    • Sentiment Analysis - Some sentiment about the presence of an API could be established from analyzing Twitter, Blogs, and Stack Exchange.
    • Twitter Followers - The number of Twitter followers for an account dedicated to an API provides some data.
    • Twitter Mentions - Similarly, the number of mentions of an API provider’s Twitter account provides additional data.
    • GitHub Followers - The number of GitHub followers provides another dimension regarding how popular an API is.
    • Stack Exchange Mentions - The question and answer site always provides some interesting insight into which APIs are being used.
    • Blog Mentions - The number of blog posts on top tech blogs, as well as independent blogs provide some insight into popularity.
  • Value - What value does an API bring to the table in generalized terms.
    • Manual Review - The only way to understand the value an API brings to the table is for an analyst to evaluate the resources made available. Maybe some day we’ll be able to do this with more precision, but currently we do not have the vocabulary for describing it.

I am developing a manual questionnaire I can execute against while profiling every API. I have already done this for many APIs, but I’m looking to refine it for 2019. I will also be automating wherever I can, leveraging other APIs, feeds, and some machine learning to help me augment my heuristic analyst rank with some data driven elements. Some of these will only change when I, or hopefully another analyst, reviews them, but some of this will be more dependent on data gathered each month. It will take some time for a ranking system based upon these elements to come into focus, but I’m guessing along the way I’m going to learn a lot, and this list will look very different in twelve months.

Next, I wanted to look at the elements of the rating system itself, which I think are essential to the success of an API ranking system based upon the elements above. I’ve seen a number of efforts fail when it comes to indexing and ranking APIs. It is not easy. It is a whole lot of work, without an easy path to monetization like Google established with advertising. Many folks have tried and failed, and I feel like some of these elements will help keep things grounded, and provide more opportunity for success, or at least sustainability.

  • YAML Core - I would define the rating system in YAML.
    • Rating Formula - The rating formula is machine readable and available as YAML, taking everything listed above and automating the application of it across APIs using a standard YAML definition.
    • Rating Results - Publishing a YAML dump of the results of rating for each API provider, also providing a machine readable template for understanding how each API provider is being ranked.
  • GitHub Base - Everything would be in a series of repositories.
    • GitHub Repo - A GitHub repository is the unit of compute and storage for the rating.
    • Git Management - I am using GitHub to apply the rating system across all APIs in my search index.
    • GitHub API Management - I am automating the granular editing of the YAML core using the GitHub API.
  • Observable - The entire algorithm, process, and results are open.
    • Search Transparency - I will be tracking keyword searches, minus IP and user agent, then publishing the results to GitHub as YAML.
    • Minimal Tracking - There will be minimal tracking of end-users searching and applying the ranking, with tracking being provider focused.
  • Evolvable - It would be essential to evolve and adapt over time.
    • Semantic Versioned - The search engine will be semantically versioned, providing a way of understanding it as it evolves.
    • YAML - Everything is defined as YAML which is semantically versioned, so nothing is removed or changed until major releases.
  • Weighted - Anyone can weight the questions that matter to them (see the scoring sketch after this list).
    • Data Points - All data points will have a weight applied as a default, but ultimately will allow end-users to define the weights they desire.
    • Slider Interface - Providing end-users with a sliding interface for defining the importance of each data point to them, and apply to the search.
  • Completeness - Not all the profiles of APIs will be as complete as others.
    • Data Points - The continual addition and evolution of data points, until we find optimal levels of ranking across industries, for sustained periods of time.
  • Ephemeral - Understanding that nothing lasts forever in this world.
    • Inactive - Making sure things that are inactive reflect this state.
    • Deprecation - Always flag something as deprecated, reducing in rank.
    • Archiving - Archive everything that has gone away, keeping indexes pure.
  • Community - It should be a collaboration between key entities and individuals.
    • GitHub - Operate the rating system out in the open on GitHub, leveraging the community for evolving.
    • Merge Request - Allow for merge requests on the search index, as well as the ratings being applied.
    • Forks - Allow for the forkability of the API search, leveraging ranking as a key dimension for how things can be forked.
    • Contribution - Allow for community contribution to the index, and the ranking system, establishing partnerships along the way.
  • Machine Readable - Ready for machines to engage with seamlessly.
    • YAML - Everything is published as YAML to keep things simple and machine readable.
    • APIs.json - Follow a standard for indexing API operations and making them available.
    • OpenAPI - Follow a standard for indexing the APIs, and making them available.
  • Human Readable - Kept accessible to anyone wanting to understand.
    • HTML - Provide a simple HTML application for end-users.
    • CSS - Apply a minimalist approach to using CSS.
    • JavaScript - Drive the search and engagement with client-side JavaScript, powered by APIs.
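
To make the GitHub API management piece above more tangible, here is a rough sketch of what granular editing of the YAML core might look like, assuming Python with the requests and PyYAML libraries, and the GitHub contents API. The repository, file path, token, and the fields being edited are all hypothetical placeholders, not my actual setup:

```python
# A sketch of granular YAML edits via the GitHub contents API.
# The repository, path, and token below are hypothetical placeholders.
import base64

import requests
import yaml

GITHUB_API = "https://api.github.com"
REPO = "example-org/api-rating-index"   # hypothetical repository
PATH = "ratings/example-api.yaml"       # hypothetical YAML file
HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}

# Fetch the current file, which includes the blob SHA required for updates.
resp = requests.get(f"{GITHUB_API}/repos/{REPO}/contents/{PATH}", headers=HEADERS)
resp.raise_for_status()
payload = resp.json()
rating = yaml.safe_load(base64.b64decode(payload["content"]))

# Make a granular, observable edit, e.g. flagging the API as inactive.
rating["inactive"] = True

# Commit the change back through the API as a single edit.
update = {
    "message": "Flag API as inactive",
    "content": base64.b64encode(yaml.safe_dump(rating).encode()).decode(),
    "sha": payload["sha"],
}
resp = requests.put(f"{GITHUB_API}/repos/{REPO}/contents/{PATH}",
                    headers=HEADERS, json=update)
resp.raise_for_status()
```

Every change lands as a commit, which is what lets the Git history double as the observability layer for the rating system.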

This provides me with my starter list of elements I think will set the tone for how this API search engine will perform. Ultimately there will be a commercial layer to how the API search and ranking works, but the goal is to be as transparent, observable, and collaborative as possible about how it all works. A kind of observability that does not exist in web search, and definitely doesn’t exist in anything API search related. I’ll give it to DuckDuckGo, for being the good guys of web search, which I think provides an ethical model to follow, but I also want to be open with the rating system behind it, to avoid some of the illness that commonly exists within rating agencies of any kind.

Next stop will be about turning the rating elements into a YAML questionnaire that I can begin systematically applying to the almost 2,000 APIs I have in my index. With most of it being a manual process, I need to get the base rating details in place, begin asking the questions, and then version the questionnaire schema as I work my way through all of the APIs. I have enough experience with profiling APIs to know that what questions I ask, how I ask them, and what data I can gather about an API will rapidly evolve once I begin trying to satisfy questions against real world APIs. How fast I can apply my API rating system to the APIs I have indexed, as well as how quickly I can turn around and refresh it over time, will depend on how much time and resources I am able to manifest for this project. Something that will come and go, as this is just a side project for me, to keep me producing fresh content and awareness of the API space.
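
To give a sense of what I have in mind, here is a minimal sketch of a semantically versioned YAML questionnaire, parsed with PyYAML. The questions, weights, and scoring approach are illustrative assumptions on my part, not the final schema:

```python
# A hypothetical sketch of a semantically versioned YAML questionnaire,
# parsed with PyYAML; the questions and weights are illustrative only.
import yaml

QUESTIONNAIRE = """
version: 2019.1.0
questions:
  - id: active
    ask: Is there any sign of life for this API?
    weight: 3
  - id: pricing
    ask: Is pricing published and accessible?
    weight: 2
  - id: observability
    ask: Is there visibility into platform operations?
    weight: 1
"""

def score(answers: dict) -> float:
    """Apply yes/no answers against the questionnaire's default weights."""
    spec = yaml.safe_load(QUESTIONNAIRE)
    total = sum(q["weight"] for q in spec["questions"])
    earned = sum(q["weight"] for q in spec["questions"] if answers.get(q["id"]))
    return earned / total

# One API profiled by hand, scored against the current questionnaire version.
print(score({"active": True, "pricing": False, "observability": True}))  # 0.666...
```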


Thinking Differently When Approaching OpenAPI Diffs And Considering How To Layer Each Potential Change

I have a lot of OpenAPI definitions, covering about 2,000 separate entities. For each entity, I often have multiple OpenAPIs, and I am finding more all the time. One significant challenge I have in all of this centers around establishing a master “truth” OpenAPI, or series of definitive OpenAPIs for each entity. I can never be sure that I have a complete definition of any given API, so I want to keep vacuuming up any OpenAPI, Swagger, Postman, or other artifact I can, and compare it with the “truth” copy I have indexed. Perpetually layering the additions and changes I come across while scouring the Internet for signs of API life. This perpetual update of API definitions in my index isn’t easy, and any tool that I develop to assist me will need constant refinement and evolution to be able to make sense of the API fragments I’m finding across the web.

There are many nuances to API design, as well as to how the OpenAPI specification is applied when quantifying the design of an API, making the process of doing a “diff” between two OpenAPI definitions very challenging, and rendering the common “diff” tools baked into GitHub and other solutions ineffective when it comes to understanding the differences between two API definitions that may represent a single API. These are some of the things I’m considering as I craft my own OpenAPI “diff” tooling (a rough sketch follows the list):

  • Host - How the host is stored, defined, and applied across sandbox, production, and other implementations injects challenges.
  • Base URL - How OpenAPI definitions describe their base URL versus their host will immediately cause problems in how diffs are established.
  • Path - Adding even more instability, many paths will often conflict with host and base URL, providing different fragments that show as differences.
  • Verbs - Next I take account of the verbs available for any path, understanding what the differences are in methods applied.
  • Summary - Summaries are difficult to diff, and almost always have to be evaluated and weighted by a human being.
  • Description - Descriptions are difficult to diff, and almost always have to be evaluated and weighted by a human being.
  • Operation ID - These are usually autogenerated by tooling, and rarely reflect a provider defined standard, making them worthless in “diff”.
  • Query Properties - Evaluating query parameters individually is essential to a granular level diff between OpenAPI definitions.
  • Path Properties - Evaluating path parameters individually is essential to a granular level diff between OpenAPI definitions.
  • Headers - Evaluating headers individually is essential to a granular level diff between OpenAPI definitions.
  • Tags - Most providers do not tag their APIs, and they are often not included, and rarely provide much value when applying a “diff”.
  • Request Bodies - Request bodies provide a significant amount of friction for diffs depending on the complexity and design of an API.
  • Responses - Responses often provide an incomplete view of an API, and rarely are robust enough to impact the “diff” view.
  • Status Codes - Status codes should be evaluated on an individual basis, providing a variety of ways to articulate these statuses.
  • Content Types - Content types these days are often application/json, but do provide some opportunities to define unique characteristics.
  • Schema Objects - Schema is often not defined, and rarely used as part of a diff unless OpenAPIs are generated from log, HAR, and other files.
  • Schema Properties - Schema properties are rarely present in OpenAPIs, making them not something that comes up on the “diff” radar.
  • Security Definitions - Security definitions are the holy grail of automating API indexing, but are rarely present in OpenAPI, and often only found in Postman Collections.
  • References - The use of $ref, or absence of $ref and doing everything inline poses massive challenges to coherently considering “diff” results.
  • Scope - The size of the OpenAPI snippet being applied as part of a “diff” helps narrow what needs to be considered by a human or machine.
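
As a starting point for the host, base URL, path, and verb concerns above, here is a rough sketch of the first normalization layer such a tool might apply, reducing two definitions to comparable path and verb combinations. This is a simplified illustration, not the actual tooling:

```python
# A simplified first pass at an OpenAPI "diff": normalize away host, base URL,
# and trailing-slash instability, then compare path and verb combinations.
VERBS = {"get", "post", "put", "patch", "delete", "options", "head"}

def operations(openapi: dict) -> set:
    """Reduce an OpenAPI definition to a set of (path, verb) pairs."""
    ops = set()
    for path, methods in openapi.get("paths", {}).items():
        normalized = "/" + path.strip("/")  # smooth over trailing-slash fragments
        for verb in methods:
            if verb.lower() in VERBS:
                ops.add((normalized, verb.lower()))
    return ops

def diff(truth: dict, candidate: dict) -> dict:
    """Show what a candidate definition adds to, or is missing from, the truth copy."""
    a, b = operations(truth), operations(candidate)
    return {"added": sorted(b - a), "missing": sorted(a - b)}

# Each dict would normally be loaded from a YAML or JSON artifact found in the wild.
truth = {"paths": {"/images": {"get": {}, "post": {}}}}
candidate = {"paths": {"/images/": {"get": {}}, "/videos": {"get": {}}}}
print(diff(truth, candidate))
# {'added': [('/videos', 'get')], 'missing': [('/images', 'post')]}
```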

This reflects the immediate concerns I have approaching the development of a custom “diff” tool for OpenAPI. First I am just trying to establish a strategy for stripping back the layers of OpenAPI definitions, and establish a sort of layered user interface for me to manually accept or reject changes to an OpenAPI. An interface that will also allow me to define a sort of rules vocabulary for increasingly automating the decision making process. I’d love it if eventually the diff tool would show me just a single diff, present me with the change it thinks I should make, and allow me to just agree and move to the next “diff”. I have a lot of work to do to get things to this point.

Like API search, I feel like API diff is something I have to reduce to its basics, and then fumble my way towards finding an acceptable solution. I don’t feel there is a single “diff” tool for JSON or YAML that will have the eye that I demand for analyzing, presenting, and either manually or automatically merging a diff. Like the other layers of my API search engine, diff is something I need to think through, iterate upon, and repeat until I come up with something that helps me merge “diffs” efficiently across thousands of APIs, and hopefully eventually automates and abstracts away the most common differences between the APIs that I am spidering and indexing. Like every other area, it is something I’m only working on when I have time, but I will eventually come out the other end with a usable OpenAPI diff tool that can help me make sense of all the API definitions I’m bombarded with on a daily basis.


Why The Open Data Movement Has Not Delivered As Expected

I was having a discussion with my friends working on API policy in Europe about API discovery, and the topic of failed open data portals came up. Something that is a regular recurring undercurrent I have to navigate in the world of APIs. Open data is a subset of the API movement, and something I have first-hand experience in, building many open data portals, contributing to city, county, state, and federal open data efforts, and most notably riding the open data wave into the White House and working on open data efforts for the Obama administration.

Today, there are plenty of open data portals. The growth in the number of portals hasn’t decreased, but I’d say the popularity, utility, and publicity around open data efforts has not lived up to the hype. Why is this? I think there are many dimensions to this discussion, and few clear answers when it comes to peeling back the layers of this onion, something that always makes me tear up.

  • Nothing There To Begin With - Open data was never a thing, and never will be a thing. It was fabricated as part of an early wave of the web, and really never got traction because most people do not care about data, let alone it being open and freely available.
  • It Was Just Meant To Be A Land Grab - The whole open data thing wasn’t about open data for all, it was meant to be open for business for a few, and they have managed to extract the value they needed, enrich their own datasets, and have moved on to greener pastures (AI / ML).
  • No Investment In Data Providers - One of the inherent flaws of the libertarian led vision of web technology is that government is bad, so don’t support them with taxes. Of course, when they open up data sets that is good for us, but supporting them in covering compute, storage, bandwidth, and data refinement or gathering is bad, resulting in many efforts going away or stagnating.
  • It Was All Just Hype From Tech Sector - The hype about open data outweighed the benefits and realities on the ground, and ultimately hurt the movement with unrealistic expectations, setting efforts back many years; they are only now beginning to recover now that the vulture capitalists are on to other things.
  • Open Data Is Not Sexy - Open data is not easy to discover, define, refine, manage, and maintain as something valuable. Most governments, institutions, and other organizations do not have the resources to do it properly, and only the most attractive of uses have the resources to pay people to do the work properly, incentivizing commercial offerings over the open, and underfunded offerings.
  • Open Data Is Alive and Well - Open data is doing just fine, and is actually doing better, now that the spotlight is off of them. There will be many efforts that go unnoticed, unfunded, and fall into disrepair, but there will also be many fruitful open data offerings out there that will benefit communities, and the public at large, along with many commercial offerings.
  • Open Data Will Never Be VC Big - Maybe open data lost the spotlight because it just doesn’t have the VC level revenue that investors and entrepreneurs are looking for. If it enriches their core data sets, and can be used to train their machine learning models, it has value as a raw material, but as something worth shining a light on, open data just doesn’t rise to the scope needed to be a “product” all by itself.

My prognosis on why open data never quite “made it” is probably a combination of all of these things. There is a lot of value present in open data as a raw material, but a fundamental aspect of why data is “open” is so that entrepreneurs can acquire it for free. They aren’t interested in supporting city, county, state, and federal data stewards, and helping them be successful. They just want it mandated that it is publicly available for harvesting as a raw material, for use in the technology supply chain. Open data was primarily about getting waves of open data enthusiasts to do the heavy lifting when it came to identifying where the most valuable raw data sources exist.

I feel pretty strongly that we were all used to initiate a movement where government and institutions opened up their digital resources, right as this latest wave of the information economy was peaking. Triggering institutions, organizations, and government agencies to bear fruit that could be picked by technology companies, and used to enrich their proprietary datasets, and machine learning models. Open doesn’t mean democracy, it mostly means open for business. The genius of the Internet evolution is that it gets us all working in the service of opening things up for the “community”. Democratizing everything. Then once everything is on the table, companies grab what they want, and show very little interest in giving anything back to the movement. I know I have fallen for several waves of this over the last decade.

I think open data has value. I think community-driven, standardized sets of data should continue to be invested in. I think we should get better at discovery mechanisms involving how we find data, and how we enable our data to be found. However, I think we should also recognize that there are plenty of capitalists who will see what we produce as a valuable raw resource, and something they want to get their hands on. Also, more importantly, that these capitalists are not in the businesses of ensuring this supply of raw resource continues to exist in the future. Like we’ve seen with the environment, these companies do not care about the impact their data mining has on the organizations, institutions, government agencies, and communities that produced them, or will be impacted when efforts go unfunded, and unsupported. Protecting our valuable community resources from these realities will not be easy as the endless march of technology continues.


API Interoperability is a Myth

There are a number of concepts we cling to in the world of APIs. I’ve been guilty of inventing, popularizing, and spreading many myths in my almost decade as the API Evangelist. One that I’d like to debunk and be more realistic about is API interoperability. When you are focused on just the technology of APIs, as well as maybe the low-level business of APIs, you are an API interoperability believer. Of course everyone wants API interoperability, and that all APIs should work seamlessly together. However, if you begin operating at the higher levels of the business of APIs at all, and spend any amount of time studying the politics of why and how we do APIs at scale, you will understand that API interoperability is a myth.

This reality is one of the reasons us technologists who possess just a low-level understanding of how the business behind our tech operates are such perfect tools for the higher level business thinkers, and people who successfully operate and manipulate at the higher levels of industries, or even at the policy level. We are willing to believe in API interoperability, and work to convince our peers that it is a thing, and we all work to expose, and open up the spigots across our companies, organizations, institutions, and government agencies. Standardized APIs and schema that play nicely with each other are valuable, but only within certain phases of a company’s growth, or as part of a myth-information campaign to convince the markets that a company is a good corporate citizen. However, once a company achieves dominance, or the winds change around particular industry trends, most companies just want to ensure that all roads lead towards their profitability.

Believing in API interoperability without a long term strategy means you are opening up your company to value extraction by your competitors. I don’t care how good your API management is, if your API makes it frictionless to integrate with because you use a standard format, it just means that the automated value harvesters of companies will find it frictionless to get at what you are serving, and more easily weaponize and monetize your digital resources. It pains me to say this, but it is the reality. If you are in the business of making your API easier to connect with, you are in the business of making it easier for your competitors to extract value from you. Does this mean we shouldn’t do APIs, and make them interoperable? No, but it does mean that we shouldn’t be ignorant of the cutthroat, exploitative, and aggressive nature of businesses that operate within our industries. Does it mean we shouldn’t invest in standards? No, but we should be aware that not every company sitting at the table shares the same interests as us, and could be playing a longer game that involves lock-in, proprietary nuances, or even slowing the standards movement in their favor.

I think that storage APIs are a great example of this. In the early days of cloud storage APIs, I remember everyone saying they were AWS S3 compatible—even Google and Microsoft highlighted this. However, as things have progressed, everyone adds their own tweaks, changes, and nuances that make it much harder to get your terabytes of data off their platform. It was easy to get it in, and keep it synced across your providers, but eventually the polarities change, and all roads lead to lock-in, and are not in the service of interoperability. This is just business. I’m not condoning it, I am only repeating what my entrepreneurial friends tell me. If you make it easy for your customers to use other services, you are eroding their loyalty to your brand, and eventually they will leave. So you have to make it harder for them over time. Just incrementally. Forget to grease the door hinges. Change the way the doorknob turns. Make the door narrower. Stop following the international or local standards for how you design a door, call it innovation, and reduce the ways in which your customers can easily get out the door.

I call this the Hotel California business model. You can check in, but you can never leave. Wrap it all in a catchy tune, even call yourself a hotel, but in reality you’ve gotten hooked on a technological myth, and you will never actually be able to find the door. Anyways, c’mon, I fucking hate the Eagles, don’t we just have some Creedence we could play? Anyways, I got off track. Nobody but us low-level delusional developers believe in API interoperability. The executives don’t give a shit about it. Unless it supports the latest myth-information campaign. In the long run, nobody wants their APIs to work together, we all just want EVERYONE to use OUR APIs! Sure, we also want to be seen as working together on standards groups, and that our APIs are the standard EVERYONE should follow, ensuring interoperability with us at the center. But, nobody truly believes in API interoperability. If you do, I recommend you do some soul searching regarding where you exist in the food chain. I’m guessing you are a lower level pawn, doing the work of the puppet master in your industry. This is why you won’t find me on many standards bodies, or blindly pushing interoperability at scale. It doesn’t exist. It isn’t real. Let’s get to work on more meaningful policy level things that will help shape the industry.


Your API and Schema Is That Complex Because You Have Not Done The Hard Work To Simplify

I find myself looking at a number of my more complex API designs, and saying to myself, “this isn’t complicated because it is hard, it is complicated because I did not spend the time required to simplify it appropriately”. There are many factors contributing to this reality, but I find that more often than not it is because I’m over-engineering something, caught up in the moment focusing on a purely computational approach, and not considering the wider human, business, and other less binary aspects of delivering APIs.

While I am definitely my own worst enemy in many API delivery scenarios, I’d say there are a wide range of factors that influence how well, or poorly, I design my API resources, with just a handful of them being:

  • Domain - I just do not have the domain knowledge required to get the job done properly.
  • Consumer - I just do not have the knowledge I need of my end consumers to do things right.
  • Bandwidth - I just do not have the breathing room to properly sit down and make it happen.
  • Narcissism - I am the best at this, I know what is needed, and I deem this complexity necessary.
  • Lazy - I am just too damn lazy to actually dig in and get this done properly in the first place.
  • Caring - I just do not give a shit enough to actually go the extra distance with API design.
  • Dumb - This API is dumb, and I really should not be developing it in the first place.

These are just a few of the reasons why I settle for complexity over simplicity in my API designs. It isn’t right. However, it seems to be a repeating pattern in some of my work. It is something that I should be exploring more. For me to understand why my work isn’t always of the highest quality possible I need to explore each of these areas and understand where I can make improvements, and which areas I cannot. Of course I want to improve in my work, and reach new heights with my career, but I can’t help but be dogged by imperfections that seem out of my control…or are they?

I have witnessed API simplicity. APIs that do powerful and seemingly complicated things, but in an easy and distilled manner. I know that it is possible to do, but I can’t help but feel that 90% of my API designs fall short of this reality. Some get very close, while others look like amateur hour. One thing is clear. If I’m going to deliver high quality simple and intuitive APIs, I’m going to have to work very hard at it. No matter how much experience I have, I can only improve the process so much, and there will always be a significant amount of investment required to take things to the next level.


The Basics of My API Rating Formula

I have been working on various approaches to rating APIs since about 2012. I have different types of algorithms, even having invested in operating one from about 2013 through 2016, which I used to rank my daily intake of API news. Helping me define the cream on top of each industry being impacted by APIs, while also not losing sight of interesting newcomers to the space. I have also had numerous companies and VCs approach me about establishing a formal API rating system—many of whom concluded they could do it fairly easily and went off to try, then failed, and gave up. Rating the quality of APIs is subjective and very hard.

When it comes to rating APIs I have a number of algorithms to help me, but I wanted to step back and think of it from a simpler human vantage point, after establishing a new overall relationship with the API industry. What elements do I think should exist within a rating system for APIs:

  • Active / Inactive - APIs that have no sign of life need a lower rating.
  • Free / Paid - What something costs impacts our decision to use or not.
  • Openness - Is an API available to everyone, or is it a private thing.
  • Reliability - Can you depend on the API being up and available.
  • Fast Changing - Does the API change a lot, or remain relatively stable.
  • Social Good - Does the API benefit a local, regional, or wider community.
  • Exploitative - Does the API exploit its users data, or allow others to do so.
  • Secure - Does an API adequately secure its resources and those who use it.
  • Privacy - Does an API respect privacy, and have a strategy for platform privacy.
  • Monitoring - Does a provider actively monitor its platform and allow others to as well.
  • Observability - Is there visibility into API platform operations, and its processes.
  • Environment - What is the environmental footprint or impact of API operations.
  • Popular - Is an API popular, and something that gets a large amount of attention.
  • Value - What value does an API bring to the table in generalized terms.

These are just a handful of the most relevant data points I’d rank APIs on. Using terms that almost anyone can understand. You might not fully understand the technical details of what an API delivers, but you should be able to walk through these overall concepts that will impact your company, organization, institution, or government agency when putting an API to work. Now the more difficult question around this API rating system is how you make it happen (a minimal sketch of the weighting mechanics follows). Gathering data to satisfy all of these areas is easier said than done.
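
As promised, here is a minimal sketch of the weighting mechanics, assuming each data point has already been scored between 0 and 1. The data points, default weights, and scores here are hypothetical:

```python
# A minimal sketch of weighted rating, assuming each data point has already
# been scored between 0 and 1; the data points and weights are hypothetical.
DEFAULT_WEIGHTS = {"reliability": 3, "openness": 2, "privacy": 2, "popular": 1}

def rate(scores: dict, overrides: dict = None) -> float:
    """Collapse 0-1 data point scores into a single weighted rating."""
    weights = {**DEFAULT_WEIGHTS, **(overrides or {})}
    total = sum(weights.values())
    return sum(scores.get(point, 0) * weight for point, weight in weights.items()) / total

scores = {"reliability": 0.9, "openness": 1.0, "privacy": 0.4, "popular": 0.7}
print(rate(scores))                   # ~0.78 with the default weights
print(rate(scores, {"privacy": 10}))  # ~0.59, a privacy-focused consumer sees a lower rating
```

The override dictionary is the programmatic equivalent of letting each consumer weight the questions that matter to them.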

Getting the answer to some of these questions will be fairly easy, and we can come up with some low tech ways to handle them. However, some of them are near impossible to satisfy, let alone do continually over time. It isn’t easy to gather the data needed to answer these questions. In my opinion no single entity can deliver what is needed, and it will ultimately need to be a community thing if we are going to provide any satisfactory answer to these API rating questions, satisfy them on a regular schedule, and then honestly acknowledge when an API goes dormant, or away altogether. To properly do this, it would have to be done out in the open, and a few things I’d consider introducing would be:

  • YAML Core - I would define the rating system in YAML.
  • YAML Store - I would store all data gathered as YAML.
  • GitHub Base - Everything would be in a series of repositories.
  • Observable - The entire algorithm, process, and results are open.
  • Evolvable - It would be essential to evolve and adapt over time.
  • Weighted - Anyone can weight the questions that matter to them.
  • Completeness - Not all the profiles of APIs will be as complete as others.
  • Ephemeral - Understanding that nothing lasts forever in this world.
  • Collaborative - It should be a collaboration between key entities and individuals.
  • Machine Readable - Ready for machines to engage with seamlessly.
  • Human Readable - Kept accessible to anyone wanting to understand.

I will stop there. I have other criteria I’d like to consider when defining a ranking system. Specifically, a public ranking system, for public APIs. Actually, I’d consider a company, organization, institution, and government agency ranking system with an emphasis on APIs. Do they do APIs or not? If they do, how many of the questions can we answer pretty easily, with as few resources as possible, while being able to reliably measure using these data points on into the future. It is something we need. It is something that will be very difficult to set up, taking a significant amount of investment over time. It will also rely on the contributions of other entities and individuals. It is something that won’t be easy to make happen. That shouldn’t stop us from doing it.

That is the basics of my API rating formula. Version 2019. This is NOT an idea for a startup. There is revenue to be generated here, but not if approached through the entrepreneurial playbook. It will fail. I’ve seen it happen over and over. This is a request for the right entities and individuals to come together and make it happen. It is dumb that there is no way of understanding which APIs are good or bad. It is also dumb that there is no healthy and active API search engine after APIs having gone so mainstream—another sign of the ineffectiveness of doing not just API rating, but API discovery as a venture backed startup. Sorry, there are just some infrastructural things we’ll all need to invest in together to make this all work at scale. Otherwise we are going to just end up with a chaotic, unreliable network of API-driven services behind our applications. If you’d like to talk API ratings with me, drop me an email at [email protected], or tweet at @apievangelist.


The Complexity of API Discovery

I can’t get API discovery out of my mind. Partly because I am investing significant cycles in this area at work, but it is also something I have been thinking about for so long that it is difficult to move on. It remains one of the most complex, challenging, and un-addressed aspects of the way the web is working (or not working) online today. I feel pretty strongly that there hasn’t been investment in the area of API discovery because most technology companies providing and consuming APIs prefer things to be un-discoverable, for a variety of conscious and unconscious reasons.


What API Discovery Means Depends On Who You Are…

One of the reasons that API discovery does not evolve in any significant ways is because there is not any real clarity on what API discovery is. Depending on who you are, and what your role in the technology sector is, you’ll define API discovery in a variety of ways. There are a handful of key actors that contribute to the complexity of defining, and optimizing, API discovery.


  • Provider - As an API provider, being able to discover your APIs informs your operations regarding what your capabilities are, building a wider awareness regarding what a company, organization, institution, or government agency does, helping eliminate inefficiencies, and allowing for more meaningful decisions to be made at run-time across operations.
  • Consumer - What you need as an internal consumer of APIs, or maybe a partner, or 3rd party developer will significantly shift the conversation around what API discovery means, and how it needs to be “solved”. There is another significant dimension to this discussion, separating human from other system consumption, further splintering the discussion around what API discovery is when you are a consumer of APIs.
  • Analyst - Analysts for specific industries, or technology in general, need to understand the industries they are watching, and how API tooling is being applied, helping them develop an awareness of what is happening, and understand what the future might hold.
  • Investor - A small number of investors are in tune with APIs, and even grasp what API discovery means to their portfolio, and the industries they are investing in, but most are unaware of how API discovery will set the tone for how markets behave–providing the nutrients (or lack of) markets need to understand what is happening across industries.
  • Journalist - Most journalists grasp the importance of Facebook and Twitter, but only a small percentage understand what APIs are, how ubiquitous they are, and the benefits they bring to the table when it comes to helping them in their investigations and research, let alone how APIs can benefit them when it comes to syndication and exposure for their work–making API discovery pretty critical to what they do.
  • University - I am seeing more universities depending on accessible APIs when it comes to research, and an increase in the development of API related curriculum–just as important, I am also seeing the increased development of open source API tooling out of university environments.
  • Government - APIs will play an increasing role in regulation, taxation, and government funded / implemented research, making API discovery key to finding the data they need, and being able to have their finger on the pulse of what citizens and businesses are up to on any given week.
  • End-User - Last, but definitely not least, the end-user should be concerned with API discovery, using it as a low water mark for where they should be doing business, and requiring that the platforms they use have APIs, and have their best interests in mind when allowing for 3rd party access to their data.

How APIs Are Discovered

Across these different views of the API discovery landscape, there are a variety of ways in which APIs are discovered, and made discoverable. Providing some formal, and some not so formal, ways to define the API landscape, and develop an awareness of the API ecosystems that have risen up within different industries, institutions, and government agencies. These are the ways I am focused on finding APIs, and making APIs findable, inside and outside the firewall.

Our motivations for finding APIs inside or outside the firewall are not always in sync with our motivations for having our APIs be found. This affects our view of the landscape when it comes to how hard API discovery is, and what the possible solutions for it will be. I find people’s view on API discovery to be very relative to their view of the landscape, and very few people in the API sector travel widely enough, exchange ideas externally enough, and lift themselves up high enough to be able to understand API discovery well enough to provide the right tools and services to move the conversation forward.

Within the Firewall

API discovery within the firewall is a real struggle. Most companies I know do not know where all of their APIs are. There is no single up to date truth of where each API is, or what it does, let alone the machine readable details of what it delivers. These are a few of the ways in which I have seen groups tackle API discovery within the firewall:


  • Directories - Many employ a catalog, directory, or database of APIs, micro services, and other relevant solutions.
  • Git Repositories - Git within the enterprise is a very viable way to manage a large volume of artifacts you can use for discovery.
  • Word of Mouth - Talking to people is always a great way to understand what APIs exist across the enterprise landscape.
  • Documentation - The documentation for APIs often becomes the focal point of API discovery because it can be searched.
  • Log Files - Harvest what is actually coming across the wire, parse the APIs that are in use, and document them in some way.

The biggest challenge with API discovery within the firewall is having dedicated resources to keep up with discovering and documenting, while also keeping catalogs, repositories, documentation, and other key sources of API discovery information up to date. In my experience, rarely do teams ever get the budget to properly invest in API discovery, with things often done as part of skunk works, or squeezed into everyday operations—as teams can—which is never enough.

Outside the Firewall

This is the aspect of API discovery that gets the most attention. These are the API rock stars, directories, marketplaces, and API showcasing that occurs across tech blogs. API discovery outside the firewall has access to far more tools and services to leverage when getting the word out, or helping you find what you are looking for. The challenge becomes, like most other things on the open web, how do you cut through the noise, and get your APIs found, or find the APIs you are looking for.


  • Directories - You use some of the existing API directories out there. Providing you access to a small subsection of the APIs that these operators can find, and manually publish to their API catalogs. There really isn’t a single source of truth when it comes to API directories, but there are some that have been around longer than others.
  • Marketplaces - There have been numerous waves of investment into API marketplaces, trying to centralize the discovery and integration with APIs, hoping to simplify APIs enough that consumers use them as their doorway to the API world—giving the API marketplace provider a unique look at how APIs are consumed.
  • Documentation - Public API documentation is how your APIs will be found using Google, which is the number one way people are going to find your API—using a Google search. I’m surprised that Google hasn’t tackled the issue of API discovery already, but I’m guessing they haven’t deemed it valuable enough, and will most likely swoop in and dominate once some smaller providers plant the seeds.
  • Definitions - API discovery has gotten easier, and more robust with the introduction of API definitions like Swagger, OpenAPI, API Blueprint, RAML, Postman collections and other machine readable formats. Going beyond just static or even dynamic API documentation, and providing a machine readable artifact that can be indexed and used to drive API search.
  • Domains - You can drive a lot of interesting API discovery using domains, and building indexes of different types of domains, then conduct several types of searches to see if they have any APIs. Using search engine APIs, Twitter, Github, and other social media APIs to find APIs across the landscape—using domains as the anchor for refining and making API discovery queries more precise.
  • Scraping - If you have the URL for a company, organization, institution, or government agency’s API documentation page you can also scrape that page, or pages, for more information about how an API operates, and what it does—adding to the index of APIs to be searched against.
  • Search Engines - While Google doesn’t have a good API anymore, Bing and DuckDuckGo do. It is easy to build up an index of queries to search Bing to uncover potential domains, documentation, definitions, and other artifacts to enrich an API discovery index. With the right key phrase glossary, you can quickly automate a pretty comprehensive search for new APIs.
  • GitHub - The social coding platform is a rich source of API related data (a rough sketch follows this list). Similar to search engines you can easily build a search query vocabulary to uncover a number of domains, documentation, definitions, and other artifacts to enrich an API discovery index.
  • Integrated Development Environment (IDE) - The IDE is an untapped market when it comes to helping developers find APIs, and to help API providers reach developers. Microsoft has been increasing API discovery features, while the community extends its tooling with plugins. There is no universal API search engine baked into the top IDEs, a definite missed opportunity.
  • News - It is pretty easy to harvest press releases, blog posts, and other news sources and use them as rich sources of information for new APIs. Helping seed a list of potential domains to look for documentation and other API artifacts. Publishing a press release, or posting to their blog is the most common way that API providers get the word out, unfortunately it tends to be the only thing they do.
  • Tweets - Twitter is also a rich source of information about APIs, with a variety of accounts, hashtags, and other ways to make queries for new APIs more precise. Using the social media platform as a way of tuning into potential new APIs, as well as the activity around existing APIs in the index.
  • LinkedIn - More businesses, institutions, and government entities are talking about their APIs on LinkedIn, publishing posts, job listings, and other API related goings on. Making it a pretty rich way to find new APIs, and companies who are embarking on, or making their way along, their API journey.
  • Security Alerts - Sadly, security alerts are one way I learn about companies and their APIs—when they become insecure. Providing information about API providers who operate in the shadows, as well as more information to index when actually rating APIs, but that is another story.

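For the GitHub piece of this list, here is a quick sketch of seeding a discovery index with the GitHub search API, looking for repositories that advertise OpenAPI artifacts. The query vocabulary is illustrative, and an access token would be needed to raise rate limits:

```python
# A sketch of seeding a discovery index from the GitHub search API; the query
# vocabulary is illustrative, and a token would be needed for higher rate limits.
import requests

def discover(query: str = "topic:openapi") -> list:
    """Return candidate repositories that might contain API definitions."""
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "sort": "updated", "per_page": 25},
        headers={"Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    return [(item["full_name"], item["html_url"]) for item in resp.json()["items"]]

# Each hit becomes a lead: a domain, documentation page, or definition to spider next.
for name, url in discover():
    print(name, url)
```
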
Even once you discover the APIs you are looking for, oftentimes more context is required, and you may or may not have the time or expertise to assess what an API delivers, and what it will take to get up and running with it.

  • Documentation - Where the documentation resides for the API.
  • Signup - Where a user can signup to use an API.
  • Pricing - What is the pricing for using an API.
  • Support - Where do you get support if you need help.
  • Terms of Service - Where do I find the legalese behind API operations.

These are just five of the most common questions API consumers are going to ask when they are looking at an API, and they would benefit from direct links to these essential building blocks as part of any API search results. I have over a hundred questions that I like to ask of an API as I’m reviewing it, which all reflect what a potential API consumer will be asking when they come across an API out in the wild.

Four Primary Dimensions Of Complexity

I’d say that these are the four main dimensions of complexity I see out there when it comes to API discovery, which makes it really hard to provide a single API discovery solution, and why there hasn’t been more investment in this area. It is difficult to make sense of APIs, and what people are looking for. It is hard to get all API providers on the same page when it comes to investing in API discovery as part of their regular operations. API discovery is something I’ll keep investing in, but it is something that will need wider investment from the community, as well as some bigger players to step up and help move the conversation forward.

Other than API marketplaces, and the proliferation of API definitions, I haven’t seen any big movements in the API discovery conversation. We still have ProgrammableWeb. We still have Google. Not much more. With the number of APIs growing, this is only going to become more of a pain point. I’m interested in investing in API discovery not because I want to help everyone find APIs, or have their APIs found. I’m more interested in shining a light on what is going on. I am looking to understand the spread of APIs across the digital landscape, and better see where they are pushing into our physical worlds. My primary objective is not to make sure all APIs are found so they can be used. My primary objective is to make sure all APIs are found so we have some observability into how the machine works.


Why Schema.org Does Not See More Adoption Across The API Landscape

I’m a big fan of Schema.org. A while back I generated an OpenAPI 2.0 (fka Swagger) definition for each Schema.org type and published them to GitHub. I’m currently cleaning up the project, publishing them as OpenAPI 3.0 files, and relaunching the site around it. As I was doing this work, I found myself thinking more about why Schema.org isn’t the goto schema solution for all API providers. It is a challenge that is multi-layered like an onion, and probably just as stinky, and will no doubt leave you crying.

First, I think tooling makes a big difference when it comes to why API providers haven’t adopted Schema.org by default across their APIs. If more API design and development tooling would allow for the creation of new APIs using Schema.org defined schema, I think you’d see a serious uptick in the standardization of APIs that use Schema.org. In my experience, I have found that people are mostly lazy, and aren’t overly concerned with doing APIs right, they are just looking to get them delivered to meet specifications. If we provide them with tooling that gets the API delivered to specifications, but also in a standardized way, they are going to do it (a rough sketch of such scaffolding follows).
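
As a sketch of the kind of scaffolding I mean, here is a hypothetical example that renders a Schema.org type as an OpenAPI components schema. The property list for Person is hand-picked for illustration, and real tooling would pull properties and types from the Schema.org vocabulary itself:

```python
# A hypothetical sketch of scaffolding an OpenAPI schema object from a
# Schema.org type; the Person property list is hand-picked for illustration.
PERSON_PROPERTIES = ["name", "givenName", "familyName", "email", "jobTitle"]

def schema_org_to_openapi(type_name: str, properties: list) -> dict:
    """Render a Schema.org type as a reusable OpenAPI components schema."""
    return {
        "components": {
            "schemas": {
                type_name: {
                    "type": "object",
                    "description": f"https://schema.org/{type_name}",
                    "properties": {prop: {"type": "string"} for prop in properties},
                }
            }
        }
    }

print(schema_org_to_openapi("Person", PERSON_PROPERTIES))
```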

Second, I think most API providers don’t have the time and bandwidth to think of the big picture, like using standardized schema for all of their resources. Most people are under pressure to do more with less, and standards are something that can easily be lost in the shuffle when you are just looking to satisfy the man. It takes extra effort and space to realize common standards as part of your overall API design. This is a luxury most companies, organizations, and government agencies cannot afford, resulting in many ad hoc APIs defined in isolation.

Third, I think some companies just do not care about interoperability. Resulting in them being lazy, or not making it a priority as stated in the first and second points. Most API providers are just concerned with you using their API, or checking the box that they have an API. They do not connect the dots between standardization and it being easier for consumers to put their resources to work. Many platforms who are providing APIs are more interested in lock-in, providing proprietary SDKs and tooling on top of their API. Selling them on the benefits of interoperable open source SDKs and tooling just falls on deaf ears—leaving most API providers to perpetually reinvent the wheel when there is an existing well defined one within reach.

I wish more API folks cared about Schema.org. I wish I had the luxury of using it in more of my own work. If it is up to me, I will always adopt Schema.org for my core API designs, but unfortunately I’m not always the one in charge of what gets decided. I will continue to invest in OpenAPI definitions for all of the Schema.org defined schema. This allows me to have them within reach when I’m getting ready to define a new API. If Schema.org ready API definitions are in a neat stack on my desk, the likelihood that I’m going to put them to work in the API tooling I’m developing, and actually use them as the base for an API I’m delivering, increases significantly. Helping me standardize my API vocabulary to something that reaches beyond the tractor beam of the daily API bubble.


Avoiding Complexity and Just Deploying YAML, JSON, and CSV APIs Using GitHub or GitLab

I find that a significant portion of what I should be doing when defining, designing, developing, and delivering an API is all about avoiding complexity. Every step along the API journey I am faced with opportunities to introduce complexity, forcing me to constantly question and say no to architectural design decisions. Even after crafting some pretty crafty APIs in my day, I keep coming back to JSON or YAML within Git as the most sensible API architectural decision I can make.

Git, with JSON and YAML stored within a repository, fronted with a Jekyll front-end, does much of what I need. The challenge with selling this concept to others is that it is a static publish approach to APIs, instead of a dynamic pull of relevant data. This approach isn’t for every API solution, and I’m not in the business of selling one API solution to solve all of our problems. However, for many of the API uses I’m building for, a static Git-driven approach to publishing machine readable JSON or YAML data is a perfect low cost, low tech solution to delivering APIs.

A Git repository hosted on GitHub or GitLab will store a significant amount of JSON, YAML, or CSV data. Something you can easily shard across multiple repositories within an organization / group, as well as across many repositories within many organization / groups. Both GitHub and GitLab offer free solutions, essentially letting you publish as many repositories as you want. As I said earlier, this is not a solution to all API needs, but when I’m looking to introduce some constraints to keep things low cost, simple, and easy to use and manage—a Git-driven API is definitely worth considering. However, going static for your API will force you to think about how you automate the lifecycle of your data, content, and other resources.

The easiest way to manage JSON, CSV, or YAML data you have on GitHub or GitLab is to use the GitHub or GitLab API, allowing you to update individual JSON, CSV, or YAML files in real-time. If you want to do it as a batch process, you can check out the repository, make all the changes you want, and commit back as a single Git commit using the command line—I’ve automated a number of these to reduce the number of API calls I’m making. If you want to put in an editorial layer you can require submission via pull / merge request, requiring there be multiple eyes on each update to the data behind a static API. It is an imperative, and a declarative API, with an open source approval workflow by default—all for free.

Once you have your data in the Git repository you can make it available using the RAW links provided by GitHub or GitLab (a quick sketch of consuming one follows below). However, I prefer to publish a Jekyll front-end, which can act as the portal landing page for the site, but then you can also manually or dynamically create neat paths that route users to your data using sensible URLs—the best part is you can add a CNAME and publish your own domain. Making your API accessible to humans, while also providing intuitive, easy to follow URLs to the static data that has been published using the GitHub or GitLab API, or the underlying Git infrastructure to do it in bulk.
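
Here is that quick sketch of what consuming one of these static, Git-driven APIs looks like from the client side, using a raw GitHub URL. The repository and file path are hypothetical placeholders:

```python
# A sketch of consuming a static, Git-driven API straight from a raw GitHub
# URL; the repository and file path here are hypothetical placeholders.
import requests

RAW_URL = ("https://raw.githubusercontent.com/"
           "example-org/example-data/main/data/apis.json")

resp = requests.get(RAW_URL)
resp.raise_for_status()
for api in resp.json():  # the plain JSON file behaves just like an API response
    print(api["name"])
```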

This is the cheapest, most productive way to deliver simple data and content APIs. The biggest challenges are that you have to begin thinking a little differently about how you manage your data. You have to move from a pull to a push way of delivering data, and embrace the existing CI/CD way of doing things embraced by both GitHub and GitLab. For me, using Git to deliver APIs provides a poor man’s way to not just deliver an API, but also ensure it is secure, performant, and something I can automate the management of using existing tools developers are depending on. Over the last couple of years I’ve pushed the limits of this approach by publishing thousands of OpenAPI-driven API discovery portals, and it is something I’m going to be refining and using as the default layer for delivering simple APIs that allow me to avoid unnecessary complexity.


Organizing My APIs Using OpenAPI Tags

I like my OpenAPI tags. Honestly, I like tags in general. Almost every API resource I design ends up having some sort of tagging layer. To help me organize my world, I have a centralized tagging vocabulary that I use across my JSON Schema, OpenAPI, and AsyncAPI, to help me group, organize, filter, publish, and maintain my catalog of API and schema resources.

The tag object for the OpenAPI specification is pretty basic, allowing you to add tags for an entire API contract, as well as apply them to each individual API method. Tooling, such as API documentation, uses these tags to group your API resources, allowing you to break down your resources into logical bounded contexts. It is a pretty basic way of defining tags that can go a long way depending on how creative you want to get. I am extending tags with an OpenAPI vendor extension, but I also see that there is an issue submitted suggesting the specification move forward by allowing for the nesting of tags–potentially taking OpenAPI tagging to the next level.

I’m allowing for a handful of extensions to the OpenAPI specification to accomplish the following:

  • Tag Grouping - Help me nest, and build multiple tiers of tags for organizing APIs.
  • Tag Sorting - Allowing me to define a sort order that goes beyond an alphabetical list.

I am building listing, reporting, and other management tools based upon OpenAPI tagging to help me in the following areas (a rough sketch of the usage report follows the list):

  • Tag Usage - Reporting how many resources are available for each tag, and tag group.
  • Tag Cleanup - Helping me de-dupe, rename, deal with plural challenges, etc.
  • Tag Translations - Translating old tags into new tags, helping keep things meaningful.
  • Tag Clouds - Generating D3.js tag clouds from the tags applied to API resources.
  • Packages - Deployment of NPM packages based upon different bounded contexts defined by tags.
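
Here is that rough sketch of the tag usage report, counting how many API methods carry each tag across an OpenAPI definition loaded with PyYAML. The file name is a placeholder:

```python
# A sketch of the tag usage report, counting how many API methods carry each
# tag across an OpenAPI definition; the file name is a placeholder.
from collections import Counter

import yaml

def tag_usage(openapi_path: str) -> Counter:
    """Count tag occurrences across every API method in an OpenAPI file."""
    with open(openapi_path) as handle:
        spec = yaml.safe_load(handle)
    usage = Counter()
    for methods in spec.get("paths", {}).values():
        for operation in methods.values():
            if isinstance(operation, dict):
                usage.update(operation.get("tags", []))
    return usage

# Surfaces duplicates, plural variants, and other cleanup candidates.
for tag, count in tag_usage("openapi.yaml").most_common():
    print(f"{tag}: {count}")
```

The counts surface duplicates, plural variants, and other cleanup candidates before they spread across more definitions.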

I am applying tags to the following specifications, stretching my OpenAPI tagging approach to be more about a universal way to organize all my resources:

  • JSON Schema - All schema objects have tags to keep them organized.
  • OpenAPI - Each API method has tags, for easy grouping.
  • AsyncAPI - Each pub/sub, event, and message API has tags.
  • APIs.json - Collections of APIs have tags for discoverability.

Tags are an important dimension of API discoverability when it comes to my API definitions. They provide rich metadata that I can use to make sense of my API infrastructure. Without them, the quality of my API definitions tends to trend lower. By evolving the tagging schema, and investing in tooling to help me make sense of the API definitions I’m depending on, I can push the boundaries of how I tag, and evolve it to be the core of how I manage my API definitions.

I’ll keep watching how others are tagging their APIs, although I don’t see much innovation, and usage by other API providers as part of their API definitions remains low. I’ll also keep an eye out for other ways in which tag schema are being extended, helping potentially define the future of how the leading API specifications enable tagging. I have a short list of tooling I am developing to help make my life easier, but I’m working hard to just make sure I’m applying tags across my API resources in a consistent way. I find this is the most valuable aspect of API tagging, but eventually I’m guessing that the tooling will make the real difference when it comes to slicing and dicing, and making sense of my API infrastructure at scale.


Doing The Hard Work To Define APIs

Two years later, I am still working to define the API driven marketplace that is my digital self. Understanding how I generate revenue from my brand (vomits in mouth a little bit), but also fight off the surveillance capitalists from mining and extracting value from my digital existence. It takes a lot of hard work to define the APIs you depend on to do business, and quantify the digital bits that you are transacting on the open web, amongst partners, and unknowingly with 3rd parties. As an individual, I find this to be a full time job, and within the enterprise, it is something that everyone will have to own a piece of, which in reality, is something that is easier said than done.

Convincing enterprise leadership of the need to be aware of every enterprise capability being defined at the network, system, or application level is a challenge, but doable. Getting consensus on how to do this at scale, across the enterprise, will be easier said than done. Identifying how the network, systems, and applications across a large organization are being accessed, and what schema, messages, and other properties are being exchanged, is not a trivial task. It is a massive undertaking to reverse engineer operations, identify the core capabilities being delivered, and then define and document them in a coherent way that can be shared with others, and included as part of an organizational messaging campaign.

Many will see the work to define all enterprise API capabilities as a futile task–something that is impossible to deliver. Many others will not see the value of doing it in the first place, and unable to realize the big picture, they will see the defining of APIs and underlying schema as meaningless busy work. Even when you do get folks on board with the importance, having the discipline to see the job through becomes a significant challenge. If morale is low within any organizational group, and team members do not have visibility into the overall strategy, the process of defining the gears that make the enterprise move forward will be seen as mind numbing accounting work–not something most teams will respond positively about working on.

Once you do define your APIs, you then have to begin investing in defining what the future will hold when it comes to unwinding, evolving, and maturing the API infrastructure you are delivering and depending on. This is something that can’t fully move forward until a full or partial accounting of enterprise API capabilities has occurred. If you do not know what is, you will always have trouble defining or controlling what will be. One of the reasons we have so much technical debt is we prefer to focus on what comes next rather than attending to the maintenance required to keep everything clean, coherent, organized, and well defined. It is always easier to focus on the future than it is to reconcile with the mess we’ve created in the past. This is why it is so easy to sell each wave of enterprise technology solutions, promising to do this work for you–when in reality, most times, you are just laying down the next layer of debt.

Whether it is our personal lives or our professional worlds, defining the APIs we depend on, the APIs that aren’t useful and have become parasitic, and the schema objects they produce and exchange is a lot of hard work. Hard work most of us will neglect and outsource for convenience, making the work even harder down the road–nobody will ever care about doing this the way we do. The really fascinating part of all of this for me is that with each cycle of technology that passes through, we keep doubling down on technology being the solution to yesterday’s problem, even though it is the core of yesterday’s problem. In my experience, once you really begin investing in the hard work to define the APIs you depend on, you begin to realize that you don’t need so many of them. The number of API connections you depend on can actually begin to hurt you, and it is something that will grow wildly if you aren’t in tune with what your API landscape consists of in your personal and professional worlds.


There Is No Single Right Way To Do APIs

My time working in the API sector has been filled with a lot of lessons. I researched hard, paid attention, and found a number of surprising realities emerge across the API landscape. The majority of surprises have been in the shadows cast by the computational belief scaffolding I’ve been erecting since the early 1980s. A world where there have to be absolutes, known knowns, things that are black and white, or more appropriately 1 or 0. I believed that if I studied all APIs, I’d find some sort of formula for doing APIs that is superior to everyone else’s approach. I was the API Evangelist man–I could nail down the one right way to do APIs. (Did I mention that I’m a white male autodidact?)

I was wrong. There is no one right way to do APIs. Sure, there are many different processes, practices, and tools that can help you optimize your API operations, but despite popular belief, there is no single “right way” to define, design, provide, or consume APIs. REST is not the one solution. Hypermedia is not the one solution. GraphQL is not the one solution. Publish / Subscribe is not the one solution. Event-driven is not the one solution. APIs in general are not the one solution. Anyone telling you there is one solution to doing APIs for all situations is selling you something. Understanding your needs, and what the pros and cons of different approaches are, is the only thing that will help you.

If you are hyper focused on the technology, it is easy to believe in a single right way of doing things. Once you start having to deliver APIs in a variety of business sectors and domains, you will quickly begin to see your belief system in a single right way of doing things crumble. Different types of data require different approaches to API enablement. Different industries are knowledgeable in different ways of API enablement, with some more mature than others. Each industry will present its own challenges to API delivery and consumption, requiring a different toolbox, and a mixed set of skills to be successful. Your social network API strategy will not easily translate to the healthcare industry, or any other domain.

With a focus on the technology and business of APIs, you can still find yourself dogmatic around a single right way of doing things. Then, if you find yourself doing APIs in a variety of organizations, across a variety of industries, you quickly realize how diverse your API toolbox and approach will need to be. Organizations come with all kinds of legacy technical debt, requiring a myriad of approaches to ensuring APIs properly evolve across the API lifecycle. Once a technological approach to delivering software gets baked into operations, it becomes very difficult to unwind, and change behavior at scale across a large organization–there is no single right way to do APIs within a large enterprise organization. If someone is telling you there is, they are trying to sell you the next generation of technology to bake into your enterprise operations, which will have to be unwound at some undisclosed date in the future–if ever.

I’ve always considered my API research and guidance to be a sort of buffet–allowing my readers to choose the mix that works for them. However, I still found myself providing industry guides, comprehensive checklists, and other declarative API narratives that nod towards there being a single, or at least a handful of right ways of doing APIs. I think that APIs will ultimately be like cancer, something we never quite solve, but where huge amounts of money will be spent trying to deliver the one cure. I’ll end the comparison there, because I don’t want to get going on a rant regarding the many ways in which APIs and cancer will impact the lives of everyday people. In the end, I will keep studying and understanding different approaches to doing APIs (both good and bad), but you’ll find my narrative to be less prescriptive when it comes to any single way of doing APIs, as well as about suggesting that doing APIs in the first place is the right answer to any real world problem.


API Definitions Are Important

I found myself scrolling down the home page of API Evangelist and thinking about what topic(s) I thought were still the most relevant after not writing about APIs for the last six months. Hands down, it is API definitions. These machine and human readable artifacts are the single most important thing for me when it comes to the APIs I’m building and putting to work.

Having mature, machine readable API definitions for every API that you depend on is essential. It also takes a lot of hard work to make happen. It is why I went API define first a long time ago, defining my APIs before I ever get to work designing, mocking, developing, and deploying them. Right now, I’m heavily dependent on the following specifications (a minimal sketch of how they fit together follows the list):

  • JSON Schema - Essential for defining all objects being used across API contracts.
  • OpenAPI - Having OpenAPI contracts for all my web APIs is the default–they drive everything.
  • AsyncAPI - Critical for defining all of my non-HTTP/1.1 API contracts being provided or consumed.
  • Postman Collections - Providing me with the essential API + environment definitions for run-time use.
  • APIs.json - Helping me define all the other moving parts of API operations, indexing all my definitions.
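
To give a rough sense of how these definitions fit together, here is a minimal, hypothetical index in the spirit of APIs.json, rendered as YAML for readability. All of the names and URLs are invented, and the exact properties will vary by implementation:

name: Example API Operations
description: A simple index of the APIs I depend on, and their definitions.
url: https://example.com/apis.yaml
apis:
  - name: Images API
    description: Simple web API for managing images.
    humanURL: https://example.com/images
    baseURL: https://api.example.com/images
    properties:
      # Pointers to the machine readable definitions for this API.
      - type: OpenAPI
        url: https://example.com/images/openapi.yaml
      - type: JSONSchema
        url: https://example.com/images/image-schema.json
      - type: PostmanCollection
        url: https://example.com/images/collection.json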

While there are plenty of other stops along the API lifecycle that are still relevant to me, my API definitions are hands down the most valuable intellectual property I possess. These five API specifications are essential to making my world work, but there are other, more formalized specifications I’d love to be able to put to work:

  • Orchestrations - I’d like to see a more standardized, machine readable way of working with many API calls in a meaningful way. I know you can do this with Postman, and I’ve done it with OpenAPI, and I like Stoplight.io’s approach, but I want an open source solution.
  • Licensing - I am still not actively using API Commons, but I’d like to invest in a 2.0 version of the API licensing specification, moving it beyond just API licensing to consider SDKs, and other layers.
  • Governance - I’d like to see a formal way of expressing API governance guidance that can be viewed by a human, or executed as part of the pipeline, ensuring that all API contracts conform to a set of standards (a hypothetical sketch follows this list).
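
There is no agreed upon specification for this yet, so purely as a hypothetical sketch, here is what a machine readable governance rule set could look like, loosely modeled on the ruleset style that API linting tooling uses today. The rule names and structure are my own invention, not an existing standard:

rules:
  operations-must-have-tags:
    description: Every operation needs at least one tag for discovery.
    severity: error
    given: $.paths[*][*]        # every operation in the contract
    then:
      field: tags
      function: truthy
  contracts-must-have-contact:
    description: Every API contract needs a support contact.
    severity: warn
    given: $.info
    then:
      field: contact
      function: truthy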

These hit on the main areas of concern for me when it comes to defining the contracts I need to further automate the integration and deployment of API resources in my life. While there are definitely other stops along the API lifecycle on my mind, I spend the majority of my time creating, refining, communicating, and moving forward the API definitions that define and drive every other stop along the API lifecycle.

API definitions represent API sanity for me. If they aren’t in order, there is disorder. An immature API definition requires investment, socialization amongst stakeholders, and iteration before it will ever be considered for publishing. I’ll be exploring the other things that matter to me along the API lifecycle, and I’m guessing that the rest of the stuff I’ve been researching over the last eight years will either disappear, or just be demoted on the site. We’ll see how this refresh rolls out, but I’m guesstimating about 25% of my research will continue to move forward after this reboot.


API Evangelist Is Open For Business

After six months of silence I've decided to fire API Evangelist back up again. I finally reached a place where I feel like I can separate out the things that caused me to step back in the first place. Mostly, I have a paycheck now, some health insurance, and I don't have to pretend I give a shit about APIs, startups, venture capital, and the enterprise. I'm being paid well to do an API job. I can pay my rent. I can go to the doctor when my health takes a hit. My basic needs are met.

Beyond that, I'm really over having to care about building an API community, making change with APIs, and counteracting all of the negative effects of APIs in the wild. I can focus on exactly what interests me about technology, and contribute to the 3% of the API universe that isn't doing yawnworthy, half-assed, or straight up shady shit. I don't have to respond to all the emails in my inbox just because I need to eat, and have a roof over my head. I don't have to jump, just because you think your API thing is the next API thing. I can just do me, which really is the founding principle of API Evangelist.

Third, I got a kid to put through college, and I'm going to make y'all pay for it. So, API Evangelist is open for business. I won't be producing the number of stories I used to. I'll only be writing about things I actually find interesting, and will explore other models for generating content, traffic, and revenue. So reach out, and pitch me. I'm looking for sponsors, and open to almost anything. Don't worry, I'll be my usual honest self and tell you whether I'm interested or not, and I'll have strong opinions on what should be said, but pitch me. I'm open for business, and I'll entertain any business offer that keeps API Evangelist in forward motion, and generating revenue for me.

If you are interested in sponsoring API Evangelist, it is averaging 2K page views a day, but normally averages 5K a day when it is in full active mode. The Twitter account has 10K followers, and the audience is a pretty damn good representation of the API pie if I do say so myself. It would be a damn shame to squander what I've built over the last nine years just because I like to ride on a sparkly high horse. If I've learned anything during my time as the API Evangelist, it is that revenue drives ALL decisions. So get in on the action. Let me know what you are thinking, and I'll get to work adding your logo to the site, and turning on the other sponsorship opportunities. Ping me at [email protected] to get the ball rolling.


Asking The Honest Questions When It Comes To Your API Journey

I engage with a lot of enterprise organizations in a variety of capacities. Some are more formal consulting and workshop engagements, while others are just emails, direct messages, and random conversations in the hallways and bars around API industry events. Many conversations are free flowing, and the people involved trust me to share my thoughts about the API space, and provide honest answers to their questions regarding their API journey. Others express apprehension and concern, and have trouble engaging with me because they are worried about what I might say about their approach to doing APIs within their enterprise organizations. Some have even told me that they’d like to formally bring me in for discussions, but they can’t get me past legal or their bosses–stating I have a reputation for being outspoken.

While in Vegas today, I had breakfast with Paolo Malinverno, an analyst from Gartner, and he mentioned the Oscar Wilde quote, “Whenever people agree with me I always feel I must be wrong.” He was referring to the “yes” culture that can manifest itself around Gartner, but also within industries and the enterprise regarding what you should be investing in as a company. People get caught up in culture, politics, and trends, and don’t always want to be asking, or be asked, the hard questions. Which is the opposite of what any good API strategist, leader, and architect should be doing. You should be equipped and ready to be asked hard questions, and be searching out the hard questions. This stance is fundamental to API success, and you will never find what you are seeking in your API journey if you do not accept that many of the questions will be difficult.

The reality that not all API service providers truly want to help enterprise organizations genuinely solve the business challenges they face, and that many enterprise technology leaders aren’t actually concerned with truly solving real world problems, has been one of the toughest pills for me to swallow as the API Evangelist over the last eight years. Realizing that there is often more money to be made in not solving problems, not properly securing systems, and not ensuring systems are performant, efficient, and working as expected. While I think many folks are too far down in the weeds of operations and company culture to fully make the right decision, I also think there are many people who make the wrong technological decision because it is the right business decision in their view. They do it to please shareholders, investors, or their boss, or they are just going with the flow when it comes to the business culture within their enterprise, and the industry they operate in.

Honestly, there isn’t a lot of money to be made asking the hard questions, and addressing the realities of getting business done using APIs within the enterprise. Not all companies are willing to pay you to come in and speak truth about what is going on, pointing out the illnesses that exist within the enterprise, and potentially providing solutions. People are afraid of what you are going to ask. People don’t want to pay someone to rock the boat. I find it a rare occurrence to encounter large enterprise organizations who are willing to look in the mirror, be held accountable for their legacy technical debt, and be forced to make the right decisions when it comes to moving forward with the next generation of investment. Which is why most organizations will stumble repeatedly in their API journeys, and be more susceptible to the winds of technological trends and investment cycles, all because they aren’t willing to surround themselves with the right people who are willing to speak truth.


A Diverse API Toolbox Driving Hybrid Integrations Across An Event-Driven Landscape

I’m heading to Vegas in the morning to spend two days in conversations with folks about APIs. I am not there for AWS re:Invent, or the Gartner thingy, but I guess in a way I am, because there are people there for those events who want to talk to me about the API landscape. Folks looking to swap stories about enterprise API investment, and possessing a diverse API toolbox for driving hybrid integrations across an event-driven landscape. I’m not giving any formal talks, but as with any engagement, I’m brushing up on the words I use to describe what I’m seeing across the space when it comes to the enterprise API lifecycle.

The Core Is All About Doing Resource Based, Request And Response APIs Well
I’m definitely biased, but I do not subscribe to the popular notion that at some point REST, RESTful, web, and HTTP APIs will have to go away. We will be using web technology to provide simple, precise, useful access to data, content, and algorithms for some time to come, despite the API sector’s continued evolution, and investment trends coming and going. Sorry, it is simple, low-cost, and something a wide audience gets, from both a provider and consumer perspective. It gets the job done. Sure, there are many, many areas where web APIs fall short, but that won’t stop success continuing to be defined by enterprise organizations who can do microservices well at scale. Despite relentless assaults by each wave of entrepreneurs, simple HTTP APIs driving microservices will stick around for some time to come.

API Discovery And Knowing Where All Of Your API Resources Actually Are
API discovery means a lot of things to a lot of people, but when it comes to delivering APIs well at scale in a multi-cloud, event-driven world, I’m simply talking about knowing where all of your API resources are. Meaning, if I walked into your company tomorrow, could you show me a complete list of every API or web service in use, and what enterprise capabilities they enable? If you can’t, then I’m guessing you aren’t going to be all that agile, efficient, or ultimately effective at doing APIs at scale, and you won’t be able to orchestrate much, or identify the most meaningful events occurring across the enterprise landscape. I’m not even getting to the point of service mesh, and other API discovery wet dreams, I’m simply talking about being able to coherently articulate what your enterprise digital capabilities are.

Always Be Leveraging The Web As Part Of Your Diverse API Toolbox
Technologists, especially venture fueled technologists, love to take the web for granted. Nothing does web scale better than the web. Understand the objectives behind your APIs, and consider how you are leveraging the web–negotiate, cache, and build on the strengths of the web. Use the right media type for the job, and understand the tradeoffs of HTML, CSV, XML, JSON, YAML, and other media types. Understand when hypermedia media types might be more suitable for document, media, and other content focused API resources. Simple web APIs make a huge difference when they speak to their intended audience, allowing them to easily translate an API call into a workable spreadsheet, or navigate to the previous or next episode, installment, or other logical grouping with sensible hypermedia controls. Good API design is more about having a robust and diverse API design toolbox to choose from, than it is ever about the dogma that exists around any specific approach, philosophy, protocol, or venture capital fueled trend.
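
As a small, hypothetical sketch of what this looks like in an API contract, here is an OpenAPI operation offering both JSON and CSV representations of the same resource, letting consumers negotiate via the Accept header. The path and schema are invented for illustration:

openapi: 3.0.0
info:
  title: Reporting API
  version: 1.0.0
paths:
  /reports/monthly:
    get:
      summary: Retrieve the monthly report
      responses:
        '200':
          description: The report, in the media type the consumer asked for.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
            text/csv:
              # The same rows, ready to open as a spreadsheet.
              schema:
                type: string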

Have A Clear Vision For Who Will Be Using Your APIs
One significant mistake that API designers, developers, and architects make over and over again is not having a clear vision of who will be using the APIs they are building. Defining, designing, delivering, and operating an API based upon what the provider wants over what the consumers will need. Using protocols, ignoring existing patterns, and adopting the latest trends that have nothing to do with what API consumers will need or be capable of putting to work. Make sure you know your consumers, and consider giving them more control with query languages like GraphQL and Falcor, allowing them to define the type of experience they want. Work to have a clear vision of who will be consuming an API, even if you don’t know who they are yet. Start simple with basic web APIs that help easily on-board new users who are unfamiliar with the domain and schema, while also allowing for evolution that gives power-users who are in the know more access, more control, and a stronger voice in the vision of what your APIs deliver or do not.

Responding In Real Time, Not Just Upon Request
A well oiled request and response API infrastructure is a critical base for any enterprise organization, however, a sign of more mature, scalable API operations is always the presence of event-driven infrastructure, including webhooks, streaming solutions, and multi-protocol, multi-message approaches to moving data and content around, and responding algorithmically to real time events occurring across domains. Investing in event-driven infrastructure is not simply about using Kafka, it is about having a well-defined, well-honed web API base, with a suite of event-driven approaches in one’s toolbox for providing access to internal, partner, and last mile public and 3rd party resources using an appropriate set of protocols, and message formats. Something that might be as simple as a webhook subscription that sends a simple HTTP push when something changes, to maintaining persistent HTTP connections, all the way to high volume HTTP and TCP connections to a variety of topical channels using Kafka, or other industrial grade API-driven solutions like gRPC, and beyond.
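
To make the event-driven side of the toolbox a little more concrete, here is a minimal, hypothetical contract in the style of the AsyncAPI specification, describing a single topical channel a consumer can subscribe to. The channel name and payload are invented:

asyncapi: '2.0.0'
info:
  title: Order Events
  version: '1.0.0'
channels:
  orders/created:
    subscribe:
      summary: Receive an event each time an order is created.
      message:
        payload:
          type: object
          properties:
            orderId:
              type: string
            createdAt:
              type: string
              format: date-time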

Have A Reason For When You Switch Protocols
There are a number of reasons why we switch protocols, moving off HTTP towards a TCP way of getting things done, with most of the reasoning being more emotional than technical. When I ask people why they went from HTTP APIs to Kafka, or Websockets, there is rarely a protocol based response. They did it because they needed things done in real time, through the existence of specific channels, or simply because Kafka is how you do big data, or Websockets is how you do real time data. There wasn’t much scrutiny of who the consumers are, what was gained by moving to TCP, and what was lost by moving off HTTP. There is little awareness of the work Google has done around gRPC and HTTP/2, or what has happened recently around HTTP/3, formerly known as Quick UDP Internet Connections (QUIC). I’m no protocol expert, but I do grasp the role that these protocols play, and understand that the fundamental foundation of APIs is the web, and the importance of having a well thought out strategy when it comes to using the Internet for delivering on the API vision across the enterprise.

Ensuring All Your API Infrastructure Is Reliable
It doesn’t matter what your API design processes are, or what tools you are using, if you cannot operate them reliably. If you aren’t monitoring, testing, securing, and understanding performance, consumption, and limitations across ALL of your API infrastructure, then there will never be the right API solution. Web APIs, hypermedia, GraphQL, Webhooks, Server-Sent Events, Websockets, Kafka, gRPC, and any other approach will always be inadequate if you cannot reliably operate them. Every tool within your API design toolbox should be able to be effectively deployed, thoughtfully managed, and coherently monitored, tested, secured, and delivered as a reliable service. If you don’t understand what is happening under the hood with any of your API infrastructure, are out of your league technically, or are kept in the dark by vendor magic, it should NOT be a tool in your toolbox, and should be left in the R&D lab until you can prove that you can reliably deliver, support, scale, and evolve something that is in alignment with, and has purpose augmenting and working with, your existing API infrastructure.

Be Able To Deliver, Operate, And Scale Your APIs Anywhere They Are Needed
One increasingly critical aspect of any tool in our API toolbox is whether or not we can deploy and operate it within multiple environments, or find that we are limited to a single on-premise or cloud location. Can your request and response web API infrastructure operate within the AWS, Google, or Azure clouds? Does it operate on-premise within your datacenter, locally for development, and within sandbox environments for partners and 3rd party developers? Where your APIs are deployed will have just as big of an impact on reliability and performance as your approach to design and the protocol you are using. Regulatory and other regional level concerns may have a bigger impact on your API infrastructure than using REST, GraphQL, Webhooks, Server-Sent Events, or Kafka. Being able to ensure you can deliver, operate, and scale APIs anywhere they are needed is fast becoming a defining characteristic of the tools that we possess in our API toolboxes.

Making Sure All Your Enterprise Capabilities Are Well Defined
The final, and most critical element of any enterprise API toolbox is ensuring that all of your enterprise capabilities are defined as machine readable API contracts, using OpenAPI, AsyncAPI, JSON Schema, and other formats. API definitions should provide human and machine readable contracts for all enterprise capabilities that are in play. These contracts contribute to every stop along the API lifecycle, and help both API providers and consumers realize everything I have discussed in this post. OpenAPI provides what we need to define our request and response capabilities using HTTP, and AsyncAPI provides what we need to define our event-driven capabilities, providing the basis for understanding what we are capable of delivering using our API toolboxes, and responding to via the hybrid integration solutions we’ve engineered and automated using our event-driven solutions. Defining not just the surface area of our API infrastructure, but also the API operations that surround the enterprise capabilities we are enabling internally, with partners, and publicly via our enterprise API efforts.


The API Journey

I’ve been researching the API space full time for the last eight years, and over that time I have developed a pretty robust view of what the API landscape looks like. You can find almost 100 stops along what I consider to be the API lifecycle on the home page of API Evangelist. While not every organization has the capacity to consider all 100 of these stops, they do provide a wealth of knowledge generated throughout my own journey, where I’ve been documenting what the API pioneers have been doing with their API operations, how startups leverage simple web API infrastructure, as well as how the enterprise has been waking up to the API potential in the last couple of years.

Over the years I’ve tapped this research for my storytelling on the blog, and for the white papers and guides I’ve produced. I use this research to drive my talks at conferences and meetups, and the workshops I do within the enterprise. I’ve long had a schema for managing my research, tracking the APIs, companies, people, tools, repos, news, and other building blocks I follow across the API universe. Now, after a year of working with them on the ground at enterprise organizations, I’m partnering with Streamdata.io (SDIO) to continue productizing my approach to the API lifecycle, which we are calling Journey, or specifically SDIO Journey.
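
The schema itself is nothing fancy. Here is a simplified, hypothetical YAML rendering of the kind of record I keep for each area of research, with every field name being illustrative rather than the actual format:

area: API Definitions
companies:
  - name: Example API Provider
    url: https://example.com
people:
  - name: Jane Doe
    twitter: janedoe
tools:
  - name: Example Linter
    repo: https://github.com/example/linter
news:
  - title: An example story about API definitions
    url: https://example.com/story
    published: 2018-11-01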

Our workshops are broken into six distinct areas of the lifecycle:

  • Discovery (Goals, Definition, Data Sources, Discovery Sources, Discovery Formats, Dependencies, Catalog, Communication, Support, Evangelism) - Defining what your digital resources are and what your enterprise capabilities are.
  • Design (Definitions, Design, Versioning, Webhooks, Event-Driven, Protocols, Virtualization, Testing, Landing Page, Documentation, Support, Communication, Road Map, Discovery) - Going API first, as well as API design first when it comes to the delivery of all of your API resources.
  • Development (Definitions, Discovery, Virtualization, Database, Storage, DNS, Deployment, Orchestration, Dependencies, Testing, Performance, Security, Communication, Support) - Considering what is needed to properly develop API resources at scale, and move from design to production.
  • Production (Definitions, Discovery, Virtualization, Authentication, Management, Logging, Plans, Portal / Landing Page, Getting Started, Documentation, Code, Embeddables, Licensing, Support, FAQs, Communication, Road Map, Issues, Change Log, Legal, Monitoring, Testing, Performance, Tracing, Security, Analysis, Maintenance) - Thinking about the production needs of an API operation, extracting the building blocks from successful APIs available across the web.
  • Outreach (Purpose, Scope, Defining Success, Sustaining Adoption, Communication, Support, Virtualization, Measurement, Structure) - Getting more structured around how you handle outreach around your APIs, whether they are internal, partner, or public API resources.
  • Governance (Design, Testing, Monitoring, Performance, Security, Observability, Discovery, Analysis, Incentivization, Competition) - Looking at how you can begin defining, measuring, analyzing, and providing guidance across API operations at the highest levels.

We are currently working with several API service providers to deliver SDIO Journey workshops within their enterprise organizations, helping bring more API awareness to their pre-sales, sales, business, and executive groups. We are also working to deliver independent Journey workshops for their customers, helping them see the bigger picture when it comes to the API lifecycle, and begin establishing their own formal strategy for how they can execute on their own personal vision and version of it. Helping enterprise organizations learn from the research I’ve gathered over the last eight years, and begin thinking more constructively, and being more thoughtful and organized about how they approach the delivery, iteration, and sustainment of APIs across the enterprise.

I have turned SDIO Journey into a set of basic APIs that allow me to build, replicate, and deliver our Journey workshops. I’m preparing for a handful of workshops before the end of the year with Axway, and for API Days in Paris, and then in 2019 I will continue productizing and delivering these API workshops, helping encourage other enterprise organizations to invest more in their own API journey, and get more structured in how they think about the delivery of microservices across the enterprise. Helping them realize that the transformation they are going through right now isn’t going to stop. It is something that will be ongoing, and will require their organization to learn to accept perpetual change and evolution in how they deliver the data, content, and algorithmic resources they’ll need to do business across the enterprise. While also evolving their understanding that all of this is about people, business, and politics more than it will ever be about technology all by itself.

If you have any questions about the SDIO Journey workshops we are doing, feel free to reach out, and I’ll get you more details about how to get involved.


YAML API Management Artifacts From AWS API Gateway

I’ve always been a big supporter of creating machine readable artifacts that help define the API lifecycle. While individual artifacts can originate and govern specific stops along the API lifecycle, they can also bring value when applied across other stops along the API lifecycle, and most importantly when it comes time to govern everything. The definition and evolution of individual API lifecycle artifacts is the central premise of my API discovery format APIs.json–which depends on there being machine readable elements within the index of each collection of APIs being documented, helping us map out the entire API lifecycle.

OpenAPI provides us with machine readable details about the surface area of our API, which can be used throughout the API lifecycle, but it lacks other details about the surface area of our API operations. So when I do come across interesting approaches to extending the OpenAPI specification, injecting a machine readable artifact into the OpenAPI that supports other stops along the API lifecycle, I like to showcase what they are doing. I’ve become very fond of one found within the OpenAPI export of any AWS API Gateway deployed API I’m working with, which provides some valuable details that can be used as part of both the deployment and management stops along the API lifecycle:


x-amazon-apigateway-integration:
  uri: "http://example.com/path/to/the/code/behind/"
  responses:
    default:
      statusCode: "200"
  requestParameters:
    integration.request.querystring.id: "method.request.path.id"
  passthroughBehavior: "when_no_match"
  httpMethod: "GET"
  type: "http"

This artifact is associated with each individual operation within my OpenAPI. It tells the AWS gateway how to deploy and manage my API. When I first import this OpenAPI into the gateway, it will deploy each individual path and operation, then it helps me manage them using the rest of the available gateway features. From this OpenAPI definition I can design, then autogenerate and deploy the code behind each individual operation, then deploy each individual path and operation to the AWS API Gateway and map them to the code behind. I can do this for custom APIs I’ve deployed, as well as Lambda brokered APIs–I prefer the direct way, because it is still easier, more stable, more flexible, and more cost effective for me to write the code behind each of my API operations than to go full serverless.

However, this artifact demonstrates for me the importance of artifacts associated with each stop along the API lifecycle. This little bit of OpenAPI extended YAML gives me a significant amount of control when it comes to automating the deployment and management of my APIs. There are even more properties available for other layers of the AWS Gateway not included in this example, which is something I will keep mapping out. Having these types of machine readable artifacts present within our OpenAPI specifications for describing the surface area of our APIs, as well as within our APIs.json indexes for describing the surface area of our API operations, will be critical to further automating, scaling, and defining the API lifecycle as it exists across the enterprise.


What Does The Next Chapter Of Storytelling Look Like For API Evangelist?

I find myself refactoring API Evangelist again this holiday season. Over the last eight years of doing API Evangelist I’ve had to regularly adjust what I do to keep it alive and moving forward. As I close up 2018, I’m finding the landscape shifting underneath me once again, pushing me to begin considering what the next chapter of API Evangelist will look like. Pushing me to adjust my presence to better reflect my own vision of the world, but hopefully also find balance with where things are headed out there in the real world.

I started API Evangelist in July of 2010 to study the business of APIs. As I was researching things in 2010 and 2011 I first developed what I consider to be the voice of the API Evangelist, which continues to be the voice I use in my storytelling here in 2018. Of course, it is something that has evolved and matured over the years, but I feel I have managed to remain fairly consistent in how I speak about APIs throughout the journey. It is a voice I find very natural to speak, and is something that just flows on some days whether I want it to or not, but then also something I can’t seem to find at all on other days. Maintaining my voice over the last eight years has required me to constantly adjust and fine tune, perpetually finding the frequency required to keep things moving forward.

First and foremost, API Evangelist is about my research. It is about me learning. It is about me crafting stories that help me distill down what I’m learning, in an attempt to articulate to some imaginary audience, which has become a real audience over the years. I don’t research stuff because I’m being paid (not always true), and I don’t tell stories about things I don’t actually find interesting (mostly true). API Evangelist is always about me pushing my skills forward as a web architect, secondarily about me making a living, and third about sharing my work publicly and building an audience–in short, I do API Evangelist to 1) learn and grow, 2) pay the bills, and 3) cultivate an audience to make connections.

As we approach 2019, I would say my motivations remain the same, but there is a lot that has changed in the API space, making it more challenging for me to maintain the same course while satisfying all these areas in a meaningful way. Of course, I want to keep learning and growing, but I’d say a shift in the API landscape toward the enterprise is making it more challenging to make a living. There just aren’t enough API startups out there to help me pay the bills anymore, and I’m having to speak and sell to the enterprise more. To do this effectively, a different type of storytelling strategy is required to keep the paychecks coming in. Something I don’t think is unique to my situation, and is something that all API focused efforts are facing right now, as the web matures, and the wild west days of the API come to a close. It was fun while it lasted–yee haw!!

In 2019, the API pioneers like SalesForce, Twitter, Facebook, Instagram, Twilio, SendGrid, Slack, and others are still relevant, but it feels like API storytelling is continuing its migration towards the enterprise. Stories of building an agile, scrappy startup using APIs aren’t as compelling as they used to be. They are being replaced by stories of existing enterprise groups becoming more innovative, agile, and competitive in a fast changing digital business landscape. The technology of APIs, the business of APIs, and the stories that matter around APIs have all been caught up in the tractor beam of the enterprise. In 2010, you did APIs if you were on the edge doing a startup, but by 2013 the enterprise began tuning into what was going on, by 2016 the enterprise responded with acquisitions, and by 2018 we are all selling and talking to the enterprise about APIs.

Despite what many people might believe, I’m not anti-enterprise. I’m also not pro-startup. I’m for the use of web infrastructure to deliver on ethical and sensible private sector business objectives, and to strengthen expectations of what is possible in the public sector, while holding both sectors accountable to each other. I understand the enterprise, and have worked there before. I also understand how it has evolved over the last eight years, through the API discussions I have been having with enterprise folks, the workshops I’ve conducted within various public and private sector groups, and studying this latest shift in technology adoption across large organizations. Ultimately, I am very skeptical that large business enterprises can adapt, decouple, evolve, and embrace API and microservice principles in a way that will mean success, but I’m interested in helping educate enterprise teams, assisting them in crafting their enterprise-wide API strategy, and contributing what I can to incentivize change within these large organizations.

A significant portion of my audience over the last eight years is from the enterprise. However, I feel like these are the people within the enterprise who have picked up their heads, and consciously looked for new ways of doing things. My audience has always been fringe enterprise folks operating at all levels, but API Evangelist does not enjoy mainstream enterprise adoption and awareness. A significant portion of my storytelling speaks to the enterprise, but I recognize there is a portion of it that scares them off, and doesn’t speak to them at all. One of the questions I am faced with is what type of tone do I strike as the API Evangelist in this next chapter? Will it be a heavy emphasis on the politics of APIs, or will it be more about the technology and business of APIs? To continue learning and growing in regards to what is happening on the ground with APIs, I’m going to need enterprise access. To continue making a living doing APIs, I’m going to need more enterprise access. The question for me is always around how far I put my left foot in the enterprise or government door, and how far I keep my right foot outside in the real world–where there is no perfect answer, and is something that requires constant adjustment.

Another major consideration for me is always around authenticity. An area I possess a natural barometer in, and while I have a pretty high tolerance for API blah blah blah, and writing API industry white papers, when I start getting into areas of technology, business, or politics where I feel like I’m not being authentic, I automatically begin shutting down. I’ve developed a bullshit-o-meter over the years that helps me walk this line successfully. I’m confident I can maintain it and not sell out here. My challenge is more about continuing to do something that matters to someone who will continue investing in my work, and having relevance to the audience I want to reach, and less about keeping things in areas that I’m interested in. I will gladly decline conversations, relationships, and engagements in unethical areas and shady government or business practices, and avoid classified projects, and pay for play concepts along the way. Perpetually pushing me to strike a balance between something that interests me, pushes my skills, brings value to the table, has a meaningful impact, and enjoys a wide reach, while also paying the bills. Which reflects what I’m thinking through as I write this blog post, demonstrating how I approach my own professional development.

So, what does the next chapter of storytelling look like for API Evangelist? I do not know. I know it will have more of a shift towards the enterprise. Which means a heavy emphasis on the technology and business of APIs. However, I’m also thinking deeply about how I present the political side of the API equation, and how I voice my opinions and concerns when it comes to the privacy, security, transparency, observability, regulation, surveillance, and ethics that swirl around APIs. I’m guessing they can still live side by side in some way, I just need to be smarter about the politics of it, and less ranty and emotional. Maybe separate things into a new testament for the enterprise that is softer, while also maintaining a separate old testament for the more hellfire and brimstone. IDK. It is something I’ll continue mulling over, and make decisions around as I continue to shift things up here at API Evangelist. As you can tell, my storytelling levels are lower than normal, but my traffic is still constant, reflecting other shifts in my storytelling that have occurred in the past. I’ll be refactoring and retooling over the holidays, and will no doubt have more posts about the changes. If you have any opinions on what value you get from API Evangelist, and what you’d like to see present in the next chapter, I’d love to hear from you in the comments below, on Twitter, or personally via email.


The Ability To Link To API Service Provider Features In My Workshops And Storytelling

All of my API workshops are machine readable, driven from a central YAML file that provides all the content and relevant links I need to deliver a single or multi-day API strategy workshop. One of the common elements of my workshops is links out to relevant resources, providing access to services, tools, and other insight that supports whatever I’m covering in my workshop. There are two parts to this equation, 1) me knowing to link to something, and 2) being able to link to something that exists.
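
For context, here is a stripped down, hypothetical version of one of those central YAML files, showing how narrative and links get bundled together per module. The module names, fields, and URLs are invented for illustration:

workshop:
  title: API Lifecycle Strategy
  days: 2
  modules:
    - name: Discovery
      narrative: Knowing where all of your API resources actually are.
      links:
        # Each link should point at a single, clean URL for a feature.
        - title: Example API catalog feature
          url: https://apis.example.com/catalog
    - name: Design
      narrative: Going API definition first across all resources.
      links:
        - title: Example API style guide
          url: https://design.example.com/style-guide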

A number of the API services and tooling I use don’t follow web practices, and do not provide any easy way to link to a feature, or any other way of demonstrating the functionality that exists. The web is built on this concept, but along the way, within web and mobile applications, we seem to have lost our understanding of this fundamental concept. There are endless situations where I’m using a service or tool, and think I should reference it in one of my workshops, but I can’t actually find any way to reference it as a simple URL. Value buried within a JavaScript nest, operating on the web, but not really behaving like you depend on the web.

Sometimes I will take screenshots to illustrate the features of a tool or service I am using, but I’d rather have a clean URL and bookmark to a specific feature on a service’s page. I’d rather give my readers, and workshop attendees, the ability to do what I’m talking about, not just hear me talk about it. In a perfect world, every feature of a web application would have a single URL to locate said feature. Allowing me to more easily incorporate features into my storytelling and workshops, but alas, many UI / UX folks are purely thinking about usability and rarely thinking about instruct-ability, and being able to cite and reference a feature externally, using the fundamental building blocks of the web.

I understand that it isn’t easy for all application developers to think externally like this, but this is why I tell stories like this. To help folks think about the externalities of the value they are delivering. It is one of the fundamental features of doing business on the web–you can link to everything. However, I think we often forget what makes the web so great, as we think about how to lock things down and erect walled gardens around our work, something that can quickly begin to work against us. This is why doing APIs is so important, as it can help us think outside the walls of the gardens we are building, and consider someone else’s view of the world. Something that can give us the edge when it comes to reaching a wider audience with whatever we are creating.


Flickr And Reconciling My History Of APIs Storytelling

Flickr was one of the first APIs that I profiled back in 2010 when I started API Evangelist. I used their API as a cornerstone of my research, resulting in it making it into my history of APIs storytelling, and continuing to be a story I’ve retold hundreds of times in the conversations I’ve had over the eight years of being the API Evangelist. Now, after the second (more because of Yahoo?) acquisition, Flickr users are facing significant changes regarding the number of images we can store on the platform, and what we will be charged for using the platform–forcing me to step back, and take another look at the platform that I feel has helped significantly shape the API space as we know it.

When I step back and think about Flickr, its most important contribution to the world of APIs was all about the resources it made available. Flickr was the original image sharing API, powering the growing blogosphere at the beginning of this century. Flickr gave us a simple interface for humans in 2004, and an API for other applications just six months later, providing us all with a place to upload the images we would be using across our storytelling on our blogs. It provided the API resources we would need to power the next decade of storytelling via our blogs, but it also set into motion the social evolution of the web, demonstrating that images were an essential building block of doing business on the web, and in just a couple of years, on the new mobile devices that would become ubiquitous in our lives.

Flickr was an important API, because it provided access to an important resource–our images. The API allowed you to share these meaningful resources on your blog, via Facebook and Twitter, and anywhere else you wanted. In 2005, this was huge. At the time, I was working to make a shift from being a developer lead, to playing around with side businesses built using the different resources that were becoming available online via simple web APIs. Flickr quickly became a central actor in my digital resource toolbox, and I was using it regularly in my work. As an essential application, Flickr quickly got out of my way by offering an API. I would still use the Flickr interface, but increasingly I was just publishing images to Flickr via the API, and embedding them in blogs, and other marketing–becoming what we began to call social media marketing, and eventually something that I would rebrand as API Evangelist, making it more about the tooling I was using than the task I was accomplishing.

After thinking about Flickr as a core API resource, I always think next about the stories I’ve told about Flickr’s Caterina Fake, who coined the phrase “business development 2.0”. As I tell it, back in the early days of Flickr, the team was getting a lot of interest in the product, and was unable to respond to all the emails and phone calls. They simply told people to build on their API, and if they were doing something interesting, Flickr would know, because they had the API usage data. Flickr was going beyond the tech and using an API to help raise the bar for business development partnerships, putting the burden on the integrator to do the heavy lifting, write the code, and even build the user base, before you’d get the attention of the platform. If you were building something interesting, and getting the attention of users, the Flickr team would be aware of it because of their API management tooling, and they would reach out to you to arrange some sort of partner relationship.

It makes for a good story. It resonates with business people. It speaks to the power of doing APIs. It also enjoys a position that omits so many of the negative aspects of doing startups, which become too easy to look away from when you are just focused on the tech, or, as a business leader, after the venture capital money begins flowing. Business development 2.0 has a wonderful libertarian, pull yourself up by your bootstraps ring to it. You make valuable resources available, and smart developers will come along and innovate! Do amazing things you never thought of! If you build it, they will come. Which all feeds right into the sharecropping and exploitation that occurs within ecosystems, leading to less than ethical API providers poaching ideas, and thinking that it is ok to push public developers to work for free on their farm. Resulting in many startups seeing APIs as simply a free labor pool, and a source of free road map ideas, manifesting concepts like the “cloud kill zone”. Business development 2.0 baby!!

Another dimension of this illness we like to omit is around the lack of a business model. I mean, the shit is free! Why would we complain about free storage for all our images, with a free API? It is easier for us to overlook the anti-competitive approaches to pricing, and complain down the road when each acquisition of the real product (Flickr) occurs, than it is to resist companies who lack a consumer level business model, simply because we are all the product. Flickr, Twitter, Facebook, Gmail, and other tools we depend on are all free for a reason. They are market creating services, and revenue is being generated at other levels, out of our view as consumers, or API developers. We are just working on Maggie’s Farm, and her pa is reaping all the benefit. When it comes to Flickr, Maggie and her pa cashed out a long time ago, and the farm keeps getting sold and resold, all while we keep working away in the soil, giving them the digital bits we’ve cultivated there, until conditions finally become unacceptable enough to run us off.

I began moving off of Flickr a couple of years ago. I stopped using them for blog photo hosting in 2010. I stopped uploading photos there regularly over the last couple of years. The latest crackdown doesn’t mean much to me. It will impact my storytelling to potentially lose such an amazing resource of openly licensed photos. However, I’ve saved each photo I use, along with its attribution, locally–hopefully my attribution links don’t begin to 404 at some point. Hopefully other openly licensed photo collections emerge on the horizon, and ideally SmugMug doesn’t do away with the openly licensed treasure trove they are stewards of now. The latest acquisition and business model shift occurring across the Flickr platform doesn’t hit me too hard, but the situation does give me an opportunity to step back and reassess my API storytelling, and the role that Flickr plays in my API Evangelist narrative. Giving me another opportunity to eliminate bullshit and harmful myths from my storytelling and myth making–which I feel is getting pretty close to leaving me with nothing left to tell when it comes to APIs.

In the end, if I just focus purely on the tech, and ignore the business and politics of APIs, I can keep telling these bullshit stories. This is the real Flickr lesson for me. I’d say there are two reasons we perpetuate stories like this. One, “because we just didn’t know any better”. Which is pretty weak. Two, it is how capitalism works. It is why us dudes, especially us white dudes, thrive so well in a Silicon Valley tech libertarian world, because this type of myth making benefits us, even when it repeatedly sets us up for failure. This is one of the things that makes me throw up a little (a lot) in my mouth when I think about the API Evangelist persona I’ve created. This entire reality makes it difficult for me to keep doing this API Evangelist theater each day. APIs are cool and all, but when they are wielded as part of this larger money driven stream of consciousness, we (individuals) are always going to lose. In the end, why the fuck do I want to be a mouthpiece for this kind of exploitation. I don’t.

Photo Credit: Kin Lane (The First Photo I Uploaded to Flickr)


The Impact Of Travel On Being The API Evangelist

Travel is an important part of what I do. It is essential to striking up new relationships, and reinforcing old ones. It is important for me to get out of my bubble, expose myself to different perspectives, and see the world in different ways. I am extremely grateful for the ability to travel around the US, and the world, the way that I do. I am also extremely aware of the impact that travel has on me being the API Evangelist–the positive, the negative, and the general shift in the tone of my storytelling after roaming the world.

One of the most negative impacts that traveling has on my world is on my ability to complete blog posts. If you follow my work, when I’m in the right frame of mind, I can produce 5-10 blog posts across the domains I write for, on a daily basis. The words just do not flow in the same way when I am on the road. I’m not in a storyteller frame of mind. At least in the written form. When I travel, I am existing in a more physical and verbal sense as the API Evangelist, something that doesn’t always get translated into words on my blog(s). This is ok for short periods of time, but extended periods on the road will begin to take a toll on my overall digital presence.

After the storytelling impact, the next area to suffer when I am on the road is my actual project work. I find it very difficult to write code, or think at architectural levels, while on the road. I can flesh out and move forward smaller aspects of the projects I’m working on, but because of poor Internet, packed schedules, and the logistics of being on the road, my technical mind always suffers. This is also related to the impact on my overall storytelling. Most of the stories I publish on a daily basis evolve out of me moving forward actual projects as part of my API Evangelist work. If I am not actively developing a strategy, designing a specific API, or working on API definitions, discovery, governance, or one of the loftier aspects of my work, the chances I’m telling interesting stories will be significantly diminished.

Once I land back home, one of the first orders of business is to unclog the pipes with a “travel is hard” story. ;-) Pushing my fingers to work again. Testing out the connections between my brain and my fingers. While I also open up my IDE, command line, and API universe dashboard, and begin refining my paper notes about what the fuck I was actually doing before I got on that airplane. Making it all work again is tough. Even the simplest of tasks seem difficult, and many of the projects I’m working on just seem too big to even know where to begin. However, with a little effort, focus, and the lack of a plane, train, or meeting to be present for, I’ll find my way forward again, slowly picking back up the momentum I enjoy as the API Evangelist. Researching, coding, telling stories, and pushing forward my projects so that they can have an impact on the space, and continue paying the bills to keep this vessel moving forward in the direction that I want.


What Are Your Enterprise API Capabilities?

I spend a lot of time helping enterprise organizations discover their APIs. All of the organizations I talk to have trouble knowing where all of their APIs are–even the most organized of them. Development and IT groups have just been moving too fast over the last decade to know where all of their web services and APIs are. The result is large organizations not fully understanding what all of their capabilities are, even when those capabilities are actively operated, and may drive existing web or mobile applications.

Each individual API within the enterprise represents a single capability–the ability to accomplish a specific enterprise task that is valuable to the business. While each individual engineer might be aware of the capabilities present on their team, without group wide, comprehensive API discovery across an organization, the extent of the enterprise capabilities is rarely known. If architects, business leadership, and other stakeholders can’t browse, list, search, and quickly get access to all of the APIs that exist, the knowledge of the enterprise capabilities cannot be quantified or articulated as part of regular business operations.

In 2018, the capabilities of any individual API are articulated by its machine readable definition. Most likely OpenAPI, but it could also be something like API Blueprint, RAML, or another specification. For these definitions to speak to not just the technical capabilities of each individual API, but also the business capabilities, they will have to be complete. Utilizing a higher level strategic set of tags that help label and organize each API into a meaningful set of business capabilities that best describes what each API delivers. Providing a sort of business capabilities taxonomy that can be applied to each API’s definition and used across the rest of the API lifecycle, but most importantly as part of API discovery, and the enterprise digital product catalog.
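To give a sense of what this looks like in practice, here is a minimal sketch of a capability-tagged definition, using OpenAPI 2.0. The API, path, and tag names are all hypothetical, purely for illustration:

```yaml
# Hypothetical example - the API, path, and tag names are illustrative only.
swagger: "2.0"
info:
  title: Customer Accounts API
  version: "1.0"
# Tags drawn from a business capabilities taxonomy, not just technical groupings.
tags:
  - name: customer-onboarding
    description: The business capability of bringing new customers onto the platform.
paths:
  /accounts:
    post:
      summary: Create a new customer account
      tags:
        - customer-onboarding
      responses:
        '201':
          description: Account created
```

With every API labeled this way, the same taxonomy can drive search and browsing in a catalog, letting business stakeholders find capabilities without digging through technical documentation.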

One of the first things I ask any enterprise organization I’m working with upon arriving is, “do you know where all of your APIs are?” The answer is always no. Many will have a web services or API catalog, but it is almost always out of date, and not used religiously across all groups. Even when there are OpenAPI definitions present in a catalog, they rarely contain the metadata needed to truly understand the capabilities of each API. This leaves developer and IT operations existing as black holes when it comes to enterprise capabilities, sucking up resources, but letting very little light out when it comes to what is happening on the inside. Making it very difficult for developers, architects, and business users to articulate what their enterprise capabilities are, and oftentimes reinventing the wheel when it comes to what the enterprise delivers on the ground each day.


Join Me For A Fireside Chat At The Paris API Meetup This Wednesday

I am in Europe for most of October, and while I am in Paris we thought it would be a good idea to pull together a last minute API Meetup. Romain Simiand (@RomainSimiand), the API Evangelist at PeopleDoc, was gracious enough to help pull things together, and the Streamdata.io team is stepping up to help with food and drink. Pulling together a last minute gathering at PeopleDoc in Paris, and bringing me on stage to talk about the technology, business, and politics of APIs, as well as about some of my recent work on API discovery and event-driven architecture.

You can find more details on the Paris API Meetup site, with directions on how to find PeopleDoc. Make sure you RSVP so that we know you are coming, and of course, please help spread the word. We have over 30 people attending so far, but I think we can do better. I’m happy to get on stage and help drive the API discussion, but I’d prefer to have a healthy representation of the Paris API community asking questions, helping me understand what is happening across the area when it comes to APIs. I always have plenty of knowledge to share, but it becomes exponentially more valuable when people on the ground within communities are asking questions, and making it relevant to what is happening within the day to day operations of companies in the local area.

While I enjoy doing conference keynotes and panels, my favorite format of event is the Meetup. Bringing together fewer than 100 people to have a discussion about APIs. I always find that I learn the most in this environment, and am able to actually engage with developers and business folks about what really matters when it comes to APIs. The larger the audience, the more it is just about me broadcasting my message, and when it is a smaller and more intimate venue, I feel like I can better connect with people. In my opinion, this is how all API events should be–small, intimate, and a real world conversation about APIs. Not just an API pundit pushing their thoughts out, but a format that ensures all participants feel like they are actually part of the conversation.

If you are in the Paris region, or can make the time to hop on a plane or train and make it to Paris this Wednesday, I’d love to hang out. If you can’t make it, I’ll be back for API Days Paris in December, but that will be a bigger event, and it might be more difficult to carve out the time to hang. So, bring your API questions, and come over to the PeopleDoc office this Wednesday, and we’ll have a proper discussion about the technology, business, and politics of APIs. Helping drive the API discussion going on in France, continuing to push it forward. Making France a leader when it comes to doing business in the growing API economy. I look forward to seeing you all in Paris this week!


I Participated In An API Workshop With The European Commission Last Week

I was in Ispra, Italy last week for a two day workshop on APIs with the European Commission. The European Commission’s DG CONNECT, together with the Joint Research Centre (JRC), launched a study with the purpose of gaining further understanding of the current use of APIs in digital government and their added value for public services, and they invited me to participate. I was joined by Mehdi Medjaoui (@medjawii), David Berlind (@dberlind), and Mark Boyd (@mgboydcom), along with EU member states and European cities, to help provide feedback and strategies for consideration by the commission.

This European Commission study is looking at “innovative ways to improve interconnectivity of public services and reusability of public sector data, including dynamic data in real-time, safeguarding the data protection and privacy legislation in place.” Looking to:

  • assess the digital government API landscape and opportunities to support the digital transformation of the public sector
  • identify the added value for society and public administrations of digital government APIs (key enablers, drivers, barriers, potential risks and mitigations)
  • define a basic Digital Government API EU framework and the next steps

David Berlind from ProgrammableWeb gave a couple talks, with myself, Mehdi, and Mark following up. The rest of the time was spent hearing presentations from EU member states and other municipal efforts–learning more about the successes and the challenges they face. What I heard reflected what I’ve experienced in federal government, as well as city, county, and state level API efforts I’ve participated in across the United States. All groups were struggling to win over leaders and the public, modernize legacy systems, build on top of open data efforts, and push forward the conversation using a modern approach to delivering web APIs.

I am eager to see what comes out of the European Commission API project. While there are still interesting things happening in the United States, I feel like there is an opportunity for the EU to leapfrog us when it comes to meaningful API adoption within government. While many cities, counties, and states are still investing in open data and APIs, the investment at the federal level has stagnated with the current administration. There are still plenty of agencies moving the API conversation forward, but the leadership is coming from the GSA, and from within individual agencies, not from the executive branch. What is happening at the European Commission has the potential to be adopted by all the countries in the European Union, and to make a pretty significant impact on how government works using APIs.

I’ll be staying in touch with the group leading the effort, and making myself available for future gatherings. There was talk of holding another gathering at API Days in Paris, and I am sure there will be further workshops as the project evolves. Clearly the European Commission has a huge amount of work ahead of them, but the fact that they are coming together like this, highlighting, as well as learning from, the existing work going on across the member states, shows significant promise. As we were wrapping up, I made clear the importance of continued storytelling between the member states, as well as out of the European Commission. Emphasizing that it will take a regular drumbeat of activity, and sharing of the work in real-time, for all of this to evolve as they desire. However, with the right cadence, the API effort out of Europe could make a pretty significant impact across the EU, and beyond.


API Evangelist API Lifecycle Workshop on API Discovery

I’ve been doing more workshops on the API lifecycle within enterprise groups lately. This allows me to refine my materials on the ground within enterprise groups, and further flesh out the building blocks I recommend to help them craft their own API strategy. One of the first discussions I start with large enterprise groups is always in the area of API discovery, commonly asked as, “do you know where all your APIs are?”

EVERY group I’m working with these days is having challenges when it comes to easy discovery across all the digital resources they possess and put to use on a daily basis. I’m working with a variety of companies, organizations, institutions, and government agencies when it comes to the API discovery of their digital self:

  • Low Hanging Fruit (outline) - Understanding what resources are already on the public websites, and applications, by spidering existing domains looking for data assets that should be delivered as API resources.
  • Discovery (outline) - Actively looking for web services and APIs that exist across an organization, industry, or any other defined landscape. Documenting, aggregating, and evolving what is available about each API, while also publishing back out and making it available to relevant teams.
  • Communication (outline) - Having a strategy for reaching out to teams and engaging with them around API discovery, helping them remember to register and define their APIs as part of a wider strategy.
  • Definitions (outline) - Working to ensure that all definitions are being aggregated as part of the process so that they can be evolved and moved forward into design, development, and production–investing in all of the artifacts that will be needed down the road.
  • Dependencies (outline) - Defining any dependencies that are in play, and will play a role in operations. Auditing the stack behind any service as it is being discovered and documented as part of the overall effort.
  • Support (outline) - Ensuring that all teams have support when it comes to questions about web service and API discovery, helping them initially, as well as along the way, making sure all their APIs are accounted for, and indexed as part of discovery efforts.

API discovery will positively or negatively impact the rest of the API lifecycle at any organization. Not knowing where all of your resources are, and not having them properly defined for discovery at critical design, development, production, and integration moments, is an illness all companies are suffering from in 2018. We’ve deployed layers of services to deliver on enterprise growth, and put down a layer of web APIs to service the web, mobile, and increasingly device-based applications we’ve been delivering. Resulting in a tangled web of services we need to tame before we can move forward properly.
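The central artifact in this kind of work is usually a machine readable index of where everything is. As a rough sketch, assuming an APIs.json-style index rendered as YAML, with all names and URLs hypothetical:

```yaml
# Hypothetical APIs.json-style index (rendered as YAML) - names and URLs are illustrative only.
name: Acme Internal API Catalog
description: Index of the web services and APIs discovered across the organization.
apis:
  - name: Customer Accounts API
    description: Create, read, and update customer accounts.
    humanURL: https://developer.example.com/accounts
    baseURL: https://api.example.com/accounts
    tags:
      - customer-onboarding
    properties:
      # Pointer to the machine readable definition for this API.
      - type: x-openapi
        url: https://api.example.com/accounts/openapi.yaml
```

An index like this is something each team can maintain alongside their code, while a central process aggregates them into the organization-wide catalog.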

Let me know if you need help with API discovery where you work. It is the fastest growing aspect of my API workshop portfolio. Aside from security, I feel like API discovery is the biggest challenge facing large enterprise groups learning to be more agile, flexible, and pushing forward with a microservices and event-driven way of doing business. I definitely don’t have all the solutions when it comes to API discovery, but I now have a lot of experience to share around how we are defining our enterprise capabilities and resources, and making them more discoverable across our entire API catalog.


API Evangelist API Lifecycle Workshop on API Design

I’ve been doing more workshops on the API lifecycle within enterprise groups lately. This allows me to refine my materials on the ground within enterprise groups, and further flesh out the building blocks I recommend to help them craft their own API strategy. One area I find more groups working on these days centers around a design-first approach to the API lifecycle.

While not many groups I work with have achieved a design-first approach to doing APIs, almost all of them express interest in making this a reality, at least within some groups or projects. Being able to define, design, mock, and iterate upon an API contract before code gets written is very appealing to enterprise API groups, and I’m looking to help them think through this part of their API lifecycle, and work towards making API design first a reality at their organization.

  • Definition (outline) - Using definitions as the center of the API design process, developing an OpenAPI contract for moving things through the design phase, iterating, evolving, and making sure the definitions drive the business goals behind each service.
  • Design (outline) - Considering the overall approach to design for all APIs, executing upon design patterns that are in use to consistently deliver services across teams. Leveraging a common set of patterns that can be used across services, beginning with REST, but also eventually allowing the leveraging of hypermedia, GraphQL, and other patterns when it comes to the delivery of services.
  • Versioning (outline) - Managing the definition of each API contract being defined as part of the API design stop along the lifecycle, and having a coherent approach to laying out next steps.
  • Virtualization (outline) - Providing mocked, sandbox, and virtualized instances of APIs and other data for understanding what an API does, helping provide an instance of an API that reflects exactly how it should behave in a production environment.
  • Testing (outline) - Going beyond just testing, and making sure that a service is being tested at a granular level, using schema for validation, and making sure each service is doing exactly what it should, and nothing more.
  • Landing Page (outline) - Making sure that each individual service being designed has a landing page for accessing its documentation, and other elements during the design phase.
  • Documentation (outline) - Ensuring that there is always comprehensive, up to date, and if possible interactive API documentation available for all APIs being designed, allowing all stakeholders to easily understand what an API is going to accomplish.
  • Support (outline) - Ensuring there are support channels available for an API, and stakeholders know who to contact when providing feedback and answering questions in real or near real time, pushing forward the design process.
  • Communication (outline) - Making sure there is a communication strategy for moving an API through the design phase, and making sure stakeholders are engaged as part of the process, with regular updates about what is happening.
  • Road Map (outline) - Providing a list of what is being worked on with each service being designed, and pushed forward, providing a common list for everyone involved to work from.
  • Discovery (outline) - Make sure all APIs are discoverable after they go through the design phase, ensuring each type of API definition is up to date, and catalogs are updated as part of the process.

I currently move my own APIs forward in this way using a variety of open source tooling, and GitHub. I’m working with some groups to do this in Stoplight.io, as well as Postman. I don’t think there is any “right way” to go API define and design first. I’m here to just educate teams about what is going on out there, showcase some of the services and tools that help enable an API design first reality, and talk through the technological, business, and political challenges that are preventing a team, or an entire enterprise group, from becoming API design first.
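To make this concrete, here is a minimal sketch of the kind of contract that moves through a design-first lifecycle, with just enough definition to mock, share, and iterate on before any code gets written. Everything in it is hypothetical:

```yaml
# Hypothetical design-first contract stub (OpenAPI 2.0) - just enough to mock and iterate on.
swagger: "2.0"
info:
  title: Order Status API (draft)
  version: "0.1.0"
paths:
  /orders/{orderId}/status:
    get:
      summary: Retrieve the current status of an order
      parameters:
        - name: orderId
          in: path
          required: true
          type: string
      responses:
        '200':
          description: The current status of the order
          # Example payload used to power mock and sandbox instances during design.
          examples:
            application/json:
              orderId: "1234"
              status: shipped
```

A stub like this can be loaded into a mocking or virtualization tool, giving stakeholders something to react to long before development begins.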

Let me know if you need help thinking through the API design strategy where you work. I’ve been studying this area since it emerged as a discipline in 2012, led by API service providers like Apiary, and continuing with other next generation platforms like Stoplight.io, APIMATIC, and others. For me, API design is less about REST vs Hypermedia vs GraphQL, and more about the lifecycle, services, tooling, and API definitions you use. I’m happy to share my view of the API design landscape with your group, just let me know how I can help.


The Layers Of Completeness For An OpenAPI Definition

Everyone wants their OpenAPIs to be complete, but what that really means will depend on who you are, what your knowledge of OpenAPI is, as well as your motivation for having an OpenAPI in the first place. I wanted to take a crack at articulating a complete(enough) definition for the OpenAPIs I create, based upon what I’m needing them to do.

Info & Base - Give the basic information I need to understand who is behind an API, and where I can access it.

Paths - Provide an entry for every path that is available for an API and that should be included in this definition.

Parameters - Provide a complete list of all path, query, and header parameters that can be used as part of an API.

Descriptions - Flesh out descriptions for all the paths and parameters, helping describe what an API does.

Enums - Publish a list of all the enumerated values that are possible for each parameter used as part of an API.

Definitions - Document the underlying schema being returned by creating a JSON schema definition for the API.

Responses - Associate the definition for the API with the path using a response reference, connecting the dots regarding what will be returned.

Tags - Tag each path with a meaningful set of tags, describing what resources are available in short, concise terms and phrases.

Contacts - Provide contact information for whoever can answer questions about an API, and provide a URL to any support resources.

Create Security Definitions - Define the security for accessing the API, providing details on how each API request will be authenticated.

Apply Security Definitions - Apply the security definition to each individual path, associating common security definitions across all paths.

Complete(enough) - That should give us a complete (enough) API description.
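Pulling those layers together, here is a rough sketch of what a complete(enough) definition might look like in OpenAPI 2.0. The API, paths, schema, and values are all hypothetical, just to show each layer in place:

```yaml
# Hypothetical complete(enough) OpenAPI 2.0 definition - every name and value is illustrative.
swagger: "2.0"
info:
  title: Stock Alerts API
  description: Provides alert subscriptions for stock price movements.
  version: "1.0"
  contact:
    name: API Support
    url: https://developer.example.com/support
host: api.example.com
basePath: /v1
schemes:
  - https
# Create the security definition once...
securityDefinitions:
  apiKey:
    type: apiKey
    name: x-api-key
    in: header
paths:
  /alerts:
    get:
      summary: List alerts
      description: Returns the list of alerts configured for the authenticated account.
      tags:
        - alerts
      parameters:
        - name: status
          in: query
          type: string
          enum: [active, paused, expired]
          description: Filter alerts by their current status.
      # ...then apply it to each individual path.
      security:
        - apiKey: []
      responses:
        '200':
          description: A list of alerts
          schema:
            type: array
            items:
              $ref: '#/definitions/Alert'
definitions:
  Alert:
    type: object
    properties:
      id:
        type: string
      symbol:
        type: string
      threshold:
        type: number
```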

Obviously there is more we can do to make an OpenAPI even more complete and precise as a business contract, hopefully speaking to both developers and business people. Having OpenAPI definitions is important, and having them be up to date, complete (enough), and useful is even more important. OpenAPIs provide much more than documentation for an API. They provide all the technical details an API consumer will need to successfully work with an API.

While there are obvious payoffs for having an OpenAPI, like being able to publish documentation and generate code libraries, there are many other uses, like loading it into Postman, Stoplight, and many other API services and tooling that help developers understand what an API does, and reduce friction when they integrate with and have to maintain their applications. Having an OpenAPI available is becoming a default mode of operation, and something every API provider should have.


A Quick Manual Way To Create An OpenAPI From A GET API Request

I have numerous tools that help me create OpenAPIs from the APIs I stumble across each day. Ideally I’m crawling, scraping, harvesting, and auto-generating OpenAPIs, but inevitably the process gets a little manual. To help reduce friction in these manual processes, I try to have a variety of services, tools, and scripts I can use to make my life easier when it comes to creating a machine readable definition of an API I am using–in this scenario it is the xignite CloudAlerts API.

One way I’ll create an OpenAPI from a simple GET API request, providing me with a machine readable definition of the surface area of that API, is using Postman. When you have the URL copied onto your clipboard, open up your Postman, and paste the URL with all the query parameters present.

You’ll have to save your API request, and add it to a collection, but then you can choose to share the collection, and retrieve the URL to this specific request’s Postman Collection.

This gives me a machine readable definition of the surface area of this particular API, defining the host, baseURL, path, and parameters, but it doesn’t give me much detail about the underlying schema being returned. To begin crafting the schema for the underlying definition of the API, and connect it to the response for my API definition, I’ll need an OpenAPI–which I can create from my Postman Collection using API Transformer from APIMATIC.

After pasting the URL for the Postman Collection into the API Transformer form, you can generate an OpenAPI from the definition. Now you have an OpenAPI, except it is missing the underlying schema, so I will just grab the response from my last request and convert it into JSON Schema using JSONSchema.net. I’ll just grab the properties section of the result, as the bottom definitions portion of the OpenAPI specification is just JSON Schema.

I can merge my JSON Schema with my OpenAPI, adding it to the definitions collection at the bottom. With a little more love, adding a more coherent title and description, and fluffing up some of the summaries, descriptions, tags, etc., I now have a fairly robust profile of this particular API. Ideally, this is something the API provider would do, but in the absence of an OpenAPI or Postman Collection, this is a pretty quick and dirty way to produce an OpenAPI and Postman Collection from a simple GET API, and the formula works for other types of API requests as well–leaving me with a machine readable definition for an API I will be integrating with.
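The merge itself is pretty mechanical–the generated schema lands in the definitions collection, and the response points at it with a reference. A hypothetical sketch, with made-up property names standing in for whatever JSONSchema.net generated:

```yaml
# Hypothetical merge result - property names stand in for the generated schema.
paths:
  /alerts:
    get:
      responses:
        '200':
          description: Successful response
          schema:
            # Connect the response to the schema merged in below.
            $ref: '#/definitions/CloudAlert'
definitions:
  CloudAlert:
    type: object
    properties:
      AlertIdentifier:
        type: string
      Status:
        type: string
```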

There are definitely other ways of scraping API documentation, or processing .HAR files generated from a proxy, but I think this is a way that anyone, even a non-developer, can accomplish. I did my version in JSON, but the same process will work for YAML, making the resulting definition a little more human readable, while still maintaining its machine readability. I like documenting these little processes so that my readers can put them to use, but it also provides a nice definition that I can use to remember how I get things done–my memory isn’t what it used to be.

The resulting API definitions from this process are:

  • OpenAPI
  • [Postman Collection](https://www.getpostman.com/collections/02a6658c25631d716407)

API Evangelist And Streamdata.io API Lifecycle Workshops

I have been partnering with Streamdata.io to evolve how I work with enterprise groups on their API lifecycle strategy. After working closely with the Streamdata.io sales team, it became clear that many enterprise organizations weren’t quite ready for the event-driven infrastructure Streamdata.io provides. Most groups were in desperate need of stepping back and developing their own formal strategy for delivering APIs across the enterprise, before they could ever scale their operations and take advantage of things being more event-driven and real time.

In response, I set out to evolve my own API lifecycle research, gathered over the last eight years of studying the API space, and make it more accessible to the enterprise, as self-service short form and long form content, in-person workshops, and forkable blueprints that any enterprise can set in motion on their own. The result is a series of evolvable API projects that we are using to drive our ongoing workshop engagements with enterprise API groups, focusing in on these critical areas of the API lifecycle:

  • Discovery (demo) - Knowing where all of your APIs and services are across groups.
  • Design (demo) - Focus in on an a design and virtualized API lifecycle before deployment.
  • Development (demo) - Understanding the many ways in which APIs can be developed & deployed.
  • Production (demo) - Thinking critically about properly operating API infrastructure.
  • Governance (demo) - Understanding how to measure, report, and evolve API operations.

Not all of our workshops will cover all of these areas. Depending on the time we have available, the scope of the team participating in a workshop(s), and how far along teams are in their overall API journey, the workshops might head in different directions, and expand or contract the depth in which we dive into each of these areas (ie. not everyone is ready for governance). After several workshops this year, we have found these areas of the API lifecycle to be the most meaningful ways to organize a workshop, and help enterprise groups think more critically about their API strategy.

Craft An API Strategy For Your Enterprise
The purpose of our API lifecycle workshops is to help enterprise organizations develop a strategy. Bring in outside API knowledge, learn more about where an enterprise API group is in their overall API journey, and leave them with a structured artifact that helps them step back and look at the entire lifecycle of their APIs. Moving the API conversation across the enterprise forward in a meaningful way with a handful of distinct actions:

  • Starter API Lifecycle Strategy - Provide a template API lifecycle strategy in GitHub or GitLab as a README and YAML file (see the sketch after this list). Providing a framework to consider as you craft your own API strategy, and a starting point for your journey. Generated from eight years of research on the API space, providing a living document that can be used to execute and evolve the overall API lifecycle strategy for an enterprise organization.
  • API Lifecycle Workshop - Schedule and conduct a single or multi-day API lifecycle workshop on-site, with as many enterprise and / or partner stakeholders as possible. We will come on site, and walk teams through each stop along a modern API lifecycle, helping customize, personalize, and make the API strategy better fit the enterprise’s strategy.
  • Evolve API Lifecycle Strategy - Coming out of the workshop, you will be given an updated API lifecycle YAML document. Providing a human and machine readable framework that represents your API lifecycle strategy, helping provide a scaffolding for future discussions. Producing a usable artifact out of the gathering, encapsulating the research and experience we bring to the table, adding what we learned during the workshop, and hopefully continually being used to drive the API strategy road map.
  • Provide Execution & Support - After the workshop is done, and we have provided an updated API lifecycle, we are still here to support. We can answer questions via the repository where we leave the API strategy artifact, as well as email. We are happy to conduct virtual meetings to help check in on where you are at, and of course we are happy to always come back and conduct future workshops as you need.
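To give a sense of the artifact we leave behind, here is a rough sketch of what the YAML lifecycle strategy might look like. The structure and field names are illustrative, not a fixed schema:

```yaml
# Hypothetical API lifecycle strategy artifact - areas, stops, and fields are illustrative only.
name: Acme API Lifecycle Strategy
areas:
  - name: discovery
    stops:
      - name: definitions
        description: Aggregate OpenAPI definitions for every service into a central catalog.
        status: active
  - name: design
    stops:
      - name: virtualization
        description: Provide mock instances of each API before development begins.
        tooling:
          - https://stoplight.io
        status: planned
```

Because it is both human and machine readable, the same file can anchor strategy discussions while eventually being wired into automation and reporting.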

We are happy to continue the conversation around the API lifecycle artifact we will leave with you. We don’t expect you to use everything we propose. We are more interested in teaching you about what is possible, and continuing to work with you to refine, evolve, and make the API lifecycle your own. We’ve worked hard to identify many different ways of operating API infrastructure at scale, and continue to help standardize it and make it more accessible to large enterprise organizations.

Helping You On Your Journey
The resulting API lifecycle strategy we leave behind after the workshop is done will possess all the knowledge we’ve aggregated across API research, gathered across leading API providers, and polished by conducting API workshops within the enterprise. Embedded within this API lifecycle artifact we’ll leave you with some added elements that will help you in your journey, going beyond just advice on process, and helping the rubber meet the road.

  • Links - Providing a wealth of references to external resources, attached to each stop along the API lifecycle, bringing our API research into your organization, and allowing you to put it to use inline as you build your API strategy.
  • Services - Embedding links to API services that you are already using, and introducing you to other useful API services along the way. Making sure specific services are associated with each stop along the API lifecycle, across the different areas, and even sub-linking to specific features that can help accomplish a specific aspect of API operations.
  • Tooling - Embedding links to open source API tools, specifications, and other solutions that can be used to accomplish a specific aspect of operating an API platform. Bringing in open source solutions that can be considered as you are crafting the API strategy for your enterprise organization.

While not all organizations will be ready to use a YAML API lifecycle artifact as part of their API orchestration, it helps to have the API lifecycle well defined, even if many steps are still manual. It helps teams think more critically about how they approach the delivery of APIs, while also being something that can be downloaded, forked, and reused by different groups across the enterprise. Eventually it is something that can be further automated, measured, and used to help quantify the maturity level of each API, as well as APIs across distributed teams.

Next Steps For Developing A Strategy
If you are interested in what we are offering with our API lifecycle workshops, there are a few things you can do to get started, helping us bring an API lifecycle workshop your way:

  • Email Me - Happy to answer any questions, and get you the information you need to sell the workshop to your team, and get you on the calendar.
  • Take Survey - Take a quick survey about your operations, helping us tailor a workshop for your needs.
  • Demo Workshop - Take a stroll around one of our demos that we use in our workshop, introducing you to what we’ll be discussing.

While I work from a common set of workshop material when designing these workshops, I work to tailor each engagement for the company, organization, institution, or government agency I’m working with. All of my API lifecycle workshop blueprints are meant to be forkable, customizable, and something anyone can turn into their own working API lifecycle strategy.


API Developer Outreach Research For The Department of Veterans Affairs

This is a write-up for research I conducted with my partner Skylight Digital. The team conducted a series of interviews with leading public and private sector API platforms regarding how they approached developer outreach, and then I wrote it up as a formal report, which the Skylight Digital team then edited and polished. We are looking to provide as much information as possible regarding how the VA and other federal agencies should consider crafting their API outreach efforts.

This is Skylight’s report for the U.S. Department of Veterans Affairs (VA) microconsulting project, “Developer Outreach.” The VA undertook this project in order to better understand how private- and public-sector organizations approach Application Programming Interface (API) developer outreach.

In preparing this report, we drew on nearly a decade’s worth of our own API developer outreach expertise, as well as information obtained through interviews with seven different organizations. For each interview, we followed an interview script (see Appendix A) and took notes. The Centers for Medicare & Medicaid Services (CMS) Blue Button API program (see Appendix B), the Census Bureau (see Appendix C), the OpenFEC program (see Appendix D), and Salesforce (see Appendix E) all agreed to releasing our notes publicly. The other three organizations (a large social networking site, a government digital services delivery group, and a cloud communications platform) preferred to keep them private.

We structured this report to largely reflect the interview conversations that we held with people who are involved in developer outreach programs and activities. These conversations focused around the following questions:

  1. What is the purpose and scope of your API developer outreach program?

  2. What does success look like for you?

  3. What are the key elements or practices (e.g, documentation, demos, webinars, conferences, blog posts) that you are using to drive and sustain effective adoption of your API(s)?

  4. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

  5. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

  6. How is your outreach program structured and staffed? How did it start out and evolve over time?

  7. Imagine that you are charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

  8. What big ideas do you have for evolving your outreach program to make it even more effective?

If we were forced to distill all of the interviews and all of our existing information down to a single essential piece of advice, it would be this: involve the programmers who are going to be using your APIs. By “involve,” we mean:

  1. Engage them early.

  2. Support the users with documentation, hackathons, forums, and other types of engagement activities.

  3. Measure the happiness of the programmers who are using your APIs, as well as the number of times that they use them and how they use them.

  4. Prioritize your APIs so as to maximize the utility to would-be programmers.

In the pages below, you will find a large number of specific suggestions culled from extensive interviews and our collective personal experience. All of these specific techniques are in service to the idea of designing the API program with the programmers who will use the API in mind at all times.

Purpose and scope of developer outreach programs

What we learned

Different API programs have different purposes, and these purposes are fulfilled by varying levels of API outreach resources. Just as organizations have invested differing amounts of resources into their web presence over the last 25 years, the API providers we talked to are each at different stages in their API journey. This variety presented us with a range of informative stories concerning outreach programs.

The purpose behind APIs

Our interviews revealed a number of potential purposes for API development, a number of which are presented below:

  • Build it and they will come: It is common for companies, organizations, institutions, and government agencies to launch an API effort with no specifically-stated purpose. They make resources available, build awareness amongst developers, and encourage innovation. While such organizations may not know what the future will hold, they believe that investing in their API platform in a sensible and pragmatic way will attract developers and, inevitably, the innovative applications those developers will bring.

  • A focus on APIs as a product: Some interviewees we spoke to are fully invested in their API efforts, treating their APIs as a product in and of themselves. They develop formal programs around their API operations, treat API consumers as end customers, and consistently ensure their programs have the resources they need. In short, they treat API operations like a business and run them as efficiently as possible, even if commercial sales is not the end goal.

  • A focus on APIs as an enabler: Other interviewees we consulted focus on ensuring the availability of APIs as a means to support an existing web or mobile application; in a sense, the API is relegated to a secondary role relative to the primary application’s existence. APIs for these kinds of organizations serve to drive traffic, adoption, and integration when it comes to a larger vision, but the APIs themselves are simply enablers for the overall platform.

  • A focus on the developer: Beyond the API as a product — and the web and mobile apps they power — API efforts tend to focus on the developer. Development-focused APIs emphasize what developers will bring to the table when they have the resources and support they need, and are thus central to the investment and engagement necessary for successful outreach/development.

  • Attract unique entities: API platforms are often aimed at attracting the attention of interesting/progressive companies, universities, institutions, and other entities. These external entities can often put API resources to use in new and innovative ways, and in doing so, they can bring attention and potential partnerships to the platform.

  • Leverage the network effect: Some API providers we interviewed expressed an interest in joining a wider network of collaborative open data and API efforts. They felt that working in concert with others, by bringing together many different federal, state, or municipal agencies, would benefit the platforms. Further, they felt this would allow the API initiatives to be led by either policy or private interests.

  • Save developers time: Overall, the most positive motivation for API development expressed by existing providers was to streamline API onboarding and integration. Further development would help internal, partner, and 3rd party public developers be more successful in putting API resources to work in their web, mobile, and device applications.

The scope of API investment

The programs we spoke to each had differing levels of investment, access to resources, and primary purposes. Since the scope of an API investment is naturally a function of these factors, this meant that the organizations we interviewed had different intended scopes for their API projects. We have collated a selection of these scopes below.

  • Starting small: A common theme across the API operations we spoke with was the importance of starting small and building a stable foundation before attempting larger infrastructure development. Beginning with a basic, yet valuable set of API resources, fleshing out the entire lifecycle, and building processes around what is necessary to be successful before scaling and increasing the scope of API operations was routinely described as a critical part of any API scope.

  • Invest as it makes sense: A lack of resources is a common element of slow growth and unrealized API operations. Bringing significant levels of investment to projects while simultaneously applying appropriate human and technological capital to API operations were universally mentioned by our interviewees as critical to seeing desired results.

  • Working with what you have: Almost all the API programs we spoke to worked in resource-starved environments. They wished to do more and to be more, but lacked the time, human investment, and technical resources to make it happen. This forced the organizations to work with what they had, making the most of their limited opportunities while hoping for eventual greater contributions.

  • A focus on improvement: All the API programs we interviewed expressed that they wanted to be doing more with their programs. They want to formalize their approach and increase their resource and labor investment in the areas in which they have seen success. Ultimately, their target scope was to focus not on just meeting expectations, but on moving towards excellence and mastery of what it takes to scale their API efforts.

Every API effort clearly has its own personality, and while there are common patterns, the environment, team, and amount of resources available appears to dictate much of the scope of developer outreach programs. However, the scope of any effort will always start small and move forward incrementally as confidence grows.

What our thoughts are

We learned a lot talking to API providers about the purpose and scope of their API operations. Our interviews also reinforced much of what we know about the API operations of leading providers like Amazon and Google. When it comes to the motivations for developing APIs, ensuring an appropriate level of investment and planning for future scalable growth are necessary steps in giving an API the best chance to succeed.

Augmenting what we learned from providers, we recommend focusing on the following areas in order to achieve API purposes, grow and scale API operations, and integrate API technology into the fabric of an organization’s overall operations.

  • APIs are a journey: Always view API operations as a journey and not a destination, setting expectations appropriately from the beginning. Do not raise the bar too high when it comes to achieving goals, reaching metrics, and getting the job done. API operations will always be an ongoing effort, with both wins and losses along the way.

  • Always start small: Reiterating what we researched in the API sector as well as what we confirmed in our interviews, it is important to build a small, stable base before moving forward. This does not mean you cannot develop rapidly, but before you do, make sure you’ve researched the API and planned for its success, understanding what will be needed throughout the lifecycle of all APIs you intend to deliver.

  • Center on the end user: It is always important that every goal and purpose you set for your API platform center on its end-users. While the platform and developer ecosystem is always a consideration, the end-user is the central focus of the “why” and the “how” for API technologies, especially considering the scope of operating an API in both the private and the public sector.

  • A dedicated team: Even if it is a small group, always work to dedicate a team to API operations. While APIs will ultimately be an organization-wide effort, it will take a dedicated team to truly lead the API to success. Without centralized, dedicated leadership, APIs will never attain true relevance within an organization, leaving them to be a side project that rarely delivers as expected.

  • Everyone pitches in: While a dedicated team is a requirement, an API’s development should always invite individuals from across an organization to join in the process. Ideally, API operations are a group effort led by a central team. It is important to encourage individual API teams to feel a sense of ownership over their API, just as it is important to encourage business users to participate in development conversations.

  • Striving for excellence: API operations will always be forced to deal with a lack of resources. That is simply a fact of dealing in APIs. However, each API program should be seeking excellence, working to understand what is needed to move APIs forward in a healthy way. Improving upon what has already been done, refining processes that are in motion, and making sure all APIs are iteratively improved continually benefits a platform’s end users.

  • Continued investment: Always be regularly investing in the API platform. Continued input of both human and financial resources helps to ensure that a platform is not starved along the way. Otherwise the API in question will constantly fall short and threaten the viability of the platform in the long term.

While the purpose of an API may depend on the mission of its developing organization, in the end, APIs always exist to serve end-users while protecting the interest of a platform. The scope of any such API will depend on the commitment of an organization, and as such, there is no “right answer” to the question of determining the purpose of any single API platform. The only true constraint is the assurance that the API remains in alignment with an organization’s mission as it grows and scales.

How success is defined

With a variety of possible purposes, scopes, and approaches, an API’s success can be defined in a myriad of ways. Depending on the motivations behind the API, and the investment made, the measure of success will depend on what matters to the organization. However, there are some common patterns to defining success, which we extracted from both the interviews we conducted and the research we performed as part of this outreach study.

What we learned

Along with what we learned about the purpose and scope of API platforms, we discovered more about the different ways in which API providers are defining success. We have collected the highlights among these metrics below.

  • Building awareness: API success revolves around building awareness that an API exists, as well as awareness of the value of the API resources that are being made available. Awareness is not simply a consumer side consideration, though; providers, too, must possess an awareness of the value of their resources in relation to both other API developers and consumers alike.

  • Attracting new users: Bringing attention to an API and attracting new users is one of the most common ways of measuring the success of API operations. While new users won’t always immediately become active users, their interest and involvement will bring attention to and awareness of what the API platform can deliver. Attracting new users is one of the easiest and most accessible ways to measure the success of any API, according to our interviews, but importantly, none of the platform providers we spoke to recommended that an organization should stop there.

  • Incentivizing active users: While attracting new users is easy, producing active users is much harder. The easier it is to onboard with an API and make the first series of API calls, the greater the likelihood that API consumers will integrate the platform into their own resources and work, which is a critical metric for any API provider.

  • Applications: Applications and development are the cornerstones that incentivize API providers to invest in APIs, and across the board, our interviewees cited application integration and involvement as a prime candidate for determining an API’s success. This could be quantified both in terms of new applications relying on the API platform, as well as active application processes that integrate with the API’s platform. In either case, measuring usage was considered an excellent means to justify the existence and growth of an API platform.

  • Entities: Getting the attention of companies, organizations, institutions, and other government agencies is an important part of the API journey. In particular, developing an awareness of and encouraging the usage and adoption of APIs among groups already leveraging the technology is an important metric by which success can be determined.

  • End users: Of course, API providers articulated the importance of serving end-users. Besides serving an organization’s mission, the true purpose of an API is to satisfy an end-user, and so measuring success based upon how much value is created and delivered to these users and customers, and even the public at large, can directly verify that an API is living up to its billing.

  • Stakeholders: Further discussions with API providers implied that success is also defined in terms of involvement with other stakeholders. Ensuring that the definition of success was crafted in an inclusive way allows everyone involved and impacted by the project to input their voice. This widens the target audience to make sure success is a large umbrella that covers as many individuals within an organization as possible.

  • New resources: An additional area that was used to define success was the number of new API resources added to a platform. If an organization is currently in the development phase and deploying APIs into production, it is likely that the platform already has a handle on what it takes to successfully deliver APIs throughout their lifecycle. This makes new APIs a great way to understand the velocity of any platform, and how well it is ultimately doing.

Measuring the success of an API platform is a much more ambiguous goal than determining scope, purpose, and investment. Our interviews revealed that success often means different things to different providers. Moreover, an organization’s understanding of success is also something that will evolve over time. We learned a lot from API providers about pragmatic views on what API success looks like, and now we would like to translate that into some basic guidance on how to help ensure the success of providing APIs.

What our thoughts are

Defining, measuring, and quantifying the success of API operations is not easy. As discussed above, measuring success functionally amounts to hitting a moving target. It is important to start with the basics, be realistic, and define success in a meaningful way. Adding to what has been gathered from interviews with API providers, we recommend a consideration of the following factors when it comes to defining just exactly what success is.

  • Know your resources: Understand the resources you are making available via an API. Ensure that they are well defined and made accessible in ways that consider security, privacy, and the interests of all stakeholders. Do not just open up APIs for the sake of delivering APIs — make sure the resources are well defined, designed, and serve a purpose.

  • Manage your APIs: API management is essential to measuring success. It is extremely difficult to define success without requiring all developers to authenticate, log, quantify, and analyze their consumption. Measuring these kinds of consumption activities helps to quantify the value produced by the API and its related platform, and an understanding of this value serves as the foundation for any API’s future success.

  • Have a plan: There is no success without a plan. A set of plans is required at the management level in order to quantify the addition of new accounts, define whether they are active or not, and understand how applications are putting resources to work. Providing a plan for how resources are made available, and how they are consumed, generates a framework to think about and measure what API success means.

  • Measure portal activity: Treat your API developer portal as you would any other web property and actively measure its traffic. Apply data-analytic solutions to track sessions, time, and visitors, and use this information to contribute to a sales and marketing funnel that can be used to understand how developers are using portal resources. Importantly, this kind of analysis can also discover points of friction that developers may be encountering when trying to use your API platform.

  • Analyze and report: Produce weekly, monthly, quarterly, and annual reports from data gathered across the API stack, API portal, and from social media. Developing an understanding of what is happening based upon actual data, and consistently reporting upon findings with all stakeholders in API operations, ensures both transparency of API knowledge and information access to formulate plans for growth.

  • Discuss and evangelize: Have a strategy for taking any analysis or reporting from API operations and disseminating it amongst stakeholders. With these resources distributed, consider conducting regular on- and off-line discussions around what they mean. Work with everyone involved to understand the activity across a platform, and use these discussions to transform the understanding of success as platform awareness grows.

  • Make things observable: Make every building block of API operations observable. Ensure that everything has well defined inputs and outputs, and consider how these can be used to better understand whether the platform is working or not. Allowing every single aspect of the platform to contribute to an overall definition of what success means, by providing real-time and historic data around how resources are being used, can surface important insights about an API and how it might be improved.

The success of an API platform will mean different things to different groups and will evolve over time as awareness around an organization’s resources grows. Know your resources, properly manage your APIs, and have a plan, but make sure you are constantly reassessing exactly what success means while having ongoing conversations with stakeholders. With more experience, you will find that API platform success becomes much more nuanced, but importantly, it also becomes easier to define once you know what it is that you want.

Key practices for driving and sustaining adoption of APIs

After a decade of leading tech companies operating API programs, and a little over five years of government agencies following their lead, a number of common practices emerged that helped drive the adoption of APIs and support relationships between provider and consumer. We spent some time talking to API providers about their approaches, while also bringing our existing research and experience to the table, and have collected our responses and analysis below.

What we learned

This is one area where we believe that our existing research outweighed what we learned in talking to API providers, but the conversations did reinforce what we know while also illuminating some new ways to look at operational components. Here are the key practices our interviewees provided for driving and sustaining the adoption of APIs.

  • Documentation: Documentation is the single most important element that needs to accompany an API that is being made available. This transforms the process of learning about what an API can do from static to interactive (such as by using OpenAPI specifications) and renders the API a hands-on experience.

  • Code: Providing samples, SDKs, starter solutions, and other code elements is vital to making sure developers understand, through demonstration, how to integrate with APIs in a variety of programming languages.

  • Content: Content is critical. Invest in blog posts, samples, tutorials, case studies, and anything else you think will assist your consumers in their journey. We heard over and over how important a regular stream of content is for attracting new developers, keeping active ones engaged, and putting API resources to work.

  • Forums: Provide a forum where developers can find existing answers to their questions while also being able to ask new questions. Offering a safe, up to date, well moderated place to engage in asynchronous conversations around an API platform ensures that dialogue is always happening, which means that use and progress are in continued development.

  • Conferences: Conducting workshops and attending relevant conferences where potential API consumers will be is an important practice in furthering the outreach of an API platform. Engage with your community — both consumers and developers — instead of just pushing content to them online.

  • Proactive: Make sure you are proactive in operating your API platform by constantly marketing your work to developers (remember, continually attracting new attention is vital). At the same time, work to provide existing developers with what they will need based upon common practices across the API sector. Investing in developers by giving them resources they will need before they have to ask for it makes an API’s community feel alive and healthy.

  • Reactive: While proactivity is important, an API team must also be able to react to any questions, feedback, and concerns submitted by API consumers and other stakeholders. Ensuring people do not have to wait very long for an answer to their question makes consumers, developers, and stakeholders alike feel like they are considered a relevant and important part of the API community.

  • Feedback loops: Having feedback loops in place is essential to driving and sustaining the adoption of APIs. Without one or more channels for consumers to provide feedback, as well as responsive and attentive API providers who analyze how the feedback can fit into the overall API plan, API operations will never quite rise to the occasion.

  • Management: Almost all API providers we talked to articulated that having a proper strategy for API management, as well as an investment in services and tooling, was essential to onboarding new consumers. Additionally, this kind of investment facilitates an understanding of how to engage with and incentivize the usage of API resources by existing users. Without the ability to authenticate, define access tiers, quantify application usage, log all activity, and report upon usage and activity across dimensions, it is extremely difficult to scale platform adoption.

  • Webinars: For an enterprise audience, webinars were a viable way to educate new users about what an API platform offers, as well as helping to bring existing API consumers up to speed on new features. Not all communities are well-suited for webinar attendance, but for those that are, it is a valuable tool in the API toolbox.

  • Tutorials: Providing detailed tutorials on how to use an API, understand business logic, and take better advantage of platform resources were all common elements of existing API provider operations. Breaking down the features of the platform and providing simple walkthroughs that help consumers put those features to work can streamline the integration and onboarding process that users face when working with APIs.

  • Domain: Our interviewees mentioned that having a dedicated domain or subdomain for an API developer portal significantly helped in attracting new users by providing a known location for existing developers to find the resources they are looking for.

  • Explorer: In some cases, it is important to provide a more hands-on, visual way to explore resources available within an API rather than simply listing or describing such features in documentation. For new and particularly inexperienced users of API technologies, resources that “connect the dots” between what the API supports and how to actually put it to use can be immeasurably important for user retention.

We learned that many API providers in the public sector are actively learning from API providers in the private sector. They employ many of the same elements used by leading API providers who have been doing it for a while now. However, we also found evidence of innovation by some of the public sector API providers we interviewed, especially in the realm of onboarding and retaining new users.

What our thoughts are

Below, we have constructed a list of common building blocks that should be considered when developing, operating, and evolving any API platform. These recommendations are the result of formalizing what we learned as part of the interview process, as well as leveraging eight years’ worth of research. Our objective is to give API providers the elements they need to attract and engage with new users, while also pushing existing users to be more active. We have broken down our recommendations into eleven separate areas, which are further discussed below.

Portal

It is important to provide a single known location where API providers and consumers can work together to integrate the offered resources into a variety of web, mobile, device, and network applications, as well as directly into other systems. Several components play into the successful adoption and consumption of APIs published to a single portal.

  • Overview: A simple overview explaining what a platform does and clearly defining the value the API offers to consumers.

  • Getting started: A simple series of steps that help onboard a new user so that they can begin putting API resources to work.

  • Documentation: Interactive documentation for all APIs and schema (preferably created in OpenAPI or another interactive API specification format).

  • Errors: A simple, clear list of all the possible errors an API consumer will encounter, starting with HTTP status codes and then proceeding to any specialized schema used to articulate when API errors occur (a sketch of such an error body appears at the end of this section).

  • Explorer: A visual representation of the resources available within the API that allows consumers to search, browse, and explore all available resources without needing to know or write code. Note: it is always helpful to provide a direct link to replicate a search using the API.

These elements set the foundation for any API operations, providing the basic elements that will be needed to onboard with an API. They establish an interface for the other features that will be needed to incentivize deep and sustaining integrations with any platform.
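
To make the Errors building block concrete, the following is a minimal sketch of one way to shape a consistent, machine-readable error body that accompanies an HTTP status code. The field names ("status", "code", "message", "details") are illustrative assumptions, not a standard.

```python
# A minimal sketch of a consistent API error body; all field names here
# are illustrative assumptions rather than any formal standard.
from typing import Optional


def api_error(status: int, code: str, message: str,
              details: Optional[dict] = None) -> dict:
    """Build a machine-readable error body to accompany an HTTP status code."""
    return {
        "status": status,          # the HTTP status code being returned
        "code": code,              # a stable, platform-specific error identifier
        "message": message,        # a human-readable explanation for developers
        "details": details or {},  # optional field-level context
    }


# Example: a body that might accompany an HTTP 429 response.
print(api_error(429, "rate_limit_exceeded",
                "Hourly quota reached; retry after the window resets.",
                {"retry_after_seconds": 3600}))
```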

Definitions

Besides the basic functionality described above, the API industry has turned toward a suite of machine-readable definitions to drive API integrations. Given the ubiquity of a number of these definitions, we have collected a handful of specification formats that we recommend making part of the base of any API operations.

  • OpenAPI: A machine-readable specification for describing the surface area of an API, commonly used to drive interactive documentation.

  • Postman: A standard collection that provides a transactional, runtime-oriented definition of the feature interface of an API for use in client tooling.

  • JSON Schema: A widely used specification that describes the objects, parameters, and other structural elements involved in consuming API resources (a validation sketch follows this list).

  • APIs.json: A discovery document that provides a machine-readable index of API operations with references to the portal, documentation, OpenAPI, Postman, and other building blocks of an API platform.
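
As a small illustration of how these definitions get put to work, the sketch below validates a hypothetical payload against a JSON Schema using the third-party jsonschema package; the schema and payload are invented for the example.

```python
# A minimal sketch of consumer-side JSON Schema validation, using the
# third-party jsonschema package (pip install jsonschema). The schema
# and payload are hypothetical examples.
from jsonschema import ValidationError, validate

schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
    "required": ["id", "email"],
}

payload = {"id": 42, "email": "dev@example.com"}

try:
    validate(instance=payload, schema=schema)  # raises on structural mismatch
    print("payload matches the published schema")
except ValidationError as err:
    print(f"schema violation: {err.message}")
```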

Code

It is common practice for API providers to invest in a variety of code-focused resources to help jumpstart the onboarding process for API developers. This reduces the number of technical steps necessary for developers to successfully integrate an API into their own applications. Here are the building blocks we recommend considering when crafting the code portion of any developer outreach strategy.

  • Github/Gitlab: Use a social coding platform to manage many of the technical components used to support API developers.

  • Samples: Publish simple examples of making individual API calls in a variety of programming languages (a sample of this kind appears at the end of this section).

  • SDKs: Provide more comprehensive software development kits in a variety of programming languages for developers to use when integrating with API resources.

  • PDKs: Provide platform development kits that help developers integrate with existing solutions they may already be using as part of their operations.

  • MDKs: Provide mobile development kits that help jumpstart the development of mobile applications that take advantage of a platform’s APIs.

  • Starters: Publish complete starter applications that developers can use to jumpstart their API integrations.

  • Embeddables: Provide buttons, badges, widgets, and bookmarklets for any API consumer to use when quickly integrating with API resources.

  • Spreadsheets: Offer spreadsheet connectors that allow API consumers to use platform APIs within Microsoft Excel and Google Sheets.

  • Integrations: Invest in a suite of existing integrations with other platforms that API consumers can take advantage of, providing low-code or no-code solutions for integrating with APIs.

While we have presented a variety of code-related resources, we want to point out the caveat that these tools should only be employed if an organization possesses the resources to properly maintain and support them. These elements can provide some extremely valuable coding solutions for developers and consumers to put to work, but if not properly done, they can also quickly become a liability, driving developers away.
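
To ground the Samples building block mentioned above, here is a rough sketch of the kind of per-language example worth publishing, written with the requests package; the base URL, endpoint, parameters, and header name are hypothetical placeholders rather than any real platform’s API.

```python
# A minimal sketch of a published API call sample; every URL, parameter,
# and header name below is a hypothetical placeholder.
import requests

API_KEY = "your-api-key-here"            # issued during onboarding
BASE_URL = "https://api.example.gov/v1"  # placeholder base URL

response = requests.get(
    f"{BASE_URL}/facilities",
    params={"state": "CA", "per_page": 10},
    headers={"apikey": API_KEY},
    timeout=10,
)
response.raise_for_status()  # surface 4xx/5xx errors immediately

for facility in response.json().get("data", []):
    print(facility)
```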

Event-driven

In addition to simpler request-and-response delivery and documentation methods, we also recommend thinking about the following event-driven possibilities, which can also be used to incentivize deeper engagement and workflow with an API.

  • Webhooks: These can provide ping and data push opportunities, which allow API consumers to be notified when any event occurs across an API platform (a receiver sketch appears at the end of this section).

  • Streams: Providing high-level or individual streams of data allows for long-running HTTP or TCP connections with API resources.

  • Event types: In many cases, it is helpful to publish a list of the types of possible API events, as well as opportunities for subscribing to webhook or streaming channels.

  • Topics: Similarly, developers and consumers alike may find a published list of platform-related topics useful, particularly one that allows API consumers to search, browse, and discover exactly the topical channels they are interested in.

These event-based tools help augment existing APIs and make them more usable by API consumers at scale. They facilitate a meaningful application experience for end-users, allowing them to stay tuned to specific topics. This in turn fine-tunes the experience for developers, which further drives adoption, ultimately establishing more loyal consumers at the API integration and application user levels.
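
To illustrate the consumer side of the Webhooks building block, here is a minimal receiver sketch built with Flask. It assumes a hypothetical platform that signs each delivery with an HMAC-SHA256 "X-Signature" header; the header name, secret, and event shape are all assumptions for illustration.

```python
# A minimal webhook receiver sketch (pip install flask). The signature
# header, shared secret, and event fields are hypothetical assumptions.
import hashlib
import hmac

from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = b"webhook-secret-from-the-portal"  # placeholder secret


@app.route("/webhooks", methods=["POST"])
def receive_event():
    # Verify the payload was signed by the platform before trusting it.
    expected = hmac.new(SHARED_SECRET, request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get("X-Signature", "")):
        abort(401)

    event = request.get_json(silent=True) or {}
    print(f"received event type: {event.get('event_type')}")
    return "", 204  # acknowledge quickly; do heavy work asynchronously


if __name__ == "__main__":
    app.run(port=8080)
```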

Management

One of the cornerstones for defining, quantifying, and delivering successful API onboarding and engagement is API management. The following list contains some core elements of API management that should be considered as any API provider is planning, executing, and evolving their operational strategy.

  • Authentication: Providing clear options for onboarding using Basic Auth, API keys, JWT, or OAuth keeps things standardized, well-explained, and frictionless to implement (a sketch tying keys, plans, and usage together appears at the end of this section).

  • Plans/tiers: Establishing well-defined tiers of API consumers in terms of how they access all available API resources can inform the provision of structured plans that define how an API’s resources are being utilized.

  • Applications: Individual applications should be at the center of consumer API engagement. In particular, applications that help onboard new users so that they can begin consuming API resources are imperative.

  • Usage reporting: Tools and metrics that provide real-time and historical data, as well as access to multi-dimensional reporting across all API consumers, are useful in analyzing the API’s usage and performance. This information can be helpful to developers in defining the stage of their API journey, as well as any additional resources they might wish to consider.

There are many other aspects of API management, but these building blocks reflect the consumer-facing elements that help onboard new users and drive increased engagement with existing consumers. API management is an area in which API providers should not be reinventing proven methods that already work: the best practices established over the last decade by leading API providers already account for strong engagement and retention levels for users and service providers.
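
As a back-of-the-napkin illustration of how authentication, plans, and usage reporting fit together, consider the sketch below. In practice this logic belongs in proven API management tooling rather than hand-rolled code; every key, tier name, and quota here is invented.

```python
# A minimal sketch of key-based access tiers and per-key usage counting;
# all keys, tiers, and quotas are invented for illustration.
from collections import Counter

PLANS = {"free": 1_000, "partner": 100_000}       # calls per month, illustrative
KEYS = {"key-abc": "free", "key-xyz": "partner"}  # API key -> plan tier
usage = Counter()                                 # API key -> calls this month


def authorize(api_key: str) -> bool:
    """Check that the key exists and its plan's quota is not exhausted."""
    plan = KEYS.get(api_key)
    if plan is None:
        return False              # unknown key
    if usage[api_key] >= PLANS[plan]:
        return False              # quota exhausted for this tier
    usage[api_key] += 1           # log the call for usage reporting
    return True


print(authorize("key-abc"))  # True, and the call is now counted
print(usage["key-abc"])      # 1 -> feeds usage reporting dashboards
```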

Communications

Consumers need to engage not only with the tools of an API, but also with the communications and news surrounding it. Streams of information across multiple channels can help unite a communications strategy for any API platform. We have collected the best examples of such information feeds below.

  • Blog: An active blog with an Atom feed, one for each individual API and/or overall platform.

  • Twitter: A dedicated Twitter account for the entire API platform, providing updates and support.

  • GitHub: A GitHub organization dedicated to the platform, with accounts for each API team member. The organization leverages the platform for content as well as code management.

  • Reddit: A helpful forum for answering questions, sharing content, and engaging with consumers on the social bookmarking platform.

  • Hacker News: Another helpful discussion board for answering questions, sharing content, and engaging with consumers.

  • LinkedIn: A page on the business-focused social network devoted to engaging with consumers. An established LinkedIn page for the platform can be useful for regularly publishing content, as well as engaging in conversations via the platform.

  • Facebook: Similar to LinkedIn, a Facebook page for the platform is helpful in engaging with API consumers via their social media presence. It can be used to regularly publish content and engage in conversations across the network.

  • Press: A platform section detailing the latest releases, as well as a feed that users can subscribe to in order to receive a regular stream of news on the platform.

A coherent communication, content, and social media strategy will be the number one driver of new users to the platform while also keeping existing developers engaged. These communication building blocks provide a regular stream of information and signals that API consumers can use to stay informed and engaged while putting API resources to use in their applications.

Direct support

Besides communication, direct support channels are essential to completing the feedback loop for the platform. There are a handful of common channels API providers use to provide direct support to API consumers, allowing them to receive support from platform operations. We recommend the following selections.

  • Email: Establish a single, shared email address for the platform through which the entire platform support team can provide assistance.

  • Twitter: Provide support via Twitter, pushing it beyond just a communication channel and making it so that API consumers can directly interact with the platform team.

  • GitHub: Do not just use GitHub for code or content management: leverage individual team member accounts to actively provide support using GitHub issues, wikis, and the other channels the social coding platform provides.

  • Office hours: Provide regular office hours where API consumers can join a hangout or other virtual group chat application in order to have their questions answered.

  • Webinars: Provide regular webinars around platform-specific topics, allowing API consumers to engage with team members via a virtual platform that can be recorded and used for indirect, more asynchronous support in addition to live feedback.

  • Paid: Provide an avenue for API consumers to pay for premium support and receive prioritization when it comes to having their questions answered.

With a small team, it can be difficult to scale direct support channels properly. It makes sense to activate and support only the channels you know you can handle until there are more resources available to expand to new areas. Ensuring that all direct support channels are reactive in terms of communication will help deliver value by bringing the feedback loop back full circle.

Indirect support

After direct support options, there should always be indirect/self-support options available to help answer API consumers’ questions. Such options allow users to get support on their own while still leveraging the community effect that exists on a platform. There are a handful of proven indirect support channels that work well for public as well as private API programs.

  • Forums: Provide a self-hosted, SaaS, or wider community forum for API consumers to have their questions answered by the platform or by other users within the ecosystem.

  • FAQ: Publish a list of common questions broken down by category, allowing API consumers to quickly find the most common questions that get asked. Regularly update the FAQ listing based on questions gathered using the platform feedback loop(s).

  • Stack Overflow: Leverage the question and answer site Stack Overflow to respond to inquiries and allow API consumers to publish their questions, as well as answers to questions posed by other members of the community network.

Indirect, self-service support will be essential to scaling API operations and allowing the API team to do more with less. Outsourcing, automating, and standardizing how support is offered through the platform can make API support available 24 hours per day, turning it into an always-available option for motivated developers to find the answers they are looking for.

Resources

Beyond the documentation and communication, it is helpful to provide other resources to assist API consumers in onboarding and to strengthen their understanding of what is offered via the platform. There are a handful of common resources API providers make available to their consumers, helping to bring attention to the platform and drive usage and adoption of APIs.

  • Guides: Providing step-by-step guides covering the entire platform, or individual sections of the platform, helps consumers understand how to use the API to solve common challenges they face.

  • Case studies: Real-world case studies of how companies, organizations, institutions, and government agencies have put APIs to work in their web, mobile, device, and network applications can help demonstrate the variety of functions that an API platform can perform.

  • Videos: Make video content available on YouTube and other platforms. Providing video walkthroughs of how the APIs work and the best way to integrate features into existing applications can demystify the process of onboarding with API technologies.

  • Webinars: While webinars can be a helpful source of information for API consumers trying to understand specific concepts, maintaining and publishing an archive of webinars also creates a searchable historical catalog, providing targeted Q&A on how to put the API platform to work.

  • Presentations: Provide access to all the presentations that have been used in talks about the platform, allowing consumers to search, browse, and learn from presentations that have been given at past conferences, meetups, and other gatherings.

  • Training: It can be immensely helpful to invest in formal curriculum and training materials to help educate API consumers about what the platform does. This provides a structured approach to learning more about the APIs and gives developers comprehensive training materials they can tap into on their own schedule.

Like other areas of this recommendation, these resource areas should only be invested in if an organization has the resources available to develop, deliver, and maintain them over time. Providing additional resources like guides, case studies, presentations, and other materials helps further extend the reach of the API platform, allowing the API team behind operations to do more with less, as well as reach more consumers with well-constructed, self-service resources that are easy to discover.

Observability

One important attribute of API platforms that successfully balance attracting new users and creating long term relationships is platform observability. Being able to understand the overall health, availability, and reliability of an API platform allows API consumers to stay informed regarding the services they are incorporating into their applications. There are several key areas that contribute to the observability of an API platform.

  • Roadmap: A simple list of what is being planned for the future of a platform, one that provides as much detail and ranges as far into the future as possible.

  • Issues: A list of any known open issues, allowing API consumers to quickly understand whether anything might impact their applications.

  • Status: A dashboard that describes the health of the overall platform, as well as the status of each individual API being made available via the platform (a sketch of such a status document appears at the end of this section).

  • Change log: A simple list of what has changed on a platform, taking the roadmap and issues that have been satisfied and rolling it into a historical registry that consumers can use to understand what has occurred.

  • Security: Share information about platform security practices and the strategies used to secure platform resources, and share the expectations held of developers when it comes to application security practices.

  • Breaches: Be proactive and communicative around any breaches that occur, providing immediate notification of the breach and a common place to find information regarding current and historic breaches on the platform.

Observability helps build trust with API consumers. In order to develop this trust, platform providers have to invest in APIs in ways that make consumers feel like the platform is stable and reliable. The less transparent the elements of the platform are, the less likely API consumers are to expand and increase their usage of services.
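
To illustrate the Status building block referenced above, the sketch below rolls hypothetical per-API health checks into a single machine-readable status document of the kind a dashboard could render; the API names, states, and uptime figures are invented.

```python
# A minimal sketch of a machine-readable platform status document;
# the API names and health figures are invented for illustration.
import json
import time


def platform_status() -> str:
    """Roll per-API health checks into one observable status document."""
    checks = {
        "facilities-api": {"state": "operational", "uptime_90d": 99.98},
        "benefits-api": {"state": "degraded", "uptime_90d": 99.41},
    }
    return json.dumps({"checked_at": int(time.time()), "apis": checks}, indent=2)


print(platform_status())
```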

Real-world presence

The final set of recommendations centers on maintaining a real-world presence for the platform. It is important to ensure that the platform does not just have a wide online presence, but is also engaging with API consumers in a face-to-face capacity. There are a handful of ways that leading API providers get their platforms face-time with their community.

  • Meetups: Speaking at and attending meetup events in relevant markets.

  • Hackathons: Throwing, participating in, and attending hackathon events.

  • Conferences: Speaking, exhibiting, and attending conferences in relevant areas.

  • Workshops: Conducting workshops within the enterprise, with partners, and the public.

These four areas help extend and strengthen the relationship between the API platform provider and consumers.

How API developer sandboxes are used to drive adoption

One of the more interesting and forward-thinking aspects of this research is around the delivery of sandboxes, labs, and virtualized environments. Providing a non-production area of a platform where developers can play with API resources, in a much safer environment than live production, can encourage creativity and innovation as well as exploration of the API’s resources.

What we learned

Some of the API providers we interviewed for this research had sandbox environments. Their insights into the merits of these environments provided us with some ideas to reduce friction for new developers when onboarding, as well as to help certify applications as they mature with their integrations. Here is what we learned about sandbox environments from the API providers we talked to.

  • Sandboxes are used: Sandbox environments are used, and they do provide value to the API integration process, making them something all API providers should be considering.

  • Sandboxes are not default: Sandboxes are not a default feature of all APIs but have become more critical when PII (personally identifiable information) and other sensitive resources are available.

  • Data is virtualized: It was enlightening to see how many of the API providers we talked to provided virtualized data sets and complete database downloads to help drive the sandbox experience.

  • Doing sandboxes well is difficult: We learned that providing sandbox environments that reflect the production environment is, quite simply, hard. It takes significant investment and support to make it something that will work for developers.

  • Safe onboarding: Sandbox environments allow for the safe onboarding of developers and their applications. This helps ensure that only high-quality applications enter into a production environment, which protects the interests of the platform as well as the security and privacy of end-users.

  • Integrated with training: We learned how sandbox environments should also be integrated with other content and training materials. This lets API consumers work through the training materials they need while directly learning about the API.

  • Leverage API management: It was interesting to learn the role that API management plays in being able to deliver both sandbox and production environments. API gateways and management solutions are used to help mock and deliver virtualized solutions, but also to manage the transition of applications from development to a production environment.

Talking to API providers, it was clear that sandboxes provided value to their operations. It was also clear that they aren’t essential for every API implementation and take a considerable investment to do right. API virtualization is a valuable tool when it comes to engaging with API consumers, and it is something most API providers should consider, but it should be approached pragmatically and realistically, with an awareness of both the costs and the benefits of moving applications from sandbox to production environments.

What our thoughts are

Sandboxes, labs, and virtualized environments are commonplace across the API sector but are not as ubiquitous as they should be. We recommend virtualized building blocks for any API that is dealing with sensitive data. A sandbox should be a default aspect of all API platforms, but should be especially applied to help ensure quality control as part of the API journey from development to production. Here are some of the building blocks we recommend when looking at structuring a sandbox environment for a platform.

  • Virtualize APIs: Provide virtualized instances of APIs and a sandbox catalog of API resources.

  • Virtualize data: Provide virtualized and synthesized data along with APIs in order to create as realistic an experience as possible (a small data synthesis sketch appears at the end of this section).

  • Virtualized instances: Consider the availability of virtualized instances of an API as compute instances on major cloud platforms, allowing for the deployment of sandboxes anywhere.

  • Virtualized containers: Consider the availability of virtualized instances of an API as containers, allowing API sandboxes to be created locally, in the cloud, and anywhere containers run.

  • Bake into onboarding: Make a sandbox environment a default requirement in the API onboarding process, providing a place for all developers to learn about what a platform offers and pushing applications to prove their value, security, and privacy before entering a production environment.

  • Applications: Center the sandbox experience on the application by certifying it meets all the platform requirements before allowing it to move forward. All developers and their applications get access to the sandbox, but only some applications will be given production access.

  • Certification: Provide certification for both developers and applications. Establishing a minimum bar for what is expected of applications before they can move into production helps developers understand what it takes to move an application from sandbox to production, which ensures a high-quality application experience at scale.

  • Showcase: Always provide a showcase for certified applications as well as certified developers. Allow for the browsing and searching of applications and developers, while also highlighting them in communications and other platform resources.

When it comes to API resources that contain sensitive data, virtualized APIs, data, and environments are essential. They should be a default part of ensuring that developers push only high-quality applications to production by forcing them to prove themselves in a sandbox environment first.
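
As a small illustration of the “virtualize data” building block referenced above, the sketch below synthesizes fake records shaped like a hypothetical production schema, with no PII involved; every field name and value pool is invented.

```python
# A minimal sketch of synthesizing sandbox data; the schema and value
# pools are invented, standing in for a real production data model.
import random
import uuid

NAMES = ["Alex", "Jordan", "Sam", "Taylor"]
STATES = ["CA", "NY", "TX", "VA"]


def synthetic_record() -> dict:
    """Produce one fake record shaped like a production record."""
    return {
        "id": str(uuid.uuid4()),
        "name": random.choice(NAMES),
        "state": random.choice(STATES),
        "visits": random.randint(0, 25),
    }


# Generate a batch of fake records to load into the sandbox.
sandbox_data = [synthetic_record() for _ in range(100)]
print(sandbox_data[0])
```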

Types of metrics for measuring adoption and making decisions

This area of our research overlaps somewhat with the earlier section on measuring success, but here, we provide a more precise look at what can be measured to help quantify success while also ensuring that findings are used as part of the decision-making process throughout the API journey.

What we learned

Our interviews reminded us that not all API providers have a fully fleshed-out strategy for measuring activity across their platforms. However, we did come away with some interesting lessons from those providers that were using metrics to drive API decision making.

  • Look at it as a funnel: Treat API outreach and engagement as a sales and marketing funnel. Attract as many new users as you can, but then profile the target demographic to understand who they are and what their objectives are. From there, devote efforts to incentivizing users to “move down the funnel” through the sandbox environment and eventually to production status. In short, treat API operations like a business and platform users like they are customers.

  • Do not have formal metrics: It was illuminating to also learn that some providers felt that having an overly formal metrics strategy might constrain developer outreach and engagement. They cautioned against measuring too much, or only looking at data when making critical decisions, preferring to keep outreach efforts focused on constructive interactions with API consumers about their needs.

  • API keys registered: For data accessibility purposes, it is worthwhile to ensure that all developers have an application and API key before they can access any API resources. Requiring all internal, partner, and external developers to pass along their API key with each API call allows all platform activity to be measured, tracked, and used as part of the wider platform decision-making process.

  • Small percentage of users: We also heard that it is common for a small percentage of the overall platform users to generate the majority of API calls to the platform. This makes it important to measure activity on the platform in terms of which users are the most active (and thereby driving the majority of value on the platform).

  • Amount of investment: Importantly, the usage rates of a platform’s resources can provide a strong justification for investing more resources into the platform’s success, making tracking that data of paramount importance. This transforms investment into a data-driven decision that responds to the actual needs of the platform.

The interview portion of our research provided a valuable look at how API providers are measuring activity across their platforms. Data and metrics are not only being used to define success, but are also used as part of the ongoing decision making process around the direction API providers take their platform, particularly when it comes to measuring adoption of API resources across a platform.

What our thoughts are

When it comes to measuring adoption and understanding how API consumers are putting resources to work, we recommend starting small. Activity tracking is something that will change and evolve as the organization develops a better understanding of platform resources and the interests of internal stakeholders, partners, and 3rd party consumers. These are just a handful of areas we recommend collecting data on to begin with.

  • Traffic: Measure and understand traffic (network and otherwise) across the platform developer portal.

  • New accounts: Track and profile all new accounts signing up for API access.

  • New applications: Track and profile all new applications registered for access.

  • Active applications: Measure and track the usage of the active platform applications.

  • Number of API calls: Understand how many API calls are being made, broken down across different dimensions.

  • Conversation: Measure the conversations happening across the platform and use them to develop awareness of the community.

  • Support: Measure all support activity in order to pinpoint the needs of API consumers.

  • Personas: Quantify the different types of consumers who are putting a platform to use.

Begin here when it comes to tracking: develop an awareness of the community and get to know what is important, then expand from there. Add and remove the metrics that make sense for your organization, without tracking metrics for no reason (i.e., all tracking should add value to the API platform or its decision-making process).
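
To show how the funnel framing and these starter metrics can come together, the sketch below computes stage counts and a simple conversion rate from a hypothetical event log; the event names mirror the signup, application registration, and API call stages discussed above.

```python
# A minimal sketch of funnel-style adoption metrics computed from a
# hypothetical event log of (api_key, event) pairs.
from collections import defaultdict

events = [
    ("key-1", "signed_up"), ("key-1", "registered_app"), ("key-1", "api_call"),
    ("key-2", "signed_up"), ("key-2", "registered_app"),
    ("key-3", "signed_up"),
]

stages = ["signed_up", "registered_app", "api_call"]
reached = defaultdict(set)
for key, event in events:
    reached[event].add(key)

# Report how many consumers reach each stage of the funnel.
for stage in stages:
    print(f"{stage}: {len(reached[stage])}")

# Conversion from signup to first API call.
rate = len(reached["api_call"]) / len(reached["signed_up"])
print(f"signup-to-call conversion: {rate:.0%}")
```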

Structuring and staffing outreach programs

Each platform will have its own mandate for how API programs should be staffed, but there are some common patterns that exist across the space.

What we learned

The API providers we talked to had a lot to share about what it takes to staff their operations, including their structures (or in a few cases lack thereof!).

  • Dedicated evangelist: Make sure there is a dedicated evangelist to help talk up and support the platform.

  • Include marketing: Include the marketing team in conversations around amplifying the platform and its presence.

  • Provide support: Invest in the resources to support all aspects of the platform effectively, not just those that are consumer-facing or particularly visible.

  • Conduct regular calls: Conduct regular calls with internal, partner, and external stakeholders in order to bring everyone together to discuss the progress of the platform.

  • Use a CRM for automation: Put a CRM to use when it comes to tracking and automating outreach efforts for the platform. Do not reinvent the wheel; leverage an existing service to track all of the details that can be automated.

  • Include other teams: Do not just invest in an isolated API team; make sure other teams across the organization are included in the API conversation.

  • Develop an in-person presence: Make sure to obtain human resources that can be sent to meetups, conferences, and other in-person events so that the organization can expand and strengthen the presence of the platform.

  • Speak to leadership: Regularly invest time to report platform results and progress to leadership, making sure they understand the overall health and activity of the platform.

We learned that a dedicated evangelist is essential, as are significant investments in marketing and support. It was good to hear how important in-person events were, as well as support from stakeholders and leadership, when it came to overall outreach efforts. Overall, the conversations we had reflect what we have been seeing across the landscape with other public and private sector providers.

What our thoughts are

Echoing what we heard from our conversations with API providers, we have some advice when it comes to structuring and allocating resources to API efforts. This is a tough area to generalize because different platforms will have different needs, but there are well established traits of successful API providers, with the following characteristics making the top of the list.

  • Dedicated program: Establish a formal program for overseeing API operations across an organization to make sure it gets the attention it needs.

  • Dedicated team: Allocate a dedicated team to the API program and platform to represent its needs and push for its continued advancement.

  • Business involvement: Make sure to involve business groups in the conversation, bringing them in as stakeholders to give their thoughts on the organization’s API program.

  • Marketing involvement: Make sure marketing teams play an influential role in amplifying the message coming from a platform. This is especially important to ensure that a platform does not just speak to developers, but to consumers and users as well.

  • Sales involvement: Ensure that a platform has the sales resources to actually close deals when necessary. This ensures that a platform has continuing end-user participation and the resources it needs to function. Remember, though, that sales is not always about selling commercial solutions.

  • External users: Make a place for external actors to take part in advancing the API conversation. External users can often rise to highly influential levels within a community, and this influence can be put to work assisting with outreach efforts (but only if the culture of a platform allows for it).

  • Contractors: Consider how vendors and contractors will be contributing to the overall outreach effort by creating a means for vendors to assist with logistics and communication channels, allowing the core team to focus on moving the platform forward while contractors tackle the details.

It will take dedicated resources to move the API conversation forward at a large organization. However, it will also take the involvement of other stakeholders to get the business and political capital to legitimately spark the platform initiative. How a program gets structured, and the amount of resources dedicated to evangelism, communication, support, and other critical areas will set the tone for the platform and determine the overall health and viability of the API.

Anti-patterns

One unorthodox part of our research involved asking API providers about the things they felt would contribute to the failure of their API platform. Below, we have collected some thoughts about the “anti-patterns” that might generate friction on the platform, chase developers away, or make things harder for everyone involved.

What we learned

The API providers we talked to had plenty of ideas for how to run developers off and make integration with platform resources harder. We’ve provided a list of things API providers can avoid when operating their platforms and when reaching out to developers. (To be clear, each of the things presented in the list below is a bad practice — the bolded phrase is a “mistake” an API provider could make, and the text that follows further explains the nature of the poor practice.)

  • Do not take a build it and they will come approach: Building API operations with tremendous resource investment before consumers ask for certain functions or resources is a poor way to achieve stable platform growth.

  • Do not reach out to developers: Not being proactive about contacting developers makes it hard to build working relationships that clarify development needs.

  • Hand built documentation: Providing hand-crafted, bloated, or out-of-date documentation is not only difficult to maintain, but it also reduces the value of the API to developers.

  • Breaking changes: Changing the platform too often or too fast may introduce too many breaking changes for developers to deal with, reducing the platform’s effectiveness.

  • Do not communicate: Not communicating platform operation updates leaves developers in the dark about what is happening with the very tool they rely on.

  • Do not listen to your developers: Not listening to developers or including their feedback in the roadmap is sure to eventually run developers off.

  • Do not measure happiness: Measuring everything except the happiness of developers (if you can even operationalize such a metric) means never really understanding whether they are truly happy with how the platform is being run.

  • Gated content: Putting documentation, content, and other resources behind a login or paywall, or else restricting what developers have access to, is a great way to get developers to ignore your platform.

  • Do not provide a developer edition: Avoid providing developers with a set of APIs to play with simply for the sake of experimentation. Without a safe way to prove out the value of the platform before committing to real, production-strength applications, users will not be building their projects on the platform any time soon.

  • Do not reach out to partners: Focusing on general developers without identifying and reaching out to potential partners can rob a platform of trusted allies, who might have otherwise facilitated operations, integrations, and even broader application development.

It was clear that the API providers we interviewed were aware of the pitfalls that could jeopardize an API’s success. They provided us with a comprehensive look at what an organization should avoid when it comes to operations and investment.

What our thoughts are

Adding to what the API providers mentioned during their interviews, we have some additional recommendations based on some common deficiencies we have seen. These are the areas all API providers should be investing in to strengthen their operations and to avoid the anti-patterns highlighted so far.

  • Documentation: Make documentation a priority by keeping it simple, updated, and interactive for developers.

  • Entry level: Ensure there is entry-level access to a platform, allowing developers to onboard without commitments or procedural friction.

  • Communication: Develop a robust communication strategy for ensuring a platform has a regularly updated stream of new content about what is happening.

  • Support: Ensure that a platform is supported, and that API consumers’ questions are being answered in a timely manner.

  • Feedback loops: Make sure that feedback loops exist and are strengthened by communication, support, and actually listening to developers’ needs.

  • Evangelism: Invest in evangelism amongst leadership, partners, 3rd party developers, and all stakeholders to make sure the platform is always being spoken for.

  • Observability: Insist on observability for all components involved in operating a platform to maximize measurements and comprehension of the system.

  • Partners: Seek out partners. They are critical to taking the platform to the next level. Cultivate developers and applications, and produce strong partners who can help take things into the future.

We see anti-patterns as deficient areas of platform operations that can largely be avoided with proper time and resource investments.

Future plans

We concluded our provider interviews with a few questions on what they believe the future holds. Their visions for their growth and development can serve as strong examples and recommendations for what other platforms can include in their own plans.

What we learned

It was interesting to learn about the desires our interviewees possessed when it came to future investment in their platforms. We have generated this short list of areas that API providers can think about when it comes to building out their roadmap.

  • Go with what works: Make sure to continue investing in and scaling the parts of the platform that are earning attention for their success.

  • Product excellence: Emphasize product excellence over simply responding to new developments.

  • Marketing and evangelism: It never hurts to augment marketing efforts to broadcast information about a platform.

  • Attend conferences: Increase the platform’s presence at relevant conferences.

  • Throw hackathons: Invest in throwing hackathons to encourage developers to do more.

  • Conduct conferences: Invest in and produce conferences that support the API community.

  • Building community: Do what it takes to keep building and strengthening the community.

  • Publish more to GitHub: Publish more code, content, and other resources to GitHub or other version control platforms.

  • Make content more discoverable: Work to make resources more discoverable so that developers can find what they need.

  • Publish to code.gov: Publish code to code.gov, making code open source and available as part of the wider government effort to make resources available.

The common theme we heard from API providers was that allocation for future investment should be spent reinforcing the building blocks already in place that made the API platform successful in the first place. We learned a lot about the motivations and visions of the API providers we interviewed, which helped shape our recommendations for what could be next.

What our thoughts are

With so many possibilities for the path that an API platform’s development can take, it is easy to get caught up in the next step without reflecting on the fact that what is next usually involves reinforcing what is happening now. Our recommendations involve focusing on what matters, and not just what is new, with the following areas of investment:

  • Continue investing in what works: To once again echo our interviewees, continue to invest in building what is already working. Further develop the API platform and the conversation around its vision by looking to what already exists rather than looking for new ways to get things done.

  • Excellence and mastery: Invest in perfecting the process, refining how things get done, and operating the platform in a way that benefits everyone involved. Doing it well, and doing it right, will pay off in the long run.

  • Involve other teams: Expand involvement with the API to include other teams across an organization. This helps distribute the workload of operating the API platform and allows all subgroups within an organization to have a voice in the future of the API.

Conclusion

To produce this research, we spoke with leading API providers from across the public and private sector while also leveraging eight years of analysis gathered from directly studying the API sector. This research should provide a wealth of options to consider when it comes to helping the VA be more effective in reaching out to developers. However, this is not meant to be a complete list of building blocks to consider, but rather, is meant to present a selection of proven elements that can be implemented with the right teams to execute. While things like API documentation are critical, there is no single element of this research that will make or break API outreach efforts; when the right elements are combined and executed properly, the results can be extremely positive.

This research reflects the experience of platform providers like Salesforce, who have been iterating upon their API platforms and engaging with consumers since 2001. It also follows the lead of next generation providers like Twilio, who have been making developers happy for almost a decade while also managing to take their company public. It looks at how CMS, Census, and FEC are approaching the outreach around their API programs, bringing in the unique perspective of federal agencies who are finding success with their APIs. Finally, the insights gathered across these API providers are organized, augmented, and rounded off with insight gathered by the API Evangelist as part of research on the business of APIs, research that has been ongoing since July of 2010.

We want to end this look at developer outreach by emphasizing once again that this is a journey. While there are many common patterns in play across the API sector, no two API platforms are the same. The types of developers attracted to a platform, as well as the ones that remain engaged, will vary depending on the organization, the types of resources being made available, and the tone set by the team(s) operating the platform. It is up to the VA to decide what type of platform they will operate and what kind of tone they will set when reaching out to developers. There will not be any single right answer to how the VA should be reaching out to new developers, but with the right amount of planning, communication, support, and observability, the VA will undoubtedly find its footing.

Appendix A: interview questions

For each interview, we asked the following questions:

  1. What role do you play within the organization?

  2. What is the purpose and scope of your API developer outreach program?

  3. What does success look like for you?

  4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

  5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

  6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

  7. How is your outreach program structured and staffed? How did it start out and evolve over time?

  8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

  9. What big ideas do you have for evolving your outreach program to make it even more effective?

  10. Any other thoughts you’d like to share? If so, please feel free!

Appendix B: CMS Blue Button API interview notes

1. What role do you play within the organization?

Product Manager.

2. What is the purpose and scope of your API developer outreach program?

CMS is offering several APIs. One approach is to put it out there and hope they come. Second approach is leveraging APIs on the Socrata platform and hoping there’s adoption. Third approach is partnership agreements with various private partners.

Purpose of the Blue Button outreach program is to drive adoption of the API.

3. What does success look like for you?

Worked with various stakeholders to define the metrics in terms of value. For example, number of unique organizations who are experimenting with the API. Originally set a target of 500 organizations. Up to 450 currently. Use it as a proxy measurement of value. Not sure how beneficiaries are benefiting yet, but adoption is proxy indicator.

Also measuring number of beneficiaries who have linked their data to Blue Button API — for example, Google Verily.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Documentation is the first key element. Originally hosted documentation on GitHub pages, but now using bluebutton.cms.gov.

Had challenges over the year making clear what Blue Button is, value, etc.

Relaunched during recent HIMSS conference. Resulted in a number of sign-ups from press.

Set up a blog to provide a getting started guide, how sign-up works, etc. Working pretty well.

Assigned a designated Developer Evangelist, who pushes content, participates in forums, and hits the niche conference circuit.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

Sandbox has been a big part. Have a synthetic dataset. In healthcare, this is essential from a privacy standpoint.

Have had challenge with synthetic data in terms of making them realistic.

Have a streamlined process for accessing and onboarding into environment. Experience is not currently great, and working on improving.

Developer portal is only for sandbox environment. No portal for production, so disjointed at this point.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Look at the adoption as a funnel. That has helped to drive a culture shift around how folks think about metrics and the role they play.

Top of funnel is evangelism/traffic/etc., portal sign-ups, registered an app, and how many of those who have registered made API calls.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

Have a developer evangelist.

Periodically, team gets on phone with developer evangelist and talks about ideas for building awareness and driving adoption.

Using email + CRM tools for outreach. Trying to improve marketing automation.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Don’t take a “build it and they will come” approach.

Not acting quickly enough on insights gained from the funnel process, including reaching out to individuals to help them through the process (i.e., proactive outreach).

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Double down on what’s working to incrementally grow — conferences, webinars, etc.

Serving as a blueprint for other organizations, particularly at a state & local level.

10. Any other thoughts you’d like to share? If so, please feel free!

N/A

Appendix C: Census interview notes

1. What role do you play within the organization?

Developer engagement lead, serving as the comms arm for Census’ API efforts. First line of defense for any developer engagement. Participate in hackathons, etc.

Also working on CitySDK. This was in response to the observation that developers were having trouble working with Census APIs. SDK currently not being maintained because we lost the lead developer and funding for it. Personally learning how to develop in order to maintain it myself.

2. What is the purpose and scope of your API developer outreach program?

Save developers time and help them understand the nuances of Census data and how to use the data. Engage in communications and help inform the development of API products.

3. What does success look like for you?

Happy developers.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Proactive engagement: work on driving the government’s engagement in National Day of Civic Hacking in collaboration with NASA and Code for America. Hackathons are great for user research and testing new features.

Reactive: have a Slack and Gitter channel.

Haven’t tried blog posts yet. Thinking about interviewing developers who are using APIs and turning their stories into blog posts. Haven’t been able to get legal approval yet for that.

Being able to figure out what data Census has and how that data can be made available to developers is key.

Webinars are good, especially those focused on showing how to use Census data and build something simple using the API.

Use a data search tool — American FactFinder; developers can use this to find out what variables they are interested in and then trace that data back to an API they can access programmatically.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

No, haven’t done anything like that yet. Have a discovery tool that is part of the API.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Initially focused on number of API keys registered. Starting to focus on metrics of use. A small number of users drive 80% of the API use. Developers tend to move data into their environment for performance purposes (e.g., caching). Plus, they worry about the government shutting down and data not being accessible.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

One person focused on outreach.

Entire team dedicated to building the APIs and making the data available and accessible. Each product gets its own endpoint; there are about 30 now and planning to do 70 more.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Create terrible, hand-rolled documentation with a lot of pictures; don’t give developers a channel to communicate; don’t listen to them; don’t give them a way to test out data before granting a key; and don’t have metrics for measuring developer happiness and just focus on usage metrics.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Focus more on product excellence and marketing. A lot of developers don’t like being pushed to. Most developers go to hackathons for social interaction and career progression, not because they love working with your data and using your APIs.

10. Any other thoughts you’d like to share? If so, please feel free!

Instead of trying to focus too much on attracting random developers, focus on the developers who are engaged now and reach out to them. Would like to introduce a CRM system to help manage these relationships better.

Learned it’s important to grab as much data as possible during the registration process in order to gain better insight into developer intent, behavior, and characteristics. Found it’s very hard to get information after developers have gained access; they don’t typically respond to email requests for information.

Hackathons, etc. are huge investment of time and resources.

Biggest lesson learned is to always focus on the principle of how to make it as easy for the developers as possible.

Make heavy use of GraphQL because it’s introspective and helps developers understand the API in greater detail. Like GraphQL because it can evolve without breaking things.

Appendix D: OpenFEC interview notes

1. What role do you play within the organization?

Tech Lead at FEC, mostly focused on OpenFEC. Involved in FEC campaign finance data for ten years.

2. What is the purpose and scope of your API developer outreach program?

Don’t have an overarching formal approach in place yet, but do provide support to developers through a variety of channels when they reach out.

3. What does success look like for you?

Number one measure of success is providing developers accurate data, and making sure that is available at all times. Also making sure that developers can find the data that they need.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Don’t have a formal developer outreach program. Have a small team of developers, designers, content designers, and product managers.

Would like to be more proactive about engaging.

Do have some mechanisms for getting feedback. For example, the project is open source, which encourages developers to add issues, contribute, etc. Developers can send emails to [email protected]. Existing team tries to get back as quickly as possible. Would like to have a ticket management system to help facilitate workflow around support. Do have a google group in which community of developers can ask and answer questions.

Do have a developer page. And work hard to keep those up-to-date.

Use GSA’s API umbrella to manage users, which handles rate limiting and caching.

Have a try-it-out feature on the developer page powered by Swagger, where developers can just put params in to see how the API works.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

Don’t have a sandbox, but do provide instructions for setting up locally. And also provide a sample database that can be copied. Since all data is public, don’t have to worry about PII. Amazon provides a free sandbox environment to anyone with a .gov domain that can be used to set up a test environment.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Don’t have formal metrics. Using the GSA API Umbrella, there is some data available, but not currently using that data to drive decisions.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

The existing team of developers, etc. are providing outreach support.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Make breaking changes and don’t tell anyone; don’t respond to or fix data quality issues, which would undermine the credibility of the API; don’t serve the data in an efficient manner.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Have discussed holding a hackathon or conference, similar to what Blue Button is doing.

Holding a quarterly developer conference call to start answering questions and discussing ideas.

Regardless of the activity, the goal is to start building a community.

Pushing our code to code.gov to increase awareness.

10. Any other thoughts you’d like to share? If so, please feel free!

There are commercial restrictions around the use of open FEC data.

Appendix E: Salesforce interview notes

1. What role do you play within the organization?

Vice President of Developer Relations. My team doesn’t create APIs but we drive awareness and adoption through producing:

  • Technical content

  • Demos & sessions at events

  • Tools like SDKs and interactive docs

  • Marcom (social, email, etc)

  • Community Building

2. What is the purpose and scope of your API developer outreach program?

Unlike companies that solely monetize services, our primary purpose is to enable developers to create custom integrations with Salesforce.

As an advanced enterprise software platform, it’s critical that Salesforce work well with all the other enterprise apps you might have. In addition, many of our customers are building custom web or mobile apps that need access to customer data.

Because we are a platform, our purpose is not limited to just “API outreach” but is inclusive both of our APIs and also our app development platform.

Some of the primary use cases for our APIs are to extract data for archival purposes, surface customer data in third-party or bespoke apps, or to enable business systems to work together in other ways. We also have a family of machine learning APIs known as Einstein Vision and Einstein Language that can be used by any application to create predictions from unstructured images and text.

3. What does success look like for you?

Ultimately, we know that our customers are demanding more skilled Salesforce Developers every day to support their custom implementations so our primary metrics are monthly signups and active monthly usage.

Our goal is to keep the growth of our developer population in line with our overall customer growth to ensure there are enough developers in the ecosystem to service our customers’ needs.

We also measure job postings for Salesforce Developers, traffic to our website, engagement with our events like Dreamforce and TrailheaDX, and usage of our online learning portal trailhead.salesforce.com.

4. What are the key elements or practices (e.g, documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Blogging, webinars, documentation (including an API Explorer), training classes, sample code, and first-party events (we do regional workshops called DevTrails, large regional events called Salesforce World Tours, and big global events called Dreamforce and TrailheaDX).

In addition, we have built a substantial community program with over 200 developer groups around the world and an MVP program to recognize top contributors in the community. We’re in the process of expanding our open source footprint (sample code, apps, and SDKs).

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

We have a free environment called the Salesforce Developer Edition that anyone can sign up for. There is no expiration date so you can use it as long as you’d like, and you’re allowed to sign up for as many as you’d like for the times that you need a “clean sandbox.”

We’ve also tightly integrated these developer editions into the Trailhead learning platform, so that as a developer is going through an online course (called a module) or a tutorial (called a project) they can actually complete hands-on challenges in their developer environment and get validation that they have done it correctly.

To protect our business interests, these developer environments have limits in terms of API usage, storage, user licenses, etc. With these limits, we do our best to find a balance between empowering the developers and protecting the business. And we have a process where developers can request increased limits.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

We use our CRM platform to measure all the ways we touch our developers and how that impacts success.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

When we launched the program, we started small. 1-2 evangelists, a program manager for community/events, and a web developer. It grew from there.

We don’t share details about our current team size publicly. However, the key components of our program are as follows with approximate percentages:

  • Marketers (12%)

  • Evangelists (35%)

  • Community Program Managers (12%)

  • Event Producers/Program Managers (13%)

  • Marketing Operations: Email, Development, Creative, etc (25%)

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Gate all the content.

Don’t provide a developer edition.

Don’t share use-cases or sample code.

Don’t find any partners.

Don’t engage with your users or help them be successful.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

We have a ton of content but it’s not all discoverable.

Working on making our pages more data-driven and dynamic so we can expose all our resources.

We’re also moving a lot of our content to GitHub as well as creating a more organized process to store our code samples.

Many of those code samples are currently orphaned in blog posts and other places.

10. Any other thoughts you’d like to share? If so, please feel free!

N/A


Having The Dedication To Lead An API Effort Forward Within A Large Enterprise Organization

I work with a lot of folks in large enterprise organizations, institutions, and government agencies who are moving the API conversation forward within their groups. I’m all too familiar with what it takes to move the API conversation forward within large, well established enterprise organizations. However, I am the first to admit that while I have a deep understanding of what it involves, I do not have the fortitude to lead an effort for the sustained amount of time it takes to actually make change. I just do not have the patience or the personality for it, and I’m eternally grateful for those that do.

There are regular streams of emails in my inbox from people embedded within enterprise organizations, looking for guidance, counseling, and assistance in moving forward the API conversation at their organizations. I am happy to provide assistance in an advisory capacity, and to consult with groups to help them develop their strategies. A significant portion of my income comes from conducting 1-3 day workshops within the enterprise, helping teams work through what they need to do. There is one thing I cannot contribute to any of these teams: the dedication and perseverance it will take to actually make it happen.

It takes a huge amount of organizational knowledge to move things forward at a large organization. You have to know who the decision makers are, and who the gatekeepers are for all of the important resources–this is knowledge you have to acquire by being embedded, and working within an organization for a very long time. You can’t just walk in the door and make sense of things within days, or weeks. You have to be able to work around schedules and personalities–getting to know people, truly beginning to understand their motivations, their willingness to contribute, and whether they’ll actually decide to work against you. The culture of any enterprise organization will be the most important area of concern for you as you craft and evolve your API strategy.

I often wish I had the fortitude to work in a sustained capacity within a large organization. I’ve tried. It just doesn’t fit my view of the world. However, I am super thankful for those of you who do. I’m super happy to help you in your journey, and happy to help you think through what you are experiencing as part of my storytelling here on the blog–just email me your questions, thoughts, and concerns. I’m happy to anonymize as I work through my responses; about 60% of the stories you read here are the anonymized result of emails I receive from y’all. I’m happy to vent for you, and use you as my muse. I’m also happy to help out in a more dedicated capacity, and provide my consulting assistance to your organization–it is what I do, and how I pay the bills. Let me know how I can help.


Understanding The Event-Driven API Infrastructure Opportunity That Exists Across The API Landscape

I am at the Kong Summit in San Francisco all day tomorrow. I’m going to be speaking about my research into the event-driven architectural layers I’ve been mapping out across the API space, looking for the opportunity to augment existing APIs with push technology like webhooks, and streaming technology like Server-Sent Events (SSE), as well as to pipe data in and out of Kafka, fill data lakes, and train machine learning models. I’ll be sharing what I’m finding from some of the more mature API providers when it comes to their investment in event-driven infrastructure, focusing on Twilio, SendGrid, Stripe, Slack, and GitHub.

As I profile APIs for inclusion in my API Stack research, and in the API Gallery, I create an APIs.json, OpenAPI, Postman Collection(s), and sometimes an AsyncAPI definition for each API. All of my API catalogs and API discovery collections use APIs.json + OpenAPI by default. One of the things I profile in each APIs.json is the usage of webhooks as part of API operations. You can see collections of them that I’ve published to the API Gallery, aggregating many different approaches in what I consider to be the 101 of event-driven architecture, built on top of existing request and response HTTP API infrastructure. This allows me to better understand how people are doing webhooks, and to begin sketching out plans for a more event-driven approach to delivering resources, and managing activity on any platform that is scaling.
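
To make that profiling a little more concrete, here is a rough sketch of how a single API entry might flag webhook support in an APIs.json-style catalog; the property type names are hypothetical illustrations of the approach, not part of the APIs.json specification:

```python
# A sketch of an APIs.json-style profile for one API; the "x-webhooks"
# property type is a hypothetical extension, not part of the spec.
api_profile = {
    "name": "Example Payments API",
    "baseURL": "https://api.example.com",
    "properties": [
        {"type": "x-openapi", "url": "https://api.example.com/openapi.yaml"},
        {"type": "x-postman", "url": "https://api.example.com/postman.json"},
        {"type": "x-webhooks", "url": "https://api.example.com/docs/webhooks"},
    ],
}

def supports_webhooks(api: dict) -> bool:
    """Check a profile for a documented webhooks property."""
    return any(prop["type"] == "x-webhooks" for prop in api["properties"])

# Filtering a whole catalog down to event-capable APIs becomes trivial.
catalog = [api_profile]
print([api["name"] for api in catalog if supports_webhooks(api)])
```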

While studying APIs at this level you begin to see patterns across how providers are doing what they are doing, even amidst a lack of standards for things like webhooks. API providers emulate each other; it is how much of the API space has evolved in the last decade. You see patterns like how leading API providers are defining their event types: naming, describing, and allowing API consumers to subscribe to a variety of events, and receive webhook pings or pushes of data, as well as other types of notifications. This helps establish a vocabulary for defining the most meaningful events that are occurring across an API platform, and then provides an event-driven framework for pushing data out when something occurs, as well as sustained API connections in the form of server-sent events (SSE), HTTP long polling, and other long running HTTP connections.
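
To show the 101 level pattern in code, here is a minimal sketch of a webhook receiver that dispatches on an event type vocabulary; the header name and event names are assumptions for illustration, not any specific provider’s contract:

```python
from flask import Flask, request

app = Flask(__name__)

# A hypothetical event vocabulary, modeled on how leading providers
# name their webhook event types (Stripe's "invoice.created", or
# GitHub's "issues" event with an "opened" action, for example).
KNOWN_EVENTS = {"record.created", "record.updated", "record.deleted"}

@app.route("/webhooks", methods=["POST"])
def receive_webhook():
    # Providers commonly identify the event type in a header or payload
    # field; the header name used here is an assumption, not a standard.
    event_type = request.headers.get("X-Event-Type", "")
    payload = request.get_json(silent=True) or {}

    if event_type not in KNOWN_EVENTS:
        return {"status": "ignored"}, 202

    # Hand the event off to whatever part of the platform needs it.
    print(f"received {event_type}: {payload}")
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(port=8080)
```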

As I said, webhooks are the 101 of event-driven technology, and as API providers evolve in their journey you begin to see investment in the 201 level solutions like SSE, WebSub, and more formal approaches to delivering resources as real time streams and publish / subscribe solutions. Then you see platforms begin to mature and evolve into the 301 and beyond courses, with AMQP, Kafka, and oftentimes other Apache projects. Sure, some API providers begin their journey here, but many API providers have to ease into the world of event-driven architecture, getting their feet wet with managing their request and response API infrastructure, and slowly evolving with webhooks. Then as API operations harden, mature, and become more easily managed, API providers can confidently begin evolving into more sophisticated approaches to delivering data where it needs to be, when it is needed.
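
For a taste of the 201 level, here is a bare-bones sketch of consuming a server-sent events stream over a long running HTTP connection; the stream URL is hypothetical, and purpose-built SSE clients wrap this same line-oriented pattern:

```python
import requests

# Hypothetical SSE endpoint; an SSE stream is just a long running HTTP
# response made up of newline-delimited "field: value" frames.
STREAM_URL = "https://stream.example.com/events"

with requests.get(STREAM_URL, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Data lines carry the payload; a blank line ends an event.
        if line and line.startswith("data:"):
            print("event payload:", line[len("data:"):].strip())
```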

From what I’ve gathered, the more mature API providers, who are further along in their API journey have invested in some key areas, which has allowed them to continue investing in some other key ways:

  • Defined Resources - These API providers have their APIs well defined, with master planned designs for their suite of services, possessing machine readable definitions like OpenAPI, Postman Collections, and AsyncAPI.
  • Request / Response - Having fine-tuned their approach to delivering their HTTP based request and response structure, with that infrastructure being well defined.
  • Known Event Types - Having a handle on what is changing, and what the most important events are for API providers, as well as API consumers.
  • Push Technology - Having begun investing in webhooks, and other push technology to make sure their API infrastructure is a two-way street, and they can easily push data out based upon any event.
  • Query Language - Understanding the value of investment in a coherent querying strategy for their infrastructure that can work seamlessly with the defining, triggering, and overall management of event driven infrastructure.
  • Stream Technology - Having a solid understanding of what data changes most frequently, as well as the topics people are most interested in, and augmenting push technology with streaming subscriptions that consumers can tap into.

At this point in most API providers’ journeys, they are successfully operating a full suite of event-driven solutions that can be tapped internally, and externally with partners and other 3rd party developers. They are probably already investing in Kafka and other Apache projects, and getting more sophisticated with their event-driven API orchestration. Request and response API infrastructure is well documented with OpenAPI, and groups are looking at event-driven specifications like AsyncAPI to continue to ensure all resources, messages, events, topics, and other moving parts are well defined.

I’ll be showcasing the event-driven approaches of Twilio, SendGrid, Stripe, Slack, and GitHub at the Kong Summit tomorrow. I’ll also be looking at streaming approaches by Twitter, Slack, Salesforce, and Xignite. That is just the tip of the event-driven API architecture opportunity I’m seeing across the existing API landscape. After mapping out several hundred API providers, and over 30K API paths using OpenAPI, and then augmenting and extending what is possible using AsyncAPI, you begin to see the event-driven opportunity that already exists out there. When you look at how API pioneers are investing in their event-driven approaches, it is easy to get a glimpse of what all API providers will be doing in 3-5 years, once they are further along in their API journey, and have continued to mature their approach to moving their valuable bits and bytes around using the web.


Talking Healthcare APIs With The CMS Blue Button API Team At #APIStrat In Nashville Next Week

We have the API evangelist from one of the most significant APIs out there today at #APIStrat in Nashville next week. Mark Scrimshire (@ekivemark), Blue Button Innovator and Developer Evangelist from NewWave Telecoms and Technologies, will be on the main stage next Tuesday, September 25th, 2018. Mark will be bringing his experience helping stand up the Blue Button API with the Centers for Medicare and Medicaid Services (CMS), and sharing stories from the trenches while delivering this critical piece of health API infrastructure within the United States.

I consider the Blue Button API to be one of the most significant APIs out there right now for several key factors:

  • API Reach - An API that has potential to reach 44 million Medicare beneficiaries, which is 15 percent of the U.S. population–that is a pretty significant audience to reach when it comes to the overall API conversation.
  • Fast Healthcare Interoperability Resources (FHIR) - The Blue Button API supports HL7 / FHIR, pushing the specification forward in the overall healthcare API interoperability discussion, making it extremely relevant to APIStrat and the OpenAPI Initiative (OAI).
  • Government API Blueprint - The way in which the Blue Button API team at CMS and USDS is delivering the API provides a potential blueprint that other federal and state level agencies can follow when rolling out their own Medicare related APIs, as well as any other critical infrastructure that this country depends on.

This is why I am always happy to support the Blue Button API team in any way I can, and I am very stoked to have them at APIStrat in Nashville next week. I’ve spent a lot of time working with, and studying, the Blue Button API team, and I spoke at their developer conference hosted at the White House last month. They have some serious wisdom to share when it comes to delivering public APIs at this scale, making the keynote with Mark something you will not want to miss.

You can check out the schedule for APIStrat next week on the website. There are also still tickets available if you want to join in the conversations going on there Monday, Tuesday, and Wednesday next week. APIStrat is operated by the OpenAPI Initiative (OAI), making it the place where you will be having high level API conversations like this one. When it comes to APIs, and industry changing API specifications like FHIR, APIStrat is the place to be. I’ll see you all in Nashville next week, and I look forward to talking APIs with all y’all in the halls, and around town for APIStrat 2018.


Sadly Stack Exchange Blocks API Calls Being Made From Any Of Amazon's IP Block

I am developing an authentication and access layer for the API Gallery that I am building for Streamdata.io, while also federating it for usage as part of my API Stack research. In addition to building out these catalogs for API discovery purposes, I’m also developing a suite of tools that allow users to subscribe to different topics from popular sources like GitHub, Reddit, and Stack Overflow (Exchange). I’ve been busy adding one or two providers to my OAuth broker each week, until the other day I hit a snag with the Stack Exchange API.

I thought my Stack Exchange API OAuth flow had been working; it has been up for a few months, and I seem to remember authenticating against it before. But this weekend I began getting an error that my IP address was blocked. I was looking at log files trying to understand if I was making too many calls, or committing some other potential violation, but I couldn’t find anything. Eventually I emailed Stack Exchange to see what their guidance was, to which I got a prompt reply:

“Yes, we block all of Amazon’s AWS IP addresses due to the large amount of abuse that comes from their services. Unfortunately we cannot unblock those addresses at this time.”

Ok then. I guess that is that. I really don’t feel like setting up another server with another provider just so I can run an OAuth server from there. Or maybe I will have to, if I expect to offer a service that provides OAuth integration with Stack Exchange. It is a pretty unfortunate situation that doesn’t make a whole lot of sense. I can understand adding another layer of white listing for developers, pushing them to add their IP address to their Stack Exchange API application, and pushing us to justify that our app should have access, but blacklisting an entire cloud provider from accessing your API is just dumb.

I am going to weigh my options, and explore what it will take to set up another server elsewhere. Maybe I will start setting up individual subdomains for each OAuth provider I add to the stack, so I can decouple them, and host them on another platform, in another region. This is one of those road blocks you encounter doing APIs that just doesn’t make a whole lot of sense, and yet you still have to find a workaround–you can’t just give in, despite API providers being so heavy handed, and not considering the impact of their moves on their consumers. I’m guessing that, in the end, the Stack Exchange API doesn’t fit into their wider business model, which is something that allows blind spots like this to form, and continue.


Justifying My Existence In Your API Sales And Marketing Funnel

I feel like I’m regularly having to advocate for my existence, and the existence of developers who are like me, within the sales and marketing funnel for many APIs. I sign up for a lot of APIs, and have the pleasure of enjoying a wide variety of on-boarding processes. With many APIs I have no problem signing up, on-boarding, and beginning to make calls, while with others I have to justify my existence within their API sales and marketing funnel. Don’t get me wrong, I’m not saying that I shouldn’t be expected to justify my existence, it is just that many API providers are setup to immediately discourage, create friction for, and dismiss my class of API integrator–the kind that doesn’t fit neatly into the shiny big money integration you have defined at the bottom of your funnel.

I get that we all need to make money. I have to. I’m actually in the business of helping you make money. I’m just saying that you are missing out on a significant amount of opportunity if you only focus on what comes out the other side of your funnel, and discount the nutrients developers like me can bring to your funnel ecosystem. I’m guessing that my little domain apievangelist.com doesn’t return the deal size you are looking for, but I think you are putting too much trust in the numbers provided by your business intelligence provider. I get that you are hyper focused on making the big deals, but you might be leaving a big deal on the table by shutting out small fish, who might have oversized influence within their organization, government agency, or industry. Your business intelligence is focusing on the knowns, and doesn’t seem very open to considering the unknowns.

As the API Evangelist I have an audience. I’ve been in business since 2010, so I’ve built up an audience of enterprise folks who read what I write, and listen to “some” of what I say. I know people like me within the federal government, within city government, and across the enterprise. Over half the people I know who work within the enterprise, helping influence API decisions, are also kicking the tires of APIs at night. Developers like us do not always have a straightforward project; we are just learning, understanding, and connecting the dots. We don’t always have a ready to go deal in the pipeline, and are usually doing some homework so that we can go sell the concept to decision makers. Make sure your funnel doesn’t keep us out, run us away, or remove channels for our voice to be heard.

In a world where we focus only on the big deals, and on scaling and automating the operation of platforms, we run the risk of losing ourselves. If you are only interested in landing those big customers, and achieving the exit you desire, I understand. I am not your target audience. I will move on. It also means that I won’t be telling any stories about what you are doing, building any prototypes, or generally amplifying what you are doing on social media, and across the media landscape–providing many of the nutrients you will need to land some of the deals you are looking to get, generating the internal and external buzz needed to influence the decision makers, and providing real world use cases of why your API-driven solution is the one an enterprise group should be investing in. Make sure you aren’t locking us out of your platform, and that you are investing the energy into getting to know your API consumers, more about what their intentions are, and how they might fit into your larger API strategy–if you have one.


I Am Needing Some Evidence Of How APIs Can Make An Impact In Government

Eric Horesnyi (@EricHoresnyi), the CEO of Streamdata.io, and I were on a call with a group of people who are moving forward the API conversation across Europe, with the assistance of the EU. The project has asked us to assist in the discovery of more data and evidence of how APIs are making an impact on how government operates within the European Union, but also elsewhere in the world–aggregating as much evidence as possible to help influence the EU API strategy, and learn from what is already being done. I’m heading to Italy next month to present to the group, and participate in conversations with other API practitioners and evangelists, so I wanted to start my usual amount of storytelling here on the blog to solicit contributions from my audience about what they are seeing.

I am looking for some help from my readers who work at city, county, state, and federal agencies, or at the private entities who help them with their API efforts. I am looking for official, validated, on the record examples of APIs making a positive impact on how government serves its constituents. Quantifiable examples of how a government agency has published a private, partner, or public API, and how it helped the agency better meet its mission. I’m looking for the mundane, as well as the unique and interesting, with tangible evidence to back it all up–like the number of developers, partners, apps, cost savings, efficiencies, or any other positive effect. Demonstrating that APIs, when done right, can move the conversation forward at a government agency. For this round, I’m going to need first hand accounts, because I will need to help organize the data, and work with this group to submit it to the European Union as part of their wider effort.

This is something I’ve been doing loosely since 2012, but I need to get more official about how I gather the stories, and pull together actual evidence, going beyond just my commentary from the outside in. I’ll be reaching out to all my people in government, asking for examples. If you know of anything, please email me at [email protected] with your thoughts. We have an opportunity to influence the regulatory stance in Europe when it comes to government putting APIs to work, which will be something that washes back upon the shores of the United States with each wave of API regulations to come out of the EU. My casual storytelling about how government APIs are making change has worked for the last five years, but moving forward we are going to need to get better at gathering, documenting, and sharing examples of how APIs are working across government–helping establish more concrete blueprints for how to do all of this properly, and ensuring that we aren’t reinventing the wheel when it comes to APIs in government.

If you know someone working on APIs at any level of government, feel free to share a link to my story, or send an introduction via [email protected]. I’d love to help share the story, and evidence of the impact they are making with APIs. I appreciate all your support in making this happen–it is something I’ll put back out to the community once we’ve assembled it and talked through it in Italy next month.


Being Open Is More About Being Open So Someone Can Extract Value Than Open Being About Any Shared Value

One of the most important lessons I’ve learned in the last eight years is that when people are insistent about things being open, in both accessibility and cost, it is often more about things remaining open for them to freely (license-free) extract value from, than it is ever about any shared or reciprocal value being generated. I’ve fought many a battle on the front lines of “open”, leaving me pretty skeptical when anyone is advocating for open, and forcing me to be even more critical of my own positions as the API Evangelist, and the bullshit I peddle.

In my opinion, ANYONE wielding the term open should be scrutinized for insights into their motivations–me included. I’ve spent eight years operating on the front line of both the open data and open API movements, and unless you are coming at it from the position of a government entity, or from a social justice frame of mind, you probably want open so that you can extract value from whatever is being opened–with many different shades of intent existing when it comes to actually contributing any value back, and supporting the ecosystem around whatever is actually being opened.

I ran with the open data dogs from 2008 through 2015 (still howl and bark), pushing for city, county, state, and federal government to open up data. I’ve witnessed how everyone wants it opened, sustained, maintained, and supported, but does not want to give anything back. Google doesn’t care about the health of local transit, as long as the data gets updated in Google Maps. Almost every open data activist and data focused startup I’ve worked with has high expectations for what government should be required to do, and very low expectations regarding what should be expected of them when it comes to paying for commercial access, sharing enhancements and enrichments, providing access to usage analytics, and being observable and open to sharing access with end-users of this open data. Libertarian capitalism is well designed to take, and not give back–yet be actively encouraging open.

I deal with companies, organizations, and institutions every day who want me to be more open with my work. They are more than happy to go along for the ride when it comes to the momentum built up from open in-person gatherings, Meetups, and conferences, and are always open to syndicating my data, content, and research–all while working as hard as possible to extract as much value as they can, and not give anything back. There are many, many, many companies who have benefitted from the open API work that I, and other evangelists in the space, do on a regular basis, without ever considering whether they should support it, or give back. I regularly witness partnership scenarios in all of the API platforms I monitor, where the larger, more proprietary and successful partner extracts value from the smaller, more open and less proven partner. I get that some of this is just the way things are, but much of it is about larger, well-resourced, and more closed groups just taking advantage of smaller, less-resourced, and more open groups.

I have visibility into a number of API platforms that are the targets of many unscrupulous API consumers who sign up for multiple accounts, do not actively communicate with platform owners, and are just looking for a free hand out at every turn. This makes it very difficult to be open, and often very costly to maintain, sustain, and support. Open isn’t FREE! Publicly available data, content, media, and other resources cost money to operate. The anti-competitive practices of large tech giants, setting the price so low for common digital resources for so long, have changed behaviors and set unrealistic expectations as the default–resulting in some very badly behaved API ecosystem players, and ecosystems that encourage and incentivize bad behavior within specific API communities, which then spreads from provider to provider, giving APIs a bad name.

When I come across people being vocal about some digital resource being open, I immediately begin conducting a little due diligence on who they are. Their motivations will vary depending on where they come from, and while there are no constants, I can usually tell a lot about someone based on whether they come from a startup ecosystem, the enterprise, government, venture capital, or the other dimensions of our reality that the web has reached into recently. My self-appointed role isn’t just about teaching people to be more “open” with their digital assets, it is more about teaching people to be more aware of, and in control over, their digital assets. Because there are a lot of wolves in sheep’s clothing out there, trying to convince you that “open” is an essential part of your “digital transformation”, and showcasing all the amazing things that will happen when you are more “open”, when in reality they are just interested in you being more open so that they can get their grubby hands on your digital resources, then move on down the road to the next sucker who will fall for their “open” promises.


Providing Minimum Viable API Documentation Blueprints To Help Guide Your API Developers

I was taking a look at the Department of Veterans Affairs (VA) API documentation for the VA Facilities API, intending to provide some feedback on the API implementation. The API itself is pretty sound, and I don’t have any feedback without having actually integrated it into an application, but following on the heels of my previous story about how we get API developers to follow minimum viable API documentation guidance, I had lots of feedback on the overall delivery of the documentation for the VA Facilities API, and ideas for improving on what they have there.

Provide A Working Example of Minimum Viable API Documentation
One of the ways that you help incentivize your API developers to deliver minimum viable API documentation across their API implementations is to do as much of the work for them as you can, and provide them with forkable, downloadable, clonable API documentation that meets the minimum viable requirements. To help illustrate what I’m talking about, I created a base GitHub blueprint for what I’d suggest as minimum viable API documentation at the VA–providing something the VA can consider, and borrow from, as they develop their own strategy for ensuring all APIs are consistently documented.

Covering The Bare Essentials That Should Exist For All APIs
I wanted to make sure each API had the bare essentials, so I took what the VA has already done over at developer.va.gov, and republished it as a static single page application that runs 100% on GitHub Pages, hosted in a GitHub repository–providing the following essential building blocks for APIs at the VA:

  • Landing Page - Giving any API a single landing page that contains everything you need to know about working with that API. The landing page can be hosted as its own repo and subdomain, and then linked up with other APIs using a facade page, or it could be published with many other APIs in a single repository.
  • Interactive Documentation - Providing interactive, OpenAPI-driven API documentation using Swagger UI. Providing a usable, and up to date version of the documentation that developers can use to understand what the API does.
  • OpenAPI Definition - Making sure the OpenAPI behind the documentation is front and center, and easily downloaded for use in other tools and services.
  • Postman Collection - Providing a Postman Collection for the API, and offering it as more of a transactional alternative to the OpenAPI.

That covers the bases for the documentation that EVERY API should have. Making API documentation available at a single URL, with a human viewable landing page, complete with documentation. While also making sure that there are two machine readable API definitions available for an API, allowing the API documentation to be more portable, and usable in other tooling and services–letting developers use the API definitions as part of other stops along the API lifecycle.

Bringing In Some Other Essential API Documentation Elements
Beyond the landing page, interactive documentation, OpenAPI, and Postman Collection, I wanted to suggest some other building blocks that would really make sure API developers at the VA are properly documenting, communicating, and supporting their APIs. To go beyond the bare bones API documentation, I suggested a handful of other elements, and incorporated some building blocks the VA already had on the API documentation landing page for the VA Facilities API.

  • Authentication - Providing an overview of authenticating with the API using the header apikey (see the sketch after this list).
  • Response Formats - They already had a listing of media types available for the API.
  • Support - Ensuring that an API has at least one support channel, if not multiple channels.
  • Road Map - Making sure there is a road map providing insights into what is planned for an API.
  • References - They already had a listing of references, which I expanded upon here.
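
To make the authentication item concrete, here is a minimal sketch of calling the API with the apikey header from Python; the endpoint URL and query parameter are illustrative assumptions, with the authoritative paths living in the OpenAPI definition on the landing page:

```python
import requests

# The endpoint URL and "state" parameter are assumptions for
# illustration; check the OpenAPI definition for the real contract.
response = requests.get(
    "https://dev-api.va.gov/services/va_facilities/v0/facilities",
    headers={"apikey": "YOUR_API_KEY"},  # key issued when you sign up
    params={"state": "CO"},              # hypothetical filter parameter
)
response.raise_for_status()
print(response.json())
```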

I tried not to go to town adding all the building blocks I consider to be essential, and just contributed a couple of other basic items. I feel support and a road map are essential, cannot be ignored, and should always be part of the minimum viable API documentation requirements. My biggest frustrations with APIs are 1) out of date documentation, 2) no support, and 3) not knowing what the future holds. I’d say that I’m also increasingly frustrated when I can’t get at the OpenAPI for an API, or at least find a Postman Collection for it. Machine readable definitions moved into the essential category for me a couple years ago–even though I know some folks don’t feel the same.

A Self Contained API Documentation Blueprint For Reuse
To create the minimum viable API documentation blueprint demo for the VA, I took the HTML template from developer.va.gov, and deployed it as a static Jekyll website that runs on GitHub Pages. The landing page for the documentation is a single index.html page in the root of the site, leveraging Jekyll for the user interface, while driving all the content on the page from the central config.yml for the API project–providing a YAML checklist that API developers can follow when publishing their own documentation, and doing a lot of the heavy lifting for them. All they have to do is update the OpenAPI for the API, and add their own data and content to the config.yml to update the landing page. Providing a self-contained set of API documentation that developers can fork, download, and reuse as part of their work, delivering consistent API documentation across teams.
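
To give a feel for how the config.yml could double as a checklist, here is a hypothetical sketch of a validation script that a build pipeline could run against each API project; the field names are my assumptions mirroring the building blocks above, not the actual keys in the demo repository:

```python
import yaml  # pip install pyyaml

# Field names are assumptions mirroring the minimum viable building
# blocks described above, not the actual keys used in the VA demo.
REQUIRED_FIELDS = [
    "title",            # landing page heading
    "description",      # what the API does
    "openapi_url",      # machine readable OpenAPI definition
    "postman_url",      # machine readable Postman Collection
    "support_channel",  # at least one way to get help
    "road_map",         # what is planned for the API
]

with open("config.yml") as handle:
    config = yaml.safe_load(handle) or {}

missing = [field for field in REQUIRED_FIELDS if not config.get(field)]
if missing:
    raise SystemExit(f"config.yml is missing: {', '.join(missing)}")
print("config.yml meets the minimum viable documentation checklist.")
```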

The demo API documentation blueprint could use some more polishing and comments. I will keep adding to it, and evolving it as I have time. I just wanted to share more of my thoughts about the approach the VA could take to provide functional API documentation guidance, as a working demo–providing them with something they could fork, evolve, and polish on their own, turning it into a more solid, viable solution for documentation at the federal agency. Helping evolve how they deliver API documentation across the agency, and ensuring that they can properly scale the delivery of APIs across teams and vendors. While also helping maximize how they leverage GitHub as part of their API lifecycle, setting the base for API documentation in a way that ensures it can also be used as part of a build pipeline to deploy, manage, test, and secure APIs, helping deliver along almost every stop of a modern API lifecycle.

The website for this project is available at https://va-working.github.io/api-documentation/ and you can access the GitHub repository at https://github.com/va-working/api-documentation

