The API Evangelist Blog

This blog represents the thoughts I have while I’m researching the world of APIs. I share what I’m working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Every Moment Is Potentially An API Story

I was on a call for a federal government API platform project with my partner in crime Chris Cairns (@cscairns) of Skylight.Digital. We were back channeling in our Slack channel during the call, when he said, “I always imagine you participating in these things, finding topics you haven’t covered or emphasized from a certain angle, and then writing a blog post in real time.” He was right, I had taken notes on a couple of new angles regarding the testing, monitoring, and performance of APIs involved with federal government projects.

The single conference call resulted in about seven potential stories in my notebook. I’m guessing that only four of them will actually end up being published. However, the conversation does a good job at highlighting my style for generating stories on the blog, which is something that allows me to publish 3-5 posts a day–keeping things as active as I possibly can, generating traffic, and bringing attention to my work. If you’ve ever worked closely with me, you know that I can turn anything into a story, even a story like this, about how I create stories. ;-)

Think of API Evangelist as my public workbench where I work through the projects I’m tackling each day. It is how I make sense of my research, the projects I am working on, and the conversations I am having. All of which generates SEO exhaust, which brings more attention to my work, extends its reach, and ultimately brings in more work. It is a cycle that helps me work through my ideas in a way that forces me to make them more coherent (hopefully) along the way, while also making them immediately available to my partners, customers, and readers.

Chris has been pushing this concept forward and advocating that we be public with as many of the Skylight.Digital responses and proposals as we possibly can, which resulted in me publishing both of our responses to the first and the second Department of Veterans Affairs (VA) RFIs. He has also been pushing other Skylight.Digital partners to be public with their proposals and responses, which folks were skeptical about at first, but it started making sense once they saw the effect it has, and the publicity it can generate.

This approach to storytelling isn’t something you can do effectively right out of the gate. It takes practice, and regular exercise–I have eight years of it. However, I feel it is something that anyone can do eventually. You just have to find your own way of approaching it, and work on establishing your own voice and style. It is something that I’ve found to be essential to how I do business, but also something I find to be personally rewarding. I enjoy working through my ideas, and telling stories. It is always the one aspect of my work that I look forward to each day, and I feel like something is missing on the days I don’t get to craft any posts. I really enjoy that every moment has the potential to be an API story–it makes each day an adventure for me.


API Evangelist Is Partnering With Bridge Software To Tell More API Integration Stories

I have been intrigued by the people who reached out after I published a story about my partner program. One of the companies that reached out was the API integration agency Bridge Software, which reflects the next generation of software development groups emerging to focus exclusively on API integration.

I’ve had several calls with the Bridge Software team this week, discussing the possible ways in which we can work together. I’m regularly in need of software development resources that I can refer projects to, and I’m also interested in hearing more interesting stories about how businesses are integrating with APIs. Of course, Bridge Software is looking for more customers, which is something I can definitely help with. So we started brainstorming more about how they can regularly feed me stories that I can develop into posts for API Evangelist, and I can help send new business their way.

One of the most valuable resources I possess as the API Evangelist is access to real world stories from the trenches of API providers and integrators. These sources of stories have been the bread and butter of API Evangelist for eight years. I have depended on the stories that businesses share with me to generate the 3,276 posts I have published on the blog since 2010. So I am stoked to have a new source of fresh stories, from a team that is neck deep in API integrations each day, and will be actively gathering stories from their regular developer team meetings.

I’m looking forward to working with the Bridge Software team, and getting new stories from the API integration trenches. I’m also looking forward to assisting them with finding new customers, and helping them deliver on their API integration work. If you are looking for help with API integrations, Bridge Software has a pretty sophisticated approach to delivering integrations, with test driven development at its core. Feel free to reach out to the team, but make sure and tell them where you heard about them, to get special treatment. ;-) Also, stay tuned for more of the API integration stories they will be sharing with me on a regular basis, which I will be weaving into my regular cadence of storytelling here on the blog.


Cautiously Aware Of How APIs Can Be Used To Feed Government Privatization Machinations

I’m working to define APIs at the Department of Veterans Affairs (VA) on several fronts right now. I’ve provided not just one, but two detailed responses to their Lighthouse API platform RFIs. Additionally, I’m providing guidance to different teams who are working on a variety of projects occurring simultaneously to serve veterans who receive support from the agency.

As I do this work, I am hyper aware of the privatization machinations of the current administration, and the potential for APIs to serve these desires. APIs aren’t good or bad, nor are they neutral. APIs reflect the desires of their owners, and the good or bad they can inflict is determined by their consumers. Because APIs are so abstract, and are often buried within web and mobile applications, it can be difficult to see what they are truly doing on a day to day basis. This is why we have API management solutions in place to help paint a picture of activity in real time, but as we’ve seen with Facebook and Cambridge Analytica, it requires the API provider to be paying attention, and to possess incentives that ensure they will actually care about paying attention, and responding to negative events.

By re-defining existing VA internal systems as APIs, we are opening up the possibility that digital resources can be extracted, and transferred externally. Of course, if these systems remain the sources of data, VA power and control can be retained, but it also opens up the possibilities for power to eventually be transferred externally–reversing the polarities of government power and putting it into private hands. Depending on your politics, this could be good, or it could be bad. Personally, I’m a fan of a balance being struck, allowing government to do what it does best, and allowing private, commercial entities to help fuel and participate in what is happening–both with proper oversight. This reflects the potential benefits APIs can bring, but this good isn’t present by default, and could shift one way or the other at any moment.

Early on, APIs appeared as a promising way to deliver value to external developers. They held the promise of delivering unprecedented resources to developers, things they would not normally be able to get access to. Facebook promised democratization using its social platform, while silently mining and extracting value from its application end-users, executed by often unknowing, but sometimes complicit, developers. A similar transfer of power has occurred with Google Maps, which promised unprecedented access to maps for developers, as well as web and mobile maps for cities, businesses, and other interests. Over time we’ve seen that Google Maps APIs have become very skilled at extracting value from cities, businesses, and end-users, again executed unknowingly, and knowingly, by developers. This process has been playing out over and over throughout the last decade of the open data evolution occurring across government at all levels.

Facebook and Google have done this at the platform level, extracting value and maintaining control within their operations, but the open data movement, Facebook and Google included, has been efficiently extracting value from government, and giving as little back as it possibly can. Sure, one can argue that government benefits from the resulting applications, but this model doesn’t ensure that the data and content stewards within government retain ownership or control over data, or benefit from revenue generated from the digital resources–leaving them struggling with tight budgets, and rarely improving upon the resources. This is what I fear will happen at the Department of Veterans Affairs (VA), allowing privatization supporters to focus not on developing applications that benefit veterans, but on extracting the value possessed by the VA, capturing it, and generating revenue from it, without ever giving back to the VA–never being held accountable for actually delivering value to veterans.

I have no way of testing the political motivations of the people I am working with. I also don’t want to slow or stop the potential of APIs being used to help veterans, and ensure their health and insurance information can become more interoperable. I just want to be mindful of this reality, and revisit it often, openly discussing it as I progress in my work, and as the VA Lighthouse platform evolves. Sometimes I regret all the API evangelism I have done, as I witness the damage APIs can do. However, the cat is out of the bag, and I’d rather be involved, part of the conversation, and pushing for observability, transparency, and open dialogue about the realities of opening up government digital resources to private entities via APIs. Allowing people who truly care about serving veterans to help police what is going on, keeping the value of and control over digital resources in the hands of people who are truly serving veterans’ interests, and minimizing access by those who are just looking to make money at veterans’ expense.


General Services Administration (GSA) Needs Help Testing Their FedBizOpps API

The General Services Administration (GSA) has an open call for help to test the new FedBizOpps API, setting a pretty compelling precedent for releasing APIs in the federal government, slowly bringing federal agencies out of their shells, and moving the API conversation forward in government. Hopefully it is something that other federal agencies, and other levels of government, will consider as they move forward on their API journeys.

It is good to see federal agencies reaching out to ask for help in making sure APIs are well designed, deliver value, and operate as expected. In my experience, most government agencies are gun-shy when it comes to seeking outside help, and accepting criticism of their work. The GSA is definitely the most progressive on this front, leading by example and showing other agencies what is possible–something that will hopefully spread, and something I am always keen to support with my storytelling here on API Evangelist.

One thing I’d caution federal agencies on when seeking outside feedback in this area is to be mindful of the fatigue in the private sector when it comes to working for free. API providers have been encouraging developers to document, test, develop code libraries, and handle other essential aspects of operating APIs for years now, and many developers are growing weary of this exploitation by companies who can afford to pay, but choose not to. While it is good that government is getting on the API bandwagon here, be aware that you are about 5+ years behind, and the tides are shifting.

I’d recommend thinking about how the micro-procurement models emerging at the Department of Veterans Affairs (VA) and other agencies can be applied to testing, performance, security, and other needs agencies will have. Think about how you can create $500, $1,000, and other opportunities for professional testers to make money. Of course, you are going to have to establish a way of vetting developers, but once you do, you are going to get a higher level of work than you would for free. It might take more effort to lay the groundwork, but it will pay off down the road with a community of external professionals who can help tackle the micro-tasks that emerge while developing and operating government APIs.

Overall, I’m stoked to see stuff like this come out of the government. I’m just eager to see it spread to other federal agencies, shifting how digital services are delivered. I’m also eager to see more mock or virtualized APIs exposed publicly, long before any code is actually written–bringing in outside opinions early on, and allowing APIs to be delivered more efficiently, ensuring they are more mature without a costly evolutionary and versioning process. I’m going to push more of the projects I’m involved in to follow GSA’s lead, and solicit outside testers to join in and provide feedback at critical steps in the API delivery process. The more agencies do this, the more likely they will be to have a trusted group of outside developers who will step up when they need help with tasks like this.


Opportunity For OpenAPI-Driven Open Source Testing, Performance, Security, And Other Modules

I’ve been on five separate government related projects lately where finding modular, OpenAPI-driven, open source tooling has been a top priority. All of these projects are microservice-focused and OpenAPI-driven, and are investing significant amounts of time looking for open source tools that will help with design governance, monitoring, testing, and security, and that will interact with their Jenkins pipelines–helping government agencies find success as their API journeys pick up speed, and the number of APIs grows exponentially.

Selling to the federal government can be a long journey in itself. They can’t always use the SaaS solutions many of us fire up to get the job done in our startup or enterprise lives. Increasingly government agencies are depending on open source solutions to help them move projects forward. Every agency I’m working with is using OpenAPI (Swagger) to drive their API lifecycle. While not all have gone design (define) first, they are using them as the contract for mocking, documentation, testing, monitoring, and security. The teams I’m working with are investing a lot of energy looking for, vetting, and testing out different open source modules on Github–with varying degrees of success.
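To make this concrete, here is a minimal sketch of the kind of OpenAPI-driven check these teams are looking for–a small script a Jenkins pipeline stage could run against an OpenAPI (Swagger) contract to enforce a couple of basic design governance rules. The rules and file path are my own assumptions for illustration, not any agency’s actual guidance.

```python
# Lint an OpenAPI contract for a few basic design governance rules, failing
# the pipeline when any rule is broken.
import sys
import yaml  # PyYAML

def lint(spec_path):
    with open(spec_path) as handle:
        spec = yaml.safe_load(handle)
    problems = []
    if not spec.get("info", {}).get("description"):
        problems.append("info.description is missing")
    for path, operations in spec.get("paths", {}).items():
        for verb, operation in operations.items():
            if verb.lower() not in ("get", "post", "put", "patch", "delete"):
                continue  # skip path-level keys like parameters or summary
            if not operation.get("operationId"):
                problems.append(f"{verb.upper()} {path} has no operationId")
            if not operation.get("description"):
                problems.append(f"{verb.upper()} {path} has no description")
    return problems

if __name__ == "__main__":
    issues = lint(sys.argv[1] if len(sys.argv) > 1 else "openapi.yaml")
    for issue in issues:
        print("GOVERNANCE:", issue)
    sys.exit(1 if issues else 0)  # non-zero exit fails the pipeline stage
```

Wiring something this simple into the CI/CD flow is what keeps the OpenAPI contract, rather than tribal knowledge, at the center of governance.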

Ideally, there would be an OpenAPI-driven marketplace, or a federated set of marketplaces like OpenAPI.Tools. I’ve had one for a while, but haven’t kept it up to date–I will invest some time and resources into it soon. My definition of an OpenAPI tool marketplace is that it is OpenAPI-driven, and open source. I’m fine with there being other marketplaces of OpenAPI-driven services, but I want a way to get at just the actively maintained open source tools. When it comes to serving government this is an important, and meaningful, distinction. I’d also like to encourage many of the project owners to ensure there is CI/CD integration, as well as make sure their projects are actively supported, and that they are willing to entertain commercial implementations.

While there wouldn’t always be direct commercial opportunities for open source tooling owners to engage with federal agencies, there would be through contractors and subcontractors. Working for federal agencies is a maze of forms and hoop jumping, but working with contractors can be pretty straightforward if you find the right ones. I don’t think you will get rich developing OpenAPI-driven tooling that serves the API lifecycle, but I think with the right solutions, support, and team behind them, you can make a decent living developing them. Especially as the lifecycle expands, and the number of services being delivered grows, the need for specialized, OpenAPI-driven tools to apply across the API lifecycle is only going to increase. Making it something I’ll be writing more stories about as I hear more stories from the API trenches.

I’m going to try and spend time working with Phil Sturgeon (@philsturgeon) and Matt Trask (@matthewtrask) on API.Tools, as well as give my own toolbox some love. If you have an open source OpenAPI-driven tool you’d like to get some attention, feel free to ping me, and make sure it’s part of API.Tools. Also, if you have a directory, catalog, or marketplace of tools you’d like to showcase, ping me as well–I’m all about supporting diversity of choice in the space. I have multiple federal agencies’ ears right now when it comes to delivering along the API lifecycle, and I’m happy to point agencies and their contractors to specific tools, if it makes sense. Like I said, there won’t always be direct revenue opportunities, but these are implementations that will undoubtedly lead to commercial opportunities in the form of consulting, advising, and development work with the contractors and subcontractors who are delivering on federal agency contracts.


API Lifecycle Seminars In New York City

I’ve had a huge demand for putting on API seminars for a variety of enterprise groups lately. Helping bring my API 101, history, lifecycle, and governance knowledge to the table. I’ve conducted seminars in the UK and France in April, and have more in Virginia, Massachusetts, and California in May. I’ve been working with a variety of companies, institutions, and government agencies to plan even more internal seminars, which is something that is proving to be challenging for some groups who are just getting started on their API journey.

While groups are keen on me coming to visit and share my API knowledge, some of them are having trouble getting me through legal, getting me into their payment systems, and making everyone’s schedules work. Some of them don’t have a problem figuring it out, while others are facing significant amounts of bureaucracy and friction due to the size, complexity, and legacy of their organizations. To help simplify the process for organizations to participate in my seminars, I am going to begin planning a rolling schedule of seminars beginning in New York City.

Every two weeks, on Tuesday and Wednesday, I’ll be planning to do a 2-day API lifecycle seminar in NYC. I’m going to be focusing on delivering 12 stops along the API lifecycle, targeting my readers and customers who have already embarked on their API journey:

Day One:

  • Definition - The basics of OpenAPI, Postman, and using Github to manage API definitions.
  • Design - Entry level API design, including touching on API governance and guidance.
  • Deployment - Looking at the big world of how APIs can be deployed on-premise, and in the clouds.
  • Virtualization - Understanding mocking, as well as API and data virtualization.
  • Authentication - Thinking about the common patterns of authentication for API consumption.
  • Management - Covering the services, tools, and approaches to managing APIs in operation.
  • Discussion - Open discussion about anything covered throughout the day, and beyond.

Day Two:

  • Documentation - Demonstrating how to deliver portals, documentation, and other resources.
  • Testing - Providing information on the monitoring and testing of API infrastructure.
  • Security - Walking through API security beyond just API management and authentication.
  • Support - Discussing the importance of providing direct and indirect support for APIs.
  • Evangelism - Looking at how to evangelize your APIs internally and externally.
  • Integration - Thinking briefly about API integration concerns for API providers.
  • Discussion - Open discussion about anything covered throughout the day, and beyond.

I’m kicking off this seminar series in New York City, with the following dates proposed to get the ball rolling:

I’m going to be charging $1,000.00 per person for the two day seminars. I’m setting the minimum bar for attendance at 5 people, otherwise I won’t be scheduling the seminar, and will be pushing forward to the next dates. I’ll make sure and let everyone know at least 2 weeks in advance if we don’t get the expected attendance. All seminars are open to anyone who would like to attend, but I am also happy to conduct private group seminars, with a minimum of five people in attendance. My goal is to conduct a regular cadence of seminars that people know they can plan on, and participate in as it works for their schedule, while also providing more stability for my own schedule.

The objective with these seminars is to make it easier for companies, institutions, and government agencies to participate in my seminars, while also forcing them out of their bubbles. I’m finding the toll of me coming onsite can vary widely, and the value to seminar attendees is lower when they aren’t forced to leave their bubbles. So, I am looking to force people out of their usual domains, so that they begin thinking about how they’ll be doing APIs, and playing nicely with others–even if they are only intended for internal use. I find the process helps you to think outside your normal daily operations, and is easier for me to not have to navigate coming on site to deliver seminars.

I’ve published both initial dates to Eventbrite to test the waters, and we’ll see if I can shift some of the demand I’m getting to this format. I will also keep pushing dates out every other week, and in July I’ll probably start looking at dates in Washington DC, then London, Paris, and probably some other west coast US locations. If you have an interest in participating in my API lifecycle seminars and / or have specific requests for venues, I’d love to hear from you. I’m looking to continue formalizing my process as well as my schedule, bringing my seminars wherever they are needed, but doing so in a more organized fashion that helps you get your team what they need, and is also more sustainable for me in the long term.


Sports Data API Simulations From SportRadar

I’m a big fan of API sandboxes, labs, and other virtualization environments. API sandboxes should be default in heavily regulated industries like banking. I also support the virtualization of schema and data used across API operations, like I am doing at the Department of Veterans Affairs (VA), with synthetic healthcare data. I’m very interested in anything that moves forward the API virtualization conversation, so I found the live sporting API simulations over at SportRadar very interesting.

SportRadar’s “live simulations give you the opportunity to test your code against a simulation of live data before the preseason starts or any time! Our simulation system replays select completed games allowing you to view our API feeds as if they were happening live.” Here are the details of their NFL Official API simulations that run every day:

  • 11:00 am - Data is reset for the day’s simulations.
  • 1:00 pm PST – Week 1 games will run: Oakland at Houston, Detroit at Seattle, Miami at Pittsburgh, and New York at Green Bay
  • 2:00 pm PST – Week 2 games will run: Seattle at Atlanta, Houston at New England, Green Bay at Dallas, and Pittsburgh at Kansas City
  • 3:00 pm PST – Week 3 games will run: Green Bay at Atlanta and Pittsburgh at New England
  • 4:00 pm PST – Week 4 games will run: New England vs Atlanta

Providing a pretty compelling evolution in the concept of API virtualization when it comes to event data, or data that happens on a schedule. I wanted to write up this story to make sure I bookmarked this as part of my API virtualization research–if I don’t write about it, it doesn’t happen. As I work to include API sandboxes, lab environments, and other API virtualization approaches in my consulting, I’m keen to add new dimensions like API simulation, which provide great material for workshops, presentations, and talks.

I feel like ALL APIs should have some sort of sandbox to play in while developing–not just those in heavily regulated industries, or those with sensitive production data and content. It can be stressful to have to develop against a live environment, and having realistic sandboxes, labs, and simulations which provide complete copies of production APIs, along with realistic synthetic data, can help developers be more successful in getting up and running. I’ll keep profiling interesting approaches like this one out of SportRadar, adding to my API virtualization research, and helping round out my toolbox when it comes to helping API providers develop their sandbox environments.


Using Jekyll To Look At Microservice API Definitions In Aggregate

I’ve been evaluating microservices at scale, using their OpenAPI definitions to provide API design guidance to each team using Github. I just finished a round of work where I took 20 microservices, evaluated each OpenAPI definition individually, and made design suggestions via an updated OpenAPI definition that I submitted as a Github issue. I was going to submit it as a pull request, but I really want the API design guidance to be suggestions, as well as a learning experience, so I didn’t want the teams to feel like they had to merge all of my recommendations.

After going through all 20 OpenAPI definitions, I took all of them and dumped them into a single local repository where I was running Jekyll via localhost. Then using Liquid I created a handful of web pages for looking at the APIs across all microservices in a single page–publishing two separate reports:

  • API Paths - An alphabetical list of the APIs for all 20 services, listing the paths, verbs, parameters, and headers in a single bulleted list for evaluation.
  • API Schema - An alphabetical list of the APIs for all 20 services, listing the schema definitions, properties, descriptions, and types in a single bulleted list for evaluation.

While each service lives in its own Github repository, and operates independently of each other, I wanted to see them all together, to be able to evaluate their purpose and design patterns from the 100K level. I wanted to see if there was any redundancy in purpose, and if any services overlapped in functionality, as well as trying to see the potential when all API resources were laid out on a single page. It can be hard to see the forest for the trees when working with microservices, and publishing everything as a single document helps flatten things out, and is the equivalent of climbing to the top of the local mountain to get a better view of things.

Jekyll makes this process pretty easy by dumping all of the OpenAPI definitions into a single collection, which I can then navigate easily using Liquid on a single HTML page, providing a comprehensive report of the functionality across all microservices. I am calling this my microservices monolith report, as it kind of reassembles all the individual microservices into a single view, letting me see the collective functionality available across the project. I’m currently evaluating these services from an API governance perspective, but at a later date I will take a look from the API integration perspective, and try to understand if the services will work in concert to help deliver on a specific application objective.
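For anyone who wants to replicate the reports without standing up Jekyll, here is a rough Python equivalent of what my Liquid templates do, assuming the OpenAPI definitions have been dumped into a local folder as YAML–the folder layout and file names are just placeholders.

```python
# Flatten a folder of microservice OpenAPI definitions into a single
# "API Paths" style report, listing every path, verb, and parameter.
import glob
import yaml  # PyYAML

HTTP_VERBS = ("get", "post", "put", "patch", "delete", "options", "head")

for spec_file in sorted(glob.glob("./openapi/*.yaml")):
    with open(spec_file) as handle:
        spec = yaml.safe_load(handle)
    print(spec.get("info", {}).get("title", spec_file))
    for path, operations in sorted(spec.get("paths", {}).items()):
        for verb, operation in operations.items():
            if verb.lower() not in HTTP_VERBS:
                continue  # skip path-level keys like parameters or summary
            params = [p.get("name") for p in operation.get("parameters", [])]
            print(f"  {verb.upper()} {path} ({', '.join(params) or 'no parameters'})")
```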


Measuring Value Exchange Around Government Data Using API Management

I’ve written about how the startup community has driven the value exchange that occurs at the API management layer down a two lane road of API monetization and plans. To generate the value that investors have been looking for, we have ended up with a free, pro, enterprise approach to measuring the value exchanged with API integration, when in reality there is so much more going on here. Something that really becomes evident when you begin evaluating the API conversations going on across government at all levels.

In conversation after conversation I have with government API folks, I hear that they don’t need API management reporting, analysis, and billing features. There is a perception that government isn’t selling access to APIs, so API management measurement isn’t really needed–missing out on a huge opportunity to measure, analyze, and report upon API usage, and requiring a huge amount of re-education on my part to help API providers within government understand how they need to be measuring value exchange at the API management level.

Honestly, I’d love to see government agencies actually invoicing API consumers at all levels for their usage. Even if the invoices aren’t in dollars. Measuring API usage, reporting, quantifying, and sending an invoice for usage each month would help bring awareness to how government digital resources are being put to use (or not). If all data, content, and algorithms in use across federal, state, and municipal government were available via APIs, then measured, rate limited, and reported upon across all internal, partner, and public consumers–the awareness around the value of government digital assets would increase significantly.
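As a sketch of what I mean, here is what a notional monthly invoice could look like when generated from API management usage logs, even when nothing is billed in dollars–the log format, consumers, and per-call value are all made up for illustration:

```python
# Roll API management usage logs up into a notional monthly invoice per
# consumer, to make the value exchange around government data visible.
from collections import defaultdict

usage_log = [
    {"consumer": "agency-partner-a", "path": "/facilities", "calls": 120000},
    {"consumer": "public-app-b", "path": "/facilities", "calls": 4500},
    {"consumer": "public-app-b", "path": "/benefits", "calls": 900},
]

RATE_PER_CALL = 0.001  # notional value per call, not an actual price

invoices = defaultdict(lambda: {"calls": 0, "value": 0.0})
for entry in usage_log:
    invoices[entry["consumer"]]["calls"] += entry["calls"]
    invoices[entry["consumer"]]["value"] += entry["calls"] * RATE_PER_CALL

for consumer, totals in invoices.items():
    print(f"{consumer}: {totals['calls']} calls, "
          f"notional value ${totals['value']:.2f} this month")
```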

I’m not saying we should charge for all access to government data. I’m saying we should be measuring and reporting upon all access to government data. The APIs, and a modern API management layer is how you do this. We have the ability to measure the value exchanged and generated around government digital resources, and we shouldn’t let misconceptions around how the private sector measures and generates revenue at the API layer interfere with government agencies benefitting from this approach. If you are publishing an API at a government agency, I’d love to learn more about how you are managing access to your APIs, and how you are reporting upon the reading and writing of data across applications, and learn more about how you are sharing and communicating around these consumption reports.


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices belonging to one of my clients recently, looking to understand what is being accomplished across all these services as they work independently toward a single collective objective. I’m being contracted to come in blindly and provide feedback on the design of the APIs being exposed across services, and to help provide guidance on their API lifecycle, as well as eventually API governance once things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but eventually we’ll hopefully graduate to the next level.

A complete and up to date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren’t part of each service’s development team it can be difficult to understand what each service does, but with an up to date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, its API is well designed, and the surface area is reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within hopefully minutes, not hours.

When each service possesses an OpenAPI it becomes possible to evaluate what they deliver at scale. You can take all the APIs, their paths, headers, parameters, and schema, and lay them out in different ways so that you can begin to paint a picture of what they deliver in aggregate. Bringing all the disparate services back together to perform in a sort of monolith concert, while still acknowledging they all do their own thing independently. Allowing us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly about how they all work together.

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services working in concert to deliver on application needs. Many of the groups I’m working with are so focused on decomposing and tearing down, they aren’t thinking too critically about how they will make all of this begin to work together again. I see monolithic systems working like a massive church organ, which takes a lot of maintenance, and requires a single (or a handful of) knowledgeable operators to play. Microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I’m talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions just yet.


API Discovery is for Internal or External Services

The topic of API discovery has been picking up momentum in 2018. It is something I’ve worked on for years, but with the number of microservices emerging out there, it is something I’m seeing become a concern amongst providers. It is also something I’m seeing more potential vendor chatter around, with providers looking to offer more services and tooling to help alleviate API discovery pain. Even with all this movement, there is still a lot of education and discussion that needs to occur on the subject, to help bring people up to speed on what API discovery is.

The most common view of API discovery is when you need to find an API for developing an application. You have a need for a resource in your application, and you need to look across your internal and partner resources to find what you are looking for. Beyond that, you will need to search for publicly available API resources, using Google, Github, ProgrammableWeb, and other common ways to find popular APIs. This is definitely the most prominent perspective when it comes to API discovery, but it isn’t the only dimension of this problem. There are several dimensions to this stop along the API lifecycle that I’d like to flesh out further, so that I can better articulate them across the conversations I am having.

Another area that gets lumped in with API discovery is the concept of service discovery, or how your APIs will find their backend services that they use to make the magic happen. Service discovery focuses on the initial discovery, connectivity, routing, and circuit breaker patterns involved with making sure an API is able to communicate with any service it depends on. With the growth of microservices there are a number of solutions like Consul that have emerged, and cloud providers like AWS are evolving their own service discovery mechanisms. Providing one dimension to the API discovery conversation, but different from, and often confused with front-end API discovery and how developers and applications find services.

One of the least discussed areas of API discovery, but one that is picking up momentum, is finding APIs when you are developing APIs, to make sure you aren’t building something that has already been developed. I come across many organizations who have duplicate and overlapping APIs that do similar things due to a lack of communication and a central directory of APIs. I’m getting asked by more groups how they can conduct API discovery by default across their organizations, sniffing out APIs from log files, on Github, and in other channels in use by existing development teams. Many groups just haven’t been good at documenting and communicating around what has been developed, or at checking what already exists before beginning new projects–something that will only become a greater problem as the number of microservices grows.

The other dimension of API discovery I’m seeing emerge is discovery in the service of governance–understanding what APIs exist across teams so that definitions, schema, and other elements can be aggregated, measured, secured, and governed. EVERY organization I work with is unaware of all the data sources, web services, and APIs that exist across their teams. Few want to admit it, but it is a reality. The reality is that you can’t govern or secure what you don’t know you have. Things get developed so rapidly, and baked into web, mobile, desktop, network, and device applications so regularly, that you just can’t see everything. Before companies, organizations, institutions, and government agencies are going to be able to govern anything, they are going to have to begin addressing the API discovery problem that exists across their teams.

API discovery is a discipline that is well over a decade old. It is one I’ve been actively working on for over 5 years. It is something that is only now getting the discussion it needs, because it is a growing concern. It will become a major concern with each passing day of the microservice evolution. People are jumping on the microservices bandwagon without any coherent way to organize schema, vocabulary, or API definitions, let alone any strategy for indexing, cataloging, sharing, communicating, and registering services. I’m continuing my work on APIs.json, and the API Stack, as well as pushing forward my usage of OpenAPI, Postman, and AsyncAPI, which all contribute to API discovery. I’m going to continue thinking about how we can publish open source directories, catalogs, and search engines, and even do some automated scanning of logs and other ways to conduct discovery in the background. Eventually, we will begin to find more solutions that work–it will just take time.
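A background discovery process doesn’t have to be sophisticated to be useful. Here is a rough sketch of the kind of scanning I’m talking about–walking a folder of cloned Github repositories looking for OpenAPI definitions and building a simple catalog of what already exists. The file names it looks for are assumptions on my part, not a standard.

```python
# Scan cloned repositories for OpenAPI / Swagger definitions and build a
# lightweight catalog that teams can search before building something new.
import os
import yaml  # PyYAML (also parses the JSON variants)

CANDIDATES = ("openapi.yaml", "openapi.yml", "swagger.yaml", "swagger.json")

catalog = []
for root, dirs, files in os.walk("./repositories"):
    for name in files:
        if name.lower() not in CANDIDATES:
            continue
        with open(os.path.join(root, name)) as handle:
            spec = yaml.safe_load(handle)
        catalog.append({
            "repository": root,
            "title": spec.get("info", {}).get("title", "unknown"),
            "paths": sorted(spec.get("paths", {}).keys()),
        })

for service in catalog:
    print(f"{service['title']} ({service['repository']}) - {len(service['paths'])} paths")
```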


API Gateway As A Node And Moving Beyond Backend and Frontend

The more I study where the API management, gateway, and proxy layer is headed, the less I’m seeing a front-end or a backend–I’m just seeing a node. A node that can connect to existing databases, or what is traditionally considered a backend system, but can also just as easily proxy and be a gateway to any existing API. A lot has been talked about when it comes to API gateways deploying APIs from an existing database. There has also been considerable discussion around proxying existing internal APIs or web services, so that we can deploy newer APIs. However, I don’t think there has been near as much discussion around proxying other 3rd party public APIs–which flips the backend and frontend discussion on its head for me.

After profiling the connector and plugin marketplaces for a handful of the leading API management providers, I am seeing a number of API deployment opportunities for Twilio, Twitter, SendGrid, etc. Allowing API providers to publish API facades for commonly used public APIs, obfuscating away the 3rd party provider, and making the API your own. Allowing providers to build a stack of APIs from existing backend systems, private APIs and services, as well as 3rd party public APIs. Shifting gateways and proxies from being API deployment and management gatekeepers for backend systems, to being nodes of connectivity for any system, service, and API that you can get your hands on. Changing how we think about designing, deploying, and managing APIs at scale.
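To illustrate the node idea without pointing at any specific vendor’s gateway, here is a bare-bones facade sketch–a service that publishes its own path while proxying a 3rd party API behind the scenes, keeping the upstream provider and credentials out of view of consumers. The upstream URL and key handling here are placeholders, not a production pattern.

```python
# A facade path that proxies a commonly used public API (SendGrid is used here
# purely as an example upstream), making the resource look like our own API.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

UPSTREAM_BASE = "https://api.sendgrid.com/v3"   # example third party API
UPSTREAM_KEY = "replace-with-upstream-api-key"  # placeholder credential

@app.route("/email/<path:subpath>", methods=["GET", "POST"])
def email_facade(subpath):
    # Forward the call upstream with our own credentials, so consumers only
    # ever see and authenticate against our surface area.
    upstream = requests.request(
        method=request.method,
        url=f"{UPSTREAM_BASE}/{subpath}",
        headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
        params=request.args,
        data=request.get_data(),
    )
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))
```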

I feel like this conversation is why API deployment is such a hard thing to discuss. It can mean so many different things, and be accomplished in so many ways. It can be driven by a database, or strictly by code, or be just taking an existing API and turning it into something new. I don’t think it is something that is well understood amongst developer circles, let alone in the business world. An API gateway can be about integration just as much as it can be about deployment. It can be both simultaneously. Further complicating what APIs are all about, but also making the API gateway a more powerful concept, continuing to renew this relic of our web services past into something that will accommodate the future.

What I really like about this notion is that it ensures we will all be API providers as well as consumers. The one-sided nature of API providers in recent years has always frustrated me. It has allowed people to stay isolated within their organizations, and not see the world from the API consumer perspective. While API gateways as nodes won’t bring everyone out of their bubble, it will force some API providers to experience more pain than they have historically–understanding what it takes to not just provide APIs, but to do so in a more reliable, consistent, and friendly way. If we all feel the pain of integration and have to depend on each other, the chances we will show a little more empathy will increase.


1000 LB Gorilla Monolith and Microservices


Making the OpenAPI Contract Friendlier For Developers and Business Stakeholders

I was in a conference session about an API design tool today, and someone asked if you could get at the OpenAPI definition behind the solution. They said yes, but quickly added that the definition is boring and that you don’t want to be in there, you want to be in the interface. I get that service providers want you to focus on their interface, but we shouldn’t be burying or abstracting away the API contract–we should always be educating people about it, and bringing it front and center in any service, tooling, or conversation.

Technology folks burying or devaluing the OpenAPI definition with business users is common, but I also see technology folks doing it to each other, reducing OpenAPI to just another machine readable artifact alongside the other components of delivering API infrastructure today. I think this begins with people not understanding what OpenAPI is, but I think it is sustained by people’s view of what is technological magic that should remain in the hands of the wizards, and what should be accessible to a wider audience. If you limit who has access and knowledge, you can usually maintain a higher level of control, so they use your interface in the case of a vendor, or they come to you to develop and build an API in the case of a developer.

There is nothing in a YAML OpenAPI definition that business users won’t be able to understand. OpenAPIs aren’t any more boring than a Word document or spreadsheet. If you are a stakeholder in the service, you should be able to read, understand, and engage with the OpenAPI contract. If we teach people to be afraid of OpenAPI definitions we are repeating the past, and maintaining the canyon that can exist between business and IT/developer groups. If you are in the business of burying the OpenAPI definition, I’m guessing you don’t understand the portable API lifecycle potential of this API contract, and simply see it as config, documentation, or some other technical artifact. Or you are just in the business of maintaining control and power by being the gatekeeper for the API contract, similar to how we see database people defend their domain.

Please do not devalue or hide away the OpenAPI contract. It isn’t your secret sauce. It isn’t boring. It isn’t too technical. It is the contract for how a service will work, that will speak to business and technical groups. It is the contract that all the services and tools you will use along the API lifecycle will understand. It is fine to have the OpenAPI right behind the scenes, but always provide a button, link, or other way to quickly see the latest version, and definitely do not scare people away or devalue it when you are talking. If you are doing APIs, you should be encouraging, and investing in everyone being able to have a conversation around the API contract behind any service you are putting forward.


Constraints Introduced By Supporting Standardized API Definitions and Schema

I’ve had a few API groups contact me lately regarding the challenges they are facing when it comes to supporting organizational, or industry-wide, API definitions and schema. They were eager to support common definitions and schema that have been standardized, but were getting frustrated by not being able to do everything they wanted, and by having to live within the constraints introduced by the standardized definitions. Which is something that doesn’t get much discussion by those of us who are advocating for the standardization of APIs and schema.

Web APIs Come With Their Own Constraints
We all want more developers to use our APIs. However, with more usage comes more responsibility. Also, to get more usage our APIs need to speak to a wider audience, something that common API definitions and schema will help with. This is why web APIs are working–they speak to a wider audience. However, with this architectural decision we are making some tradeoffs, and accepting some constraints in how we do our APIs. This is why REST is just one tool in our toolbox, so we can use the right tool, and establish the right set of constraints, to allow our APIs to be successful. The wider our API toolbox, the more options we will have available when it comes to how we design our APIs, and what schema we can employ.

Allowing For Content Negotiation By Consumers
One way I’ve encouraged folks to help alleviate some of the pain around the adoption of common API definitions and schema is to provide content negotiation to consumers, allowing them to obtain the response they are looking for. If people want the standardized approaches they can choose those, and if they want something more precise, or custom they can choose that. This also allows API providers to work around the API standards that have become bottlenecks, while still supporting them where they matter. Having the best of both worlds, where you are supporting the common approach, but still able to do what you want when you feel it is important. Allowing for experimentation as well as standardization using the same APIs.
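Here is a minimal sketch of what that negotiation can look like in practice, with a hypothetical standardized media type alongside a custom default–the field names and media type are made up, and the point is only the Accept header branching:

```python
# Let consumers choose between the standardized schema and our own custom
# representation of the same resource, using the Accept header.
from flask import Flask, jsonify, request

app = Flask(__name__)

STANDARD_TYPE = "application/vnd.example.standard+json"  # hypothetical standard

@app.route("/facilities/<facility_id>")
def get_facility(facility_id):
    record = {"facility_id": facility_id, "facility_name": "Example Clinic",
              "zip_code": "97520"}
    if STANDARD_TYPE in request.headers.get("Accept", ""):
        # Reshape the record into the structure the standard expects.
        standardized = {"id": record["facility_id"],
                        "name": record["facility_name"],
                        "address": {"postalCode": record["zip_code"]}}
        response = jsonify(standardized)
        response.headers["Content-Type"] = STANDARD_TYPE
        return response
    return jsonify(record)  # custom representation by default
```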

Participate In Standards Body and Process
Another way to help move things forward is to participate in the standards body that is moving an API definition or schema forward. Make sure you have a seat at the table so that you can present your case for where the problems are, and how to improve on the design, definition, and schema being evolved. Taking a lead in creating the world you want to see when it comes to API and schema standards, rather than just sitting back and being frustrated because it doesn’t do what you want it to do. Having a role in the standards body, and actively participating in the process, isn’t easy, and it can be time consuming, but it can be worth it down the road, helping you better achieve your goals when it comes to your APIs operating as you expect, while also serving the wider community and industry you are part of.

Delivering APIs at scale won’t be easy. To reach a wide audience with your API it helps to be speaking a common vocabulary. This doesn’t always allow you to move as fast as you’d like, and do everything exactly as you envision. You will have to compromise. Operate within constraints. However, it can be worth it. Not just for your organization, but for the overall health of your community, and the industry you operate in. You never know–with a little patience, collaboration, and communication, you might even learn new approaches to defining your APIs and schema that you didn’t think about in isolation. Also, experimentation with new patterns will still be important, even while working to standardize things. In the end, a balance between standardized and custom will make the most sense, and hopefully alleviate your frustrations in moving things forward.


API Design, Either the Provider Does the Work or the Consumer Will Have To

I’m always fascinated by the API design debate, and how many entrenched positions there are when it comes to the right way of doing it. Personally, I don’t see any single right way of doing it, I just see many different ways to put the responsibility on the provider’s or the consumer’s shoulders, or to share the load between them. I spend a lot of time profiling APIs, crafting OpenAPI definitions that try to capture 100% of the surface area of an API. Something that is pretty difficult to do when you aren’t the provider, and you are just working from existing static documentation. It is just hard to find every parameter and potential value, exhaustively detailing what is possible when you use an API.

When designing APIs I tend to lean towards exposing the surface area of an API in the path. This is my personal preference for when I’m using an API, but I also do this to try and make APIs more accessible to non-developers. However, I regularly get folks who freak out at how many API paths I have, preferring to keep the complexity at the parameter level. The conversation continues with the GraphQL and other query language folks, who prefer to craft more complex queries that can be passed in the body, defining exactly what is desired. I do not feel there is a right way of doing this, but it does reflect what I said earlier about balancing the load between provider and consumer.
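To make the three positions concrete, here is the same hypothetical resource consumed each way–the URLs, parameters, and fields are made up for illustration, not from any actual provider:

```python
import requests

# 1) Surface area exposed in the path: every option is visible and linkable.
requests.get("https://api.example.com/questions/tagged/python")

# 2) Surface area exposed as parameters: fewer paths, more query complexity.
requests.get("https://api.example.com/questions", params={"tagged": "python"})

# 3) Surface area offloaded to the consumer as a GraphQL query in the body.
requests.post(
    "https://api.example.com/graphql",
    json={"query": '{ questions(tag: "python") { title link } }'},
)
```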

The burden for defining and designing the surface area of an API resides in the provider’s court–only the provider truly knows the entire surface area. Then I’d add that when you offload the responsibility to the consumer using GraphQL, you are limiting who will be able to craft a query, putting API access beyond the reach of business users, and many developers. I feel that exposing the surface area of an API in the URL makes sense to a lot of people, and puts it all out in the open. Unless you are going to provide every possible enum value for all parameters, this is the only way to make 100% of the surface area of an API visible and known. However, depending on the complexity of an API, this is something that can get pretty unwieldy pretty quickly, making parameters the next stop for defining and designing the surface area of your API.

I know my view of API design doesn’t sync with many API believers, across many different disciplines. I’m not concerned with that. I’m happy to hear all the pros and cons of any approach. My objective is to lower the bar for entry into the API game, not raise the bar, or hide the bar. I’m all for pushing API providers to be more responsible for defining the surface area of their API, and not just offloading it on consumers to do all the work, unless they implicitly ask for it. In the end, I’m just a fan of simple, elegantly designed APIs that are intuitive and well documented using OpenAPI. I want ALL APIs to be accessible to everyone, even non-developers. I want them accessible to developers, minimizing the load on them to understand what is happening, and what all the possibilities are. I just don’t want to spend too much time on-boarding with an API. I just want to go from discovery to exploration in as little time as possible.


Turning The Stack Exchange Questions API Into 25 Separate Tech Topic Streaming APIs

I’m turning different APIs into topical streams using Streamdata.io. I have been profiling hundreds of different APIs as part of my work to build out the Streamdata.io API Gallery, and as I create OpenAPI definitions for each API, I’m realizing the potential for event-driven and topical streams across many existing web APIs. One thing I do after profiling each API is benchmark it to see how often the data changes, applying what we are calling StreamRank to each API path. Then I try to make sure all the parameters, and even the enum values for each parameter, are represented in each API definition, helping me see the ENTIRE surface area of an API. Which is something that really illuminates the possibilities surrounding each API.

After profiling the Stack Exchange Questions API, I began to see how much functionality and data is buried within a single API endpoint, and was something I wanted to expose and make much easier to access. Taking a single OpenAPI definition for the Stack Exchange Questions API:

Then exploding it into 25 separate tech topic streaming APIs. Taking the top 25 enum values for the tags parameter for the Stack Overflow site, and exploding them into 25 separate streaming API resources. To do this, I’m taking each OpenAPI definition, and generating an AsyncAPI definition to represent each possible stream:

I’m not 100% sure I’m properly generating the AsyncAPI currently, as I’m still learning how to use the topics and streams collections properly. However, the OpenAPI definition above is meant to represent the web API resource, and the AsyncAPI definition is meant to represent the streaming edition of the same resource. Something that can be replicated for any tag, or any site that is available via the Stack Exchange API. Turning the existing Stack Exchange API into streaming topic APIs, where people can subscribe to only the topics they are interested in receiving updates about.
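For a sense of how each topic stream maps back to the same underlying resource, here is a rough polling sketch against the Stack Exchange Questions API, with a different tagged value per stream–Streamdata.io would sit in front of calls like these to turn the polling into a push-based stream. Only a few of the 25 tags are shown, and the parameters are just one possible combination:

```python
# Poll the Stack Exchange Questions API once per tag, which is the web API
# resource each of the 25 topic streams would wrap.
import requests

TOP_TAGS = ["javascript", "java", "python"]  # illustrative subset of the 25 tags

for tag in TOP_TAGS:
    response = requests.get(
        "https://api.stackexchange.com/2.2/questions",
        params={"site": "stackoverflow", "tagged": tag,
                "order": "desc", "sort": "activity"},
    )
    questions = response.json().get("items", [])
    print(f"{tag}: {len(questions)} recently active questions")
```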

At this point I’m just experimenting with what is possible with OpenAPI and AsyncAPI specifications, and understanding what I can do with some of the existing APIs I am already using each day. I’m going to try and turn this into a prototype, delivering streaming APIs for all 25 of the top Stack Overflow tags. To demonstrate what is possible on Stack Exchange, but also to establish a proof of concept that I can apply to other APIs like Reddit, Github, and others. Then eventually automating the creation of streaming topic APIs using the OpenAPI definitions for common APIs, and the Streamdata.io service.


Doing Away With Social Logins Due To A Lack Of Trust

I received an email from my cron job API provider (EasyCRON) this morning about discontinuing the usage of social logins to their service with Gmail, Facebook, etc. Something that I think is a sign of things to come, in response to the recent (and continued) bad behavior of many of the leading technology platforms. EasyCRON gave the following explanation for removing social logins from their service:

In order to: 1) prevent any confusion that could be caused by using third party platform’s authentication API, 2) decouple from those data greedy platforms, 3) and keep our system simple, we decide to stop using third party platform’s authentication API on EasyCron.

As much as I prefer one click login using my Github or Google (rarely use Facebook), I can’t argue with their logic. I’m a big fan of minimizing data sharing, surveillance, and of course keeping things simple. I wish we could trust tech companies to be good citizens with our data, but they seem to prefer repeatedly demonstrating they are more concerned with profits than they are about us.

I’m still on board with using Github login for my infrastructure related services, but will definitely stop using Google, Twitter, Facebook, and any other purely advertising related service for authentication purposes. As much of a pain in the ass as it might be to create yet another login, it is more of a pain in the ass to be surveilled, and exploited on a daily basis by technology platforms. I wish people would have more respect, and be better behaved, but things are what they are, and we should respond accordingly.

Thanks for fucking this one up for us all, Facebook, Google, Twitter, and the other platforms who just can’t see the big picture past their short term profit. I am going to stop advising that API platforms utilize social logins as part of their API management and on-boarding process, due to it putting developers and consumers at risk of exploitation.


GraphQL, Just Get Out Of My Way And Give Me What I Want

One of the arguments I hear for why API providers should be employing GraphQL is that they should just get out of developers way, and let them build their own queries so that they can just get exactly the data that they want. As an application developer we know what we want from your API, do not have us make many different calls, to multiple endpoints–just give us one API and let us ask for exactly what we need. It is an eloquent, logical argument when you operate and live within a “known bubble”, and you know exactly what you want. Now, ask yourself, will every API consumer know what they want? Maybe in some scenarios, every API consumer will know what they want, but in most situations, developers will not have a clue what they want, need, or what an API does.

This is where the GraphQL as a replacement for REST argument begins breaking down. In a narrowly defined bubble, where every developer knows the schema, knows GraphQL, and knows what they want–GraphQL can make a lot of sense. In the autodidact alpha developer startup world this argument makes a lot of sense, and gets a lot of traction. However, not everyone lives in this world, and in the real world, API design becomes very important. Helping people understand what is possible, learn the schema behind an API, and become more familiar with an API, until they have a better understanding of what they might want. I’m not saying GraphQL doesn’t have a place when a significant portion of your audience knows what they want, I’m just saying that you shouldn’t leave everyone else behind.

To further turn this argument on its head, as a developer, if I know what I want, why make me build a query at all? Just give me a single URL with what I want! Don’t make me do the heavy lifting, and work to craft a query, just give me a single URL with minimal authentication, and let me get what I need in one click. Sure, it will take some time before you can craft enough URLs to meet everyone’s needs, but it will be worth it for those you can serve. You can always craft new paths based upon requests, and yes, you can also augment your web APIs with a GraphQL endpoint for those neediest, most demanding developers who love building queries, and know exactly what they want. I guess my point is that there are a lot of definitions of knowing what you want, and how APIs can satisfy that, and not everyone will be in the same mindset as you are.
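
To make the comparison concrete, here is a quick sketch of the two styles side by side, using a completely made up API. The endpoint paths, query, and field names are hypothetical, just to show the difference between crafting a query and hitting a single URL that was designed around what you need.

```python
# Hypothetical comparison: asking for the same thing with a GraphQL query
# versus a single, purpose-built URL. Endpoints and fields are made up.
import requests

# GraphQL: the consumer has to know the schema and craft the query themselves.
query = """
{
  user(id: "123") {
    name
    orders(last: 5) { id total }
  }
}
"""
graphql_response = requests.post("https://api.example.com/graphql",
                                 json={"query": query})

# Plain web API: the provider did the design work and just gives me what I want.
rest_response = requests.get("https://api.example.com/users/123/recent-orders")
```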

Now I have to bury in this last paragraph the fact that I’m not anti-GraphQL. I am anti-GraphQL being sold as a replacement for simpler, resource centered web APIs (aka REST). So if you are going to come at me with the Y U HATE GraphQL response–don’t. I’m just trying to show GraphQL believers that they are leaving some people behind with a GraphQL only approach, unless you are 100% sure that ALL your API consumers know GraphQL, know your schema, and know what they want. GraphQL is an important tool in the API toolbox, but it isn’t the one size fits all tool it is often sold as. After much contemplation, and working on the ground within enterprise groups, I am trying to put GraphQL to work in more use cases where it makes sense, and before I can do this I have to push back on much of the misinformation that has been peddled about it, and undo the damage I’m seeing on the ground.


API Management In Real-Time Instead Of After The Media Shitstorm

I have been studying API management for eight years now. I’ve spent a lot of time understanding the approach of leading API providers, and the services and tools put out there by API service providers from 3Scale to Tyk, and how the cloud service providers like AWS are baking API management into their clouds. API management isn’t the most exciting aspect of doing APIs, but I feel it is one of the most important elements, delivering on the business and politics of APIs, which can often make or break a platform and the applications that depend on it.

Employing a common API management solution, and having a solid plan in place, takes time and investment. To do it properly takes lots of regular refinement, investment, and work. It is something that will often seem unnecessary–having to review applications, get to know developers, and consider how they fit into the bigger picture. Making it something that can easily be pushed aside for other tasks on a regular basis–until it is too late. This is frequently the case when your API access isn’t properly aligned with your business model, and there are no direct financial strings attached to new API users, or a line distinguishing active from inactive users, let alone governing what they can do with the data, content, and algorithms that are made accessible via APIs.

The API management layer is where the disconnect between API providers and consumers occurs. It is also where the connection, and meaningful engagement occurs when done right. In most cases API providers aren’t being malicious, they are just investing in the platform as their leadership has directed, and acting upon the stage their business model has set into motion. If your business model is advertising, then revenue is not directly connected to your API consumption, and you just turn on the API faucet to let in as many consumers as you can possibly attract. Your business model doesn’t require that you play gatekeeper, qualify, or get to know your API developers, or the applications they are developing. This is what we’ve seen with Facebook, Twitter, and other platforms who are experiencing challenges at the API management layer lately, due to a lack of management over the last 5+ years.

There was no incentive for Facebook or Twitter to review applications, audit their consumption, or get to know their usage patterns. The business model is an indirect one, so investment at the API management layer is not a priority. They only respond to situations once it begins to hurt their image, and potentially rises up to begin to hurt their bottom line. The problem with this type of behavior is that other API providers see this and think it is a problem with doing APIs, and do not see it as a problem with not doing API management. They do not see that having a business model which is sensibly connected to your API usage, and a team who is actively managing and engaging with your API consumers, is how you deal with these challenges. If you manage your API properly, you are dealing with negative situations early on, as opposed to waiting until there is a media shitstorm before you start reining in API consumption on your platform.

Sensible API management is something I’ve been writing about for eight years. It is something I will continue to write about for many more years, helping companies understand the importance of doing it correctly, and being proactive rather than reactive. I’ll also be regularly writing about the subject to help API consumers understand the dangers of using platforms that do not have a clear business model, as it is usually a sign of this API management disconnect I’m talking about. It is something that will ultimately cause problems down the road, and contribute to platform instability, and APIs potentially being limited or shuttered as part of these reactionary responses we are seeing from Facebook and Twitter currently. I know that my views of API management are out of sync with popular notions of how you incentivize exponential growth on a platform, but my views are in alignment with healthy API ecosystems, successful API consumers, and reliable API providers who are around for the long term.


Synthetic Healthcare Records For Your API Using Synthea

I have been working on several fronts to help with API efforts at the Department of Veterans Affairs (VA) this year, and one of them is helping quantify the deployment of a lab API environment for the platform. The VA doesn’t want it called a sandbox, so they are calling it a lab, but the idea is to provide an environment where developers can work with APIs and see data just like they would in a live environment, without actually having access to live patient data before their applications have been reviewed and proven to meet requirements.

One of the projects being used to help deliver data within this environment is called Synthea, providing the virtualized data that will be made available through the VA labs API–here is the description of what they do from their website:

Synthea is an open-source, synthetic patient generator that models the medical history of synthetic patients. Our mission is to provide high-quality, synthetic, realistic but not real, patient data and associated health records covering every aspect of healthcare. The resulting data is free from cost, privacy, and security restrictions, enabling research with Health IT data that is otherwise legally or practically unavailable.

Synthea data contains a complete medical history, including medications, allergies, medical encounters, and social determinants of health, providing data that developers can use without concern for legal or privacy restrictions, delivered in a variety of data standards, including HL7 FHIR, C-CDA, and CSV. Perfect for loading up into sandbox and lab API environments, allowing developers to safely play around with building healthcare applications, without actually touching production patient data.
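
As a rough sketch of how this plays out in practice, here is what loading Synthea output into a lab environment might look like. The output directory and the lab API base URL are assumptions on my part; the only grounded piece is that Synthea can generate FHIR bundles as JSON, which most FHIR servers will accept as a transaction POSTed to their base URL.

```python
# Sketch: loading Synthea-generated FHIR bundles into a lab API environment.
# The output directory and lab API endpoint are assumptions for illustration.
import json
import pathlib
import requests

LAB_API = "https://lab-api.example.gov/fhir"       # hypothetical lab environment
OUTPUT_DIR = pathlib.Path("synthea/output/fhir")   # adjust to your Synthea output

for bundle_file in OUTPUT_DIR.glob("*.json"):
    bundle = json.loads(bundle_file.read_text())
    # FHIR servers generally accept a transaction bundle POSTed to the base URL
    resp = requests.post(LAB_API, json=bundle,
                         headers={"Content-Type": "application/fhir+json"})
    print(bundle_file.name, resp.status_code)
```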

I’ve been looking for solutions like this for other industries. Synthea even has a patient data generator available on Github, which is something I’d love to see for every industry. Sandbox and lab environments should be the default for any API, especially APIs operating within heavily regulated industries. I think Synthea provides a pretty compelling model for the virtualization of API data, and I will be referencing it as part of my work in hopes of incentivizing someone to fork it and use it to provide something we can use as part of any API implementation.


Facebook And Twitter Only Now Beginning To Police Their API Applications

I’ve been reading about all the work Facebook and Twitter have been doing over the last couple of weeks to begin asserting more control over their API applications. I’m not talking about the deprecation of APIs, that is a separate post. I’m focusing on them reviewing the applications that have access to their APIs, and shutting off access for the ones who aren’t adding value to the platform, or are violating the terms of service. Doing the hard work to maintain a level of quality on the platform, which is something they should have been doing all along.

I don’t want to diminish the importance of the work they are doing, but it really is something that should have been done along the way–not just when something goes wrong. This kind of behavior really sets the wrong tone across the API sector, and people tend to focus on the thing that went wrong, rather than the best practices of what you should be doing to maintain quality across API operations. Other API providers will hesitate to launch public APIs because they won’t want to experience the same repercussions as Facebook and Twitter have, completely overlooking the fact that you can have public APIs, and maintain control along the way. Setting the wrong precedent for API providers to emulate, and damaging the overall reputation of operating public APIs.

Facebook and Twitter have both had the tools all along to police the applications using their APIs. The problem is that the incentive to do so, and to prioritize these efforts, isn’t there, due to an imbalance with their business model, and a lack of diversity in their leadership. When you have a bunch of white dudes with a libertarian ethos pushing a company towards profitability with an advertising driven business model, investing in quality control at the API management layer just isn’t a priority. You want as many applications, users, and activity as you possibly can, and when you don’t see the abuse, harassment, and other illnesses, there really is no problem from your vantage point. That is, until you get called out in the press, or are forced to testify in front of congress. The reason us white dudes get away with this is that there are no repercussions; we just get to ignore it until it becomes a problem, apologize, perform a little bit to show we care, and wait until the next problem occurs.

This is the wrong API model to put out there. API providers need to see the benefits of properly reviewing applications that want access to their APIs, and the value of setting a higher bar for how applications use the API. There should be regular reviews of active applications, and audits of how they are accessing, storing, and putting resources to work. This isn’t easy work, or inexpensive to do properly. It isn’t something you can put off until you get in trouble. It is something that should be done from the beginning, and conducted regularly, as part of the operations of a well funded team. You can have public APIs for a platform, and avoid privacy, security, and other shit-shows. If you need an example of doing it well, look at Slack, who has a public API that is successful, even with a high level of bot automation, but somehow manages to stay out of the spotlight for doing dumb things. It is because their API management practices are in better alignment with their business model–the incentives are there.

For the next 3-5 years I’m going to have to hear from companies who aren’t doing public APIs, because they don’t want to make the same mistakes as Facebook and Twitter. All because Facebook and Twitter have been able to get away with such bad behavior for so long, avoid doing the hard work of managing their API platforms, and receive so much bad press. All in the name of growth and profits at all cost. Now, I’m going to have to write a post every six months showcasing Facebook and Twitter as pioneers of how NOT to run your platform, explaining the importance of healthy API management practices, and investing in your API teams so they have the resources to do it properly. I’d rather have positive role models to showcase than poorly behaved ones that force me to work overtime to change perceptions and alter API providers’ behavior. As an API community let’s learn from what has happened, invest properly in our API management layers, properly screen and get to know who is building applications on our resources, and regularly tune into and audit their behavior. Yes, it takes more investment, time, and resources, but in the end we’ll all be better off for it.


A README For Your Microservice Github Repository

I have several projects right now that need a baseline for what is expected of microservices developers when it comes to the README for their Github repository. Each microservice should be a self-contained entity, with everything needed to operate the service within a single Github repository. Making the README the front door for the service, and something that anyone engaging with the service will depend on to help them understand what the service does, and where to get at anything needed to operate it.

Here is a general outline of the elements that should be present in a README for each microservice, providing as much of an overview as possible for each service:

  • Title - A concise title for the service that fits the pattern identified and in use across all services.
  • Description - Less than 500 words that describe what a service delivers, providing an informative, descriptive, and comprehensive overview of the value a service brings to the table.
  • Documentation - Links to any documentation for the service including any machine readable definitions like an OpenAPI definition or Postman Collection, as well as any human readable documentation generated from definitions, or hand crafted and published as part of the repository.
  • Requirements - An outline of the other services, tooling, and libraries needed to make a service operate, providing a complete list of EVERYTHING required to work properly.
  • Setup - A step by step outline from start to finish of what is needed to setup and operate a service, providing as much detail as possible so that any new user is able to get up and running with a service.
  • Testing - Providing details and instructions for mocking, monitoring, and testing a service, including any services or tools used, as well as links or reports that are part of active testing for a service.
  • Configuration - An outline of all configuration and environmental variables that can be adjusted or customized as part of service operations, including as much detail on default values, or options that would produce different known results for a service.
  • Road Map - An outline broken into three groups, 1) planned additions, 2) current issues, 3) change log. Providing a simple, descriptive outline of the road map for a service with links to any Github issues that support what the plan is for a service.
  • Discussion - A list of relevant discussions regarding a service with title, details, and any links to relevant Github issues, blog posts, or other updates that tell a story behind the work that has gone into a service.
  • Owner - The name, title, email, phone, or other relevant contact information for the owner, or owners of a service, providing anyone with the information they need to reach out to the person who is responsible for the service.

That represents the ten things each service should contain in its README, providing what is needed for ANYONE to understand what a service does. At any point in time someone should be able to land on the README for a service and be able to understand what is happening without having to reach out to the owner. This is essential for delivering individual services, but also for the delivery of services at scale across tens, or hundreds of services. If you want to know what a service does, or what the team behind the service is thinking, you just have to READ the README to get EVERYTHING you need.
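
Since this is meant to hold across tens or hundreds of repositories, it is also the kind of thing you can check automatically. Here is a minimal sketch of a check script, assuming the README headings simply contain the ten section names from the outline above–adjust the matching to however your teams actually title their sections.

```python
# Minimal sketch: check that a microservice README mentions the ten expected
# sections from the outline above. Assumes headings contain these exact words.
import pathlib
import sys

REQUIRED_SECTIONS = [
    "Title", "Description", "Documentation", "Requirements", "Setup",
    "Testing", "Configuration", "Road Map", "Discussion", "Owner",
]

def missing_sections(path="README.md"):
    text = pathlib.Path(path).read_text().lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]

if __name__ == "__main__":
    missing = missing_sections(sys.argv[1] if len(sys.argv) > 1 else "README.md")
    if missing:
        print("README is missing sections:", ", ".join(missing))
    else:
        print("README covers all ten sections.")
```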

It is important to think outside your bubble when crafting and maintaining a README for each microservice. If it is not up to date, or lacking relevant details, it means the service will potentially be out of sync with other services, and introduce problems into the ecosystem. The README is a simple, yet crucial aspect of delivering services, and it should be something any service stakeholder can read and understand without asking questions. Every service owner should be stepping up to the plate and owning this aspect of their service development, professionally owning this representation of what their service delivers. In a microservices world each service doesn’t hide in the shadows, it puts its best foot forward and proudly articulates the value it delivers, or it should be deprecated and go away.


Venture Capital Obfuscating The Opportunities For Value Exchange At The API Management Level

When I talk to government agencies, non-profit organizations, or organizations doing internal APIs I regularly receive pushback about the information I share regarding API management, service composition, rate limiting, access tiers, and the other essential ingredients of the business of APIs. People tell me they aren’t selling access to their APIs, so they don’t need to be measuring APIs like publicly available commercial APIs do. They don’t need free, pro, and other tiers of access, and they won’t be invoicing their consumers like the majority of APIs I showcase from the API universe.

This phenomenon is just one of the many negative side effects of venture capital, and Silicon Valley startups, driving the conversation around APIs. Most people can’t even see the value exchange that potentially happens at the API management layer, and only see one-way commercial revenue. When in reality API management is all about measuring and developing an awareness of the value exchanged around all of an organization’s digital assets. Charging consumers for access, and invoicing them, is just one of the many ways in which value exchange can be measured, quantified, and reported upon. There are so many other ways in which value can be exchanged, incentivized, measured, and reported upon, but many people have tuned into a single dominant narrative, and are completely unaware there is more than one way of doing things.

Each API request can be recorded at the API management level, whether it originated internally, from partners, or through public applications. Each resource path, HTTP verb (GET, POST, PUT, PATCH, DELETE), and application and end-user token can be recorded. The value generated by ANY API resource, and the exchange with consumers, can be measured. Charging for GETs is such a narrow, and unimaginative view of what is possible. Just because many tech startups are interested in charging to GET data, and extracting value by encouraging unrestrained POST, PUT, and PATCH, doesn’t mean this is how all API platforms should operate. It doesn’t reflect the value exchange many organizations are looking to facilitate, and honestly can become dangerous if not balanced with healthy privacy, security, and an ethical business model.
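
To show how little is actually required to start measuring this, here is a sketch of the kind of usage record an API management layer can capture for every single call, regardless of whether anyone is ever invoiced. The field names are my own assumptions, not any particular vendor’s schema.

```python
# Sketch of the kind of record an API management layer can capture for every
# request -- internal, partner, or public -- so value exchange can be measured
# and reported upon, whether or not anyone is ever invoiced. Field names are
# illustrative, not tied to any specific API management vendor.
import time

def record_api_call(path, verb, app_token, user_token=None,
                    status=200, direction="public"):
    """Build a usage record for a single API call."""
    return {
        "timestamp": time.time(),
        "path": path,              # which resource was touched
        "verb": verb,              # GET, POST, PUT, PATCH, DELETE
        "app_token": app_token,    # which application made the call
        "user_token": user_token,  # which end user, if any
        "status": status,
        "direction": direction,    # internal, partner, or public
    }

# A partner writing data back is a value exchange worth measuring,
# even though nobody is charging anyone for GETs.
record = record_api_call("/inspections", "POST", app_token="abc123",
                         user_token="user789", direction="partner")
print(record)
```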

Service composition, policies, and analysis at the API management layer is one of the most important areas of API operations that organizations should be investing in and maturing. Not only are some organizations I meet not investing heavily in this area, they are completely unaware of the best practices, and solutions provided by the API management tooling they’ve put into place. I blame startup API culture for this. Investors have had a significant amount of control over what API startups present to their customers, and thus have controlled the narrative around how you do API management, and deliver the business of your APIs. I’ve had conversations with API management company leadership about measuring value exchange at government agencies, and it was a completely new concept to them. Something they hadn’t ever considered, because they were held captive by this dominant narrative.

I’m hoping with APIs moving more mainstream, and API management being baked into the cloud, we can begin moving beyond the damage that has been done by this narrative. I’m going to be telling more stories about what is possible at the API management layer, and looking for innovative approaches to showcase in my work. Hopefully helping everyone get back to more meaningful value exchange around our digital assets, rather than the one way exploitation that has been occurring for the last decade, creating monsters like Twitter and Facebook, and making venture capital folks wealthier. Allowing us to just get down to doing real business, balanced value exchanges between platforms, developers, and users. Shifting the narrative from one way of doing business that is just a cover for exploitative business models, and extracting as much value as you can with APIs and giving nothing back in return.


Existing Processes And Culture Grinding Down New API Efforts

I’m always amazed at what large organizations can achieve. They definitely have more resources, more momentum, and do way more than any single individual can achieve. However, as an individual actor I’m also always amazed at how large organization culture seems to work to defend itself from change, and actively works to grind down the API efforts I’m involved in. Even before I walk in the front door there are processes in place that work to ensure any effective API strategy gets ground down so that it won’t be as effective.

NDAs, intellectual property, background checks, schedules, logistics, leadership buy-in, physical security, legacy incidents, software, processes all seem to begin doing their work. Grinding down on anything new. I would ask folks to leave their baggage at home, but in most cases they aren’t even aware of the number, or the size of the suitcases they travel with. This is how things are done! This is the way things are! This is our standard way of operating! Even when we just want to have a conversation. Even when I’m just looking to share some knowledge with you and your team. I haven’t even begun talking about getting paid, I just want to share some wisdom with y’all, and help your team figure all of this out.

The machine isn’t setup for this type of knowledge sharing. This is why APIs can be so hard to achieve for some. The machine is setup to extract, aggregate, store, and control value–not share it. So when the machine encounters another entity that just wants to share, no strings attached, it doesn’t know what to do, because that isn’t what it was designed to do. Even though many have made the conscious effort to open up doors and invite people in to share knowledge, innovate, and explore, the vacuum already in place will often consume anyone involved, sucking the oxygen out of the room, and oxidizing anything new, pulling nutrients and value into the central system. It is what the machine was built to do, and it is hard to reverse the course of the machine, no matter how many holes you poke into the exterior.

This is why so many APIs become extraction engines, rather than delivering resources to applications and users as was intended and promised. It is why Facebook is in the situation they are in. It is why so many API efforts will fail, not deliver as expected, or become harmful to developers and end-users. I watch many well meaning folks try to do APIs, completely unaware of the large machine they are in service of. I don’t want to be in the business of telling people not to do APIs, but in some of the conversations I’m having I feel like I should. APIs are not good by default. Nor are they bad. They are simply a reflection of your organization’s existing culture and practices. If things are a mess internally, or your organization is primarily in the value extraction business, then APIs probably aren’t the best option. I think you’ll find they just create more problems for you, for developers, and for end-users.


Balancing Virtual API Evangelism With In-Person API Evangelism

I haven’t published any stories this week on API Evangelist. I’ve been on the road in Lyon and Paris, France. Giving talks, and conducting workshops about my API lifecycle and governance work. Often times when I go on the road I try to pre-populate the blog with stories, but I’ve been so busy lately with travel and projects that I just didn’t have the time. Resulting in the blog not resembling its usual stream of API rants and stories. While this bothers me, I understand the balance between virtual API evangelism and the need to be present in-person from time to time.

While I feel like I can reach more people virtually, I feel like at least 30% of the time I should be present in-person. It helps reinforce what I know, and allows me to evolve my work, and learn new things by exercising my API knowledge on the ground in the trenches within existing organizations, and at conferences, Meetups, and workshops. While exhausting, and often costly, in-person gatherings help build relationships, allowing me to establish deeper connections with people. Helping my work penetrate the thick bubbles that exist around us in our personal and professional lives. I’m always exhausted after traveling and shaking so many hands, but ultimately it is worth it if I do it in a thoughtful and logical way.

In-person experiences are valuable, but I still feel that a consistent, smart, and syndicated virtual presence can reach more people, and make a more significant impact. It costs a lot less than traveling and attending conferences too. A robust presence isn’t easy to set up, and takes time to establish, but once in motion it can be the most effective way to build an audience. After eight years of doing API Evangelist, with some of it exclusively operating on the road, I can say that the sustained virtual presence will have the biggest impact, and provide a much more evergreen exposure that will keep on producing results even when you aren’t working. I can take a week off like I just did, and my traffic numbers keep growing, as long as I do not take too much time off, and neglect my work for more than a week or two–even when I took three months off a while back, my numbers and presence weren’t damaged too badly.

In the end, it is all a balance. After a road trip I’m always happy to be home, and feel like I want to say no to other invitations. I feel like I could do so much more if I just had uninterrupted time at home doing the work I feel matters. While this is true to some degree, I definitely grow and learn a lot by talking with people at companies, organizations, institutions, and government agencies, as well as connecting at the right conferences and Meetups. It is a balance I have to assess from month to month, and I have to put serious thought into which events and in-person engagements will make a difference. It is something I don’t think there is a perfect formula for, and is something that evolves, flows, and sometimes dries up depending on the season. It is something where I just have to keep trusting my gut, while also forcing myself out of my comfort zone as much as I can possibly handle.


Delivering Large API Responses As Efficiently As Possible

I’m participating in a meeting today where one of the agenda items will be discussing the different ways in which the team can deal with increasingly large API responses. I don’t feel there is a perfect response to this question, and the answer should depend on a variety of considerations, but I wanted to think through some of the possibilities, and make sure the answers were on the tip of my tongue. It helps to exercise these things regularly in my storytelling so when I need to recall them, they are just beneath the surface, ready to bring forward.

Reduce Size With Pagination
I’d say the most common approach to sending over large amounts of data is to break things down into smaller chunks, based upon rows being sent over, and paginate the responses. Send over a default amount (i.e. 10, 25, 100), and require consumers to either ask for a larger amount, or request each additional page of results in a separate API request. Provide consumers with a running count of how many pages there are, what the current page is, and the ability to paginate forward and backward through the pages with each request. It is common to send pagination parameters through the query parameters, but some providers prefer to handle it through headers (i.e. Github).
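
Here is a minimal sketch of what walking a paginated API looks like from the consumer side, assuming page and per_page query parameters–the parameter names, and whether pagination details live in the body or in headers, vary by provider.

```python
# Sketch: walking a paginated API using page / per_page query parameters.
# Parameter names are assumptions -- providers like Github put next/prev
# links in the Link response header instead of the body.
import requests

def fetch_all(url, per_page=100):
    page = 1
    while True:
        resp = requests.get(url, params={"page": page, "per_page": per_page})
        resp.raise_for_status()
        items = resp.json()
        if not items:
            break          # no more pages to fetch
        yield from items
        page += 1
```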

Organizing Using Hypermedia
Another approach, which usually augments and extends pagination, is using common hypermedia formats for messaging. Many API providers use hypermedia media types as a message format because they allow for easy linking between pages of results, as well as providing the relevant parameters in the body of the response. Additionally, hypermedia allows you to intelligently break down large responses into different collections, beyond just simple pagination, then use the linking that is native to hypermedia to provide meaningful links with relations to the different collections of potential responses. Allowing API consumers to obtain all, or just the portions of information they are looking for.
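
A small sketch of the consumer side, assuming a HAL-style response where the server embeds the items and tells the client where the next collection lives–the exact media type and link structure will depend on which hypermedia format you adopt.

```python
# Sketch: following hypermedia links instead of computing page numbers on the
# client. Assumes a HAL-style response with _embedded items and _links.next.
import requests

def walk_collection(url):
    while url:
        resp = requests.get(url, headers={"Accept": "application/hal+json"})
        resp.raise_for_status()
        doc = resp.json()
        yield from doc.get("_embedded", {}).get("items", [])
        # The server says where to go next -- no client-side page math.
        url = doc.get("_links", {}).get("next", {}).get("href")
```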

Exactly What They Need With Schema Filtering
Another way to break things down, while putting the control into the API consumers’ hands, is to allow for schema filtering. Providing parameters and other ways for API consumers to dictate which fields they would like returned with each API response. Reducing or expanding the schema that gets returned based upon which fields are selected by the consumer. This ranges from simpler approaches using a fields parameter, all the way to more holistic approaches using query languages like GraphQL, which let you describe exactly which response you want returned via a parameter or the body of each API request. Providing a very consumer-centric approach to ensuring only what is needed is transmitted via each API call.
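
On the simpler end of that spectrum, a fields parameter can be enough–here is a sketch, with the endpoint and field names made up for illustration.

```python
# Sketch: a fields parameter that lets the consumer trim the response down to
# just what they need. Endpoint and field names are hypothetical.
import requests

resp = requests.get("https://api.example.com/patients",
                    params={"fields": "id,name,last_visit"})
# Only the requested fields come back, keeping the payload small.
print(resp.json())
```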

Defining Specific Responses Using The Prefer Header
Another lesser known approach is to use the Prefer header to allow API consumers to request which representation of a resource they would prefer, based upon some pre-determined definitions. Each API request would pass in a value for the Prefer header, selecting definitions like simple, complete, or other variations of responses defined by the API provider. Keeping API responses based upon known schema and scopes defined by the API provider, but still allowing API consumers to select the type of response they would like to receive. Balancing the scope of API responses between the needs of API providers and API consumers.
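
The return preferences themselves come from RFC 7240; anything beyond that is whatever named representations the provider chooses to define. A quick sketch against a hypothetical API:

```python
# Sketch: asking for a predefined representation using the Prefer header.
# return=minimal and return=representation come from RFC 7240; the endpoint
# here is hypothetical.
import requests

minimal = requests.get("https://api.example.com/orders/123",
                       headers={"Prefer": "return=minimal"})
complete = requests.get("https://api.example.com/orders/123",
                        headers={"Prefer": "return=representation"})
```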

Using Caching To Make Responses More Efficient
Beyond breaking up API calls into different types of responses, we can start focusing on the performance of the API, and making sure HTTP caching is used. Keeping API responses as standardized as possible while leveraging CDN, web server, and HTTP caching to ensure that each response is cached as much as it makes sense. Ensuring that an API provider is thinking about, measuring, and responding to how frequently or infrequently data and content is changing. Helping make each API response as fast as it possibly can be by leveraging the web as a transport.
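
Conditional requests with ETags are one of the simplest ways to see this in action–a sketch against a hypothetical endpoint:

```python
# Sketch: using ETags so unchanged responses cost almost nothing to transfer.
import requests

url = "https://api.example.com/reports/monthly"
first = requests.get(url)
etag = first.headers.get("ETag")

# Send the ETag back on the next call; the server can answer 304 Not Modified
# with an empty body if nothing has changed since the last request.
second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
if second.status_code == 304:
    print("Cached copy is still good -- nothing was re-transferred.")
```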

More Efficiency Through Compression
Beyond caching, HTTP compression can be used to further reduce the size of API responses. DEFLATE and GZIP are the two most common approaches to compression, helping make sure API responses are as efficient as they possibly can be. Compressing each API response makes sure the fewest bytes possible are transmitted across the wire.
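
Most HTTP clients negotiate this for you, but it is worth verifying the server actually honors it for your largest responses–a tiny sketch:

```python
# Sketch: verifying compression is negotiated. requests sends Accept-Encoding
# and transparently decompresses, but checking Content-Encoding confirms the
# server is actually compressing the large responses you care about.
import requests

resp = requests.get("https://api.example.com/large-export",
                    headers={"Accept-Encoding": "gzip, deflate"})
print(resp.headers.get("Content-Encoding"))  # ideally "gzip"
```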

Breaking Things Down With Chunked Responses
Moving beyond breaking things down, and efficiency approaches using the web, we can start looking at different HTTP standards for making the transport of API responses more efficient. Chunked transfer encoding is one way to avoid sending everything in a single response, breaking the payload down into an appropriate number of chunks, and sending them in order. API consumers can make a request and receive large volumes of data in separate chunks that are reassembled on the client side. Leveraging HTTP to make sure large amounts of data are able to be received in smaller, more efficient pieces.
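
From the consumer side this usually just means reading the body incrementally rather than loading it all into memory–a sketch using a streaming request:

```python
# Sketch: consuming a large response incrementally instead of loading it all
# into memory. With stream=True, requests reads the body as it arrives, which
# pairs well with chunked transfer encoding on the server side.
import requests

with requests.get("https://api.example.com/large-export", stream=True) as resp:
    resp.raise_for_status()
    with open("export.json", "wb") as out:
        for chunk in resp.iter_content(chunk_size=65536):
            out.write(chunk)
```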

Switch To Providing More Streaming Responses
Beyond chunked responses, streaming over HTTP with standards like Server-Sent Events (SSE) can be used to deliver large volumes of data as streams. Allowing volumes of data and content to be streamed in logical series, with API consumers receiving them as individual updates in a continuous, long-running HTTP request. Consumers only receive the data that has been requested, as well as any incremental updates, changes, or other events that have been subscribed to as part of the establishment of the stream. Providing a much more real time approach to making sure large amounts of data can be sent as efficiently as possible to API consumers.
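
A bare-bones SSE consumer can be sketched with nothing more than a streaming HTTP request–a real client would also track event names and ids, and reconnect using Last-Event-ID.

```python
# Sketch: a minimal Server-Sent Events consumer over a streaming request.
# A production client would also handle event names, ids, and reconnects.
import requests

with requests.get("https://api.example.com/stream", stream=True,
                  headers={"Accept": "text/event-stream"}) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print("update:", line[len("data:"):].strip())
```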

Moving Forward With HTTP/2
Everything we’ve discussed until now leverages the HTTP/1.1 standard, but there are also increased efficiencies available with the HTTP/2 release of the standard. HTTP/2 delivers a number of improvements around caching and streaming, and handles the multiplexing of multiple messages over a single connection. Allowing for single, or bi-directional API requests and responses to exist simultaneously. When combined with existing approaches to pagination, hypermedia, and query languages, or with serialization formats like Protocol Buffers, further efficiency gains can be realized, while staying within the HTTP realm, but moving forward to use the latest version of the standard.
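
On the client side, moving to HTTP/2 can be as simple as swapping libraries–here is a sketch using httpx, which supports HTTP/2 when installed with its http2 extra; the endpoint is hypothetical.

```python
# Sketch: making requests over HTTP/2 using httpx (pip install "httpx[http2]").
# Multiple requests can then be multiplexed over a single connection.
import httpx

with httpx.Client(http2=True) as client:
    resp = client.get("https://api.example.com/resources")
    print(resp.http_version)  # "HTTP/2" when the server supports it
```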

This is just a summary look at the different ways to help deliver large amounts of data using APIs. Depending on the conversations I have today I may dive deeper into all of these approaches and provide more examples of how this can be done. I might do some round ups of other stories and tutorials I’ve curated on these subjects, aggregating the advice of other API leaders. I just wanted to spend a few moments thinking through the possibilities so I can facilitate some intelligent conversations around the different approaches out there. While also sharing with my readers, helping them understand what is possible when it comes to making large amounts of data available via APIs.


Machine Readable API Regions For Use At Discovery And Runtime

I wrote about Amazon’s Werner Vogels and his post considering the impact of cloud regions a couple of weeks back. I feel that his post captured an aspect of doing business in the cloud that isn’t discussed enough, and one that will continue to drive not just the business of APIs, but also increasingly the politics of APIs. Amidst increasing digital nationalism, and growing regulation of not just the pipes, but also the platforms, understanding where your APIs are operating, and what networks you are using, will become very important to doing business at a global scale.

It is an area I’m adding to the list of machine readable API definitions I’d like to add to the APIs.json stack. The goal with APIs.json is to provide a single index where we can link to all the essential building blocks of an API’s operations, with OpenAPI being the first URI, which provides a machine readable definition of the surface area of the API. Shortly after establishing the APIs.json specification, we also created API Commons, which is designed to be a machine readable specification for describing the licensing applied to an API, in response to the Oracle v Google API copyright case. Beyond that, there haven’t been many other machine readable resources, beyond some existing API driven solutions used as part of API operations like Github and Twitter. There are other API definitions like Postman Collections and API Blueprint that I reference, but they are in the same silo that OpenAPI operates within.

Most of the resources we link to are still human-centered URLs like documentation, pricing, terms of service, support, and other essential building blocks of API operations. However, the goal is to evolve as many of these as possible towards being more machine readable. I’d like to see pricing, terms of service, and aspects of support become machine readable, allowing them to become more automated and understood not just at discovery, but also at runtime. I’m envisioning that regions should be added to this list of currently human readable building blocks that should eventually become machine readable. I feel like we are going to need to make runtime decisions regarding API regions, and we will need major cloud providers like Amazon, Azure, and Google to describe their regions in a machine readable way–something that API providers can reflect in their own API definitions, depending on which regions they operate in.

At the application and client level, we are going to need to be able to quantify, articulate, and switch between different regions depending on the user, the type of resources being consumed, and the business being conducted. While this can continue being manual for a while, at some point we are going to need it to become machine readable so it can become part of the API discovery, as well as integration layers. I do not know what this machine readable schema will look like, but I’m sure it will be defined based upon what AWS, Azure, and Google are already up to. However, it will quickly need to become a standard that is owned by some governing body, and overseen by the community and not just the vendors. I just wanted to plant the seed, and it is something I’m hoping will grow over time, but I’m sure it will take 5-10 years before something takes root, based upon my experience with OpenAPI, APIs.json, and API Commons.
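
Just to plant that seed a little deeper, here is a purely hypothetical sketch of what a machine readable regions property might look like sitting alongside an APIs.json style index, and how a client could use it at runtime. None of this schema exists today–the property name, fields, and routing logic are all made up to illustrate the idea.

```python
# Purely hypothetical sketch: a machine readable "Regions" property alongside
# an APIs.json style index, and a runtime lookup against it. No such schema
# exists yet -- names and fields are invented to illustrate the idea.
api_index_entry = {
    "name": "Example Records API",
    "properties": [
        {"type": "OpenAPI", "url": "https://example.com/openapi.yaml"},
        {"type": "Regions", "data": [
            {"provider": "aws", "region": "us-east-1", "jurisdiction": "US"},
            {"provider": "aws", "region": "eu-west-1", "jurisdiction": "EU"},
        ]},
    ],
}

def pick_base_url(entry, required_jurisdiction):
    """Choose a region at runtime based on where the user's data must stay."""
    for prop in entry["properties"]:
        if prop["type"] == "Regions":
            for region in prop["data"]:
                if region["jurisdiction"] == required_jurisdiction:
                    return f"https://{region['region']}.api.example.com"
    return None

print(pick_base_url(api_index_entry, "EU"))
```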


REST and gRPC Side by Side In New Google Endpoints Documentation

Google has been really moving forward with their development of, and storytelling around, gRPC. Their high speed approach to doing APIs uses HTTP/2 as a transport, and protocol buffers (ProtoBuf) as its serialized message format. Even with all this forward motion they aren’t leaving everyone doing basic web APIs behind, and are actively supporting both approaches across all new Google APIs, as well as in their services and tooling for deploying APIs in the Google Cloud–supporting two-speed APIs side by side, across their platform.

When you are using Google Cloud Endpoints to deploy and manage your APIs, you can choose to offer a more RESTful edition, as well as a more advanced gRPC edition. They’ve continued to support this approach across their service features and tooling, now extending it to documenting your APIs. As part of their rollout of a supporting API portal and documentation for your Google Cloud Endpoints, you can automatically document both flavors of your APIs. Making a strong case for considering offering both types of APIs, depending on the types of use cases you are looking to solve, and the types of developers you are catering to.

In my experience, simpler web APIs are ideal for people just getting going on their API journey, and will accommodate the evolution of 60-75% of the API deployment needs out there, while some organizations further along in their API journey, and those providing B2B solutions, will potentially need higher performance, higher volume gRPC APIs. Making what Google is offering with their cloud API infrastructure a pretty compelling option for helping mature API providers shift gears, or even helping folks understand that they’ll be able to shift gears down the road. You get an API deployment and management solution that simultaneously supports both speeds, along with the other supporting features, services, and tooling, like documentation, delivered at both speeds.

Normally I am pretty skeptical of single provider / community approaches to delivering alternative approaches to APIs. It is one of the reasons I still hold reservations around GraphQL. However with Google and gRPC, they have put HTTP/2 to work, and the messaging format is open source. While the approach is definitely all Google, they have embraced the web, which I don’t see out of the Facebook led GraphQL community. I still question Google’s motives, not because they are up to anything shady, but I’m just skeptical of EVERY company’s motivations when it comes to APIs. After eight years of doing this I don’t trust anyone to not be completely self serving. However, I’ve been tuned into gRPC for some time now and I haven’t seen any signs that make me nervous, and they keep delivering beneficial features like they did with this documentation, keeping me writing stories and showcasing what they are doing.


OpenAPI Makes Me Feel Like I Have A Handle On What An API Does

APIs are hard to talk about across large groups of people, while ensuring everyone is on the same page. APIs don’t have much of a visual side to them that provides a tangible reference for everyone to use by default. This is where OpenAPI comes in, helping us “see” an API, and establish a human and machine readable document that we can produce, pass around, and use as a reference for what an API does. OpenAPI makes me feel like I have a handle on what an API does, in a way that I can actually have a conversation around with other people–without it, things are much fuzzier.

Many folks associate OpenAPI with documentation, code generation, or some other tooling or service that uses the specification–putting their emphasis on the tangible thing, over the definition. While working on projects, I spend a lot of time educating folks about what OpenAPI is, what it is not, and how it can facilitate communication across teams and API stakeholders. While this work can be time consuming, and a little frustrating sometimes, it is worth it. A little education and OpenAPI adoption can go a long way towards moving projects along, because (almost) everyone is able to be actively involved in moving API operations forward.

Without OpenAPI it is hard to consistently design API paths, as well as articulate the headers, parameters, status codes, and responses being applied across many APIs, and teams. If I ask, “are we using the sort parameter across APIs?”, and there is no OpenAPI, I can’t get an immediate or reliable answer. Making OpenAPI a pretty critical conversation and collaboration driver across the API projects I’m working on. I am not even getting to the part where we are deploying, managing, documenting, or testing APIs. I’m just talking about APIs in general, and making sure everyone involved in a meeting is on the same page when we are talking about one or many APIs.

Almost every API I’m working on starts as a basic OpenAPI, even with just a title and description, published to a Github, Gitlab, Bitbucket, or other repository. Then I usually add schema definitions, which drive conversations about how the schema will be accessed, as we add paths, parameters, and other details of the requests and responses for each API. With OpenAPI acting as the guide throughout the process, ensuring we are all on the same page, and that all stakeholders, as well as myself, have a handle on what is going on with each API being developed. Without OpenAPI, we can never quite be sure we are all talking about the same thing, let alone have a machine readable definition that we can all take back to our corners to get work done.
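
For what it is worth, the starting point really is that small. Here is a sketch of the kind of bare-bones definition a project can begin with, generated from Python and dumped to YAML so it can live in the repo and grow as the conversation does–the project name and description are placeholders.

```python
# Sketch: the kind of bare-bones OpenAPI a project can start with -- just a
# title, description, and empty paths -- written out as YAML for the repo.
# Requires PyYAML (pip install pyyaml). Title and description are placeholders.
import yaml

starter = {
    "openapi": "3.0.0",
    "info": {
        "title": "Field Inspections API",   # hypothetical project name
        "description": "Provides access to field inspection reports.",
        "version": "0.1.0",
    },
    "paths": {},                       # filled in as the schema gets discussed
    "components": {"schemas": {}},
}

with open("openapi.yaml", "w") as f:
    yaml.safe_dump(starter, f, sort_keys=False)
```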


My Response To The VA Microconsulting Work Statement On API Outreach

The Department of Veterans Affairs (VA) is listening to my advice around how to execute their API strategy, adopting a micro-approach to not just delivering services, but also to the business of moving the platform forward at the federal agency. I’ve responded to round one and round two of the RFIs, and now they have published a handful of work statements on Github, so I wanted to provide an official response, share my thoughts on each of the work statements, and actually bid for the work.

First, Here is The VA Background
The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem.

Next, What The VA Considers The Play
As the Lighthouse Product Owner, I must have a repeatable process to communicate with internal and external stakeholders the availability of existing, new, and future APIs so that the information can be consumed for the benefit of the Veteran. This outreach/evangelism effort may be different depending on the API type.

Then, What The VA Considers The Deliverable
A defined and repeatable strategy/process to evangelize existing, new, and future APIs to VA’s stakeholders as products. This may be in the form of charts, graphics, narrative, or combination thereof. Ultimately, VA wants the best format to accurately communicate the process/strategy. This strategy/process may be unique to each type of API.

Here Is My Official Response To The Statement
API evangelism is always something that is more about people than it is about the technology, and should always be something that speaks not just to developers, but is inclusive of all the stakeholders involved in, and being served by, a platform. Evangelism is all about striking the right balance in storytelling about what is happening across a platform, providing a perfect blend of sales and marketing that expands the reach of the platform, while also properly showcasing the technical value APIs bring to the table.

For the scope of this micro engagement around the VA API platform, I recommend focusing on delivering a handful of initiatives involved with getting the word out about the API platform, while also encouraging feedback, all in an easily measurable way:

  • Blogging - No better way to get the word out than an active, informative, and relevant blog with an RSS feed that can be subscribed to.
  • Twitter - Augmenting the platform blog with a Twitter account that can help amplify platform storytelling in a way that can directly engage with users.
  • Github - Make sure the platform is publishing code, content, and other resources to Github repositories for the platform, and engaging with the Github community around these repos.
  • Newsletter - Publishing a weekly newsletter that includes blog posts, other relevant links to things happening on a platform, as well as in the wider community.
  • Feedback Loop - Aggregating, responding to, supporting, and organizing email, Twitter, Github, and other feedback from the platform, reporting back to stakeholders, as well as incorporating it into regular storytelling.

For the scope of this project, there really isn’t room to do much else. Daily storytelling, Twitter, and Github engagement, as well as a weekly newsletter would absorb the scope of the project for a 30-60 day period. There would be no completion date for this type of work, as it is something that should go on in perpetuity. However, the number of blog posts, tweets, repos, followers, likes, subscribers, and other metrics could be tracked and reported upon weekly providing a clear definition of what work has been accomplished, and the value from the overall effort over any timeframe.

Evangelism isn’t rocket science. It just takes someone who cares about the platform’s mission, and the developers and end-users being served by the platform. With a little passion, and technical curiosity, a platform can become alive with activity. A little search engine and social media activity can go a long way towards bringing in new users, keeping existing users engaged, and encouraging increased levels of activity, internally and externally, around platform operations. With simple KPIs in place, and a simple reporting process in operation, the evangelism around a platform can be executed, quantified, and scaled across several individuals, as well as rolling teams of contractors.

Some Closing Thoughts On This Project
That concludes my response to the work statement. Evangelism is something I know. I’ve been studying and doing it for 8 years straight. Simple, consistent, informative evangelism is why I’m in a position to respond to this project out of the VA. It is how many of these ideas have been planted at the VA, through storytelling I’ve been investing in since I worked at the VA in 2013. A single page response doesn’t allow much space to cover all the details, but an active blog, Twitter, Github, and newsletter with a structured feedback loop in place can provide the proper scaffolding needed to not just execute a single cycle of evangelism, but guide many, hopefully rolling, contracts to deliver evangelism for the platform in an ongoing fashion. Hopefully bringing fresh ideas, individuals, storytelling, and outreach to the platform. Which can significantly amplify awareness around the APIs being exposed by the agency, and help the platform better deliver on the mission to serve veterans.


My Response To The VA Microconsulting Work Statement On API Landscape Analysis and Near-term Roadmap

The Department of Veterans Affairs (VA) is listening to my advice around how to execute their API strategy, adopting a micro-approach to not just delivering services, but also to the business of moving the platform forward at the federal agency. I’ve responded to round one and round two of the RFIs, and now they have published a handful of work statements on Github, so I wanted to provide an official response, share my thoughts on each of the work statements, and actually bid for the work.

First, Here is The VA Background
The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem.

Next, What The VA Considers The Play
As the VA Lighthouse Product Owner I have identified two repositories of APIs and publicly available datasets containing information with varying degrees of usefulness to the public. If there are other publicly facing datasets or APIs that would be useful they can be included. I need an evaluation performed to identify and roadmap the five best candidates for public facing APIs.

The two known sources of data are:

  • Data.gov - https://catalog.data.gov/organization/va-gov and www.va.gov/data
  • Vets.gov - https://github.com/department-of-veterans-affairs/vets-api

Then, What The VA Considers The Deliverable
  • Rubric/logical analysis for determining prioritization of APIs
  • Detail surrounding how the top five were prioritized based on the rubric/analysis (Not to exceed 2 pages unless approved by the Government)
  • Deliverables shall be submitted to VA’s GitHub Repo.

Here Is My Official Response To The Statement
To describe my approach to this work statement, and describe what I’ve already been doing in this area for the last five years, I’m going to go with a more logical analysis, as opposed to a rubric. There is already too much structured data in all of these efforts, and I’m a big fan of making the process more human. I’ve been working to identify the most valuable data, content, and other resources across the VA, as well as other federal agencies, since I worked in DC back in 2013, something that has inspired my Knight Foundation funded work with Adopta.Agency.

Always Start With Low Hanging Fruit
When it comes to where you should start with understanding what data and content should be turned into APIs, you always begin with the public website for a government agency. If the data and content is already published to the website, it means that there has been demand for accessing the information, and it has already been through a rigorous process to make it publicly available. My well developed low hanging fruit process involves spidering all VA web properties, identifying any page containing links to XML, CSV, and other machine readable formats, as well as any page with a table containing over 25 rows, and any forms providing search, filtering, and access to data. This low hanging fruit work will help identify potentially valuable sources of data beyond what is already available at va.gov/data, data.gov, and on Github, expanding the catalog of APIs that we should be targeting for publishing.
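
To give a feel for what that spidering pass involves, here is a minimal sketch of profiling a single page–noting links to machine readable files and flagging large HTML tables. It assumes requests and BeautifulSoup; a real crawl would also follow links across properties, respect robots.txt, and record each finding to its own Github repository.

```python
# Minimal sketch of the "low hanging fruit" pass: fetch a page, note links to
# machine readable files, and flag any HTML table with more than 25 rows.
# A real crawl would follow links, respect robots.txt, and log each finding.
import requests
from bs4 import BeautifulSoup

MACHINE_READABLE = (".csv", ".xml", ".json", ".xls", ".xlsx")

def profile_page(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    data_links = [a["href"] for a in soup.find_all("a", href=True)
                  if a["href"].lower().endswith(MACHINE_READABLE)]
    big_tables = [t for t in soup.find_all("table")
                  if len(t.find_all("tr")) > 25]
    return {"url": url, "data_links": data_links, "large_tables": len(big_tables)}

print(profile_page("https://www.va.gov/data/"))
```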

Tap Existing Government Metrics
To help understand what data is available at va.gov/data, data.gov, on Github, as well as across all VA web properties, you need to reach out to existing players who are working to track activity across the properties. analytics.usa.gov is a great place to start to understand what people are looking at and searching for across the existing VA web properties. While talking with the folks at the GSA, I would also be talking to them about any available statistics for relevant data on data.gov, understanding what is being downloaded and accessed when it comes to VA data, but also any other related data from other agencies. Then I’d stop and look at the metrics available on Github for any data stored within repositories, taking advantage of the metrics built into the social coding platform. Tapping into as many of the existing metrics being tracked across the VA, as well as other federal agencies.

Talk To Veterans And Their Supporters
Next, I’d spend time on Twitter, Facebook, Github, and Linkedin talking to veterans, veteran service organizations (VSO), and people who advocate for veterans. This work would go a long way towards driving the other platform outreach and evangelism described in the other statement of work I responded to. I’d spend a significant amount of time asking veterans and their supporters what data, content, and other resources they use most, and what would be most valuable if it were made more available in web, mobile, and other applications. Social media would provide a great start to help understand what information should be made available as APIs, then you could also spend time reviewing the web and mobile properties of organizations that support veterans, taking note of what statistics, data, and other information they are publishing to help them achieve their mission, and better serve veterans.

Start Doing The Work
Lastly, I’d start doing the work. Based upon what I’ve learned from my Adopta.Agency work, for each dataset I identified I’d publish my profiling to an individual Github repository. Publishing a README file describing the data, where it is located, and a quick snapshot of what is needed to turn the dataset into an API. No matter where the data ended up being in the list of priorities, there would be a seed planted, which might attract its own audience, stewards, and interested parties that want to move the conversation forward–internally, or externally to VA operations. EVERY public dataset at the VA should have its own Github repository seed, and then eventually its corresponding API to help make sure the latest version of the data is available for integration in applications. There is no reason that the work to deploy APIs cannot be started as part of the identification and prioritization of data across the VA. Let’s plant the seeds to ensure all data at the VA eventually becomes an API, no matter what its priority level is this year.

Some Closing Thoughts On This Project
It is tough to understand where to begin with an API effort at any large institution. Trust me, I tried to do it at the VA. I was the person who originally set up the VA Github organization, and began taking inventory of the data now available on data.gov, va.gov/data, and on Github. To get the job done you have to do a lot of legwork to identify where all the valuable data is. Then you have to do the hard work of talking to internal data stewards, as well as individuals and organizations across the public landscape who are serving veterans. This is a learning process unto itself, and will go a long way towards helping you understand which datasets truly matter, and should be a priority for API deployment, helping the VA better achieve its mission in serving veterans.


If Github Pages Could Be Turned On By Default I Could Provide A Run On Github Button

My world runs on Github. 100% of my public website projects run on Github Pages, and about 75% of my public web applications run on Github Pages. The remaining 25% of it all is my API infrastructure. However, I’m increasingly pushing my data and content APIs to run entirely on Github, with Github Pages as the frontend, the Github repo as the backend, and the Github API as the transport. I’d rather be serving up static JSON and YAML from repositories, and building JavaScript web applications that run using Jekyll, than dynamic server-side web applications.

It is pretty straightforward to engineer HTML, CSS, and JavaScript applications that run entirely on Github Pages, and leverage JSON and YAML stored in the underlying Github repo as the database backend, using the Github API as the API backend. Your Github OAuth token becomes your API key, and the Github organization and user structure becomes the authentication and access management layer. These apps can run publicly, and when someone wants to write data to the application, as well as read it, they just need to authenticate using Github–if they have permission, then they get access, and can write JSON or YAML data to the underlying repo via the API.
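To make the pattern more concrete, here is a bare bones sketch of the read and write halves of one of these apps, using the Github contents API as the backend for a YAML data file. The repository, path, and token handling are placeholders for whatever the micro application actually uses:

```typescript
// Minimal sketch: a Github repo as the backend for a static app. Reads and
// writes a YAML data file via the contents API, with the user's OAuth token
// effectively acting as the API key. Names and paths are placeholders.
// Works in the browser or modern Node (atob/btoa, fetch).

const repo = "example-user/micro-app";   // placeholder repository
const path = "_data/records.yaml";       // data file rendered by Jekyll

async function readData(token: string): Promise<{ text: string; sha: string }> {
  const res = await fetch(`https://api.github.com/repos/${repo}/contents/${path}`, {
    headers: { Authorization: `token ${token}` },
  });
  const file = await res.json();
  return { text: atob(file.content), sha: file.sha };
}

async function writeData(token: string, yamlText: string, sha: string) {
  await fetch(`https://api.github.com/repos/${repo}/contents/${path}`, {
    method: "PUT",
    headers: { Authorization: `token ${token}` },
    body: JSON.stringify({
      message: "Update records via the app",
      content: btoa(yamlText),
      sha, // the sha of the version being replaced
    }),
  });
}
```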

I’m a big fan of building portable micro applications using this approach. The only roadblock to making them easier for other users to adopt is the ability to fork them into their own Github account with a single click. I have a Github OAuth token for any authenticated user, and with the right scopes I can easily fork a self-contained web application into their account or organization, giving them full control over their own version of the application. The problem is that each user has to enable Github Pages for the repository before they can view the application. It is a simple, but pretty significant hurdle for many users, and is something that prevents me from developing more applications that work like this.
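The forking half of this already works today. A minimal sketch of the single API call involved, assuming the authenticated user has granted a token with the right scopes (the repository name is a placeholder):

```typescript
// Minimal sketch: fork a self-contained micro app into the authenticated
// user's account. What is still missing is a way to turn on Github Pages
// for the new fork without a manual step by the user.

async function forkApp(token: string) {
  const res = await fetch("https://api.github.com/repos/example-user/micro-app/forks", {
    method: "POST",
    headers: { Authorization: `token ${token}`, Accept: "application/vnd.github+json" },
  });
  return res.json(); // the forked repository, now under the user's account
}
```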

If Github would allow for the enabling of Github Pages by default, or via the API, I could create a Run in Github button that would allow my micro apps to be forked and run within each user’s own Github account. Maybe I’m missing something, but I haven’t been able to make Github Pages be enabled by default for any repo, or via the API. It seems like a simple feature to add to the road map, and something that would have profound implications on how we deploy applications, turning Github into a JavaScript serverless environment, where applications can be forked and operated at scale. It is an approach I will keep using, but I am afraid it won’t catch on properly until users are able to deploy an application on Github with a single click. All I need is for Github to allow me to create a new repository using the Github API, and to turn on Github Pages by default when it is created–then my Run in Github button will become a reality.


My GraphQL Thoughts After Almost Two Years

It has been almost a year and a half since I first wrote my post questioning whether GraphQL folks wanted to do the hard work of API design, in which I also clarified that I was keeping my mind open regarding the approach to delivering APIs. I’ve covered several GraphQL implementations since then, and published my post on waiting the GraphQL assault out–to which I received a stupid amount of, “dude you just don’t get it!”, and “why you gotta be so mean?” responses.

GraphQL Is A Tool In My Toolbox
I’ll start with my official stance on GraphQL. It is a tool in my API toolbox. When evaluating projects, and making recommendations regarding what technology to use, it exists alongside REST, Hypermedia, gRPC, Server-Sent Events (SSE), Websockets, Kafka, and other tools. I’m actively discussing the options with my clients, helping them understand the pros and cons of each approach, and working to help them define their client use cases, and when it makes sense I’m recommending a query language layer like GraphQL. When there are a large number of data points, and a knowledgeable, known group of developers being targeted for building web, mobile, and other client applications, it can make sense. I’m finding GraphQL to be a great option to augment a full stack of web APIs, and in many cases a streaming API, and other event-driven architectural considerations.
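For readers who haven’t worked with it, the value of that query language layer is easiest to see in a single request. Here is a minimal sketch against a hypothetical GraphQL endpoint, where a known client asks for exactly the fields it needs, rather than stitching together several web API responses (the endpoint and schema are made up for illustration):

```typescript
// Minimal sketch: a client asking a hypothetical GraphQL endpoint for
// exactly the fields it needs, in a single request.

const query = `
  query OrderSummary($id: ID!) {
    customer(id: $id) {
      name
      orders(status: OPEN) {
        id
        total
        items { sku quantity }
      }
    }
  }`;

async function fetchSummary(id: string) {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  return (await res.json()).data;
}

fetchSummary("1234").then(console.log);
```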

GraphQL Does Not Replace REST
Despite what many of the pundits and startups would like to be able to convince everyone of, GraphQL will not replace REST. Sorry, it doesn’t reach as wide of an audience as REST does, and it still keeps APIs in the database and data literate club. It does make some very complex things simpler, but it also makes some very simple things more complex. I’ve reviewed almost five brand new GraphQL APIs lately where the on-boarding time was in the 1-2 hour range, rather than the minutes it takes with many more RESTful APIs. GraphQL augments RESTful approaches, and in some cases it can replace them, but it will not entirely do away with the simplicity of REST. Sorry, I know you don’t want to understand the benefits REST brings to the table, and desperately want there to be a one size fits all solution, but it just isn’t the reality on the ground. Insisting that GraphQL will replace REST does everyone more harm than good, hurts the overall API community, and will impede GraphQL adoption–regardless of what you want to believe.

The GraphQL Community Should Embrace The Web
One of my biggest concerns around the future of GraphQL is its lack of a focus on the web. If you want to see the adoption you envision, it has to be bigger than Facebook, Apollo, and the handful of cornerstone implementations like Github, Pinterest, and others. If you want to convert others into being believers, then propose the query language to become part of the web, and improve web caching so that it rises to the level of the promises being made about GraphQL. I know underneath the Facebook umbrella of GraphQL and React it seems like everything revolves around you, but there really is a bigger world out there. There are more mountains to conquer, and Facebook won’t be around forever. Many of the folks I’m trying to educate about GraphQL can’t help but see the shadow of Facebook, and its lack of respect for the web. GraphQL may move fast, and break some things, but it won’t have the longevity all y’all believe so passionately in, if you don’t change your tune. I’ve been doing this 30 years now, and seen a lot of technology come and go. My recommendation is to embrace the web, bake GraphQL into what is already happening, and we’ll all be better off for it.

GraphQL Does Not Do What OpenAPI Does
I hear a lot of pushback on my OpenAPI work, telling me that GraphQL does everything that OpenAPI does, and more! No, no it doesn’t. By saying that you are declaring that you don’t have a clue what OpenAPI does, that you are just pushing a trend, and that you have very little interest in the viability of the APIs I’m building, or my client’s needs. There is an overlap in what GraphQL delivers, and what OpenAPI does, but they both have their advantages, and honestly OpenAPI has a lot more tooling, adoption, and enjoys a diverse community of support. I know in your bubble GraphQL and React dominate the landscape, but on my landscape it is just a blip, and there are other tools in the toolbox to consider. Along with the web, the GraphQL community should be embracing the OpenAPI community, understanding the overlap, and reaching out to the OAI to help define the GraphQL layer for the specification. Both communities would benefit from this work, rather than trying to replace something you don’t actually understand.

GraphQL Dogma Continues To Get In The Way
I am writing this post because I had another GraphQL believer mess up the chances of it being in a roadmap, because of their overzealous beliefs, and allowing dogma to drive their contribution to the conversation. This is the 3rd time I’ve had this happen on high profile projects, and while the GraphQL conversation hasn’t been ruled out, it has been slowed, pushed back, and taken off the table, due to pushing the topic in unnecessary ways. The conversation unfortunately wasn’t about why GraphQL was a good option, the conversation was dominated by GraphQL being THE SOLUTION, and how it was going to fix every web service and API approach that came before it. This, combined with inadequately addressed questions regarding IP concerns, and GraphQL being a “Facebook solution”, set back the conversations. In each of these situations I stepped in to help regulate the conversations, and answer questions, but the damage was already done, and leadership was turned off to the concept of GraphQL because of overzealous, dogmatic beliefs in this relevant technology. Which brings me back to why I’m pushing back on GraphQL, not because I do not get the value it brings, but because of the rhetoric around how it is being sold and pushed.

No, I Do Get What GraphQL Does
This post will result in the usual number of Tweets, DMs, and emails telling me I just don’t get GraphQL. I do. I’ve installed and played with it, and hacked against several implementations. I get it. It makes sense. I see why folks feel like it is useful. The database guy in me gets why it is a viable solution. It has a place in the API toolbox. However, the rhetoric around it is preventing me from putting it to use in some relevant projects. You don’t need to convince me of its usefulness. I see it. I’m also not interested in convincing y’all of the merits of REST, Hypermedia, gRPC, or the other tools in the toolbox. I’m interested in applying the tools as part of my work, and the tone around GraphQL over the last year or more hasn’t been helping the cause. So, please don’t tell me I don’t get what GraphQL does, I do. I don’t think y’all get the big picture of the API space, and how to help ensure GraphQL is in it for the long haul, and achieving the big picture adoption y’all envision.

Let’s Tone Down The Rhetoric
I’m writing about this because the GraphQL rhetoric got in my way recently. I’m still waiting out the GraphQL assault. I’m not looking to get on the bandwagon, or argue with the GraphQL cult, I’m just looking to deliver on the projects I’m working on. I don’t need to be schooled on the benefits of GraphQL, and what the future will hold. I’m just looking to get business done, and support the needs of my clients. I know that the whole aggressive trends stuff works in Silicon Valley, and the startup community. But in government, and some of the financial, insurance, and healthcare spaces where I’ve been putting GraphQL on the table, the aggressive rhetoric isn’t working. It is working against the adoption of the query language. Honestly it isn’t just the rhetoric, I don’t feel the community is doing enough to address the wider IP concerns, and the Facebook connection, to help make the sale. Anyways, I’m just looking to vent, and push back on the rhetoric around GraphQL replacing REST. After a year and a half I’m convinced of the utility of GraphQL, however the wider tooling, and messaging around it still hasn’t won me over.


Sharing API Definition Patterns Across Teams By Default

This is a story in a series I’m doing as part of the version 3.0 release of the Stoplight.io platform. Stoplight.io is one of the few API service providers I’m excited about when it comes to what they are delivering in the API space, so I jumped at the opportunity to do some paid work for them. As I do, I’m working to make these stories about the solutions they provide, and refrain from focusing on just their product, hopefully maintaining my independent stance as the API Evangelist, and keeping some of the credibility I’ve established.

One of the things I’m seeing emerge from the API providers who are further along in their API journey is a more shared experience, not just in the API design process, but in action throughout the API lifecycle. You can see this reality playing out with API definitions, and the move from Swagger 2.0 to OpenAPI 3.0, where there is a more modular approach to defining an API, encouraging reuse of common patterns across API paths within a single definition, as well as across many different API definitions. This expands how API definitions can move an API through the lifecycle, and begins to apply that across projects and teams, moving APIs through the lifecycle much more quickly, and consistently.

Emulating the Models We Know
We hear a lot about healthy API design practices, following RESTful philosophies, and learning from the API design guides published by leading API providers. While all of these areas have a significant influence on what happens during the API design and development process, the most significant influence on what developers know is simply what they are exposed to in their own experience. The models we are exposed to often become the models we employ, demonstrating the importance of adopting a shared experience when it comes to defining, designing, and putting common API models to work for us in our API journeys.

A Schema Vision Reflects Maturity
The other side of this shared API definition reality is the reuse and sharing of schema. The paths, parameters, and models for the surface area of our APIs are essential, but the underlying schema we use as part of requests and responses will eliminate or introduce the most friction when it comes to API integration. Reinventing the wheel with schema creates friction throughout the API lifecycle from design to documentation, and will make each stop significantly more costly, something that will ultimately also be passed off to consumers. Sharing, reusing, and being organized with schema used across the development of APIs is something you see present within the practices of the API teams who are realizing more streamlined, mature API delivery pipelines.
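One low tech way to see this problem for yourself is to scan the OpenAPI definitions a team already has, and look for schema that have been reinvented across them. A rough sketch, assuming the definitions live as YAML files and using the js-yaml package:

```typescript
// Minimal sketch: flag schema names defined in more than one OpenAPI 3.0
// definition, as candidates for a shared, reusable components library.
// Assumes definitions are YAML files passed on the command line.

import { readFileSync } from "fs";
import { load } from "js-yaml";

const seen = new Map<string, string[]>(); // schema name -> files defining it

for (const file of process.argv.slice(2)) {
  const api = load(readFileSync(file, "utf8")) as any;
  const schemas = api?.components?.schemas ?? {};
  for (const name of Object.keys(schemas)) {
    seen.set(name, [...(seen.get(name) ?? []), file]);
  }
}

for (const [name, files] of seen) {
  if (files.length > 1) {
    console.log(`schema "${name}" is defined in ${files.length} places: ${files.join(", ")}`);
  }
}
```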

Sharing Across Teams Is How It's Done
Using common models and schema is important, but none of it matters if you aren’t sharing them across teams. Leveraging Git, as well as the social, team, and collaborative features to share and reuse common models and schema is what helps reduce friction, and introduce efficiency into both the deployment and integration of APIs. API design doesn’t occur in isolation anymore. Gone are the pizza in the basement days for developing APIs, and the API lifecycle is a social, collaborative affair. Teams that have mature, consistent, and reproducible API lifecycles are sharing definitions across teams, and actively employing the most mature patterns across the APIs they deliver. Without a team mentality throughout the API lifecycle, APIs will never truly be the interoperable, reusable, and consumable set of services we expect.

Without API Definitions There Is No Discovery
Ask many large organizations who are doing APIs without API definitions like OpenAPI, and who haven’t adopted a lifecycle approach, where their APIs are, and they will just give you a blank stare. APIs regularly get lost in the shuffle of busy organizations, and can often be hidden in the shadows of the mobile applications they serve. However, when you adopt an API definition by default, as well as an API definition-driven life cycle, discovery is baked in by default. You can search Github, reverse engineer schema from application logs, and follow the code pipeline to find APIs. API definitions leave a trail, and API documentation, testing, and other resources all point to where the APIs are. API discovery isn’t a problem in an API definition driven world, it is baked in by default.
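Discovery really can be that mechanical once definitions exist. As a rough sketch, this is the kind of Github search that surfaces OpenAPI definitions across an organization (the organization name is a placeholder, and the code search API requires an authenticated token):

```typescript
// Minimal sketch: find OpenAPI definitions across a Github organization
// using the code search API. Org name and token are placeholders, and the
// filename searched for depends on the team's naming conventions.

async function findDefinitions(org: string, token: string) {
  const q = encodeURIComponent(`org:${org} filename:openapi.yaml`);
  const res = await fetch(`https://api.github.com/search/code?q=${q}`, {
    headers: { Authorization: `token ${token}`, Accept: "application/vnd.github+json" },
  });
  const results = await res.json();
  return results.items.map((item: any) => `${item.repository.full_name}/${item.path}`);
}

findDefinitions("example-org", process.env.GITHUB_TOKEN!).then(console.log);
```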

API Governance Depends on API Definitions
I get a regular stream of requests from organizations who have already embarked on their API journey regarding how they can more consistently deliver APIs, and institute a governance program to help them measure progress across teams. The first question I always ask is whether or not they use OpenAPI, and the second question is regarding what their API lifecycle and Git usage looks like. If they aren’t using OpenAPI, and haven’t adopted a Git workflow across their teams, API governance will rarely ever be realized. Without definitions you can’t measure. Without a Git workflow, you can’t find the artifacts you need to measure. API governance can’t exist in isolation, without relevant definitions in play.
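The measuring can start small once the definitions are in place. Here is a rough sketch of the kind of check I mean, again using js-yaml, flagging operations that are missing the basics a governance program would want to measure:

```typescript
// Minimal sketch: a tiny governance check over a single OpenAPI 3.0
// definition, flagging operations missing a summary or a success response.

import { readFileSync } from "fs";
import { load } from "js-yaml";

const api = load(readFileSync(process.argv[2], "utf8")) as any;
const methods = ["get", "post", "put", "patch", "delete"];

for (const [path, item] of Object.entries(api.paths ?? {}) as [string, any][]) {
  for (const method of methods) {
    const op = item[method];
    if (!op) continue;
    if (!op.summary) console.log(`${method.toUpperCase()} ${path}: missing summary`);
    const codes = Object.keys(op.responses ?? {});
    if (!codes.some((c) => c.startsWith("2"))) {
      console.log(`${method.toUpperCase()} ${path}: no 2xx response defined`);
    }
  }
}
```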

OpenAPI has significantly contributed to the evolution, and maturity of what we know as the API lifecycle. Without API definitions we can’t share common models and schema. We can’t work together and learn from each other. Without tooling and services in place that allow us to share, collaborate, mock, and communicate around the API design and development process, the API process will never mature. It will not be consistent. It will not move forward and reach a production line level of efficiency. To realize this we have to share definitions across teams by default–without it, we aren’t really doing APIs.

P.S. Thanks Stoplight.io for the support!


The ClosedAPI Specification

You’ve heard of OpenAPI, right? It is the API specification for defining the surface area of your web API, and the schema you employ–making your public API more discoverable, and consumable in a variety of tools and services. OpenAPI is the API definition for documenting your API when you are just getting started with your platform, and you are looking to maximize the availability and access of your platform API(s). After you’ve acquired all the users, content, investment, and other value, ClosedAPI is the format you will want to switch to, abandoning OpenAPI for something a little more discreet.

Collect As Much Data As You Possibly Can

Early on you wanted to be defining the schema for your platform using OpenAPI, and even offering up a GraphQL layer, allowing your data model to rapidly scale, adding as many data points as you possibly can. You really want to just ingest any data you can get your hands on via the browser, mobile phones, and any other devices you come into contact with. You can just dump it all into a big data lake, and sort it out later. Adding to your platform schema when possible, and continuing to establish new data points that can be used in advertising and targeting of your platform users.

Turn The Firehose On To Drive Activity

Early on you wanted your APIs to be 100% open. You’ve provided a firehose to partners. You’ve made your garden hose free to EVERYONE. OpenAPI was all about providing scalable access to as many users as you can through streaming APIs, as well as the lower volume transactional APIs you offer. Don’t rate limit too heavily. Just keep the APIs operating at full capacity, generating data and value for the platform. ClosedAPI is for defining your API as you begin to turn off this firehose, and begin restricting access to your garden hose APIs. You’ve built up the capacity of the platform, and you really don’t need your digital sharecroppers anymore. They were necessary early on in your business, but they are no longer needed when it comes to squeezing as much revenue as you can from your platform.

The ClosedAPI Specification

We’ve kept the specification as simple as possible, allowing you to still say you have API(s), while also helping make sure you do not disclose too much about what you actually have going on. It provides you the following fields to describe your APIs:

  • Name
  • Description
  • Email

That is it. You can still have hundreds of APIs. Issue press releases. Everyone will just have to email you to get access to your APIs. It is up to you to decide who actually gets access to your APIs, which emails you respond to, or if the email account is ever even checked in the first place. The objective is just to appear to have APIs, and to entertain requests to access them.

Maintain Control Over Your Platform

You’ve worked hard to get your platform to where it is. Well, not really, but you’ve worked hard to ensure that others do the work for you. You’ve managed to convince a bunch of developers to work for free building out the applications and features of your platform. You’ve managed to get the users of those applications to populate your platform with a wealth of data, making your platform exponentially more valuable than you could have made it on your own. Now that you’ve achieved your vision, and people are increasingly using your APIs to extract value that belongs to you, you need to turn off the fire hose, garden hose, and kill off applications that you do not directly control.

The ClosedAPI specification will allow you to still say that you have APIs, but no longer have to actually be responsible for your APIs being publicly available. Now all you have to do is worry about generating as much revenue as you possibly can from the data you have. You might lose some of your users because you do not have publicly available APIs anymore, as well as losing some of your applications, but that is ok. Most of your users are now trapped, locked-in, and dependent on your platform–continuing to generate data, content, and value for your platform. Stay in tune with the specification using the road map below.

Roadmap:

  • Remove Description – The description field seems extraneous.

I Will Be Speaking At The API Conference In London This Week

I am hitting the road this week heading to London to speak at the API Conference. I will be giving a keynote on Thursday afternoon, and conducting an all day workshop on Friday. Both of my talks are a continuation of my API life cycle work, and pushing forward my use of a transit map to help me make sense of the API life cycle. My keynote will be covering the big picture of why I think the transit model works for making sense of complex infrastructure, and my workshop is going to get down in the weeds with it all.

My keynote is titled, “Looking At The API Life Cycle Through The Lens Of A Municipal Transit System”, with the following abstract, “As we move beyond a world of using just a handful of internal and external APIs, to a reality where we operate thousands of microservices, and depend on hundreds of 3rd party APIs, modern API infrastructure begins to look as complex as a municipal transit system. Realizing that API operations is anything but a linear life cycle, let’s begin to consider that all APIs are in transit, evolving from design to deprecation, while still also existing to move our value bits and bytes from one place to another. I would like to share with you a look at how API operations can be mapped using an API Transit map, and explored, managed, and understood through the lens of a modern, Internet enabled “transit system”.”

My workshop is titled, “Beyond The API Lifecycle And Beginning To Establish An API Transit System”, with the following abstract, “Come explore 100 stops along the modern API life cycle, from definition to deprecation. Taking the learnings from eight years of API industry research, extracted from the leading API management pioneers, I would like to guide you through each stop that a modern API should pass across, mapped out using a familiar transit map format, providing an API experience I have dubbed API Transit. This new approach to mapping out, and understanding the API life cycle as seen through the lens of a modern municipal transit system allows us to explore and learn healthy patterns for API design, deployment, management, and other stops along an APIs journey. Which then becomes a framework for executing and certifying that each API is following the common patterns which each developer has already learned, leading us to a measurable and quantifiable API governance strategy that can be reported upon consistently. This workshop will go into detail on almost 100 stops, and provide a forkable blueprint that can be taken home and applied within any API operations, no matter how large, or small.”

I will also be spending some time meeting with a variety of API service providers, and looking to meet up with some friends while in London. If you are around, ping me. I’m not sure how much time I will have, but I’m always game to try and connect. I’m eager to talk with folks about banking activity in the UK, as well as talking about the event-driven architecture, and API life cycle / governance work I’m doing with Streamdata.io. I’ll be at the event most of the day Thursday, and all day Friday, but both evenings I should have some time available to connect. See you in London!


Details Of The API Evangelist Partner Program

I’ve been retooling the partner program for API Evangelist. There are many reasons for this, and you can read the full backstory in the narrative I have written about these changes to the way in which I partner. I need to make a living, and my readers are expecting me to share relevant stories from across the sector on my blog. I’m also tired of meaningless partner arrangements that never go anywhere, and I’m looking to incentivize my partners to engage with me, and the API community in an impactful way. I’ve crafted a program that I think will do that.

While there will still be four logo slots available for partnership, the rest of the references to API service providers and the solutions they provide will be driven by my partner program. If you want to be involved, you need to partner with me. All that takes is to email me that you want to be involved. It doesn’t cost you a dime, all you have to do is reach out, let me know you are interested, and be willing to play the part of an active API service provider. If you are making an impact on the API space, then you’ll enjoy the following exposure across my work:

  • API Lifecycle - Your services and tooling will be listed as part of my API lifecycle research, and displayed on the home page of my website, and across my network of sites.
  • Short Form Storytelling - Being referenced as part of my stories when I talk about the areas of the API sector in which your services and tooling provide solutions.
  • Long Form Storytelling - Similar to short form, when I’m writing white papers, and guides, I will use your products and services as examples to highlight the solutions I’m talking about.
  • Story Ideas - You will have access to a list of story ideas I’m working through, and be able to add to the list, as well as extract ideas from it, providing story seeds for your own content teams.
  • In-Person Talks - When I’m giving talks at conferences I will be including your products and services into my slides, and storytelling, using them as a cornerstone for my in-person talks.
  • Workshops - Similar to talks, I will be working my partners into the workshops I give, and when I walk through my API lifecycle and governance work, I will be referencing the tools and services you provide.
  • Consulting - When working with clients, helping them develop their API lifecycle and governance strategy, I will be recommending specific services and tooling that are offered by my partners.

I will be doing this for the partners who have the highest ranking. I won’t be doing this in exchange for money. Sure, some of these partners will be buying logos on the site, and paying me for referrals, and deals they land. However, that is just one aspect of how I fund what I do. It won’t change my approach to research and storytelling. I think I’ve done a good job of demonstrating my ability to stay neutral when it comes to my work, and it is something that my work will continue to demonstrate. If you question how partner relationships will affect my work, then you probably don’t know me very well.

I won’t be taking money to write stories or white papers anymore. It never works, and always gets in the way of me producing stories at my regular pace. My money will come from consulting, speaking, and the referrals I send to my partners. Anytime I have taken money from a partner, I will disclose that on my site, and acknowledge the potential influence on my work. However, in the end, nothing will change. I’ll still be writing as I always have, and remain as critical as I always have been, even of my partners. The partners who make the biggest impact on the space will rise to the top, and the ones who do not will stay at the bottom of the list, and rarely show up in my work. If you’d like to partner, just drop me an email, and we’ll see what we can make happen working together.


Crafting A Productive API Industry Partner Program

I struggle with partner relationships. I’ve had a lot of partners operating API Evangelist over the years. Some good. Some not so good. And some amazing! You know who you are. It’s tough to fund what I do as the API Evangelist. It’s even harder to fund who I am as Kin Lane. I’ve revamped my approach to partnering several times now trying to find the right formula for me, my partners, and for my readers. As the partner requests pile up, and I fall short for some of my existing partners, while delivering as expected for others, it is time for me to take another crack at shaping my partner program.

A Strong Streamdata.io Partnership Base
A cornerstone of my new approach to partnering is based upon my relationship with Streamdata.io. They are my primary partner, and supporter of API Evangelist. They have not just helped provide me with the financial base I need to live and operate API Evangelist, they are investing in, and helping grow my existing API lifecycle and API Stack work. We are also working in concert to formalize my API lifecycle work into a growing consultancy, and pushing forward my API Stack research, syndicating it as the Streamdata.io API Gallery, and refining my ranking system for both API providers, and API service providers. Without Streamdata.io this latest round of API Evangelist wouldn’t be happening.

Difficulties With The Current Mode Of Partnering
One of the biggest challenges I have right now with partnering is that my partners want me to produce content about them. Writing stories for pay just isn’t a good idea for their brand, or for mine. I know it is what people want, but it just doesn’t work. The other challenge I have is people tend to want predictable stories on a schedule. I know it seems like I’m a regular machine, churning out content, but honestly when it flows, it all works. When it doesn’t flow, it doesn’t work. I schedule things out ahead enough that the bumps are smoothed out. I love writing. The anxiety I get from writing on a deadline, or for expected topics isn’t conducive to producing good content, and storytelling. This applies to both my short form (blog posts), and my long form (white papers). I’ll still be producing both of these things, but I can’t do it for money any longer.

Difficulties With Incentivizing Partner Behavior
I have no shortage of people who’d like to partner and get exposure from what I do, but incentivizing what I’d like to see from these partners is difficult. I’m not just looking for money from them, I’m looking to incentivize companies to build interesting products and services, tell stories that are valuable to the community, and engage with the API space in a meaningful way. I want to encourage my partners to behave as good citizens, give back to the space, and yes, get new users and generate revenue from their activity. I’ve seen too many partnerships exclusively be about the money involved, or just be about the press release, with no meaningful substance actually achieved in between. This type of hollow, meaningless, partnership masturbation does no good for anyone, and honestly is a waste of my time. I don’t expect ALL partnerships to bear fruit, but there should be a high bar for defining what partnership means, and we should be working to make it truly matter for everyone involved.

There Are Three Dimensions To Creating Partner Value
As I see it, there are three main dimensions to establishing a productive API industry partnership program. There is partner A, and partner B, but then there is the potential customer. If a partnership isn’t benefiting all three of these actors equally, then it just won’t work. I understand that as a company you are looking to maximize the value generation and extraction for your business, but there is enough to go around, and one of the core tenets of partnerships is that this value is shared, with the customer and community in mind. Not all companies get this. It is my role to make sure they are reminded of it, and push for balanced partnerships that are healthy, active, fruitful, but also benefit the community. We will all benefit from this, despite many shortsighted, self-centered approaches to doing business in API-land that I’ve encountered over my eight years operating as the API Evangelist. To help me balance my API partnership program, I’m going to be applying mechanisms I’ve used for years to help me define and understand who my true partners are, and whoever rises to the top will see the most benefits from the program.

Tuning Into API Service Provider Partner Signals
When it comes to quantifying my partners, who they are, and what they do, I’m looking at a handful of signals to make sure they are contributing not just to the partnership, but to the wider community. I’m looking to incentivize API service providers to deliver as much value to the community as they are extracting as part of the partnership. Here is what I’m looking for to help quantify the participation of any partner at the table:

  • Blog - The presence of an active blog with an RSS / Atom feed, allowing me to tune into what is happening, and help share the interesting things that are happening via their platform and community. If something good is being delivered, and the story isn’t told, it didn’t happen. An active blog is one of the more important signals I can tune into as the API Evangelist – also I need a feed!!!
  • Github - The presence of an active Github account. Possessing a robust number of repositories for a variety of API ecosystem projects from API definitions to SDKs. I’m tuning into all the Github signals including number of repos, stars, commits, forks, and other chatter to understand the activity and engagement levels occurring within a community. Github is the social network for API providers–make sure you are being active, observable, and engaging.
  • Twitter - The presence of an active, dedicated Twitter account. Because of its public nature, Twitter provides a wealth of signals regarding what is happening via any platform, and provides one of the most important ways in which a service provider can engage with their customers, and the wider API space. I understand that not everyone excels at Twitter engagement, but using it as an information and update sharing channel, and general support and feedback loop is within the realm of operation for EVERYONE.
  • Search - I’m always looking for a general, and hopefully wide search engine presence for API service providers. I have alerts setup for all the API service providers I monitor, which surfaces relevant stories, conversations, and information published by each platform. SEO isn’t hard, but takes a regular, concerted effort, and it is easy to understand how much a service provider is investing in their presence, or not.
  • Business Model - The presence of, or lack of a business model, as well as investment, is an important signal I keep an eye on, trying to understand where each API service provider is in their evolution. How new a company is, how far along their runway they are, and what the exit and long term strategy for an API service provider is. Keeping an eye on Crunchbase and other investment, pricing plans, and revenue signals coming out of a platform will play a significant role in understanding the value an API service provider is bringing to the table.
  • Integrations - I’m also tracking the integrations any service provider offers, ensuring that they are investing in, and encouraging interoperability with other platforms by default. API service providers that do not play well with others often do not make good partners, insisting on all roads leading to their platform. I’m always on the hunt for a dedicated integrations and plugin page for any API service provider I’m partnering with.
  • Partnerships - Beyond integrations, I want to see all the other partnerships one of my partners is engaged in. The relationships they are engaging in tell a lot about how well they will partner with me, and define what signals they are looking to send to the community. Partnerships tell a lot about the motivations behind a company’s own partner program, and how it will benefit and impact my own partner program.
  • API - I am always looking for whether or not an API service provider has an API. If a company is selling services, products, and tooling to the API sector, but doesn’t have a public facing API, I’m always skeptical of what they are up to. I get it, it can often not be a priority to operate your own program, but in reality, can anyone trust an API service provider to help deliver on their API program, if the provider doesn’t have the experience of operating their own API program? This is one of my biggest pet peeves with API service providers, and a very telling sign about what is happening behind the scenes.
  • API Definitions - If an API service provider has an API, then they should have an OpenAPI definition and Postman Collection available for it. The presence of API definitions, as well as a robust portal, docs, and other API building blocks is essential for any API service provider.
  • Story Ideas - I’m very interested in the number of story ideas submitted by API service providers, pointing me to interesting stories I should cover, as well as the interesting things that are occurring via their platform. Ideally, these are referenced by public blog posts, Tweets, and other signals sent by the API service provider, but they also count when received via direct message, email, and carrier pigeon (bonus points for pigeon).

There are other signals I’m looking for from my partners, but that is the core of what I’m looking for. I know it sounds like a lot, but it really isn’t. It is the bar that defines a quality API service provider, based upon eight years of tracking on them, and watching them come and go. The API service providers who are delivering in these areas will float up in my automated ranking system, and enjoy more prominence in my storytelling, and across the API Evangelist network.
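The ranking itself doesn’t have to be sophisticated. As a rough sketch of how signals like these get rolled up into a score, with the caveat that the signals and weights shown here are made up for illustration, and the real system tracks more dimensions than this:

```typescript
// Minimal sketch: roll provider signals up into a single partner ranking
// score. The signal names and weights here are illustrative, not the real ones.

interface ProviderSignals {
  blogPostsLast90Days: number;
  githubCommitsLast90Days: number;
  twitterUpdatesLast90Days: number;
  searchMentionsLast90Days: number;
  hasPublicApi: boolean;
  hasApiDefinitions: boolean;
  storyIdeasSubmitted: number;
}

function rankProvider(s: ProviderSignals): number {
  let score = 0;
  score += Math.min(s.blogPostsLast90Days, 30) * 2;      // reward an active blog, with a cap
  score += Math.min(s.githubCommitsLast90Days, 100);     // activity on Github
  score += Math.min(s.twitterUpdatesLast90Days, 100) / 2;
  score += Math.min(s.searchMentionsLast90Days, 50);
  score += s.hasPublicApi ? 25 : 0;                      // practice what you preach
  score += s.hasApiDefinitions ? 25 : 0;
  score += s.storyIdeasSubmitted * 5;
  return score;
}
```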

Tuning Into The API Community Signals
Beyond the API service providers themselves I’m always tuned into what the community is saying about companies, products, services, and tooling. While there is always a certain level of hype around what is happening in the API sector, you can keep your finger on the pulse of what is going on through a handful of channels. Sure, many of the signals present on these channels can be gamed by the API service providers, but others are much more difficult, and will take a lot of work to game. Which can be a signal on its own. Here is what I’m looking for to understand what the API community is tuning into:

  • Blogs - I’m tuning into what people across the space are writing about on their personal, and company blogs. I’m regularly curating stories that reference tools, services, and the products offered by API service providers. If people are writing about specific companies, I want to know about it.
  • Twitter - Twitter provides a number of signals to understand what the community thinks about an API service provider, including follows, and retweets. The conversation volume initiated and amplified by the community tells a significant story about what an API service provider is up to.
  • Github - Github also provides one of the more meaningful platforms for harvesting signals about what API service providers are up to. The stars, forks, and discussion initiated and amplified by the community on the Github repos and organizations of API services providers tell an important story.
  • Search - Using the Google and Talkwalker alerts setup for each API service provider, I curate the Reddit, Hacker News, and other search indexed conversations occurring about providers, tracking the overall scope of conversation around each API service provider.

There are other signals I’m developing to understand the general tone of the API community, but these reflect the proven ones I’ve been using for years, and provide a reliable measure of the impact an API service provider is making, or not. One important aspect of this search and social media exhaust is in regards to sentiment, which can be good or bad, depending on the tone the API service provider is setting within the community.

Generating My Own Signals About What Is Happening
This is the portion of the partner relationship where I’m held accountable. These are the things that I am doing that deliver value to my partners, as well as the overall API community. These items are the signals partners are looking for, but also the things I’m measuring to understand the role an API service provider plays in my storytelling, speaking, consulting, and research. Each of these areas reflects how relevant an API service provider, and their products, services, and tooling, is to the overall API landscape as it pertains to my work. Here is what I’m tracking on:

  • Short Form Storytelling - Tracking on how much I write about an API service provider in my blog posts on API Evangelist, Streamdata.io, and other places I’m syndicated like DZone. If I’m talking about a company, they are doing interesting and relevant things, and I want to be showcasing their work. I can’t just write about someone because I’m paid, it is because they are relevant to the story I’m telling.
  • Long Form Storytelling - Understanding how API service providers are referenced in my guides and white papers. Similar to the short form references, I’m only referencing companies when they are doing relevant things to the stories I’m telling. My guides used to be comprehensive when it came to mentioning ALL API service providers, but going forward they will only reference those that float to the top, and rank high in my overall API partner ranking.
  • Story Ideas - I’m regularly writing down story ideas, and aggregating them into lists. Not everything ever turns into a story, but still the idea demonstrates something that grabbed my attention. I tend to also share story ideas with other content producers, and publications, providing the seeds for interesting stories that I may not have the time to write about myself. Providing rich, and relevant materials for others to work from.
  • My In-Person Talks - Throughout the talks I give at conferences, Meetups, and in person within companies, organizations, institutions, and government, I am referencing tools and services to accomplish a specific API related task–I need the best solutions to reference as part of these talks. I carefully think about which providers I will reference, and I keep track of which ones I reference.
  • API Lifecycle - My API lifecycle work is built upon my research across the API stack, what I learn from the public API providers I study, as well as what I learn as part of my consulting work. All of this knowledge and research goes back into my API lifecycle and governance strategy, and becomes part of my outreach and strategy which gets implemented on the ground as part of my consulting. Across the 68+ stops along the API lifecycle there are always API service providers I need to reference, and refer to as part of my work.
  • API Stack - My API Stack work is all about profiling publicly available APIs out there and understanding the best practices at work when it comes to operating their APIs. When I notice that a service or tool is being put to use I take note of it. I use these references as part of my overall tracking and understanding of what is being put to use across the industry.
  • Website Logos - There are four logos for partners to sponsor across the network of API Evangelist sites. While I’m not weighting these in my ranking, when someone is present on the site like that they are part of my consciousness. I recognize this, and acknowledge that it does influence my overall view of the API sector.
  • Conversations - I’m regularly engaging in conversations with my partners, learning about their roadmaps, and staying in tune with what they are working on. These conversations have a significant impact on my view of the space, and help me understand the value that API service providers bring to the table.
  • Referrals - Now is where we start getting to the money part of this conversation. When I refer clients to some of my partners, there are some revenue opportunities available for me. Not every referral is done because I’m getting paid, but there are partners who do pay me when I send business their way. This influences the way that I see the landscape, not because I’m getting paid, but because someone I’m talking to chose to use a service, and this will influence how I see the space.
  • Deals Made - I do make deals, and bring in revenue based upon the business I send to my partners. This isn’t why I do what I do, but it does pay the bills. I’m not writing and telling stories about companies because I have relationships with them. I write and tell stories about them because they do interesting things, and provide solutions for my audience. I’m happy to be transparent about this side of my business, and always work to keep things out in the open.

This is the core of the signals I’m generating that tell me which API service providers are making an impact on the API sector. Everything I do reinforces who the most relevant API service providers are. If I’m telling stories about them, then I’m most likely to tell more stories about them. If I’m referring API service providers to my readers and clients, then it just pushes me to read more about what they are doing, and tell more stories about them. The more an API service provider floats up in my world, the more likely they will stay there.

Partnerships Balanced Across Three Dimensions
My goal with all of this is to continue applying my ranking of API service providers across three main dimensions. Things that are within their control. Things that are within the community’s control. Then everything weighted and influenced by my opinion. While money does influence this, that isn’t what exclusively drives which API service providers show up across my work. If I take money to write content, then it’s hard to say that content is independent. The content is what drives the popularity and readership of API Evangelist, so I don’t want to negatively impact this work. It is easy to say that my storytelling and research is influenced by referral fees and deals I make with partners, but I find this irrelevant if my list of partners is ranked by a number of other elements, some within the API service provider’s control, and some driven by signals that are not.

The Most Relevant Partners Rise To The Top
Everything on API Evangelist is YAML driven. The stories, the APIs, API service providers, and the partners that show up across the stories I tell, and the talks I give. I used to drive the listings of APIs and API service providers using my ranking, floating the most relevant to the top. I’m going to start doing that again. I’ve been slowly turning back on my automated monitoring, and updating the ranking for APIs and API service providers. When an API service provider shows up on a list of services or tools I showcase, only the highest ranked will float to the top. If API service providers are active, the community is talking about them, and I’m tuned into what they are doing, then they’ll show up in my work more frequently. This keeps the most relevant services and tooling across the API space, and my research, showcased in my storytelling, talks, and consulting.

My objective with this redefining of my partner program is to take what I’ve learned over the years, and retool my partnerships to deliver more value, help keep my website up and running, but most importantly get out of the way of what I do best–research and storytelling around APIs. I can’t have my partnerships slow or redirect my storytelling. I can’t let my partnerships dilute or discredit my brand. However, I still need to make a living. I still need partners. They still need value to be generated by me, and I want to help ensure that they bring value to the table for me, my readers, as well as the wider API community. This is my attempt to do that. We’ll see how it goes. I will revisit it each month and see what I can do to continue dialing it in. If you have any comments, questions, concerns, or would like to talk about partnering–let me know.


Nest Branding And Marketing Guidelines

I’m always looking out for examples of API providers who have invested energy into formalizing processes around the business and politics of API operations. I’m hoping to gather a variety of approaches that I can aggregate into a single blueprint to use in my API storytelling and consulting. The more I can help API providers standardize what they do, the better off the API sector will be, so I’m always investing in the work that API providers should be doing, but that doesn’t always get prioritized.

The other day, while profiling the way that Nest uses Server-Sent Events (SSE) to stream activity from thermostats, cameras, and the other devices they provide, I stumbled across their branding policies. It provides a pretty nice set of guidance for Nest developers with respect to the platform’s brand, and something you don’t see with many other API providers. I always say that branding is the biggest concern for new API providers, but also the one that is almost never addressed by API providers who are in operation–which doesn’t make a whole lot of sense to me. If I had a major corporate brand, I’d work to protect it, and help developers understand what is important.

The Nest marketing program is intended to qualify applications, and then allow them to use the “Works with Nest” branding in their marketing and social media. To get approved you have to submit your product or service for review, and the review process verifies that you are in compliance with all of their branding policies.

To apply for the program you have to email them with all of the following details regarding the marketing efforts for your product or service where you will be using the “Works with Nest” branding:

  • Description of your marketing program
  • Description of your intended audience
  • Planned communication (list all that apply): Print, Radio, TV, Digital Advertising, OOH, Event, Email, Website, Blog Post, Email, Social Post, Online Video, PR, Sales, Packaging, Spec Sheet, Direct Mail, App Store Submission, or other (please specify)
  • High resolution assets for the planned communication (please provide URL for file download); images should be at least 1000px wide
  • Planned geography (list all that apply): Global, US, Canada, UK, or other (please specify)
  • Estimated reach: 1k - 50k, 50k- 100k, 100k - 500k, or 500k+
  • Contact Information: First name, last name, email, company, product name, and phone number

Once submitted, Nest says they’ll provide feedback or approval of your request within 5 business days, and if all is well, they’ll approve your marketing plan for that Works with Nest product. If they find any submission is not in compliance with their branding policies, they’ll ask you to make corrections to your marketing, so you can submit again. I don’t have too many examples of marketing and branding submission processes as part of my API branding research. I have the user interface guide, and trademark policy as building blocks, but the user experience guide, and the details of the submission process are all new entries to my research.

I feel like API providers should be able to defend the use of their brand. I do not feel API providers can penalize and cut off API consumers’ access unless there are clear guidelines and expectations presented regarding what is healthy, and unhealthy behavior. If you are concerned about how your API consumers reflect your brand, then take the time to put together a program like Nest has. You can look at my wider API branding research if you need other examples, or I’m happy to dive into the subject more as part of a consulting arrangement, and help craft an API branding strategy for you. Just let me know.

Disclosure: I do not “Work with Nest” – I am just showcasing the process. ;-)


Seamless API Lifecycle Integration With Github, Gitlab, And BitBucket

This is a story in a series of stories that I’m doing as part of the version 3.0 release of the Stoplight.io platform. Stoplight.io is one of the few API service providers I’m excited about when it comes to what they are delivering in the API space, so I jumped at the opportunity to do some paid work for them. As I do, I’m working to make these stories about the solutions they provide, and refrain from focusing on just their product, hopefully maintaining my independent stance as the API Evangelist, and keeping some of the credibility I’ve established over the years.

Github, Gitlab, and Bitbucket have taken up a central role in the delivery of the valuable API resources we are using across our web, mobile, and device-based applications. These platforms have become integral parts of our development, and software build processes, with Github being the most prominent player when it comes to defining how we deliver not just applications, but increasingly API-driven applications on the web, our mobile phones, and common Internet connected objects becoming more ubiquitous across our physical worlds.

A Lifecycle Directory Of Groups and Users
These social coding platforms come with the ability to manage different groups, projects, as well as the users involved with moving each project forward. While version control isn’t new, Github is credited with making this aspect of managing code a very social endeavor which can be done publicly, or privately within select groups. This provides a rich environment for defining who is involved with each microservice that is being moved along the API lifecycle, allowing teams to leverage the existing organization and user structure provided by these platforms as part of their API lifecycle organizational structure.

Repositories As A Wrapper For Each Service
Five years ago you would most likely find just code within a Github repository. In 2018, you will find schema, documentation, API definitions, and numerous artifacts stored and evolved within individual repositories. The role of the code repository in the API lifecycle, when it comes to helping move an API forward from design, to mocking, to documentation, testing, and other stops along the lifecycle, can’t be overstated. The individual repository has become synonymous with a single unit of value in microservices, containerization, and continuous deployment and integration conversations, making it the logical vehicle for the API lifecycle.

Knowing The Past And Seeing The Future Of The API Lifecycle
The API lifecycle has a beginning, an end, as well as many stops along the way. Increasingly the birth of an API begins in a repository, with a simple README file explaining its purpose. Then the lifecycle becomes defined through a series of user-driven submissions via issues, comments, and commits to the repository. These are then curated, organized, and molded into a road map, a list of currently known issues, and a resulting change log of everything that has occurred. This makes seamless integration with individual repositories, as well as across all of an organization’s API-centric repositories, pretty critical to moving along the API lifecycle in an efficient manner.
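As a rough sketch of what that curation can look like, this pulls recently closed issues for a service's repository from the Github API and folds them into a simple change log entry (the repository name is a placeholder):

```typescript
// Minimal sketch: turn closed issues on a service's repository into a
// simple change log entry. Repository name is a placeholder.

async function changeLog(owner: string, repo: string) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues?state=closed&per_page=50`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  const issues = await res.json();
  const lines = issues
    .filter((issue: any) => !issue.pull_request) // issues only, not pull requests
    .map((issue: any) => `- ${issue.title} (#${issue.number}, closed ${issue.closed_at})`);
  return `## Change Log\n\n${lines.join("\n")}\n`;
}

changeLog("example-org", "example-service").then(console.log);
```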

Providing A Single Voice And Source Of Truth
With the introduction of Jekyll, and other static page solutions to the social coding realm, the ability for a repository to become a central source of information, communications, notifications, and truth has been dramatically elevated. The API lifecycle is being coordinated, debated, and published using the underlying Git repository, which becomes a platform for delivering the code, definitions, and other more technical elements. We are increasingly publishing documentation alongside code and OpenAPI definition updates, and aggregating issues, milestones, and other streams generated at the repository level to provide a voice for the API lifecycle.

Seamless Integration With Lifecycle Tooling Is Essential
Version control in its current form is the movement we experience across the API lifecycle. The other supporting elements for organizational, project, and user management provide the participation layer throughout the lifecycle. The communication and notification layers give the lifecycle a voice, keeping everyone in sync and moving forward in concert. For any tool or service to contribute to the API lifecycle in a meaningful way, it has to seamlessly integrate with Git, as well as with the underlying APIs for these software development, version control, devops, and social coding platforms. Successful API service providers will integrate with our existing software lifecycle(s), serving one or many stops, and helping us all move beyond a single, walled garden platform mentality.


Helping Stoplight.io Get The Word Out About Version 3.0

I’ve been telling stories about what the Stoplight.io team has been building for a couple of years now. They are one of the few API service provider startups left that are doing things that interest me, and really delivering value to their API consumers. In the last couple of years, as things have consolidated, and funding cycles have shifted, there just hasn’t been the same amount of investment in interesting API solutions. So when Stoplight.io approached me to do some storytelling around their version 3.0 release, I was all in. Not just because I’m getting paid, but because they are doing interesting things that I feel are worth talking about.

I’ve always categorized Stoplight.io as an API design solution, but as they’ve iterated upon the last couple of versions, I feel they’ve managed to find their footing, and are maturing to become one of the few true API lifecycle solutions available out there. They don’t serve every stop along the API lifecycle, but they do focus on a handful of the most valuable stops, and most importantly, they have adopted OpenAPI as the core of what they do, allowing API providers to put Stoplight.io to work for them, as well as any other solution that supports OpenAPI at its core.

As far as the stops along the API lifecycle that they service, here is how I break them down:

  • Definitions - An OpenAPI driven way of delivering APIs, that goes beyond just a single definition, and allows you to manage your API definitions at scale, across many teams, and services.
  • Design - One of the most advanced API design GUI solutions out there, helping you craft and evolve your APIs using the GUI, or working directly with the raw JSON or YAML.
  • Virtualization - Enabling the mocking and virtualization of your APIs, allowing you to share, consume, and iterate on your interfaces long before you have to deliver more costly code.
  • Testing - Provides the ability to not just test your individual APIs, but to define and automate detailed tests, assertions, and a variety of scenarios to ensure APIs are doing what they should be doing.
  • Documentation - Allows for the publishing of simple, clean, but interactive documentation that is OpenAPI driven, and can be shared with your team, and your API community through a central portal.
  • Discovery - Tightly integrated with Github, and maximizing an OpenAPI definition in a way that makes the entire API lifecycle discoverable by default.
  • Governance - Allows for teams to get a handle on the API design and delivery lifecycle, while working to define and enforce API design standards, and maintain a certain quality of service across the lifecycle.

They may describe themselves a little differently, but in terms of the way that I tag API service providers, these are the stops they service along the API lifecycle. They have a robust API definition and design core, with an attractive, easy to use interface, which allows you to define, design, virtualize, document, test, and collaborate with your team, community, and other stakeholders. That makes them a full API lifecycle service provider in my book, because they focus on serving multiple stops, and they are OpenAPI driven, which allows every other stop to also be addressed using any other tool or service that supports OpenAPI–which is how you do business with APIs in 2018.

I’ve added API governance to what they do, because they are beginning to build in much of what API teams are going to need to begin delivering APIs at scale across large organizations. Not just design governance, but the model and schema management you’ll need, combined with mocking, testing, documentation, and the discovery that comes along with delivering APIs like Stoplight.io does. They reflect not just where the API space is headed with delivering APIs at scale, but what organizations need when it comes to bringing order to their API-driven, software development lifecycle in a microservices reality.

I have five separate posts that I will be publishing over the next couple weeks as Stoplight.io releases version 3.0 of their API development suite. Per my style, the posts won’t always be directly about their product, but about the solutions it delivers, though occasionally you’ll hear me mention them directly, because I can’t help it. Thanks to Stoplight.io for supporting what I do, and thanks to you my readers for checking out what Stoplight.io brings to the table. I think you are going to dig what they are up to.


OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, often using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy by helping them better understand how they can work in concert by focusing on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract
There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and begin either handwriting their OpenAPI definitions, or using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
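To make this concrete, here is a minimal sketch of what a hand-written contract could look like before any code exists. The service name, paths, and schema are entirely hypothetical, and the Python wrapper is only there to show the YAML parses cleanly–it is not any particular team’s workflow.

```python
# A minimal, hypothetical OpenAPI 3.0 contract for an imaginary "orders" service,
# written by hand before any code exists, then parsed to confirm it is valid YAML.
import yaml  # pip install pyyaml

CONTRACT = """
openapi: 3.0.0
info:
  title: Orders Service
  version: 0.1.0
paths:
  /orders:
    get:
      summary: List orders
      responses:
        '200':
          description: A list of orders
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Order'
components:
  schemas:
    Order:
      type: object
      properties:
        id:
          type: string
        total:
          type: number
"""

contract = yaml.safe_load(CONTRACT)
print(f"{contract['info']['title']} defines {len(contract['paths'])} path(s)")
```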

Mocking Of Each Microservice To Hammer Out Contract
Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tools available that take an OpenAPI and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
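If you want to see the idea behind those mocking tools, here is a toy sketch, not a replacement for Postman, Stoplight.io, or Prism. It assumes a hypothetical openapi.yaml sitting in the repo, and simply answers every GET path with whatever example response the contract provides.

```python
# A toy mock server: read a (hypothetical) openapi.yaml and answer every GET path
# with the example response defined in the contract, if one exists.
import yaml
from flask import Flask, jsonify  # pip install flask pyyaml

app = Flask(__name__)

with open("openapi.yaml") as f:  # hypothetical file name
    spec = yaml.safe_load(f)

def make_handler(example):
    def handler(**kwargs):  # path parameters arrive as keyword arguments and are ignored
        return jsonify(example)
    return handler

for path, operations in spec.get("paths", {}).items():
    get_op = operations.get("get")
    if not get_op:
        continue
    # Pull the first JSON example for the 200 response, if the contract provides one.
    content = get_op.get("responses", {}).get("200", {}).get("content", {})
    example = content.get("application/json", {}).get("example", {})
    rule = path.replace("{", "<").replace("}", ">")  # OpenAPI params -> Flask params
    app.add_url_rule(rule, endpoint=path, view_func=make_handler(example))

if __name__ == "__main__":
    app.run(port=4010)
```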

OpenAPI Documentation Always Available In Repository
Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI has changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like Github Pages, or locally on localhost. API documentation isn’t just for the microservice’s owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.

Bringing An API To Life Using Its OpenAPI Contract
Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out as part of an API gateway solution, or through common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code while an API is still changing is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance
A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.
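Here is a hedged, pytest-style sketch of what “the contract becomes the assertion” can mean in practice. The base URL and openapi.yaml file name are hypothetical, and a real monitoring setup would assert far more than status codes.

```python
# Contract-driven testing sketch: call every GET path in a (hypothetical) openapi.yaml
# and require the status code to be one the contract actually documents.
import yaml
import requests  # pip install requests pyyaml

BASE_URL = "https://api.example.com"  # hypothetical host

def test_documented_status_codes():
    with open("openapi.yaml") as f:
        spec = yaml.safe_load(f)

    for path, operations in spec.get("paths", {}).items():
        get_op = operations.get("get")
        if not get_op:
            continue
        documented = set(get_op.get("responses", {}).keys())  # e.g. {"200", "404", "default"}
        response = requests.get(BASE_URL + path, timeout=10)
        # The contract is the assertion: an undocumented status code is a failure.
        assert str(response.status_code) in documented, (
            f"{path} returned {response.status_code}, "
            f"which is not documented in the contract ({documented})"
        )

if __name__ == "__main__":
    test_documented_status_codes()
```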

Empowering Security To Be Directed By The OpenAPI Contract
An OpenAPI provides the entire details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There are a growing number of services and tools emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts. Taking the paths, parameters, definitions, security, and authentication details and using them as actionable information helps ensure security across not just an individual service, but potentially hundreds, or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.

OpenAPI Provides API Discovery By Default
An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage for all interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across their Github, Gitlab, Bitbucket, or Team Foundation Server (TFS) organizations, and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.
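As a rough sketch of that discovery step on Github, here is what searching an organization for existing OpenAPI contracts might look like. The organization name and token are placeholders, and the assumption that every contract is named openapi.yaml is mine, not a standard.

```python
# Search a (hypothetical) Github organization for existing OpenAPI definitions before
# planning a new service. Github's code search API requires an authenticated token.
import requests  # pip install requests

TOKEN = "ghp_your_token_here"  # hypothetical personal access token
ORG = "example-org"            # hypothetical Github organization

response = requests.get(
    "https://api.github.com/search/code",
    params={"q": f"filename:openapi.yaml org:{ORG}"},
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
response.raise_for_status()

for item in response.json().get("items", []):
    # Each hit points at a repository that already publishes an OpenAPI contract.
    print(item["repository"]["full_name"], item["path"])
```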

OpenAPI Delivers The Integration Contract For Clients
OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for the quick setup of environments, and collaboration on integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens, or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams
Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performance tested, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and at scale it becomes the measure for how well each service is delivering across not just tens, but hundreds, or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is also the bottom up contract that lets the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.

I can’t overstate the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, and spend time editing it by hand, and loading it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


Github Is Your Asynchronous Microservice Showroom

When you spend time looking at a lot of microservices, across many different organizations, you really begin to get a feel for the ones who have owners / stewards that are thinking about the bigger picture. When people are just focused on what the service does, and not actually how the service will be used, the Github repos tend to be cryptic, out of sync, and don’t really tell a story about what is happening. Github is often just seen as a vehicle for the code to participate in a pipeline, and not about speaking to the rest of the humans and systems involved in the overall microservices concert that is occurring.

Github Is Your Showroom
Each microservice is self-contained within a Github repository, making it the showcase for the service. Remember, the service isn’t just the code and other artifacts buried away in folders for nobody to understand unless they know how to operate the service or continuously deploy the code. It is a service. The service is part of a larger suite of services, and is meant to be understood and reusable by other human beings in the future, potentially long after you are gone, and aren’t present to give a 15 minute presentation in a meeting. Github is your asynchronous microservices showroom, where ANYONE should be able to land, and understand what is happening.

README Is Your Menu
The README is the centerpiece of your showroom. ANYONE should be able to land on the README for your service, and immediately get up to speed on what a service does, and where it is in its lifecycle. README should not be written for other developers, it should be written for other humans. It should have a title, and short, concise, plain language description of what a service does, as well as any other relevant details about what a service delivers. The README for each service should be a snapshot of a service at any point in time, demonstrating what it delivers currently, what the road map contains, and the entire history of a service throughout its lifecycle. Every artifact, documentation, and relevant element of a service should be available, and linked to via the README for a service.

An OpenAPI Contract
Your OpenAPI (fka Swagger) file is the central contract for your service, and the latest version(s) should be linked to prominently from the README for your service. This JSON or YAML definition isn’t just some output as exhaust from your code, it is the contract that defines the inputs and outputs of your service. This contract will be used to generate code, mocks, sample data, tests, documentation, and drive API governance and orchestration across not just your individual service, but potentially hundreds or thousands of services. An up to date, static representation of your service’s API contract should always be prominently featured off the README for a service, and ideally located in a consistent folder across services, as service architects, designers, coaches, and governance people will potentially be looking at many different OpenAPI definitions at any given moment.

Issues Driven Conversation
Github Issues aren’t just for issues. They are a robust, taggable, conversational thread around each individual service. Github issues should be the self-contained conversation that occurs throughout the lifecycle of a service, providing details of what has happened, what the current issues are, and what the future road map will hold. Github Issues are designed to help organize a robust amount of conversational threads, and allow for exploration and organization using tags, allowing any human to get up to speed quickly on what is going on. Github issues should be an active journal for the work being done on a service, through a monologue by the service owner / steward, and act as the feedback loop with ALL OTHER stakeholders around a service. Github issues for each service will be how the anthropologists decipher the work you did long after you are gone, and should articulate the entire life of each individual service.

Services Working In Concert
Each microservice is a self-contained unit of value in a larger orchestra. Each microservice should do one thing and do it well. The state of each service’s Github repository, README, OpenAPI contract, and the feedback loop around it will impact the overall production. While a service may be delivered to meet a specific application need in its early moments, the README, OpenAPI contract, and feedback loop should attract and speak to any potential future application. A service should be able to be reused, and remixed by any application developer building internal, partner, or someday public applications. Not everyone landing on your README will have been in the meetings where you presented your service. Github is your service’s showroom, and where you will be making your first, and ongoing impression on other developers, as well as executives who are poking around.

Leading Through API Governance
Your Github repo, README, and OpenAPI contract are being used by overall microservices governance operations to understand how you are designing your services, crafting your schema, and delivering your service. Without an OpenAPI and README, your service contributes nothing in the context of API governance, feeding into the bigger picture, and helping define overall governance. Governance isn’t scripture coming off the mountain and telling you how to operate, it is gathered, extracted, and organized from existing best practices, and leadership across teams. Sure, we bring in outside leadership to help round off the governance guidance, but without a README, OpenAPI, and active feedback loop around each service, your service isn’t participating in the governance lifecycle. It is just an island, doing one thing, and nobody will ever know if it is doing it well.

Making Your Github Presence Count
Hopefully this post helps you see your own microservice Github repository through an external lens. Hopefully it will help you shift from Github being just about code, for coders, to something that is part of a larger conversation. If you care about doing microservices well, and you care about the role your service will play in the larger application production, you should be investing in your Github repository being the showroom for your service. Remember, this is a service. Not in a technical sense, but in a business sense. Think about what good service is to you. Think about the services you use as a human being each day. How you like being treated, and how informed you like to be about the service details, pricing, and benefits. Now, visit the Github repositories for your services, and think about the people who will be consuming them in their applications. Does it meet your expectations for a quality level of service? Will someone brand new land on your repo and understand what your service does? Does your microservice do one thing, and does it do it well?


An OpenAPI Service Dependency Vendor Extension

I’m working on a healthcare related microservice project, and I’m looking for a way to help developers express their service dependencies within the OpenAPI or some other artifact. At this point I’m feeling like the OpenAPI is the place to articulate this, adding a vendor extension to the specification that can allow for the referencing of one or more other services any particular service is dependent on, helping make service dependencies more machine readable at discovery and runtime.

To help not reinvent the wheel, I am looking at using the Schema.org Web API type, including the extensions put forth by Mike Ralphson and team. I’d like the x-api-dependencies collection to adopt a standardized schema that is flexible enough to reference different types of other services. I’d like to see the following elements be present for each dependency:

  • versions (OPTIONAL array of thing -> Property -> softwareVersion). It is RECOMMENDED that APIs be versioned using [semver]
  • entryPoints (OPTIONAL array of Thing -> Intangible -> EntryPoint)
  • license (OPTIONAL, CreativeWork or URL) - the license for the design/signature of the API
  • transport (enumerated Text: HTTP, HTTPS, SMTP, MQTT, WS, WSS etc)
  • apiProtocol (OPTIONAL, enumerated Text: SOAP, GraphQL, gRPC, Hydra, JSON API, XML-RPC, JSON-RPC etc)
  • webApiDefinitions (OPTIONAL array of EntryPoints) containing links to machine-readable API definitions
  • webApiActions (OPTIONAL array of potential Actions)

Using the Schema.org WebAPI type would allow for a pretty robust way to reference dependencies between services in a machine readable way, that can be indexed, and even visualized in services and tooling. When it comes to evolving and moving forward services, having dependency details baked in by default makes a lot of sense, and ideally each dependency definition would have all the details of the dependency, as well as potential contact information, to make sure everyone is connected regarding the service road map. Anytime a service is being deprecated, versioned, or impacted in any way, we have all the dependencies needed to make an educated decision regarding how to progress with as little friction as possible.

I’m going to go ahead and create a draft OpenAPI vendor extension specification for x-service-dependencies, and use the Schema.org WebAPI type, complete with the added extensions. Once I start using it, and have successfully implemented it for a handful of services I will publish and share a working example. I’m also on the hunt for other examples of how teams are doing this. I’m not looking for code dependency management solutions, I am specifically looking for API dependency management solutions, and how teams are making these dependencies discoverable in a machine readable way. If you know of any interesting approaches, please let me know, I’d like to hear more about it.
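Since the draft specification doesn’t exist yet, here is a purely speculative sketch of what the extension could look like inside an OpenAPI definition, loosely following the Schema.org WebAPI elements listed above. Every field name, service name, and URL here is a placeholder of my own, not anything final.

```python
# A speculative x-service-dependencies fragment for a hypothetical healthcare service,
# parsed with PyYAML just to show it is a well-formed vendor extension.
import yaml  # pip install pyyaml

FRAGMENT = """
x-service-dependencies:
  - name: patient-records
    description: Upstream service providing patient record lookups
    versions:
      - softwareVersion: 1.2.0
    entryPoints:
      - urlTemplate: https://records.internal.example.com/v1
    transport: HTTPS
    apiProtocol: JSON API
    webApiDefinitions:
      - url: https://github.com/example-org/patient-records/blob/master/openapi.yaml
    contactPoint:
      email: records-team@example.com
"""

dependencies = yaml.safe_load(FRAGMENT)["x-service-dependencies"]
for dep in dependencies:
    print(f"depends on {dep['name']} ({dep['transport']}, {dep['apiProtocol']})")
```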


The API Stack Profiling Checklist

I just finished a narrative around my API Stack profiling, telling the entire story around the profiling of APIs for inclusion in the stack. To help encourage folks to get involved, I wanted to help distill down the process into a single checklist that could be implemented by anyone.

The Github Base
Everything begins as a Github repository, and it can exist in any user or organization. Once ready, I can fork and publish it as part of the API Stack, or sync with an existing repository project.

  • Create Repo - Create a single repository with the name of the API provider in plain language.
  • Create README - Add a README for the project, articulating who the target API provider is, as well as who the author of the profile is.

OpenAPI Definition
Profile the API surface area using OpenAPI, providing a definition of the request and response structure for all APIs. Head over to their repository if you need to learn more about OpenAPI. Ideally, there is an existing OpenAPI you can start with, or another machine readable definition you can use as a base–look around within the provider’s developer portal, because sometimes you can find an existing definition to start with. Next look on Github, as you never know where there might be something existing that will save you time and energy. However you approach it, I’m looking for complete details on the following:

  • info - Provide as much information about the API.
  • host - Provide a host, or variables describing host.
  • basePath - Document the basePath for the API.
  • schemes - Provide any schemes that the API uses.
  • produces - Document which media types the API uses.
  • paths - Detail the paths including methods, parameters, enums, responses, and tags.
  • definitions - Provide schema definitions used in all requests and responses.

To help accomplish this, I often will scrape, and use any existing artifacts I can possibly find. Then you just have to roll up your sleeves and begin copying and pasting from the existing API documentation, until you have a complete definition. There is never any definitive way to make sure you’ve profiled the entire API, but do your best to profile what is available, including all the detail the provider has shared. There will always be more that we can do later, as the API gets used more, and integrated by more providers and consumers.
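If it helps, here is a small sketch of a completeness check against the checklist above. It assumes a Swagger/OpenAPI 2.0 shaped document saved as swagger.yaml in the repo, which is my own placeholder for wherever you keep the definition.

```python
# Flag any of the required top-level checklist elements still missing from a
# (hypothetical) swagger.yaml profiled for the API Stack.
import yaml  # pip install pyyaml

REQUIRED = ["info", "host", "basePath", "schemes", "produces", "paths", "definitions"]

with open("swagger.yaml") as f:  # hypothetical file name
    spec = yaml.safe_load(f)

missing = [element for element in REQUIRED if element not in spec]
if missing:
    print("Profile is incomplete, still missing:", ", ".join(missing))
else:
    print(f"All checklist elements present, {len(spec['paths'])} paths profiled")
```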

Postman Collection
Once you have an OpenAPI definition available for the API, import it into Postman. Make sure you have a key, and the relevant authentication environment settings you need. Then begin making API calls to each individual API path, making sure your API definition is as complete as it possibly can be. This can be the quickest, or the most time consuming part of the profiling, depending on the complexity and design of the API. The goal is to certify each API path, and make sure it truly reflects what has been documented. Once you are done, export a Postman Collection for the API, complementing the existing OpenAPI with a more run-time ready API definition.

Merging the Two Definitions
Depending on how many changes occurred within the Postman portion of the profiling, you will have to sync things up with the OpenAPI. Sometimes it is a matter of making minor adjustments, sometimes you are better off generating an entirely new OpenAPI from the Postman Collection using APIMATIC’s API Transformer. The goal is to make sure the OpenAPI and Postman are in sync, and working the same way as expected. Once they are in sync they can be uploaded to the Github repository for the project.

Managing the Unknown Unknowns
There will be a lot of unknowns along the way. A lot of compromises, and shortcuts that can be taken. Not every definition will be perfect, and sometimes it will require making multiple definitions because of the way an API provider has designed their API and used multiple subdomains. Document it all as Github issues in the repository. Use the Github issues for each API as the journal for what happened, and where you document any open questions, or unfinished work. This makes the repository the central truth for the API definition, as well as the conversation around the profiling process.

Providing A Central APIs.json Index
The final step of the process is to create an APIs.json index for the API. You can find a sample one over at the specification website. When I profile an API using APIs.json I am always looking for as much detail as I possibly can, but for the purposes of API Stack profiling, I’m looking for these essential building blocks:

  • Website - The primary website for an entity owning the API.
  • Portal - The URL to the developer portal for an API.
  • Documentation - The direct link to the API documentation.
  • OpenAPI - The link to the OpenAPI I created on Github.
  • Postman - The link to the Postman Collection I created on Github.
  • Sign Up - Where do you sign up for an API.
  • Pricing - A link to the plans, tiers, and pricing for an API.
  • Terms of Service - A URL to the terms of service for an API.
  • Twitter - The Twitter account for the API provider – ideally, API specific.
  • Github - The Github account or organization for the API provider.

If you create multiple OpenAPIs, and Postman Collections, you can add an entry for each API. If you break a larger API provider into several entity provider repositories, you can link them together using the include property of the APIs.json file. I know the name of the specification is JSON, but feel free to do them in YAML if you feel more comfortable–I do. ;-) The goal of the APIs.json is to provide a complete profile of the API operations, going beyond what OpenAPI and Postman Collections deliver.
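To make the building blocks above a little more concrete, here is a hedged sketch of what one of these indexes could look like in YAML. It is only loosely modeled on the APIs.json format–the provider, URLs, and property labels are illustrative placeholders of mine, not canonical values from the specification.

```python
# An illustrative apis.yaml index for a hypothetical provider, parsed with PyYAML.
import yaml  # pip install pyyaml

INDEX = """
name: Example Weather API
description: Profile of the Example Weather API for the API Stack
url: https://github.com/api-stack/example-weather/apis.yaml
apis:
  - name: Example Weather API
    humanURL: https://developer.example-weather.com
    baseURL: https://api.example-weather.com
    properties:
      - type: Documentation
        url: https://developer.example-weather.com/docs
      - type: OpenAPI
        url: https://github.com/api-stack/example-weather/blob/master/openapi.yaml
      - type: PostmanCollection
        url: https://github.com/api-stack/example-weather/blob/master/postman.json
      - type: SignUp
        url: https://developer.example-weather.com/signup
      - type: Pricing
        url: https://developer.example-weather.com/pricing
      - type: TermsOfService
        url: https://developer.example-weather.com/terms
      - type: Twitter
        url: https://twitter.com/exampleweather
      - type: Github
        url: https://github.com/example-weather
"""

index = yaml.safe_load(INDEX)
print(f"{index['name']} indexes {len(index['apis'])} API(s)")
```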

Including In The API Stack
You should keep all your work in your own Github organization or user account. Once you’ve created a repository you would like to include in the API Stack, and syndicate the work to the Streamdata.io API Gallery, APIs.io, APIs.guru, Postman Network, and others, then just submit it as a Github issue on the main repository. I’m still working on the details of how to keep repositories in sync with contributors, then reward and recognize them for their work, but for now I’m relying on Github to track all contributions, and we’ll figure this part out later. The API Stack is just the workbench for all of this, and I’m using it as a place to merge the work of many partners, from many sources, and then work to sensibly syndicate out validated API profiles to all the partner galleries, directories, and search engines.


My API Stack Profiling Process

I am escalating the conversations I’m having with folks regarding how I profile and understand APIs as part of my API Stack work. It is something I’ve been evolving for 2-3 years, as I struggle with different ways to attack and provide solutions within the area of API discovery. I want to understand what APIs exist out there as part of my industry research, but along the way I want to also help others find the APIs they are looking for, while also helping API providers get their APIs found in the first place. All of this has led me to establish a pretty robust process for profiling API providers, and documenting what they are doing.

As I shift my API Stack work into 2nd gear, I need to further formalize this process, articulate to partners, and execute at scale. Here is my current snapshot of what is happening whenever I’m profiling an API and adding it to my API Stack organization, and hopefully moving it forward along this API discovery pipeline in a meaningful way. It is something I’m continuing to set into motion all by myself, but I am also looking to bring in other API service providers, API providers, and API consumers to help me realize this massive vision.

Github as a Base
Every new “entity” I am profiling starts with a Github repository and a README, within a single API Stack organization for providers. Right now this is all automated, as I’m transitioning to this way of doing things at scale. Once in motion, each provider I’m profiling will become pretty manual. What I consider an “entity” is fluid. Most smaller API providers have a single repository, while other larger ones are broken down by line of business, to help reduce API definitions and the conversation around APIs down to the smallest possible unit we can. Every API provider I’m profiling gets its own Github repository, subdomain, set of definitions, and conversation around what is happening. Here are the base elements of what is contained within each Github repo.

  • OpenAPI - A JSON or YAML definition for the surface area of an API, including the requests, responses, and underlying schema used as part of both.
  • Postman Collection - A JSON Postman Collection that has been exported from the Postman client after importing the complete OpenAPI definition, and fully testing it.
  • APIs.json - A YAML definition of the surface area of an API’s operation, including home page, documentation, code samples, sign up and login, terms of service, OpenAPI, Postman Collection, and other relevant details that will matter to API consumers.

The objective is to establish a single repository that can be continuously updated, and integrated into other API discovery pipelines. The goal is to establish a complete (enough) index of the APIs using OpenAPI, make it available in a more runtime fashion using Postman, and then index the entire API operations, including the two machine readable definitions of the API(s), using APIs.json. This provides a single URL which you can ingest and use to integrate with an API, and make sense of API resources at scale.

Everything Is OpenAPI Driven
My definition still involves OpenAPI 2.0, but it is something I’m looking to shift towards 3.0 immediately. Each entity will contain one or many OpenAPI definitions located in the _openapi folder. I’m still thinking about the naming and versioning convention I’m using, but I’m hoping to work this out over time. Each OpenAPI should contain complete details for the following areas:

  • info - Provide as much information about the API.
  • host - Provide a host, or variables describing host.
  • basePath - Document the basePath for the API.
  • schemes - Provide any schemes that the API uses.
  • produces - Document which media types the API uses.
  • paths - Detail the paths including methods, parameters, enums, responses, and tags.
  • definitions - Provide schema definitions used in all requests and responses.

While it is tough to define what is complete when crafting an OpenAPI definition, every path should be represented, with as much detail about the surface area of requests and responses. It is the detail that really matters, ensuring all parameters and enums are present, and paths are properly tagged based upon the resources being made available. The more OpenAPI definitions you create, the more you get a feel for what is a “complete” OpenAPI definition. Something that becomes real once you create a Postman Collection for the API definition, and begin to actually try and use an API.

Validating With A Postman Collection
Once you have a complete OpenAPI definition defined, the next step is to validate your work in Postman. You simply import the OpenAPI into the client, and get to work validating that each API path actually works. You make sure you have an API key, and any other authentication requirements, and work your way through each path, validating that the OpenAPI is indeed complete. If I’m missing sample responses, the Postman portion of this API discovery journey is where I get them. Grabbing validated responses, anonymizing them, and pasting them back into the OpenAPI definition. Once I’ve worked my way through every API path, and verified it works, I then export a final Postman Collection for the API I’m profiling.

Indexing API Operations With APIs.json
I upload my OpenAPI, and Postman Collection to the Github repository for the API entity I am targeting. Then I get to work on an APIs.json for the API. This portion of the process is about documenting the OpenAPI and Postman I just created, but then also doing the hard work of identifying everything else an API provider offers to support their operations, which will be needed to get up and running with API integration. Here are a handful of the essential building blocks I’m documenting:

  • Website - The primary website for an entity owning the API.
  • Portal - The URL to the developer portal for an API.
  • Documentation - The direct link to the API documentation.
  • OpenAPI - The link to the OpenAPI I created on Github.
  • Postman - The link to the Postman Collection I created on Github.
  • Sign Up - Where do you sign up for an API.
  • Pricing - A link to the plans, tiers, and pricing for an API.
  • Terms of Service - A URL to the terms of service for an API.
  • Twitter - The Twitter account for the API provider – ideally, API specific.
  • Github - The Github account or organization for the API provider.

This list represents the essential things I’m looking for. I have almost 200 other building blocks I document as part of API operations, ranging from SDKs to training videos. The presence of these building blocks tells me a lot about the motivations behind an API platform, and sends off a wealth of signals that can be tuned into by humans, or programmatically, to understand the viability and health of an API.

Continuously Integratable API Definition
Now we should have the entire surface area of an API, and its operations, defined in a machine readable format. The APIs.json, OpenAPI, and Postman can be used to on-board new developers, and used by API service providers looking to provide valuable services to the space. Ideally, each API provider is involved in this process, and once I have a repository setup, I will be reaching out to each provider as part of the API profiling process. Even when API providers are involved, it doesn’t always mean you get a better quality API definition. In my experience API providers often don’t care about the detail as much as some consumers and API service providers will. In the end, I think this is all going to be a community effort, relying on many actors to be involved along the way.

Self-Contained Conversations About Each API
With each entity being profiled living in its own Github repository, it opens up the opportunity to keep the API conversation localized to the repository. I’m working to establish any work in progress, and conversations around each API’s definitions, within the Github issues for the repository. There will be ad-hoc conversations and issues submitted, but I’ll also be adding path specific threads that are meant to be ongoing threads dedicated to profiling each path. I’ll be incorporating these threads into tooling and other services I’m working with, encouraging a centralized, but potentially also distributed conversation around each unique API path, and its evolution. Keeping everything discoverable as part of an API’s definition, and publicly available so anyone can join in.

Making Profiling A Community Effort
I will repeat this process over and over for each entity I’m profiling. Each entity lives in its own Github repository, with the definition and conversation localized. Ideally API providers are joining in, as well as API service providers, and motivated API consumers. I have a lot of energy to be profiling APIs, and can cover some serious ground when I’m in the right mode, but ultimately there is more work here than any single person could ever accomplish. I’m looking to standardize the process, and distribute the crowdsourcing of this across many different Github repositories, and involve many different API service providers I’m working with to ensure the most meaningful APIs are being profiled, and maintained. Highlighting another important aspect of all of this, and the challenges to keep API definitions evolving, soliciting many contributions, while keeping them certified, rated, and available for production usage.

Ranking APIs Using Their Definitions
Next, I’m ranking APIs based upon a number of criteria. It is a ranking system I’ve slowly developed over the last five years, and is something I’m looking to continue to refine as part of this API Stack profiling work. Using the API definitions, I’m looking to rate each API based upon a number of elements that exist within three main buckets:

  • API Provider - Elements within the control of the API provider like the design of their API, sharing a road map, providing a status dashboard, being active on Twitter and Github, and other signals you see from healthy API providers.
  • API Community - Elements within the control of the API community, and not always controlled by the API provider themselves. Things like forum activity, Twitter responses, retweets, and sentiment, and number of stars, forks, and overall Github activity.
  • API Analyst - The last area is getting the involvement of API analysts, and pundits, and having them provide feedback on APIs, and get involved in the process. Tracking on which APIs are being talked about by analysts, and storytellers in the space.

Most of this ranking I can automate, as long as there is a complete APIs.json and OpenAPI definition, and machine readable sources of information present in those indexes. These are signals I can tap into using blog RSS, the Twitter API, the Github API, the Stack Exchange API, and others. However, some of it will be manual, as well as require the involvement of API service providers, helping me add to my own ranking information, while also bringing their own ranking systems and algorithms to the table. In the end, I’m hoping the activity around each API’s Github repository also becomes its own signal, one consumers in the community can tune into, weight, or exclude based upon the signals that matter most to them.
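To give a feel for what automating just one of these community signals could look like, here is a rough sketch that pulls stars, forks, and open issues for a provider’s repository through the Github API and folds them into a naive score. The repository name and the weighting are placeholders, not my actual ranking algorithm.

```python
# Pull a few community signals for a (hypothetical) provider repository and combine
# them into a naive score. A real ranking would weigh many more signals than this.
import requests  # pip install requests

def github_signal(owner: str, repo: str) -> int:
    # Public endpoint documented by Github: GET /repos/{owner}/{repo}
    response = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    response.raise_for_status()
    data = response.json()
    # Naive community score: stars count more than forks, open issues count against.
    return data["stargazers_count"] * 2 + data["forks_count"] - data["open_issues_count"]

print(github_signal("example-org", "example-api"))  # hypothetical repository
```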

Streamdata.io Stream Rank
Adding to my ranking system, I am applying my partner Streamdata.io’s ranking system for APIs, looking to determine the activity surrounding an API, and how much opportunity exists to turn an API into a real time stream, as well as identify the meaningful events that are occurring. Using the OpenAPI definitions generated for each API being profiled, I begin profiling for Stream Rank by defining the following:

  • Download Size - What is the API response download size when I poll an API?
  • Download Time - What is the API response download time when I poll an API?
  • Did It Change - Did the response change from the previous response when I poll an API?

I poll an API for at least 24-48 hours, and up to a week or two if possible. I’m looking to understand how large API responses are, and how often they change–taking notice of daily or weekly trends in activity along the way. The result is a set of scores that help me identify:

  • Bandwidth Savings - How much bandwidth can be saved?
  • Processor Savings - How much processor time can be saved?
  • Real Time Savings - How real time an API resource is, which amplifies the savings?

All of which go into what we are calling a Stream Rank, which ultimately helps us understand the real time opportunity around an API. When you have data on how often an API changes, and then you combine this with the OpenAPI definition, you have a pretty good map for how valuable a set of resources is, based upon the ranking of the overall API provider, but also the value that lies within each endpoint’s definition. Providing me with another pretty compelling dimension to the overall ranking for each of the APIs I’m profiling.
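Here is a simplified sketch of the three measurements behind Stream Rank: poll an endpoint on an interval, record response size and time, and hash the body to detect change. The endpoint and interval are hypothetical, and a real profile would run for days or weeks, not a few minutes.

```python
# Poll a (hypothetical) API endpoint, recording download size, download time, and
# whether the response changed since the last poll.
import hashlib
import time
import requests  # pip install requests

URL = "https://api.example.com/orders"  # hypothetical endpoint
previous_hash = None

for _ in range(10):  # a real Stream Rank profile would poll for 24-48 hours or more
    started = time.time()
    response = requests.get(URL, timeout=30)
    elapsed = time.time() - started

    body_hash = hashlib.sha256(response.content).hexdigest()
    changed = previous_hash is not None and body_hash != previous_hash
    previous_hash = body_hash

    print(f"size={len(response.content)} bytes time={elapsed:.2f}s changed={changed}")
    time.sleep(60)  # polling interval
```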

APIMetrics CASC Score
Next I’m partnering with APIMetrics to bring in their CASC Scoring when it makes sense. They have been monitoring many of the top API providers, and applying their CASC score to overall API availability. I’d like to pay them to do this for ALL APIs in my API Stack work. I’d like to have every public API actively monitored from many different regions, and possess active, as well as historical data that can be used to help rank each API provider. Like the other work here, this type of monitoring isn’t easy, or cheap, so it will take some prioritization, and investment to properly bring in APIMetrics, as well as other API service providers, to help me rank things.

Event-Driven Opportunity With AsyncAPI
Something I’ve only just begun doing, but wanted to include on this list, is adding a fourth definition to the stack when it makes sense. If any of these APIs possess a real time, event-driven, or message oriented API, I want to make sure it is profiled using the AsyncAPI specification. The specification is just getting going, but I’m eager to help push forward the conversation as part of my API Stack work. The format is a great sister specification for OpenAPI, and something I feel is relevant for any API that I profile with a high enough Stream Rank. Once an API possesses a certain level of activity, and has a certain level of richness regarding parameters and the request surface area, the makings for an event-driven API are there–the provider just hasn’t realized it. The map for the event-driven version exists within the OpenAPI parameter enums, and the tagging for each API path. All you need is the required webhooks or streaming layer to realize an existing API’s event-driven potential–this is where Streamdata.io comes in. ;-)

One Stack, Many Discovery Opportunities
I am doing this work under my API Stack research. However, it is all being funded by Streamdata.io. It is also being stimulated by my work with Postman, the OpenAPI Initiative, APIs.io, APIs.json, API Deck, APIs.guru, and beyond. The goal is to turn API Stack into a workbench, where anyone can contribute, as well as fork and integrate API definitions into their existing pipelines. All of this will take years, and a considerable amount of investment, but it is something I’ve already invested several years into. I’m also seeing the API discovery conversation reaching a critical mass thanks to the growth in the number of APIs, as well as the adoption of OpenAPI and Postman, making API discovery a more urgent problem. The challenge will be that most people want to extract value from this kind of effort, and not give back. While I feel strongly that API definitions should be open and freely accessible, the nature of the industry doesn’t always reflect my vision, so I’ll have to get creative about how I incentivize folks, and reward or punish certain behaviors along the way.


The OpenAPI 3.0 GUI

I have been editing my OpenAPI definitions manually for some time now, as I just haven’t found a GUI editor that works well with my workflow. Swagger Editor really isn’t a GUI solution, and while I enjoy services like Stoplight.io, APIMATIC, and others, I’m very picky about what I adopt within my world. One aspect of this is that I’ve been holding out for a solution that I can run 100% on Github using Github Pages. I’ve delusionally thought that someday I would craft a slick JavaScript solution that I could run using only Github Pages, but in reality I’ve never had the time to pull it together. So, I just kept manually editing my OpenAPI definitions on the desktop using Atom, and publishing to Github as needed.

Well, now I can stop being so dumb, because my friend Mike Ralphson (@permittedsoc) has created my new favorite OpenAPI GUI, that is OpenAPI 3.0 compliant by default. As Mike defines it, “OpenAPI-GUI is a GUI for creating and editing OpenAPI version 3.0.x JSON/YAML definitions. In its current form it is most useful as a tool for starting off and editing simple OpenAPI definitions. Imported OpenAPI 2.0 definitions are automatically converted to v3.0.”

He provides a pretty simple way to get up and running with OpenAPI GUI because, “OpenAPI-GUI runs entirely client-side using a number of Javascript frameworks including Vue.JS, jQuery and Bulma for CSS. To get the app up and running just browse to the live version on GitHub pages, deploy a clone to GitHub pages, deploy to Heroku using the button below, or clone the repo and point a browser at index.html or host it yourself - couldn’t be simpler.” – “You only need to npm install the Node.js modules if you wish to use the openapi-gui embedded web server (i.e. not if you are running your own web-server), otherwise they are only there for PaaS deployments.”

Portable, Forkable API Design GUI
The forkability of OpenAPI GUI is what attracts me to it. The client-side JavaScript approach makes it portable and forkable, allowing it to run wherever it is needed. The ability to quickly publish using Github Pages, or another static environment, makes it attractive. I’d like to work on a one click way of deploying locally, so I could make it available to remote microservices teams to use locally to manage their OpenAPI definitions. I think it is important to have local, as well as cloud-based API design GUI locations, with the cloud-based solutions being more shared, possessing a team component, as well as a governance layer. However, I really like the thought of each OpenAPI having its own default editor, where you edit the definition within the repo it is located in. I could see two editions of OpenAPI GUI, the isolated edition, and a more distributed team and definition approach. IDK, just thinking out loud.

Setting the OpenAPI 3.0 Stage
The next thing I love about OpenAPI GUI is that it is all about OpenAPI 3.0, which I just haven’t had the time to fully migrate to. It has the automatic conversion of OpenAPI definitions, and gives you all the GUI elements you need to manage the robust features present in 3.0 of the OpenAPI specification. I’m itching to get ALL of my API definitions migrated from 2.0 to 3.0. The modular and reusable nature of OpenAPI 3.0, enabled by OpenAPI GUI, is a formula for a rich API design experience. There are a lot of features built into the expand and collapse GUI, which I could see becoming pretty useful once you get the hang of working in there. Adding, duplicating, copying, and reusing your way to a powerful, API-first way of getting things done. I like.

Bringing In The Magic With Wizards
Ok, now I’m gonna go all road map on you Mike. You know you don’t have to listen to everything I ask for, but cherry pick the things that make sense. I see what you are doing in the wizard section. I like it. I want a wizard marketplace. Go all Harry Potter and shit. Create a plugin framework and schema so we can all create OpenAPI driven wizards that do amazing things, and users can plug in and remove as they desire. Require each plugin to be its own repo, which can be connected and disconnected easily via the wizards tab. I’d even consider opening up the ability to link to wizards from other locations in the editor, and via the navigation. Then we are talking about paid wizards. You know, of the API validation varietal. Free validations, and then more advanced paid validations. ($$)

Branding And Navigation Customization
I would love to be able to name, organize, and add my own buttons. ;-) Let’s add a page that allows for managing the navigation bars, and even the icons that show within the page. I know it would be a lot of work, but a great thing to have in the road map. You could easily turn on the ability to add a logo, change the color palette, and some other basics that allow companies to easily brand it, and make it their space. Don’t get me wrong. It looks great now. It is clean, simple, uncluttered. I’ve just seen enough re-skinned Swagger UI and editors to know that people are going to want to mod the hell out of their GUI API design tooling. Oh yeah, what about drag and drop–I’d like to reorder by paths. ;-) Ok, I’ll shut up now.

OpenAPI GUI In A Microservices World
As I said before, I like the idea of one editor == one OpenAPI. Something that can be isolated within the repo of a service. Don’t make things too complicated and busy. In my API design editor I want to be able to focus on the task at hand. I don’t want to be distracted by other services. I’d like this GUI to be the self-contained editor accessible via the Jekyll, Github Pages layer for each of my microservices. Imagine a slightly altered API design experience for each service. I could have the workspace configurable and personalized for exactly what I need to deliver a specific service. I would have a base API design editor for jumpstarting a new service, but I’d like to be able to make the OpenAPI GUI fit each service like a glove, even making the About tab a README for the service, rather than for the editor–making it a microservice dashboard landing page for each one of my services.

Making API Design a Shareable Team Sport
Ok, contradicting my anti-social, microservices view of things, I also want the OpenAPI GUI to be a shareable team environment. Where each team member possesses their own GUI, and the world comes to them. I want a shared catalog of OpenAPI definitions. I want to be able to discover widely, and check out or subscribe to definitions within my OpenAPI GUI. I want all the reusable components to have a sharing layer, allowing me to share my components, work with the shared components of other developers, and work from a central repository of components. I want a discovery layer across the users, environments, components, OpenAPI definitions, and wizards. I want to be able to load my API design editor with all of the things I need to deliver consistent API design at scale. I want to be able to reuse the healthiest patterns in use across teams, and possibly lead by helping define, and refine, what reusable resources are available for development, staging, and production environments.

Github, Gitlab, or BitBucket As A Backend For The OpenAPI GUI
I want it to run on Github, Gitlab, or BitBucket environments, as a static site–no backend. Everything on the backend should be an API, and added as part of the wizard plugin (aka magic) environment. I want the OpenAPI, components, and any other artifacts to live as JSON or YAML within a Github, Gitlab, or BitBucket repository. If it is an isolated deployment, everything is self-contained within a single repository. If it is a distributed deployment, then you can subscribe to shared environments, catalogs, components, and wizards via many organizations and repositories. This is the first modification I’m going to make to my version: instead of the save button saving to the vue.js store, I’m going to have it write to Github using the Github API. I’m going to add a Github OAuth icon, which will give me a token, and then just read / write to the underlying Github repository. I’m going to begin with an isolated version, but then expand to search, discover, read, and write to many different Github repositories across the organizations I operate across.
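The actual change would live in the editor’s client-side JavaScript, but here is a rough sketch, in Python for brevity, of the underlying Github Contents API call that save button would need to make. The token, repository, and file path are all hypothetical placeholders.

```python
# Write an OpenAPI definition back to a (hypothetical) repository as openapi.yaml,
# using Github's contents endpoint: PUT /repos/{owner}/{repo}/contents/{path}.
import base64
import requests  # pip install requests

TOKEN = "ghp_your_token_here"        # hypothetical OAuth / personal access token
REPO = "example-org/orders-service"  # hypothetical repository
PATH = "openapi.yaml"

def save_openapi(definition_yaml, sha=None):
    payload = {
        "message": "Update OpenAPI definition from the editor",
        "content": base64.b64encode(definition_yaml.encode()).decode(),
    }
    if sha:  # updating an existing file requires its current blob sha
        payload["sha"] = sha
    response = requests.put(
        f"https://api.github.com/repos/{REPO}/contents/{PATH}",
        json=payload,
        headers={"Authorization": f"token {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

save_openapi("openapi: 3.0.0\ninfo:\n  title: Orders Service\n  version: 0.1.0\n")
```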

Annotation And Discussion Throughout
While OpenAPI GUI is a full featured, and potentially busy environment, I’d like to be able to annotate, share, and discuss within the API design environment. I’d like to be able to create Github issues at the path level, and establish conversations about the design of an API. I’d like to be able to discuss paths, schema, tags, and security elements as individual threads. Providing a conversational layer within the GUI, but one that spans many different Github repositories, organizations, and users. Adding to the coupling with the underlying repository. Baking a conversational layer into the GUI, but also into the API design process in a way that provides a history of the design lifecycle of any particular service. Continuing to make API design a team sport, and allowing teams to work together, rather than in isolation.

API Governance Across Distributed OpenAPI GUIs
With everything stored on Github, Gitlab, or Bitbucket, and shareable across teams using organizations and repositories, I can envision some pretty interesting API governance possibilities. I can envision a distributed API catalog coming into focus, indexing and aggregating all services, their definitions, and the conversations across teams. You can see the seeds for this under the wizard tab with validation, visualization, and other extensions. I can see there being an API design governance framework present, where all API definitions have to meet a certain number of validations before they can move forward. I can envision a whole suite of free and premium governance services present in a distributed OpenAPI GUI environment. With an underlying Github, BitBucket, and Gitlab base, and the OpenAPI GUI environment, you have the makings for some pretty sweet API governance possibilities.

Lots Of Potential With OpenAPI GUI I'll pause there. I have a lot of other ideas for the API design editor. I've been thinking about this stuff for a long time. I'm stoked to adopt what you've built, Mike. It's got a lot of potential, and a nice clean start. I'll begin to put it to use immediately across my API Evangelist, API Stack, and other API consulting work. With several different deployments, I will have a lot more feedback. It's an interesting leap forward in the API design tooling conversation, building on what is already going on with API service providers, and some of the other API design tooling like Apicurio, and pushing the API design conversation forward. I'm looking forward to baking OpenAPI GUI into operations a little more–it is lightweight, open source, and extensible enough to get me pretty excited about the possibilities.


Obtaining A Scalable API Definition Driven API Design Workflow On Github Complete With API Governance

There is a lot of talk of an API design first way of doing things when it comes to delivering microservices. I'm seeing a lot of organizations make significant strides towards truly decoupling how they deliver the APIs they depend on, and continue to streamline their API lifecycle, as well as governance. However, if you are down in the weeds, doing this work at scale, you know how truly hard it is to actually get everything working in concert across many different teams. While I feel like I have many of the answers, actually achieving this reality, and assembling the right tools and services for the job, is much harder than it is in theory.

There was a question posted on API Craft recently that I think gets at the challenges we all face:

We plan to use Open API 3 specification for designing API’s that are required to build our enterprise web application. These API’s are being developed to integrate the backend with frontend. They are initially planned to be internal/private. To roll out an API First strategy across multiple teams (~ 30) in our organization we want to recommend and centrally deploy a standard set of tools that could be used by teams to design and document API’s.

I am new to the swagger tool set. I understand that there is a swagger-editor tool that can help in API design while swagger-ui could help in API documentation. Trying them I realized a few problems

1. How would teams save their API’s centrally on a server? Swagger editor does not provide a way to centrally store them.
2. How can we get a directory look up that displays all the designed API’s?
3. How can we integrate the API design and API documentation tool?
4. How can the API specifications be linked with the implementation (java) to keep them up-to-date?
5. How do we show API dependencies when one api uses the other one?

We basically need to think about the end-to-end work-flow that helps teams in their SDLC to design API’s. For the start I am trying to see what can we achieve with free tools. Can someone share their thoughts based on their experience?

This reflects what I'm hearing from the numerous projects I'm consulting with. Some of the OpenAPI (Swagger) related questions can be addressed using Github, but the code integration, dependency, and other aspects of this question are much harder to answer, let alone solve with a single tool. Each team has their existing workflows, pipelines, and software delivery processes in place, so helping articulate how you should do this at scale gets pretty difficult very quickly.

Github, Gitlab, or BitBucket At The Core The first place you want to start is with Github, Gitlab, or Bitbucket at the core. If you aren't managing your services, and their supporting API definitions, as individual repositories within a version control system, achieving this at scale will be difficult. These software development platforms are where your APIs should begin, live, and evolve, as well as experience their end of life. Making not just the code accessible, but also the API definitions and other artifacts that are essential throughout the API lifecycle.

Full API Lifecycle Service Providers There are a handful of API service providers out there who will help you get pretty far down this road, evolving towards what I consider to be full lifecycle API service providers. They aren't just helping you with design, or any other individual stop along the API lifecycle, but are emerging to help you with as much of what you'll need to get the job done as they can. Here are just a handful of the leading ones out there that I track on:

  • Postman - Focusing on the client, but bringing in a full stack of lifecycle development tools to support.
  • Stoplight - Beginning with API design, but then expanding quickly to help you manage and govern the lifecycle.
  • APIMATIC - Emphasis on quality SDK generation, but approaches lifecycle deployment as a pipeline.
  • Swagger - Born out of the API definition, but quickly introducing other lifecycle solutions as part of the package.

I can't believe I'm talking about Swagger, but I'll vent about that in another post. Anyways, these are what I'd consider to be the leading API lifecycle development solutions out there. They aren't going to get you all the way towards everything listed above, but they will get you pretty far down the road. Another thing to note is that they ALL support OpenAPI definitions, which means you can actually use several of the providers, as long as your central truth is an OpenAPI definition available in a Github repository.
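To show what I mean by a central truth on Github, here is a small sketch that pulls an OpenAPI definition from a raw Github URL and derives a simple catalog entry from it, which begins to answer the directory lookup question above. The URL is hypothetical, and I'm assuming the definition is committed as JSON so no YAML parser is needed.

    // A sketch of deriving a catalog entry from an OpenAPI definition hosted on Github.
    interface CatalogEntry {
      name: string;
      version: string;
      pathCount: number;
      source: string;
    }

    async function catalogService(rawUrl: string): Promise<CatalogEntry> {
      const response = await fetch(rawUrl);
      if (!response.ok) {
        throw new Error(`Could not fetch the definition: ${response.status}`);
      }
      const openapi = await response.json();

      return {
        name: openapi.info?.title ?? "unknown",
        version: openapi.info?.version ?? "unknown",
        pathCount: Object.keys(openapi.paths ?? {}).length,
        source: rawUrl,
      };
    }

    // Example (hypothetical URL):
    // catalogService("https://raw.githubusercontent.com/my-org/my-service/main/openapi.json");

Run that across every service repository in your organizations and you have the beginnings of the central directory the question above was asking for, without committing to any single vendor.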

No single one of these services will address the questions posted above, but collectively they could. It shows how fragmented services in the space are, and how many different interpretations of the API lifecycle there are. However, it also shows the importance of an API definition-driven approach to the lifecycle, and how using OpenAPI and Postman Collections can help bridge our usage of multiple API service providers to get the job done. Next, I’ll work on a post outlining some of the open source tooling I’d suggest to help solve some of these problems, and then work to connect the dots more on how you could actually do this in practice on the ground within your organization.


Using Postman to Explore the Triathlon API

I'm taking time to showcase any API provider I come across who has published their OpenAPI definitions to Github, like New York Times, Box, Stripe, SendGrid, Nexmo, and others have. I'm also taking the time to publish stories showcasing any API provider who similarly publishes Postman Collections as part of their API documentation. Next up on my list is the Triathlon API, which provides a pretty sophisticated API stack for managing triathlons around the world, complete with a list of Postman Collections for exploring and getting up and running with their API.

Much like Okta, which I wrote about last week, the Triathlon API has broken their Postman Collections into individual service collections, and provides a nice list of them for easy access. Making it quick and easy to get up and running making calls to the API. Something that ALL API providers should be doing. Sorry, but Postman Collections should be a default part of your API documentation, just like an OpenAPI definition should be the driver of your interactive API docs, and the rest of your API lifecycle.

Every provider should be maintaining their OpenAPI definitions, as well as Postman Collections, on Github, and baking them into their API documentation. Your OpenAPI should be the central truth for your API operations, and then you can easily import it, and generate Postman Collections as you design, test, and evolve your API using the Postman development suite. I know there are many API providers who haven't caught up to this approach to delivering API resources, but it is something they need to tune into, making the necessary shift in how they are delivering their resources.
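If you want to script that generation step rather than doing it by hand in the Postman app, here is a minimal sketch, assuming the openapi-to-postmanv2 npm package, and the convert() callback interface as I understand it, so double check the package documentation before leaning on the details.

    // A sketch of generating a Postman Collection from the OpenAPI definition in a repository.
    // Assumes the openapi-to-postmanv2 package; the convert() interface shown here is my
    // understanding of that package, so treat the details as an assumption.
    import { readFileSync, writeFileSync } from "fs";

    // The package does not ship TypeScript types, so pull it in with require().
    const Converter = require("openapi-to-postmanv2");

    const openapi = readFileSync("openapi.yaml", "utf8"); // the central truth in the repo

    Converter.convert({ type: "string", data: openapi }, {}, (err: Error | null, result: any) => {
      if (err || !result.result) {
        console.error("Conversion failed:", err ?? result.reason);
        return;
      }
      // Commit the collection alongside the OpenAPI definition and link it from your docs.
      writeFileSync("postman-collection.json", JSON.stringify(result.output[0].data, null, 2));
    });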

In addition to regular stories like this on the blog, you will find me reaching out to individual API providers asking if they have an OpenAPI and / or Postman Collections. I'm personally invested in getting API providers to adopt these API definition formats. I want to see their APIs present in my API Stack work, as well as other API discovery projects I'm contributing to like Streamdata.io, Postman Network, APIs.Guru, and others. Making sure your APIs are discovered, and making sure you are getting out of the way of your developers by baking API definitions into your API operations–it is just what you do in 2018.

