
The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on GitHub, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a GitHub issue for the repository.


Having The Dedication To Lead An API Effort Forward Within A Large Enterprise Organization

I work with a lot of folks in large enterprise organizations, institutions, and government agencies who are moving the API conversation forward within their groups. I’m all too familiar with what it takes to move the API conversation forward within large, well-established enterprise organizations. However, I am the first to admit that while I have a deep understanding of what it involves, I do not have the fortitude to actually lead an effort for the sustained amount of time it takes to actually make change. I just do not have the patience and the personality for it, and I’m eternally grateful for those that do.

There are regular streams of emails in my inbox from people embedded within enterprise organizations, looking for guidance, counseling, and assistance in moving forward the API conversation at their organizations. I am happy to provide assistance in an advisory capacity, and consult with groups to help them develop their strategies. A significant portion of my income comes from conducting 1-3 day workshops within the enterprise, helping teams work through what they need to do. There is one thing I cannot contribute to any of these teams: the dedication and perseverance it will take to actually make it happen.

It takes a huge amount of organizational knowledge to move things forward at a large organization. You have to know who the decision makers are, and who the gatekeepers are for all of the important resources–this is knowledge you have to acquire by being embedded, and working within an organization for a very long time. You just can’t walk in the door and make sense of things within days, or weeks. You have to be able to work around schedules and personalities–getting to know people, and truly beginning to understand their motivations, their willingness to contribute, or whether they’ll actually decide to work against you. The culture of any enterprise organization will be the most important area of concern for you as you craft and evolve your API strategy.

I often wish I had the fortitude to work in a sustained capacity within a large organization. I’ve tried. It just doesn’t fit my view of the world. However, I am super thankful for those of you who do. I’m super happy to help you in your journey. I’m happy to help you think through what you are experiencing as part of my storytelling here on my blog–just email me your questions, thoughts, and concerns. I’m happy to anonymize as I work through my responses here on the blog; about 60% of the stories you read here are the anonymized result of emails I receive from y’all. I’m happy to vent for you, and use you as my muse. I’m also happy to help out in a more dedicated capacity, and provide my consulting assistance to your organization–it is what I do, and how I pay the bills. Let me know how I can help.


Understanding The Event-Driven API Infrastructure Opportunity That Exists Across The API Landscape

I am at the Kong Summit in San Francisco all day tomorrow. I’m going to be speaking about research into the event-driven architectural layers I’ve been mapping out across the API space. Looking for the opportunity to augment existing APIs with push technology like webhooks, and streaming technology like SSE, as well as pipe data in and out of Kafka, fill data lakes, and train machine learning models. I’ll be sharing what I’m finding from some of the more mature API providers when it comes to their investment in event-driven infrastructure, focusing in on Twilio, SendGrid, Stripe, Slack, and GitHub.

As I am profiling APIs for inclusion in my API Stack research, and in the API Gallery, I create an APIs.json, OpenAPI, Postman Collection(s), and sometimes an AsyncAPI definition for each API. All of my API catalogs, and API discovery collections use APIs.json + OpenAPI by default. One of the things I profile in each of my APIs.json is the usage of webhooks as part of API operations. You can see collections of them that I’ve published to the API Gallery, aggregating many different approaches in what I consider to be the 101 of event-driven architecture, built on top of existing request and response HTTP API infrastructure. Allowing me to better understand how people are doing webhooks, and to begin sketching out plans for a more event-driven approach to delivering resources, and managing activity on any platform that is scaling.

While studying APIs at this level you begin to see patterns across how providers are doing what they are doing, even amidst a lack of standards for things like webhooks. API providers emulate each other, it is how much of the API space has evolved in the last decade. You see patterns like how leading API providers are defining their event types. Naming, describing, and allowing API consumers to subscribe to a variety of events, and receive webhook pings or pushes of data, as well as other types of notifications. Helping establish a vocabulary for defining the most meaningful events that are occurring across an API platform, and then providing an event-driven framework for subscribing to have data pushed out when something occurs, as well as sustained API connections in the form of server-sent events (SSE), HTTP long polling, and other long running HTTP connections.
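To make the webhook pattern a little more concrete, here is a minimal sketch of a consumer-side receiver that dispatches on a provider-defined event type. The endpoint path, event names, and payload shape are all hypothetical placeholders, not any specific provider's approach.

```python
# A minimal sketch of a consumer-side webhook receiver that dispatches on a
# provider-defined event type. The route, event names, and payload shape are
# hypothetical -- real providers each define their own vocabulary.
from flask import Flask, request, abort

app = Flask(__name__)

# Handlers keyed by the event types we subscribed to on the provider's platform.
HANDLERS = {
    "invoice.paid": lambda payload: print("record payment", payload.get("id")),
    "invoice.failed": lambda payload: print("flag for follow-up", payload.get("id")),
}

@app.route("/webhooks/billing", methods=["POST"])
def receive_webhook():
    event = request.get_json(silent=True) or {}
    event_type = event.get("type")          # e.g. "invoice.paid"
    handler = HANDLERS.get(event_type)
    if handler is None:
        abort(400, "unsubscribed or unknown event type")
    handler(event.get("data", {}))
    return "", 204                          # acknowledge quickly, do heavy work async

if __name__ == "__main__":
    app.run(port=8080)
```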

As I said, webhooks are the 101 of event-driven technology, and once API providers evolve in their journey you begin to see investment in the 201 level solutions like SSE, WebSub, and more formal approaches to delivering resources as real time streams and publish / subscribe solutions. Then you see platforms begin to mature and evolve into other 301 and beyond courses, with AMQP, Kafka, and oftentimes other Apache projects. Sure, some API providers begin their journey here, but many API providers have to ease into the world of event-driven architecture, getting their feet wet with managing their request and response API infrastructure, and slowly evolving with webhooks. Then as API operations harden, mature, and become more easily managed, API providers can confidently begin evolving into using more sophisticated approaches to delivering data where it needs to be, when it is needed.

From what I’ve gathered, the more mature API providers, who are further along in their API journey, have invested in some key areas, which has allowed them to continue investing in other key ways:

  • Defined Resources - These API providers have their APIs well defined, with master planned designs for their suite of services, possessing machine readable definitions like OpenAPI, Postman Collections, and AsyncAPI.
  • Request / Response - Having fine-tuned their approach to delivering their HTTP based request and response structure, along with the infrastructure behind it being well defined.
  • Known Event Types - Having a handle on what is changing, and what the most important events are for API providers, as well as API consumers.
  • Push Technology - Having begun investing in webhooks, and other push technology to make sure their API infrastructure is a two-way street, and they can easily push data out based upon any event.
  • Query Language - Understanding the value of investment in a coherent querying strategy for their infrastructure that can work seamlessly with the defining, triggering, and overall management of event driven infrastructure.
  • Stream Technology - Having a solid understanding of what data changes most frequently, as well as the topics people are most interested in, and augmenting push technology with streaming subscriptions that consumers can tap into (see the sketch after this list).
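To ground the streaming piece referenced in the last bullet, here is a rough sketch of what tapping into a server-sent events (SSE) stream can look like from the consumer side. The stream URL and event names are placeholders I made up for illustration, not any particular provider's endpoints.

```python
# A rough sketch of consuming a server-sent events (SSE) stream over plain HTTP.
# The URL is a placeholder; real providers document their own streams and topics.
import requests

STREAM_URL = "https://api.example.com/v1/events/stream"  # hypothetical endpoint

def listen(url):
    # SSE is just a long-lived HTTP response; each event arrives as "field: value"
    # lines separated by a blank line, so requests alone is enough to parse it.
    with requests.get(url, stream=True, headers={"Accept": "text/event-stream"}) as resp:
        resp.raise_for_status()
        event, data = None, []
        for raw in resp.iter_lines(decode_unicode=True):
            if raw == "":                          # blank line terminates one event
                if data:
                    print(event or "message", "\n".join(data))
                event, data = None, []
            elif raw.startswith("event:"):
                event = raw[len("event:"):].strip()
            elif raw.startswith("data:"):
                data.append(raw[len("data:"):].strip())

if __name__ == "__main__":
    listen(STREAM_URL)
```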

At this point in most API providers’ journey, they are successfully operating a full suite of event-driven solutions that can be tapped internally, and externally with partners, and other 3rd party developers. They probably are already investing in Kafka, and other Apache projects, and getting more sophisticated with their event-driven API orchestration. Request and response API infrastructure is well documented with OpenAPI, and groups are looking at event-driven specifications like AsyncAPI to continue to ensure all resources, messages, events, topics, and other moving parts are well defined.

I’ll be showcasing the event-driven approaches of Twilio, SendGrid, Stripe, Slack, and GitHub at the Kong Summit tomorrow. I’ll also be looking at streaming approaches by Twitter, Slack, SalesForce, and Xignite. Which is just the tip of the event-driven API architecture opportunity I’m seeing across the existing API landscape. After mapping out several hundred API providers, and over 30K API paths using OpenAPI, and then augmenting and extending what is possible using AsyncAPI, you begin to see the event-driven opportunity that already exists out there. When you look at how API pioneers are investing in their event-driven approaches, it is easy to get a glimpse at what all API providers will be doing in 3-5 years, once they are further along in their API journey, and have continued to mature their approach to moving their valuable bits and bytes around using the web.


Talking Healthcare APIs With The CMS Blue Button API Team At #APIStrat In Nashville Next Week

We have the API evangelist from one of the most significant APIs out there today at #APIStrat in Nashville next week. Mark Scrimshire (@ekivemark), Blue Button Innovator and Developer Evangelist from NewWave Telecoms and Technologies will be on the main stage next Tuesday, September 25th 2018. Mark will be bringing his experience helping stand up the Blue Button API with the Centers for Medicare and Medicaid Services (CMS), and sharing the stories from the trenches while delivering this critical piece of health API infrastructure within the United States.

I consider the Blue Button API to be one of the most significant APIs out there right now, because of several key factors:

  • API Reach - An API that has the potential to reach 44 million Medicare beneficiaries, which is 15 percent of the U.S. population–that is a pretty significant audience to reach when it comes to the overall API conversation.
  • Fast Healthcare Interoperability Resources (FHIR) - The Blue Button API supports HL7 / FHIR, pushing the specification forward in the overall healthcare API interoperability discussion, making it extremely relevant to APIStrat and the OpenAPI Initiative (OAI).
  • Government API Blueprint - The way in which the Blue Button API team at CMS and USDS is delivering the API is providing a potential blueprint that other federal and state level agencies can follow when rolling out their own Medicare related APIs, but also any other critical infrastructure that this country depends on.

This is why I am always happy to support the Blue Button API team in any way I can, and I am very stoked to have them at APIStrat in Nashville next week. I’ve spent a lot of time working with, and studying what the Blue Button API team is up to, and I spoke at their developer conference hosted at the White House last month. They have some serious wisdom to share when it comes to delivering public APIs at this scale, making the keynote with Mark something you will not want to miss.

You can check out the schedule for APIStrat next week on the website. There are also still tickets available if you want to join in the conversation going on there Monday, Tuesday, and Wednesday next week. APIStrat is operated by the OpenAPI Initiative (OAI), making it the place where you will be having high level API conversations like this one. When it comes to APIs, and industry changing API specifications like FHIR, APIStrat is the place to be. I’ll see you all in Nashville next week, and I look forward to talking APIs with all y’all in the halls, and around town for APIStrat 2018.


Sadly Stack Exchange Blocks API Calls Being Made From Any Of Amazon's IP Blocks

I am developing an authentication and access layer for the API Gallery that I am building for Streamdata.io, while also federating it for usage as part of my API Stack research. In addition to building out these catalogs for API discovery purposes, I’m also developing a suite of tools that allow users to subscribe to different topics from popular sources like GitHub, Reddit, and Stack Overflow (Exchange). I’ve been busy adding one or two providers to my OAuth broker each week, until the other day I hit a snag with the Stack Exchange API.

I thought my Stack Exchange API OAuth flow had been working. It has been up for a few months, and I seem to remember authenticating against it before, but this weekend I began getting an error that my IP address was blocked. I was looking at log files trying to understand if I was making too many calls, or committing some other potential violation, but I couldn’t find anything. Eventually I emailed Stack Exchange to see what their guidance was, to which I got a prompt reply:

“Yes, we block all of Amazon’s AWS IP addresses due to the large amount of abuse that comes from their services. Unfortunately we cannot unblock those addresses at this time.”

Ok then. I guess that is that. I really don’t feel like setting up another server with another provider just so I can run an OAuth server from there. Or maybe I will have to, if I expect to offer a service that provides OAuth integration with Stack Exchange. It’s a pretty unfortunate situation that doesn’t make a whole lot of sense. I can understand adding another layer of white listing for developers, pushing them to add their IP address to their Stack Exchange API application, and pushing us to justify that our app should have access, but blacklisting an entire cloud provider from accessing your API is just dumb.

I am going to weigh my options, and explore what it will take to set up another server elsewhere. Maybe I will start setting up individual subdomains for each OAuth provider I add to the stack, so I can decouple them, and host them on another platform, in another region. This is one of those road blocks you encounter doing APIs that just doesn’t make a whole lot of sense, and yet you still have to find a work around–you can’t just give in, despite API providers being so heavy handed, and not considering the impact of their moves on their consumers. I’m guessing in the end, the Stack Exchange API doesn’t fit into their wider business model, which is something that allows blind spots like this to form, and continue.
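For what it is worth, here is a minimal sketch of the decoupling idea I describe above, keeping a simple map of which host each OAuth provider's callback runs on, so a provider that blocks a cloud's IP ranges can be pinned to infrastructure hosted somewhere else. All of the hostnames are hypothetical placeholders.

```python
# A minimal sketch of decoupling OAuth providers onto their own callback hosts,
# so a provider that blocks one cloud's IP ranges (as Stack Exchange does with
# AWS) can be served from infrastructure hosted elsewhere. Hostnames are
# hypothetical placeholders.
OAUTH_HOSTS = {
    "github": "https://github-oauth.example.com",                # fine to run on AWS
    "reddit": "https://reddit-oauth.example.com",                # fine to run on AWS
    "stackexchange": "https://stackexchange-oauth.example.net",  # hosted off AWS
}

def callback_url(provider: str) -> str:
    """Build the redirect/callback URL registered with each provider's OAuth app."""
    try:
        host = OAUTH_HOSTS[provider]
    except KeyError:
        raise ValueError(f"no OAuth host configured for provider: {provider}")
    return f"{host}/oauth/callback/{provider}"

if __name__ == "__main__":
    print(callback_url("stackexchange"))
```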


Justifying My Existence In Your API Sales And Marketing Funnel

I feel like I’m regularly having to advocate for my existence, and the existence of developers who are like me, within the sales and marketing funnel for many APIs. I sign up for a lot of APIs, and have the pleasure of enjoying a wide variety of on-boarding processes for APIs. Many APIs I have no problem signing up for, on-boarding with, and beginning to make calls against, while for others I have to justify my existence within their API sales and marketing funnel. Don’t get me wrong, I’m not saying that I shouldn’t be expected to justify my existence, it is just that many API providers are setup to immediately discourage, create friction for, and dismiss my class of API integrator–the kind that doesn’t fit neatly into the shiny big money integration you have defined at the bottom of your funnel.

I get that we all need to make money. I have to. I’m actually in the business of helping you make money. I’m just saying that you are missing out on a significant amount of opportunity if you only focus on what comes out the other side of your funnel, and discount the nutrients developers like me can bring to your funnel ecosystem. I’m guessing that my little domain apievangelist.com doesn’t return the deal size scope you are looking for, but I think you are putting too much trust into the numbers provided to you by your business intelligence provider. I get that you are hyper focused on making the big deals, but you might be leaving a big deal on the table by shutting out small fish, who might have oversized influence within their organization, government agency, or within an industry. Your business intelligence is focusing on the knowns, and doesn’t seem very open to considering the unknowns.

As the API Evangelist I have an audience. I’ve been in business since 2010, so I’ve built up an audience of enterprise folks who read what I write, and listen to “some” of what I say. I know people like me within the federal government, within city government, and across the enterprise. Over half the people I know who work within the enterprise, helping influence API decisions, are also kicking the tires of APIs at night. Developers like us do not always have a straightforward project; we are just learning, understanding, and connecting the dots. We don’t always have a ready to go deal in the pipeline, and are usually doing some homework so that we can go sell the concept to decision makers. Make sure your funnel doesn’t keep us out, run us away, or remove channels for our voice to be heard.

In a world where we focus only on the big deals, and focus on scaling and automating the operation of platforms, we run the risk of losing ourselves. If you are only interested in landing those big customers, and achieving the exit you desire, I understand. I am not your target audience. I will move on. It also means that I won’t be telling any stories about what you are doing, building any prototypes, and generally amplifying what you are doing on social media, and across the media landscape. Providing many of the nutrients you will need to land some of the deals you are looking to get, generating the internal and external buzz needed to influence the decision makers. Providing real world use cases of why your API-driven solution is the one an enterprise group should be investing in. Make sure you aren’t locking us out of your platform, and you are investing the energy into getting to know your API consumers, more about what their intentions are, and how it might fit into your larger API strategy–if you have one.


I Am Needing Some Evidence Of How APIs Can Make An Impact In Government

Eric Horesnyi (@EricHoresnyi), the CEO of Streamdata.io, and I were on a call with a group of people who are moving forward the API conversation across Europe, with the assistance of the EU. The project has asked us to assist them in the discovery of more data and evidence of how APIs are making an impact in how government operates within the European Union, but also elsewhere in the world. Aggregating as much evidence as possible to help influence the EU API strategy, and learn from what is already being done. I’m heading to Italy next month to present to the group, and participate in conversations with other API practitioners and evangelists, so I wanted to start my usual amount of storytelling here on the blog to solicit contributions from my audience about what they are seeing.

I am looking for some help from my readers who work at city, county, state, and federal agencies, or at the private entities who help them with their API efforts. I am looking for official, validated, on the record examples of APIs making a positive impact on how government serves its constituents. Quantifiable examples of how a government agency has published a private, partner, or public API, and it helped the agency better meet its mission. I’m looking for anything mundane, as well as the unique and interesting, with tangible evidence to back it all up. Like the number of developers, partners, apps, cost savings, efficiencies, or any other positive effect. Demonstrating that APIs, when done right, can move the conversation forward at a government agency. For this round, I’m going to need first hand accounts, because I will need to help organize the data, and work with this group to submit it to the European Union as part of their wider effort.

This is something I’ve been doing loosely since 2012, but I need to start getting more official about how I gather the stories, and pull together actual evidence, going beyond just my commentary from the outside in. I’ll be reaching out to all my people in government, asking for examples. If you know of anything, please email me at [email protected] with your thoughts. We have an opportunity to influence the regulatory stance in Europe when it comes to government putting APIs to work, which will be something that washes back upon the shores of the United States during each wave of API regulations to come out of the EU. My casual storytelling about how government APIs are making change on my blog has worked for the last five years, but moving forward we are going to need to get better at gathering, documenting, and sharing examples of how APIs are working across government. Helping establish more concrete blueprints for how to do all of this properly, and ensuring that we aren’t reinventing the wheel when it comes to APIs in government.

If you know someone working on APIs at any level of government, feel free to share a link to my story, or send an introduction via [email protected]. I’d love to help share the story, and evidence of the impact they are making with APIs. I appreciate all your support in making this happen–it is something I’ll put back out to the community once we’ve assembled it and talked through it in Italy next month.


Being Open Is More About Being Open So Someone Can Extract Value Than Open Being About Any Shared Value

One of the most important lessons I’ve learned in the last eight years is that when people are insistent about things being open, in both accessibility and cost, it is often more about things remaining open for them to freely (license-free) extract value from, than it is ever about any shared or reciprocal value being generated. I’ve fought many a battle on the front lines of “open”, leaving me pretty skeptical when anyone is advocating for open, and forcing me to be even more critical of my own positions as the API Evangelist, and the bullshit I peddle.

In my opinion, ANYONE wielding the term open should be scrutinized for insights into their motivations–me included. I’ve spent eight years operating on the front line of both the open data, and the open API movements, and unless you are coming at it from the position of a government entity, or from a social justice frame of mind, you are probably wanting open so that you can extract value from whatever is being opened. With many different shades of intent existing when it comes to actually contributing any value back, and supporting the ecosystem around whatever is actually being opened.

I ran with the open data dogs from 2008 through 2015 (still howl and bark), pushing for city, county, state, and federal government to open up data. I’ve witnessed how everyone wants it opened, sustained, maintained, and supported, but does not want to give anything back. Google doesn’t care about the health of local transit, as long as the data gets updated in Google Maps. Almost every open data activist, and data focused startup I’ve worked with has high expectations for what government should be required to do, and very low expectations regarding what should be expected of them when it comes to paying for commercial access, sharing enhancements and enrichments, providing access to usage analytics, and being observable and open to sharing access with end-users of this open data. Libertarian capitalism is well designed to take, and not give back–yet be actively encouraging open.

I deal with companies, organizations, and institutions every day who want me to be more open with my work. They are more than happy to go along for the ride when it comes to the momentum built up from open in-person gatherings, Meetups, and conferences. Always happy for me to be open to syndicating data, content, and research. All while working as hard as possible to extract as much value as they can, and not give anything back. There are many, many, many companies who have benefitted from the open API work that I, and other evangelists in the space do on a regular basis, without ever considering if they should support them, or give back. I regularly witness partnership scenarios across all of the API platforms I monitor, where the larger more proprietary and successful partner extracts value from the smaller, more open and less proven partner. I get that some of this is just the way things are, but much of it is about larger, well-resourced, and more closed groups just taking advantage of smaller, less-resourced, and more open groups.

I have visibility into a number of API platforms that are the targets of many unscrupulous API consumers who sign up for multiple accounts, do not actively communicate with platform owners, and are just looking for a free hand out at every turn. Making it very difficult to be open, and often times something that can also be very costly to maintain, sustain, and support. Open isn’t FREE! Publicly available data, content, media, and other resources cost money to operate. The anti-competitive practices of large tech giants have set the price for common digital resources so low, for so long, that it has changed behaviors and set unrealistic expectations as the default. Resulting in some very badly behaved API ecosystem players, and ecosystems that encourage and incentivize bad behavior within specific API communities, but also something that spreads from provider to provider. Giving APIs a bad name.

When I come across people being vocal about some digital resource being open, I immediately begin conducting a little due diligence on who they are. Their motivations will vary depending on where they come from, and while there are no constants, I can usually tell a lot about someone by whether they come from a startup ecosystem, the enterprise, government, venture capital, or other dimensions of our reality that the web has reached into recently. My self-appointed role isn’t just about teaching people to be more “open” with their digital assets, it is more about teaching people to be more aware and in control over their digital assets. Because there are a lot of wolves in sheep’s clothing out there, trying to convince you that “open” is an essential part of your “digital transformation”, and showcasing all the amazing things that will happen when you are more “open”. When in reality they are just interested in you being more open so that they can get their grubby hands on your digital resources, then move on down the road to the next sucker who will fall for their “open” promises.


Providing Minimum Viable API Documentation Blueprints To Help Guide Your API Developers

I was taking a look at the Department of Veterans Affairs (VA) API documentation for the VA Facilities API, intending to provide some feedback on the API implementation. The API itself is pretty sound, and I don’t have any feedback without having actually integrated it into an application, but following on the heels of my previous story about how we get API developers to follow minimum viable API documentation guidance, I had lots of feedback on the overall delivery of the documentation for the VA Facilities API, helping improve on what they have there.

Provide A Working Example of Minimum Viable API Documentation
One of the ways that you help incentivize your API developers to deliver minimum viable API documentation across their API implementations is to do as much of the work for them as you can, and provide them with forkable, downloadable, clonable API documentation that meets the minimum viable requirements. To help illustrate what I’m talking about I created a base GitHub blueprint for what I’d suggest as minimum viable API documentation at the VA. Providing something the VA can consider, and borrow from as they are developing their own strategy for ensuring all APIs are consistently documented.

Covering The Bare Essentials That Should Exist For All APIs
I wanted to make sure each API had the bare essentials, so I took what the VA has already done over at developer.va.gov, and republished it as a static single page application that runs 100% on GitHub pages, and hosted in a GitHub repository–providing the following essential building blocks for APIs at the VA:

  • Landing Page - Giving any API a single landing page that contains everything you need to know about working with that API. The landing page can be hosted as its own repo and subdomain, and then linked up with other APIs using a facade page, or it could be published with many other APIs in a single repository.
  • Interactive Documentation - Providing interactive, OpenAPI-driven API documentation using Swagger UI. Providing a usable, and up to date version of the documentation that developers can use to understand what the API does.
  • OpenAPI Definition - Making sure the OpenAPI behind the documentation is front and center, and easily downloaded for use in other tools and services.
  • Postman Collection - Providing a Postman Collection for the API, and offering it as more of a transactional alternative to the OpenAPI.

That covers the bases for the documentation that EVERY API should have. Making API documentation available at a single URL, with a human viewable landing page, complete with interactive documentation. While also making sure that there are two machine readable API definitions available for an API, allowing the API documentation to be more portable, and usable in other tooling and services–letting developers use the API definitions as part of other stops along the API lifecycle.

Bringing In Some Other Essential API Documentation Elements
Beyond the landing page, interactive documentation, OpenAPI, and Postman Collection, I wanted to suggest some other building blocks that would really make sure API developers at the VA are properly documenting, communicating, as well as supporting their APIs. To go beyond the bare bones API documentation, I wanted to suggest a handful of other elements, as well as incorporate some building blocks the VA already had on the API documentation landing page for the VA Facilities API.

  • Authentication - Providing an overview of authenticating with the API using the header apikey.
  • Response Formats - They already had a listing of media types available for the API.
  • Support - Ensuring that an API has at least one support channel, if not multiple channels.
  • Road Map - Making sure there is a road map providing insights into what is planned for an API.
  • References - They already had a listing of references, which I expanded upon here.

I tried not to go to town adding all the building blocks I consider to be essential, and just contribute a couple of other basic items. I feel support and road map are essential and cannot be ignored, and should always be part of the minimum viable API documentation requirements. My biggest frustrations with APIs are 1) Out of date documentation, 2) No support, and 3) Not knowing what the future holds. I’d say that I’m also increasingly frustrated when I can’t get at the OpenAPI for an API, or at least find a Postman Collection for the API. Machine readable definitions moved into the essential category for me a couple years ago–even though I know some folks don’t feel the same.

A Self Contained API Documentation Blueprint For Reuse
To create the minimum viable API documentation blueprint demo for the VA, I took the HTML template from developer.va.gov, and deployed it as a static Jekyll website that runs on GitHub Pages. The landing page for the documentation is a single index.html page in the root of the site, leveraging Jekyll for the user interface, and driving all the content on the page from the central config.yml for the API project. Providing a YAML checklist that API developers can follow when publishing their own documentation, helping do a lot of the heavy lifting for developers. All they have to do is update the OpenAPI for the API and add their own data and content to the config.yml to update the landing page for the API. Providing a self-contained set of API documentation that developers can fork, download, and reuse as part of their work, delivering consistent API documentation across teams.
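As a rough sketch of how that checklist could be enforced, the snippet below loads a Jekyll config.yml and verifies the essential building blocks are filled in before documentation gets published. The key names are assumptions for illustration, not the actual fields used in the demo repository.

```python
# A rough sketch of enforcing the minimum viable documentation checklist: load the
# Jekyll config that drives the landing page and verify the essential building
# blocks are present. The key names are hypothetical, not the demo repo's fields.
import sys
import yaml  # pip install pyyaml

REQUIRED_KEYS = [
    "title",            # landing page title
    "description",      # what the API does
    "openapi_url",      # link to the OpenAPI definition
    "postman_url",      # link to the Postman Collection
    "support_channel",  # at least one support channel
    "road_map",         # where the API is headed
]

def check_config(path="_config.yml"):
    with open(path) as handle:
        config = yaml.safe_load(handle) or {}
    missing = [key for key in REQUIRED_KEYS if not config.get(key)]
    if missing:
        print("Documentation is missing required building blocks:", ", ".join(missing))
        sys.exit(1)
    print("Minimum viable API documentation checklist satisfied.")

if __name__ == "__main__":
    check_config()
```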

The demo API documentation blueprint could use some more polishing and comments. I will keep adding to it, and evolving it as I have time. I just wanted to share more of my thoughts about the approach the VA could take to provide functional API documentation guidance, in the form of a functional demo. Providing them with something they could fork, evolve, and polish on their own, turning it into a more solid, viable solution for documentation at the federal agency. Helping evolve how they deliver API documentation across the agency, and ensuring that they can properly scale the delivery of APIs across teams and vendors. While also helping maximize how they leverage GitHub as part of their API lifecycle, setting the base for API documentation in a way that ensures it can also be used as part of a build pipeline to deploy APIs, as well as manage, test, secure, and deliver along almost every stop of a modern API lifecycle.

The website for this project is available at: https://va-working.github.io/api-documentation/ You can access the GitHub repository at: https://github.com/va-working/api-documentation


Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, APIs.io, working to get them published in the Postman Network, and syndicated as part of my wider work as an OpenAPI member. However, even beyond my own personal needs for API providers to have a machine readable definition of their API, and helping them get more syndication and exposure for their API, having a definition present significantly reduces friction when on-boarding with their APIs, at almost every stop along a developer’s API integration journey.

One of the API providers I reached out to recently responded with this: “I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/”. Ok. First, I spent over 30 minutes there just the other day. Learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I’m guessing that the engineer in question doesn’t know what an OpenAPI or Postman Collection is, they do not understand the impact these specifications are having on the wider API ecosystem, and lastly, I’m guessing they don’t have any idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in their overall journey. Demonstrating to me that they have a long ways to go when it comes to understanding the wider API landscape in which they are operating, and they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn’t that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don’t use a gmail.com address to communicate, I am using apievangelist.com, and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn’t have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn’t care about who I am, and about what I need. This is a sign of what the future will hold if I actually bake your API into my applications–I don’t need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient of the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain or Twitter handle of your consumers to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren’t aware of the wider API community, and the impact OpenAPI, and Postman are having on the on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren’t going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer I won’t always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key and begin making API calls in seconds, or minutes. Make learning about your API a hands on experience, something I’m not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you’d know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, letting the OpenAPI and Postman Collection act as the integration contract (see the sketch after this list).
  • Discovery - Please don’t expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I’m already using, and frequent as part of my daily activities. If you are in my Postman Client, I’m more likely to remember that you exist in my busy world.
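Here is the sketch referenced above, showing the onboarding shortcut a machine readable definition enables: load an OpenAPI document, find the first GET operation, and make a live call. It assumes an OpenAPI 3.x definition with a servers block, and a simple apikey header scheme, both of which are illustrative assumptions rather than any specific provider's setup.

```python
# A minimal sketch of the onboarding shortcut a machine readable definition enables:
# load an OpenAPI document, pick the first GET operation, and make a live call.
# The file name and the apikey header are assumptions for illustration only.
import requests
import yaml  # pip install pyyaml

def first_get_call(openapi_path="openapi.yaml", api_key="YOUR-KEY"):
    with open(openapi_path) as handle:
        spec = yaml.safe_load(handle)
    base_url = spec["servers"][0]["url"]            # assumes an OpenAPI 3.x servers block
    for path, operations in spec.get("paths", {}).items():
        if "get" in operations:
            url = base_url.rstrip("/") + path
            response = requests.get(url, headers={"apikey": api_key})
            print(response.status_code, url)
            return response
    print("No GET operations found in the definition.")

if __name__ == "__main__":
    first_get_call()
```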

These are just a few of the basics of why this type of response to my question was inadequate, and why you’d want to have OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I’m developing are out of the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can’t onboard with your API without friction, and I can’t tell if there is anyone home, and willing to give me the time of day when I have questions, I’m going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I’m not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed a single link to an OpenAPI or Postman Collection in response to my email, before I moved on!


Stack Exchange Has An API That Returns The Details For All Of Your Access Tokens

I’m a big fan of helpful authentication features, where API providers make it easier to manage our increasingly hellish environment, application, token, and other management duties of the average API integrator. To help me better manage my API apps, and the OAuth tokens I have in play, I am trying to document all the sensible approaches I come across while putting different APIs to work, and scouring the API landscape for stories.

One example of this in action is out of the Stack Exchange API, where you can find an API endpoint for accessing the details of your OAuth tokens, as well as invalidating and de-authorizing them. A pretty useful API endpoint when you are integrating with APIs, and find yourself having to manage many tokens across many APIs, apps, and users. Helping you check in on the overall health and activity of your tokens, revoking, renewing, and making sure they work when you need them the most.
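Here is a quick sketch of what that looks like in practice, checking on a token and then invalidating it. The routes follow the Stack Exchange API 2.x documentation as I understand it, so verify the current version and parameters against their docs before depending on this.

```python
# A quick sketch of inspecting, and then invalidating, a Stack Exchange access token.
# Routes follow the Stack Exchange API 2.x docs as I understand them -- check the
# current API version and parameters against their documentation before relying on it.
import requests

BASE = "https://api.stackexchange.com/2.2"  # 2.2 was current when this was written

def inspect_token(token):
    # Returns expiry and scope details for the token(s) passed in.
    return requests.get(f"{BASE}/access-tokens/{token}").json()

def invalidate_token(token):
    # Invalidates the token so it can no longer be used to make calls.
    return requests.get(f"{BASE}/access-tokens/{token}/invalidate").json()

if __name__ == "__main__":
    token = "YOUR-ACCESS-TOKEN"
    print(inspect_token(token))
    print(invalidate_token(token))
```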

It is helpful for me to write about the helpful authentication practices I come across while using APIs. It helps me aggregate them into a nice list of features API providers should consider supporting. If I don’t write about it here on the blog, then it doesn’t exist in my research, and my future storytelling. My goal is to help spread the knowledge about what is working across the sector, so that more API providers will adopt these practices along the way. You know what is better than Stack Exchange providing an API to manage your access tokens? All API providers providing you with an API to manage your access tokens!

These stories, and any other relevant links I’ve curated will be published to my API authentication research. Eventually I’ll roll all the features I’ve aggregated into either a long form blog post, or white paper I’ll publish and put out with the assistance of one of my partners. I’m interested in the authentication portion of this, but also I’m looking to begin defining processes for helping us better manage our API integration environments, application ids, secrets, tokens, and other goodies we depend on to secure our consumption of APIs across many different providers. It is something that will continue to expand, multiply, and grow more complex with each additional API we add to our growing list of dependencies.


Some Ideas For API Discovery Collections That Students Can Use

This is a topic I’ve wanted to set in motion for some time now. I had a new university professor cite my work again as part of one of their courses recently, something that floated this concept to the top of the pile again–API discovery collections meant just for students. Helping K-12, community college, and university students quickly understand where to find the most relevant APIs for whatever they are working on. Providing human, but also machine readable collections that can help jumpstart their API education.

I use the API discovery format APIs.json to profile individual, as well as collections of APIs. I’m going to kickstart a couple of project repos, helping me flesh out a handful of interesting collections that might help students better understand the world of APIs:

  • Social - The popular social APIs like Twitter, Facebook, Instagram, and others.
  • Messaging - The main messaging APIs like Slack, Facebook, Twitter, Telegram, and others.
  • Rock Star - The cool APIs like Twitter, Stripe, Twilio, YouTube, and others.
  • Amazon Stack - The core AWS Stack like EC2, S3, RDS, DynamoDB, Lambda, and others.
  • Backend Stack - The essential App stack like AWS S3, Twilio, Flickr, YouTube, and others.

I am going to start there. I am trying to provide some simple, usable collections of relevant APIs for students who are just getting started. If there are any other categories, or stacks of APIs you think would be relevant for students to learn from, I’d love to hear your thoughts. I’ve done a lot of writing about educational and university based APIs, but I’ve only lightly touched upon what APIs students should be learning about in the classroom.

Providing ready to go API collections will be an important aspect of implementing any API training and curriculum effort. Having the technical details of each API readily available, as well as the less technical aspects like signing up, pricing, terms of service, privacy policies, and other relevant building blocks, should also be front and center. I’ll get to work on these five API discovery collections for students, getting the title, description, and list of each API stack published as a README, and then publishing the machine, and human readable details for the technology, business, and politics of using APIs.
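To give a feel for what one of these collections might look like, here is a rough sketch that writes out a trimmed down, APIs.json style document for the social collection. The entries are illustrative, and the full APIs.json format includes more fields than I am showing here.

```python
# A rough sketch of a student-facing API discovery collection, expressed as a
# trimmed down APIs.json style document and written out as JSON. The entries and
# version number are illustrative; the full format carries more fields than this.
import json

social_collection = {
    "name": "Social APIs for Students",
    "description": "A starter collection of popular social APIs for the classroom.",
    "specificationVersion": "0.14",
    "apis": [
        {"name": "Twitter",
         "humanURL": "https://developer.twitter.com",
         "baseURL": "https://api.twitter.com"},
        {"name": "Facebook",
         "humanURL": "https://developers.facebook.com",
         "baseURL": "https://graph.facebook.com"},
    ],
}

with open("apis.json", "w") as handle:
    json.dump(social_collection, handle, indent=2)
```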


The Path To Production For Department of Veterans Affairs (VA) API Applications

This post is part of my ongoing review of the Department of Veterans Affairs (VA) developer portal and API presence, moving on to take a closer look at their path to production process, and provide some feedback on how the agency can continue to refine the information they provide to their new developers. Helping map out the on-boarding process for any new developer, ensuring they are fully informed about what it will take to develop an application on top of VA APIs, and move those application(s) from a development state to a production environment, actually serving veterans.

Beginning with the VA’s base path to production template on GitHub, then pulling in some elements I found across the other APIs they have published to developer.va.gov, and finishing off with some ideas of my own, I shifted the outline for the path to production to look something like this:

  • Background - Keeping the background of the VA API program.
  • [API Overview] - Any information relevant to specific API(s).
  • Applications - One right now, but eventually several applications, SDKs, and samples.
  • Documentation - The link, or embedded access to the API documentation, OpenAPI definition, and Postman Collection.
  • Authentication - Overview of how to authenticate with VA APIs.
  • Development Access - Provide an overview of signing up for development access.
  • Developer Review - What is needed to become a developer.
    • Individual - Name, email, and phone.
    • Business - Name, URL.
    • Location - In country, city, and state.
  • Application Review - What is needed to have an application(s).
    • Terms of Service - In alignment with platform TOS.
    • Privacy Policy - In alignment with platform TOS.
    • Rate Limits - Aware of the rate limits that are imposed.
  • Production Access - What happens once you have production access.
  • Support & Engagement - Using support, and expected levels of engagement.
  • Service Level Agreement - Platform working to meet an SLA governing engagement.
  • Monthly Review - Providing monthly reviews of access and usage on the platform.
  • Annual Audits - Annual audits of access, and usage, with developer and application reviews.

I wanted to keep much of the content that the VA already had up there, but I also wanted to reorganize things a little bit, and make some suggestions for what might be next. Resulting in a path to production section that might look a little more like this.


Department of Veterans Affairs (VA) API Path To Production

Background

The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem. We are in the alpha stage of the project, wherein we provide one API that enables parties external to the VA to submit VBA forms and supporting documentation via PDF on behalf of Veterans.

[API Specific Overview]

Applications

Take a look at the sample code and documentation we have on GitHub at https://github.com/department-of-veterans-affairs/vets-api-clients. We will be developing more starter applications, developing open source SDKs and code samples, while also showcasing the work of other API developers in the near future–check back often.

Documentation

Spend some time with the documentation for the API(s) you are looking to use. Make sure the VA has the resources you will need to make your application work, before you sign up for a developer account, and submit your application for review.

Authentication

VA uses token-based authentication. Clients must provide a token with each HTTP request as a header called apiKey. This token is issued by VA. To receive a developer token to access this API in a test environment, please request access.
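For example, a development request might look something like the following sketch. The path shown is a placeholder, not an actual VA API path.

```python
# A minimal example of calling a VA API with the apiKey header described above.
# The path below is a placeholder; use the path for the API you have access to.
import requests

response = requests.get(
    "https://dev-api.vets.gov/services/example/v0/resource",  # placeholder path
    headers={"apiKey": "YOUR-DEVELOPMENT-TOKEN"},
)
print(response.status_code)
print(response.json())
```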

Development Access

All new developers must sign up for development access to the VA API platform, providing a minimal amount of information about yourself, the business or organization you represent, where you operate in the United States, and the application you will be developing.

Developer Review

You will provide the VA with details about yourself, the business you work for, and your location. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Individual Information
    • Name - Your name.
    • Email - Your email address.
    • Phone - Your phone number.
  • Business Information
    • Name - Your business or organizational name.
    • Website - Your business or organization website.
  • Location Information
    • City - The city where you operate.
    • State - The state where you operate.
    • Country - Acknowledge that you operate within the US.

We will take a look at all your details, then verify you and your business or organization as quickly as possible. Once you hear back from us via email, you will either be declined, or we will send you a Client ID and Secret for accessing the VA API development environment. When you are ready, you can submit your application for review by the VA team.

Application Review

You will provide us with details about the application you are developing, helping us understand the type of application you will be providing to veterans. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Name - The name of your application.
  • Details - Details about your application.
  • Terms of Service - Your application is in alignment with our terms of service.
  • Privacy Policy - Your application is in alignment with our privacy policy.
  • Rate Limits - You are aware of the rate limits imposed on your application.
  • Working Demo - A working demo of the application will be needed for the review.
  • Code Review - A code review will be conducted when your application goes to production.

We will take a look at all your details, and contact you about scheduling a code review. Once all questions about your application are answered, and a code review has been conducted, you will be notified letting you know if your application is accepted or not.

Production Access

Once approved for production access, you will receive an email from the VA API team notifying you of your application’s status. You will receive a new Client ID and Secret for your application to use in production, allowing you to use the base URL api.vets.gov instead of dev-api.vets.gov, and begin accessing live data.

Support & Engagement

The VA will be providing support using GitHub issues, and via email. All developers will be required to engage through these channels with VA API operations in order to maintain access to VA APIs.

Service Level Agreement

The VA will be providing a service level agreement for each of the APIs we provide, committing to a certain quality of service, support, and communication around the APIs you will be integrating your applications with.

Monthly Review

Each month you will receive an email from the VA regarding your access and usage, providing a summary of our engagement. A response is required to maintain an active status as a VA developer and application. After 90 days of no response, all developer or production application keys will be revoked, until contact is made. Keeping all applications active, with a responsive administrator actively managing things.

Annual Audits

Each year you will receive an email from a VA API audit representative, reviewing your access and usage, and conducting a fresh developer review, as well as reviewing your applications, access tokens, and end-usage. Ensuring that all applications operating in a production environment continually meet the expected security, privacy, support, and operational standards.


It Is Not Just A Path, But A Journey
I’m trying to shift the narrative for the path to production into being a more complete picture of the journey. What is expected of the developers, their applications, as well as setting the table of what can be expected of the VA. I don’t expect the VA to bite on all of these suggestions, but I can’t help but put them in there when they are relevant, and I have the opportunity ;-)

Decouple Developer And Application(s)
Part of the reason I separated the developer review from the application review is so that a developer could sign up and immediately get a key to kick the tires and begin developing. When ready, which many will never be, they can submit an application for potential production access. Developers might end up having multiple applications, so if we can decouple them early on, and allow all developers to have a default application in the development environment, but also be able to have multiple applications in a production environment, I feel things will be more scalable.

Not Forgetting The Path After Production
I wanted to make sure and put in some sort of process that would help ensure both the VA, and developers are investing in ongoing management of their engagement. Ensuring light monthly reviews of ALL applications using the APIs, and pushing for developers and the VA to stay engaged. While also making sure all developers and applications get annual reviews, preventing what happened with the Cambridge Analytica / Facebook situation, where a malicious application gets access to such a large amount of data, without any regular review. Making sure the VA isn’t forgetting about applications once they are in a production state. Pushing this to be more than just about achieving production status with an application, and also continuing to ensure it is serving end-users, the veterans.

Serving Veterans With Your Application
To close, I’d say that calling this a path to production doesn’t do it justice. It should be a guide to being able to serve a veteran with your application. Acknowledging that it will take a significant amount of development before your application will be ready, but also that the VA will work to review your developer account, and your application, and ensure that VA APIs, and the applications that depend on them, will operate as expected, and always be in service of the veteran. Something that will require a certain level of rigor when it comes to the development, certification, and support of applications across the VA API ecosystem.


An API Value Generation Funnel With Metrics

I’ve had several folks asking me to articulate my vision of an API centric “sales” funnel, which technically is out of my wheelhouse in the sales and marketing area, but since I do have lots of opinions on what a funnel should look like for an API platform, I thought I’d take a crack at it. To help articulate what is in my head I wanted to craft a narrative, as well as a visual, to accompany how I like to imagine a value generation funnel for any API platform.

I envision an API-driven value generation funnel that can be tipped upside down, over and over, like an hourglass, generating value as it repeatedly pushes API activity through the center, driven by a healthy ecosystem of developers, applications, and end-users putting applications to work. Providing a way to generate awareness and engagement with any API platform, while also ensuring a safe, reliable, and secure ecosystem of applications that encourage end-user adoption, engagement, and loyalty–further expanding on the potential for developers to continue developing new applications, and enhancing their applications to better serve end-users.

I am seeing things in ten separate layers right now, something I’ll keep shifting and adjusting in future iterations, but I just wanted to get a draft funnel out the door:

  • Layer One - Getting folks in the top of the funnel.
    • Awareness - Making people aware of the APIs that are available.
    • Engagement - Getting people engaged with the platform in some way.
    • Conversation - Encouraging folks to be part of the conversation.
    • Participation - Getting developers participating on a regular basis.
  • Layer Two
    • Developers - Getting developers signing up and creating accounts.
    • Applications - Getting developers signing up and creating applications.
  • Layer Three
    • Sandbox Activity - Developers being active within the sandbox environment.
  • Layer Four
    • Certified Developers - Certifying developers in some way to know who they are.
    • Certified Application - Certifying applications in some way to ensure quality.
  • Layer Five
    • Production Activity - Incentivizing production applications to be as active as possible.
  • Layer Six
    • Value Generation (IN / OUT) - Driving the intended behavior from all applications.
  • Layer Seven
    • Operational Activity - Doing what it takes internally to properly support applications.
  • Layer Eight
    • Audit Developers - Make sure there is always a known developer behind the application.
    • Audit Applications - Ensure the quality of each application with regular audits.
  • Layer Nine
    • Showcase Developers - Showcase developers as part of your wider partner strategy.
    • Showcase Applications - Showcase and push for application usage across an organization.
  • Layer Ten
    • Loyalty - Develop loyal users by delivering the applications that users are needing.
    • End-Users - Drive end-user growth by providing the applications end-users need.
    • Engagement - Push for deeper engagement with end-users, and the applications they use.
    • End-Usage - Incentivize the publishing and consumption of all platform resources.

I’m envisioning a funnel that you can turn on its head over and over to generate momentum, and kinetic energy, with the right amount of investment–the narrative for this will work in either direction. Resulting in a two-sided funnel, with both sides working in concert to generate value in the middle.

To go along with this API value generation funnel, I’m picturing the following metrics being applied to quantify what is going on across the platform, and the ten layers:

  • Layer One - Unique visitors, page views, RSS subscribers, blog comments, tweets, GitHub follows, forks, and likes.
  • Layer Two - New developers who are signing up, and adding new applications to the platform.
  • Layer Three - API calls on sandbox API resources, and overall activity in the development environment.
  • Layer Four - New certified developers and applications that have been reviewed and given production access.
  • Layer Five - API calls for production API resources, understanding the overall activity across the platform.
  • Layer Six - GET, POST, PUT, DELETE on different types of resources, in different types of service plans, at different rates.
  • Layer Seven - Support requests, communication, and other new resources that have occurred in support of operations.
  • Layer Eight - Number of developers and applications audited on a regular basis ensuring quality of application catalog.
  • Layer Nine - Number of new and existing developers and applications being showcased as part of platform operations.
  • Layer Ten - Number of end-users, sessions, page views, and other activity across the applications being delivered.
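
To make this more than just an outline, here is a minimal sketch of how these layers and metrics could be tracked as data, so the funnel can be measured month over month. The layer names mirror the outline above, and every number is an invented placeholder, not a real platform metric.

```python
# A minimal sketch of tracking the funnel as data. Layer names mirror the outline above,
# and every number is an invented placeholder, not a real platform metric.
funnel_metrics = {
    "layer_one":   {"unique_visitors": 5200, "page_views": 14800, "rss_subscribers": 310},
    "layer_two":   {"new_developers": 42, "new_applications": 18},
    "layer_three": {"sandbox_api_calls": 125000},
    "layer_four":  {"certified_developers": 9, "certified_applications": 6},
    "layer_five":  {"production_api_calls": 840000},
    "layer_six":   {"get_calls": 610000, "post_calls": 160000, "put_calls": 48000, "delete_calls": 22000},
    "layer_seven": {"support_requests": 35},
    "layer_eight": {"developers_audited": 12, "applications_audited": 7},
    "layer_nine":  {"developers_showcased": 4, "applications_showcased": 3},
    "layer_ten":   {"end_users": 16500, "sessions": 52000},
}

def layer_growth(current, previous):
    """Compare this month's funnel metrics to last month's, layer by layer."""
    return {
        layer: {
            metric: value - previous.get(layer, {}).get(metric, 0)
            for metric, value in metrics.items()
        }
        for layer, metrics in current.items()
    }
```

Tracking each layer side by side like this is what lets you see whether momentum is building across the whole funnel, or whether value generation is stalling within a single layer.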

Providing a stack of metrics you can use to understand how well you are doing within each layer, understanding not just the metrics for a single area of your operations, but how well you are doing at building momentum, and increasing value generation. I hesitate to call this a sales funnel, because sales isn’t my jam. It is also because I do not see APIs as something you always sell–sometimes you want people contributing data and content into a platform, and not always just consuming resources. A well balanced API platform is generating value, not just selling API calls.

I am not entirely happy with this API value generation funnel outline and diagram, but it is a good start, and gets me closer to what I’m seeing in my head. I’ll let it simmer for a few weeks, engage in some conversations with folks, and then take another pass at refining it. Along the way I’ll think about how I would apply it to my own API platform, and actually take some actions in each area, and begin fleshing out my own funnel. I’m also going to be working on a version of it with the CMO at Streamdata.io, and a handful of other groups I’m working with on their API strategy. The more input I can get from a variety of API platforms, the more refined I can make this outline and visual of my suggested API value generation funnel.


My API Storytelling Depends On The Momentum From Regular Exercise And Practice

I’ve been falling short of my normal storytelling quotas recently. I like to have at least three posts on API Evangelist, and two posts on Streamdata.io each day. I have been letting it slip because it was summer, but I will be getting back to my regular levels as we head into the fall. Whenever I put more coal in the writing furnace, I’m reminded of just how much momentum all of this takes, as well as the regular exercise and practice involved, allowing me to keep pace in the storytelling marathon across my blog(s).

The more stories I tell, the more stories I can tell. After eight years of doing this, I’m still surprised about what it takes to pick things back up, and regain my normal levels of storytelling. If you make storytelling a default aspect of doing work each day, finding a way to narrate your regular work with it, it is possible to achieve high volumes of storytelling going out the door, generating search engine and social media traffic. Also, if you root your storytelling in the regular work you are already doing each day, the chances it will be meaningful enough for people to tune in only increase.

My storytelling on API Evangelist is important because it helps me think through what I’m working on. It helps me become publicly accessible by generating more attention to my work, firing up new conversations, and reinforcing the existing ones I’m already having. When the storytelling slows, it means I’m either doing an unhealthy amount of coding or other work, or my productivity levels are suffering overall. This makes my API storytelling a heartbeat of my operations, and a regular stream of storytelling reflects how healthy my heartbeat is from regular exercise, and usage of my words (instead of code).

I know plenty of individuals, and API related operations, that have trouble finding their storytelling voice. Expressing that they just don’t have the time or resources to do it properly. Regular storytelling on your blog is hard to maintain, even with the amount of experience I have. Regardless, it is something you just have to do, and you will have to mandate that storytelling becomes a default aspect of your work each day. If you work on it regularly, eventually you’ll find your voice. However, there will always be times where you lose it, and have to work to regain it again. It is just the fight you will have to fight, but ultimately if you continue, it will be totally worth it. I’m very thankful I’ve managed to keep it going for over eight years now, resulting in a pretty solid platform that enables me to do what I do.


Allowing Users To Get Their Own OAuth Tokens For Accessing An API Without The Need For An API Application

I run a lot of different applications that depend on GitHub, and use GitHub authentication as the identity and access management layer for these apps. One of the things I like the most about GitHub, and how I feel it handles its OAuth more thoroughly than most other platforms, is how they let you get your own OAuth token under settings > developer settings > personal access tokens. You don’t need to set up an application, and do the whole OAuth dance, you just get a token that you can use to pass along with each API call.

I operate my own OAuth server which allows me to authenticate using OAuth with many leading APIs, so generating an OAuth token, and setting up a new provider isn’t too hard. However, it is always much easier to go under my account settings, create a new personal access token for a specific purpose, and get to work playing with an API. I wish that ALL API providers did this. At first glance, it looks like GitLab, Harvest, TypeForm, and ContentFul all provide personal access tokens as a first option for on-boarding with their APIs. Demonstrating this is more of a pattern, than just a GitHub feature.
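
To show just how little friction is involved, here is a minimal sketch of making an authenticated call to the GitHub API with nothing but a personal access token from your account settings. The token value is obviously a placeholder, and I’m using the Python requests library just for illustration.

```python
# A minimal sketch: calling the GitHub API with a personal access token, no OAuth dance.
import requests

TOKEN = "your-personal-access-token"  # generated under settings > developer settings > personal access tokens

response = requests.get(
    "https://api.github.com/user/repos",
    headers={"Authorization": "token " + TOKEN},
    params={"per_page": 10},
)

# Print the repositories the token grants access to.
for repo in response.json():
    print(repo["full_name"])
```

That is the entire integration. No application registration, no callback URLs, no token exchange, which is exactly why I wish every API provider offered this as a first option for on-boarding.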

One of these days I’m going to have to do another story documenting the entire GitHub OAuth system, because they have a lot of interesting bells and whistles that make using their platform much more secure, and just a more frictionless experience than other API providers I use on a regular basis. GitHub has ground down a lot of the sharp corners on the whole authentication experience when it comes to OAuth. It would make a nice blueprint to share, and work to convince other API providers it is a pattern worth following. Reducing the cognitive load around OAuth management for any API integration, and standardizing how API providers support their API consumers, and end-users.

I have 3 separate Twitter Apps setup for specific purposes, but I wanted to have a separate personal application just for managing my personal @kinlane account. I submitted a Twitter application for review, but haven’t heard back after almost a month. As an individual user of any platform, I should be able to instantly issue a personal access token that lets me, or someone I sanction, access my data and content on the platform. Personal access tokens should be a default feature for any consumer focused platform, putting API access more within the control of each end-user, and the platform power users.


What Have You Done For Us Lately (API Partner Edition)

I’ve been working on developing and evolving the Streamdata.io partner program, trying to move forward conversations with other service providers in the space that have existed long before I started working on things, as well as other newer relationships that I’ve helped bring in. I’m fascinated by how partner programs work, or do not work, and have invested a lot of time trying to optimize and improve how I do my own operations, and assist my partners and clients in evolving and delivering on their own partner vision.

It is difficult to establish, and continue, meaningful and balanced partnerships between technology service and tooling providers. Sometimes providers have enough compatibility and synergy that they are able to hit the ground running with meaningful activities that strengthen, and build partnership momentum. We are trying to establish a meaningful, yet effective way of measuring partner activity, and understanding the value that is being generated, and where reciprocity exists. Looking at the following activities produced by Streamdata.io and its partners:

  • Partner Page - Being published to both of our partner pages.
  • Testimonials - Providing quotes for each other about our services.
  • Blog Posts - Publishing blog posts about the partnership and each other's services.
  • White Papers - Publishing white papers or guides about the partnership and each other's services.
  • Press Releases - Working on joint press releases about the partnership and each other's services.
  • Integrations - Publishing open source repositories demonstrating integration and usage of each other's services.
  • Workshops - Conducting workshops for each other's customers, helping deliver each other's services within our ecosystems.
  • Business - Actually providing business referrals from our customers, and conversations occurring across both companies.

There are other activities we like to see happening, but these eight areas represent the most common exchanges we encourage amongst our partners. The trick is always pushing for reciprocity across all these areas, helping deliver on a balanced partnership, and making sure there is equal value being generated for both sides of the partnership. Each of our partners looks at this list of activities differently, requiring different levels of participation, and having expectations of results set at different levels.

There are some “potential partners” who don’t want to even talk about any of these items until we have that first business deal. While other partners are more than happy when we engage in these activities, but are hesitant about reciprocating on their side. We are more than happy to take the lead on many of these activities, but increasingly we are tracking the activity on both sides of the partnership, to help quantify each partnership, guide our conversations, and our marketing, development, and evangelism efforts. Leaving us to ask regularly of our partners, what have you done for us lately? While also asking ourselves the same question about what we have done for our partners.


The Federal Agencies Who Use Their developer.[domain].gov Subdomain

I was reviewing the new developer portal for the Department of Veterans Affairs (VA), and one of the things I took notice of was their use of the developer.va.gov subdomain. In my experience, the API efforts that invest in a dedicated subdomain, and specifically a developer dot subdomain, tend to be more invested in what they are doing than efforts that publish to a subfolder, or subsection of their website. As I was writing this post, I had a question arise in my mind, regarding how many other federal agencies use a dedicated subdomain for their developer programs–something I wanted to pick up later, and understand the landscape a little more.

I took a list of current federal agency domains from the GSA and wrote a little script to append developer. to each of the domains, and conduct an HTTP status code check to see whether or not these pages existed (a rough sketch of the script follows the list below). Here are the dedicated developer areas I found for the US federal government:

  • Department of Veterans Affairs (VA) - https://developer.va.gov/
  • Department of Labor - https://developer.dol.gov
  • International Trade Administration (ITA) - https://developer.trade.gov & https://developer.export.gov
  • United States Patent and Trademark Office - https://developer.uspto.gov
  • National Renewable Energy Laboratory - https://developer.nrel.gov
  • Centers for Medicare & Medicaid Services - https://developer.cms.gov
  • The Advanced Distributed Learning Initiative - http://developer.adlnet.gov & http://developers.adlnet.gov
  • United States Environmental Protection Agency - http://developer.epa.gov
  • USA Jobs - http://developer.usajobs.gov
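
Here is a rough sketch of the kind of script I used, for anyone who wants to repeat the work. It assumes you have the GSA list of current federal domains saved locally as a CSV, and the file name and column header here are illustrative, not the official format of that dataset.

```python
# A rough sketch of checking for developer.[domain].gov portals across federal agencies.
# Assumes a local CSV export of the GSA federal domain list; the file name and column
# header are illustrative, not the official format of that dataset.
import csv
import requests

with open("current-federal-domains.csv") as handle:
    domains = [row["Domain Name"].strip().lower() for row in csv.DictReader(handle)]

for domain in domains:
    url = "https://developer." + domain
    try:
        response = requests.head(url, timeout=5, allow_redirects=False)
    except requests.RequestException:
        continue  # no DNS entry or no response usually means no dedicated developer subdomain
    if response.status_code == 200:
        print("portal found:", url)
    elif response.status_code in (301, 302):
        print("redirects:", url, "->", response.headers.get("Location"))
```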

These nine agencies have decided to invest in a subdomain for their developer portals. I have to recognize two others who provide these subdomains, but then redirect to a subsection of their websites:

  • National Park Service - http://developer.nps.gov redirects to https://www.nps.gov/subjects/developer/index.htm
  • Data.gov - http://developer.data.gov redirects to https://www.data.gov/developers/

Additionally, there is a single domain I noticed that used the plural version of the subdomain:

  • Code.gov - https://developers.code.gov (plural)

Along the way, I also noticed that many agencies would redirect their developer subdomain, and I assume all subdomains, to the root of their agency’s domain. Ideally, all federal agencies would have a GitHub account, publish a developer portal using GitHub Pages, and use developer.[agencydomain].gov as the address for the portal. Even if they just provide access to the agency’s data inventory, it is important to lay down the foundation for a developer platform across data, APIs, and open source software out of all federal agencies, providing a common, well-known location to develop upon the government platform.

As part of my larger API discovery work I am going to keep lobbying that federal agencies work to publish a common developer.[agencydomain].gov portal. It would begin to transform how applications are built if you knew that you could automatically find a government agency’s data, APIs, and open source tooling at a single location. Especially if it was something that was default across ALL federal agencies, who were also actively publishing their public data assets, entire API catalog, and showcase of open source solutions they depend on and produce.


The Basics Of The VA API Feedback Loop

I’m working to break down the moving parts of the API efforts over at the VA, and work to provide as much relevant feedback as I possibly can. One of the components I’m wanting to think about more is the feedback loop for the VA API efforts. The feedback loop is one of the most essential aspects of doing an API, and can quickly become one of the most debilitating, paralyzing, and nutrient starving aspects of operating an API platform if done wrong, or left non-existent. However, the feedback loop is also one of the most valuable reasons for wanting to do APIs in the first place, providing the essential feedback you will need from consumers, and the entire API ecosystem, to move the API forward in a meaningful way.

Current Seeds Of The VA API Feedback Loop
Currently the VA is supporting the VA API developer portal using GitHub Issues and email. I mentioned in my previous review of the VA API developer portal that the personal email addresses provided for email support should be generalized, sharing the load when it comes to email support for the platform. Next, I’d like to address the usage of GitHub Issues for support, along with email, and step back to look at how this contributes to, or could possibly take away from, the overall feedback loop for the VA API effort. Defining what the basics of an API feedback loop for the VA might be.

Expanding Upon The VA Usage Of GitHub Issues
I support the usage of GitHub Issues for public support of any API related project. It provides a simple, observable way for anyone to get support around the VA APIs. While I’m guessing it was accidental, I like the specialization of the current repo, and usage of GitHub Issues, and that it is dedicated to VA API clients and their needs. I’d encourage this type of continued focus when it comes to establishing additional feedback loops, keeping them dedicated to a specific aspect of operating on the VA API platform. It is something that might seem a little messy at first, but could easily be managed with the proper strategy, and usage of the GitHub APIs, which I’ll highlight below.

Makes API Operations More Public And Observable
One of the most important reasons for using GitHub as the cornerstone of the VA API feedback loop is that it allows for transparent, observable, auditable operation of the feedback loop across the VA API platform. One of the critical aspects of the overall health of the VA API platform in the future will be feedback loops being as open as they possibly can be. Of course, there are some feedback loops that should remain private, which GitHub Issues can accommodate, but whenever possible the feedback loop for the VA API platform should be in the public domain, allowing all stakeholders, veterans, and the public to actively participate in the process. In a way that ensures every aspect of API operations is documented, and auditable, providing as much accountability as possible across VA API operations.

Allowing For More Modular Organization Of Feedback Loops
Using GitHub Issues allows for the deployment, management, and organization of more modular feedback loops. Treating your feedback loops just as you would your APIs, making them small, meaningful, and doing one thing and doing it well. Any GitHub repository can have its own GitHub Issues, allowing for the deployment of specialized feedback loops based upon single projects that are part of different organizational groups. Beyond the modularity available when you leverage GitHub repositories, and organize them within GitHub Organizations, GitHub Issues can also be tagged, allowing for even more meaningful organization of feedback as it comes in, tagging and bagging for consideration as part of the road map, and other decision making processes that will be feeding off the VA API platform’s feedback loop.

Enabling Feedback Loop Automation With The GitHub API
Another benefit of using GitHub Issues as an API feedback loop building block is that they also have an API, allowing for the automation of all aspects of the VA API platform feedback loop(s). The GitHub API can be used to aggregate, automate, audit, and work with the GitHub Issues for any GitHub organization and repo the VA has access to. Providing the ability to manage not just the GitHub Issues for a single GitHub repository, but for the orchestration of feedback loops across many different GitHub repositories, managed by many different GitHub organizations. Establishing a distributed feedback loop system that VA API leadership can use to coordinate with different internal, agency, partner, vendor, or public stakeholder(s) at scale, across many different projects, and components of the VA API platform.
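
To make the automation piece a little more concrete, here is a hedged sketch of aggregating open issues across all the repositories in a GitHub organization, which is the kind of distributed feedback loop reporting I am describing. The organization name and token are placeholders, and a real implementation would need to handle pagination and rate limits.

```python
# A sketch of aggregating feedback across an organization's repositories using the GitHub API.
# The organization and token are placeholders; pagination and rate limiting are left out.
import requests

ORG = "example-va-organization"  # placeholder organization name
HEADERS = {"Authorization": "token your-personal-access-token"}

repos = requests.get(
    "https://api.github.com/orgs/{}/repos".format(ORG),
    headers=HEADERS,
    params={"per_page": 100},
).json()

feedback = []
for repo in repos:
    issues = requests.get(
        "https://api.github.com/repos/{}/{}/issues".format(ORG, repo["name"]),
        headers=HEADERS,
        params={"state": "open", "per_page": 100},
    ).json()
    for issue in issues:
        labels = [label["name"] for label in issue.get("labels", [])]  # tags drive the road map
        feedback.append({"repo": repo["name"], "issue": issue["number"],
                         "title": issue["title"], "labels": labels})

for item in feedback:
    print(item)
```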

Augmenting Public Feedback With Private Github Repos
While it is critical that as many of the feedback loops across the VA API platform are publicly accessible, and observable by everyone as possible, it is also important that there are private channels for communication around some of the components of the platform. This is another reason why GitHub Issues can work as a building block for not just public feedback loops, but private ones as well. Taking advantage of private repositories when it comes to establishing modular, automated, and private conversations around certain VA API platform projects. Balancing the public aspects of the platform with feedback loops amongst trusted groups, while still leveraging GitHub for delivering the identity and access management aspects of governing private VA feedback loops.

Extending Private GitHub Repos With Email Support
Beyond the private GitHub repositories, and using their issues to facilitate private conversations, it always makes sense to have a generalized and dedicated email account as part of the feedback loop for any API platform. Providing another private, but also vendor neutral, way of supporting the platform. People are just familiar with email, and it makes sense to have a general account that is managed by many individuals who are coordinating around platform operations. Making it easy to provide feedback around VA API operations, and support anyone participating within the VA API ecosystem.

Auditing, Documenting, And Reporting Upon The VA Feedback Loop
I suggested in my review of the VA API platform that email should be standardized and delivered via a dedicated email account, so that multiple stakeholders can participate in support of the platform from a VA operational perspective. This way emails can be tagged, organized, and archived in support of the larger VA API feedback loop. Making sure all questions get answered, and that they also contribute to the evolution of the platform. Something that can also be done via the automation described earlier using the GitHub API. Allowing all threads, across any project and organization, to be audited, documented, and reported upon across all VA API operations. Ensuring that there is transparency, observability, and accountability across the VA API platform feedback loop.

Have A Strategy In Place For The VA API Feedback Loop
GitHub Issues and email are the two basic building blocks of any API platform, and I support the VA starting their official journey here. I think GitHub makes for an essential building block of any API platform, when used right. It just helps to have a plan in place for when a repo’s GitHub Issues are included in the overall feedback loop framework, and for the organization and prioritization of the conversation going on there. GitHub Issues spread across many different GitHub repositories, without any real strategy for how they are organized, tagged, and engaged with, can seem overwhelming, and become a mess. However, a little planning, and the establishment of even the most basic approach to managing them, can help develop a pretty robust feedback loop across the VA API platform, one that follows the lead of how open source software gets delivered.

Consider Other API Feedback Loop Building Blocks
I wanted to keep this post just about the basics of the feedback loop for the VA, or for any API platform–GitHub Issues, and email. However, I’d also like to suggest the consideration of some other building blocks, to help augment GitHub Issues, providing some other direct, and indirect approaches to operating the VA API platform feedback loop:

  • FAQ - Providing a frequently asked questions page that aggregates all the questions that get asked across the GitHub Issues, and via email.
  • Newsletter - Providing a regular channel for updating platform stakeholders, via a structured email newsletter. Offering up private, and public editions, targeting different groups.
  • Road Map - Publishing a road map regarding what is getting built across all projects included within the VA API platform perimeter, aggregating GitHub Issues that evolve as part of the feedback loop and get tagged as milestones for adding to the road map.

I’m always hesitant to make suggestions about where to go next, when an organization is just getting started on their API journey. However, I think the VA team knows when to ignore my advice, and when they can cherry pick the things they want to include in their strategy. I just want to make sure I provide as much constructive criticism about what is there, and constructive feedback around what can be invested in next.

Hopefully this post provides a basic overview of the VA API platform feedback loop. It expands on what they are already doing, and shines a light on some of the positive aspects of using GitHub for the VA API platform feedback loop. I was the one who worked with the former VA CIO Marina Martin (@MarinaNitze) to get the VA GitHub organization setup back in 2013. So it makes me happy to see it being used as a cornerstone of the VA API platform. I am happy to give feedback on how they can continue to put the powerful platform to such good use.


Remembering That APIs Are Used To Reduce Everything Down To A Transaction

This is our regular reminder that APIs are not good, nor bad, nor neutral. They are simply a tool in our technological toolbox, and something that is often used for very dark reasons, and occasionally for good. One of the most dangerous things I’ve found about APIs is just the general thought process that goes along with them, regarding how all roads lead to reducing, and distilling things down to a single transaction. APIs, REST, microservices, and other design patterns are all about taking something from our physical world, and redefining it as something that can be transmitted back and forth using the low cost request and response infrastructure of the web.

No matter what you are designing your API for, your mission is to reduce everything to a simple transaction that can be exchanged between your server, and any other system, web, mobile, device, or network application. This digital resource could be a photo of your kids, a message to your mother, the balance of your bank account, your personal thoughts going into your notebook, the latest song you listened to, your DNA, your test results for cancer, or any other piece of relevant data, content, media, object, or other resource that is being sent or received online. APIs are all about reducing all of our meaningful digital bits to the smallest possible transaction, and then daisy chaining them together to produce some desired set of results.

This API-ification of everything can be a good thing. It can make our lives better, but one of the negative side effects of reducing everything to a transaction is that the transaction can now be bought and sold. The digitization of everything in our lives is rarely ever about making our lives better, whatever the reasons we are told up front, and is almost always about reducing that little piece of our lives to a transaction that can be quantified, have a value placed on it, and then sold individually, or in bulk with millions of other transactions. As consumers of a digital reality, we rarely see the reasons why something around us is being digitized, and API-ified so that it can be transacted online, resulting in something we’ve heard a lot–that we are the product.

It’s easy to believe in the potential of APIs. It is easy to get caught up in the reducing of everyday things down to transactions. It takes discipline, and the ability to stop and consider the bigger picture on a regular basis, to avoid being stuck in the strong under currents of the API economy. Making sure we are regularly asking ourselves if we want this piece of our reality digitized and reduced to a transaction, and what the potential negative consequences of this element of our existence being a transaction might be. Thinking a little more deeply about how we’d feel if someone was buying and selling the digital bits of our life, and whether we are only ok with this as long as it is someone else’s bits and bytes–demonstrating that APIs are winning, and humanity is losing in this game we’ve developed online.


Why I Feel The Department Of Veterans Affairs API Effort Is So Significant

I have been working on API and open data efforts at the Department of Veterans Affairs (VA) for five years now. I’m passionate about pushing forward the API conversation at the federal agency because I want to see the agency deliver on its mission to take care of veterans. My father, and my step-father, were both vets, and I lived through the fallout from my step-father's two tours in Vietnam, exposure to the VA healthcare and benefits bureaucracy, and ultimately his passing away from cancer, which he acquired from his exposure to Agent Orange. I truly want to see the VA streamline as many of its veteran facing programs as they possibly can.

I’ve been evangelizing for API change and leadership at the VA since I worked there in 2013. I’m regularly investing unpaid time to craft stories that help influence people I know who are working at the VA, and who are potentially reading my work. Resulting in posts like my response to the VA’s RFI for the Lighthouse API management platform, which included a round two response a few months later. Influence through storytelling is the most powerful tool I have in my API evangelist toolbox.

This Is An Amazon Web Services Opportunity
The most popular story on my blog is, “The Secret to Amazon’s Success–Internal APIs”. Which tells a story of the mythical transition of Amazon from an online commerce website to the cloud giant, who is now powering a significant portion of the web. The story is mostly fiction, but continues to be the top performing story on my blog six years later. I’ve heard endless conference talks about this subject, I’ve seen my own story framed on the wall in enterprise organizations in Europe and Australia, and as a feature link on internal IT portals. This is one of the most important stories we have in the API sector, and what is happening at the VA right now will become similar to the AWS story when we are talking about delivering government services a decade from now.

The VA Is Going All In On An API Vision
One of the reasons the VA will obtain the intended results from their API initiative is because they are going all in on APIs across the agency. The API effort isn’t just some sideshow going on in a single department or group. This API movement is being led out of the VA’s Digital Innovation center, but is being adopted, picked up, and moved forward by many different groups across the large government agency. When I did my landscape analysis for them, I scanned as much of their public digital presence as possible in a two week timeframe, and provided them with a list of targets to go after. I see the scope of the results obtained from the VA landscape analysis present in the APIs I’m seeing published to their portal, and in development by different groups, revealing the beginnings of an agency-wide API journey.

The Use Of developer.va.gov Demonstrates The Scope
One way you can tell that the VA is going all in on an API vision is their usage of the developer.va.gov subdomain. This may seem like a trivial thing, but after looking at thousands of API operations, and monitoring some of them for eight years, the companies, organizations, institutions, and government agencies that dedicate a subdomain to their API programs are always more committed to them, and invest the appropriate amount of resources needed to be successful. These API leaders always stand out from the organizations that publish their API efforts as an entry in their help center or knowledge-base, or make it just a footnote in their online presence. The use of the developer.va.gov subdomain demonstrates the scope of investment going on over at the VA, in my experience.

The VA Is Properly Funding Their API Efforts
One of the most common challenges I see API teams face is the lack of resources to do what they need to do. API teams that can’t afford to deliver across multiple stops along the API lifecycle, cutting corners on testing, monitoring, security, documentation, and other common building blocks of a successful API platform. Properly funding an API initiative, and making it a first class citizen within the enterprise, is essential to success. The number one reason an API platform gets rendered ineffective is a lack of resources to properly deliver, evangelize, and scale API operations. This condition often leaves API programs unable to effectively spread across large organizations, properly reach out to partners, and generate the attention a public API program will need to be successful. From what I’ve seen so far, the VA is properly funding the expansion of the API vision at the agency, and will continue to do so for the foreseeable future.

The VA Is Providing Access To Meaningful API Resources
I’ve seen thousands of APIs get launched. Large enterprises always start with the safest resources possible. Learning by delivering resources that won’t cause any waves across the organization, which can be a good thing, but after a while, it is important that the resources you put forth do cause waves, and make change across an organization. The VA started with simple APIs like the VA Facilities API, but is quickly shifting gears into benefits, veteran validation, health, and other APIs that are centered around the veteran. I’m seeing the development of APIs that provide a full stack of resources that touch on every aspect of the veteran's engagement with the VA. In order to see the intended results from the VA API efforts, they need to be delivering meaningful API resources that truly make an impact on the veteran. From what I’m seeing so far, the VA is getting right at the heart of it, and delivering the useful API resources that will be needed across web, mobile, and device based applications that are serving veterans today.

There Is Transparency And Storytelling
Every one of my engagements with the VA this year has ended up on my blog. One of the reasons I stopped working within the VA back in 2013 was there were too many questions about being able to publish stories on my blog. I haven’t seen such questions of my work this year, and I’m seeing the same tone being struck across other internal and vendor efforts. The current API movement at the VA understands the significance of transparency, observability, and of doing much of the API work at the VA out in the open. Sure, there is still the privacy and security apparatus that exists across the federal government, but I can see into what is happening with this API movement from the outside-in. I’m seeing the right amount of storytelling occurring around this effort, which will continue to sell the API vision internally to other groups, laterally to other federal agencies, and outwards to software vendors and contractors, as well as sufficiently involving the public throughout the journey.

Evolving The Way Things Get Done With Micro-Procurement
Two of the projects I’ve done with the VA have been micro-procurement engagements: 1) VA API Landscape Analysis, and 2) VA API Governance Models In The Public And Private Sector. Both of these projects were openly published to GitHub, opening up the projects beyond the usual government pool of contractors, then were awarded and delivered within two week sprints for less than $10,000.00. Demonstrating that the VA is adopting an API approach to not just changing the technical side of delivering service, but also working to address the business side of the equation. While still a very small portion of what will be needed to get things done at the VA, it reflects an important change in how technical projects can be delivered at the VA. Working to decompose and decouple not just the technology of delivering APIs at the VA, but also the business, and potentially the internal and vendor politics of delivering services at scale across the VA.

The VA Has Been Doing Their API Homework
As of the last couple of months, the VA is shifting their efforts into high gear with an API management solicitation, as well as an API development and operations solicitation, to help invest in, and build capacity across the agency. However, before these solicitations were crafted, the VA did some serious homework. You can see this reflected in the RFI effort which started in 2017, and continued in 2018. You can see this reflected in the micro-procurement contracts that have been executed, and are in progress as we speak. I’ve seen a solid year of the VA doing their homework before moving forward, but once they’ve started moving forward, they’ve managed to be able to shift gears rapidly because of the homework they’ve done to date.

Investing In An API Lifecycle And Governance
I am actively contributing to, and tuning into the API strategy building going on at the VA, and I’m seeing investment into a comprehensive approach to delivering all APIs in a consistent way across a modern API lifecycle. Addressing API design, mocking, deployment, orchestration, management, documentation, monitoring, and testing in a consistent way using an OpenAPI 3.0 contract. Something that is not just allowing them to reliably deliver APIs consistently across different groups and vendors, but is also allowing them to develop a comprehensive API governance strategy to measure, report upon, and evolve their API lifecycle and strategy over time. Dialing in how they deliver services across the VA, by leveraging the development, management, and operational level capacity they are bringing to the table with the solicitations referenced above. This approach demonstrates the scope in which the VA API leadership understands what will be necessary to transform the way the VA delivers services across the massive federal agency.

Providing An API Blueprint For Other Agencies
What the VA is doing is poised to change the way the VA meets its mission. However, because it is being done in such a transparent and observable way, with every stop along the lifecycle being so well documented and repeatable, they are also developing an API blueprint that other federal agencies can follow. There are other healthy API blueprints to follow across the federal government, out of Census, Labor, NASA, CFPB, FDA, and others, but there is not an agency-wide, API definition driven, full life cycle, complete with governance blueprint available at the moment. The VA API initiative has the potential to be the blueprint for API change across the federal government, becoming the Amazon Web Services story that we’ll be referencing for decades to come. All eyes are on the VA right now, because their API efforts reflect an important change at the agency, but also an important change in how the federal government delivers services to its people.

I am all in when it comes to supporting APIs at the VA. As I mentioned earlier, my primary motivation is rooted in my own experiences with the VA system, and helping it better serve veterans. My secondary motivation is all about contributing to, and playing a role in, the implementation of one of the most significant API platforms out there, which if done right, will change how our federal government works. I’m not trying to be hyperbolic in my storytelling around the VA API platform, I truly believe that we can do this. As always, I am working to be as honest as I can about the challenges we face, and I know that the API journey is always filled with twists and turns, but with the right amount of observability, I believe the VA API platform can deliver on the vision being set by leadership at the agency, and that is why I find this work to be so significant.


Understanding Where Folks Are Coming From When They Say API Management Is Irrelevant

I am always fascinated when I get push back from people about API management, the authentication, service composition, logging, analysis, and billing layer of the world of APIs. I seem to be finding more people who are skeptical that it is even necessary anymore, and that it is a relic of the past. When I first started coming across the people making these claims earlier this year I was confused, but as I’ve pondered the subject more, I’m convinced their position is more about the world of venture capital, and investment in APIs, than it is about APIs.

People who feel like you do not need to measure the value being exchanged at the API layer aren’t considering the technology or business of delivering APIs. They are simply focused on the investment cycles that are occurring, and see API management as something that has been done, it is baked into the cloud, and really isn’t central to API-driven businesses. They perceive the age of API management as being over, it is something the cloud giants are doing now, thus it isn’t needed. I feel like this is a symptom of tech startup culture being so closely aligned with investment cycles, and the road map being about investment size and opportunity, and rarely about the actual delivery of the thing that brings value to companies, organizations, institutions, and government agencies.

I feel this perception is primarily rooted in the influence of investors, but it is also based upon a limited understanding of API management, and seeing APIs as being about delivering public APIs, maybe complementing a SaaS offering, with free, pro, and enterprise tiers of access. When in reality API management is about measuring, quantifying, reporting upon, and in some cases billing for, the value that is exchanged at the system integration, web, mobile, device, and network application levels. However, to think API operators shouldn’t be measuring, quantifying, reporting, and generating revenue from the digital bits being exchanged behind ALL applications just demonstrates a narrow view of the landscape.

It took me a few months to be able to see the evolution of API management from 2006 to 2016 through the eyes of an investment minded individual. Once the last round of consolidation occurred, Apigee IPO’d, and API management became baked into Amazon, Google, and Azure, it fell off the radar for these folks. It’s just not a thing anymore. This is just one example of how investment influences the startup road map, as well as the type of thinking that goes on amongst investor influence, painting an entirely different picture of the landscape than what I see going on. Helping me understand more about where this narrative originates, and why it gets picked up and perpetuated within certain circles.

To counter this view of the landscape, from 2006 to 2016 I saw a very different picture. I didn’t just see the evolution of Mashery, Apigee, and 3Scale as acquisition targets, and cloud consolidation. I also saw the awareness that API management brings to the average API provider. Providing visibility into the pipes behind the web, mobile, device, and network applications we are depending on to do business. I’m seeing municipal, state, and federal government agencies waking up to the value of the data, content, and algorithms they possess, and the potential for generating much needed revenue off commercial access to these resources. I’m working with large enterprise groups to manage their APIs using 3Scale, Apigee, Kong, Tyk, Mulesoft, Axway, and AWS API Gateway solutions.

Do not worry. Authenticating, measuring, logging, reporting, and billing against the value flowing through our API pipes isn’t going anywhere. Yes it is baked into the cloud. After a decade of evolution, it definitely isn’t the early days of investing in API startups. But, API management is still a cornerstone of the API life cycle. I’d say that API definitions, design, and deployment are beginning to take some of the spotlight, but authentication, service composition, logging, metrics, analysis, and billing will remain an essential ingredient when it comes to delivering APIs of any shape or size. If you study this layer of the API economy long enough, you will even see some emerging investment opportunities at the API management layer, but you have to be looking through the right lens, otherwise you might miss some important details.


API Portals Designed For API Provider And API Consumers

I’ve been working with a couple of organizations who are struggling with providing information within their API developer portal intended for API publishers, pushing their API portal beyond just being for their API consumers. Some of the folks I’ve been working with haven’t thought about their API developer portals being for both API publishers and consumers, and asked me to weigh in on the pros and cons of doing this. Helping them understand how they can continue their journey towards not just being an API platform, but also an API marketplace.

Some of the conversations we were having about providing API lifecycle materials to API developers, helping them deliver APIs consistently across all stops along the lifecycle, focused on creating a separate portal for API publishers, decoupling them from where the APIs would be discovered and consumed. After some discussion, and consideration, it feels like it would be an unnecessary disconnect to have API publishers going to a different location than where their APIs would end up being discovered, and integrated with. Having them actively involved in the publishing, presentation, and even support of, and engagement with, consumers would benefit everyone involved.

Think of it being like Rapid API, but for a large company, organization, institution, or government agency. You can find APIs, and integrate with existing APIs, or you can also become an API publisher, and be someone who helps publish and manage APIs as well. You will have one account, where you can find documentation, usage information, and other resources for the APIs you consume, but you will also access information, and usage data, on the APIs you’ve published. Pushing API developers within an organization to actively think about both sides of the API coin, and learn how to be both provider and consumer. Helping add to the catalog of APIs, but also helping evolve and grow the army of API people across an organization.

We still have a lot of work ahead of us when it comes to fleshing out what type of information we should provide to API publishers, and how to cleanly separate the two worlds, but I feel the realization that a portal can be both for API publishers and consumers was an important one for these groups. I feel like it represents a milestone in the maturity and growth of their API programs, where the API developer portal has grown into something that everyone should be tuning into, and participating in. It isn’t just something that a single team, or handful of individuals, manages, it has become a group effort. Sharing the load of operating the API portal, and keeping things up to date and active, further contributing to the potential success of the platform, shielding it from becoming yet another web service or API catalog that gets forgotten about on the network.


Trying To Define An Algorithm For My AWS API Cost Calculations Across API Gateway, Lambda, And RDS

I am trying to develop a base API pricing formula for determining what my hard costs are for each API I’m publishing using Amazon RDS, EC2, and AWS API Gateway. I also have some APIs I am deploying using Amazon RDS, Lambda, and AWS API Gateway, but for now I want to get a default base for determining what operating my APIs will cost me, so I can mark up and reliably generate profit on top of the APIs I’m making available to my partners. AWS has all the data for me to figure out my hard costs, I just need a formula that helps me accurately determine what my AWS bill will be per API.

Math isn’t one of my strengths, so I’m going to have to break this down, and simmer on things for a while, before I will be able to come up with some sort of working formula. Here are my hard costs for what my AWS resources will cost me, for three APIs I have running currently in this stack:

  • AWS RDS - I am running a db.r3.large instance which costs me $0.29 per hour, or $211.70 per month, with the bandwidth free to my Amazon EC2 instances in the same availability zone. I do not have any public access, so I don’t have any incoming or outgoing traffic, except from the EC2 instance.
  • AWS EC2 - I am running a t2.large instance which costs me $0.0928 per hour or $67.74 per month with bandwidth out costing me $0.155 per GB. I’m curious if they have an Amazon EC2 to AWS API Gateway data consideration? I couldn’t find anything currently.
  • AWS API Gateway - Overall using AWS API Gateway costs me $3.50 per million API calls received, with the first 1GB costing me $0.00/GB, and costing me $0.09/GB for the next 9.999 TB.

Across these three APIs, I am seeing an average of 5KB per response, which is an important variable to use in these calculations. The AWS monthly cost calculator helps me figure out my monthly bill across services, but it doesn’t help me break down what my hard costs are per API call. I need to establish a flat rate of what it costs for a single API call to exist. Each API will be in its own plan, so I can charge different rates for different APIs, but I need a baseline that I start with for each API call to make sure I’m covering my hard AWS costs. Sure, the more API calls I serve, the more profitable I’ll be, but at some point I’ll also have to scale the infrastructure to keep a certain quality of service–there are a number of things to consider here.
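
Here is where I am starting with the math, as a rough sketch using the hard costs listed above. The expected monthly call volume is an assumption I will have to keep revisiting, and the bandwidth pricing between EC2 and the API Gateway is something I still need to confirm.

```python
# A rough sketch of the baseline cost per API call, using the hard costs listed above.
rds_monthly = 211.70             # db.r3.large at $0.29 per hour
ec2_monthly = 67.74              # t2.large at $0.0928 per hour
gateway_per_million = 3.50       # AWS API Gateway price per million calls received

gateway_transfer_per_gb = 0.09   # API Gateway data transfer out (after the first free GB)
ec2_transfer_per_gb = 0.155      # EC2 bandwidth out, if it applies between EC2 and the gateway
avg_response_kb = 5              # average response size across these three APIs

expected_calls_per_month = 1_000_000  # an assumption, not a measured number

fixed_per_call = (rds_monthly + ec2_monthly) / expected_calls_per_month
gateway_per_call = gateway_per_million / 1_000_000
transfer_per_call = (avg_response_kb / (1024 * 1024)) * (gateway_transfer_per_gb + ec2_transfer_per_gb)

base_cost_per_call = fixed_per_call + gateway_per_call + transfer_per_call
print("base AWS cost per call: ${:.6f}".format(base_cost_per_call))
```

The big caveat is that the fixed RDS and EC2 costs get spread across however many calls actually happen, so the base cost per call drops as traffic grows, which is why the assumed call volume matters so much.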

I envision my API pricing having the following components:

  • Base - A base cost to cover my AWS bill, considering AWS RDS, EC2, and API Gateway hard costs.
  • Resource - A price for covering investment in resource. Work finding, cleaning up, refining, and evolving.
  • Markup - The percentage of markup for each API call, allowing me to generate a profit from the resources I’m providing.
  • Partner - Provide a volume discount, charging light users more, and giving a break to my partners who are consuming larger volumes.
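
And here is a sketch of how I imagine those four components combining into a per-call price. The resource cost, markup percentage, and partner discount are placeholder numbers I would tune over time, not recommendations, and the base cost is roughly what the sketch above produces at the assumed call volume.

```python
# A sketch of combining the base, resource, markup, and partner components into a price.
base_cost_per_call = 0.000284   # placeholder, roughly the output of the sketch above

def price_per_call(base_cost, resource_cost=0.0005, markup=0.30, partner_discount=0.0):
    # base_cost covers the AWS bill, resource_cost covers the work behind the resource,
    # markup is the profit percentage, and partner_discount rewards higher volume consumers.
    return (base_cost + resource_cost) * (1 + markup) * (1 - partner_discount)

standard_rate = price_per_call(base_cost_per_call)
partner_rate = price_per_call(base_cost_per_call, partner_discount=0.20)
print("standard: ${:.6f} per call, partner: ${:.6f} per call".format(standard_rate, partner_rate))
```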

I’m studying other pricing models from the telco, and software hosting spaces. I’ll also be doing some landscape analysis to see what people are charging for comparable API resources. I possess a wealth of data on what API providers and service providers are charging for their services. The trick will be finding comparable services to what I’m offering, and for the unique resources I possess, I’m going to have to be able to set my own price, and then test out my assumptions, and formula over time–until I find the sweet spot for covering my hard costs, and generating profit at some point from a specific service I’m offering.

If you have any advice for me, help on the math side of things, or examples from other industries, I’d love to hear more. I’ll be open sourcing and sharing everything I figure out, and telling the story of how it is being applied to each API I am publishing. I want the history to be present for each of my APIs, adding to my wider API monetization and API planning research. In the end, I don’t think there is a perfect answer to what the pricing for an API should be. The best path forward involves covering your hard costs, and then experimenting over time to see what the market will bear. This is why AWS has gotten so good at doing this for the cloud, because they have been doing this work for over a decade now. I am sure they have a lot of data, as well as experience, understanding how to price API resources so they are competitive, disruptive, and profitable.


Reviewing The Department Of Veterans Affairs New Developer Portal

I wanted to take a moment and review the Department Of Veterans Affairs (VA) new developer portal. Spending some time considering how far they’ve come, what they have published so far, and brainstorming on what the future might hold. Let me open by saying that I am working directly and indirectly with the VA on a variety of paid projects, but I’m not being paid to write about their API effort–that is something I’ve done since I worked there in 2013. I will craft a disclosure to this effect that I put at the bottom of each story I write about the VA, but I wanted to put it out there in general, as I work through my thoughts on what is happening over at the VA.

The VA Has Come A Long Way
I wanted to open up this review with a nod towards how far the VA has come. There have been other publicly available APIs at the VA, as well as a variety of open data efforts, but the VA is clearly going all in on APIs with this wave. The temperature at the VA in 2013 when it came to APIs was lukewarm at best. With the activity I’m seeing at the moment, I can tell that the temperature of the water has gone way up. Sure, the agency still has a long way to go, but from what I can tell, the leadership is committed to the agency's API journey–something I have not seen before.

Developer.VA.Gov Sends The Right Signal
It may not seem like much, but providing a public API portal at developer.va.gov sends a strong signal that the VA is seriously investing in their API effort. I see a lot of API programs, and companies who have a dedicated domain, or subdomain for their API operations are always more committed than people who make it just a subsection of their existing website, or existing as a help entry in a knowledge-base. It is important for federal agencies to have their own developer.[domain].gov portal that is actively maintained–which will be the next test for the VA’s resolve, keeping the portal active and growing.

The General Look And Feel Of The Portal
I like the minimalist look of the VA developer portal. It is simple. Easy on the eyes. I feel like the “site is currently under development” is unnecessary, because this should never cease to be. I like the “an official website of the United States government”, it is clean, and official looking. I’m happy to see the “Get help from Veterans Crisis Line”, and is something that should be on any page with services, data, or content for veterans. I like the flexible messaging area (FMA), where it says “Put VA Data to Work”. I’d like to see this section become an evolving FMA, with a wealth of messages rolling through it, educating the ecosystem about what is happening across the VA developer platform at any given moment.

Getting Started And Learning More
The learn more about VA APIs link off the FMA area on the home page drops me into the benefits API overview, which happens to be the first category of APIs on the documentation page. I recommend isolating this to its own “getting started” page, which provides an overview of how to get started across all APIs. Providing background on the VA developer program, as well as the other building blocks of getting started with APIs, like requesting access, studying the API documentation, and the path to production for any application you are developing. The getting started for the VA developer portal should be a first class citizen, with its own page, and a logical presentation of exactly the building blocks developers will need to get started–then they can move onto documentation across all the API categories.

There Are Valuable APIs Available
Once you actually begin looking at the API documentation available within the VA developer portal, you realize that there are truly valuable APIs available in there. Don’t get me wrong, the Arlington National Cemetery API, which has been the only publicly available API from the VA for several years, is important, but when I think about VA APIs, I’m looking for resources that make a meaningful and significant impact on a vet’s life today:

  • Benefits Intake - Veterans Benefits Administration (VBA) document uploads.
  • Appeals Status - Enables approved organizations to submit benefits-related PDFs and access information on a Veteran’s behalf.
  • Facilities API - Use the VA Facility API to find relevant information about a specific VA facility. For each VA facility, you’ll find contact information, location, hours of operation and available services.
  • Veterans Health API - [There is no concise description for this API, and what is there needs some serious taming, and pulling out as part of the portal.]
  • Veteran Verification - We are working to give Veterans control of their information – their service history, Veteran status, discharge information, and more – and letting them control who sees it.

One minor thing that will significantly contribute to the storytelling around VA APIs is the consistent naming of APIs. Notice that only two of them have API in the title. I’m really not advocating for API to be in the title or not in the title. I am advocating for consistency in naming, so that storytelling around them flows better. I lean towards using API in each title, just so that their titles in the documentation, in the OpenAPI contract behind them, and everywhere else these APIs travel are consistent, meaningful, and explain what is available.

I like the organizing of APIs into the three categories of benefits, facilities, and health. I’d say veteran verification is a little out of place. Maybe have a veteran category, and it is the first entry? IDK. I’m thinking there needs to be a little planning for the future, and what constitutes a category, and some guidance on how things are defined, broken down, and categorized, so that there is some thought put into the API catalog as it grows. A little separation of concerns in categorization can maybe begin to contribute to the overall microservices strategy across the VA.

The Path To Production For Alpha API Clients
I like the path to production information for alpha API clients, but I felt like it should be its own dedicated page, as one of the building blocks of the getting started section. However, once I started scrutinizing, it seemed like it was a separate process potentially for each individual API or category of APIs. If it can be a standalone item, I’d make it one, and link to it from each individual API category, or individual API. If it can’t be, I’d figure out how to make it an expandable section, or subsection of the docs. It isn’t something I want to have to scroll through when working with an API and its documentation. Sure, I want to be aware of it, and be able to understand it as part of on-boarding, and revisit it at a later date, but I don’t need it to be part of the core documentation page–I just want to get at the interactive API documentation.

Self-Service Signup
The process for signing up seems smooth. I haven’t been approved for access yet, but the review process makes a lot of sense. I’ll invest time in a separate story taking a look at the signup and on-boarding process, as well as the path to production flow for API clients, but I wanted to lightly reference it as part of this review. I’d say the one confusing piece was leaving the website for the signup form, and then being dropped back at https://www.oit.va.gov/developer/, without much of any information about what was happening. It was a little jarring, and the overall flow and process needs some smoothing out. I get that we are just getting started here, so I’m not too worried about this–I just wanted to make a note of it.

The Essentials Are There
Overall, the essentials are present at the VA developer area. It is a great start, and has the makings of a good API developer portal. It just isn’t very mature, and you can tell things are just getting started. You can sign up, get at API documentation, and understand what it takes to build an application on top of VA API resources. Adding to, refining, and further polishing what is there will take time, so I do not want to be too critical of what the VA has published–it is a much better start than I’ve seen out of other federal agencies.

There are some other random items I’d like to reference, as well as brainstorm a little on what I’d like to see invested in next, helping ensure the VA API portal provides what is needed for developers to be successful:

  • Terms of Service - It is good to see the basics of the TOS, and privacy policy there. I’d like to see more about privacy, security, and service level agreements (SLA).
  • Use Of GitHub Issues - I like the submission of GitHub issues to request production access, and think it is a healthy use of GitHub issues, and something that brings observability to the on-boarding process across the community.
  • Email Support - Beyond using GitHub issues for support and on-boarding, I see [email protected] a lot across the site. I get why you want to be at the center of things, but email support should be made generic, and enable group ownership of the email support workflow.
  • Road Map - I’d like to see a roadmap of what is being planned, as well as a change log for what has already been accomplished.
  • Frequently Asked Questions - I’d like to see an FAQ page, with questions and answers broken down by category, allowing me to browse some of the common questions as I’m getting up to speed.
  • Code Samples & SDKs - I’d like to see some more code samples in a variety of programming languages, either baked into the interactive documentation, or available on their own SDK / code libraries pages. I get it if the VA doesn’t want to get in the business of doing this, but with an OpenAPI core, there are plenty of options out there to generate code samples, libraries, and SDKs (see the sketch after this list). I think this vets API client effort needs to be pulled onto a code samples and SDKs page, and there should be more investment in projects like this.
  • OpenAPI Download - I’d like to see a clear icon or button for downloading the OpenAPI 3.0 contract for each of the available APIs. I want to be able to use the OpenAPI definition for more than just documentation.
  • OpenAPI 3.0 - I’m very happy to see OpenAPI 3.0 powering the API documentation, which I think is a little detail that shows the VA API team has been doing their homework.
  • Postman Collections - I’d like to see a Run in Postman button at the top right corner of each API’s documentation, allowing me to quickly load up each API into my Postman API integration and development environment.
  • Communications - I’d like to see a blog, a Twitter account, and an emphasis on the VA Github account. As an API consumer I’d like to see that someone is home besides [email protected], and be able to have regular asynchronous conversations, and maybe engage synchronously during API office hours, or in some other format.
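
To make the SDK point above concrete, here is one hedged illustration of what an OpenAPI core makes possible for consumers, using the open source OpenAPI Generator CLI. The definition file name and target language are just assumptions for the sake of example:

```bash
# Generate a Python client library from a downloaded VA OpenAPI definition.
# The input file name, generator language, and output folder are illustrative.
openapi-generator-cli generate \
  -i facilities-openapi.json \
  -g python \
  -o ./facilities-sdk
```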

I’ll stop there. I have plenty more ideas about what I’d like to see in the VA developer portal, but I’m just happy to see such a clean, informative portal up and running at developer.va.gov. I’m stoked they are working on delivering real APIs that offer value to the Veteran–that is why we are doing all of this, right? I’m curious to learn about what other APIs already exist and can just be hung within this portal, as well as what APIs are planned for the immediate road map. While there are still missing parts and pieces, what they’ve published is a damn fine start to the VA developer program.

Next, I’m going to do a deep dive into what I’d like to see in the getting started page, as well as the path to production guidance. I’d also like to do some deep thinking on the production application (regular) check-in and review processes. I have a short list of concepts I want to flesh out, and questions I would like to answer in future posts. I just wanted to make sure I took a moment to review the VA’s hard work on their new developer portal. The publishing of their developer portal marks a significant milestone in the agency’s API journey, the spot where their API platform begins to shift into a higher gear.


Not All Companies Are Interested In The Interoperability That APIs Bring

I’ve learned a lot in eight years of operating API Evangelist. One of the most important things I’ve learned to do is separate my personal belief and interest in technology from the realities of the business world. I’ve learned that not all businesses are ready for the change that doing APIs brings, and that many businesses really aren’t interested in the interoperability, portability, and observability that APIs bring to the table. Despite what I may believe, APIs in the real world often have a very different outcome.

I see the potential of having a public API developer portal where you publish all the digital resources your company offers, providing self-service registration to access these digital resources with a fair, transparent, pay-for-what-you-use pricing model. I get what this can do for companies when it comes to attracting developer talent to help deliver the applications that are needed for any platform to thrive. I’ve seen the benefits to the end-users of these applications when it comes to giving them control over their data, the ability to leverage 3rd party applications, while also better understanding, managing, and ultimately owning the digital resources they generate each day. I also regularly see how this all can be a serious threat to how some businesses operate, and work to reveal the illnesses that exist within many businesses, and the shady things that occur behind the firewall each day.

I regularly see businesses pay lip service to the concept of APIs, but in reality, they are more about locking things up, and slowing things down to their benefit, instead of opening up access, and streamlining anything. I’m not saying that businesses do this by default, or are always being led from the top down to behave this way. I am saying it gets baked into the fabric of how teams, groups, and individuals operate as cells in the overall organizational organism. These cells learn to resist, fight back, and appear like they are on board with this latest wave of how we deliver technology, but in reality, they are not interested in the interoperability that APIs bring to the table. There is just too much power, control, and revenue to be generated by locking things up, slowing things down, and making things hard to get.

After eight years of doing this, plus another 22 years of working in the industry, I’m always skeptical of people’s motivations behind doing APIs. Why do you think this resource is important enough to make accessible? Who will get access to this resource? What is the price of this resource? Is pricing observable across all tiers of access? Can we talk about your SLA? Can we talk about your road map? Why are you doing APIs? Who do they benefit? There are so many questions to be asked when getting at the soul of each company’s API efforts, before you can truly understand if a company is genuinely interested in the interoperability that APIs bring to the table, before you can begin to understand what their API journey will involve, and before you understand whether or not you want to do business with a company using their API, and make it something you bake into your own operations and applications.

I write about this only to remind myself that some companies will have other plans. I write about this to remind myself to ask the hard questions of all the organizations I’m engaging with, all along the way. I tend to default to a belief that most people are straight up, and share their real intentions, yet I need a regular reminder that this really isn’t true. Most successful businesses are doing aggressive, shady, and manipulative things to get ahead. The idea that if you create the best product and run a smart business you’ll win is a myth. I’m not saying it doesn’t happen, or can’t happen, I am saying it isn’t the normal mode of the business world–despite popular belief. This is all a reminder that just because a business has APIs, doesn’t mean their belief system around doing APIs reflects my vision, or the popular API community vision of what doing APIs is all about.


Helping The Federal Government Get In Tune With Their API Uptime And Availability

Nobody likes to be told that their APIs are unreliable, and unavailable on a regular basis. However, it is one of those pills that ALL API providers have to swallow, and EVERY API provider should be paying for an external monitoring service to tell us when our APIs are up or down. Having a monitoring service to tell us when our APIs are having problems, complete with a status dashboard, and a history of our API's availability, is an essential building block of any API operation. If you expect consumers to use your API, and bake it into their systems and applications, you should commit to a certain level of availability, and offer a service level agreement if possible.
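
To make the idea concrete, here is a minimal sketch of the kind of check an external monitoring service runs against an API on a schedule. The endpoint URL is purely hypothetical, and a real service layers multi-region probes, history, and alerting on top of something like this:

```python
# Minimal sketch of a single external uptime check. The endpoint is hypothetical.
import time
import requests

ENDPOINT = "https://api.example.gov/v1/status"

def check_once():
    started = time.time()
    try:
        response = requests.get(ENDPOINT, timeout=10)
        return {
            "up": response.ok,
            "status_code": response.status_code,
            "latency_ms": round((time.time() - started) * 1000),
        }
    except requests.RequestException as error:
        return {"up": False, "error": str(error)}

if __name__ == "__main__":
    print(check_once())
```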

My friends over at APImetrics monitor APIs across multiple industries, but we've been partnering to keep an eye on federal government APIs, in support of my work in DC. They've recently [shared an informative dashboard tracking the performance of federal government APIs](https://apimetrics.io/us-government-api-performance-dashboard/), providing an interesting view of the government API landscape, and the overall reliability of the APIs these agencies provide.

They continue by breaking down the performance of federal government APIs, including how the APIs perform across multiple North American regions on four of the leading cloud providers. They also visualize the availability of federal government APIs for the last seven days, applying their APImetrics CASC score to each of the federal government APIs, and ranking their overall uptime and availability.

I know it sucks being labeled as one of the worst performing APIs, but you also have the opportunity to be named one of the best performing APIs. ;-) This is a subject that many private sector companies struggle with, and the federal government has an extremely poor track record when it comes to monitoring their APIs, let alone sharing the information publicly. Facing up to this stuff sucks, and you are forced to answer some difficult questions about your operations, but it is also something that can't be ignored away when you have a public API.

You can [view the US Government API Performance Dashboard for July 2018 over at APImetrics](https://apimetrics.io/us-government-api-performance-dashboard/). If you work for any of these agencies and would like to have a conversation about your API monitoring, testing, and performance strategy, I am happy to talk. I know the APImetrics team are happy to help too, so don't stay in denial about your API performance and availability. Don't be embarrassed. Tackle the problem head on, improve your overall quality of service, and then having an API monitoring and performance dashboard publicly available like this won't hurt nearly as much--it will just be a normal part of operating an API that anyone can depend on.


Provide Your API Developers With A Forkable Example of API Documentation In Action

I responded the other day about how teams should be documenting their APIs when they have both legacy and new APIs. I wanted to keep the conversation thread going with an example of one possible API documentation implementation. The best way to deliver API documentation guidance in any organization is to provide a forkable, downloadable example of whatever you are talking about. To help illustrate what I am talking about, I wanted to take one documentation solution, and publish it as a GitHub repository.

I chose to go with a simple OpenAPI 3.0 defined API contract, driving Swagger UI API documentation, hosted using GitHub Pages, and managed as a GitHub repository. In my story about how teams should be documenting their APIs, I provided several API definition formats, and API documentation options–for this walk-through I wanted to narrow it down to a single combination, providing the minimum(alist) viable option possible using OpenAPI 3.0 and Swagger UI. Of course, any federal agency implementing such a solution should wrap the documentation with their own branding, similar to the City Pairs API prototype out of GSA, which originated over at CFPB.

I used the VA Facilities API definition from the developer.va.gov portal for this sample. Mostly because it was ready to go, and relevant to the VA efforts, but also because it was using OpenAPI 3.0–I think it is worth making sure all API documentation moving forward supports the latest version of OpenAPI. The API documentation is here, the OpenAPI definition is here, and the Github repository is here, showing what is possible. There are plenty of other things I’d like to see in a baseline API documentation template, but this provides a good first draft for a true minimum viable definition.
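
For anyone who wants to see the moving parts, here is a minimal sketch of the kind of index.html a GitHub Pages project like this serves, loading Swagger UI from a CDN and pointing it at the OpenAPI definition in the same repository. The openapi.json file name is an assumption:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>VA Facilities API Documentation</title>
    <link rel="stylesheet" href="https://unpkg.com/swagger-ui-dist/swagger-ui.css" />
  </head>
  <body>
    <div id="swagger-ui"></div>
    <script src="https://unpkg.com/swagger-ui-dist/swagger-ui-bundle.js"></script>
    <script>
      // Render the documentation from the machine readable OpenAPI definition.
      SwaggerUIBundle({
        url: "openapi.json",
        dom_id: "#swagger-ui"
      });
    </script>
  </body>
</html>
```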

The goal with this project is to provide a basic seed that any team could use. Next, I will add in some other building blocks, and implement ReDoc, DapperDox, and WSDLDoc versions, providing four separate documentation examples that developers can fork and use to document the APIs they are working on. In my opinion, one or more API documentation templates like this should be available for teams to fork or download and implement within any organization. All API governance guidance like this should have the text describing the policy, as well as one or many examples of the policy being delivered. Hopefully this project shows an example of that in action.


How Do We Get API Developers To Follow The Minimum Viable API Documentation Guidance?

After providing some guidance the other day on how teams should be documenting their APIs, one of the follow up comments was: “Now we just have to figure out how to get the developers to follow the guidance!” It is something that any API leadership and governance team is going to face as they work to implement new policies across their organization. You can craft the best possible guidance for API design, deployment, management, and documentation, but it doesn’t mean anyone is actually going to follow your guidance.

Moving API governance forward within any organization represents the cultural frontline of API operations. Getting teams to learn about, understand, and implement sensible API practices is always easier said than done. You may think your vision of the organization’s API future is the right one, but getting other internal groups to buy into that vision will take a significant amount of work. It will take time and resources, and it will always be shifting and evolving.

Lead By Example
The best way to get developers to follow the minimum viable API documentation guidance being set forth is to do the work for them. Provide templates and blueprints of what you want them to do. Develop, provide, and evolve forkable and downloadable API documentation examples, with simple README checklists of what is expected of them. I’ve published a simple example using the VA Facilities API definition, published as OpenAPI 3.0 and Swagger UI to Github Pages, with the entire thing forkable via the Github repository. It is a very bare bones example of providing API documentation guidance in a package that can be reused, providing API developers with a working example of what is expected of them.
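
As a purely hypothetical sketch of what one of those README checklists might contain, something this short is usually enough to set expectations:

```markdown
# Facilities API (example service)

One concise paragraph describing what the service does and who it is for.

## Documentation checklist

- [ ] OpenAPI 3.0 definition committed as `openapi.json`
- [ ] Swagger UI published to GitHub Pages from this repository
- [ ] Title, description, and base URL present in the definition
- [ ] Every path has a summary, parameters, and example responses
- [ ] Authentication method documented
```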

Make It A Group Effort
To help get API developers on board with the minimum viable API documentation guidance being set forth, I recommend making it a group effort. Recruit help from developers to improve upon the API documentation templates provided, and encourage them to extend, evolve, and push forward their individual API documentation implementations. Give API developers a stake in how you define governance for API documentation–not everyone will be up for the task, but you’d be surprised who will raise their hand to contribute if they are asked.

Provide Incentive Model
This is something that will vary in effectiveness from organization to organization, but consider offering a reward, benefit, perk, or some other incentive to any group who adopts the API documentation guidance. Provide them with praise, and showcase their work. Bring exposure to their work with leadership, and across other groups. Brainstorm creative ways of incentivizing development groups to get more involved. Establish a list of all development groups, track on ideas for incentivizing their participation and adoption, and work regularly to close them on playing an active role in moving forward your organization’s API documentation strategy.

Punish And Shame Others
As a last resort, for the more badly behaved groups within our organizations, consider punishing and shaming them for not following API documentation guidance, and contributing to overall API governance efforts. This is definitely not something you should do lightly, and it should only be used in special cases, but sometimes teams will need smaller or larger punitive responses to their inaction. Ideally, teams are influenced by praise, and positive examples of why API documentation standards matter, but there will always be situations where teams won’t get on board with the wider organizational API governance efforts, and need their knuckles rapped.

Making Meaningful Change Is Hard
It will not be easy to implement consistent API documentation across any large organization. However, API documentation is often one of the most important stops along the API lifecycle, and should receive significant investment when it comes to API governance efforts. In most situations doing the work for developers, and providing them with blueprints to be successful, will accomplish the goal of getting API developers all using a common approach to API documentation. Like any other stop along the API lifecycle, delivering consistent API documentation across distributed teams will take having a coherent strategy, with regular tactical investment to move everything forward in a meaningful way. However, once you get your API documentation house in order, many other stops along the API lifecycle will also begin to fall in line.


Do Not Miss Internal Developer Portals: Developer Engagement Behind the Firewall by Kristof Van Tomme (@kvantomme), Pronovix (@pronovix) At @APIStrat in Nashville, TN This September 24th-26th

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Internal Developer Portals: Developer Engagement Behind the Firewall” by Kristof Van Tomme (@kvantomme), Pronovix (@pronovix) on September 25th.

Here is Kristof’s abstract for the API session:

While there are a lot of talks and blogposts about APIs and the importance of an APIs Developer eXperience, most are about public API products. And while a lot of the best practices for API products are also applicable to private APIs, there are significant differences in the circumstances and trade-offs they need to make. The most important difference is probably in their budgets: as potential profit centers, API products can afford to invest a lot more money in documentation and UX driven developer portal improvements. Internal APIs rarely have that luxury.

In this talk I will explain the differences between public and private APIs, introduce upstream DX, and explain how it can improve downstream DX. Introduce experience design (a.k.a. gamification) and Innersourcing (open sourcing practices behind the firewall) and describe how they could be used on internal developer portals.

Kristof is an expert in delivering developer portals and API documentation, making his talk a must attend session. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


May Contain Nuts: The Case for API Labeling by Erik Wilde (@dret), API Academy (@apiacademy)

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “May Contain Nuts: The Case for API Labeling” by Erik Wilde (@dret), API Academy (@apiacademy) on September 25th.

I’ll let Erik’s bio set the stage for what he’ll be talking about at APIStrat:

Erik is a frequent speaker at both industry and academia events. In his current role at the API Academy, his work revolves around API strategy, design, and management, and how to help organizations with their digital transformation. Based on his extensive background in Web architecture and technologies, Erik combines deep expertise in protocols and representations with insights into API practices at today’s organizations.

Before joining API Academy and working in the API space full-time, Erik spent time at Siemens and EMC, in both cases working at ways how APIs could be used for their internal service ecosystems, as well as for better ways for customers to use services and products. Before that, Erik spent most of his life in academia, working at UC Berkeley and ETH Zürich. Erik received his Ph.D. in computer science from ETH Zürich, and his diploma in computer science from TU Berlin.

Erik knows his stuff, and can be found on the road with the CA API Academy, making this stop in Nashville, TN a pretty special opportunity. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Living In A Post Facebook, Twitter, and Instagram API World

While Facebook, Twitter, and Instagram will always have a place in my history of APIs, I feel like we are entering a post Facebook, Twitter, and Instagram API world. All three platforms are going through serious evolutions, which include tightening down the controls on API access, and the shuttering of many APIs. These platforms are tightening things down for a variety of reasons, which are more about their business goals than about the community. I’m not saying these APIs will go away entirely, but the era where these API platforms ruled is coming to a close.

The other day I articulated that these platforms only needed us for a while, and now that they’ve grown to a certain size they do not need us anymore. While this is true, I know there is more to the story of why we are moving on in the Facebook, Twitter, and Instagram story. We can’t understand the transformation that is occurring without considering that these platforms’ business models are playing out, and they (we) are reaping what they’ve sown with their free and open platform business models. It isn’t so much that they are looking to screw over their developers, they are just making another decision, in a long line of decisions, to keep generating revenue from their user generated realities, and advertising fueled perception.

I don’t fault Twitter, Facebook, and Instagram for fully opening their APIs, then closing them off over time. I fault us consumers for falling for it. I do fault Twitter, Facebook, and Instagram a little for not managing their APIs better along the way, but when your business model is out of alignment with proper API management, it is only natural that you look the other way when bad things are happening, or you are just distracted with other priorities. This is ultimately why you should avoid platforms who don’t have an API, or a clear business model for their platform. There is a reason we aren’t having this conversation about Amazon S3 after a decade. With a proper business model and API management strategy you deal with all the riff raff early on, and along the way–it is how this API game works when you don’t operate a user-exploitative business.

Ultimately, living in a post Twitter, Facebook, and Instagram API world won’t mean much. The world goes on. There will be some damage to the overall API brand, and people will point to these platforms as reasons why you shouldn’t do APIs. Twitter, Facebook, and Instagram will still be able to squeeze lots of advertising based revenue out of their platforms. Eventually it will make them vulnerable, and they will begin to lose market share being such closed off ecosystems, but there will always be plenty of people willing to spend money on their advertising solutions, and believe they are reaching an audience. Developers will find other data sources, and APIs to use in the development of web, mobile, and device applications.

The challenge will be making sure that we can spot, early on, the API platforms who will be using a similar playbook to Twitter, Facebook, and Instagram. Will we push back on API providers who don’t have clear business models? Will we see the potential damage of purely eyeball based, advertising fueled platform growth? Will we make sound decisions in the APIs we adopt, or will we continue to just jump on whatever bandwagon comes along, and willfully become sharecroppers on someone else’s farm? Will we learn from this moment, and what has happened in the last decade of growth for some of the most significant API platforms across the landscape today?

To help paint a proper picture of this problem, let me frame another similar situation that is not the big bad Twitter and Facebook that everyone loves bashing on. Medium. I remember everyone telling me in 2014 that I should move API Evangelist to Medium – I kicked the tires, but they didn’t have an API. A no go for me. Eventually they launched a read only API, and I began syndicating a handful of my stories there. I enjoyed some network effect. I would have scaled up my engagement if there was write access to the APIs, as well as other platform related data like my followers. I never moved my blog to Blogger, Tumblr, Posterous, or Medium, for all the same reasons. I don’t want to be trapped on any platform, and I saw the signs early on with Medium, and managed to avoid any frustration as they go through their current evolution.

I don’t use the Facebook API for much–it just isn’t my audience. I do use Twitter for a lot. I depend on Twitter for a lot of traffic and exposure. I would say the same for LinkedIn and Github. LinkedIn has been closing off their APIs for some time, but honestly it was never something that was ever very open. I worry about Github, especially after the Microsoft acquisition. However, I went into my Github relationship expecting it to be temporary, and because all my data is in a machine readable and portable format, I’m not worried about ever having to migrate–I can’t do this with Facebook, Twitter, Instagram, or LinkedIn. I’m saddened to think about a post Twitter API world, where every API call is monetized, and there is no innovation in the community. It is coming though. It will be slow. We won’t notice much of it. But, it is happening.

I know that Twitter, Facebook, and Instagram all think they are making the best decision for their business, and investors. I know they also think that they’ve done the best job they could have under the circumstances over the last decade. You did, within the vision of the world you had established. You didn’t for your communities. If Facebook and Twitter had been more strict and organized about API application reviews from the early days, and had structured access tiers of free, as well as paid access early on, a lot fewer people would be complaining as you made those processes, and access tiers, more strict. It is just that you didn’t manage anything for so long, and once the bad things happening began affecting the platform bottom line, and worrying investors, then you began managing your API.

I know that Twitter, Facebook, and Instagram all think they will be fine. They will. However, over time they will become the next NBC, AOL, or other relic of the past. They will lose their soul, if they ever had one. And everyone on the Internet will be somewhere else, giving away their digital bits for free. This same platform model will play out over and over again in different incarnations, and the real test will be whether we ever care. Will we keep investing in these platforms, building out their integrations, attracting new users, and keeping them engaged? Or, will we work to strike a balance, and raise the bar for which platforms we sign up for, and ultimately depend on as part of our daily lives? I’m done getting pissed off about what Twitter, Facebook, and Instagram do. I’m more focused on evaluating and ranking all the digital platforms I depend on, and turning up or down the volume, based upon the signals they send me about what the future will hold.


How Should Teams Be Documenting Their APIs When You Have Both Legacy And New APIs?

I’m continuing my work to help the Department of Veterans Affairs (VA) move forward their API strategy. One area I’m happy to help the federal agency with is just being available to answer questions, which I also find makes for great stories here on the blog–helping other federal agencies learn along the way. One question I got from the agency recently regards how teams should be documenting their APIs, taking into consideration that many of them are supporting legacy services like SOAP.

From my vantage point, minimum viable API documentation should always include a machine readable definition, and some autogenerated documentation within a portal at a known location. If it is a SOAP service, WSDL is the format. If it is REST, OpenAPI (fka Swagger) is the format. If it is XML-RPC, you can bend OpenAPI to work. If it is GraphQL, it should come with its own definitions. All of these machine readable definitions should exist within a known location, and be used as the central definition for the documentation user interface. Documentation should not be hand generated anymore, with the wealth of open source API documentation solutions available.

Each service should have its own GitHub/BitBucket/GitLab repository with the following:

  • README - Providing a concise title and description for the service, as well as links to all documentation, definitions, and other resources.
  • Definitions - Machine readable API definitions for the API’s underlying schema, and the surface area of the API.
  • Documentation - Autogenerated documentation for the API, driven by its machine readable definition.

Depending on the type of API being deployed and managed, there should be one or more of these definition formats in place:

  • Web Services Description Language (WSDL) - The XML-based interface definition used for describing the functionality offered by the service.
  • OpenAPI - The YAML or JSON based OpenAPI specification format managed by the OpenAPI Initiative as part of the Linux Foundation.
  • JSON Schema - The vocabulary that allows for the annotation and validation of the schema for the service being offered–it is part of the OpenAPI specification as well.
  • Postman Collections - JSON based specification format created and maintained by the Postman client and development environment.
  • API Blueprint - The markdown based API specification format created and maintained by the Apiary API design environment, now owned by Oracle.
  • RAML - The YAML based API specification format created and maintained by Mulesoft.

Ideally, OpenAPI / JSON Schema is established as the primary format for defining the contract for each API, but teams should also be able to stick with what they were given (legacy), and run with the tools they’ve already purchased (RAML & API Blueprint), and convert between specifications using API Transformer.

API documentation should be published to its GitHub/GitLab/BitBucket repository, and hosted using one of the services’ static project site solutions, with one of the following open source documentation tools:

  • Swagger UI - Open source API documentation driven by OpenAPI.
  • ReDoc - Open source API documentation driven by OpenAPI.
  • RAML - Open source API documentation driven by RAML.
  • DapperDox - DapperDox is Open-Source, and provides rich, out-of-the-box, rendering of your OpenAPI specifications, seamlessly combined with your GitHub flavoured Markdown documentation, guides and diagrams.
  • wsdldoc - A tool that can be used to generate HTML documentation out of a WSDL file.

There are other open source solutions available for auto-generating API documentation using the core API’s definition, but these represent some of the commonly used solutions out there today. Depending on the solution being used to deploy or manage an API, there might be built-in, ready to go options for deploying documentation based upon the OpenAPI, WSDL, RAML, or other definition, using AWS API Gateway, Mulesoft, or another existing vendor solution already in place to support API operations.

Even with all this effort, a repository with a machine readable API definition and autogenerated documentation still doesn’t provide enough of a baseline for API teams to follow. Each API’s documentation should possess the following within those building blocks (a minimal sketch follows the list):

  • Title and Description - Provide the concise description of what an API does from the README, and make sure it is baked into the API’s definition.
  • Base URL - Have the base URL, or variable representation for a base URL present in API definitions.
  • Base Path - Provide any base path that is constant across paths available for any single API.
  • Content Types - List what content types an API accepts and returns as part of its operations.
  • Paths - List all available paths for an API, with summary and descriptions, making sure the entire surface area of an API is documented.
  • Parameters - Provide details on the header, path, and query parameters used for each API path being documented.
  • Body - Provide details on the schema for the body of each API path that accepts a body as part of its operations.
  • Responses - Provide the HTTP status codes and a reference to the schema being returned for each path.
  • Examples - Provide example requests and responses for each API path being documented.
  • Schema - Document all schema being used as part of requests and responses for all APIs paths being documented.
  • Authentication - Document the authentication method used (e.g., Basic Auth, API keys, OAuth, JWT).
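
Here is a minimal, hypothetical OpenAPI 3.0 fragment showing where each of those building blocks lives. The names, paths, and values are illustrative, and not drawn from an actual VA definition:

```yaml
openapi: 3.0.0
info:
  title: Facilities API                           # title
  description: Returns contact, location, and hours for each facility.
  version: 1.0.0
servers:
  - url: https://api.example.gov/facilities/v0    # base URL and base path
paths:
  /facilities/{id}:
    get:
      summary: Retrieve a single facility by its identifier.
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested facility.
          content:
            application/json:                     # content type
              schema:
                $ref: '#/components/schemas/Facility'
              example:                            # example response
                id: "abc_123"
                name: "Example Facility"
components:
  schemas:
    Facility:                                     # schema
      type: object
      properties:
        id:
          type: string
        name:
          type: string
  securitySchemes:
    apiKey:                                       # authentication
      type: apiKey
      name: apikey
      in: header
security:
  - apiKey: []
```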

If EVERY API possesses its own repository, and a README to get going, guiding all API consumers to complete, up to date, and informative documentation that is auto-generated, a significant amount of friction during the on-boarding process can be eliminated. Additionally, friction at the time of hand-off for any service from one team to another, or one vendor to another, will be significantly reduced–with all relevant documentation available within the project’s repository.

API documentation delivered in this way provides a single known location for any human to go when putting an API to work. It also provides a single known location to find a machine readable definition that can be used to on-board using an API client like Postman, PAW, or Insomnia. The API definition provides the contract for the API documentation, but it also provides what is needed across other stops along the API lifecycle, like monitoring, testing, SDK generation, security, and client integration–reducing the friction across many stops along the API journey.

This should provide a baseline for API documentation across teams, no matter how big or small the API, or how new or old the API is. Each API should have documentation available in a consistent, and usable way, providing a human and programmatic way of understanding what an API does, that can be used to on-board and maintain integrations with each application. The days of PDF and static API documentation are over, and the baseline for each API’s documentation always involves having a machine readable contract at the core, and managing the documentation as part of the pipeline used to deploy and manage the rest of the API lifecycle.


The Importance Of Postman API Environment Files

I’m a big fan of Postman, and the power of their development environment, as well as their Postman Collection format. I think their approach to not just integrating with APIs, but also enabling the development and delivery of APIs has shifted the conversation around APIs in the last couple of years–not too many API service providers accomplish this in my experience. There are several dimensions to what Postman does that I think are pushing the API conversation forward, but one that has been capturing my attention lately are Postman Environment Files.

Using Postman, you can manage many different environments used for working with APIs, and if you are a pro or enterprise customer, you can export a file that represents an environment, making each of these environment definitions more portable and collaborative. Managing the variety of environments for the hundreds of APIs I use is one of the biggest pain points I have. Postman has significantly helped me get a handle on the tokens and keys I use across the internal, as well as partner and public APIs that I depend on each day to operate API Evangelist.

Postman environments allow me to define environments within the Postman application, and then share them as part of the pro / enterprise team experience. You can also manage your environments through the Postman API, if you need to more deeply integrate with your operations. The Postman Environment File makes all of this portable and shareable across teams and systems. It is one of the things that makes Postman Collections more valuable to some users, in specific contexts, because it brings a run time aspect to what they do. Postman lets you communicate effectively around the APIs you are deploying and integrating with, and solves relevant pain points like API environment management, that can stand in the way of integration.
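
For context, an exported environment file is a small JSON document along these lines. The keys and values here are illustrative, and not a copy of Postman’s full export format:

```json
{
  "name": "API Evangelist - Production",
  "values": [
    { "key": "baseUrl", "value": "https://api.example.com/v1", "enabled": true },
    { "key": "apiKey", "value": "replace-with-your-key", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```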

There aren’t many features of API service providers I get very excited about, but the potential of Postman as an environment management solution is significant. If Postman is able to establish itself as the broker of credentials at the API environment level, it will give them a significant advantage over other service providers. With the size of their developer base, having visibility at the environment level puts their finger on the pulse of what is going on in the API economy, from both an API provider and consumer perspective. Postman Environment Files act as a sort of key, or currency, that has to exist before any API transaction can be executed. And, as the number of APIs we depend on increases, the importance of having a strategy (and solution) for managing our environments will grow exponentially–putting Postman in a pretty sweet position.


Getting Email Updates From The API Providers I Care About Is One Way To Stay In Sync

I do not like email. I do not have a good relationship with my inbox. However, it is one of those ubiquitous tools I have to use, and I understand the value it can bring to my world. The goal is to not let my inbox control me too much, as it is often a task list that other people think they can control. With all that said, I’m finding renewed value in email newsletters, on several fronts. While I’d prefer to get updates via Atom, I’m warming up to receiving updates from API providers, and API service providers, in my inbox.

I am an active subscriber to the REST API Notes Newsletter, API Developer Weekly, and other relevant newsletters. I’m also finding myself opening up more emails from the API providers, and service providers, I’m registered with. Historically, I have often seen email as a nuisance, but I’m beginning to see emails from the companies I’m paying attention to as a healthy signal. Increasingly, it is a signal that I’m using to understand the overall health of a platform, and yet another signal that will go silent when a platform isn’t supporting their user base, and potentially running out of funding for their operations.

I recently wrote a script that harvests emails from my inbox, and tracks the communications occurring with the API providers and service providers I am monitoring. At its most basic, it is a heartbeat that I can use to tell when an API provider or service provider is still alive. After that, I’m looking at harvesting URLs, and other data, and using the signals to float an API provider or service provider up on my list. It can be tough to remember to tune into what is going on across hundreds and thousands of APIs, and any signal I can harvest to help companies float up is a positive thing. Hopefully it is something that will also incentivize API providers and service providers to tell more stories via email, as well as their blog and social media.
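
As a rough, hypothetical sketch of what that kind of harvesting involves, you can pull recent messages over IMAP and count how often each sender domain shows up, giving you a crude heartbeat per provider. The server, credentials, and folder are assumptions:

```python
# Hypothetical sketch: count sender domains in recent inbox messages.
import imaplib
import email
from email.utils import parseaddr
from collections import Counter

def harvest_sender_domains(host, user, password, mailbox="INBOX", limit=200):
    heartbeat = Counter()
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select(mailbox, readonly=True)
    _, data = conn.search(None, "ALL")
    for num in data[0].split()[-limit:]:
        _, message_data = conn.fetch(num, "(RFC822)")
        message = email.message_from_bytes(message_data[0][1])
        sender = parseaddr(message.get("From", ""))[1]
        if "@" in sender:
            heartbeat[sender.split("@")[1].lower()] += 1
    conn.logout()
    return heartbeat
```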

Email is still one of my least favorite signals out there, but I’m beginning to realize there is still a lot of value to be found within my inbox. Having a regular newsletter is something I’m going to write about more, encouraging more API providers and service providers to offer one. I think many companies, institutions, and government agencies feel more comfortable telling stories via email than on a public blog. It may not be my preferred medium of choice, but I know that it takes a diverse set of channels to reach a large audience, and who am I to only use the ones I like the most?


Any Way You Want It: Extending Swagger UI for Fun and Profit by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Any Way You Want It: Extending Swagger UI for Fun and Profit” by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) on September 25th.

Here is Kyle’s abstract for the session:

Your APIs are tailored to your needs - shouldn’t your tools be as well? In this talk, we’ll explore how Swagger UI 3 makes it easier than ever to create custom functionality, and common use cases for the power that the UI’s plugin system provides.

Learn how to:

- Create plugins that extend existing features and define new functionality
- Integrate Swagger UI seamlessly by defining a custom layout
- Package and share plugins that can be reused by the community (or your organization)

Swagger UI has changed the conversation around how we document our APIs, and being able to extend the interface is an important part of keeping the API documentation conversation evolving, and APIStrat is where this type of discussion is happening. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Testing and Meaningful Mocks in a Microservice System by Laura Medalia (@codergirl__) of Care/Of At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Testing and Meaningful Mocks in a Microservice System” by Laura Medalia (@codergirl__) of Care/Of on September 25th.

Here is Laura’s abstract for the session:

Laura will be talking about tooling for mocking microservice endpoints in a meaningful way using Open API specifications. She will cover how to set up microservice deployments processes so that with each versioned microservice deployed a mock of the service with up to date contracts will also be deployed. Laura will also show how to use tooling she and her team built to consume these lightweight mocks in unit tests and either get default mock responses or mock out custom responses for different test cases.

In an era where many API development groups are working to move to a design and mock first approach, this session will be key to our journey, and APIStrat is where we all need to be. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!

Photo Credit: Laura Medalia on Pintaram.


Searching For APIs That Possess Relevant Company Information

I’m evolving the search for the Streamdata.io API Gallery I’ve been working on lately. I’m looking to move beyond the basic keyword search that matches against the API name and description, as well as the API path, summary, and description, to also searching parameters in a meaningful way. Each of the APIs in the Streamdata.io API Gallery has an OpenAPI definition. It is how I render each of the individual API paths using Jekyll and Github Pages. These parameters give me another dimension of data which I can index, and use as a facet in my API gallery search.

I am developing different sets of vocabulary to help me search against the parameters used across APIs, with one of them being focused on company related information. I’m trying to find APIs that provide the ability to add, update, and search against company related data and content, and execute algorithms that help make sense of company resources. There is no perfect way to search for API parameters that touch on company resources, but right now I’m looking for a handful of fields: company, organization, business, enterprise, agency, ticker, corporate, and employer. I return APIs that have a parameter with any of those words in the path or summary, and weight them differently if the word shows up in the description or tags for each API path.
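
As a hypothetical sketch of that weighting, the scoring can be as simple as checking where the vocabulary shows up for each API path. The weights, and the shape of the inputs, are assumptions on my part:

```python
# Score an API path by where company-related vocabulary appears.
COMPANY_TERMS = ["company", "organization", "business", "enterprise",
                 "agency", "ticker", "corporate", "employer"]

def score_path(path, summary, description, tags):
    primary = (path + " " + summary).lower()
    secondary = (description + " " + " ".join(tags)).lower()
    score = 0
    for term in COMPANY_TERMS:
        if term in primary:
            score += 3   # a match in the path or summary weighs the most
        elif term in secondary:
            score += 1   # a match in the description or tags weighs less
    return score

# Example: score_path("/companies/{ticker}", "Lookup a company",
#                     "Returns corporate profile data", ["business"]) returns 8.
```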

Next, I’m also tagging each API path that has a URL field, because this will allow me to connect the dots to a company, organization, or other entity via the domain. That is all I’m trying to do: connect the dots using the parameter structure of an API. I find that there is an important story being told at the API design layer, and API search and discovery is how we are going to bring this story out. Connecting the dots at the corporate level is just one of many interesting stories out there, just waiting to be told, pushing forward the conversation around how we understand the corporate digital landscape, and what resources they have available.

You can do a basic API search at the bottom of the Streamdata.io API Gallery main page. I do not have my parameter search available publicly yet. I want to spend more time refining my vocabularies, and also look at searching the request and response bodies for each path–I’m guessing this won’t be as straightforward as parameters have been. Right now I’m immersed in understanding the words we use to design our APIs, and craft our API documentation. It is fascinating to see how people describe their resources, and how they think (or don’t think) about making these resources available to other people. OpenAPI definitions provide a fascinating way to look at how APIs are opening up access to company information, establishing the digital vocabulary for how we exchange data and content, and apply algorithms to help us better understand the business world around us.


It Is Hard To Go API Define First

Last year I started saying API define first, instead of API design first, in response to many of the conversations out there about designing, then mocking, and eventually deploying your APIs into a production environment. I agree that you should design and iterate before writing code, but I feel like we should be defining our APIs even before we get to the API design phase. Without the proper definitions on the table, our design phase is going to be a lot more deficient in standards, common patterns, goals, and objectives, making it important to invest some energy in defining what is happening first–then iterate on the API definitions throughout the API lifecycle, not just design.

I prefer to have a handful of API definitions drafted before I move on to the API design phase:

  • Title - A simple, concise title for my API.
  • Description - A simple, concise description for my API.
  • JSON Schema - A set of JSON schema for my APIs.
  • OpenAPI - An OpenAPI for the surface area of my API.
  • Assertions - A list of what my API should be delivering.
  • Standards - What standards are being applied with this API.
  • Patterns - What common web patterns will be used with this API.
  • Goals - What are the goals for this particular API.

I like having all of this in a GitHub repository before I get to work actually designing my APIs. It provides me with the base set of definitions I need to be as effective as I can in my API design phase. Of course, each of these definitions will be iterated upon, added to, and evolved as part of the API design phase, and beyond. The goal is to just get a base set of building blocks on the workbench, properly setting the tone for what my API will be doing, grounding my API work early on in the lifecycle, in a consistent way that I can apply across many different APIs.
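
As a purely hypothetical sketch, the repository holding these definitions might be laid out like this before any design work starts, with file names that just mirror the list above:

```
api-definition/
├── README.md        # title and description
├── openapi.yaml     # the surface area of the API
├── schema/          # JSON schema for the underlying resources
├── assertions.md    # what the API should be delivering
├── standards.md     # standards being applied
├── patterns.md      # common web patterns being used
└── goals.md         # goals for this particular API
```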

The problem with all of this is that it is easier said than done. I still like to hand code my APIs. It is something I’ve been doing for 20 years, and it is a habit that is hard to kick. When designing an API, oftentimes I do not know what is possible, and I need to hack on the solution for a while. I need to hack on and massage some data or content, or push forward my algorithm a little. All of this has to happen before I can articulate what the interface will look like. Sure, I might have some basic RESTful notions about what the API paths will be, and the schema I’ve gathered will drive some of the conversation, but I still need to hack together a little goodness before I can design.

This is ok. With some APIs I will be able to define and then design without ever touching any code, while with others I will still have to prototype at least a function to prove the concept behind the API. Once I have the proof of concept, then I can start crafting a sensible interface using OpenAPI, then mock, and work with the concept a little more within an API design phase. Ultimately, I do not think there is any RIGHT WAY to develop an API. I think there are healthier, and less healthy ways. I think there are more hardened, and proven ways, but I also think there should be experimental, and even legacy ways of doing things. My goal is to always make sure the process is as sensible and pragmatic as it can be, while meeting the immediate, and long term, business goals of my company, as well as my partners.


Identifying The Different Types Of APIs

APIs come in many shapes and sizes. Even when APIs share a common resource, the likelihood that they are similar in functionality will be slim. Even after eight years of studying APIs, I still struggle with understanding the differences, and putting APIs into common buckets. Think of the differences between two image APIs like Flickr and Instagram, but then also think about the difference between Twitter and Twilio–the differences are many, and a challenge to articulate.

I’m pushing forward my API Stack and API Gallery work, and I’m needing to better organize APIs into meaningful groups that I can add to the search functionality for each of these API discovery services. To help with this, I’m thinking more critically about the different types of API functionality I’m coming across, establishing seven new buckets:

  • General Data - You can get at data across the platform, users, and resources.
  • Relative Data - You can get at data that is relative to a user, company, or specific account.
  • Static Data - The data doesn’t change too often, and will always remain fairly constant.
  • Evolving Data - The data changes on a regular basis, providing a reason to come back often.
  • Historical Data - Provides access to historical data, going back X number of years.
  • Service - The API is offered as a service, or is provided to extend a specific service.
  • Algorithmic - The API provides some sort of algorithmic functionality like ML, or otherwise.

Understanding the type of data an API provides is important to the work I’m doing. Streamdata.io caters to the needs of financial organizations, and they are looking for data to help them with their investment portfolios, but they also have very particular opinions around the type of data they want. This first version of my API type list is heavily weighted towards data, but as I evolve in my thinking, I’m guessing the service and algorithmic buckets will expand and evolve as well.

The APIs I am cataloging within this work sprint fit into one or many of these buckets. They are meant to transcend the resource being made available, and the provider behind the service. I want to be able to search, filter, and organize APIs across many of the usual characteristics we tend to track on. I’m wanting to go beyond the obvious resource focused characteristics, and move beyond the technology being applied. I’m looking to understand what you can do with an API, and be able to stack hundreds, or thousands of similar APIs side by side, and provide a new view of the landscape.
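
To give a rough sense of how I plan to apply these buckets across the catalog, here is a quick sketch of tagging and filtering APIs by type. The bucket names mirror the list above, while the catalog entries and field names are hypothetical examples of my own.

    # A rough sketch of tagging APIs with type buckets so they can be filtered
    # in catalog search. The example catalog entries below are hypothetical.
    from enum import Enum

    class ApiType(Enum):
        GENERAL_DATA = "general-data"
        RELATIVE_DATA = "relative-data"
        STATIC_DATA = "static-data"
        EVOLVING_DATA = "evolving-data"
        HISTORICAL_DATA = "historical-data"
        SERVICE = "service"
        ALGORITHMIC = "algorithmic"

    catalog = [
        {"name": "Example Market Data API",
         "types": {ApiType.EVOLVING_DATA, ApiType.HISTORICAL_DATA}},
        {"name": "Example Image Recognition API",
         "types": {ApiType.ALGORITHMIC, ApiType.SERVICE}},
    ]

    def filter_by_type(entries, api_type):
        """Return the catalog entries tagged with the given bucket."""
        return [entry for entry in entries if api_type in entry["types"]]

    print([entry["name"] for entry in filter_by_type(catalog, ApiType.EVOLVING_DATA)])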


Describing Your API with OpenAPI 3.0 by Anthony Eden (@aeden), DNSimple (@dnsimple) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Describing Your API with OpenAPI 3.0” by Anthony Eden (@aeden), DNSimple (@dnsimple) on September 25th.

Here is Anthony’s abstract for the session:

For the last 10 years, DNSimple has operated a comprehensive web API for buying, connecting, and operating domain names. After hearing about OpenAPI at APIStrat 2017, we decided to describe the DNSimple API using the OpenAPI v3 specification - this is the story of why we did it, how we did it, and where we are today.

By the end of this presentation you will have the tools you’ll need to evaluate your own API and decide if implementing OpenAPI makes sense for you, and if so, how you can get started. You’ll have a better understanding of the tools available to you to help write your OpenAPI 3 definition, as well as the basics of how to write your own definition for your APIs.

We are all still working to make the switch from OpenAPI 2.0 to 3.0, and with APIStrat being owned and operated by the OpenAPI Initiative, it will definitely be the place to have face to face discussions that influence the road map for the API specification. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Algolia Kindly Provides A Hacker News Search API

I was working on a serverless app for Streamdata.io that takes posts to Hacker News and streams them into an Amazon S3 data lake, and I came across the Algolia powered Hacker News search API. After being somewhat frustrated with the simplicity of the official Hacker News API, I was pleased to find the search API kindly provided by Algolia.

There is no search API available for the core Hacker News API, and the design leaves a lot to be desired, so the simplicity of Algolia’s API solution was refreshing. There is a lot of data flowing into Hacker News on a regular day, so providing a search API is pretty critical. Additionally, Algolia’s ability to deliver such a simple, usable, yet powerful API on top of a relevant data source like Hacker News demonstrates the utility of what Algolia offers as a search solution–something I wanted to take a moment to point out here on the blog.
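
As a quick illustration of the simplicity I am talking about, here is a small sketch of pulling recent stories from the Algolia powered Hacker News search API with Python. The endpoint and parameters reflect my reading of the public documentation at hn.algolia.com/api, so double check the exact field names before depending on them.

    # A small sketch of querying the Algolia powered Hacker News search API.
    # Endpoint and parameter names are taken from the public docs as I read them.
    import requests

    response = requests.get(
        "https://hn.algolia.com/api/v1/search_by_date",
        params={"query": "api", "tags": "story", "hitsPerPage": 10},
        timeout=10,
    )
    response.raise_for_status()

    # Each hit includes fields like created_at, title, and url.
    for hit in response.json().get("hits", []):
        print(hit.get("created_at"), hit.get("title"), hit.get("url"))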

I consider search to be an essential ingredient for any API. Every API should have a search element to their stack, allowing the indexing and searching of all API resources through a single path. This makes Algolia a relevant API service provider in this area, enabling API providers to outsource the indexing and searching of their resources, and the delivery of a dead simple API for your consumers to tap into. This path forward is probably not for every API, as many weave specialized search throughout their API design, but for teams who are lacking in resources, and can afford to outsource this element–Algolia makes sense.

Seeing Algolia in action for a specific API I was integrating with helped bring their service front and center for me. I tend to showcase Elastic for deploying API search solutions, but it is good to receive a regular reminder that Algolia does the same thing as a service. Their work on the Hacker News Search API provides a good example of what they can do for you–sure, we can all build our own search solutions, but honestly, do you have the time? I’ll make sure to regularly highlight what Algolia is doing as part of my search API research. And thanks, Algolia! I really appreciate what you did for the Hacker News API, it made my work a lot easier.


We Need You API Developers Until We Have Grown To A Certain Size

I’m watching several fronts along the API landscape evolve right now, with large API providers shifting, shutting down, and changing how they do business with their APIs–now that they don’t need their ecosystem of developers as much. It is easy to point the finger at Facebook, Twitter, Google, and the other giants, but it really is a wider systemic, business of APIs illness that will continue to negatively impact the API universe. While this behavior is very prominent with the leading API providers right now, it is something whose influence we will see in waves across the tone of the entire API sector further on down the road.

How API providers treat their consumers varies from industry to industry, and is different depending on the types of resources being made available. However, the issue of API providers treating their developers differently when they are just getting started versus once they grow to a certain size is something that will continue to plague all areas of the API space. Companies change as they mature, and their priorities will no doubt evolve, but almost all will feel compelled to exploit developers early on so that they can grow their numbers–taking advantage of the goodwill and curiosity of the developer personality.

The polarization of the API management layer is difficult to predict from the outside. I want to help new API providers out early on, but after eight years of doing this, and seeing many API providers and service providers evolve, and go away–I am left very skeptical of EVERYONE. I think many developers in the API space are feeling the same, and are wary of kicking the tires on new APIs, unless they have a specific project and purpose. I think the days of developers working for free within an API ecosystem are over. It is a concept that will still exist in some form, but with each wave of new API startups taking advantage of this reality, then tightening things down later on down the road, developers will eventually change their behavior in response.

The lesson is that if an API doesn’t have a clear business model early on–steer clear of their services. We can’t fault Twitter for working to monetize their platform now, and make their investors happy. We just should have seen it in the cards back in 2008. We can’t fault Facebook for working to please their shareholders, and protect their warehouse of inventory (Facebook users) from malicious ecosystem players, we just should have seen this coming back in 2008. We can’t fault Google Maps for raising the prices on their digital assets, developed by us giving them our data for the last decade, we should have known this would happen back in 2006. The business and politics of APIs are less straightforward than the technology of APIs, and as we all know, the technology is a mess.

I will keep calling out the offenses of API providers when I feel strongly enough, but I’ve ranted about Facebook, Twitter, and Google a lot in the past. I feel like we should be skeptical of ALL API providers at this point, and assume the worst about them all–new or old. Expect them to change the prices down the road. Expect them to turn off the most valuable resources once we’ve helped them grow and mature into a prized digital asset. This is how some companies will continue to think they can make their investments grow, and all of us as API providers and consumers need to remain skeptical about our role in this process. Always making sure we remember that most API providers will no longer need us developers once they have grown to a certain size.


The Redirect URL To Confirm Selling Your API In AWS Marketplace Provides Us With A Positive Template

I am setting up different APIs using the AWS API Gateway and then publishing them to the AWS Marketplace, as part of my work with Streamdata.io. I’m getting a feel for what the process is all about, and how small I can distill an API product to be, as part of the AWS Marketplace process. My goal is to be able to quickly define APIs using OpenAPI, then publish them to AWS API Gateway, and leverage the gateway to help me manage the entire business of the service from signup to discovery.

As I was adding one of my first couple of APIs to the AWS Marketplace, I found the instructions regarding the redirect URL for each API to be a good template. Each individual API service I’m offering will have its own subscription confirmation URL with the AWS Marketplace, with the relevant variables present that I will need to scale the technical and business sides of delivering my APIs:

  • AWS Marketplace Confirmation Redirect URL: https://YOUR_DEVELOPER_PORTAL_API_ID.execute-api.[REGION].amazonaws.com/prod/marketplace-confirm/[USAGE_PLAN_ID]

This URL is what my retail customers will be given after they click to subscribe to one of my APIs. Each API has its own developer portal (present in the URL), as well as being associated with a specific API usage plan within AWS API Gateway. Also notice that they have a variable for region, allowing me to deliver services by region, and scale up the technical side of delivering the various APIs I’m deploying–another important consideration when delivering reliable and performant services.
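
To sketch out what sits behind that URL, here is a rough, simplified take on what a marketplace-confirm handler could do once AWS redirects a new subscriber: resolve the customer from the marketplace registration token, create an API key for them, and attach it to the usage plan identified in the URL. This is my own assumption about the flow, not AWS's reference implementation, and the token handling in particular will vary by portal setup.

    # A simplified sketch of a marketplace-confirm handler, based on my own
    # assumptions about the flow. Not AWS's reference developer portal code.
    import boto3

    def confirm_subscription(registration_token, usage_plan_id):
        marketplace = boto3.client("meteringmarketplace")
        apigateway = boto3.client("apigateway")

        # Exchange the marketplace registration token for a customer identifier.
        customer = marketplace.resolve_customer(RegistrationToken=registration_token)

        # Create an API key for this customer, then attach it to the usage plan
        # that the [USAGE_PLAN_ID] variable in the redirect URL points at.
        key = apigateway.create_api_key(
            name="marketplace-" + customer["CustomerIdentifier"],
            customerId=customer["CustomerIdentifier"],
            enabled=True,
        )
        apigateway.create_usage_plan_key(
            usagePlanId=usage_plan_id,
            keyId=key["id"],
            keyType="API_KEY",
        )
        return key["value"]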

Pointing out the URL for a signup process might seem like a small thing. However, because of AWS’s first mover advantage in the cloud, and their experience as an API pioneer, I feel like they are in a unique position to be defining the business layer of delivering APIs at scale in the cloud. The business opportunities available at the API management and marketplace layers within the AWS ecosystem are untapped, and represent the next generation of API management that is baked into the cloud. It’s an interesting place to be delivering and integrating with APIs, at this stage of the API economy.


Microservice'ing Like a Unicorn With Envoy, Istio & Kubernetes With Christian Posta Of Red Hat At APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Microservice’ing Like a Unicorn With Envoy, Istio and Kubernetes” by Christian Posta (@christianposta) of Red Hat (@RedHat) on September 25th.

Here is Christian’s abstract for the session:

The exciting parts of APIs, unfortunately, happen when services actually try communicating and working together to accomplish some business function. The service-mesh approach has emerged to help make service communication boring. In particular, a project named Istio.io has garnered attention in the open-source community as a way of implementing the service mesh capabilities. These capabilities include pushing application-networking concerns down into the infrastructure: things like retries, load balancing, timeouts, deadlines, circuit breaking, mutual TLS, service discovery, distributed tracing and others.

As Istio becomes more popular and widely used, we’re going to see a lot of people put it into production for their API use cases. This talk will walk attendees through the Istio architecture, and more importantly, help them understand how it all works.

API delivery, integration, orchestration, and development of mesh networks all represent the next generation of doing APIs, and APIStrat is where we are having these discussions. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Having An API Deprecation Page Like EVRYTHNG Does

The API providers I talk to regularly are rarely proactive when it comes to addressing API deprecation. Most API providers aren’t thinking about shutting down any service they deliver until they’ve actually encountered the need down the road. Many just begin their API journey assuming their APIs will be a success, and that they will have to support every version forever. However, once they encounter what it will take to support older versions, they begin to change their tune, something that often comes as a surprise to consumers.

Shutting down old APIs will never be easy, but the process can be made easier with a little proactive communication. One example of this practice in action can be found over at the Internet of Things provider EVRYTHNG, with their API deprecation page, which provides a pretty simple layout for an API deprecation page, going beyond just a title and description of what the future might hold.

API Status

The following status labels are applicable to APIs, features, or SDK versions depending on their current support status:

  • Preview - May change at any time.
  • Stable - Fully released and stable. Will not change at short notice.
  • Deprecated - No longer supported (and may have been replaced), and may be removed in the future at an announced date. Use not encouraged.
  • Removed - Removed, and no longer supported or available.

Communication

When a deprecation is announced, the details and any relevant migration information will be available on the following channels:

  • The Developer Blog.
  • The @evrythngdev Twitter account.
  • The relevant feature page on the Developer Hub.
  • Enterprise customers may receive information by email to their specified EVRYTHNG contact, if applicable.

Customers using one of our SLAs can read the General section for more information.

EVRYTHNG provides us with some important building blocks for use as part of our overall API deprecation strategy, something that EVERY API provider should be considering as they prepare to launch a new API. API deprecation shouldn’t be an afterthought, and if we communicate openly and honestly about it from the beginning, our API consumers will be more forgiving. Surprising consumers down the road is the quickest way to piss people off, and get the tech blogosphere publishing those pitchforks and torches type of stories we see so often about API providers.
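
For what it is worth, here is a small sketch of how those status labels could be captured alongside each API version in your own catalog, so deprecation state gets tracked from day one instead of bolted on later. The labels mirror the EVRYTHNG page, and the versions and dates are hypothetical.

    # A small sketch of tracking support status per API version. The status
    # labels mirror the EVRYTHNG page; the versions and dates are hypothetical.
    from enum import Enum

    class SupportStatus(Enum):
        PREVIEW = "preview"        # May change at any time.
        STABLE = "stable"          # Fully released; will not change at short notice.
        DEPRECATED = "deprecated"  # No longer supported; removal date announced.
        REMOVED = "removed"        # No longer supported or available.

    api_versions = {
        "v1": {"status": SupportStatus.DEPRECATED, "removal_date": "2019-01-01"},
        "v2": {"status": SupportStatus.STABLE, "removal_date": None},
    }

    for version, info in api_versions.items():
        print(version, info["status"].value, info["removal_date"] or "no removal planned")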

I just finished sharing some API deprecation awareness as part of the API strategy for a federal government agency–they just hadn’t had an open discussion about it. API deprecation is one of those uncomfortable subjects we all have to get used to discussing early on, whether we like it or not. It is something that will always be difficult, and leave some API consumers unhappy, but if done well, it can help reduce a lot of friction. You can visit my API deprecation research and storytelling for more examples of how to do it right, and how you can avoid doing it wrong–bringing this important subject out into the open.


It Isn't Just That You Have A PDF For Your API Docs, It Is Because It Demonstrates That You Do Not Use Other APIs

I look at a lot of APIs. I can tell a lot about a company, and the people behind an API, from looking at their developer portal, documentation, and other building blocks of their presence. One of the more egregious sins I feel an API provider can make when operating their API is publishing their API documentation as a PDF. This is something that was acceptable up until about 2006, but over a decade later it shows that the organization behind an API hasn’t done their homework.

The crime really isn’t the fact that an API provider is using a PDF for their documentation. I’m fine with API providers publishing a PDF version of their API documentation, to provide a portable edition of it. Where a PDF version of the documentation becomes a problem is when it is the primary version of the documentation, which demonstrates that the creators don’t get out much and haven’t used many other APIs. If an API team has done their homework, and actually put other 3rd party APIs to work, they would know that PDF documentation for APIs is not the norm out in the real world.

One of the strongest characteristics an API provider can possess is an awareness of what other API providers are doing. The leading API providers demonstrate that they’ve used other APIs, and are aware of what API consumers in the mainstream are used to. Most mainstream API consumers will simply close the tab when they encounter an API that has a PDF document for its documentation. Unless you have some sort of mandate to use that particular API, you are going to look elsewhere. If an API provider isn’t up to speed on what the norms are for API documentation, and isn’t outwardly facing, the chance they’ll actively support their API is always diminished.

PDF API documentation may not seem like too big of a mistake to many enterprise, institutional, and government API providers, but it demonstrates much more than just a static representation of what an API can do. It represents an isolated, self-contained, non-interactive view of what an API can do. It reflects an API platform that is self-centered, and not really concerned with the outside world. Which often means it is an API platform that won’t always care about you, the API consumer. APIs in the age of the web are all about having an externalized view of the world, and understanding how to play nicely with large groups of developers outside of your firewall–when you publish a PDF version of your API docs, you demonstrate that you don’t get out much, and aren’t concerned with the outside world.


Bringing Discovery Within Data API Marketplaces Out Into The Open

I spend time reviewing each wave of data API marketplaces as they emerge on the landscape every couple of years. There are a number of reasons why these data marketplaces exist, ranging from supporting government agencies, NGOs, or for commercial purposes. One of the most common elements of API-driven data marketplaces that frustrates me is when they don’t do the hard work to expose the metadata around the databases, datasets, spreadsheets, and the raw data they are providing access to–making it very difficult to actually discover anything of interest.

You can see a couple of examples of this with mLab, the World Health Organization, Data.World, and others. While these platforms provide a (sometimes) impressive ability to manage data stores, they don’t always do a good job of exposing the metadata of their catalogs as part of the available APIs, or dynamically generating API endpoints, documentation, and other resources based upon the data that is being published to their platforms. This leaves developers to do the digging, and to make the investment to understand what is available on a platform.

Some of the platforms I encounter obfuscate their data metadata on purpose, requiring developers to be qualified before they get access to valuable resources. Most, I think, just do not put themselves in the position of an API consumer who lands on their developer page and doesn’t know anything about the API. They understand the database, and the API, so it all makes sense to them, and they don’t have any empathy for anyone else who isn’t in the know–a common trait of database centered people who speak in acronyms and schema that they assume other people know, and do not spend much time thinking outside of that bubble.

I could make a career out of deploying APIs on top of other data marketplace APIs, autogenerating a more accessible, indexable, intuitive layer on top of what they’ve already deployed. I regularly find a wealth of data that is accessible through an API interface, but will most likely never be found by anyone. Before most developers will ever make the investment to onboard with an API, they need to understand what valuable resources are available. I can imagine many developers stumble across these data marketplaces, spend about 15 minutes looking around, maybe sign up for a key, but then give up because of the overhead involved with actually understanding what data is actually available.
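
Here is a rough sketch of the kind of machine readable catalog layer I am talking about: a single discovery structure that lists each dataset, its fields, and where the data API lives, so a consumer can search what is available before ever signing up for a key. The dataset entry and field names are hypothetical.

    # A rough sketch of a searchable dataset catalog that a data marketplace
    # could expose at a discovery path. The dataset entry is hypothetical.
    import json

    catalog = {
        "datasets": [
            {
                "name": "global-air-quality",
                "description": "Hourly air quality readings by city.",
                "fields": ["city", "country", "pollutant", "value", "measured_at"],
                "records": 1250000,
                "updated": "2018-08-01",
                "api": "/datasets/global-air-quality/records",
            },
        ],
    }

    def search_catalog(query):
        """Match datasets whose name, description, or fields mention the query."""
        query = query.lower()
        return [
            dataset for dataset in catalog["datasets"]
            if query in dataset["name"].lower()
            or query in dataset["description"].lower()
            or any(query in field.lower() for field in dataset["fields"])
        ]

    print(json.dumps(search_catalog("air"), indent=2))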


Working With My OpenAPI Definitions In An API Editor Helps Stabilize Them

I’m deploying three new APIs right now, using a new experimental serverless approach I’m evolving. One is a location API, another provides API access to companies, and the third involves working with patents. I will be evolving these three simple web APIs to meet the specific needs of some applications I’m building, but then I will also be selling retail and wholesale access to each API once they’ve matured enough. With all three of these APIs, I began with a simple JSON schema from the data source, which I used to generate three rough OpenAPI definitions that will act as the contract seeds for my three services.

Once I had three separate OpenAPI contracts for the services I was delivering, I wanted to spend some time hand designing each of the APIs before importing them into AWS API Gateway, generating Lambda functions, loading them into Postman, and using them to support other stops along the API lifecycle. I still use a localized version of Swagger Editor for my OpenAPI design space, but I’m working to migrate to OpenAPI-GUI as soon as I can. I still very much enjoy the side by side design experience in Swagger Editor, but I want to push forward the GUI side of the conversation, while still retaining quick access to the raw OpenAPI for editing.

One of the reasons why I still use Swagger Editor is because of the schema validation it does behind the scenes. Which is one of the reasons I need to learn more about Speccy, as it is going to help me decouple validation from my editor, and allow me to use it as part of my wider governance strategy, not just at design time. However, for now I am highly dependent on my OpenAPI editor helping me standardize and stabilize my OpenAPI definitions, before I use them along other stops along the API lifecycle. These three APIs I’m developing are going straight to deployment, because they are simple datasets where I’m the only consumer (for now), but I still need to make sure my API contract is solid before I move to other stops along the API lifecycle.
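
As a sketch of what decoupled validation could look like outside of the editor, along the lines of what Speccy does from the command line, here is a small Python script that loads an OpenAPI definition and validates it. It assumes the openapi-spec-validator package, and the exact entry point name has changed between releases of that package, so treat it as illustrative rather than definitive.

    # A minimal sketch of validating an OpenAPI definition outside the editor.
    # Assumes PyYAML and openapi-spec-validator; the validator entry point name
    # differs between releases of that package, so adjust the import as needed.
    import sys
    import yaml
    from openapi_spec_validator import validate_spec

    def check_contract(path):
        with open(path) as handle:
            spec = yaml.safe_load(handle)
        validate_spec(spec)  # Raises an exception describing the first problem found.
        print(path + " looks like a valid OpenAPI definition.")

    if __name__ == "__main__":
        check_contract(sys.argv[1] if len(sys.argv) > 1 else "openapi.yaml")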

Right now, loading up an OpenAPI in Swagger Editor is the best sanity check I have. Not just making sure everything validates, but also making sure it is all coherent, and renders into something that will make sense to anyone reviewing the contract. Once I’ve spent some time polishing the rough corners of an OpenAPI, adding summaries, descriptions, tags, and other details, I feel like I can begin using it to generate mocks, deploy in a gateway, and begin managing the access to each API, as well as the documentation, testing, monitoring, and other stops using the OpenAPI contract. This makes this manual stop in the evolution of my APIs a pretty critical one for helping me stabilize each API’s definition before I move on. Eventually, I’d like to automate the validation and governance of my APIs at scale, but for now I’m happy just getting a handle on it as part of this API design stop along my lifecycle.


Twice The Dose Of Vanick Digital At APIStrat in Nashville, TN Next Month

We are kicking it into overdrive now that the schedule is up for APIStrat in Nashville, TN this September 24th through 26th. From now until the event at the end of September you are going to hear me talk about all the amazing speakers we have, the companies they work for, and the interesting things they are all doing with APIs. One of the perks of being a speaker or a sponsor at APIStrat–you get coverage on API Evangelist, and become part of the buzz around the 9th edition of the API Strategy & Practice Conference (APIStrat), now operated by the OpenAPI Initiative (OAI) and the Linux Foundation.

Today’s post is about my friends over at the digital solutions and API management agency Vanick Digital. With APIStrat coming to their backyard, and their ability to capture the attention of the APIStrat program committee, Vanick Digital has two separate talks this year:

  • Securing the Full API Stack by Patrick Chipman - APIs open up new channels for sharing and consuming data, but whenever you open a new channel, new security risks emerge. Additionally, APIs often involve a variety of new components, such as API gateways, in-memory databases, edge caches, facade layers, and microservice-aligned data stores that can complicate the security landscape. How and where do you apply the right controls to ensure your API and your data are secure? In this session, we’ll answer that question by identifying the different components commonly used in the delivery of API products. For each layer, we’ll discuss the security risks that can and should be mitigated there, along with best practice approaches (including ABAC, OAuth2, and more) to implement those mitigations.

  • What Do You Mean By “API as a Product”? by Lou Powell - You may have heard the term “API Product.” But what does it mean? In this talk I will introduce the concept and explain the benefits and challenges of transforming your organization to view your APIs as measurable products that expose your company’s capabilities, creating agility, autonomy, and acceleration. Traditional product manufacturers create new products and launch them into the marketplace and then measure value; we will teach you to view your APIs in the same way. Concepts covered in this presentation will be designing APIs with Design Thinking, funding your product, building teams, marketing your API, managing your marketplace, and measuring success.

Showcasing their skills as an API focused agency by bringing it to the stage at APIStrat–smart! I am currently working with their team to understand how API Evangelist and Vanick Digital can work more closely together on projects, helping me support the customers I’m reaching with my storytelling and workshops, and delivering, scaling, and managing the day to day details I don’t have the time to provide for my customers. So it makes me happy to see them at APIStrat, sharing their wisdom, and demonstrating what they are capable of. If you are under resourced like many API providers are, I recommend coming to APIStrat and meeting with the team, or if it can’t wait until September, feel free to reach out directly–just let them know where you found them.

APIStrat is seven weeks away, so make sure you get registered. The workshop, session, and keynote lineup is locked up, but we still have a handful of sponsorship opportunities available. You can find the sponsorship prospectus on the web site, or feel free to contact me directly and I’ll get you plugged in with the events team. Make sure you don’t miss out on an opportunity to be part of this ongoing API conversation that we’ve kept going since 2013–where API developers, architects, designers, and API business leaders, evangelists, advocates, and the API curious gather to discuss where the API space is headed. Now that APIStrat is operated by the OpenAPI Initiative, it is the place to be if you want to contribute to the road map, and influence the direction of the OpenAPI specification. No matter how you choose to get involved, we look forward to seeing you all in Nashville next month!


I Am Speaking In Washington D.C. At The Blue Button 2.0 Developer Conference On The API Life Cycle This Monday

I’m heading to Washington D.C. this Monday to speak on the API life cycle as part of the Blue Button 2.0 Developer Conference. We’ll be coming together in the Eisenhower Executive Office Building, part of the White House complex, to better understand how we can, “bring together developers to learn and share insights on how we can leverage claims data to serve the Medicare population.”

The gathering will hear from CMS Administrator Seema Verma and other administration leadership about Blue Button 2.0 and the MyHealthEData initiative, while also hosting a series of breakout sessions, which I’m part of:

  • Blue Button 2.0 and FHIR (where it’s all heading) with Mark Scrimshire and Cat Greim
  • MyHealthEData and Interoperability with Alex Mugge and Joy Day
  • Overview of Medicare Claims Data with Karl Davis
  • Medicare Beneficiary User Research with Allyssa Allen
  • Sync for Science with Josh Mandel and Andrew Bjonnes
  • API Design with Kin Lane

Registration for the gathering is now closed, but if you are a federal govy, I’m sure you can find someone to get you in. I’m looking forward to seeing the CMS, HHS, and USDS folks again, as they are doing some amazing stuff with the Blue Button API, as well as hanging out with some of the VA people I know will be there. The Blue Button API is one of the more important API blueprints we have out there in the healthcare space, as well as the federal government. I’ve been a champion of Blue Button since I contributed to the project when I worked in DC back in 2013, and will continue to invest in its success in coming years.

In my session I will be covering my API lifecycle and governance research as the API Evangelist, but I’m eager to talk with more folks involved with the Blue Button API about what is next, and better understand where HL7 FHIR is headed, while also developing my awareness of who is actively participating in the Blue Button API community. I’ll be in DC late Sunday night through Monday, and I’m back to the west coast on Tuesday AM. If you are around I’d love to connect, and if you want to tune in, I believe there will be a live stream of the event on the Blue Button API portal.

