The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Asking The Honest Questions When It Comes To Your API Journey

I engage with a lot of enterprise organizations in a variety of capacities. Some are more formal consulting and workshop engagements, while others are just emails, direct messages, and random conversations in the hallways and bars around API industry events. Many conversations are free flowing, and they trust me to share my thoughts about the API space, and provide honest answers to their questions regarding their API journey. Others express apprehension and concern, and have trouble engaging with me because they are worried about what I might say about their approach to doing APIs within their enterprise organizations. Some have even told me that they’d like to formally bring me in for discussions, but they can’t get me past legal or their bosses–stating I have a reputation for being outspoken.

While in Vegas today, I had breakfast with Paolo Malinverno, an analyst from Gartner, and he mentioned the Oscar Wilde quote, “Whenever people agree with me I always feel I must be wrong.” He was referring to the “yes” culture that can manifest itself around Gartner, but also within industries and the enterprise regarding what you should be investing in as a company. People get caught up in culture, politics, and trends, and don’t always want to ask, or be asked, the hard questions. Which is the opposite of what any good API strategist, leader, and architect should be doing. You should be equipped and ready to be asked hard questions, and be searching out the hard questions. This stance is fundamental to API success, and you will never find what you are seeking when it comes to your API journey if you do not accept that many questions will be difficult.

The reality that not all API service providers truly want to help enterprise organizations genuinely solve the business challenges they face, and that many enterprise technology leaders aren’t actually concerned with truly solving real world problems, has been one of the toughest pills for me to swallow as the API Evangelist over the last eight years. Realizing that there is often more money to be made in not solving problems, not properly securing systems, and not making systems performant, efficient, and working as expected. While I think many folks are too far down in the weeds of operations and company culture to fully make the right decision, I also think there are many people who make the wrong technological decision because it is the right business decision in their view. They do it to please shareholders, investors, or their boss, or they just go with the flow when it comes to the business culture within their enterprise, and the industry that they operate in.

Honestly, there isn’t a lot of money to be made asking the hard questions, and addressing the realities of getting business done using APIs within the enterprise. Not all companies are willing to pay you to come in and speak truth to what is going on. Pointing out the illnesses that exist within the enterprise, and potentially providing solutions to what is happening. People are afraid of what you are going to ask. People don’t want to pay someone to rock the boat. I find it to be a rare occurrence to find large enterprise organizations who are willing to look in the mirror, be held accountable for their legacy technical debt, and be forced to make the right decisions when it comes to moving forward with the next generation of investment. Which is why most organizations will stumble repeatedly in their API journeys, and be more susceptible to the winds of technological trends and investment cycles, all because they aren’t willing to surround themselves with the right people who are willing to speak truth.


A Diverse API Toolbox Driving Hybrid Integrations Across An Event-Driven Landscape

I’m heading to Vegas in the morning to spend two days in conversations with folks about APIs. I am not there for AWS re:Invent, or the Gartner thingy, but I guess in a way I am, because there are people there for those events who want to talk to me about the API landscape. Folks looking to swap stories about enterprise API investment, and what it takes to possess a diverse API toolbox for driving hybrid integrations across an event-driven landscape. I’m not giving any formal talks, but as with any engagement, I’m brushing up on the words I use to describe what I’m seeing across the space when it comes to the enterprise API lifecycle.

The Core Is All About Doing Resource Based, Request And Response APIs Well
I’m definitely biased, but I do not subscribe to popular notions that at some point REST, RESTful, web, and HTTP APIs will have to go away. We will be using web technology to provide simple, precise, useful access to data, content, and algorithms for some time to come, despite the API sector’s continued evolution, and investment trends coming and going. Sorry, it is simple, low-cost, and something a wide audience gets from both a provider and consumer perspective. It gets the job done. Sure, there are many, many areas where web APIs fall short, but that won’t stop success continuing to be defined by enterprise organizations who can do microservices well at scale. Despite relentless assaults by each wave of entrepreneurs, simple HTTP APIs driving microservices will stick around for some time to come.

API Discovery And Knowing Where All Of Your APIs Resources Actually Are
API discovery means a lot of things to a lot of people, but when it comes to delivering APIs well at scale in a multi-cloud, event-driven world, I’m simply talking about knowing where all of your API resources are. Meaning, if I walked into your company tomorrow, could you show me a complete list of every API or web service in use, and what enterprise capabilities they enable? If you can’t, then I’m guessing you aren’t going to be all that agile, efficient, and ultimately effective with doing your APIs at scale, or be able to orchestrate much, and identify what the most meaningful events are that occur across the enterprise landscape. I’m not even getting to the point of service mesh, and other API discovery wet dreams, I’m simply talking about being able to coherently articulate what your enterprise digital capabilities are.

Always Be Leveraging The Web As Part Of Your Diverse API Toolbox
Technologists, especially venture fueled technologists, love to take the web for granted. Nothing does web scale better than the web. Understand the objectives behind your APIs, and consider how you are leveraging the web–negotiating content, caching, and building on the strengths of the web. Use the right media type for the job, and understand the tradeoffs of HTML, CSV, XML, JSON, YAML, and other media types. Understand when hypermedia media types might be more suitable for document, media, and other content focused API resources. Simple web APIs make a huge difference when they speak to their intended audience and allow them to easily translate an API call into a workable spreadsheet, or navigate to the previous or next episode, installment, or other logical grouping with sensible hypermedia controls. Good API design is more about having a robust and diverse API design toolbox to choose from, than it is ever about the dogma that exists around any specific approach, philosophy, protocol, or venture capital fueled trend.
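
To make this concrete, here is a minimal OpenAPI 3.0 sketch of what offering the same resource in more than one media type can look like–the /episodes resource and its schema are hypothetical, just an illustration of letting one audience pull JSON while another pulls a spreadsheet-friendly CSV:

paths:
  /episodes:
    get:
      summary: List episodes, with links to previous and next pages
      responses:
        '200':
          description: A single page of episodes
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/EpisodePage'
            text/csv:
              schema:
                type: string

A consumer simply sends an Accept header of application/json or text/csv, and the same URL serves both audiences.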

Have A Clear Vision For Who Will Be Using Your APIs
One significant mistake that API designers, developers, and architects make over and over again is not having a clear vision of who will be using the APIs they are building. Defining, designing, delivering, and operating an API based upon what the provider wants over what the consumers will need. Using protocols, ignoring existing patterns, and adopting the latest trends that have nothing to do with what API consumers will be needing or capable of putting to work. Make sure you know your consumers, and consider giving them more control with query languages like GraphQL and Falcor, allowing them to define the type of experience they want. Work to have a clear vision of who will be consuming an API, even if you don’t know who they are yet. Start simple with basic web APIs that help easily on-board new users who are unfamiliar with the domain and schema, while also allowing for an evolution that gives power-users who are in the know more access, more control, and a stronger voice in the vision of what your APIs deliver or do not.

Responding In Real Time, Not Just Upon Request
A well oiled request and response API infrastructure is a critical base for any enterprise organization, however, a sign of a more mature, scalable API operation is the presence of event-driven infrastructure, including webhooks, streaming solutions, and multi-protocol, multi-message approaches to moving data and content around, and responding algorithmically to real time events occurring across domains. Investing in event-driven infrastructure is not simply about using Kafka, it is about having a well-defined, well-honed web API base, with a suite of event-driven approaches in one’s toolbox for also providing access to internal, partner, and last mile public and 3rd party resources using an appropriate set of protocols and message formats. Something that might be as simple as a webhook subscription, getting a simple HTTP push when something changes, to maintaining persistent HTTP connections for streaming updates, all the way to high volume HTTP and TCP connections to a variety of topical channels using Kafka, or other industrial grade API-driven solutions like gRPC, and beyond.
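
To ground the webhook end of that spectrum, here is a minimal sketch of how a subscription and its resulting HTTP push can be described using the callbacks feature of OpenAPI 3.0–the /subscriptions path and callbackUrl field are hypothetical:

paths:
  /subscriptions:
    post:
      summary: Subscribe a callback URL to change events
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl:
                  type: string
      responses:
        '201':
          description: Subscription created
      callbacks:
        onChange:
          '{$request.body#/callbackUrl}':
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
              responses:
                '200':
                  description: Consumer acknowledged the event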

Have A Reason For When You Switch Protocols
There are a number of reasons why we switch protocols, moving off HTTP towards a TCP way of getting things done, with most of the reasoning being more emotional than technical. When I ask people why they went from HTTP APIs to Kafka, or Websockets, there is rarely a protocol based response. They did it because they needed things done in real time, through the existence of specific channels, or simply because Kafka is how you do big data, or Websockets is how you do real time data. There wasn’t much scrutiny of who the consumers are, what was gained by moving to TCP, and what was lost by moving off HTTP. There is little awareness of the work Google has done around gRPC and HTTP/2, or what has happened recently around HTTP/3, which builds on Quick UDP Internet Connections (QUIC). I’m no protocol expert, but I do grasp the role that these protocols play, and understand that the fundamental foundation of APIs is the web, and the importance of having a well thought out strategy when it comes to using the Internet for delivering on the API vision across the enterprise.

Ensuring All Your API Infrastructure Is Reliable
It doesn’t matter what your API design processes are, and what tools you are using, if you cannot do it all reliably. If you aren’t monitoring, testing, securing, and understanding performance, consumption, and limitations across ALL of your API infrastructure, then there will never be a right API solution. Web APIs, hypermedia, GraphQL, Webhooks, Server-Sent Events, Websockets, Kafka, gRPC, and any other approach will always be inadequate if you cannot reliably operate them. Every tool within your API design toolbox should be able to be effectively deployed, thoughtfully managed, and coherently monitored, tested, secured, and delivered as a reliable service. If you don’t understand what is happening under the hood with any of your API infrastructure, are out of your league technically, or are kept in the dark by vendor magic, it should NOT be a tool in your toolbox. It should be something that is left in the R&D lab until you can prove that you can reliably deliver, support, scale, and evolve it in alignment with, and with purpose augmenting, your existing API infrastructure.

Be Able To Deliver, Operate, And Scale Your APIs Anywhere They Are Needed
One increasingly critical aspect of any tool in our API design toolbox is whether or not we can deploy and operate it within multiple environments, or find that we are limited to just a single on-premise or cloud location. Can your request and response web API infrastructure operate within the AWS, Google, or Azure clouds? Does it operate on-premise within your datacenter, locally for development, and within sandbox environments for partners and 3rd party developers? Where your APIs are deployed will have just as big of an impact on reliability and performance as your approach to design and the protocol you are using. Regulatory and other regional level concerns may have a bigger impact on your API infrastructure than using REST, GraphQL, Webhooks, Server-Sent Events, or Kafka. Being able to ensure you can deliver, operate, and scale APIs anywhere they are needed is fast becoming a defining characteristic of the tools that we possess in our API toolboxes.

Making Sure All Your Enterprise Capabilities Are Well Defined
The final, and most critical element of any enterprise API toolbox is ensuring that all of your enterprise capabilities are defined as machine readable API contracts, using OpenAPI, AsyncAPI, JSON Schema, and other formats. API definitions should provide human and machine readable contracts for all enterprise capabilities that are in play. These contracts contribute to every stop along the API lifecycle, and help both API providers and consumers realize everything I have discussed in this post. OpenAPI provides what we need to define our request and response capabilities using HTTP, and AsyncAPI provides what we need to define our event-driven capabilities, providing the basis for understanding what we are capable of delivering using our API toolboxes, and responding to via the hybrid integration solutions we’ve engineered and automated using our event-driven solutions. These contracts define not just the surface area of our API infrastructure, but also the API operations that surround the enterprise capabilities we are enabling internally, with partners, and publicly via our enterprise API efforts.
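
As a simple illustration of the event-driven half of that equation, here is a minimal AsyncAPI sketch, using the newer 2.0 style of the specification, and a hypothetical channel publishing order events:

asyncapi: '2.0.0'
info:
  title: Orders Events
  version: '1.0.0'
channels:
  orders/created:
    subscribe:
      message:
        payload:
          type: object
          properties:
            orderId:
              type: string
            total:
              type: number

Paired with an OpenAPI definition for the request and response side of the same resources, this gives both halves of the toolbox a machine readable contract.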


The API Journey

I’ve been researching the API space full time for the last eight years, and over that time I have developed a pretty robust view of what the API landscape looks like. You can find almost 100 stops along what I consider to be the API lifecycle on the home page of API Evangelist. While not every organization has the capacity to consider all 100 of these stops, they do provide a wealth of knowledge generated throughout my own journey, where I’ve been documenting what the API pioneers have been doing with their API operations, how startups leverage simple web API infrastructure, as well as how the enterprise has been waking up to the API potential in the last couple of years.

Over the years I’ve tapped this research for my storytelling on the blog, and for the white papers and guides I’ve produced. I use this research to drive my talks at conferences, meetups, and the workshops I do within the enterprise. I’ve long had a schema for managing my research, tracking the APIs, companies, people, tools, repos, news, and other building blocks across the API universe. Now, after a year of working with them on the ground at enterprise organizations, I’m partnering with Streamdata.io (SDIO) to continue productizing my approach to the API lifecycle, which we are calling Journey, or specifically SDIO Journey.

Our workshops are broken into six distinct areas of the lifecycle:

  • Discovery (Goals, Definition, Data Sources, Discovery Sources, Discovery Formats, Dependencies, Catalog, Communication, Support, Evangelism) - Defining what your digital resources are, and what your enterprise capabilities are.
  • Design (Definitions, Design, Versioning, Webhooks, Event-Driven, Protocols, Virtualization, Testing, Landing Page, Documentation, Support, Communication, Road Map, Discovery) - Going API first, as well as API design first when it comes to the delivery of all of your API resources.
  • Development (Definitions, Discovery, Virtualization, Database, Storage, DNS, Deployment, Orchestration, Dependencies, Testing, Performance, Security, Communication, Support) - Considering what is needed to properly develop API resources at scale, and move from design to production.
  • Production (Definitions, Discovery, Virtualization, Authentication, Management, Logging, Plans, Portal / Landing Page, Getting Started, Documentation, Code, Embeddables, Licensing, Support, FAQs, Communication, Road Map, Issues, Change Log, Legal, Monitoring, Testing, Performance, Tracing, Security, Analysis, Maintenance) - Thinking about the production needs of an API operation, extracting the building blocks from successful APIs available across the web.
  • Outreach (Purpose, Scope, Defining Success, Sustaining Adoption, Communication, Support, Virtualization, Measurement, Structure) - Getting more structured around how you handle outreach around your APIs, whether they are internal, partner, or public API resources.
  • Governance (Design, Testing, Monitoring, Performance, Security, Observability, Discovery, Analysis, Incentivization, Competition) - Looking at how you can begin defining, measuring, analyzing, and providing guidance across API operations at the highest levels.

We are currently working with several API service providers to deliver SDIO Journey workshops within their enterprise organizations, helping bring more API awareness to their pre-sales, sales, business, and executive groups. While also working to deliver independent Journey workshops for their customers, helping them see the bigger picture when it comes to the API lifecycle, but also begin establishing their own formal strategy for how they can execute on their own personal vision and version of it. Helping enterprise organizations learn from the research I’ve gathered over the last eight years, and begin thinking more constructively, and being more thoughtful and organized about how they approach the delivery, iteration, and sustainment of APIs across the enterprise.

I have turned SDIO Journey into a set of basic APIs that allow me to build, replicate, and deliver our Journey workshops. I’m preparing for a handful of workshops before the end of the year with Axway, and for API Days in Paris, but then in 2019 I will continue productizing and delivering these API workshops, helping encourage other enterprise organizations to invest more in their own API journey, and get more structured in how they think about the delivery of microservices across the enterprise. Helping them realize that the transformation they are going through right now isn’t going to stop. It is something that will be ongoing, and require their organization to learn to accept perpetual change and evolution in how they deliver the data, content, and algorithmic resources they’ll need to do business across the enterprise. While also evolving their understanding that all of this is about people, business, and politics more than it will ever be about technology all by itself.

If you have any questions about the SDIO Journey workshops we are doing, feel free to reach out, and I’ll get you more details about how to get involved.


YAML API Management Artifacts From AWS API Gateway

I’ve always been a big supporter of creating machine readable artifacts that help define the API lifecycle. While individual artifacts can originate and govern specific stops along the API lifecycle, they can also bring value when applied across other stops along the API lifecycle, and most importantly when it comes time to govern everything. The definition and evolution of individual API lifecycle artifacts is the central premise of my API discovery format APIs.json–which depends on there being machine readable elements within the index of each collection of APIs being documented, helping us map out the entire API lifecycle.
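
For readers who haven’t seen the format, here is a minimal, abridged sketch of an APIs.json index–the API listed, and the URLs pointing at its machine readable elements, are hypothetical:

{
  "name": "Example, Inc.",
  "description": "An index of the APIs behind our enterprise capabilities.",
  "url": "https://example.com/apis.json",
  "apis": [
    {
      "name": "Orders API",
      "humanURL": "https://example.com/orders",
      "baseURL": "https://api.example.com/orders",
      "properties": [
        {
          "type": "OpenAPI",
          "url": "https://example.com/orders/openapi.yaml"
        }
      ]
    }
  ]
}

Each entry in the properties collection is one of the machine readable elements I am talking about, pointing to artifacts that describe other stops along the API lifecycle.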

OpenAPI provides us with machine readable details about the surface area of our API which can be used throughout the API lifecycle, but it lacks other details about the surface area of our API operations. So when I do come across interesting approaches to extending the OpenAPI specification, injecting a machine readable artifact into the OpenAPI that supports other stops along the API lifecycle, I like to showcase what they are doing. I’ve become very fond of one within the OpenAPI export of any AWS API Gateway deployed API I’m working with, which provides some valuable details that can be used as part of both the deployment and management stops along the API lifecycle:


x-amazon-apigateway-integration:
  # The backend endpoint this operation proxies to.
  uri: "http://example.com/path/to/the/code/behind/"
  responses:
    default:
      statusCode: "200"
  # Maps the method's path parameter to the backend query string.
  requestParameters:
    integration.request.querystring.id: "method.request.path.id"
  # Pass the request body through when no mapping template matches.
  passthroughBehavior: "when_no_match"
  httpMethod: "GET"
  type: "http"

This artifact is associated with each individual operation within my OpenAPI. It tells the AWS gateway how to deploy and manage my API. When I first import this OpenAPI into the gateway, it will deploy each individual path and operation, then it helps me manage it using the rest of the available gateway features. From this OpenAPI definition I can design, then autogenerate and deploy the code behind each individual operation, then deploy each individual path and operation to the AWS API Gateway and map them to the code behind. I can do this for custom APIs I’ve deployed, as well as Lambda brokered APIs–I prefer the direct way, because it is still easier, more stable, more flexible, and more cost effective for me to write the code behind each of my API operations than to go full serverless.

However, this artifact demonstrates for me the importance of artifacts associated with each stop along the API lifecycle. This little bit of extended OpenAPI YAML gives me a significant amount of control when it comes to the automation of deploying and managing my APIs. There are even more properties available for other layers of the AWS Gateway not included in this example, but that is something I will keep mapping out. Having these types of machine readable artifacts present within our OpenAPI specifications for describing the surface area of our APIs, as well as present within our APIs.json indexes for describing the surface area of our API operations, will be critical to further automating, scaling, and defining the API lifecycle as it exists across the enterprise.


What Does The Next Chapter Of Storytelling Look Like For API Evangelist?

I find myself refactoring API Evangelist again this holiday season. Over the last eight years of doing API Evangelist I’ve had to regularly adjust what I do to keep it alive and moving forward. As I close up 2018, I’m finding the landscape shifting underneath me once again, pushing me to begin considering what the next chapter of API Evangelist will look like. Pushing me to adjust my presence to better reflect my own vision of the world, but hopefully also find balance with where things are headed out there in the real world.

I started API Evangelist in July of 2010 to study the business of APIs. As I was researching things in 2010 and 2011 I first developed what I consider to be the voice of the API Evangelist, which continues to be the voice I use in my storytelling here in 2018. Of course, it is something that has evolved and matured over the years, but I feel I have managed to remain fairly consistent in how I speak about APIs throughout the journey. It is a voice I find very natural to speak, and is something that just flows on some days whether I want it to or not, but then also something I can’t seem to find at all on other days. Maintaining my voice over the last eight years has required me to constantly adjust and fine tune, perpetually finding the frequency required to keep things moving forward.

First and foremost, API Evangelist is about my research. It is about me learning. It is about me crafting stories that help me distill down what I’m learning, in an attempt to articulate to some imaginary audience, which has become a real audience over the years. I don’t research stuff because I’m being paid (not always true), and I don’t tell stories about things I don’t actually find interesting (mostly true). API Evangelist is always about me pushing my skills forward as a web architect, secondarily about me making a living, and third about sharing my work publicly and building an audience–in short, I do API Evangelist to 1) learn and grow, 2) pay the bills, and 3) cultivate an audience to make connections.

As we approach 2019, I would say my motivations remain the same, but there is a lot that has changed in the API space, making it more challenging for me to maintain the same course while satisfying all these areas in a meaningful way. Of course, I want to keep learning and growing, but I’d say a shift in the API landscape toward the enterprise is making it more challenging to make a living. There just aren’t enough API startups out there to help me pay the bills anymore, and I’m having to speak and sell to the enterprise more. To do this effectively, a different type of storytelling strategy is required to keep the paychecks coming in. Something I don’t think is unique to my situation, and is something that all API focused efforts are facing right now, as the web matures, and the wild west days of the API come to a close. It was fun while it lasted–yee haw!!

In 2019, the API pioneers like SalesForce, Twitter, Facebook, Instagram, Twilio, SendGrid, Slack, and others are still relevant, but it feels like API storytelling is continuing its migration towards the enterprise. Stories of building an agile, scrappy startup using APIs aren’t as compelling as they used to be. They are being replaced by stories of existing enterprise groups becoming more innovative, agile, and competitive in a fast changing digital business landscape. The technology of APIs, the business of APIs, and the stories that matter around APIs have all been caught up in the tractor beam of the enterprise. In 2010, you did APIs if you were on the edge doing a startup, but by 2013 the enterprise began tuning into what was going on, by 2016 the enterprise responded with acquisitions, and by 2018 we are all selling and talking to the enterprise about APIs.

Despite what many people might believe, I’m not anti-enterprise. I’m also not pro-startup. I’m for the use of web infrastructure to deliver on ethical and sensible private sector business objectives, strengthen expectations of what is possible in the public sector, while holding both sectors accountable to each other. I understand the enterprise, and have worked there before. I also understand how it has been evolving over the last eight years through the API discussions I have had with enterprise folks, workshops I’ve conducted within various public and private sector groups, and studying this latest shift in technology adoption across large organizations. Ultimately, I am very skeptical that large business enterprises can adapt, decouple, evolve, and embrace API and microservice principles in a way that will mean success, but I’m interested in helping educate enterprise teams, and assist them in crafting their enterprise-wide API strategy, and contribute what I can to incentivize change within these large organizations.

A significant portion of my audience over the last eight years is from the enterprise. However, I feel like these are the people within the enterprise who have picked up their heads, and consciously looked for new ways of doing things. My audience has always been fringe enterprise folks operating at all levels, but API Evangelist does not enjoy mainstream enterprise adoption and awareness. A significant portion of my storytelling speaks to the enterprise, but I recognize there is a portion of it that scares them off, and doesn’t speak to them at all. One of the questions I am faced with is around what type of tone I strike as the API Evangelist in this next chapter. Will it be a heavy emphasis on the politics of APIs, or will it be more about the technology and business of APIs? To continue learning and growing in regards to what is happening on the ground with APIs, I’m going to need enterprise access. To continue making a living doing APIs, I’m going to need more enterprise access. The question for me is always around how far I put my left foot in the enterprise or government door, and how far I keep my right foot outside in the real world–where there is no perfect answer, and is something that requires constant adjustment.

Another major consideration for me is always around authenticity. It is an area where I possess a natural barometer, and while I have a pretty high tolerance for API blah blah blah, and writing API industry white papers, when I start getting into areas of technology, business, or politics where I feel like I’m not being authentic, I automatically begin shutting down. I’ve developed a bullshit-o-meter over the years that helps me walk this line successfully. I’m confident I can maintain it and not sell out here. My challenge is more about continuing to do something that matters to someone who will continue investing in my work, and having relevance to the audience I want to reach, and less about keeping things in areas that I’m interested in. I will gladly decline conversations, relationships, and engagements in unethical areas and shady government or business practices, and avoid classified projects, and pay for play concepts along the way. Perpetually pushing me to strike a balance between something that interests me, pushes my skills, brings value to the table, has a meaningful impact, and enjoys a wide reach, while also paying the bills. Which reflects what I’m thinking through as I write this blog post, demonstrating how I approach my own professional development.

So, what does the next chapter of storytelling look like for API Evangelist? I do not know. I know it will have more of a shift towards the enterprise. Which means a heavy emphasis on the technology and business of APIs. However, I’m also thinking deeply about how I present the political side of the API equation, and how I voice my opinions and concerns when it comes to the privacy, security, transparency, observability, regulation, surveillance, and ethics that swirl around APIs. I’m guessing they can still live side by side in some way, I just need to be smarter about the politics of it, and less ranty and emotional. Maybe separate things into a new testament for the enterprise that is softer, while also maintaining a separate old testament for the more hellfire and brimstone. IDK. It is something I’ll continue mulling over, and make decisions around as I continue to shift things up here at API Evangelist. As you can tell my storytelling levels are lower than normal, but my traffic is still constant, reflecting other shifts in my storytelling that have occurred in the past. I’ll be refactoring and retooling over the holidays, and no doubt will have more posts about the changes. If you have any opinions on what value you get from API Evangelist, and what you’d like to see present in the next chapter, I’d love to hear from you in the comments below, on Twitter, or personally via email.


The Ability To Link To API Service Provider Features In My Workshops And Storytelling

All of my API workshops are machine readable, driven from a central YAML file that provides all the content and relevant links I need to deliver a single, or multi-day API strategy workshop. One of the common elements of my workshops is links out to relevant resources, providing access to services, tools, and other insight that supports whatever I’m covering in my workshop. There are two parts to this equation, 1) me knowing to link to something, and 2) being able to link to something that exists.
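
As an illustration, a single entry in that central YAML file might look something like this–the field names here are hypothetical, a sketch of the approach rather than my actual schema:

title: API Lifecycle Workshop
duration: 2 days
sections:
  - name: API Discovery
    overview: Knowing where all of your API resources are.
    links:
      - title: APIs.json
        url: http://apisjson.org
  - name: API Design
    overview: Moving towards a design-first way of delivering APIs.
    links:
      - title: OpenAPI
        url: https://www.openapis.org

Each link in the links collection is exactly the kind of reference I am describing, which only works when the service or tool being referenced gives me a clean URL to point at.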

A number of the API services and tools I use don’t follow web practices, and do not provide any easy way to link to a feature, or any other way of demonstrating the functionality that exists. The web is built on this concept, but along the way within web and mobile applications, we seem to have lost our understanding of this fundamental concept. There are endless situations where I’m using a service or tool, and think that I should reference it in one of my workshops, but I can’t actually find any way to reference it as a simple URL. Value buried within a JavaScript nest, operating on the web, but not really behaving like it depends on the web.

Sometimes I will take screenshots to illustrate the features of a tool or service I am using, but I’d rather have a clean URL and bookmark to a specific feature on a service’s page. I’d rather give my readers, and workshop attendees, the ability to do what I’m talking about, not just hear me talk about it. In a perfect world, every feature of a web application would have a single URL to locate said feature. Allowing me to more easily incorporate features into my storytelling and workshops, but alas many UI / UX folks are purely thinking about usability and rarely thinking about instruct-ability, and being able to cite and reference a feature externally, using the fundamental building blocks of the web.

I understand that it isn’t easy for all application developers to think externally like this, but this is why I tell stories like this. To help folks think about the externalities of the value they are delivering. It is one of the fundamental features of doing business on the web–you can link to everything. However, I think we often forget what makes the web so great, as we think about how to lock things down, and erect walled gardens around our work, something that can quickly begin to work against us. This is why doing APIs is so important, as it can help us think outside of the walls of the gardens we are building, and consider someone else’s view of the world. Something that can give us the edge when it comes to reaching a wider audience with whatever we are creating.


Flickr And Reconciling My History Of APIs Storytelling

Flickr was one of the first APIs that I profiled back in 2010 when I started API Evangelist. I used their API as a cornerstone of my research, resulting in their API making it into my history of APIs storytelling, and continuing to be a story I’ve retold hundreds of times in the conversations I’ve had over the eight years of being the API Evangelist. Now, after the second acquisition (more because of Yahoo?), Flickr users are facing significant changes regarding the number of images we can store on the platform, and what we will be charged for using the platform–forcing me to step back, and take another look at the platform that I feel has helped significantly shape the API space as we know it.

When I step back and think about Flickr, its most important contribution to the world of APIs was all about the resources it made available. Flickr was the original image sharing API, powering the growing blogosphere at the beginning of this century. Flickr gave us a simple interface for humans in 2004, and an API for other applications just six months later, which provided us all with a place to upload the images we would be using across our storytelling on our blogs. Providing the API resources that we would need to power the next decade of storytelling via our blogs, but also setting into motion the social evolution of the web, demonstrating that images were an essential building block of doing business on the web, and in just a couple of years, on the new mobile devices that would become ubiquitous in our lives.

Flickr was an important API resource, because it provided access to an important resource–our images. The API allowed you to share these meaningful resources on your blog, via Facebook and Twitter, and anywhere else you wanted. In 2005, this was huge. At the time, I was working to make a shift from being a developer lead, to playing around with side businesses built using the different resources that were becoming available online via simple web APIs. Flickr quickly became a central actor in my digital resource toolbox, and I was using it regularly in my work. As an essential application, Flickr quickly got out of my way by offering an API. I would still use the Flickr interface, but increasingly I was just publishing images to Flickr via the API, and embedding them in blogs, and other marketing, doing what we began to call social media marketing, and eventually what I would rebrand as API Evangelist, making it more about the tooling I was using than the task I was accomplishing.

After thinking about Flickr as a core API resource, I always think next about the stories I’ve told about Flickr’s Caterina Fake, who coined the phrase “business development 2.0”. As I tell it, back in the early days of Flickr, the team was getting a lot of interest in the product, and was unable to respond to all the emails and phone calls. They simply told people to build on their API, and if they were doing something interesting, Flickr would know, because they had the API usage data. Flickr was going beyond the tech and using an API to help raise the bar for business development partnerships, putting the burden on the integrator to do the heavy lifting, write the code, and even build the user base, before you’d get the attention of the platform. If you were building something interesting, and getting the attention of users, the Flickr team would be aware of it because of their API management tooling, and they would reach out to you to arrange some sort of partner relationship.

It makes for a good story. It resonates with business people. It speaks to the power of doing APIs. It also enjoys a position which omits so many of the negative aspects of doing startups, which become too easy to look away from when you are just focused on the tech as a technologist, or, as a business leader, after the venture capital money begins flowing. Business development 2.0 has a wonderful libertarian, pull yourself up by your bootstraps ring to it. You make valuable resources available, and smart developers will come along and innovate! Do amazing things you never thought of! If you build it, they will come. Which all feeds right into the sharecropping and exploitation that occurs within ecosystems, leading to less than ethical API providers poaching ideas, and thinking that it is ok to push public developers to work for free on their farm. Resulting in many startups seeing APIs as simply a free labor pool, and a source of free road map ideas, manifesting concepts like the “cloud kill zone”. Business development 2.0 baby!!

Another dimension of this illness we like to omit is around the lack of a business model. I mean, the shit is free! Why would we complain about free storage for all our images, with a free API? It is easier for us to overlook the anti-competitive approaches to pricing, and complain down the road when each acquisition of the real product (Flickr) occurs, than it is to resist companies who lack a consumer level business model, simply because we are all the product. Flickr, Twitter, Facebook, Gmail, and other tools we depend on are all free for a reason. They are market creating services, and revenue is being generated at other levels out of our view as consumers, or API developers. We are just working on Maggie’s Farm, and her pa is reaping all the benefit. When it comes to Flickr, Maggie and her pa cashed out a long time ago, and the farm keeps getting sold and resold, all while we still keep working away in the soil, giving them our digital bits that we’ve cultivated there, until conditions finally become unacceptable enough to run us off.

I began moving off of Flickr a couple of years ago. I stopped using them for blog photo hosting in 2010. I stopped uploading photos there regularly over the last couple of years. The latest crackdown doesn’t mean much to me. It will impact my storytelling to potentially lose such an amazing resource of openly licensed photos. However, I’ve saved each photo I use, and its attribution, locally–hopefully my attribution links don’t begin to 404 at some point. Hopefully other openly licensed photo collections emerge on the horizon, and ideally SmugMug doesn’t do away with the openly licensed treasure trove they are stewards of now. The latest acquisition and business model shift occurring across the Flickr platform doesn’t hit me too hard, but the situation does give me an opportunity to step back and reassess my API storytelling, and the role that Flickr plays in my API Evangelist narrative. Giving me another opportunity to eliminate bullshit and harmful myths from my storytelling and myth making–which I feel is getting pretty close to leaving me with nothing left to tell when it comes to APIs.

In the end, if I just focus purely on the tech, and ignore the business and politics of APIs, I can keep telling these bullshit stories. This is the real Flickr lesson for me. I’d say there are two reasons we perpetuate stories like this. One, “because we just didn’t know any better”. Which is pretty weak. Two, it is how capitalism works. It is why us dudes, especially us white dudes, thrive so well in a Silicon Valley tech libertarian world, because this type of myth making benefits us, even when it repeatedly sets us up for failure. This is one of the things that makes me throw up a little (a lot) in my mouth when I think about the API Evangelist persona I’ve created. This entire reality makes it difficult for me to keep doing this API Evangelist theater each day. APIs are cool and all, but when they are wielded as part of this larger money driven stream of consciousness, we (individuals) are always going to lose. In the end, why the fuck do I want to be a mouthpiece for this kind of exploitation. I don’t.

Photo Credit: Kin Lane (The First Photo I Uploaded to Flickr)


The Impact Of Travel On Being The API Evangelist

Travel is an important part of what I do. It is essential to striking up new relationships, and reinforcing old ones. It is important for me to get out of my bubble, expose myself to different perspectives, and see the world in different ways. I am extremely grateful for the ability to travel around the US, and the world, the way that I do. I am also extremely aware of the impact that travel has on me being the API Evangelist–the positive, the negative, and the general shift in the tone of my storytelling after roaming the world.

One of the most negative impacts that traveling has on my world is on my ability to complete blog posts. If you follow my work, when I’m in the right frame of mind, I can produce 5-10 blog posts across the domains I write for, on a daily basis. The words just do not flow in the same way when I am on the road. I’m not in a storyteller frame of mind. At least in the written form. When I travel, I exist in a more physical and verbal sense as the API Evangelist, something that doesn’t always get translated into words on my blog(s). This is something that is ok for short periods of time, but after extended periods of time on the road, it is something that will begin to take a toll on my overall digital presence.

After the storytelling impact, the next area to suffer when I am on the road is my actual project work. I find it very difficult to write code, or think at architectural levels, while on the road. I can flesh out and move forward smaller aspects of the projects I’m working on, but because of poor Internet, packed schedules, and the logistics of being on the road, my technical mind always suffers. This is something that is also related to the impact on my overall storytelling. Most of the stories I publish on a daily basis evolve out of me moving forward actual projects as part of my API Evangelist work. If I am not actually developing a strategy, designing a specific API, or working on API definitions, discovery, governance, or one of the loftier aspects of my work, the chances I’m telling interesting stories will be significantly diminished.

Once I land back home, one of the first orders of business is to unclog the pipes with a “travel is hard” story. ;-) Pushing my fingers to work again. Testing out the connections between my brain and my fingers. While I also open up my IDE, command line, and API universe dashboard, and begin refining my paper notes about what the fuck I was actually doing before I got on that airplane. Making it all work again is tough. Even the simplest of tasks seem difficult, and many of the projects I’m working on just seem too big to even know where to begin. However, with a little effort, focus, and the lack of a plane, train, or meeting to be present for, I’ll find my way forward again, slowly picking back up the momentum I enjoy as the API Evangelist. Researching, coding, telling stories, and pushing forward my projects so that they can have an impact on the space, and continue paying the bills to keep this vessel moving forward in the direction that I want.


What Are Your Enterprise API Capabilities?

I spend a lot of time helping enterprise organizations discover their APIs. All of the organizations I talk to have trouble knowing where all of their APIs are–even the most organized of them. Development and IT groups have just been moving too fast over the last decade to know where all of their web services and APIs are. Resulting in large organizations not fully understanding what all of their capabilities are, even when those capabilities are actively operated, and may be driving existing web or mobile applications.

Each individual API within the enterprise represents a single capability. The ability to accomplish a specific enterprise task that is valuable to the business. While each individual engineer might be aware of the capabilities present on their team, without group wide, comprehensive API discovery across an organization, the extent of the enterprise capabilities is rarely known. If architects, business leadership, and other stakeholders can’t browse, list, search, and quickly get access to all of the APIs that exist, the knowledge of the enterprise capabilities cannot be quantified or articulated as part of regular business operations.

In 2018, the capabilities of any individual API are articulated by its machine readable definition. Most likely OpenAPI, but it could also be something like API Blueprint, RAML, or another specification. For these definitions to speak to not just the technical capabilities of each individual API, but also the business capabilities, they will have to be complete. Utilizing a higher level strategic set of tags that help label and organize each API into a meaningful set of business capabilities that best describes what each API delivers. Providing a sort of business capabilities taxonomy that can be applied to each API’s definition and used across the rest of the API lifecycle, but most importantly as part of API discovery, and the enterprise digital product catalog.
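
To illustrate, here is a minimal sketch of what applying that kind of taxonomy looks like within an OpenAPI definition–the path and tag values are hypothetical, standing in for whatever business capabilities taxonomy an enterprise settles on:

paths:
  /invoices:
    get:
      summary: List invoices
      tags:
        - Accounts Receivable
        - Customer Billing
      responses:
        '200':
          description: A list of invoices

With tags applied consistently, a catalog can be grouped, browsed, and searched by business capability, rather than just by technical resource.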

One of the first things I ask any enterprise organization I’m working with upon arriving is, “do you know where all of your APIs are?” The answer is always no. Many will have a web services or API catalog, but it is almost always out of date, and not used religiously across all groups. Even when there are OpenAPI definitions present in a catalog, they rarely contain the metadata needed to truly understand the capabilities of each API. Leaving developer and IT operations existing as black holes when it comes to enterprise capabilities, sucking up resources, but letting very little light out when it comes to what is happening on the inside. Making it very difficult for developers, architects, and business users to articulate what their enterprise capabilities are, which often times leads to reinventing the wheel when it comes to what the enterprise delivers on the ground each day.


Join Me For A Fireside Chat At The Paris API Meetup This Wednesday

I am in Europe for most of October, and while I am in Paris we thought it would be a good idea to pull together a last minute API Meetup. Romain Simiand (@RomainSimiand), the API Evangelist at PeopleDoc, was gracious enough to help pull things together, and the Streamdata.io team is stepping up to help with food and drink. Pulling together a last minute gathering at PeopleDoc in Paris, and bringing me on stage to talk about the technology, business, and politics of APIs, as well as about some of my recent work on API discovery, and event-driven architecture.

You can find more details on the Paris API Meetup site, with directions on how to find PeopleDoc. Make sure you RSVP so that we know you are coming, and of course, please help spread the word. We have over 30 people attending so far, but I think we can do better. I’m happy to get on stage and help drive the API discussion, but I’d prefer to have a healthy representation of the Paris API community asking questions, helping me understand what is happening across the area when it comes to APIs. I always have plenty of knowledge to share, but it becomes exponentially more valuable when people on the ground within communities are asking questions, and making it relevant to what is happening within the day to day operations of companies in the local area.

While I enjoy doing conference keynotes and panels, my favorite format of event is the Meetup. Bringing together less than 100 people to have a discussion about APIs. I always find that I learn the most in this environment, and am able to actually engage with developers and business folks about what really matters when it comes to APIs. The larger the audience, the more it is just about me broadcasting my message, and when it is a smaller and more intimate venue, I feel like I can better connect with people. In my opinion, this is how all API events should be–small, intimate, and a real world conversation about APIs, ensuring that all participants feel like they are actually part of the conversation, not just an API pundit pushing their thoughts out.

If you are in the Paris region, or can make the time to hop on a plane or train and make it to Paris this Wednesday, I’d love to hang out. If you can’t make it, I’ll be back for API Days Paris in December, but it will be a bigger event, and it might be more difficult to carve out the time to hang. So, bring your API questions, and come over to the PeopleDoc office this Wednesday, and we’ll have a proper discussion about the technology, business, or politics of APIs. Helping drive the API discussion going on in France, and continuing to push it forward. Making France a leader when it comes to doing business in the growing API economy. I look forward to seeing you all in Paris this week!


I Participated In An API Workshop With The European Commission Last Week

I was in Ispra, Italy last week for a two day workshop on APIs with the European Commission. The European Commission’s DG CONNECT, together with the Joint Research Centre (JRC), launched a study with the purpose of gaining further understanding of the current use of APIs in digital government and their added value for public services, and they invited me to participate. I was joined by Mehdi Medjaoui (@medjawii), David Berlind (@dberlind), and Mark Boyd (@mgboydcom), along with EU member states, and European cities, to help provide feedback and strategies for consideration by the commission.

This European Commission study is looking at “innovative ways to improve interconnectivity of public services and reusability of public sector data, including dynamic data in real-time, safeguarding the data protection and privacy legislation in place.” Looking to:

  • assess the digital government API landscape and opportunities to support the digital transformation of the public sector
  • identify the added value for society and public administrations of digital government APIs (key enablers, drivers, barriers, potential risks and mitigations)
  • define a basic Digital Government API EU framework and the next steps

David Berlind from ProgrammableWeb gave a couple of talks, with myself, Mehdi, and Mark following up. The rest of the time was spent hearing presentations from EU member states, and other municipal efforts–learning more about the successes and the challenges they face. What I heard reflected what I’ve experienced in federal government, as well as the city, county, and state level API efforts I’ve participated in across the United States. All groups were struggling to win over leaders and the public, modernize legacy systems, build on top of open data efforts, and push forward the conversation using a modern approach to delivering web APIs.

I am eager to see what comes out of the European Commission API project. While there are still interesting things happening in the United States, I feel like there is an opportunity for the EU to leapfrog us when it comes to meaningful API adoption within government. While many cities, counties, and states are still investing in open data and APIs, the investment at the federal level has stagnated with the current administration. There are still plenty of agencies moving the API conversation forward, but the leadership is coming from the GSA, and from within individual agencies, not from the executive branch. What is happening at the European Commission has the potential to be adopted by all the countries in the European Union, and to make a pretty significant impact in how government works using APIs.

I’ll be staying in touch with the group leading the effort, and making myself available for future gatherings. There was talk of holding another gathering at API Days in Paris, and I am sure there will be further workshops as the project evolves. Clearly the European Commission has a huge amount of work ahead of them, but the fact that they are coming together like this, and highlighting, as well as learning from, the existing work going on across the member states, shows significant promise. As we were wrapping up, I made clear the importance of continued storytelling between the member states, as well as out of the European Commission. Emphasizing that it will take a regular drumbeat of activity, and sharing of the work in real-time, for all of this to evolve as they desire. However, with the right cadence, the API effort out of Europe could make a pretty significant impact across the EU, and beyond.


API Evangelist API Lifecycle Workshop on API Discovery

I’ve been doing more workshops on the API lifecycle within enterprise groups lately. Allowing me to refine my materials on the ground within enterprise groups, and further flesh out the building blocks I recommend to API groups to help them craft their own API strategy. One of the first discussions I start with large enterprise groups is always in the area of API discovery, commonly asked as, “do you know where all your APIs are?”

EVERY group I’m working with these days is having challenges when it comes to easy discovery across all the digital resources they possess, and put to use on a daily basis. I’m working with a variety of companies, organizations, institutions, and government agencies when it comes to the API discovery of their digital self:

  • Low Hanging Fruit (outline) - Understanding what resources are already on the public websites, and applications, by spidering existing domains looking for data assets that should be delivered as API resources.
  • Discovery (outline) - Actively looking for web services and APIs that exist across an organization, industry, or any other defined landscape. Documenting, aggregating, and evolving what is available about each API, while also publishing back out and making available relevant teams.
  • Communication (outline) - Having a strategy for reaching out to teams and engaging with them around API discovery, helping the remember to register and define their APIs as part of wider strategy.
  • Definitions (outline) - Work to make ensure that all definitions are being aggregated as part of the process so that they can be evolved and moved forward into design, development and production–investing in all of the artifacts that will be needed down the road.
  • Dependencies (outline) - Defining any dependencies that are in play, and will play a role in operations. Auditing the stack behind any service as it is being discovered and documented as part of the overall effort.
  • Support (outline) - Ensure that all teams have support when it comes to questions about web service and API discovery, helping them initially, as well as along the way, making sure all their APIs are accounted for, and indexed as part of discovery efforts.
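
To make the low hanging fruit stop more concrete, here is a minimal sketch of what spidering a domain for data assets might look like. The domain, the file extensions, and the crawl limit are all assumptions for illustration, not a definitive implementation.

```python
# A minimal sketch of the "low hanging fruit" discovery step: spider a
# domain looking for data assets (CSV, XML, JSON, XLS) that could be
# delivered as API resources. Domain and extensions are assumptions.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

DATA_EXTENSIONS = (".csv", ".xml", ".json", ".xls", ".xlsx")

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(start_url, max_pages=50):
    """Crawl pages on the start URL's domain, collecting data asset links."""
    domain = urlparse(start_url).netloc
    queue, seen, assets = [start_url], set(), set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="ignore")
        except Exception:
            continue
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.lower().endswith(DATA_EXTENSIONS):
                assets.add(absolute)  # candidate for delivery as an API
            elif urlparse(absolute).netloc == domain:
                queue.append(absolute)
    return assets

if __name__ == "__main__":
    # example.gov is a hypothetical domain, swap in your own
    for asset in sorted(spider("https://example.gov")):
        print(asset)
```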

API discovery will positively or negatively impact the rest of the API lifecycle at any organization. Not knowing where all of your resources are, and not having them properly defined for discovery at critical design, development, production, and integration moments, is an illness all companies are suffering from in 2018. We’ve deployed layers of services to deliver on enterprise growth, and put down a layer of web APIs to service the web, mobile, and increasingly device-based applications we’ve been delivering. The result is a tangled web of services that we need to tame before we can move forward properly.

Let me know if you need help with API discovery where you work. It is the fastest growing aspect of my API workshop portfolio. Aside from security, I feel like API discovery is the biggest challenge facing large enterprise groups learning to be more agile, flexible, and pushing forward with a microservices, and event-driven way of doing business. I definitely don’t have all the solutions when it comes to API discovery, but I do have a lot of experience to share around how we are defining enterprise capabilities and resources, and making them more discoverable across an entire API catalog.


API Evangelist API Lifecycle Workshop on API Design

I’ve been doing more workshops on the API lifecycle within enterprise groups lately. These workshops allow me to refine my materials on the ground within enterprise groups, and further flesh out the building blocks I recommend to API teams as they craft their own API strategy. One area of the API lifecycle I find more groups working on these days centers around a design-first approach to the API lifecycle.

While not many groups I work with have achieved a design-first approach to doing APIs, almost all of them express interest in making this a reality, at least within some groups or projects. Being able to define, design, mock, and iterate upon an API contract before code gets written is very appealing to enterprise API groups, and I’m looking to help them think through this part of their API lifecycle, and work towards making API design first a reality at their organization.

  • Definition (outline) - Using definitions as the center of the API design process, developing an OpenAPI contract for moving things through the design phase, iterating, evolving, and making sure the definitions drive the business goals behind each service.
  • Design (outline) - Considering the overall approach to design for all APIs, executing upon design patterns that are in use to consistently deliver services across teams. Leveraging a common set of patterns that can be used across services, beginning with REST, but eventually allowing the leveraging of hypermedia, GraphQL, and other patterns when it comes to the delivery of services.
  • Versioning (outline) - Managing the definition of each API contract as it moves through the design stop of the lifecycle, and having a coherent approach to laying out next steps.
  • Virtualization (outline) - Providing mocked, sandbox, and virtualized instances of APIs and other data for understanding what an API does, delivering an instance of an API that reflects exactly how it should behave in a production environment (see the sketch after this list).
  • Testing (outline) - Going beyond ad hoc testing, and making sure that a service is being tested at a granular level, using schema for validation, and making sure each service is doing exactly what it should, and nothing more.
  • Landing Page (outline) - Making sure that each individual service being designed has a landing page for accessing its documentation, and other elements during the design phase.
  • Documentation (outline) - Ensuring that there is always comprehensive, up to date, and if possible interactive API documentation available for all APIs being designed, allowing all stakeholders to easily understand what an API is going to accomplish.
  • Support (outline) - Ensuring there are support channels available for an API, and stakeholders know who to contact when providing feedback and answering questions in real, or near real time, pushing forward the design process.
  • Communication (outline) - Making sure there is a communication strategy for moving an API through the design phase, and making sure stakeholders are engaged as part of the process, with regular updates about what is happening.
  • Road Map (outline) - Providing a list of what is being worked on with each service being designed, and pushed forward, providing a common list for everyone involved to work from.
  • Discovery (outline) - Make sure all APIs are discoverable after they go through the design phase, ensuring each type of API definition is up to date, and catalogs are updated as part of the process.
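
To make the virtualization stop more concrete, here is a minimal sketch that serves mocked responses straight from an OpenAPI contract’s documented examples, before any code gets written. The contract file name and the use of response example values are assumptions for illustration, not a prescribed tool.

```python
# A minimal sketch of mocking an API from its OpenAPI contract, so
# stakeholders can iterate on a design before any code is written.
# The contract file name (openapi.json) and the reliance on response
# "example" values are assumptions for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

with open("openapi.json") as f:
    contract = json.load(f)

# Map each GET path to the example documented for its 200 response.
mocks = {}
for path, operations in contract.get("paths", {}).items():
    response = operations.get("get", {}).get("responses", {}).get("200", {})
    example = (response.get("content", {})
                       .get("application/json", {})
                       .get("example"))
    if example is not None:
        mocks[path] = example

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = self.path.split("?")[0]
        if path in mocks:
            body = json.dumps(mocks[path]).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "No mock defined for this path")

if __name__ == "__main__":
    HTTPServer(("localhost", 8010), MockHandler).serve_forever()
```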

I currently move my own APIs forward in this way using a variety of open source tooling, and GitHub. I’m working with some groups to do this in Stoplight.io, as well as Postman. I don’t think there is any “right way” to go API define and design first. I’m here to educate teams about what is going on out there, share some of the services and tools that help enable an API design first reality, and talk through the technological, business, and political challenges that are preventing a team, or an entire enterprise group, from becoming API design first.

Let me know if you need help thinking through the API design strategy where you work. I’ve been studying this area since it emerged as a discipline in 2012, led by API service providers like Apiary, and continued by other next generation platforms like Stoplight.io, APIMATIC, and others. For me, API design is less about REST vs Hypermedia vs GraphQL, and more about the lifecycle, services, tooling, and API definitions you use. I’m happy to share my view of the API design landscape with your group, just let me know how I can help.


The Layers Of Completeness For An OpenAPI Definition

Everyone wants their OpenAPIs to be complete, but what that really means will depend on who you are, what your knowledge of OpenAPI is, and your motivation for having an OpenAPI in the first place. I wanted to take a crack at articulating a complete (enough) definition for the OpenAPIs I create, based upon what I need them to do.

  • Info & Base - Give the basic information I need to understand who is behind an API, and where I can access it.
  • Paths - Provide an entry for every path that is available for an API, and that should be included in this definition.
  • Parameters - Provide a complete list of all path, query, and header parameters that can be used as part of an API.
  • Descriptions - Flesh out descriptions for all of the paths and parameters, helping describe what an API does.
  • Enums - Publish a list of all the enumerated values that are possible for each parameter used as part of an API.
  • Definitions - Document the underlying schema being returned by creating a JSON schema definition for the API.
  • Responses - Associate the definition for the API with the path using a response reference, connecting the dots regarding what will be returned.
  • Tags - Tag each path with a meaningful set of tags, describing what resources are available in short, concise terms and phrases.
  • Contacts - Provide contact information for whoever can answer questions about an API, and provide a URL to any support resources.
  • Create Security Definitions - Define the security for accessing the API, providing details on how each API request will be authenticated.
  • Apply Security Definitions - Apply the security definitions to each individual path, associating common security definitions across all paths.
  • Complete (enough) - That should give us a complete (enough) API description.
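
To make these layers tangible, here is a minimal sketch of what they might look like stacked together in a single OpenAPI definition, expressed as a Python dictionary so it can be dumped to JSON or YAML. The API, paths, and schema are entirely hypothetical.

```python
# A stripped-down OpenAPI definition with each completeness layer
# present: info, paths, parameters, descriptions, enums, definitions,
# responses, tags, contacts, and security. The API is hypothetical.
import json

openapi = {
    "openapi": "3.0.0",
    "info": {
        "title": "Example Observations API",             # info & base
        "description": "Returns observation records.",
        "contact": {"name": "API Support",                # contacts
                    "url": "https://example.com/support"},
        "version": "1.0.0",
    },
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/observations": {                                # paths
            "get": {
                "summary": "List observations",           # descriptions
                "tags": ["observations"],                 # tags
                "parameters": [{                          # parameters
                    "name": "status",
                    "in": "query",
                    "schema": {"type": "string",
                               "enum": ["active", "archived"]},  # enums
                }],
                "responses": {                            # responses
                    "200": {
                        "description": "A list of observations.",
                        "content": {"application/json": {"schema": {
                            "type": "array",
                            "items": {"$ref":
                                "#/components/schemas/Observation"},
                        }}},
                    }
                },
                "security": [{"apiKey": []}],             # apply security
            }
        }
    },
    "components": {
        "schemas": {                                      # definitions
            "Observation": {
                "type": "object",
                "properties": {"id": {"type": "string"},
                               "value": {"type": "number"}},
            }
        },
        "securitySchemes": {                              # create security
            "apiKey": {"type": "apiKey", "in": "header",
                       "name": "X-API-Key"}
        },
    },
}

print(json.dumps(openapi, indent=2))
```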

Obviously there is more we can do to make an OpenAPI even more complete and precise as a business contract, hopefully speaking to both developers and business people. Having OpenAPI definitions is important, and having them be up to date, complete (enough), and useful is even more important. OpenAPIs provide much more than documentation for an API. They provide all the technical details an API consumer will need to successfully work with an API.

While there are obvious payoffs for having an OpenAPI, like being able to publish documentation and generate code libraries, there are many other uses, like loading it into Postman, Stoplight, and the many other API services and tooling that help developers understand what an API does, and reduce friction when they integrate with and maintain their applications. Having an OpenAPI available is becoming a default mode of operation, and something every API provider should have.


A Quick Manual Way To Create An OpenAPI From A GET API Request

I have numerous tools that help me create OpenAPIs from the APIs I stumble across each day. Ideally I’m crawling, scraping, harvesting, and auto-generating OpenAPIs, but inevitably the process gets a little manual. To help reduce friction in these manual processes, I try to have a variety of services, tools, and scripts I can use to make my life easier when it comes to creating a machine readable definition of an API I am using–in this scenario it is the xignite CloudAlerts API.

One way I’ll create an OpenAPI from a simple GET API request, providing me with a machine readable definition of the surface area of that API, is using Postman. When you have the URL copied onto your clipboard, open up Postman, and paste the URL with all the query parameters present.

You’ll have to save your API request, and add it to a collection, but then you can choose to share the collection, and retrieve the URL for this specific request’s Postman Collection.

This gives you a machine readable definition of the surface area of this particular API, defining the host, baseURL, path, and parameters, but it doesn’t give you any detail about the underlying schema being returned. To begin crafting the schema for the underlying definition of the API, and connect it to the response for my API definition, I’ll need an OpenAPI–which I can create from my Postman Collection using API Transformer from APIMATIC.

After pasting the URL for the Postman Collection into the API Transformer form, you can generate an OpenAPI from the definition. Now you have an OpenAPI, except it is missing the underlying schema, so I just grab the response from my last request and convert it into JSON Schema using JSONSchema.net. I only need the properties section of the output, as the definitions portion at the bottom of the OpenAPI specification is just JSON Schema.

I can merge my JSON Schema with my OpenAPI, adding it to the definitions collection at the bottom. With a little more love, adding a more coherent title and description, and fluffing up some of the summaries, descriptions, tags, etc., I now have a fairly robust profile of this particular API. Ideally, this is something the API provider would do, but in the absence of an OpenAPI or Postman Collection, this is a pretty quick and dirty way to produce both from a simple GET API–and the formula works for other types of API requests as well–leaving me with a machine readable definition for an API I will be integrating with.
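
For those who want to skip the manual steps, this little process can also be scripted. Here is a rough sketch that takes a GET URL and a sample JSON response, and produces an OpenAPI skeleton, inferring parameters from the query string and a rough schema from the response. The URL and sample response are placeholders, not the actual xignite CloudAlerts endpoint.

```python
# A sketch of the manual process scripted: turn a GET request and its
# JSON response into an OpenAPI skeleton. The URL and sample response
# are placeholders, not the actual xignite CloudAlerts API.
import json
from urllib.parse import urlparse, parse_qs

def infer_schema(value):
    """Roughly infer JSON Schema from a sample response value."""
    if isinstance(value, bool):
        return {"type": "boolean"}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    if isinstance(value, list):
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, (int, float)):
        return {"type": "number"}
    return {"type": "string"}

def openapi_from_get(url, sample_response):
    parsed = urlparse(url)
    parameters = [{"name": name, "in": "query",
                   "schema": {"type": "string"}}
                  for name in parse_qs(parsed.query)]
    return {
        "openapi": "3.0.0",
        "info": {"title": "Generated API", "version": "1.0.0"},
        "servers": [{"url": f"{parsed.scheme}://{parsed.netloc}"}],
        "paths": {parsed.path: {"get": {
            "parameters": parameters,
            "responses": {"200": {
                "description": "Successful response.",
                "content": {"application/json": {
                    "schema": infer_schema(sample_response)}}}},
        }}},
    }

if __name__ == "__main__":
    url = "https://api.example.com/v1/alerts?symbol=AAPL&type=price"
    sample = {"alerts": [{"id": "abc123", "symbol": "AAPL",
                          "price": 187.5}]}
    print(json.dumps(openapi_from_get(url, sample), indent=2))
```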

There are definitely other ways of scraping API documentation, like processing .HAR files generated from a proxy, but I think this is a way that anyone, even a non-developer, can accomplish. I did my version in JSON, but the same process will work for YAML, making the resulting definition a little more human readable, while still maintaining its machine readability. I like documenting these little processes so that my readers can put them to use, but it also provides a nice definition that I can use to remember how I get things done–my memory isn’t what it used to be.

The resulting API definitions from this process are:

  • OpenAPI
  • [Postman Collection](https://www.getpostman.com/collections/02a6658c25631d716407)

API Evangelist And Streamdata.io API Lifecycle Workshops

I have been partnering with Streamdata.io to evolve how I work with enterprise groups on their API lifecycle strategy. After working closely with the Streamdata.io sales team, it became clear that many enterprise organizations weren’t quite ready for the event-driven infrastructure Streamdata.io provides. Most groups were in desperate need of stepping back and developing their own formal strategy for delivering APIs across the enterprise, before they could ever scale their operations and take advantage of things being more event-driven and real time.

In response, I set out to evolve my own API lifecycle research, gathered over the last eight years of studying the API space, and make it more accessible to the enterprise, as self-service short form and long form content, in-person workshops, and forkable blueprints that any enterprise can set in motion on their own. The result is a series of evolvable API projects that we are using to drive our ongoing workshop engagements with enterprise API groups, focusing in on five critical areas of the API lifecycle:

  • Discovery (demo) - Knowing where all of your APIs and services are across groups.
  • Design (demo) - Focusing in on a design-first and virtualized API lifecycle before deployment.
  • Development (demo) - Understanding the many ways in which APIs can be developed & deployed.
  • Production (demo) - Thinking critically about properly operating API infrastructure.
  • Governance (demo) - Understanding how to measure, report, and evolve API operations.

Not all of our workshops will cover all of these areas. Depending on the time we have available, the scope of the team participating in a workshop(s), and how far along teams are in their overall API journey, the workshops might head in different directions, and expand or contract the depth in which we dive into each of these areas (i.e., not everyone is ready for governance). After several workshops this year, we have found these areas of the API lifecycle to be the most meaningful ways to organize a workshop, and help enterprise groups think more critically about their API strategy.

Craft An API Strategy For Your Enterprise
The purpose of our API lifecycle workshops is to help enterprise organizations develop a strategy. We bring in outside API knowledge, learn more about where an enterprise API group is in their overall API journey, and leave them with a structured artifact that helps them step back and look at the entire lifecycle of their APIs. We move the API conversation across the enterprise forward in a meaningful way with four distinct actions:

  • Starter API Lifecycle Strategy - Provide a template API lifecycle strategy in GitHub or GitLab as a README and YAML file, providing a framework to consider as you craft your own API strategy, and a starting point for your journey. Generated from eight years of research on the API space, it provides a living document that can be used to execute and evolve the overall API lifecycle strategy for an enterprise organization.
  • API Lifecycle Workshop - Schedule and conduct a single, or multi-day API lifecycle workshop on-site, with as many enterprise and / or partner stakeholders as possible. We will come on site, and walk teams through each stop along a modern API lifecycle, helping customize, personalize, and make the API strategy better fit the enterprise’s strategy.
  • Evolve API Lifecycle Strategy - Coming out of the workshop, you will be given an updated API lifecycle YAML document. Providing a human and machine readable framework that represents your API lifecycle strategy, helping provide a scaffolding for future discussions. Producing a usable artifact out of the gathering, encapsulating the research and experience we bring to the table, adding what we learned during the workshop, and hopefully continually being used to drive the API strategy road map.
  • Provide Execution & Support - After the workshop is done, and we have provided an updated API lifecycle, we are still here to support. We can answer questions via the repository where we leave the API strategy artifact, as well as over email. We are happy to conduct virtual meetings to help check in on where you are at, and of course we are happy to always come back and conduct future workshops as you need.

We are happy to continue the conversation around the API lifecycle artifact we will leave with you. We don’t expect you to use everything we propose. We are more interested in teaching you about what is possible, and continuing to work with you to refine, evolve, and make the API lifecycle your own. We’ve worked hard to identify many different ways of operating API infrastructure at scale, and continue to help standardize this knowledge and make it more accessible to large enterprise organizations.

Helping You On Your Journey
The resulting API lifecycle strategy we leave behind after the workshop is done will possess all the knowledge we’ve aggregated across API research, gathered across leading API providers, and polished by conducting API workshops within the enterprise. Embedded within this API lifecycle artifact we’ll leave you with some added elements that will help you in your journey, going beyond just advice on process, and helping the rubber meet the road.

  • Links - Provide a wealth of references to external resources, attached to each stop along the API lifecycle, bringing our API research into your organization, and allowing you to put it to use inline as you are building your API strategy.
  • Services - Embedding links to API services that you are already using, and introducing you to other useful API services along the way. Making sure specific services are associated with each stop along the API lifecycle, across the different areas, and even sub-linking to specific features that can help accomplish a specific aspect of API operations.
  • Tooling - Embedding links to open source API tools, specifications, and other solutions that can be used to accomplish a specific aspect of operating an API platform. Bringing in open source solutions that can be considered as you are crafting the API strategy for your enterprise organization.

While not all organizations will be ready to use a YAML API lifecycle artifact as part of their API orchestration, it helps to have the API lifecycle well defined, even if many steps are still manual. It helps teams think more critically about how they approach the delivery of APIs, while also being something that can be downloaded, forked, and reused by different groups across the enterprise. Eventually it is something that can be further automated, measured, and used to help quantify the maturity level of each API, as well as APIs across distributed teams.
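
To give a sense of what that artifact can enable, here is a minimal sketch that loads a hypothetical lifecycle YAML file and reports which stops each API has completed. The file layout is an assumption for illustration, not the actual blueprint format we use.

```python
# A sketch of putting an API lifecycle YAML artifact to work: load a
# hypothetical lifecycle file and report which stops each API has
# completed. The layout is an assumption, not the actual format.
import yaml  # PyYAML

LIFECYCLE_YAML = """
apis:
  - name: observations
    stops:
      definitions: done
      design: done
      virtualization: in-progress
      testing: not-started
  - name: alerts
    stops:
      definitions: done
      design: in-progress
"""

lifecycle = yaml.safe_load(LIFECYCLE_YAML)

for api in lifecycle["apis"]:
    stops = api["stops"]
    done = [s for s, status in stops.items() if status == "done"]
    maturity = len(done) / len(stops)
    print(f"{api['name']}: {maturity:.0%} of tracked stops complete "
          f"({', '.join(done) or 'none'} done)")
```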

Next Steps For Developing A Strategy
If you are interested in what we are offering with our API lifecycle workshops, there are a few things you can do to get started, and help us bring an API lifecycle workshop your way:

  • Email Me - Happy to answer any questions, get you the information you need to sell the workshop to your team, and get you on the calendar.
  • Take Survey - Take a quick survey about your operations, helping us tailor a workshop for your needs.
  • Demo Workshop - Take a stroll around one of our demos that we use in our workshop, introducing you to what we’ll be discussing.

While I work from a common set of workshop material when designing these workshops, I work to tailor each engagement for the company, organization, institution, or government agency I’m working with. All of my API lifecycle workshop blueprints are meant to be forkable, customizable, and something anyone can turn into their own working API lifecycle strategy.


API Developer Outreach Research For The Department of Veterans Affairs

This is a write-up for research I conducted with my partner Skylight Digital. The team conducted a series of interviews with leading public and private sector API platforms regarding how they approached developer outreach, and then I wrote it up as a formal report, which the Skylight Digital team then edited and polished. We are looking to provide as much information as possible regarding how the VA, and other federal agencies, should consider crafting their API outreach efforts.

This is Skylight’s report for the U.S. Department of Veterans Affairs (VA) microconsulting project, “Developer Outreach.” The VA undertook this project in order to better understand how private- and public-sector organizations approach Application Programming Interface (API) developer outreach.

In preparing this report, we drew on nearly a decade’s worth of our own API developer outreach expertise, as well as information obtained through interviews with seven different organizations. For each interview, we followed an interview script (see Appendix A) and took notes. The Centers for Medicare & Medicaid Services (CMS) Blue Button API program (see Appendix B), the Census Bureau (see Appendix C), the OpenFEC program (see Appendix D), and Salesforce (see Appendix E) all agreed to releasing our notes publicly. The other three organizations (a large social networking site, a government digital services delivery group, and a cloud communications platform) preferred to keep them private.

We structured this report to largely reflect the interview conversations that we held with people who are involved in developer outreach programs and activities. These conversations focused on the following questions:

  1. What is the purpose and scope of your API developer outreach program?

  2. What does success look like for you?

  3. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you are using to drive and sustain effective adoption of your API(s)?

  4. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

  5. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

  6. How is your outreach program structured and staffed? How did it start out and evolve over time?

  7. Imagine that you are charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

  8. What big ideas do you have for evolving your outreach program to make it even more effective?

If we were forced to distill all of the interviews and all of our existing information down to a single essential piece of advice, it would be this: involve the programmers who are going to be using your APIs. By “involve,” we mean:

  1. Engage them early.

  2. Support the users with documentation, hackathons, forums, and other types of engagement activities.

  3. Measure the happiness of the programmers who are using your APIs, as well as the number of times that they use them and how they use them.

  4. Prioritize your APIs so as to maximize the utility to would-be programmers.

In the pages below, you will find a large number of specific suggestions culled from extensive interviews and our collective personal experience. All of these specific techniques are in service to the idea of designing the API program with the programmers who will use the API in mind at all times.

Purpose and scope of developer outreach programs

What we learned

Different API programs have different purposes, and these purposes are fulfilled by varying levels of API outreach resources. Just as organizations have invested differing amounts of resources into their web presence over the last 25 years, the API providers we talked to are each at different stages in their API journey. This variety presented us with a range of informative stories concerning outreach programs.

The purpose behind APIs

Our interviews revealed a number of potential purposes for API development, several of which are presented below:

  • Build it and they will come: It is common for companies, organizations, institutions, and government agencies to launch an API effort with no specifically-stated purpose. They make resources available, build awareness amongst developers, and encourage innovation. While such organizations may not know what the future will hold, they believe that investing in their API platform in a sensible and pragmatic way will attract developers, and the inevitable innovative applications they will bring.

  • A focus on APIs as a product: Some interviewees we spoke to are fully invested in their API efforts, treating their APIs as a product in and of themselves. They develop formal programs around their API operations, treat API consumers as end customers, and consistently ensure their programs have the resources they need. In short, they treat API operations like a business and run them as efficiently as possible, even if commercial sales is not the end goal.

  • A focus on APIs as an enabler: Other interviewees we consulted focus on ensuring the availability of APIs as a means to support an existing web or mobile application; in a sense, the API is relegated to a secondary role relative to the primary application’s existence. APIs for these kinds of organizations serve to drive traffic, adoption, and integration when it comes to a larger vision, but the APIs themselves are simply enablers for the overall platform.

  • A focus on the developer: Beyond the API as a product — and the web and mobile apps they power — API efforts tend to focus on the developer. Development-focused APIs emphasize what developers will bring to the table when they have the resources and support they need, and are thus central to the investment and engagement necessary for successful outreach/development.

  • Attract unique entities: API platforms are often aimed at attracting the attention of interesting/progressive companies, universities, institutions, and other entities. These external entities can often put API resources to use in new and innovative ways, and in doing so, they can bring attention and potential partnerships to the platform.

  • Leverage the network effect: Some API providers we interviewed expressed an interest in joining a wider network of collaborative open data and API efforts. They felt that working in concert with others, by bringing together many different federal, state, or municipal agencies, would benefit the platforms. Further, they felt this would allow the API initiatives to be led by either policy or private interests.

  • Save developers time: Overall, the most positive motivation for API development expressed by existing providers was to streamline API onboarding and integration. Further development would help internal, partner, and 3rd party public developers be more successful in putting API resources to work in their web, mobile, and device applications.

The scope of API investment

The programs we spoke to each had differing levels of investment, access to resources, and primary purposes. Since the scope of an API investment is naturally a function of these factors, this meant that the organizations we interviewed had different intended scopes for their API projects. We have collated a selection of these scopes below.

  • Starting small: A common theme across the API operations we spoke with was the importance of starting small and building a stable foundation before attempting larger infrastructure development. Beginning with a basic, yet valuable set of API resources, fleshing out the entire lifecycle, and developing the processes necessary to be successful before scaling and increasing the scope of API operations was routinely described as a critical part of any API scope.

  • Invest as it makes sense: A lack of resources is a common element of slow growth and unrealized API operations. Bringing significant levels of investment to projects, while simultaneously applying appropriate human and technological capital to API operations, was universally mentioned by our interviewees as critical to seeing desired results.

  • Working with what you have: Almost all the API programs we spoke to worked in resource-starved environments. They wished to do more and to be more, but lacked the time, human investment, and technical resources to make it happen. This forced the organizations to work with what they had, making the most of their limited opportunities while hoping for eventual greater contributions.

  • A focus on improvement: All the API programs we interviewed expressed that they wanted to be doing more with their programs. They want to formalize their approach and increase their resource and labor investment in the areas in which they have seen success. Ultimately, their target scope was to focus not on just meeting expectations, but on moving towards excellence and mastery of what it takes to scale their API efforts.

Every API effort clearly has its own personality, and while there are common patterns, the environment, team, and amount of resources available appears to dictate much of the scope of developer outreach programs. However, the scope of any effort will always start small and move forward incrementally as confidence grows.

What our thoughts are

We learned a lot talking to API providers about the purpose and scope of their API operations. Our interviews also reinforced much of what we know about the API operations of leading providers like Amazon and Google. When it comes to the motivations for developing APIs, ensuring an appropriate level of investment and planning for future scalable growth are necessary steps in giving an API the best chance to succeed.

Augmenting what we learned from providers, we recommend focusing on the following areas in order to achieve API purposes, grow and scale API operations, and integrate API technology into the fabric of an organization’s overall operations.

  • APIs are a journey: Always view API operations as a journey and not a destination, setting expectations appropriately from the beginning. Do not raise the bar too high when it comes to achieving goals, reaching metrics, and getting the job done. API operations will always be an ongoing effort, with both wins and losses along the way.

  • Always start small: Reiterating what we researched in the API sector as well as what we confirmed in our interviews, it is important to build a small, stable base before moving forward. This does not mean you cannot develop rapidly, but before you do, make sure you’ve researched the API and planned for its success, understanding what will be needed throughout the lifecycle of all APIs you intend to deliver.

  • Center on the end user: It is always important that every goal and purpose you set for your API platform center on its end-users. While the platform and developer ecosystem is always a consideration, the end-user is the central focus of the “why” and the “how” for API technologies, especially considering the scope of operating an API in both the private and the public sector.

  • A dedicated team: Even if it is a small group, always work to dedicate a team to API operations. While APIs will ultimately be an organization-wide effort, it will take a dedicated team to truly lead the API to success. Without centralized, dedicated leadership, APIs will never attain true relevance within an organization, leaving them to be a side project that rarely delivers as expected.

  • Everyone pitches in: While a dedicated team is a requirement, an API’s development should always invite individuals from across an organization to join in the process. Ideally, API operations are a group effort led by a central team. It is important to encourage individual API teams to feel a sense of ownership over their API, just as it is important to encourage business users to participate in development conversations.

  • Striving for excellence: API operations will always be forced to deal with a lack of resources. That is simply a fact of dealing in APIs. However, each API program should be seeking excellence, working to understand what is needed to move APIs forward in a healthy way. Improving upon what has already been done, refining processes that are in motion, and making sure all APIs are iteratively improved continually benefits a platform’s end users.

  • Continued investment: Always be regularly investing in the API platform. Continued input of both human resources and financial resources helps to ensure that a platform is not starved along the way. Otherwise the API-in-question will constantly fall short and threaten the viability of the platform in the long term.

While the purpose of an API may depend on the mission of its developing organization, in the end, APIs always exist to serve end-users while protecting the interest of a platform. The scope of any such API will depend on the commitment of an organization, and as such, there is no “right answer” to the question of determining the purpose of any single API platform. The only true constraint is the assurance that the API remains in alignment with an organization’s mission as it grows and scales.

How success is defined

With a variety of possible purposes, scopes, and approaches, an API’s success can be defined in a myriad of ways. Depending on the motivations behind the API, and the investment made, the measure of success will depend on what matters to the organization. However, there are some common patterns to defining success, which we extracted from both the interviews we conducted and the research we performed as part of this outreach study.

What we learned

Along with what we learned about the purpose and scope of API platforms, we discovered more about the different ways in which API providers are defining success. We have collected the highlights among these metrics below.

  • Building awareness: API success revolves around building awareness that an API exists, as well as awareness of the value of the API resources that are being made available. Awareness is not simply a consumer side consideration, though; providers, too, must possess an awareness of the value of their resources in relation to both other API developers and consumers alike.

  • Attracting new users: Bringing attention to an API and attracting new users is one of the most common ways of measuring the success of API operations. While new users won’t always immediately become active users, their interest and involvement will bring attention to and awareness of what the API platform can deliver. Attracting new users is one of the easiest and most accessible ways to measure the success of any API, according to our interviews, but importantly, none of the platform providers we spoke to recommended that an organization should stop there.

  • Incentivizing active users: While attracting new users is easy, producing active users is much harder. The easier it is to onboard with an API and make the first series of API calls, the greater the likelihood that API consumers will integrate the platform into their own resources and work, which is a critical metric for any API provider.

  • Applications: Applications and development are the cornerstones that incentivize API providers to invest in APIs, and across the board, our interviewees cited application integration and involvement as a prime candidate for determining an API’s success. This could be quantified both in terms of new applications relying on the API platform, as well as active application processes that integrate with the API’s platform. In either case, measuring usage was considered an excellent means to justify the existence and growth of an API platform.

  • Entities: Getting the attention of companies, organizations, institutions, and other government agencies is an important part of the API journey. In particular, developing an awareness of and encouraging the usage and adoption of APIs among groups already leveraging the technology is an important metric by which success can be determined.

  • End users: Of course, API providers articulated the importance of serving end-users. Besides serving an organization’s mission, the true purpose of an API is to satisfy an end-user, and so measuring success based upon how much value is created and delivered to these users and customers, and even the public at large, can directly verify that an API is living up to its billing.

  • Stakeholders: Further discussions with API providers implied that success is also defined in terms of involvement with other stakeholders. Ensuring that the definition of success was crafted in an inclusive way allows everyone involved and impacted by the project to input their voice. This widens the target audience to make sure success is a large umbrella that covers as many individuals within an organization as possible.

  • New resources: An additional area that was used to define success was the number of new API resources added to a platform. If an organization is currently in the development phase and deploying APIs into production, it is likely that the platform already has a handle on what it takes to successfully deliver APIs throughout their lifecycle. This makes new APIs a great way to understand the velocity of any platform, and how well it is ultimately doing.

Measuring the success of an API platform is a much more ambiguous goal than determining scope, purpose, and investment. Our interviews revealed that success often means different things to different providers. Moreover, an organization’s understanding of success is also something that will evolve over time. We learned a lot from API providers about pragmatic views on what API success looks like, and now we would like to translate that into some basic guidance on how to help ensure the success of providing APIs.

What our thoughts are

Defining, measuring, and quantifying the success of API operations is not easy. As discussed above, measuring success functionally amounts to hitting a moving target. It is important to start with the basics, be realistic, and define success in a meaningful way. Adding to what has been gathered from interviews with API providers, we recommend a consideration of the following factors when it comes to defining just exactly what success is.

  • Know your resources: Understand the resources you are making available via an API. Ensure that they are well defined and made accessible in ways that consider security, privacy, and the interests of all stakeholders. Do not just open up APIs for the sake of delivering APIs — make sure the resources are well defined, designed, and serve a purpose.

  • Manage your APIs: API management is essential to measuring success. It is extremely difficult to define success without requiring all developers to authenticate, log, quantify, and analyze their consumption. Measuring these kinds of consumption activities helps to quantify the value produced by the API and its related platform, and an understanding of this value serves as the foundation for any API’s future success.

  • Have a plan: There is no success without a plan. A set of plans is required at the management level in order to quantify the addition of new accounts, define whether they are active or not, and understand how applications are putting resources to work. Providing a plan for how resources are made available, and how they are consumed, generates a framework to think about and measure what API success means.

  • Measure portal activity: Treat your API developer portal as you would any other web property, and actively measure its traffic. Apply data-analytic solutions to track sessions, time, and visitors, and use this information to contribute to a sales and marketing funnel that can be used to understand how developers are using portal resources. Importantly, this kind of analysis can also discover points of friction that developers may be encountering when trying to use your API platform.

  • Analyze and report: Produce weekly, monthly, quarterly, and annual reports from data gathered across the API stack, API portal, and from social media. Developing an understanding of what is happening based upon actual data, and consistently reporting upon findings with all stakeholders in API operations, ensures both transparency of API knowledge and information access to formulate plans for growth.

  • Discuss and evangelize: Have a strategy for taking any analysis or reporting from API operations and disseminating it amongst stakeholders. With these resources distributed, consider conducting regular on- and off-line discussions around what they mean. Work with everyone involved to understand the activity across a platform, and use these discussions to transform the understanding of success as platform awareness grows.

  • Make things observable: Make every building block of API operations observable. Ensure that everything has well defined inputs and outputs, and consider how these can be used to better understand whether the platform is working or not. Allowing every single aspect of the platform to be able to contribute to an overall definition of what success means by providing real-time and historic data around how resources are being used can signal important insights about an API and how it might be improved.

The success of an API platform will mean different things to different groups and will evolve over time as awareness around an organization’s resources grows. Know your resources, properly manage your APIs, and have a plan, but make sure you are constantly reassessing exactly what success means while having ongoing conversations with stakeholders. With more experience, you will find that API platform success becomes much more nuanced, but importantly, it also becomes easier to define once you know what it is that you want.

Key practices for driving and sustaining adoption of APIs

After a decade of leading tech companies operating API programs, and a little over five years of government agencies following their lead, a number of common practices emerged that helped drive the adoption of APIs and support relationships between provider and consumer. We spent some time talking to API providers about their approaches, while also bringing our existing research and experience to the table, and have collected our responses and analysis below.

What we learned

This is one area where we believe that our existing research outweighed what we learned in talking to API providers, but the conversations did reinforce what we know while also illuminating some new ways to look at operational components. Here are the key practices our interviewees provided for driving and sustaining the adoption of APIs.

  • Documentation: Documentation is the single most important element that needs to accompany an API that is being made available. This transforms the process of learning about what an API can do from static to interactive (such as by using OpenAPI specifications) and renders the API a hands-on experience.

  • Code: Providing samples, SDKs, starter solutions, and other code elements is vital to making sure developers understand, through demonstration, how to integrate with APIs in a variety of programming languages.

  • Content: Content is critical. Invest in blog posts, samples, tutorials, case studies, and anything else you think will assist your consumers in their journey. We heard over and over how important a regular stream of content is for attracting new developers, keeping active ones engaged, and putting API resources to work.

  • Forums: Provide a forum where developers can find existing answers to their questions while also being able to ask new questions. Offering a safe, up to date, well moderated place to engage in asynchronous conversations around an API platform ensures that dialogue is always happening, which means that use and progress are in continued development.

  • Conferences: Conducting workshops and attending relevant conferences where potential API consumers will be is an important practice in furthering the outreach of an API platform. Engage with your community — both consumers and developers — instead of just pushing content to them online.

  • Proactive: Make sure you are proactive in operating your API platform by constantly marketing your work to developers (remember, continually attracting new attention is vital). At the same time, work to provide existing developers with what they will need based upon common practices across the API sector. Investing in developers by giving them resources they will need before they have to ask for it makes an API’s community feel alive and healthy.

  • Reactive: While proactivity is important, an API team must also be able to react to any questions, feedback, and concerns submitted by API consumers and other stakeholders. Ensuring people do not have to wait very long for an answer to their question makes consumers, developers, and stakeholders alike feel like they are considered a relevant and important part of the API community.

  • Feedback loops: Having feedback loops in place is essential to driving and sustaining the adoption of APIs. Without one or more channels for consumers to provide feedback, as well as responsive and attentive API providers who analyze how the feedback can fit into the overall API plan, API operations will never quite rise to the occasion.

  • Management: Almost all API providers we talked to articulated that having a proper strategy for API management, as well as an investment in services and tooling, was essential to onboarding new consumers. Additionally, this kind of investment facilitates an understanding of how to engage with and incentivize the usage of API resources by existing users. Without the ability to authenticate, define access tiers, quantify application usage, log all activity, and report upon usage and activity across dimensions, it is extremely difficult to scale platform adoption.

  • Webinars: For an enterprise audience, webinars were a viable way to educate new users about what an API platform offers, as well as helping to bring existing API consumers up to speed on new features. Not all communities are well-suited for webinar attendance, but for those that are, it is a valuable tool in the API toolbox.

  • Tutorials: Providing detailed tutorials on how to use an API, understand business logic, and take better advantage of platform resources were all common elements of existing API provider options. Breaking down the features of the platform and providing simple walkthroughs that help consumers put those features to work can streamline the integration and onboarding process that users face when working with APIs.

  • Domain: Our interviewees mentioned that having a dedicated domain or subdomain for an API developer portal significantly helped in attracting new users by providing a known location for existing developers to find the resources they are looking for.

  • Explorer: In some cases, it is important to provide a more hands-on, visual way to explore resources available within an API rather than simply listing or describing such features in documentation. For new and particularly inexperienced users of API technologies, resources that can “connect the dots” between the API’s functional support and the actual implemented pathway of using a particular API tool can be immeasurably important in user retention.

We learned that many API providers in the public sector are actively learning from API providers in the private sector. They employ many of the same elements used by leading API providers who have been doing it for a while now. However, we also found evidence of innovation by some of the public sector API providers we interviewed, especially in the realm of onboarding and retaining new users.

What our thoughts are

Below, we have constructed a list of common building blocks that should be considered when developing, operating, and evolving any API platform. These recommendations are the result of formalizing what we learned as part of the interview process, as well as leveraging eight years’ worth of research. Our objective is to give API providers the elements they need to attract and engage with new users, while also pushing existing users to be more active. We have broken down our recommendations into eleven separate areas, which are further discussed below.

Portal

It is important to provide a single known location where API providers and consumers can work together to integrate the offered resources into a variety of web, mobile, device, and network applications, as well as directly into other systems. Several components play into the successful adoption and consumption of APIs published to a single portal.

  • Overview: A simple overview explaining what a platform does and clearly defining the value the API offers to consumers.

  • Getting started: A simple series of steps that help onboard a new user so that they can begin putting API resources to work.

  • Documentation: Interactive documentation for all APIs and schema (preferably created in OpenAPI or another interactive API specification format).

  • Errors: A simple, clear list of all the possible errors an API consumer will encounter, starting with HTTP status codes and then proceeding to any specialized schema used to articulate when API errors occur (see the sketch below).

  • Explorer: A visual representation of the resources available within the API that allows consumers to search, browse, and explore all available resources without needing to know or write code. Note: it is always helpful to provide a direct link to replicate a search using the API.

These elements set the foundation for any API operations, providing the basic elements that will be needed to onboard with an API. They establish an interface for the other features that will be needed to incentivize deep and sustaining integrations with any platform.
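
As a hedged illustration of the errors building block mentioned above, the sketch below shows the kind of specialized error schema and example payload a portal might document alongside standard HTTP status codes. The field names are hypothetical, not a prescribed standard.

```python
# A sketch of a specialized error schema a portal might document
# alongside standard HTTP status codes. The field names are
# hypothetical, not a prescribed standard.
import json

error_schema = {
    "type": "object",
    "properties": {
        "status": {"type": "integer",
                   "description": "The HTTP status code returned."},
        "code": {"type": "string",
                 "description": "A machine-readable error identifier."},
        "message": {"type": "string",
                    "description": "A human-readable explanation."},
        "more_info": {"type": "string",
                      "description": "A URL to relevant documentation."},
    },
    "required": ["status", "code", "message"],
}

example_error = {
    "status": 429,
    "code": "rate_limit_exceeded",
    "message": "You have exceeded your plan's request quota.",
    "more_info": "https://developer.example.com/errors/rate_limit",
}

print(json.dumps(example_error, indent=2))
```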

Definitions

Besides the basic functionality described above, industries have turned toward a suite of machine-readable definitions to drive API integrations. Due to the ubiquity of a number of these definitions, we have collected a handful of specification formats that we recommend making a part of the base of any API operations.

  • OpenAPI: An interactive documentation standard that describes the functionality of an API in a machine-readable way.

  • Postman: A standard collection that provides a transactional, runtime-oriented definition of the feature interface of an API for use in client tooling.

  • JSON Schema: A widely used specification that describes the objects, parameters, and other structural elements of the consumption of API resources.

  • APIs.json: A discovery document that provides a machine-readable index of API operations with references to the portal, documentation, OpenAPI, Postman, and other building blocks of an API platform (see the sketch below).
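
To make the discovery format more tangible, the sketch below builds a minimal APIs.json index as a Python dictionary. The platform name and URLs are hypothetical.

```python
# A minimal APIs.json discovery document, built as a Python
# dictionary. The platform name and URLs are hypothetical.
import json

apis_json = {
    "name": "Example Platform",
    "description": "APIs for the Example developer platform.",
    "url": "https://developer.example.com/apis.json",
    "apis": [{
        "name": "Observations API",
        "description": "Read and query observation records.",
        "baseURL": "https://api.example.com/v1",
        "properties": [
            {"type": "Documentation",
             "url": "https://developer.example.com/docs/observations"},
            {"type": "OpenAPI",
             "url": "https://developer.example.com/openapi/observations.json"},
            {"type": "PostmanCollection",
             "url": "https://developer.example.com/postman/observations.json"},
        ],
    }],
    "specificationVersion": "0.14",
}

print(json.dumps(apis_json, indent=2))
```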

Code

It is common practice for API providers to invest in a variety of code-focused resources to help jumpstart the onboarding process for API developers. This reduces the number of technical steps necessary for the technology to successfully integrate with other platforms. Here are the building blocks we recommend considering when crafting the code portion of any developer outreach strategy.

  • GitHub/GitLab: Use a social coding platform to manage many of the technical components used to support API developers.

  • Samples: Publish simple examples of making individual API calls in a variety of programming languages (see the sample at the end of this section).

  • SDKs: Provide more comprehensive software development kits in a variety of programming languages for developers to use when integrating with API resources.

  • PDKs: Provide platform development kits that help developers integrate with existing solutions they may already be using as part of their operations.

  • MDKs: Provide mobile development kits that help jumpstart the development of mobile applications that take advantage of a platform’s APIs.

  • Starters: Publish complete applications that provide starter kits that developers can use to jumpstart their API integrations.

  • Embeddables: Provide buttons, badges, widgets, and bookmarklets for any API consumer to use when quickly integrating with API resources.

  • Spreadsheets: Offer spreadsheet connectors that allow API consumers to use platform APIs within Microsoft Excel and Google Sheets.

  • Integrations: Invest in a suite of existing integrations with other platforms that API consumers can take advantage of, providing low-code or no-code solutions for integrating with APIs.

While we have presented a variety of code-related resources, we want to point out the caveat that these tools should only be employed if an organization possesses the resources to properly maintain and support them. These elements can provide some extremely valuable coding solutions for developers and consumers to put to work, but if not properly done, they can also quickly become a liability, driving developers away.
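
As an illustration of the simplest of these building blocks, the sample below shows a single API call of the kind worth publishing for every path, in each of the languages a community uses. The endpoint and API key are placeholders.

```python
# The simplest code building block: a published sample showing one
# API call. The endpoint and API key are placeholders.
import json
import urllib.request

request = urllib.request.Request(
    "https://api.example.com/v1/observations?status=active",
    headers={"X-API-Key": "YOUR_API_KEY"},
)
with urllib.request.urlopen(request) as response:
    payload = json.load(response)

print(f"Retrieved {len(payload.get('observations', []))} records")
```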

Event-driven

In addition to simpler request-and-response delivery and documentation methods, we also recommend thinking about the following event-driven possibilities, which can also be used to incentivize deeper engagement and workflow with an API.

  • Webhooks: These can provide ping and data push opportunities, which allow API consumers to be notified when any event occurs across an API platform (see the sketch below).

  • Streams: Providing high-level or individual streams of data allows for long-running HTTP or TCP connections with API resources.

  • Event types: In many cases, it is helpful to publish a list of the types of possible API events, as well as opportunities for subscribing to webhook or streaming channels.

  • Topics: Similarly, developers and consumers alike may find a published list of platform-related topics useful, particularly one that allows API consumers to search, browse, and discover exactly the topical channels they are interested in.

These event-based tools help augment existing APIs and make them more usable by API consumers at scale. They facilitate a meaningful application experience for end-users, allowing them to stay tuned to specific topics. This in turn fine-tunes the experience for developers, which further drives adoption, ultimately establishing more loyal consumers at the API integration and application user levels.
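
As a hedged illustration of the consumer side of the webhooks building block, the sketch below is a tiny listener that receives event notifications pushed by a platform. The payload shape, an event_type field, is an assumption, not a standard.

```python
# A sketch of the consumer side of webhooks: a tiny listener that
# receives event notifications pushed by an API platform. The payload
# shape (an "event_type" field) is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(f"received event: {event.get('event_type', 'unknown')}")
        self.send_response(200)  # acknowledge so the platform stops retrying
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8020), WebhookHandler).serve_forever()
```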

Management

One of the cornerstones for defining, quantifying, and delivering successful API onboarding and engagement is API management. The following list contains some core elements of API management that should be considered as any API provider is planning, executing, and evolving their operational strategy.

  • Authentication: Providing clear options for onboarding using Basic Auth, API Keys, JWT, or OAuth keeps things standardized, well-explained, and frictionless to implement (see the sketch below).

  • Plans/tiers: Establishing well-defined tiers of API consumers in terms of how they access all available API resources can inform the provision of structured plans that define how an API’s resources are being utilized.

  • Applications: Individual applications should be at the center of consumer API engagement. In particular, applications that help onboard new users so that they can begin consuming API resources are imperative.

  • Usage reporting: Tools and metrics that provide real-time and historical data, as well as access to multi-dimensional reporting across all API consumers, are useful in analyzing the API’s usage and performance. This information can be helpful to developers in defining the stage of their API journey, as well as any additional resources they might wish to consider.

There are many other aspects of API management, but these building blocks reflect the consumer-facing elements that help onboard new users and drive increased engagement with existing consumers. API management is an area in which API providers should not be reinventing proven methods that already work: the best practices established over the last decade by leading API providers already account for strong engagement and retention levels for users and service providers.
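
As a hedged illustration of these management building blocks at their most basic, the sketch below authenticates each request by API key, attributes usage to a plan, and enforces its quota. The keys and plan quotas are made up for illustration.

```python
# A sketch of the most basic management layer: authenticate each
# request by API key, attribute usage to a plan, and enforce its
# quota. The keys and plan quotas are made up for illustration.
PLANS = {"free": 1000, "partner": 100000}         # calls per month
CONSUMERS = {"key-abc": "free", "key-xyz": "partner"}
usage = {}                                        # key -> calls this month

def authorize(api_key):
    """Return (allowed, reason) for a single API call."""
    plan = CONSUMERS.get(api_key)
    if plan is None:
        return False, "unknown API key"
    usage[api_key] = usage.get(api_key, 0) + 1    # log for reporting
    if usage[api_key] > PLANS[plan]:
        return False, f"quota exceeded for {plan} plan"
    return True, f"{usage[api_key]} of {PLANS[plan]} calls used"

print(authorize("key-abc"))   # (True, '1 of 1000 calls used')
print(authorize("key-bad"))   # (False, 'unknown API key')
```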

Communications

Consumer engagement is important not only around the tools of the API, but also around the communications and news surrounding the API. Streams of information across multiple channels can help unite a communications strategy for any API platform. We have collected the best examples of such information feeds below.

  • Blog: An active blog with an Atom feed, one for each individual API and/or overall platform.

  • Twitter: A dedicated Twitter account for the entire API platform, providing updates and support.

  • GitHub: A GitHub organization dedicated to the platform, with accounts for each API team member. The organization leverages the platform for content as well as code management.

  • Reddit: A helpful forum for answering questions, sharing content, and engaging with consumers on the social bookmarking platform.

  • Hacker News: Another helpful discussion board for answering questions, sharing content, and engaging with consumers.

  • LinkedIn: A business social network presence devoted to engaging with consumers. An established LinkedIn page for the platform can be useful for regularly publishing content, as well as engaging in conversations via the platform.

  • Facebook: Similar to LinkedIn, a Facebook page for the platform is helpful in engaging with API consumers via their social media presence. It can be used to regularly publish content and engage in network-broadcast conversations via social platforms.

  • Press: A platform section detailing the latest releases, as well as a feed that users can subscribe to in order to receive a regular stream of news on the platform.

A coherent communication, content, and social media strategy will be the number one driver of new users to the platform while also keeping existing developers engaged. These communication building blocks provide a regular stream of information and signals that API consumers can use to stay informed and engaged while putting API resources to use in their applications.

Direct support

Besides communication, direct support channels are essential to completing the feedback loop for the platform. There are a handful of common channels API providers use to provide direct support to API consumers, allowing them to receive support from platform operations. We recommend the following selections.

  • Email: Establish a single, shared email address for the platform, through which the entire platform support team can provide assistance.

  • Twitter: Provide support via Twitter, pushing it beyond just a communication channel and making it so that API consumers can directly interact with the platform team.

  • GitHub: Do not just use GitHub for code or content management: leverage individual team member accounts to actively support using GitHub issues, wikis, and other channels the social coding platform provides.

  • Office hours: Provide regular office hours where API consumers can join a hangout or other virtual group chat application in order to have their questions answered.

  • Webinars: Provide regular webinars around platform-specific topics, allowing API consumers to engage with team members via a virtual platform that can be recorded and used for indirect, more asynchronous support in addition to live feedback.

  • Paid: Provide an avenue for API consumers to pay for premium support and receive prioritization when it comes to having their questions answered.

With a small team, it can be difficult to scale direct support channels properly. It makes sense to activate and support only the channels you know you can handle until there are more resources available to expand to new areas. Ensuring that all direct support channels are reactive in terms of communication will help deliver value by bringing the feedback loop back full circle.

Indirect support

After direct support options, there should always be indirect/self-support options available to help answer API consumers’ questions. Such options allow users to get support on their own while still leveraging the community effect that exists on a platform. There are a handful of proven indirect support channels that work well for public as well as private API programs.

  • Forums: Provide a localized, SaaS, or wider community forum for API consumers to have their questions answered by the platform or by other users within the ecosystem.

  • FAQ: Publish a list of common questions broken down by category, allowing API consumers to quickly find the most common questions that get asked. Regularly update the FAQ listing based on questions gathered using the platform feedback loop(s).

  • Stack Overflow: Leverage the question and answer site Stack Overflow to respond to inquiries and allow API consumers to publish their questions, as well as answers to questions posed by other members of the community network.

Indirect, self-service support will be essential to scaling API operations and allowing the API team to do more with less. Outsourcing, automating, and standardizing how support is offered through the platform can make API support available 24 hours per day, turning it into an always-available option for motivated developers to find the answers they are looking for.

Resources

Beyond the documentation and communication, it is helpful to provide other resources to assist API consumers in onboarding and to strengthen their understanding of what is offered via the platform. There are a handful of common resources API providers make available to their consumers, helping to bring attention to the platform and drive usage and adoption of APIs.

  • Guides: Providing step-by-step guides covering the entire platform, or individual sections of the platform, helps consumers understand how to use the API to solve common challenges they face.

  • Case studies: Real-world case studies of how companies, organizations, institutions, and government agencies have put APIs to work in their web, mobile, device, and network applications can help demonstrate the variety of functions that an API platform can perform.

  • Videos: Make video content available on YouTube and other platforms. Providing video walkthroughs of how the APIs work and the best way to integrate features into existing applications can demystify the process of onboarding with API technologies.

  • Webinars: While webinars can be a helpful source of information for API consumers trying to understand specific concepts, maintaining and publishing an archive of webinars can also serve as a historical catalog of such sessions, providing targeted Q&A for how to put API platforms to work.

  • Presentations: Provide access to all the presentations that have been used in talks about the platform, allowing consumers to search, browse, and learn from presentations that have been given at past conferences, meetups, and other gatherings.

  • Training: It can be immensely helpful to invest in formal curriculum and training materials to help educate API consumers about what the platform does. This provides a structured approach to learning more about the APIs and gives developers comprehensive training materials they can tap into on their own schedule.

Like other areas of this recommendation, these resource areas should only be invested in if an organization has the resources available to develop, deliver, and maintain them over time. Providing additional resources like guides, case studies, presentations, and other materials helps further extend the reach of the API platform, allowing the API team behind operations to do more with less, as well as reach more consumers with well-constructed, self-service resources that are easy to discover.

Observability

One important attribute of API platforms that successfully balance attracting new users and creating long-term relationships is platform observability. Being able to understand the overall health, availability, and reliability of an API platform allows API consumers to stay informed regarding the services they are incorporating into their applications. There are several key areas that contribute to the observability of an API platform.

  • Roadmap: A simple list of what is being planned for the future of a platform, one that provides as much detail and ranges as far into the future as possible.

  • Issues: A document listing any open issues, allowing API consumers to quickly understand whether anything might impact their applications.

  • Status: A dashboard that describes the health of the overall platform, as well as the status of each individual API being made available via the platform.

  • Change log: A simple list of what has changed on a platform, taking the roadmap items and issues that have been satisfied and rolling them into a historical registry that consumers can use to understand what has occurred.

  • Security: Share information about platform security practices and the strategies used to secure platform resources, and share the expectations held of developers when it comes to application security practices.

  • Breaches: Be proactive and communicative around any breaches that occur, providing immediate notification of the breach and a common place to find information regarding current and historic breaches on the platform.

Observability helps build trust with API consumers. In order to develop this trust, platform providers have to invest in APIs in ways that make consumers feel like the platform is stable and reliable. The less transparent the elements of the platform are, the less likely API consumers are to expand and increase their usage of services.
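
As one small example of what observability can look like in practice, the sketch below exposes a machine-readable status endpoint of the kind a status dashboard could be built on. The dependency check and payload shape are assumptions for illustration, not a standard.

```python
# A minimal status endpoint sketch; checks and fields are illustrative.
from datetime import datetime, timezone

from flask import Flask, jsonify

app = Flask(__name__)

def database_is_healthy() -> bool:
    # Replace with a real dependency check (database ping, upstream call).
    return True

@app.route("/status")
def status():
    healthy = database_is_healthy()
    body = {
        "status": "ok" if healthy else "degraded",
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "apis": [{"name": "facilities", "status": "ok"}],  # per-API status
    }
    return jsonify(body), (200 if healthy else 503)
```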

Real-world presence

The final set of recommendations centers on maintaining a real-world presence for the platform. It is important to ensure that the platform does not just have a wide online presence, but is also engaging with API consumers in a face-to-face capacity. There are a handful of ways that leading API providers get their platforms face-time with their community.

  • Meetups: Speaking at and attending meetup events in relevant markets.

  • Hackathons: Throwing, participating in, and attending hackathon events.

  • Conferences: Speaking, exhibiting, and attending conferences in relevant areas.

  • Workshops: Conducting workshops within the enterprise, with partners, and the public.

These four areas help extend and strengthen the relationship between the API platform provider and consumers.

How API developer sandboxes are used to drive adoption

One of the more interesting and forward-thinking aspects of this research is around the delivery of sandboxes, labs, and virtualized environments. Providing a non-production area of a platform where developers can play with API resources in a much safer environment than a live production area can encourage creativity and innovation, as well as exploration of the API’s resources.

What we learned

Some of the API providers we interviewed for this proposal had sandbox environments. Their insights into the merits of these environments provided us with some ideas to reduce friction for new developers when onboarding, as well as to help certify applications as they mature with their integrations. Here is what we learned about sandbox environments from the API providers we talked to.

  • Sandboxes are used: Sandbox environments are used, and they do provide value to the API integration process, making them something all API providers should be considering.

  • Sandboxes are not default: Sandboxes are not a default feature of all APIs but have become more critical when PII (personally identifiable information) and other sensitive resources are available.

  • Data is virtualized: It was enlightening to see how many of the API providers we talked to provided virtualized data sets and complete database downloads to help drive the sandbox experience.

  • Doing sandboxes well is difficult: We learned that providing sandbox environments that reflect the production environment is, quite simply, hard. It takes significant investment and support to make it something that will work for developers.

  • Safe onboarding: Sandbox environments allow for the safe onboarding of developers and their applications. This helps ensure that only high-quality applications enter into a production environment, which protects the interests of the platform as well as the security and privacy of end-users.

  • Integrated with training: We learned how sandbox environments should also be integrated with other content and training materials. This facilitates access for API consumers to test out the training materials they need while also directly learning about the API.

  • Leverage API management: It was interesting to learn the role that API management plays in being able to deliver both sandbox and production environments. API gateways and management solutions are used to help mock and deliver virtualized solutions, but also to manage the transition of applications from development to a production environment.

Talking to API providers, it was clear that sandboxes provided value to their operations. It was also clear that they aren’t essential for every API implementation and took a considerable investment to do right. API virtualization is a valuable tool when it comes to engaging with API consumers, and it is something that should be considered by most API providers, but it should be approached pragmatically and realistically, with an awareness of both the costs and the benefits of moving applications from sandbox to production environments.

What our thoughts are

Sandboxes, labs, and virtualized environments are commonplace across the API sector but are not as ubiquitous as they should be. We commend the presence of virtualized building blocks for any API that is dealing with sensitive data. A sandbox should be a default aspect of all API platforms, but should be especially applied to help ensure quality control as part of the API journey from development to production. Here are some of the building blocks we recommend when looking at structuring a sandbox environment for a platform.

  • Virtualize APIs: Provide virtualized instances of APIs and a sandbox catalog of API resources.

  • Virtualize data: Provide virtualized and synthesized data along with APIs in order to create as realistic an experience as possible.

  • Virtualized instances: Consider the availability of virtualized instances of an API as compute instances on major cloud platforms, allowing for the deployment of sandboxes anywhere.

  • Virtualized containers: Consider the availability of virtualized instances of an API as containers, allowing API sandboxes to be created locally, in the cloud, and anywhere containers run.

  • Bake into onboarding: Make a sandbox environment a default requirement in the API onboarding process, providing a place for all developers to learn about what a platform offers and pushing applications to prove their value, security, and privacy before entering a production environment.

  • Applications: Center the sandbox experience on the application by certifying it meets all the platform requirements before allowing it to move forward. All developers and their applications get access to the sandbox, but only some applications will be given production access.

  • Certification: Provide certification for both developers and applications. Establishing a minimum bar for what is expected of applications before they can move into production helps developers understand what it takes to move an application from sandbox to distribution, which ensures a high-quality application experience at scale.

  • Showcase: Always provide a showcase for certified applications as well as certified developers. Allow for the browsing and searching of applications and developers, while also highlighting them in communications and other platform resources.

When it comes to API resources that contain sensitive data, virtualized APIs, data, and environments are essential. They should be a default part of ensuring that developers push only high-quality applications to production by forcing them to prove themselves in a sandbox environment first.
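
To illustrate the virtualized data building block, here is a minimal Python sketch that seeds a sandbox with synthetic records shaped like production data, so that no real PII ever reaches the test environment. The record fields are hypothetical, and a real implementation would mirror the production schema far more closely.

```python
# A minimal synthetic-data sketch for seeding a sandbox; field names
# are hypothetical and stand in for a production schema.
import random
import uuid

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Casey"]
LAST_NAMES = ["Smith", "Rivera", "Chen", "Okafor"]

def synthetic_record() -> dict:
    """Generate one realistic-looking, entirely fake record."""
    return {
        "id": str(uuid.uuid4()),
        "first_name": random.choice(FIRST_NAMES),
        "last_name": random.choice(LAST_NAMES),
        "age": random.randint(18, 90),
    }

# Seed the sandbox datastore with fake rows that exercise the same
# API code paths as production data.
sandbox_dataset = [synthetic_record() for _ in range(1000)]
print(sandbox_dataset[0])
```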

Types of metrics for measuring adoption and making decisions

This area of our research overlaps somewhat with the earlier section on measuring success, but here, we provide a more precise look at what can be measured to help quantify success while also ensuring that findings are used as part of the decision-making process throughout the API journey.

What we learned

Our interviews reminded us that not all API providers have a fully fleshed-out strategy for measuring activity across their platforms. However, we did come away with some interesting lessons from those providers that were using metrics to drive API decision making.

  • Look at it as a funnel: Treat API outreach and engagement as a sales and marketing funnel. Attract as many new users as you can, but then profile the target demographic to understand who they are and what their working objectives are. From there, devote efforts to incentivizing users to “move down the funnel” through the sandbox environment and eventually to production status. In short, treat API operations like a business and platform users like they are customers.

  • Do not have formal metrics: It was illuminating to also learn that some providers felt that having an overly formal metrics strategy might constrain developer outreach and engagement. They offered words of caution about measuring too much, recommending that data only be examined when making critical decisions, so that outreach efforts remain constructive interactions with API consumers regarding their needs.

  • API keys registered: For data accessibility purposes, it is worthwhile to ensure that all developers have an application and API key before they can access any API resources. Requiring all internal, partner, and external developers to pass along their API key with each API call allows all platform activity to be measured, tracked, and used as part of the wider platform decision-making process.

  • Small percentage of users: We also heard that it is common for a small percentage of the overall platform users to generate the majority of API calls to the platform. This makes it important to measure activity on the platform in terms of which users are the most active (and thereby driving the majority of value on a platform).

  • Amount of investment: Importantly, the usage rates of a platform’s resources can provide a strong justification for investing more resources into the platform’s success, making tracking that data of paramount importance. This transforms investment into a data-driven decision that responds to the actual needs of the platform.

The interview portion of our research provided a valuable look at how API providers are measuring activity across their platforms. Data and metrics are not only being used to define success, but are also used as part of the ongoing decision making process around the direction API providers take their platform, particularly when it comes to measuring adoption of API resources across a platform.

What our thoughts are

When it comes to measuring adoption and understanding how API consumers are putting resources to work, we recommend starting small. Activity tracking is something that will change and evolve as the organization develops a better understanding of platform resources and the interests of internal stakeholders, partners, and 3rd party consumers. These are just a handful of areas we recommend collecting data on to begin with.

  • Traffic: Measure and understand traffic (network and otherwise) across the platform developer portal.

  • New accounts: Track and profile all new accounts signing up for API access.

  • New applications: Track and profile all new applications registered for access.

  • Active applications: Measure and track the usage of the active platform applications.

  • Number of API calls: Understand how many API calls are being made, across different dimensions.

  • Conversation: Measure all conversations happening across the platform and use these measurements to develop awareness.

  • Support: Measure all support activity in order to pinpoint the needs of API consumers.

  • Personas: Quantify the different types of consumers who are putting a platform to use.

Begin here when it comes to tracking: develop an awareness of the community and get to know what is important, then expand from there. Add and remove the metrics that make sense for your organization, without tracking metrics for no reason (i.e., all tracking should add value to the API platform or its decision-making process).
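
Tying these metrics back to the funnel view described earlier, here is a small Python sketch that computes stage-to-stage conversion rates; the stage names and counts are illustrative placeholders, not benchmarks.

```python
# A minimal funnel-metrics sketch; stages and counts are placeholders.
funnel = {
    "portal_signups": 1200,
    "apps_registered": 400,
    "apps_with_first_call": 180,
    "active_apps": 90,
}

# Walk adjacent funnel stages and report conversion between each pair.
stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    rate = count / prev_count if prev_count else 0.0
    print(f"{prev_name} -> {name}: {rate:.0%} conversion")
```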

Structuring and staffing outreach programs

Each platform will have its own mandate for how API programs should be staffed, but there are some common patterns that exist across the space.

What we learned

The API providers we talked to had a lot to share about what it takes to staff their operations, including their structures (or in a few cases lack thereof!).

  • Dedicated evangelist: Make sure there is a dedicated evangelist to help talk up and support the platform.

  • Include marketing: Include the marketing team in conversations around amplifying the platform and its presence.

  • Provide support: Invest in the resources to support all aspects of the platform effectively, not just those that are consumer-facing or particularly visible.

  • Conduct regular calls: Conduct regular calls with internal, partner, and external stakeholders in order to bring everyone together to discuss the progress of the platform.

  • Use a CRM for automation: Put a CRM to use when it comes to tracking and automating outreach efforts for the platform. Do not reinvent the wheel; leverage an existing service to track all of the details that can be automated.

  • Include other teams: Do not just invest in an isolated API team; make sure other teams across the organization are included in the API conversation.

  • Develop an in-person presence: Make sure to obtain human resources that can be sent to meetups, conferences, and other in-person events so that the organization can expand and strengthen the presence of the platform.

  • Speak to leadership: Regularly invest time to report platform results and progress to leadership, making sure they understand the overall health and activity of the platform.

We learned that a dedicated evangelist is essential, as are significant investments in marketing and support. It was good to hear how important in-person events were, as well as supporting stakeholders and leadership as part of overall outreach efforts. Overall, the conversations we had reflect what we have been seeing across the landscape with other public and private sector providers.

What our thoughts are

Echoing what we heard from our conversations with API providers, we have some advice when it comes to structuring and allocating resources to API efforts. This is a tough area to generalize because different platforms will have different needs, but there are well-established traits of successful API providers, with the following characteristics at the top of the list.

  • Dedicated program: Establish a formal program for overseeing API operations across an organization to make sure it gets the attention it needs.

  • Dedicated team: Allocate a dedicated team to the API program and platform to represent its needs and push for its continued advancement.

  • Business involvement: Make sure to involve business groups in the conversation, bringing them in as stakeholders to give their thoughts on the organization’s API program.

  • Marketing involvement: Make sure marketing teams play an influential role in amplifying the message coming from a platform. This is especially important to ensure that a platform does not just speak to developers, but to consumers and users as well.

  • Sales involvement: Ensure that a platform has the sales resources to actually close deals when necessary. This ensures that a platform has continuing end-user participation and the resources it needs to function. Remember, though, that sales is not always about selling commercial solutions.

  • External users: Make a place for external actors to take part in advancing the API conversation. External users can often rise to highly influential levels within a community, and this influence can be put to work assisting with outreach efforts (but only if the culture of a platform allows for it).

  • Contractors: Consider how vendors and contractors will be contributing to the overall outreach effort by creating a means for vendors to assist with logistics and communication channels, allowing the core team to focus on moving the platform forward while contractors tackle the details.

It will take dedicated resources to move the API conversation forward at a large organization. However, it will also take the involvement of other stakeholders to get the business and political capital to legitimately spark the platform initiative. How a program gets structured, and the amount of resources dedicated to evangelism, communication, support, and other critical areas will set the tone for the platform and determine the overall health and viability of the API.

Anti-patterns

One unorthodox part of our research involved asking API providers about the things they felt would contribute to the failure of their API platform. Below, we have collected some thoughts about the “anti-patterns” that might generate friction on the platform, chase developers away, or make things harder for everyone involved.

What we learned

The API providers we talked to had plenty of ideas for how to run developers off and make integration with platform resources harder. We’ve provided a list of things API providers can avoid when operating their platforms and when reaching out to developers. (To be clear, each of the things presented in the list below is a bad practice — the bolded phrase is a “mistake” an API provider could make, and the text that follows further explains the nature of the poor practice.)

  • Do not take a build it and they will come approach: Building API operations with tremendous resource investment before consumers ask for certain functions or resources is a poor way to achieve stable platform growth.

  • Do not reach out to developers: Not being proactive about contacting developers makes it hard to build working relationships that clarify development needs.

  • Hand-built documentation: Providing hand-crafted, bloated, or out-of-date documentation is not only difficult to maintain, it reduces the value of the API to developers.

  • Breaking changes: Changing the platform too often or too fast may introduce too many breaking changes for developers to deal with, reducing the platform’s effectiveness.

  • Do not communicate: Not communicating platform operation updates leaves developers in the dark about what is happening with the very tool they rely on.

  • Do not listen to your developers: Not listening to developers or including their feedback in the roadmap is sure to eventually run developers off.

  • Do not measure happiness: Measuring everything except the happiness of developers (if such a metric can even be operationalized) means never really understanding whether they are truly happy with how the platform is being run.

  • Gated content: Putting documentation, content, and other resources behind a login or paywall, or else restricting what developers have access to, is a great way to get developers to ignore your platform.

  • Do not provide a developer edition: Avoid providing developers with a set of APIs to play with simply for the sake of experimentation. Without being able to experiment and see the value of real, production-strength applications, users will not be building their projects on the platform any time soon.

  • Do not reach out to partners: Focusing on general developers without identifying and reaching out to potential partners can rob a platform of trusted allies, who might have otherwise facilitated operations, integrations, and even broader application development.

It was clear that the API providers we interviewed were aware of the pitfalls that could jeopardize an API’s success. They provided us with a comprehensive look at what an organization should avoid when it comes to operations and investment.

What our thoughts are

Adding to what the API providers mentioned during their interviews, we have some additional recommendations based on some common deficiencies we have seen. These are the areas all API providers should be investing in to strengthen their operations and to avoid the anti-patterns highlighted so far.

  • Documentation: Make documentation a priority by keeping it simple, updated, and interactive for developers.

  • Entry level: Ensure there is entry-level access to a platform, allowing developers to onboard without commitments or procedural friction.

  • Communication: Develop a robust communication strategy for ensuring a platform has a regularly updated stream of new content about what is happening.

  • Support: Ensure that a platform is supported, and that API consumers’ questions are being answered in a timely manner.

  • Feedback loops: Make sure that feedback loops exist and are strengthened by communication, support, and actually listening to developers’ needs.

  • Evangelism: Invest in evangelism amongst leadership, partners, 3rd party developers and all stakeholders to make sure the platform is always being spoken for.

  • Observability: Insist on observability for all components involved in operating a platform to maximize measurements and comprehension of the system.

  • Partners: Seek out partners. They are critical to taking the platform to the next level. Cultivate developers and applications, and produce strong partners who can help take things into the future.

We see anti-patterns as deficient areas of platform operations that can largely be avoided with proper time and resource investments.

Future plans

We concluded our provider interviews with a few questions on what they believe the future holds. Their visions for their growth and development can serve as strong examples and recommendations for what other platforms can include in their own plans.

What we learned

It was interesting to learn about the desires our interviewees possessed when it came to future investment in their platforms. We have generated this short list of areas that API providers can think about when it comes to building out their roadmap.

  • Go with what works: Make sure to continue investing in and scaling the parts of the platform that are earning attention for their success.

  • Product excellence: Emphasize product excellence over simply responding to new developments.

  • Marketing and evangelism: It never hurts to augment marketing efforts to broadcast information about a platform.

  • Attend conferences: Increase the platform’s presence at relevant conferences.

  • Throw hackathons: Invest in throwing hackathons to encourage developers to do more.

  • Conduct conferences: Invest in and produce conferences that support the API community.

  • Building community: Do what it takes to keep building and strengthening the community.

  • Publish more to GitHub: Publish more code, content, and other resources to GitHub or another version control platform.

  • Make content more discoverable: Work to make resources more discoverable so that developers can find what they need.

  • Publish to code.gov: Publish code to code.gov, making code open source and available as part of the wider government effort to make resources available.

The common theme we heard from API providers was that allocation for future investment should be spent reinforcing the building blocks already in place that made the API platform successful in the first place. We learned a lot about the motivations and visions of the API providers we interviewed, which helped shape our recommendations for what could be next.

What our thoughts are

With so many possibilities for the path that an API platform’s development can take, it is easy to get caught up in the next step without reflecting on the fact that what is next usually involves reinforcing what is happening now. Our recommendations involve focusing on what matters, and not just what is new, with the following areas of investment:

  • Continue investing in what works: To once again echo our interviewees, continue to invest in building what is already working. Further develop the API platform and its associated vision conversation by looking to what already exists rather than looking for new ways to get things done.

  • Excellence and mastery: Invest in perfecting the process, refining how things get done, and operating the platform in a way that benefits everyone involved. Doing it well, and doing it right, will pay off in the long run.

  • Involve other teams: Expand involvement with the API to include other teams across an organization. This helps distribute the workload of operating the API platform and allows all subgroups within an organization to have a voice in the future of the API.

Conclusion

To produce this research, we spoke with leading API providers from across the public and private sector while also leveraging eight years of analysis gathered from directly studying the API sector. This research should provide a wealth of options to consider when it comes to helping the VA be more effective in reaching out to developers. However, this is not meant to be a complete list of building blocks to consider, but rather, is meant to present a selection of proven elements that can be implemented with the right teams to execute. While things like API documentation are critical, there is no single element of this research that will make or break API outreach efforts; when the right elements are combined and executed properly, the results can be extremely positive.

This research reflects the experience of platform providers like Salesforce, who have been iterating upon their API platforms and engaging with consumers since 2001. It also follows the lead of next generation providers like Twilio, who have been making developers happy for almost a decade while also managing to take their company public. It looks at how CMS, Census, and FEC are approaching the outreach around their API programs, bringing in the unique perspective of federal agencies who are finding success with their APIs. Finally, the insights gathered across these API providers are organized, augmented, and rounded off with insight gathered by the API Evangelist as part of research on the business of APIs, research that has been ongoing since July of 2010.

We want to end this look at developer outreach by emphasizing once again that this is a journey. While there are many common patterns in play across the API sector, no two API platforms are the same. The types of developers attracted to a platform, as well as the ones that remain engaged, will vary depending on the organization, the types of resources being made available, and the tone set by the team(s) operating the platform. It is up to the VA to decide what type of platform they will operate and what kind of tone they will set when reaching out to developers. There will not be any single right answer to how the VA should be reaching out to new developers, but with the right amount of planning, communication, support, and observability, the VA will undoubtedly find its footing.

Appendix A: interview questions

For each interview, we asked the following questions:

  1. What role do you play within the organization?

  2. What is the purpose and scope of your API developer outreach program?

  3. What does success look like for you?

  4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

  5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

  6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

  7. How is your outreach program structured and staffed? How did it start out and evolve over time?

  8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

  9. What big ideas do you have for evolving your outreach program to make it even more effective?

  10. Any other thoughts you’d like to share? If so, please feel free!

Appendix B: CMS Blue Button API interview notes

1. What role do you play within the organization?

Product Manager.

2. What is the purpose and scope of your API developer outreach program?

CMS is offering several APIs. One approach is to put it out there and hope they come. Second approach is leveraging APIs on the Socrata platform and hoping there’s adoption. Third approach is partnership agreements with various private partners.

Purpose of the Blue Button outreach program is to drive adoption of the API.

3. What does success look like for you?

Worked with various stakeholders to define the metrics in terms of value. For example, number of unique organizations who are experimenting with the API. Originally set a target of 500 organizations. Up to 450 currently. Use it as a proxy measurement of value. Not sure how beneficiaries are benefiting yet, but adoption is proxy indicator.

Also measuring number of beneficiaries who have linked their data to Blue Button API — for example, Google Verily.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Documentation is the first key element. Originally hosted documentation on GitHub pages, but now using bluebutton.cms.gov.

Had challenges over the year making clear what Blue Button is, its value, etc.

Relaunched during recent HIMSS conference. Resulted in a number of sign-ups from press.

Set up a blog to provide a getting started guide, how sign-up works, etc. Working pretty well.

Assigned a designated Developer Evangelist, who pushes content, participates in forums, and hits the niche conference circuit.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

Sandbox has been a big part. Have a synthetic dataset. In healthcare, this is essential from a privacy standpoint.

Have had challenge with synthetic data in terms of making them realistic.

Have a streamlined process for accessing and onboarding into environment. Experience is not currently great, and working on improving.

Developer portal is only for sandbox environment. No portal for production, so disjointed at this point.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Look at the adoption as a funnel. That has helped to drive a culture shift around how folks think about metrics and the role they play.

Top of funnel is evangelism/traffic/etc., portal sign-ups, registered an app, and how many of those who have registered made API calls.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

Have a developer evangelist.

Periodically, team gets on phone with developer evangelist and talks about ideas for building awareness and driving adoption.

Using email + CRM tools for outreach. Trying to improve marketing automation.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Don’t take a “build it and they will come” approach.

Not acting quickly enough on insights gained from the funnel process, including reaching out to individuals to help them through the process (i.e., proactive outreach).

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Double down on what’s working to incrementally grow — conferences, webinars, etc.

Serving as a blueprint for other organizations, particularly at a state & local level.

10. Any other thoughts you’d like to share? If so, please feel free!

N/A

Appendix C: Census interview notes

1. What role do you play within the organization?

Developer engagement lead, serving as the comms arm for Census’ API efforts. First-line of defense for any developer engagement. Participate in hackathons, etc.

Also working on CitySDK. This was in response to the observation that developers were having trouble working with Census APIs. SDK currently not being maintained because we lost the lead developer and funding for it. Personally learning how to develop in order to maintain it myself.

2. What is the purpose and scope of your API developer outreach program?

Save developers time and help them understand the nuances of Census data and how to use the data. Engage in communications and help inform the development of API products.

3. What does success look like for you?

Happy developers.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Proactive engagement: work on driving the government’s engagement in National Day of Civic Hacking in collaboration with NASA and Code for America. Hackathons are great for user research and testing new features.

Reactive: have a Slack and Gitter channel.

Haven’t tried blog posts yet. Thinking about interviewing developers who are using APIs and turning their stories into blog posts. Haven’t been able to get legal approval yet for that.

Being able to figure out what data Census has and how that data can be made available to developers is key.

Webinars are good, especially those focused on showing how to use Census data and build something simple using the API.

Use a data search tool — American FactFinder; developers can use this to find out what variables they are interested in and then trace that data back to an API that they can use to access programmatically.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

No, haven’t done anything like that yet. Have a discovery tool that is part of the API.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Initially focused on the number of API keys registered. Started focusing on metrics of use. A small number of users drive 80% of the API use. Developers tend to move data into their environment for performance purposes (e.g., caching). Plus, they worry about government shutting down and data not being accessible.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

One person focused on outreach.

Entire team dedicated to building the APIs and making the data available and accessible. Each product gets its own endpoint; there are about 30 now and planning to do 70 more.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Create terrible, hand-rolled documentation with a lot of pictures; don’t give developers a channel to communicate; don’t listen to them; don’t give them a way to test out data before granting a key; and don’t have metrics for measuring developer happiness and just focus on usage metrics.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Focus more on product excellence and marketing. A lot of developers don’t like being pushed to. Most developers go to hackathons for social interaction and career progression, not because they love working with your data and using your APIs.

10. Any other thoughts you’d like to share? If so, please feel free!

Instead of trying to focus too much on attracting random developers, focus on the developers who are engaged now and reach out to them. Would like to introduce a CRM system to help manage these relationships better.

Learned it’s important to grab as much data during the registration process in order to gain better insight into developer intent, behavior, and characteristics. Found it’s very hard to get information after developers have gained access; they don’t typically respond to email requests for information.

Hackathons, etc. are huge investment of time and resources.

Biggest lesson learned is to always focus on the principle of how to make it as easy for the developers as possible.

Make heavy use of GraphQL because it’s introspective and helps developers understand the API in greater detail. Like GraphQL because it can evolve without breaking things.

Appendix D: OpenFEC interview notes

1. What role do you play within the organization?

Tech Lead at FEC, mostly focused on OpenFEC. Involved in FEC campaign finance data for ten years.

2. What is the purpose and scope of your API developer outreach program?

Don’t have an overarching formal approach in place yet, but do provide support to developers through a variety of channels when they reach out.

3. What does success look like for you?

Number one measure of success is providing developers accurate data, and making sure that is available at all times. Also making sure that developers can find the data that they need.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Don’t have a formal developer outreach program. Have a small team of developers, designers, content designers, and product managers.

Would like to be more proactive about engaging.

Do have some mechanisms for getting feedback. For example, the project is open source, which encourages developers to add issues, contribute, etc. Developers can send emails to [email protected]. Existing team tries to get back as quickly as possible. Would like to have a ticket management system to help facilitate workflow around support. Do have a Google group in which the community of developers can ask and answer questions.

Do have a developer page, and work hard to keep it up-to-date.

Use GSA’s API Umbrella to manage users, which handles rate limiting and caching.

Have a try-it-out feature on the developer page, powered by Swagger, where you can just put params in to see how the API works.

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

Don’t have a sandbox, but do provide instructions for setting up locally. And also provide a sample database that can be copied. Since all data is public, don’t have to worry about PII. Amazon provides a free sandbox environment to anyone with a .gov domain that can be used to set up a test environment.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

Don’t have formal metrics. Using the GSA API Umbrella, there is some data available, but not currently using that data to drive decisions.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

The existing team of developers, etc. provides outreach support.

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Make breaking changes and don’t tell anyone; don’t respond to and fix data quality issues, which would undermine the credibility of the API; don’t serve the data in an efficient manner.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

Have discussed holding a hackathon or conference, similar to what Blue Button is doing.

Holding a quarterly developer conference call to start answering questions and discussing ideas.

Regardless of activity, starting to build a community.

Pushing our code to code.gov to increase awareness.

10. Any other thoughts you’d like to share? If so, please feel free!

There are commercial restrictions around the use of open FEC data.

Appendix E: Salesforce interview notes

1. What role do you play within the organization?

Vice President of Developer Relations. My team doesn’t create APIs but we drive awareness and adoption through producing:

  • Technical content

  • Demos & sessions at events

  • Tools like SDKs and interactive docs

  • Marcom (social, email, etc)

  • Community Building

2. What is the purpose and scope of your API developer outreach program?

Unlike companies who solely monetize services, our primary purpose is to enable developers to create custom integrations with Salesforce.

As an advanced enterprise software platform, it’s critical that Salesforce work well with all the other enterprise apps you might have. In addition, many of our customers are building custom web or mobile apps that need access to customer data.

Because we are a platform, our purpose is not limited to just “API outreach” but is inclusive both of our APIs and also our app development platform.

Some of the primary use cases for our APIs are to extract data for archival purposes, surface customer data in third-party or bespoke apps, or to enable business systems to work together in other ways. We also have a family of machine learning APIs known as Einstein Vision and Einstein Language that can be used by any application to create predictions from unstructured images and text.

3. What does success look like for you?

Ultimately, we know that our customers are demanding more skilled Salesforce Developers every day to support their custom implementations so our primary metrics are monthly signups and active monthly usage.

Our goal is to keep the growth of our developer population in line with our overall customer growth to ensure there are enough developers in the ecosystem to service our customers’ needs.

We also measure job postings for Salesforce Developers, traffic to our website, engagement with our events like Dreamforce and TrailheaDX, and usage of our online learning portal trailhead.salesforce.com.

4. What are the key elements or practices (e.g., documentation, demos, webinars, conferences, blog posts) that you’re using to drive and sustain effective adoption of your API(s)?

Blogging, webinars, documentation (including an API Explorer), training classes, sample code, first-party events (we do regional workshops called DevTrails, large regional events called Salesforce World Tours, and big global events called Dreamforce and TrailheaDX)

In addition, we have built a substantial community program with over 200 developers groups around the world and an MVP program to recognize top contributors in the community. We’re in the process of expanding our open source footprint (sample code, apps, and SDKs).

5. Do you make use of an API developer sandbox to drive and sustain adoption? If so, please describe how you’ve designed that environment to be useful and usable to developers.

We have a free environment called the Salesforce Developer Edition that anyone can sign up for. There is no expiration date so you can use it as long as you’d like, and you’re allowed to sign up for as many as you’d like for the times that you need a “clean sandbox.”

We’ve also tightly integrated these developer editions into the Trailhead learning platform, so that as a developer is going through an online course (called a module) or a tutorial (called a project) they can actually complete hands-on challenges in their developer environment and get validation that they have done it correctly.

To protect our business interests, these developer environments have limits in terms of API usage, storage, user licenses, etc. With these limits, we do our best to find a balance between empowering the developers and protecting the business. And we have a process where developers can request increased limits.

6. What types of metrics do you use to measure adoption effectiveness and to inform future decisions about how you evolve your program?

We use our CRM platform to measure all the ways we touch our developers and how that impacts success.

7. How is your outreach program structured and staffed? How did it start out and evolve over time?

When we launched the program, we started small. 1-2 evangelists, a program manager for community/events, and a web developer. It grew from there.

We don’t share details about our current team size publicly. However, the key components of our program are as follows with approximate percentages:

  • Marketers (12%)

  • Evangelists (35%)

  • Community Program Managers (12%)

  • Event Producers/Program Managers (13%)

  • Marketing Operations: Email, Development, Creative, etc (25%)

8. Imagine that you’re charged with ensuring the complete failure of an API developer outreach program. What things would you do to ensure that failure?

Gate all the content.

Don’t provide a developer edition.

Don’t share use-cases or sample code.

Don’t find any partners.

Don’t engage with your users or help them be successful.

9. What big ideas do you have for evolving your outreach program to make it even more effective?

We have a ton of content but it’s not all discoverable.

Working on making our pages more data-driven and dynamic so we can expose all our resources.

We’re also moving a lot of our content to GitHub as well as creating a more organized process to store our code samples.

Many of them are orphaned in blog posts and other places.

10. Any other thoughts you’d like to share? If so, please feel free!

N/A


Having The Dedication To Lead An API Effort Forward Within A Large Enterprise Organization

I work with a lot of folks in large enterprise organizations, institutions, and government agencies who are moving the API conversation forward within their groups. I’m all too familiar with what it takes to advance the API conversation within large, well-established enterprise organizations. However, I am the first to admit that while I have a deep understanding of what it involves, I do not have the fortitude to actually lead an effort for the sustained amount of time it takes to actually make change. I just do not have the patience or the personality for it, and I’m eternally grateful for those that do.

There are regular streams of emails in my inbox from people embedded within enterprise organizations, looking for guidance, counseling, and assistance in moving forward the API conversation at their organizations. I am happy to provide assistance in an advisory capacity, and to consult with groups to help them develop their strategies. A significant portion of my income comes from conducting 1-3 day workshops within the enterprise, helping teams work through what they need to. There is one thing I cannot contribute to any of these teams, however: the dedication and perseverance it will take to actually make it happen.

It takes a huge amount of organizational knowledge to move things forward at a large organization. You have to know who the decision makers are, and who the gatekeepers are for all of the important resources–this is knowledge you have to acquire by being embedded, and working within an organization for a very long time. You just can’t walk in the door and make sense of things within days, or weeks. You have to be able to work around schedules and personalities–getting to know people, and truly beginning to understand their motivations, their willingness to contribute, or whether they’ll actually decide to work against you. The culture of any enterprise organization will be the most important area of concern for you as you craft and evolve your API strategy.

I often wish I had the fortitude to work in a sustained capacity within a large organization. I’ve tried. It just doesn’t fit my view of the world. However, I am super thankful for those of you who do. I’m super happy to help you in your journey. I’m happy to help you think through what you are experiencing as part of my storytelling here on my blog–just email me your questions, thoughts, and concerns. I’m happy to anonymize as I work through my responses here on the blog; about 60% of the stories you read here are the anonymized result of emails I receive from y’all. I’m happy to vent for you, and use you as my muse. I’m also happy to help out in a more dedicated capacity, and provide my consulting assistance to your organization–it is what I do, and how I pay the bills. Let me know how I can help.


Understanding The Event-Driven API Infrastructure Opportunity That Exists Across The API Landscape

I am at the Kong Summit in San Francisco all day tomorrow. I'm going to be speaking about my research into the event-driven architectural layers I've been mapping out across the API space. Looking for the opportunity to augment existing APIs with push technology like webhooks, and streaming technology like SSE, as well as pipe data in and out of Kafka, fill data lakes, and train machine learning models. I'll be sharing what I'm finding from some of the more mature API providers when it comes to their investment in event-driven infrastructure, focusing on Twilio, SendGrid, Stripe, Slack, and GitHub.

As I am profiling APIs for inclusion in my API Stack research, and in the API Gallery, I create an APIs.json, OpenAPI, Postman Collection(s), and sometimes an AsyncAPI definition for each API. All of my API catalogs, and API discovery collections use APIs.json + OpenAPI by default. One of the things I profile in each APIs.json is the usage of webhooks as part of API operations. You can see collections of them that I've published to the API Gallery, aggregating many different approaches in what I consider to be the 101 of event-driven architecture, built on top of existing request and response HTTP API infrastructure. Allowing me to better understand how people are doing webhooks, and to begin sketching out plans for a more event-driven approach to delivering resources, and managing activity on any platform that is scaling.
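
To give a sense of what this profiling looks like, here is a minimal sketch of an APIs.json entry flagging webhook support, written as a small Python script. The x-webhooks property type is my own illustrative convention for this post, not an official part of the APIs.json format, and the URLs are placeholders.

    import json

    # A minimal, illustrative APIs.json entry -- the x-webhooks property type
    # is my own convention for flagging webhook support, not an official field.
    entry = {
        "name": "Example API",
        "humanURL": "https://example.com",
        "baseURL": "https://api.example.com",
        "properties": [
            {"type": "x-openapi", "url": "https://example.com/openapi.json"},
            {"type": "x-webhooks", "url": "https://example.com/docs/webhooks"}
        ]
    }

    # Write a single-API collection to disk for use in catalogs and galleries.
    with open("apis.json", "w") as f:
        json.dump({"name": "Example Collection", "apis": [entry]}, f, indent=2)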

While studying APIs at this level you begin to see patterns across how providers are doing what they are doing, even amidst a lack of standards for things like webhooks. API providers emulate each other–it is how much of the API space has evolved in the last decade. You see patterns like how leading API providers are defining their event types. Naming, describing, and allowing API consumers to subscribe to a variety of events, and receive webhook pings or pushes of data, as well as other types of notifications. Helping establish a vocabulary for defining the most meaningful events that are occurring across an API platform, and then providing an event-driven framework for subscribing to pushes of data when something occurs, as well as sustained API connections in the form of server-sent events (SSE), HTTP long polling, and other long-running HTTP connections.

As I said, webhooks are the 101 of event-driven technology, and once API providers evolve in their journey you begin to see investment in 201 level solutions like SSE, WebSub, and more formal approaches to delivering resources as real-time streams and publish / subscribe solutions. Then you see platforms begin to mature and evolve into the 301 and beyond courses, with AMQP, Kafka, and oftentimes other Apache projects. Sure, some API providers begin their journey here, but many API providers have to ease into the world of event-driven architecture, getting their feet wet with managing their request and response API infrastructure, and slowly evolving with webhooks. Then as API operations harden, mature, and become more easily managed, API providers can confidently begin evolving into more sophisticated approaches to delivering data where it needs to be, when it is needed.
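
To make the 201 level a little more concrete, here is a rough sketch of what consuming a server-sent events (SSE) stream looks like from the consumer side, using the Python sseclient package. The stream URL is hypothetical.

    # Consume a (hypothetical) SSE stream, printing each event as it arrives.
    from sseclient import SSEClient

    for event in SSEClient("https://api.example.com/v1/events/stream"):
        if event.data:
            print(event.event, event.data)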

From what I've gathered, the more mature API providers who are further along in their API journey have invested in some key areas, which has allowed them to continue investing in other key ways:

  • Defined Resources - These API providers have their APIs well defined, with master planned designs for their suite of services, possessing machine-readable definitions like OpenAPI, Postman Collections, and AsyncAPI.
  • Request / Response - They have fine-tuned their approach to delivering their HTTP-based request and response structure, keeping their infrastructure well defined.
  • Known Event Types - They have a handle on what is changing, and what the most important events are for API providers, as well as API consumers.
  • Push Technology - They have begun investing in webhooks, and other push technology, to make sure their API infrastructure is a two-way street, and they can easily push data out based upon any event–see the sketch after this list.
  • Query Language - They understand the value of investing in a coherent querying strategy for their infrastructure that can work seamlessly with the defining, triggering, and overall management of event-driven infrastructure.
  • Stream Technology - They have a solid understanding of what data changes most frequently, as well as the topics people are most interested in, augmenting push technology with streaming subscriptions that consumers can tap into.
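
As a concrete example of the push technology piece referenced above, here is a bare-bones webhook receiver sketched with Flask. The payload shape and event type naming are hypothetical, and any real implementation should also verify the provider's signature header before trusting a payload.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhooks", methods=["POST"])
    def receive_webhook():
        # Parse the JSON payload pushed by the API provider.
        event = request.get_json(force=True)
        # Route on the provider-defined event type, e.g. "invoice.payment_succeeded".
        print("received event:", event.get("type", "unknown"))
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)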

At this point in most API providers' journeys, they are successfully operating a full suite of event-driven solutions that can be tapped internally, and externally with partners, and other 3rd party developers. They are probably already investing in Kafka, and other Apache projects, and getting more sophisticated with their event-driven API orchestration. Request and response API infrastructure is well documented with OpenAPI, and groups are looking at event-driven specifications like AsyncAPI to continue to ensure all resources, messages, events, topics, and other moving parts are well defined.

I'll be showcasing the event-driven approaches of Twilio, SendGrid, Stripe, Slack, and GitHub at the Kong Summit tomorrow. I'll also be looking at streaming approaches by Twitter, Slack, SalesForce, and Xignite. Which is just the tip of the event-driven API architecture opportunity I'm seeing across the existing API landscape. After mapping out several hundred API providers, and over 30K API paths using OpenAPI, and then augmenting and extending what is possible using AsyncAPI, you begin to see the event-driven opportunity that already exists out there. When you look at how API pioneers are investing in their event-driven approaches, it is easy to get a glimpse at what all API providers will be doing in 3-5 years, once they are further along in their API journey, and have continued to mature their approach to moving their valuable bits and bytes around using the web.


Talking Healthcare APIs With The CMS Blue Button API Team At #APIStrat In Nashville Next Week

We have the API evangelist from one of the most significant APIs out there today at #APIStrat in Nashville next week. Mark Scrimshire (@ekivemark), Blue Button Innovator and Developer Evangelist from NewWave Telecoms and Technologies, will be on the main stage next Tuesday, September 25th, 2018. Mark will be bringing his experience helping stand up the Blue Button API with the Centers for Medicare and Medicaid Services (CMS), and sharing stories from the trenches of delivering this critical piece of health API infrastructure within the United States.

I consider the Blue Button API to be one of the most significant APIs out there right now for several key reasons:

  • API Reach - An API that has potential to reach 44 million Medicare beneficiaries, which is 15 percent of the U.S. population–that is a pretty significant audience to reach when it comes to the overall API conversation.
  • Fast Healthcare Interoperability Resources (FHIR) - The Blue Button API supports HL7 FHIR, pushing the specification forward in the overall healthcare API interoperability discussion, making it extremely relevant to APIStrat and the OpenAPI Initiative (OAI).
  • Government API Blueprint - The way in which the Blue Button API team at CMS and USDS is delivering the API provides a potential blueprint that other federal and state level agencies can follow when rolling out their own Medicare related APIs, as well as any other critical infrastructure that this country depends on.

This is why I am always happy to support the Blue Button API team in any way I can, and I am very stoked to have them at APIStrat in Nashville next week. I’ve spent a lot of time working with, and studying what the Blue Button API team is up to, and I spoke at their developer conference hosted at the White House last month. They have some serious wisdom to share when it comes to delivering public APIs at this scale, making the keynote with Mark something you will not want to miss.

You can check out the schedule for APIStrat next week on the website. There are also still tickets available if you want to join in the conversations going on there Monday, Tuesday, and Wednesday next week. APIStrat is operated by the OpenAPI Initiative (OAI), making it the place where you will be having high level API conversations like this one. When it comes to APIs, and industry changing API specifications like FHIR, APIStrat is the place to be. I'll see you all in Nashville next week, and I look forward to talking APIs with all y'all in the halls, and around town for APIStrat 2018.


Sadly Stack Exchange Blocks API Calls Being Made From Any Of Amazon's IP Blocks

I am developing an authentication and access layer for the API Gallery that I am building for Streamdata.io, while also federating it for usage as part of my API Stack research. In addition to building out these catalogs for API discovery purposes, I’m also developing a suite of tools that allow users to subscribe to different topics from popular sources like GitHub, Reddit, and Stack Overflow (Exchange). I’ve been busy adding one or two providers to my OAuth broker each week, until the other day I hit a snag with the Stack Exchange API.

I thought my Stack Exchange API OAuth flow had been working–it has been up for a few months, and I seem to remember authenticating against it before–but this weekend I began getting an error that my IP address was blocked. I was looking at log files trying to understand if I was making too many calls, or committing some other potential violation, but I couldn't find anything. Eventually I emailed Stack Exchange to see what their guidance was, to which I got a prompt reply:

“Yes, we block all of Amazon’s AWS IP addresses due to the large amount of abuse that comes from their services. Unfortunately we cannot unblock those addresses at this time.”

Ok then. I guess that is that. I really don't feel like setting up another server with another provider just so I can run an OAuth server from there. Or maybe I will have to, if I expect to offer a service that provides OAuth integration with Stack Exchange. It's a pretty unfortunate situation that doesn't make a whole lot of sense. I can understand adding another layer of whitelisting for developers, pushing them to add their IP address to their Stack Exchange API application, and pushing us to justify that our app should have access, but blacklisting an entire cloud provider from accessing your API is just dumb.

I am going to weigh my options, and explore what it will take to set up another server elsewhere. Maybe I will start setting up individual subdomains for each OAuth provider I add to the stack, so I can decouple them, and host them on another platform, in another region. This is one of those roadblocks you encounter doing APIs that just doesn't make a whole lot of sense, and yet you still have to find a workaround–you can't just give in, despite API providers being so heavy-handed, and not considering the impact of their moves on consumers. I'm guessing that, in the end, the Stack Exchange API doesn't fit into their wider business model, which is something that allows blind spots like this to form, and continue.


Justifying My Existence In Your API Sales And Marketing Funnel

I feel like I'm regularly having to advocate for my existence, and the existence of developers who are like me, within the sales and marketing funnel for many APIs. I sign up for a lot of APIs, and have the pleasure of enjoying a wide variety of on-boarding processes. With many APIs I have no problem signing up, on-boarding, and beginning to make calls, while with others I have to justify my existence within their API sales and marketing funnel. Don't get me wrong, I'm not saying that I shouldn't be expected to justify my existence, it is just that many API providers are set up to immediately discourage, create friction for, and dismiss my class of API integrator–the class that doesn't fit neatly into the shiny big money integration you have defined at the bottom of your funnel.

I get that we all need to make money. I have to. I'm actually in the business of helping you make money. I'm just saying that you are missing out on a significant amount of opportunity if you only focus on what comes out the other side of your funnel, and discount the nutrients developers like me can bring to your funnel ecosystem. I'm guessing that my little domain apievangelist.com does not return the deal size you are looking for, but I think you are putting too much trust in the numbers provided to you by your business intelligence provider. I get that you are hyper focused on making the big deals, but you might be leaving a big deal on the table by shutting out small fish, who might have outsized influence within their organization, government agency, or industry. Your business intelligence is focusing on the knowns, and doesn't seem very open to considering the unknowns.

As the API Evangelist I have an audience. I've been in business since 2010, so I've built up an audience of enterprise folks who read what I write, and listen to "some" of what I say. I know people like me within the federal government, within city government, and across the enterprise. Over half the people I know who work within the enterprise, helping influence API decisions, are also kicking the tires of APIs at night. Developers like us do not always have a straightforward project–we are just learning, understanding, and connecting the dots. We don't always have a ready-to-go deal in the pipeline, and are usually doing some homework so that we can go sell the concept to decision makers. Make sure your funnel doesn't keep us out, run us away, or remove channels for our voice to be heard.

In a world where we focus only on the big deals, and on scaling and automating the operation of platforms, we run the risk of losing ourselves. If you are only interested in landing those big customers, and achieving the exit you desire, I understand. I am not your target audience. I will move on. It also means that I won't be telling any stories about what you are doing, building any prototypes, or generally amplifying what you are doing on social media, and across the media landscape. Providing many of the nutrients you will need to land some of the deals you are looking to get, generating the internal and external buzz needed to influence the decision makers. Providing real world use cases of why your API-driven solution is the one an enterprise group should be investing in. Make sure you aren't locking us out of your platform, and you are investing the energy into getting to know your API consumers, more about what their intentions are, and how it might fit into your larger API strategy–if you have one.


I Am Needing Some Evidence Of How APIs Can Make An Impact In Government

Eric Horesnyi (@EricHoresnyi), the CEO of Streamdata.io, and I were on a call with a group of people who are moving the API conversation forward across Europe, with the assistance of the EU. The project has asked us to assist them in the discovery of more data and evidence of how APIs are making an impact on how government operates within the European Union, but also elsewhere in the world. Aggregating as much evidence as possible to help influence the EU API strategy, and learn from what is already being done. I'm heading to Italy next month to present to the group, and participate in conversations with other API practitioners and evangelists, so I wanted to start my usual storytelling here on the blog to solicit contributions from my audience about what they are seeing.

I am looking for some help from my readers who work at city, county, state, and federal agencies, or at the private entities who help them with their API efforts. I am looking for official, validated, on the record examples of APIs making a positive impact on how government serves its constituents. Quantifiable examples of how a government agency has published a private, partner, or public API, and it helped the agency better meet its mission. I'm looking for the mundane, as well as the unique and interesting, with tangible evidence to back it all up. Like the number of developers, partners, apps, cost savings, efficiencies, or any other positive effect. Demonstrating that APIs, when done right, can move the conversation forward at a government agency. For this round, I'm going to need first hand accounts, because I will need to help organize the data, and work with this group to submit it to the European Union as part of their wider effort.

This is something I've been doing loosely since 2012, but I need to start getting more official about how I gather the stories, and pull together actual evidence, going beyond just my commentary from the outside in. I'll be reaching out to all my people in government, asking for examples. If you know of anything, please email me at [email protected] with your thoughts. We have an opportunity to influence the regulatory stance in Europe when it comes to government putting APIs to work, which will be something that washes back upon the shores of the United States with each wave of API regulations to come out of the EU. My casual storytelling about how government APIs are making change on my blog has worked for the last five years, but moving forward we are going to need to get better at gathering, documenting, and sharing examples of how APIs are working across government. Helping establish more concrete blueprints for how to do all of this properly, and ensuring that we aren't reinventing the wheel when it comes to APIs in government.

If you know someone working on APIs at any level of government, feel free to share a link to my story, or send an introduction via [email protected]. I'd love to help share the story, and evidence of the impact they are making with APIs. I appreciate all your support in making this happen–it is something I'll put back out to the community once we've assembled it and talked through it in Italy next month.


Being Open Is More About Being Open So Someone Can Extract Value Than Open Being About Any Shared Value

One of the most important lessons I've learned in the last eight years is that when people are insistent about things being open, in both accessibility and cost, it is often more about things remaining open for them to freely (license-free) extract value from, than it is ever about any shared or reciprocal value being generated. I've fought many a battle on the front lines of "open", leaving me pretty skeptical when anyone is advocating for open, and forcing me to be even more critical of my own positions as the API Evangelist, and the bullshit I peddle.

In my opinion, ANYONE wielding the term open should be scrutinized for insights into their motivations–me included. I've spent eight years operating on the front line of both the open data and the open API movements, and unless you are coming at it from the position of a government entity, or from a social justice frame of mind, you probably want open so that you can extract value from whatever is being opened. With many different shades of intent existing when it comes to actually contributing any value back, and supporting the ecosystem around whatever is actually being opened.

I ran with the open data dogs from 2008 through 2015 (still howl and bark), pushing for city, county, state, and federal governments to open up data. I've witnessed how everyone wants it opened, sustained, maintained, and supported, but does not want to give anything back. Google doesn't care about the health of local transit, as long as the data gets updated in Google Maps. Almost every open data activist, and data focused startup I've worked with has high expectations for what government should be required to do, and very low expectations regarding what should be expected of them when it comes to paying for commercial access, sharing enhancements and enrichments, providing access to usage analytics, and being observable and open to sharing access with the end-users of this open data. Libertarian capitalism is well designed to take, and not give back–yet actively encourage open.

I deal with companies, organizations, and institutions every day who want me to be more open with my work. Who are more than happy to go along for the ride when it comes to the momentum built up from open in-person gatherings, Meetups, and conferences. Who are always open to syndicating data, content, and research. All while working as hard as possible to extract as much value as they can, and not give anything back. There are many, many, many companies who have benefitted from the open API work that I, and other evangelists in the space do on a regular basis, without ever considering if they should support it, or give back. I regularly witness partnership scenarios across all of the API platforms I monitor, where the larger, more proprietary, and successful partner extracts value from the smaller, more open, and less proven partner. I get that some of this is just the way things are, but much of it is about larger, well-resourced, and more closed groups just taking advantage of smaller, less-resourced, and more open groups.

I have visibility into a number of API platforms that are the targets of many unscrupulous API consumers who sign up for multiple accounts, do not actively communicate with platform owners, and are just looking for a free hand out at every turn. Making it very difficult to be open, and oftentimes something that can also be very costly to maintain, sustain, and support. Open isn't FREE! Publicly available data, content, media, and other resources cost money to operate. The anti-competitive practices of large tech giants have set the price of common digital resources so low, for so long, that it has changed behaviors and made unrealistic expectations the default. Resulting in some very badly behaved API ecosystem players, and ecosystems that encourage and incentivize bad behavior within specific API communities–something that also spreads from provider to provider. Giving APIs a bad name.

When I come across people being vocal about some digital resource being open, I immediately begin conducting a little due diligence on who they are. Their motivations will vary depending on where they come from, and while there are no constants, I can usually tell a lot about someone by whether they come from a startup ecosystem, the enterprise, government, venture capital, or other dimensions of our reality that the web has reached into recently. My self-appointed role isn't just about teaching people to be more "open" with their digital assets, it is more about teaching people to be more aware and in control of their digital assets. Because there are a lot of wolves in sheep's clothing out there, trying to convince you that "open" is an essential part of your "digital transformation", and showcasing all the amazing things that will happen when you are more "open". When in reality they are just interested in you being more open so that they can get their grubby hands on your digital resources, then move on down the road to the next sucker who will fall for their "open" promises.


Providing Minimum Viable API Documentation Blueprints To Help Guide Your API Developers

I was taking a look at the Department of Veterans Affairs (VA) documentation for the VA Facilities API, intending to provide some feedback on the API implementation. The API itself is pretty sound, and I don't have much feedback without having actually integrated it into an application, but following on the heels of my previous story about how we get API developers to follow minimum viable API documentation guidance, I had lots of feedback on the overall delivery of the documentation for the VA Facilities API, helping improve on what they have there.

Provide A Working Example of Minimum Viable API Documentation
One of the ways you help incentivize your API developers to deliver minimum viable API documentation across their API implementations is to do as much of the work for them as you can, providing them with forkable, downloadable, clonable API documentation that meets the minimum viable requirements. To help illustrate what I'm talking about, I created a base GitHub blueprint for what I'd suggest as minimum viable API documentation at the VA. Providing something the VA can consider, and borrow from, as they develop their own strategy for ensuring all APIs are consistently documented.

Covering The Bare Essentials That Should Exist For All APIs
I wanted to make sure each API had the bare essentials, so I took what the VA has already done over at developer.va.gov, and republished it as a static single page application that runs 100% on GitHub Pages, and is hosted in a GitHub repository–providing the following essential building blocks for APIs at the VA:

  • Landing Page - Giving any API a single landing page that contains everything you need to know about working with the API. The landing page can be hosted in its own repo, with its own subdomain, and then linked up with other APIs using a facade page, or it could be published with many other APIs in a single repository.
  • Interactive Documentation - Providing interactive, OpenAPI-driven API documentation using Swagger UI. Providing a usable, and up to date version of the documentation that developers can use to understand what the API does.
  • OpenAPI Definition - Making sure the OpenAPI behind the documentation is front and center, and easily downloaded for use in other tools and services.
  • Postman Collection - Providing a Postman Collection for the API, and offering it as more of a transactional alternative to the OpenAPI.

That covers the bases for the documentation that EVERY API should have. Making API documentation available at a single URL as a human-viewable landing page, complete with documentation. While also making sure that there are two machine-readable API definitions available for the API, allowing the API documentation to be more portable, and usable in other tooling and services–letting developers use the API definitions as part of other stops along the API lifecycle.

Bringing In Some Other Essential API Documentation Elements
Beyond the landing page, interactive documentation, OpenAPI, and Postman Collection, I wanted to suggest some other building blocks that would really make sure API developers at the VA are properly documenting, communicating, as well as supporting their APIs. To go beyond the bare bones API documentation, I wanted to suggest a handful of other elements, as well as incorporate some building blocks the VA already had on the API documentation landing page for the VA Facilities API.

  • Authentication - Providing an overview of authenticating with the API using the header apikey.
  • Response Formats - They already had a listing of media types available for the API.
  • Support - Ensuring that an API has at least one support channel, if not multiple channels.
  • Road Map - Making sure there is a road map providing insights into what is planned for an API.
  • References - They already had a listing of references, which I expanded upon here.

I tried not to go to town adding all the building blocks I consider to be essential, and just contributed a couple of other basic items. I feel support and road map are essential, cannot be ignored, and should always be part of the minimum viable API documentation requirements. My biggest frustrations with APIs are 1) out of date documentation, 2) no support, and 3) not knowing what the future holds. I'd say that I'm also increasingly frustrated when I can't get at the OpenAPI for an API, or at least find a Postman Collection for it. Machine-readable definitions moved into the essential category for me a couple years ago–even though I know some folks don't feel the same.

A Self Contained API Documentation Blueprint For Reuse
To create the minimum viable API documentation blueprint demo for the VA, I took the HTML template from developer.va.gov, and deployed it as a static Jekyll website that runs on GitHub Pages. The landing page for the documentation is a single index.html page in the root of the site, leveraging Jekyll for the user interface, while driving all the content on the page from the central config.yml for the API project. Providing a YAML checklist that API developers can follow when publishing their own documentation, doing a lot of the heavy lifting for developers. All they have to do is update the OpenAPI for the API, and add their own data and content to the config.yml, to update the landing page for the API. Providing a self-contained set of API documentation that developers can fork, download, and reuse as part of their work, delivering consistent API documentation across teams.
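
As a rough sketch of the approach, the checklist could even be generated or validated with a few lines of Python. The key names here are my own guesses at what the config.yml might contain, not the actual fields used in the demo.

    import yaml  # PyYAML

    # Hypothetical keys mirroring the building blocks discussed above.
    config = {
        "title": "VA Facilities API",
        "openapi": "openapi.json",
        "postman": "va-facilities.postman_collection.json",
        "authentication": "Pass your issued token in the apikey header.",
        "support": ["GitHub issues", "email"],
        "roadmap": [],
        "references": []
    }

    # Write the central YAML config that drives the Jekyll landing page.
    with open("config.yml", "w") as f:
        yaml.safe_dump(config, f, sort_keys=False)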

The demo API documentation blueprint could use some more polishing and comments. I will keep adding to it, and evolving it as I have time. I just wanted to share more of my thoughts about the approach the VA could take to provide functional API documentation guidance, in the form of a functional demo. Providing them with something they could fork, evolve, and polish on their own, turning it into a more solid, viable solution for documentation at the federal agency. Helping evolve how they deliver API documentation across the agency, and ensuring that they can properly scale the delivery of APIs across teams and vendors. While also helping maximize how they leverage GitHub as part of their API lifecycle, setting the base for API documentation in a way that ensures it can also be used as part of a build pipeline to deploy APIs, as well as manage, test, and secure them, helping deliver along almost every stop of a modern API lifecycle.

The website for this project is available at https://va-working.github.io/api-documentation/, and you can access the GitHub repository at https://github.com/va-working/api-documentation.


Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine-readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, and APIs.io, working to get them published in the Postman Network, and syndicating them as part of my wider work as an OpenAPI member. However, even beyond my own personal needs, and helping API providers get more syndication and exposure for their APIs, having a definition present significantly reduces friction when on-boarding, at almost every stop along a developer's API integration journey.

One of the API providers I reached out to recently responded with this: "I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/". Ok. First, I spent over 30 minutes there just the other day. Learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I'm guessing that the engineer in question doesn't know what an OpenAPI or Postman Collection is, they do not understand the impact these specifications are having on the wider API ecosystem, and lastly, I'm guessing they don't have any idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in its overall journey. Demonstrating to me that they have a long way to go when it comes to understanding the wider API landscape in which they are operating, and that they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn't that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don't use a gmail.com address to communicate–I am using apievangelist.com–and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn't have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn't care about who I am, and about what I need. This is a sign of what the future will hold if I actually bake your API into my applications–I don't need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient in the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain, or Twitter handle, of your consumers to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren’t aware of the wider API community, and the impact OpenAPI, and Postman are having on the on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren’t going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer I won't always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key, and begin making API calls in seconds, or minutes–there is a small sketch of this after the list below. Make learning about your API a hands-on experience, something I'm not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you’d know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, allowing the OpenAPI and Postman Collection to act as the integration contract.
  • Discovery - Please don’t expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I’m already using, and frequent as part of my daily activities. If you are in my Postman Client, I’m more likely to remember that you exist in my busy world.
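
Here is the kind of thing I mean, sketched in a few lines of Python. Pull down a (hypothetical) OpenAPI definition and enumerate the available paths and methods, before ever reading a page of static documentation.

    import requests

    # Fetch a (hypothetical) OpenAPI definition published by the provider.
    spec = requests.get("https://developer.example.com/openapi.json").json()

    # List every available operation -- a hands-on map of the API in seconds.
    for path, operations in spec.get("paths", {}).items():
        for method, operation in operations.items():
            print(method.upper(), path, "-", operation.get("summary", ""))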

These are just a few of the basics of why this type of response to my question was inadequate, and why you'd want to have an OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I'm developing are out of the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail of users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can't onboard with your API without friction, and I can't tell if there is anyone home willing to give me the time of day when I have questions, I'm going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I'm not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed that single link in response to my email, before I moved on!


Stack Exchange Has An API That Returns The Details For All Of Your Access Tokens

I'm a big fan of helpful authentication features, where API providers make it easier to manage the increasingly hellish environment, application, token, and other management duties of the average API integrator. To help me better manage my API apps, and the OAuth tokens I have in play, I am trying to document all the sensible approaches I come across while putting different APIs to work, and scouring the API landscape for stories.

One example of this in action comes from the Stack Exchange API, where you can find an API endpoint for accessing the details of your OAuth tokens, as well as for invalidating and de-authorizing them. A pretty useful set of endpoints when you are integrating with APIs, and find yourself having to manage many tokens across many APIs, apps, and users. Helping you check in on the overall health and activity of your tokens, revoking, renewing, and making sure they work when you need them the most.
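
For reference, here is a quick sketch of what this looks like against the Stack Exchange API. The endpoint shapes reflect my reading of their v2.x documentation, so treat the details as assumptions, and verify against their docs before relying on them.

    import requests

    TOKEN = "your-access-token"  # placeholder
    BASE = "https://api.stackexchange.com/2.2"

    # Inspect the token -- expiry, scopes, and the account it belongs to.
    details = requests.get(f"{BASE}/access-tokens/{TOKEN}").json()
    print(details.get("items", []))

    # Invalidate the token when it is no longer needed.
    requests.get(f"{BASE}/access-tokens/{TOKEN}/invalidate")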

It is helpful for me to write about the helpful authentication practices I come across while using APIs. It helps me aggregate them into a nice list of features API providers should consider supporting. If I don't write about it here on the blog, then it doesn't exist in my research, and my future storytelling. My goal is to help spread the knowledge about what is working across the sector, so that more API providers will adopt these practices along the way. You know what is better than Stack Exchange providing an API to manage your access tokens? All API providers providing you with an API to manage your access tokens!

These stories, and any other relevant links I’ve curated will be published to my API authentication research. Eventually I’ll roll all the features I’ve aggregated into either a long form blog post, or white paper I’ll publish and put out with the assistance of one of my partners. I’m interested in the authentication portion of this, but also I’m looking to begin defining processes for helping us better manage our API integration environments, application ids, secrets, tokens, and other goodies we depend on to secure our consumption of APIs across many different providers. It is something that will continue to expand, multiply, and grow more complex with each additional API we add to our growing list of dependencies.


Some Ideas For API Discovery Collections That Students Can Use

This is a topic I've wanted to set in motion for some time now. I had a new university professor cite my work again as part of one of their courses recently, something that floated this concept to the top of the pile again–API discovery collections meant just for students. Helping K-12, community college, and university students quickly understand where to find the most relevant APIs for whatever they are working on. Providing human, but also machine-readable collections that can help jumpstart their API education.

I use the API discovery format APIs.json to profile individual APIs, as well as collections of APIs. I'm going to kickstart a couple of project repos, helping me flesh out a handful of interesting collections that might help students better understand the world of APIs–there is a rough sketch of the format after the list below:

  • Social - The popular social APIs like Twitter, Facebook, Instagram, and others.
  • Messaging - The main messaging APIs like Slack, Facebook, Twitter, Telegram, and others.
  • Rock Star - The cool APIs like Twitter, Stripe, Twilio, YouTube, and others.
  • Amazon Stack - The core AWS Stack like EC2, S3, RDS, DynamoDB, Lambda, and others.
  • Backend Stack - The essential App stack like AWS S3, Twilio, Flickr, YouTube, and others.
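
As a rough sketch, here is what one of these collections could look like in APIs.json form. The property types beyond the basics are illustrative, and the URLs are placeholders.

    import json

    # An illustrative student-facing collection in APIs.json form.
    collection = {
        "name": "Social APIs For Students",
        "description": "The popular social APIs like Twitter, Facebook, and Instagram.",
        "apis": [
            {
                "name": "Twitter API",
                "humanURL": "https://developer.twitter.com",
                "properties": [
                    {"type": "x-signup", "url": "https://developer.twitter.com"},
                    {"type": "x-pricing", "url": "https://example.com/twitter/pricing"}
                ]
            }
        ]
    }

    print(json.dumps(collection, indent=2))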

I am going to start there. I am trying to provide some simple, usable collections of relevant APIs for students who are just getting started. If there are any other categories, or stacks of APIs, you think would be relevant for students to learn from, I'd love to hear your thoughts. I've done a lot of writing about educational and university based APIs, but I've only lightly touched upon which APIs students should be learning about in the classroom.

Providing ready to go API collections will be an important aspect of the implementation of any API training and curriculum effort. Having the technical details of the API readily available, as well as the less technical aspects like signing up, pricing, terms of service, privacy policies, and other relevant building blocks, should also be front and center. I'll get to work on these five API discovery collections for students. I'll get the title, description, and list of each API stack published as a README, then get to work on publishing the machine- and human-readable details for the technology, business, and politics of using APIs.


The Path To Production For Department of Veterans Affairs (VA) API Applications

This post is part of my ongoing review of the Department of Veterans Affairs (VA) developer portal and API presence, where I take a closer look at their path to production process, and provide some feedback on how the agency can continue to refine the information they provide to their new developers. Helping map out the on-boarding process for any new developer, ensuring they are fully informed about what it will take to develop an application on top of VA APIs, move those application(s) from a developer state to a production environment, and actually serve veterans.

Beginning with the VA’s base path to production template on GitHub, then pulling in some elements I found across the other APIs they have published to developer.va.gov, and finishing off with some ideas of my own, I shifted the outline for the path to production to look something like this:

  • Background - Providing the background of the VA API program.
  • [API Overview] - Any information relevant to specific API(s).
  • Applications - One right now, but eventually several applications, SDKs, and samples.
  • Documentation - The link, or embedded access to the API documentation, OpenAPI definition, and Postman Collection.
  • Authentication - Overview of how to authenticate with VA APIs.
  • Development Access - Provide an overview of signing up for development access.
  • Developer Review - What is needed to become a developer.
    • Individual - Name, email, and phone.
    • Business - Name, URL.
    • Location - In country, city, and state.
  • Application Review - What is needed to have an application(s).
    • Terms of Service - In alignment with platform TOS.
    • Privacy Policy - In alignment with platform TOS.
    • Rate Limits - Aware of the rate limits that are imposed.
  • Production Access - What happens once you have production access.
  • Support & Engagement - Using support, and expected levels of engagement.
  • Service Level Agreement - Platform working to meet an SLA governing engagement.
  • Monthly Review - Providing monthly reviews of access and usage on platform.
  • Annual Audits - Annual audits of access, and usage, with developer and application reviews.

I wanted to keep much of the content that the VA already had up there, but I also wanted to reorganize things a little bit, and make some suggestions for what might be next. Resulting in a path to production section that might look a little more like this.


Department of Veterans Affairs (VA) API Path To Production

Background

The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA's core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA's API ecosystem. We are in the alpha stage of the project, wherein we provide one API that enables parties external to the VA to submit VBA forms and supporting documentation via PDF on behalf of Veterans.

[API Specific Overview]

Applications

Take a look at the sample code and documentation we have on GitHub at https://github.com/department-of-veterans-affairs/vets-api-clients. We will be developing more starter applications, open source SDKs, and code samples, while also showcasing the work of other API developers in the near future–check back often.

Documentation

Spend some time with the documentation for the API(s) you are looking to use. Make sure the VA has the resources you will need to make your application work, before you sign up for a developer account, and submit your application for review.

Authentication

VA uses token-based authentication. Clients must provide a token with each HTTP request as a header called apiKey. This token is issued by VA. To receive a developer token to access this API in a test environment, please request access.
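
A minimal sketch of what that request looks like in practice. The path is hypothetical, but the apiKey header is the mechanism described above.

    import requests

    # Call a (hypothetical) VA API path in the test environment, passing the
    # issued developer token in the apiKey header.
    response = requests.get(
        "https://dev-api.vets.gov/services/example/v0/resources",
        headers={"apiKey": "your-developer-token"}
    )
    print(response.status_code)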

Development Access

All new developers must sign up for development access to the VA API platform, providing a minimal amount of information about yourself, the business or organization you represent, where you operate in the United States, and the application you will be developing.

Developer Review

You will provide the VA with details about yourself, the business you work for, and your location. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Individual Information
    • Name - Your name.
    • Email - Your email address.
    • Phone - Your phone number.
  • Business Information
    • Name - Your business or organizational name.
    • Website - Your business or organization website.
  • Location Information
    • City - The city where you operate.
    • State - The state where you operate.
    • Country - Acknowledge that you operate within the US.

We will take a look at all your details, then verify you and your business or organization as quickly as possible. Once you hear back from us via email, you will either be declined, or we will send you a Client ID and Secret for accessing the VA API development environment. When you are ready, you can submit your application for review by the VA team.

Application Review

You will provide us with details about the application you are developing, helping us understand the type of application you will be providing to veterans. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Name - The name of your application.
  • Details - Details about your application.
  • Terms of Service - Your application is in alignment with our terms of service.
  • Privacy Policy - Your application is in alignment with our privacy policy.
  • Rate Limits - You are aware of the rate limits imposed on your application.
  • Working Demo - A working demo of the application will be needed for the review.
  • Code Review - A code review will be conducted when the application goes to production.

We will take a look at all your details, and contact you about scheduling a code review. Once all questions about your application are answered, and a code review has been conducted, you will be notified letting you know whether your application is accepted or not.

Production Access

Once approved for production access, you will receive an email from the VA API team notifying you of your application's status. You will receive a new Client ID and Secret for your application to use in production, allowing you to use the base URL api.vets.gov instead of dev-api.vets.gov, and begin accessing live data.

Support & Engagement

The VA will be providing support using GitHub issues, and via email. All developers will be required to engage through these channels in order to actively participate in VA API operations, and to maintain access to VA APIs.

Service Level Agreement

The VA will be providing a service level agreement for each of the APIs we provide, committing to a certain quality of service, support, and communication around the APIs you will be integrating your applications with.

Monthly Review

Each month you will receive an email from the VA regarding your access and usage, providing a summary of our engagement. A response is required to maintain an active status as a VA developer, and application. After 90 days of no response, all developer or production application keys will be revoked, until contact is made. Keeping all applications active, with a responsive administrator actively managing things.

Annual Audits

Each year developers will receive an email from a VA API audit representative, reviewing your access and usage, and conducting a fresh developer review, as well as reviewing your application, access tokens, and end-usage. Ensuring that all applications operating in a production environment continually meet the expected security, privacy, support, and operational standards.


It Is Not Just A Path, But A Journey
I'm trying to shift the narrative for the path to production into being a more complete picture of the journey. What is expected of the developers, and their applications, as well as setting the table for what can be expected of the VA. I don't expect the VA to bite on all of these suggestions, but I can't help but put them in there when they are relevant, and I have the opportunity ;-)

Decouple Developer And Application(s)
One of the reasons I separated the developer review from the application review is so that a developer could sign up and immediately get a key to kick the tires and begin developing. When ready, which many will never be, they can submit an application for potential production access. Developers might end up having multiple applications, so if we can decouple the two early on, and allow all developers to have a default application in the development environment, while also being able to have multiple applications in a production environment, I feel things will be more scalable.

Not Forgetting The Path After Production
I wanted to make sure to put in some sort of process that would help ensure both the VA, and developers, are investing in the ongoing management of their engagement. Ensuring light monthly reviews of ALL applications using the APIs, and pushing for developers and the VA to stay engaged. While also making sure all developers and applications get annual reviews, preventing what happened with the Cambridge Analytica / Facebook situation, where a malicious application got access to such a large amount of data, without any regular review. Making sure the VA isn't forgetting about applications once they are in a production state. Pushing this to be more than just about achieving production status with an application, and also continuing to ensure it is serving end-users, the veterans.

Serving Veterans With Your Application
To close, I'd say that calling this a path to production doesn't do it justice. It should be a guide to being able to serve veterans with your application. Acknowledging that it will take a significant amount of development before your application will be ready, but also that the VA will work to review your developer account, and your application, and ensure that VA APIs, and the applications that depend on them, will operate as expected, and always be in service of the veteran. Something that will require a certain level of rigor when it comes to the development, certification, and support of applications across the VA API ecosystem.


An API Value Generation Funnel With Metrics

I've had several folks ask me to articulate my vision of an API centric "sales" funnel, which technically is out of my wheelhouse, being in the sales and marketing area, but since I do have lots of opinions on what a funnel should look like for an API platform, I thought I'd take a crack at it. To help articulate what is in my head I wanted to craft a narrative, as well as a visual, to accompany how I like to imagine a value generation funnel for any API platform.

I envision an API-driven value generation funnel that can be tipped upside down, over and over, like an hourglass, generating value as it repeatedly pushes API activity through the center, driven by a healthy ecosystem of developers, applications, and end-users putting applications to use. Providing a way to generate awareness of, and engagement with, any API platform, while also ensuring a safe, reliable, and secure ecosystem of applications that encourages end-user adoption, engagement, and loyalty–further expanding on the potential for developers to continue developing new applications, and enhancing their applications to better serve end-users.

I am seeing things in ten separate layers right now, something I’ll keep shifting and adjusting in future iterations, but I just wanted to get a draft funnel out the door:

  • Layer One - Getting folks in the top of the funnel.
    • Awareness - Making people aware of the APIs that are available.
    • Engagement - Getting people engaged with the platform in some way.
    • Conversation - Encouraging folks to be part of the conversation.
    • Participation - Getting developers participating on a regular basis.
  • Layer Two
    • Developers - Getting developers signing up and creating accounts.
    • Applications - Getting developers signing up and creating applications.
  • Layer Three
    • Sandbox Activity - Developers being active within the sandbox environment.
  • Layer Four
    • Certified Developers - Certifying developers in some way to know who they are.
    • Certified Application - Certifying applications in some way to ensure quality.
  • Layer Five
    • Production Activity - Incentivizing production applications to be as active as possible.
  • Layer Six
    • Value Generation (IN / OUT) - Driving the intended behavior from all applications.
  • Layer Seven
    • Operational Activity - Doing what it takes internally to properly support applications.
  • Layer Eight
    • Audit Developers - Make sure there is always a known developer behind the application.
    • Audit Applications - Ensure the quality of each application with regular audits.
  • Layer Nine
    • Showcase Developers - Showcase developers as part of your wider partner strategy.
    • Showcase Applications - Showcase and push for application usage across an organization.
  • Layer Ten
    • Loyalty - Develop loyal users by delivering the applications that users are needing.
    • End-Users - Drive end-user growth by providing the applications end-users need.
    • Engagement - Push for deeper engagement with end-users, and the applications they use.
    • End-Usage - Incentivize the publishing and consumption of all platform resources.

I'm envisioning a funnel that you can turn on its head over and over to generate momentum, and kinetic energy, with the right amount of investment–the narrative for this will work in either direction. Resulting in a two-sided funnel, with both sides working in concert to generate value in the middle.

To go along with this API value generation funnel, I'm picturing the following metrics being applied to quantify what is going on across the platform, and the ten layers:

  • Layer One - Unique visitors, page views, RSS subscribers, blog comments, tweets, GitHub follows, forks, and likes.
  • Layer Two - New developers who are signing up, and adding new applications to the platform.
  • Layer Three - API calls on sandbox API resources, and overall activity in the development environment.
  • Layer Four - New certified developers and applications that have been reviewed and given production access.
  • Layer Five - API calls for production API resources, understanding the overall activity across the platform.
  • Layer Six - GET, POST, PUT, DELETE on different types of resources, in different types of service plans, at different rates.
  • Layer Seven - Support requests, communication, and other activity that has occurred in support of operations.
  • Layer Eight - Number of developers and applications audited on a regular basis ensuring quality of application catalog.
  • Layer Nine - Number of new and existing developers and applications being showcased as part of platform operations.
  • Layer Ten - Number of end-users, sessions, page views, and other activity across the applications being delivered.

Providing a stack of metrics you can use to understand how well you are doing within each layer, understanding not just the metrics for a single area of your operations, but how well you are doing at building momentum, and increasing value generation. I hesitate to call this a sales funnel, because sales isn’t my jam. It is also because I do not see APIs as something you always sell–sometimes you want people contributing data and content into a platform, and not always just consuming resources. A well-balanced API platform is generating value, not just selling API calls.
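To help make this more tangible, here is a minimal sketch of how these layers and their metrics might be structured for reporting–every layer name and number below is a placeholder, not a measurement from a real platform:

```python
# A minimal sketch of a value generation funnel as a data structure.
# Layer names and numbers are placeholders for illustration only.

funnel = {
    "layer_01_top_of_funnel": {"unique_visitors": 5000, "rss_subscribers": 250},
    "layer_02_accounts": {"new_developers": 120, "new_applications": 85},
    "layer_03_sandbox": {"sandbox_api_calls": 45000},
    "layer_04_certification": {"certified_developers": 30, "certified_applications": 22},
    "layer_05_production": {"production_api_calls": 250000},
}

def conversion_rate(upper, lower):
    """A crude conversion rate between two adjacent layers of the funnel."""
    return sum(lower.values()) / sum(upper.values())

print(round(conversion_rate(funnel["layer_01_top_of_funnel"],
                            funnel["layer_02_accounts"]), 3))  # ~0.039
```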

I am not entirely happy with this API value generation funnel outline and diagram, but it is a good start, and gets me closer to what I’m seeing in my head. I’ll let it simmer for a few weeks, engage in some conversations with folks, and then take another pass at refining it. Along the way I’ll think about how I would apply it to my own API platform, and actually take some actions in each area, and begin fleshing out my own funnel. I’m also going to be working on a version of it with the CMO at Streamdata.io, and a handful of other groups I’m working with on their API strategy. The more input I can get from a variety of API platforms, the more refined I can make this outline and visual of my suggested API value generation funnel.


My API Storytelling Depends On The Momentum From Regular Exercise And Practice

I’ve been falling short of my normal storytelling quotas recently. I like to have at least three posts on API Evangelist, and two posts on Streamdata.io each day. I have been letting it slip because it was summer, but I will be getting back to my regular levels as we head into the fall. Whenever I put more coal in the writing furnace, I’m reminded of just how much momentum all of this takes, as well as the regular exercise and practice involved, allowing me to keep pace in the storytelling marathon across my blog(s).

The more stories I tell, the more stories I can tell. After eight years of doing this, I’m still surprised about what it takes to pick things back up, and regain my normal levels of storytelling. If you make storytelling a default aspect of doing work each day, finding a way to narrate your regular work with it, it is possible to achieve high volumes of storytelling going out the door, generating search engine and social media traffic. Also, if you root your storytelling in the regular work you are already doing each day, the chances it will be meaningful enough for people to tune in only increase.

My storytelling on API Evangelist is important because it helps me think through what I’m working on. It helps me become publicly accessible by generating more attention to my work, firing up new conversations, and reinforcing the existing ones I’m already having. When the storytelling slows, it means I’m either doing an unhealthy amount of coding or other work, or my productivity levels are suffering overall. This makes my API storytelling a heartbeat of my operations, and a regular stream of storytelling reflects how healthy my heartbeat is from regular exercise, and usage of my words (instead of code).

I know plenty of individuals, and API related operations, that have trouble finding their storytelling voice. Expressing that they just don’t have the time or resources to do it properly. Regular storytelling on your blog is hard to maintain, even with the amount of experience I have. Regardless, it is something you just have to do, and you will have to mandate that storytelling becomes a default aspect of your work each day. If you work on it regularly, eventually you’ll find your voice. However, there will always be times where you lose it, and have to work to regain it again. It is just the fight you will have to fight, but ultimately if you continue, it will be totally worth it. I’m very thankful I’ve managed to keep it going for over eight years now, resulting in a pretty solid platform that enables me to do what I do.


Allowing Users To Get Their Own OAuth Tokens For Accessing An API Without The Need For An API Application

I run a lot of different applications that depend on GitHub, and use GitHub authentication as the identity and access management layer for these apps. One of the things I like most about GitHub, and how I feel it handles its OAuth more thoroughly than most other platforms, is how they let you get your own OAuth token under settings > developer settings > personal access tokens. You don’t need to set up an application, and do the whole OAuth dance, you just get a token that you can use to pass along with each API call.
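As a minimal sketch of what this looks like in practice, here is how you would pass one of these personal access tokens along with a GitHub API call, with the token value obviously being a placeholder:

```python
import requests

# Call the GitHub API with a personal access token. No application setup,
# and no OAuth dance, just a token from settings > developer settings.
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder token

response = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": "token " + TOKEN},
)
print(response.json()["login"])
```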

I operate my own OAuth server which allows me to authenticate using OAuth with many leading APIs, so generating an OAuth token, and setting up a new provider isn’t too hard. However, it is always much easier to go under my account settings, create a new personal access token for a specific purpose, and get to work playing with an API. I wish that ALL API providers did this. At first glance, it looks like GitLab, Harvest, TypeForm, and Contentful all provide personal access tokens as a first option for on-boarding with their APIs. Demonstrating this is more of a pattern than just a GitHub feature.

One of these days I’m going to have to do another story documenting the entire GitHub OAuth system, because they have a lot of interesting bells and whistles that make using their platform much more secure, and just a more frictionless experience than other API providers I use on a regular basis. GitHub has ground down a lot of the sharp corners on the whole authentication experience when it comes to OAuth. It would make a nice blueprint to share, and work to convince other API providers it is a pattern worth following. Reducing the cognitive load around OAuth management for any API integration, and standardizing how API providers support their API consumers, and end-users.

I have three separate Twitter apps set up for specific purposes, but I wanted to have a separate personal application just for managing my personal @kinlane account. I submitted a Twitter application for review, but haven’t heard back after almost a month. As an individual user of any platform, I should be able to instantly issue a personal access token that lets me, or someone I sanction, access my data and content on the platform. Personal access tokens should be a default feature for any consumer focused platform, putting API access more within the control of each end-user, and the platform power users.


What Have You Done For Us Lately (API Partner Edition)

I’ve been working on developing and evolving the Streamdata.io partner program, trying to move forward conversations with other service providers in the space that have existed long before I started working on things, as well as other newer relationships that I’ve helped bring in. I’m fascinated by how partner programs work, or do not work, and have invested a lot of time trying to optimize and improve how I do my own operations, and assist my partners and clients in evolving and delivering on their own partner vision.

It is difficult to establish, and continue, meaningful and balanced partnerships between technology service and tooling providers. Sometimes providers have enough compatibility and synergy that they are able to hit the ground running with meaningful activities that strengthen, and build partnership momentum. We are trying to establish a meaningful, yet effective way of measuring partner activity, and understanding the value that is being generated, and where reciprocity exists. Looking at the following activities produced by Streamdata.io and its partners:

  • Partner Page - Being published to both of our partner pages.
  • Testimonials - Providing quotes for each other about our services.
  • Blog Posts - Publishing blog posts about the partnership and each other’s services.
  • White Papers - Publishing white papers or guides about the partnership and each other’s services.
  • Press Releases - Working on joint press releases about the partnership and each other’s services.
  • Integrations - Publishing open source repositories demonstrating integration and usage of each other’s services.
  • Workshops - Conducting workshops for each other’s customers, helping deliver each other’s services within our ecosystems.
  • Business - Actually providing business referrals from our customers, and conversations occurring across both companies.

There are other activities we like to see happening, but these eight areas represent the most common exchanges we encourage amongst our partners. The trick is always pushing for reciprocity across all these areas, helping deliver on a balanced partnership, and making sure there is equal value being generated for both sides of the partnership. Each of our partners looks at this list of activities differently, requiring different levels of participation, and having expectations of results set at different levels.

There are some “potential partners” who don’t want to even talk about any of these items until we have that first business deal. Other partners are more than happy when we engage in these activities, but are hesitant about reciprocating on their side. We are more than happy to take the lead on many of these activities, but increasingly we are tracking the activity on both sides of the partnership, to help quantify each one, and guide our conversations, and our marketing, development, and evangelism efforts. Leaving us to ask regularly of our partners, what have you done for us lately? While also asking ourselves the same question about what we have done for our partners.


The Federal Agencies Who Use Their developer.[domain].gov Subdomain

I was reviewing the new developer portal for the Department of Veterans Affairs (VA), and one of the things I took notice of was their use of the developer.va.gov subdomain. In my experience, the API efforts that invest in a dedicated subdomain, and specifically a developer dot subdomain, tend to be more invested in what they are doing than efforts that publish to a subfolder, or subsection of their website. As I was writing this post, I had a question arise in my mind, regarding how many other federal agencies use a dedicated subdomain for their developer programs–something I wanted to pick up later, and understand the landscape a little more.

I took a list of current federal agency domains from the GSA and wrote a little script to append developer. to each of the domains, and conduct an HTTP status code check to see whether or not these pages existed.
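The script was nothing fancy–a minimal sketch of it might look something like this, assuming a local domains.csv pulled from the GSA domain list, with a Domain Name column:

```python
import csv
import requests

# Append developer. to each federal agency domain, and check whether
# anything responds at that subdomain. Assumes a domains.csv file from
# the GSA with a "Domain Name" column (file and column names may vary).
with open("domains.csv") as handle:
    for row in csv.DictReader(handle):
        url = "http://developer." + row["Domain Name"].lower()
        try:
            # Not following redirects helps separate dedicated portals
            # from subdomains that just bounce you to the main website.
            response = requests.get(url, timeout=5, allow_redirects=False)
            print(url, response.status_code)
        except requests.RequestException:
            pass  # No DNS entry, or the subdomain is unreachable.
```

Here are the dedicated developer areas I found for the US federal government: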

  • Department of Veterans Affairs (VA) - https://developer.va.gov/
  • Department of Labor - https://developer.dol.gov
  • International Trade Administration (ITA) - https://developer.trade.gov & https://developer.export.gov
  • United States Patent and Trademark Office - https://developer.uspto.gov
  • National Renewable Energy Laboratory - https://developer.nrel.gov
  • Centers for Medicare & Medicaid Services - https://developer.cms.gov
  • The Advanced Distributed Learning Initiative - http://developer.adlnet.gov & http://developers.adlnet.gov
  • United States Environmental Protection Agency - http://developer.epa.gov
  • USA Jobs - http://developer.usajobs.gov

These nine agencies have decided to invest in a subdomain for their developer portals. I have to recognize two others who provide these subdomains, but then redirect to a subsection of their websites:

  • National Park Service - http://developer.nps.gov redirects to https://www.nps.gov/subjects/developer/index.htm
  • Data.gov - http://developer.data.gov redirects to https://www.data.gov/developers/

Additionally, there is a single domain I noticed that used the plural version of the subdomain:

  • Code.gov - https://developers.code.gov (plural)

Along the way, I also noticed that many agencies would redirect their developer subdomain, and I assume all subdomains, to the root of their agency’s domain. Ideally, all federal agencies would have a GitHub account, publish a developer portal using GitHub Pages, and use developer.[agencydomain].gov as the address for the portal. Even if they just provide access to the agency’s data inventory, it is important to lay down the foundation for a developer platform across data, APIs, and open source software out of all federal agencies, providing a common, well-known location to develop upon the government platform.

As part of my larger API discovery work I am going to keep lobbying that federal agencies work to publish a common developer.[agencydomain].gov portal. It would begin to transform how applications are built if you knew that you could automatically find a government agency’s data, APIs, and open source tooling at a single location. Especially if it was something that was default across ALL federal agencies, who were also actively publishing their public data assets, entire API catalog, and showcase of open source solutions they depend on and produce.


The Basics Of The VA API Feedback Loop

I’m working to break down the moving parts of the API efforts over at the VA, and provide as much relevant feedback as I possibly can. One of the components I’m wanting to think about more is the feedback loop for the VA API efforts. The feedback loop is one of the most essential aspects of doing an API, and can quickly become one of the most debilitating, paralyzing, and nutrient starving aspects of operating an API platform if done wrong, or left non-existent. However, the feedback loop is also one of the most valuable reasons for wanting to do APIs in the first place, providing the essential feedback you will need from consumers, and the entire API ecosystem, to move the API forward in a meaningful way.

Current Seeds Of The VA API Feedback Loop
Currently the VA is supporting the VA API developer portal using GitHub Issues and email. I mentioned in my previous review of the VA API developer portal that the personal email addresses provided for email support should be generalized, sharing the load when it comes to email support for the platform. Next, I’d like to address the usage of GitHub issues for support, along with email, and step back to look at how this contributes to, or could possibly take away from, the overall feedback loop for the VA API effort. Defining what the basics of an API feedback loop for the VA might be.

Expanding Upon The VA Usage Of GitHub Issues
I support the usage of GitHub issues for public support of any API related project. It provides a simple, observable way for anyone to get support around the VA APIs. While I’m guessing it was accidental, I like the specialization of the current repo, and its usage of GitHub issues, being dedicated to VA API clients and their needs. I’d encourage this type of continued focus when it comes to establishing additional feedback loops, keeping them dedicated to a specific aspect of operating on the VA API platform. It is something that might seem a little messy at first, but could easily be managed with the proper strategy, and usage of the GitHub APIs, which I’ll highlight below.

Makes API Operations More Public And Observable
One of the most important reasons for using GitHub as the cornerstone of the VA API feedback loop is that it allows for transparent, observable, auditable operation of the feedback loop across the VA API platform. One of the critical aspects of the overall health of the VA API platform in the future will be feedback loops being as open as they possibly can be. Of course, there are some feedback loops that should remain private, which GitHub issues can accommodate, but whenever possible the feedback loop for the VA API platform should be in the public domain, allowing all stakeholders, veterans, and the public to actively participate in the process. In a way that ensures every aspect of API operations is documented, and auditable, providing as much accountability as possible across VA API operations.

Allowing For More Modular Organization Of Feedback Loops
GitHub Issues also allow for the deployment, management, and organization of more modular feedback loops. Treating your feedback loops just as you would your APIs, making them small, meaningful, and doing one thing and doing it well. Any GitHub repository can have its own GitHub Issues, allowing for the deployment of specialized feedback loops based upon single projects that are part of different organizational groups. Beyond the modularity available when you leverage GitHub repositories, and organize them within GitHub Organizations, GitHub Issues can also be tagged, allowing for even more meaningful organization of feedback as it comes in, tagging and bagging for consideration as part of the road map, and other decision making processes that will be feeding off the VA API platform’s feedback loop.

Enabling Feedback Loop Automation With The GitHub API
Another benefit of using GitHub Issues as an API feedback loop building block is that they also have an API. Allowing for the automation of all aspects of the VA API platform feedback loop(s). The GitHub API can be used to aggregate, automate, audit, and work with the GitHub Issues for any GitHub organization and repo the VA has access to. Providing the ability to manage not just the GitHub Issues for a single GitHub repository, but for the orchestration of feedback loops across many different GitHub repositories, managed by many different GitHub organizations. Establishing a distributed feedback loop system which VA API leadership can use to coordinate with different internal, agency, partner, vendor, or public stakeholder(s) at scale, across many different projects, and components of the VA API platform.
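As a minimal sketch of this kind of automation, here is how you might aggregate the open issues across every repository in a GitHub organization using the GitHub API–the organization name and token are placeholders, and pagination is left out to keep things short:

```python
import requests

# Aggregate open GitHub Issues across all repositories in an organization,
# as a starting point for auditing a distributed feedback loop. Pagination
# is omitted for brevity, and the token and org name are placeholders.
HEADERS = {"Authorization": "token YOUR_PERSONAL_ACCESS_TOKEN"}
ORG = "department-of-veterans-affairs"

repos = requests.get("https://api.github.com/orgs/%s/repos" % ORG,
                     headers=HEADERS).json()

for repo in repos:
    issues = requests.get("https://api.github.com/repos/%s/issues" % repo["full_name"],
                          headers=HEADERS, params={"state": "open"}).json()
    for issue in issues:
        print(repo["full_name"], "#%d" % issue["number"], issue["title"])
```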

Augmenting Public Feedback With Private Github Repos
While it is critical that as many of the feedback loops across the VA API platform are publicly accessible, and observable by everyone as possible, it is also important that there are private channels for communication around some of the components of the platform. This is another reason why GitHub Issues can work as a building block for not just public feedback loops, but private feedback loops as well. Taking advantage of private repositories when it comes to allowing modular, automated, and private conversations to occur around certain VA API platform projects. Balancing the public aspects of the platform with feedback loops amongst trusted groups, while still leveraging GitHub for delivering the identity and access management aspects of governing private VA feedback loops.

Extending Private GitHub Repos With Email Support
Beyond the private GitHub repositories, and using their issues to facilitate private conversations, it always makes sense to have a generalized and dedicated email account as part of the feedback loop for any API platform. Providing another private, but also vendor neutral, way of supporting the platform. People are just familiar with email, and it makes sense to have a general account that is managed by many individuals who are coordinating around platform operations. Making it easy to provide feedback around VA API operations, and support anyone participating within the VA API ecosystem.

Auditing, Documenting, And Reporting Upon The VA Feedback Loop
I suggested in my review of the VA API platform that email should be standardized and delivered via a dedicated email account, so that multiple stakeholders can participate in support of the platform from a VA operational perspective. This way emails can be tagged, organized, and archived in support of the larger VA API feedback loop. Making sure all questions get answered, and that they also contribute to the evolution of the platform. Something that can also be done via the automation described earlier using the GitHub API. Allowing all threads, across any project and organization, to be audited, documented, and reported upon across all VA API operations. Ensuring that there is transparency, observability, and accountability across the VA API platform feedback loop.

Have A Strategy In Place For The VA API Feedback Loop
GitHub Issues and email are the two basic building blocks of any API platform, and I support the VA starting their official journey here. I think GitHub makes for an essential building block of any API platform, when used right. It just helps to have a plan in place for when a repo’s GitHub Issues are included in the overall feedback loop framework, and for the organization and prioritization of the conversation going on there. GitHub Issues spread across many different GitHub repositories, without any real strategy for how they are organized, tagged, and engaged with, can seem overwhelming, and become a mess. However, a little planning, and the establishment of even the most basic approach to managing them, can help develop a pretty robust feedback loop across the VA API platform, one that follows the lead of how open source software gets delivered.

Consider Other API Feedback Loop Building Blocks
I wanted to keep this post just about the basics of the feedback loop for the VA, or for any API platform–GitHub Issues, and email. However, I’d also like to suggest the consideration of some other building blocks, to help augment GitHub Issues, providing some other direct, and indirect approaches to operating the VA API platform feedback loop:

  • FAQ - Providing a frequently asked questions page that is an aggregate of all the questions that get asked across the GitHub issues, and via email (see the sketch after this list).
  • Newsletter - Providing a regular channel for updating platform stakeholders, via a structured email newsletter. Offering up private, and public editions, targeting different groups.
  • Road Map - Publishing a road map regarding what is getting built across all projects included within the VA API platform perimeter, aggregating GitHub Issues that evolve as part of the feedback loop and get tagged as milestones for adding to the road map.
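As a sketch of how that FAQ could be automated using the same GitHub API approach described above, anything tagged as a question could be aggregated into a single page–the repository and label names here are assumptions:

```python
import requests

# Pull every issue labeled "question" from a repository, as raw material
# for a frequently asked questions page. Repo and label are assumptions.
HEADERS = {"Authorization": "token YOUR_PERSONAL_ACCESS_TOKEN"}

issues = requests.get(
    "https://api.github.com/repos/department-of-veterans-affairs/vets-api-clients/issues",
    headers=HEADERS,
    params={"labels": "question", "state": "all"},
).json()

for issue in issues:
    print("Q:", issue["title"], "->", issue["html_url"])
```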

I’m always hesitant to make suggestions about where to go next, when an organization is just getting started on their API journey. However, I think the VA team knows when to ignore my advice, and when they can cherry pick the things they want to include in their strategy. I just want to make sure I provide as much constructive criticism about what is there, and constructive feedback around what can be invested in next.

Hopefully this post provides a basic overview of the VA API platform feedback loop. It expands on what they are already doing, and shines a light on some of the positive aspects of using GitHub for the VA API platform feedback loop. I was the one who worked with the former VA CIO Marina Martin (@MarinaNitze) to get the VA GitHub organization set up back in 2013. So it makes me happy to see it being used as a cornerstone of the VA API platform. I am happy to give feedback on how they can continue to put the powerful platform to such good use.


Remembering That APIs Are Used To Reduce Everything Down To A Transaction

This is our regular reminder that APIs are not good, nor bad, nor neutral. They are simply a tool in our technological toolbox, and something that is often used for very dark reasons, and occasionally for good. One of the most dangerous things I’ve found about APIs is just the general thought process that goes along with them, regarding how all roads lead to reducing, and distilling things down to a single transaction. APIs, REST, microservices, and other design patterns are all about taking something from our physical world, and redefining it as something that can be transmitted back and forth using the low cost request and response infrastructure of the web.

No matter what you are designing your API for, your mission is to reduce everything to a simple transaction that can be exchanged between your server, and any other system, web, mobile, device, or network application. This digital resource could be a photo of your kids, a message to your mother, the balance of your bank account, your personal thoughts going into your notebook, the latest song you listened to, your DNA, your test results for cancer, or any other piece of relevant data, content, media, object, or other resource that is being sent or received online. APIs are all about reducing all of our meaningful digital bits to the smallest possible transaction, and then daisy chaining them together to produce some desired set of results.

This API-ification of everything can be a good thing. It can make our lives better, but one of the negative side effects of reducing everything to a transaction is that the transaction can now be bought and sold. The digitization of everything in our lives is rarely ever about making our lives better, whatever the reasons we are told up front, and is almost always about reducing that little piece of our lives to a transaction that can be quantified, have a value placed on it, and then sold individually, or in bulk with millions of other transactions. As consumers of a digital reality, we rarely see the reasons why the things around us are being digitized, and API-ified so they can be transacted online, resulting in something we’ve heard a lot–that we are the product.

It’s easy to believe in the potential of APIs. It is easy to get caught up in the reducing of everyday things down to transactions. It takes discipline, and the ability to stop and consider the bigger picture on a regular basis, to avoid being stuck in the strong undercurrents of the API economy. Making sure we are regularly asking ourselves if we want this piece of our reality digitized and reduced to a transaction, and what the potential negative consequences might be of this element of our existence being a transaction. Thinking a little more deeply about how we’d feel if someone was buying and selling the digital bits of our life, and whether we are only ok with this as long as it is someone else’s bits and bytes–demonstrating that APIs are winning, and humanity is losing in this game we’ve developed online.


Why I Feel The Department Of Veterans Affairs API Effort Is So Significant

I have been working on API and open data efforts at the Department of Veterans Affairs (VA) for five years now. I’m passionate about pushing forward the API conversation at the federal agency because I want to see the agency deliver on its mission to take care of veterans. My father, and my step-father were both vets, and I lived through the fallout from my step-father’s two tours in Vietnam, exposure to the VA healthcare and benefits bureaucracy, and ultimately his passing away from cancer which he acquired from his exposure to Agent Orange. I truly want to see the VA streamline as many of its veteran facing programs as they possibly can.

I’ve been evangelizing for API change and leadership at the VA since I worked there in 2013. I’m regularly investing unpaid time to craft stories that help influence people I know who are working at the VA, and who are potentially reading my work. Resulting in posts like my response to the VA’s RFI for the Lighthouse API management platform, which included a round two response a few months later. Influence through storytelling is the most powerful tool I have in my API evangelist toolbox.

This Is An Amazon Web Services Opportunity
The most popular story on my blog is, “The Secret to Amazon’s Success–Internal APIs”. Which tells a story of the mythical transition of Amazon from an online commerce website to the cloud giant, who is now powering a significant portion of the web. The story is mostly fiction, but continues to be the top performing story on my blog six years later. I’ve heard endless conference talks about this subject, I’ve seen my own story framed on the wall in enterprise organizations in Europe and Australia, and as a feature link on internal IT portals. This is one of the most important stories we have in the API sector, and what is happening at the VA right now will become similar to the AWS story when we are talking about delivering government services a decade from now.

The VA Is Going All In On An API Vision
One of the reasons the VA will obtain the intended results from their API initiative is because they are going all in on APIs across the agency. The API effort isn’t just some sideshow going on in a single department or group. This API movement is being led out of the VA’s Digital Innovation center, but is being adopted, picked up, and moved forward by many different groups across the large government agency. When I did my landscape analysis for them, I scanned as much of their public digital presence as possible in a two week timeframe, and provided them with a list of targets to go after. I see the scope of the results obtained from the VA landscape analysis present in the APIs I’m seeing published to their portal, and in development by different groups, revealing the beginnings of an agency-wide API journey.

The Use Of developer.va.gov Demonstrates The Scope
One way you can tell that the VA is going all in on an API vision is their usage of the developer.va.gov subdomain. This may seem like a trivial thing, but after looking at thousands of API operations, and monitoring some of them for eight years, the companies, organizations, institutions, and government agencies that dedicate a subdomain to their API programs are always more committed to them, and invest the appropriate amount of resources needed to be successful. These API leaders always stand out from the organizations that publish their API efforts as an entry in their help center or knowledge-base, or make it just a footnote in their online presence. The use of the developer.va.gov subdomain demonstrates the scope of investment going on over at the VA, in my experience.

The VA Is Properly Funding Their API Efforts
One of the most common challenges I see API teams face is the lack of resources to do what they need to do. API teams that can’t afford to deliver across multiple stops along the API lifecycle end up cutting corners on testing, monitoring, security, documentation, and other common building blocks of a successful API platform. Properly funding an API initiative, and making it a first class citizen within the enterprise, is essential to success. The number one reason an API platform gets rendered ineffective is a lack of resources to properly deliver, evangelize, and scale API operations. This condition often leaves API programs unable to effectively spread across large organizations, properly reach out to partners, and generate the attention a public API program will need to be successful. From what I’ve seen so far, the VA is properly funding the expansion of the API vision at the agency, and will continue to do so for the foreseeable future.

The VA Is Providing Access To Meaningful API Resources
I’ve seen thousands of APIs get launched. Large enterprises always start with the safest resources possible. Learning by delivering resources that won’t cause any waves across the organization, which can be a good thing, but after a while, it is important that the resources you put forth do cause waves, and make change across an organization. The VA started with simple APIs like the VA Facilities API, but is quickly shifting gears into benefits, veteran validation, health, and other APIs that are centered around the veteran. I’m seeing the development of APIs that provide a full stack of resources that touch on every aspect of the veteran’s engagement with the VA. In order to see the intended results from the VA API efforts, they need to be delivering meaningful API resources that truly make an impact on the veteran. From what I’m seeing so far, the VA is getting right at the heart of it, and delivering the useful API resources that will be needed across the web, mobile, and device based applications that are serving veterans today.

There Is Transparency And Storytelling
Every one of my engagements with the VA this year has ended up on my blog. One of the reasons I stopped working within the VA back in 2013 was there were too many questions about being able to publish stories on my blog. I haven’t seen such questions of my work this year, and I’m seeing the same tone being struck across other internal and vendor efforts. The current API movement at the VA understands the significance of transparency, observability, and of doing much of the API work at the VA out in the open. Sure, there is still the privacy and security apparatus that exists across the federal government, but I can see into what is happening with this API movement from the outside-in. I’m seeing the right amount of storytelling occurring around this effort, which will continue to sell the API vision internally to other groups, laterally to other federal agencies, and outwards to software vendors and contractors, as well as sufficiently involving the public throughout the journey.

Evolving The Way Things Get Done With Micro-Procurement
Two of the projects I’ve done with the VA have been micro-procurement engagements: 1) VA API Landscape Analysis, and 2) VA API Governance Models In The Public And Private Sector. Both of these projects were openly published to GitHub, opening up the projects beyond the usual government pool of contractors, then were awarded and delivered within two-week sprints for less than $10,000.00. Demonstrating that the VA is adopting an API approach to not just changing the technical side of delivering services, but also working to address the business side of the equation. While still a very small portion of what will be needed to get things done at the VA, it reflects an important change in how technical projects can be delivered at the VA. Working to decompose and decouple not just the technology of delivering APIs at the VA, but also the business, and potentially the internal and vendor politics, of delivering services at scale across the VA.

The VA Has Been Doing Their API Homework
As of the last couple of months, the VA is shifting their efforts into high gear with an API management solicitation, as well as an API development and operations solicitation, to help invest in, and build capacity across the agency. However, before these solicitations were crafted, the VA did some serious homework. You can see this reflected in the RFI effort which started in 2017, and continued in 2018. You can see this reflected in the micro-procurement contracts that have been executed, and are in progress as we speak. I’ve seen a solid year of the VA doing their homework before moving forward, but once they’ve started moving forward, they’ve managed to shift gears rapidly because of the homework they’ve done to date.

Investing In An API Lifecycle And Governance
I am actively contributing to, and tuning into the API strategy building going on at the VA, and I’m seeing investment into a comprehensive approach to delivering all APIs in a consistent way across a modern API lifecycle. Addressing API design, mocking, deployment, orchestration, management, documentation, monitoring, and testing in a consistent way using an OpenAPI 3.0 contract. Something that is not just allowing them to reliably deliver APIs consistently across different groups and vendors, but is also allowing them to develop a comprehensive API governance strategy to measure, report upon, and evolve their API lifecycle and strategy over time. Dialing in how they deliver services across the VA, by leveraging the development, management, and operational level capacity they are bringing to the table with the solicitations referenced above. This approach demonstrates the scope in which the VA API leadership understands what will be necessary to transform the way the VA delivers services across the massive federal agency.

Providing An API Blueprint For Other Agencies
What the VA is doing is poised to change the way the VA meets its mission. However, because it is being done in such a transparent and observable way, with every stop along the lifecycle being so well documented and repeatable, they are also developing an API blueprint that other federal agencies can follow. There are other healthy API blueprints to follow across the federal government, out of Census, Labor, NASA, CFPB, FDA, and others, but there is not an agency-wide, API definition driven, full lifecycle blueprint, complete with governance, available at the moment. The VA API initiative has the potential to be the blueprint for API change across the federal government, becoming the Amazon Web Services story that we’ll be referencing for decades to come. All eyes are on the VA right now, because their API efforts reflect an important change at the agency, but also an important change in how the federal government delivers services to its people.

I am all in when it comes to supporting APIs at the VA. As I mentioned earlier, my primary motivation is rooted in my own experiences with the VA system, and helping it better serve veterans. My secondary motivation is all about contributing to, and playing a role in, the implementation of one of the most significant API platforms out there, which if done right, will change how our federal government works. I’m not trying to be hyperbolic in my storytelling around the VA API platform, I truly believe that we can do this. As always, I am working to be as honest as I can about the challenges we face, and I know that the API journey is always filled with twists and turns, but with the right amount of observability, I believe the VA API platform can deliver on the vision being set by leadership at the agency, which is why I find this work to be so significant.


Understanding Where Folks Are Coming From When They Say API Management Is Irrelevant

I am always fascinated when I get push back from people about API management, the authentication, service composition, logging, analysis, and billing layer of the world of APIs. I seem to be finding more people who are skeptical that it is even necessary anymore, and feel that it is a relic of the past. When I first started coming across people making these claims earlier this year I was confused, but as I’ve pondered the subject more, I’m convinced their position is more about the world of venture capital, and investment in APIs, than it is about APIs.

People who feel like you do not need to measure the value being exchanged at the API layer aren’t considering the technology or business of delivering APIs. They are simply focused on the investment cycles that are occurring, and see API management as something that has been done, is baked into the cloud, and really isn’t central to API-driven businesses. They perceive the age of API management as being over, something the cloud giants are doing now, thus it isn’t needed. I feel like this is a symptom of tech startup culture being so closely aligned with investment cycles, and the road map being about investment size and opportunity, and rarely about the actual delivery of the thing that brings value to companies, organizations, institutions, and government agencies.

I feel this perception is primarily rooted in the influence of investors, but it is also based upon a limited understanding of API management, and seeing APIs as being about delivering public APIs, maybe complementing a SaaS offering, with free, pro, and enterprise tiers of access. When in reality API management is about measuring, quantifying, reporting upon, and in some cases billing for the value that is exchanged at the system integration, web, mobile, device, and network application levels. However, to think API operators shouldn’t be measuring, quantifying, reporting, and generating revenue from the digital bits being exchanged behind ALL applications just demonstrates a narrow view of the landscape.

It took me a few months to be able to see the evolution of API management from 2006 to 2016 through the eyes of an investment minded individual. Once the last round of consolidation occurred, Apigee IPO’d, and API management became baked into Amazon, Google, and Azure, it fell off the radar for these folks. It’s just not a thing anymore. This is just one example of how investment influences the startup road map, as well as the type of thinking that goes on amongst investor influence, painting an entirely different picture of the landscape than what I see going on. Helping me understand more about where this narrative originates, and why it gets picked up and perpetuated within certain circles.

To counter this view of the landscape, from 2006 to 2016 I saw a very different picture. I didn’t just see the evolution of Mashery, Apigee, and 3Scale as acquisition targets, and cloud consolidation. I also saw the awareness that API management brings to the average API provider. Providing visibility into the pipes behind the web, mobile, device, and network applications we are depending on to do business. I’m seeing municipal, state, and federal government agencies waking up to the value of the data, content, and algorithms they possess, and the potential for generating much needed revenue off commercial access to these resources. I’m working with large enterprise groups to manage their APIs using 3Scale, Apigee, Kong, Tyk, Mulesoft, Axway, and AWS API Gateway solutions.

Do not worry. Authenticating, measuring, logging, reporting, and billing against the value flowing through our API pipes isn’t going anywhere. Yes it is baked into the cloud. After a decade of evolution, it definitely isn’t the early days of investing in API startups. But, API management is still a cornerstone of the API life cycle. I’d say that API definitions, design, and deployment are beginning to take some of the spotlight, but authentication, service composition, logging, metrics, analysis, and billing will remain an essential ingredient when it comes to delivering APIs of any shape or size. If you study this layer of the API economy long enough, you will even see some emerging investment opportunities at the API management layer, but you have to be looking through the right lens, otherwise you might miss some important details.


API Portals Designed For API Provider And API Consumers

I’ve been working with a couple of organizations who are struggling with providing information within their API developer portal intended for API publishers, pushing their API portal beyond just being for their API consumers. Some of the folks I’ve been working with haven’t thought about their API developer portals being for both API publishers and consumers, and asked me to weigh in on the pros and cons of doing this. Helping them understand how they can continue their journey towards not just being an API platform, but also an API marketplace.

Some of the conversations we were having about providing API lifecycle materials to API developers, helping them deliver APIs consistently across all stops along the lifecycle, focused on creating a separate portal for API publishers, decoupling them from where the APIs would be discovered and consumed. After some discussion and consideration, it feels like it would be an unnecessary disconnect to have API publishers going to a different location than where their APIs would end up being discovered, and integrated with. Having them actively involved in the publishing, presentation, and even the support of, and engagement with consumers, would benefit everyone involved.

Think of it being like Rapid API, but for a large company, organization, institution, or government agency. You can find APIs, and integrate with existing APIs, or you can also become an API publisher, and be someone who helps publish and manage APIs as well. You will have one account, where you can find documentation, usage information, and other resources for the APIs you consume, but you will also access information, and usage data, on the APIs you’ve published. Pushing API developers within an organization to actively think about both sides of the API coin, and learn how to be both provider and consumer. Helping add to the catalog of APIs, but also helping evolve and grow the army of API people across an organization.

We still have a lot of work ahead of us when it comes to fleshing out what type of information we should provide to API publishers, and how to cleanly separate the two worlds, but I feel the realization that a portal can be both for API publishers and consumers was an important one for these groups. I feel like it represents a milestone in the maturity and growth of their API programs, where the API developer portal has grown into something that everyone should be tuning into, and participating in. It isn’t just something that a single team, or a handful of individuals, manage–it has become a group effort. Sharing the load of operating the API portal, and keeping things up to date and active, further contributes to the potential success of the platform, shielding it from becoming yet another web service or API catalog that gets forgotten about on the network.


Trying To Define An Algorithm For My AWS API Cost Calculations Across API Gateway, Lambda, And RDS

I am trying to develop a base API pricing formula for determining what my hard costs are for each API I’m publishing using Amazon RDS, EC2, and AWS API Gateway. I also have some APIs I am deploying using Amazon RDS, Lambda, and AWS API Gateway, but for now I want to get a default base for determining what operating my APIs will cost me, so I can mark up and reliably generate profit on top of the APIs I’m making available to my partners. AWS has all the data for me to figure out my hard costs, I just need a formula that helps me accurately determine what my AWS bill will be per API.

Math isn’t one of my strengths, so I’m going to have to break this down, and simmer on things for a while, before I will be able to come up with some sort of working formula. Here are the hard costs for the AWS resources behind the three APIs I currently have running in this stack:

  • AWS RDS - I am running a db.r3.large instance which costs me $0.29 per hour, or $211.70 per month, with the bandwidth free to my Amazon EC2 instances in the same availability zone. I do not have any public access, so I don’t have any incoming or outgoing traffic, except from the EC2 instance.
  • AWS EC2 - I am running a t2.large instance which costs me $0.0928 per hour or $67.74 per month with bandwidth out costing me $0.155 per GB. I’m curious if they have an Amazon EC2 to AWS API Gateway data consideration? I couldn’t find anything currently.
  • AWS API Gateway - Overall, using AWS API Gateway costs me $3.50 per million API calls received, with the first 1GB of data transfer out costing me $0.00/GB, and the next 9.999 TB costing me $0.09/GB.

Across these three APIs, I am seeing an average of 5KB per response, which is an important variable to use in these calculations. The AWS calculator helps me figure out my monthly bill across services, but it doesn’t help me break down what my hard costs are per API call. I need to establish a flat rate of what it costs for a single API call to exist. Each API will be in its own plan, so I can charge different rates for different APIs, but I need a baseline that I start with for each API call to make sure I’m covering my hard AWS costs. Sure, the more API calls I make, the more profitable I’ll be, but at some point I’ll also have to scale the infrastructure to keep a certain quality of service–there are a number of things to consider here.
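Here is a rough sketch of the arithmetic as I currently understand it, spreading the fixed RDS and EC2 costs across an assumed monthly call volume, and adding the metered AWS API Gateway costs on top–the call volume is the big assumption here:

```python
# A rough sketch of a per-call hard cost baseline, using the AWS numbers
# above. The monthly call volume is an assumption used to spread the
# fixed infrastructure costs across each API call.

RDS_MONTHLY = 211.70        # db.r3.large at $0.29 per hour
EC2_MONTHLY = 67.74         # t2.large at $0.0928 per hour
GATEWAY_PER_MILLION = 3.50  # AWS API Gateway per million calls received
BANDWIDTH_PER_GB = 0.09     # data transfer out, after the first free 1GB
RESPONSE_KB = 5             # average response size observed

def base_cost_per_call(monthly_calls):
    fixed = (RDS_MONTHLY + EC2_MONTHLY) / monthly_calls
    gateway = GATEWAY_PER_MILLION / 1000000.0
    bandwidth = (RESPONSE_KB / 1048576.0) * BANDWIDTH_PER_GB  # KB to GB
    return fixed + gateway + bandwidth

# At one million calls a month, the fixed costs dominate:
print(round(base_cost_per_call(1000000), 6))  # ~$0.000283 per call
```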

I envision my API pricing having the following components:

  • Base - A base cost to cover my AWS bill, considering AWS RDS, EC2, and API Gateway hard costs.
  • Resource - A price covering my investment in the resource. The work of finding, cleaning up, refining, and evolving it.
  • Markup - The percentage of markup for each API call, allowing me to generate a profit from the resources I’m providing.
  • Partner - Provide a volume discount, charging light users more, and giving a break to my partners who are consuming larger volumes.
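Pulling those components together, a minimal pricing sketch might look something like this–every rate here is an assumption to be tested against the market over time:

```python
# A minimal sketch composing a per-call price from the components above.
# All of the rates are assumptions, placeholders to be tuned over time.

def price_per_call(base, resource=0.0001, markup=0.20, partner_discount=0.0):
    # base: hard AWS cost per call (see the calculation above)
    # resource: surcharge covering the investment in the resource itself
    # markup: the percentage of profit on top of all costs
    # partner_discount: volume discount for heavier partner consumers
    price = (base + resource) * (1 + markup)
    return price * (1 - partner_discount)

print(round(price_per_call(0.000283), 6))                         # light users
print(round(price_per_call(0.000283, partner_discount=0.25), 6))  # partners
```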

I’m studying other pricing models from the telco, and software hosting spaces. I’ll also be doing some landscape analysis to see what people are charging for comparable API resources. I possess a wealth of data on what API providers and service providers are charging for their services. The trick will be finding comparable services to what I’m offering, and for the unique resources I possess, I’m going to have to be able to set my own price, and then test out my assumptions, and formula over time–until I find the sweet spot for covering my hard costs, and generating profit at some point from a specific service I’m offering.

If you have any advice for me, help on the math side of things, or examples from other industries, I’d love to hear more. I’ll be open sourcing and sharing everything I figure out, and telling the story of how it is being applied to each API I am publishing. I want the history to be present for each of my APIs, adding to my wider API monetization and API planning research. In the end, I don’t think there is a perfect answer to what the pricing for an API should be. The best path forward involves covering your hard costs, and then experimenting over time to see what the market will bear. This is why AWS has gotten so good at doing this for the cloud, because they have been doing this work for over a decade now. I am sure they have a lot of data, as well as experience understanding how to price API resources so they are competitive, disruptive, and profitable.


Reviewing The Department Of Veterans Affairs New Developer Portal

I wanted to take a moment and review the Department of Veterans Affairs (VA) new developer portal. Spending some time considering how far they’ve come, what they’ve published so far, and brainstorming on what the future might hold. Let me open by saying that I am working directly and indirectly with the VA on a variety of paid projects, but I’m not being paid to write about their API effort–that is something I’ve done since I worked there in 2013. I will craft a disclosure to this effect that I put at the bottom of each story I write about the VA, but I wanted to put it out there in general, as I work through my thoughts on what is happening over at the VA.

The VA Has Come A Long Way
I wanted to open up this review with a nod towards how far the VA has come. There have been other publicly available APIs at the VA, as well as a variety of open data efforts, but the VA is clearly going all in on APIs with this wave. The temperature at the VA in 2013 when it came to APIs was lukewarm at best. With the activity I’m seeing at the moment, I can tell that the temperature of the water has gone way up. Sure, the agency still has a long way to go, but from what I can tell, the leadership is committed to the agency’s API journey–something I have not seen before.

Developer.VA.Gov Sends The Right Signal
It may not seem like much, but providing a public API portal at developer.va.gov sends a strong signal that the VA is seriously investing in their API effort. I see a lot of API programs, and companies who have a dedicated domain, or subdomain for their API operations are always more committed than people who make it just a subsection of their existing website, or existing as a help entry in a knowledge-base. It is important for federal agencies to have their own developer.[domain].gov portal that is actively maintained–which will be the next test for the VA’s resolve, keeping the portal active and growing.

The General Look And Feel Of The Portal
I like the minimalist look of the VA developer portal. It is simple. Easy on the eyes. I feel like the “site is currently under development” notice is unnecessary, because this should never cease to be true. I like the “an official website of the United States government” banner, it is clean, and official looking. I’m happy to see the “Get help from Veterans Crisis Line” link, which is something that should be on any page with services, data, or content for veterans. I like the flexible messaging area (FMA), where it says “Put VA Data to Work”. I’d like to see this section become an evolving FMA, with a wealth of messages rolling through it, educating the ecosystem about what is happening across the VA developer platform at any given moment.

Getting Started And Learning More
The “learn more about VA APIs” link off the FMA area on the home page drops me into the benefits API overview, which happens to be the first category of APIs on the documentation page. I recommend isolating this to its own “getting started” page, which provides an overview of how to get started across all APIs. Providing background on the VA developer program, as well as the other building blocks of getting started with APIs, like requesting access, studying the API documentation, and the path to production for any application you are developing. The getting started page for the VA developer portal should be a first class citizen, with its own page, and a logical presentation of exactly the building blocks developers will need to get started–then they can move on to the documentation across all the API categories.

There Are Valuable APIs Available
Once you actually begin looking at the API documentation available within the VA developer portal, you realize that there are truly valuable APIs available in there. Don’t get me wrong, the Arlington National Cemetery API, which has been the only publicly available API from the VA for several years, is important, but when I think about VA APIs, I’m looking for resources that make a meaningful and significant impact on a vet’s life today:

  • Benefits Intake - Veterans Benefits Administration (VBA) document uploads.
  • Appeals Status - Enables approved organizations to submit benefits-related PDFs and access information on a Veteran’s behalf.
  • Facilities API - Use the VA Facility API to find relevant information about a specific VA facility. For each VA facility, you’ll find contact information, location, hours of operation and available services.
  • Veterans Health API - [There is no concise description for this API, and what is there needs some serious taming, and pulling out as part of the portal.]
  • Veteran Verification - We are working to give Veterans control of their information – their service history, Veteran status, discharge information, and more – and letting them control who sees it.

One minor thing, but one that will significantly contribute to the storytelling around VA APIs, is the consistent naming of APIs. Notice that only two of them have API in the title. I’m really not advocating for API to be in the title or not in the title. I am really advocating for consistency in naming, so that storytelling around them flows better. I lean towards using API in each title, just so that their titles in the documentation, the OpenAPI contract behind them, and everywhere else these APIs travel, are consistent, meaningful, and explain what is available.

I like the organizing of APIs into the three categories of benefits, facilities, and health. I’d say veteran verification is a little out of place. Maybe have a veteran category, and it is the first entry? IDK. I’m thinking there needs to be a little planning for the future, and what constitutes a category, and some guidance on how things are defined, broken down, and categorized, so that there is some thought put into the API catalog as it grows. A little separation of concerns in categorization, that can maybe begin to contribute to the overall microservices strategy across the VA.

The Path To Production For Alpha API Clients
I like the path to production information for alpha API clients, but I felt like it should be its own dedicated page, as one of the building blocks of the getting started section. However, once I started scrutinizing, it seemed like it was a separate process potentially for each individual API or category of APIs. If it can be a standalone item, I’d make it one, and link to it from each individual API category, or individual API. If it can’t be, I’d figure out how to make it an expandable section, or subsection of the docs. It isn’t something I want to have to scroll through when working with an API and its documentation. Sure, I want to be aware of it, and be able to understand it as part of on-boarding, and revisit it at a later date, but I don’t need it to be part of the core documentation page–I just want to get at the interactive API documentation.

Self-Service Signup
The process for signing up seems smooth. I haven’t been approved for access yet, and the review process makes a lot of sense. I’ll invest time in a separate story taking a look at the signup and on-boarding process, as well as the path to production flow for API clients, but I wanted to lightly reference it as part of this review. I’d say the one confusing piece was leaving the website for the signup form, and then being dropped back at https://www.oit.va.gov/developer/ without much of any information about what was happening. It was a little jarring, and the overall flow and process needs some smoothing out. I get that we are just getting started here, so I’m not too worried about this–I just wanted to make a note of it.

The Essentials Are There
Overall, the essentials are present in the VA developer area. It is a great start, and has the makings of a good API developer portal. It just isn’t very mature, and you can tell things are just getting started. You can sign up, get at API documentation, and understand what it takes to build an application on top of VA API resources. Adding to, refining, and further polishing what is there will take time, so I do not want to be too critical of what the VA has published–it is a much better start than I’ve seen out of other federal agencies.

There are some other random items I’d like to reference, as well as brainstorm a little on what I’d like to see invested in next, helping ensure the VA API portal provides what is needed for developers to be successful:

  • Terms of Service - It is good to see the basics of the TOS, and privacy policy there. I’d like to see more about privacy, security, and service level agreements (SLA).
  • Use Of GitHub Issues - I like the use of GitHub issues to request production access, and think it is a healthy use of GitHub issue forms, and something that brings observability to the on-boarding process across the community.
  • Email Support - Beyond using GitHub issues for support and on-boarding, I see [email protected] a lot across the site. I get why you want to be at the center of things, but email support should be made generic, and enable group ownership of the email support workflow.
  • Road Map - I’d like to see a roadmap of what is being planned, as well as a change log for what has already been accomplished.
  • Frequently Asked Questions - I’d like to see an FAQ page, with questions and answers broken down by category, allowing me to browse some of the common questions as I’m getting up to speed.
  • Code Samples & SDKs - I’d like to see some more code samples in a variety of programming languages, either baked into the interactive documentation, or available on its own SDK / Code Libraries pages. I get if the VA doesn’t want to get in the business of doing this, but with an OpenAPI core, there are more options available out there to generate code samples, libraries, and SDKs. I think this vets API client effort needs to be pulled onto a code samples and SDKs page, and there be more investment in projects like this.
  • OpenAPI Download - I’d like to see a clear icon or button for downloading the OpenAPI 3.0 contract for each of the available APIs. I want to be able to use the OpenAPI definition for more than just documentation.
  • OpenAPI 3.0 - I’m very happy to see OpenAPI 3.0 powering the API documentation, which I think is a little detail that shows the VA API team has been doing their homework.
  • Postman Collections - I’d like to see a Run in Postman button at the top right corner of each API’s documentation, allowing me to quickly load up each API into my Postman API integration and development environment.
  • Communications - I’d like to see a blog, a Twitter account, and emphasis on the VA Github account. As an API consumer I’d like to see that someone is home besides [email protected], and be able to have regular asynchronous conversations, and maybe engage synchronously during API office hours, or in some other format.

I’ll stop there. I have countless more ideas of what I’d like to see in the VA developer portal, but I’m just happy to see such a clean, informative portal up and running at developer.va.gov. I’m stoked they are working on delivering real APIs that offer value to the Veteran–that is why we are doing all of this, right? I’m curious to learn about what other APIs already exist and can just be hung within this portal, as well as what APIs are planned for the immediate road map. While there are still missing parts and pieces, what they’ve published is a damn fine start to the VA developer program.

Next, I’m going to do a deep dive into what I’d like to see in the getting started page, as well as the path to production guidance. I’d also like to do some deep thinking on the production application (regular) check-in and review processes. I have a short list of concepts I want to flesh out, and questions I would like to answer in future posts. I just wanted to make sure I took a moment to review the VA’s hard work on their new developer portal. The publishing of their developer portal marks a significant milestone in the agency’s API journey, the spot where their API platform begins to shift into a higher gear.


Not All Companies Are Interested In The Interoperability That APIs Bring

I’ve learned a lot in eight years of operating API Evangelist. One of the most important things I’ve learned to do is separate my personal belief and interest in technology from the realities of the business world. I’ve learned that not all businesses are ready for the change that doing APIs brings, and that many businesses really aren’t interested in the interoperability, portability, and observability that APIs bring to the table. Despite what I may believe, APIs in the real world often have a very different outcome.

I see the potential of having a public API developer portal where you publish all the digital resources your company offers. Providing self-service registration to access these digital resources at a fair, transparent, pay-for-what-you-use pricing model. I get what this can do for companies when it comes to attracting developer talent to help deliver the applications that are needed for any platform to thrive. I’ve seen the benefits to the end-users of these applications when it comes to giving them control over their data, the ability to leverage 3rd party applications, while also better understanding, managing, and ultimately owning the digital resources they generate each day. I also regularly see how this all can be a serious threat to how some businesses operate, and work to reveal the illnesses that exist within many businesses, and the shady things that occur behind the firewall each day.

I regularly see businesses pay lip service to the concept of APIs, but in reality, they are more about locking things up, and slowing things down to their benefit, instead of opening up access, and streamlining anything. I’m not saying that businesses do this by default, or are always being led from the top down to behave this way. I am saying it gets baked into the fabric of how teams, groups, and individuals behave as cells in the overall organizational organism. These cells learn to resist, fight back, and appear like they are on board with this latest wave of how we deliver technology, but in reality, they are not interested in the interoperability that APIs bring to the table. There is just too much power, control, and revenue to be generated by locking things up, slowing things down, and making things hard to get.

After eight years of doing this, plus another 22 years of working in the industry, I’m always skeptical of people’s motivations behind doing APIs. Why do you think this resource is important enough to make accessible? Who will get access to this resource? What is the price of this resource? Is pricing observable across all tiers of access? Can we talk about your SLA? Can we talk about your road map? Why are you doing APIs? Who do they benefit? There are so many questions to be asked when getting at the soul of each company’s API efforts. Before you can truly understand if a company is genuinely interested in the interoperability that APIs bring to the table. Before you can begin to understand what their API journey will involve. Before you understand whether or not you want to do business with a company using their API, and make it something you bake into your own operations and applications.

I write about this only to remind myself that some companies will have other plans. I write about this to remind myself to ask the hard questions of all the organizations I’m engaging with, all along the way. I tend to default to a belief that most people are straight up, and share their real intentions, yet I need a regular reminder that this really isn’t true. Most successful businesses are doing aggressive, shady, and manipulative things to get ahead. The notion that if you create the best product and run a smart business you’ll win is a myth. I’m not saying it doesn’t happen, or can’t happen, I am saying it isn’t the normal mode of the business world–despite popular belief. This is all a reminder that just because a business has APIs, it doesn’t mean their belief system around doing APIs reflects my vision, or the popular API community vision, of what doing APIs is all about.


Helping The Federal Government Get In Tune With Their API Uptime And Availability

Nobody likes to be told that their APIs are unreliable, and unavailable on a regular basis. However, it is one of those pills that ALL API providers have to swallow, and EVERY API provider should be paying for an external monitoring service to tell us when our APIs are up or down. Having a monitoring service that tells us when our APIs are having problems, complete with a status dashboard and a history of our API's availability, is an essential building block for any API provider. If you expect consumers to use your API, and bake it into their systems and applications, you should commit to a certain level of availability, and offer a service level agreement if possible.

My friends over at APImetrics monitor APIs across multiple industries, but we've been partnering to keep an eye on federal government APIs, in support of my work in DC. They've recently [shared an informative dashboard tracking the performance of federal government APIs](https://apimetrics.io/us-government-api-performance-dashboard/), providing an interesting view of the government API landscape, and the overall reliability of the APIs they provide.

They continue by breaking down the performance of federal government APIs, including how the APIs perform across multiple North American regions on four of the leading cloud providers.

They also help us visualize the availability of federal government APIs over the last seven days, applying their APImetrics CASC score to each of the federal government APIs, and ranking their overall uptime and availability.

I know it sucks being labeled as one of the worst performing APIs, but you also have the opportunity to be named one of the best performing APIs. ;-) This is a subject that many private sector companies struggle with, and the federal government has an extremely poor track record for monitoring their APIs, let alone sharing the information publicly. Facing up to this stuff sucks, and you are forced to answer some difficult questions about your operations, but it is also something that can't be ignored away when you have a public API.

You can [view the US Government API Performance Dashboard for July 2018 over at APImetrics](https://apimetrics.io/us-government-api-performance-dashboard/). If you work for any of these agencies and would like to have a conversation about your API monitoring, testing, and performance strategy, I am happy to talk. I know the APImetrics team are happy to help too, so don't stay in denial about your API performance and availability. Don't be embarrassed. Tackle the problem head on, improve your overall quality of service, and then having an API monitoring and performance dashboard publicly available like this won't hurt nearly as much--it will just be a normal part of operating an API that anyone can depend on.


Provide Your API Developers With A Forkable Example of API Documentation In Action

I wrote the other day about how teams should be documenting their APIs when they have both legacy and new APIs. I wanted to keep the conversation thread going with an example of one possible API documentation implementation. The best way to deliver API documentation guidance in any organization is to provide a forkable, downloadable example of whatever you are talking about. To help illustrate what I am talking about, I wanted to take one documentation solution, and publish it as a GitHub repository.

I chose to go with a simple OpenAPI 3.0 defined API contract, driving Swagger UI API documentation, hosted using GitHub Pages, and managed as a GitHub repository. In my story about how teams should be documenting their APIs, I provided several API definition formats, and API documentation options–for this walk-through I wanted to narrow it down to a single combination, providing the minimum(alist) viable option possible using OpenAPI 3.0 and Swagger UI. Of course, any federal agency implementing such a solution should wrap the documentation with their own branding, similar to the City Pairs API prototype out of GSA, which originated over at CFPB.

I used the VA Facilities API definition from the developer.va.gov portal for this sample. Mostly because it was ready to go, and relevant to the VA efforts, but also because it was using OpenAPI 3.0–I think it is worth making sure all API documentation moving forward is supporting the latest version of OpenAPI. The API documentation is here, the OpenAPI definition is here, and the Github repository is here, showing what is possible. There are plenty of other things I’d like to see in a baseline API documentation template, but this provides a good first draft for a true minimum viable definition.
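To make this tangible, here is a minimal sketch of the kind of index.html that can power this setup on GitHub Pages. The CDN URLs and the openapi.yaml filename are my own assumptions for illustration, not necessarily what the repository actually uses:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>VA Facilities API Documentation</title>
    <!-- Swagger UI assets pulled from a public CDN (assumed; they can also be vendored into the repository) -->
    <link rel="stylesheet" href="https://unpkg.com/swagger-ui-dist/swagger-ui.css">
  </head>
  <body>
    <div id="swagger-ui"></div>
    <script src="https://unpkg.com/swagger-ui-dist/swagger-ui-bundle.js"></script>
    <script>
      // Point Swagger UI at the OpenAPI 3.0 contract kept in the same repository.
      // "openapi.yaml" is a hypothetical filename--use whatever the contract is actually called.
      window.onload = function () {
        SwaggerUIBundle({
          url: "openapi.yaml",
          dom_id: "#swagger-ui"
        });
      };
    </script>
  </body>
</html>
```

Commit something like that alongside the OpenAPI definition, flip on GitHub Pages for the repository, and you have forkable, auto-generated API documentation with zero build tooling.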

The goal with this project is to provide a basic seed that any team could use. Next, I will add in some other building blocks, and implement ReDoc, DapperDox, and WSDLDoc versions. That will provide four separate documentation examples that developers can fork and use to document the APIs they are working on. In my opinion, one or more API documentation templates like this should be available for teams to fork or download and implement within any organization. All API governance guidance like this should have text describing the policy, as well as one or many examples of the policy being delivered. Hopefully this project shows an example of that in action.


How Do We Get API Developers To Follow The Minimum Viable API Documentation Guidance?

After providing some guidance the other day on how teams should be documenting their APIs, one of the follow up comments was: “Now we just have to figure out how to get the developers to follow the guidance!” Something that any API leadership and governance team is going to face as they work to implement new policies across their organization. You can craft the best possible guidance for API design, deployment, management, and documentation, but it doesn’t mean anyone is actually going to follow your guidance.

Moving forward API governance within any organization means working on the cultural frontline of API operations. Getting teams to learn about, understand, and implement sensible API practices is always easier said than done. You may think your vision of the organization’s API future is the right one, but getting other internal groups to buy into that vision will take a significant amount of work. It is something that will take time and resources, and it will always be shifting and evolving over time.

Lead By Example
The best way to get developers to follow the minimum viable API documentation guidance being set forth is to do the work for them. Provide templates and blueprints of what you want them to do. Develop, provide, and evolve forkable and downloadable API documentation examples, with simple README checklists of what is expected of them. I’ve published a simple example using the VA Facilities API definition, published as OpenAPI 3.0 and Swagger UI to Github Pages, with the entire thing forkable via the Github repository. It is a very bare bones example of providing API documentation guidance in a package that can be reused, giving API developers a working example of what is expected of them.

Make It A Group Effort
To help get API developers on board with the minimum viable API documentation guidance being set forth, I recommend making it a group effort. Recruit help from developers to improve upon the API documentation templates provided, and encourage them to extend, evolve, and push forward their individual API documentation implementations. Give API developers a stake in how you define governance for API documentation–not everyone will be up for the task, but you’d be surprised who will raise their hand to contribute if they are asked.

Provide Incentive Model
This is something that will vary in effectiveness from organization to organization, but consider offering a reward, benefit, perk, or some other incentive to any group who adopts the API documentation guidance. Provide them with praise, and showcase their work. Bring exposure to their work with leadership, and across other groups. Brainstorm creative ways of incentivizing development groups to get more involved. Establish a list of all development groups, track ideas for incentivizing their participation and adoption, and work regularly to close them on playing an active role in moving forward your organization’s API documentation strategy.

Punish And Shame Others
As a last resort, for the more badly behaved groups within our organizations, consider punishing and shaming them for not following API documentation guidance, and not contributing to overall API governance efforts. This is definitely not something you should consider doing lightly, and it should only be used in special cases, but sometimes teams will need smaller or larger punitive responses to their inaction. Ideally, teams are influenced by praise, and positive examples of why API documentation standards matter, but there will always be situations where teams won’t get on board with the wider organizational API governance efforts, and need their knuckles rapped.

Making Meaningful Change Is Hard
It will not be easy to implement consistent API documentation across any large organization. However, API documentation is often one of the most important stops along the API lifecycle, and should receive significant investment when it comes to API governance efforts. In most situations, doing the work for developers, and providing them with blueprints to be successful, will accomplish the goal of getting API developers all using a common approach to API documentation. Like any other stop along the API lifecycle, delivering consistent API documentation across distributed teams will take having a coherent strategy, with regular tactical investment to move everything forward in a meaningful way. However, once you get your API documentation house in order, many other stops along the API lifecycle will also begin to fall in line.


Do Not Miss Internal Developer Portals: Developer Engagement Behind the Firewall by Kristof Van Tomme (@kvantomme), Pronovix (@pronovix) At @APIStrat in Nashville, TN This September 24th-26th

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Do Not Miss Internal Developer Portals: Developer Engagement Behind the Firewall” by Kristof Van Tomme (@kvantomme), Pronovix (@pronovix) on September 25th.

Here is Kristof’s abstract for the API session:

While there are a lot of talks and blogposts about APIs and the importance of an APIs Developer eXperience, most are about public API products. And while a lot of the best practices for API products are also applicable to private APIs, there are significant differences in the circumstances and trade-offs they need to make. The most important difference is probably in their budgets: as potential profit centers, API products can afford to invest a lot more money in documentation and UX driven developer portal improvements. Internal APIs rarely have that luxury.

In this talk I will explain the differences between public and private APIs, introduce upstream DX, and explain how it can improve downstream DX. Introduce experience design (a.k.a. gamification) and Innersourcing (open sourcing practices behind the firewall) and describe how they could be used on internal developer portals.

Kristof is an expert in delivering developer portals and API documentation, making his talk a must attend session. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


May Contain Nuts: The Case for API Labeling by Erik Wilde (@dret), API Academy (@apiacademy)

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is May Contain Nuts: The Case for API Labeling by Erik Wilde (@dret), API Academy (@apiacademy) on September 25th.

I’ll let Erik’s bio set the stage for what he’ll be talking about at APIStrat:

Erik is a frequent speaker at both industry and academia events. In his current role at the API Academy, his work revolves around API strategy, design, and management, and how to help organizations with their digital transformation. Based on his extensive background in Web architecture and technologies, Erik combines deep expertise in protocols and representations with insights into API practices at today’s organizations.

Before joining API Academy and working in the API space full-time, Erik spent time at Siemens and EMC, in both cases working at ways how APIs could be used for their internal service ecosystems, as well as for better ways for customers to use services and products. Before that, Erik spent most of his life in academia, working at UC Berkeley and ETH Zürich. Erik received his Ph.D. in computer science from ETH Zürich, and his diploma in computer science from TU Berlin.

Erik knows his stuff, and can be found on the road with the CA API Academy, making this stop in Nashville, TN a pretty special opportunity. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Living In A Post Facebook, Twitter, and Instagram API World

While Facebook, Twitter, and Instagram will always have a place in my history of APIs, I feel like we are entering a post Facebook, Twitter, and Instagram API world. All three platforms are going through serious evolutions, which include tightening down the controls on API access, and the shuttering of many APIs. These platforms are tightening things down for a variety of reasons, which are more about their business goals than about the community. I’m not saying these APIs will go away entirely, but the era where these API platforms ruled is coming to a close.

The other day I articulated that these platforms only needed us for a while, and now that they’ve grown to a certain size they do not need us anymore. While this is true, I know there is more to the story of why we are moving on in the Facebook, Twitter, and Instagram story. We can’t understand the transformation that is occurring without considering that these platforms’ business models are playing out, and they (we) are reaping what they’ve sown with their free and open platform business models. It isn’t so much that they are looking to screw over their developers, they are just making another decision, in a long line of decisions, to keep generating revenue from their user generated realities, and advertising fueled perception.

I don’t fault Twitter, Facebook, and Instagram for fully opening their APIs, then closing them off over time. I fault us consumers for falling for it. I do fault Twitter, Facebook, and Instagram a little for not managing their APIs better along the way, but when your business model is out of alignment with proper API management, it is only natural that you look the other way when bad things are happening, or you are just distracted with other priorities. This is ultimately why you should avoid platforms who don’t have an API, or a clear business model for their platform. There is a reason we aren’t having this conversation about Amazon S3 after a decade. With a proper business model, and API management strategy, you deal with all the riff raff early on, and along the way–it is how this API game works when you don’t operate a user-exploitative business.

Ultimately, living in a post Twitter, Facebook, and Instagram API world won’t mean much. The world goes on. There will be some damage to the overall API brand, and people will point to these platforms as why you shouldn’t do APIs. Twitter, Facebook, and Instagram will still be able to squeeze lots of advertising based revenue out of their platforms. Eventually it will make them vulnerable, and they will begin to lose market share by being such closed off ecosystems, but there will always be plenty of people willing to spend money on their advertising solutions, and believe they are reaching an audience. Developers will find other data sources, and APIs to use in the development of web, mobile, and device applications.

The challenge will be making sure that we can spot the API platforms early on who will be using a similar playbook to Twitter, Facebook, and Instagram. Will we push back on API providers who don’t have clear business models? Will we see the potential damage of purely eyeball based, advertising fueled platform growth? Will we make sound decisions in the APIs we adopt, or will we continue to just jump on whatever bandwagon comes along, and willfully become sharecroppers on someone else’s farm? Will we learn from this moment, and what has happened in the last decade of growth for some of the most significant API platforms across the landscape today?

To help paint a proper picture of this problem, let me frame another similar situation that is not the big bad Twitter and Facebook that everyone loves bashing on. Medium. I remember everyone telling me in 2014 that I should move API Evangelist to Medium – I kicked the tires, but they didn’t have an API. A no go for me. Eventually they launched a read only API, and I began syndicating a handful of my stories there. I enjoyed some network effect. I would have scaled up my engagement if there was write access via APIs, as well as access to other platform related data, like my followers. I never moved my blog to Blogger, Tumblr, Posterous, or Medium, for all the same reasons. I don’t want to be trapped on any platform, and I saw the signs early on with Medium, and managed to avoid any frustration as they go through their current evolution.

I don’t use the Facebook API for much–it just isn’t my audience. I do use Twitter for a lot. I depend on Twitter for a lot of traffic and exposure. I would say the same for LinkedIn and Github. LinkedIn has been closing off their APIs for some time, but honestly it was never something that was ever very open. I worry about Github, especially after the Microsoft acquisition. However, I went into my Github relationship expecting it to be temporary, and because all my data is in a machine readable and portable format, I’m not worried about ever having to migrate–I can’t do this with Facebook, Twitter, Instagram, or LinkedIn. I’m saddened to think about a post Twitter API world, where every API call is monetized, and there is no innovation in the community. It is coming though. It will be slow. We won’t notice much of it. But, it is happening.

I know that Twitter, Facebook, and Instagram all think they are making the best decision for their business, and investors. I know they also think that they’ve done the best job they could have under the circumstances over the last decade. You did, within the vision of the world you had established. You didn’t for your communities. If Facebook and Twitter had been more strict and organized about API application reviews from the early days, and had structured tiers of free, as well as paid access early on, a lot fewer people would be complaining as you made those processes, and access tiers, more strict. It is just that you didn’t manage anything for so long, and once the bad things happening began affecting the platform bottom line, and worrying investors, then you began managing your API.

I know that Twitter, Facebook, and Instagram all think they will be fine. They will. However, over time they will become the next NBC, AOL, or other relic of the past. They will lose their soul, if they ever had one. And everyone on the Internet will be somewhere else, giving away their digital bits for free. This same platform model will play out over and over again in different incarnations, and the real test will be whether we ever care. Will we keep investing in these platforms, building out their integrations, attracting new users, and keeping them engaged? Or, will we work to strike a balance, and raise the bar on which platforms we sign up for, and ultimately depend on as part of our daily lives? I’m done getting pissed off about what Twitter, Facebook, and Instagram do. I’m more focused on evaluating and ranking all the digital platforms I depend on, and turning up or down the volume, based upon the signals they send me about what the future will hold.


How Should Teams Be Documenting Their APIs When You Have Both Legacy And New APIs?

I’m continuing my work to help the Department of Veterans Affairs (VA) move forward their API strategy. One area I’m happy to help the federal agency with is just being available to answer questions, which I also find make for great stories here on the blog–helping other federal agencies learn along the way. One question I got from the agency recently regards how teams should be documenting their APIs, taking into consideration that many of them are supporting legacy services like SOAP.

From my vantage point, minimum viable API documentation should always include a machine readable definition, and some autogenerated documentation within a portal at a known location. If it is a SOAP service, WSDL is the format. If it is REST, OpenAPI (fka Swagger) is the format. If it’s XML-RPC, you can bend OpenAPI to work. If it is GraphQL, it should come with its own definitions. All of these machine readable definitions should exist within a known location, and be used as the central definition for the documentation user interface. Documentation should not be hand generated anymore, with the wealth of open source API documentation solutions available.

Each service should have its own GitHub/BitBucket/GitLab repository with the following (a sketch of one possible layout follows the list):

  • README - Providing a concise title and description for the service, as well as links to all documentation, definitions, and other resources.
  • Definitions - Machine readable API definitions for the API’s underlying schema, and the surface area of the API.
  • Documentation - Autogenerated documentation for the API, driven by its machine readable definition.
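As a sketch of what that might look like on disk–the file and folder names here are illustrative, not a mandate:

```
facilities-api/
├── README.md           # concise title, description, and links to everything below
├── definitions/
│   ├── openapi.yaml    # machine readable contract for the surface area of the API
│   └── schema.json     # JSON Schema for the API's underlying objects
└── docs/
    └── index.html      # autogenerated documentation driven by the definition
```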

Depending on the type of API being deployed and managed, there should be one or more of these definition formats in place:

  • Web Services Description Language (WSDL) - The XML-based interface definition used for describing the functionality offered by the service.
  • OpenAPI - The YAML or JSON based OpenAPI specification format managed by the OpenAPI Initiative as part of the Linux Foundation.
  • JSON Schema - The vocabulary that allows for the annotation and validation of the schema for the service being offered–it is part of the OpenAPI specification as well.
  • Postman Collections - JSON based specification format created and maintained by the Postman client and development environment.
  • API Blueprint - The markdown based API specification format created and maintained by the Apiary API design environment, now owned by Oracle.
  • RAML - The YAML based API specification format created and maintained by Mulesoft.

Ideally, OpenAPI / JSON Schema is established as the primary format for defining the contract for each API, but teams should also be able to stick with what they were given (legacy), and run with the tools they’ve already purchased (RAML & API Blueprint), and convert between specifications using API Transformer.

API documentation should be published to its GitHub/GitLab/BitBucket repository, and hosted using one of the services’ static project site solutions, with one of the following open source documentation options:

  • Swagger UI - Open source API documentation driven by OpenAPI.
  • ReDoc - Open source API documentation driven by OpenAPI.
  • RAML - Open source API documentation driven by RAML.
  • DapperDox - Open source documentation that provides rich, out-of-the-box rendering of your OpenAPI specifications, seamlessly combined with your GitHub flavoured Markdown documentation, guides, and diagrams.
  • wsdldoc - A tool that can be used to generate HTML documentation out of a WSDL file.

There are other open source solutions available for auto-generating API documentation using the core API’s definition, but these represent some of the commonly used solutions out there today. Depending on the solution being used to deploy or manage an API, there might be built-in, ready to go options for deploying documentation based upon OpenAPI, WSDL, RAML, or other formats, using AWS API Gateway, Mulesoft, or another vendor solution already in place to support API operations.

Even with all this effort, a repository with a machine readable API definition and autogenerated documentation still doesn’t provide enough of a baseline for API teams to follow. Each API’s documentation should possess the following within those building blocks (a minimal sketch pulling these together follows the list):

  • Title and Description - Provide the concise description of what an API does from the README, and make sure it is baked into the API’s definition.
  • Base URL - Have the base URL, or variable representation for a base URL present in API definitions.
  • Base Path - Provide any base path that is constant across paths available for any single API.
  • Content Types - List what content types an API accepts and returns as part of its operations.
  • Paths - List all available paths for an API, with summary and descriptions, making sure the entire surface area of an API is documented.
  • Parameters - Provide details on the header, path, and query parameters used for each API path being documented.
  • Body - Provide details on the schema for the body of each API path that accepts a body as part of its operations.
  • Responses - Provide HTTP status code and reference to the schema being returned for each path.
  • Examples - Provide example requests and responses for each API path being documented.
  • Schema - Document all schema being used as part of requests and responses for all APIs paths being documented.
  • Authentication - Document the authentication method used (ie. Basic Auth, Keys, OAuth, JWT).

If EVERY API possesses its own repository, and a README to get going, guiding all API consumers to complete, up to date, and informative documentation that is auto-generated, a significant amount of friction during the on-boarding process can be eliminated. Additionally, friction at the time of hand-off for any service from one team to another, or one vendor to another, will be significantly reduced–with all relevant documentation available within the project’s repository.

API documentation delivered in this way provides a single known location for any human to go when putting an API to work. It also provides a single known location to find a machine readable definition that can be used to on-board using an API client like Postman, PAW, or Insomnia. The API definition provides the contract for the API documentation, but it also provides what is needed across other stops along the API lifecycle, like monitoring, testing, SDK generation, security, and client integration–reducing the friction across many stops along the API journey.

This should provide a baseline for API documentation across teams, no matter how big or small the API, or how new or old the API is. Each API should have documentation available in a consistent, and usable way, providing a human and programmatic way of understanding what an API does, that can be used to on-board and maintain integrations with each application. The days of PDF and static API documentation are over. The baseline for each API’s documentation always involves having a machine readable contract at the core, and managing the documentation as part of the pipeline used to deploy and manage the rest of the API lifecycle.


The Importance Of Postman API Environment Files

I’m a big fan of Postman, and the power of their development environment, as well as their Postman Collection format. I think their approach to not just integrating with APIs, but also enabling the development and delivery of APIs has shifted the conversation around APIs in the last couple of years–not too many API service providers accomplish this in my experience. There are several dimensions to what Postman does that I think are pushing the API conversation forward, but one that has been capturing my attention lately are Postman Environment Files.

Using Postman, you can manage many different environments used for working with APIs, and if you are a pro or enterprise customer, you can export a file that represents an environment, making each of these environment definitions more portable and collaborative. Managing the variety of environments for the hundreds of APIs I use is one of the biggest pain points I have. Postman has significantly helped me get a handle on the tokens and keys I use across the internal, as well as partner and public APIs that I depend on each day to operate API Evangelist.
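For reference, an exported Postman Environment File is just a small JSON document of key/value pairs–this one is a hand-built illustration with placeholder values, rather than an actual export:

```json
{
  "name": "Example Sandbox",
  "values": [
    { "key": "baseUrl", "value": "https://sandbox.api.example.com", "enabled": true },
    { "key": "apikey", "value": "REPLACE-WITH-YOUR-KEY", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```

Requests in a collection can then reference {{baseUrl}} and {{apikey}}, and switching environments swaps in a different set of values without touching the requests themselves.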

Postman environments allow me to define environments within the Postman application, and then share them as part of the pro / enterprise team experience. You can also manage your environments through the Postman API, if you need to more deeply integrate with your operations. The Postman Environment File makes all of this portable, sharable, and usable across environments. It is one of the things that makes Postman Collections more valuable to some users, in specific contexts, because it brings a run time aspect to what they do. Postman lets you communicate effectively around the APIs you are deploying and integrating with, and solves relevant pain points, like API environment management, that can stand in the way of integration.
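If you want that environment management wired into your own operations, the environments are also available programmatically–a quick sketch of listing them via the Postman API, assuming your key lives in a POSTMAN_API_KEY variable:

```bash
# List the environments associated with your Postman account.
curl --request GET \
  --url "https://api.getpostman.com/environments" \
  --header "X-Api-Key: $POSTMAN_API_KEY"
```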

There aren’t many features of API service providers I get very excited about, but the potential of Postman as an environment management solution is significant. If Postman is able to establish itself as the broker of credentials at the API environment level, it will give them a significant advantage over other service providers. With the size of their developer base, having visibility at the environment level puts their finger on the pulse of what is going on in the API economy, from both an API provider and consumer perspective. Postman Environment Files act as a sort of key, or currency, that has to exist before any API transaction can be executed. And, as the number of APIs we depend on increases, the importance of having a strategy (and solution) for managing our environments will grow exponentially–putting Postman in a pretty sweet position.

