The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Generating Operational Revenue From Public Data Access Using API Management

This is part of some research I'm doing with Streamdata.io. We share a common interest around the accessibility of public data, so we thought it would be a good way for us to partner, with Streamdata.io underwriting some of my work, while also getting the occasional lead from you, my reader. Thanks for supporting my work Streamdata.io, and thanks for supporting them, readers!

A concept I have been championing over the years involves helping government agencies and other non-profit organizations generate revenue from public data. It quickly becomes a charged topic whenever it is brought up, as many open data and internet activists feel public data should remain freely accessible. I don't entirely disagree, but this is a conversation that, when approached right, can actually help achieve the vision of open data, while also generating much needed revenue to ensure the data remains available, and even has the opportunity to improve in quality and impact over time.

Leveraging API Management

I'd like to argue that APIs, and specifically API management, are well established in the private sector, and increasingly in the public sector, as a way of making valuable data and content available online in a secure and measurable way. Companies like Amazon, Google, and even Twitter are using APIs to make data freely available, but through API management are limiting how much any single consumer can access, and even charging per API call to generate revenue from 3rd party developers and partners. This proven technique for making data and content accessible online using low-cost web technology, requiring all consumers to sign up for a unique set of keys, rate limiting access, and establishing different access tiers to identify and organize different types of consumers, can and should be applied in government agencies and non-profit organizations to make data accessible, while also asserting more control over how it is used.
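
To make this concrete, here is a rough sketch of how that pattern might look in code, using Python and Flask. The keys, tiers, and rate limits are all hypothetical placeholders, not a recommendation for any particular platform:

```python
import time

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical consumer registry: API key -> access tier
API_KEYS = {
    "citizen-key-123": "public",        # free tier, lower limits
    "acme-corp-key-456": "commercial",  # paid tier, higher limits
}

# Hypothetical calls allowed per hour for each tier
TIER_LIMITS = {"public": 1000, "commercial": 100000}

usage = {}  # API key -> list of timestamps for recent calls


@app.route("/services")
def list_services():
    # Every consumer must identify themselves with a key
    key = request.headers.get("X-Api-Key")
    tier = API_KEYS.get(key)
    if tier is None:
        abort(401, "Sign up for a key to access this public data API.")

    # Meter the call, then enforce the tier's hourly rate limit
    now = time.time()
    recent = [t for t in usage.get(key, []) if now - t < 3600]
    if len(recent) >= TIER_LIMITS[tier]:
        abort(429, "Hourly rate limit reached for your plan.")
    usage[key] = recent + [now]

    return jsonify({"tier": tier, "data": ["211 organizations, locations, and services go here"]})


if __name__ == "__main__":
    app.run()
```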

Commercial Use of Public Data

While this concept can apply to almost any type of data, for the purposes of this example, I am going to focus on 211 data, or the organizations, locations, and services offered by municipalities and non-profit organizations to help increase access and awareness of health and human services. With 211 data it is obvious that you want this information to be freely available, and accessible by those who need it. However, there are plenty of commercial interests who want this same data, and are using it to sell advertising against, or to enrich other datasets, products, or services. There is no reason why cash strapped cities and non-profit organizations should carry the load to maintain and serve up data for free, when the consumers are using it for commercial purposes. We do not freely give away physical public resources to commercial interests (well, ok, sometimes) without expecting something in return, so why would we behave differently with our virtual public resources?

It Costs Money To Serve Public Data

Providing access to public data online costs money. It takes money to run the databases, servers, bandwidth, and the websites and applications being used to serve up data. It takes money to clean the data, validate phone numbers and email addresses, and ensure the data is of a certain quality and brings value to end-users. Yes, this data should be made freely available to those who need it. However, the non-profit organizations and government agencies who are stewards of the data shouldn't be carrying the financial burden of this data remaining freely available to commercial entities who are looking to enrich their products and services, or simply generate advertising revenue from public data. As modern API providers have learned, there are always a variety of API consumers, and I'm recommending that public data stewards begin leveraging APIs, and API management, to better understand who is accessing their data, put them into separate buckets, and understand who should be sharing the financial burden of providing public data.

Public Data Should Be Free To The Public

If it is public data, it should be freely available to the public. On the web, and through the API. The average citizen should be able to use human service websites to find services, as well as use the API to help them in their efforts to help others find services. As soon as any application of the public data moves into the commercial realm, and the storage, server, and bandwidth costs increase, commercial consumers shouldn't be able to offload the risk and costs to the platform, and should be expected to help carry the load when it comes to covering platform costs. API management is a great way to measure each application's consumption, then meter and quantify its role and impact, and either allow it to keep accessing information freely, or require it to pay a fee for API access and consumption.

Ensuring Commercial Usage Helps Carry The Load

Commercial API usage will have a distinctly different usage fingerprint than the average citizen, or a smaller non-commercial application. API consumers can be asked to declare their application upon signing up for API access, as well as be identified through their consumption and traffic patterns. API management excels at metering and analyzing API traffic to understand where it is being applied, whether on the web or in mobile, as well as in system-to-system, machine learning, or big data analysis scenarios. Public data stewards should be in the business of requiring ALL API consumers to sign up for a key which they include with each call, allowing the platform to identify and measure consumption in real-time, and on a recurring basis.

API Plans & Access Tiers For Public Data

Modern approaches to API management lean on the concept of plans or access tiers to segment out consumers of valuable resources. You see this in software as a service (SaaS) offerings, which often have starter, professional, and enterprise levels of access. Lower levels of the access plan might be free, or low cost, but as you ascend up the ladder and engage with platforms at different levels, you pay different monthly as well as usage costs, while also enjoying different levels of access, and loosened rate limits, depending on the plan you operate within. API plans allow platforms to target different types of consumers with different types of resources, and revenue levels. This is something that should be adopted by public data stewards, helping establish common access levels that reflect their objectives, as well as align with a variety of API consumers.

Quantifying, Invoicing, And Understanding Consumption

The private sector focuses on API management as a revenue generator. Each API call is identified and measured, grouping each API consumer's usage by plan, and attaching a value to their access. It is common to charge API consumers for each API call they make, but there are a number of other ways to meter and charge for consumption. There is also the possibility of paying consumers for usage on some APIs, where specific behavior is being encouraged. API calls, both reading and writing, can be operated like a credit system, accumulating credits, spending credits, or translating credits into currency and back again. API management allows the value generated from, and extracted out of, public data resources to be measured, quantified, and invoiced, even if money is never actually transacted. API management is often used to show the exchange of value between internal groups, partners, as well as with 3rd party public developers, as we commonly see across the Internet today.
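
Here is a small, hypothetical sketch of what that kind of plan-based metering and invoicing might look like. The plan names, included call counts, and per-call prices are made up for illustration:

```python
from collections import defaultdict

# Hypothetical plans: calls included for free, and the price per billable call
PLAN_PRICING = {
    "public": {"included_calls": 10000, "price_per_call": 0.0},
    "commercial": {"included_calls": 10000, "price_per_call": 0.002},
    "sponsor": {"included_calls": None, "price_per_call": 0.0},  # flat sponsorship, never metered
}


def build_statements(usage_log):
    """usage_log is a list of (consumer_id, plan) tuples, one per measured API call."""
    call_counts = defaultdict(int)
    consumer_plans = {}
    for consumer, plan in usage_log:
        call_counts[consumer] += 1
        consumer_plans[consumer] = plan

    statements = []
    for consumer, count in call_counts.items():
        pricing = PLAN_PRICING[consumer_plans[consumer]]
        included = pricing["included_calls"]
        billable = 0 if included is None else max(0, count - included)
        statements.append({
            "consumer": consumer,
            "plan": consumer_plans[consumer],
            "calls": count,
            "billable_calls": billable,
            # The amount can be invoiced, or simply reported, even if no money changes hands
            "amount_due": round(billable * pricing["price_per_call"], 2),
        })
    return statements
```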

Sponsoring, Grants, And Continued Investment in Public Data

Turning the open data conversation around using APIs will open up direct revenue opportunities for agencies and organizations from charging for volume and commercial levels of access. It will also open up the discussion around other types of investment that can be made. Revenue generated from commercial use can go back into the platform itself, as well as funding different applications of the data–further benefitting the overall ecosystem. Platform partners can also be invited to join at specific sponsorship tiers where they aren't necessarily metered for usage, but are putting money on the table to fund access, research, and innovative uses of public data–going well beyond just “making money from public data”, as many open data advocates point out.

Alternative Types of API Consumers

Discovering new applications, data sources, and partners is increasingly why companies, organizations, institutions, and government agencies are doing APIs in 2017. API portals are becoming external R&D labs for research, innovation, and development on top of digital resources being made available via APIs. Think of the social science research that occurs on Twitter or Facebook, or entrepreneurs developing new machine learning tools for healthcare or finance. Once data is available, and identified as a quality source of data, it will often be picked up by commercial interests building interesting things, but also by university researchers, other government agencies, and potentially data journalists and scientists. This type of consumption can contribute directly to new revenue opportunities for organizations around their valuable public data, but it can also provide more insight, tooling, and other contributions to a city's or organization's overall operations.

Helping Public Data Stewards Do What They Do Best

I'm not proposing that all public data should be generating revenue using API management. I'm proposing that there is a lot of value in these public data assets being available, and that a lot of this value is being extracted by commercial entities who might not be as invested in a public data steward's long-term viability. In an age where many businesses of all shapes and sizes are realizing the value of data, we should be helping our government agencies, and the not-for-profit organizations that serve the public good, realize this as well. We should be helping them properly manage their digital data assets using APIs, develop an awareness of who is consuming these resources, and then develop partnerships, and new revenue opportunities, along the way. I'm not proposing this happens behind closed doors, and I'm interested in things following an open API approach that provides observable, transparent access to public resources.

I want to see public data stewards be successful in what they do. The availability, quality, and access of public data across many business sectors is important to how the economy and our society works (or doesn’t). I’m suggesting that we leverage APIs, and API management to work better for everyone involved, not just generate more money. I’m looking to help government agencies, and non-profit organizations who work with public data understand the potential of APIs when it comes to access to public data. I’m also looking to help them understand modern API management practices so they can get better at identifying public data consumers, understanding how they are putting their valuable data to work, and develop ways in which they can partner, and invest together in the road map of public data resources. This isn’t a new concept, it is just one that the public sector needs to become more aware of, and begin to establish more models for how this can work across government and the public sector.


The Many Meanings Of "Do Not Make The Same Mistake As Twitter Did With Their API"

I remember the first time I heard someone say that they didn't want to make the same mistake as Twitter did with their API. It was from Pinterest. After that I heard the phrase uttered by many companies, with an almost entirely different meaning each time behind what the mistake actually was. Twitter is a darling of the API community, as well as the poster child for what not to do in the API space. I consider Twitter to be in the top 10 most important APIs out there, as well as in the top 10 APIs I wouldn't want to be responsible for, and it is a platform full of endless examples of how to do APIs right, and how to do them wrong.

When some companies say this phrase, they mean they don't want to make the mistake Twitter did by having an API at all–usually heard from executives. Other times, it is said in response to anti-competitive behavior in their API ecosystem, and treating startups badly. When you hear it from developers, it is usually about the rate limits, and the rules of the road Twitter published a few years back. In coming years I predict we'll be saying it about automation, using Twitter as a case study for how not to assert control over bots on your API platform. You'll find me leveraging this statement regularly to talk about making sure you have a real API monetization strategy, and don't wait a decade to start offering premium access to your APIs that is accessible to EVERYONE.

I've been complaining about access to the Twitter API for over five years now. API plans are at the heart of every API I keep an eye on. They set the tone for ALL the conversations that go on around an API. The lack of a coherent, equitable API access plan at Twitter has set into motion almost every other illness on the platform, from harassment to bots. Many of the reasons I would utter the phrase “you don't want to make the same mistake as Twitter” all stem from the lack of a coherent API plan for the platform. Not having a dedicated page, with a coherent plan for access to your API, is the number one mistake you can make operating your APIs in 2017.

When I say this, I am not making any assumptions around what you should be charging, or how you should be limiting access. I'm simply saying that you should have a plan, and your community needs a URI where they can become aware of your plan before ever consuming an API. There are many ways you can make your plan too restrictive, and inject other problems into your API plan, but having one is a good start, and really helps distinguish the APIs that have their act together from those that do not. An API plan demonstrates that you, well, have a plan. Having it publicly available in your developer portal demonstrates some transparency around your plan, and that you are somewhat willing to include your API community in it–always a good idea, but you'd be surprised who doesn't quite understand this.

Twitter's biggest crime in my book has been not having an API plan over the years. There are many ways we can beat up on Twitter for their shortcomings, and I've done my fair share. However, I do try to understand the scope of their challenges, and showcase the good that comes from the ecosystem as well. I'm hopeful that their move into offering premium APIs is more about defining a coherent plan for the API, and not simply about chasing a new revenue stream. The release of the APIs, and the structure of the plans and pricing, seem to reflect that they've done some deep thinking from the consumer perspective, so I am optimistic they are moving towards having a plan over just squeezing more money out of our information.

The moral of today's story, kids, is that if you don't want to make the same mistake as Twitter, do not wait a decade to have a plan in place for your API. Make sure all your APIs have a clear monetization strategy, with a coherent plan in place regarding how you'll manage your APIs in a way that delivers on your monetization strategy, but also provides a shared plan that includes your partners, and regular API consumers. Without a solid plan in place for your API, your community will never truly be in sync with your organization, and you'll be incentivizing the worst behavior amongst your consumers. Don't be like Twitter, and have a plan in place from day one.


API Management Is About Awareness And Control Over Our Digital Resources

I’ve been diving into the fundamentals of API management as part of several projects I am working on. I am setting up API management for a single API project, as well as thinking through API management practices across many API implementations in a single industry. I also just had lunch with a friend at an API startup I work with who is looking to invest in me doing some further research and storytelling when it comes to API management. All of this is providing me with a great opportunity to step back and think about API management from the small detailed moving parts, all the way up to the industry, regulatory, and macro levels of managing digital resources online.

API management is the oldest aspect of my research, and one I still think is among the most critical aspects of doing APIs. While there are many features modern API management brings to the table, the core of it all is about allowing consumers to sign up to access some data, content, media, or algorithm. Each consumer receives a set of keys that identify them and allow their access to be measured, which then allows the owners or stewards of digital resources to develop awareness around who is accessing a resource, and what they are doing with it. Some call it security, others analytics, but I see it as being about developing awareness of, and asserting control over, our digital resources.

If you are focused on monetization, API management is about generating revenue. If you are worried about who has access to your digital assets, API management is about security. If you are doing API management right, you realize it is about being aware of the digital resources you have, working to make sure they are well defined, and staying tuned into your API management dashboard to understand how they are being used (or not used). I feel like this has been one of the shortcomings of a VC-led world of API management: it became heavily focused on restricting and controlling access, and fixated on generating revenue, leaving a significant amount of opportunity on the table for making sense of the digital resources we all depend on, and maximizing their access, usage, and yes, revenue.

I see more investment in APIs when it comes to startups getting access to resources. I also see heavy investment when it comes to APIs generating new data points (home, auto, wearables, sensors, etc.). However, when it comes to understanding and quantifying the data, content, and algorithms already in use, there just isn't much investment. Not that there isn't value there. There just isn't enough value there to attract VC-level interest. I feel like the tech sector wants APIs for all the wrong reasons. The reasons that benefit them. The pervasive nature of this way of thinking has kept companies, organizations, institutions, and government agencies from developing awareness of, and establishing control over, their vital digital resources using API management.


We Love What You Do In The API Space But Could You Do It Our Way

I hear it daily in my inbox, on Twitter, and via LinkedIn. We love what you do! We've followed your work for a while, and love your unique voice, and the way you tell stories on your blog. I'm not very good at accepting praise on my work, especially when I know that much of it isn't sincere and genuine. Saying it casually to me is weird, and I am not sure why people feel like they should be saying it, but it is the folks who go the distance to say it, and then also try to change the way I am after acknowledging over and over that they like what I do, who really get to me.

From running a major conference, to my everyday storytelling, I get waves of people who like what I've done historically, and want to support and be part of it, but once engaged they actively try to change the conversation, and change the tone of what I do. The community has really seemed to rally around your conference, and clearly you've built a loyal group by making your event about ideas–we'd like to sponsor, but we really need a main stage talk where we can talk about our products. We love the tone of your storytelling on the blog and how you educate people about the real world aspects of doing APIs–we'd love to sponsor, but we need you to talk about our products, and shift the focus to what we are doing. There are so many ways people acknowledge the value of what I do, but then want me to do the same old tired thing they've been doing.

I get why you do it. It is easy. It is going from zero to what you want in as little time as possible. However, you seem to be all too willing to completely ignore why my thing is working, why your old tired thing isn't, and why you are even attracted to my thing in the first place. It is because your approach isn't creative. It isn't genuine. Nobody cares. It takes work to actually care about something, and to find a way to share it so that folks will actually care about it. You can't just do this with any technology, product, or service. If I just did your thing, then I'd be just like you, and people like you wouldn't even notice me. I wouldn't have any audience, or people who trust me. Maybe that is just what you want though? Maybe I make you look bad, and it would be easier if I just went away.

Honestly, I just can't get into the heads of folks who are attracted to what I do, approach me, then want to change what I do. Why are 1000 lb gorillas so used to getting their way sponsoring conferences, and getting the tech blogosphere to be their mouthpiece? I guess the ROI on it is still greater than the work of having to do anything meaningful. They can use up small bloggers like me, and move along without any meaningful consequences. I guess many haven't really stopped to evaluate why I've managed to build and maintain an audience over 7 years of doing this, they just hear people talking about what I do, and think “I need some of that!”. Well, I guess you'll keep doing your version, and I'll keep doing mine, and we'll keep meeting like oil and water until one of us gets our way. I'm guessing your type of approach will ultimately win out, but as long as I'm alive I'm going to keep doing it my way, just so I can be a monkey wrench in your way of doing things.


My Basic YAML For Starter API Plans

I started developing a machine readable format for describing the API plans and pricing of leading API providers a few years back. Eventually I'd like to see the format live alongside OpenAPI, Postman, and other machine readable API specifications within a single APIs.json index. I am looking to adequately describe the plans and pricing for APIs, which are often just as important as the technical details, in the same way we've described the technical surface area of an API using OpenAPI for some years now. People love to tell me that I will never be able to do it, which only makes me want to do it more.

I'm revisiting this work as part of a client's project, which I'm also using to push forward my API portal and management toolbox. The project I'm working on has two API plans:

1) Starter - The free level of access everyone gets when signing up for access to an API.
2) Verified - A verified level of pay as you go usage once you have a credit card on file.

I’ve taken the common elements across these plans and described them in a YAML format which allows me to remix the elements into the two plans I currently have, while also allowing me to reuse them for possible future plans, helping keep my approach consistent.

I'm using the plan elements in this YAML file to generate the plans and pricing page for each API, generating two separate plan boxes with the details and elements of each plan. I keep all the moving parts of each plan defined as separate fields and collections so that I can reuse them in any new plans. I also make use of the individual elements in comparison charts, and other pricing and plan related resources throughout an API's portal. The specification isn't perfect, but it provides me a starting point for considering how I make my API plans and pricing machine readable, and indexed as part of the APIs.json for each of my projects.
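
To give a rough sense of what I mean, a hypothetical sketch of these kinds of plan elements in YAML, loaded and rendered with a few lines of Python, might look like this (the field names are illustrative, not my final format):

```python
import yaml  # pip install pyyaml

# Illustrative plan elements, not the actual specification format
PLANS_YAML = """
plans:
  - name: Starter
    description: The free level of access everyone gets when signing up.
    rate_limit_per_minute: 60
    price_per_call: 0.0
    requires_credit_card: false
  - name: Verified
    description: Pay as you go usage once a credit card is on file.
    rate_limit_per_minute: 600
    price_per_call: 0.001
    requires_credit_card: true
"""

plans = yaml.safe_load(PLANS_YAML)["plans"]

# Reuse the same elements to render a simple pricing page, comparison chart, etc.
for plan in plans:
    print(f"{plan['name']}: {plan['rate_limit_per_minute']} calls/min, "
          f"${plan['price_per_call']} per call")
```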

Next, I am taking API plan templates and auto-generating plans using AWS API Gateway. I'm going to play around with recreating some of the common plans we see from leading API providers using an existing gateway solution. Similar to how I generate the technical surface area of an API using AWS API Gateway, I'm looking to generate the business surface area of each API using the gateway as well. There is definitely still a lot of work ahead of me when it comes to polishing my API plan specification format, but I feel like it is a pretty good start, and after publishing a version for leading API providers, as well as some of the custom projects I'm working on, I think it will be a little more ready for prime time.


Three Stripe OpenAPI Vendor Extensions

As part of my work on my OpenAPI toolbox I am keeping an eye out for how leading API providers are using OpenAPI. One layer of this part of my research is understanding how teams are extending the OpenAPI specification, while also encouraging other companies to understand that they can extend the specification in the first place. I’m always surprised how many people I come across that say they do not use the specification because it doesn’t do everything they need. I alternatively feel like it is my responsibility to understand what the spec can do, and then bend it to do what I need it to using vendor extensions.

I have been studying how payment provider Stripe has been crafting their OpenAPI throughout the week, while also understanding how they are applying it across their platform operations. As part of their Github repository for managing the Stripe OpenAPI they share three vendor extensions they are using to evolve what is possible with OpenAPI:

  • x-expandableFields - Resources include an x-expandableFields that contains a list of fields that are expandable by making an API request with an expand parameter. See expanding objects.
  • x-polymorphicResources - Some API responses are “polymorphic” in that they might return multiple types of resources which is a case that OpenAPI can’t handle. In these cases the spec will reference a “synthetic” resource which is an aggregate of the properties common to all the possible resources. It will also include the field x-polymorphicResources which references those resources more precisely.
  • x-resourceId - Resources include x-resourceId which is a canonical name for each resource. It can be used in conjunction with openapi/fixtures{2,3}.{json,yaml} to look up a sample representation (otherwise known as a “fixture”) of the resource.

Some interesting extensions. The expandable fields and resource ID are pretty straightforward, but the polymorphic resources open up some interesting questions when it comes to API design. It makes me want to learn more about how and why Stripe does this with their API. Maybe it is just me, but I find OpenAPI a very useful tool for quantifying the design decisions that go into an API. I'm eager to learn more about how consistent providers are, as well as understanding where they deviate, and I find vendor extensions are useful in revealing clues behind the decisions to deviate from common API design patterns.
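
If you want to see which vendor extensions an OpenAPI uses for yourself, a quick sketch like the following will surface every x- property in a definition. The URL points at the spec3.json file in Stripe's public openapi repository, though the exact path or branch may change, and any OpenAPI JSON document will work the same way:

```python
import json
import urllib.request

# Stripe's published OpenAPI; the path or branch may have changed since this was written
OPENAPI_URL = "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json"


def find_extensions(node, path="", found=None):
    """Recursively collect every x- prefixed key and the paths where it appears."""
    if found is None:
        found = {}
    if isinstance(node, dict):
        for key, value in node.items():
            if key.startswith("x-"):
                found.setdefault(key, []).append(path or "/")
            find_extensions(value, f"{path}/{key}", found)
    elif isinstance(node, list):
        for index, item in enumerate(node):
            find_extensions(item, f"{path}[{index}]", found)
    return found


with urllib.request.urlopen(OPENAPI_URL) as response:
    spec = json.load(response)

for extension, locations in sorted(find_extensions(spec).items()):
    print(f"{extension}: used in {len(locations)} places")
```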

I am going to be spending a lot of time studying Stripe's usage of OpenAPI. It is significant when top tier providers share their OpenAPI on Github like Stripe does. It helps us learn more about the Stripe API, and the design and documentation decisions that have gone into the payment API. I wish more API providers would share their OpenAPI definition(s) via Github, and share any vendor extensions they have defined. A machine readable API definition on Github, with easy to find vendor extensions, across many API providers sounds like the beginning of a new generation of API discovery. One that can help drive the future of the OpenAPI Initiative (OAI) through real world usage, and shape the OpenAPI specification road map through actively defining and sharing vendor extensions.


The Information You Get When Allowing Developers To Sign Up For An API Using Github

I’m a big Github user. I depend on Github for managing all my projects, and Github Pages for the presentation layer around all my research. When anything requires authentication, whether for accessing an API, or gaining access to any of my micro apps, I depend on Github authentication. I have a basic script that I deploy regularly after setting up a Github OAuth application, which I use to enable authentication for my API portals and applications, handling the OAuth dance, and returning me the information I need for my system.
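
For anyone who hasn't wired this up before, the basic shape of that OAuth dance looks roughly like the sketch below. The client ID and secret are placeholders for whatever your registered Github OAuth application provides:

```python
import json
import urllib.parse
import urllib.request

# Placeholders for the values your registered Github OAuth application gives you
CLIENT_ID = "your-oauth-app-client-id"
CLIENT_SECRET = "your-oauth-app-client-secret"


def exchange_code_for_user(code):
    """Trade the temporary code from the OAuth callback for the user's profile."""
    # Step 1: exchange the code for an access token
    token_request = urllib.request.Request(
        "https://github.com/login/oauth/access_token",
        data=urllib.parse.urlencode({
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "code": code,
        }).encode(),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(token_request) as response:
        access_token = json.load(response)["access_token"]

    # Step 2: use the token to pull the authenticated user's profile fields
    user_request = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"token {access_token}"},
    )
    with urllib.request.urlopen(user_request) as response:
        return json.load(response)
```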

After a user authenticates I am left with access to the following fields: id, avatar_url, gravatar_id, url, html_url, followers_url, following_url, gists_url, starred_url, subscriptions_url, organizations_url, repos_url, events_url, received_events_url, type, site_admin, name, company, blog, location, email, hireable, bio, public_repos, public_gists, followers, following, created_at, updated_at, private_gists, total_private_repos, owned_private_repos, disk_usage, collaborators, two_factor_authentication, and plan. Not all these fields are filled out, and honestly I don’t care about most of them for my purposes, but it does provide an interesting look at what you get from Github, over a basic email and password approach to authentication.

I'm just looking for some baseline information to validate someone is a human being when signing up. Usually a valid email is this baseline. However, I prefer some sort of active profile for a human being, and have chosen Github as the baseline. When anyone signs up I also quickly calculate some other considerations regarding how long they've had a Github account, how active it is, and some numbers regarding that history and activity. I don't expect everyone to have a full blown public Github profile like I do, but if you are looking to use one of my APIs, or API-driven micro tools, I'm looking for something more than just a valid email–I want some sign of life. I will be evolving this algorithm, and enforcing it in different ways at different times.
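
A rough sketch of that kind of “sign of life” scoring, using the public Github users API, might look like this. The thresholds and weights are arbitrary examples rather than the actual algorithm I use:

```python
import json
import urllib.request
from datetime import datetime

GITHUB_USER_API = "https://api.github.com/users/{username}"


def github_signal(username):
    """Score a Github profile as a rough 'sign of life' check."""
    with urllib.request.urlopen(GITHUB_USER_API.format(username=username)) as response:
        user = json.load(response)

    created = datetime.strptime(user["created_at"], "%Y-%m-%dT%H:%M:%SZ")
    age_days = (datetime.utcnow() - created).days

    # Arbitrary example weights for account age, activity, and linked presence
    score = min(age_days // 365, 5)              # up to 5 points for account age
    score += min(user["public_repos"], 10) // 2  # up to 5 points for public repos
    score += min(user["followers"], 10) // 2     # up to 5 points for followers
    if user.get("blog") or user.get("company"):
        score += 2                               # points for a presence elsewhere

    return {"username": username, "age_days": age_days, "score": score,
            "looks_human": score >= 4}


print(github_signal("octocat"))
```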

I always hesitate using Github as the default login for my API portals and applications, but honestly I think it is a pretty low bar to expect folks to have a Github account. I feel like we should be raising the bar a little when it comes to who is accessing our resources online. The APIs and tooling I'm making available are mine, and I just want to make sure you are human, and are verifiable on some level, and I find that the links available as part of your Github profile provide me with more reliable and verifiable aspects of being human in the tech space. That makes the fields returned as part of Github authentication pretty valuable for verifying humans in my self-service, and increasingly automated, world.


You Thinking I Mean REST When I Say API Is About Your Limited Views, Not Mine

I'm fascinated by the baggage people bring to the table when engaging in discussions around technology with me. A common opener for many conversations with seasoned technologists centers around REST not penciling out as everyone thought, and failing to be the catch-all solution, and will quickly move to how I feel about some new technology (GraphQL, gRPC, Kafka, or other) making my work irrelevant. I wish I had some quick phrase to help folks understand how this line of questioning demonstrates their extremely limited views of the tech sector, as well as of my work with APIs, but alas I find silence usually does the job in these situations–allowing everyone to quickly move on.

For me, application programming interface, or API, is all about finding the right interface for programming against for a specific application. I'd say the closest thing that anchors my belief system to REST is that I tend to focus on leveraging the web when it comes to defining APIs, because it is low cost, usually well known, and avoids reinventing the wheel. I'm not a RESTafarian, and you will not find me online arguing the finer details of REST over other approaches. It just isn't my style, and I leave it to y'all to work out these finer details, and share the stories about what is working, and what is not working, in your operations.

Your assumptions around what API means to me demonstrate your limited views, and only partially because of the technological underpinnings. The technical details of APIs are only 1/3 of the equation for me, and the majority of my research and storytelling focuses on the business and politics of doing APIs, but I'm guessing you aren't aware of this. I find that the technology definitely sets the tone for API implementations and conversations, but the ones that actually make a significant impact always transcend the technology, and help acknowledge and understand the other aspects of operating online which make doing APIs a good or bad thing. From your opening statements, I'm guessing our conversation won't be transcending anything.

I appreciate you taking a moment to share your limited view of APIs and what I do. You'll have to excuse me for not having much to say, but after doing this for seven years I know that I will have little effect on shifting your limited views of what an API is, and what it is that I do as the API Evangelist. I know that you feel pretty strongly that REST APIs didn't deliver, but I'm pretty busy helping folks understand how they can effectively manage their digital resources on the web, and securely share and provide access to their data, content, and algorithms using web technology. I don't see the need for managing and moving these digital bits going away anytime soon, and I find my time is better spent avoiding the political eddies that swirl at the edges of the API mainstream.


Stripe Elements And How We Organize Our API Embeddables

I am setting up Stripe for a client, and I found myself browsing through Stripe Elements, and the examples they have published to Github. If you aren't familiar, “Stripe Elements are pre-built rich UI components that help you build your own pixel-perfect checkout flows across desktop and mobile.” I put Stripe Elements into my bucket of API embeddables, which overlaps with my API SDK research, but because they are JavaScript, and open up a whole new world of possibilities for developers and non-developers, I keep it separate.

Stripe.js and its supporting elements provide a robust set of solutions for integrating the Stripe API into your website, web, or mobile application. You can choose a pre-made element, customize it as you see fit, or build your own using the Stripe.js SDK. It provides a great place to start when learning about Stripe, reverse engineering some existing solutions, and figuring out what integration will ultimately look like. In my scenario, the default Stripe element in their documentation works just fine for me, but I couldn't help playing with some of the others just to see what is possible.

You can tell Stripe has invested A LOT into their Stripe.js SDK, and the overall user experience around it. It provides a great example of how far you can go with embeddable API solutions. I like that they have the Stripe Elements published to Github, and available in six different languages. As I was learning and Googling, I came across other examples of Stripe.js deployments on 3rd party sites, making me think it would be nice if Stripe had a user generated elements gallery as part of their offering, accepting pull requests from developers in the Stripe community. It wouldn't be that hard to come up with a template markdown page that developers could fill out and submit, sharing their unique approach to publishing Stripe Elements.

Having a Github repository to display example embeddable API tooling makes sense, and is something I'll add to my embeddable research. While not all SDKs warrant having their own Github repository, I'd say that embeddable JavaScript SDKs rise to the occasion, and when they are as robust as Stripe Elements they might benefit from their own landing page, and forkable, continuously integratable elements. Actually, on second thought, in a CI/CD world I'm feeling like ALL API SDKs should have their own repository, opening up the possibility for them to have their own landing page, issues, and code in a separate repo, for easier integration. I'm going to do a round-up of Stripe's embeddable efforts, as well as the other SDKs they support, and see what other examples I can extract for other API providers to consider as they pull together their approach to supporting the intersection of embeddable and SDK.


Form Posts As Gateway For Showing People They Can Program The Web Using APIs

I am always looking for new avenues to help on-board folks with APIs. I’m concerned that folks aren’t quite ready for the responsibility that comes with a programmable web, and I’m looking for ways to help show them how the web is already programmable, and that APIs can help them take more control over their data and content online. A significant portion of my low hanging fruit API work centers around the forms already in use across websites, and how these forms are a doorway for data, and content that should also be available via an API. If information is already available on your website, and being gathered or displayed in response to a form on your website, it is a great place to start a conversation around providing an API that delivers the same functionality.

Sometimes forms drive website searches, and act as a way to gather some data before presenting results–providing a good example of a GET API. In other situations forms act as an input for data, such as a contact form, or survey response. In these scenarios, forms provide a good example of a writable, or POST, API path–allowing data and content to be added into a system. I'm always pointing out that if data is displayed in tables, or accessible through a form submission on a website, this is where you should start with readable APIs. The same holds true for form submissions that gather data, being where folks should be starting with writable APIs.
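
A minimal sketch of this form-to-API thinking, using Python and Flask, might look like the following. The paths, fields, and sample data are hypothetical, but they show how the same endpoints can back a search form (GET) and a contact form (POST) while also acting as read and write APIs:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical data that would normally live in a real backend
SERVICES = [
    {"name": "Food Bank of Example County", "category": "food"},
    {"name": "Example Housing Assistance", "category": "housing"},
]
MESSAGES = []


@app.route("/search")  # backs the website's search form, and doubles as a readable GET API
def search():
    q = request.args.get("q", "").lower()
    results = [s for s in SERVICES if q in s["name"].lower() or q in s["category"]]
    return jsonify(results)


@app.route("/contact", methods=["POST"])  # backs the contact form, and doubles as a writable POST API
def contact():
    # Accept either a classic form submission or a JSON API call
    data = request.form if request.form else (request.get_json(silent=True) or {})
    MESSAGES.append({"email": data.get("email"), "body": data.get("body")})
    return jsonify({"status": "received"}), 201
```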

I feel like contact, messaging, and survey forms are all good examples of how companies, organizations, institutions, and government agencies can begin with write APIs. Business users get the concept of a form, and even its submission via a POST on the web. It is a great place to start the conversation with folks about having APIs that don’t just deliver information, but also accept new information using the web. I’m thinking about how I can use Google Forms in conjunction with the work I’ve been doing around managing data using Google Sheets, and exposing it publicly using the Google Sheets API. Demonstrating how simple it is to make data reusable across many applications using Google Sheets, but then also open up access to submit data using Google Forms.

Forms are nothing new in my work as the API Evangelist. I see regular waves of startups emerge to try and crack open the intersection of forms and APIs. I feel like it is one of those areas where we need a lot more training and educational materials, as well as simple prototypes and open source tooling, to help folks understand how forms and APIs work together before the next waves of form based startups can get the traction they desire. I feel like there is a significant opportunity to open up the mind of the average business user regarding the programmability of the web using APIs, but it isn't just a business opportunity, it's an educational opportunity. Which the API space isn't always good at delivering on, because we tend to be so distracted with the building and selling of startups. Hopefully, someone comes along and can use forms as a way to onboard the next wave of API savvy folks, which eventually will pencil out in the success of one or more form centered API startups.


Deploy Low Hanging Fruit Rogue API Portals For Those Who Are Behind The Curve

The concept of rogue APIs isn't anything new. Instagram started out as a rogue API, and many leading platforms who are less than open with their platforms have rogue APIs. They are usually APIs that have been reverse engineered from mobile applications, and published to Github for other developers to use. I'm looking to marry this concept with my low hanging fruit API work, where I help organizations start their API journey using data and content that is already on their website. Meaning, if it is already available on the web as a table, form, CSV, spreadsheet, or other machine readable file, it should be available via an API. As APIs are just the next step in the evolution, this is the logical place for the API journey to begin for many companies, organizations, institutions, and government agencies.

I've spidered the entire websites of organizations to extract lists of data sources they should be turning into APIs. I've done this at the request of the website owner, as well as without permission. Honestly, it provides a pretty compelling look at the digital presence of an organization when you harvest raw data like this and publish it to a Github repository. It isn't a view that every organization is ready for, or has thought about. That makes it an even more important place for organizations to start their API journey. APIs aren't just about providing access to your data and content for your partners and 3rd party developers, they are about getting a handle on your digital assets, and how you present and provide access to this digital representation of your organization–something many suck at profoundly.
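
The harvesting step itself doesn't have to be complicated. A rough sketch like the one below pulls every HTML table off a page and writes each one out as a CSV file that could then be hung behind an API. The URL is a placeholder, and pandas.read_html needs lxml or beautifulsoup4 installed:

```python
import pandas as pd  # pandas.read_html also needs lxml or beautifulsoup4 installed

PAGE_URL = "https://example.org/open-data/some-page.html"  # placeholder target page

# read_html returns one DataFrame per <table> element found on the page
tables = pd.read_html(PAGE_URL)

for index, table in enumerate(tables):
    filename = f"dataset_{index}.csv"
    table.to_csv(filename, index=False)
    print(f"Wrote {len(table)} rows to {filename}")
```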

I'd like to invest more cycles into my low hanging fruit API research. I'd love to take some government agencies and not just identify the low hanging fruit, but actually deploy a rogue API portal, and hang some of the APIs there. I'd like to do this for a couple of companies and institutions, as well as government agencies. I know that I'd get in trouble doing this with some companies, and even other entities, but I think it is a good way to instigate the API conversation, and I am willing to take the chance. I had the University of Oklahoma contact me after I scraped their web site, and I think I could recreate the effect with other groups. The trick is doing it in a transparent and observable way, with everything on Github, and communicated in a clear way. So that someone knows who is behind it, and can reach out to do things in a more formal way–moving from a rogue API, to an official API.

To move this forward I am going to target a single government agency, scrape their website, and any other open data I can find, and then publish a rogue API portal, and begin hanging some of the APIs there. I'm even going to open up read and write capabilities via the API for any developer who wants to register, and pay for access to the API. I'll make sure things are clearly marked as being an unofficial rogue API, and provide contact information for anyone looking to communicate with me. I see low hanging fruit rogue APIs as being a way I can get the attention of companies, organizations, institutions, and government agencies when it comes to APIs. I can even begin to build awareness and critical mass within a community around the digital assets shared on the website, and now via an API portal. A kind of activist API deployment, and a beginning of the public API journey.

This goes well beyond the concept of scraping for me, which I've seen a number of startups come and go trying to accomplish. This is about helping show organizations the importance of having a website as well as APIs to help counter scraping efforts, and get a better handle on their digital presence. It is meant to start the conversation with some very entrenched folks around the digital resources they are making public, and how APIs can help them better quantify their digital presence, and take control over that presence beyond just their website. If there is an agency, institution, or organization you'd like to see targeted, or would even be willing to invest some money in deploying a low hanging fruit rogue API portal for, feel free to let me know. I'll be investing some cycles into this area of my research, just to make sure my content is fresh, while also seeing what new conversations I might be able to jumpstart, so it's a good time to get involved and fund what I'm doing.


Headless CMS And The API Evolution Beyond WordPress

I am a fan of what WordPress has done for the online world. I feel like it has enabled a lot of folks to take some control over their web presence, and in some situations even made programmers out of business people who never thought that is what they'd end up doing. Even with all the positive benefits of WordPress, it has had some significant negative side effects, which I think warrant us beginning to look beyond the existing ecosystem–something I'm hoping the headless CMS and static website movement can help fuel. I'm not anti-WordPress, but I think the movement has run its course, and we can do better when it comes to helping folks take control over their web presence, as well as avoid many of the security challenges we experience as a result of WordPress.

If you aren't familiar with the concept of headless, it is just about doing APIs, but centered around the end deliverable–the application. Headless focuses on decoupling content for use in apps, websites, or any other data-driven projects, allowing content to be created and managed independently from where it's used. To us API-aware folks this is how all applications should behave, but I feel like the headless CMS concept is an important API gateway for business users who have drunk the WordPress kool-aid, and are looking to do more with their CMS, and break free of some of the challenges of operating exclusively in a WordPress state of mind.
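
As a toy illustration of the decoupling, imagine the content living behind an API, with a static page generated from it, and no CMS running in front of visitors at all. The content endpoint and fields here are hypothetical:

```python
import json
import urllib.request

# Hypothetical headless CMS endpoint and fields
CONTENT_API = "https://example.com/api/pages/about"

with urllib.request.urlopen(CONTENT_API) as response:
    page = json.load(response)

# Render the content into a static page, with no CMS running in front of visitors
html = f"""<html>
  <head><title>{page['title']}</title></head>
  <body>{page['body']}</body>
</html>"""

with open("about.html", "w") as output:
    output.write(html)
```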

The most damaging aspect of WordPress I have felt is its emphasis on the blog. Everything is centered around the blog with WordPress installs, which isn't the reason many folks should be doing a website in the first place, but because of the platform they feel compelled to. Headless CMS can be about managing ANY content, and crafting an API backend that can deliver exactly the content you need to any website, web or mobile backend, bypassing the concept of the blog entirely. Which may not seem like much, but I've seen the blog become a pretty big obstacle for some individuals and companies looking to get a handle on their digital content.

The second most challenging aspect of operating WordPress is security. Managing a dynamically driven CMS that is so ubiquitous, and ultimately a huge target for bad actors, is daunting for even seasoned IT people, and can be damaging for any unaware individual just trying to manage their website. I stopped running API Evangelist on WordPress because I couldn't keep up with the security concerns, and operating my public presence as a series of statically published websites has done wonders for the security of my platform. I just do not feel that WordPress is sustainable as a CMS for individuals and companies who do not have the resources to properly manage and secure it. There are many flavors of headless CMS; I'm looking to push the more static flavor I've been seeing on the landscape.

I'm hopeful that the concept of headless coupled with a static CMS can be a proper gateway for individuals, companies, organizations, institutions, and government agencies who want to get a handle on a specific layer of managing their data and content, to learn about APIs. I find that most people aren't interested in learning about APIs, they are interested in delivering API-driven solutions. Once they get a taste of this, they want to learn more. I feel like the WordPress API is going to be a complicated introduction to the world of APIs for many, but a simple, static website implementation with a robust API backend provides a much more approachable view of what APIs can do. I'm going to keep an eye on everything headless, and keep scratching out stories to tell my audience about what they can do. I feel like these approaches are key to helping us evolve beyond the Web 2.0 world WordPress set into motion, and begin to give us more control over our backends using APIs, but in a way that helps us manage many different front-end applications, with CMS being the first stop for many individuals.


Twitter Finally Begins To Monetize Their APIs

It has been a long time coming, but Twitter has finally started charging for premium access to their APIs. Until now, you could only access data via the free Twitter API with limitations, or pay to use Gnip at the enterprise level–nothing in between. I’ve long complained about how Twitter restricts access to our tweets, as well as the lack of transparency and honesty around their business model. I’ve complained so much about it, I eventually stopped writing about it, and I never thought I’d see the day where Twitter starts charging for access to their platform.

While I have concerns about Twitter further limiting access to our data by charging for API access, their initial release has some positive signs that give me hope they are monetizing things in a sensible way. They seem to be focused on monetizing some of the key pain points around Twitter API consumption, like being able to get more access to historical data, more Tweets per request, higher rate limits, a counts endpoint that returns time-series counts of Tweets, more complex queries, and metadata enrichments such as expanded URLs and improved profile geo information. Twitter seems to be thinking about the primary pain we all experience at the lower rungs of Twitter access, and focusing on making the platform more scalable for companies of all shapes and sizes–which has been core to my complaints.

Twitter even provides a quote from a client which highlights what I’ve been complaining about for some time about the inequity in Twitter API access:

I wish these premium APIs were available during our first few years. As we grew, we quickly ran into data limitations that prevented expansion. Ultimately, we raised a round of funding in order to accelerate our growth with the enterprise APIs. With the premium APIs, we could have bootstrapped our business longer and scaled more efficiently. - Madeline Parra, CEO and Co-Founder, Twizoo (now part of Skyscanner) @TwizooSocial

This demonstrates for me how venture capital, and the belief systems around it, railroad folks down a specific path, while being blind to those of us who do not choose the path of venture capital. Twitter's new pricing page starts things off at $149.00 / month, after the current free public tier of access, and climbs up five tiers to $2,499.00 / month, giving you more access with each access tier you climb. While not a perfect spacing of pricing tiers, and something that still might be difficult for some startups to climb, it is much better than what was there before, or should I say, what wasn't there. At least you can scale your access now, in a sensible, above the board, vertical way, and not horizontally with separate accounts. This incentivizes the more positive behavior Twitter should want to see via their API.

Twitter should have started doing this back in 2008 and 2009 to help throttle bad behavior on their platform. The platform would look very different today if there had been a level playing field in the API ecosystem, and developers weren’t forced to scale horizontally. API monetization and planning isn’t just about generating revenue to keep a platform up and running serving everyone. Sensible API monetization is about being aware of, and intimately understanding platform behavior and charging to restrict, shape, and incentivize the behavior you want to see on your platform. Twitter has missed out on a huge opportunity to truly understand the API community at this level, as well as generate much needed revenue over the years. Shutting out many good actors, incentivizing bad actors, and really only recognizing trusted partners, while being blind to everything else.

After a decade of complaining about Twitter's practices in their ecosystem, and their clear lack of a business model around their API, I am happy to see movement in this area. While there is a HUGE amount of work ahead of them, I feel like monetization of the API ecosystem, and the crafting of a sensible API plan framework, is essential to the health and viability of the platform. It is how they will begin to get a hold on the automation that plagues the platform, and begin de-platforming the illness that has become synonymous with Twitter during the 2016 election, while also leveling the playing field for many of us bootstrapped startups who are trying to do interesting things with the Twitter API. I'll keep an eye on the Twitter premium APIs, and see where things head, but for now I'm cautiously optimistic.


The SEO Benefits Of Publishing Your API Operations To Github

I've been operating 100% of my public presence for API Evangelist on Github for almost five years now. I really like the public layer of my world being static, but I also like the modularity that using Github repos for my projects has injected into my workflow. API Evangelist runs as almost 100 separate Github repositories, all using a common Jekyll template for the UI, making it look like you are always on the same API Evangelist website. Any website, application, data, or API begins as a Github repository in my world, and grows from there depending on how much energy I give a project during my daily work.

When I first started doing all of this, I worried a little bit about the search engine optimization of my public websites. From what I could tell in 2013, my sites ranked lower after the switch, but since I’m not in this for the numbers game, I shrugged it off. However, in 2017 the numbers look different, and some of the projects I’ve been cultivating on Github actually rank pretty high, even with minimal optimization on my part. This isn’t just the web front-end for my projects–I am also seeing the Github repositories themselves showing up pretty prominently in search engine results.

All of this is anecdotal. I haven't done any official measurements or testing on the topic, as I just don't care enough to invest that amount of work in it all. I just have to note that in the last year I am seeing significant SEO benefit from running things on Github. When you bundle this with the search and discovery opportunities via Github, running an API project on Github as much as possible makes sense. It is something I encourage all of my clients who are operating public APIs to consider as part of their overall marketing, communications, and evangelism strategy.

I’ve been profiling all the possible ways that an API provider can use Github as part of operations for a number of years, but I think I will be reassessing all of this in light of the SEO benefits. Exploring the ways that you can increase the exposure of your public API operations using Github. I don’t think my usage of Github is the only reason behind my SEO domination when it comes to the world of APIs, but I am beginning to think it is playing a bigger role than I had expected. I’m publishing a new API for a client right now, where they have given me full control over publishing to the Github ecosystem, which is a perfect opportunity to rethink all of this, and begin to think a little more constructively about the SEO benefits of using Github for API operations.


Glitch Is Where You Will Learn The Essential Human Side Of Operating Your API

The biggest deficiency I see in the world of APIs is an inability to understand the human side of what we are all doing. The space is dominated by men, and by people who have an understanding of, and deep belief in, technology over humans. The biggest problems APIs face across their life cycle are humans, and increasingly one of the biggest threats to humans is an API (i.e. Twitter API automation & harassment, IoT device exploitation, Facebook advertising, etc.). APIs encounter human friction because their creators didn't anticipate the human portion of the equation, and APIs often get used against humans because their creators again didn't anticipate human nature, and how people might use their technology for doing harmful things.

I rarely see folks in the API sector focusing on the human side of the equation, but I am pleasantly surprised to see a constant drumbeat coming out of Glitch, “the friendly community where you’ll build the app of your dreams.” Glitch is a platform where API consumers can remix apps that use APIs, and API providers can engage with API consumers who are building and remixing interesting things. Glitch has been on my list to write about for a while, and is something I’ll be using and focusing more time on in future posts, but I wanted to highlight how much attention is spent on the human side of the API world over at Glitch.

Take a look at the articles coming out of the Glitch blog, like Dev Rel success requires an ongoing connection to a community of peers, and Dev Rel must be supported with ongoing investment in professional development–all part of the ongoing stories around a Developer Bill of Rights, which Glitch has been very vocal about, emphasizing the importance of the human aspects of doing APIs and building applications. Glitch is the first startup I’ve seen come along that is investing so much energy into discussing what really makes all of this actually work.

The core of Glitch is all about building apps, which is the same core objective of API providers. However, as you begin to spend time there, you begin to learn a lot more about developer relations (dev rel), and the focus on applications just becomes part of the conversation. They do a great job of identifying the human elements of building applications, and delivering meaningful things for not just humans, but humans at large organizations. There is talk of working with marketing and sales, and helping developers and API providers not forget about the little meaningful details that can make or break your API efforts. I’m going to spend some time building an app on Glitch, and remixing some of what is already there. I have learned a lot from their blog, and I am interested in learning more about what they are bringing to the community.


Could I Please Get An API Discovery Tool That Evaluates An OpenAPI Diff

I am increasingly tracking OpenAPI definitions published to Github by the leading API providers I keep an eye on. Platforms like Stripe, Box, and the New York Times are actively managing their OpenAPI definitions using Github, making them well suited for integration into their platform operations, API consumer scenarios, and even analyst systems like what I have going on as the API Evangelist.

Once I have an authoritative source for an OpenAPI, meaning a public URI for an OpenAPI that is actively being maintained by the API provider, I have a pretty valuable feed into the roadmap, as well as the change log, for an API. I feel like we are getting to the point where there are enough authoritative OpenAPIs that we can start using them as a machine readable notification and narrative tool for staying in tune with one or many APIs across the landscape. It would help us stay in tune with APIs in real-time, and give API providers an effective tool for communicating changes to the platform–we just need more OpenAPIs, and some new tooling to emerge.

I’m envisioning an OpenAPI client that regularly polls OpenAPIs and caches them. Anytime there is a change it does a diff, and isolates anything new. Think of an RSS reader, but for OpenAPIs, one that goes well beyond new entries and actually creates a narrative based upon the additions and changes. Tell me about the new paths added, and any new headers, parameters, or maybe how the schema has grown. Provide me insights on what has changed, and possibly what has been removed, or will be removed in future editions. As an API analyst, I’d like to have an OpenAPI-enabled approach to receiving push notifications when an API changes, with a short, concise summary of what has changed delivered to my inbox, via Twitter, or as a Github notification.
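
For illustration, here is a minimal sketch of what such a watcher might look like, assuming a hypothetical list of OpenAPI URLs to monitor. It polls each definition, caches the previous copy, and reports the paths that were added or removed since the last run:

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical list of authoritative OpenAPI URLs to watch.
WATCHED = [
    "https://example.com/openapi.json",
]

CACHE_DIR = Path("openapi-cache")
CACHE_DIR.mkdir(exist_ok=True)

def fetch(url):
    """Download and parse an OpenAPI definition."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def diff_paths(old, new):
    """Return the API paths added and removed between two definitions."""
    old_paths = set(old.get("paths", {}))
    new_paths = set(new.get("paths", {}))
    return sorted(new_paths - old_paths), sorted(old_paths - new_paths)

for url in WATCHED:
    cache_file = CACHE_DIR / (url.replace("://", "_").replace("/", "_") + ".json")
    current = fetch(url)
    if cache_file.exists():
        previous = json.loads(cache_file.read_text())
        added, removed = diff_paths(previous, current)
        for path in added:
            print(f"{url}: new path {path}")
        for path in removed:
            print(f"{url}: removed path {path}")
    cache_file.write_text(json.dumps(current))
```

A real version would go deeper, diffing parameters, headers, and schema, and pushing the summary out via email, Twitter, or Github notifications, but the core loop is just poll, cache, and compare.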

OpenAPI already provides API discovery features through the documentation it generates, and I’m increasingly using Github to find new APIs after providers publish their OpenAPIs there, but this type of API discovery and notification at the granular level would be something new. If there was such tooling out there, it would provide yet another incentive for API providers to publish and maintain an active, up to date OpenAPI definition. This is a concept I’d also like to see expanded to the API operational level using APIs.json, where we can receive notifications about changes to documentation, pricing, SDKs, and other critical aspects of API integration, beyond just the surface area of the API. All of this will take many years to unfold. It has taken over five years for a critical mass of OpenAPI definitions to emerge, and I suspect it will take another five to ten years for robust tooling to emerge at this level, which also depends on many more API definitions being available.


I Added A Simple Bulk API For My Human Services Data API

The core Human Services Data API allows for adding organizations, locations, services, and contacts one by one, using a single POST on the core API path for each available resource. However, if you want to add hundreds, or even thousands of records, it can quickly become cumbersome to submit each of the calls, so I wanted to introduce a simple Human Services Bulk API to help handle the adding of large quantities of data, on a one-time or recurring basis. I know there are job queuing solutions available out there, but my goal with this project is to focus on the API definition, as well as the backend system(s). For this round, I just want to get a simple baseline definition in place, with a simple API backend for orchestrating. I’ll update it to support AWS, and other queuing solutions, as part of the roadmap–further hammering out a consistent HSDA Bulk API.

The first dimension of this new HSDA Bulk API focuses on providing paths for POSTing large quantities of data across the core human service resources:

  • organizations/ - POST an array of complete organization JSON records.
  • locations/ - POST an array of complete location JSON records.
  • services/ - POST an array of complete service JSON records.
  • contacts/ - POST an array of complete contact JSON records.

You can submit as many records to each of these paths as you need (well, within reason), including the sub-resources for each object like physical addresses, phones, etc. When POSTed, each record doesn’t immediately go into the main HSDA database. Each entry is entered into a jobs system, which can be run on a schedule, based upon events, or maybe just wait until the middle of the night. The goal is to offload the bulk insert to a job system, which can spread things out over time, and minimize the negative impact on a resource-strapped human services database. The HSDA Bulk API runs as a separate microservice which can sit side by side with the core HSDA implementation, or possibly be scaled on separate infrastructure to allow for handling of expected loads.
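
To make this first dimension concrete, here is a rough sketch of what a bulk submission might look like from the consumer side. The base URL, key, and payload are all hypothetical placeholders, not part of the actual HSDA Bulk API definition:

```python
import requests

# Hypothetical HSDA Bulk API base URL and key; swap in your own implementation.
BASE_URL = "https://api.example.com/bulk"
API_KEY = "your-api-key"

organizations = [
    {"name": "Community Food Bank", "description": "Food assistance programs."},
    {"name": "Downtown Shelter", "description": "Emergency housing services."},
]

# POST the whole array in a single call; the backend queues each record as a
# job instead of writing it straight into the main HSDA database.
response = requests.post(
    f"{BASE_URL}/organizations/",
    json=organizations,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
print(response.json())  # e.g. a job identifier and a count of queued records
```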

The next dimension of this new HSDA Bulk API allows for importing HSDS datapackage.json files, bringing things back to basics with the Human Services Data Specification (HSDS). The JSON file contains a list of paths to individual HSDS CSV files, which are then processed as individual resources, with each record inserted as a job, to be run on a schedule, in response to an event, or using some other approach. This adds a more comprehensive approach to loading large datasets into any human services system using an HSDA API, while also continuing to smooth out the impact on the core system using jobs.
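
As a sketch of how the datapackage.json import could work behind the scenes, the script below walks the resources listed in a data package, reads each CSV file, and hands every row off to whatever job system is backing the bulk API. The enqueue function here is just a placeholder:

```python
import csv
import json
from pathlib import Path

def load_datapackage(package_path):
    """Read an HSDS datapackage.json and yield (resource name, row) pairs."""
    package_file = Path(package_path)
    package = json.loads(package_file.read_text())
    for resource in package.get("resources", []):
        csv_path = package_file.parent / resource["path"]
        with open(csv_path, newline="", encoding="utf-8") as handle:
            for row in csv.DictReader(handle):
                yield resource.get("name", csv_path.stem), row

def enqueue(resource, record):
    # Placeholder for the job system backing the HSDA Bulk API.
    print(f"queued {resource} record {record.get('id', '(no id)')}")

for resource, record in load_datapackage("datapackage.json"):
    enqueue(resource, record)
```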

With both of these dimensions, you can perform bulk uploads of data using the individual organizations, locations, services, and contacts paths, as well as a complete datapackage.json file. Next I will be hammering on my demo API a bunch more, to harden the code, and see if I’m missing any details. After that I’ll start looking at switching out my custom backend for an AWS API Gateway managed one, and possibly AWS SQS or another jobs solution. I’m trying to keep the HSDA Bulk API definition simple, but also allow for scaling up with other, more robust backend systems. For now, I’d call this edition of the HSDA Bulk API a decent v1.0 start for this new HSDA microservice.


I Added A Taxonomy API To Support The Human Services Data API (HSDA)

I have been organizing my Human Services Data API (HSDA) specification work into separate microservices, as part of version 1.0 of the API definition, so that cities and other organizations running 211 operations can pick and choose which aspects they want to run. One service I carved off during the move from version 1.0 to 1.1 of the specification was taxonomy, and how human services are categorized and organized. I saw there was more research to be done around 211 taxonomy, and I felt it had the potential to be a separate but supporting service to augment what Open Referral is already trying to do with the Human Services Data API (HSDA) specification.

The HSDA Taxonomy API specification provides a handful of API paths for creating, reading, updating, and deleting the taxonomy used in any HSDA implementation. I have populated my demo API with the Open Eligibility taxonomy to help jumpstart folks, but any HSDA provider can populate it with their own custom taxonomy, or another existing format. Then you can apply any taxonomy to any of the services stored within an HSDA database, and there is an API path for querying services by taxonomy. Next I’ll make sure you can search by taxonomy, and see the taxonomy as part of the response body for all services returned across HSDA, HSDA Search, and HSDA Taxonomy.
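
To give a feel for how this plays out for a consumer, here is a quick sketch of pulling taxonomy terms and then querying services by one of them. The paths and parameter names are illustrative, not the final HSDA Taxonomy specification:

```python
import requests

# Hypothetical HSDA demo endpoints; adjust to your own implementation.
BASE_URL = "https://api.example.com"

# List the available taxonomy terms.
terms = requests.get(f"{BASE_URL}/taxonomy/").json()

# Pull the services tagged with the first taxonomy term returned.
taxonomy_id = terms[0]["id"]
services = requests.get(
    f"{BASE_URL}/services/",
    params={"taxonomy_id": taxonomy_id},
).json()

for service in services:
    print(service["name"])
```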

The HSDA Taxonomy API is a simple service, but it is one that I want to get up and running, and maturing quickly. I feel like there is a lot of opportunity around 211 taxonomy aggregation, and further standardizing and evolving on top of Open Eligibility, or establishing a separate Open Referral taxonomy that can be open source. The current 211 taxonomy dominating the landscape is the AIRS 211 format, which is a proprietary taxonomy, something that I’ll address in a separate post. Having a shared vocabulary around human services is almost as important as the data itself–if you can’t find meaningful services, in a consistent way across providers, the data itself becomes much less valuable.

I have launched a demo copy of the API at my Human Services Demo API, and next I am working on a handful of UI elements for managing, browsing, and searching for services using the HSDA Taxonomy API. My goal is to get v1.0 of the HSDA Taxonomy specification discussed as part of the overall HSDA governance process, while I’m also hardening the API and UI tooling via a couple of HSDA implementations I’m working on. Then I’ll circle back in a couple of months and see where things are, learn more about some other taxonomies, and hopefully pull together more of a strategy around how to get people sharing 211 taxonomies, and speaking a common language about how they categorize human services, as well as store and provide access to human service organizations, locations, and services.


Stripe’s OpenAPI Is Available On Github In Version 3.0

I can’t write about every API provider who publishes their OpenAPI to Github–there are just too many. But I can write about the rockstar API providers who do, and showcase what they are doing, so I can help influence the API providers who have not started publishing their OpenAPIs in this way. If you are looking for a solid example of a leading API provider publishing their OpenAPI to Github, I recommend taking a look at the payment provider Stripe.

Their repository contains OpenAPI specifications for Stripe’s API, with multiple files available in the openapi/ directory:

  • spec3.{json,yaml} - OpenAPI 3.0 spec.
  • spec2.{json,yaml} - OpenAPI 2.0 spec. We’re continuing to generate this for now, but it will be deprecated in favor of spec3.
  • fixtures3.{json,yaml} - Test fixtures for resources in spec3. See below for more information.
  • fixtures2.{json,yaml} - Test fixtures for resources in spec2.

It is pretty exciting to see them already embracing version 3.0. They even provide a listing of the OpenAPI vendor extensions they are using, which are specific to their API. I’ll be adding these to my OpenAPI toolbox when I have the time, adding to the number of vendor extensions I have indexed. Stripe provides another pretty solid example of an API provider taking ownership of their OpenAPI spec, publishing it to Github for their consumers to put to work, while clearly also using it as part of their own internal workflows.
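
If you want to poke at their definition yourself, here is a small sketch that pulls spec3.json down and reports the version, the number of paths, and any top-level vendor extensions. It assumes the file is still published at this location in the stripe/openapi repository:

```python
import json
import urllib.request

# Assumes spec3.json is still published at this location in stripe/openapi.
SPEC_URL = "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json"

with urllib.request.urlopen(SPEC_URL) as response:
    spec = json.loads(response.read().decode("utf-8"))

print("OpenAPI version:", spec.get("openapi"))
print("Number of paths:", len(spec.get("paths", {})))

# Surface any top-level vendor extensions (keys prefixed with "x-").
extensions = [key for key in spec if key.startswith("x-")]
print("Top-level vendor extensions:", extensions or "none")
```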

Every API provider should have a Github repository with an up to date OpenAPI like Stripe does. I know many API architects envision a hypermedia API discovery landscape, where APIs are defined and discoverable by default, but I think an OpenAPI on Github is the best we can hope for at this stage in the evolution of the space. With the momentum I’m seeing in the number of API providers publishing their OpenAPIs to Github, I’m feeling like Github is going to become the continuous integration and API discovery engine we’ve all been looking for over the last decade, allowing us to discover, integrate, and orchestrate with our APIs across the API life cycle–we just need everyone to follow Stripe’s lead. ;-)


Locking Up Any Open Data Taxonomy Is Short Sighted In Today’s Online Environment

I published a taxonomy API as part of my Human Services Data API (HSDA) work recently, and as part of that work I wanted it to support a handful of the human services taxonomies currently available. The most supported taxonomy out there is the AIRS/211 LA County Taxonomy. It is a taxonomy in use by 211 of LA County, which also owns and licenses it. From what I gather, it is the most common format in use, and you can find licensing pages for it from other municipal 211 providers. Before you can download a copy of the taxonomy you have to agree to the license I’ve posted at the bottom of this post, something I was unwilling to do.

Taxonomies shouldn’t be locked up this way, let alone taxonomies for use in open data, helping citizens at the municipal level. I understand that 211 LA will argue that they’ve put a bunch of work into the schema, and therefore they want to protect what they view as their intellectual property, but in 2017 this is wrong. This isn’t the way things should be done, sorry. The AIRS taxonomy should be openly available and reusable in a machine readable format, and evolved by an open governance process. There is no reason why this valuable taxonomy, which has the potential to make our cities better, should be locked up like this–it needs to be widely used, and adopted without any legal friction along the way.

I understand that it takes work, and resources, to keep a taxonomy meaningful and usable, but we should not stand in the way of people finding human services, or restrict 211 providers from using the same vocabulary. There are other ways to generate revenue, and to evolve a taxonomy forward in an online, collaborative environment, much like we are currently doing with open source software. This kind of stuff drives me nuts, and the licensing around this important technology is something I’ll keep an eye on, contributing whatever I can to help stimulate the discussion in favor of open sourcing. In the absence of AIRS, I have adopted an open source 211 taxonomy called Open Eligibility, but alas it seems like an effort that has gone dormant. I have forked the specification, added simpler JSON and CSV formats, and will be working with it alongside the rest of my Human Services Data API (HSDA) taxonomy work.

Taxonomy licensing is another area of consideration I’ll add to my API licensing research, as well as to the guidance around my HSDA work. I wish this type of stuff didn’t still happen in 2017. It is a relic of another time, and in a digital age, taxonomies for any aspect of public infrastructure should be openly licensed, and reusable by everyone. I would like to see Open Referral expand its portfolio to push forward one or more taxonomies for not just human services, but also organizations, locations, and potentially other relevant schema we are pushing forward. I see Open Referral as an incubator for schema, OpenAPI definitions, as well as datasets like the 211 taxonomy, helping provide a commons where 211 organizations can find what they need.


TAXONOMY SUBSCRIPTION AGREEMENT

CAREFULLY READ THIS TAXONOMY SUBSCRIPTION AGREEMENT BEFORE DOWNLOADING, INSTALLING OR USING THE TAXONOMY (DEFINED BELOW). TAKING ANY STEP TO DOWNLOAD, INSTALL OR USE THE TAXONOMY IN ANY WAY CONSTITUTES YOUR ASSENT TO AND ACCEPTANCE OF THIS AGREEMENT AND IS A REPRESENTATION BY YOU THAT YOU HAVE THE AUTHORITY TO ASSENT TO AND ACCEPT THIS AGREEMENT. IF YOU DO NOT AGREE WITH THE TERMS AND CONDITIONS OF THIS AGREEMENT, YOU MUST NOT DOWNLOAD, INSTALL OR USE THE TAXONOMY AND YOU MUST IMMEDIATELY RETURN THE TAXONOMY (AND NOT KEEP ANY COPIES) TO 211 LA AND SO NOTIFY 211 LA OF SUCH FAILURE TO AGREE.

Taxonomy Subscription Agreement (“Agreement”) contains the terms and conditions by which Information and Referral Federation of Los Angeles, Inc., doing business as 211 of LA County (“211 LA”) provides a subscription license to use the Taxonomy. This Agreement is a binding legal contract between you (both the individual downloading, installing and/or using the Taxonomy and, if applicable, the legal entity on behalf of which such individual is acting) (“Subscriber”) and 211 LA.

1.Definitions

Subscriber Database means a database of health and human services information and resources that is created and maintained by Subscriber.

Subscriber Directory means a printed directory or electronic read-only directory of health and human services that is created and maintained by Subscriber using a Subscriber Database. As used in this definition, “read-only” means an electronic version of a directory in which the information in such directory, including the Taxonomy, cannot be modified or altered in any way.

Subscription Order Form means the form(s) used by 211 LA for allowing subscribers to purchase or renew subscription licenses of the Taxonomy.

Taxonomy means the Taxonomy of Human Services maintained and made generally available by 211 LA for purposes of indexing health and human services information and resources, including its terms, definitions, codes and references and any and all updates, upgrades, enhancements or other modifications that may be made available by 211 LA to Subscriber from time to time.

Taxonomy Website means a website hosted by 211 LA through which subscription licenses for the Taxonomy can be purchased and the Taxonomy can be downloaded.

2.Taxonomy License.

Limited License. Subject to Subscriber’s compliance with the terms and conditions of this Agreement (including, without limitation, Sections 2.2, 2.3, 3 and 4), 211 LA hereby grants to Subscriber a limited, non-exclusive, non-transferable, non-sublicensable license to:

access portions of the Taxonomy Website designated by 211 LA for subscribers and download the Taxonomy as made generally available thereon;

use the Taxonomy, including its codes, terms, definitions, and references, as a classification structure for indexing health and human services information and resources in a Subscriber Database;

include Taxonomy terms in a survey instrument prepared by Subscriber as reasonably necessary to collect information from third party organizations regarding health and human services, provided that such survey instrument may only include the Taxonomy terms that Subscriber has used to index health and human services information and resources in the Subscriber Database;

include Taxonomy terms as an index in a Subscriber Directory distributed or otherwise made available to third parties (including over the Internet), provided that (i) the proceeds of any monetary or other consideration provided in connection with such distribution are provided to a non-profit, charitable organization and (ii) any such Subscriber Directory may only include the Taxonomy terms that Subscriber has used to index health and human services information and resources in the Subscriber Database, along with any higher level terms on the same branch that are needed to display the hierarchical structure; and

include the Taxonomy definitions in the Subscriber Directory referenced in Section 2.1(d) above, provided that such Subscriber Directory may include only those definitions for terms that Subscriber has used to index health and human services information and resources in the Subscriber Database.

License Restrictions. Nothing contained in this Agreement will be construed as conferring upon Subscriber, by implication, operation of law or otherwise, any license or other rights except as expressly set forth in Section 2.1. Subscriber shall not, and shall not allow any third party to: (i) copy, display or otherwise use all or any portion of the Taxonomy except as incorporated into a Subscriber Directory as permitted in Section 2.1; (ii) transmit or otherwise distribute the Taxonomy (or any portion of the Taxonomy) as a separate product, module or material to any end user or other third party, or make the Taxonomy (or any portion of the Taxonomy) available to any end user or other third party as a downloadable file separate from a Subscriber Directory; (iii) transmit or otherwise distribute any portion of the Taxonomy for commercial purposes or otherwise for profit or other monetary gain; (iv) incorporate the Taxonomy (or any portion of the Taxonomy) into any database or software through which the Taxonomy (or any portion of the Taxonomy) can be printed, downloaded or modified; or (v) loan, lease, resell, sell, offer for sale, sublicense, adapt, translate, create derivative works of or otherwise modify or alter all or any part of the Taxonomy. Further, Subscriber acknowledges and agrees that use of the Taxonomy Website is also subject to 211 LA’s Terms of Use, as made available thereon and updated from time to time.

Notice.

If the Taxonomy, or any potion of the Taxonomy (including its terms, codes, definitions or references), are utilized in a Subscriber Directory that is published, transmitted, distributed or otherwise made available to third parties, by any means or medium, the Subscriber shall prominently display the following notice in such directory:

Copyright © 1983-2007, Information and Referral Federal of Los Angeles County, Inc. All rights reserved. The index, codes and definitions of the terms contained herein are the intellectual property of Information and Referral Federal of Los Angeles, Inc. and are protected by copyright and other intellectual property laws. No part of this listing of human services terms and definitions may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electrical, mechanical, photocopying, recording or otherwise without the prior written permission of the Information and Referral Federal of Los Angeles County, Inc.

If the Taxonomy is displayed or otherwise made available in a Subscriber Directory via the Internet, Subscriber shall prominently include a link to a copyright acknowledgement statement that 211 LA maintains online, currently located at: http://www.211la.org/Content.asp?Content=Taxonomy&SubContent=Copyright.

3.Intellectual Property Ownership.

Ownership of the Taxonomy. As between 211 LA and Subscriber, 211 LA retains and shall own all right, title and interest in and to the Taxonomy and any derivative works or other modifications thereof, including, without limitation, all copyright, trademark, trade secret and other intellectual rights, subject only to the limited license set forth herein. Subscriber does not acquire any other rights, express or implied, in the Taxonomy. Subscriber hereby assigns, and agrees to assign, to 211 LA all right, title and interest (including all intellectual property rights) throughout the world that Subscriber has or may have in the Taxonomy (including with respect to any modifications suggested by, or other contributions made by, Subscriber), which assignment shall be deemed effective as to any future modifications or contributions immediately upon the creation thereof. Subscriber further irrevocably waives any “moral rights” or other rights with respect to attribution of authorship or integrity of any modifications suggested by, or other contributions made by, Subscriber under any applicable law under any legal theory.

Access and Security.

Subscriber is solely responsible for providing, installing and maintaining at Subscriber’s own expense all equipment, facilities and services necessary to access and use the Taxonomy, including, without limitation, all computer hardware and software, modems, telephone service and Internet access.

Subscriber may be issued or otherwise assigned a user identification and/or password (collectively, “User Identifications “) to access the Taxonomy and/or Taxonomy Website as permitted hereunder. Subscriber is solely responsible for tracking all use of the User Identifications and for ensuring the security and confidentiality of all User Identifications. Subscriber acknowledges that Subscriber is fully responsible for all liabilities incurred through the use of any User Identification and that any download, transmission or transaction under a User Identification will be deemed to have been performed by Subscriber.

Subscriber shall ensure that each of its employees complies with this Agreement, including, without limitation, the license restrictions in Section 2.2 and shall protect the Taxonomy from any use that is not permitted under this Agreement. Subscriber shall promptly notify 211 LA of any unauthorized copying, display, modification, transmission, distribution, or use of the Taxonomy of which it becomes aware.

211 LA reserves the right at any time and without prior notice to Subscriber to change the hours of operation of the Taxonomy Website or to limit Subscriber’s access to the Taxonomy (i) in order to perform repairs or to make updates, upgrades, enhancements or other modifications or (ii) in response to unforeseen circumstances or circumstances beyond 211 LA’s reasonable control. 211 LA may add or withdraw elements of to or from the Taxonomy and/or Taxonomy Website from time to time in its sole discretion, although Subscriber acknowledges and agrees that 211 LA has no obligation to maintain or provide any updates, upgrades, enhancements, or other modifications to the Taxonomy or Taxonomy Website.

Verification. 211 LA may, during the term of this Agreement and with seven (7) days prior notice, request and gain access to Subscriber’s premises for the limited purpose of conducting an inspection to determine and verify that Subscriber is in compliance with the terms and conditions hereof. Subscriber shall promptly grant such access and cooperate with 211 LA in the inspection; provided, however, the inspection shall be conducted in a manner not intended to disrupt unreasonably Subscriber’s business and shall be restricted in scope, manner and duration to that reasonably necessary to achieve its purpose.

4.Payment

Payment. In consideration for the subscription license granted under Section 2.1, Subscriber shall pay the applicable fees as set forth in the Subscription Order Form and/or Taxonomy Website. All fees are nonrefundable. Any information that you may provide in connection with obtaining the subscription (including any nonprofit and/or AIRS membership information) may be verified and your subscription license may be placed on hold or terminated in the event of inaccuracies or discrepancies.

Taxes. In addition to all applicable fees, Subscriber shall pay all sales, use, personal property and other taxes resulting from this Agreement or any activities under this Agreement, excluding taxes based on 211 LA’s net income, unless Subscriber furnishes proof of exemption from payment of such taxes in a form reasonably acceptable to 211 LA. Further, If Subscriber is required by law to deduct or withhold any taxes, levies, imposts, fees, assessments, deductions or charges from or in respect of any amounts payable hereunder, (a) Subscriber shall pay the relevant taxation authority the minimum amounts necessary to comply with the applicable law, (b) Subscriber shall make such payment prior to the date on which interest or penalty is attached thereto, and (c) the amounts payable hereunder shall be increased as may be necessary so that after Subscriber makes all required deductions or withholdings, 211 LA shall receive amounts equal to the amounts it would have received had no such deductions or withholdings been required.

Discounts. From time to time, and in 211 LA’s sole discretion, 211 LA may offer discounts to particular organizations (such as nonprofits, government agencies and members of the Alliance of Information and Referral Systems (AIRS)). If such a discount is offered to Subscriber, Subscriber may be required to submit proof of nonprofit status (such as a federal EIN number), a valid AIRS membership number and/or additional information in order to receive such discounts.

Delivery. The Taxonomy is only made available electronically via download from the Taxonomy Website and, unless otherwise agreed by 211 LA in writing on a case-by-case basis, will not be delivered in any other form or via in any other method.

5.Term and Termination

Term. Subscriber’s rights with respect to the Taxonomy will commence on the date full payment of license fees are received and approved by 211 LA and will continue for an initial period of one (1) year, at which point Subscriber’s rights and this Agreement shall expire. If available, Subscriber may renew its subscription license via the Taxonomy Website, which renewal will be subject to and governed by 211 LA’s then-current fees and then-current terms and conditions (which may include a new or updated Taxonomy Subscription Agreement).

Termination of Agreement. Subscriber may terminate this Agreement at any time by sending an email message addressed to taxonomy@infoline la.org, with the subject “Subscription Cancellation.” Further, if Subscriber commits any breach of any provision of this Agreement, 211 LA will have the right to terminate this Agreement (including the rights granted to Subscriber under 2.1) by written notice, unless Subscriber remedies such breach to 211 LA’s reasonable satisfaction within thirty (30) calendar days after receiving written notice from 211 LA.

Effect of Termination.

Upon termination or expiration, and except as expressly provided in Section 5.3(b), the rights and license granted to Subscriber hereunder shall immediately cease and Subscriber will immediately cease all use of the Taxonomy, will destroy all copies of the Taxonomy and will promptly certify such action to 211 LA in writing. Without limitation of the foregoing, 211 LA may immediately terminate Subscriber’s account and ability to access the Taxonomy via the Taxonomy Website upon any expiration or termination of this Agreement. Expiration or termination of this Agreement will not limit either party from pursuing other remedies available to it, including injunctive relief.

The parties’ rights and obligations under Sections 2.2, 3, 4, 5.3, 6, and 7 will survive expiration or termination of this Agreement. Further, unless this Agreement has been terminated by 211 LA for breach, the rights granted to Subscriber under Sections 2.1(b) through (e), as well as the rights and obligations set forth in Section 2.3, shall survive and continue following any expiration or termination of this Agreement, but only with respect to the most recent version of the Taxonomy downloaded by Subscriber from the Taxonomy Website as of the date of expiration or termination and provided that Subscriber’s rights shall continue to be subject to termination by 211 LA under Section 5.2.

6.Disclaimer of Warranty and Limitation of Liability.

Disclaimer of Warranty. 211 LA does not represent that the Taxonomy will meet any expectations or specifications of Subscriber. THE TAXONOMY AND ANY OTHER INFORMATION, PRODUCTS OR SERVICES PROVIDED BY 211 LA TO SUBSCRIBER ARE PROVIDED “AS IS,” WITHOUT WARRANTY OF ANY KIND. 211 LA HEREBY DISCLAIMS ANY AND ALL WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING, WITHOUT LIMITATION, THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, ACCURACY, TITLE AND NONINFRINGEMENT, AND ALL WARRANTIES THAT MAY ARISE FROM COURSE OF PERFORMANCE, COURSE OF DEALING OR USAGE OF TRADE.

Limitation of Liability. TO THE EXTENT PERMITTED BY APPLICABLE LAW: (I) IN NO EVENT WILL 211 LA BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR PUNITIVE DAMAGES, OR DAMAGES FOR LOSS OF PROFITS, REVENUE, BUSINESS, SAVINGS, DATA, USE OR COST OF SUBSTITUTE PROCUREMENT, INCURRED BY SUBSCRIBER OR ANY THIRD PARTY, WHETHER IN AN ACTION IN CONTRACT OR TORT, EVEN IF THE SUBSCRIBER OR THE OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES OR SUCH DAMAGES ARE FORESEEABLE AND (II) 211 LA’S LIABILITY FOR DAMAGES HEREUNDER WILL IN NO EVENT EXCEED THE FEES RECEIVED BY 211 LA HEREUNDER. SUBSCRIBER ACKNOWLEDGES THAT THE LIMITATIONS OF LIABILITY IN THIS SECTION 6.2 AND IN THE OTHER PROVISIONS OF THIS AGREEMENT, AND THE ALLOCATION OF RISK HEREIN, ARE AN ESSENTIAL ELEMENT OF THE BARGAIN BETWEEN THE PARTIES, WITHOUT WHICH 211 LA WOULD NOT ENTER INTO THIS AGREEMENT.

7.Miscellaneous Provisions

No Assignment. Subscriber may not assign, sell, transfer, delegate or otherwise dispose of, whether voluntarily or involuntarily, by operation of law or otherwise, this Agreement, or any rights or obligations under this Agreement. Any purported assignment, transfer, or delegation by Subscriber will be null and void. Subject to the foregoing, this Agreement will be binding on the parties and their respective successors and assigns.

Relationship Between the Parties. The parties shall at all times be and remain independent contractors. Nothing in this Agreement creates a partnership, joint venture or agency relationship between the parties.

Governing Law. This Agreement is to be construed in accordance with and governed by the internal laws of the State of California (as permitted by Section 1646.5 of the California Civil Code (or any similar successor provision)) without giving effect to any choice of law rule that would cause the application of the laws of any jurisdiction other than the internal laws of the State of California to the rights and duties of the parties. In the event of any controversy, claim or dispute between the parties arising out of or relating to this agreement, such controversy, claim or dispute may be tried solely in a state or federal court located within the County of Los Angeles, California and the parties hereby irrevocably consent to the jurisdiction and venue of such courts.

Severability and Waiver. If any provision of this Agreement is held to be illegal, invalid, or otherwise unenforceable, such provision will be enforced to the extent possible consistent with the stated intention of the parties, or, if incapable of such enforcement, will be deemed to be severed and deleted from this Agreement, while the remainder of this Agreement will continue in full force and effect. The waiver by either party of any default or breach of this Agreement will not constitute a waiver of any other or subsequent default or breach.

Headings. The headings used in this Agreement are for convenience only and shall not be considered in construing or interpreting this Agreement.

No Third Party Beneficiaries. This Agreement is made and entered into for the sole protection and benefit of the parties hereto, and is not intended to convey any rights or benefits to any third Party, nor will this Agreement be interpreted to convey any rights or benefits to any person except the parties hereto.

Entire Agreement. This Agreement, along with the Terms of Use made available on the Taxonomy Website, constitutes the complete agreement between the parties and supersedes any prior or contemporaneous agreements or representations, whether written or oral, concerning the subject matter of this Agreement. This Agreement may be changed by 211LA from time to time immediately upon notice to Subscriber (which may be achieved by posting an updated copy of this Agreement on the Taxonomy Website) or by written agreement of the parties. Continued use of the subscriber portions of the Taxonomy Website following any change constitutes acceptance of the change.


I Finally Have A Weekly Email Newsletter Roundup Of API Evangelist Posts

I’ve had people asking me for an email newsletter containing everything I’ve done over the week for quite a while now, and I finally got around to doing it. I’m now using MailChimp to pull in the last 20 API Evangelist blog posts and send them out as a digest each Monday morning, providing a summary of everything I wrote the previous week.

I’m thankful for services like MailChimp which help me get up and running with things like a newsletter quickly. I can then scale it over time, and even use their API if I want. Without service providers like this I’d never have things like a newsletter. Here is the email newsletter sign up form from MailChimp, which you can always find at the bottom of the home page for API Evangelist.

I am not a big email person, as many of you know. However, I understand that many of you are, and I’m seeing a resurgence of readership via email newsletter over using RSS. While I still love me some RSS, I understand that many folks have moved on, often filling the void with their inboxes. After some high profile folks asked me for an email digest, because they were too busy to always remember to tune in, I had to make it happen.

Thanks for tuning in. If there is anything else you’d like to see in the weekly round up–let me know. I’m trying to keep it basic for now, but I am thinking about adding some additional thoughts, or other aspects of my API industry research in there–like links to my guides, tools, and services. Right now, I’m giving a shout out to all my sponsors in the footer, but as the list grows it might be another place where I entertain sponsorship of this crazy train I call API Evangelist. Thanks for all your support.


I Can Keep Evangelizing The Same API Stories For The Next Decade In Government

I spoke on a panel at the Red Hat, Fed Scoop Government Symposium in Washington D.C. yesterday. I had some great conversations with technology vendors, as well as government agencies, about everything API. I enjoy being outside the Silicon Valley echo chamber when it comes to technology, because I enjoy helping folks understand the basics of what is going on with APIs, over getting too excited about the latest wave of new technology, and a constant need to be moving forward before ever getting a handle on the problems on the table.

It can be hard to repeat some of the same stories I’ve been telling for the last seven years while in these circles, but honestly the process helps me refine what I’m saying, and continue to actively think through the sustained relevancy of the stories I’ve been telling. After this round of discussions in D.C. I feel there are some themes in my work I can keep refining, crafting stories for sharing in the government space.

  • Open - I know it’s a tired term, but learning to be more open with other agencies, partners, and the public is an essential component of doing APIs in the federal government.
  • Documentation - Do not reinvent the wheel with documentation, and leverage OpenAPI to help you keep documentation usable, up to date, and valuable to developers using existing open source API documentation solutions.
  • Support - Provide email, office hours, Twitter, ticketing, Github issues, and other common support building blocks for API consumers, making sure people know they can get help when they need it.
  • Communication - Talk to your consumers. Have a blog, Twitter account, and other social channels for communicating internally, with partners, and publicly with API consumers.
  • Experiment - See your APIs as an R&D extension of an agency, and allow for experimentation with APIs, as well as the consumption of the APIs. Think about sandboxes, data virtualization, and other ways of minimizing agency risk.
  • Education - Make sure you are reaching out, educating, and providing training for all API stakeholders, ensuring that everyone is up to speed, and making no assumptions about what people know, or do not know.

None of this is technical. This is all basic API knowledge that any business or technical API stakeholder can take ownership of. These are all things that I see kill API efforts in the public, as well as private sector. These are all things that IT and developer folks scoff at and feel are unnecessary, and business users do not always see as an essential part of technical implementations. These are all deficient elements present across the 100 developer portals, and the 500 APIs I keep an eye on across the federal government. They are common building blocks of API operations that I’ll keep beating a drum about on my blog, and in person at events I attend in D.C.

The API environment in D.C. would frustrate your average API developer, architect, and evangelist. I get frustrated at the speed of things, and having to say the same things over and over. However, I also understand the scope of what the federal government does, and the number of people we have to get up to speed on things. The number of APIs available is actively growing, but the consistency, usability, and effectiveness of APIs isn’t keeping pace. To keep things in balance we are going to need even more evangelism around operating government in an online environment, and how APIs can help provide access to data, content, and even algorithms across all branches of government in a safe and secure way.


Admitting There Is So Much I Do Not Understand Makes Me Better At APIs

One of the reasons I’m so good at APIs is because I embrace how little I know. This rolling realization keeps my appetite whetted when it comes to learning new things, and working hard to discover, and realize, sensible API practices. I am comfortable with the fact that I do not know something. I enjoy coming up against things I do not understand, eager to learn more. However, I think one big difference in the way I approach technology, compared to other developers, is that I’m not confident that I will ever be able to fully understand a particular domain, let alone think that technology, or specifically APIs, are a solution to a specific set of problems within every domain.

Many developers are overly confident in what they know. They are also overly confident in their ability to learn new things. They are also overly confident that they can hammer out a technological solution that will solve all problems within a domain. I feel like many technologists aren’t in the game to learn, they are in the game to prove they have the chops to solve problems, and when they can’t they just walk away. When you approach APIs in this way you are leaving a lot of opportunity for learning and growth on the table. APIs shouldn’t be seen as simply a solution. APIs are just a tool (like the web) in a business toolbox, that should be applied when appropriate, and not applied when it doesn’t make sense.

Beyond developers, I feel like many business users are scared off by the uncertainty in the world of APIs. They don’t thrive in an environment where there are so many possibilities, configurations, and ways to do things right and wrong. APIs give you more control over your data, content, and algorithms, allowing you to provide access to them in many ways, and reach across many client channels like the web, mobile, and other device or network implementations. I feel like many business users want this amount of control, but aren’t willing to invest the time to be able to make decisions in this environment, and own the responsibility surrounding so much uncertainty. Unlike developers, they may have the domain expertise, but aren’t willing to experiment, play around with, and bang their head on the different ways APIs can help deliver solutions.

I feel like developers tend to suck at admitting they don’t understand things, and have too much belief in our technological toolbox when it comes to filling in the gaps. I feel like business users just aren’t confident enough with technology to thrive in environments where you perpetually do not know everything. I feel like my ability to admit I do not know, and may never fully understand a specific domain–coupled with my insatiable appetite for learning–puts me in a good position to be studying APIs. I can dive into new domains without thinking I’m going to save or disrupt the world. I can develop API prototypes, and help define API specifications, fully aware that I may not get it right on the first release. I know this is ok. I know that APIs are a journey, and not a destination. I know that APIs shouldn’t always be done. I know that sometimes they are a bad idea. I respect that there are domain experts in specific industries that I should be learning from. Admitting there is so much I do not know helps me be better at what I do with APIs.


Are People Ready For An Online API-Driven World That Is Programmable?

I am struggling with helping some folks get beyond their API being just readable, and helping them understand the potential of having POST, PUT, and other writable aspects to their resources, making things much more programmable. My client has a firm grasp on the ability to GET data from their API and publish it on websites. They also have the concept that they can GET other data from other 3rd party APIs, and display it on their website alongside their own data. Where they are struggling is with the idea that they can also add new data to their API, and update existing data they are making available via their API, and ultimately their website as well.

This hurdle isn’t limited to any single project I’m working on. I find a number of people who seem to have a decent grasp on APIs in general, struggling with or completely avoiding conversations around making the data writeable. They are able to make the transition from web to API when it comes to retrieving data, but making the same jump when it comes to adding and updating data is proving to be more difficult. I think there will always be a cognitive load with jumping from read to write, as you have to think more about security, data quality, and other common concerns. However, beyond that, I’m trying to explore what might be the challenges people are facing. Many of the folks I’m working with are a bit shaky on their grasp of APIs, and aren’t too confident in sharing what they don’t understand.

As I do, I’ll put it out to the universe and ask my audience what they’ve seen. On the surface, I’d say that adding or updating data in a database online is tougher to wrap your head around without some context, and some of the affordances we enjoy in the browser. Adding a Tweet through a mobile application or website? No problem. POSTing a tweet through the API is a little tougher to envision. Updating a contact record in your administrative system? No problem. A PUT via the API to the /contact/ path doesn’t compute as quickly. API developers can quickly see these interfaces as programmable, but for the average business user, there is more dependence on the affordances provided in the browser, and we developers are taking the importance of these affordances for granted.
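
To make the jump concrete, here is what the read and write sides look like against a hypothetical contacts API. The paths and fields are purely illustrative:

```python
import requests

# Hypothetical contacts API; none of these paths or fields are real.
BASE_URL = "https://api.example.com"

# Reading data: the part most folks are already comfortable with.
contacts = requests.get(f"{BASE_URL}/contacts/").json()

# Adding data: POST a new record to the collection.
new_contact = {"name": "Jane Doe", "email": "jane@example.com"}
created = requests.post(f"{BASE_URL}/contacts/", json=new_contact).json()

# Updating data: PUT the changed record back to its own path.
created["email"] = "jane.doe@example.com"
requests.put(f"{BASE_URL}/contacts/{created['id']}", json=created)
```

The mechanics are nearly identical to the GET everyone is comfortable with, which is exactly why the hurdle seems to be conceptual rather than technical.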

Beyond that, I’m guessing there is a perceived lack of control. Anyone can add or update? How do we address quality control? IDK, just spitball’n here. The majority of APIs I come across are GET only, so I have to believe there are issues around control, and beliefs around ownership. I can’t believe all of these API providers don’t grasp the technical aspects of writeable APIs. It is the same in the federal government. ALL the web APIs are read only. We’ve started to see this ice break a little in recent years, but federal agencies are rightfully wary of the responsibility of letting the public write data to their databases. I almost feel like the Facebooks and Twitters of the world are too open to allowing folks to write, without being thoughtful around privacy, security, and data ownership or stewardship–just to get their hands on the data.

I am wondering if POST, PUT, and DELETE should be 101 concepts that I’m teaching to folks, or whether I should be starting out with just the safer, more straightforward reading of data, content, and algorithmic resources, then down the road introduce the ability to add, update, and delete information. I know the concept of DELETE freaks people out pretty quickly. IDK. I’m just doing my job, questioning things at all levels, and wondering if people are even ready for a programmable online world. I think SaaS has delivered a programmable world with a wealth of affordances that help onboard people with all of this, and we API people are failing at translating the significance of an API-driven world that is programmable to our business users.


You Can Lead A Horse To Water But You Cannot Make Them Drink–The API Edition

I have seven years of API research available at apievangelist.com. I regularly publish short form and long form versions of this information on my blogs on a weekly basis. I publish prototypes, demo websites and portals, and develop API training curriculum for use across a wide variety of industries. I regularly take versions of my API research, and rework, rebrand, and dial them in to speak to a specific company, organization, institution, agency, or industry. In many cases I make this information freely available, helping make sure it is available to those who need it. Despite all this work, many folks who are already doing APIs refuse to read, listen, and learn from what is already going on in the API space, and are doomed to repeat the mistakes many of us have already made and learned from in our API journeys.

Many folks don’t really understand my motivations and think I have some sort of agenda to sell them something, disrupt their current reality, or some other uninformed perspective–ultimately, not trusting what I’m putting out there. I guess they view the water as poisoned in some way. Others don’t feel they need it, either because they feel like they have all the answers, or because the problems haven’t become a reality in their worlds yet, so my solutions seem irrelevant. I find it tough to argue with someone about preventative care when it comes to their API operations, when they spend their days triaging bugs, problems, and legacy technology challenges. They are fire fighters, water isn’t for drinking!

A long standing example of this can be found in the hypermedia realm. No matter how much some very smart people, with a wealth of experience deploying and managing APIs, warn about the challenges of maintaining API SDKs and clients, some folks will never see it as a problem until they actually face it themselves. I can showcase endless numbers of healthy practices extracted from companies like AWS, Twitter, and Twilio for people to learn from, but many folks will never see their relevance until they directly experience the problem. Most people have trouble looking forward, as well as stepping outside their own API operations and looking at them side by side with other leading API pioneers. They are different. Special. Often times, people never even engage in these thought exercises at all. There just isn’t the room in their operations for thinking proactively.

I regularly get frustrated when my clients, or people I’m asked to speak with about healthy API practices, actively ignore, criticize, and dismiss what I’m sharing. I shouldn’t. I can’t force people to trust me. I can’t force people to drink from the water I’m providing. All I can do is plant the seed in their minds that there is water over here when they get thirsty. It seems to be a chronic condition across many industries that folks only drink when they are super thirsty, after they get dehydrated, rather than proactively drinking water regularly throughout the day. In the end, my energy is better spent doing what I do best–researching, then openly sharing what I’m learning across the API space from other people doing APIs well. My job will often involve leading a horse to water, but it does not ever involve forcing anyone to drink. I considered creating a blog called API Water Boarding, but it just didn’t seem like a good idea. ;-)


The Impact Of API Management On API Security

This is a story from my latest API Evangelist API security industry guide. My partner ElasticBeam has underwritten my API security research, allowing me to publish a formal PDF of my guide, providing business and technical users with a walk-through of the moving parts, tools, and companies doing interesting things with API security. When I publish each guide, I publish each story here on the blog, helping build awareness around my research–this is a short one on API management.

API management has done an amazing job of helping companies, organizations, institutions, and government agencies make their digital resources more available online in a secure way. It allows API providers to require developers to sign up, and obtain keys and tokens, which need to accompany all API calls. This, along with encryption by default, has gone a long way towards making data, content, and algorithms more accessible, while also being secure. However, many API providers have stopped here, and think their resources are secure, when in reality there is so much more work to be done.

Requiring all developers to obtain keys to access resources, and encrypting data in transit, are important parts of API security, but they are just one tool in the API security toolbox. Out of API management you also receive an enhanced set of logging, analysis, and reporting tools for understanding how developers are putting API resources to work. When done well, this pushes the API security conversation forward, allowing API providers to balance access with security, and be proactive when it comes to limiting access, or even shutting off access when there is abuse. The problem is that not all API providers are investing here, let alone going beyond what API management providers offer.
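
To show just how thin this baseline layer is, here is a minimal sketch of the key-and-logging portion of API management, written as a simple Flask hook with a hypothetical in-memory key store. Real API management platforms layer plans, rate limits, and reporting on top of this:

```python
import logging
from flask import Flask, request, abort

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# Hypothetical key store; a real platform keeps this in a database,
# along with plans, rate limits, and usage history.
VALID_KEYS = {"abc123": "partner-tier", "def456": "public-tier"}

@app.before_request
def require_api_key():
    key = request.headers.get("X-API-Key")
    if key not in VALID_KEYS:
        abort(401)
    # Log every call so access can be analyzed, reported on, and revoked.
    logging.info("key=%s tier=%s path=%s", key, VALID_KEYS[key], request.path)

@app.route("/data")
def data():
    return {"records": []}
```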

The awareness brought to the table by API management is valuable, but there are so many aspects of API operations at the web server, DNS, and other levels that are often left out of the API management conversation. I’ll be pushing API providers to look beyond just the API management layer, expanding API security awareness to every other stop along the API life cycle beyond just management.

You can download or purchase my API Evangelist API security industry guide over at my API security research, and if you want to point out any corrections, and share your thoughts on what is missing, feel free to submit a Github issue on the research project’s Github repository. I appreciate your support of my work, and depend on folks like you, and ElasticBeam to make this all work.


Using APIs To Manage My APIs

I’m going further down the AWS rabbit hole lately with my APIs. Historically my APIs ran on an AWS EC2 instance which leveraged Linux for the OS, Apache for the web server, and Slim for the RESTful framework of my APIs–all with an RDS MySQL backend. I’ve now evolved the EC2 instance to be spread across numerous AWS Lambda scripts, tied together into various stacks of APIs using AWS API Gateway. At first, I was hesitant to go further down the AWS rabbit hole, but the security benefits of AWS-driven solutions, as well as the API-driven aspects of operating my APIs, are slowly shifting my view of how I need to be managing my APIs.

AWS RDS, Lambda, and API Gateway all have APIs. I’ve been spending the week developing Lambda scripts that help me manage my APIs, using the AWS APIs behind these three services, leveraging them to set up, configure, deploy, manage, and test my APIs. I enjoy how APIs push me to think about my digital resources, and when my digital resources are APIs, the benefits begin to feel like API inception. I’m increasingly having APIs that do one thing and do it well when it comes to API operations, allowing me to distill down the building blocks of my API operations into a very workable world of API functionality.

I am now deploying, backing up, migrating, and working with the databases behind my APIs using APIs. I primarily use AWS RDS MySQL instances behind my APIs, but when I’m using AWS DynamoDB, I leverage AWS APIs even more, as DynamoDB lets you do adds, updates, queries, and deletes using the API, elevating beyond SQL to manage the contents of each data store. Whether it is RDS or DynamoDB, I’m using APIs to manage the operational side of each database behind the API, and I’m beginning to explore how to make my databases more ephemeral and scalable using AWS APIs, giving me more control over my database budgets.
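
As a sketch of what this looks like in practice, here is the kind of boto3 call I am talking about, snapshotting an RDS instance and writing a record to a DynamoDB table. The identifiers are hypothetical:

```python
import boto3

# Snapshot the RDS MySQL instance behind an API before making changes.
# The identifiers here are hypothetical; use your own instance names.
rds = boto3.client("rds")
rds.create_db_snapshot(
    DBSnapshotIdentifier="hsda-api-backup-20171201",
    DBInstanceIdentifier="hsda-api-db",
)

# With DynamoDB, the data itself is managed through the API as well.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("hsda-organizations")
table.put_item(Item={"id": "org-1", "name": "Community Food Bank"})
```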

Moving up the API stack, I’m deploying, optimizing, testing, and working with all my AWS Lambda scripts using the API. I’m working to organize, index, and manage all my Lambda scripts using AWS S3, bringing some order to the serverless madness I’m unleashing. This approach to using APIs to manage the code layer of my API operations is entirely new to me. I never had API access to the code I deployed with the Slim framework on my Linux servers. While there is significant overhead in architecting and setting up this new approach, the benefits down the road for deploying, updating, and managing the code that drives my APIs will be significant.
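
Here is a minimal sketch of what managing the code layer via the API looks like, pushing a new zipped build from S3 to an existing function (the bucket, key, and function names are placeholders):

    // deploy-function.js - update a Lambda function with a zipped build stored on S3
    const AWS = require('aws-sdk');
    const lambda = new AWS.Lambda({ region: 'us-east-1' });

    lambda.updateFunctionCode({
      FunctionName: 'items-get',           // placeholder function name
      S3Bucket: 'api-evangelist-lambda',   // placeholder bucket holding zipped builds
      S3Key: 'items-get.zip'
    }, (err, data) => {
      if (err) return console.log(err);
      console.log('deployed ' + data.FunctionName + ' version ' + data.Version);
    });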

Lastly, I’m crafting a whole suite of API Gateway scripts, allowing me to set up plans, issue and revoke keys, quantify API consumption, and drive a whole host of other API gateway functionality across all my APIs. I’m even to the point where I’m launching new APIs in the gateway using OpenAPI definitions, and now I am figuring out how I can wire each API resource and method to the appropriate Lambda function behind it. I’m still figuring this part out. This is where I find myself in my API driven journey to automate my API lifecycle using APIs. I’ve long had APIs for my API management, but only recently was I able to deploy new APIs that help me manage my APIs, beyond the base stack given to me by my API management provider.
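
To give a flavor of these scripts, here is a minimal sketch using the AWS SDK for JavaScript that creates a usage plan, issues a key, and attaches the key to the plan (the plan name, limits, and key name are all placeholders, not my actual setup):

    // provision-consumer.js - create a usage plan, issue a key, and attach it
    const AWS = require('aws-sdk');
    const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

    apigateway.createUsagePlan({
      name: 'starter',                               // placeholder plan name
      throttle: { rateLimit: 10, burstLimit: 20 },   // requests per second, and burst
      quota: { limit: 100000, period: 'MONTH' }      // monthly cap
    }, (err, plan) => {
      if (err) return console.log(err);
      apigateway.createApiKey({ name: 'new-consumer', enabled: true }, (err, key) => {
        if (err) return console.log(err);
        apigateway.createUsagePlanKey({
          usagePlanId: plan.id,
          keyId: key.id,
          keyType: 'API_KEY'
        }, (err, data) => console.log(err || data));
      });
    });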

Using APIs to manage APIs quickly becomes API inception in my brain. I quickly find myself looping infinitely through what is possible, but also discovering new ways to orchestrate my world, and save myself work. I thoroughly enjoy the concept of using my cloud provider’s APIs to manage my APIs, but I’m also finding it rewarding to design, deploy, and operate new APIs that contribute to the management of my APIs. This is where I think the whole API game is going to shift. When I’m not just deploying all my data, content, and algorithms using APIs, but also using APIs to manage these APIs, a newfound freedom emerges to program the digital world around me.


Learning To Play Nicely With Others Using APIs

This is a topic I talk about often, write about rarely, but experience on a regular basis doing APIs. It has to do with encounters I have with people in companies who want to do APIs, but do not know how to share and play nicely with other companies and people. For a variety of reasons these folks approach me to learn more about APIs, but are completely unaware of what it takes, and how it involves working with external actors. Not all of these APIs are public, but many of them involve engaging with individuals outside the corporate firewall, and these folks possess a heavy focus on the technical and business sides of doing APIs, but rarely ever consider the human and more political aspects of API engagements.

I find that people tend to have wildly unrealistic expectations of technology, and believe that APIs will magically connect them to some unlimited pool of developers, or seamlessly connect disparate organizations across vast distances. People come to me hoping I will be able to explain it all to them in a single conversation, or deliver it in a short white paper, a state of being that also makes individuals very susceptible to vendor promises. People want to believe that APIs will fix the problems they have introduced into their operations via technology, and effortlessly connect them with outside revenue streams and partnerships, unencumbered by the same human challenges they face within their firewalls.

Shortly after getting to know people it often becomes apparent that they possess a lot of baggage, legacy processes, and beliefs that they are often unaware of. Or, more appropriately, they are unaware that these legacy beliefs are not normal, and not something everybody faces, when they are usually pretty uniquely dysfunctional to a specific organization. People usually have hints that these problems aren’t normal, but just aren’t equipped to operate outside their regular sphere of influence, and within days or weeks of engagement, their baggage becomes more visible. They start expecting you to jump through the same hoops they do. Respond to constraints that exist in their world. See data, content, and algorithms through the same lens as their organization. At this point folks are usually unaware they’ve opened the API door on their world, and the light is shining in on their usually insulated environment–revealing quite a mess.

This is why you’ll hear me talk of “dating” folks when it comes to API engagements. Until you start opening, and leaving open, the door on an organization for multiple days at a time, you really can’t be sure what is happening behind those closed doors. I am also not always confident that someone can keep internal dysfunction from finding its way out. I don’t expect all organizations to be dysfunction free, I’m just looking for folks who are aware of their dysfunction, and are looking to actively control, rate limit, and minimize the external impacts, as well as being willing to work through some of this dysfunction as they engage with external actors. I have to also acknowledge that this game is a two-way street, and I also encounter API consumers who are unable to leave their baggage at home, and bring their seemingly natural internal dysfunction to the table when looking to put external resources to work across their operations. In general, folks aren’t always equipped to play nicely with others by default; it takes some learning and evolving, but this is why we do APIs. Right?

This is why I suggest people become API consumers before they become API providers, so that they can truly understand what it is like to be on both sides of the equation. API providers who haven’t experienced what it’s like to be on the receiving end of an external API rarely make for empathetic API providers. Our companies, organizations, institutions, and government agencies often insulate us from the outside world, which works in many positive ways, but as you are aware, it also works in some negative, and very damaging political ways. The trick with doing APIs is to slowly change this behavior, letting in outside ideas and resources without too much disruption, internally or externally. Then over time, learn to play nicely with others, and build relationships with other groups and individuals who might benefit our organizations, while we might also help bring value to their way of doing things as well.


The Open Web Application Security Project (OWASP) And API Security

This is a story from my latest API Evangelist API security industry guide. My partner ElasticBeam has underwritten my API security research, allowing me to publish a formal PDF of my guide, providing business and technical users with a walk-through of the moving parts, tools, and companies doing interesting things with API security. When I publish each guide, I also publish its stories here on the blog, helping build awareness around my research–this is a short one on OWASP.

The Open Web Application Security Project (OWASP) is a 501(c)(3) worldwide not-for-profit charitable organization focused on improving the security of software, with a mission to make software security visible, so that individuals and organizations are able to make informed decisions. OWASP is looking to provide impartial, practical information about application security (AppSec) to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on application security.

As the web API space has expanded, OWASP has expanded its focus to include the most common threats to APIs. OWASP has acknowledged the overlap between web applications and web APIs, and is quickly becoming a valuable source for API specific security knowledge, expanding beyond its web application roots, and providing one of the best places to find security related information and tooling you can apply throughout your API operations.

OWASP doesn’t endorse commercial services, and is a member driven organization, so you will find all the information they provide to be vendor neutral, and focused on the task at hand. You will find me regularly anchoring my API security work in what the OWASP community is doing, as security should always be a team effort. API security isn’t my primary focus as API Evangelist, but helping guide you to where you can find the latest information is what this guide is about.

OWASP is your source for unbiased API security information!

You can download or purchase my API Evangelist API security industry guide over at my API security research, and if you want to point out any corrections, or share your thoughts on what is missing, feel free to submit a Github issue on the research project’s Github repository. I appreciate your support of my work, and depend on folks like you, and ElasticBeam, to make this all work.


APIs And Other Ways Of Serving Up Machine Learning Models

As with most areas of the tech sector, behind the hype there are real world things going on, and machine learning is one area where I’ve been studying, learning, and playing with what is actually possible when it comes to APIs. I’ve been studying the approach across each of the major cloud platforms, including AWS, Azure, and Google, to push forward my understanding of the ML landscape. Recently the Google TensorFlow team released an interesting overview of how they are serving up TensorFlow models, making machine learning accessible across a wide variety of use cases. Not all of these are API specific, but I do think they should be considered equally as part of the wider machine learning (ML) application programming interface (API) delivery approach.

Over the past year and a half, with the help of our users and partners inside and outside of Google, TensorFlow Serving has advanced performance, best practices, and standards for ML delivery:

  • Out-of-the-box Optimized Serving and Customizability - We now offer a pre-built canonical serving binary, optimized for modern CPUs with AVX, so developers don’t need to assemble their own binary from our libraries unless they have exotic needs. At the same time, we added a registry-based framework, allowing our libraries to be used for custom (or even non-TensorFlow) serving scenarios.
  • Multi-model Serving - Going from one model to multiple concurrently-served models presents several performance obstacles. We serve multiple models smoothly by (1) loading in isolated thread pools to avoid incurring latency spikes on other models taking traffic; (2) accelerating initial loading of all models in parallel upon server start-up; (3) multi-model batch interleaving to multiplex hardware accelerators (GPUs/TPUs).
  • Standardized Model Format - We added SavedModel to TensorFlow 1.0, giving the community a single standard model format that works across training and serving.
  • Easy-to-use Inference APIs - We released easy-to-use APIs for common inference tasks (classification, regression) that we know work for a wide swathe of our applications. To support more advanced use-cases we support a lower-level tensor-based API (predict) and a new multi-inference API that enables multi-task modeling.

I like that easy-to-use APIs are on the list. I predict that ease of use, and a sensible business model, will prove to be just as important as the ML model itself. Beyond these core approaches, the TensorFlow team is also exploring two experimental areas of ML delivery:

  • Granular Batching - A key technique we employ to achieve high throughput on specialized hardware (GPUs and TPUs) is “batching”: processing multiple examples jointly for efficiency. We are developing technology and best practices to improve batching to: (a) enable batching to target just the GPU/TPU portion of the computation, for maximum efficiency; (b) enable batching within recursive neural networks, used to process sequence data e.g. text and event sequences. We are experimenting with batching arbitrary sub-graphs using the Batch/Unbatch op pair.
  • Distributed Model Serving - We are looking at model sharding techniques as a means of handling models that are too large to fit on one server node or sharing sub-models in a memory-efficient way. We recently launched a 1TB+ model in production with good results, and hope to open-source this capability soon.

I’m working to develop my own ML models, and actively using a variety of them available over at Algorithmia. I’m pushing forward a project to help me quantify and categorize the ML solutions that I come across, and when they are delivered using an API, I am generating an OpenAPI, and applying a set of common tags to help me quantify, categorize, and index all the ML APIs. While I am focused on the technology, business, and politics of ML APIs, I’m also interested in understanding how ML models are being delivered, so that I can keep pace with exactly how APIs can augment these evolving practices, making ML more accessible, as well as observable.

We are in the middle of the algorithmic evolution of the API space, where we move beyond just data and content APIs, and where ML, AI, and other algorithmic voodoo is becoming more mainstream. I feel pretty strongly that APIs are an important piece of this puzzle, helping us quantify and understand what ML algorithms do, or don’t do. No matter how the delivery model for ML evolves, whether it batches, distributes, or takes on any other mutation, there should be an API layer for accessing, auditing, and shining a light on what is going on. This is why I regularly keep an eye on what teams like TensorFlow are up to, so that I can speak intelligently to what is happening on this evolving ML front.


The API Evangelist API Security Industry Guide

This edition of my API security industry guide has been underwritten by ElasticBeam, who provides next generation API security, leveraging machine learning and behavioral analysis that works with the existing web and API management solutions you already have in place across your API operations.

I have been working on the guide resulting from my API security research for over a year now. Thanks to ElasticBeam I’ve finally gotten it out the door. As with all my industry guides, it is a work in progress, and something that will never be finished. I’ll keep taking what I’ve learned and publishing it as a PDF every couple of months, receiving edits and feedback from my readers and the wider community, then publishing again. I’m feeling like I’m finally finding my groove again with these guides, and there is no better time to be back on my game, especially when it comes to API security.

Security is the number one concern of companies, organizations, institutions, and government agencies considering investing more resources into their API infrastructure, as well as of those ramping up their existing efforts. At the same time it is also the most deficient area when it comes to investment in API infrastructure by existing API providers. Many groups are rushing along their API journey, deploying web, mobile, device, and other applications, but rarely stopping to properly secure things at each step along the way.

In 2016 I began investing more into the topic of API security, ramping up my research into how APIs were being secured, and how they weren’t. I’ve been tracking breaches and vulnerabilities, as well as the companies offering products and services that help API providers secure their APIs, and some of the open source tooling that is available. As I do with my approach to researching everything APIs, along the way I’m keeping notes on the common building blocks and other patterns that are contributing to the wider API conversation–in this case it is all about securing our APIs.

From my research into where things were at with API security in 2016, it is clear that one of the reasons things were deficient in this area was that API management had sucked much of the oxygen out of the conversation. Numerous API providers I talked with about API security thought it was all about making sure APIs were keyed up, applying OAuth, using encryption, and rate limiting their APIs. With that, API security was taken care of. Very few were actively scanning, testing, and looking through web server, DNS, and other logs for signs of security threats and incidents. A major contributing factor to this deficiency is that API providers are short on resources, which means API security is often under-invested in by a company, but beyond this, I think API management has been the biggest reason API security still lags behind in 2017.

Another thing I have noticed in my research is that many of the APIs being operated were in service of mobile applications, and many API providers were investing in mobile application security, and considered the APIs behind their apps secure as a result. APIs in service of mobile applications were living in the shadows, despite being easily reverse engineered using off the shelf proxy tools. This perception of what API security is has added a whole other dimension to why investment in this area is so far behind. Many companies feel like they don’t have public APIs when they are developing them in the service of mobile phones, despite using HTTP as the transport, and leveraging public DNS.

Even with all of the deficiencies in API security, I’m beginning to see forward motion in the conversation in 2017. I’m seeing new service providers emerge to help secure web APIs, addressing the specific threats API providers face. I’m seeing machine learning, behavioral analysis, and other new approaches to analyzing log files, and studying the surface area of our API infrastructure. There is still much work ahead, but there are signs of renewed conversations on the subject. The biggest challenge is going to be helping companies, institutions, organizations, and agencies understand that they should be investing significantly more resources into API security. That is the objective of this API security industry guide: to provide a simple walk through of the space, and introduce the services, tools, and common building blocks that are in use by successful providers.

If you find any mistakes in the guide, or would like to make suggestions, feel free to head over to the Github repository for my API security research and submit an issue. I depend on my audience to help me refine, polish, and keep these guides updated. Hopefully this is just the first of many updates to the API security industry guide, allowing me to keep in sync with the ever-changing security landscape. Thank you for your support!


I Appreciate This API Walk Through From Fannie Mae But Just Give Me The API!

I came across the new Desktop Underwriter (DU) API from Fannie Mae, which provides lenders with comprehensive credit risk assessment data that determines whether a loan meets Fannie Mae’s eligibility requirements. They have a slick new website for the project, with the tag line “building on certainty”, and a smooth HTML story to walk you through what the new DU API can do. While the API seems very exciting, and valuable, the whole production is missing one thing–the API!

I am sure you have to be a partner to get access to the API, but you can tell the whole thing is being led by people who have never actually used an API. Otherwise you would give us an API to actually use, and allow us to kick the tires. A hallmark of modern APIs is that you get to play with them. Marketing materials, and a sharp single page application website, aren’t enough. We need the documentation, and to be able to actually see what the request and response structure is, so that we can better understand the value being generated, and how we will be integrating with it. Without this, there isn’t any value. Of course, you don’t have to make the real API 100% public; you can always create API access tiers, and even deploy a sandboxed or virtualized version of the API and data for new users, protecting your valuable resources–just do not hide the API away from us, and make us consumers beg for access.

When you hide your APIs, you leave first impressions like you did with me. Wouldn’t it be better if my first impression was all about writing a story on how cool your API was, and how all my readers should be using it? Instead, I’m using you as a case study of how not to do APIs. There is no reason the Fannie Mae Desktop Underwriter (DU) API can’t be publicly available, allowing us analysts and developers to read the documentation, and kick the tires. Another thing I want to push back on is the use of acronyms. I had to Google what DU meant. I know this is ironic, because of API and all, but I work overtime to spell out application programming interface (API) for my (new) users on a regular basis. Please don’t assume all your API consumers will immediately know what DU is. Please help unpack your acronyms, using a glossary, or expanding them inline as you explain what you do. It will make on-boarding much easier for the edges of your target audience.

Anyways, I look forward to test driving the API someday. I looked for a good five minutes for how to onboard with the API, and eventually gave up. Maybe someone involved in the project will read this, and email me with more information about how I could test drive it, so that I can do a proper write up on what Fannie Mae is up to with APIs. I’m happy to see such large entities working to make their valuable resources available via APIs, but I’m feeling like you might not have done all your homework on what an API is. Luckily I’m here as the API Evangelist to help you understand the essence of why APIs work, beyond just the technology and acronym. I look forward to helping you along in your API journey, Fannie Mae.


Additional Call Pricing Info Are The Pressure Relief Valves For API Plans

I’ve complained about unfair API pricing tiers several times over the last couple of years, even declaring API access tiers irrelevant in a multi-API consumer world. Every time I write about this subject I get friends who push back on me that this is a requirement for them to generate revenue as a struggling startup, with no acknowledgement that their API consumers might also be struggling startups trying to scale consumption within these plans, only to hit a rung in the ladder they might not actually be able to reach. My goal in this storytelling isn’t to condemn API providers, but to make them aware of what things look like from the other side, and that their argument essentially pulls up the ladder after they’ve gotten theirs–leaving the rest of us at the bottom.

My complaint isn’t with startups crafting pricing tiers, and trying to make their revenue projections more predictable. My complaint is when the plans are priced too far apart and I can’t afford to move from one plan to the next. More importantly, my complaint is when the tier I can’t move beyond is rate limited with a cap on usage, and I can’t burst past my plan’s limits without scaling to the next access tier, which I cannot afford to reach. I understand putting hard caps on public or free tier plans, but when you are squarely in a paid access tier, you shouldn’t be shut down when you reach the ceiling. Sure, I might pay a premium for each additional call, but I shouldn’t be shut down and forced to move to the next higher access tier–which might be out of my monthly price range. I just can’t go from $49.95 to $499.95 in monthly payments as a small business, sorry.

The key element that needs to be present for me, even in situations where I cannot afford to jump to the next tier, is the ability to go beyond my plan’s usage, with clear pricing regarding what I will pay. I may not be able to jump from $49.95 to $499.95 in monthly payments, but I might be able to burst, and absorb the costs as I need. If my plan is capped, and I cannot burst and know what I will be charged for my usage (even if at a premium), it is a deal breaker for me. While I would prefer API providers do away with access tiers, and go with a straight pay-for-what-you-use model (like Amazon), I accept the reality that API access plans help startups better predict and communicate their revenue. As long as they have this relief valve for each plan, allowing me to stay in my plan, but consume resources beyond what my tier allows.

Having relief valves for plans won’t hurt your revenue forecasting. I would argue it will actually help your revenue (not forecasting) with bursts in revenue that you wouldn’t see with just a capped plan approach. If you have trouble seeing API access in this way, I would argue that you are primarily an API provider, building a business exclusively focused on an exit. If you can empathize, I would say you are focused on delivering a set of services that people need, and you are probably an active consumer of other services, broadening your empathy spectrum beyond just startup exit objectives. I honestly don’t want to mess with startups’ ability to generate revenue with this storytelling, I’m just trying to make the case for those of us startups on the API consumption side of the coin. Ok, I lied, I kind of do want to mess up the revenue strategy for startups who are exclusively focused on an exit, because when there isn’t a relief valve, you won’t find me signing up for one of your predictable plans in the first place.


When We Are Told That API Security Investments Will Affect Profitability

I was listening to Mark Zuckerberg talk about how security investments will affect the platform’s profitability on the Facebook earnings call this last week. This line of thinking sounds pretty consistent with what I’m hearing from other folks when it comes to why they haven’t been investing more into their API security. My challenge with this line of thought is that it only frames proactive security investment as a drag on profits, and does not speak to responsive security investment–meaning what you will spend after you’ve had a breach. From a leadership perspective this view of security just doesn’t do it for me, and I’d push back, and require that leadership consider what profitability will look like if we do not invest properly in security.

Viewing security in this way is common. It is also a short-sighted view of security, in the name of profits today, over the health of a platform down the road. It demonstrates that leadership is more focused on profits than on whatever the platform is actually supposed to be doing. I would add that I think this line of thinking reflects a perspective of leadership that is out of sync with the technical details of operating a platform, and the current threat landscape. I get that a company has to be profitable, and that it is the job of the CEO to represent the investors, but after Equifax, and the many other breaches, as well as what I’m seeing on the ground at companies I’m talking to, it is pretty clear that things are out of whack when it comes to overall security investment.

I work with a lot of folks who want to invest in API security more, but they just don’t have the resources. I’ve been in leadership roles where I’ve had my hands tied when it came to decisions around infrastructure to deliver on PCI and other compliance, as well as being able to hire security focused talent. This type of thinking about security practices tends to make investors and other leadership happy, but is corrosive to the actual health of operations. This stuff shouldn’t be about profits or security, it should be about doing what is needed for security, then making assessments regarding how that impacts the bottom line. Security shouldn’t be polarized like this, and it should reflect proactive, as well as responsive, costs and practices.

This isn’t a technology of API security story, this is a politics of API security story. This type of response and tone from leadership is something that the majority of my readers will experience when trying to grow their API security efforts. Investment in API security will continue to be a challenge for most companies, organizations, institutions, and government agencies in the coming years. As I do with other stops along the API lifecycle, I’m going to spend more time developing stories that push back on this kind of leadership narrative around security investment. My goal is to have a toolbox of examples to help educate the people making security investment decisions that investing in API security now will pay off later, and cost a lot less than investing in API security after the fact.


Hiding APIs In Plain Sight

I’m always surprised by how secretive folks are. I know that it is hard for many folks to be as transparent as I am with my work, but if you are doing public APIs, I have a basic level of expectation that you are going to be willing to talk and share stories publicly. I regularly have conversations with enterprise folks who are unwilling to talk about what they are doing on the record, or allow me to share stories about their PUBLIC API EFFORTS!!! I get the super secret internal stuff. I’ve worked for the government. I don’t have a problem keeping things private when they should be, but the secretive nature of companies around public API efforts keeps me shaking my head.

People within enterprise groups almost seem paranoid when it comes to people keeping an eye on what they are up to. I don’t doubt their competitors keep an eye on what they are doing, but thinking that people are watching every move, and everything that is published, and will be able to understand what is going on and connect the dots, gives observers way too much credit. I publish almost everything I do publicly by default on Github repositories, and my readers, clients, and other folks still have trouble finding what I am doing. You can Google API Evangelist plus any API topic and find what I’m working on each day, or you can use the Github search to look across my repositories, and I still have an inbox and social messaging full of requests for information.

My public by default stance has done amazing things for my search engine and social media presence. I don’t even have to optimize things, and I come up for almost every search. I get regular waves of connections from folks on topics ranging from student art to government, because of my work. The schema, API definitions, documentation, tests, and stories for any API I am working on are public from day one of the lifecycle. Until I bring it all together into a single landing page, or public URL, the chances someone will stumble across it, and leverage my work before I’m ready for it to be public, are next to zero. The upside is that when I’m ready to go public with a project, by the time I hit publish, and make it available via the primary channels for my network, things are well indexed, and easily found via the usual online channels.

I feel like much of the competitive edge that enterprise groups enjoy is simply because they are secretive. There really isn’t a thing there. No secret sauce. Just secret. I find that secrecy tends to hide nothing, or at least is hiding incompetency, or shady behavior. I was talking with a big data group the other day that was looking for a contractor skilled in their specific approach to doing APIs. I asked for a link to their public APIs, so I could assess this specific approach, and they declined, stating that things were private. Ok, so you want me to help find someone who knows about your API data thing, and is well versed in your API data thing, but I can’t find this API data thing publicly, and neither can anyone else? I’m sorry, this just doesn’t make sense. How can your API data thing ever really be a thing, if nobody knows about it? It just all seems silly to me.

Your API, its documentation, and other resources can be public, without access to your API being public. Even if someone can mimic your interface, they still don’t have all your data, content, and algorithmic solutions. You can vet every consumer of your API, and monitor what they are doing; this is API management 101. You can still protect your valuable digital assets while making them available for discovery and consideration by potential consumers. You can make controlled sandbox environments available for talent acquisition, building prototypes, crafting visualizations, doing analysis, and other ways businesses benefit from doing APIs. My advice to companies, institutions, organizations, and government agencies looking to be successful with APIs is to stop being so secretive, and start hiding everything you are doing with your public APIs out in plain sight.


Postman As A Live Coding Environment In Presentations At APIStrat

We just wrapped up the 8th edition of APIStrat in Portland, Oregon this last week. I’ll be working through my notes, and memory of the event in future posts, but one thing that stood out for me was the presence of Postman at the event. No, I’m not talking about their booth, and army of evangelists and company reps on site–although this was the first time I’ve seen them out in such force. I’m talking about the usage of the API development environment by presenters, as a live coding environment in their talks, replacing the command line and browser for how you demonstrate the magic of APIs to your audience.

On the first day of the conference I attended two separate workshops where Postman was the anchor for the talk. As the presenters worked their way through their slides they kept switching back to the Postman application to show some sort of real results, from an actual, or mocked, API. It is the new live coding environment for API evangelists, architects, designers, developers, and security folks. It is the quickest way to go from API concept to demonstrating API solutions in any presentation. What I also really like is that it transcends any single programming language. In the past, I’ve always hated when someone would bust out some .NET code to show an API call, or something very language or platform specific. Postman reflects a more API way of doing things, elevated above the dogma of any single programming language community.

I am beginning to use Postman and the Restlet client in my API training and curriculum more, directing my users to actually try something out in the API client before moving on to the next step. It is kind of becoming the new interactive API documentation, but something that is linkable from any story or training material, or incorporated directly into a live talk. As an evangelist it is yet another reason to maintain OpenAPI definitions and Postman Collections of all your most common API use cases, so that you can find, and include, all your relevant API calls directly in your storytelling. I’m going to start exploring the viability of doing this with non-developers, and folks who aren’t familiar with Postman yet. I have a feeling that within the echo chamber there is a sort of familiarity taking hold when it comes to Postman, but I want to make sure the environment isn’t a shock for newcomers, and is something that can help users go from zero to API response in 60 seconds or less.

It is interesting to watch the API space evolve, and see which tools become commonplace like Postman has. For a while the Apigee API Explorer was commonplace, but it has quickly fallen into the background. Swagger UI has enjoyed dominance when it comes to interactive API documentation, but that is something I see beginning to shift. I’m hoping that API clients and development environments like Postman, Restlet, PAW, and Insomnia continue to develop mindshare. These tools are all good for helping make API integration easier, but as I saw at APIStrat, they can also help us communicate with our readers, and audience, about what is possible with APIs. Speaking to as wide of a group as possible, elevating above any single programming language, and just speaking API.


Developing A Talent Pool Within Your API Community

There are many reasons for having an API. The direct reason is to provide your partners and 3rd party developers access to your data, content, and algorithmic resources using the web. However, there are many indirect, and less obvious, reasons for having an active API program at your company, organization, institution, or government agency. Things that you probably haven’t thought of, but that the groups who are already doing APIs have known about for a while. One of these benefits is in the area of talent acquisition, building relationships with, and identifying, folks with the skills you are looking for.

I remember the first time I heard executives at Paypal say that they often hire out of their development community. After hearing that I began asking more API providers about the talent acquisition opportunities within their API developer ecosystems, and about 50% of the people I talked to said they had hired someone building an application on their API, or someone who had shown up to a hackathon to participate and build interesting things. It is easy to think of our API platforms as simply a place for developing applications, but along with each integration and application, there are one or many humans behind it, who have an understanding of your API, and possibly the industry you are targeting. Once you are introduced to the concept, it makes complete sense to be using an API as a talent acquisition vehicle, allowing developers to flex their skills in a place where you are already paying attention.

This is my new response to people who are looking for talent: have you tried your API community? Many of the people I’ve tested this out on do not have a public API program. They aren’t at the place in their journey where they have a public presence in this way, either because they don’t have any public resources to make available, or they just haven’t evolved to the point where they are ready to make things publicly available. Talent acquisition is another motivator in this journey. Maybe you’re not ready to make your data, content, and algorithms public, but there has to be something already available on your website, or some sort of sandbox environment you could make public, to begin attracting talented folks. It’s another motivation for being public around your digital assets. Even if you aren’t entertaining direct public access to all your resources, you should be casting an SEO net in your industry, publishing relevant (sample) data sets, content, or algorithms that someone might find while Googling, or through some other discovery process.

A public API program can be so many different things. Those who have embarked on this journey are the ones who are pushing the boundaries, and are discovering the unexpected benefits like talent acquisition. Those who aren’t will struggle to keep up, and to secure the talent they need. API portals aren’t just for building the next Twitter. They are your externally facing R&D lab, where you explore new ideas, and learn to play nicely with others. It is where you develop new products, integrations, applications, and when done right, new talent. Your APIs don’t have to be production quality; they might just be a sampling, or a sandboxed version of the real thing. Just enough to attract the interest of just the right people in your industry. Don’t let your lack of imagination about what an API is hold you back from experimenting and playing around, and realizing these benefits.

If you do not have an API platform available yet, this is all probably news to you, because you haven’t been in the game. You haven’t been building relationships with other companies and individuals. You haven’t been publishing documentation, white papers, and telling other stories that will attract the interesting folks you are looking to hire. You are probably still relying on job sites and social network sites to find the talent you need, where you are competing with everyone else in your industry. Why not get started on your API program, and publish some safe data and content that is already on your website? Or, maybe get creative and publish some sandbox APIs that allow folks to play, explore, and understand, while you study who the most interesting consumers are, using your API as a talent acquisition funnel for your organization.


A Simple API Using AWS RDS, Lambda, and API Gateway

I wrote about a simple API with AWS DynamoDB, Lambda, and API Gateway last week. I like that approach because of the simple nature of AWS DynamoDB. One benefit of going that route is that you can even bypass Lambda, as AWS API Gateway can work directly with the AWS DynamoDB API. I’m just playing around with different configurations and pushing forward my understanding of what is possible, and this week I switched out the database in this setup with AWS RDS, which opens up the ability to use MySQL or Postgres as the backend for any API.

For this example, I’m using a simple items database, which you can build with this SQL script after you fire up an RDS instance (I’m using MySQL):
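
Something along these lines will do the job, run from Node.js with the mysql package (the endpoint, credentials, and column names here are just placeholders):

    // create-items-table.js - stand up a simple items table on the RDS MySQL instance
    const mysql = require('mysql');

    const connection = mysql.createConnection({
      host: 'your-instance.rds.amazonaws.com', // placeholder RDS endpoint
      user: 'admin',
      password: 'your-password',
      database: 'api'
    });

    const sql = 'CREATE TABLE items (' +
      'item_id INT NOT NULL AUTO_INCREMENT, ' +
      'name VARCHAR(255) NOT NULL, ' +
      'description TEXT, ' +
      'PRIMARY KEY (item_id))';

    connection.query(sql, (error) => {
      if (error) throw error;
      console.log('items table created');
      connection.end();
    });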

Next I wanted to have the basic CRUD operations for my API. I opted to use Node.js running in Lambda for the code layer of this API, starting with the ability to get all records from the database:
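
A minimal sketch of that handler, using the mysql package with connection details pulled from environment variables (the variable names are placeholders):

    // items-get.js - Lambda handler returning all records from the items table
    const mysql = require('mysql');

    exports.handler = (event, context, callback) => {
      const connection = mysql.createConnection({
        host: process.env.DB_HOST,
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME
      });
      connection.query('SELECT * FROM items', (error, results) => {
        connection.end();
        if (error) return callback(error);
        callback(null, results);
      });
    };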

After that I want to be able to insert new records:
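
A sketch of the insert handler, following the same connection pattern (the field names are placeholders):

    // items-post.js - Lambda handler inserting a new record
    const mysql = require('mysql');

    exports.handler = (event, context, callback) => {
      const connection = mysql.createConnection({
        host: process.env.DB_HOST, user: process.env.DB_USER,
        password: process.env.DB_PASSWORD, database: process.env.DB_NAME });
      connection.query('INSERT INTO items SET ?',
        { name: event.name, description: event.description },
        (error, results) => {
          connection.end();
          if (error) return callback(error);
          callback(null, { item_id: results.insertId });
        });
    };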

Then of course be able to get a single record:
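
A sketch of the single record handler:

    // items-get-one.js - Lambda handler returning a single record by item_id
    const mysql = require('mysql');

    exports.handler = (event, context, callback) => {
      const connection = mysql.createConnection({
        host: process.env.DB_HOST, user: process.env.DB_USER,
        password: process.env.DB_PASSWORD, database: process.env.DB_NAME });
      connection.query('SELECT * FROM items WHERE item_id = ?', [event.item_id],
        (error, results) => {
          connection.end();
          if (error) return callback(error);
          callback(null, results[0]);
        });
    };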

Then be able to update a single record:
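
A sketch of the update handler:

    // items-put.js - Lambda handler updating a single record
    const mysql = require('mysql');

    exports.handler = (event, context, callback) => {
      const connection = mysql.createConnection({
        host: process.env.DB_HOST, user: process.env.DB_USER,
        password: process.env.DB_PASSWORD, database: process.env.DB_NAME });
      connection.query('UPDATE items SET name = ?, description = ? WHERE item_id = ?',
        [event.name, event.description, event.item_id],
        (error) => {
          connection.end();
          if (error) return callback(error);
          callback(null, { item_id: event.item_id });
        });
    };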

And of course I want to be able to delete records:
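
And a sketch of the delete handler:

    // items-delete.js - Lambda handler deleting a single record
    const mysql = require('mysql');

    exports.handler = (event, context, callback) => {
      const connection = mysql.createConnection({
        host: process.env.DB_HOST, user: process.env.DB_USER,
        password: process.env.DB_PASSWORD, database: process.env.DB_NAME });
      connection.query('DELETE FROM items WHERE item_id = ?', [event.item_id],
        (error) => {
          connection.end();
          if (error) return callback(error);
          callback(null, { deleted: event.item_id });
        });
    };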

Now that I have the business logic set up in AWS Lambda for reading and writing data to my relational database, I want an API front-end for this backend. I am using AWS API Gateway as the API layer, and to set it up I’m just importing an OpenAPI definition to jumpstart things:
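
The OpenAPI definition itself is the part you will want to craft for your own resources, but the import can be scripted with the AWS SDK for JavaScript along these lines (the file name and region are placeholders):

    // import-api.js - create a new REST API in API Gateway from an OpenAPI definition
    const fs = require('fs');
    const AWS = require('aws-sdk');
    const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

    // items-openapi.json is a placeholder file describing /items and /items/{item_id}
    const body = fs.readFileSync('items-openapi.json');

    apigateway.importRestApi({ body: body, failOnWarnings: false }, (err, data) => {
      if (err) return console.log(err);
      console.log('created API ' + data.id + ' (' + data.name + ')');
    });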

This gives me the skeleton framework for my API, with the paths and methods I need to accomplish the basics of reading and writing data. Now, I just need to wire up each API method to its accompanying Lambda function, something API Gateway makes easy.

Now I have an API for my basic backend. There is one thing you have to do to make each method work properly with the Lambda function: you have to set up a body mapping for the item_id when it is passed in the path for the PUT, GET, and DELETE methods. If you don’t, the item_id won’t be passed on to the Lambda function–it took me a while to figure this one out. There are other things you have to do, like setting up a usage plan, turning on API key access for each API, and setting up a custom domain if you want, but hopefully this simple example gets the point across. I will work on other parts in future posts. Hopefully it provides a basic example of an API using RDS, Lambda, and API Gateway, which is something I have wanted to have in my toolbox for some time.
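
Here is a minimal sketch of what that wiring looks like for a single GET method, including the body mapping template that hands the item_id through to the Lambda function (the API id, resource id, account number, and function name are all placeholders):

    // wire-method.js - point a GET method at its Lambda function and map the path item_id
    const AWS = require('aws-sdk');
    const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

    apigateway.putIntegration({
      restApiId: 'abc123',              // placeholder API id
      resourceId: 'def456',             // placeholder resource id for /items/{item_id}
      httpMethod: 'GET',
      type: 'AWS',                      // Lambda (non-proxy) integration
      integrationHttpMethod: 'POST',    // Lambda is always invoked with POST
      uri: 'arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/' +
           'arn:aws:lambda:us-east-1:123456789012:function:items-get-one/invocations',
      requestTemplates: {
        // body mapping template that passes the path parameter on to the Lambda function
        'application/json': '{ "item_id": "$input.params(\'item_id\')" }'
      }
    }, (err, data) => console.log(err || data));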

The process has opened my eyes up wider to the serverless world, as well as letting me play more with Node.js–which has been on my list for some time now. It provides a pretty solid, scalable, manageable way to deploy an API using AWS. I have all the code on Github, and will be evolving it as I push it forward. If you use the Lambda scripts, make sure you upload each one individually as a zipped file, so that the mysql dependencies are there, otherwise the script won’t connect to the database. It should provide a base template you can use to seed any basic data API. This is why I’ve added it to my API Evangelist toolbox, giving me a simple, forkable set of scripts I can use as a seed for any new API. I will add more scripts and templates to it over time, rounding off the functionality as I evolve in my understanding of deploying APIs using AWS RDS, Lambda, and API Gateway.


API Security Beginning To Outweigh My Vendor Lock-In Concerns

I’ve been on the AWS train since day one. I’ve been integrating Amazon S3 and EC2 into my business(es) since they first launched a decade ago. While the platform has faithfully provided my storage and compute for over a decade, I’ve always been wary of vendor lock-in. After a decade-long ride on Microsoft (1998-2008), I felt pretty burned. Then after a similar decade-long ride on Google (2005-2015), I felt burned, but in a different way. After a decade on AWS I’m nervous, but I don’t feel as burned. However, there is one aspect of doing business online that is making me put aside some of my concerns regarding vendor lock-in on AWS, and even on Google and Azure–SECURITY. I’m just not convinced I can do this alone, and I need the help of the platforms I operate on to make sure my operations are secure.

I spent the weekend setting up a set of APIs using AWS RDS as the backend database, AWS Lambda as the code layer, and AWS API Gateway as the API front-end. As part of this work I established an identity and access management (IAM) role with policies tailored for brokering the exchanges between RDS, Lambda, and the API Gateway. I’m leaning on AWS for two key API security pieces here: 1) API key management with AWS API Gateway, and 2) backend security using AWS IAM. The key management stuff is pretty straightforward and something I can easily replicate, but the AWS IAM stuff is a little more involved, and I’m grateful for how easy AWS IAM makes it for me to set up roles, cherry pick from common policies, and lock down the infrastructure I am using to drive the backend of my APIs and other applications.
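
To give a flavor of this, here is a minimal sketch using the AWS SDK for JavaScript that creates a role the Lambda service can assume, and attaches one of the common managed policies to it (the role name is a placeholder, and your actual policies would be tailored to the services involved):

    // create-backend-role.js - create an IAM role for the Lambda functions behind my APIs
    const AWS = require('aws-sdk');
    const iam = new AWS.IAM();

    // trust policy letting the Lambda service assume the role
    const assumeRolePolicy = JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: { Service: 'lambda.amazonaws.com' },
        Action: 'sts:AssumeRole'
      }]
    });

    iam.createRole({
      RoleName: 'api-backend-role', // placeholder role name
      AssumeRolePolicyDocument: assumeRolePolicy
    }, (err) => {
      if (err) return console.log(err);
      // attach a managed policy so the functions can reach the VPC-hosted RDS instance
      iam.attachRolePolicy({
        RoleName: 'api-backend-role',
        PolicyArn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole'
      }, (err, data) => console.log(err || data));
    });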

As I moved another API project into the home stretch this weekend, I migrated an RDS and EC2 instance into a separate account for a client. I had been defining, designing, and developing an API for them over the last couple of months, and as we approached completion, I wanted everything in their own account where they could be accountable for the AWS bill, and take control over operations, as well as the security of their own API operations. Being able to configure IAM for each project, and pass it off to clients using a separate AWS account, is super beneficial to me, as well as my clients. I’m not confident my clients can tackle API security any better than I can in the long run, and being able to configure roles, policies, and stand up a working set of APIs that are secure is a critical aspect of doing business online using APIs for both me and my clients.

While I still have vendor lock-in concerns with AWS, especially when it comes to adopting a serverless approach with Lambda, the ability to define API and backend security in this way is softening these concerns. In the volatile online environment we find ourselves in I am just not confident that I have the time, skills, and resources to do API security properly. I’m also not confident my clients can either. I’m thankful for AWS IAM, and the other security solutions they bring to the table. I’m feeling like we are going to need our cloud platform providers to have our backs like this in the future, as well as the services of a suite of other API security providers to help us do all of this properly. I wish it was a utopian online world where we could hand roll our infrastructure and remain safe online, but in 2017 I’m not interested in doing any of this alone. There are just too many threats out there.


An Example Of How Every API Provider Should Be Using OpenAPI Out Of The Slack Platform

The Slack team has published the most robust and honest story about using OpenAPI I have seen, providing a blueprint that other API providers should be following. What I like most about Slack’s approach to developing, publishing, and sharing their OpenAPI is the honesty about why they are doing it: to help standardize around a single definition. They publish and share the OpenAPI on Github, which other API providers are doing, and which I think should be standard operating procedure for all API providers, but they also go into the realities regarding the messy history of their API documentation–an honesty that I feel ALL API providers should be embracing.

My favorite part of the story from Slack is the opening paragraph that honestly portrays how they got here: “The Slack Web API’s catalog of methods and operations now numbers nearly 150 reads, writes, rights, and wrongs. Its earliest documentation, much still preserved on api.slack.com today, often originated as hastily written notes left from one Slack engineer to another, in a kind of institutional shorthand. Still, it was enough to get by for a small team and a growing number of engaged developers.” Even though we all wish we could do APIs correctly, and support API documentation perfectly from day one, this is never the reality of API operations. OpenAPI will not be a silver bullet for fixing all of this, but it can go a long way in helping standardize what is going on across teams, and within an API community.

Slack focuses on SDK development, Postman client usage, alternative forms of documentation, and mock servers as the primary reasons for publishing the OpenAPI for their API. They also share some of the back story regarding how they crafted the spec, and their decision making process behind why they chose OpenAPI over other specifications. They also share a bit of their road map regarding the API definition, and that they will be adopting OpenAPI v3.0, providing “more expressive JSON schema response structures and superior authentication descriptors, specifications for incoming webhooks, interactive messages, slash commands, and the Events API, tighter specification presentation within api.slack.com documentation, and example spec implementation in Slack’s own SDKs and tools”.

I’ve been covering leading API providers’ moves towards OpenAPI adoption for some time, writing about the New York Times publishing their OpenAPI definition to Github, and Box doing the same, but providing even more detail behind the how and why of doing OpenAPI. Slack continues this trend, but showcases more of the benefits it brings to the platform, as well as the community. All API providers should be publishing an up-to-date OpenAPI definition to Github by default like Slack does. They should also be standardizing their documentation, mock and virtualized implementations, generating SDKs, and driving continuous integration and testing using this OpenAPI, just like Slack does. They should be this vocal about it too, encouraging the community to embrace, and ingest, the OpenAPI across the on-boarding and integration process. I know some folks are still skeptical about what OpenAPI brings to the table, but increasingly the benefits are outweighing the skepticism–making it hard to ignore OpenAPI.

Another thing I want to highlight in this story is that Taylor Singletary (@episod), reality technician, documentation & developer relations at Slack, brings an honest voice to this OpenAPI tale, which is something that is often missing from the platforms I cover. This is how you make boring ass stories about mundane technical aspects of API operations, like API specifications, something that people will want to read. You tell an honest story that helps folks understand the value being delivered. You make sure that you don’t sugar coat things, and you talk about the good, as well as some of the gotchas like Taylor has, and connect with your readers. It isn’t rocket science, it is just about caring about what you are doing, and the human beings your platform impacts. When done right you can move things forward in a more meaningful way, beyond what the technology is capable of doing all by itself.


I Like The Scope Of The AWS SDK for JavaScript

I’ve tried picking up Node.js as a server side approach to delivering APIs a couple of times in the past few years. Both times Express, and a handful of other issues, ran me off from using it as my default backend programming language for APIs. I use a lot of JavaScript in the client, but it just didn’t feel like it was what I needed to drive my backend solutions. Over the last week I’ve been architecting a handful of APIs using Lambda and Node.js scripts, and I have to admit I’m pretty sold on Node.js as a backend solution now. This conversion is partly due to Lambda and a serverless way of doing things, but it is also due to the AWS SDK for JavaScript, which is a pretty robust example of an API SDK.

The AWS SDK for JavaScript covers 106 separate AWS services, and I haven’t actually calculated how many methods are available–a lot. I’ve been using it recently to manage AWS RDS, DynamoDB, and API Gateway, and the SDK is a delight to use, is well documented, and enjoys excellent coverage of AWS services. There were two elements at play here that made this work for me: 1) the simplicity of the AWS SDK in Node, and 2) the simplicity of implementing in the serverless environment that is Lambda. I found Node.js scaffolding to be overly complex for APIs in previous experiences, something that melted away with a serverless implementation. I am well versed in JavaScript, and with the AWS SDK being simple enough, I was able to get up and running with a variety of functions on Lambda pretty quickly. Something that becomes really powerful when exposed as an API resource using AWS API Gateway.
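
For a small taste of that simplicity, here is a minimal sketch that lists the REST APIs I have deployed in API Gateway, and writes a record to a DynamoDB table (the table name and item are placeholders):

    // sdk-sample.js - a quick taste of the AWS SDK for JavaScript across two services
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'us-east-1' });

    const apigateway = new AWS.APIGateway();
    const dynamodb = new AWS.DynamoDB.DocumentClient();

    // list the REST APIs currently deployed in API Gateway
    apigateway.getRestApis({ limit: 25 }, (err, data) => {
      if (err) return console.log(err);
      data.items.forEach((api) => console.log(api.id + ' ' + api.name));
    });

    // write a record to a placeholder DynamoDB table
    dynamodb.put({
      TableName: 'items',
      Item: { item_id: '1', name: 'first item' }
    }, (err) => {
      if (err) console.log(err);
    });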

This is going to date me, and I know that only a handful of people will understand, but the AWS SDK for JavaScript, combined with Lambda and API Gateway, reminds me of VBScript on Windows Server, circa 2000. Back in the day, I had my own server farm, on my own dedicated T1, on an entire C block of IP addresses, running Windows 2000. I was able to orchestrate some pretty cool cross-server voodoo using VBScript when it came to managing the server, database, web server, DNS (Active Directory), and other moving parts of my world. I have way more resources at my disposal with AWS, and a much more robust vocabulary with the AWS SDK for JavaScript than I did back in the day, but the feeling I have is very similar. I am able to orchestrate pretty effectively across my operations, using a variety of AWS, 3rd party, as well as my own home grown APIs. I have the control over my infrastructure that I am looking for, allowing me to get what I need done.

I like the scope of the AWS SDK for JavaScript. It gives me access to what I need, while also introducing me to other AWS APIs at the same time. I set out to use the SDK for DynamoDB, then RDS and API Gateway, and now I’m exploring how I can use it for managing EC2 and S3, evolving the PHP SDK setup I’ve had in place for the last five years. I wasn’t looking to evolve some of my legacy API infrastructure using this new approach, which I’ve been evolving for a client, but the ease of use, scalability, and benefits brought to the table are proving to be worth the extra work. I’m rolling out a suite of Lambda functions for managing RDS, EC2, S3, and API Gateway, as well as expanding to work with Github, Stripe, and other external APIs I depend on. This ease of use, plus the security benefits introduced by AWS IAM, is pushing me further into the AWS universe, without the vendor lock-in regret I was expecting to have.


My Response On The Department Of Veterans Affairs (VA) RFI For The Lighthouse API Management Platform

I am working with my partners in the government API space (Skylight, 540, Agile Six) to respond to a request for information (RFI) out of the Department of Veterans Affairs (VA), for what they call the Lighthouse API Management platform. The RFI provides a pretty interesting look into the way the government agency which supports our vets is thinking about how they should be delivering government resources using APIs, but also how they play a role in the wider healthcare ecosystem. My team is meeting today to finalize our response to the RFI, and in preparation I wanted to organize my thoughts, which, in my style of doing things, involves publishing them here on API Evangelist.

You can read the whole RFI, but I’ll provide the heart of it, to help set the table for my response.

Introduction: To accelerate better and more responsive service to the Veteran, the Department of Veterans Affairs (VA) is making a deliberate shift towards becoming an Application Programming Interface (API) driven digital enterprise. A cornerstone of this effort is the setup of a strategic Open API Program that is adopting an outside-in, value-to-business driven approach to create APIs that are managed as products to be consumed by developers within and outside of VA.

Objectives: VA has started the process of establishing an API Management Platform, named Lighthouse. The purpose of Lighthouse is to establish the Next Generation Open Digital Platform for Veterans, accelerating the transformation in core domains of VA, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem. These APIs will allow VA to leverage its investment in various digital assets, support application rationalization, and allow it to decouple outdated systems and replace them with new, commercial, off the shelf, Software as a Service (SaaS) solutions. It will enable creation of new, high value experiences for our Veterans, VA’s provider partners, and allow VA’s employees to provide better service to Veterans.

With some insight into how they will achieve those objectives:

  • Integrate more effectively with the community care providers by connecting members of a Veteran’s “Care Team” from within and outside the Veterans Health Administration (VHA); and, generate greater opportunities for collaboration across the care continuum with private sector providers,
  • Effectively shift technology development to commercial electronic health record (EHR) and administrative systems vendors that can integrate modular components into the enterprise through open APIs, thus allowing VA to leverage these capabilities to adopt more efficient and effective care management processes,
  • Foster an interoperable, active, innovation ecosystem of solutions and services through its Open API Framework that contributes to the next generation of healthy living and care models that are more precise, personalized, outcome-based, evidence-based, tiered and connected across the continuum of care regardless of where and how care is delivered,
  • Create an open, accessible platform that can be used not only for Veterans care but also for advanced knowledge sharing, clinical decision support, technical expertise, and process interoperability with organizations through the U.S. care delivery system. By simplifying access to the largest data set of clinical data anywhere, albeit de-identified, it will accelerate the discovery and development of new clinical pathways for the benefit of the Veterans and the community at large.

They include some bullets for what they see as the road map for the Lighthouse API management platform:

  • The API Gateway, through creating the facades for common, standard Health APIs will allow multiple EHRs to freely and predictably interoperate with each other as VA deploys the COTS EHR through the enterprise and finds itself with multiple EHR’s during the transition and stabilization phases of the implementation.
  • Establish the Single Point of Interoperability with health exchanges and EHR systems of Community Care Providers.
  • The development of APIs across the enterprise spanning additional domains following a federated development approach that can see an exponential growth in the number APIs offered by the platform.
  • Operate at scale to support the entire population of Veterans, VA Employees and Community Providers that provide a variety of services to the Veterans

This is all music to my ears. These are objectives, and a road map, that I can get behind. It is an RFI that reflects where all federal agencies should be going, but it is also extra meaningful for me to see this coming out of the VA. Definitely making it something I want to support however I can. The window for responding is short, so I wanted to be able to stop what I'm doing, and give the RFI a proper response, regardless of who ends up running the platform. I've done this type of RFI storytelling several times in the past, with the FAFSA API for the Department of Education, the Recreational Information Database (RIDB) out of the Department of Interior, when Commerce hired a new Chief Data Officer, and just in general support of the White House API strategy. This time is different, only because I have a team of professionals who can help me deliver beyond just the RFI.

Some Background On Me And The VA Before I get to the questions included as part of the RFI, I wanted to give some background on my relationship to the VA, and supporting veterans. First, my biological father was career military, and the father that raised me was a two tour Vietnam veteran, exposing me to the Veterans Administration and its hospitals at an early age. He passed away in the 1990s from cancer he developed as a result of his Agent Orange exposure. All of this feeds into my passion for applying APIs in this area of our society. Additionally, I used to work for the Department of Veterans Affairs, as a Presidential Innovation Fellow in 2013. I didn't stay for the entire fellowship, exiting during the government shutdown, but I have continued to work on opening up data sets, supporting VA related API conversations, and trying to keep in tune with anything that is going on within the agency. All contributing to this RFI making me pretty happy.

My Answers To The RFI Questions The VA provides 20 separate questions as part of their RFI, shining a light on some of the thinking that is occurring within the agency. I've bolded their questions, and provide some of my thoughts inline, sharing my official responses. My team will work to combine everyone's feedback, and publish a formal response to the VA's RFI.

1. Drawing on your experience with API platforms, how do you see it being leveraged further within the healthcare industry in such a manner as described above? What strengths and opportunities exist with such an approach in healthcare?

An API first platform as proposed by the VA reflects what is already occurring in the wider healthcare space. With veteran healthcare spending being such a significant portion of healthcare spending in this country, the presence of an API platform like this would not just benefit the VA, veterans, veteran hospitals, and veteran service organizations (VSO), it would benefit the larger economy. A VA API platform would be the seed for a federated approach to not just consuming valuable government resources, but also deploying APIs that will benefit veterans, the VA, as well as the larger healthcare ecosystem.

Some of the strengths of this type of approach to supporting veterans via an open data and API platform, with a centralized API operations strategy would be:

  • Consistency - Delivering APIs as part of a central vision, guided by a central platform, but echoed by the federated ecosystem can help provide a more consistent platform of APIs.
  • Agility - An API-first approach across all legacy, as well as modern VA systems will help break the agency, and its partners into much smaller, bite-size, reusable components.
  • Scope - APIs excel at decoupling legacy systems that are often large and complex, delivering much smaller, reusable components that can be developed, delivered, and deprecated in smaller chunks, and that work at scale.
  • Observability - A modern approach to delivering an API platform for government with a consistent model for API definitions, design, deployment, management, monitoring, testing, and other steps of the API lifecycle, combined with a comprehensive approach to measurement and analysis, brings much needed observability to all stops along the lifecycle. Letting in the sunlight required for success at scale.
  • Feedback Loops - An API platform at this scale, with the appropriate communication and support channels, opens up feedback loops for all stops along the lifecycle, benefitting from community and partner input from design to deprecation. Making all VA systems stronger as a result.

These platform strengths set the stage for some interesting and beneficial opportunities to emerge from within the platform community, but also the wider healthcare ecosystem that already exists and is evolving, which will step up to participate and engage with available VA resources. Here are some examples of how this can evolve, based upon existing API ecosystems within and outside of healthcare:

  • Practitioner Participation - An open API platform always starts with engaging backend domain experts, allowing for the delivery of APIs that deliver access to critical resources. Second, modern API platforms help open up, and attract external domain experts, and in the case of a VA platform, this would mean more engagement with health care practitioners, further rounding off what goes into the delivery of critical veteran services.
  • Healthcare Vendor Integration - Beyond practitioners, a public API platform for the VA would attract existing healthcare vendors, providing EHR, and other critical services. Vendors would be able to define, design, deploy, and integrate their API driven solutions outside the tractor beam of internal VA operations.
  • Partner Programs - Formal partner programs accompany the most mature API platforms available, allowing for more trusted engagement across the public and private sector. Partners will bring needed resources to the ecosystem, while benefitting from deeper access to VA resources.
  • Certification - Another benefit of a mature API platform is the ability to certify partners, applications, and developers, establishing a much more trusted pool of vendors and developers that the VA can benefit from, as well as the rest of the ecosystem.
  • Standardization - A centralized API platform helps aggregate common schema, API definitions, models, and blueprints that can be used across the federated public and private sector. Helping establish consistency across VA API resources, but also in the tooling, services, and other applications built on top of VA APIs.
  • Data Ecosystems - A significant number of VA APIs available on the platform will be driven by valuable data sets. The more APIs, and the consistency of APIs across many internal data stewards, as well as trusted partner sources will establish a data ecosystem that will benefit veterans, the VA, as well as the rest of the ecosystem.
  • Community Participation - Open API platforms bring in a wide variety of developers, data scientists, healthcare practitioners, and folks who are passionate about the domain. When it comes to supporting veterans, it is essential that there is a public platform and forum for the community to engage around VA resources. Helping share the load with the community, when it comes to serving veterans.
  • Marketplace - With an API platform as proposed by this RFI, a natural opportunity is a marketplace. An established process and platform for internal groups to publish their APIs, as well as supporting SDKs, and other applications. This marketplace could also be made available to trusted partners, and certified developers, allowing them to showcase their verified applications, and be listed as a certified developer or agency. The marketplace opportunity will bring the most benefit to the VA when it comes to delivering across many internal groups.

The benefit of an API platform such as the one proposed as part of this RFI is that it makes the delivery of critical services a team effort. Domains within the VA can do what they do best, and benefit from having a central platform, and team, to help them deliver consistent, reliable, scalable API access to their resources, for use across web, mobile, device, spreadsheet, analysis, and other applications. Externally, healthcare vendors, hospitals, practitioners, veterans, and the people that support them can all work together to put valuable API resources to work, helping make sense of data, and deliver more modular, meaningful applications that all focus on the veteran.

The fact that this RFI exists shows that the VA is looking to keep pace with where the wider healthcare sector is headed. Four out of five of the leading EHR providers have existing API platforms, demonstrating where the healthcare sector is headed, and reflecting the vision established in this RFI. The strength and the opportunity of such an approach to delivering healthcare services for veterans will be fully realized when the VA becomes interoperable, and plug and play, with the wider healthcare sector.

2. Describe your experience with different deployment architecture supported by MuleSoft (e.g. SaaS only, On Premise Only, Private cloud, Hybrid – Mix of SaaS and Private Cloud) and in what industry or business process it was used? Please include whether your involvement was as a prime or subcontractor, and whether the work was in the commercial or government sector.

My exposure to Mulesoft in a production environment has been limited, especially within the last couple of years. However, during my time at the Department of Veterans Affairs, and working in government, I was actively involved with the company, regularly engaging with their product and sales teams in several areas:

  • Industry Research - I was paid by Mulesoft to conduct API industry research back in 2012 and 2013, providing regular onsite presentations regarding my work, and contributing to their road map.
  • Messaging Partner - I worked with Mulesoft on the creation of short form and long form content around the API industry, and as part of my API industry conference.
  • Sales Process - I have sat in on several sales meetings for Mulesoft with government agencies, enterprise organizations, and higher educational institutions, and am familiar with what they offer.
  • RAML Governance - I was part of the early discussion, formation, and feedback on the RAML API specification, but left shortly after I left the government.

Based upon my experience working with the Mulesoft team, and through my regular monitoring of their services, tooling, and engagement with some of their clients, I am confident in my ability to tailor a platform strategy that would work with their API gateway, IAM, definitions, discovery, and other solutions. I have been one of the analysts in the API sector who study Mulesoft and the other API management providers, and I understand the strengths and weaknesses of each of the leading vendors, as well as maintain an understanding of how API management is being commoditized, standardized, and applied across all the players. I'm confident this knowledge will transfer to delivering an effective vision for the VA, involving Mulesoft solutions.

3. Describe any alternative API Management Platforms that are offered as SaaS offerings, On Premise, Private Cloud, or Hybrid. Please detail how these solutions can scale to VA’s needs managing approximately 56,000 transactions per second through connecting to VistA and multiple commercial and open source EHRs (conforming to US Core Profiles of the HL7 FHIR standards), multiple commercial Enterprise Resource Planning (ERP) systems, various home grown systems including Veterans Benefit Management Service (VBMS), Corporate Data Warehouse, and VA Time & Attendance System (VATAS), and commercial real-time analytics software packages, and open source tools enabling rapid web and mobile application development.

Monitoring and understanding API management platforms is something I’ve done since 2010 in a full time capacity. I’ve studied the evolution of the enterprise API gateway from its service oriented architecture days, into the cloud as a commodity with AWS, Google, and Azure, as well as the deployment on-premise, in hybrid scenarios, and even on-device. To support my partners, clients, and the readers of my blog, I work regularly to test drive, and understand leading API management solutions available on the market today.

While I study the proprietary and open source API gateway and management solutions out there, I also work to understand where API providers will be operating these solutions, which means having a deep understanding of how native and installed API management occurs in the cloud, hybrid, on-premise, on-device, and anywhere it needs to be in 2017.

  • Multi-Vendor - API gateways have become a commodity and are baked into every one of the major cloud providers. AWS has the lead in this space, with Google and Azure following up. We stay in tune with a multi-vendor gateway approach to be able to support, and deliver solutions within the context of how our customers and clients are already operating. To support the number of transactions the VA is currently seeing, as well as future API implementations, a variety of approaches will be required to support the requirements of many different internal groups, as well as trusted partners.
  • Multi-Cloud - As I already mentioned, we always look to support multiple gateways across multiple data centers and cloud providers. Our goal is to understand the native opportunities on each cloud platform, the ability to deliver hybrid networks and environments, as well as how each of the leading open source and proprietary API management providers operate within a cloud environment. Making it easier to integrate with, and deliver API facades for, any backend resources available within any cloud environment.
  • Reusable Models - One key to successful API management in any environment is the establishment of reusable models, and the reuse of definitions, schema, and templates which can be applied consistently across groups. Any API gateway and management solution should have an accompanying database, machine, or container image, and be driven using a machine readable API definition using OpenAPI, API Blueprint, or RAML. All data should possess an accompanying definition using JSON Schema, and leverage standards like JSON Path for establishing mappings between resources. Everything should be defined for implementation, reuse, and continuous deployment as well as integration.
  • API-Driven - Every mature API management platform automates using APIs. Mulesoft, Apigee, 3Scale, and the AWS gateway all have APIs for managing the management of APIs. This allows for seamless integration across all systems, even disparate backend legacy systems. Extending identity, service compositions, plans, rate limits, logging, analysis, and other exhaust from API management into any system where it is needed. Lighthouse should be an API driven platform for defining, designing, deploying, and managing the APIs that serve the VA (a minimal sketch of this pattern follows this list).
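To make the API-driven point above a little more concrete, here is a minimal sketch of onboarding a new consumer through one vendor's management API, using the AWS API Gateway API via the AWS SDK for JavaScript (v2). The plan name, API id, stage, and rate limits are hypothetical placeholders, and the same pattern exists across Mulesoft, Apigee, 3Scale, and the other leading providers.

```javascript
// A minimal sketch of "APIs for managing the management of APIs", creating a
// key, a usage plan, and associating the two, all programmatically.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

async function onboardConsumer(consumerName) {
  // Create a key that identifies this consumer on every API call.
  const key = await apigateway.createApiKey({
    name: consumerName,
    enabled: true
  }).promise();

  // Create a usage plan that defines rate limits and quotas for a tier.
  const plan = await apigateway.createUsagePlan({
    name: 'public-data-free-tier',
    throttle: { rateLimit: 10, burstLimit: 20 },
    quota: { limit: 10000, period: 'MONTH' },
    apiStages: [{ apiId: 'abc123', stage: 'prod' }]
  }).promise();

  // Associate the key with the plan, putting the consumer into a bucket.
  await apigateway.createUsagePlanKey({
    usagePlanId: plan.id,
    keyId: key.id,
    keyType: 'API_KEY'
  }).promise();

  return { apiKey: key.value, usagePlanId: plan.id };
}
```

The point is not the specific vendor, but that keys, plans, and rate limits can all be defined, measured, and adjusted from any system that needs them.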

In full disclosure, I've worked with almost every API management provider out there in some capacity. Similar to providing industry analysis to Mulesoft, I helped WSO2 define and evolve their open source API management solution. I've engaged and partnered with 3Scale (now Red Hat), Apigee (now Google), Mashery (now Tibco), and others. I've also engaged with NREL and their development of API Umbrella, as well as the GSA when it came to implementing it in support of api.data.gov. I'm currently taking money from Tyk, 3Scale, Runscope, and Restlet, all servicing the space. It is my job to understand the API management playing field, as well as the nuts and bolts of all the leading providers.

While it is important to be able to dive deep and support specific solutions for specific projects when it comes to API management, for a platform to handle the scale and scope of the Lighthouse API management platform it will have to provide support for connecting to a robust set of known, and unknown, backend systems. While many organizations we've worked with strive for a single gateway or API management solution, in reality many end up operating multiple solutions, across many vendors, and multiple cloud or on-premise environments. The key to a robust, scalable platform is the ability to easily define, configure, and repeat regularly across environments, providing a consistent API surface area across implementations, groups, backends, and gateway infrastructure.

4. Describe your company’s specific experience and the strategies that you have employed for ensuring the highest level of availability and responsiveness of the platform (include information about configuration based capabilities such as multi-region deployments, dynamic routing, point of presence, flexible/multi-level caching, flexible scaling, traffic throttling etc.).

Our approach to delivering API infrastructure involves assessing scalability at every level of API management within our control. When it comes to API deployment and management we aren't always in control of every layer of the stack, but we always work to configure, optimize, and scale whenever we possibly can. Every API will be different, and a core team will enjoy a different amount of control over backends, but we always consider the full architectural stack when ensuring availability and responsiveness across API resources, looking at the following common layers:

  • Network - When possible we work to configure the network to allow for prioritization of traffic, and isolation of resources at the network level.
  • System - We will work with backend system owners to understand what their strengths and weaknesses are, understand how systems are currently optimized, and assess what new approaches are possible as part of Lighthouse API operations.
  • Database - When there is access to the database level we will assess the scalability of instances and tables in service of API deployment. If possible we will work with database groups to deploy read replicas that can be dedicated to supporting API implementations.
  • Serverless - Increasingly we are using serverless technology to help carry the load for backend systems, adding another layer for optimization behind the gateway. In some situations a serverless layer can act as a proxy, cache, and redundancy for backend systems. Opening up new approaches to availability.
  • Gateway - At the gateway level there are opportunities for scaling, and delivering performance, as well as enabling caching, and rate limiting to optimize specific types of behaviors amongst consumers. Each API will have its own plan for optimization and reliability, tailored for its precise configuration and time to live (TTL).
  • DNS - DNS is another layer which will play a significant role in the reliability and performance of API operations. There are numerous opportunities for routing, caching, and multi-deployment optimizations at the DNS layer to support API operations.
  • Caching - There are multiple levels to think about caching in API infrastructure, from the backend up to DNS. The entire stack should be considered on an API by API basis, with a robust approach to knowing when to use it, and when not to (a gateway-level caching sketch follows this list).
  • Regional - One of the benefits of being multi-vendor, and multi-cloud, and on-premise, is the ability to deliver API infrastructure where it is needed. Delivering in multiple geographic regions is an increasingly common reason for using the cloud, as well as allowing for flexibility in the deployment, management, and testing of APIs in any US region.
  • Monitoring - One aspect of availability and reliability is monitoring infrastructure 24x7, keeping an eye on availability and performance. Sustained monitoring, across providers, and regions is essential to understanding how to ensure APIs are available and dependable.
  • Analysis - Extracted from monitoring, and logging, the analysis across API operations feeds into the overall availability and reliability of APIs. Real-time analysis of API availability and performance over time is critical for dependable infrastructure.
  • Partners - I’ve experience the lack of dependability of VA APIs first hand. During my time at the agency I was forced to shut down APIs I had worked on during the government shutdown. Introducing me to another dimension of API reliability, where I feel external partners can help contribute to the redundancy and availability of APIs beyond what the agency can deliver on its own.

API stability isn’t purely a technical game. There are plenty of tools at our disposal for scaling, optimizing, and injecting efficiencies into the API life cycle. However, the most important contributor to reliability is experience, and making sure you measure, analyze, and understand the needs of each API. This is where modern approaches to API management come into play, and understand how API consumers are putting resources to work, and optimizing each API to support this at which every layer possible.

5. The experiences you have and the strategies that you have employed to ensure the highest level of security for the platform. Please address policy / procedure level capabilities, capabilities that ensure security of data in transit (e.g. endpoint as well as payload), proactive threat detection etc.

API security is another dimension of the API sector I've been monitoring and studying on a regular basis. I have just finished a comprehensive guide to the world of API security, which will be the first guide of its kind when published next week. I've been monitoring general approaches to securing infrastructure, as well as how that impacts API security. I've taken what I've found in my research, as well as my work with clients, and aggregated it into the following areas to consider as we develop a strategy to deliver the highest levels of security.

  • Encryption - Encryption in transport, on the disk, in storage, and in database.
  • API Application Keys - Requiring keys for ALL API access, no matter whether internal or external (see the sketch following this list).
  • Identity & Access Management (IAM) - Establish roles, groups, users, policies and other IAM building blocks, healthy practices and guidance across API implementations and teams.
  • Service Composition - Provide common definitions for how to organize APIs, establish and monitor API plans and usage, and use to secure and measure access to ALL API resources.
  • Common Threats - Be vigilant for the most common threats, from SQL injection to DDoS, making sure all APIs don't fall victim to the usual security suspects.
  • Incident Response - Establish an incident response team, and protocol, making sure when a security incident occurs there is a standard response.
  • Governance - Bake security into API governance, taking it beyond just API design, and actually connecting API security to the service level agreements governing platform operations.
  • DNS - Leverage the DNS layer as the front line of API security, identifying threats early, and establishing and maintaining a first line of defense.
  • Firewall - Build on top of existing web security practices, deploying and maintaining a comprehensive set of rules at the firewall level.
  • Automation - Implement security scanning, fuzzing, and automated approaches to crawling networks and infrastructure looking for security vulnerabilities.
  • Testing & Monitoring - Dovetail API security with API testing and monitoring, identifying malformed API requests, responses, and other illnesses that can plague API operations.
  • Isolation - Leveraging virtualization, serverless, and other API design and infrastructure patterns to keep API resources isolated, and minimizing any damage that can occur in a breach.
  • Orchestration - Acknowledging the security risks that come with modern approaches to orchestrating API operations, continuously deploying, and integrating across the platform.
  • Client Automation - Develop a strategy for managing API consumption, and identifying, understanding, and properly managing bots, and other types of client automation.
  • Machine Learning - Leveraging the development and evolution of machine learning models that help look through log files, and other threat information sources, and using artificial intelligence to respond to and address evolving security concerns.
  • Threat Information - Subscribing to external sources of threat information, and working with external organizations to open up the learning opportunities across platforms regarding the threats they face.
  • Metrics & Analysis - Tap into the metrics and analysis that comes with API management, and build upon the awareness introduced when you consistently manage API consumption. Translating this awareness into a responsive and agile approach to delivering API security across groups, and externally with partners.
  • Bug Bounties - Defining and executing public and private bug bounties to help identify security vulnerabilities early on.
  • Communications - Establish a healthy approach to communicating around platform security. Openly discussing healthy practices, speaking at conferences, and participating in external working groups. Maintaining a regular cadence of communication in normal times, as well as a protocol for communication during security incidents.
  • Education - Develop curriculum and other training materials around platform security, and make sure API consumers, partners, and internal groups are all getting regular doses of API security education.
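To show what the application keys item above can look like in code, here is a minimal sketch of an Express middleware that requires a key on every request and attaches the consumer's plan for downstream rate limiting and logging. The header name, key store, and plan names are hypothetical, and in practice this enforcement usually lives at the gateway.

```javascript
// A minimal sketch of requiring application keys for ALL API access.
const express = require('express');
const app = express();

// Hypothetical key store mapping keys to consumers and their access plans.
const keys = new Map([
  ['abc123', { consumer: 'example-app', plan: 'free-tier' }]
]);

app.use((req, res, next) => {
  const apiKey = req.header('x-api-key');
  const consumer = apiKey && keys.get(apiKey);

  if (!consumer) {
    // No key, no access, whether the caller is internal or external.
    return res.status(401).json({ error: 'A valid API key is required.' });
  }

  // Attach the consumer so downstream logging and rate limiting can use it.
  req.consumer = consumer;
  next();
});

app.get('/organizations', (req, res) => {
  res.json({ plan: req.consumer.plan, organizations: [] });
});

app.listen(3000);
```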

Our API security experience comes from past projects, working with vendors, and studying the best practices of API leaders. Increasingly API security is a group effort, with a growing number of opportunities to work with security organizations, and major tech platforms who see the majority of threats present today. Increasing the volume of information available to integrate directly into platform security operations.

6. Please describe your experience with all the capabilities that the platform offers and the way you have employed them to leverage existing enterprise digital assets (e.g. other integration service buses, REST APIs, SOAP services, databases, libraries)

I have been working with databases for 30 years. I began working with web services in 2000, and have worked exclusively with web APIs in a full time capacity since 2010. I’ve worked hard to keep myself in tune with the protocols, and design patterns in use across the public and private API sectors. Here are the capabilities I’m tuned into currently.

  • SOAP - Making sure we are able to support web services, and established approaches to accessing data and other resources.
  • REST - Web APIs that follow RESTful patterns are our primary focus, leveraging low-cost web infrastructure to securely make resources available.
  • Hypermedia - Understanding the benefits delivered by adopting common hypermedia media types, delivering more experience focused APIs that emphasize relationships between resources.
  • GraphQL - Understanding when the use of GraphQL makes sense for some data focused APIs, delivering resources to single page and mobile applications.
  • gRPC - Staying in tune with being able to deliver higher performance APIs using gRPC, augmenting the features brought by web APIs, for use internally, and with trusted partners.
  • Websockets - Leverage websockets for streaming of data in some situations, providing more real time integration for applications.
  • Webhooks - Developing webhook infrastructure that makes APIs a two-way street, pushing notifications and data externally based upon events or on a schedule (a small sketch follows below).
  • Event Sourcing - Developing a sophisticated view of the API landscape based upon events, and identifying, orchestrating and incentivizing event-based behavior.

This reflects the core capabilities present across the API landscape. While SOAP and REST make up much of the landscape, hypermedia, GraphQL, event sourcing, and other approaches are seeing more adoption. I emphasize that every platform should make REST, or web APIs, its central focus, keeping the bar low for API consumers, and leveraging web technology to reach the widest audience possible.
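Here is the webhook sketch referenced above, a minimal example of pushing an event out to a subscriber and signing the payload with an HMAC so the receiver can verify its origin. The secret, callback URL, and event shape are all hypothetical.

```javascript
// A minimal sketch of a webhook push, signing the payload so consumers can
// verify it came from the platform.
const crypto = require('crypto');
const https = require('https');
const { URL } = require('url');

function sendWebhook(subscriber, event) {
  const body = JSON.stringify(event);

  // Sign the payload with the secret shared with this subscriber.
  const signature = crypto
    .createHmac('sha256', subscriber.secret)
    .update(body)
    .digest('hex');

  const url = new URL(subscriber.callbackUrl);
  const req = https.request({
    hostname: url.hostname,
    path: url.pathname,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Signature': `sha256=${signature}`
    }
  }, (res) => res.resume());

  req.write(body);
  req.end();
}

// Example: notify a subscriber that a record changed.
sendWebhook(
  { callbackUrl: 'https://example.com/hooks/records', secret: 's3cr3t' },
  { type: 'record.updated', id: 'abc123' }
);
```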

7. Please describe your experience and strategies that you have employed at enterprises to create Experience Services from mashup/aggregation/combination of other API’s, services, database calls etc.

I’ve been tracking on API aggregation as a discipline for over five years now, having worked with several groups to aggregate common APIs. It is an underdeveloped layer of the web API sector, but one that has a number of proprietary, as well as open source, solutions available.

I’ve personally worked on a number of API aggregation projects, spanning a handful of business sectors:

  • Social - Developed an open source, as well as a SaaS, solution for aggregating Facebook, LinkedIn, Twitter, and other social networks.
  • Images - Extended a system to work across image APIs, working with Flickr, Facebook, Instagram, and other image sharing solutions.
  • Wearables - Developed a strategy for aggregating health data across a handful of the leading wearable device providers.
  • Documents - Partnered with a company who provides document aggregation across Box, Google Drive, and other leading document sharing platforms.
  • Real Estate - I founded a startup that did MLS data aggregation across APIs, FTP locations, and scraped targets, providing regional, and specific zip code API solutions.
  • Advertising - I’ve worked with a company that aggregates advertising APIs bringing Google, Facebook, Twitter, and other platforms together into a single interface.

These are all reasons why I have been working on API aggregation, and spending time researching the subject. Here are some of the technologies, and approaches, I have been using to deliver on API aggregation.

  • OpenAPI - I heavily use the OpenAPI specification to aggregate API definitions, and leverage JSON Schema to make connections and map API and data resources together.
  • APIs.json - After working on the data.json project with the White House, inventorying open data across federal agencies, I developed an open specification for indexing APIs, and building collections, and other aggregate definitions for processing at discovery or runtime.
  • JSON Schema - All databases and data used as part of API operations possess a JSON Schema definition that accompanies the OpenAPI which defines access to an API. JSON Schema provides the ability to define references across and between individual schema.
  • JSON Path - XPath for JSON, enabling the ability to analyze, transform and selectively extract data from API requests and responses.

In an API utopia, everyone would use common API definitions, media types, and schema. In the real world, everyone does things their own way. API aggregation, and tools like OpenAPI, APIs.json, JSON Schema, and JSON Path, allow us to standardize through machine readable definitions that connect the dots for us. API aggregation is still very much a manual process, or something offered by an existing SaaS provider, without much innovation. The tools are out there, we just need more examples we can emulate, and tooling to support them.
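As one small example of connecting those dots, here is a minimal sketch of using JSON Path mappings to normalize two differently shaped API responses into a common schema. The mappings and payloads are hypothetical, and it assumes the jsonpath npm package.

```javascript
// A minimal sketch of mapping two differently shaped API responses into one
// common schema using JSON Path expressions.
const jp = require('jsonpath');

// Each source API gets a JSON Path mapping to the common fields we care about.
const mappings = {
  providerA: { name: '$.organization.title', phone: '$.contact.phone' },
  providerB: { name: '$.org_name', phone: '$.phones[0].number' }
};

function normalize(source, payload) {
  const map = mappings[source];
  return {
    name: jp.value(payload, map.name),
    phone: jp.value(payload, map.phone)
  };
}

// Two responses describing the same organization, shaped differently.
console.log(normalize('providerA', {
  organization: { title: 'Community Health Center' },
  contact: { phone: '555-0100' }
}));
console.log(normalize('providerB', {
  org_name: 'Community Health Center',
  phones: [{ number: '555-0100' }]
}));
```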

8. Please describe your experience and strategies for Lifecycle Management of APIs for increasing productivity and enabling VA to deliver high value APIs rapidly, on-boarding app developers and commercial off the shelf applications.

I feel like this overlaps with question number 14, but is more focused on on-boarding, and the client perspective of the API lifecycle. Question 14 feels like lifecycle optimization from an API provider perspective, and this is about efficiencies regarding consumption. Maybe I'm wrong, but I am a big believer in the ability of a central API portal, as well as federated portals, to increase API production, because they deliver the resources consumers need, in a way that allows them to onboard and consume across many developers and commercial software platforms.

My experience in fulfilling an API lifecycle from the consumer perspective always begins with a central portal, but one that possesses all the required API management building blocks to not just deliver APIs rapidly, but also incentivize consumption, and feedback in a way that evolves them throughout their life cycle:

  • Portal - Have a central location to discover and work with all API resources, even if there is a federated network of portals that work together in concert across groups.
  • Getting Started - Providing clear, frictionless on-boarding for all API consumers in a consistent way across all API resources.
  • Authentication - Simple, pragmatic authentication with APIs, that is part of a larger IAM scaffolding.
  • Documentation - Up to date, API definition driven, interactive API documentation that demonstrates what an API does.
  • Code Resources - Code samples, libraries, and SDKs that make integration painless and frictionless, allowing applications to quickly move from development to production.
  • Support - Providing direct and indirect support services that funnel back through feedback loops into the product roadmap.
  • Communications - A regular drumbeat around API platform communications, helping API consumers navigate discovery to integration, and keeping them informed regarding what is happening across the platform.
  • Road Map - A clear road map, as well as a resulting change log, that keeps API consumers in tune with what's next, reducing the gap that can exist between API provider and consumer.
  • Applications - A robust directory of certified applications, browser, platform, and 3rd party plugins and connectors, demonstrating what is possible via the platform.
  • Analysis - Logging, measuring, and analyzing API platform traffic, sign ups, logins, and other key data points, developing an awareness of what is working when it comes to API consumption, and quickly identifying where the friction is, and eliminating it with future releases.

Developer Experience (DX) is something that significantly speeds up the overall API lifecycle. Backend teams cannot efficiently define, deliver, and evolve their APIs without API consumers on-boarding, integrating, and providing feedback on what works, and what doesn't work. A central API portal strategy for Lighthouse API management is key to facilitating movement along the API lifecycle, reducing friction, eliminating road blocks, and reducing the chance an API will never be fully realized in production.

Lighthouse API management is a central API platform portal, but could also be a forkable DX experience that can be emulated by other internal groups, adding federated edges to the Lighthouse API management platform. Providing a network of consistent API portals, across many different groups, which share a common on-boarding, authentication, documentation, support, communication, and analysis approach. Shifting the lifecycle beyond just a linear start to finish thing, and making it something that works as a network of API nodes that all work together as a single platform.

9. Please describe your experience and strategies for establishing effective governance of the APIs.

I’ve been working with a variety of organizations around the topic of API governance for 2-3 years now, and with some recent advances I’m starting to see more API governance strategies mature, and become something that goes well beyond just API design.

API governance is something I've tuned into across a variety of enterprises, higher educational institutions, and government agencies. These are the areas I'm researching currently as part of my work as the API Evangelist.

  • Design Guide - Establishing API design guide(s) for use across organizations.
  • Deployment - Ensuring there are machine readable definitions for every deployment pattern, and they are widely shared and implemented.
  • Management - Quantify what API management services, tooling, and best practices groups should be following, and putting to work in their API operations.
  • Testing - Defining the API testing surface area, and providing machine readable definitions for all test patterns, while leveraging OpenAPI and other definitions as a map of the landscape.
  • Communication - Plan out the communication strategy as part of the API governance strategy.
  • Support - Defining what support mechanisms are in place for governance, as well as best practices for supporting individual API implementations.
  • Deprecation - Define what deprecation looks like even before an API is defined or ever delivered, establishing best practices for deprecation of all resources.
  • Analysis - Measuring, analyzing, and reporting on API governance. Identifying where it is being applied effectively, and where the uncharted territory lies.
  • Coaches - Train and deploy API coaches who work with every group on establishing, evolving, incentivizing, and enforcing API governance across groups.
  • Rating - Establishing a rating system for quantifying how compliant APIs are when it comes to API governance, providing an easy way to understand how close an API is, or isn't.
  • Training - The development and execution of training around API governance, working with all external groups to help define API governance, and take active role in its implementation.

Sadly, API governance isn’t something I’ve seen consistently applied across the enterprise, at institutions and government agencies. There is no standard for API governance. There are few case studies when it comes to API governance to learn from. Slowly I am seeing larger enterprises share their strategies, and seeing some universities publish papers on the topic. Providing some common building blocks we can organize into a coherent API governance strategy.

10. Please describe your experience and methodologies for DevOps/RunOps processes across deployments. Highlight strategies for policy testing, versioning, debugging etc.

API management is a living entity, and we focus on delivering API operations with flat teams who have access to the entire stack, from the backend to understanding application and consumer needs. All aspects of the API life cycle embrace a microservices, continuous evolution pace, with Github playing a central role from define to deprecation.

  • CI/CD - Shared repositories across all definitions, code, documentation, and other modules that make up the lego pieces of API operations.
  • Testing & Monitoring - Ensuring every building block has a machine readable set of assertions, tests, and other artifacts that can be used to verify operational integrity (a minimal testing sketch follows this list).
  • Microservices - Distilling down everything to a service that will meet a platform objective, and benefit an API consumer, in as small a package as possible.
  • Serverless - Leveraging functions that reflect a microservices frame of mind, allowing much smaller units of compute to be delivered by developers.
  • Versioning - Following semantic versioning, with version numbers reflecting MAJOR.MINOR.PATCH, and kept in sync across backend, API, and client applications.
  • Dependencies - Scanning and assessing for vulnerabilities in libraries, 3rd party APIs, and other ways dependencies are established across architecture.
  • Security - Scanning and assessing security risk introduced by dependencies, and across code, ensuring that testing and monitoring reflects wider API security practices.
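Here is the testing sketch referenced above, a minimal example of a machine readable check that can run as part of a CI/CD pipeline, asserting that an API responds correctly and within an acceptable amount of time. The endpoint and thresholds are hypothetical.

```javascript
// A minimal sketch of a CI/CD health check asserting an API's integrity.
const https = require('https');
const assert = require('assert');

function checkEndpoint(url, maxMilliseconds) {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    https.get(url, (res) => {
      res.resume(); // Drain the response body.
      try {
        assert.strictEqual(res.statusCode, 200, `Unexpected status ${res.statusCode}`);
        assert.ok(Date.now() - started < maxMilliseconds, 'Response too slow');
        resolve();
      } catch (err) {
        reject(err);
      }
    }).on('error', reject);
  });
}

// Fail the build if the health check does not meet its service level.
checkEndpoint('https://api.example.com/health', 1000)
  .then(() => console.log('API health check passed'))
  .catch((err) => { console.error(err.message); process.exit(1); });
```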

A DevOps focus is maintained across as many stops along the API life cycle as possible, and reflected in API governance practices. However, it is also recognized that a DevOps approach will not always be compatible with existing legacy practices, and custom approaches might be necessary to maintain specific backend resources, until their practices can be evolved, and brought into alignment with wider practices.

11. Please describe your experience and strategies employed for analytics related to runtime management, performance monitoring, usage tracking, trend analysis.

Logging and analysis is a fundamental component of API management, feeding an overall awareness of how APIs are being consumed, which contributes to the product road map, security, and overall platform reliability. The entire API stack should be analyzed, from the backend to the furthest application endpoints, whenever possible.

  • Network - Logging, and analysis at the packet and network level, understanding where the network bottlenecks are.
  • Instance - Monitoring and measuring any server instance, whether it is bare metal, virtual, or containerized, providing a complete picture of how each instance in the API chain is performing.
  • Database - When access to the database layer is present, adding the logging, and other diagnostic information to the analysis stack, understand how backend databases are doing their job.
  • Serverless - Understanding the performance and usage of each function in the API supply chain, making sure each lego building block is accounted for.
  • Regional - Understanding how API resources deployed in various regions are performing, but also adding the extra dimension of measuring and monitoring APIs from a variety of geographic regions.
  • Management - Leveraging API plans at the API gateway, and understanding API consumption at the user, application, and individual resource level. Painting a picture of how APIs are performing across access tiers.
  • DNS - Logging, and measuring runtime responsiveness at the DNS layer, understanding any setting, configuration, or other detail that may be creating friction at the DNS layer.
  • Client - Introducing logging, analysis, and tracking of errors at the SDK level, developing usage and performance reports from the production client vantage point.

API management has matured over the last decade to give us a standard approach to managing API access at consumption time, optimizing usage, limiting bad behavior, and incentivizing healthy behavior. API monitoring and testing practices have expanded this view to include the health, availability, and usage of APIs from the client perspective. All of this information gets funneled into the road map, refining API gateway plans, rate limits, and other run time systems to adjust and support desired API usage.
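As a small example of what analysis at the gateway layer can look like, here is a minimal sketch of pulling hourly call volume and latency for an API from CloudWatch using the AWS SDK for JavaScript (v2). The API name and time window are hypothetical, and each gateway vendor exposes similar analytics APIs.

```javascript
// A minimal sketch of pulling runtime usage and performance numbers for a
// gateway-managed API from CloudWatch.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

async function apiUsageLastDay(apiName) {
  const now = new Date();
  const base = {
    Namespace: 'AWS/ApiGateway',
    Dimensions: [{ Name: 'ApiName', Value: apiName }],
    StartTime: new Date(now.getTime() - 24 * 60 * 60 * 1000),
    EndTime: now,
    Period: 3600 // Hourly data points.
  };

  // Total call volume and average latency, hour by hour.
  const [count, latency] = await Promise.all([
    cloudwatch.getMetricStatistics(
      Object.assign({}, base, { MetricName: 'Count', Statistics: ['Sum'] })
    ).promise(),
    cloudwatch.getMetricStatistics(
      Object.assign({}, base, { MetricName: 'Latency', Statistics: ['Average'] })
    ).promise()
  ]);

  return { calls: count.Datapoints, latency: latency.Datapoints };
}

// Example: apiUsageLastDay('lighthouse-facilities').then(console.log);
```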

12. Please describe your experience with integrating MuleSoft with Enterprise Identity Management, Directory Services and implementing Role Based access.

I do not have any experience in this area with Mulesoft specifically, but I have worked in general with IAM and directory solutions on other platforms, with heavy usage of AWS IAM for securing the API gateway's interaction with existing backend systems.

13. Please describe your experience with generating and documenting API’s and the type of standards they follow. Describe approaches taken for hosting these documentations and keeping them evergreen.

API documentation is always a default aspect of the API platform, and delivered just like code, as part of DevOps and CI/CD workflows using Github. As with almost every stop along the API life cycle, all documentation is API definition driven, keeping things interactive, with the following elements always in play:

  • Definitions - All documentation is driven using OpenAPI, API Blueprint, or RAML, acting as the central machine readable truth of what an API does, and what schema it uses. API definition driven API documentation contributes to them always being up to date, and reflect the freshest view of any API.
  • Interactive - All API documentation is API driven, allowing for the documentation to be interactive, acting as an explorer, and dynamic dashboard for playing with and understanding what an API does.
  • Modular - Following a microservices approach, all APIs should have small, modular, API definitions, that drive modular, meaningful, and useful API documentation.
  • Visualizations - Evolving on top of interactive documentation features, we are beginning to weave more visual, dashboard, and reporting features directly into API documentation.
  • Github - Github plays a central role in the platform life cycle, with all API definitions and documentation running within Github repositories, and integrated as part of CI/CD workflows, and a DevOps frame of mind.

All platform documentation is a living, breathing element of the ecosystem. It should be versioned, evolved, and deployed along with other supporting microservice artifacts. Mulesoft has documentation tooling to support this approach, and there is a suite of open source solutions we can consider to support a variety of different types of APIs.
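As one example of the open source side of this, here is a minimal sketch of definition driven, interactive documentation, serving a small OpenAPI document alongside a Swagger UI rendering of it. The paths are hypothetical, and the swagger-ui-express package is just one of several open source options.

```javascript
// A minimal sketch of definition-driven, interactive documentation: a small
// OpenAPI document rendered with Swagger UI.
const express = require('express');
const swaggerUi = require('swagger-ui-express');

// The machine readable truth of what the API does, kept in version control
// alongside the code it documents.
const openApiDoc = {
  openapi: '3.0.0',
  info: { title: 'Facilities API', version: '1.0.0' },
  paths: {
    '/facilities': {
      get: {
        summary: 'List facilities',
        responses: { '200': { description: 'A list of facilities.' } }
      }
    }
  }
};

const app = express();

// Serve the raw definition for other tooling (tests, SDK generation, discovery).
app.get('/openapi.json', (req, res) => res.json(openApiDoc));

// Serve interactive documentation generated from the same definition.
app.use('/docs', swaggerUi.serve, swaggerUi.setup(openApiDoc));

app.listen(3000);
```

Because the docs and the definition come from the same artifact, keeping the definition current keeps the documentation evergreen.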

14. Please describe your proposed complete process lifecycle for publishing high quality, high value APIs with highest speed to market.

Evolving beyond question number 8, and addressing the API provider side of the coin, I wanted to share a complete (enough) view of the life cycle from the API provider perspective. Addressing the needs of backend API teams, as well as the core API team, when it comes to delivering usable, reliable APIs that are active, and enjoy healthy release cycles. While not entirely a linear approach, here are many of the stops along our proposed API lifecycle, applied to individual APIs, but applied consistently across the entire platform.

  • Definitions - Every API starts as an API definition, which follows the API throughout its life, and drives every other stop along the way.
  • Design - Planning, applying, evolving, and measuring how API design is applied, taking the definition, and delivering as a consistent API that complies with API governance.
  • Virtualization - Delivery of mock, sandbox, and other virtualized instances of an API allowing for prototyping, testing, and playing around with an API in early stages, or throughout its life
  • Deployment - Deploying APIs using a common set of deployment patterns that drives API gateway behavior, connectivity with backend systems, IAM, and other consistent elements of how APIs will be delivered.
  • Management - The employment of common approaches to API management, some of which will be delivered and enforced by the API gateway, but also through other training, and sharing of best practices made available via API governance.
  • Testing & Monitoring - Following DevOps, CI/CD workflows when it comes to developing tests, and other artifacts that can be used to monitor, test, and assert that APIs are meeting service level agreements, and in compliance with governance.
  • Security - Considering security at every stop along the API life cycle, scanning, testing, and baking in security best practices all along the way.
  • Client & SDK - Establish common approaches to generating, delivering, and deploying client solutions that help API consumers get up and running using an API.
  • Discovery - Extending machine readable definitions to ensure an API is indexed and discoverable via service directories, documentation, and other catalogs of platform resources.
  • Support - Establishing POCs, and support channels that are available as part of any single API's operations, ensuring it has the required support that meets the minimum bar for service operations.
  • Communications - Identifying any additions to the overall communication strategy around an API's life cycle, announcing that it exists on the blog, sharing changes and releases on the road map, and showcasing integrations via the platform Twitter account.
  • Road Map - Making sure there are road map, and change log entries for an API showing where it is going, and where it has been.
  • Deprecation - Lay out the deprecation strategy for an API as soon as it is born, setting expectations regarding when an API will eventually go away.

This life cycle will play out over and over for each API published on the Lighthouse API platform. It will independently be executed by API teams for each API they produce, and replayed with each major and minor release of an API. A well defined, and well traveled API life cycle helps ensure consistency across teams, helping enforce compliance, familiarity, and reusability across APIs, no matter which team is behind the facade.

15. Please describe your experience and approach towards establishing a 24x7 technical support team for the users, developers and other stakeholders of the platform.

Support is an essential building block of any successful API platform, as well as a default aspect of every single API's individual life cycle. We break API platform support into two distinct categories.

API support should begin with self-service support, encouraging self-sufficiency of API consumers, and minimizing the amount of platform resources needed to support operations:

  • FAQ - Publishing a list of frequently asked questions as part of getting started and API documentation, keeping it an easy to access, up to date list of the most common questions API consumers face.
  • Knowledge Base - Publishing of a help directory or knowledge base of content that can help API consumers understand a platform, providing access to high level concepts, as well as API specific functionality, errors, and other aspects of integration.
  • Forum - Operate an active, moderated, and inclusive forum to assist with support, providing self-service answers to API consumers, which also allows them to asynchronously get access to answers to their questions.

Beyond self-service support all API platforms should have multiple direct support channels available to API consumers:

  • Monitoring - Providing a live, 3rd party status page that shows monitoring status across the platform, and individual APIs, including historical data.
  • Email - Allow API consumers to email someone and receive support.
  • Tickets - Leveraging a ticketing system, such as Zendesk, or other SaaS or possibly open source solution, allowing API consumers to submit private tickets, and move each ticket through a support process.
  • Issues - Leveraging Github issues for supporting each component of the API lifecycle, from API definition, to API code, documentation, and every other stop. Providing relevant issues and conversation going on around each of the moving parts.
  • SMS - SMS notifications regarding platform events, support tickets, platform monitoring, and other relevant areas of platform operations.
  • Twitter - Providing public support via a Twitter account, responding to all comments, and routing them to the proper support channel for further engagement.

A combination of self-service and direct support channels allows for a resource starved core API team, as well as individual backend teams to manage many developers and applications, across many API resources in an efficient manner. Ensuring API consumers get the answers they are looking for, while ensuring relevant feedback and comments end up back in the product road map, with each appropriate product team.

16. Please describe your experience in establishing metrics and measures for tracking the overall value and performance of the platform.

This somewhat overlaps with question #11, but I'd say it focuses in on the heart of metrics and analysis at the API management level. Understanding performance and reliability through the entire stack is critical to platform reliability, but the API management core is all about developing an awareness of how APIs are being consumed, and the value generated as part of this consumption. While performance definitely impacts value, we focus on API management to help us measure and understand consumption of each resource, and understand and measure the value delivered in that consumption.

I’ve been studying how API management establishes the metrics needed, and measures and tracks API consumption, trying to understand value generation using the following elements:

  • Application Keys - Every API call possesses a unique key identifying the application and user behind the consumption. This key is essential to understanding how value is being generated in the context of each API resource, as well as across groups of resources.
  • Active Users - New, no name, automated API users are easy to attract. It is the active, identifiable, communicative users we are shooting for, so quantifying and measuring what an active user is, and how many exist on the platform, is essential to understanding value generation.
  • Active APIs - Which APIs are the APIs that consumers are integrating with, and using on a regular basis? Not all APIs will be a hit, and not all APIs will be transformative, but there can be utility, and functional APIs that make a significant impact. API management is key to understanding how APIs are being used, as well as how they are not being used, painting a complete picture of what is valuable, and what is not.
  • Consumption - Measuring the nuance of how API consumers are consuming APIs, beyond just the number of calls being made. What is being consumed? What is the frequency of consumption? What are the limitations, constraints, and metrics for consumption, and the incentives for increasing consumption?
  • Plans - Establishing many different access tiers and plans, containing many different APIs, allowing API service composition to evolve, creating different API packages, or products, that deliver across many different web, mobile, and device channels.
  • Design Patterns - Which design patterns are being used, and which are driving consumption and value creation? Are certain types of APIs, or specific types of parameters, favored and backed up by API consumption numbers? Measuring successful API design, deployment, management, and testing patterns, and identifying the not so successful patterns, then incorporating these findings into the road map as well as platform governance.
  • Applications - Considering how metrics can be applied at the client, and SDK level, providing visibility and insight into how APIs are being consumed, from the perspective of each type of actual application being used.

API management is the future of public data, content, and other resource management. Measuring, analyzing, and reporting upon API consumption is required to evolve healthy API resources forward, and develop viable applications that put API resources to use. API management will continue to become the tax system, the royalty system, and the value capture mechanism across government agencies and public resources in the coming years.
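To ground this a little, here is a minimal sketch of turning raw gateway log entries into per-consumer consumption metrics, the kind of awareness everything above depends on. The log shape and values are hypothetical.

```javascript
// A minimal sketch of summarizing gateway logs into per-consumer metrics.
const logEntries = [
  { apiKey: 'abc123', path: '/organizations', status: 200 },
  { apiKey: 'abc123', path: '/locations', status: 200 },
  { apiKey: 'def456', path: '/organizations', status: 429 }
];

function summarizeConsumption(entries) {
  const byConsumer = {};
  entries.forEach((entry) => {
    const summary = byConsumer[entry.apiKey] ||
      (byConsumer[entry.apiKey] = { calls: 0, errors: 0, byPath: {} });
    summary.calls += 1;
    summary.byPath[entry.path] = (summary.byPath[entry.path] || 0) + 1;
    if (entry.status >= 400) summary.errors += 1;
  });
  return byConsumer;
}

// Which consumers are active, what they use, and where they hit limits.
console.log(summarizeConsumption(logEntries));
```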

17. Please describe your experience and the approach you would take as the API Program Core Team to deploy an effective strategy that will allow VA to distribute the development of API’s across multiple teams and multiple contractor groups while reducing friction, risk and time to market.

A microservice, CI/CD, DevOps approach to providing and consuming APIs has begun to shift the landscape for how API platforms support the API life cycle across many disparate teams, trusted external partners, and even 3rd party application and system developers. Distilling all elements of the API supply chain down to modular, well defined components helps establish a platform that can be centralized, while encouraging a consistent approach to the federated delivery of APIs and supporting resources.

I focus on making API operations more reusable, agile, and effective in a handful of areas:

  • Github - Everything starts, lives, and is branched and committed to via Github, either as a public or private repository. Definitions, code, documentation, training, and all other resources exist as repositories allowing them to be versioned, forked, collaborated around, and delivered wherever they are needed.
  • Governance - Establishing API design, deployment, management, testing, monitoring, and other governance practices, then making them available across teams as living, collaborative documents and practices, allowing teams to remain independent while working together in concert at scale, using a common set of healthy practices.
  • Training - Establishing a training aspect to platform operations, making sure materials are readily developed to support all stops along the API lifecycle, and the curriculum is circulated, evolved, and made available across teams. Ensuring that all training remains relevant, up to date, and versioned to keep in sync with all platform changes.
  • Coaching - Cultivating a team of API coaches who are embedded across teams, but also report back and work with the central API team, helping train and educate around platform governance, while working to improve a team’s ability to deliver within this framework. API coaches make API compliance a two way street, feeding resources downstream to teams, but then also working with teams to ensure their needs are being met, reducing any friction that exists with delivering in a consistent way that meets the platform objectives.
  • Modular - Keeping everything small, modular, and limiting the time and budget needed to define, design, deploy, manage, version, and deprecate, allowing the entire platform to move forward in concert, using smaller release cycles.
  • Continuous - API governance, training, coaching, and all other operational level support across teams is also continuously being deployed, integrated, and evolved, ensuring that teams can work independently, but also be in sync when possible. Reducing bottlenecks, eliminating road blocks, and minimizing friction between teams, and across APIs that originate from many different systems.

API operations need to work much like each individual API: small, modular, well defined, doing one thing well, but in a way that can be harnessed at scale to deliver on platform objectives. The API lifecycle needs to be well documented, with disparate teams well educated regarding what healthy API design, deployment, management, and testing looks like. We make operations reusable and forkable across teams by establishing machine readable models and supporting tooling that help teams be successful independently, but in alignment with the greater platform good.

18. Please describe the roles and skill sets that you would be assembling to establish the API Program core team.

Our suggested team for the Lighthouse Core API management implementation reflects how a federated, yet centralized API platform team should work. There are a handful of disparate groups, each bringing a different set of skills to the table. We are always working to augment and grow this team to meet the needs of each project we work on, and we are open to working with 3rd party developers. We bring forward some skills we already have on staff, while also introducing new skills, bringing a well rounded set of voices to the table to support the Lighthouse API platform.

  • Architects - Backend, platform, and distributed system architects.
  • Database - Variety of database administration, analysts, and scientists.
  • Veterans - Making sure the veteran perspective is represented on the platform team.
  • Healthcare - Opening the door to some healthcare practitioner advisor roles, making sure a diverse group of end users are represented.
  • Designers - Bringing forward the required API design talent required to deliver consistent APIs at web scale.
  • Developers - Provide a diverse set of developer skills supporting a variety of programming languages and platforms.
  • Security - Ensure the necessary security operations skills are present to pay attention to the bigger security picture.
  • Testing - Provide dedicated testing and quality assurance talent to round off the set of skills on the table.
  • Evangelists - Make sure there is an appropriate amount of outreach, communication, and storytelling occurring.

As the number of APIs grows, and an awareness of what consumers are desiring develops, additional team members will be sought to address specific platform needs around emerging areas like machine learning, streaming and real time, or other aspects we haven’t addressed in this RFI.

19. How would you recommend the Government manage performance through Service Level Agreements? Do you have commercial SLAs established that can be leveraged?

Each stop along the provider dimensions, as well as the consumer perspective, will be quantified, logged, measured, and analyzed as part of API management practices. As part of this process, each individual area will have its own set of metrics and questions that support API governance objectives. These are some of the areas that will be quantified and measured for inclusion in service level agreements.

  • Database - What are the benchmarks for the database, and what are the acceptable ranges for operation? Log, measure, and evaluate historically, establishing a common scoring for the delivery of database resources.
  • Server - Similar to the database layer. What are the units of compute available for API deployment and management, and what are the benchmarks for measuring the performance, availability, and reliability of each instance, no matter how big or small?
  • Serverless - Establishing benchmarks for individual functions, and defining appropriate scaling and service levels for each script, beyond each API.
  • Gateway - Using gateway logging to measure and understand performance and availability at the gateway level. Factoring in caching, rate limits, as well as backend dependencies, then establishing a level of service that can and should be met.
  • APIs - Through monitoring and testing, each API should have its tests, assertions, and overall level of service defined. Is the API directly meeting its service level agreement? Is an API meeting API governance requirements, as well as meeting its SLA? What is the relationship between performance and compliance?

Each stop along the API lifecycle should have its own guidance when it comes to governance, and possess a set of data points covering monitoring, testing, and performance. All of this can be used to establish a comprehensive service level agreement definition that goes beyond just uptime and availability, ensuring that APIs are meeting the SLA, and are doing so through API governance practices that are working.
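As a sketch of what this could look like in practice, the service levels for each layer might be captured in a simple machine readable definition like the YAML below. The layers mirror the list above, while the thresholds and field names are hypothetical placeholders, not recommendations.

```yaml
# Hypothetical service level definition spanning each layer of the stack.
# Thresholds and field names are placeholders, not recommendations.
sla:
  availability_percent: 99.9        # measured monthly
  layers:
    database:
      query_p95_ms: 100
    server:
      cpu_utilization_max_percent: 75
    gateway:
      p95_latency_ms: 250           # includes caching and rate limit overhead
      error_rate_max_percent: 0.5
    api:
      p95_latency_ms: 500
      assertions_passing_percent: 100
  reporting:
    frequency: monthly
```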

20. If the Government required offerors to build APIs as part of a technical evaluation, what test environments and tools would be required?

We would be willing to build APIs in support of a technical evaluation, using an existing API gateway solution on any of the existing cloud platforms. We would need access to deploy mock and virtualized APIs using a gateway environment, as well as to configure and work with IAM, and some sort of DNS layer, to manage addressing and routing for all prototypes deployed.

Our team is capable of working in any environment, and we suggest test environments and tooling reflect existing operations, supporting what is already in motion. We can make recommendations, bring our own cloud, SaaS, and open tooling to the table, and build a case for why another tool might be needed and make more sense as part of the technical evaluation.

To facilitate the definition and evolution of a technical evaluation, we recommend starting a private Github repository, initiated either by the VA or an offeror, where a set of requirements and definitions can be established as a seed for the technical evaluation, as well as quickly become many seeds for actual platform implementation.

In Conclusion
That was a little more work than I anticipated. I had scheduled about six hours to think through this, and go through my research to find relevant answers. When I started this response, I was planning on adding a whole other section for suggestions beyond the questions being asked, but I think the questions get at more of the proposed strategy than I anticipated. There were some redundant areas of this RFI, as well as some foreshadowing regarding what would be possible with such an API, but I think the questions were able to get at what is needed.

I’m not sure how my responses will size up with my partners’, or alongside the other responses to the RFI. My responses reflect what I see working across the wider public API space, and are based upon what I’ve learned opening up data, and trying to move forward other API projects across federal agencies, all the way down to the municipal level. I’m guessing my approach will be heavier on the business and politics of API platform operations, and lighter on the technical elements, addressing what I see as the biggest obstacles to API success–culture, silos, and delivering consistently across disparate groups working under a common vision.

Next, I will publish this and share it with my partners for inclusion in the group’s final response. I’ll make it available on API Evangelist, so that others can learn from the response to the RFI. I’m not sure where this will go next, but I can’t ever miss out on an opportunity to respond to inquiries about API operations at this level. This RFI was the most sophisticated API-focused vision to come out of the VA in my experience. Regardless of what happens next, it demonstrates to me that there are groups within the VA that understand the API potential, and what it can do for veterans, and the organizations, hospitals, and communities that support them.


We Are All Using APIs

When I talk to ordinary people about what I do as the API Evangelist, they tend to think APIs don’t have much of anything to do with their world. APIs exist in a realm of startups, technology, and make-believe that doesn’t have much to do with their ordinary lives. When trying to make the connection with folks on airplanes, in the hotel lobby, and at the coffee shop, I always resort to the most common API-driven thing in all of our lives–the smart phone. Pulling out my iPhone is the quickest way I can go from zero to API understanding with almost anyone.

When people ask what an API is, or how it has anything to do with them, I always pull out my iPhone, and say that all of the applications on the home screen of your mobile phone use APIs to communicate. When you post something to your Facebook wall, you are using the Facebook API. When you publish an image to Instagram, you are using the Instagram API. When you check the balance of your bank account, you are using your bank’s API. APIs are everywhere. We are all using APIs. We are all impacted by good APIs, and bad APIs. Most of the time we just don’t know it, and are completely unaware of what is going on behind the curtain that is our smart phones.

I started paying attention to APIs in 2010 when I saw the impact mobile phones were beginning to have on our lives, and the role APIs were playing behind this new technological curtain. In 2017, I’m watching APIs expand to our homes via our thermostats and other appliances. I’m seeing APIs in our cars, and applied to security cameras, sensors, signage, and other common objects throughout public spaces. APIs aren’t something everyone should be doing, however I feel they are something that everyone should be aware of. I usually compare it to the financial and banking system. Not everyone should understand how credit and banking systems work, but they should damn sure understand who has access to their bank account, how much money they have, and other functional aspects of the financial system.

When it comes to API awareness I don’t expect you to be able to write code, or understand how OAuth works. However, you should know whether an online service you are using has an API, and you should understand whether or not OAuth is in use. Have you used OAuth? I’m pretty sure you have as part of your Facebook or Google logins, when you have authenticated with 3rd party applications, giving them access to your Facebook or Google data. You probably just didn’t know that what you were using was OAuth, and that it was providing API access, as you clicked next, next, through the process. I’m not expecting you to understand the technical details, I am just injecting a little more awareness around the API-driven things you are already doing.

We are all using APIs. We are all being impacted by APIs existing, or not existing. We are being impacted by unsecured APIs (i.e. Equifax). We are all being influenced and manipulated by bots who are using Twitter, Facebook, and other APIs to bombard us with information. On a daily basis we are being targeted with advertising that personalizes, and surveils, us using APIs. We should all be aware that these things are happening, and have some ownership in understanding what is going on. I don’t expect you to become an API expert, or learn to hack, I’m just asking that you assume a little bit more accountability when it comes to understanding what goes on behind the digital curtain that produces our online world.

I recently had the pleasure of engaging with my friend Bryan Mathers (@bryanmathers) of Visual Thinkery, and as I was telling this story, he doodled the image you see here, giving me a digital representation of how I use my mobile phone to help people understand APIs. It gives me a visual anchor for telling this story over and over here on my blog. My readers who have heard it before can tune it out. These stories are for my new readers, and for me to link to when sharing this story with folks who are curious about APIs, and what I do. I feel it is important to help evangelize APIs, not because everybody should be doing them, but because everyone should be aware that others are doing APIs. I’m stoked to have visual elements that I can use in this ongoing storytelling, helping connect my readers and listeners with APIs, using objects that are relevant and meaningful to their world. Thanks Bryan!!

Photo Credit: Bryan Mathers (@bryanmathers), of Visual Thinkery.


I Am Talking About Jekyll As A Hypermedia Client At APIStrat in Portland OR Next Week

Static website and headless CMS approaches to providing API-driven solutions have grown in popularity in recent years. Jekyll has been leading the charge when it comes to static website deployment, partly due to Github being behind the project, and their adoption of it for Github Pages. I’ve been pushing forward a new approach to using Jekyll as a hypermedia client to help deliver some of my API training and curriculum, and as part of this work I’m giving a talk at APIStrat next week on the concept. APIStrat is a great forum for this kind of project, helping me think through things in the form of a talk, giving me the opportunity to share with an audience and get immediate feedback on its viability, which I can then use to push forward my thinking on this aspect of my API work.

If you aren’t familiar with Jekyll, I recommend spending some time reading about it, and even implementing a simple site using Github Pages. I have multiple non-developers in my life who I’ve introduced to Jekyll, and it has made a significant impact on the way they do their work. Even if you do know about Jekyll, I additionally recommend spending time learning more about the underscore data folder, and the concept of Jekyll collections. Every Jekyll site has its default data collection, plus the opportunity to create any additional collection, using any name you desire. Within these folders you can publish static CSV, JSON, and YAML data files, using any schema you desire. All of these data collections then become available for referencing throughout the static Jekyll website or application.

All of this functionality is native Jekyll. Nothing revelatory from me. What I’m looking to push forward is what happens when the data collections use a hypermedia media type. I’m using Siren, which allows me to publish structured data collections, complete with links that define a specific experience across the data and content stored within collections. The next piece of the puzzle involves leveraging Jekyll, and its use of Liquid, as a hypermedia client, allowing me to build resilient websites and applications that are data driven, but can also evolve and change without breaking. All I do is update the static data behind them, with minor revisions to the client–all using Github’s API or Git infrastructure.
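To give a feel for what this looks like, here is an example of a Siren-style entry published as YAML in a Jekyll data folder. The file name, properties, and URLs are all made up for illustration, but the important part is that the links travel with the data, so the Liquid templates only ever follow links instead of hard-coding paths.

```yaml
# _data/curriculum.yml -- a Siren-style collection stored as static YAML.
# File name, properties, and URLs are hypothetical.
class:
  - curriculum
properties:
  title: API Design 101
  modules: 12
entities:
  - class:
      - module
    properties:
      title: What Is OpenAPI?
    links:
      - rel:
          - self
        href: /curriculum/what-is-openapi/
links:
  - rel:
      - self
    href: /curriculum/
  - rel:
      - next
    href: /curriculum/what-is-openapi/
```

A Liquid include then loops over the links to render navigation, which is what lets the data evolve without breaking the client.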

I’ve found Jekyll’s data features to be infinitely beneficial, and they are something I use to run my entire network of sites. Adding the hypermedia layer is allowing me to push forward the storytelling experience around my API research and curriculum. The first dimension I find interesting is that I can publish updates to the data for each project without breaking the client–classic hypermedia benefits. Additionally, I can publish data and changes using Github workflows, and because every application runs as an independent Github Pages driven website, it can stay in sync with updates via commits, or stay completely independent and never receive any changes, which introduces another dimension of client stability–which is also very hypermedia.

I am still in the beta phase of evolving some of my API projects to use this approach. I’m pioneering this work using my API design curriculum, and I’ll be publishing an instance as part of my session at APIStrat in Portland, OR next week. I’ll make sure I cover some of the Jekyll and Github basics for people in the audience who aren’t familiar with these areas. Then I’ll walk through how I’m using Jekyll collections, includes, and the Liquid syntax to behave as a hypermedia client for my Siren data collections. All my talks live on Github, so if you can’t make it you can still follow along with my work. However, if you are going to be in Portland next week, make sure you come by and give a listen to my talk–I’d love to get your feedback on the concept.


Helping Business Users Get Over Perceived Technical Gaps When It Comes To API Design

Every single API project I’m working on currently has one or more business users involved, or specifically leading the work. With every business user, no matter how fearless they are, there is always a pretty heavy perception that some things are over their head. I see this over and over when it comes to API design, and the usage of OpenAPI to define an API. I’ve known a handful of folks who aren’t programmers and have learned OpenAPI fluently, but for the most part, business users tend to put up a barrier when it comes to learning OpenAPI–it lives in the realm of code, and exists beyond what they feel they are capable of.

I get that folks are turned off by being exposed to code. Learning to read code takes a significant amount of time, and with more frameworks, libraries, and other layers, you can find yourself pretty lost, pretty quickly. However, with OpenAPI, everything I do tends to be YAML, making it much more readable to humans. While there are rules and structure to things, I don’t feel it is out of the realm of the average user to study, learn, and eventually bring into focus. Along with the OpenAPI rules, there is a good deal of HTTP literacy required to fully understand what is going on, but I feel like the API design process is a much more forgiving environment for both developers and business users to learn these things.
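For example, here is the kind of hypothetical OpenAPI excerpt, in YAML, that I like to put in front of business users. The API and its fields are made up, but even without any programming background it is possible to read this and understand what the contract is saying.

```yaml
# A made up, minimal OpenAPI (Swagger 2.0) excerpt -- readable as plain YAML.
swagger: "2.0"
info:
  title: Company Directory
  version: "1.0"
paths:
  /employees:
    get:
      summary: Return a list of employees
      parameters:
        - name: department
          in: query
          type: string
          description: Limit results to a single department
      responses:
        "200":
          description: A list of employees
```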

I put the ownership of everything technical being complicated squarely on the shoulders of IT and developer folks. We’ve gone out of our way to make this stuff difficult to access, and out of reach of the average person. We’ve spent decades keeping people we didn’t see fit from having a seat at the table. However, increasingly I also feel like there is a significant amount of ownership to be given to business users who are willing to put up walls when there really is no reason. I think the API design layer is one aspect of doing business with APIs where the average business user throws up unnecessary bluster around technical complexity, where it doesn’t actually exist. The definition of an API is simply a contract, no more complicated than a robust Word document or spreadsheet, completely within the realm of understanding for any engaged user.

There is a pretty big opportunity to narrow the gap that exists between technical and business users with APIs. The challenge will be that many technologists want to keep things closed off to business users so they can charge for, and generate revenue from, bridging the gap. We also have to figure out how to help business users better understand where the technical line is, the importance of becoming OpenAPI fluent, and having a stake in the API design conversation. Once I get back to my API training work, I’m going to think about a version of my API design curriculum specifically for non-developer users. My goal is to figure out what it takes to onboard more business users to the world of API design, and make sure we balance out who is at the table when we are defining API solutions.


A Suite Of Human Services API Definitions Using OpenAPI

I’m needing to quantify some of the work that has occurred around my Human Services Data Specification work as part of a grant we received from Stanford. The grant has helped us push forward almost three separate revisions of the API definition for working with human services data, and one of the things I’m needing to do is quantify the work that has occurred specifically around the OpenAPI definitions. At this point the specification is pretty verbose, and now spans multiple documents, making it difficult to quantify and share within an executive summary. To help support this, I wanted to craft some language that could help introduce the value that has been created to a non-technical audience.

The Human Services Data Specification (HSDS) provides the common schema for accessing, storing, and sharing human services data, providing a machine readable definition that human service practitioners can follow in their implementations. The Human Services Data API (HSDA) takes this schema and provides an API definition for accessing, as well as adding, updating, and deleting data using a web API. While there are a growing number of code projects emerging that support HSDS/A, the center of the project is a set of OpenAPI definitions that outline the finer details of working with human services data via a web API.

With the assistance of the grant from Stanford, Open Referral was able to move the HSDA specification forward from version 1.0, to 1.1, to 1.2, and now we are looking at delivering version 1.3 of the specification before the end of 2017. The core OpenAPI definition for HSDA provides guidance for the design of human services APIs, with a focus on the core set of resources needed to operate a human services project. There are a handful of core resources defined as part of what is called HSDA core:

  • Organizations (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting organizational data. Describing the paths, parameters, and HSDS schema required to effectively communicate around organizations delivering human services.
  • Locations (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting location data. Describing the paths, parameters, and HSDS schema required to effectively communicate around specific locations that provide human services.
  • Services (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting services data. Describing the paths, parameters, and HSDS schema required to effectively communicate around specific human services.
  • Contacts (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting contact data. Describing the paths, parameters, and HSDS schema required to effectively communicate around contacts associated with organizations, locations, and services delivering human services.

This is considered to be the HSDA core specification, focusing on exactly what is needed to manage organizations, locations, services, and contacts involved with human services, and nothing more. As of version 1.2 of the specification we have spun off several additional specifications, which support the HSDA core specification, but are meant to reduce the complexity of the core specification. Here are seven additional projects we’ve managed to accomplish as part of our recent work:

  • HSDA Search (OpenAPI Definition) - A machine readable API definition for searching across organizations, locations, services, and contacts, as well as their sub-resources.
  • HSDA Bulk (OpenAPI Definition) - A machine readable API definition dictating how bulk operations around organizations, locations, services, and contacts can occur, minimizing the impact on core systems.
  • HSDA Meta (OpenAPI Definition) - A machine readable API definition describing all the metadata in use as part of any HSDA implementation, providing a history of everything that has occurred.
  • HSDA Taxonomy (OpenAPI Definition) - A machine readable API definition for managing one or more taxonomies in use, and how these taxonomies are applied to core organizations, locations, services, and contacts.
  • HSDA Management (OpenAPI Definition) - A machine readable API definition for the management level details of operating an HSDA implementation, providing guidance for user access, application management, service usage, and authentication and logging for the HSDA meta system.
  • HSDA Orchestration (OpenAPI Definition) - A machine readable API definition for orchestrating data exchanges between HSDA implementations, allowing scheduled or event-based integrations to be executed, involving one or many HSDA implementations.
  • HSDA Validation (OpenAPI Definition) - A machine readable API definition for validating that an HSDA implementation follows the API and data schema, providing a validation API that can be used to keep any platform compliant.

The Stanford grant has allowed us to move HSDA forward three separate versions, but it has also given us the time to properly break HSDA down into sensible, pragmatic services that do one thing and do it well. The OpenAPI specifications provide a machine readable blueprint that can be used by any city, county, or other organization to define, implement, and manage their human services implementation, keeping it in sync with Open Referral’s guidance. Each of the OpenAPI definitions provides a distilled down version of Open Referral guidance that anyone can follow within individual operations–designed for both business and technical users to adopt.

The HSDA specification is defined using OpenAPI, an open source specification format for defining web APIs, which is now part of the Linux Foundation. OpenAPI allows us to define HSDA using JSON Schema, but then also allows us to define how that schema is used as part of API requests and responses. It provides a YAML or JSON definition of the HSDA core, as well as each individual project. These definitions are then used to design, deploy, and manage individual API implementations, including providing documentation, code libraries, testing, and other essential aspects of operating an API that supports web, mobile, device, voice, spreadsheet, and other types of applications.
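To illustrate what one of these definitions contains, here is an abbreviated, simplified sketch in the spirit of the HSDA core organizations definition. It is not the official specification, just an example of how OpenAPI describes the paths, parameters, and schema in YAML.

```yaml
# Simplified sketch in the spirit of an HSDA core definition -- not the official spec.
swagger: "2.0"
info:
  title: HSDA Organizations (example)
  version: "1.2"
paths:
  /organizations/:
    get:
      summary: Get a list of organizations delivering human services
      parameters:
        - name: query
          in: query
          type: string
          description: Text search against organization names and descriptions
      responses:
        "200":
          description: A list of organizations
          schema:
            type: array
            items:
              $ref: "#/definitions/organization"
definitions:
  organization:
    type: object
    properties:
      id:
        type: string
      name:
        type: string
      description:
        type: string
```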

Hopefully this breaks down how the HSDA OpenAPI specifications are central to what Open Referral is doing. They provide the guidance that other human service providers can follow, going beyond just the data schema, and actually helping ensure that access across many different implementations works in concert. The HSDA OpenAPIs act as a set of rules that can guide individual implementations, while also ensuring their interoperability. When each human services provider adopts one of the HSDA specifications they are establishing a contract with their API consumers and application developers, promising they’ll deliver against this set of rules, ensuring their API will speak the same language as other human service APIs that also speak HSDA. This is the objective of Open Referral, and why we use OpenAPI as the central definition for everything we do.


Some New API Evangelist Art

When I first started API Evangelist I spent two days trying to create a logo. I then spent another couple days trying to find a service to create something. Unhappy with everything I produced, I resorted to what I considered a temporary logo, where I just typed a JSON representation of the logo, mimicking what a JSON response for calling a logo through an API might look like. Seven years later, the logo has stuck, resulting in me never actually investing any more energy into my logo.

The API Evangelist imagery is long overdue for an overhaul. I stopped wearing the logo on my signature black t-shirts a couple years back, and I do not want to reach the 10 year mark before I actually do anything new. My logo, and the other artwork I’ve accumulated over the last several years, played their role, but I’m stoked to finally begin the process of evolving some artwork for API Evangelist. To help me move the needle I’ve begun working with my friend Bryan Mathers (@BryanMMathers), experiencing his Visual Thinkery induced journey, where he generates image ideas by engaging in a series of conversations, producing whimsical, colorful, and entertaining images for anything he can imagine out of our discussions. It is something I’ve experienced as part of our parent company Contrafabulists, and I’m stoked to experience it as part of my API Evangelist work.

Bryan doodled the butterfly image while we were talking, and didn’t anticipate it would be the one I’d choose to be the logo. Honestly, at first I didn’t think it was logo quality, but after spending time looking at it and thinking about it, I feel it suits what I’m doing very well. It still has the technical elements in the brackets of the wing outline, and the digital or pixel nature of the color in the wings, but it moves beyond the tech and represents the natural, more human side of things I would like to emphasize. The logo represents my API Evangelist ideas, and what I want them to be when I put them out there on my blog, and in my other work.

At first, a butterfly didn’t feel like Kin Lane, but I’m not creating a Kin Lane logo. I am creating an API Evangelist logo. When you think about the caterpillar to butterfly evolution, and even the butterfly effect, it isn’t a stretch to apply all of this to what I work to do as the API Evangelist. I’m just looking to plant ideas in people’s heads with my work. I want my ideas to grow, expand, and take flight on their own, making an impact on the world. I enjoy seeing my ideas fluttering around, adding color, motion, and presence around the API community. At a moment when I couldn’t imagine any image to represent API Evangelist, Bryan was able to extract a single image that I think couldn’t better represent what it is I do. #VisualThinkery

Bryan has created a horizontal and a vertical logo for me, using the butterfly. He’s also created a handful of stamps and supporting images for my work. I will be sharing these other elements as part of my storytelling, and see if I can find places for them to live more permanently somewhere on my network of sites. Luckily, my website is pretty minimal, and the change of logo works well with it. I’m pretty happy with the change, and the introduction of color. Thanks Bryan for the work. I’m a little sad to retire my other logo, but it has run its course. I’ll still use it as a backdrop or background in some of my social profiles and other work, as I think it reflects the history of API Evangelist. However, I am pretty stoked to finally have some new art for API Evangelist that adds some new color to my work.


The API Portal Outline For A Project I Am Working On

I am working through a project for a client, helping them deliver a portal for their API. As I do with any of my client recommendations, I take my existing API research and refine it to help craft a strategy that meets their specific needs. Each time I do this it gives me a chance to rethink some of the recommendations I’ve already gathered, as well as learn from new types of projects. I’ve taken the building blocks from my API portal research, as well as my API management research, and have taken a crack at organizing them into an outline that I can use to guide my current project.

Here is a walk through of the outline I’m recommending as part of a basic API portal implementation, to support a simple public API:

  • Overview - / - Everything starts with a landing page, with a simple overview of what an API does.

Then you need some basics to help make on-boarding as frictionless as possible, providing everything an API consumer needs to get going:

  • Getting Started - /getting-started/ - Handful of steps with exactly what is needed to get started.
  • Authentication - /authentication/ - An overview of what type of authentication is used.
  • Documentation - /documentation/ - Documentation for the APIs that are available.
  • FAQ - /faq/ - Answer the most common questions.
  • Code - /code/ - Provide code samples, libraries, and SDKs to get going.

Then get API consumers signed up, or able to login and get at their API keys as quickly as you possibly can:

  • Sign Up - /developer/ - Provide a simple sign up for a developer account.
  • Login - /developer/ - Allow users to quickly log back in after they have an account.

Next, provide a wealth of communication and support mechanisms, helping make sure developers are aware of what is going on:

  • Blog - /blog/ - A simple blog dedicated to sharing stories about the API.
  • Support - /support/ - Offer up a handful of support channels like email and tickets.
  • Road Map - /road-map/ - Provide a road map of what the future will hold for the API.
  • Issues - /issues/ - Share what the known issues are for the API platform.
  • Change Log - /change-log/ - Publish a log of everything that has changed with the platform.
  • Status - /status/ - Provide a status page showing the availability of API services.
  • Security - /security/ - Publish an overview of what security practices are in place.

Make sure your consumers know how to get involved at the right level, making plans, pricing, and partnership opportunities easy to find:

  • Plans - /plans/ - Offer a single page with API access plans and pricing information.
  • Partners - /partners/ - Share what the partnership opportunities are, as well as existing partners.

Then let’s take care of the legal side of things, making sure API consumers are fully aware of the TOS, and other aspects of operations:

  • Terms of Service - /terms-of-service/ - Make the terms of service easy to find.
  • Privacy - /privacy/ - Publish a privacy statement for all API consumers.
  • Licensing - /licensing/ - Share the licensing for API, code, data, and other assets.

I also wanted to make sure I took care of the basics for the developer accounts, fleshing out the common building blocks developers will need to be successful:

  • Dashboard - /developer/dashboard/ - Provide a simple, comprehensive dashboard for developers.
  • Account - /developer/account/ - Allow API consumers to change their account information.
  • Applications - /developer/applications/ - Allow API consumers to add multiple applications, and receive API keys for each application.
  • Plans - /developer/plans/ - If there are multiple plans, allow developers to change plans.
  • Usage - /developer/usage/ - Show history of all API usage for API consumers.
  • Billing - /developer/billing/ - Allow API consumers to add, update, and remove billing information.
  • Invoices - /developer/invoices/ - Share a listing, as well as the details, of all invoices for usage.

Then I had a handful of other looser items that I wanted to make sure were here. Some of these will be for the second phase of development, but I want to make sure each is on the list:

  • Branding - /branding/ - Providing a logo, branding guidelines, and requirements.
  • Embeddable - /embeddable/ - Share a handful of buttons and widgets for use by consumers.
  • Webhooks - /webhooks/ - Publish details on how to set up and manage webhook notifications.
  • iPaaS - /ipaas/ - Information on Zapier, and other iPaaS integration solutions that are available.
  • Spreadsheets - /spreadsheets/ - Share any spreadsheet plugins or connectors that are available for integration.

That concludes what I’d consider to be the essential building blocks for an API portal. Sure, there are some more minor APIs that can operate on a bare bones version of this list, but for any API looking to conduct business at scale, I’d recommend considering everything on this list. It reflects what I find across leading providers like Stripe and Twilio, and reflects what I like to see from an API provider when I am getting up and running with any API.

I have Jekyll templates for each of these building blocks, which use the Bootstrap UI framework for the layout. I am updating them for this project, then I will publish them again as a set of forkable tools that anyone can implement. I’m going to publish a new API portal that runs as an interactive outline of essential building blocks, then creates a new Github repository for a user, and publishes each of the selected components to the repo, providing a buffet of API developer portal tools anyone can put to work for their API without much work.
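As a sketch of how this outline translates into a Jekyll implementation, the building blocks can live as a simple data file that drives navigation and page generation. The file name, grouping, and structure below are hypothetical, just showing how I tend to wire these things together.

```yaml
# _data/portal.yml -- hypothetical outline driving the portal navigation.
sections:
  - title: Getting Started
    url: /getting-started/
    group: onboarding
  - title: Authentication
    url: /authentication/
    group: onboarding
  - title: Documentation
    url: /documentation/
    group: onboarding
  - title: Blog
    url: /blog/
    group: communication
  - title: Support
    url: /support/
    group: communication
  - title: Plans
    url: /plans/
    group: business
  - title: Terms of Service
    url: /terms-of-service/
    group: legal
```

A Liquid loop over this file renders the navigation, so adding or removing a building block for a specific client becomes a single data change.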


AWS API Gateway Export In OpenAPI and Postman Formats

I wrote about being able to import an OpenAPI into the AWS API Gateway to jumpstart your API the other day. OpenAPI definitions are increasingly used at every stop along the API life cycle, and being able to import an OpenAPI to start a new API, or update an existing one in your API gateway, is a pretty important feature for streamlining API operations. OpenAPI is great for defining the surface area when deploying and managing your API, as well as potentially generating client SDKs, and interactive API documentation for your API developer portal.

Another important aspect of the API lifecycle is being able to get your definitions out in a machine readable format as well. All service providers should be making API definitions a two-way street, just like Amazon does with the AWS API Gateway. Using the AWS Console you can export an OpenAPI definition for any API. What makes things interesting is that you can also export an OpenAPI complete with all the OpenAPI extensions they use to customize each API within the API Gateway. They also provide an export to OpenAPI with Postman specific extensions, allowing you to use it in the Postman desktop client when developing, as well as when integrating with any API.
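Here is a simplified sketch of the shape of an export with the gateway extensions included. The path and backend URI are made up, and real exports carry more detail, but the x-amazon-apigateway-integration block is where the gateway keeps the details of how each method is wired to its backend.

```yaml
# Simplified sketch of an OpenAPI export with the AWS gateway extensions included.
# The path and backend URI are made up; real exports carry more detail.
swagger: "2.0"
info:
  title: example-api
  version: "2017-11-01"
paths:
  /resources:
    get:
      responses:
        "200":
          description: OK
      x-amazon-apigateway-integration:
        type: http
        httpMethod: GET
        uri: https://backend.example.com/resources
        responses:
          default:
            statusCode: "200"
```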

I’ve said it before, and I’ll say it again. Every API service provider should allow for the importing and exporting of common API definition formats like OpenAPI and Postman. If you are selling services and tooling to API designers, developers, architects, and providers, you should ALWAYS provide a way for them to generate a static definition of what is going on–bonus points if you allow them to publish API definitions to Github. I know some API service providers like to allow for API import, but worry about customers abandoning their service if there is an export. Too bad, offer better services and affordable pricing models, and people will stick around.

Beyond selling services to folks with day jobs, having the ability to import and export an OpenAPI allows me to play around with tools more. Using the AWS API Gateway I am setting up a number of APIs, then exporting them, and saving the blueprints for when I need them to tell a story, or to use in a client’s project. Having the ability to import and export an OpenAPI helps me better deliver actual APIs, but it also helps me tell better stories around what is possible with an API service or tool. If I can’t get in there and actually test drive, play with, and see how things work, it is unlikely I will be telling stories about it, or have it in my toolbox when I come across a project where I’ll need to apply a specific service or tool.

If you are having trouble making your API service or tool speak OpenAPI, Postman, or other formats, make sure and check out API Transformer, which provides an API you can use to convert between formats. So if you can support one format, you can support them all. They even have the ability to take a .har file and generate an OpenAPI. I recommend natively supporting the importing and exporting of OpenAPI, then using API Transformer to also allow for importing and exporting in other formats. When you handle the import, you’ll always just make sure it speaks OpenAPI before you ingest it. When a service supports OpenAPI import and export, it is much more likely I’m going to play with it more, and it increases the chances I’m going to bake it into the life cycle for one of the projects I’m working on for a client. I’m sure other folks are thinking along the same lines, and are grateful when they can get their API definitions in and out of your platform.


Your API Road Map Helps Others Tell Stories About Your API

There are many reasons you want to have a road map for your API. It helps you communicate with your API community about where you are going with your API. It also helps you have a plan in place for the future, which increases the chances you will be moving things forward in a predictable and stable way. When I’m reviewing an API and I don’t see a public API road map available, I tend to give them a ding on the reliability and communication of their operations. One of the reasons we do APIs is to help us focus externally with our digital resources, in which communication plays an important role, and when API providers aren’t communicating effectively with their community, there are almost always other issues right behind the scenes.

A road map for your API helps you plan, and think through how and what you will be releasing for the foreseeable future. Communicating this plan externally helps force you to think about your road map in the context of your consumers. Having a road map, and successfully communicating about it via a blog, on Twitter, and other channels, helps keep your API consumers in tune with what you are doing. In my opinion, an API road map is an essential building block for all API providers, because it has a direct impact on the health of API operations, and because it also provides an external sign of the overall health of a platform.

Beyond the direct value of having an API road map, there are other reasons for having one that go beyond just your developer community. In a story in Search Engine Land about Google Posts, the author directly references the road map as part of their storytelling: “In version 4.0 of the API, Google noted that ‘you can now create Posts on Google directly through the API.’” The changelog includes a bunch of other features, but the Google Posts support is the most notable, adding another significant dimension to the road map conversation, and helping out with SEO, and other API marketing efforts.

As you craft your road map you might not be thinking about the fact that people might be referencing it as part of stories about your platform. I wouldn’t sweat it too much, but you should at least make sure you are having a conversation about it with your team, and maybe add an editorial cycle to your road map publishing process, making sure what you publish will speak to your API consumers, but also make for quotable nuggets that other folks can use when referencing what you are up to. This is all just a thought I am having as I’m monitoring the space, and reading about what other APIs are up to. I find road maps to be a critical piece of API communication and support, and I depend on them to do what I do as the API Evangelist. I just wanted to let you know how important your API road map is, so you don’t forget to give it some energy on a regular basis.

