{"API Evangelist"}

Google Shares Insight On How To Improve Upon The API Experience

We all like it when the API providers we depend on make their APIs easier to put to work. I also like it when providers share the story behind how they are making their APIs easier to use, because it gives me material for a story, and more importantly it provides examples that other API providers can consider as part of their own operations.

Google recently shared some of the improvements they have made to help make our API experience better--here are some of the key takeaways:

  • Faster, more flexible key generation - Making this step simpler by replacing the old multi-step process with a single click.
  • Streamlined getting started flow - Introducing an in-flow credential setup procedure embedded directly within the developer documentation.
  • An API Dashboard - A single place to see all the APIs you’re using, along with usage, error, and latency data, so you always know where you stand with your quotas.

If you spend any time consuming APIs, you know these areas represent the common friction many of us API developers experience regularly. It is nice to see Google addressing these areas of friction, as well as sharing their story with the rest of us, giving us all a reminder of how we can round off these sharp corners in our own operations.

The first two items address what I'd say are the biggest pain points with getting up and going with an API, and the API dashboard addresses the biggest pain point we face once we are up and running--knowing where we stand with our API consumption against the rate limits provided by the platform. If you use a modern API management platform you probably have a dashboard solution in place, but for API providers who have hand-rolled their own solution, this continues to be a big problem area.

While some of the historical Google API experiences have left us API consumers desiring more (Google Translate, Google+, Web Search), they have over 100 public APIs, and their work to standardize operations across them is full of best practices and positive examples we can follow. As they continue to step up their game, I'll keep tuning in to see what else I can share.


Embrace, Extend, and Exterminate In The World Of APIs

I am regularly reminded in my world as the API Evangelist that things are rarely what they seem on the surface. Meaning that what a company actually does, and what a company says it does, are rarely in sync. This is one of the reasons I like APIs: they often give a more honest look at what a company does (or does not do), potentially cutting through the bullshit of marketing.

It would be nice if companies were straight up about their intentions, and relied on building better products and offering more valuable services, but many companies prefer being aggressive, misleading their customers, and in some cases an entire industry. I was reminded of this fact once again while reading a post on software backward compatibility, undocumented APIs, and the importance of history, which provided a stark example of it in action from the past:

"Embrace, extend, and extinguish,"[1] also known as "Embrace, extend, and exterminate,"[2] is a phrase that the U.S. Department of Justice found[3] was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.

This behavior is one of the contributing factors to why the most recent generation(s) of developers are so averse to standards, and it is behavior that still exists within current open API and open source efforts. From experience, I would say that the more a company feels the need to insist it is open source, or open API, the more likely it is indulging in this type of behavior. It is a sort of subconscious response, like the dishonest person needing to state that they are being honest, or that you need to believe them--we are open, trust us.

I am not writing this post as some attempt to remind us that Microsoft is bad--this isn't at all about Microsoft. It is simply to remind us that this behavior has existed in the past, and it exists right now. Not all companies involved in helping define the API space are interested in things being open, or in there being common specifications in place for us all to use. Some companies are more interested in slowing what happens within the community, and ensuring that, when possible, all roads lead to their proprietary solution. This is just my regular reminder to always be aware.


The Anatomy Of API Call Failure

I have been spending time thinking about how we can build fault tolerance and change resiliency into our API SDKs and client code. I want to better understand what is necessary to develop the best possible integrations. While doing my regular monitoring this week I came across a Tweet from @Runscope, with a pretty interesting image on this subject crafted by @realm, a mobile platform for sync.

There is a wealth of building blocks here to apply at the client and SDK level, helping us achieve more fault tolerance, and make our applications, systems, and device integrations more change resilient. I wanted to break them out, providing a bulleted list I could include in my research:

  • Is the API Online?
  • Did the server receive the request?
  • Was the URL request successful?
  • Did the request timeout?
  • Was there a server error?
  • Was the JSON received successfully?
  • Was JSON malformed?
  • Was there an unexpected response?
  • Were we able to map to JSON successfully?
  • Is the JSON valid?
  • Does local model match server model?

There are some valuable nuggets present in this diagram. It should be crafted into some sort of algorithmic template that developers can apply when building their API integrations, and that API providers can apply when developing the SDK and client solutions they make available to their API communities. I'm taking note so that next time I spend some cycles on my API SDK research I can help solidify my own definition.
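To make the checklist above more concrete, here is a minimal sketch of what those checks might look like in client code. It is not any particular provider's SDK--just Python with the requests library against a hypothetical https://api.example.com/widgets endpoint, illustrating the order in which the questions get asked.

```python
import requests


def fetch_widgets(base_url="https://api.example.com"):
    """Walk the API call failure checklist for a single request."""
    try:
        # Is the API online? Did the request time out?
        response = requests.get(f"{base_url}/widgets", timeout=5)
    except requests.exceptions.Timeout:
        return {"error": "request timed out"}
    except requests.exceptions.ConnectionError:
        return {"error": "API appears to be offline"}

    # Was there a server error? Was the URL request successful?
    if response.status_code >= 500:
        return {"error": f"server error ({response.status_code})"}
    if response.status_code >= 400:
        return {"error": f"request failed ({response.status_code})"}

    # Was the JSON received successfully, or was it malformed?
    try:
        payload = response.json()
    except ValueError:
        return {"error": "response was not valid JSON"}

    # Was there an unexpected response? Does it map to the local model?
    if "widgets" not in payload:
        return {"error": "response did not match the expected model"}

    return payload
```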

This is a very micro look at fault tolerance when it comes to API integration, and I'm continuing to look for other examples of change resiliency at this layer. Meaning, is there a plan B for the API call? Are there revenue ceiling considerations, or other non-technical, business, and political considerations that should be baked into the code as well? All of this helps us think more deeply about how we encourage change resiliency across the API community.


Regulatory API Monitoring For Validating Algorithmic Assertions

As I was learning about behavior driven development (BDD) and test driven development (TDD) this week, I quickly found myself applying this way of thinking to my existing API regulation and algorithmic transparency research. BDD and TDD are both used by API developers to ensure APIs are doing what they are supposed to, in development, QA, and production environments. There is no reason this line of thought can't be elevated beyond development groups to other business units, up to a wider industry level, or possibly employed by regulators to validate data or algorithmic solutions.

I am not a huge fan of government regulation, but I am a fan of algorithms doing what is being promised, and APIs plus BDD and TDD testing are one way we can accomplish this. Similar to how federal agencies are working together to define OAuth scopes that help set the bar for how user data is accessed, BDD assertion templates could be defined, shared, and validated within regulated industries.

Right now we are focused at the very local level when it comes to API assertions. With time I'm hoping an API assertion template format will emerge (maybe there is already something out there), and that we evolve ways of allowing the average business user to be part of defining and validating API assertions. I know my friends over at Restlet are working towards this with their DHC client solution, which provides API testing capabilities.
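To make the idea of a shareable assertion template a little more concrete, here is a minimal sketch of what one might look like, expressed as plain data that a business user could read and a developer (or regulator) could execute. The format, the https://api.example.gov/loans endpoint, and the field names are all hypothetical--just one possible shape for such a template.

```python
import requests

# A hypothetical, shareable assertion template--plain data, not code.
loan_api_assertions = {
    "resource": "https://api.example.gov/loans/decisions/latest",
    "assertions": [
        {"name": "responds successfully", "check": "status_code", "equals": 200},
        {"name": "declares a decision", "check": "field_present", "field": "decision"},
        {"name": "never exposes an applicant SSN", "check": "field_absent", "field": "ssn"},
    ],
}


def validate(template):
    """Run each assertion in a template against the live API."""
    response = requests.get(template["resource"], timeout=10)
    record = response.json()
    results = {}
    for assertion in template["assertions"]:
        if assertion["check"] == "status_code":
            results[assertion["name"]] = response.status_code == assertion["equals"]
        elif assertion["check"] == "field_present":
            results[assertion["name"]] = assertion["field"] in record
        elif assertion["check"] == "field_absent":
            results[assertion["name"]] = assertion["field"] not in record
    return results
```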

BDD, TDD, and API assertions still very much live in the technical environments where APIs are born and managed. I'm hoping to help define the space, identify opportunities for establishing common patterns, and encourage more reuse of the leading ones. Like other layers of the API economy, I am hoping that API assertions will expand beyond just the technical, and enjoy use amongst business groups, industry leaders, and government regulators where it applies.


Harmonizing API Definitions Across Government With The U.S. Data Federation

The sharing of API definitions is critical to any industry where APIs are being put to work. If the API sector is going to scale effectively, it needs to be reusing common patterns, something that many API and open data providers have not been that great at historically. While this is true in any business sector, there is no area where it needs to happen more urgently than in the public sector.

I have spent years trying to wade through the volumes of open data that come out of government, and even spent a period of time doing this in DC for the White House. Addressing the lack of open API definition formats like OpenAPI Spec, API Blueprint, APIs.json, and JSON Schema across government is a passion of mine, so I'm very pleased to see the new U.S. Data Federation project coming out of the General Services Administration (GSA).

"The U.S. Data Federation supports data interoperability and harmonization across Federal, state, and local government agencies by highlighting common data formats, API specifications, and metadata vocabularies."

The U.S. Data Federation has focused on some of the existing patterns in service of the public sector, highlighting seven existing initiatives:

  • Building & Land Development Specification
  • National Information Exchange Model
  • Open Referral
  • Open311
  • Project Open Data
  • Schema.org
  • The Voting Information Project

I am a big supporter of Open Referral, Open311, Project Open Data, and Schema.org, and I will step up and get more familiar with the Building & Land Development Specification, the National Information Exchange Model, and the Voting Information Project. The U.S. Data Federation project echoes the work I've been doing with the Environmental Protection Agency (EPA) Envirofacts Data Service API, the Department of Labor APIs, the FAFSA API, and my general Adopta.Agency efforts.

Defining the current inventory of government APIs and open data using OpenAPI Spec, and indexing them with APIs.json, is how we do the hard work of identifying the common patterns that are already in place and being used by agencies on the ground. Once this is mapped out, we can begin the long road towards defining the common patterns that could be proposed as future initiatives for the U.S. Data Federation. I think the project highlights this well on its about page:

 "These examples will highlight emerging data standards and API initiatives across all levels of government, convey the level of maturity for each effort, and facilitate greater participation by government agencies."

The world of API definitions is a messy one. It may seem straightforward if you are a standards oriented person. It may also seem straightforward if you are a scrappy startup person. In reality, the current landscape is a tug of war between these two worlds. There is a wealth of existing web API concepts, specifications, and data standards available to us, but there are also a lot of leading definitions being defined by tech giants like Amazon, Google, Twitter, and others. With the tone set by VC investment, and distorted views on what intellectual property is, the sharing of open API definitions and schemas has been deficient across many sectors, for many years.

What the GSA is doing with the U.S. Data Federation project is important. They are mapping out the common patterns that already exist, providing a forum for identifying others, and helping evolve the less mature, or disparate, API and schema patterns out in the wild. A positive sign that they are heading in the right direction is that the U.S. Data Federation project is operating on Github. It is important that these common patterns exist on the social coding platform, as it is increasingly being used as an engine for the API economy--touching all stops along the API life cycle.

I will carve out the time to go through some of my existing government open data work, which includes rebooting my Open Referral leadership role. I'm finding that just doing the hard work of crafting OpenAPI Specs for government APIs is a very important piece of the puzzle. We need a machine-readable map of what already exists, otherwise it is very difficult to find a way forward in the massive amounts of government open data available to us. However, I believe that when you take these machine-readable API definitions and put them on Github, it becomes much easier to find the common patterns that the GSA is looking to define with the U.S. Data Federation.
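As a rough illustration of what that machine-readable map could look like, here is a minimal sketch of an APIs.json-style index generated with Python. The field names follow my reading of the APIs.json format, and the agency, URLs, and OpenAPI Spec location are all placeholders--treat it as a sketch, not an authoritative index.

```python
import json

# A minimal, hypothetical APIs.json-style index for a single government API.
index = {
    "name": "Example Agency API Inventory",
    "description": "Machine-readable index of APIs and their definitions.",
    "url": "https://example-agency.gov/apis.json",
    "apis": [
        {
            "name": "Example Data Service API",
            "humanURL": "https://example-agency.gov/developers",
            "baseURL": "https://api.example-agency.gov/v1",
            "properties": [
                {
                    "type": "x-openapi-spec",
                    "url": "https://example-agency.gov/openapi.json",
                }
            ],
        }
    ],
}

# Write the index to a file, ready to publish to a Github repository.
with open("apis.json", "w") as handle:
    json.dump(index, handle, indent=2)
```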


Hacking on Amazon Alexa with AWS Lambda and APIs At @APIStrat

I'm neck deep in studying how Amazon is operating their Alexa platform, so I'm pretty excited about the chance to listen and learn from the Alexa team at APIStrat in Boston. Even if you aren't building voice-enabled applications, the approach to developing, managing, and evangelizing the Alexa platform provides a wealth of best practices that we should all strive to emulate in our own operations.

Rob McCauley (@RobMcCauley) from the Amazon Alexa team is doing a workshop, as well as a keynote, at @APIStrat in Boston next month. This is relevant to what is going on in the wider space because voice enablement is a fast-moving layer when it comes to delivering API resources, helping define what is being dubbed the conversational interface movement, while also providing the best practices for a modern API strategy that I mentioned above.

There are a number of things that the Alexa team does which have captured my attention, including their approach to developing skills, their investment ($$) into their developers, and their overall communication strategy. I'm working on profiling all of this as part of what I call a blueprint report, where I map out the approach of the Alexa team in a way that other API providers can put to work in their own operations.

I'm thinking I will have to wait until after @APIStrat to finish my blueprint report, as I'd like to attend the Alexa workshop, hear his keynote, and possibly even talk to him personally about their approach in the hallway. I hope to see you there, and hear you share your story. Even if you aren't on the stage at APIStrat, the hallways tend to be a great place to listen to the stories of leaders from across the space, as well as share your own--no matter how big or small you might be.

Make sure you get registered for APIStrat before it is sold out, and I'll see you there!


Amazon Launches Their Own QA Solution Called AWS Answers

Amazon launched their own questions and answers site, simply called AWS Answers. Amazon is definitely in a class of their own, but I thought the move reflects illnesses in the wider QA space, and an approach that smaller API providers might want to consider for their own operations.

Quora doesn't have an API, so why would we use it as a QA solution for the API space? I don't care how much network they have. While Stack Overflow is a wealth of API-related questions and answers, the environment has proven pretty toxic for some API providers, making hand-rolling your own QA site a more interesting option.

AWS Answers is a pretty basic implementation, but it also has a wealth of valuable content. It wouldn't take much to hand-roll your own FAQ or wider answers solution within your API developer portal. I can understand why AWS would do their own, to help ensure their users are able to find the answers they need without leaving the AWS platform. It depends on the type of platform you are operating, but keeping QA local might make more sense than using 3rd party solutions--allowing for more precise control over the answers your customers receive.

As I work to expand my API portal definition beyond just the minimum version, I'm adding a FAQ solution to the stack, and now I'm going to consider adding a separate answers solution modeled after AWS Answers. While I think platforms like Stack Overflow and Quora will continue to do well, I'm more interested in supporting API providers to roll their own solution, maybe even provide an API, and allow for more interoperability, and control over their own resources.


Your Southwest Airlines Flight Has An API

A friend of mine messaged me this photo of the Southwest Airlines flight API on Facebook the other day. After doing a little homework I found that every flight has this available on the plane's local network. There is a pretty interesting write-up on it from Roger Parks if you care to learn more.

Looking through the response, it has all the information you need for your flight update screen. It might seem scary that folks like us poke around the network on airplanes looking for things like this, but this is just the nature of the Internet, and something any network operator should consider normal.

The API is available at getconnected.southwestwifi.com/current.json when you are on the plane's local network, and I'd consult Roger's post if you want more details about how to sniff it out using your browser. Anytime I am on a guest network on a plane or in a hotel, I enjoy turning on my Charles Proxy to log a list of all the domains and IP addresses in use.
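If you are curious and happen to be on the plane's network, a quick way to take a look is something like the following Python sketch using the requests library. I am just printing whatever comes back, since the exact fields in the payload are Southwest's to define, and the URL will not resolve anywhere other than the in-flight network.

```python
import json

import requests

# Only reachable from the in-flight network; this will fail anywhere else.
FLIGHT_STATUS_URL = "http://getconnected.southwestwifi.com/current.json"

response = requests.get(FLIGHT_STATUS_URL, timeout=5)
response.raise_for_status()

# Pretty-print whatever the flight status payload contains.
print(json.dumps(response.json(), indent=2))
```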

This is a good way to learn about how people are architecting their networks, and delivering their resources to web, mobile, and device users. The problem with this activity is that sometimes you can discover things that you shouldn't--a line that I worry about a lot. I feel pretty strongly that if companies are using public DNS, or opening up their private networks to the public, they should be aware that this is going to happen.

I hope that someday this type of behavior is embraced by companies, institutions, and government agencies. Not everyone will have good intentions like I do, but network operators should know this will happen, and make those of us who wear white hats welcome, so that we will report insecure infrastructure, and help keep things locked down--before the bad guys get in.

Thanks to my friend Jason for pinging me with this. From reading up on it, this is nothing new, but it is still worth noting and talking about. I love learning about all the APIs that exist in the cracks.


Providing Inline API Documentation Within Your SaaS User Interface

The common approach to discovering that a SaaS provider has an API is through a single, external link in the footer of a website, simply labeled API or developers. I am always on the lookout for more evolved approaches to making users aware of an API, and I just found a good one over at CloudFlare.

When you are logged into CloudFlare managing your DNS, right below the area for adding, editing, and deleting DNS records you are given some extra options, including expandable access to your API--down in the right-hand corner, between Advanced and Help.

Once you click on the API option, you are given a listing of DNS record related API endpoints, allowing you to bake the same functionality available in the CloudFlare UI into your own systems and applications. A summary, path, and verb are provided for each relevant API, with a link to the full API documentation.
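As an example of what those inline endpoints let you do, here is a rough sketch in Python of listing DNS records through the CloudFlare v4 API. I am going from memory on the endpoint and auth headers, so double-check them against the documentation linked from the UI, and note that the credentials and zone ID are placeholders.

```python
import requests

# Placeholders--substitute your own CloudFlare credentials and zone ID.
AUTH_EMAIL = "you@example.com"
AUTH_KEY = "your-api-key"
ZONE_ID = "your-zone-id"

# List DNS records for a zone--the same functionality exposed in the UI.
response = requests.get(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
    headers={"X-Auth-Email": AUTH_EMAIL, "X-Auth-Key": AUTH_KEY},
    timeout=10,
)
response.raise_for_status()

for record in response.json().get("result", []):
    print(record["type"], record["name"], record["content"])
```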

I really like this approach. It is a great way to make APIs more accessible to the muggles (thanks @CaseySoftware). It is also a great way to think about connecting UI functionality to the (hopefully) API behind it. Imagine if every UI element had an API link in the corner to see the API behind it, along with a link to its documentation. You could even display the request and response bodies for the API call made by the UI, allowing people to easily reverse engineer what an API does.

I have suggested this approach at several events, and to other API technologists, who felt it was a bad idea--that the user doesn't want to be bothered with the details of why something does what it does, they just want it done. I disagree. I strongly believe this is an extension of old-school beliefs held by the IT wizards: that the muggles aren't smart enough, and IT should have all the power (one ring and all that).

Seriously, though. There is no reason that everyone shouldn't be exposed to the API behind the tools they use, and if they want to learn more they can. If they do not want to learn more, they do not have to. I'm going to be evangelizing for more links to the API developer portal, API documentation, and other resources from within the UI of the SaaS solutions we use. This will help make sure that all users are aware of the API behind the interface, and the opportunities for putting it to use in external applications, tooling, and services.


An Auditing API For Checking In On API Client Activity

Google just released a mobile audit solution for their Google Apps Unlimited users looking to monitor activity across iOS and Android devices. At first look, the concept didn't strike me as anything I should write about, but once I got to thinking about how the concept applies beyond mobile to IoT, and the potential for external 3rd party auditing of API and endpoint consumption--it stood out as a pattern I'd like to have in the filing cabinet for future reference.

Using the Google Admin SDK Reports API you can access mobile audit information by user, device, or auditing event. API responses include details about the device, including model, serial number, user email, and any other element that is included as part of the device inventory. This model seems like it could easily be adapted to IoT devices, as well as bot and voice clients.
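For a sense of what that looks like in practice, here is a rough Python sketch against the Reports API's activities endpoint. I am writing it from memory, so treat the URL, the "mobile" application name, and the token handling as assumptions to verify against Google's documentation before relying on it.

```python
import requests

# Assumes you already have an OAuth access token with the audit reports scope.
ACCESS_TOKEN = "ya29.your-oauth-token"  # placeholder

# Pull recent mobile audit activity for all users in the domain.
response = requests.get(
    "https://www.googleapis.com/admin/reports/v1/activity/users/all/applications/mobile",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"maxResults": 10},
    timeout=10,
)
response.raise_for_status()

for item in response.json().get("items", []):
    # Each activity item describes a device event, such as an OS update or sync.
    for event in item.get("events", []):
        print(item["id"]["time"], event.get("name"))
```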

One aspect that stood out for me as a pattern I'd like to see emulated elsewhere is the ability to verify that all of your deployed devices are running the latest security updates. After the recent IoT-launched DDoS attack on Krebs on Security, I would suggest that the security camera industry consider implementing an audit API, with the ability to check for camera device security updates.

Another area that caught my attention was their mention that what "mobile administrators have been asking for is a way to take proactive actions on devices without requiring manual intervention." Meaning you could automate certain events, turning off, or limiting access to, specific API resources. When you open this up to IoT devices, I can envision many benefits depending on the type of device in play.

There are two dimensions to this story for me: that these audit events can apply to potentially any client that is consuming API resources, and that you can access this data in real time, or on a scheduled basis, via an API. With a little webhook action involved, I could really envision some interesting auditing scenarios that are internally executed, as well as an increasing number being executed by external 3rd party auditors making sure mobile, device, and other API-driven clients are operating as intended.


Adding Behavior-Driven Development Assertions To My API Research

I was going through Chai, a behavior- and test-driven assertion library, and spending some time learning about behavior driven development, or BDD, as it applies to APIs today. This is one of those topics I've read about and listened to talks on from people I look up to, but just haven't had the time to invest many cycles into. As I do with other interesting and applicable areas, I'm going to add it as a research area, which will force me to bump it up in priority.

In short, BDD is how you test to make sure an API is doing what is expected of it. It is how the smart API providers are testing their APIs, during development and in production, to make sure they are delivering on their contract. Doing what I do, I started going through the leading approaches to BDD with APIs, and came up with these solutions:

  • Chai - A BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Jasmine - A behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. 
  • Mocha - A feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Nightwatch.js - Nightwatch.js is an easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites. 
  • Fluent Assertions - A set of .NET extension methods that allow you to more naturally specify the expected outcome of a TDD or BDD-style test.
  • Vows - Asynchronous behaviour driven development for Node.
  • Unexpected - The extensible BDD assertion toolkit.

If you know of any that I'm missing, please let me know. I will establish a research project, add them to it, and get to work monitoring what they are up to, and better tracking the finer aspects of BDD. As I was searching on the topic I also came across these references that I think are worth noting, because they are from existing providers I'm already tracking.

  • Runscope - Discussing BDD using Runscope API monitoring.
  • Postman - Discussing BDD using Postman API client.

I am just getting going with this area, but it is something I feel goes well beyond just testing, and touches on many of the business and political aspects of API operations I am most concerned with. I'm looking to provide ways to verify an API does what it is supposed to, as well as make sure an API sizes up to the claims made by developers or the provider. I'm also on the hunt for any sort of definition format that can be applied across many different providers--something I could include as part of APIs.json indexes and OpenAPI Specs.
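To ground what one of these assertions looks like in practice, here is a minimal sketch. The libraries above are mostly JavaScript, but the same idea expressed with Python's pytest and requests looks like this--the https://api.example.com/orders endpoint and its fields are hypothetical stand-ins for whatever contract a provider has promised.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test


def test_orders_endpoint_honors_its_contract():
    """Assert the API behaves the way the provider promises."""
    response = requests.get(f"{BASE_URL}/orders", timeout=5)

    # The endpoint should respond successfully, and with JSON.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")

    # Every order should carry the fields the documentation promises.
    for order in response.json():
        assert "id" in order
        assert "status" in order
```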

Earlier I wrote about the API assertions we make, believe in, and require for our business contracts. This is an area I'm looking to expand on with this API assertion research. I am also looking to include BDD as part of my thoughts on algorithmic transparency, exploring how BDD assertions can be used to validate the algorithms that are guiding more of our personal and business worlds. It's an interesting area that I know many of my friends have been talking about for a while, but it is now something I want to help normalize for the rest of us who might not be immersed in the world of API testing.


A Machine Readable Jekyll Jig For Each Area Of My API Research

I have over 70 areas of research occurring right now as part of my API lifecycle work--these are areas that I feel directly impact how APIs are provided and consumed today. Each of these areas lives as a Github repository, using Github Pages as the front-end of the research. 

I use Github for managing my research because of its capabilities for managing not just code, but also machine readable data formats like JSON, CSV, and YAML. I'm not just trying to understand each area of the API lifecycle, I am working to actually map it out in a machine readable way. 

This process takes a lot of effort, and is always a work in progress. To help me manage the workload I rely on Github, the Github API, and Github Pages. On top of this Github base, I leverage the data and content capabilities of Jekyll when you run it on Github Pages (or any other Jekyll-enabled server or cloud service).

Each of my research areas begins with me curating news from across the space, then I profile companies and individuals who are doing interesting things with APIs, and the services, tooling, and APIs they are developing. I process all of this information on a weekly basis and publish it to each of my research projects as its YAML core.

An example of this can be seen with my API monitoring research (the most up to date) with the following machine-readable components:

I also have several machine readable elements available which use Jekyll to drive the content for each research project:

When I update any of my research areas I just publish the YAML to each of my research project "jigs", and everything is updated. The content is dynamically driven using Liquid, which leverages the YAML-driven core. This allows me to manage 70+ research projects as a one-person operation. The news and analysis are published automatically each day as I do my monitoring, but the organizations, APIs, and tooling are manually updated as I get the time to dive into each area.

I am writing about this because I just locked down this machine-readable core for my API monitoring research, which will set the bar for the rest of my research occurring over the next year. I will replicate the latest definition across all 70+ areas over the next couple of weeks as I get the bandwidth to spend within each area. I couldn't do what I do without Github, its API, Github Pages, and Jekyll--they make my world go round.


Where Is The WordPress For APIs?

I feel like I have said this before, but it is probably something worth refreshing--where is the WordPress for APIs? First, I know WordPress has an API; that isn't what I'm talking about. Second, I know WordPress is not our best foot forward when it comes to the web. What I am talking about is ready-to-go API deployment solutions, in a variety of areas, that are as easy to deploy and manage as WordPress.

There is a reason WordPress is as popular as it is. I do not run WordPress for any of my infrastructure, but I do help others set up and operate their own WordPress installs from time to time. I get why people like it. I personally think it's a nightmare in there when you start having to make it do things as a programmer, but I fully grasp why others dig it, and I am willing to support that whenever I can.

I want the same type of enabling solution for APIs. If you want a link API -- here you go. If you want a product API -- download over here. There should be a wealth of open source solutions that you can just download, unzip, upload, and go through the wizard. You get the API and a simple management interface. I would get to work building one in PHP / MySQL just to piss all the real programmers off, but I have too many projects on my plate already.

If you want to develop the WordPress of APIs for the community and make it push-button deployment via Heroku, AWS, Google, or Azure, please let me know and I'm happy to help amplify. ;-)


The Web Evolved Under Different Environment Than Web APIs Are

I get the argument from hypermedia and linked data practitioners that we need to model our web API behavior on the web. It makes sense, and I agree that we need to be baking hypermedia into our API design practices. What I have trouble with is the idea that the web is a cornerstone we should be modeling everything after. I do not know what web y'all use every day, but the one I use, and harvest regularly, is quite often a pretty broken thing.

It just feels like we are overlooking so much to support this one story. I'm not saying that hypermedia principles don't apply because the web is shit, I'm just saying maybe it isn't as convincing an anchor for the story that web APIs are currently shit. I understand that you want to sell your case, and trust me...I want you to sell your case, but using this argument just does not pencil out for me.

There is another aspect of this that I find difficult: the web was developed and took root in a very different environment than the one web APIs are growing up in. We had more time and space to be thoughtful about the web, and I do not think we have that luxury with web APIs. The stakes are higher, the competition is greater, and the incentives for doing it thoughtfully really do not exist in the startup environment that has taken hold. We can't be condemning API designers and architects for serving their current master (or can we?).

While I will keep using core web concepts and specs to help guide my views on designing, defining, and deploying my web APIs, I'm going to explore other ways to articulate why we should be putting them to use. I'm also going to be considering the success or failure of these elements in light of the shortcomings of the web, and of web APIs, while I work to better polish the existing stories we tell, and hopefully evolve new ones that help folks understand what the best practices for web APIs are.


Github As The API Life Cycle Engine

I am playing around with some new features from the SDK-generation-as-a-service provider APIMATIC, including the ability to deploy my SDKs to Github. This is just one of many ways Github, and more importantly Git, is being used as what I'd consider an engine of the API economy. Deploying your SDKs is nothing new, but when you are auto-generating SDKs from API definitions, deploying them to Github, and then using that to drive deployment, virtualization, containers, serverless, documentation, testing, and other stops along the API life cycle--it is pretty significant.

Increasingly we are publishing to Github our API definitions, the server-side code that serves up an API, the Docker images for deploying and scaling our APIs, the documentation that tells us what an API does, the tests that validate our continuous integration, as well as the clients and SDKs. I have long been advocating for the use of Github as part of API operations, but with the growth in the number of APIs we are designing, deploying, and managing, Github definitely seems like the progressive way forward for API operations.
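As a small illustration of treating Github as that engine, here is a sketch in Python that publishes an OpenAPI definition to a repository using the Github contents API. The repository, file path, and token are placeholders, and it is worth confirming the endpoint details against Github's documentation before wiring it into anything.

```python
import base64

import requests

# Placeholders--your own repository, token, and definition file.
GITHUB_TOKEN = "ghp_your_token"
REPO = "your-org/your-api-definitions"
PATH = "openapi/example-api.json"

with open("openapi.json", "rb") as handle:
    content = base64.b64encode(handle.read()).decode("ascii")

# Create the definition file in the repository (add a "sha" field to update).
response = requests.put(
    f"https://api.github.com/repos/{REPO}/contents/{PATH}",
    headers={"Authorization": f"token {GITHUB_TOKEN}"},
    json={"message": "Publish latest API definition", "content": content},
    timeout=10,
)
response.raise_for_status()
print("Published:", response.json()["content"]["html_url"])
```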

I will keep tracking which service providers allow importing from Github, as well as publishing to Github--whether it's definitions, server images, configuration, or code. As these features continue to become available in these companies' APIs, I predict we will see the pace of continuous integration and API orchestration dramatically pick up, as we become more easily able to automate the importing and exporting of the essential definitions, configurations, and code that make our businesses and organizations function.


Evolving The API SDK With APIMATIC DX Kits

I've been a big supporter of APIMATIC since they started, so I'm happy to see them continuing to evolve their approach to delivering SDKs using machine-readable API definitions. I got a walkthrough of their new DX Kits the other day, something that feels like an evolutionary step for SDKs, contributing to API providers making onboarding and integration as frictionless as possible for developers.

Let's walk through what APIMATIC already does, then I'll talk more about some of the evolutionary steps they are taking when auto-generating SDKs. It helps to see the big picture of where APIMATIC fits into the larger API life cycle, to get beyond any notion of them being just an SDK generation service.

API Definitions
What makes APIMATIC such an important service, in my opinion, is that they don't just speak one modern API definition format, they speak all of them, allowing anyone to generate SDKs from the specification of their choice:

  • API Blueprint
  • Swagger 1.0 - 1.2
  • Swagger 2.0 JSON
  • Swagger 2.0 YAML
  • WADL - W3C 2009
  • Google Discovery
  • RAML 0.8
  • I/O Docs - Mashery
  • HAR 1.2
  • Postman Collection
  • APIMATIC Format

As any serious API service provider should be doing, APIMATIC then opened up their API definition transformation solution as a standalone service and API. This allows these types of API transformations to occur, and be baked in, at every stop along a modern API life cycle, by anyone.

API Design
Being so API definition driven, APIMATIC needed a practical way to manage API definitions, and allow their customers to add, edit, delete, and manipulate the definitions that drive the SDK auto-generation process. APIMATIC provides one of the best API design interfaces I've found across the API service providers that I monitor, allowing customers to manage:

  • Endpoints
  • Models
  • Test Cases
  • Errors

Because APIMATIC is so heavily invested in having a complete API definition, one that will result in a successful SDK, they've had to bake healthy API design practices into their API design interface--helping developers craft the best API possible. #Winning

SDK Auto Generation
Now we get to the valuable, time-saving portion of what APIMATIC does best--generating SDKs in 10 separate programming language and platform environments. Once your API definition validates, you can select which language or platform to generate for:

  • Visual Studio - A class library project for Portable and Universal Windows Platform
  • Eclipse - A compatible maven project for Java 5 and above
  • Android Studio - A compatible Gradle project for Android Gingerbread and above
  • XCode - A project based on CoCoaPods for iOS 6 and above
  • PSR-4 - A compliant library with Composer dependency manager
  • Python - A package compatible with Python 2 and 3 using PIP as the dependency manager
  • Angular - A lightweight SDK containing injectable wrappers for your API
  • Node.js - A client library project in Node.js as an NPM package
  • Ruby - A project to create a gem library for your API, based on Ruby >= 2.0.0
  • Go - A client library project for Go language (v1.4)

APIMATIC takes their SDKs seriously. They make sure they aren't just low-quality auto-generated code. I've seen the overtime they put in to make sure the code they produce matches the styling and the reality on the ground for each language and environment being deployed.

Github Integration
Deploying your API SDKs to Github is nothing new, but being able to auto-generate your SDK from a variety of API definition languages, and then publish it to Github, opens up a whole new world of possibilities. This is when Github can become a sort of API definition driven engine that can be installed into not just the API life cycle, but also every web, mobile, device, bot, voice, or other client that puts an API to use.

This is where, for me, we start moving beyond the SDK, into the realm of what APIMATIC is calling a DX Kit. APIMATIC isn't just dumping some auto-generated code into the Github repo of your choice. They are publishing the code, and now complete documentation for the SDK, to a Github README, so that any human can come along and learn what it does, and any other system can come along and put the auto-generated code to work.

Continuous Integration
The evolution of the SDK continues with...well, continuous integration, and orchestration. If you go under the settings for your API in APIMATIC, you now also have the option to publish configuration files for four leading CI solutions:

APIMATIC had already opened up beyond just doing SDKs with the release of their API Transformer solution, and their introduction of detailed documentation for each kit (SDK) on Github. Now they are pushing into API testing and orchestration areas by allowing you to publish the required config files for the CI platform of your choosing.

I feel like their approach represents the expanding world of API consumption. Providing an API and SDK is not enough anymore. You have to provide and encourage healthy documentation, testing, and continuous integration practices as well. APIMATIC is aiming to "simplify API Consumption", with their DX Kits, which is a very positive thing for the API space, and worth highlighting as part of my API SDK research.


Considering A Web API Ecosystem Through Feature-Based Reuse

I recently carved out some time to read A Web API ecosystem through feature-based reuse by Ruben Verborgh (@RubenVerborgh) and Michel Dumontier. It is a lengthy, very academic proposal on how we can address the fact that "the current Web API landscape does not scale well: every API requires its own hardcoded clients in an unusually short-lived, tightly coupled relationship of highly subjective quality."

I highly recommend reading their proposal, as there are a lot of very useful patterns and suggestions in there that you can put to use in your operations. The paper centers around the notion that the web has succeeded because we were able to better consider interface reuse, and were able to identify the most effective patterns using analytics, and it points out that there really is no equivalent to web analytics for measuring an API's effectiveness.

In order to evolve Web API design from an art into a discipline with measurable outcomes, we propose an ecosystem of reusable interaction patterns similar to those on the human Web, and a task-driven method of measuring those.

To help address these challenges in the world of web APIs, Verborgh and Dumontier propose that we work to build web interfaces, similar to what we do with the web, employing a bottom-up approach to composing reusable features such as full-text search, auto-complete, file uploads, etc. In order to unlock the benefits of bottom-up interfaces, they propose 5 interface design principles:

  1. Web APIs consist of features that implement a common interface.
  2. Web APIs partition their interface to maximize feature reuse.
  3. Web API responses advertise the presence of each relevant feature.
  4. Each feature describes its own invocation and functionality.
  5. The impact of a feature on a Web API should be measured across implementations.

They provide us with a pretty well thought out vision involving implementations and frameworks, and the sharing of documentation, while universally applying metrics for identifying the successful patterns. It provides us with a compelling, "feature-based method to construct the interface of Web APIs, favoring reuse over reinvention, analogous to component-driven interaction design on the human Web."

I support everything they propose, and I cannot provide any critique of the technical merits of their vision. However, I find it lacks an awareness of the current business and political landscape, a blind spot I regularly find in the hypermedia and linked data material I consume.

Here are a few of the business and political considerations that contribute to the situation Verborgh and Dumontier are focused on, and that will also work to slow the adoption of their proposed vision:

  • Venture Capital - The current venture capital driven climate does not incentivize sharing and reuse, or its startups investing time and energy into open web technologies.
  • Intellectual Property - Modern views of intellectual property, partially fueled by VC investment, and further exacerbated by legal cases like Oracle v Google, force developers and designers to hold patterns close to their chest, limiting sharing and reuse again.
  • Lazy Developers - Not all developers are knowledge seekers like the authors of this paper and myself; many are just looking to get the job done and go home. There are few rewards for contributing back to the community, and once I have mine, I'm done.
  • The Web Is Shit - One area where linked data and hypermedia folks tend to lose me is their focus on modeling things after the web. I agree the web is "working", but I don't know which one you use--the one I use is shit, and only getting worse. Have you scraped web content lately?
  • Metrics & Analytics - Google Analytics started out providing us with a set of tools to measure what works and doesn't work when it comes to the parts and pieces of our websites, but now it just does that for advertising. Also we do have analytics in the API space, but due to the other areas cited above, there is no sharing of this wisdom across the space.

These are just a handful of areas I regularly see working against the API design, definition, and hypermedia corners of the space, and that will work to slow the progress of their web API ecosystem vision. It doesn't mean I'm not supportive. I see the essence of a number of positive things in their proposal, like reuse, sharing, and measurement, and I feel the essence of existing currents in the world of APIs, like microservices, DevOps, and continuous integration (aka orchestration).

My mission, as it has been since 2010, is to make sure really smart folks like Ruben and Michel at institutions, startups, and the enterprise better understand the business and political currents that are flowing around them. It can be very easy to miss significant signals about what is working, or not working, with APIs when you are heads down working on a product, or razor-focused on getting your degree within an institution. The human aspects of this conversation are always well cited, but I'm thinking we aren't always honest about the human elements present on the API side of the equation. Web != API & API != Web.


Please Share Your OpenAPI Specs So I Can Use Across The API Life Cycle

I was profiling the New Relic API, and while I was pleased to find OpenAPI Specs behind their explorer, I was less than pleased to have to reverse engineer their docs to get at their API definitions. It is pretty easy to open up my Google Chrome Developer Tools and grab the URLs for each OpenAPI Spec, but you know what would be easier? If you just provided me a link to them in your documentation!

Your API definitions aren't just driving the API documentation on your website. They are being used across the API life cycle. I am using them to fire up and play with your API in Postman, generate SDKs using APIMATIC, or create a development sandbox so I do not have to develop against your live environment. Please do not hide your API definitions; bring them out of the shadow of your API documentation and give me a link I can click on--one-click access to a machine-readable definition of the value your API delivers.
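For what it is worth, here is a small sketch of why a direct link matters--once I have a URL for your OpenAPI Spec, pulling it into the rest of the life cycle is only a few lines of Python. The spec URL below is a placeholder for the one-click link I wish every provider published.

```python
import requests

# Placeholder URL--the direct, linkable OpenAPI Spec for your API.
SPEC_URL = "https://developer.example.com/openapi.json"

spec = requests.get(SPEC_URL, timeout=10).json()

# List every path and verb the API exposes, ready to feed into Postman,
# APIMATIC, a mock sandbox, or whatever stop along the life cycle comes next.
for path, operations in spec.get("paths", {}).items():
    for verb in operations:
        print(verb.upper(), path)
```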

I'm sure my regular readers are getting sick of hearing about this, but the reality is that my readers are a diverse and busy group of folks who will most likely not read every post on this important subject. If you have read a previous post on this subject from me, and are reading this latest one, and still do not have API definitions or prominent links--then shame on you for not making your API more accessible and usable...because isn't that what this is all about?


Making Data Serve Humans Through API Design

APIs can help make technology better serve us humans when they are executed thoughtfully. This is one of the main reasons I kicked off API Evangelist in 2010. I know that many of my technologist friends like to dismiss me in this area, but that is more about their refusal to give up the power they possess than it is ever about APIs.

I have been working professionally with databases since the 1980s, and have seen the many ways in which data and power go together, and how technology is used as smoke and mirrors as opposed to serving human beings. One of the ways people keep data for themselves is to make it seem big, complicated, and only something a specific group of people (white men with beards (wizards)) can possibly make work.

There is a great excerpt from a story by Sara M. Watson (@smwat), called Data is the New “___”, that sums this up for me:

The dominant industrial metaphors for data do not privilege the position of the individual. Instead, they take power away from the person to which the data refers and give it to those who have the tools to analyze and interpret data. Data then becomes obscured, specialized, and distanced.

We need a new framing of a personal, embodied relationship to data. Embodied metaphors have the potential to bring big data back down to a human scale and ground data in lived experience, which in turn, will help to advance the public’s investment, interpretation, and understanding of our relationship to our data.

DATA IS A MIRROR portrays data as something to reflect on and as a technology for seeing ourselves as others see us. But, like mirrors, data can be distorted, and can drive dysmorphic thought.

This is API for me. The desire to invest, interpret, and understand our relationship to our data is API design. This is why I believe in the potential of APIs, even if the reality of it all often leaves me underwhelmed. There is no reason that the databases have to be obscured, specialized, and distant. If we want to craft meaningful interfaces for our data we can. If we want to craft useful interfaces for our data, that anyone can understand and put to work without specialized skills--we can.

The process is often complicated by our legacy practices, the quest for profits, or vendor-driven objectives that get in the way of properly defining and opening up frictionless access to our data. Our relationships with our data are out of alignment because the data is serving business and technological masters, and does not actually benefit the humans it should be serving.


Increased Analytics At The API Client And SDK Level

I am seeing more examples of analytics at the API client and SDK level, providing more access to what is going on at this layer of the API stack. I'm seeing API providers build them into the analytics they provide for API consumers, and more analytics services targeting web, mobile, and device endpoints. Many companies are selling these features in the name of awareness, but in most cases, I'm guessing it is about adding another point of data generation which can then be monetized (IoT is a gold rush!).

As I do, I wanted to step back from this movement and look at it from many different dimensions, broken down into two distinct buckets:

  • Positive(s)
    • More information - More data that can be analyzed.
    • More awareness - We will have visibility across integrations.
    • Real-time insights - Data can be gathered on a real-time basis.
    • More revenue - There will be more revenue opportunities here.
    • More personalization - We can personalize the experience for each client.
    • Fault Tolerance - There are opportunities for building in API fault tolerance.
  • Negative(s)
    • More information - If it isn't used it can become a liability.
    • More latency - This layer slows down the primary objective.
    • More code complexity - Introduces added complexity for devs.
    • More security consideration - We just created a new exploit opportunity.
    • More privacy concerns - There are new privacy concerns facing end-users.
    • More regulatory concerns - In some industries, it will be under scrutiny.

I can understand why we want to increase the analysis and awareness at this level of the API stack. I'm a big fan of building resiliency into our clients and SDKs, but I think we have to weigh the positives and negatives before jumping in. Sometimes I think we are too willing to introduce unnecessary code and data gathering, potentially opening up security and privacy holes, while chasing new ways we can make money.
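To make that trade-off a little more concrete, here is a minimal sketch of what client-level instrumentation might look like--a thin wrapper around Python's requests that records latency and errors for each call. The endpoint is a placeholder, and what you do with the collected measurements is exactly where the privacy, security, and latency questions above come in.

```python
import time

import requests


class InstrumentedClient:
    """A thin API client wrapper that records latency and errors per call."""

    def __init__(self, base_url):
        self.base_url = base_url
        # Kept in memory here; shipping these anywhere is the privacy question.
        self.measurements = []

    def get(self, path, **kwargs):
        started = time.monotonic()
        status, error = None, None
        try:
            response = requests.get(f"{self.base_url}{path}", timeout=10, **kwargs)
            status = response.status_code
            return response
        except requests.RequestException as exc:
            error = str(exc)
            raise
        finally:
            self.measurements.append({
                "path": path,
                "status": status,
                "error": error,
                "latency_ms": round((time.monotonic() - started) * 1000, 1),
            })


# Hypothetical usage against a placeholder API.
client = InstrumentedClient("https://api.example.com")
```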

I'm guessing it will come down to each SDK, and the API resources that are being put to work. I'll be aggregating the different approaches I am seeing as part of my API SDK research and try to provide a more coherent glimpse at what providers are up to. By doing this, I'm hoping I can better understand some of the motivations behind this increased level of analytics being injected at the client and SDK level.