The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


I Am Stuck On The Datadog Integration Page

I wrote about having an integrations page for your API service the other day, and as I'm continuing to study the approach of other providers I find myself stuck on Datadog's integration page. Datadog provides the monitoring layer across many of the top service providers in the space, making for a pretty stellar list of what solutions are being put to use across the sector. 

The Datadog integration page has been open in my browser for the last week, as I make my way through each provider. Some of them I'm very familiar with, but others are entirely new to me. Integration pages like this show me what is possible with a service provider like Datadog, but they also provide me an opportunity to learn about new services that I can put to use in my own operations, and see what the cool kids are using.

When you click on the detail for each of the potential integrations you get more information about what is possible, as well as a configuration file in YAML, and what metrics are made available when monitoring is activated. Datadog even groups their integrations by tag, something they don't expose very well via the user interface, but it is something I'll include in my suggestions for crafting an integration page for any other API provider. 

An integration page is definitely a building block I will be suggesting to other API providers, and I am also a big fan of sharing configuration, and other integration details like Datadog does. There are infinite learning opportunities available on these types of API integration pages, for analysts like me, for API providers, as well as the service providers who sell to API providers. These are the types of common API building blocks that I feel contribute in a positive way to the tech sector, reflecting what APIs do best--enabling API integration and API literacy in the same motion.


Expanding On The 3rd Party Analysis Of Security Threats

I was learning from Splunk's analysis of the Mirai Botnet, which was behind the massive attack against Krebs on Security, implemented via common Internet of Things devices like security cameras and printers. I've been reading several of these types of security event analyses, which is something I think is extremely important in helping the industry deal with the increasing number of security events occurring across the online landscape. 

The sharing of log files from compromised systems in this way is super important. We need as many eyes as we can get on these attacks, helping analyze what happened, and possibly who was behind it. Of course, there are some scenarios where you might want to be cautious in opening up this data to the general public, but using common approaches to API deployment and management, this can be managed sensibly--while also adding another logging layer to the conversation, keeping track of who participates in the analysis.

At a minimum, the DNS, application, and server logs should be made available via Github, leveraging its Git core, as well as the Github API, as part of the evaluation and analysis of the attack information. Ideally, key aspects of the data, attack vector, and other elements should also be added to some sort of shared API infrastructure for continued community security threat analysis. In addition to the growing number of attacks, and analysis by leading analysts like Splunk, I'm also seeing increased discussion around the sharing of threat data in a standardized way--APIs can act as a distributed engine for this operation.
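To give a sense of how simple this could be, here is a rough sketch of publishing a log file to a Github repository using the Github contents API--the owner, repository, path, and token are all hypothetical, but the mechanics really are this straightforward.

```typescript
// Rough sketch: publish a log file to a Github repository using the Github
// contents API, so the community can analyze it. Owner, repo, path, and the
// token are hypothetical placeholders.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN; // personal access token (assumed to be set)

async function publishLog(owner: string, repo: string, path: string, logText: string) {
  const response = await fetch(`https://api.github.com/repos/${owner}/${repo}/contents/${path}`, {
    method: "PUT",
    headers: {
      "Authorization": `token ${GITHUB_TOKEN}`,
      "Accept": "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      message: `Add ${path} for community analysis`,
      content: Buffer.from(logText).toString("base64"), // the contents API expects base64
    }),
  });
  if (!response.ok) {
    throw new Error(`Github API returned ${response.status}`);
  }
  return response.json();
}

// Example: share a DNS log from a compromised system (hypothetical names).
publishLog("security-community", "mirai-analysis", "logs/dns-2016-10-01.log", "raw log text goes here")
  .then(() => console.log("Log published for 3rd party analysis"))
  .catch(console.error);
```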

I learn a lot from the analysis that occurs on security events like this. I know that other security analysts learn from this as well. With digital security being such a critical issue, right along with environmental events like hurricanes, or health care concerns like the Zika virus, I'm suggesting that APIs be employed in a standardized way. We should have a common checklist for making log files from these security events accessible via APIs to help guide future releases. We should also have a common set of API definitions and schema for making them available, so that we can begin to standardize the client tooling we use to make sense of each event. 

We have a lot of work ahead of us when it comes to putting APIs to work for online security. It is another area I will continue to evangelize as I see more API and open data patterns emerge. I am looking to help stimulate 3rd party analysis of cybersecurity, and security events. In my opinion, it is the only way we are going to make sense of such a massively complex problem.


API Embeddables With Skills and Intent

I am seeing some renewed interest and discussion around API driven embeddables--an area of my API research that has been going on for years, focused on buttons, badges, and widgets, and something I'm seeing continued investment in from API providers lately. To help fuel the innovation that is already occurring, I figured I'd contribute with my API thoughts extracted from across the bot and voice API landscape.

As I monitor the bot community growing out of the Slack platform, the voice API integration emerging from the Alexa development community, and read news about Google's latest push into the space, I'm thinking about how APIs are being used to define the intents, skills, and actions that are driving these bot and voice implementations. I am also processing this intersection with the latest release of Push by Zapier. All of this is about delivering meaningful API responses to wherever the end users desire them--in their browser, their chat, or via voice enablement in the business and the home.

While processing the wisdom shared by Yelp about their deployment of embeddable reviews, I'm thinking about how these embeddable JavaScript widgets can be used to further allow users to quantify the intent, discover the skills, and achieve the action they are looking for. How can API providers, and the savvy API developers, make valuable API resources accessible to users on their terms, and in the client they desire? For example, I might need to know my availability next Thursday while talking to my Amazon Echo, engaging in a Slack conversation, or possibly filling out a form on my corporate network--in all these scenarios I will need API access to the calendar(s) that I depend on, in the way that is required in each unique situation.
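To make that scenario a little more concrete, here is a rough sketch of a single calendar resource being surfaced through a voice client and a chat client--the endpoint, response shape, and handler names are all my own assumptions, not any particular provider's API.

```typescript
// Rough sketch: one calendar API resource serving several clients.
// The endpoint, response shape, and handler names are hypothetical.
async function getAvailability(date: string): Promise<string[]> {
  const res = await fetch(`https://api.example.com/calendar/availability?date=${date}`);
  if (!res.ok) throw new Error(`calendar API returned ${res.status}`);
  return res.json(); // e.g. ["09:00-10:00", "13:00-14:30"]
}

// The same resource, delivered where the user is:
async function voiceSkillHandler(date: string): Promise<string> {
  const slots = await getAvailability(date);
  return `You are free at ${slots.join(", ")} on ${date}.`; // spoken response
}

async function slackCommandHandler(date: string): Promise<string> {
  const slots = await getAvailability(date);
  return `*Availability for ${date}:* ${slots.join(", ")}`; // chat message
}
```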

I am not always the biggest fan of voice and bot enabled scenarios, but I do think they provide us with some interesting constraints on API design. I'm hoping that some of these constraints can further be applied to legacy approaches of API deployment. Voice and bots are nothing new, but the current wave of evolution, in this time of abundant API-driven data, content, and algorithms, holds a lot of potential. This is why I spend so much time looking at so many different areas--the cross-pollination opportunities are sometimes the most interesting ones, and are often the ones that folks can't see from the individual verticals where they are putting APIs to work.


The Internet of Things Shows Us How Regulatory Beasts Are Created

I am watching the world of the Internet of Things (IoT) unfold, not because I'm a big fan of it, but more because I'm concerned that it is happening, and often worried that much of it is happening without any focus on security and privacy. As I look at this week's stories in my API IoT bucket I can't help but think that IoT is a live demonstration of how the regulatory beasts, that we love to hate on in America, are created.

It starts with a bunch of fast moving, greedy, corner cutting capitalists who are innovating and all that shit. These are not always the first wave of movers in a space, but usually the second and third waves of opportunists with one thing in mind--making some money. These are the companies that are so focused on revenue and profits they ignore things like security, and they see the data generated being key to their success, and concepts around privacy often do not even exist--it's the new oil motha fuckkers!

As the number of security and privacy events increases, things like the unprecedented attack on Krebs on Security, the calls for a fix will only grow. Eventually, these calls for help are heard by the government, if they are negatively impacting enough well-to-do white folk, and the government steps up to figure out what to do. Oftentimes, these investigative forces aren't fully up to speed on the area they are investigating, but with the resources they have, they'll usually inflict some regulatory and legal response. If there are any existing companies with a strong lobbying presence, the immediate response will be significantly watered down, making it more of a nuisance than anything else.

This is when the market voice begins complaining about the government overstepping its responsibilities, and stepping in to throw a wet blanket on business. The government is bad. Regulation is bad. Then we repeat, rinse, and go about our days. This is how regulatory beasts are born, nurtured, and fed on a regular basis.

The Internet of Things is the modern poster child for this process. In a couple of years, after more of this bad behavior continues, we will see an increasing amount of government legislation and regulatory intervention in the world of IoT. Then we will hear more squealing from the startups and enterprises who could have just behaved sensibly in the beginning, as they turn up the volume about how the government is so bad, and so anti-business and anti-innovation.


Opportunity For Someone To Help Organize Auto Industry Data

There is a lot of data coming out of the automobile industry. I was just reading about Udacity open sourcing an additional 183GB of driving data, and a global public registry of electric vehicle charging locations with 42K+ listings, providing us with two examples from the wild. I'm seeing an increasing number of these stories about institutions, government agencies, and the private sector making automobile related data available in this way--pointing to a pretty big opportunity when it comes to aggregating this valuable "data exhaust" (pun intended) in a coherent way.

Whether it's self-driving, electric, car share, rental, or otherwise, the modern automobile is generating a lot of data. There are some significant ways in which the automobile industry is being expanded upon, and the need to understand it, and become more aware using data, is immense. Providing API access to this public and private data will be increasingly important. 

There are a number of different interests producing this data, and they aren't always immediately thinking about sharing and reuse of the data, let alone making sure there are standardized APIs and data schema at play. This opens up a pretty big opportunity for someone to focus on aggregating all these emerging automobile datasets, making them available via a unified API, and helping define a common set of API definitions and schema for accessing this valuable data exhaust from the industry.

I do not think automobile industry data aggregation is the next VC fundable idea, but I do think that with some hard work, and the slow build of some expertise in this fast moving area, an individual, or small group of folks, could do very well. I know from dabbling in this area that the auto industry, the department of transportation, and the aftermarket product and service providers don't always see eye to eye, and a neutral, 3rd party aggregator and evangelist has the potential to make a significant impact.


Google Shares Insight On How To Improve Upon The API Experience

We all like it when the API providers we depend on make their APIs easier to put to work. I also like it when API providers share the story behind how they are making their APIs easier to use, because it gives me material for a story, but more importantly it provides examples that other API providers can consider as part of their own operations.

Google recently shared some of the improvements they have made to help make our API experience better--here are some of the key takeaways:

  • Faster, more flexible key generation - Making this step simpler by replacing the old multi-step process with a single click.
  • Streamlined getting started flow - Introduced an in-flow credential set up procedure directly embedded within the developer documentation.
  • An API Dashboard - To easily view usage and quotas, so you can view all the APIs you’re using along with usage, error and latency data.

If you spend any time consuming APIs you know that these areas represent the common friction many of us API developers experience regularly. It is nice to see Google addressing these areas of friction, as well as sharing their story with the rest of us, providing us all a reminder of how we can cut off these sharp corners in our own operations.

These areas represent what I'd say are the two biggest pain points with getting up and going using an API, and the API dashboard represents the biggest pain point we face once we are up and running--where do we stand with our API consumption, within the rate limits provided by the platform. If you use a modern API management platform you probably have a dashboard solution in place, but for API providers who have hand-rolled their own solution, this continues to be a big problem area.

While some of the historical Google API experiences have left us API consumers desiring more (Google Translate, Google+, Web Search), they have over 100 public APIs, and their approach to standardization is full of best practices and positive examples we can follow. As they continue to step up their game, I'll keep tuning in to see what else I can share.


Embrace, Extend, and Exterminate In The World Of APIs

I am regularly reminded in my world as the API Evangelist that things are rarely ever what they seem on the surface. Meaning that what a company actually does, and what a company says it does, are rarely in sync. This is one of the reasons I like APIs--they often give a more honest look at what a company does (or does not do), potentially cutting through the bullshit of marketing.

It would be nice if companies were straight up about their intentions, and relied on building better products and offering more valuable services, but many companies prefer being aggressive, misleading their customers, and in some cases an entire industry. I'm reminded of this fact once again while reading a post on software backward compatibility, undocumented APIs and importance of history, which provided a stark example of it in action from the past:

"Embrace, extend, and extinguish" [1], also known as "Embrace, extend, and exterminate" [2], is a phrase that the U.S. Department of Justice found [3] was used internally by Microsoft [4] to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.

This behavior is one of the contributing factors to why the most recent generation(s) of developers are so averse to standards, and is behavior that exists within current open API and open source efforts. From experience, I would emphasize that the more a company feels the need to say they are open source, or open API, the more likely they are indulging in this type of behavior. It is like some sort of subconscious response, like the dishonest person needing to state that they are being honest, or that you need to believe them--we are open, trust us.

I am not writing this post as some attempt to remind us that Microsoft is bad--this isn't at all about Microsoft. It is simply to remind us that this behavior has existed in the past, and it exists right now. Not all companies involved in helping define the API space are interested in things being open, or in there being common specifications in place for us all to use. Some companies are more interested in slowing down what happens within the community, and ensuring that when possible, all roads lead to their proprietary solution. This is just my regular reminder to always be aware.


The Anatomy Of API Call Failure

I have been spending time thinking about how we can build fault tolerance and change resiliency into our API SDKs and client code. I want to better understand what is necessary to develop the best integrations possible. While doing my regular monitoring this week I came across a Tweet from @Runscope, with a pretty interesting image on this subject crafted by @realm, a mobile platform for sync.

There is a wealth of building blocks here to apply at the client and SDK level, helping us achieve more fault tolerance, and make our applications, systems, and device integrations more change resilient. I wanted to break them out, providing a bulleted list I could include in my research:

  • Is the API online?
  • Did the server receive the request?
  • Was the URL request successful?
  • Did the request timeout?
  • Was there a server error?
  • Was the JSON received successfully?
  • Was the JSON malformed?
  • Was there an unexpected response?
  • Were we able to map to JSON successfully?
  • Is the JSON valid?
  • Does the local model match the server model?
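
To show how this checklist might translate into client code, here is a minimal sketch that walks through these checks--the endpoint and the expected model are hypothetical.

```typescript
// Minimal sketch of a fault-tolerant API call that walks the checklist above.
// The endpoint and expected response shape are hypothetical.
interface ServerModel { id: number; status: string; }

async function resilientFetch(url: string): Promise<ServerModel | null> {
  let response: Response;
  try {
    // Is the API online? Did the request time out?
    response = await fetch(url, { signal: AbortSignal.timeout(5000) });
  } catch (err) {
    console.error("Network failure or timeout", err);
    return null; // a plan B could go here: cached data, a secondary API, etc.
  }

  // Was there a server error? Was the URL request successful?
  if (!response.ok) {
    console.error(`Server responded with ${response.status}`);
    return null;
  }

  // Was the JSON received successfully? Was it malformed?
  let payload: unknown;
  try {
    payload = await response.json();
  } catch {
    console.error("Response body was not valid JSON");
    return null;
  }

  // Was there an unexpected response? Does the local model match the server model?
  const model = payload as Partial<ServerModel>;
  if (typeof model.id !== "number" || typeof model.status !== "string") {
    console.error("Response did not map to the local model");
    return null;
  }
  return model as ServerModel;
}
```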

There are some valuable nuggets present in this diagram. It should be crafted into some sort of algorithmic template that developers can apply when developing their API integrations, as well as for API providers when developing the SDK and client solutions they make available to their API communities. I'm taking note so that next time I spend some cycles on my API SDK research I can help solidify my own definition.

This is a very micro look at fault-tolerance when it comes to API integration, and I'm continuing to look for other examples of change resiliency at this layer. Meaning, is there a plan B for the API call? Are there revenue ceiling considerations? Or other more non-technical, business and political considerations that should be baked into the code as well? Helping us all think more deeply around how we encourage change resiliency across the API community.


Regulatory API Monitoring For Validating Algorithmic Assertions

As I was learning about behavior driven development (BDD) and test driven development (TDD) this week, I quickly found myself applying this way of thought to my existing API regulation, and algorithmic transparency research. BDD and TDD are both used by API developers to ensure APIs are doing what they are supposed to, in development, QA, and production environments. There is no reason that this line of thought can't be elevated beyond just development groups to other business units, up to a wider industry level, or possibly employed by regulators to validate data or algorithmic solutions.

I am not a huge fan of government regulation, but I am a fan of algorithms doing what is being promised, and APIs plus BDD and TDD testing are one way that we can accomplish this. Similar to how the federal government is working together to define OAuth scopes, which help set the bar for how user data is accessed, BDD assertion templates can be defined, shared, and validated within regulated industries.
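To give a sense of what I mean, here is a rough sketch of what a shareable assertion template might look like--the template format, field names, and endpoint are my own assumptions, not an existing standard.

```typescript
// Hypothetical shareable assertion template: a machine-readable list of
// assertions that a regulator, business unit, or developer could run against
// any API implementation claiming compliance. Everything here is an assumption.
interface Assertion {
  description: string;
  method: "GET" | "POST";
  path: string;
  expectStatus: number;
  expectFields: string[];
}

const creditScoreAssertions: Assertion[] = [
  {
    description: "Score endpoint responds and exposes the factors behind the score",
    method: "GET",
    path: "/scores/12345",
    expectStatus: 200,
    expectFields: ["score", "factors", "model_version"],
  },
];

async function runAssertions(baseUrl: string, assertions: Assertion[]) {
  for (const a of assertions) {
    const res = await fetch(`${baseUrl}${a.path}`, { method: a.method });
    const body = await res.json();
    const missing = a.expectFields.filter((field) => !(field in body));
    const passed = res.status === a.expectStatus && missing.length === 0;
    console.log(`${passed ? "PASS" : "FAIL"} - ${a.description}`);
  }
}
```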

Right now we are just focused at the very local level when it comes to API assertions. With time I'm hoping an API assertion template format will emerge (maybe there is already something out there), and I'm hoping that we evolve ways of allowing the average business user to be part of defining and validating API assertions. I know my friends over at Restlet are working towards this with their DHC client solution, which provides testing solutions. 

BDD, TDD, and API assertions still very much exist in the technical environments where APIs are born and managed. I'm hoping to help define the space, identify opportunities for establishing common patterns while encouraging more reuse of leading patterns. Like other layers of the API economy, I am hoping that API assertions will expand beyond just the technical, and enjoy use amongst business groups, including industry leaders, and government regulators when it applies.


Harmonizing API Definitions Across Government With The U.S. Data Federation

Sharing of API definitions is critical to any industry or public sector where APIs are being put to work. If the API sector is going to scale effectively, it needs to be reusing common patterns, something that many API and open data providers have not been that great at historically. While this is critical in any business sector, there is no single area where this needs to happen more urgently than within the public sector.

I have spent years trying to wade through the volumes of open data that come out of government, and even spent a period of time doing this in DC for the White House. Addressing the lack of open API definition formats like OpenAPI Spec, API Blueprint, APIs.json, and JSON Schema across government is a passion of mine, so I'm very pleased to see the new US Data Federation project coming out of the General Services Administration (GSA).

"The U.S. Data Federation supports data interoperability and harmonization across Federal, state, and local government agencies by highlighting common data formats, API specifications, and metadata vocabularies."

The U.S. Data Federation has focused in on some of the existing patterns that exist in service of the public sector, including seven existing initiatives:

  • Building & Land Development Specification
  • National Information Exchange Model
  • Open Referral
  • Open311
  • Project Open Data
  • Schema.org
  • The Voting Information Project

I am a big supporter of Open Referral, Open311, Project Open Data, and Schema.org. I will step up and get more familiar with the Building & Land Development Specification, the National Information Exchange Model, and the Voting Information Project. The US Data Federation project echoes the work I've been doing with the Environmental Protection Agency (EPA) Envirofacts Data Service API, Department of Labor APIs, the FAFSA API, and my general Adopta.Agency efforts.

Defining the current inventory of government APIs and open data using OpenAPI Spec, and indexing them with APIs.json, is how we do the hard work of identifying the common patterns that are already in place and being used by agencies on the ground. Once this is mapped out, we can begin the long road towards defining the common patterns that could be proposed as future initiatives for the US Data Federation. I think the project highlights this well on their about page:

 "These examples will highlight emerging data standards and API initiatives across all levels of government, convey the level of maturity for each effort, and facilitate greater participation by government agencies."

The world of API definitions is a messy one. It may seem straightforward if you are a standards oriented person. It may also seem straightforward if you are a scrappy startup person. In reality, the current landscape is a tug of war between these two worlds. There are a wealth of existing web API concepts, specifications, and data standards available to us, but there are also a lot of leading definitions being defined by tech giants like Amazon, Google, Twitter, and others. With the tone set by VC investment, and distorted views on what intellectual property is, the sharing of open API definitions and schemas has been deficient across many sectors, for many years.

What the GSA is doing with the US Data Federation project is important. They are mapping out the common patterns that already exist, and providing a forum for helping identify others, as well as helping evolve the less mature, or disparate, API and schema patterns out in the wild. A positive sign that they are heading in the right direction is that the US Data Federation project is operating on Github. It is important that these common patterns exist on the social coding platform, as it is increasingly being used as an engine for the API economy--touching all stops along the API life cycle.

I will carve out the time to go through some of my existing government open data work, which includes rebooting my Open Referral leadership role. I'm finding that just doing the hard work of crafting OpenAPI Specs for government APIs is a very important piece of the puzzle. We need a machine-readable map of what already exists; otherwise, it is very difficult to find a way forward in the massive amounts of government open data available to us. However, I believe that when you take these machine readable API definitions and put them on Github, it becomes much easier to find the common patterns that the GSA is looking to define with the US Data Federation.
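To give a sense of the kind of tooling I'm talking about, here is a rough sketch that walks a folder of OpenAPI Spec (Swagger 2.0) files and tallies which schema definitions show up across agencies, surfacing candidates for common patterns--the folder layout is hypothetical.

```typescript
// Rough sketch: scan a folder of OpenAPI Spec (Swagger 2.0, JSON) files
// collected from government APIs and count how often each schema definition
// name appears, surfacing candidate common patterns. The folder is hypothetical.
import { readdirSync, readFileSync } from "fs";
import { join } from "path";

function tallyDefinitions(specFolder: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const file of readdirSync(specFolder).filter((f) => f.endsWith(".json"))) {
    const spec = JSON.parse(readFileSync(join(specFolder, file), "utf8"));
    // Swagger 2.0 keeps its schemas under "definitions"
    for (const name of Object.keys(spec.definitions ?? {})) {
      counts.set(name, (counts.get(name) ?? 0) + 1);
    }
  }
  return counts;
}

const counts = tallyDefinitions("./government-openapi-specs");
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .forEach(([name, count]) => console.log(`${name}: used in ${count} specs`));
```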


Hacking on Amazon Alexa with AWS Lambda and APIs At @APIStrat

I'm neck deep in studying how Amazon is operating their Alexa platform, so I'm pretty excited about the chance to listen and learn from the Alexa team at APIStrat in Boston. Even if you aren't building voice-enabled applications, the approach to developing, managing, and evangelizing the Alexa platform provides a wealth of best practices that we should all strive to emulate in our own operations.

Rob McCauley (@RobMcCauley) from the Amazon Alexa team is doing a workshop, as well as a keynote at @APIStrat in Boston next month. This is relevant to what is going on in the wider space because voice-enablement is a fast-moving layer when it comes to delivering API resources, helping define what is being dubbed as the conversational interface movement, while also providing the best practices for a modern API strategy that I mentioned above.

There are a number of things that the Alexa team does which have captured my attention, including their approach to developing skills, their investment ($$) into their developers, and their overall communication strategy. I'm working on profiling all of this as part of what I call a blueprint report, where I map out the approach of the Alexa team in a way that other API providers can put to work in their own operations.

I'm thinking I will have to wait until after @APIStrat to finish my blueprint report, as I'd like to attend the Alexa workshop, hear Rob's keynote, and possibly even talk to him personally about their approach, in the hallway. I hope to see you there, and hear you share your story. Even if you aren't on the stage at APIStrat, the hallways tend to be a great place to listen to the stories of leaders from across the space, as well as share your own--no matter how big or small you might be.

Make sure you get registered for APIStrat before it is sold out, and I'll see you there!


Amazon Launches Their Own QA Solution Called AWS Answers

Amazon launched their own questions and answers site, simply called AWS Answers. Amazon is definitely in a class of their own, but I thought the move reflects shortcomings in the wider QA space, and an approach that smaller API providers might want to consider for their operations.

Quora doesn't have an API, so why would we use it as a QA solution for the API space? I don't care how much network they have. While Stack Overflow is a wealth of API related questions and answers, the environment has been found to be pretty toxic for some API providers--making hand-rolling your own QA site a more interesting option.

AWS Answers is a pretty basic implementation, but it also has a wealth of valuable content. It wouldn't take much to hand-roll your own FAQ or wider answers solution within your API developer portal. I can understand why AWS would do their own, to help ensure their users are able to find the answers they need, without leaving the AWS platform. It depends on the type of platform you are operating, but keeping QA local might make more sense than using 3rd party solutions--allowing for more precise control over the answers your customers receive.

As I work to expand my API portal definition beyond just the minimum version, I'm adding a FAQ solution to the stack, and now I'm going to consider adding a separate answers solution modeled after AWS Answers. While I think platforms like Stack Overflow and Quora will continue to do well, I'm more interested in helping API providers roll their own solution, maybe even provide an API, and allow for more interoperability, and control over their own resources.


Your Southwest Airlines Flight Has An API

A friend of mine messaged me this photo of the Southwest Airlines flight API on Facebook the other day. After doing a little homework I found that every flight has this available on the plane's local network. There is a pretty interesting write-up on it from Roger Parks if you care to learn more.

Looking through the response it has all the information you need for your flight update screen. It might seem scary for folks like us poking around the network on airplanes looking for things like this, but this is just the nature of the Internet and something any network operator should consider as normal.

The API is available at getconnected.southwestwifi.com/current.json when you are on the plane's local network, and I'd consult Roger's post if you want more details about how to sniff it out using your browser. Anytime I am on a guest network on a plane or in a hotel, I enjoy turning on my Charles Proxy to log a list of all the domains and IP addresses in use.
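If you are curious what that looks like from code, here is a tiny sketch--it only works while you are on the plane's wifi, and the response fields noted in the comment are my guesses based on the flight status screen, not anything documented.

```typescript
// Tiny sketch: pull the in-flight status feed while on the plane's network.
// This only resolves on Southwest's onboard wifi, and the fields noted below
// are guesses based on the flight status screen, not documented anywhere.
async function flightStatus() {
  const res = await fetch("http://getconnected.southwestwifi.com/current.json");
  const data = await res.json();
  console.log(data); // likely altitude, ground speed, destination, time to arrival, etc. (assumed)
}

flightStatus().catch(console.error);
```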

This is a good way to learn about how people are architecting their networks, and delivering their resources to web, mobile, and device users. The problem with this activity is that sometimes you can discover things that you shouldn't--a line that I worry about a lot. I feel pretty strongly that if companies are using public DNS, or opening up their private network to the public, they should be aware that this is going to happen.

I hope that someday this type of behavior is embraced by companies, institutions, and government agencies. Not everyone will have good intentions like I do, but network operators should know this will happen, and make those of us wearing white hats welcome, so that we will report insecure infrastructure, and help keep things locked down--before the bad guys get in.

Thanks to my friend Jason for pinging me with this. From reading up on it, it is nothing new, but still worthy of noting, and talking about. I love learning about all the APIs that exist in the cracks.


Providing Inline API Documentation Within Your SaaS User Interface

The common approach to discovering that a SaaS provider has an API is through a single, external link in the footer of a website, simply labeled API or developers. Whenever I can, I'm on the lookout for evolutionary approaches to making users aware of an API, and I just found a good one over at CloudFlare.

When you are logged into CloudFlare managing your DNS, right below the area for adding, editing, and deleting DNS records you are given some extra options, including expandable access to your API--down in the right-hand corner, between Advanced and Help.

Once you click on the API option, you are given a listing of DNS record related API endpoints, allowing me to bake the same functionality available to me in the CloudFlare UI into my own systems and applications. A summary, path, and verb are provided for each relevant API, with a link to the full API documentation.
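The same DNS record listing you see in the UI is available from the CloudFlare v4 API that the inline panel points to--here is a quick sketch, with placeholder credentials and zone ID.

```typescript
// Quick sketch: list DNS records for a zone via the CloudFlare v4 API,
// mirroring what the inline API panel in the UI exposes.
// The email, API key, and zone ID below are placeholders.
const CF_EMAIL = "you@example.com";
const CF_API_KEY = "your-api-key";
const ZONE_ID = "your-zone-id";

async function listDnsRecords() {
  const res = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`, {
    headers: {
      "X-Auth-Email": CF_EMAIL,
      "X-Auth-Key": CF_API_KEY,
      "Content-Type": "application/json",
    },
  });
  const body = await res.json();
  for (const record of body.result ?? []) {
    console.log(`${record.type} ${record.name} -> ${record.content}`);
  }
}

listDnsRecords().catch(console.error);
```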

I really like this approach. It is a great way to make APIs more accessible to the muggles (thanks @CaseySoftware). It is also a great way to think about connecting UI functionality to the (hopefully present) API behind it. Imagine if every UI element had an API link in the corner to see the API behind it, and a link to its documentation. You could even display the request and response bodies for the API call made by the UI, allowing people to easily reverse engineer what an API does. 

I have suggested this approach at several events, and to other API technologists who felt it was a bad idea, as the user doesn't want to be bothered by the details of why something does what it does, they just want it to be done. I disagree. I strongly believe that this is an extension of old school beliefs by the IT wizards, that the muggles aren't smart enough, and IT should have all the power (one ring and all that).

Seriously, though. There is no reason that everyone shouldn't be exposed to the API behind, and if they want to learn more they can. If they do not want to learn more, they do not have to. I'm going to be evangelizing for more links to the API developer portal, API documentation, and other resources from within the UI of the SaaS solutions we use. This will help make sure that all users are aware of the API behind, and the opportunities for putting it to use in external applications, tooling, and services.


An Auditing API For Checking In On API Client Activity

Google just released a mobile audit solution for their Google Apps Unlimited users looking to monitor activity across iOS and Android devices. At first look, the concept didn't strike me as anything I should write about, but once I got to thinking about how the concept applies beyond mobile to IoT, and the potential for external 3rd party auditing of API and endpoint consumption--it stood out as a pattern I'd like to have in the filing cabinet for future reference.

Using the Google Admin SDK Reports API you can access mobile audit information by user, device, or auditing event. API responses include details about the device, including model, serial numbers, user emails, and any other element that is included as part of the device inventory. This model seems like it could easily be adapted to IoT devices, bots, and voice clients.
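Here is a hedged sketch of what pulling those mobile audit events might look like--the endpoint follows the Reports API activities.list pattern, but treat the path, response fields, and token handling as assumptions to verify against Google's documentation.

```typescript
// Hedged sketch: pull mobile audit events via the Admin SDK Reports API.
// The endpoint path follows the Reports API activities.list pattern, but
// verify the details against Google's documentation; the access token is
// assumed to come from an OAuth flow with the appropriate reports scope.
async function listMobileAuditEvents(accessToken: string) {
  const url = "https://admin.googleapis.com/admin/reports/v1/activity/users/all/applications/mobile";
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Reports API returned ${res.status}`);
  const body = await res.json();
  for (const item of body.items ?? []) {
    // Each activity item carries a timestamp and one or more named events (assumed shape).
    console.log(item.id?.time, item.events?.map((e: any) => e.name).join(", "));
  }
}
```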

One aspect that stood out for me as a pattern I'd like to see emulated elsewhere is the ability to verify that all of your deployed devices are running the latest security updates. After the recent IoT-launched DDoS attack on Krebs on Security, I would suggest that the security camera industry needs to consider implementing an audit API, with the ability to check for camera device security updates.

Another area that caught my attention was their mention that what "mobile administrators have been asking for is a way to take proactive actions on devices without requiring manual intervention." Meaning you could automate certain events, turning off, or limiting access to, specific API resources. When you open this up to IoT devices, I can envision many benefits depending on the type of device in play.

There are two dimensions of this story for me. That you can have these audit events apply to potentially any client that is consuming API resources, as well as the fact that you can access this data in real time, or on a scheduled basis via an API. With a little webhook action involved, I could really envision some interesting auditing scenarios that are internally executed, as well as an increasing number of them being executed by external 3rd party auditors making sure mobile, devices, and other API-driven clients are operating as intended.


Adding Behavior-Driven Development Assertions To My API Research

I was going through Chai, a behavior- and test-driven assertion library, and spending some time learning about behavior driven development, or BDD, as it applies to APIs today. This is one of the topics I've read about and listened to talks on from people I look up to, but just haven't had the time to invest too many cycles in learning more. As I do with other interesting, and applicable, areas, I'm going to add it as a research area, which will force me to bump it up in priority.

In short, BDD is how you test to make sure an API is doing what is expected of it. It is how the smart API providers are testing their APIs, in development and production, to make sure they are delivering on their contract. Doing what I do, I started going through the leading approaches to BDD with APIs, and came up with these solutions, with a small code sketch following the list:

  • Chai - A BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Jasmine - A behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. 
  • Mocha - A feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Nightwatch.js - An easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites. 
  • Fluent Assertions - A set of .NET extension methods that allow you to more naturally specify the expected outcome of a TDD or BDD-style test.
  • Vows - Asynchronous behaviour driven development for Node.
  • Unexpected - The extensible BDD assertion toolkit
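
Here is the small sketch I mentioned above, showing what a BDD-style API assertion looks like using Mocha and Chai--the endpoint and expected fields are hypothetical.

```typescript
// A small taste of BDD-style API assertions using Mocha and Chai.
// The endpoint and expected fields are hypothetical.
import { expect } from "chai";

describe("Accounts API", () => {
  it("returns the account the contract promises", async () => {
    const res = await fetch("https://api.example.com/accounts/123");
    expect(res.status).to.equal(200);

    const account = await res.json();
    expect(account).to.have.property("id", 123);
    expect(account).to.have.property("balance").that.is.a("number");
  });
});
```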

If you know of any that I'm missing, please let me know. I will establish a research project, add them to it, and get to work monitoring what they are up to, and better tracking the finer aspects of BDD. As I was searching on the topic I also came across these references that I think are worth noting, because they are from existing providers I'm already tracking on.

  • Runscope - Discussing BDD using Runscope API monitoring.
  • Postman - Discussing BDD using Postman API client.

I am just getting going with this area, but it is something I'm feeling goes well beyond just testing and touches on many of the business and political aspects of API operations I am most concerned with. I'm looking to provide ways to verify an API does what it is supposed to, as well as making sure an API sizes up to claims made by developers or the provider. I'm also on the hunt for any sort of definition format that can be applied across many different providers--something I could include as part of APIs.json indexes and OpenAPI Specs.

Earlier I had written on the API assertions we make, believe in, and require for our business contracts. This is an area I'm looking to expand on with this API assertion research. I am also looking to include BDD as part of my thoughts on algorithmic transparency, exploring how BDD assertions can be used to validate the algorithms that are guiding more of our personal and business worlds. It's an interesting area that I know many of my friends have been talking about for a while but is now something I want to work to help normalize for the rest of us who might not be immersed in the world of API testing.


A Machine Readable Jekyll Jig For Each Area Of My API Research

I have over 70 areas of research occurring right now as part of my API lifecycle work--these are areas that I feel directly impact how APIs are provided and consumed today. Each of these areas lives as a Github repository, using Github Pages as the front-end of the research. 

I use Github for managing my research because of its capabilities for managing not just code, but also machine readable data formats like JSON, CSV, and YAML. I'm not just trying to understand each area of the API lifecycle, I am working to actually map it out in a machine readable way. 

This process takes a lot of effort, and is always a work in progress. To help me manage the workload I rely on Github, the Github API, and Github Pages. On top of this Github base, I leverage the data and content capabilities of Jekyll when you run it on Github Pages (or any other Jekyll enabled server or cloud service). 

Each of my research areas begins with me curating news from across the space, then I profile companies and individuals who are doing interesting things with APIs, and the services, tooling, and APIs they are developing. I process all of this information on a weekly basis and publish it to each of my research projects as its YAML core. 

An example of this can be seen with my API monitoring research (the most up to date) with the following machine-readable components:

I also have several machine readable elements available which use Jekyll to drive the content for each research project:

When I update any of my research areas I just publish the YAML to each of my research project "jigs", and everything is updated. The content is dynamically driven using Liquid, which leverages a YAML-driven core. This allows me to manage 70+ research projects as a one-person operation. The news and analysis are published automatically each day as I do my monitoring, but the organizations, APIs, and tooling are updated manually as I get the time to dive into each area.

I am writing about this because I just locked down this machine readable core for my API monitoring research, which will set the bar for the rest of my research occurring over the next year. I will replicate the latest definition across all 70+ areas over the next couple of weeks as I get the bandwidth to spend within each area. I couldn't do what I do without Github, its API, Github Pages, and Jekyll--they make my world go round.


Where Is The WordPress For APIs?

I feel like I have said this before, but it is probably something worth refreshing--where is the WordPress for APIs? First, I know WordPress has an API, that isn't what I'm talking about. Second, I know WordPress is not our best foot forward when it comes to the web. What I am talking about is ready-to-go API deployment solutions in a variety of areas, that are as easy to deploy and manage as WordPress.

There is a reason WordPress is as popular as it is. I do not run WordPress for any of my infrastructure, but I do help others set up and operate their own WordPress installs from time to time. I get why people like it. I personally think it's a nightmare in there, when you start having to make it do things as a programmer, but I fully grasp why others dig it, and I am willing to support that whenever I can.

I want the same type of enabling solution for APIs. If you want a link API -- here you go. If you want a product API -- download over here. There should be a wealth of open source solutions that you can just download, unzip, upload, and go through the wizard. You get the API and a simple management interface. I would get to work building one in PHP / MySQL just to piss all the real programmers off, but I have too many projects on my plate already.
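To make the idea concrete, here is a minimal sketch of what a downloadable "link API" might amount to--done in TypeScript with Express rather than PHP / MySQL, with in-memory storage, and with every route and field being my own assumption.

```typescript
// Minimal sketch of a "link API" you could download and run, WordPress-style.
// Built with Express and in-memory storage just to make the idea concrete;
// a packaged version would add a database, auth, and a simple admin UI.
import express from "express";

interface Link { id: number; url: string; title: string; }

const app = express();
app.use(express.json());

const links: Link[] = [];
let nextId = 1;

// List all links
app.get("/links", (_req, res) => res.json(links));

// Add a new link
app.post("/links", (req, res) => {
  const link: Link = { id: nextId++, url: req.body.url, title: req.body.title };
  links.push(link);
  res.status(201).json(link);
});

app.listen(3000, () => console.log("Link API listening on port 3000"));
```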

If you want to develop the WordPress of APIs for the community and make it push-button deployment via Heroku, AWS, Google, or Azure, please let me know and I'm happy to help amplify. ;-)


The Web Evolved Under Different Environment Than Web APIs Are

I get the argument from hypermedia and linked data practitioners that we need to model our web API behavior on the web. It makes sense, and I agree that we need to be baking hypermedia into our API design practices. What I have trouble with is the idea that the web is a cornerstone we should be modeling everything after. I do not know what web y'all use every day, but the one I use, and harvest regularly, is quite often a pretty broken thing.

It just feels like we are overlooking so much to support this one story. I'm not saying that hypermedia principles don't apply because the web is shit, I'm just saying maybe it isn't as convincing an anchor for the story that web APIs are currently shit. I understand that you want to sell your case, and trust me...I want you to sell your case, but using this argument just does not pencil out for me.

There is another aspect of this that I find difficult. The web was developed and took root in a very different environment than the one web APIs are growing up in. We had more time and space to be more thoughtful about the web, and I do not think we have that luxury with web APIs. The stakes are higher, the competition is greater, and the incentives for doing it thoughtfully really do not exist in the startup environment that has taken hold. We can't be condemning API designers and architects for serving their current master (or can we?). 

While I will keep using core web concepts and specs to help guide my views on designing, defining, and deploying my web APIs, I'm going to explore other ways to articulate why we should be putting them to use. I'm going to also be considering the success or failure of these elements based on the shortcomings of the web, and web APIs, while I work to better polish the existing stories we tell, as well as hopefully evolve new ones that help folks understand what the best practices for web APIs are.


Github As The API Life Cycle Engine

I am playing around with some new features from the SDK-generation-as-a-service provider APIMATIC, including the ability to deploy my SDKs to Github. This is just one of the many ways Github, and more importantly Git, is being used as what I'd consider an engine in the API economy. Deploying your SDKs is nothing new, but when you're auto-generating SDKs from API definitions, deploying to Github, and then using that to drive deployment, virtualization, containers, serverless, documentation, testing, and other stops along the API life cycle--it is pretty significant.

Increasingly we are publishing API definitions to Github, the server side code that serves up an API, the Docker image for deploying and scaling our APIs, the documentation that tells us what an API does, the tests that validate our continuous integration, as well as the clients and SDKs. I've long been advocating for the use of Github as part of API operations, but with the growth in the number of APIs we are designing, deploying, and managing--Github definitely seems like the progressive way forward for API operations.

I will keep tracking which service providers allow for importing from Github, as well as publishing to Github--whether it's definitions, server images, configuration, or code. As these features continue to become available in these companies' APIs, I predict we will see the pace of continuous integration and API orchestration dramatically pick up, as we are more easily able to automate the importing and exporting of essential definitions, configurations, and the code that makes our businesses and organizations function.


Evolving The API SDK With APIMATIC DX Kits

I've been a big supporter of APIMATIC since they started, so I'm happy to see them continuing to evolve their approach to delivering SDKs using machine readable API definitions. I got a walkthrough of their new DX Kits the other day, something that feels like an evolutionary step for SDKs, and contributing to API providers making onboarding and integration as frictionless as possible for developers.

Let's walk through what APIMATIC already does, then I'll talk more about some of the evolutionary steps they are taking when auto-generating SDKs. It helps to see the big picture of where APIMATIC fits into the larger API lifecycle to assist you in getting beyond any notion of them simply being just an SDK generation service.

API Definitions
What makes APIMATIC such an important service, in my opinion, is that they don't just speak one modern API definition format, they speak all of the API definition formats, allowing anyone to generate SDKs from the specification of their choice: 

  • API Blueprint
  • Swagger 1.0 - 1.2
  • Swagger 2.0 JSON
  • Swagger 2.0 YAML
  • WADL - W3C 2009
  • Google Discovery
  • RAML 0.8
  • I/O Docs - Mashery
  • HAR 1.2
  • Postman Collection
  • APIMATIC Format

As any serious API service provider should be doing, APIMATIC then opened up their API definition transformation solution as a standalone service and API. This allows these types of API transformations to occur, and be baked in at every stop along a modern API lifecycle, by anyone.

API Design
Being so focused on being API definition driven, APIMATIC needed a practical way to manage API definitions, and allow their customers to add, edit, delete, and manipulate the definitions that would be driving the SDK auto generation process. APIMATIC provides one of the best API design interfaces I've found across the API service providers that I monitor, allowing customers to manage:

  • Endpoints
  • Models
  • Test Cases
  • Errors

Because APIMATIC is so heavily invested in having a complete API definition, one that will result in a successful SDK, they've had to bake healthy API design practices into their API design interface--helping developers craft the best API possible. #Winning

SDK Auto Generation
Now we get to the valuable, and time-saving, portion of what APIMATIC does best--generating SDKs for 10 separate programming language and platform environments. Once your API definition validates, you can choose to generate an SDK in your preferred language.

  • Visual Studio - A class library project for Portable and Universal Windows Platform
  • Eclipse - A compatible maven project for Java 5 and above
  • Android Studio - A compatible Gradle project for Android Gingerbread and above
  • XCode - A project based on CoCoaPods for iOS 6 and above
  • PSR-4 - A compliant library with Composer dependency manager
  • Python - A package compatible with Python 2 and 3 using PIP as the dependency manager
  • Angular - A lightweight SDK containing injectable wrappers for your API
  • Node.js - A client library project in Node.js as an NPM package
  • Ruby - A project to create a gem library for your API based on Ruby >= 2.0.0
  • Go - A client library project for Go language (v1.4)

APIMATIC takes their SDKs seriously. They make sure they aren't just low-quality auto-generated code. I've seen the overtime they put in to make sure the code they produce matches the styling and the reality on the ground for each language and environment being deployed.

Github Integration
Deploying your API SDKs to Github is nothing new, but being able to autogenerate your SDK from a variety of API definition languages, and then publish to Github opens up a whole new world of possibilities. This is when Github can become a sort of API definition driven engine that can be installed into not just the API life cycle, but also every web, mobile, device, bot, voice, or any other client that puts an API to use.

This is where we start moving beyond the SDK for me, into the realm of what APIMATIC is calling a DX Kit. APIMATIC isn't just dumping some auto-generated code into the Github repo of your choice. They are publishing the code, and now complete documentation for the SDK, to a Github README, so that any human can come along and learn about what it does, and any other system can also come along and put the API definition driven, auto-generated code to work.

Continuous Integration
The evolution of the SDK continues with...well, continuous integration, and orchestration. If you go under the settings for your API in APIMATIC, you now also have the option to publish configuration files for four leading CI solutions:

APIMATIC had already opened up beyond just doing SDKs with the release of their API Transformer solution, and their introduction of detailed documentation for each kit (SDK) on Github. Now they are pushing into API testing and orchestration areas by allowing you to publish the required config files for the CI platform of your choosing.

I feel like their approach represents the expanding world of API consumption. Providing an API and SDK is not enough anymore. You have to provide and encourage healthy documentation, testing, and continuous integration practices as well. APIMATIC is aiming to "simplify API Consumption", with their DX Kits, which is a very positive thing for the API space, and worth highlighting as part of my API SDK research.


Considering A Web API Ecosystem Through Feature-Based Reuse

I recently carved out some time to read A Web API ecosystem through feature-based reuse by Ruben Verborgh (@RubenVerborgh) and Michel Dumontier. It is a lengthy, very academic proposal on how we can address the fact that "the current Web API landscape does not scale well: every API requires its own hardcoded clients in an unusually short-lived, tightly coupled relationship of highly subjective quality."

I highly recommend reading their proposal, as there are a lot of very useful patterns and suggestions in there that you can put to use in your operations. The paper centers around the notion that the web has succeeded because we were able to better consider interface reuse, and were able to identify the most effective patterns using analytics, and it points out that there really is no equivalent to web analytics for measuring an API's effectiveness. 

In order to evolve Web API design from an art into a discipline with measurable outcomes, we propose an ecosystem of reusable interaction patterns similar to those on the human Web, and a task-driven method of measuring those.

To help address these challenges in the world of web APIs, Verborgh and Dumontier propose that we work to build web interfaces similar to what we do with the web, employing a bottom-up approach to composing reusable features such as full-text search, auto-complete, file uploads, etc. In order to unlock the benefits of bottom-up interfaces, they propose 5 interface design principles, with a rough sketch of what this might look like following the list:

  1. Web APIs consist of features that implement a common interface.
  2. Web APIs partition their interface to maximize feature reuse.
  3. Web API responses advertise the presence of each relevant feature.
  4. Each feature describes its own invocation and functionality.
  5. The impact of a feature on a Web API should be measured across implementations.
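
To make principles 3 and 4 a little more concrete, here is a rough sketch of a response advertising a reusable full-text search feature--the control names and shape are my own invention, not something defined in the paper.

```typescript
// Rough sketch of principles 3 and 4: a response that advertises a reusable
// full-text search feature and describes how to invoke it. The control names
// and shape here are my own invention, not something defined in the paper.
const response = {
  items: [{ id: 42, title: "Hypermedia and the Web API ecosystem" }],
  _features: [
    {
      name: "full-text-search",           // a common, reusable feature
      describedBy: "https://example.com/features/full-text-search",
      invoke: {
        method: "GET",
        urlTemplate: "https://api.example.com/articles{?query}",
        parameters: { query: "the text to search for" },
      },
    },
  ],
};

// A generic client that understands the "full-text-search" feature can use it
// against any API that advertises it, without API-specific glue code.
console.log(JSON.stringify(response._features[0].invoke, null, 2));
```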

They provide us with a pretty well thought out vision involving implementations and frameworks, and the sharing of documentation, while universally applying metrics for being able to identify the successful patterns. It provides us with a compelling, "feature-based method to construct the interface of Web APIs, favoring reuse over reinvention, analogous to component-driven interaction design on the human Web."

I support everything they propose. I cannot provide any critique on the technical merits of their vision. However, I find it lacks an awareness of the current business and political landscape--a gap I regularly find in the hypermedia and linked data material I consume.

Here are a few of the business and political considerations that contribute to the situation we find ourselves in that Verborgh and Dumontier are focused on, which will also work to slow the adoption of their proposed vision:

  • Venture Capital - The current venture capital driven climate does not incentivize sharing and reuse, or its startups investing time and energy into web technologies.
  • Intellectual Property - Modern views of intellectual property, partially fueled by VC investment, and further exacerbated by legal cases like Oracle v Google, force developers and designers to hold patterns close to their chest, limiting sharing and reuse again.
  • Lazy Developers - Not all developers are knowledge seekers like the authors of this paper, and myself, many are just looking to get the job done and get home. There are few rewards for contributing back to the community, and once I have mine, I'm done.
  • The Web Is Shit - One area where linked data and hypermedia folks tend to lose me is their focus on modeling things after the web. I agree the web is "working", but I don't know which one you use--the one I use is shit, and only getting worse. Have you scraped web content lately?
  • Metrics & Analytics - Google Analytics started out providing us with a set of tools to measure what works and doesn't work when it comes to the parts and pieces of our websites, but now it just does that for advertising. Also we do have analytics in the API space, but due to the other areas cited above, there is no sharing of this wisdom across the space.

These are just a handful of areas I regularly see working against the API design, definition, and hypermedia areas of the space, and they will slow the progress of their web API ecosystem vision. It doesn't mean I'm not supportive. I see the essence of a number of positive things present in their proposal, like reuse, sharing, and measurement. I feel the essence of existing currents in the world of APIs, like microservices, DevOps, and continuous integration (aka orchestration).

My mission, as it has been since 2010, is to make sure really smart folks like Ruben and Michel at institutions, startups, and the enterprise better understand the business and political currents that are flowing around them. It can be very easy to miss significant signals around the currents influencing what is working, or not working, with APIs when you are heads down working on a product, or razor-focused on getting your degree within an institution. The human aspects of this conversation are always well cited, but I'm thinking we aren't always honest about the human elements present on the API side of the equation. Web != API & API != Web.


Please Share Your OpenAPI Specs So I Can Use Them Across The API Life Cycle

I was profiling the New Relic API, and while I was pleased to find OpenAPI Specs behind their explorer, I was less than pleased to have to reverse engineer their docs to get at their API definitions. It is pretty easy to open up my Google Chrome Developer Tools and grab the URLs for each OpenAPI Spec, but you know what would be easier? If you just provided me a link to them in your documentation!

Your API definitions aren't just driving the API documentation on your website. They are being used across the API life cycle. I use them to fire up and play with your API in Postman, to generate SDKs using APIMATIC, or to create a development sandbox so I do not have to develop against your live environment. Please do not hide your API definitions. Bring them out of the shadow of your API documentation and give me a link I can click on--one click access to a machine-readable definition of the value your API delivers.
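To show what I mean, here is a rough sketch of the first thing I do the moment I have a direct link to a definition--the spec URL below is hypothetical, stand in your own:

```python
# A minimal sketch: once I have the URL to an OpenAPI definition, it can feed
# Postman, SDK generation, mocking, and monitoring. The URL is hypothetical.
import requests

SPEC_URL = "https://example.com/docs/openapi/applications.json"  # hypothetical link

spec = requests.get(SPEC_URL).json()

# Print the basics I need to start working with the API.
print(spec.get("info", {}).get("title"), spec.get("basePath", ""))
for path, methods in spec.get("paths", {}).items():
    for verb in methods:
        print(verb.upper(), path)
```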

I'm sure my regular readers are getting sick of hearing about this, but the reality is that my readers are a diverse and busy group of folks who will most likely not read every post on this important subject. If you have read a previous post from me on this subject, are reading this latest one, and still do not have prominent links to your API definitions--then shame on you for not making your API more accessible and usable...because isn't that what this is all about?


Making Data Serve Humans Through API Design

APIs can help make technology better serve us humans when you execute them thoughtfully. This is one of the main reasons I kicked off API Evangelist in 2010. I know that many of my technologist friends like to dismiss me in this area, but this is more about their refusal to give up the power they possess than it is ever about APIs.

I have been working professionally with databases since the 1980s, and have seen the many ways in which data and power go together, and how technology is used as smoke and mirrors as opposed to serving human beings. One of the ways people keep data for themselves is to make it seem big, complicated, and only something a specific group of people (white men with beards (wizards)) can possibly make work.

There is a great excerpt from a story by Sara M. Watson (@smwat), called Data is the New “___”, that sums this up for me:

The dominant industrial metaphors for data do not privilege the position of the individual. Instead, they take power away from the person to which the data refers and give it to those who have the tools to analyze and interpret data. Data then becomes obscured, specialized, and distanced.

We need a new framing of a personal, embodied relationship to data. Embodied metaphors have the potential to bring big data back down to a human scale and ground data in lived experience, which in turn, will help to advance the public’s investment, interpretation, and understanding of our relationship to our data.

DATA IS A MIRROR portrays data as something to reflect on and as a technology for seeing ourselves as others see us. But, like mirrors, data can be distorted, and can drive dysmorphic thought.

This is API for me. The desire to invest, interpret, and understand our relationship to our data is API design. This is why I believe in the potential of APIs, even if the reality of it all often leaves me underwhelmed. There is no reason that the databases have to be obscured, specialized, and distant. If we want to craft meaningful interfaces for our data we can. If we want to craft useful interfaces for our data, that anyone can understand and put to work without specialized skills--we can.

This process is often complicated by our legacy practices, the quest for profits, or vendor-driven objectives that get in the way of properly defining and opening up frictionless access to our data. Our relationship with our data is out of alignment because it serves business and technological masters, and does not actually benefit the humans it should be serving.


Increased Analytics At The API Client And SDK Level

I am seeing more examples of analytics at the API client and SDK level, providing more access to what is going on at this layer of the API stack. I'm seeing API providers build them into the analytics they provide for API consumers, and more analytics services from providers for web, mobile, and device endpoints. Many companies are selling these features in the name of awareness, but in most cases, I'm guessing it is about adding another point of data generation which can then be monetized (IoT is a gold rush!).

As I do, I wanted to step back from this movement and look at it from many different dimensions, broken down into two distinct buckets:

  • Positive(s)
    • More information - More data that can be analyzed.
    • More awareness - We will have visibility across integrations.
    • Real-time insights - Data can be gathered on a real-time basis.
    • More revenue - There will be more revenue opportunities here.
    • More personalization - We can personalize the experience for each client.
    • Fault Tolerance - There are opportunities for building in API fault tolerance.
  • Negative(s)
    • More information - If it isn't used it can become a liability.
    • More latency - This layer slows down the primary objective.
    • More code complexity - Introduces added complexity for devs.
    • More security considerations - We just created a new exploit opportunity.
    • More privacy concerns - There are new privacy concerns facing end-users.
    • More regulatory concerns - In some industries, it will be under scrutiny.

I can understand why we want to increase the analysis and awareness at this level of the API stack. I'm a big fan of building resiliency into our clients & SDKs, but I think we have to weigh the positives and negatives before jumping in. Sometimes I think we are too willing to introduce unnecessary code and data gathering, potentially opening up security and privacy holes, while chasing new ways we can make money.
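For what it is worth, here is a minimal sketch of the kind of lightweight analytics layer I have in mind for my own client code--just counts, errors, and latency kept locally, so the awareness gained can be weighed against the added code, latency, and privacy surface:

```python
# A rough sketch of a lightweight analytics layer wrapped around client-side
# API calls--counts, errors, and latency, kept locally in the client.
import time
from collections import defaultdict

import requests

metrics = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def instrumented_get(url, **kwargs):
    start = time.time()
    response = requests.get(url, **kwargs)
    elapsed_ms = (time.time() - start) * 1000

    # Record what happened for this URL so I can review it later.
    entry = metrics[url]
    entry["calls"] += 1
    entry["total_ms"] += elapsed_ms
    if response.status_code >= 400:
        entry["errors"] += 1
    return response
```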

I'm guessing it will come down to each SDK, and the API resources that are being put to work. I'll be aggregating the different approaches I am seeing as part of my API SDK research, and trying to provide a more coherent glimpse of what providers are up to. By doing this, I'm hoping I can better understand some of the motivations behind this increased level of analytics being injected at the client and SDK level.


An Integrations Page For Your API Solution

One new way I am discovering the tech services the cool kids are using is through the dedicated integrations pages of the API service providers I track on. Showcasing the services your platform integrates with is a great way of educating consumers about the possibilities when it comes to your tools and services. It is also a great way for analysts like me to connect the dots around which services are most important to the average user.

API service providers like DataDog, OpsClarity, and Pingometer are providing dedicated integration pages showcasing the 3rd party platforms they integrate with. Alpha API dogs like Runscope also have integration APIs, allowing you to get a list of the integrations your team depends on (perfect for another story). I'm just getting going tracking the existence of these integration pages, but each time I come across one lately I find myself stopping and looking through each of the services included.

Directly, API integration pages provide a great way to inform customers about which of the other services they use can be integrated with a platform, potentially adding to the number of reasons why they might choose to go with a service. Indirectly, API integration pages provide a great way to inform the sector about which API driven platforms are important to service providers, and their customers. After I get a number of these integration pages bookmarked as part of my research, I will work on other stories showcasing the various approaches I find.


Amazon Alexa As An Example When It Comes To API Communications

I'm always looking for specific API providers to showcase as examples we can follow when crafting different portions of our API strategies. The Amazon Alexa team is doing a pretty kick ass job at blogging, and owning the conversation when it comes to developing conversational interfaces, so I thought I'd highlight them as an example to follow when planning the communications portion of your strategy.

Take a look at the #Alexa tag for the AWS blog. They have a regular stream of storytelling coming out of the platform. It's a mix of talking about the tech of the platform, and showcasing what it can do. What really captured my attention for this story is their regular showcasing of the interesting solutions developers are building on top of the platform. Many platform blogs I read are a one trick pony, just talking about their service, and I think the AWS Alexa team has found a compelling blend.

Ok, AWS probably has just a few more resources than your API team, but trust me, one person can do a lot when they are really engaged. I produce at least five posts a day (ok, they are ranty and weird), and always work to keep things as diverse as possible, and not about my products or services (the fact I don't have any probably helps as well). I do not recommend you use API Evangelist as a model for your platform blogging. I do recommend you use Amazon Alexa as a model for how you can create a compelling API platform communication experience.


The Different Reasons Behind Why We Craft API Definitions

I wrote a post about the emails I get from folks regarding the API definitions contained within my API stack research, something that has helped me better see why it is I craft API definitions. I go through APIs and craft OpenAPI Specs for them because it helps me understand the value each company offers, while also helping me discover interesting APIs and the healthy practices behind them.

The reason I create API definitions and organize them into collections is all about discovery. While some of the APIs I will be putting to use, most of them just help me better understand the world of APIs and the value and the intent behind the companies who are doing the most interesting things in the space.

I would love it if all my API definitions were 100% certified, and included complete information about the request, response, and security models, but just having the surface area defined makes me happy. My intention is to try and provide as complete of a definition as possible, but the primary stop along the API lifecycle I'm looking to serve is discovery, with other ones like design, mocking, deployment, testing, SDKs, and others following after that.

Maybe if we can all better understand the different reasons behind why we all craft and maintain API definitions we can better leverage Github to help make more of them complete. For now, I'll keep working on my definitions, and if you want to contribute head over to the Github repo for my work, and share any of your own definitions, or submit an issue about which APIs you'd like to see included.


Running Synthetic Data And Content Through Your APIs

I was profiling the New Relic API and came across their Synthetics service, which is a testing and monitoring solution that lets you "send calls to your APIs to make sure each output and system response are successfully returned from multiple locations around the world"--pretty straightforward monitoring stuff. The name is what caught my attention, and got me thinking about the data and content that we run through our APIs.

Virtualization feels like it describes the levers and gears of our API-driven systems, and synthetics feels like it speaks to the data and content that flows through these systems. It feels like everything in the API stack should be able to be virtualized and sandboxed, including the data and content, which is the lifeblood--allowing us to test and monitor everything.

It also seems like another reason we'd want to share our data schemas, as well as employ common ones like schema.org, so that others can create synthetic data and content sets for a variety of scenarios--then API providers could put these sets to work in testing and monitoring their operations. A sort of synthetic data and content marketplace for the growing world of API testing and monitoring.
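To make the idea a little more concrete, here is a minimal sketch of generating synthetic records from a shared schema--the schema here is hypothetical, but the same approach would work against schema.org types or any published data schema:

```python
# A minimal sketch of generating synthetic records from a simple, shared
# schema, the kind of data set that could be run through a sandboxed API
# for testing and monitoring. The schema below is hypothetical.
import random
import string

schema = {
    "id": "integer",
    "name": "string",
    "email": "string",
    "active": "boolean",
}

def synthetic_record(schema):
    record = {}
    for field, field_type in schema.items():
        if field_type == "integer":
            record[field] = random.randint(1, 10000)
        elif field_type == "boolean":
            record[field] = random.choice([True, False])
        else:
            record[field] = "".join(random.choices(string.ascii_lowercase, k=8))
    return record

# Generate a small synthetic data set to run through a sandboxed API.
dataset = [synthetic_record(schema) for _ in range(25)]
```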

I see that New Relic has the name Synthetics trademarked, so I'll have to play around with variations to describe the data and content portion of my API virtualization research. I'll use virtualization to describe the gears of the engine, and something along the lines of synthetic data and content to describe everything that we run through it. I am just looking for ways to better describe the different approaches I am seeing, and tell more stories about API virtualization, and sandboxing, in ways that resonate with folks.


APIs Can Give An Honest View Of What A Company Does

One of the reasons I enjoy profiling APIs is that they give an honest view of what a company does, absent of all the marketing fluff, and the promises that I see from each wave of startups. If designed right, APIs can provide a very functional, distilled down representation of data, content, and algorithmic resources of any company. Some APIs can be very fluffy and verbose, but the good ones are simple, concise, and straight to the point.

As I'm profiling the APIs for the companies included in my API monitoring research, what API Science, Apica, API Metrics, BMC Software, DataDog, New Relic, and Runscope offer quickly becomes pretty clear. A simple list of valuable resources you can put to use when monitoring your APIs. Crafting an OpenAPI Spec allows me to define each of these companies' APIs, and easily articulate what it is that they do--minus all the bullshit that often comes with the business side of all of this.

I feel like the detail I include for each company in an APIs.json file provides a nice view of the intent behind an API, while the details I put into the OpenAPI Spec provide insight into whether or not a company actually has any value behind this intent. It can be frustrating to wade through the amount of information some providers feel they need to publish as API documentation, but it all becomes worth it once I have the distilled down OpenAPI Spec, giving an honest view of what each company does.


A Service Level Agreement API For API Service Providers

I am spending some time profiling the companies who are part of my API monitoring research, specifically learning about the APIs they offer as part of their solutions. I do this work so that I can better understand what API monitoring service providers are offering, but also for the discoveries I make along the way--this is how I keep API Evangelist populated with stories. 

An interesting API I came across during this work was from the Site24X7 monitoring service, specifically their service level agreement (SLA) API. It is an API for adding, managing, and reporting against the SLAs that you establish as part of the monitoring of your APIs--a pretty interesting API pattern that seems like it should be part of the default API management stack for all API providers.

This would allow API providers to manage SLAs for their operations, but also potentially expose this layer to each consumer of the API, letting them understand the SLAs that are in place, and whether or not they have been met--in a way that could be seamlessly integrated with existing systems. An API for SLA management seems like it could also be a standalone operation, helping broker this layer of the API economy, and provide a rating system for how well API providers are holding up their end of the API contract.
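This is not Site24X7's actual API, but here is a rough sketch of the kind of SLA resource I picture a default API management stack exposing, along with the compliance check a reporting endpoint would run against collected uptime and latency metrics:

```python
# A rough, hypothetical sketch of an SLA resource and the compliance check a
# reporting endpoint could run--not any specific provider's implementation.
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    name: str
    uptime_target: float        # e.g. 99.9 (percent)
    response_time_ms: int       # e.g. 500 (milliseconds)

    def evaluate(self, observed_uptime: float, observed_latency_ms: float) -> dict:
        # Compare observed metrics against the agreed targets.
        return {
            "sla": self.name,
            "uptime_met": observed_uptime >= self.uptime_target,
            "latency_met": observed_latency_ms <= self.response_time_ms,
        }

sla = ServiceLevelAgreement("public-api", uptime_target=99.9, response_time_ms=500)
print(sla.evaluate(observed_uptime=99.95, observed_latency_ms=420))
```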


A Dedicated Security Page For Your API Portal

One area I am keeping an eye on while profiling APIs, and API service providers, are any security-related practices that I can add to my research. While looking through DataDog I came across their pretty thorough security page, providing some interesting building blocks that I will add to my API security research. This is all I do as the API Evangelist--aggregate the best practices of existing providers, and shine a light on what they are up to. 

On their security page, DataDog provides details on physical and corporate security, information about data in transit, at rest, and in retention, including personally identifiable information (PII), and details surrounding customer data access. They also provide details of their monitoring agent and how it operates, as well as how they patch, employ SSO, and require their staff to undergo security awareness training. Most importantly, they encourage you to disclose any security issues you find--something critical for all providers to encourage.

Transparency when it comes to security practices is an important tool in our API security toolbox. It is important that API providers share their security practices like DataDog does, helping build trust, and demonstrating competency when it comes to operations. I'm working on an API security page template for my default API portal, and DataDog's approach provides me with some good elements I can add to my template.


You Can't Say AI Benefits Outweigh Risk Without Some Algorithmic Transparency

I am increasingly hearing the phrase, "the benefits outweigh the risks" applied when talking about AI, machine learning, and the increasing number of algorithmic decisions that are being made in all parts of our digital world. This seems to be the new default of AI and machine learning advocates looking to tip the scales in favor of their technology, over the human side of the discussion.

This argument can be found in discussions about AI used in self-driving cars, all the way to policing algorithms making decisions on the street or in a court of law. I'm not opposed to the argument if it is truly the case, but it seems like something you can claim without ever providing the data behind the decision, simply relying on your lack of faith in humans being able to consistently make decisions.

This is why I wrote about the importance of data sharing in industries where algorithms are making an impact, and why I am an advocate for providing API access for journalists, analysts, and regulators to actually follow up on the claims being made--allowing 3rd parties to actually weigh the pros and cons, and make a collective, more fair and balanced determination of whether or not the benefits truly do outweigh the risk.

I'm not saying that folks who make these claims are being dishonest, but in my experience, in the API space most folks blindly believe in tech and their algorithms, and seem to have almost no faith in humans, and are more than happy to make false claims in the service of the algorithm. This is why I have to say that you can't ever tell me the benefits outweigh the risk without some algorithmic transparency involved--it just won't mean anything to me.


If You Have An API For Your Platform You Are A Stage For Cybersecurity.Theater

Adding to the many reasons you would want, or not want APIs these days, is the escalating cyber war playing out on the web around the world. APIs aren't playing a role in the cyber security realm in the way you'd think, allowing the bad guys, or even the good guys to get into systems, but they are how these actors are spreading information or disinformation about their cyber activities. 

Increasingly Facebook, Twitter, Instagram, Reddit, and other API driven platforms are being used to broadcast, engage, and study the fast growing world of cyber security. Whether it is the Israeli Defense Force, U.S. Cyber Command, or a 15-year-old hacker in your basement, they are using these API driven channels to broadcast their message, as well as monitor the message of their adversaries, with us analysts following up behind trying to make sense of it all--using the same channels. 

Moving forward if you have a platform with an API, you will have a stage for the Cybersecurity.Theater to play out. Actors will use you to tell their story, to communicate, syndicate images, publish their videos, and make their payments. This will scare the shit out of many of you, but for others, it will be an opportunity to sell popcorn and other concession items. 

Since "securing cyberspace is a 24/7 responsibility (United States army Cyber Command and Second Army), it will need a 24/7, API driven theater to perform in.


Defining A Conversational Layer On Top Of APIs

As I am exploring, and writing about, Meya's Bot Flow Markup Language (BFML), I came across the announcement from Google about their acquisition of API.AI, titled "Making Conversational Interfaces Easier to Build". I feel like this description reflects what I was writing about in "Beyond Mobile: API Ready For iPaaS, Voice, and Bots", and sounds better to me than saying voice, bot, or integration workflow.

Whether it's skills for voice enablement, intents and flows for bot interactions, or triggers, actions, and integrations with iPaaS, I'm guessing we are going to need a way to define, and convey meaning through, this growing conversation we'll be having using API resources. With OpenAPI Spec and API Blueprint we finally have adequate ways to describe where our data, content, and algorithmic resources reside, and a little bit about what they do, but it feels like we need a similar way of defining the conversational layer on top.

I see the beginning of this present in Meya's Bot Flow Markup Language (BFML), which is a YAML definition describing a flow, made of components that can each make an API call, all in the service of what they consider "intent". I'll have to see how other bot providers are defining this layer, as well as learn more about how Alexa is defining the conversational layer for their skills. All of this smells like we need some Hydra injected into the conversation, but I need to do more research before I start evangelizing anything.
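To help myself think it through, here is a hypothetical sketch of the layer I am describing--intents (keywords or regular expressions) map to named flows, and each flow is a sequence of components that ultimately call an API. None of this is Meya's actual implementation, just how I picture the abstraction:

```python
# A hypothetical sketch of a conversational layer on top of APIs: intents
# (regular expressions) resolve to named flows, and each flow is a list of
# component names that would each make an API call. Names are made up.
import re

flows = {
    "weather_flow": ["lookup_location", "call_weather_api", "format_reply"],
    "joke_flow": ["call_joke_api", "format_reply"],
}

intents = {
    r"\b(weather|forecast)\b": "weather_flow",
    r"\b(joke|funny)\b": "joke_flow",
}

def resolve_intent(message):
    # Return the name of the flow that should satisfy this message.
    for pattern, flow_name in intents.items():
        if re.search(pattern, message, re.IGNORECASE):
            return flow_name
    return None

print(resolve_intent("tell me a joke"))  # joke_flow
```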

The whole Slackbot thing is interesting to me from a technical point of view--not so much from a business side of things. Twitter bots I find intriguing because they are public, and can wreak havoc, or be very creative. Alexa is interesting from both a technical and business perspective for me. But, helping define a conversational layer on top of the world of APIs is intriguing to me, mostly because it continues building on top of what I consider to be one of the key strengths of APIs--making very abstract technical things more accessible and meaningful to humans in a digital world.


API SDKs Getting More Specialized

I have been doing a lot of thinking about the client and SDK areas of my research lately, considering how these areas overlap with the world of bots, as well as with voice, and iPaaS. I'm thinking about hand-crafted and autogenerated SDKs, and even API clients as a service like Postman and Paw. I'm thinking about how APIs are being put to work, across not just web and mobile, but also system to system integration, the skills in voice platforms like Alexa, and the intents in bot platforms like Meya.

I'm considering how APIs can deliver the skills needed for the next generation of apps beyond just a mobile phone. I kicked off my SDK research over a year ago, where I track on the approaches of leading platforms who are offering up code samples, libraries, and SDKs in a variety of programming languages. While conducting my research, I've been seeing the definition of what is an SDK slowly expand and get more specialized, with most of the expansion in these areas:

  • Mobile Development Kit - Providing code resources to help developers integrate their iPhone, Android, Windows, and other mobile applications with APIs.
  • Platform Development Kits - Provide code resources for using APIs in other specific platforms like WordPress, SalesForce, and others.

In addition to mobile, and specific platform solutions, I am seeing API providers stepping up and providing iPaaS options, like ClearBit is doing with their Zapier solutions. As part of this brainstorm exercise, I feel like I should also add a layer dedicated to delivering via iPaaS:

  • Integration Platform as a Service Development Kits - Delivering code resources for use in iPaaS services like Zapier, allowing for simpler system to system integration across many different platforms, with some having a specific industry focus.

Next, if I scroll down the home page of API Evangelist, I can easily spot 11 other areas of my research that stand out as either areas where I'm seeing SDK movement, or areas I'd consider to be an SDK opportunity:

  • Voice Development Kit - Code resources to support voice application and device integrations.
  • Bot Development Kit - Code resources for delivering bot implementations on various platforms.
  • Visualization Development Kit - Code resources and tooling for helping deliver visualizations derived from data, content, and algorithms via API.
  • Virtualization Development Kit - Code resources to support the creation of sandbox environments, use of dummy data, and other virtualization scenarios.
  • Command Line Development Kit - Code resources to support usage of API resources accessed via command line interfaces.
  • Embeddable Development Kit - Code resources providing embeddable JavaScript buttons, badges, and other widgets.
  • Orchestration Development Kit - Code resources and schema for use in continuous integration and other delivery tooling and services.
  • Real Time Development Kit - Code resources designed to augment API resources with real-time features and technology.
  • Spreadsheet Development Kit - Code resources designed for using spreadsheets as an API data source, as well as putting API resources to use in spreadsheet environments.
  • Drone Development Kit - Code resources designed to support drones, and the hardware, application, and networks that support them.
  • Auto Development Kit - Code resources designed specifically for integration with major manufacturer and 3rd party aftermarket platforms.

I am thinking about open source solutions in a variety of programming languages that put APIs to work delivering data, content, and algorithms to the specialized endpoints above. I've seen mobile development kits evolve out of the standard approach to providing API SDKs, out of the need to deliver resources to mobile phones, over questionable networks, and to a constrained UI. This type of expansion will continue to grow, increasing the demand for specialized client solutions that all employ the same stack of web APIs.

This is just some housekeeping and brainstorming around the client and SDK areas of my research. I'm just working to understand how some of the other areas of my research potentially overlap with these layers. After seeing common patterns in iPaaS, Voice, and Bots, it got me thinking about other areas where I've seen similar patterns occur. Obviously, these aren't areas of SDK development that all API providers should be thinking about, but depending on your target audience, a handful of them may apply.

I go back and forth regarding the importance of SDKs to API integration. I enjoy watching the API client as a service providers like Postman and Paw, as well as straight up SDK solutions like APIMATIC. When I crack open tooling like the Swagger Editor, and services like APIMATIC, I'm impressed with the increasing number of platforms available for deployment--enabling both API deployment and consumption. As I watch API service providers like Restlet and Apiary evolve their design, deployment, and management solutions to cater to more stops along the API life cycle, I find myself more interested in what could be next.


A Plan B API Switch

I've had an idea for a bot-related service I call "plan b", which would act as a secondary action for any sort of bot request / response to an API. When developers are providing common bot responses like looking up a business address, sports statistic or stock quote, it could be exposed to suggestions for a "plan b". When a request is made, it can travel via its regular path, but it would also be included in a queue where other 3rd party developers could provide plan b suggestions, either free or paid. When a user is engaging with the bot and didn't like the primary response, they could click on the "plan b" option, opening up alternative responses. In theory, the user could cycle through each "plan b" suggestion until they find a suitable response. 

Since I don't have any startup aspirations, I enjoy working through these ideas on my blog as part of my wider research. I found myself thinking about my Plan B bot idea as I was learning about Meya's Bot Flow Markup Language, and in the context of how we can build resiliency into API client code. The concept of a plan B seems extremely relevant to this discussion, and worth consideration beyond just bots, into voice, iPaaS, and other clients being put to work on top of APIs.

In the context of fault and change resistance, it seems like we'd want a "plan b" layer in our SDKs to deal with when an API goes away temporarily, or even permanently. I know I do not have ANY plan b in place for any of my API integrations, either directly in the SDK, or in my business strategy--and I am guessing this is the case for most API integrations. It seems like responding to status codes, etc., could be considered fault tolerance (micro), where a plan b option would be in the change resistance category (macro).

I had pictured "plan b" being some sort of hypermedia layer that could be applied to the world of bots, providing alternative options alongside each API call. I am going to expand on this definition to include resiliency. Maybe we can incentivize resiliency through the discovery of better responses, or even possibly monetization opportunities when commerce behavior(s) are involved. I'll keep brainstorming on my plan b idea, something that is a little more interesting now that it isn't just about bot response discovery and monetization, and might actually provide a plan b switch for resiliency and API brokering at the client and SDK level.


SchemaHub's Usage Of Github To Launch Their API Service Is A Nice Approach

I'm looking through a new API definition focused service provider called SchemaHub today, and I found their approach to using Github as a base of operations interesting, providing a nice blueprint for other API service providers to follow. I'm continually amazed at the myriad of ways that Github can be put to use in the world of APIs, which is one of the things I love about it.

As a base for SchemaHub, they created a Github Org, and made their first repository the website for the service, hosted on Github Pages. In my opinion, this is how all API services should begin, as a repo, under an organization on Github--leveraging the social coding platform as a base for their operations.

SchemaHub is taking advantage of Github for hosting their API definition focused project--free, version controlled, static website hosting for schemahub.io. 

As I was looking through their site, learning about what they are doing, I noticed a subscription button at the bottom of the page, asking me to subscribe so they can notify me when things are ready.

Once I clicked on the button, I was taken through a Github OAuth dance, which makes SchemaHub not just a Github repo for the site, but an actual Github Application that I've authenticated with using my Github account. They only have access to my profile and email, but this is the type of provider to developer connection I like to see in the API world.

Once I authorize and connect, I am taken to a thank you page back on their website, letting me know I will be contacted shortly with any updates about the service. Oh, and I'm offered a Twitter account to follow as well, allowing me to stay in tune with what they are up to--providing a pretty complete picture for how new API services can operate.

SchemaHub's approach reflects what I'm talking about when I say that Github should offer an OAuth service, something that would enable applications running on Github to establish a Github app as part of their organization and website. I like this model because it enables connections like the one SchemaHub has established to occur, maximizing the social powers of the Github platform.

SchemaHub wins for making a great first impression on me with their API service. Github Org, simple static Github Pages hosted website, connectivity with my Github profile, and a Twitter account to follow. Now I know who they are, I'm connected, and when they are ready with their API service, they have multiple channels to update me on. My only critique is that I would also like to have a blog with Atom feed, so I can hear stories about what they are trying to accomplish, but that is something that can come later. For now, they are off to a pretty good start.


Flow Abstraction And Intent Layer On Top Of APIs To Feed The Bots

I was reading an interesting post on developing bots from Meya, a bot platform provider, which I think describes the abstraction layer between what we are calling bots, and what we know as APIs. I have been trying to come up with a simple way of quantifying the point where bots and APIs work together, and Meya's approach to flow and intent provides me with a nice scaffolding.

The flow step of their bot design rationale provides a nice way to think about how bots will work, breaking out each step of the bot interaction in plain English. They use a YAML format they call Bot Flow Markup Language, or BFML, to describe the flows, comparing BFML to HTML, with this definition:

HTML is spatial, and BFML is temporal. HTML determines where UI exists, and BFML determines when UI exists.

The second part of their bot design rationale involves Intents, providing this additional definition:

If BFML is like HTML, then intents are like URLs.

According to Meya, "intents can be keywords, regular expressions, and natural language models as you get more sophisticated". This seems to be where the more human aspect of what is getting done here is defined, mapping each intent to a specific flow, which can execute one or many steps to potentially satisfy the intent.

The third step is components, which is where the direct API connection becomes clear. If you look at their example, the component simply makes a call to the Chuck Norris joke API, returning the results as part of the flow. Each part of the flow calls its targeted component, and each component can make a GET, POST, PUT, PATCH, or DELETE to an API that provides the data, content, or algorithm behind the component.
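Here is my read of the component step, as a rough sketch--the component is just a function that makes an HTTP call and hands the result back to the flow. I am assuming the https://api.chucknorris.io/jokes/random endpoint, which returns a JSON body with a "value" field, but any joke API would do:

```python
# A rough sketch of a flow component: make an HTTP call to an API and return
# the result to the flow. Endpoint and response field are assumptions.
import requests

def chuck_norris_component():
    response = requests.get("https://api.chucknorris.io/jokes/random", timeout=5)
    response.raise_for_status()
    return response.json().get("value")

# The flow would call the component and pass the joke back to the bot user.
print(chuck_norris_component())
```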

This provides me with a beginning scaffolding to think about how bot platforms are constructing the API abstraction layer behind bot activity. I will be going through other bot platforms to understand each individual approach. Bots to me are just another endpoint for the API economy, and like mobile phones, we can have the API layer be shadowy and dark, or we can have it be more transparent and standardized, with platforms sharing their approach like Meya does.

I am picturing a world where we share open definitions of bot flows, and the intents that trigger them, in YAML. There will be marketplaces of flows, sharing the logic behind what is working (or not) within the bot community. These flows shouldn't be a company's secret sauce, any more than the API definitions they employ within each function are. The secret sauce should be the data, content, and algorithms behind each API that is called as part of any flow, designed to satisfy a specific intent.

When providers like Meya share their approach via their blog, it gives me the opportunity to learn about their approach. It also gives me the opportunity to explore, and compare with the rest of my research, without having to always fire up their platform--which I do not always have the time to do (I wish I did). This helps me push forward my bot research in baby steps, derived from people who are doing interesting things in the space and are willing to share with the community--which is what API Evangelist is all about.


Code Resiliency Lessons In How Twitter Deploys Their Embeddables

I am learning about how Twitter deploys their widgets, extracting some insight for my research around how we can build change resiliency into our client code. As I do my regular monitoring of the API space, I try to keep an eye out for examples from leading providers of how they are investing in making client code more change resilient. This Twitter blog post provides me with three concepts I wanted to add to my research:

  • Reversibility: ‘Rollback first, debug later’ is our motto. Rollback should be fast, easy, and simple. Ideally, it’s a giant red button that can get our heart rates down.
  • Incremental release: All code has bugs and deploys have an uncanny way of surfacing them. That’s why we wanted the ability to release new code in phases.
  • Visibility: We need to have graphs to show how both versions of widgets.js are doing at all times. We also need the ability to drill down by country, browser type, and widget type. These graphs should be real time so we can quickly tell how a deploy is going and take action as necessary.

These are change elements that seem like they need consideration as we craft our web, mobile, device, visualization, bot, voice, and other types of API clients. These three elements should be present in the code, anywhere I'm making an API call. Being able to reverse how I'm interacting with an API, the incremental release of new API paths or changes to existing APIs, and having an analytics layer can contribute to helping us deal with change.
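As a rough sketch of what reversibility and incremental release could look like in my own client code, a rollout percentage decides which API version a call uses, and a single rollback flag sends everything back to the stable path--both base URLs are hypothetical:

```python
# A rough sketch of reversibility and incremental release at the client level.
# Both base URLs are hypothetical placeholders.
import random

ROLLOUT_PERCENT = 10      # incremental release: 10% of calls hit the new version
ROLLBACK = False          # reversibility: flip one flag to go back to stable

def select_base_url():
    if ROLLBACK:
        return "https://api.example.com/v1"          # stable version
    if random.random() < ROLLOUT_PERCENT / 100.0:
        return "https://api.example.com/v2-beta"     # new version being phased in
    return "https://api.example.com/v1"

# Visibility would come from logging which version each call was routed to.
print(select_base_url())
```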

I think I am going to get started with an analytics layer for my own client code. Start thinking about logging the calls I'm making to any API I depend on. I have this in place for the server side of the APIs that I manage but do not have any sort of logging at the client level. Not only do I not have any plan for change at the client layer, I might not even know there was a change because I do not have any visibility.


Beyond Mobile: API Ready For iPaaS, Voice, and Bots

I enjoy being able to switch gears between all the different areas of my API research. It helps me find the interesting areas of overlap, and potentially synchronicity, in how APIs are being put to work. After thinking about the API abstraction layer present in Meya's bot platform, I was reading about Clearbit's iPaaS integration layer with Zapier. Zaps are just like the components employed by Meya, and Clearbit walks us through delivering intended workflows with the valuable APIs they provide, executed via Zapier's iPaaS service.

Whether it's skills for voice, intents for bots, or triggers for iPaaS, an API is delivering the data, content, or algorithmic response required for these interactions. I've been pushing for API providers to be iPaaS ready, working with providers like Zapier, for some time. I predict you'll find me showcasing examples of API providers sharing their voice and bot integration solutions, just like Clearbit has with their iPaaS solutions, in the future.

I would say that even before API providers think about the Internet of Things, they should be thinking more deeply about iPaaS, voice, and bots. Not all these areas will be relevant, or valuable, to your API operations, but they should be considered. If you have the resources, they might provide you with some interesting ways to make your API more accessible to non-developers--as Clearbit notes in the opening of their blog post.

When it comes to skills, intents, and iPaaS workflows, I am thinking we are going to have to be more willing to share our definitions (broken record), like we see Meya doing with their Bot Flow Markup Language (BFML) in YAML. I will have to do some more digging to see how Amazon is working to make Alexa Skills more shareable and reusable, as well as take another look at the Zapier API to understand what is possible--I took a look at it back in the spring, but will need a refresher.

While the world of voice and bots API integration seems to be moving pretty fast, I predict it will play out much like the iPaaS world has, and take years to evolve, and stabilize. I'm still skeptical about the actual adoption of voice and bots, and it all living up to the hype, but when it comes to iPaaS I'm super hopeful about the benefits to actual humans--maybe if we consider all of these channels together, we can leverage them all equally as common tools in our API integration toolbox.


An Opportunity For A RESTful API Layer On Top Of New TensorFlow Models

I was looking at the open source models available for execution via the machine learning platform TensorFlow, and couldn't help but think there is a pretty big opportunity for a web API layer on top of it. After a little Googling, I see there is someone asking on Stack Overflow, a Google Groups thread, and a student project looking to tackle the need. Maybe there are some other projects already in the works, but I couldn't find anything with 10 minutes of Googling (mad skills).

Google has twelve pretty compelling machine learning models available on Github:

  • autoencoder -- various autoencoders
  • inception -- deep convolutional networks for computer vision
  • namignizer -- recognize and generate names
  • neural_gpu -- highly parallel neural computer
  • privacy -- privacy-preserving student models from multiple teachers
  • resnet -- deep and wide residual networks
  • slim -- image classification models in TF-Slim
  • swivel -- the Swivel algorithm for generating word embeddings
  • syntaxnet -- neural models of natural language syntax
  • textsum -- sequence-to-sequence with attention model for text summarization.
  • transformer -- spatial transformer network, which allows the spatial manipulation of data within the network
  • im2txt -- image-to-text neural network for image captioning.

That would make a pretty stellar machine learning API stack, with a simple, intuitive, RESTful wrapper. Once done, it seems like there would also be a pretty big opportunity for containerized deployment of these machine learning APIs, on a wholesale basis. I'm still not sure how the whole open source code to commercial API implementation model will work, but I'm sure there is some money to be made in there somewhere--at least when it comes to implementation and support.
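Here is a rough sketch of the web API layer I am describing--a thin Flask wrapper around one of these models, with the predict() call standing in for however a given TensorFlow model actually gets loaded and invoked:

```python
# A rough sketch of a web API layer on top of a TensorFlow model. The
# predict() function is a placeholder, not the actual model invocation.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(text):
    # Placeholder: load the textsum (or other) model and run inference here.
    return {"summary": text[:100]}

@app.route("/textsum", methods=["POST"])
def textsum():
    payload = request.get_json(force=True)
    return jsonify(predict(payload.get("text", "")))

if __name__ == "__main__":
    app.run(port=8080)
```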

I will add this to the list of open source software I'd like to see have an accompanying web API, as well as containerized, or even serverless, implementations. It makes me happy that Google is helping commoditize machine learning by open sourcing their tools, but I'd also like to see them further simplified and polished for consumption by a wider developer, or even non-developer, audience, using web APIs.


We Focus On Interacting With The API Developer Community Where They Live

Another story I harvested from a post by Gordon Wintrob (@gwintrob) about how Twilio's distributed team solves developer evangelism was about how they invest in having a distributed team, providing an on the ground presence in the top cities they are looking to reach. I know this isn't something all API providers can afford, but I still think it is an important approach worth noting.

Like with many other aspects of Twilio's approach, they are pretty genuine about why they invest in a distributed API evangelism team:

We also focus on interacting with the developer community where we actually live. We don’t think it’s valuable to parachute into a tech community, do an event, and then leave. We need to participate in that community and make a real impact. 

I wish there was a way that smaller API providers could deliver like this. I wish we all had the resources of Twilio, but in reality, most API providers won't even be able to "parachute into a tech community", let alone have a dedicated presence there. I've seen several attempts like this fail before, so I am hesitant to say it, but I can't help but think there is an opportunity for evangelists in certain cities.

There isn't any startup potential here (let me make that clear), but I think there is an opportunity for developer advocates, evangelists, and would-be evangelists to band together, network, and offer up services to API providers. All you'd have to do is take a page from the Twilio playbook and execute in a decentralized way--where multiple evangelists could work together as a co-op. The trick is to bring together evangelists who actually give a shit about the space--something that would be very difficult to accomplish.

Anyways, just some more thoughts from my API notebook, inspired by Gordon's post. If nothing else, Twilio's approach should help guide other larger API providers, showing how important it is to invest in developers, in-person at the local level. The value brought to the table via Twilio's APIs has been key to their success, but I can't help but think a significant portion of their success has been the result of their investment at the local level.


Thinking About How I Can Build Change Resilience Into My API Integrations

After I wrote a piece on guidance from the USGS around writing fault-resistant code when putting their API to use, my friend Darrel Miller expanded on this by suggesting I include "change resilience" as part of the definition.

It is something that has sat in my notebook for a couple of weeks, and keeps floating up as a concept I'd like to explore further. I have some initial thoughts on what this means, but it is something that I need to write about before I can grasp it better. Hopefully, it will bring out more suggestions about what change resilient code means to other people.

Ok, so off the top of my head, here are the elements I would consider when thinking about producing change resilient client code (a rough sketch follows the list):

  • Status Codes - Making sure clients read, and pay attention to, the HTTP status codes used by API providers.
  • Hypermedia - Links are fragile, and avoiding baking them into clients makes a whole lotta sense.
  • Plan B API - Have a backup API identified that can be used when the plan A API provider goes away.
  • Circuit Breaker - Build a circuit breaker into code that responds to specific status codes and events.
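Here is the rough sketch I promised, pulling a few of these elements together--pay attention to status codes, fall back to a plan B provider, and trip a simple circuit breaker after repeated failures. Both base URLs are hypothetical:

```python
# A rough sketch of change resilient client code: check status codes, fall
# back to a plan B API, and open a simple circuit breaker after repeated
# failures. Both base URLs are hypothetical placeholders.
import requests

PLAN_A = "https://api.primary-provider.com/resource"
PLAN_B = "https://api.backup-provider.com/resource"

failures = 0
CIRCUIT_TRIPPED_AT = 5

def resilient_get():
    global failures
    if failures >= CIRCUIT_TRIPPED_AT:
        raise RuntimeError("circuit breaker open--stop hammering the API")

    # Try plan A first, then fall back to plan B.
    for url in (PLAN_A, PLAN_B):
        try:
            response = requests.get(url, timeout=5)
            if response.status_code == 200:
                failures = 0
                return response.json()
        except requests.RequestException:
            pass

    failures += 1
    raise RuntimeError("both plan A and plan B failed")
```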

Now that I'm exploring, I have to ask, whose responsibility is it to build change resilience into the clients? Provider or consumer? Seems like there is a healthy responsibility on both parties? IDK. I guess we should all just be honest about how fragile the API space is, and providers should be honest with consumers when it comes to thinking about change resiliency, but ultimately API consumers have to begin thinking more deeply, and investing more, when it comes to planning for change--not just freaking out when it happens.

I have to admit that the code I have written as part of my API monitoring system, which integrates with over 30 APIs, isn't very fault or change resistant. When things break, they break. As the only user, this isn't a showstopper for me, but thinking about change is something I'm going to be considering as I kick the tires on my client. While these APIs have been incredibly stable for me, I can't help but listen to Darrel and want to be asking more questions when it comes to dealing with change across my API integrations.


The Bot Platform That Operates Like Alexa Will Win

I'm going through Amazon's approach to their Alexa voice services, and it is making me think about how bot platforms out there should be following their lead when it comes to crafting their own playbooks. I see voice and bots in the same way that I see web and mobile--they are just one possible integration channel for APIs. They each have their own nuances of course, but as I go through Amazon's approach, there are quite a few lessons here on how to do it correctly--lessons that apply to bots.

Amazon's approach to investment in developers on the Alexa platform and their approach to skills development should be replicated across the bot space. I know Slack has an investment fund, but I don't see the skills development portion present in their ecosystem. Maybe it's in there, but it's not as prominent as Amazon's approach. Someday, I envision galleries of specific voice and bot skills like we have application galleries today--the usefulness and modularity of these skills will be central to each provider's success (or failure).

I had profiled Slack's approach before I left for the summer, something I will need to update as it stands today. I will keep working on profiling Amazon's approach to Alexa, and put together both as potential playbook(s). I would like to eventually be able to lay them side by side and craft a common definition that could be applied in both the voice API, as well as the bot API sector. I need to spend more time looking at the bot landscape, but currently I'm feeling like any bot platform that can emulate Amazon's approach is going to win at this game--like Amazon is doing with voice.


Learning About OPC, The Interoperability Standard For Industrial Automation

I am spending a portion of my time each week learning about how APIs are being applied at the industrial level. An example of this can be found over at Opto 22, with their approach to using REST across their Programmable Automation Controllers (PAC). As I do with other industries I spend my time looking through the approaches of API pioneers in the space, which leads me to other contributing factors to why web APIs are being used to change how things are done in any industry.

For now, my industrial API research is a pretty big umbrella, encompassing oil & gas, manufacturing, and often moving into other areas I'm already tracking, like agriculture and energy. This approach allows me to identify companies who are leading the charge (like Opto 22), as well as specifications, tools, and other elements that are contributing to the evolution of APIs in each area--in this case, the broadly industrial usage of web APIs.

In my research of industrial APIs I have come across the OPC standard, which was originally known as Object Linking and Embedding for Process Control, and which is defined as:

OPC is the interoperability standard for the secure and reliable exchange of data in the industrial automation space and in other industries. The OPC standard is a series of specifications developed by industry vendors, end-users and software developers. These specifications define the interface between Clients and Servers, as well as Servers and Servers, including access to real-time data, monitoring of alarms and events, access to historical data and other applications.

I'm still getting going with the world of industrial automation, but I am looking through the OPC Unified Architecture to see if I can find any common definitions and schemas that could apply to industrial API design. I don't have any sense of how open these standards bodies are with their specifications, and I don't want to end up like Carl Malamud, but I do want to help identify and encourage common patterns in use for industrial automation.

Many consumer and B2B API implementations don't get me that interested, but I find the usage of them at the industrial level often more compelling, prompting me to add companies like Rockwell and Opto 22 to my industrial research. I'm adding the OPC standard as well, and will keep working to learn which other companies are doing interesting things with industrial APIs, and the standards that are guiding, or I guess possibly hindering expansion in the usage of web APIs across the industrial landscape.


Every Government Agency Should Have An FAQ API Like The DOL

I wrote about my feeling that all government agencies should have a forms API like the Department of Labor (DOL), and I wanted to separately showcase their FAQ API, and say the same thing--ALL government agencies should have a frequently asked question (FAQ) API. Think about the websites and mobile applications that would benefit from ALL government agencies at the federal, state, and local level having frequently asked questions available in this way--it would be huge.

In a perfect world, like any good API provider, government agencies should also use their FAQ API to run their website(s), mobile, and internal systems--this way the results are always fresh, up to date, and answering the relevant questions (hopefully). I get folks in government questioning the opening up of sensitive information via APIs, but making FAQs available in a machine readable way, via the web, just makes sense in a digital world.

Like the forms API, I will be looking across other government agencies for any FAQ APIs, and I will be crafting an OpenAPI Spec for the DOL FAQ API (man, that is a lot of acronyms). I will take any other FAQ APIs that I find and consider any additional parameters and definitions I might want to include in a common FAQ API definition for government agencies. This is another area that should have not just a common open API definition and underlying schemas, but also a wealth of server and client side code--so any government agency can immediately put it to work in any environment.
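This is not the DOL's actual API, but here is a sketch of the kind of common FAQ API definition I have in mind--simple enough for any agency to stand up, and for any website or mobile application to consume:

```python
# A hypothetical sketch of a common government FAQ API--list, search, and
# retrieve by id. The sample records and fields are made up for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

FAQS = [
    {"id": 1, "topic": "wages", "question": "What is the federal minimum wage?", "answer": "..."},
    {"id": 2, "topic": "safety", "question": "How do I report a workplace hazard?", "answer": "..."},
]

@app.route("/faqs")
def list_faqs():
    # Optional ?q= keyword search across the questions.
    query = request.args.get("q", "").lower()
    results = [f for f in FAQS if query in f["question"].lower()] if query else FAQS
    return jsonify(results)

@app.route("/faqs/<int:faq_id>")
def get_faq(faq_id):
    match = next((f for f in FAQS if f["id"] == faq_id), None)
    return (jsonify(match), 200) if match else (jsonify({"error": "not found"}), 404)
```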


Thanks For Reaching Out About Your API

I get a number of folks emailing me about their API and API-focused services. When I have the bandwidth I spend time in my inbox and respond to these emails. To help me do this a little more efficiently (I'm not always very quick about it), I'm formalizing some snippets I can use in my response(s). I want to thank them for reaching out, while also helping them understand my approach to successfully operating API Evangelist.

Here is one basic email I crafted today, in response to a pretty slick API provider that I will be writing about shortly:

Hi There,

I received your email. Thanks for the kind words. Appreciate you introducing me to your [API / API related service]. I'm going to have to pass on the posting of the [guest post, infographic, white paper, case study, etc] to apievangelist.com, but I'm happy to keep an eye on what you are up to as part of my regular work.

I visited your site and see that you have a blog (with feed), Twitter, and a Github account. These are the channels I'll be keeping an eye on, and when you post a blog post or press release, Tweet something out, or I see a Github repo or commit of interest, I'll definitely include it in my research, and craft a story for the blog.

I have also added your company, blog, feed, twitter, and Github accounts to my monitoring system. Keep on doing interesting things with APIs and I'll make sure it becomes part of my storytelling in the space.

So far, in reviewing your web site and developer portal, your API efforts [look pretty polished / could use some work / are not very modern]. I'll keep digging around and publish anything interesting that I find.

Thanks!

Kin Lane
@kinlane

This is a basic template I will use moving forward. I'll tweak it some for each response, but ultimately I am trying to keep things consistent with the folks who are emailing me. I'm not trying to be less personal, but as I work to scale API Evangelist, and keep things operating as smoothly as they have been since I returned, I need to automate things a bit.

This approach also helps encourage API providers to standardize how they interface with the world, and helps underline that having a blog, feed, Twitter, and Github accounts makes sense in 2016. I can pay attention to more companies this way, and companies should be able to more successfully communicate with their developers, the general public, and analysts like me with this approach.


Github Needs Client OAuth Proxy For More Complete Client-Side Apps On Pages

I'm building what I am calling "micro tools" that run 100% on Github. To push my work forward I developed a base template I can use for deploying apps that run 100% on Github, using Github Pages, the Github API, and Github OAuth as the engine. As a next step I wanted to develop a simple YAML editor that runs on Github, allowing me to edit the YAML core of each tool, which is stored in the _data folder for each Jekyll site I host on Github Pages.

The key to all of this working securely is Github personal access tokens, which every Github user has in their account under settings. I have employed this approach to running apps on Github Pages before using OAuth.io as the broker, something that works very well, and I highly recommend it. I have also run them using my own Github OAuth proxy, where I had server side code that would do the OAuth dance for me when authenticating via these apps. The problem is I want them to run 100% on Github, and be forkable by anyone, leaving personal access tokens as my only option.

What would really rock is if Github provided us with a solution for client-side authentication via the Github API. We can already accomplish the whole thing, we just need Github to offer the same functionality that OAuth.io does--heck, I recommend they just buy them and implement it. An increasing number of API providers are managing their API operations on Github. From API portals, to documentation and SDKs--they are using Github and Github Pages to take care of business. So having Github OAuth, plus authentication via other providers, would be a huge benefit.

Additionally, it would open up Github Pages to be more than just static project pages--they could become little mini apps, or micro tools as I call them. Forking one of my micro tools, then finding your personal access tokens is not that high of a bar, but it would be much nicer if I could just provide them a Github icon, and I could route them through a secure Github OAuth proxy, all without any outside infrastructure. Just a thought Github. Some ramblings about what I'd like to see. For now, I'll rely on the personal access tokens, that is until Github decides to provide us with some sort of OAuth proxy for client-side apps to operate on Github Pages.
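For anyone wondering what the personal access token mechanism looks like, here is a minimal sketch shown from Python for clarity--my micro tools make the same call from browser JavaScript on Github Pages, and the token value is obviously a placeholder:

```python
# A minimal sketch of authenticating to the Github API with a personal access
# token. The token value is a placeholder--never commit a real one to a repo.
import requests

TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder

response = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": f"token {TOKEN}"},
)
print(response.json().get("login"))
```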


Every Government Agency Should Have A Forms API Like DOL Does

I was taking another look at the API efforts out of the Department of Labor (DOL), to help refresh my awareness of what they are serving up, and I came across the DOL Forms API. The API does what it says, providing access to "the most frequently requested Department of Labor forms", which seems to me like it should be the default for ALL government agencies.

The API returns some valuable details about each form, including the OMB number, URL, file extension, file size, and other meta information like a description, tags, and revision. I know that many in the API community would like all forms to be APIs, but I would be happy if we just started by making the concept of a forms API the default across all government agencies first.

Before I dig into this individual API, I'm thinking that I will craft an OpenAPI Spec for the DOL Forms API, and see if there are any other form APIs available across US federal agencies that I should be considering. With a little work maybe I can merge them into a single open API definition that any government agency can follow, when thinking about which APIs they should be making available.

