The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics, from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Every API Should Begin With A Github Repository

I’m working on my API definition and design strategy for my human services API work, and as I was doing this, Box went all in on OpenAPI, adding to the number of API providers I track on who not only have an OpenAPI, but also use Github as the core of how they manage their API definition.

Part of my API definition and design advice for human service API providers, and the vendors who sell software to them, is that they have an OpenAPI and JSON Schema defined for their API, and share this either publicly or privately using a Github repository. When I evaluate a new vendor or service provider as part of the Human Services Data API (HSDA) specification, I’m beginning to require that they share their API definition and schema using Github–if you don’t have one, I’ll create it for you. Having a machine-readable definition of the surface area of an API, and the underlying schema, in a Github repo I can check out, commit to, and access via an API is essential.

Every API should begin with a Github repository in my opinion, where you can share the API definition, documentation, and schema, and have a conversation around these machine-readable docs using Github issues. Approaching your API in this way doesn’t just make it easier to find when it comes to API discovery, but it also makes your API contract available at all steps of the API design lifecycle, from design to deprecation.
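
To make this a bit more concrete, here is a rough sketch of what a minimal starter repository might contain for a new API. The file and folder names are just illustrative, not a standard anyone has agreed on:

    api-repository/
      README.md       - overview of the API, and where the conversation starts
      openapi.yaml    - the machine-readable definition of the API surface area
      schema/         - JSON Schema for the underlying data
      docs/           - documentation generated from the definition
      _data/          - sample data used in demos and testing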


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do, my friend Arnaud always adds it to his API Stylebook, adding to the wealth of information available in his work. I’m happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing with the wider API community the best practices in play.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate just how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I’m glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, that developer should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


Spreadsheet To Github For Sample Data CI

I’m needing data for use in human service API implementations. I need sample organizations, locations, and services to round off implementations, making it easier to understand what is possible with an API, when you are playing with one of my demos.

There are a number of features that require there to be data in these systems, and a demo is always more convincing when it has intuitive, recognizable entries, not just test names or Latin filler text. I need a variety of samples, in many different categories, with complete phone, address, and other specific data points. I also need this across many different APIs, and ideally on demand, when I set up a new demo instance of the human services API.

To accomplish this I wanted to keep things as simple as I can so that non-developer stakeholders could get involved, so I set up a Google spreadsheet with a tab for each type of test data I needed–in this case, it was organizations and locations. Then I created a Github repository, with a Github Pages front-end. After making the spreadsheet public, I pull each worksheet using JavaScript, and write to the Github repository as YAML, using the Github API.

It is kind of a poor man’s way of creating test data, then publishing it to Github for use in a variety of continuous integration workflows. I can maintain a rich folder of test data sets for a variety of use cases in spreadsheets, and even invite other folks to help me create and manage the data stored in the spreadsheets. Then I can publish to a variety of Github repositories as YAML, and integrate it into any workflow, loading test data sets into new APIs, and existing APIs, as part of testing, monitoring, or even just to make an API seem convincing.

To support my work I have a spreadsheet published, and two scripts, one for pulling organizations, and the other for pulling locations–both of which publish YAML to the _data folder in the repository. I’ll keep playing with ways of publishing test data for use across my projects. With each addition, I will try and add a story to this research, to help others understand how it all works. I am hoping that I will eventually develop a pretty robust set of tools for working with test data in APIs, as part of a test data continuous publishing and integration cycle.
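
To give a sense of the moving parts, here is a minimal sketch of the publish step I describe above, written as a small Node.js script rather than the browser JavaScript I actually run on the Github Pages front-end. The js-yaml and @octokit/rest packages, the repository details, and the idea that the worksheet rows have already been pulled into plain objects are all assumptions for the sake of illustration:

    // Sketch of publishing a worksheet of sample data as YAML to a Github
    // repository's _data folder. Owner, repo, and token handling are placeholders.
    const yaml = require("js-yaml");
    const { Octokit } = require("@octokit/rest");

    async function publishWorksheetAsYaml(rows, path) {
      const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
      const content = Buffer.from(yaml.dump(rows)).toString("base64");

      // If the file already exists we need its SHA to update it in place.
      let sha;
      try {
        const existing = await octokit.repos.getContent({
          owner: "example-owner",
          repo: "sample-data",
          path,
        });
        sha = existing.data.sha;
      } catch (error) {
        sha = undefined; // file does not exist yet, so this will be a create
      }

      // Create or update the YAML file using the Github API.
      await octokit.repos.createOrUpdateFileContents({
        owner: "example-owner",
        repo: "sample-data",
        path, // e.g. "_data/organizations.yaml"
        message: "Sync sample data from the Google Sheet",
        content,
        sha,
      });
    }

    // Usage: publishWorksheetAsYaml(organizationRows, "_data/organizations.yaml");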


How Twitter Handles Sorting For Their API

I was looking into some of the common approaches by API providers for sorting data in API responses. I’m not in the business of finding the right answer; I am in the business of finding successful examples from APIs (brands) that people are familiar with–I thought Twitter’s page in their API documentation dedicated to sorting was worth noting.

When you craft your Twitter API request, you just append sort_by=[attribute name]-[asc/desc], where the attribute is a valid attribute returned in the JSON of your GET request. An example of this is using ?sort_by=name-asc to sort by name alphabetically, or ?sort_by=name-desc to sort in reverse. It is a pretty basic approach that API providers can consider when designing sort functionality in their own APIs.
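
To illustrate the pattern, here is a small sketch of how an API backend might apply a sort_by=[attribute name]-[asc/desc] value to a set of records before returning them. This is my own illustration of the convention, not Twitter’s actual implementation:

    // Apply a sort_by value like "name-asc" or "created_at-desc" to an array
    // of records. Illustrative sketch of the pattern only.
    function applySortBy(records, sortBy) {
      if (!sortBy) return records;
      const [attribute, direction] = sortBy.split("-");
      const sorted = [...records].sort((a, b) => {
        if (a[attribute] < b[attribute]) return -1;
        if (a[attribute] > b[attribute]) return 1;
        return 0;
      });
      return direction === "desc" ? sorted.reverse() : sorted;
    }

    // Example: applySortBy(locations, "name-asc");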

I’ll be documenting all the approaches I find from known providers, developing a catalog of options for a variety of use cases. I’ll also spend some time looking at how GraphQL tackles the problem, providing a much more holistic approach to managing data using APIs. When I go through my API design checklist each round, I like adding a variety of diverse tooling to it, based upon examples I find from the strongest API providers. Healthy diversity in your API toolbox will be increasingly important to assist in tackling a variety of data, content, or algorithmic challenges.


Considering Using HTTP Prefer Header Instead Of Field Filtering For This API

I am working my way through a variety of API design considerations for the Human Services Data API (HSDA) that I’m working on with Open Referral. I was working through my thoughts on how I wanted to approach the filtering of the underlying data schema of the API, and Shelby Switzer (@switzerly) suggested I follow Irakli Nadareishvili’s advice and consider using RFC 7240, the Prefer Header for HTTP, instead of some of the commonly seen approaches to filtering which fields are returned in an API response.

I find this approach to be of interest for this Human Services Data API implementation because I want to lean on API design, over providing parameters for consumers to dial in the query they are looking for. While I’m not opposed to going down the route of providing a more parameter-based approach to defining API responses, in the beginning I want to carefully craft endpoints for specific use cases, and I think the usage of the HTTP Prefer Header helps extend this to the schema, allowing me to craft simple, full, or specialized representations of the schema for a variety of potential use cases (i.e. mobile, voice, bot).

It adds a new dimension to API design for me. Since I’ve been using OpenAPI I’ve gotten better at considering the schema alongside the surface area of the APIs I design, showing how it is used in the request and response structure of my APIs. I like the idea of providing tailored schema in responses over allowing consumers to dynamically filter the schema that is returned using request parameters. At some point, I can see embracing a GraphQL approach to this, but I don’t think that human service data stewards will always know what they want, and we need to invest in a thoughtful set of design patterns that reflect exactly the schema they will need.

Early on in this effort, I like allowing API consumers to request minimal, standard or full schema for human service organizations, locations, and services, using the Prefer header, over adding another set of parameters that filter the fields–it reduces the cognitive load for them in the beginning. Before I introduce any more parameters to the surface area, I want to better understand some of the other aspects of taxonomy and metadata proposed as part of HSDS. At this point, I’m just learning about the Prefer header, and suggesting it as a possible solution for allowing human services API consumers to have more control over the schema that is returned to them, without too much overhead.
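
As a rough sketch of what this looks like in practice, a consumer would send the Prefer header with their request, and the API would branch on that preference when building the response. The return=minimal and return=representation preferences are registered in RFC 7240, while the field sets below are purely illustrative, and not the final HSDS schema:

    // Illustrative sketch: choose which organization fields to return based
    // on the Prefer header, defaulting to a standard representation.
    function selectRepresentation(preferHeader, organization) {
      const minimalFields = ["id", "name"];
      const standardFields = ["id", "name", "description", "url"];

      if (preferHeader && preferHeader.includes("return=minimal")) {
        return pick(organization, minimalFields);
      }
      if (preferHeader && preferHeader.includes("return=representation")) {
        return organization; // the full schema
      }
      return pick(organization, standardFields);
    }

    function pick(record, fields) {
      const result = {};
      for (const field of fields) {
        if (field in record) result[field] = record[field];
      }
      return result;
    }

    // A consumer would send something like:
    //   GET /organizations/123
    //   Prefer: return=minimal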


My API Design Checklist For This Version Of The Human Services Data API

I am going through my API design checklist for the Human Services Data API work I am doing. I’m trying to make sure I’m not forgetting anything before I propose a v1.1 OpenAPI draft, so I pulled together a simple checklist I wanted to share with other stakeholders, and hopefully also help keep me focused.

First, to support my API design work I got to work on these areas for defining the HSDS schema and the HSDA definition:

  • JSON Schema - I generated a JSON Schema from the HSDS documentation.
  • OpenAPI - I crafted an OpenAPI for the API, generating GET, POST, PUT, and DELETE methods for 100% of the schema, and reflecting its use in the API request and response.
  • Github Repo - I published it all in a Github repository for sharing with stakeholders, and programmatic usage across any tooling and applications being developed.
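
To give a sense of what that JSON Schema work produces, here is a hand-trimmed sketch of the kind of definition I am talking about for an organization, expressed as a JavaScript object. The handful of properties shown are illustrative, and not the complete HSDS definition:

    // A trimmed, illustrative slice of a JSON Schema for an organization,
    // not the full HSDS schema.
    const organizationSchema = {
      $schema: "http://json-schema.org/draft-04/schema#",
      title: "organization",
      type: "object",
      properties: {
        id: { type: "string", description: "Unique identifier for the organization" },
        name: { type: "string", description: "Official or public name" },
        description: { type: "string", description: "Brief summary of what the organization does" },
        email: { type: "string", description: "Primary contact email" },
        url: { type: "string", description: "Website for the organization" },
      },
      required: ["id", "name", "description"],
    };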

Then I reviewed the core elements of my API design to make sure I had everything I wanted to cover in this cycle, with the resources we have:

  • Domain(s) - Right now I’m going with api.example.com, and developer.example.com for the portal.
  • Versioning - I know many of my friends are gonna give me grief, but I’m putting versioning in the URL, keeping things front and center, and in alignment with the versioning of the schema.
  • Paths - Really not much to consider here as the paths are derived from the schema definitions, providing a pretty simple and intuitive design for paths–I will continue adding guidance for future APIs.
  • Verbs - A major part of this release was making sure 100% of the surface area of the HSDS schema has the ability to POST, PUT, and DELETE, as well as just GET a response. I’m not addressing PATCH in this cycle, but it is on the roadmap.
  • Parameters - There are only a handful of query parameters present in the primary paths (organizations, locations, services), and a robust set for use in /search. Other than that, everything is mostly defined through path parameters, keeping things cleanly separated between path and query.
  • Headers - I’m only using headers for authentication. I’m also considering using the HTTP Prefer Header for schema filtering, but nothing else currently.
  • Actions - Nothing to do here either, as the API is pretty CRUD at this point, and I’m awaiting more community feedback before I add any more detailed actions beyond what is possible with the default verbs–when relevant I will add guidance to this area of the design.
  • Body - All POST and PUT methods use the body for request transport. There are no other uses of the body across the current design.
  • Pagination - I am just going with what is currently in place as part of v1.0 for the API, which uses page and per_page for handling this.
  • Data Filtering - The core resources (organizations, locations, and services) all have a query parameter for filtering data, and the search path has a set of parameters for filtering the data returned in responses. Not adding anything new for this version–there is a rough sketch of this, along with pagination, after this checklist.
  • Schema Filtering - I am taking Irakli Nadareishvili’s advice and going to go with RFC 7240 - Prefer Header for HTTP, and craft some different representations when it comes to filtering the schema that is returned.
  • Sorting - There is no sorting currently. I did some research in this area, but not going to make any recommendations until I hear more requests from consumers, and the community.
  • Operation ID - I went with camelCase for all API operation IDs, providing a unique reference to be included in the OpenAPI.
  • Requirements - Going through and making sure all the required fields are reflected in the definitions for the OpenAPI.
  • Status Codes - Currently I’m going to just reflect the 200 HTTP status code. I don’t want to overwhelm folks with this release and I would like to accumulate more resources so I can invest in a proper HTTP status code strategy.
  • Error Responses - Along with the status code work I will define a core set of definitions to be used across a variety of responses and HTTP statuses.
  • Media Types - While not a requirement, I would like to encourage implementors to offer four default media types: application/json, application/xml, text/csv, and text/html.
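
As a rough illustration of a couple of these decisions, the pagination with page and per_page, and the simple query parameter for data filtering, this is the kind of logic I have in mind behind a path like /organizations. It is a sketch, not the actual HSDA implementation, and the default page size is just a placeholder:

    // Illustrative sketch of handling pagination and data filtering for
    // GET /organizations -- not the actual HSDA code.
    function listOrganizations(queryParams, organizations) {
      const page = parseInt(queryParams.page, 10) || 1; // ?page=2
      const perPage = parseInt(queryParams.per_page, 10) || 25; // ?per_page=25

      let results = organizations;
      if (queryParams.query) {
        const q = queryParams.query.toLowerCase();
        results = results.filter((org) => org.name.toLowerCase().includes(q));
      }

      const start = (page - 1) * perPage;
      return {
        page: page,
        per_page: perPage,
        total: results.length,
        organizations: results.slice(start, start + perPage),
      };
    }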

After being down in the weeds I wanted to step back and just think about some of the common sense aspects of API design:

  • Granularity - I think the API provides a very granular approach to getting at the HSDS schema. If I just want a phone number for a location, and I know its location id I can get it. It’s CRUD, but it’s a very modular CRUD that reflects the schema.
  • Simplicity - I worked hard to keep things as simple as possible, not running away with adding dimensions to the data, or adding on the complexity of the taxonomy that will come with future iterations and some of the more operational level APIs that are present in the current metadata portion of the schema.
  • Readability - While lengthy, the design of the API is readable and scannable. Maybe I’m biased, but I think the documentation flows, and anyone can read it and get an idea of the possibilities with the human services API.
  • Relationships - There really isn’t much sophistication in the relationships present in the HSDA. Organizations, locations, and services are related, but you need to construct your own paths to navigate these relationships. I intentionally kept the schema flat, as this is a minor release. Hypermedia and other design patterns are being considered for future releases–this is meant to be a basic API to get at the entire HSDS schema.

I have a functioning demo of this v1.1 release, reflecting most of the design decisions I listed above. While not a complete API design list, it provides me with a simple checklist to apply to this iteration of the Human Services Data API (HSDA). Since this design is more specification than actual implementation, this API design checklist can also act as guidance for vendors and practitioners when designing their own APIs beyond the HSDS schema.

Next, I’m going to tackle some of the API management, portal, and other aspects of operating a human services API. I’m looking to push my prototype to be a living blueprint for providers to go from schema and OpenAPI to a fully functioning API with monitoring, documentation, and everything else you will need. The schema and OpenAPI are just the seeds to be used at every step of a human services API life cycle.


Thinking About The Privacy And Security Of Public Data Using API Management

When I suggest modern approaches to API management be applied to public data I always get a few open data folks who push back saying that public data shouldn’t be locked up, and needs to always be publicly available–as the open data gods intended. I get it, and I agree that public data should be easily accessible, but there are increasingly a number of unintended consequences that data stewards need to consider before they publish public data to the web in 2017.

I’m going through this exercise with my recommendations and guidance for municipal 211 operators when it comes to implementing Open Referral’s Human Services Data API (HSDA). The schema and API definition centers around the storage and access to organizations, locations, services, contacts, and other key data for human services offered in any city–things like mental health resources, suicide assistance, food banks, and other things we humans need on a day to day basis.

This data should be publicly available, and easy to access. We want people to find the resources they need at the local level–this is the mission. However, once you get to know the data, you start understanding the importance of not everything being 100% public by default, when you come across listings for Muslim faith and LGBTQ services, or possibly domestic violence shelters and needle exchanges. There are numerous types of listings where we need to be having sincere discussions around security and privacy concerns, and possibly think twice about publishing all or part of a dataset.

This is where modern approaches to API management can lend a hand, where we can design specific endpoints that pull specialized information for specific groups of people, and define who has access through API keys and rate limiting. Right now my HSDA implementation has two access groups, public and private. Every GET path is publicly available, and if you want to POST, PUT, or DELETE data you will need an API key. As I consider my API management guidance for implementors, I’m adding a healthy dose of the how and why of privacy and security using existing API management solutions and practices.
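
A simplified sketch of that access rule, every GET stays public, while any write requires a key, might look something like this in a middleware layer. The header name and the way keys are stored are placeholders, not my actual implementation:

    // Sketch of the public/private split: reads are open, writes need a key.
    // validApiKeys is a stand-in for however keys are actually managed.
    const validApiKeys = new Set(["example-key-one", "example-key-two"]);

    function checkAccess(request) {
      const writeMethods = ["POST", "PUT", "DELETE"];

      if (!writeMethods.includes(request.method)) {
        return { allowed: true }; // all GET paths remain publicly available
      }

      const apiKey = request.headers["x-api-key"]; // illustrative header name
      if (apiKey && validApiKeys.has(apiKey)) {
        return { allowed: true };
      }
      return { allowed: false, status: 401, message: "API key required to write data" };
    }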

I am not interested in playing decider when it comes to what data is public, private, and requires approval before getting access. I’m merely thinking about how API management can be applied in the name of privacy and security when it comes to public data, and how I can put tools in the hands of data stewards and API providers that help them make the decision about what is public, private, and more tightly controlled. The trick with all of this is how transparent providers should be with the limits and restrictions they impose, and how they communicate the official stance with all stakeholders when it comes to public data privacy and security.


Avoid Moving Too Fast For My API Audience

I am stepping back today and thinking about a pretty long list of API design considerations for the Human Services Data API (HSDA), providing guidance for municipal 211 operators who are implementing an API. I’m making simple API design decisions, from how I define query parameters all the way to hypermedia decisions for version 2.0 of the HSDA API.

There are a ton of things I want to do with this API design. I really want folks involved with municipal 211 operations to be adopting it, helping ensure their operations are interoperable, and I can help incentivize developers to build some interesting applications. As I think through the laundry list of things I want, I keep coming back to my audience of practitioners, you know the people on the ground with 211 operations that I want to adopt an API way of doing things.

My target audience isn’t steeped in API. They are regular folks trying to get things done on a daily basis. This move from v1.0 to v1.1 is incremental. It is not about anything big. Primarily this move was to make sure the API reflected 100% of the surface area of the Human Services Data Specification (HSDS), keeping in sync with the schema’s move from v1.0 to v1.1, and not much more. I need to onboard folks with the concept of HSDS, and what access looks like using the API–I do not have much more bandwidth to do much else.

I want to avoid moving too fast for my API audience. I can see version 2, 3, even 4 in my head as the API architect and designer, but am I thinking of me, or my potential consumers? I’m going to seize this opportunity to educate my target audience about APIs using the road map for the API specification. I have a huge amount of education to do with 211 operators, as well as the existing vendors who sell them software, when it comes to APIs. This stepping back has pulled in the reins on my API design strategy, encouraging me to keep things as simple as possible for this iteration, and helping me think more thoughtfully about what I will be releasing as part of each incremental, as well as major, release the future holds.


Some Thoughts On OpenAPI Not Being The Solution

I get regular waves of folks who chime in anytime I push on one of the hot-button topics on my site, like hypermedia and OpenAPI. I have a couple of messages in my inbox regarding some stories I’ve done about OpenAPI recently, how it isn’t sustainable, and how we should be putting hypermedia practices to work. I’m still working on my responses, but I wanted to think through some of my thoughts here on the blog before I respond–I like to simmer on these things, releasing the emotional exhaust before I respond.

When it comes to the arguments from the hypermedia folks, the short answer is that I agree. I think many of the APIs I’m seeing designed using OpenAPI would benefit from some hypermedia patterns. However, there is such a big gap between where people are, and where we need to be for hypermedia to become the default, and getting people there is easier said than done. I view OpenAPI as a scaffolding or bridge to transport API designers, developers, and architects from where we are, to where we need to be–at scale.

I wish I could snap my fingers and everyone understood how the web works and understood the pros and cons of each of the leading hypermedia types. Many developers do not even understand how the web works. Hell, I’m still learning new things every day, and I’ve been doing this full time for seven years. Most developers still do not even properly include and use HTTP status codes in their simple API designs, let alone understand the intricate relationship possibilities between their resources, and the clients that will be consuming them. Think about it, as a developer, I don’t even have the time, budget, or care to articulate the details of why a response failed, and you expect that I have the time, budget, or care for link relations, and the evolution of the clients built on top of my APIs? Really?

I get it. You want everyone to see the world like you do. You are seeing us headed down a bad road, and want to do something about it. So do I. I’m doing something about it full time, telling stories about hypermedia APIs that exist out there, and even crafting my own hypermedia implementations, and telling the story around them. What are you doing to help equip folks, and educate them about best practices? I think once you hit the road doing this type of storytelling in a sustained way, you will see things differently, and begin to see how much investment we need in basic web literacy–something the OpenAPI spec is helping break down for folks, and providing them with a bridge to incrementally get from where they are at, to where they need to be.

Folks need to learn about consistent design patterns for their requests and responses. Think about getting their schema house in order. Think through content negotiation in a consistent way. Be able to see plainly that they only use three HTTP status codes: 200, 404, and 500. OpenAPI provides a scaffolding for API developers to begin thinking about design, learn how the web works, and begin to share and communicate around what they’ve learned using a common language. It also allows them to learn from other API designers who are further along in their journey or just have had success with a single API pattern.

For you (the hypermedia guru), OpenAPI is redundant, meaningless, and unnecessary. For many others, it provides them with a healthy blueprint to follow and allows them to be spoon-fed web literacy, and good API design practices, in the increments their daily job, and current budget allows. Few API practitioners have the luxury of immersing themselves in deep thought about the architectural styles and the design of network-based software architectures or wading through W3C standards. They just need to get their job done, and the API deployed, with very little time for much else.

It is up to us leaders in the space to help provide them with the knowledge they’ll need. Not just in book format. Not everyone learns this way. We need to also point out the existing healthy patterns in use already. We need to invest in more blueprints, prototypes, and working examples of good API design. I think my default response for folks who tell me that OpenAPI isn’t a solution will be to ask them to send me a link to all the working examples and stories of the future they envision. I think until you’ve paid your dues in this area, you won’t see how hard it is to educate the masses and actually get folks headed in the right direction at scale.

I often get frustrated with the speed at which things move when I’m purely looking at things from what I want and what I know and understand. However, when I step back and think about what it will truly take to get the masses of developers and business users on-boarded with the API economy–the OpenAPI specification makes a lot more sense as a scaffolding for helping us get where we need to be. Oh, and also I watch Darrel Miller (@darrel_miller) push on the specification on a daily basis, and I know things are going to be ok.


Keen IO Pushing Forward The Data Schema Conversation

I wrote earlier this year that I would like us all to focus more on the schema and definitions of the data we use across API operations. Since then I’ve been keeping an eye out for any other interesting signs in this area, like Postman with their data editor, and now I’ve come across the Streams Manager for inspecting the data schema of your event collections in Keen IO.

With Streams Manager you can:

  • Inspect and review the data schema for each of your event collections
  • Review the last 10 events for each of your event collections
  • Delete event collections that are no longer needed
  • Inspect the trends across your combined data streams over the last 30-day period

Keen IO provides us with an interesting approach to getting in tune with the schema across your event collections. I’d like to see more of this across the API lifecycle. I understand that companies like Runscope, Stoplight, Postman, and others already let us peek inside of each API call, which includes a look at the schema in play. This is good, but I’d like to see more schema management solutions at this layer helping API providers from design to deprecation.

In 2017 we all have an insane amount of bits and bytes flowing around us in our daily business operations. APIs are only enabling this to grow, opening up access to our bits to our partners and on the open web. We need more solutions like Keen’s Stream Manager, but for every layer of the API stack, allowing us to get our schema house in order, and make sense of the growing data bits we are producing, managing, and sharing.


Box Goes All In On OpenAPI

Box has gone all in on OpenAPI. They have published an OpenAPI for their document and storage API on Github, where it can be used in a variety of tools and services, as well as be maintained as part of the Box platform operations. This adds to the number of high-profile APIs managing their OpenAPI definitions on Github, like the NY Times.

As part of their OpenAPI release, Box published a blog post that touches on all the major benefits of having an OpenAPI, like forking on Github for integration into your workflow, generating documentation and visualizations, code, mock APIs, and even monitoring and testing using Runscope. It’s good to see a major API provider drinking the OpenAPI Kool-Aid, and working to reduce friction for their developers.

I would love to not be in the business of crafting complete OpenAPIs for API providers. I would like every single API provider to be maintaining their own OpenAPI on Github like Box and the NY Times (http://apievangelist.com/2017/03/01/new-york-times-manages-their-openapi-using-github/) do. Then I (we) could spend time indexing, curating, and developing interesting tooling and visualizations to help us tell stories around APIs, helping developers and business owners understand what is possible with the growing number of APIs available today.


Evolving API SDKs at Google With Storage, Logging and Analytics

One layer of my API research is dedicated to keeping track of what is going on with API software development kits (SDK). I have been looking at trends in SDK evolution as part of continuous integration and deployment, increased analytics at the SDK layer, and SDKs getting more specialized in the last year. This is a conversation that Google is continuing to move forward by focusing on enhanced storage, logging, and analytics at the SDK level.

Google provides a nice example of how API providers are increasing the number of available resources at the SDK layer, beyond just handling API requests and responses, and authentication. I’ll try to carve out some time to paint a bigger picture of what Google is up to with SDKs. All their experience developing and supporting SDKs across hundreds of public APIs seems to be coming to a head with the Google Cloud SDK effort(s).

I’m optimistic about the changes at the SDK level, so much so that I’m even advising APIMATIC on their strategy, but I’m also nervous about some of the negative consequences that will come with expanding this layer of API integration–things like latency, surveillance, privacy, and security are top of mind. I’ll keep monitoring what is going on, and do deeper dives into SDK approaches when I have the time and $money$, and if nothing else I will just craft regular stories about how SDKs are evolving.

Disclosure: I am an advisor of APIMATIC.


It’s Not Just The Technology: API Monitoring Means You Care

I was just messing around with a friend online about monitoring of our monitoring tools, where I said that I have a monitor setup to monitor whether or not I care about monitoring. I was half joking, but in reality, giving a shit is actually a pretty critical component of monitoring when you think about it. Nobody monitors something they don’t care about. While monitoring in the world of APIs might mean a variety of things, I’m guessing that caring about those resources is a piece of every single monitoring configuration.

This has come up before in conversation with my friend Dave O’Neill of APIMetrics, where he tells stories of developers signing up for their service, running the reports they need to satisfy management or customers, then turning off the service. I think this type of behavior exists at all levels, with many reasons why someone truly doesn’t care about a service actually performing as promised, and doing what it takes to rise to the occasion–resulting in the instability and unreliability of APIs that gets touted in the tech blogosphere.

There are many reasons management or developers will not truly care when it comes to monitoring the availability, reliability, and security of an API. It demonstrates yet another aspect of the API space that is more business and politics than it is ever technical. We are seeing this play out online with the flakiness of websites, applications, devices, and the networks we depend on daily, and the waves of breaches, vulnerabilities, and general cyber(in)security. This is a human problem, not a technical one, but there are many services and tools that can help mitigate people not caring.


On Device Machine Learning API Stack

I was reading about Google’s TensorFlowLite in Techcrunch, and their mention of Facebook’s Caffe2Go, and I was reminded of a conversation I was having with the Oxford Dictionaries API team a couple months ago.

The OED and other dictionary and language content API teams wanted to learn more about on-device API deployment, so their dictionaries could become the default. I asked a while ago when we will have containers natively on our routers, but I’d also like to add to that request–when will we have a stack of containers on device where we can deploy API resources that can be used by applications, and augment the existing on-device hardware and OS APIs?

API providers should be able to deploy their APIs exactly where they are needed. API deployment, management, monitoring, logging, and analytics should exist by default in these micro-containerized environments on any device. Whether it’s our mobile phones, our automobiles, or the weather, solar, or other industrial device integration, we are going to need API-driven data, ML, AI, augmented, and other resources on-device, in a localized environment.


My Google Sheet Driven Product API And Web Page

I am in the process of eliminating the MySQL backend behind much of my research, eliminating a business expense, as well as an unnecessary complexity in my architecture. There really is no reason for the data I use in my business to be in a database. Nothing I track on tends to go beyond 10K rows, with most of the tables actually being less than 100 rows–perfect for spreadsheets, and my new static approach to delivering APIs, and websites for my research.

The time had come to update some of the products on my website, and I thought my product page was a perfect candidate for this approach, providing me with the following elements:

  • Products Google Sheet - I have a simple spreadsheet with all of my products in it.
  • Jekyll YAML Data Store - I have a YAML data store in the _data folder for API Evangelist.
  • Google Sheet to YAML Sync - I have a JavaScript function that pulls the data from the Google Sheet, converts it to YAML, and writes it to the _data folder in the Jekyll repository.
  • Products Web Page - I have a page that lists all the products in the YAML file as HTML using Liquid.
  • Products API - I have a JSON page that lists all the products in the YAML file as JSON using Liquid.

This simple approach to publishing static APIs using Google Sheets and Github is working well for little data like this–I am all about the little data, while everyone else is excited about big data. ;-) I even have the beginning of some documentation and an updated APIs.json for my website.

Next, I’ll work through the rest of my projects, organizations, tools, and other data I track on as part of my API research. I’ll be publishing a complete snapshot of this data at API Evangelist, as well as subsets of it at each of the individual research projects. When I’m done I’ll have a nice static stack of APIs for all of my research, easily managed via Google Sheets, and YAML on Github.


15 Topics To Help Folks See The Business Potential Of APIs

One of my clients asked me for fifteen bullet points of what I’d say to help convince folks at his company that APIs are the future, and have potentially viable business models. While helping convince people of the market value of APIs is not really my game anymore, I’m still interested in putting on my business of APIs hat, and playing this game to see what I can brainstorm to convince folks to be more open with their APIs.

Here are the fifteen stories from the API space that I would share with folks to help them understand the potential.

  1. Web - Remember asking about the viability of the web? That was barely 20 years ago. APIs are just the next iteration of the web, and instead of just delivering HTML to humans for viewing in the browser, it is about sharing machine-readable versions for use in mobile, devices, and other types of applications.
  2. Cloud - The secret to Amazon’s success has been APIs. Their ability to disrupt retail commerce, and impact almost every other business sector with the cloud was API-driven.
  3. Mobile - APIs are how data, content, and algorithms are delivered to mobile devices, as well as provides developers with access to other device capabilities like the camera, GPS, and other essential aspects of our ubiquitous mobile devices.
  4. SalesForce - SalesForce has disrupted the CRM and sales market with its API-driven approach to software since 2001, generating 50% of its revenue via APIs.
  5. Twitter - Well, maybe Twitter is not the poster child of revenue, but I do think they provide an example of how an API can create a viable ecosystem where there is money to be made building businesses. There are numerous successful startups born out of the Twitter API ecosystem, and Twitter itself is a great example of what is possible with APIs–both good and bad.
  6. Twilio - Twilio is the poster child for how you do APIs right, build a proper business, then go public. Twilio has transformed messaging with their voice and SMS API solutions and provides a solid blueprint for building a viable business using APIs.
  7. Netflix - While the Netflix public API was widely seen as a failure, APIs have long driven internal and partner growth at Netflix, and have enabled the company to scale beyond the data center, into the cloud, then dominate and transform an industry at a global level.
  8. Apigee - The poster child for an API sector IPO, and then acquisition by Google–demonstrating the value of API management to the leading tech companies, as they position themselves for ongoing battle in the clouds.
  9. Automobiles - Most of the top automobile manufacturers have publicly available API programs and are actively courting other leading API providers like Facebook, Pandora, Youtube, and others–acknowledging the role APIs will play in the automobile experience of tomorrow.
  10. iPaaS - Integration platform as a service providers like IFTTT and Zapier put APIs in the hands of the average business users, allowing them to orchestrate their presence and operations across the growing number of platforms we depend on each day.
  11. Big Data - Like it or not, data is often seen as the new oil, elevating the value of data, and increasing investment into acquiring this valuable data. APIs are the pipes for everything big data, analysis, visualization, prediction, and everything you need to quantify markets using data.
  12. Investments - Investment in API-focused companies has not slowed in a decade. VC’s are waking up to the competitive advantage of having APIs, and their portfolios are continuing to reflect the value APIs bring to the table.
  13. Acquisitions - Google buying Apigee, Red Hat buying 3Scale, Oracle buying Apiary, are just a few of the recent high profile API related acquisitions. Successful API startups are a valuable acquisition target, continuing to show the viability of building API businesses.
  14. Regulations - We are beginning to see the government move beyond just open data and getting into the API regulations and policy arena, with banking and PSD2 in Europe, FHIR in healthcare from Health and Human Services in the U.S., and other movements within a variety of sectors. As the value and importance of APIs grows, so does the need for government to step in and provide guidance for existing industries like taxicabs with Uber, or newer media outlets like Facebook or Twitter.
  15. Elections - As we are seeing in the UK, US, EU, and beyond, elections are being driven by social media, primarily Twitter, and Facebook, which are API-driven. This is all being fueled by advertising, which is primarily API driven via Facebook, Twitter, and Google.

If I was sitting in a room full of executives, these are the fifteen points I’d bring up to drive a conversation around the business viability of doing APIs. There are many other stories I’d include, but I would want to work from a variety of perspectives that would speak to leadership across a potentially wide variety of business sectors.

Honestly, APIs really don’t excite me at VC scale, but I get it. Personally, I think Amazon, Google, Azure, and a handful of API rock stars are going to dominate the conversation moving forward. However, there will still be a lot of opportunities in top performing business sectors, as well as in the long tail for the rest of us little guys. Hopefully, in the cracks, we can still innovate, while the tech giants partner with other industry giants and continue to gobble up API startups along the way.


The Parrot Sequoia API Is Nice And Simple For IoT

I have been profiling a number of drone APIs lately, and I came across some interesting APIs out of Parrot. Not all of the APIs are for drones, but I thought they were clean and simple examples of what IoT APIs can look like.

The API for the Parrot Sequoia camera can be controlled over USB or WIFI, allowing you to change settings, calibrate the sensors, trigger image capture, and manage memory and files.

Here are the paths for the device:

  • /capture: to get the Sequoia capture state, start and stop a capture
  • /config: to get and set the configuration of the camera
  • /status: to get all information about the Sequoia physical state
  • /calibration: to get the calibration status, start and stop a calibration
  • /storage: to get information about memory
  • /file: to get files and folders information
  • /download: to download files
  • /delete: to delete files and folders
  • /version: to get serial number and software version
  • /wifi: to get the Sequoia SSID
  • /manualmode: to get and set ISO and exposure manually
  • /websocket: to use WebSocket notifications on asynchronous events

I like the simple use of API design to express what is possible with an IoT device, and that a small hand-held deployable camera and sensor can be defined in this way. While you still need some coding skills to bring any integration to life, anyone could land on the API page and pretty quickly understand what is possible with the device API.
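
Just to show how approachable this kind of device API is, here is a quick sketch of reading a couple of those paths over WIFI. The base URL is a placeholder for wherever the camera is reachable, and I have not verified the exact request and response details, so treat it as an illustration of the idea rather than working integration code:

    // Illustrative sketch of calling the Sequoia's read-only paths.
    const BASE_URL = "http://camera.local"; // placeholder for the camera's address

    async function getStatus() {
      const response = await fetch(`${BASE_URL}/status`);
      return response.json(); // physical state of the camera
    }

    async function getVersion() {
      const response = await fetch(`${BASE_URL}/version`);
      return response.json(); // serial number and software version
    }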

When I come across simple approaches to IoT devices using web APIs I try to write about them, adding them to my research when it comes to IoT APIs. It gives me an easy way to find it again in the future, but also hopefully provides IoT manufacturers some examples of how you can do this as simply and effectively as you can.

I need as many examples as I can get of how APIs can be done in the cloud, or even on a device, like the Parrot Sequoia API.


Focusing On What You Do Best While Leveraging APIs To Not Reinvent The Wheel

There are some pretty proven API solutions out there these days. I had to explain to someone on a call the other day that in 2017 you shouldn’t ever roll your own API signup, registration, rate limiting, reporting, logging, and other API management features–there are too many proven API management solutions on the market these days (cough, 3Scale, Restlet, DreamFactory, or Tyk).

As a penny-pinching small business owner who is also a programmer, I am always struggling with the question of whether I should be buying or building. However, when it comes to some of the more proven, well-established API sectors–I know better. One area where I will never develop my own tooling is analytics. There are just too many quality analytics solutions available out there that are API-driven.

One of these is Keen.io, who describe themselves as “APIs for capturing, analyzing, and embedding event data in everything you build”. They provided an example of this in action in their unstoppable API era post:

One of Keen’s largest customers stealthily uses Keen’s APIs to white label an in-app analytics suite for their Fortune 500 customers. Their delivery manager probably made the point most succinctly: “It would have taken our team two years to deliver what we’ve done with Keen in nine weeks.”

As developers, we like to think we can do anything on our own, and we often also underestimate the time it will take to accomplish something. I think I’m going to go through the high-grade, well-established resources available in my API Stack research, and see which areas I feel contain solid API providers that I would think twice about reinventing the wheel in.

Something like analytics always seems easy to do at first, but once you are down the rabbit hole trying to do it at scale, things begin to change. I prefer to not get distracted by the nuts and bolts of things like analytics, and be allowed to focus on what I’m good at. The best part of APIs is that you get to look good doing what you do, but it is something that can also be enhanced and augmented by other experts like Keen.io who are rocking what they do.


Studying How Providers Are Supporting Batch API Requests

A recent addition to my API research is the concept of making batch API requests. I was reminded of this during a webinar I did with Cloud Elements when they cited batch API requests as an area needing improvement in their State of API Integration report. I had also recently come across several batch APIs while profiling the Google API stack, so I already had the topic in my notebook, but Cloud Elements pushed me to add the topic to my research.

Here are a handful of batch API implementations I am working through, to better understand how providers are approaching the problem:

  • Facebook Graph API
  • Google Cloud Storage (https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch)
  • Full Contact
  • Zendesk
  • SalesForce
  • Microsoft Office
  • Amazon
  • MailChimp
  • Meetup

As I do, in my approach to API research, I will process the common patterns I come across in each of these implementations, then add as building blocks in my API design research, hopefully providing some details API providers can consider early on in the API lifecycle. I’m not looking to tell people how to deliver batch APIs–I am just looking to shine a light on how the successful APIs are already doing it.

I feel like batch APIs are a response to more APIs, and less direct database access or full data download availability. While modular, simple APIs that do one thing well work in many situations, sometimes you need to move large amounts of resources around and make API requests that do more than just update a single resource or database record. I’ll file this research under API design, but I’ll also make a mention of it at the database and other levels, as I identify the variances in how bulk API requests are being made and the solutions they are providing.
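
One of the common patterns I am seeing so far is a single batch endpoint that accepts an array of sub-requests and returns a matching array of responses. Here is a generic sketch of that envelope, with field names that are illustrative and do not match any one provider’s actual batch API:

    // Generic sketch of a batch request envelope -- not a specific provider's format.
    async function sendBatch(apiBaseUrl, subRequests) {
      const response = await fetch(`${apiBaseUrl}/batch`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ requests: subRequests }),
      });
      return response.json(); // expected to be one response object per sub-request
    }

    // Usage: bundle several operations into a single HTTP call.
    // sendBatch("https://api.example.com", [
    //   { method: "GET", path: "/contacts/123" },
    //   { method: "POST", path: "/contacts", body: { name: "Jane Doe" } },
    // ]);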


JSON Schema For OpenAPI Version 3.0

We are inching closer to a final release of version 3.0 for the OpenAPI specification, with the official version currently set at 3.0.0-rc1. We are beginning to see tooling emerge, and services like APIMATIC are already supporting version 3.0 when it comes to SDK generation, as well as their API Transformer conversion tool.

I am working on an OpenAPI validation solution tailored specifically for municipal API deployments and was working with the JSON Schema for version 2.0 of the API specification. I wanted to make my work as ready for the future of the API specification as possible, and wanted to see if there was a JSON Schema for version 3.0 of the OpenAPI specification. I couldn’t find anything in the new branch of the repository, so I set out to see if anyone else has been working on it.

While I was searching I saw Tim Burks share that he and Mike Ralphson were working on one over at the Google Gnostic project. It looks like it is still a work in progress, but it provides us with a starting point to work from.

I will keep an eye out on the OpenAPI repo for when they add an official JSON Schema for version 3.0. The more I learn about JSON Schema, the more I understand how it helps harmonize and stabilize tooling being developed around any schema. Without it, you get some pretty unreliable results, but with it, you can achieve some pretty scalable consistency across API tooling and services–something the API space needs as much as it can get in 2017.
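
Once an official JSON Schema for 3.0 does land, the validation work I am describing stays pretty straightforward. Here is a minimal sketch using the Ajv validator and js-yaml, with the schema and definition file names as placeholders, just to show the shape of the tooling rather than a finished solution:

    // Minimal sketch of validating an OpenAPI definition against a JSON Schema.
    // File names are placeholders; Ajv and js-yaml are assumed to be installed.
    const fs = require("fs");
    const yaml = require("js-yaml");
    const Ajv = require("ajv");

    const schema = JSON.parse(fs.readFileSync("openapi-3.0-schema.json", "utf8"));
    const definition = yaml.load(fs.readFileSync("openapi.yaml", "utf8"));

    const ajv = new Ajv({ allErrors: true });
    const validate = ajv.compile(schema);

    if (validate(definition)) {
      console.log("OpenAPI definition is valid against the schema.");
    } else {
      console.log("Validation errors:", validate.errors);
    }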