The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.

The Concern Around Availability And Reliability Of Government APIs

There are rumors circulating about more government open data going away, this round at the EPA. The EPA says the data isn't going anywhere, but understandably there are some serious concerns about the availability and reliability of environmental data from federal agencies during a Trump administration. With the looming government shutdown, I'll renew some of my old arguments around how we can keep government data accessible and making a difference--thoughts I developed during the 2013 shutdown.

Communication And Collaboration
First and foremost, communication around API operations is essential for any kind of reliability. Too often the providers behind APIs, and even consumers of APIs, are radio silent, refusing to coordinate and communicate around the realities of API operations and integrations. I don't expect all government APIs to stay in operation forever, but I do expect all API providers to be honest and communicative about the past, present, and future of API operations.

Access To The Raw Data
Alongside every government data API, there should be a download of the raw data behind the API. This is the best way to keep data available, allowing developers to download data and put it to use directly in their local environment, bypassing or augmenting any dependence on the API. Remember that not everyone will have the resources and skills to make this happen, so downloads shouldn't come instead of an API, but complement any API, providing more complete access for those who need it.

Use And Share A Common API Definition And Schema
When you define and share your API using machine readable specifications like OpenAPI, it has the potential to turn a single API into a common set of federated APIs, all working together for a common goal. In addition to helping generate API documentation, and code in a variety of languages, an OpenAPI for all government APIs would make them more accessible, as well as reusable.
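
To make this tangible, here is a minimal sketch of what a shared, machine readable definition could look like, using OpenAPI in YAML--the dataset, paths, and fields are purely illustrative, not an actual government API:

    swagger: '2.0'
    info:
      title: Environmental Facility Data API
      description: A common definition any agency, or any mirror of an agency, could implement.
      version: 1.0.0
    basePath: /v1
    paths:
      /facilities:
        get:
          summary: Search environmental facilities
          parameters:
            - name: query
              in: query
              type: string
              description: Keyword search across facility names and descriptions.
          responses:
            '200':
              description: A list of facilities matching the query.

Any agency publishing to this shape of definition, and any third party mirroring it, would instantly share the same documentation, code generation, and tooling.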

Cached Or Slave API Deployment
When any API employs a common API definition and schema, as well as full datasets for download, it opens up the possibility for cached editions, or slave implementations of an API. The federal government's API would always be the master version, but when there are outages, either temporary or permanent, the slave or cached versions of an API can help ensure websites and applications continue operating.

Government API Service Level Agreements (SLA)
Make the implementation of service level agreements across federal government APIs standard practice. When any government agency prepares to launch an API, they should make sure a service level agreement is in place, setting expectations for the availability of the service. I'm not sure what the enforcement of this would look like, but the presence of SLAs, and the conversation they stimulate, would be an important first step toward ensuring better access and availability to APIs and open data.

Government API Deprecation Policies
Similar to the SLA discussion, let's put the conversation around the deprecation of government APIs front and center. Let's make it standard operating procedure to have an API deprecation policy as part of ALL API operations in the federal government. There are plenty of blueprints to follow when it comes to setting deprecation policy, but mostly it is just about having the discussion about expectations around the availability of any API and the open data behind it.

Everything Goes Away Eventually
There are many reasons why federal government APIs go away. The government shutdown in 2013 frustrated me endlessly, but there are many reasons why a government API can go away, including budgetary constraints, or even administration ideology. There are many reasons why private sector APIs go away as well. We just need to keep having discussions about the realities of doing business with APIs, and ensure there are as many layers in place as possible to help minimize the pain associated with deploying web, mobile, and other applications on top of government open data and APIs.

Separating The Licensing Layers Of Your Valuable Data Using APIs

Data is power. If you have valuable data, people want it. While this is the current way of doing things on the Internet, it really isn't a new concept. The data in databases has always been wielded alongside business and political objectives. I have worked professionally as a database engineer for 30 years this year, with my first job building COBOL databases for use in schools across the State of Oregon in 1987, and have seen many different ways that data is the fuel for the engines of power.

Data is valuable. We put a lot of work into acquiring, creating, normalizing, updating, and maintaining our data. However, this value only goes so far if we keep it siloed, and isolated. We have to be able to open up our data to other stakeholders, partners, or possibly even the public. This is where modern approaches to APIs can help us in some meaningful ways, allowing data stewards to sensibly, and securely open up access to valuable API resources using low-cost web technology. One of the most common obstacles I see impeding companies, organizations, institutions, and agencies from achieving API success centers around restrictive views on data licensing, not being able to separate the data layers using APIs, and being overly concerned about a loss of power when publishing APIs.

You worked hard to develop the data you have, but you also want to make it accessible. To protect their interests, I see many folks impose pretty proprietary restrictions around their data, which ends up hurting its usage and viability in partner systems, and introduces friction when it comes to accessing and putting data to work--when this is the thing you really want as a data steward. Let me take a stab at helping you reduce this friction by better understanding how APIs can help you peel back the licensing layers of the onion when it comes to your valuable data.

Your Valuable Data
This is an example point of contact record. I've worked hard to create this bit of data (not really), and maintain a relationship with this point of contact. It takes time to validate that their record is up to date, always reflecting reality in my database.
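
Something like this--a simple record where all of the fields and values are just illustrative:

    contact:
      name: Jane Doe
      title: Director of Data
      organization: Example Agency
      email: jane.doe@example.com
      phone: 555-555-5555
      last_validated: 2017-01-15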

While openly licensed data is one important piece of the puzzle, and data should be openly licensed when it makes sense, the data itself is the layer of this discussion where you may want to be a little more controlling about who has access, and how partners and the public are able to put your data to use in their operations.

In an online, always on, digital environment, you want data accessible, but to do this you need to think critically about how you peel back the licensing onion when it comes to this data.

The Schema For Your Data
The first layer to peel back when you are looking to make data more accessible with APIs is at the schema level. These are the names, descriptions, data types, and other details about the meta layer of your valuable data--it isn't the data itself, but a description of the structure of your data.

Ideally, your schema already employs predefined schemas like we find at Schema.org, or Open Referral. Following common definitions will significantly widen the audience for any dataset, allowing data to seamlessly be used across a variety of systems. These forms of schema are openly licensed, usually put into the public domain.

The schema layer of open data can often resemble the data itself, using machine readable formats like XML, JSON, and YAML. This is most likely the biggest contributing factor for data stewards failing to see this as a separate layer of access from the data itself, and sometimes applying a restrictive license, or forgetting to license it at all.

Data is often more ephemeral than the schema. Ideally, schemas do not change often, are shared and reused, as well as free from restrictive licensing. For system integrations to work, and for partnerships to be sustainable, we need to speak a common language, and schema is how we describe our data so it can be put to use outside our firewall.

Make sure the schema for your data is well-defined, machine readable, and openly licensed for the widest possible use.
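
As a sketch of what this layer looks like on its own, here is a JSON Schema for the example contact record above, expressed in YAML--separate from the data, and separately licensable:

    $schema: http://json-schema.org/draft-04/schema#
    title: Contact
    type: object
    properties:
      name:
        type: string
        description: The full name of the point of contact.
      email:
        type: string
        description: The primary email address for the contact.
      phone:
        type: string
        description: The primary phone number for the contact.
    required:
      - name
      - email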

Defining Access To Data Using OpenAPI 
The next layer of this licensing onion is the API layer, which governs access to data, defining how requests are made upon data, and how responses are structured. Many API providers are putting OpenAPI to work to define this layer of data operations.
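
Continuing with the contact example, the API layer might be defined something like this--again, just an illustrative sketch of the access layer, not the data itself:

    paths:
      /contacts:
        get:
          summary: Retrieve a list of contacts
          parameters:
            - name: query
              in: query
              type: string
              description: Filter contacts by keyword.
          responses:
            '200':
              description: A list of contact records.
      /contacts/{id}:
        get:
          summary: Retrieve a single contact record
          parameters:
            - name: id
              in: path
              required: true
              type: string
          responses:
            '200':
              description: A single contact record.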

As with the schema layer of data operations, you are hoping that other companies, organizations, institutions, and government agencies will integrate this layer into their operations. This layer is much more permanent than the ephemeral data layer, and should be well defined, ideally sharing common patterns, and free from restrictive licensing.

Per the Oracle v Google Java API copyright case in the United States, the API layer is subject to copyright enforcement, meaning the naming and ordering of the surface area of your API can be copyrighted. If you are looking for others to comfortably integrate this definition into their operations, it should be openly licensed.

The API layer is not your data. It defines how data will be accessed, and put to use. It is important to separate this layer of your data operations, allowing it to be shared, reused, and implemented in many different ways, supporting web, mobile, voice, bot, and a growing number of API driven applications.

Make sure the API layer to data operations is well-defined, machine readable, and free from restrictive licensing when possible. 


Currently, many data providers I talk to see this all as a single entity. It's our data. It's valuable. We need to protect it. Yet at the same time, they really want it put to work in other systems, by partners, or even the public. Because they cannot separate the layers, and understand the need for separate licensing considerations, they end up being very closed with the data, schema, and API layers of their operations--introducing friction at all steps of the application and data life cycle.

Modern approaches to API management and logging at the API layer are how savvy data stewards are simultaneously opening up access, and maintaining control over data, while also increasing awareness around how data is being put to use, or not used. Key-based access, rate limits, and access plans are all approaches to opening up access to data, while maximizing control, and maintaining a desired balance of power between steward, partners, and consumers. In this model, your schema and API definition need to be open, accessible, and shareable, where the data itself can be much more tightly controlled, depending on the goals of the data steward, and the needs of consumers.

Let me know if you want to talk through the separation of these layers of licensing and access around your data resources. I'm all for helping you protect your valuable data, but in a pragmatic way. If you want to be successful in partnering with other stakeholders, you need to be thinking more critically about separating the layers of your data operations, and getting better at peeling back the onion of your data operations--something that seems to leave many folks with tears in their eyes.

A Ranking Score to Determine If Your API Was SLA Compliant

I talked about Google's shift towards providing an SLA across their cloud services last week, and this week I'd like to highlight APIMetrics' Cloud API Service Consistency (CASC) score, and how it can be applied to determine whether an API is meeting its SLA or not. APIMetrics came up with the CASC score as part of their APImetrics Insights analytics package, and has been very open with the API ranking system, as well as the data behind it.

The CASC score provides us with an API reliability and stability ranking to apply across our APIs, providing one very important layer of a comprehensive API rating system that we can use across the industries being impacted by APIs. I said in my story about Google's SLAs that companies should have an SLA present for their APIs. They will also need to ensure that 3rd party API service providers like APIMetrics are gathering the data, and providing us with a CASC score for all the APIs we depend on in our businesses.

I know that external monitoring service providers like APIMetrics, and API ranking systems like the CASC score, make some API providers nervous, but if you want to sell to the enterprise, government, and other critical infrastructure, you will need to get over it. I also recommend that you work hard to reduce barriers for API service providers to get at the data they need, as well as get out of their way when it comes to publishing the data publicly, and sharing or aggregating it as part of industry-level ratings for groups of APIs.

If we want this API economy that we keep talking about to scale, we are going to have to make sure we have SLAs, and ensure we are meeting them throughout our operations. SLA monitoring helps us meet our contractual engagements with our customers, but it is also beginning to contribute to an overall ranking system being used across the industries we operate in. This is something that is only going to grow and expand, so the sooner we get the infrastructure in place to determine our service level benchmarks, and monitor and aggregate the corresponding data, the better off we are going to be in a fast-growing API economy.

The SLA Is Becoming Standard Across Google APIs

I've been working my way through all of the Google APIs, making sure I have an OpenAPI for each API, as well as an APIs.json for their entire API operations. One of the things I index as part of each APIs.json, when present, is a service level agreement (SLA)--something I found to be quickly becoming standard across Google APIs.
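
For reference, an SLA ends up as just another property in each APIs.json entry (shown here converted to YAML), with the x-sla property type being my own convention--the URLs are just examples of the kind of pages I link to:

    name: Google
    description: An index of Google APIs.
    apis:
      - name: Google Cloud Storage
        humanURL: https://cloud.google.com/storage/
        properties:
          - type: x-documentation
            url: https://cloud.google.com/storage/docs/
          - type: x-sla
            url: https://cloud.google.com/storage/sla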

In this round of research, I found 17 API-driven services at Google that had an SLA present:

Like the rest of the startup space recently, Google has been making the shift towards selling to the enterprise. The days of having open access to APIs, with no business model involved, and no guarantees of a certain level of service, are gone. Before any large organization is going to bake an API into their operations, they are going to need an SLA present as part of the deal.

I'll keep an eye out for other API providers who make an SLA available, and see if I can keep track of this shift to doing business with the enterprise in startup land. I have some more questions about SLAs beyond just revenue exchanged for API access, like how an API contract, backed by an SLA, holds up as part of the startup life cycle--more specifically, how it holds up as part of an acquisition.

How I Can Help Make Sure Your API Is Ready For Use

As one of my clients is preparing to move their API from deployment to management, I'm helping them think through what is necessary to make sure their API is ready for use by a wider, more public group of developers. Ideally, I am brought into the discussion earlier on in the lifecycle, to influence design and deployment decisions, but I'm happy to be included at any time during the process. This is a generalized, and anonymized version of what I'm proposing to my client, to help make sure their API is ready for prime time--I just wanted to share with you a little of what goes on behind the scenes at API Evangelist, even when my clients aren't quite ready to talk publicly.

External Developer Number One
The first place I can help with the release of your API is by being the first external developer, and a fresh pair of eyes on your API. I can help with signing up, and actually making calls against every API, to make sure things are coherent and stable before putting them in the hands of 3rd party developers at a hackathon, amongst partners, or the general public. This is a prerequisite for me when it comes to writing a story on any API or doing additional consulting work, as it puts me in tune with what an API actually does, or doesn't do. The process will help provide you with a new perspective on the project after you have put so much work into the platform--in my case, it is a fresh pair of eyes that have on-boarded with 1000s of APIs.

Crafting Your API Developer Portal
Your operations will need a single place to go to engage with everything API. I always recommend deploying API portals using Github Pages, because it becomes the first area to engage with developers on Github, as part of your overall API efforts. Github is the easiest way to design, develop, deploy, and manage the API portal for your API efforts. I suggest focusing on all of the essential building blocks that any API operations should possess:

  • Landing Page
    • Tag Line - A short tagline describing what is possible using your API.
    • Description - A quick description (single paragraph) about what is available.
  • On-boarding
    • Signup Process - A link to the sign-up process for getting involved (OpenID).
    • Getting Started - A simple description, and numbered list of what it takes to get started.
    • Authentication Overview - A page dedicated to how authentication works.
    • FAQ - A listing of frequently asked questions broken down into categories, for easy browsing.
  • Integration
    • Documentation - Interactive documentation generated by the swagger / OpenAPI definition.
    • Code - Code samples, or software development kits for developers to put to work.
    • Postman Collection - A Postman Collection + Button for loading up APIs into Postman client.
  • Support
    • Github - Set up Github account, establish profile, and setup portal as the first point of support.
    • Twitter - Set up a Twitter account, establish a presence, and make it known you are open for business.
    • Email - Establish a single, shared email account that can provide support for all developers.
  • Communications
    • Blog - Establish a blog using Jekyll (easy with Github Pages), and begin telling stories of the platform.
    • Twitter - Get the Twitter account in sync with communication, as well as support efforts.
  • Updates
    • Roadmap - Using Github issues, establish a label, and rhythm for sharing out the platform roadmap.
    • Issues - Using Github issues, establish a label, and rhythm for sharing out current issues with the platform.
    • Change Log - Using Github issues, establish a label and rhythm for sharing out changes made to the platform.
    • Status - Publish a monitoring and status page keeping users in tune with the platform stability and availability.
  • Legal
    • Terms of Service - Establish the terms of service for your platform.
    • Privacy Policy - Establish the privacy policy for your platform.

All of these building blocks have been aggregated from across thousands of APIs, and are something ALL successful API providers possess. I recommend starting here. You will need this as a baseline to get up and running with developers, whether generally on the web or through specific hackathons and your private engagements. Being developer number one, and helping craft, deploy, and polish the resources available via a coherent developer portal, are what I bring to the table, and I am willing to set aside time to help you make it happen.

Additionally, I'm happy to set into motion some other discussions regarding pushing forward your API operations:

  • Discovery - Establish a base discovery plan for the portal, including SEO, APIs.json development
  • Validation - Validate each API endpoint, and establish JSON assertions as part of the OpenAPI & testing.
  • Testing - Establish a testing strategy for not just monitoring all API endpoints, but making sure they return valid data.
  • Security - Think about security beyond just identity and API keys, and begin scanning API endpoints, and looking for vulnerabilities.
  • Embeddable - Pull together a strategy for embeddable tooling including buttons, badges, and widgets.
  • Webhooks - Consider how to develop a webhook strategy allowing information to be pushed out to developers, reducing calls to APIs.
  • iPaaS - Think through how to develop an iPaaS strategy to allow for integration with platforms like Zapier, empowering developers and non-developers alike.

This is how I am helping companies make sure their APIs are ready for prime time. I regularly encounter many teams who have great APIs but have been too close to the ovens baking the bread, and find it difficult to go from development to management in a coherent way. I have on-boarded and hacked on thousands of APIs. I have been doing this for over a decade, and exclusively as a full-time job for the last seven years. I am your ideal first developer and can save you significant amounts of time when it comes to crafting and deploying your API portal.

As a one person operation, I don't have the time to do this for every company that approaches me, but I'm happy to engage with almost everyone who reaches out, to understand how I can best help. Who knows, I might help prevent you from making some pretty common mistakes, and I am happy to be a safer, early beta user of your APIs--one that will give you the feedback you are looking for.

Developing Internal API Curriculum And Workshops For Your Organization

I am working with an enterprise group to develop a curriculum that will be used across internal training workshops executed around the globe. They are looking to push their entire company towards an API way of doing things, and empower business and IT groups to realize their API potential. I'm anonymizing the company, as they have not agreed to me talking about it publicly, but as I do the work, I wanted to share what goes on behind the scenes, and help other organizations be aware of the work that I do.

The API training curriculum is going to be designed to reach and bring up to speed three levels of internal users:

  1. API Beginners - The introduction to the world of APIs for business or technical groups.
  2. API Partners - An outward focus for how APIs bring value to the organization and operations.
  3. API Trainers - Training the next generation of trainers, evangelists, and storytellers within the organization.

The beginner level will be designed for all employees to attend, no matter which group they come from. The second level of training will focus on equipping the more outward focused employees, with the third tier focusing on the long-term institutionalization of an API-first approach across the company. Customized tiers can be developed in the future, but these three levels will get things moving across all areas of global operations.

Here is a summary of how the training will be developed and executed:

  1. Design Curriculum - Design, develop, and prepare the curriculum, and the materials for all three workshop levels.
  2. Run Pilot Sessions - Conduct 2-3 pilot sessions with a controlled group of users, with possible changes, and additions to the content.
  3. Execute Workshops - Execute workshops in up to five countries initially, targeting only English speaking regions in the first round.
  4. Train the Trainer - Organize a train-the-trainer session for future API Evangelists, and workshop trainers, preparing for future rollout.

I develop and evolve my curriculum and training materials as I would an API, using Github. All content is developed as an interactive repository in HTML, CSS, and JS, with exportable PDFs, and presentations are done using Deck.js. The goal is to create content that can be versioned, forked, and localized for workshops, training, and across operations.

I am willing to offer a 10% discount if your organization is willing to allow the story of this process to be told publicly. A blog series about the process, from design to implementation. I'm happy to keep specific operational, and IP related material out of stories, but I would like to tell the story from the perspective of the trainer and evangelist -- adding another storyteller layer to the program.

If you need help with API transformation across your company, organization, institution, or government agency, and would like to develop training and workshop curriculum, feel free to reach out. API for beginners is always a great place to start, but I'm happy to help craft other internally distributed curriculum like API design guides, API operational strategies, all the way to API evangelism material. Let me know how I can help--developing materials for established organizations to assist them in making the shift towards doing business with APIs is something I'm investing significant cycles into in 2017 and 2018.

Continuous Integration Conversational Interfaces

I recently wrote about how Zapier's new command line interface has a continuous integration feel to it, and while I was writing the piece, I kept thinking about how these integration apps could be used as part of conversational interfaces. I'm thinking about messaging, voice, or even embeddable conversational interfaces, and Zapier's CLI could be used to define some known conversational scenarios we encounter on a regular basis.

I'm thinking about the side of conversational interfaces that is more known and scripted. I'm not thinking about creating applications that could hold their own in a natural language conversation, but ones that could be defined as part of known business processes, matching a well-defined question and set of rules. "Put new RSS feed posts into a Google Sheet", or "take new Instagram photos, and Tweet them out". A well-scripted set of business actions that I conduct on a regular basis. Applications defined and managed via the Zapier CLI, that are continuously integrated into the conversational interfaces I regularly use--Slack, Twitter, Facebook, my browser, and SMS on my iPhone.

I want an application for each of the micro conversations I have with online services each day. If a new conversation hasn't been defined, I want an easy way to articulate myself in terms of the 750+ applications that Zapier integrates with, and a way to have these Node.js applications introduced into the continuous integration for my conversational interfaces. I want all my conversational interfaces to be automated, with hundreds or thousands of little conversations going on in the background of my operations each day. The command line seems like an appropriate layer to make these conversational requests a reality--especially since Zapier is already having the conversations with the services I'm depending on each day.

Stories Are The Best Way To Keep The Door Open

The world is built on stories. People enjoy telling and hearing stories. Stories are the lifeblood of what I do as the API Evangelist, and are the number one way I stay in touch with people across many different industries and around the globe. As a single person shop, there are only so many calls I can conduct in any single day, and there are only so many folks I can ping on a regular basis to stay in touch--I rely on the power of stories to do the hard work for me, acting as the distributed glue in my world.

When I connect with someone new via Twitter, email, or in person, I always close up the conversation with, "if you ever have any good stories for me to tell, either anonymously, or directly, make sure and reach out". I do need the stories, but my primary objective in doing this is to keep the communication channels open, and encourage folks to do the heavy lifting when it comes to remembering to reach out to me and renew the connection between us. The urge to share (and be heard) or hear a good story is always much stronger than the desire to buy or sell a product (aka sales lead), making this approach produce the return on investment (ROI) I am looking for.

I see my approach in contrast to the urge to tip off major tech blogs about a product release or investment, because it often isn't about the tech--it almost always is about the humans. Companies want to talk to Techcrunch when there is a product, feature, investment, or press release, and people contact me when something good or bad has happened involving the humans. Sure, I still get the regular release engagement, but it is the stories that truly matter to me, and to my readers. Storytelling is the way I reach a large audience on a regular basis, and the way that a global group of people stays engaged with me, even when they migrate between companies, organizations, institutions, and government agencies--stories are everything, and always the best way to keep the door open.

Human Service APIs On AWS, Azure, Google, and Heroku

I have several volunteers available to do work on Open Referral's Human Services Data Specification (API). I have three developers who are ready to work on some projects, as well as an ongoing stream of potential developers I would like to keep busy working on a variety of implementations. I am focusing attention on the top four cloud platforms that companies are using today: AWS, Azure, Google, and Heroku. 

I am looking to develop a rolling wave of projects that will run on any cloud platform, as well as take advantage of the unique features that each provider offers. I've set up Github projects for managing the brainstorming and development of solutions for each of the four cloud platforms:

  • AWS - A project site outlining the services, tooling, projects, and communication around HSDS AWS development.
  • Azure - A project site outlining the services, tooling, projects, and communication around HSDS Azure development.
  • Google - A project site outlining the services, tooling, projects, and communication around HSDS Google development.
  • Heroku - A project site outlining the services, tooling, projects, and communication around HSDS Heroku development.

I want to incentivize the development of APIs that follow v1.1 of the HSDS OpenAPI. I'm encouraging PHP, Python, Ruby, and Node.js implementations, but am open to other suggestions. I would like to have very simple API implementations in each language, running on all four of the cloud platforms, with push button (or at least easy) installation from Github for each implementation.

Ideally, we focus on single API implementations, until there is a complete toolbox that helps providers of all shapes and sizes. Then I'd love to see administrative, web search, and other applications that can be implemented on top of any HSDS API. I can imagine the following elements:

  • API - Server-side implementations, or API implementation using specialized services available via any of the providers like Lambda, or Google Endpoints.
  • Validator - A JSON Schema, and any other suggested validator for the API definition, helping implementations validate their APIs.
  • Admin - Develop an administrative system for managing all of the data, content, and media that is stored as part of an HSDS API implementation.
  • Website - Develop a website or application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users.
  • Mobile App - Develop a mobile application that allows data, content, and media within an HSDS API implementation to be searched, browsed and engaged with by end-users via common mobile devices.
  • Developer Portal - Develop an API portal for managing and providing access to an HSDS API Implementation, allowing developers to sign up, and integrate with an API in their web, mobile, or another type of application.
  • Push Button Deployment - The ability to deploy any of the server side API implementations to the desired cloud platform of your choice with minimum configuration.

I'm looking to incentivize a buffet of simple API-driven implementations that can be easily deployed by cities, states, and other organizations that help deliver human services. They shouldn't be too complicated, or try to do everything for everyone. Ideally, they are simple, easily deployed infrastructure that can provide a seed for organizations looking to get started with their API efforts.

Additionally, I am looking to understand the realities of running a single API design across multiple cloud platforms. It seems like a realistic vision, but I know it is one that will be much more difficult than my geek brain thinks it will be. Along the way, I'm hoping to learn a lot more about each cloud platform, as well as the nuance of keeping my API design simple, even if the underlying platform varies from provider to provider.

Continuous Integration Platform As A Service At The Command Line

Integration platform as a service (iPaaS) provider Zapier recently launched a command line tool for managing your integrations, adding an interesting dimension to the platform--leaning in what feels like a continuous integration direction. The integration platform has long had a web builder for managing your integrations with over 750 API-driven services, using their APIs, but the command line interface feels like it's begging to be embedded into your development workflow and life cycle. 

Zapier is catering to engineers by allowing them to:

  • Bring your Node libraries. Zapier CLI Apps are made entirely of Node JS code. Use whichever libraries from NPM that you like. You can control every aspect of how Zapier interacts with your API, and our schema defines how authentication, Triggers, Actions, and Searches work.
  • Run your tests before you deploy. We believe unit testing is the best way to ensure high-quality code. You can use Mocha or another Node testing framework to feel confident in the code you deploy to Zapier.
  • Your app can live in source control. Every aspect of your integration—and every change you make—is written in code. That means you can track changes with Git or other source control like you do on other projects.
  • Version your app. You work in releases or sprints for the rest of your projects, why not do the same with your Zapier app? Turn your app updates into versions, then migrate some or all of your users to the latest version.
  • Collaborate with teammates. You don’t need to go it alone with your Zapier integration. A CLI app can be owned by more than one Zapier account, with new members quickly added using the tool. That way, the whole team can deploy updates to your Zapier app.

Here are the benefits of the Zapier CLI over the current web builder:

  • Coding locally. As mentioned above, you build CLI Apps on your local machine—not on Zapier's website.
  • Improved control over authentication flow. CLI Apps give you control over the API calls needed to set up authentication. This is particularly helpful if your API uses a slightly different flow for OAuth2 than what the Web Builder assumes. Now, you can determine the necessary calls and store whatever info your API needs.
  • Improved Custom Fields. When setting input fields, you can define static fields as well as functions to compute fields dynamically. These dynamic fields can be the result of API calls, or they can be computed based on the value of a static field.
  • Middleware. Tired of including a function call to add a header to each request? Or a call to parse out the responses? Now you can define middleware that runs before requests or after responses to process calls in a standard way.
  • Resources. You can define a single resource like a “Contact” and the methods allowed on that resource—like “get, list, create.” From this resource definition, Zapier can automatically generate Triggers and Actions, reusing some of the meta information defined on the resource.
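
To give a feel for what this looks like in practice, here is a rough sketch of a minimal CLI app, modeled on Zapier's own examples--the endpoint, trigger key, and field names are mine, purely illustrative:

    const zapier = require('zapier-platform-core');

    // A polling trigger that checks an endpoint for new blog posts.
    const newPost = {
      key: 'new_post',
      noun: 'Post',
      display: {
        label: 'New Blog Post',
        description: 'Triggers when a new post is published.',
      },
      operation: {
        // Zapier hands us a z object for making requests, and a bundle of context.
        perform: (z, bundle) =>
          z.request('https://example.com/api/posts')
            .then((response) => JSON.parse(response.content)),
      },
    };

    // The app definition that gets pushed to Zapier, and versioned in Git.
    const App = {
      version: require('./package.json').version,
      platformVersion: zapier.version,
      triggers: {
        [newPost.key]: newPost,
      },
    };

    module.exports = App;

Because all of this lives in code, it can be tested, versioned, and deployed just like the rest of your stack--which is what gives it the continuous integration feel.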

First, I see this significantly benefiting companies and organizations when it comes to the orchestration of internal and partner APIs--your engineering team should be developing Zapier applications for your most important business functions. Second, I see this significantly benefiting companies and organizations when it comes to empowering internal business and technical folks with a complete library of workflows for all the 3rd party services you depend on, like SalesForce, Google, Facebook, Twitter, and other leading SaaS providers.

Providing a Command Line Interface (CLI) alongside an Application Programming Interface (API) seems to be coming back into popularity, partly due to continuous integration (CI) trends. Amazon has always had a CLI alongside their APIs, but it is something other API providers, as well as API service providers like Zapier, seem to be tapping into. I'm going to make some time to build a stack of API Evangelist Zapier apps so I can define some of my most common integrations, and explore further automation as part of my own internal continuous integration workflow.

Protecting Our Valuable Data With APIs

In my travels over the last couple of weeks I have found myself in two separate cities, listening to two separate stories about using APIs to help protect some valuable data--data that someone was trying to defend, but also putting out there with APIs in hopes of generating revenue to keep things going. Often times when you mention APIs to someone, they automatically think they will have less control over their data in a digital environment, but increasingly companies are realizing that APIs actually have the potential to result in more control.

In Oxford, UK
While in Oxford I met with the Oxford Dictionaries API team, as well as other groups in charge of this important resource, including the OED team. They have been working on their dictionary content since 1884. They have put a significant amount of work into their dictionaries, and are very keen on defending the value of this important resource, while also making it available for use by partners, as well as the public. They are looking for APIs to help them define their dictionary API resources, and evolve them based on the feedback of the trusted partners who are approved to use the APIs.

In Boulder, CO
While in Boulder I met with a 211 data service provider, and a handful of their 211 operators who are looking for APIs to help them protect their valuable resources. Each city, county, or state organization has invested a lot into their databases of organizations, locations, and services that deliver vital human services in their local area. It takes a good deal of investment to maintain an up to date 211 system, and these operators are looking to fund that hard work using APIs, while also defending the integrity of their valuable resource--striking just the right balance with APIs.

APIs are not a guarantee that your data will always be in a perfect state, and only in the hands of the right people, but a well-managed API implementation can go a long way to protect a valuable dataset, while also making it easily accessible to those who should have access. APIs are helping Oxford Dictionaries, and 211 operators, define access to their APIs on their terms--all while logging, metering, and staying in tune with who is accessing data, and how they are using it.

Both Oxford Dictionaries and city, county, and state 211 operators want their data as widely available as possible. They want their partners to have easy access, but they also want to maintain a certain amount of control over the quality of the data, and how it is used, and made available on the open web. APIs are helping them define this access, while also being able to understand how APIs are consumed by applying rate limiting when it makes sense, and generating much-needed revenue to invest back into future development.

My recent travels have given me two new stories I can tell when helping companies, organizations, institutions, and government agencies that are looking to understand how APIs can be used to protect data.

Google Partner API As A Blueprint For Other APIs

I've been tracking on how API providers operate their partner programs for a while now, in hopes of pulling together some sort of blueprint that other API providers can follow. I'm always happy to find a stop along the API life cycle where an API provider has already developed a robust operational API, like the Google Partner API.

The Google Partner API provides the essential building blocks of a pretty sophisticated partner program API, including company, messaging, lead, offer, status, profile, user, relations, and analytics. It is a nicely designed API, providing a complete set of paths, with lots of detail and robustness when it comes to the surface area of the API, and the data it returns.

When you look at the documentation for the Google Partner API, it provides a good example of where Google is taking their API design across the company, by providing a REST, as well as a gRPC, version of the resource. All future APIs at the search giant will be following this approach, providing simpler REST APIs, with higher-performance gRPC editions if you want them.

Next time I update my partner API research, this story will be present. I will look for other partner API examples, and then take the design patterns present in the Google Partner API, and publish the OpenAPI for the project, as a suggestion for how you could design your partner API. Maybe I can even convince someone to craft some server-side implementations of the API, on a variety of platforms.

Google Needs To Get Their API Icon Set In Order

I have been ranting about an icon set for the API community for over a year now. I want there to be more than just a set of SDK programming language icons. Something that would give us a visual API vocabulary, and allow us to plan, share, and implement API infrastructure like AWS is beginning to do with their own icon set, and their latest visual tooling for defining your architecture.

While profiling Google APIs, I'm seeing a pretty disorganized approach to logos beyond just core services, and an absence of any dedicated icons that represent specific Google APIs. Amazon is pretty far ahead of the game when it comes to visual iconography to represent the services they offer, and will dominate when it comes to visual life cycle tooling, unless Google, Microsoft, or some other provider begins to invest in meaningful icons.

Ideally, someone will do this in a more open source way, giving us icons that represent cloud computing, machine learning, and the many other API driven resources being put to work today. Even if it is an open source solution, each service provider will have to be involved in some way, and invested in the process to some degree. I picture a world where API service providers like Zapier don't have to develop their own iconography, and there is a wealth of well thought out, and simply designed icons for everyone to pick from.

What are you waiting for Google--get to work! ;-)

YAML Templates For API Operations

I am seeing a significant number of infrastructure orchestration solutions in the cloud start using YAML templates as the core set of settings and instructions for workflows. Amazon recently introduced YAML templates for your AWS CloudFormations, where you can define the infrastructure templates you are using throughout the API life cycle. These AWS YAML templates are fast becoming the central definition to be used across AWS operations, with support in the AWS Service Catalog.

Whether you use AWS or not, working to define your infrastructure using YAML templates helps define what is going on. I'm seeing significant adoption of OpenAPIs in YAML, and I'm even beginning to create API operational indexes using APIs.json converted into YAML (there is a naming lesson in there). I also have YAML templates for each area of my API lifecycle research, providing me machine readable definitions for everything from news to patents, that I then use across my API research and storytelling.
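
As an example, here is the rough shape of one of my research area templates--the fields are simply the ones I happen to track, and the values are illustrative:

    name: API Monitoring
    description: My research into the API monitoring space.
    tags:
      - monitoring
      - reliability
    organizations:
      - name: Example Monitoring Provider
        url: https://example.com/
    news:
      - title: An example monitoring story
        url: https://example.com/story
        date: 2017-01-10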

I feel the same way about YAML as I did about JSON a decade ago, and it is quickly becoming my preferred format for any structured data, no matter where it is used in my operations. YAML + Github is quickly becoming the engine for some interesting ways of delivering infrastructure, and specifically API infrastructure, in a consistent way that is easy to communicate with others. This example focuses on AWS usage of YAML, but it is something I'm seeing from Google and other tech giants. I think the readability of YAML (minus quotes and brackets) makes it accessible to a wider audience beyond developer groups, something that is going to be critical to doing APIs at scale.

Log Files Are Only For When Things Go Wrong

I'm always amazed at the number of companies I work with that do not consider log files a first class data system. Log files for servers, web servers, and other systems or applications are only looked at when something goes wrong. I have to admit, I'm in the same situation. I have API access to the logging for my API management layer, but I do not have easy API access to my Linux servers or the Apache web server that runs on top of them.

I know that some companies use popular solutions like New Relic to do this, and I keep track of about eight API friendly logging solutions. I'm going to have to spend some time in my API logging research digging around for a solution I can use to stand up an API for my server(s). I'm not looking for any tooling on top of it, just something dead simple to parse my log files, put them into a database, and allow me to wrap them in a simple API for developing things on top of.
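
To sketch out what I mean, here is roughly the kind of dead simple solution I'm after, using nothing but the Node.js standard library--the log path, and the assumption of Apache's combined log format, are mine:

    const fs = require('fs');
    const http = require('http');
    const url = require('url');
    const readline = require('readline');

    // Apache combined log format: ip, identity, user, [time], "request", status, size.
    const LINE = /^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)/;
    const entries = [];

    const reader = readline.createInterface({
      input: fs.createReadStream('/var/log/apache2/access.log'),
    });

    // Parse each line of the log into a simple record.
    reader.on('line', (line) => {
      const match = LINE.exec(line);
      if (match) {
        entries.push({
          ip: match[1],
          time: match[2],
          method: match[3],
          path: match[4],
          status: Number(match[5]),
        });
      }
    });

    // Once the log is parsed, wrap it in a simple read-only API.
    reader.on('close', () => {
      http.createServer((request, response) => {
        // GET /logs?status=404 returns the first 100 matching entries.
        const query = url.parse(request.url, true).query;
        let results = entries;
        if (query.status) {
          results = entries.filter((entry) => entry.status === Number(query.status));
        }
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.end(JSON.stringify(results.slice(0, 100)));
      }).listen(3000);
    });

In a real implementation the entries would land in a database rather than memory, but it shows how thin the layer between a log file and an API can be.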

The first thing I'll build is some analytics. Maybe there is some ready-to-go solution already out there that is API-driven, and familiar with server or web server logs? It bothers me that logs aren't a first class data system in my operations. Maybe the awareness that comes from crafting and developing against a logging API would help me get more in tune with this layer of data exhaust being generated from my operations daily. I'm guessing it will also help me get a handle on performance and security across my systems, helping take the health of my operations up a notch or two.

IBM Has A Nice API Explorer

IBM has a pretty cool explorer format for their API catalog, allowing you to search and browse the IBM API catalog by category, and even broken down by preview, beta, and live APIs. It looks like there are about 60+ APIs in the catalog so far, with a mix of uses.

Each API has the essential building blocks like getting started, documentation, pricing, and other resources. I don't see any machine readable index like APIs.json or OpenAPI but will keep an eye out. If I have the time I will generate an index for the project after I'm done with some of the other leading cloud and machine learning platforms like AWS and Google.

Your API Should Reflect A Business Objective Not A Backend System

I'm in the middle of evolving a data schema to be a living breathing API. I just finished generating 130 paths, all with the same names as the schema tables and their fields. It's a natural beginning to any data-centric API. In these situations, it is easy for us to allow the backend system to dictate our approach to API design, rather than considering how the API will actually be used.

I'm taking the Human Service Data Specification (HSDS) schema, and generating the 130 create, read, update, and delete (CRUD) API paths I need for the API. This allows the organizations, locations, services, and other details of any human services implementation to be managed in a very database-driven way. This makes sense to my database administrator brain, but as I sit in a room full of implementors I'm reminded that none of this matters if it isn't serving an actual business objective.

If my API endpoints don't allow a help desk technician to properly search for a service, or a website user to browse the possibilities to find what they are looking for, my API means nothing. The CRUD is the easy part. Understanding the many different ways my API paths will (or won't) help someone find the services they need, or assist a human service organization to better reach their audience, is what the API(s) are all about--not just simply reflecting the backend system, and then walking away calling the job done.
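
To illustrate the difference, here are two paths side by side--a sketch, not the actual HSDS definition--one reflecting the backend table, and one reflecting the business objective:

    paths:
      /services:
        get:
          summary: List service records (the CRUD view of the backend table)
          responses:
            '200':
              description: All service records.
      /services/search:
        get:
          summary: Find services that meet a specific human need
          parameters:
            - name: need
              in: query
              type: string
              description: What the person is looking for (food, housing, childcare).
            - name: location
              in: query
              type: string
              description: Where the person needs the service delivered.
          responses:
            '200':
              description: Services matching the need, closest first.

Both paths touch the same tables, but only the second one speaks the language of the help desk technician, and the person they are trying to serve.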

More API Evangelists And Storytellers Please

Every once in a while I get a comment from someone regarding competition in the API storytelling space, alluding to someone else getting the page views, or audience, when it comes to APIs. I rarely worry about these things, and in reality, I want to see way more competition and outlets when it comes to short and long form API storytelling--the API space needs as many voices as it possibly can get.

I'd like to see domain specific evangelists emerge, beyond individual API advocates. Someone covering industrial, machine learning, healthcare, and other significant verticals. We need to begin to cultivate domain expertise, preferably vendor-agnostic and tooling-comprehensive knowledge, and accompanying storytelling. Some of these verticals are in desperate need of leadership, and I just don't have the time to focus in on any single area for too long.

We need more practical, and hands-on API storytelling like we are seeing from the Nordic APIs. They are rocking the storytelling lately, exceeding the quality I can achieve on API Evangelist. They are hitting all the important areas of the API life cycle, and their storytelling is genuine, useful, and not pushing any single product or service. The Nordic APIs is invested in educating the community, something that makes their work stand out--emulate what they are up to if you need a model to follow, don't follow mine. #seriously

If you are an API provider, or possibly an aggregator and marketplace, consider following the lead of the BBVAOpen4U API Market. They produce some interesting content, as well as have a good way of sharing and syndicating quality content from across the API space. I've seen a lot of companies come and go when it comes to aggregation and syndication of API stories. I like BBVA's open approach, because they have skin in the game, and seem to genuinely want to highlight other interesting things going on in the space--something more API providers should be doing.

If you want to get started in the space, all you need is a blog, a Github account, and a Twitter account. Come up with a name, and begin learning and writing. Don't worry about page views, or having a voice when you first start, just write. Then as you feel comfortable, and find your voice, begin sharing your work more, as well as highlighting the work of other storytellers in the space you are learning from. If you keep at it for a while, you never know, you might actually build a following, and be able to influence the world of APIs through the stories you tell--give it a shot.

Add An API To The Web For Sharing Text, URLs And Images

I am working to push forward my embeddable API research today, so I'm on the hunt for new tools and standards that can be included in my research and put to work by API providers. One of the top reasons for doing embeddable tools is the sharing of information and media on the web. Share buttons have become ubiquitous, so I wanted to have some standard approaches to making them a default part of API operations.

While monitoring the API space I came across "a proposal to add an API to the web for sharing text, URLs and images to an arbitrary destination of the user's choice":

Web Share is a proposed web API to enable a site to share data (text, URLs, images, etc) to an arbitrary destination of the user's choice. It could be a system service, a native app or another website. The goal is to enable web developers to build a generic "share" button that the user can click to trigger a system share dialog.

The current options for sharing content from a web page are:

  1. Share buttons to specific web services (e.g., Facebook and Twitter buttons).
  2. Share buttons built into the browser UI. Many browsers, especially on mobile, have a "share" button that sends the current page URL to a native app of the user's choice.
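
The proposal itself boils down to a single promise-returning method, which would look something like this wired up to a generic share button:

    // Trigger the system share dialog from a generic "share" button.
    document.querySelector('#share').addEventListener('click', () => {
      navigator.share({
        title: document.title,
        text: 'An interesting read on APIs.',
        url: window.location.href,
      }).then(() => console.log('Shared successfully'))
        .catch((error) => console.log('Sharing failed', error));
    });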

Web Share provides a pretty simple, yet useful example for sharing links. I'm adding it to my toolbox for my embeddable research. Once I gather a variety of other tools and standards, I will step back to look at the 100K view. I'm hoping to stimulate a more embeddable approach to conversational interfaces, something that us asynchronous antisocial folk can work with, going beyond just voice and messaging.

The Evolution Of The API Strategy And Practice Conference

In the summer of 2012, Steve Willmott approached me with the idea of doing an API conference. We had both been discussing the need for a vendor-neutral API conference throughout the year, and now he wanted to make it a reality. We got to work talking to potential sponsors to see if the idea would be financially viable, and after a handful of conversations, we quickly had the sponsor support we needed to go ahead with the real thing.

The first API Strategy & Practice (APIStrat) was scheduled for December 2012 in New York City, but had to be rescheduled to February of 2013 due to Hurricane Sandy. Honestly, it ended up working out well, with the first APIStrat ending up sold out. After we did New York, we followed up with San Francisco, Amsterdam, Chicago, Berlin, Austin, and the latest edition in Boston last November, and after seven events, the conference will now be operated by the Linux Foundation, as part of the OpenAPI Initiative.

I'm very happy to see the conference mature to this level. The conference has been a forum for sharing API knowledge for over three years, so it makes sense to elevate it as part of the movement at the OAI, something that is bigger than just the OpenAPI specification. The conference will have more reach, and resources available to it. APIStrat will have a much bigger events team, allowing it to continue to grow and evolve, beyond what Steve, myself, and the 3Scale team were able to make happen. It was a no-brainer to give the conference to the Linux Foundation, and the OAI--I couldn't imagine any better home for the conference I helped create.

I wanted to take a moment and thank Steve, and the 3Scale team, for making APIStrat a reality. Without their support, it never would have been a thing. I was fortunate enough to ride the train, and help stimulate the conversation. I feel like the API space has matured and evolved alongside APIStrat, and while the conference is just a small part of the growing conversation, I feel like it definitely helped drive awareness and healthy growth across the API sector. 3Scale was central to all of this happening, behaving in a humble, vendor-neutral way the entire time, while also ensuring the event was fully funded each round, even when it meant running at a loss--thank you 3Scale (and Red Hat).

The 8th edition of APIStrat is scheduled for October / November in Portland, Oregon. The entire Linux Foundation community is currently being mobilized, and we are beginning to talk about on the ground engagement with the Portland developer and business community, so make sure to get involved where you can. Steve and I are still very involved in the event, we just have a much bigger team to help pull it off now. I'm very happy to see how things have evolved, and I am looking forward to our baby growing up and becoming much bigger than our original vision. Thanks to 3Scale, Red Hat, the Linux Foundation, and the OAI team for pulling this off, I'm looking forward to the ongoing conversations about APIs with the growing APIStrat community.

Gearing Up For Enterprise Sales With An API Service Level Agreement

I am working through the AWS APIs and the Google APIs, and profiling the building blocks across both of these API operations. My objective in doing this work is to learn as much as I possibly can about how these companies are doing APIs. After diving into Google's APIs, I noticed that they were slightly ahead of the curve when it comes to providing service level agreements (SLAs) for their cloud APIs, compared to AWS.

I noticed nine SLAs while working through Google APIs:

I noticed five SLAs while working through AWS APIs:

I am sure there are other dedicated SLA pages I'm missing, as well as SLAs embedded in the documentation and terms of service of other systems. From what I can tell, these dedicated SLAs for APIs are in response to a shift in the landscape to sell more services to the enterprise, requiring that these details of doing business move front and center. With the volatility often associated with APIs in recent years, I'm guessing API SLAs will continue to become an essential element in all API integration decisions in coming years.

There has been enough movement in the area of API service level agreements for me to consider adding a dedicated research area to my work. I've written about the SLA APIs I've stumbled across, and I'm seeing SLAs float to the top with the bigcos, so it makes sense to pay closer attention to what is happening. Eventually, I'll aggregate the common building blocks across the API SLAs I'm finding, along with the other more unique approaches to ensuring API reliability and stability when doing business online today.

From CRUD To An API Design Conversation With Human Services

I am working to take an existing API, built on top of an evolving data schema, and move forward a common API definition that 211 providers in cities across the country can put to use in their operations. The goal with the Human Services Data Specification (HSDS) API specification is to encourage interoperability between 211 providers, allowing organizations to better deliver healthcare and other human services at the local and regional level.

So far, I have crafted a v1.0 OpenAPI derived from an existing Code for America project called Ohana, as well as a very CRUD (Create, Read, Update, and Delete) version 1.1 OpenAPI, with a working API prototype for use as a reference. I'm at a very important point in the design process with the HSDS API, and the design choices I make will stay with the project for a long, long time. I wanted to pause and open up a conversation with the community about what is next with the API's design.
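
To ground the conversation in something concrete, here is a rough sketch of the kind of CRUD surface area we are discussing--a hypothetical Python (Flask) take on an organizations resource, not the actual HSDS v1.1 definition:

```python
# A rough sketch of a CRUD resource--paths, fields, and storage are
# illustrative, not the actual HSDS v1.1 definition or prototype.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
organizations = {}  # in-memory stand-in for a real data store
next_id = 1

@app.route("/organizations", methods=["GET"])
def list_organizations():
    return jsonify(list(organizations.values()))

@app.route("/organizations", methods=["POST"])
def create_organization():
    global next_id
    record = request.get_json(force=True)
    record["id"] = next_id
    organizations[next_id] = record
    next_id += 1
    return jsonify(record), 201

@app.route("/organizations/<int:org_id>", methods=["GET"])
def read_organization(org_id):
    if org_id not in organizations:
        abort(404)
    return jsonify(organizations[org_id])

@app.route("/organizations/<int:org_id>", methods=["PUT"])
def update_organization(org_id):
    if org_id not in organizations:
        abort(404)
    organizations[org_id].update(request.get_json(force=True))
    return jsonify(organizations[org_id])

@app.route("/organizations/<int:org_id>", methods=["DELETE"])
def delete_organization(org_id):
    organizations.pop(org_id, None)
    return "", 204
```

Even a simple sketch like this surfaces the design questions on the table--how we name paths, which status codes to return, and how updates should behave.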

I am opening up the conversation around some of the usual API design suspects like how we name paths, use headers, and status codes, but I feel like I should also be asking the big questions around the use of hypermedia API design patterns, or possibly even GraphQL--a significant portion of the HSDS APIs driving city human services will be data intensive, and maybe GraphQL is one possible path forward. I'm not looking to do hypermedia or GraphQL because they are cool--I want them to serve specific business and organizational objectives.

To stimulate this conversation I've created some Github issues to talk about the usual suspects like versioning, filtering, pagination, sorting, and status & error codes, but I am also opening up threads specifically for hypermedia, and GraphQL, and a thread as a catch-all for other API design considerations. I'm looking to stimulate a conversation around the design of the HSDS API, but also develop some API design content that can help bring some folks up to speed on the decision-making process behind the crafting of an API at this scale.

HSDS isn't just the design for a single API, it is the design for potentially thousands of APIs, so I want to get this as right as I possibly can. Or at least make sure there has been sufficient discussion for this iteration of the API definition. I'll keep blogging about the process as we evolve, and settle in on decisions around each of these API design considerations. I'm hoping to make this a learning experience for myself, as well as for all the stakeholders in the HSDS project, but also provide a blueprint for other API providers to consider as they embark on their API journey, or maybe just the next major version of an existing API.

Tooling For Converting Your OpenAPI Definitions From 2.0 to 3.0

I wrote a post asking what it would take to migrate OpenAPI tooling from version 2.0 to 3.0 of the API specification, and Mike Ralphson (@PermittedSoc) commented about some of the projects he's been working on involving the latest specification version. I hope this is a good sign of things to come when it comes to moving from version 2.0 to 3.0 in 2017.

Mike has developed an OpenAPI converter and validator to help people migrate their OpenAPI definitions from 2.0 to 3.0. The open source tool also has an online version of the OpenAPI converter and validator for use in the browser, and of course, it also has an OpenAPI conversion and validation API, because ALL API tools and services should have an API--it is just good API craft.
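
Having an API means the conversion can be scripted--here is a hedged sketch in Python, with an illustrative URL standing in for the real endpoint (check the converter project's documentation for the actual one):

```python
# A hedged sketch of scripting a 2.0 to 3.0 conversion over HTTP.
# The endpoint URL is illustrative--not the converter's actual API.
import json

import requests

with open("swagger-2.0.json") as f:
    swagger = json.load(f)

response = requests.post(
    "https://converter.example.com/api/v1/convert",  # illustrative URL
    json=swagger,
    timeout=30,
)
response.raise_for_status()

# Write the converted OpenAPI 3.0 definition back to disk.
with open("openapi-3.0.json", "w") as f:
    json.dump(response.json(), f, indent=2)
```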

It makes sense that some of the first tools to emerge are for conversion. Many API developers will need to convert their existing API definitions into version 3.0 of the specification to begin learning about what is new with OpenAPI 3.0. If you have examples of OpenAPI 3.0 definitions for your API, please publish them to Github and share them with me, so I can help point folks to examples in the wild that they can learn from as we make this shift.

Playing With Different Views For An OpenAPI Diff Tool

I am working on version 1.1 of the API definition for the Human Services Data Specification (HSDS), and I needed some help articulating the differences between version 1.0 and 1.1, which are both defined using the OpenAPI 2.0 specification. I manage all of my OpenAPIs using Github, but I needed a friendlier way to show the diff between two JSON files than what Github provides. I got to work on a version that runs using Liquid in Jekyll, which all my sites and tools run on.

I have a variety of API documentation tools that run on Github, so I wanted to develop an interface that shows two separate OpenAPI definitions side by side on a simple HTML page. At this stage, I'm playing with different ways of showing the differences between paths, and other elements of the API definition. I'm not entirely happy with what I have, but I started applying a red DIFF label to any path that isn't represented in the previous API definition. It works well for helping me see which API endpoints have been added or changed in the latest version.

Currently, I am just looking to understand the differences in the paths available, but I will be adding diffs for headers, parameters, and other elements of the API definition. I'm worried about things getting too cluttered with bigger API definitions. I'm trying to keep things fast loading, and something I can work with non-developers on, so I want to be thoughtful about what I add, and how I add it to the layout. I'm looking to get a group of business users up to speed on where things are going with the spec, and encourage them to get more involved with future versions--so not scaring them off is an important part of this conversation.
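
Outside of Liquid, the underlying path diff logic is pretty simple--here is a minimal sketch in Python, assuming two OpenAPI 2.0 definitions stored as YAML files (the filenames are illustrative):

```python
# A minimal sketch of the path diff logic, assuming two OpenAPI 2.0
# definitions stored as YAML files (filenames are illustrative).
import yaml  # pip install pyyaml

def load_paths(filename):
    with open(filename) as f:
        return set(yaml.safe_load(f).get("paths", {}).keys())

v1_0 = load_paths("hsds-v1.0.yaml")
v1_1 = load_paths("hsds-v1.1.yaml")

# Any path present in 1.1 but not 1.0 gets the red DIFF label.
for path in sorted(v1_1 - v1_0):
    print("DIFF (added):", path)
for path in sorted(v1_0 - v1_1):
    print("DIFF (removed):", path)
```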

I am finding Jekyll, Liquid, and HTML, with a little JavaScript when necessary, to be a perfect medium for developing OpenAPI tooling on top of. It provides a simple, static, and flexible way to craft API tooling that can be forked by anyone. As my proficiency with Liquid evolves, I'm getting better at making it work with OpenAPI YAML that is mounted via Jekyll. With everything operating on Github, version control and API access to all my files are adding a valuable layer to my work. Now that I have several of these types of tools available, I also need to get more organized about how I evolve the code and maintain a catalog of these offerings, so that others can put them to use in their API operations.

Six API Embeddables To Consider For Your API

I have been profiling all of the Google APIs lately, a process that always yields a significant amount of stories for my notebook. One element of Google's approach to delivering APIs that I found relevant in the Google+ portal was their embeddable tooling. This is an area of the API lifecycle I'm regularly evangelizing, and always looking for good examples to support my research.

I think that the six embeddable tools Google offers up as part of their social API represent the top embeddable tooling I see across this space--partially because of the dominance of the social media platform, but also because they make sense to end-users, and accomplish common things that people want to do online.

Share, follow, and vote buttons are relevant to any platform with user accounts. Is yours available via API, enabling these types of tools? The badges, snippets, and embedded posts are relevant to any company, organization, institution, and agency looking to share content over the web (everyone wants to do this). I'm going to start a list of essential embeddable building blocks, providing a getting started list for companies looking to develop embeddable tooling for their consumers--Google gives me another good reference to add to my research.

I wish someone would start an API embeddables service, something like Zapier, but for embeddable content, tooling, and actions. I've talked about this in terms of conversational interfaces, but I think there is an opportunity to do this specifically for API-driven content, allowing people to easily authenticate, and generate profile badges, embeddable content listings and details, and other long tail content opportunities. I'll keep beating this drum until something emerges--because it is what I do.

Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and I think the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle like definition, design, DNS, deployment, management, monitoring, and others. Each of the Amazon services is available with a listing of each element available for the service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy, and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates we are crafting for each individual API.
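
To make the template layer concrete, here is a minimal sketch in Python that assembles a CloudFormation template--the S3 bucket is just an illustrative piece of infrastructure, not a full API stack:

```python
# A minimal sketch of the template layer underneath the designer,
# assembled in Python--the bucket resource is illustrative.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative API infrastructure template",
    "Resources": {
        "ApiAssetsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "api-assets-example"},
        }
    },
}

# The JSON output is what CloudFormation (and the Designer canvas)
# actually consumes and renders.
print(json.dumps(template, indent=2))
```

The templates being plain JSON or YAML is what makes the plug and play vision possible--they can be forked, versioned, and evolved just like the OpenAPI definitions they sit alongside.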

The AWS CloudFormation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft's, or any other stack you define--something that could easily be done using APIs.json, developing another layer of templating for which resource types are available in the designer, as well as for the formation templates generated by the design tool itself. There should be an open source "API formation designer" available that could span cloud providers, allowing architects to define which resources are available in their toolbox--something anyone could fork and run in their own environment.

I like where AWS is headed with their Cloud Formation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and someone steps up with an even more attractive drag and drop version, that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*

Exploring Github Curated Galleries

Github has long been my number one source for discovering people doing interesting things with APIs. As I was trying to articulate how API providers can put Github to work as part of their API operations in another story, I came across the Github Explore section. I think the list of items on the home page helps demonstrate that Github is about more than just managing open source code--which is the common perception of what you do with Github amongst muggles.

I feel that these nine areas reflect the top uses for Github in 2017:

  • Policies - From federal governments to corporations to student clubs, groups of all sizes are using GitHub to share, discuss, and improve laws.
  • Tools for Open Source - Software to make running your open source project a little bit easier.
  • Open source organizations - A showcase of organizations showcasing their open source projects.
  • Design essentials - This collection of design libraries are the best on the web, and will complete your toolset for designing stunning products.
  • Social Impact - Open source projects that are making the world a better place.
  • Open data - Examples of using GitHub to store, publish and collaborate on open, machine-readable datasets.
  • Package managers - Across programming languages and platforms, these popular package managers make it easy to distribute reusable libraries and plugins.
  • DevOps tools - These tools help you manage servers and deploy happier and more often with more confidence.
  • Machine learning - Laying the foundations for Skynet

There are other galleries available via the Github Explore section, but I think this list provides a nice snapshot. I was pleased to see open data as a showcase, which included the API directory, something I'd like to see become an entire category of API catalogs and directories. It also makes sense that the foundations for Skynet are being laid using Github as part of its machine learning showcase.

It is important to also note that the first area on this list is policies, providing some important leadership for city, state, and federal government agencies to follow when it comes to crafting policy in an observable way via Github. While not an antidote for all the illnesses of government, it provides an environment where the sunshine can be let in to disinfect the process just a bit.

While code is still the central focus of engagement on Github, I think the open data, machine learning, and policy galleries reflect the shifting landscape of what happens on Github. I'm investing some time into crafting more Github 101 stories for API providers and consumers, and will also be improving my own discovery process using the social platform, as there are numerous signals to tune into these days, and the explore galleries are one interesting layer that I'll be keeping an eye on regularly.

What Questions Would You Ask Across 50K API Definitions?

Mike Ralphson‏ (@PermittedSoc) asked me the other day, "if you could run SQL / #GraphQL queries over nearly 50K #API definitions, what would you ask?". I told him I would respond via blog post, which is one way I help amplify the conversations I have with other API folks in the space. Mike is doing some important work when it comes to API discovery, something that needs amplification if we are going to move this conversation forward.

Ok, so what would I ask of nearly 50K API definitions, if I had the opportunity? Here are some of my questions:

  • What are all the paths used? - I'd like to see a list of all path folders, separated by the forward slash, minus any parameters.
  • Which path folders are actually words? - I'd like to know how coherent API design patterns are, and how many folders are actually words in a dictionary.
  • What are all the tags applied? - A listing of all tags applied to APIs, with counts for each time it is used.
  • What are all the parameters? - A listing of all the parameters applied to APIs, with counts for each used.
  • What are all the definitions? - A listing of all definitions used as part of API requests and responses, with counts next to each.
  • How many don't have definitions? - What percentage of the API definitions do not have schema definitions describing their responses?
  • What businesses are involved? - A listing of all companies involved with API definitions from contact information to domain ownership.
  • What is whois information behind each domain(s)? - Pull whois information for all the domains involved in API definitions.
  • Which definitions do not have path summary or description? - Which definitions have omitted the description for the path.
  • What's the average length of path summary and descriptions? - Of the definitions with a description or summary, what is the average length?
  • How many APIs provide a link to terms of service? - Checking to see if terms of service are part of the definition.
  • How many APIs provide contact information for an API? - Checking to see if contact information is part of the definition.
  • How many APIs describe their headers? - Looking for specific headers described as part of each path definition.
  • How many APIs use the body as part of their request? - Looking for heavy body use as part of the request surface area of an API.
  • How many APIs have a security definition? - Which of the APIs has provided a definition for how an API is secured?
  • How many APIs do not use response codes? - Which of the APIs do not provide response codes for their API responses?
  • Which HTTP status codes are used in API responses? - Provide a list of the API response HTTP status codes used, with counts for each.

That is a starting list of what I'd like to ask of 50K API definitions. It is something I'd have to think and write about more to be able to come up with more creative questions. I recommend publishing them all to a Github repository and letting people start asking questions via an interface. You might not be able to answer all of the questions, but it would be interesting to see what people want to know--I am sure you could develop a pretty interesting look at how folks see API discovery, and what they are looking for.
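
To make a couple of these questions concrete, here is a minimal sketch in Python, assuming a local folder of OpenAPI (Swagger 2.0) definitions stored as JSON:

```python
# A minimal sketch of asking questions across a folder of OpenAPI
# (Swagger 2.0) JSON definitions--the folder name is illustrative.
import json
from collections import Counter
from pathlib import Path

tag_counts = Counter()
missing_contact = 0
definitions = list(Path("definitions").glob("*.json"))

for definition in definitions:
    spec = json.loads(definition.read_text())
    # Tally the tags applied across all of the definitions.
    for tag in spec.get("tags", []):
        tag_counts[tag.get("name", "")] += 1
    # Check whether contact information is part of the definition.
    if "contact" not in spec.get("info", {}):
        missing_contact += 1

print("Most common tags:", tag_counts.most_common(10))
print(f"{missing_contact} of {len(definitions)} definitions lack contact info")
```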

API discovery is one of the areas of the API life cycle that is pretty deficient due to how people view their APIs, and how they often have their blinders on regarding the wider API community. Most folks are thinking about their own APIs, and maybe a handful of other APIs they are familiar with, but do not consider API usage across industries, or the entire space. We need more work like this to occur. We need more API definitions to be made available, as well as more dynamic ways for folks to discover, understand, and learn about APIs that already exist.

OpenAPI As An API Literacy Tool

I've been an advocate for OpenAPI since its release, writing hundreds of stories about what is possible. I do not support OpenAPI because I think it is the perfect solution, I support it because I think it is the scaffolding for a bridge that will get us closer to a suitable solution for the world we have. I'm always studying how people see OpenAPI, both positive and negative, in hopes of better crafting examples of it being used, and stories about what is possible with the specification.

When you ask people what OpenAPI is for, the most common answer is documentation. The second most common answer is for generating code and SDKs. People often associate documentation and code generation with OpenAPI because these were the first two tools that were developed on top of the API specification. I do not see much pushback from the folks who don't like OpenAPI when it comes to documentation, but when it comes to generating code, I regularly see folks criticizing the concept of generating code using OpenAPI definitions.

When I think about OpenAPI I think about a life cycle full of tools and services that can be delivered, with documentation and code generation being just two of them. I feel it is shortsighted to dismiss OpenAPI because it falls short in any single stop along the API lifecycle, as the most important role for OpenAPI is API literacy--helping us craft, share, and teach API best practices.

OpenAPI, API Blueprint, and other API definition formats are the best way we currently have to define, share, and articulate APIs in a single document. Sure, a well-designed hypermedia API allows you to navigate, explore, and understand the surface area of an API, but how do you summarize that in a shareable and collaborative document that can also be used for documentation, monitoring, testing, and other stops along the API life cycle?

I wish everybody could read the latest API design book and absorb the latest concepts for building the best API possible. Some folks learn this way, but in my experience, a significant segment of the community learns from seeing examples of API best practices in action. API definition formats allow us to describe the moving parts of an API request and response, which then provides an example that other API developers can follow when crafting their own APIs. I find that many people simply follow the API examples they are familiar with, and OpenAPI allows us to easily craft and share these examples for them to follow.

If we are going to do APIs at scale, we need to help folks craft RESTful web APIs, as well as hypermedia, GraphQL, and gRPC APIs. We need a way to define our APIs, and articulate this definition to our consumers, as well as to other API providers. This helps me remember not to get hung up on any single use of OpenAPI, or other API specification formats, because first and foremost, these formats help us with API literacy, which has wider implications than any single API implementation, or stop along the API life cycle.

Getting Feedback From Your API Community When Developing APIs

Establishing a feedback loop with your API community is one of the most valuable aspects of doing APIs, opening up your organization to ideas from outside your firewall. When you are designing new APIs or the next generation of your APIs, make sure you are tapping into the feedback loop you have already created within your community, by providing access to the alpha, beta, and prototype versions of your APIs.

The Oxford Dictionaries API is doing this with the latest additions to their stack of word related APIs, providing their community with early access to two new API prototypes that are currently in development:

  • The Oxford English Dictionary (OED) is the definitive authority on the English language containing the meaning, history, and pronunciation of more than 280,000 entries – past and present – from across the English-speaking world. Its historical record of the English language is traced through more than 3.5 million quotations ranging from classic literature and specialist periodicals to film scripts and cookery books.
  • The second prototype offers quick and easy translations and answers to everyday language questions. As part of the Oxford Dictionaries family, it provides practical support to people using a language that is not their mother tongue.

To get access to the new prototypes, all you have to do is fill out a short questionnaire, and they will consider giving you access to the prototype APIs. It is interesting to review the questions they ask developers, which help qualify users, but also ask some questions that could potentially impact the design of the API. The Oxford Dictionaries API team is smart to solicit external feedback from developers before getting too far down the road developing their API and making it available in a production environment.

I do not think all companies, organizations, and government agencies have it in their DNA to design APIs in this way. There are also some concerns when you are doing this in highly competitive environments, but there are also some competitive advantages in doing this regularly, and developing a strong R&D group within your API ecosystem--even if your competitors get a look at things. I'm going to be flagging API providers who approach API development in this way, and start developing a list of best practices to consider when it comes to including your API community in the design and development process, and leveraging its feedback loop.

The Paradox Of API Evangelist

I recently gave a talk to the API group over at Oxford University Press. During the discussion, one of their team members asked me about the paradox of what I was advising as the API Evangelist. He was speaking of the dangers of opening up APIs, and of establishing logging and awareness layers for all the data, content, and algorithms being served up as part of everything we do online. My prepared talk for the conversation was purposefully optimistic, but the discussion also covered the realities of all of this on the ground, at a company like the Oxford University Press--making the conversation a pretty good look at the paradox of API evangelism for me.

I still believe in the potential of APIs, although I'm increasingly troubled by how APIs are used by technology companies. So it is easy for me to slip into the optimistic API Evangelist, over the hellfire and brimstone API Evangelist, when crafting talks for groups like the Oxford University Press. The Oxford Dictionaries API is easy to get excited about, even though I'm not entirely excited about some of the areas I'm informing them about, like voice, bots, and machine learning. It is easy to focus on the good that APIs can do at an organization like Oxford University Press, understanding that they are mission focused, and will be bringing meaningful solutions to the API space, but I would be negligent if I didn't discuss the challenges and dangers of doing APIs.

I want the Oxford Dictionaries API to be successful. I want them to be able to generate revenue so they can keep doing interesting things, and I want them to sensibly do APIs and protect the valuable resources they possess, that they have invested so much time into creating and maintaining. When I recommend that they do APIs, and log every API call being made by applications on behalf of users, I know that they will use this power responsibly, developing awareness around usage, and not exploiting their position--something I see from other API providers. The Oxford University Press is a well established, reputable, and mission-driven organization--I want them to be successful on their API journey.

I do not study and evangelize APIs because I think everyone should do APIs. I evangelize APIs to encourage companies, organizations, institutions, and government agencies who are doing good things to be more successful in what they are already doing, but in a more open, collaborative, and observable way. I evangelize APIs because the cat is out of the bag, and we are already connecting computers, mobile phones, and other devices to the Internet, and it is extremely important that we do it in a secure, yet observable way, that is sustainable for everyone involved--not just to get rich by exploiting 3rd party developers, and the end-users of the applications they build.

I can't make companies do APIs in an ethical way. All I can do is stay up to speed, no...stay ahead of what people are doing, and lead them in the right direction. I am looking to convince, and even trick some of them into being more open and observable with the way they deploy technology, more open and honest with their business model, and with how they think about everyone's privacy and security. Ultimately, I don't want people to do APIs unless it makes sense. I don't think we should be connecting everything to the Internet, however when we do, I want to be here to help companies understand the best practices of doing APIs on the web, and help groups like the Oxford University Press be successful, even though I am perpetually reminded of the paradox of doing my API Evangelist work.

My Oxford Dictionaries Talk About The World Of APIs

This is from a conversation I had with the Oxford Dictionaries API team last week while in Oxford. I led a conversation with 30-40 folks across several teams at the Oxford University Press offices. I tried to paint a relevant and realistic picture of the world of APIs, as it would pertain to their organization. I talked for about an hour, with another hour of discussion with the group, where we discussed some of these areas in more detail.

History of APIs
To help connect the dots of where the world of APIs is going, I wanted to take a brief walk through the history of APIs and make sure everyone was up to speed. The current wave of web APIs began in the early 2000s with SalesForce, Amazon, and eBay leveraging web technologies to deliver commerce related IT services over traditional enterprise approaches. These early API efforts focused on the integration of CRM, sales, and auction or affiliate related commerce activity, following the first dot-com bubble. Then around 2003, a new type of company emerged, employing APIs to connect people online and help them share links, images, and other social content, over the buying and selling of products. Flickr, then Facebook and Twitter emerged as API pioneers, opening up simple web APIs that allowed their communities of developers to integrate, and develop new websites and applications that helped drive the social evolution of the web.

By 2006, API pioneer Amazon had internalized APIs at the commerce giant, requiring internal groups to use them to conduct business at the company, something that would result in a new web service for storing information on the web they called Amazon S3. Shortly after they released Amazon S3 they also launched a second API for deploying and managing servers on the Internet, changing how companies deploy infrastructure for the web--something we now collectively call cloud computing. This is when the web API thing became real for many across the tech sector, as APIs weren't just about commerce or social activities--you could actually deploy global infrastructure using APIs. Cloud computing would transform how we do business on the web, changing how we deliver web applications, but it would also bring IT out of the basement of the enterprise, and onto the web to operate in "the cloud"--setting the stage for the next transformation of computing, occurring via mobile phones.

Web, Mobile, and Beyond
Early APIs focused on the syndication of data and content via the web. By 2009, this was changing how we think about computing, something that was greatly aided by the cloud, which was put in motion by Amazon's early API efforts. In 2007, Apple introduced their new iPhone to the world, and by 2010 application developers were working hard to develop APIs that could deliver valuable resources to mobile applications, providing what was needed for applications being sold through the iTunes marketplace, and consumed via each wave of smartphones. With the introduction of the Android operating system from Google, the world of computing was clearly no longer just on our desktops, or even via our laptops--our digital bits would now be needed anywhere: in our hands, and via a growing number of Internet-connected devices.

APIs had enabled a whole new approach to delivering resources to mobile applications, as well as allowing these applications to interact with the camera, gyroscope, and other physical features of popular smartphones. This way of thinking about computing would quickly spread to other devices that we would eventually strap to our bodies, install in our homes, cars, and across the landscape of our personal and professional lives. APIs helped deliver content and data to websites, but they were also now driving mobile applications, and companies were realizing the potential of connecting sensors, and other devices across a variety of industries, setting the stage for what has become known as the Internet of Things. In 2017, this approach to connecting everyday objects to the Internet is helping the Internet penetrate our personal and professional lives, changing how we think about everything from healthcare to transportation. While not all of this connectivity is proving to benefit everyone involved, and it poses some significant security risks, it is continuing to change how we think about and engage with Internet technology in our lives.

Website + Dev Portal
Along this evolution in computing, companies were realizing that APIs weren't just the latest technological trend--they were the next iteration of the web, making the web something we didn't just consume, but something that could be programmed, and tailored for each user or device connected to it. Similar to how companies struggled with the need to have a website during the years between 1995 and 2000, between 2010 and 2015 many companies were realizing that they needed to have their digital resources available via APIs, accessible via a developer portal, where consumers could find what they needed to put valuable data, content, and algorithms to work, adding a wholesale layer to the web. These central portals often live alongside a company's website, changing how a company does business, and opening them up to a new way of thinking if the right building blocks are present.

Some companies and startups quickly realized that there was more to all of this than just having APIs. If they also included documentation, code, and other resources, as well as a variety of communication channels and support features, some new and exciting possibilities often emerged. Providing APIs on the web via a rich, informative, and helpful portal would eliminate many of the bottlenecks and polarizing elements that traditional IT operations are often known for. When you opened up your digital assets, and made them easily accessible to all stakeholders and potential consumers, new things were possible, shifting how companies do business, and opening up new opportunities that might not have occurred behind corporate firewalls, and the often toxic environments that had developed within the IT departments of previous generations.

Innovation & Serendipity
API pioneers like SalesForce, Amazon, Twitter, Facebook, and others demonstrated that opening up your digital resources via APIs could enable new and exciting things to occur. In 2006, Twitter launched as a simple website where you could sign up, follow people, and "tweet" out messages; six months later they launched the Twitter API and developer portal, and everything else we know as Twitter today was developed by the community. A growing number of technology startups quickly saw that APIs were enabling a new type of innovation, one that allowed 3rd parties to safely and securely use valuable digital resources and develop new web, mobile, and a variety of other integrations and applications. The API approach was fueling innovation that might never have occurred within a company, and has the potential to serve long tail ideas that teams might not have the time or resources to tackle.

The API approach to doing business was allowing a form of digital serendipity to enter the picture, allowing for new and beneficial ways of putting digital resources to work. When information has only IT gatekeepers, the opportunity for power struggles and other damaging events significantly increases. When there is self-service, yet secure access to resources, complete with identification, logging, and reporting, a much healthier balance around accessing digital resources can often be established. Those of us who are too close to digital resource management are often blinded by the immediate needs of day to day operations, and cannot see the possibilities, or even have the room to breathe and dream about what is possible. More open approaches to web APIs allow 3rd parties who are free from the operational burdens to ideate, experiment, and iterate until they find new applications that can bring some positive change to a platform and its users.

Internal & Partners
While web APIs employ the same technology that the web operates on, and often have publicly available developer portals alongside websites, the APIs aren't always publicly available to everyone. Making corporate resources available for use by partners is the number one reason companies publish APIs in 2017, providing self-service, often plug and play access to data, content, and algorithms, as well as access to applications developed by 3rd party developers, and even to much-needed developer talent that is available within API ecosystems. An API-first approach allows business groups to quickly get access to the resources they need for partner engagements, often without having to get the approval of, or wait for, IT groups to weigh in, making business development, and the development of projects, much more efficient, without the friction often experienced in the past.

Amazon's experience with APIs gave us the cloud, but the way they internalized APIs within the company has famously given the rest of us a blueprint for thinking about how API resources are put to work internally. Jeff Bezos, the Amazon CEO, mythically directed all of his employees to use APIs for internal interactions, standardizing how departments shared resources--again, bypassing classic bottlenecks introduced by IT organizations. Many of the tools and applications developed on top of an API allow even non-developers to put resources to work, allowing business groups to do more, with less, bypassing IT, and the need for highly technical talent in many scenarios. APIs are shifting how companies do business on the web, engage with partners, and operate internally between disparate groups, setting the stage for thinking about different approaches to doing business.

Modular & Portable
The process of API design involves taking often large systems and breaking them up into smaller, more reusable chunks. The process of doing this helps us think through how we define, store, share, and put our digital resources to work. Breaking up digital assets into smaller pieces is one of the things that contributes to the innovation, and reuse described previously, allowing individual resources to be put to work, without the dependencies and legacy constraints that are often introduced by their creators and operators. This modularity allows for innovation to occur at the hand of 3rd parties, but also allows for the evolution of individual resources, without having to do major system-wide upgrades, allowing change to occur in smaller, more manageable chunks. This modular way of thinking is what has allowed API pioneers like Amazon and SalesForce to dominate and take the lead, as they are able to move much faster than their competition.

The modular way of thinking that APIs can bring to the table has also quickly spread to all parts of the software stack, changing how we deploy databases, our APIs, as well as the applications that are built on top of them. This evolution of API-driven cloud computing has brought us more containerized approaches to deploying and managing resources, as well as helping us develop more modular and portable versions of our applications. This modular approach to delivering architecture has helped concepts like microservices, DevOps, and continuous integration flourish, allowing change to occur at a more individual component level, something that is enabling us to deploy infrastructure and applications anywhere, making operations more portable. Through the standardization of containers, APIs, schema, and the other building blocks of the space, we are now able to deploy infrastructure wherever we need, in any cloud, and on any device.

Automation & iPaaS
It is commonplace to think about 3rd party developers building web and mobile applications on top of APIs, but it takes more work, imagination, and exploration to understand the potential for automation and integration using a new breed of integration platform as a service (iPaaS) providers. This is another area where APIs are empowering non-developers to put APIs to work, bypassing IT and the need for more specialized technical talent. It is also an area that is proving difficult for many API providers to think about, when it comes to what iPaaS providers like Zapier bring to the table, and how to target complementary APIs, processes, and workflows that would benefit business operations. Integration is often perceived as a resource intensive, and costly IT project, but this new breed of platforms makes integration something anyone can achieve, with projects that can range from a permanent implementation to a solution that is more ephemeral, going away after just a couple of hours, or after a single event occurs.

Most companies encourage developers to make requests for data, content, and algorithmic behavior using their APIs, but the providers who are further along in their API journey don't rely on APIs just being a one-way street, and have begun employing approaches like webhooks to make API engagement a two-way street. Webhooks allow other systems to be called via APIs when specific events and circumstances have occurred. The primary motivation for doing this is to reduce the number of API calls that developers have to make against any single API, but the secondary motivation is often about orchestration and automation using the API resources we depend on. Webhooks allow API providers and consumers to automate common business tasks, scheduling them to occur on a regular basis, or triggering them when a specific situation occurs, making the web much more automated, and working for whoever is defining the orchestration.
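
To show how simple the consumer side of this two-way street can be, here is a minimal sketch in Python (Flask) of a webhook receiver--the event name and payload shape are hypothetical, not any specific provider's:

```python
# A minimal sketch of a webhook receiver, assuming a hypothetical
# provider that POSTs a JSON event to a URL you register with it.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/orders", methods=["POST"])
def handle_order_event():
    event = request.get_json(force=True)
    # The event type and fields are illustrative, not from any
    # specific provider's API.
    if event.get("type") == "order.completed":
        print("Order completed:", event.get("id"))
    # Respond quickly with a 2xx so the provider doesn't retry.
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

Instead of polling the API on a schedule, the provider pushes the event to you the moment it happens--which is exactly what reduces the API call volume described above.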

Messaging & Bots
APIs are driving development across a variety of business verticals, and one of the sectors being changed in some interesting ways is messaging. Enabling users to communicate via a variety of API driven messaging protocols is at the heart of successful platforms like Facebook, Twitter, and Slack. These API providers are leading discussions when it comes to enabling users to collaborate and communicate, while also encouraging developers to get involved using their APIs. An example of the automation described above can be seen in the latest wave of bots to emerge via Facebook Messenger, as well as the Slack messaging platform. Bots are small, automated, API-driven scripts that engage with users via messaging platforms, exposing data, content, and algorithms in a conversational format, and hopefully augmenting human interactions in a meaningful way.

Bots are emerging to help users answer common questions, automate tasks, make payments, receive analysis, and handle many other digital aspects of our professional lives. Slack is leading the charge when it comes to business use cases, but Twitter and Facebook are also fueling the potential of bots when it comes to influencing the news, stock markets, elections, or just providing entertainment by delivering poetry, jokes, or funny pictures and memes. Messaging between humans is not new, and messaging between systems within code is also nothing new, but we are now seeing a convergence of these two worlds fueling human to computer interactions at a scale we haven't experienced before, with (some) interesting outcomes. Like the web and mobile, bots need APIs to deliver the data and content they need to engage with users and emulate human behavior via this new wave of conversational interfaces.

Voice & Conversational
Similar to the messaging bot evolution discussed previously, there is another layer of API driven conversations going on, but this time it is leveraging actual voice technologies, allowing humans to talk with the web and other online, API-driven systems. Voice enablement isn't anything new, but with the increased availability of API resources, and the advancement of voice technology like Siri and Alexa, having a conversation with a machine is becoming more common. The latest breed of voice solutions translates voice recordings into text, sends that text to a variety of API driven systems, and then translates the response back into a familiar voice that responds to humans via their Internet-connected devices. The availability of cloud resources has significantly contributed to this latest explosion in voice-enablement, making it no surprise that Amazon is leading the charge with their Alexa enabled API ecosystem.

Voice enablement is another area that API providers who are further along in their API journey are making their APIs available for, crafting endpoints specifically for use in voice, and studying how end users are putting voice technologies to work at home, in automobiles, and in the workplace. Like iterating API operations for mobile usage, conversational interfaces will require the iteration of API resources, something a modular approach to design will aid considerably. Web APIs have gone a long way toward simplifying some often very complex back-end systems, making them more intuitive and usable by end-users. This is something that will help determine which APIs are usable in a conversational environment, requiring APIs to be able to speak more fluently to humans, but it will be a fine balance, because in order to scale properly they also need to be machine readable.

Artificial Intelligence & Machine Learning
Historically, most APIs have been about sending and receiving data or content, with a handful of APIs providing access to more algorithmically focused resources. While these APIs do have data inputs, they are more about applying some sort of computational solution, with data just being the byproduct. This approach to delivering API resources is seeing a significant resurgence with the evolution of a growing number of services being labeled artificial intelligence (AI) or machine learning (ML). While the hype around these solutions often exceeds the reality, some of these API resources are providing some very valuable computational resources that can be used in everyday tasks. There are numerous startups emerging to cater to this new wave of AI and ML solutions, with technology giants like Amazon, Google, Microsoft, and even IBM investing heavily so that they can compete.

The artificial intelligence and machine learning APIs often seem magical and mysterious as they operate in a black box, but the solutions they provide range from practical offerings like sentiment analysis and categorization of text, to image transformation, and object or facial recognition in photos. This is just a small subset of what is possible with artificial intelligence and machine learning, with many specialized solutions emerging for use in specific industries like agriculture, policing, or healthcare. APIs are being used to deliver artificial intelligence and machine learning solutions by wrapping these algorithms in web technology, making them accessible for use in web, mobile, and device applications. In the world of artificial intelligence and machine learning, there is an often missed opportunity for API providers when it comes to augmenting and complementing artificial intelligence and machine learning APIs with other data and content APIs, allowing for much richer, meaningful, and contextually relevant responses.
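
To show how approachable these black boxes are from the consumer side, here is a hedged sketch in Python of calling a sentiment analysis API--the endpoint, parameters, and response shape are all hypothetical, not any specific vendor's:

```python
# A hedged sketch of calling an ML API over the web--the endpoint,
# payload, and response shape are hypothetical, not any specific
# vendor's API.
import requests

API_KEY = "your-api-key"  # assumption: key-based API management

def sentiment(text: str) -> str:
    response = requests.post(
        "https://api.example.com/v1/sentiment",  # illustrative URL
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("label", "unknown")

print(sentiment("I love how simple this API is."))
```

The algorithm stays in the black box, but the integration looks like any other web API call--which is exactly what makes these resources usable in web, mobile, and device applications.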

Analysis & Awareness
Using APIs can deliver a heightened level of awareness of a company or organization's digital assets, opening them up in a variety of ways. By employing modern approaches to API management, where all API requests and responses are logged, and all applications and their users are identified, an entirely new level of analysis and surveillance is possible regarding how digital resources are being put to use, or possibly not put to use. Companies who are further along in their API journey have come to realize that in addition to having APIs that are valuable for business operations, it is valuable to also craft APIs on top of the API operations themselves, adding an entirely new dimension to how APIs can be put to use across a platform. This approach to API deployment has opened up entirely new business models, taking the innovation explored earlier to new levels, and driving home the opportunity for serendipity to occur across operations.

This newfound awareness (sometimes turning to surveillance) across operations is the number one benefit of doing APIs. Going API-first helps map out digital resources while making them more accessible and reusable, and when reporting is properly applied across the logging that is common to API operations, the analysis opportunities can be just as compelling as the core APIs themselves. Understanding how resources are used, what is of most interest to users, and what types of applications are making the biggest impact rises to the top. Because this layer is API driven as well, the same approach to integration, iteration, and innovation can occur as in other areas of the ecosystem. This layer of API access should be carefully controlled, and may or may not be available to the public--often only internally, or to a select group of partners--making the analysis and awareness much more guarded, so that it protects the privacy and security of end-users, as well as platform operators and developers.

Visualizations & Embeddable
Whether it's directly from core API resources, or the operational layer of APIs, building off the modular and portable approach available with modern API architecture, one of the most valuable applications within an ecosystem is centered around the ability to visualize, and deliver embeddable solutions on the web, mobile, in emails, and in other applications. A dashboard is a common way to look at this, or possibly API-driven infographics, but the savviest API providers are developing a wealth of API embeddables that leverage open source solutions such as D3.js to make everything visual and embeddable. When this approach accompanies a branding strategy, it can extend the value of the platform beyond the domain, extending the reach of the ecosystem. While often centered around the more visual elements of the platform, embeddable tools can also be more functional and interactive, bringing another dimension of benefit to operations.

Embeddable tooling comes in all shapes and sizes, with the best solutions adhering to web standards, and leaving proprietary approaches behind the firewall. The most well-known examples of this in action can be found with Twitter and Facebook share buttons, or with the Youtube video player, which extends the video platform to sites across the web. Embeddables do not have to be developed by the platform alone, and can become a community effort, showcasing simple, copy and pastable solutions crafted by API developers within the ecosystem. The first layer of API-driven embeddables should speak to the value delivered by core APIs, with the second layer taking advantage of the programmable web, and making the experience more interactive for consumers, generating content and data--bringing value to the platform that can contribute to the analysis and awareness layer of API operations discussed previously.

Evangelism & Storytelling
APIs begin with a technological seed, but they need to make an impact at the human level if they are to find success, making the single most important tool in the toolbox storytelling, advocacy, and evangelism around what an API does. If people do not know an API exists, or what is possible with it, none of this matters. API operations often suffer from the same search engine optimization (SEO) challenges that regular website efforts do. For an API to stand out and be found on the web, it takes a great deal of work telling stories about API operations, and the applications that are being built on top of them. This storytelling needs to be included as part of existing marketing strategies, but it will also require a special touch for it to pass the inspection of an often very picky developer community, who tend to be a slightly different beast when it comes to getting their attention, and keeping them engaged.

Storytelling and evangelism have to be genuine. They have to speak to the value an API brings to the table, and reflect the solutions that developers and integrators will be looking to provide. This outreach isn't just a public effort--it should include public consumers, as well as partners, and internal users. Ideally, API outreach has a dedicated team that can craft a message that speaks to the platform, but also respond to support operations, and be intimately aware of what is happening within the ecosystem, establishing and maintaining a deep relationship with the community. If outreach is going to be effective, it has to be relevant to operations, and match the technical and other support challenges that developers are facing--possessing strong context for the community. Finding a voice, and establishing and maintaining a true presence for an API, is the number one challenge I see API platforms struggle with, and is the number one contributor to their demise.

Establishing a Presence
There are thousands of public APIs available on the web, a number that is only growing, so standing out takes a significant amount of work. 75% of the API providers I encounter suffer from the "build it and they will come" disease, and think developers will just find them, and understand the value an API delivers. This is why you see a handful of blog posts on many APIs, a couple months of Tweets, and then radio silence. The successful APIs out there have sustained operations, with robust storytelling, evangelism, and outreach both on and offline. This goes for APIs that aren't 100% public too. If you want to reach partners, or even make internal groups aware across large, and often global organizations, evangelism is essential. Establishing a presence is not a one time thing--it takes months, and years to do properly, and will require constant investment and maintenance.

The most efficient way to establish a presence is by being transparent and observable in everything you do. Adding new features to the road map? Publish them online, blog and tweet about them. Get the community involved. Ask for feedback, follow and retweet your users. Get to know them on the platforms and in the channels where they already exist. Be a regular presence, but not one that is always preaching and selling. Make sure you listen, study, and understand your target audience, and speak to what they'll need. Even when a developer does learn about an API, there is no guarantee they will have an immediately pressing need and put it to use right away. A fine tuned API presence will ensure that potential consumers are aware that an API exists, and are constantly reminded that it exists through blogging, Tweeting, storytelling, outreach, and engagement--when done properly, developers will know where to go when they have a problem to solve.

Repeat, Repeat, Repeat
I have been doing API Evangelist for seven years this summer. Success has been difficult. It has taken a regular drumbeat to stand out in the crowd. It has required some difficult decisions regarding who I partner with, and who I give access to my resources and brand, so that I maintain my desired level of trust amongst my consumers. While I'm not evangelizing a single API, I am helping cultivate awareness of a variety of APIs, as well as the best practices across these API operations--the process is the same for individual API providers. Oftentimes I'm repeating what I have written before, and giving similar talks at conferences, meetups, and presentations like this. However, I am always iterating and evolving based on the feedback I receive, and how my work is applied (or not), something that can be very painful to move forward on some days. The creativity doesn't always come right away, and sometimes I have to just do maintenance work to keep busy, and come back to my storytelling when it feels right.

The most important part of what I'm doing is that I'm consistently out there, telling stories online, and in person. Early on in my presence, it was critical for me to be at public events shaking hands and making in-person connections. After three years of doing a majority of in-person events, I'm able to do more of it online than I do offline, but I still have to be blogging, tweeting, and committing regularly. This is how I've established the presence that I have, and made the connections with people that I have. When someone is ready to do APIs, or ready for the next step in their API operations, they know where to find me, and the resources that I provide. This is the core of any API operation--my world is no different than yours. I guarantee I have way less budget, and time than you, but somehow I've found a way to keep going. I have also had several times where I faced burnout, but I find if I just keep pushing forward, creating, writing, developing, and iterating, I get through it--the solution is always repeat, repeat, and repeat until I find the right path forward.



While this talk was designed just for my conversation with the Oxford University Press team, it is generic enough for any other company to consider, and use to learn about the API space. I worked hard to keep it a pretty positive view of the API landscape, and left most of the realities of this world for the in-person conversation, but they are also something I'll cover in future posts this week. Honestly, when working with organizations like the Oxford University Press, I am left pretty optimistic about this whole API thing--something that is increasingly difficult with some of the ways I am seeing APIs being wielded in 2017.

An Introduction To Github For API Providers

I have had a number of requests from folks lately to write more about Github, and how they can use the social coding platform as part of their API operations. As I work with more companies outside of the startup echo chamber on their API strategies, I am encountering more groups that aren't Github fluent and could use some help getting started. It has also been a while since I've thought deeply about how API providers should be using Github, so this will allow me to craft some fresh content on the subject.

Github As Your Technical Social Network
Think of Github as a more technical version of Facebook, but instead of the social interactions being centered around wall posts, news links, photos, and videos, they are focused on engagement with repositories. A repository is basically a file folder that you can make public or private, and put anything you want into it. While code is the most common thing put into Github repositories, they often contain data files, presentations, and other content, providing a beneficial way to manage many aspects of API operations.

The Github Basics
When putting Github to use as part of your API operations, start small. Get your profile set up, define your organization, and begin using it to manage documentation or other simple areas of your operations--until you get the hang of it. Set aside any preconceived notions about Github being only about code, and focus on the handful of services it offers to enable your API operations.

  • Users - Just like other online services, Github has the notion of a user, where you provide a photo, description, and other relevant details about yourself. Avoid making a user account for your API itself, making sure you show the humans involved in API operations. It does make sense to have a testing, or other generic platform user account, but make sure each member of your API team has their own user profile, providing a snapshot of everyone involved.
  • Organizations - You can use Github organizations to group your API operations under a single umbrella. Each organization has a name, logo, and description, and then you can add specific users as collaborators, and build your team under a single organization. Start with a single organization for your entire API operations, then you can consider additional organizations to further organize your efforts, such as partner programs, or other aspects of internal API operations.
  • Repositories - A repository is just a folder. You can create a repository, clone (check out) a repository using the Github desktop client, manage its content locally, and commit changes back to Github whenever you are ready. Repositories are designed for collaborative, version controlled engagements, allowing many people to work together, while still providing centralized governance and control by the designated gatekeeper for whatever project is being managed via a repository--the most common usage is for managing open source software.
  • Topics - Recently Github added the ability to label your repositories using what they call topics. Topics are used as part of Github discovery, allowing users to search using common topics, as well as search for users, organizations, and repositories by keyword. Github topics provide another way for developers to find interesting APIs using search, browsing, and Github trends.
  • Gists - A Github service for managing code snippets, allowing them to be embedded in other websites and documentation--great for use in blog posts, and communication around API operations.
  • Pages - Use Github Pages for your project websites. It is the quickest way to stand up a web page to host API documentation, code samples, or the entire portal for your API effort.
  • API - Everything on the Github platform is available through the Github API, making all aspects of your API operations available via an API--which is the way it should be (see the sketch below for a taste).
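To give you a taste of that last point, here is a minimal sketch in Python, using the requests library, that pulls the public repositories for an organization via the Github API--the organization name is a placeholder you would swap for your own:

```python
import requests

# List the public repositories for a Github organization.
# "your-org" is a placeholder--swap in your own organization name.
response = requests.get(
    "https://api.github.com/orgs/your-org/repos",
    headers={"Accept": "application/vnd.github.v3+json"},
)
response.raise_for_status()

for repo in response.json():
    print(repo["name"], "-", repo.get("description") or "(no description)")
```

The same API covers users, issues, pages, and gists, so anything you do manually on the platform can eventually be automated as part of your API operations.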

Managing API Operations With Github
There are a handful of ways I encourage API providers to consider using Github as part of their operations. I prefer to use Github for all aspects of API operations, but not every organization is ready for that--I encourage you to focus on these areas when you are just getting going:

  • Developer Portal - You can use Github Pages to host your API developer portal--I recommend taking a look at my minimum viable API portal definition to see an example of this in action.
  • Documentation - Whether as part of the entire portal, or just as a single repository, it is common for API providers to publish API documentation to Github. Using solutions like ReDoc, it is easy to make your API documentation look good, while also easily keeping it up to date.
  • Code Samples w/ Gists - It is easy to manage all samples for an API using Github Gists, allowing them to be embedded in the documentation, and other communication and storytelling conducted as part of platform operations.
  • Software Development Kits (SDK) Repositories - If you are providing complete SDKs for API integrations in a variety of languages, you should be using Github to manage their existence, allowing API consumers to fork and integrate as they need, while also staying in tune with changes.
  • OpenAPI Management - Publish your APIs.json or OpenAPI definition to Github, allowing the YAML or JSON to be versioned, and managed in a collaborative environment where API consumers can fork and integrate it into their own operations.
  • Issues - Use Github issues for managing the conversation around integration and operational issues.
  • Road Map - Also use Github Issues to help aggregate, collaborate, and evolve the road map for API operations, encouraging consumers to be involved (see the sketch after this list).
  • Change Log - When anything on the roadmap is achieved flag it for inclusion in the change log, providing a list of changes to the platform that API consumers can use as a reference.
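To make the road map and change log ideas a little more concrete, here is a minimal sketch in Python that pulls the open issues labeled as road map items from a repository--the owner, repository, and label names are placeholders, but the endpoint and parameters are standard Github API fare:

```python
import requests

# Pull open issues labeled "roadmap" from a repository--a lightweight
# way to publish a road map that API consumers can follow and discuss.
# The owner, repo, and label values here are placeholders.
url = "https://api.github.com/repos/your-org/your-api/issues"
response = requests.get(url, params={"labels": "roadmap", "state": "open"})
response.raise_for_status()

for issue in response.json():
    print(f"#{issue['number']} {issue['title']}")
```

Close an issue and relabel it, and the same call with a "changelog" label becomes your change log feed.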

Github is essential to API operations. There is no requirement for Github users to possess developer skills. Many types of users put Github to use in managing the technical aspects of projects, taking advantage of the network effect, as well as the version control and collaboration introduced by the social platform. It's common for non-technical folks to be intimidated by Github, and developers often encourage this, but in reality, Github is as easy to use as any other social network--it just takes some time to get used to and familiar with.

If you have questions about how to use Github, feel free to reach out. I'm happy to focus on specific uses of Github for API operations in more detail. I have numerous examples of how it can be used, I just need to know where I should be focusing next. Remember, there are no stupid questions. I am an advocate for everyone taking advantage of Github and I fully understand that it can be difficult to understand how it works when you are just getting going. 

Taking A Look At The Stoplight API Spec Editor

I'm keeping an eye on the different approaches API service providers take when it comes to providing API editors within their services and tooling. While I wish there were an open source GUI API editor out there, the closest thing we have comes from these API service providers, and I am trying to track what the best practices are, so that when someone does step up and begin working on an open, embeddable solution, they can learn from my stories about what is working or not working across the space.

One example I think has characteristics that should be emulated is the API Spec Editor from Stoplight. The GUI editor lets you manage all the core elements of an OpenAPI, like the general info, host, paths, and even the shared responses and parameters. They even provide what they call a CRUD builder, where you paste in a JSON schema, and they'll generate the common paths you will need to create, read, update, and delete your resources. Along the way you can also make calls to API endpoints using their interactive interface, helping ensure your API definition is actually in alignment with your API.
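To illustrate what a CRUD builder is doing behind the curtain--this is my own rough sketch, not Stoplight's actual implementation--you can think of it as a function that takes a resource name and a schema reference, and stamps out the common paths:

```python
def crud_paths(resource, schema_ref):
    """Sketch the common create, read, update, and delete paths for a
    resource, loosely in the style of an OpenAPI paths object."""
    collection = f"/{resource}"
    item = f"/{resource}/{{id}}"
    return {
        collection: {
            "get": {"summary": f"List {resource}"},
            "post": {"summary": f"Create a {resource}",
                     "parameters": [{"name": "body", "in": "body",
                                     "schema": {"$ref": schema_ref}}]},
        },
        item: {
            "get": {"summary": f"Retrieve a single {resource}"},
            "put": {"summary": f"Update a {resource}"},
            "delete": {"summary": f"Delete a {resource}"},
        },
    }

print(crud_paths("books", "#/definitions/book"))
```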

The Stoplight API Spec Editor bridges the process of defining the OpenAPI for your operations with actually documenting and engaging with an API through an interactive client interface. I like this approach of coming at API design from multiple directions. Apiary first taught us that API definitions were about more than just documentation, and I think our API editors should keep evolving on this concept, allowing us to engage with any stop along the API life cycle, like we are seeing from API service providers like Restlet.

I'm already keeping an eye on Restlet and APIMATIC's approaches to providing a GUI API design editor within their solutions, and will keep an eye on other providers as I have time. Like other areas of the API sector, I'm hoping I can develop a list of best practices that any service provider can follow when developing their tools and services.

REST, Linked Data, Hypermedia, GraphQL, and gRPC

I'm endlessly fascinated by APIs and enjoy studying their evolution. One of the challenges I come across regularly in helping evangelize APIs is the many different views of what is or isn't an API amongst people who are API literate, as well as helping bring APIs into focus for API newcomers, because there are so many possibilities. Out of the two, I'd say that dealing with API dogma is by far a bigger challenge than explaining APIs to newbies--dogma can be very poisonous to productive conversations, and ends up working against everyone involved, in my opinion.

I'm enjoying reading about the evolution in the API space when it comes to GraphQL and gRPC. There are a number of very interesting implementations, services, and tooling emerging in both these areas. However, I do see similar mistakes being made regarding dogmatic behavior, aggressive marketing tactics, and shaming folks for doing things differently, as I've seen with REST, hypermedia, and linked data efforts. I know folks are passionate about what they are doing, and truly believe their way is right, but I'm concerned you will all suffer from the same deficiencies in adoption I've seen with previous approaches.

I started API Evangelist with the mission of counteracting the aggressive approach of the RESTafarians. I've spent a great deal of time thinking about how I can turn average developers, and even business folks, on to the concept of APIs--not just REST, or just hypermedia, but web APIs in general. Something that I now feel includes GraphQL and gRPC. I've seen many hardworking folks invest a lot into their APIs, only to have them torn apart by API techbros (TM) who think they've done it wrong--not giving a rat's ass about the need to actually help someone understand the pros and cons of each approach.

I'm confident that GraphQL will find its place in the API toolbox, and enjoy significant adoption when it comes to data-intensive API implementations. However, I'd say 75% of the posts I read are pitting GraphQL against REST, stating it is a better solution. Period. No mention of its limitations, or use cases where it might not be a good idea. Leaving us to only find out about these from the GraphQL haters--playing out the exact same production we've seen over the last five years with REST vs. hypermedia. Hypermedia is finding its place in some very useful API implementations like FoxyCart, and AWS API Gateway (to name just a few), but its growth has definitely suffered from this type of storytelling, and I fear that GraphQL will face a similar fate.

This problem is not a technical challenge. It is a storytelling and communication challenge, bundled with some very narrow incentive models fueled by a male-dominated startup culture, where folks really, really like being right and making others feel bad for not being right. Stop it. You aren't helping your cause. Even if you do get all your techbros thinking like you do, your tactics will fail in the mainstream business world, and you will only give more ammo to your haters, and further confuse your would-be consumers, adopters, and practitioners. You will have a lot more success if you are empathetic towards your readers, and produce content that educates and empowers, rather than shames and tears down.

I'm writing this because I want my readers to understand the benefits of GraphQL, and I don't want gRPC evangelists to make the same mistakes. It has taken waaaay too long for linked data efforts to recover, and before you say it isn't a thing, it has made a significant comeback in SEO circles because of Google's adoption of JSON-LD, and a handful of SEO evangelists spreading the gospel in a friendly and accessible way--not because of linked data people (they tend to be dicks in my experience). As I've said before, we should be investing in a robust API toolbox, and we should be helping people understand the benefits of different approaches, and learn about the successful implementations. Please learn from others' mistakes in the sector, and help us see meaningful growth across all viable approaches to doing APIs--thanks.

Using Google Sheet Templates For Defining API Tests

The Runscope team recently published a post on a pretty cool approach to using Google Sheets for running API tests with multiple variable sets, which I think is valuable at a couple of levels. They provide a template Google Sheet for anyone to follow, where you can plug in your variables, as well as your Runscope API Key, allowing you to define the dimensions of the tests you wish to push to Runscope via their own API.

The first thing that grabs me about this approach is how Runscope is allowing their customers to define and expand the dimensions of how they test their APIs using Runscope, in a way that will speak to a wider audience, beyond just the usual API developer crowd. Doing this in a spreadsheet allows Runscope customers to customize their API tests for exactly the scenarios they need, without Runscope having to customize and respond to each individual customer's needs--providing a nice balance.

The second thing that interests me about their approach is the usage of a Google Sheet as a template for making API calls, whether you are testing your APIs, or tackling any other scenario an API enables. This type of templating of API calls opens up the API client to a much wider audience, making integration copy-and-pastable, shareable, collaborative, and something anyone can reverse engineer to learn about the surface area of an API--in this scenario, it just happens to be the surface area of Runscope's API testing API.
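To make that templating idea concrete, here is a rough sketch in Python of the pattern at work: each row of the sheet (exported here as CSV) is one set of variables, pushed to a Runscope-style trigger URL. The trigger URL format is illustrative--check your own bucket's settings for the real one:

```python
import csv
import requests

# Each row of the exported sheet is one set of test variables.
# The trigger URL below is illustrative--grab the real one from
# your Runscope bucket's settings.
TRIGGER_URL = "https://api.runscope.com/radar/your-trigger-id/trigger"

with open("test-variables.csv") as handle:
    for row in csv.DictReader(handle):
        # Each variable set becomes the initial variables for a test run.
        result = requests.post(TRIGGER_URL, params=row)
        print(row, "->", result.status_code)
```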

Runscope's approach is in alignment with my previous post about sharing data validation examples. A set of assertions could be defined within a spreadsheet, and any stakeholder could use the spreadsheet to execute them and make sure the assertions are met. This would have huge implications for the average business user, helping make sure API contracts are meeting business objectives. I'm considering using this approach to empower cities, counties, and states to test and validate human services API implementations as part of my Open Referral work.

I told John Sheehan, the CEO of Runscope, that their approach was pretty creative, and he said that "Google sheets scripts are underrated" and that Google Sheets is the "API client for the everyperson". I agree. I'd like to see more spreadsheet templates like this used across the API life cycle when it comes to design, deployment, management, testing, monitoring, and every other area of API operations. I'd also like to see more spreadsheet templates available for making calls to other common APIs, making APIs accessible to a much wider audience--folks who are familiar with spreadsheets, and more likely to be closer to the actual problems that API solutions are designed to solve.

Complementary APIs For The Oxford Dictionaries API

Many API providers I meet have the "build it and they will come" mentality, thinking that if they build an API, developers will come and use it. It does happen, but many APIs only have so many direct uses, and will have a limited number of resulting implementations. This is one of the reasons I recommend companies do APIs in the first place: to get beyond the obvious and direct implementations, and incentivize entirely new applications that a provider may not have considered.

Developing innovative applications an API provider may not have considered is the primary focus of companies I talk to, but only a handful have begun thinking about the other APIs out there that might complement an API. An example of this is with my friends over at the Oxford Dictionaries API, where building applications with built-in dictionaries is pretty obvious. However, the complementary API partnering and usage might not be as obvious, something I'm encouraging their team to think more about.

What are some examples of complementary APIs for the Oxford Dictionaries API?

  • API Design - This example is more of a meta API than a complementary API, but I'd like to see more API design tooling begin to use dictionaries as part of their GUI and IDE interfaces. Providing more structure to the way we design our APIs helps ensure that the design of leading APIs is more intuitive and coherent--achieving a less technical interface, and something that speaks to humans.
  • Machine Learning - I'm using a number of machine learning APIs to accomplish a variety of business tasks from object recognition in images, to text and language pattern recognition in content I produce or curate. There are a number of opportunities to extend the responses I get back from these APIs using a dictionary, helping increase the reach of the machine learning services I employ, and making them more effective and meaningful in my business operations.
  • Search - I use the Algolia API to build a variety of search indexes for the API space, helping me curate and recall a variety of information that I track as part of my API monitoring. I'm considering building an augmented synonym layer that helps me search for terms, then expand those searches based upon suggestions from the Oxford Dictionaries API (see the sketch after this list). I'm looking to augment and enrich Algolia's search capabilities with the suggestions from the dictionary API, expanding beyond any single keyword or phrase I identify as relevant to any industry or topic being served by the API sector.
  • Bots - If you are going to build convincing bots using the Slack or Facebook APIs you will need a robust vocabulary to work with. You won't just need a single downloaded and stored dictionary, you will need one like the Oxford Dictionaries API that evolves and changes with the world around us to enrich your bot's presence. 
  • Voice - Similar to bots, if you are going to deliver a robust voice interface via Alexa, or other voice enabled platform, you will need a rich dictionary to work from. Similar to how I'm expanding and enriching the other suggestions listed above, a dictionary layer will expand on how you will be able to interpret the text that is derived from each voice interface engagement.
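To ground the search example above, here is a rough sketch in Python of what a synonym expansion layer might look like. The endpoint, the app_id/app_key headers, and the response parsing reflect my reading of the Oxford Dictionaries API documentation, and the credentials are placeholders--verify the details against their docs before using this:

```python
import requests

# Expand a search term with synonyms from the Oxford Dictionaries API,
# before handing the expanded list to a search index like Algolia.
# The URL, headers, and response parsing reflect my reading of their
# docs--verify before use. Credentials are placeholders.
APP_ID = "your-app-id"
APP_KEY = "your-app-key"

def expand_term(word):
    url = ("https://od-api.oxforddictionaries.com"
           f"/api/v1/entries/en/{word}/synonyms")
    response = requests.get(url, headers={"app_id": APP_ID, "app_key": APP_KEY})
    if response.status_code != 200:
        return [word]  # fall back to the original term
    terms = {word}
    for result in response.json().get("results", []):
        for entry in result.get("lexicalEntries", []):
            for lex_entry in entry.get("entries", []):
                for sense in lex_entry.get("senses", []):
                    for synonym in sense.get("synonyms", []):
                        terms.add(synonym["text"])
    return sorted(terms)

print(expand_term("monitor"))
```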

These are just a couple of examples of how one API could provide complementary features to other APIs out there. It is important for API providers to be reaching out to other API providers and their communities when evangelizing their own APIs. Sure, you should be incentivizing developers to build applications on top of your APIs, but your definition of "application" should include other APIs that your resources complement, enriching the solutions being developed on top of their platforms.

For a space where integration and interoperability appear to be priority number one, the API sector is notoriously closed, and many API providers I meet have their blinders on. As an API provider, you should always be studying the rest of the API sector, looking for good examples to follow, as well as bad examples to avoid--similar to what I do as the API Evangelist. Along this journey you should also be looking for APIs that are complementary to your platform, and vice versa. Think of SendGrid and Twilio as an example--email, SMS, and voice all go together very well, and their communities significantly overlap, providing the makings for a ripe partnership.

What are a couple of existing APIs that your APIs complement? This is your homework assignment for the week! ;-)

An Opportunity To Emulate Slack Buttons

Slack released their Slack Buttons last year, to help, as they state, "reduce the number of small yet high-frequency tasks that quietly devour a user’s time." I know folks are obsessed with voice, bot, and other conversational interfaces, but I agree with Slack that there is a huge opportunity to help users execute common API-driven functions with a single push of a button. It is something I blog about regularly, helping folks realize the opportunity in the development of API-driven, embeddable buttons that go beyond what Slack is doing, and would run anywhere on the web, in the browser, or even on mobile and other Internet-connected devices.

Zapier has taken a stab at this with Push by Zapier, and they have the inventory of APIs to support it. However, what I am thinking about should be even easier, embeddable anywhere, and leverage the web technology already in use (forms, inputs, GET, POST, etc.). If in a browser, it should work like bookmarklets, and if it exists within the pages of a website or application, it should work as some simple copy/paste HTML, with minimal JS when necessary, to avoid blockage by ad blockers and other privacy protection tooling.

I understand that it is easy to follow the pack when it comes to the latest technology trends, and I'm sure voice and bots will gain some mindshare when it comes to conversational interfaces. However, sometimes we just need the easy button for those high-frequency tasks that quietly devour our time, as Slack puts it. As I see JSON-LD embedded into more web pages, further pleasing the Google search algorithm, there will also be more semantic opportunities for browser buttons to engage with the data and content within web pages in a more meaningful, and context aware way.

API-driven buttons, and similar embeddables and browser tooling, are such an interesting opportunity that I would tackle it myself, but I'm resistant to anything that might take me away from my work as the API Evangelist, which is why I'm putting it out here publicly. I will put more time into some single use button examples that might start the conversation for some, but I recommend taking a look at what Slack and Zapier have done to get started. As the pace of technology continues its blind march forward at the request of VCs, I'm confident that there is huge opportunity for simple solutions like buttons to make a significant impact--let me know if you'd like to know more.

Learning To Use Our Words Better When Defining Our APIs

I am playing around with the OpenAPI for the Oxford Dictionaries API, and I'm struck by the importance of not just dictionaries like the Oxford Dictionaries, but also the importance of OpenAPI, and of API providers defining their APIs like the Oxford folks have. While we aren't as far down the road as we are with the English dictionary, we are beginning to make progress when it comes to defining the paths, parameters, and other characteristics of our APIs using OpenAPI--learning to speak and communicate in the digital world using APIs.

We use words to craft titles, paragraphs, outlines, and the other ways that we communicate in our personal and professional lives. We also use words to craft the titles, paragraphs, outlines, collections, and other ways our systems communicate in our personal and professional lives using the OpenAPI specification. In both these forms of communication we are always trying to find just the right words, or series and orders of words, to get across exactly the meaning we are looking for--we just have centuries of practice when it comes to writing and speaking, and only a decade or so of defining our digital resources using APIs.

Eventually, I'd like to see entire dictionaries of JSON Schema, ALPS, or other machine readable specification, available by industry, and topic. The way we craft our API definitions and design our APIs often feels like we have barely learned to speak, let alone read or write. I'd like to see more reuse of common dictionaries already in use by leading API providers, and I'd like to see us get more thoughtful in how we express ourselves via our API definitions. The most successful APIs I find out there don't just provide a machine readable interface, they provide an intuitive interface that makes sense to humans, while also being machine readable for use in other systems.

It feels to me like we should be integrating the Oxford Dictionaries API into our API design tooling, letting us suggest, autocomplete, and discover better ways to articulate the meaning behind our APIs. API design editors could use the Oxford Dictionaries API to help developers attach more precise meaning to the names of paths, parameters, and other aspects of defining our APIs, much like word processors have done for the last couple of decades. Most APIs I come across do not have any sort of coherent naming, ordering, or structure, and the few that have published an OpenAPI or other machine readable format often feel like cave writing, lacking any coherent structure, purpose, or meaning--we have a long, long way to go before our systems learn to communicate even at a 1st grade level.
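As a toy sketch of what that design-time assist might look like--the dictionary lookup here is a stand-in for a call to something like the Oxford Dictionaries API--an editor could flag path segments that aren't recognizable words:

```python
# A toy sketch of a design-time check that flags API path segments
# which are not recognizable dictionary words. lookup() is a stand-in
# for a real dictionary API call.
KNOWN_WORDS = {"users", "orders", "invoices", "search"}

def lookup(word):
    # Stand-in for a dictionary API call, such as Oxford Dictionaries.
    return word.lower() in KNOWN_WORDS

def review_path(path):
    for segment in path.strip("/").split("/"):
        if segment.startswith("{"):
            continue  # skip path parameters like {id}
        if not lookup(segment):
            print(f"'{segment}' in {path} may not be a meaningful word")

review_path("/usrs/{id}/ordrs")  # flags "usrs" and "ordrs"
```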

Helping Your Customers Operate Throughout The API Life Cycle

When I started API Evangelist back in 2010, the only stop along the API life cycle that service providers were talking about was API management. In 2017, there are numerous stops along the API life cycle, from design, to testing, all the way to deprecation. The leading API service providers are expanding the number of stops they service, and the smart ones are making sure that if they only service one or two stops, they do so while supporting API definitions like OpenAPI, ensuring their customers are able to seamlessly weave multiple service providers together to address their full life cycle of needs.

I've been working with my partner Restlet to advise them on expanding their platform to be what I consider an API life cycle provider. When I was first introduced to Restlet, they were the original open source, enterprise grade API deployment framework. Then Restlet became a cloud API deployment and management provider, and with their acquisition of DHC they also became an API client and testing provider. Now with their latest update, they have worked hard to help their developer and business customers service almost every stop along a modern API life cycle, from design to deprecation.

While Restlet is developing tooling to help companies define what the API life cycle means to them, the heartbeat of what Restlet delivers centers around API definitions like OpenAPI and RAML. API definitions provide the framework when you are designing, deploying, documenting, managing, and testing your APIs using Restlet. They also provide the ability for you to get your API definitions in and out of the platform, and load them into other API services, allowing API operators to get what they need done. In my opinion, this makes API definitions just as important as any other service or tooling you offer along the API life cycle.

Serving a single stop, or a handful of stops, along the API life cycle can be today's version of vendor lock-in. If your customers cannot easily load their API definitions in and out of your system, you are locking them in, and while they may stay with you for a while, eventually they will need additional services, the extra work it takes to keep in sync with your platform will increase, and eventually it won't be worth staying. I'm a big fan of companies doing one thing and doing it well, servicing single stops along the API life cycle, but after watching companies come and go for the last seven years, the ones that don't support API definitions won't be around too long.

All API Startups Should Be More Like Glitch

I was playing around with, and better understanding, the new collaborative developer community that is Glitch, and I saw they had published a blog post about how they won't screw up Glitch. The topic was in alignment with another post I was working on regarding what I'd like to see from API startups, but I think Anil articulates it better than I ever could, and I think folks are going to respect it a lot more coming from a seasoned veteran like him, over an opinionated evangelist like me.

In his post, Anil shares five key points I think every startup should be thinking about from day one:

  • No lock-in. We use totally standard infrastructure for Glitch, including regular old Node.js, and normal JavaScript for your code. You can export all of your code to GitHub with a click, or download a zip file of your code instantly at any time. And it’s gonna stay that way. The key thing that we think will keep you using Glitch is your deep emotional connection to your fellow members of the community. And that’s not lock-in, that’s love! Aww.
  • We’re not gonna take features away from you and then start charging for them. This is one of those tricky things that a lot of companies do when they start building a business model for their product — they ask, “what would people pay for?” And then they realize… oh crap, the stuff people want to pay for is already offered for free. We’ve thought about this pretty carefully so we’ll be able to support our current features going forward. (It doesn’t cost much for us to run your Glitch app, and that cost is going down each month. No biggie.)
  • When we do start charging for stuff, we’ll check with you first. We imagine we’ll add some paid features on top of what Glitch has now (maybe domain names? Everybody loves mapping domain names!) and when we do, we’ll let you know they’re coming, plus get your feedback on what you think is fair and reasonable pricing. Because we want you to be happy to pay for these valuable features!
  • We won’t let a bunch of jerks take over the community. Ugh, this one is so annoying. Usually, when a company is trying to grow a community, they’ll let just about anybody in because they’re desperate to show growth, and that inevitably means opening the door to some small number of jerks, who then ruin the whole site for everybody. Instead of doing that foolish thing, we’re going to grow Glitch steadily and deliberately, with tons of room for new folks, but a lot of thought put into preventing abuse and harassment. I can’t guarantee we’ll get it perfect, but honestly the thought of working every day to build something that’s mostly used by jerks would be awful, so we’re not gonna do it. And honestly, Glitch is growing pretty quickly because it’s friendly, so hooray for nice people.
  • We want your fun and weird and “not serious” stuff on Glitch, too. While we’re ecstatic to feature Serious Tools from incredible companies like Slack on Glitch, we think your artsy or silly or deeply personal projects are vital to the community, too. The same people who spend their day building a complex API integration on top of Glitch’s tools will come home and collaborate on a generative poetry project at night, and that’s exactly what we’re designing for. So don’t feel embarrassed to show all your many facets here; that’s what our own team does, too.

There are other things I'd like to see startups focus on, like privacy and security, but Anil really gets at the heart of much of the illness we see regarding API startups. For me, it really comes down to communication and honesty about the business model, which Anil talks about in a very approachable way. I don't expect companies who are doing startups to do everything that us developers want, but I do want them to be open, honest, and communicative with us about what is going on--that is all.

I fully grasp that startups are in the business of making money, and often have different motivations than many (some) of us developers who are consuming their API focused resources. I do not expect these things to always be in perfect balance, but I do expect startups to be honest with developers from day one, and not bullshit us about their business model, changes, and the long term road map. I appreciate that Anil and the Glitch team have started things off on this foot, hopefully providing the rest of us with a model to follow when it comes to not screwing over your developer community, and building more trust when it comes to depending on APIs across the sector.

I am looking forward to learning more about Glitch, and the potential for the community.

I Think The Parse Twitter Page Sums It Up Pretty Well

Building a business is hard. Building a business that depends on other businesses is hard. We would like it if all of our vendors stuck around forever, but this is not the reality of doing business in today's climate. My stance on this situation is that nothing lasts forever, but startups and the enterprise could be more honest about the business of startups, which is seriously beginning to impact the trust we all have in the platforms, tools, and APIs we depend on for our businesses.

I was working my way through some legacy tweets, and I came across Parse's Twitter home page, which I think sums up the promise being made by each wave of startups, and the end results of these promises--although I have to say that Parse actually handled it pretty well, compared with other startups that I have seen in action.

Building apps is not easy. However, we need solutions that will get us all the way there. I actually think that in the end Parse handled it pretty well, better than StackMob did when Paypal bought them. In the end, you could take the open source version of Parse and install it, and they communicated the deprecation pretty well, giving folks quite a bit of time to take care of business. However, this is the way it should be from day one. There should be APIs, and open source solutions available, to ease operation and migration--as well as communication about what the future will hold.

If API focused startups don't start being more honest about their true business strategy, and the realities of the future, from day one, fewer developers and users will buy what is being sold. Sure, there will always be new waves of young developers who will still believe, but the more experienced folks, who are often in charge of making purchasing decisions, will become increasingly skeptical about baking APIs into their operations--screwing this up for the rest of us trying to actually make a living, and not just get rich.

Being Able To See An API Request In The Browser Is Important

There are a number of things at work making this whole web API thing actually work. One that came up a couple weeks ago, while I was at Google discussing APIs and listening to Dan Ciruli (@danciruli), was the importance of being able to see an API request in the browser. It is something I think we often overlook when it comes to understanding why web APIs have reached such a wide audience.

I remember when I first realized I could change the URL in my Delicious account and get an XML listing of my bookmarks--this is when the API light first went on in my head. The web wasn't just for humans; it could be structured for use in other websites. Seeing the XML in the browser presented my links in a machine readable way, triggering me to think about what else I could do with them, and which other systems I could put them to work in.

Being able to see the results of an API call in the browser helps stimulate the imagination when it comes to what is possible. This is similar to why API client tooling like Postman and Restlet Client are popular with developers--they help us see the possibilities. While not all APIs are simple enough to allow for viewing in the browser, when at all possible, we should keep things this easy, because you never know when it will make a mark, and help folks better understand what is going on under the hood, allowing them to put our APIs to work in ways we never expected.
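To show just how little it takes, here is a minimal sketch of an API endpoint, using Python and Flask, whose response you can view by simply entering the URL in a browser--no client tooling required:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# A simple GET endpoint returning JSON--enter
# http://localhost:5000/bookmarks in a browser to see the
# machine-readable response directly.
@app.route("/bookmarks")
def bookmarks():
    return jsonify({"bookmarks": [
        {"title": "API Evangelist", "url": "http://apievangelist.com"},
    ]})

if __name__ == "__main__":
    app.run()
```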

API Definition: API Transformer

This is an article from the current edition of the API Evangelist industry guide to API definitions. The guide is designed to be a summary of the world of API definitions, providing the reader with a recent summary of the variety of specifications that are defining the technology behind almost every part of our digital world.

OpenAPI Spec is currently the most used API definition format out there when it comes to the number of implementations and tooling, with API Blueprint, Postman Collections, and other formats trailing behind. It can make sense to support a single API definition when it comes to an individual platform's operations, but when it comes to interoperability with other systems, it is important to be multi-lingual and support several of the top machine-readable formats out there today.

In my monitoring of the API sector, one service provider has stood out when it comes to being a truly multi-lingual API definition service provider--the SDK generation provider, APIMATIC. The company made API definitions the heart of its operations, generating what they call development experience (DX) kits from a central API definition uploaded by users--supporting OpenAPI Spec, API Blueprint, Postman Collections, and other top formats. The approach has allowed the company to quickly expand into new areas like documentation, testing, and continuous integration, as well as opening up their API definition translation as a separate service called API Transformer.

API Transformer allows anyone to input an API Blueprint, Swagger, WADL, WSDL, Google Discovery, RAML 0.8 - 1.0, I/O Docs - Mashery, HAR 1.2, Postman Collection 1.0 - 2.0, Mashape, or APIMATIC Format API definition, and then translate and export it as API Blueprint, Swagger 1.0 - 1.2, Swagger 2.0 JSON, Swagger 2.0 YAML, WADL - W3C 2009, RAML 0.8 - 1.0, Postman Collection 1.0 - 2.0, or their own APIMATIC Format. You can execute API definition translations through their interface, or seamlessly integrate with the API Transformer API definition conversion API.
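As a rough sketch of what that integration might look like--the endpoint and parameter names here are illustrative assumptions, so consult the API Transformer documentation for the real details--converting an API Blueprint into a Swagger 2.0 definition could be a single HTTP call:

```python
import requests

# Convert an API Blueprint into a Swagger 2.0 definition via the
# API Transformer conversion API. The endpoint and parameter names
# are illustrative assumptions--check the API Transformer docs.
with open("my-api.apib") as handle:
    source = handle.read()

response = requests.post(
    "https://apitransformer.com/api/transform",
    params={"output": "swagger20"},
    data=source,
)
print(response.text)  # the converted definition
```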

There is no reason that API providers and API service providers shouldn't be multi-lingual. It is fine to adopt a single API definition as part of your own API operations, but when it comes to working with external groups, there is no excuse for not being able to work with any of the top API definition formats. The translation of API definitions will increasingly be essential to doing business throughout the API life cycle, requiring each company to have an API definition translation engine baked into their continuous integration workflow, transforming how they do business and build software.

If you have a product, service, or story you think should be in the API Evangelist industry guide to API definitions, you can email me, or you can submit a Github issue for my API definition research, and I will consider adding your suggestion to the road map.

I Predict A Future Flooded With Google Prediction Galleries

I was roaming through Google's Prediction API, and I think their prediction gallery provides a look at a shift occurring right now in how we deliver APIs. I predict that machine learning galleries and marketplaces will become all the rage, operating independently like Algorithmia, or as part of a specific API like the Google prediction gallery.

Ok, let me put it out there that I hate the use of the word prediction. If I was naming the service, I would have called it "execution", or more precisely a "machine training (MT) model execution API". I know I'll never get my way, but I have to put it out there how bullshit many of the terms we use in the space are--ok, back to the API blah blah blah, as my daughter would say.

A common element of API portals for the last decade has been an application gallery, showcasing the apps that are developed on top of an API. Now, when an API offers up machine learning (ML) model execution capabilities, either generally or for very specialized model execution (i.e., genomics, financial), there should also be a gallery of available models that have been delivered as part of platform operations--just like we have done with APIs and applications.

I see this stuff as the evolution of the algorithmic layer of API operations. Most APIs are delivering data or content, but there is a third layer that is about wrapping algorithms, and in the current machine learning and artificial intelligence craze, this approach to delivering algorithmic APIs will continue to be popular. It's not an ML revolution; it is simply an evolution in how we are delivering API resources, leveraging ML models as the core wrapper (Google was smart to open source TensorFlow).

Developing ML models, and making them deployable via AWS, Google, and Azure, as well as marketplaces like Algorithmia, will become a common approach for the API providers who are further along in their API journey, and have their resources well-defined, modular, and easily deployed in a retail or wholesale manner. While 90% of the ML implementations we will see in 2017 will be smoke and mirrors, 10% will deliver some interesting value that can be used across a growing number of industries--keeping machine training (MT), and machine training execution APIs like Google Prediction, something I will be paying attention to. ;-)

API Definition: U.S. Data Federation

This is an article from the current edition of the API Evangelist industry guide to API definitions. The guide is designed to be a summary of the world of API definitions, providing the reader with a recent summary of the variety of specifications that are defining the technology behind almost every part of our digital world.

The U.S. Data Federation is a federal government effort to facilitate data interoperability and harmonization across federal, state, and local government agencies by highlighting common data formats, API specifications, and metadata vocabularies. The project focuses on coordinating interoperability across government agencies by showcasing and supporting use cases that demonstrate unified and coherent data architectures across disparate agencies, institutions, and organizations.

The project is designed to shine a light on “emerging data standards and API initiatives across all levels of government, convey the level of maturity for each effort, and facilitate greater participation by government agencies”--definitely in alignment with the goal of this guide. There are currently seven projects profiled as part of the U.S. Data Federation, including the Building & Land Development Specification, National Information Exchange Model, Open Referral, Open311, Project Open Data, and the Voting Information Project.

By providing a single location for agencies to find common schema documentation tools, schema validation tools, and automated data aggregation and normalization capabilities, the project is hoping to incentivize and stimulate reusability and interoperability across public data and API implementations. Government agencies of all shapes and sizes can use the common blueprints available in the U.S. Data Federation to reduce costs and speed up the implementation of projects, while also opening them up for augmenting and extending using APIs and common schema.

It is unclear what resources the U.S. Data Federation will have available in the current administration, but it looks like the project is just getting going, and intends to add more specifications as they are identified. The model reflects an approach that should be federated and evangelized at all levels of government, and also provides a blueprint that could be applied in other sectors like healthcare, education, and beyond. Aggregating common data formats, API specifications, metadata vocabularies, and authentication scopes will prove to be critical to the success of almost any industry doing business on the web in 2017.

 If you have a product, service, or story you think should be in the API Evangelist industry guide to API design you can email me , or you can submit a Github issue for my API definition research, and I will consider adding your suggestion to the road map.

A Looser More Evolvable API Contract With Hypermedia

I wrote about how gRPC API implementations deliver a tighter API contract, but I wanted to also explore more thoughts from that same conversation, about how hypermedia APIs can help deliver a more evolvable API contract. The conversation where these thoughts were born was focused on the differences between REST and gRPC, in which hypermedia and GraphQL also came up. Leaving me thinking about how our API design and deployment decisions can impact the API contract we are putting forth to our consumers.

In contrast to gRPC, going with a hypermedia design for your API means your client relationship can change and evolve, providing an environment for things to flex and change. Some APIs, especially internal APIs, and those for trusted partners, might be better suited for gRPC performance, but when you need to manage volatility and change across a distributed client base, hypermedia might be a better approach. I'm not advocating one over the other; I am just trying to understand the different types of API contracts brought to the table with each approach, so I can better articulate them in my storytelling.

I'd say that the hypermedia and gRPC approaches give API providers different types of control over the API clients that are consuming resources. gRPC enables dictating a high performance, tighter coupling by generating clients, while hypermedia allows for shifts in what resources are available, what schema are being applied, and changes that might occur with each version, potentially without changes to the client. The API contract can evolve (within logical boundaries), without autogeneration of clients, and without interference at this level.
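Here is a minimal sketch of that difference from the client's perspective, using a HAL-style response with illustrative link relations--instead of hardcoding URLs, the client discovers them from each response, so the provider can move resources without breaking it:

```python
import requests

# Follow the links the API provides instead of hardcoding URLs.
# The "_links" structure here is HAL-style and illustrative--the
# provider can relocate resources without breaking this client.
entry = requests.get("https://api.example.com/").json()

orders_url = entry["_links"]["orders"]["href"]  # discovered, not hardcoded
orders = requests.get(orders_url).json()

for order in orders.get("_embedded", {}).get("orders", []):
    # Follow each order's own self link for the full details.
    detail = requests.get(order["_links"]["self"]["href"]).json()
    print(detail)
```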

As I learn about this stuff and consider the API contract implications, I feel like hypermedia helps API providers navigate change, evolving and shifting to deliver resources to a more unknown, and distributed, client base. gRPC seems like it provides a better contract for use in your known, trusted, and higher performance environments. Next, I will be diving into what API discovery looks like in a gRPC world, and coming back to this comparison with hypermedia, and delivering a contract. I have a feeling API discovery is another area where hypermedia will make sense, further helping API providers and API consumers conduct business in a fast changing environment.

Uber Is Painting A Bigger Picture Of Their Drivers With Driver API Partnerships

I was taking a look at the new Uber Driver API, trying to understand the possibilities with the API, and some of the motivations behind Uber's launch of it. According to Uber, "our Driver API lets you build services and solutions that make the driver experience more productive and rewarding. With the driver's permission, you can use trip data, earnings, ratings and more to shape the future of the on-demand economy." Providing an interesting opportunity for partners to step up and build useful apps that Uber drivers can leverage in their worlds, helping them be more successful in their work.

The first dimension of the Uber Driver API I find interesting is that it is not an API that is about their core business--ridesharing. It is focused on making their drivers more successful, and developing tools and integrations that make their lives easier. I could see something like this evolving to other platforms like AirBnB, and other sharing or gig economy platforms, helping operators make sense of their worlds, while also strengthening partnerships, and hopefully the relationship with operators along the way.

The second dimension I find interesting is thinking about why Uber is publishing their Driver API. At first glance, it is all about making their drivers happy--something Uber needs a lot of help with currently. However, once you think a little more about it, you can start to see the bigger picture that Uber is looking to paint of their drivers. The company's leverage over drivers has proven to only go so far, and they need to understand more about the lives of their drivers--if they can invite corporate partners to do their taxes, and potentially other key tasks in their lives, a greater picture of their drivers' lives will come into focus.

If an Uber driver is also a Lyft driver, the Uber Driver API gives Uber more of a look into their competitor's accounting. They also get a better understanding of what else their drivers have going on, and other ways they can increase their leverage over them. It's fascinating to think about. Right now, all the examples Uber provides are tax related, but I'm sure other types of partners will emerge--it is why you do APIs. Open up an API, and people will line up to innovate for you, helping you understand the dimensions of your drivers' lives. It is a fascinating look at why you would do APIs, and how there is almost always more than one dimension to why a company will deploy an API.

I am sure there are other reasons I am not considering here. Maybe now that I've planted the seed in my brain, I'll come up with some other incentives for why Uber operates their APIs as they do--it is fascinating stuff to unpack.

Moving APIs Out Of The Partner Realm And Making Them More Public

It is common for API providers to be really private with their APIs, and we often hear about providers restricting access as time goes by. So, when API providers loosen up restrictions on their APIs, inviting wider use by developers and making them public, I think it is worth taking notice.

A recent example of this in the wild comes from the API poster child Twitter, with their Periscope video service. Twitter has announced that they are slowly opening up access to the Periscope video API, something that has been available only to a handful of trusted partners, and via the mobile application--there was no way to upload a video without using your mobile device. Twitter is still "limiting access to fewer strategic partners for a period", but at least you can apply to see if your interests overlap with Twitter's interests. It also sounds like Twitter will continue to widen access as time goes on, and as it makes sense to their Periscope strategy.

While I wish we lived in a world where API developers were all well behaved, and API providers could open up services to the public from day one, this isn't the reality we find on the web today. Twitter's cautious approach to rolling out the Periscope API should provide other API providers with an example of how you can do this sensibly, only letting in the API consumers that are in alignment with your goals. Slowly opening up, making sure the service is stable and meets the needs of consumers and partners, as well as helping meet the platform's business objectives.

Hopefully, Twitter's approach to launching the Periscope API provides us with a positive example of how you can make your APIs more public, instead of always locking things down. There is no single way to launch your APIs, but I'd say that Twitter's cautious approach is probably a good idea when you operate in such a competitive space, or when you don't have a lot of experience operating a fully public API--start with a handful of trusted partners, and open things up to a slightly wider group as you feel comfortable.

Thanks, Twitter for providing me with a positive example I can showcase--it is good to see that after a decade you can still launch new APIs, and understand the value of them to your core business--even amidst the often intense and volatile environment that is the Twitter platform.

API Definition: Web Concepts

This is an article from the current edition of the API Evangelist industry guide to API definitions. The guide is designed to be a summary of the world of API definitions, providing the reader with a recent summary of the variety of specifications that are defining the technology behind almost every part of our digital world.

Keeping up with standards bodies like the International Organization for Standardization (ISO) and the Internet Engineering Task Force (IETF) can be a full-time job. Thankfully, Erik Wilde (@dret) has helped simplify the concepts and specifications that make the web work, making them more accessible and easier to understand with his Web Concepts project.

According to Erik, “the Web’s Uniform Interface is based on a large and growing set of specifications. These specifications establish the shared concepts that providers and consumers of Web services can rely on. Web Concepts is providing an overview of these concepts and of the specifications defining them.” His work is a natural fit for what I am trying to accomplish with my API definition industry guide, as well as supporting other areas of my research.

One of the things that slows API adoption is a lack of awareness, among developers who are providing and consuming APIs, of the concepts and specifications that make the web work. The modern API leverages the same technology that drives the web--this is why it is working so well. The web delivers HTML for humans, and APIs use the same technology to deliver machine-readable data, content, and access to algorithms online. If a developer is not familiar with the fundamental building blocks of the web, the APIs they provide, and the applications they build on top of APIs, will always be deficient.

This project provides an overview of 28 web concepts, with 643 distinct implementations aggregated across five separate organizations, including the International Organization for Standardization (ISO), Internet Engineering Task Force (IETF), Java Community Process (JCP), Organization for the Advancement of Structured Information Standards (OASIS), and the World Wide Web Consortium (W3C)--who all contribute to what we know as the web. An awareness and literacy around the 28 concepts aggregated by Web Concepts is essential for any API developer or architect looking to fully leverage the power of the web as part of their API work.

After aggregating the 28 web concepts from the five standards organizations, Web Concepts additionally aggregates 218 actual specifications that API developers, architects, and consumers should be considering when putting APIs to work. Some of these specifications are included as part of this API definition guide, and I will be working to add additional specifications in future editions, as it makes sense. The goal of this guide is to help bring awareness, literacy, and proficiency with common API and data patterns--making use of the Web Concepts project, and building on the web literacy work already delivered by Erik, just makes sense.

Web Concepts is published as a Github repository, leveraging Github Pages for the website. Erik has worked hard to make the concepts and specifications available as JSON feeds, providing machine-readable data that can be integrated into existing API design, deployment, and management applications--providing web literacy concepts and specifications throughout the API life cycle. All JSON data is generated from the source data, which is managed as a set of XML descriptions of specifications, with the build process based upon XSLT and Jekyll, providing multiple ways to approach all concepts and specifications, while maintaining the relationships and structure of all the moving parts that make up the web.
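If you want to kick the tires, a minimal sketch in Python for pulling one of the JSON feeds might look like this--the feed URL and the shape of the JSON are my assumptions from browsing the project, so verify both against the site:

```python
import requests

# Pull the machine-readable feed of web concepts. The feed URL and
# the JSON structure are assumptions from browsing the project site--
# verify both before wiring this into your tooling.
response = requests.get("https://webconcepts.info/concepts.json")
response.raise_for_status()
data = response.json()

# Explore the structure before integrating it into your systems.
print(type(data))
```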

When it comes to the startup space, the concepts that make up the web, and the specifications that make it all work, might seem a little boring--something only the older engineers pay attention to. Web Concepts helps soften these critical concepts and specifications, and makes them accessible and digestible for a new generation of web and API developers--think of them as gummy versions of vitamins. If we are going to standardize how APIs are designed, deployed, and managed--making all of this much more usable, scalable, and interoperable--we are going to have to all get on the same page (aka the web).

Web Concepts is an open source project, and Erik encourages feedback on the concepts and specifications. I encourage you to spend time on the site regularly, and see where you can integrate the JSON feeds into your systems, services, and tooling. We have a lot of work ahead of us to make sure the next generation of programmers have the base amount of web literacy necessary to keep the web strong and healthy. There are two main ways to contribute to the building blocks of the web: participate as a contributor within a standards body, or make sure you are implementing common concepts and specifications throughout your work, contributing to the web, and not just walled gardens and closed platforms.

If you have a product, service, or story you think should be in the API Evangelist industry guide to API definitions, you can email me, or you can submit a Github issue for my API definition research, and I will consider adding your suggestion to the road map.
