The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


The API Portal Outline For A Project I Am Working On

I am working through a project for a client, helping them deliver a portal for their API. As I do with any of my recommendations for my clients, I take my existing API research, and refine it to help craft a strategy that meets their specific needs. Each time I do this it gives me a chance to rethink some of the recommendations I've already gathered, as well as learn from new types of projects. I've taken the building blocks from my API portal research, as well as my API management research, and have taken a crack at organizing them into an outline that I can use to guide my current project.

Here is a walk through of the outline I’m recommending as part of a basic API portal implementation, to support a simple public API:

  • Overview - / - Everything starts with a landing page, with a simple overview of what an API does.

Then you need some basics to help make on-boarding as frictionless as possible, providing everything an API consumer needs to get going:

  • Getting Started - /getting-started/ - Handful of steps with exactly what is needed to get started.
  • Authentication - /authentication/ - An overview of what type of authentication is used.
  • Documentation - /documentation/ - Documentation for the APIs that are available.
  • FAQ - /faq/ - Answer the most common questions.
  • Code - /code/ - Provide code samples, libraries, and SDKs to get going.

Then get API consumers signed up, or able to login and get at their API keys as quickly as you possibly can:

  • Sign Up - /developer/ - Provide a simple sign up for a developer account.
  • Login - /developer/ - Allow users to quickly log back in after they have an account.

Next, provide a wealth of communication and support mechanisms, helping make sure developers are aware of what is going on:

  • Blog - /blog/ - A simple blog dedicated to sharing stories about the API.
  • Support - /support/ - Offer up a handful of support channels like email and tickets.
  • Road Map - /road-map/ - Provide a road map of what the future will hold for the API.
  • Issues - /issues/ - Share what the known issues are for the API platform.
  • Change Log - /change-log/ - Publish a log of everything that has changed with the platform.
  • Status - /status/ - Provide a status page showing the availability of API services.
  • Security - /security/ - Publish an overview of what security practices are in place.

Make sure your consumers know how to get involved at the right level, making plans, pricing, and partnership opportunities easy to find:

  • Plans - /plans/ - Offer a single page with API access plans and pricing information.
  • Partners - /partners/ - Share what the partnership opportunities are, as well as existing partners.

Then let’s take care of the legal side of things, making sure API consumers are fully aware of the TOS, and other aspects of operations:

  • Terms of Service - /terms-of-service/ - Make the terms of service easy to find.
  • Privacy - /privacy/ - Publish a privacy statement for all API consumers.
  • Licensing - /licensing/ - Share the licensing for the API, code, data, and other assets.

I also wanted to make sure I took care of the basics for the developer accounts, fleshing out the common building blocks developers will need to be successful:

  • Dashboard - /developer/dashboard/ - Provide a simple, comprehensive dashboard for developers.
  • Account - /developer/account/ - Allow API consumers to change their account information.
  • Applications - /developer/applications/ - Allow API consumers to add multiple applications, and receive API keys for each application.
  • Plans - /developer/plans/ - If there are multiple plans, allow developers to change plans.
  • Usage - /developer/usage/ - Show history of all API usage for API consumers.
  • Billing - /developer/billing/ - Allow API consumers to add, update, and remove billing information.
  • Invoices - /developer/invoices/ - Share a listing, as well as detail of all invoices for use.

Then I had a handful of other looser items that I wanted to make sure were here. Some of these will be for the second phase of development, but I want to make sure they are on the list:

  • Branding - /branding/ - Providing a logo, branding guidelines, and requirements.
  • Embeddable - /embeddable/ - Share a handful of buttons and widgets for use by consumers.
  • Webhooks - /webhooks/ - Publish details on how to set up and manage webhook notifications.
  • iPaaS - /ipaas/ - Information on Zapier, and other iPaaS integration solutions that are available.
  • Spreadsheets - /spreadsheets/ - Share any spreadsheet plugins or connectors that are available for integration.

That concludes what I'd consider to be the essential building blocks for an API. Sure, there are some more minor APIs that can operate on a bare bones version of this list, but for any API looking to conduct business at scale, I'd recommend considering everything on this list. It reflects what I find across the leading providers like Stripe and Twilio, and reflects what I like to see from an API provider when I am getting up and running using any API.

I have Jekyll templates for each of these building blocks, which use the Bootstrap UI for the layout. I am updating them for this project, then I will publish them again as a set of forkable tools that anyone can implement. I'm going to publish a new API portal that runs as an interactive outline of essential building blocks, then creates a new Github repository for a user, and publishes each of the selected components to the repo, providing a buffet of API developer portal tools anyone can put to work for their API without much effort.
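As a rough illustration of how that publishing step could work, here is a minimal sketch in Node.js that creates a repository and commits a single portal page using the Github REST API. The token, repository name, owner, and page content are all hypothetical placeholders, not part of the actual tooling.

```javascript
// Minimal sketch: create a Github repository and publish one portal building block to it.
// Assumes a personal access token with repo scope; all names and content are placeholders.
const https = require('https');

function githubRequest(method, path, body, token) {
  return new Promise((resolve, reject) => {
    const req = https.request({
      hostname: 'api.github.com',
      path: path,
      method: method,
      headers: {
        'User-Agent': 'portal-publisher',
        'Authorization': 'token ' + token,
        'Content-Type': 'application/json'
      }
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => data += chunk);
      res.on('end', () => resolve(JSON.parse(data)));
    });
    req.on('error', reject);
    if (body) req.write(JSON.stringify(body));
    req.end();
  });
}

async function publishPortal(token) {
  // Create the repository that will hold the portal.
  await githubRequest('POST', '/user/repos', { name: 'my-api-portal' }, token);

  // Publish one selected building block (the getting started page) as a Jekyll file.
  const page = '---\nlayout: default\ntitle: Getting Started\n---\n';
  await githubRequest('PUT', '/repos/my-user/my-api-portal/contents/getting-started/index.html', {
    message: 'Add getting started page',
    content: Buffer.from(page).toString('base64')
  }, token);
}

publishPortal(process.env.GITHUB_TOKEN);
```

Each selected building block would just be another call against the contents endpoint, which is what makes the buffet approach easy to automate.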


AWS API Gateway Export In OpenAPI and Postman Formats

I wrote about being able to import an OpenAPI into the AWS API Gateway to jumpstart your API the other day. OpenAPI definitions are increasingly used for every stop along the API life cycle, and being able to import an OpenAPI to start a new API, or update an existing one in your API gateway, is a pretty important feature for streamlining API operations. OpenAPI is great for defining the surface area of deploying and managing your API, as well as potentially generating client SDKs, and interactive API documentation for your API developer portal.

Another important aspect of this API lifecycle is being able to get your definitions out in a machine readable format as well. All service providers should be making API definitions a two-way street, just like Amazon does with the AWS API Gateway. Using the AWS Console you can export an OpenAPI definition for any API. What makes things interesting is you can also export an OpenAPI complete with all the OpenAPI extensions they use to customize each API within the API Gateway. Also, they provide an export to OpenAPI with Postman specific extensions, allowing you to use it in the Postman desktop client when developing, as well as when integrating with any API.
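For reference, here is a minimal sketch of what that export looks like using the AWS SDK for JavaScript, assuming a hypothetical REST API id and stage name; the extensions parameter is what switches between the API Gateway integration extensions and the Postman flavor of the export.

```javascript
// Minimal sketch: export an API definition from AWS API Gateway.
// The restApiId and stageName are hypothetical placeholders.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

const params = {
  restApiId: 'abc123xyz',        // the API Gateway REST API to export
  stageName: 'prod',             // the deployed stage to export
  exportType: 'swagger',         // the OpenAPI (Swagger) export type
  accepts: 'application/json',
  parameters: {
    extensions: 'postman'        // or 'integrations' for the API Gateway extensions
  }
};

apigateway.getExport(params, (err, data) => {
  if (err) {
    console.error(err);
  } else {
    // data.body contains the exported definition, ready to save or publish to Github.
    console.log(data.body.toString());
  }
});
```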

I’ve said it before, and I’ll say it again. Every API service provider should allow for the importing and exporting of common API definition formats like OpenAPI and Postman. If you are selling services and tooling to API designers, developers, architects, and providers, you should ALWAYS provide a way for them to generate a static definition of what is going on–bonus, if you allow them to publish API definitions to Github. I know some API service providers like to allow for API import, but worry about customers abandoning their service if there is an export. Too bad, offer better services, and affordable pricing models, and people will stick around.

Beyond selling services to folks with day jobs, having the ability to import and export an OpenAPI allows me to play around with tools more. Using the AWS API Gateway I am setting up a number of APIs, then exporting them, and saving the blueprints for when I need them as part of a story, or for use in a client's project. Having the ability to import and export an OpenAPI helps me better deliver actual APIs, but it also helps me tell better stories around what is possible with an API service or tool. If I can't get in there and actually test drive, play with, and see how things work, it is unlikely I will be telling stories about it, or have it in my toolbox when I come across a project where I'll need to apply a specific service or tool.

If you are having trouble making your API service or tool speak OpenAPI, Postman, or other formats, make sure and check out API Transformer, which provides an API you can use to convert between formats. So if you can support one format, you can support them all. They even have the ability to take a .har file and generate an OpenAPI. I recommend natively supporting the importing and exporting of OpenAPI, then using API Transformer to also allow for importing and exporting in other formats. When you handle the import, you'll always just make sure it speaks OpenAPI before you ingest it. When a service supports OpenAPI import and export, it is much more likely I'm going to play with it more, and it increases the chance I'm going to be baking it into the life cycle for one of the projects I'm working on for a client. I'm sure other folks are thinking along the same lines, and are grateful when they can get their API definitions in and out of your platform.


Your API Road Map Helps Others Tell Stories About Your API

There are many reasons you want to have a road map for your API. It helps you communicate with your API community where you are going with your API. It also helps you have a plan in place for the future, which increases the chances you will be moving things forward in a predictable and stable way. When I'm reviewing an API and don't see a public API road map available, I tend to give them a ding on the reliability and communication of their operations. One of the reasons we do APIs is to help us focus externally with our digital resources, and communication plays an important role in that, so when API providers aren't communicating effectively with their community, there are almost always other issues lurking behind the scenes.

A road map for your API helps you plan, and think through how and what you will be releasing for the foreseeable future. Communicating this plan externally helps force you to think about your road map in the context of your consumers. Having a road map, and successfully communicating about it via a blog, on Twitter, and other channels helps keep your API consumers in tune with what you are doing. In my opinion, an API road map is an essential building block for all API providers, because it has direct value for the health of API operations, and because it also provides an external sign of the overall health of a platform.

Beyond the direct value of having an API road map, there are other reasons for having one that will go beyond just your developer community. In a story in Search Engine Land about Google Posts, the author directly references the road map as part of their storytelling: "In version 4.0 of the API, Google noted that 'you can now create Posts on Google directly through the API.'" The changelog includes a bunch of other features, but the Google Posts capability is the most notable. This adds another significant dimension to the road map conversation, and helps out with SEO, and other API marketing efforts.

As you craft your road map you might not be thinking about the fact that people might be referencing it as part of stories about your platform. I wouldn't sweat it too much, but you should at least make sure you are having a conversation about it with your team, and maybe add an editorial cycle to your road map publishing process, making sure what you publish will speak to your API consumers, but also make for quotable nuggets that other folks can use when referencing what you are up to. This is all just a thought I am having as I'm monitoring the space, and reading about what other APIs are up to. I find road maps to be a critical piece of API communication and support, and I depend on them to do what I do as the API Evangelist. I just wanted to let you know how important your API road map is, so you don't forget to give it some energy on a regular basis.


APIs Reduce Everything To A Transaction

My partner in crime Audrey Watters crafted a phrase that I use regularly, that "APIs reduce everything to a transaction". She first said it jokingly a few years back, but it is something I regularly repeat, and think about often, as I feel it profoundly describes the world I study. I like the phrase because of its dual meaning to me. If I say it with a straight face, in different company, I will get different responses. Some will be positive, and others will be negative. Which I think represents the world of APIs in a way that shows how APIs are neither good, nor bad, nor neutral.

If you are an API believer, when I describe how APIs reduce everything to a transaction, you probably see this as a positive. You are enabling something. You are distilling aspects of our personal and business worlds down into a small enough representation that it can be transmitted via an API. Enabling payments, messages, likes, shares, and other aspects of our digital economy. Your work as an API believer is all about reducing things down to a transaction, so that you can make people's lives better, and enable some kind of functionality that will deliver value online, and via our mobile phones. API transactions are enablers, and by using APIs, you are working to make the world a better place.

If you aren’t an API believer, when I describe how APIs reduce everything to a transaction, you are probably troubled, and left asking why I would want to do this. Not everything can or should be distilled down into a single transaction. Shifting something to be a transaction opens it up for being bought and sold. This is the nature of transactions. Even if you are delivering value to end-users by reducing a piece of their world to a transaction, now that it is a transaction, it is vulnerable to other market forces. It is these unintended consequences, and side effects of technology that many of us geeks do not anticipate, but market forces see as an opportunity, and benefit greatly from mindless technologists like us doing the hard work to transform the world into transactions that they can get their greedy little hands on.

Our desire to purchase a product is reduced to a transaction. Nothing bad here, right? Our conversations, images we take, videos we watch, and our location becomes a transaction. Getting a little scarier. Then our healthcare, education, and personal thoughts are turned into transactions. They are being bought and sold online, and via our mobile phones. What we buy, photos of our children, health records, and our most personal thoughts are all being reduced to a transaction so they can be transmitted via web and mobile applications. Secondarily, because it has been reduced to a transaction, it can be bought and sold on data markets, and become a transaction in the surveillance economy. Fulfilling the darker side of what we know as reducing everything to a transaction.

This is one of the reasons I'm fascinated with APIs. My technologist brain is attracted to the process of reducing things to a transaction, so they can be transmitted via APIs for use in web, mobile, and device applications. I'm obsessed with reducing everything to a transaction. Along this journey I'm also fascinated by our collective lack of awareness regarding how these transactions can be used for harm. I'm intrigued by our inability to think through this as technologists, and how we even become annoyed when it is pointed out. I'm fascinated by our lack of accountability as technologists when the transactions we are responsible for are used to harm people, exploit individuals, and are bought and sold on the open market like we are cattle. We seem unable to stop or slow what we have set in motion, and must keep reducing everything to a transaction at all costs.

Why do we need photos of our children to be transactions? Why do we need our DNA to be transactions? Why do we need every moment, every thought, every impulse in our day to be a digital transaction? Once anything is reduced to a transaction in our world, why do we then feel compelled to measure it? The steps we take? The calories we consume? The hours we sleep? These are all aspects of our lives that have been reduced to transactions. Why? For our benefit? To improve our life? Or is it just so that someone can sell us something, or even worse, just sell this little piece of us? This is why everything is being reduced to a transaction. The reasoning behind it is always about making our lives better, and improving the quality of our daily life, but the real reason is always about distilling down a piece of who we are into something that can be bought and sold–transacted.


Budget API Management Using Github

I am always looking for the cheapest, easiest ways to get things done in the world of APIs. As a small business owner I'm always on the hunt for hacks to get done what I need, and hopefully make things easier for my users, while keeping things free, or at least minimally priced for my business. When it comes to my simplest APIs, I'm not looking to fully manage them, but I do want anyone using them to authenticate, and pass in API keys, so that I can track their use. In some cases I'm going to bill against this usage, but for the most part I just want to secure and quantify their consumption.

The quickest and dirtiest way I will enable authentication for any API is using Github. The first thing you do is create a Github OAuth application, which is available under the settings for your Github user or organization. Then I add a JavaScript icon and login link to the page, and paste a PHP script at another location, where it handles the login. All you have to do is update the URLs in both scripts, and when someone clicks on the icon, they'll be authenticated and then dropped back on the original page with a username and valid OAuth token–at which point you have a validated user and a valid token.
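The post describes the server side handler as a PHP script; as a rough sketch of the same logic, here is the core token exchange in Node.js, with the client id, secret, and surrounding redirect handling all treated as hypothetical placeholders.

```javascript
// Minimal sketch of the Github OAuth callback handler described above, in Node.js.
// The client id and secret come from the Github OAuth application settings; values are placeholders.
const https = require('https');
const querystring = require('querystring');

function exchangeCodeForToken(code) {
  return new Promise((resolve, reject) => {
    const body = querystring.stringify({
      client_id: process.env.GITHUB_CLIENT_ID,
      client_secret: process.env.GITHUB_CLIENT_SECRET,
      code: code
    });
    const req = https.request({
      hostname: 'github.com',
      path: '/login/oauth/access_token',
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body)
      }
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => data += chunk);
      // The response contains the access_token used to identify the API consumer.
      res.on('end', () => resolve(JSON.parse(data).access_token));
    });
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}
```

Once you have the token back, a quick call to the Github user API gives you the username, and you can drop the consumer back on the original page, which is all the identity layer these simple APIs need.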

In some situations I just require that the appid and appkey be passed in with each API call. I do not rate limit, or bill against usage. I am just looking to identify consumers. In other projects I'm actively billing against consumption, but I'm doing this by processing the web server logs for the API. Again, bare bones operations. For the next level up, in the login PHP script I'm actually adding a user to a specific plan I've set up for an API using the AWS API Gateway, and adding their key to the gateway. Now a user has access to APIs, and the platform has (limited) access to their Github account, and a way to identify all their API usage.
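Here is a minimal sketch of that last step using the AWS SDK for JavaScript, creating an API key for the authenticated Github user and attaching it to an existing usage plan; the usage plan id is a hypothetical placeholder.

```javascript
// Minimal sketch: issue an API key for a Github user and add it to an API Gateway usage plan.
// The usagePlanId is a placeholder for a plan already configured in the gateway.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

async function addConsumerToPlan(githubLogin) {
  // Create a key tied to the Github username, so usage can be identified in the logs.
  const key = await apigateway.createApiKey({
    name: githubLogin,
    enabled: true
  }).promise();

  // Attach the new key to the usage plan that defines the rate limits and quotas.
  await apigateway.createUsagePlanKey({
    usagePlanId: 'abc123',      // placeholder for an existing usage plan
    keyId: key.id,
    keyType: 'API_KEY'
  }).promise();

  return key.value;             // hand the key value back to the API consumer
}
```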

I use this approach to account authentication for reading and writing to the underlying Github repository I’m using for API developer portals, and other applications. I’m also using these accounts to access API resources I’m serving up across a variety of projects. The combination allows for some pretty easy, but powerful engagement with API consumers via Github. The best part is I can rely on Github security and accounts for this layer of my API management. Next, I like that I can extend the features of their account using the AWS API Gateway. Extending the API management capabilities for specific projects when I have higher needs.

You can find the scripts for this on Github. If you don't want to host the server side portion of this, I recommend checking out OAuth.io, as they have a simple SaaS version of this that will work beyond just Github. However, if you are looking for a quick and dirty solution, this one should work just fine.


A Simple API With AWS DynamoDB, Lambda, and API Gateway

I've set up a few Lambda scripts from time to time, but haven't had any dedicated project time to push forward API serverless concepts. Over the weekend I had a chance to deploy a couple of APIs using AWS DynamoDB, Lambda, and API Gateway, lighting up some of the serverless API possibilities in my brain. Like most areas of the tech sector, I think the term is dumb, and there is too much hype, but I think underneath there are some interesting possibilities, at least enough to keep me playing around with things.

Right now my primary API setup is an Amazon Aurora (MySQL) backend, with APIs deployed on EC2, using the Slim API framework in PHP. It is clean, simple, and gets the job done. I use 3Scale, or Github for the API management layer. This new approach simplifies some things for me, but definitely goes further down the AWS rabbit hole with the adoption of API Gateway and Lambda. It also introduces interesting enough benefits that I am considering it for use on some specific projects.

Identity and Access Management (IAM) Role The first thing you need to do to make the whole AWS thing work is set up a role using AWS IAM. I created a role just for this project, and added CloudWatchFullAccess, AmazonDynamoDBFullAccess, and AWSLambdaDynamoDBExecutionRole. I need this role to handle a bunch of management level things with the database, and logging. IAM is one of the missing aspects of hand crafting my APIs, and is why I am considering adopting it on behalf of my customers, to help them get a handle on security.

Simple API Database Backends Using AWS DynamoDB I am a big fan of relational databases, mostly out of habit and experience. A client of mine is fluent in AWS DynamoDB, which is a simple NoSQL solution, so I felt compelled to ensure the backend database for their APIs spoke DynamoDB. It's a pretty simple database, so I got to work creating an account table, added a simple JSON object that contained 4 or 5 fields, and fired up an index for the simple accounts database. The databases I'm creating are meant to track aspects of API management, so the tables won't end up being too large, or have high performance requirements. Regardless, DynamoDB is a perfect backend for APIs, leaving me unsure why I don't use the platform more often.
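Here is a minimal sketch of standing up that accounts table with the AWS SDK for JavaScript; the table name, key schema, and throughput values are placeholders rather than the actual project settings.

```javascript
// Minimal sketch: create a simple accounts table in DynamoDB.
// The table name, key schema, and throughput are placeholders for illustration.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

const params = {
  TableName: 'accounts',
  AttributeDefinitions: [
    { AttributeName: 'account_id', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'account_id', KeyType: 'HASH' }
  ],
  ProvisionedThroughput: {
    ReadCapacityUnits: 1,
    WriteCapacityUnits: 1
  }
};

dynamodb.createTable(params, (err, data) => {
  if (err) console.error(err);
  else console.log('Table status:', data.TableDescription.TableStatus);
});
```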

Using Lambda Functions Behind The API Instead of firing up an Amazon EC2 instance and hand crafting my API framework, I crafted a handful of serverless scripts in Node.js that will run as independent Lambda functions. I'm going to eventually need a whole bunch of functions, but to get me going with this new API I crafted four separate Lambda functions that I can use to drive the API:

  • searchAccounts - Using the DynamoDB API scan method to query the table.
  • addAccount - Using the DynamoDB API putItem method to add a record to the table.
  • updateAccount - Using the DynamoDB API updateItem method to update a record in the table.
  • deleteAccount - Using the DynamoDB API deleteItem method to delete a record from the table.

Using the AWS SDK, I'm simply making calls to the DynamoDB API to get all the work done. I'm fluent in JavaScript, but not well versed in Node.js; still, it doesn't take much energy to understand what is going on. The serverless functions are pretty utilitarian, and all that is unique is the DynamoDB method to call, and the JSON that is being sent with each call. It is something that is pretty straightforward, and easily replicated for other APIs. I will keep developing functions for my API, but now I can at least handle the basic CRUD functionality around my new database.
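To show how utilitarian these functions are, here is a minimal sketch of what the addAccount Lambda function could look like, assuming the accounts table sketched above; the field names are placeholders, and the other three functions just swap in scan, updateItem, or deleteItem.

```javascript
// Minimal sketch of the addAccount Lambda function, using the DynamoDB putItem method.
// The table and attribute names are placeholders for illustration.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

exports.handler = (event, context, callback) => {
  const params = {
    TableName: 'accounts',
    Item: {
      account_id: { S: event.account_id },
      name:       { S: event.name },
      email:      { S: event.email }
    }
  };

  // putItem adds the record; scan, updateItem, and deleteItem drive the other functions.
  dynamodb.putItem(params, (err, data) => {
    if (err) {
      callback(err);
    } else {
      callback(null, { status: 'created', account_id: event.account_id });
    }
  });
};
```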

Publish An API Using AWS API Gateway The last piece of the puzzle for this story is the API. Each Lambda function accepts and returns JSON, which is technically an API, but there is no management layer, or RESTful infrastructure present. The AWS API Gateway gives me the ability to craft API paths, with accompanying GET, POST, PUT, DELETE, and other methods. For each method I add, I'm given four options for connecting to my backend: making an HTTP call, creating just a mock API, leveraging another AWS service, or connecting to a Lambda function. I quickly wire up a GET, POST, PUT, and DELETE to each of my functions, and add my API to an AWS API Gateway plan, requiring API keys, and limiting who can access what.

I now have an accounts API which allows me to add, update, delete, and search for accounts using an API. My data is stored in DynamoDB, and served up via Lambda functions, through the API Gateway. It is secured. It is scalable. I can easily quantify what my database, functions, and gateway resource usage and costs will end up being. I get why folks are interested in serverless. It’s clean. It’s modular. It scales. It is very manageable. I don’t feel like it will be the answer for every API I need to deploy, but it does make sense for quickly deploying APIs for customers who are open to AWS, and need things to be secure, highly performant, and scalable.

A serverless approach definitely takes the sysadmin load off a little bit. Especially when you depend on DynamoDB for the backend. DynamoDB, Lambda, and API Gateway offer a pretty nice stack that can auto tune and scale itself. I'm going to fire up five separate APIs using this new approach, and set up some monitoring and testing to see how it delivers, and maybe get a handle on the costs associated with operating an API like this. I still need to attach a custom domain, get a handle on logging with AWS CloudWatch, and some of the other aspects of API management using AWS API Gateway. However, it provides me with a nice look into the serverless world, and how I can use it to deploy, and manage APIs, but also use APIs to manage a serverless approach by publishing functions using the Lambda API, keeping things in tune with my API definitions stored on Github.


API Monetization Framework As Introduced By AWS Marketplace

I am learning about the AWS Marketplace through the lens of selling your API there, adding a new dimension to my API monetization and API plan research. I’ve invested a significant amount of energy to try and standardize what I learn from studying the pricing and plans for the API operations of the leading API providers. As I do this work I regularly hear from folks who like to tell me how I’ll never be able to standardize and normalize this, and that it is too big of a challenge to distill down. I agree that things seem too big to tame at the current moment, but with API pioneers like AWS, who have been doing this stuff for a while, you can begin to see the future of how this stuff will play itself out.

Amazon set into motion a significant portion of how we think about monetizing our API resources. The pay for what you use model has been popularized by Amazon, and continues to dominate conversations around how we generate revenue around our valuable digital assets. AWS has some of the most sophisticated pricing structures around their API services, as well as very mature pricing calculators, and have created markets around their resources (i.e., spot instances for compute). You can see these concepts playing out in the guidance they offer software developers in their AWS Marketplace Seller Guide, which helps sellers modify their SaaS products to sell them through AWS Marketplace using two models: 1) metering, or 2) contract. When you list your application in the AWS Marketplace you must choose between one of these models, but both involve thinking critically about your monetization strategy, which includes your hard costs, as well as where the value will lie with your customers–striking the balance necessary to operate a viable API business.

According to the AWS Marketplace Seller Guide, each SaaS application owner listing through AWS Marketplace has two options for listing and billing their software:

  • You can use the AWS Marketplace Metering Service to bill customers for SaaS application use. AWS Marketplace Metering Service provides a consumption monetization model in which customers are charged only for the number of resources they use in your application. The consumption model is similar to that for most AWS services. Customers pay as they go.
  • You can use the AWS Contract Service to bill customers in advance for the use of your software. AWS Marketplace Contract Service provides an entitlement monetization model in which customers pay in advance for a certain amount of usage of your software. For example, you might sell your customer a certain amount of storage per month for a year, or a certain amount of end-user licenses for some amount of time.
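For the metering option, here is a minimal sketch of how a SaaS provider might report usage back to AWS Marketplace using the BatchMeterUsage call in the AWS SDK for JavaScript; the product code, customer identifier, and dimension name are all hypothetical placeholders.

```javascript
// Minimal sketch: report hourly usage for a customer to the AWS Marketplace Metering Service.
// The product code, customer identifier, and dimension are placeholders for illustration.
const AWS = require('aws-sdk');
const metering = new AWS.MarketplaceMetering({ region: 'us-east-1' });

const params = {
  ProductCode: 'exampleproductcode',         // assigned when the product is listed
  UsageRecords: [
    {
      CustomerIdentifier: 'customer-123',    // resolved when the buyer subscribes
      Dimension: 'requests',                 // one of the dimensions defined for the product
      Quantity: 2500,                        // usage for the hour being reported
      Timestamp: new Date()
    }
  ]
};

metering.batchMeterUsage(params, (err, data) => {
  if (err) console.error(err);
  else console.log('Metering results:', JSON.stringify(data.Results));
});
```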

No matter which plan you choose to deliver your API resources within, you will have to have the nuts and bolts of your API operations defined as part of your overall API monetization strategy. Each plan you offer needs to be derived from the hard costs involved with operations, but reflect the needs of your consumers. AWS gives you a handful of common dimensions for thinking through which type of plan you go with, and quantifying how you will be monetizing your API solution, across these areas:

  • Users – One AWS customer can represent an organization with many internal users. Your SaaS application can meter for the number of users signed in or provisioned at a given hour. This category is appropriate for software in which a customer’s users connect to the software directly (for example, with customer-relationship management or business intelligence reporting).
  • Hosts – Any server, node, instance, endpoint, or other part of a computing system. This category is appropriate for software that monitors or scans many customer-owned instances (for example, with performance or security monitoring). Your application can meter for the number of hosts scanned or provisioned in a given hour.
  • Data – Storage or information, measured in MB, GB, or TB. This category is appropriate for software that manages stored data or processes data in batches. Your application can meter for the amount of data processed in a given hour or how much data is stored in a given hour.
  • Bandwidth – Your application can bill customers for an allocation of bandwidth that your application provides, measured in Mbps or Gbps. This category is appropriate for content distribution or network interfaces. Your application can meter for the amount of bandwidth provisioned for a given hour or the highest amount of bandwidth consumed in a given hour.
  • Request – Your application can bill customers for the number of requests they make. This category is appropriate for query-based or API-based solutions. Your application can meter for the number of requests made in a given hour.
  • Tiers – Your application can bill customers for a bundle of features or for providing a suite of dimensions below a certain threshold. This is sometimes referred to as a feature pack. For example, you can bundle multiple features into a single tier of service, such as up to 30 days of data retention, 100 GB of storage, and 50 users. Any usage below this threshold is assigned a lower price as the standard tier. Any usage above this threshold is charged a higher price as the professional tier. Tier is always represented as an amount of time within the tier. This category is appropriate for products with multiple dimensions or support components. Your application should meter for the current quantity of usage in the given tier. This could be a single metering record (1) for the currently selected tier or feature pack.
  • Units – Whereas each of the above is designed to be specific, the dimension of Unit is intended to be generic to permit greater flexibility in how you price your software. For example, an IoT product which integrates with device sensors can interpret dimension “Units” as “sensors”. Your application can also use units to make multiple dimensions available in a single product. For example, you could price by data and by hosts using Units as your dimension. With dimensions, any software product priced through the use of the Metering Service must specify either a single dimension or define up to eight dimensions, each with their own price.

These dimensions reflect the majority of software services being sold out there today. Make sure you do not get stuck in one way of thinking, like just charging per API call. Think about how your different API plans might have one or more dimensions, beyond any single use case.

  • Single Dimension - This is the simplest pricing option. Customers pay a single price per resource unit per hour, regardless of size or volume (for example, $0.014 per user per hour, or $0.070 per host per hour).
  • Multiple Dimensions – Use this pricing option for resources that vary by size or capacity. For example, for host monitoring, a different price could be set depending on the size of the host. Or, for user-based pricing, a different price could be set based on the type of user (admin, power user, and read-only user). Your service can be priced on up to eight dimensions. If you are using tier-based pricing, you should use one dimension for each tier.

All of these dimensions reflect the common building blocks of API plans and pricing which I've been tracking for a number of years. It's based upon Amazon selling their own APIs, as well as watching their customers price and sell their resources. Their pricing guide goes well beyond just APIs, and considers how you can generate revenue from any type of SaaS, but the dimensions they provide are a place to start for ALL API providers, whether you are looking to sell them in the AWS Marketplace or not. You can find even more dimensions in my API plan research, but what Amazon provides will work for about 75% of the use cases out there today, and I'm looking to get you thinking critically about your API monetization and plans, not provide you with too many options.

There just aren’t too many examples like this available, when it comes to thinking through how to price your APIs. My friends over at Algorithmia have pushed the conversation forward some with their algorithmic API marketplace, but you just don’t see this level of maturity with the pricing of resources over at Azure, Google, or others yet. Amazon is the furthest along in this journey. They have the most experience, and the most data regarding what digital resources are worth, and how you can measure and meter access. I think it will take a number of years to mature, but I think by 2020 we will see more standardization in how we structure the pricing for the most common digital resources available online–even if it is just the APIs we are selling on Amazon.

There will always be an infinite number of ways to charge for your APIs, but for many of the digital commodities that have become staples, we’ll see one or two common approaches stick. We’ll see less innovation in how we price the most used APIs, because those with market share will dictate the model that gets adopted, and others will emulate just so they can get a piece of the pie. As other API resources continue to mature, becoming digital commodities, we’ll see their pricing structure stabilize, and standardize to fit into market frameworks like we see emerging on the AWS platform. It will take time, but we’ll begin to see machine readable templates governing API pricing and plans, allowing cross platform markets to flourish, as API consumers figure out they can make API usage more predictable, budget-able, and orchestrate-able. We aren’t there yet, but you can see hints of this API economy over at AWS within their marketplace approach.


API Management Dashboard: The Provider View

I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. I’ve taken a look at the API consumer account basics as well as their usage, and next I want to consider the view of all of this from the API provider vantage point. For both of my current projects, I’m needing to think about the UI elements that deliver on API management elements from the API provider perspective.

To help me think through the UI elements needed to manage the essential elements of API operations, I wanted to create a simple list of each screen that will be needed to get the job done. So far, I have the following eight UI elements as part of my API management base:

  • Creation - The landing page for account creation. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
  • Login - The page for logging back in after a user has registered. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
  • Dashboard - The landing page once an API provider is logged in – providing access to all aspects of API management.
  • APIs - A list of APIs, with detail pages for managing each individual API definition.
  • Plans - A list of API plans, with detail pages for managing each individual API plan definition.
  • Accounts - A list of API consumer accounts, with detail pages for managing each individual API consumer.
  • Usage - A list of API calls from logs, with tools for breaking down by API, plan, or consumer.
  • Invoices - A list of all invoices that have been generated for a specific time period, across specific consumers. With a detail page for seeing individual invoice details.

There may be more of these added to my current projects, things like forgot password, and other aspects, but this gives me the visibility into the API consumer account and usage aspects of the API management I'm trying to deliver. These are the basic features any API management provider delivers, and represent the default areas of control an API provider should have over API consumption. It doesn't have to be pretty, or have all the bells and whistles, but it should give enough to help API providers understand what is going on in real-time, and over a variety of time periods.

The clients I refer to 3Scale, Tyk, Restlet, and the other API management services that are available won't have to reinvent the wheel here. It is for my clients who are using AWS API Gateway, and developing seamless custom integrations, that I'm having to specify things at this level. My advice is to make use of existing providers whenever you can, but for this round of projects I'm putting AWS API Gateway to work, so I'm going to need to do a little more custom work on the API management front. When I come out of this I should have a suite of API definitions that can be imported into the AWS API Gateway, as well as some Jekyll page templates for delivering the UI functionality listed above, augmenting the AWS API Gateway with essential API management features that should be present within any API operations.


Selling Your AWS API Gateway Driven API Through The AWS Marketplace

I am getting intimate with AWS API Gateway. Learning about what it does, and what it doesn’t do. The gateway brings a number of essential API management elements to the table, like issuing keys, establishing plans, and enforcing rate limits. However, it also lacks many of the other active elements of API management like billing for usage, which is an important aspect of managing API consumption for API providers. With AWS, things tend to work like legos, meaning many of their services work together to deliver a larger picture, and I’ve been learning more about how AWS API Gateway works with the AWS Marketplace to deliver some of the business of API features I’m looking for.

Here is the blurb from the AWS API Gateway documentation regarding how you can setup AWS API Gateway to work with AWS Marketplace, making your API available for sale as a SaaS service:

After you build, test, and deploy your API, you can package it in an API Gateway usage plan and sell the plan as a Software as a Service (SaaS) product through AWS Marketplace. API buyers subscribing to your product offering are billed by AWS Marketplace based on the number of requests made to the usage plan.

To sell your API on AWS Marketplace, you must set up the sales channel to integrate AWS Marketplace with API Gateway. Generally speaking, this involves listing your product on AWS Marketplace, setting up an IAM role with appropriate policies to allow API Gateway to send usage metrics to AWS Marketplace, associating an AWS Marketplace product with an API Gateway usage plan, and associating an AWS Marketplace buyer with an API Gateway API key. Details are discussed in the following sections.

To enable your customers to buy your product on AWS Marketplace, you must register your developer portal (an external application) with AWS Marketplace. The developer portal must handle the subscription requests that are redirected from the AWS Marketplace console.

While it is not exactly the billing model I'm looking for in my API management layer currently, it provides a compelling look at selling your APIs in a marketplace setting. There are a growing number of APIs available in the AWS Marketplace, but still less than a hundred from what I can tell. Smells like an opportunity for API providers to step up and publish their APIs. The linkage between an AWS API Gateway usage plan and an AWS Marketplace product makes a lot of sense when you think about having an API-as-a-product focus for your organization. I will be writing more about this relationship in the future, as I think it reflects how we should be thinking about our API service composition, and how we craft our API usage plans.

I'm going to set up one of my simpler, more utility style APIs, like screen capture, or proper noun search, deploy it using AWS API Gateway, then publish it as an AWS Marketplace product. I want to have a clear view of what it takes, and what the on-boarding process looks like for a consumer. Another thing I'm interested in exploring is how I can also offer a more wholesale version of these APIs. Meaning, I'm not metering their usage with AWS API Gateway and billing via AWS Marketplace. It will be a separate product that gets deployed in their AWS infrastructure as EC2, RDS, and maybe AWS API Gateway, where they aren't billed by usage, but billed by the implementation. This is a model I've been watching, considering, and thinking about for some time, but only now is the public cloud infrastructure able to support this type of API deployment and management. It will be interesting to see what I can make work.


API Management Dashboard: The Consumer View

I'm helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I'm looking to distill things down to the essential components needed to get the job done. I've taken a look at the API consumer account basics as well as their usage, and next I want to consider the view of all of this from both the API provider and consumer vantage points. For both of my current projects, I'm needing to think about the UI elements that deliver on API management elements at both the API provider and consumer levels. I've already tackled the API provider view, so next up is the API consumer view.

To help me think through the UI elements needed to manage the essential elements of API consumption for developers, I wanted to create a simple list of each screen that will be needed to get the job done. So far, I have the following nine UI elements as part of my API management base:

  • Creation - The landing page for developer account creation. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
  • Login - The page for logging back in after a developer has registered. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
  • Dashboard - The landing page once an API consumer is logged in – providing access to all aspects of their API access.
  • Account - The ability to update developer account information.
  • Key(s) - The ability to get the master set, or multiple copies of API keys.
  • Plans - Viewing all available plans, with the ability to see which plan a consumer is part of and switch plans if relevant.
  • Usage - See history of all API consumption by API and time period.
  • Credit Card - The addition and updating of account credit card.
  • Billing - See history of all invoices for API consumption.

There may be more of these added to my current projects, things like forgot password, and other aspects, but this gives me the visibility into the API consumer account and usage aspects of the API management I'm trying to deliver. These are the basic features any API management provider delivers, and represent the default areas of control an API provider should have over API consumption. It doesn't have to be pretty, or have all the bells and whistles, but it should give each API consumer what they need to manage their access and consumption, and understand what they've consumed and been billed for.

The clients I refer to 3Scale, Tyk, Restlet, and the other API management services that are available won't have to reinvent the wheel here. It is for my clients who are using AWS API Gateway, and developing seamless custom integrations, that I'm having to specify things at this level. My advice is to make use of existing providers whenever you can, but for this round of projects I'm putting AWS API Gateway to work, so I'm going to need to do a little more custom work on the API management front. When I come out of this I should have a suite of API definitions that can be imported into the AWS API Gateway, as well as some Jekyll page templates for delivering the UI functionality listed above, augmenting the AWS API Gateway with essential API management features that should be present within any coherent API operations.


API Developer Account Usage Basics

I'm helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I'm looking to distill things down to the essential components needed to get the job done. I spent some time thinking through the developer account basics, and now I want to break down the aspects of API consumption and usage around these APIs and developer accounts. I want to think about the moving parts of how we measure, quantify, communicate, and invoice as part of the API management process.

Having A Plan We have developers signed up, with API keys that they'll be required to pass in with each API call they make. The next portion of API management I want to map out for my clients is the understanding and management of how API consumers are using resources. One important concept that I find many API providers, and would-be API providers, don't fully grasp is service composition. Something that requires the presence of a variety of access tiers, or API plans, which define the business contract we are making with each API consumer. API plans usually have these basic elements (a simple sketch of a plan definition follows the list):

  • plan id - A unique id for the plan.
  • plan name - A name for the plan.
  • plan description - A description for the plan.
  • plan limits - Details about limits of the plan.
  • plan timeframe - The timeframe for limits applied.
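
To make those elements concrete, here is a minimal sketch of what a machine readable plan definition might look like; the values are placeholders for illustration.

```javascript
// Minimal sketch of a machine readable API plan definition, using the elements above.
// All values are placeholders for illustration.
const starterPlan = {
  plan_id: 'starter',
  plan_name: 'Starter',
  plan_description: 'Free access for new consumers getting up and running.',
  plan_limits: {
    requests_per_timeframe: 1000,   // maximum number of API calls allowed
    rate_per_second: 5              // burst limit applied at the gateway
  },
  plan_timeframe: 'day'             // the window the limits are applied against
};
```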

There can be more than one plan, and each plan can provide different types of access to different APIs. There might be plans for new users versus verified ones, as well as possibly partners. The why and how of each plan will vary from API provider to provider, but they are all essential to managing API usage. Something that needs to be well defined, and in place, with APIs and consumers organized into their respective tiers. Once this is done, we can begin thinking about the next layer, logging.

Everything Is Logged Each call made to the API contains an API key which identifies the consumer, who should always be operating within a specific plan. Each API call is logged by the web server, and ideally the API management layer, providing timestamped entries for every drop of APIs consumed. These logs are used across API management operations, providing a history of all usage that will be evaluated on a per API, as well as a per API consumer level. If a request and response isn't available in the API logs, then it didn't happen.

Quantifying API Usage Every log entry recorded will have the keys for a specific API consumer, as well as the path of the API they are consuming. When this usage is viewed through the lens of the API plan an API consumer is operating within, you have the ability to quantify usage, and place a value on overall consumption by any time frame. This is the primary objective of API management, quantifying who is accessing API resources, and how much they are consuming. This is how valuable API resources are being secured, and in many situations monetized, using simple web APIs.

API usage begins with quantifying what consumers have used, but then there are other dimensions that should be considered as well. For example, usage across API users for a single path, or group of paths. Also possibly usage across APIs and consumers within a particular plan. Then you can begin to look across time periods, regions, and other relevant information, providing a more complete picture of how APIs are being put to use. This awareness is why API providers are employing API management techniques, beyond what can be extracted from database logs, or just website or application analytics–providing a more wholesale view of how APIs are consumed.

Invoicing For Consumption Now that we have all API usage defined, and available through the lens of API plans, we have the option of invoicing for consumption. We know each call API consumers have made, and we have a specific rate for each API as part of the API plans each consumer is subscribed to. All we have to do is the math, and generate an invoice for the designated time period (i.e., daily, weekly, monthly, quarterly). Invoicing doesn't always have to be settled with cash, as consumption may be internal, with partners, or with a variety of public consumers. I'd also warn against thinking of consumption as always costing the consumer, as sometimes it might be relevant to pay some API consumers for some of their usage–incentivizing a particular type of behavior that benefits a platform.
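Here is a minimal sketch of what that math looks like, aggregating logged API calls for a consumer and applying a per call rate from their plan; the log entry shape and the rate are placeholders.

```javascript
// Minimal sketch: generate an invoice total from API logs and a plan's per call rate.
// The log entry shape and pricing are placeholders for illustration.
function generateInvoice(logEntries, consumerKey, plan, periodStart, periodEnd) {
  // Count the calls made by this consumer within the invoice period.
  const calls = logEntries.filter((entry) =>
    entry.api_key === consumerKey &&
    entry.timestamp >= periodStart &&
    entry.timestamp < periodEnd
  ).length;

  return {
    consumer: consumerKey,
    plan: plan.plan_id,
    period: { start: periodStart, end: periodEnd },
    calls: calls,
    total: calls * plan.rate_per_call   // settle in cash, credits, or not at all
  };
}
```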

Measuring, quantifying, and accounting for API consumption is the primary reason companies, organizations, institutions, and government agencies are implementing API management layers. It is how Amazon generates revenue from their Amazon Web Services. It is how Stripe, Twilio, Sendgrid, and other rockstar API providers generate the revenue behind their operations. It is how government agencies are understanding how the public is putting valuable public resources to work. API management is something that is baked into cloud platforms, with a variety of service providers available as well, providing a wealth of solutions for API providers to choose from.

Next I will be looking at the API account and usage layers of API management from the view of the API provider, as well as the API consumer via their developer area. Ideally, API management solutions are providing the dashboards needed for both sides of this equation, but in some of the projects I'm working on this isn't available. There is no ready to go dashboard for API providers or consumers to look at when standing up an AWS API Gateway in front of an API, falling short when it comes to going the distance as an API management solution. You can define accounts, issue keys, establish plans, and limit and manage API consumption, but we need AWS CloudWatch, and other services to deliver on API logging, authentication, and other common aspects of management–with API management dashboards being a custom thing when employing AWS for management. One consideration for why you might go with a more robust API management solution, beyond what an API gateway offers.


API Developer Account Basics

I'm helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I'm looking to distill things down to the essential components needed to get the job done. The first element you need to manage API access is the ability for API consumers to sign up for an account, which will be used to identify, measure the usage of, and engage with each API consumer.

Starts With An Account While each company may have more details associated with each account, each account will have these basics:

  • account id - A unique identifier for each API account.
  • name - A first name and last name, or organization name.
  • email - A valid email address to communicate with each user.

Depending on how we enable account creation and login, there might also be a password. However, if we use existing OpenID or OAuth implementations, like Github, Twitter, Google, or Facebook, this won’t be needed. We are relying on these authentication formats as the security layer, eliminating the need for yet another password. However, we still may need to store some sort of token identifying the user, adding these two possible elements:

  • password - A password or phrase that is unique to each user.
  • token - An OAuth or other token issued by 3rd party provider.

That provides us with the basics of each API developer account. It really isn't anything different than a regular account for any online service. Where things start to shift a little, specifically for APIs, is that we need some sort of keys for each account that is signing up for API access. The standard approach is to provide some sort of API key, and possibly a secondary secret to complement it:

  • api key - A token that can be passed with each API call.
  • api secret - A second token, that can be passed with each API call.

Many API providers just automatically issue these API keys when each account is created, allowing consumers to possibly reset and regenerate them at any point. These keys are more about identification than they are about security, as each key is passed along with each API call, and provides identification via API logging, which I'll cover in a separate post. The only security the keys deliver is that API calls are rejected if they aren't present.
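Here is a minimal sketch of how those keys might be issued when an account is created, using Node.js; the key lengths are arbitrary placeholders.

```javascript
// Minimal sketch: issue an API key and secret when a developer account is created.
// Key lengths are arbitrary; the keys identify consumers rather than secure the API on their own.
const crypto = require('crypto');

function issueKeys(accountId) {
  return {
    account_id: accountId,
    api_key: crypto.randomBytes(16).toString('hex'),     // passed with each API call
    api_secret: crypto.randomBytes(32).toString('hex')   // a second token to complement the key
  };
}
```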

Allow For Multiple Applications Other API providers will go beyond this base level of API account functionality, allowing API consumers to possibly generate multiple sets of keys, sometimes associated with one or many applications. This opens up the question of whether these keys should be associated with the account, or with one or many registered applications:

  • app id - A unique id for the application.
  • app name - A name for the application.
  • app description - A description of the application.
  • api key - A token that can be passed with each API call.
  • api secret - A second token, that can be passed with each API call.

Allowing for multiple applications isn't something every API provider will need, but it does reduce the need for API consumers to sign up for multiple accounts when they need an additional API key for a separate application down the road. This is something that API providers should be considering early on, reducing the need to make additional changes down the road when needed. If it isn't a problem, there is no reason to introduce the complexity into the API management process.

This post doesn’t even touch on logging and usage. It is purely about establishing developer accounts, and providing what they’ll need to authenticate and identify themselves when consuming API resources. Next I’ll explore the usage, consumption, and billing side of the equation. I’m trying to keep things decoupled as much as I possibly can, as not every situation will need every element of API management. It helps me articulate all the moving parts of API management for my readers, and customers, and allows me to help them make sensible decisions, that they can afford.

Ideally, developer accounts are not a separate thing from any other website, web or mobile application accounts. If I had my way, API developer accounts would be baked into the account management tools for all applications by default. However, I do think things should be distilled down, standardized, and kept simple and modular for API providers to consider carefully, and think about deeply when pulling together their API strategy, separate from the rest of their operations. I'm taking the thoughts from this post and applying them in one project I'm deploying on AWS, with another that will be delivered by custom deploying a solution that leverages Github and basic Apache web server logging. Keeping the approach standardized, but something I can do with a variety of different services, tools, and platforms.


The Tractor Beam Of The Database In An API World

I’m an old database person. I’ve been working with databases since my first job in 1987. Cobol. FoxPro. SQL Server. MySQL. I have had a production database in my charge accessible via the web since 1998. I understand how databases are the center of gravity when it comes to data. Something that hasn’t changed in an API driven world. This is something that will make microservices in a containerized landscape much harder than some developers will want to admit. The tractor beam of the database will not give up control over data so easily, whether because of technical limitations, business constraints, or political gravity.

Databases are all about the storage and access to data. APIs are about access to data. Storage, and the control that surrounds it, is what creates the tractor beam. Most of the reasons for control over the storage of data are not looking to do harm. Security. Privacy. Value. Quality. Availability. There are many reasons stewards of data want to control who can access data, and what they can do with it. However, once control over data is established, I find it often morphs and evolves in ways that can eventually become harmful to meaningful and beneficial access to data. That access is usually the goal behind doing APIs, but it is often seen as a threat to the mission of data stewards, resulting in a tractor beam that API related projects will find themselves caught up in, and find difficult to ever break free of.

The most obvious representation of this tractor beam is that all data retrieved via an API usually comes from a central database. Also, all data generated or posted via an API ends up within a database. The central database always has an appetite for more data, whether scaled horizontally or vertically. Next, it is always difficult to break off subsets of data into separate API-driven projects, or prevent newly established ones from being pulled in, and made part of existing database operations. Whether due to technical, business, or political reasons, many projects born outside this tractor beam will eventually be pulled into the orbit of legacy data operations. Keeping projects decoupled will always be difficult when your central database has so much pull when it comes to how data is stored and accessed. This isn’t just a technical decoupling, this is a cultural one, which will be much more difficult to break from.

Honestly, if your database is over 2-3 years old, and enjoys any amount of complexity, budget scope, and dependency across your organization, I doubt you’ll ever be able to decouple it. I see folks creating these new data lakes, which act as reservoirs for any and all types of data gathered and generated across operations. These lakes provide valuable opportunities for API innovators to potentially develop new and interesting ways of putting data to work, if they possess an API layer. However, I still think the massive data warehouse and database will look to consume and integrate anything structured and meaningful that evolves along their shores. Industrial grade data operations will just industrialize any smaller utilities that emerge along the fringes of large organizations. Power structures have long developed around central data stores, and no amount of decoupling, decentralizing, or blockchaining will change this any time soon. You can see this with the cloud, which was meant to disrupt this, when it just moved it from your data center to someone else’s, and allowed it to grow at a faster rate.

I feel like us API folks have been granted ODBC and JDBC leases for our API plantations, but rarely will we ever decouple ourselves from the mother ship. No matter what the technology whispers in our ears about what is possible, the business value of, and political control over, established databases will always dictate what is possible and what is not. I feel like this is one reason all the big database platforms have waited so long to provide native API features, and why next generation data streaming solutions rarely have simple, intuitive API layers. I think we will continue to see the tractor beam of database culture be aggressive, as well as passive aggressive, toward anything API, trumping the access possibilities brought to the table by APIs with outdated power and control beliefs rooted in how we store and control our data. These folks rarely understand they can be just as controlling and greedy with APIs, yet they seem unable to get over the promises of access APIs afford, and refuse to play along at all when it comes to turning down the volume on the tractor beam so anything new can flourish.


Adding Ping Events To My Webhooks And API Research

I am adding another building block to my webhooks research out of Github. As I continue this work, it is clear that Github will continue to play a significant role in my webhook research and storytelling, because they seem to be the most advanced when it comes to orchestration via API and webhooks. I’m guessing this is a by-product of continuous integration (CI) and continuous deployment (CD), which Github is at the heart of. The API platforms that have embraced automation and orchestration as part of what they do always have the most advanced webhook implementations, and provide the best examples of webhooks in action, which we can all consider as part of our operations.

Today’s webhook building block is the ping event. “When you create a new webhook, we’ll send you a simple ping event to let you know you’ve set up the webhook correctly. This event isn’t stored so it isn’t retrievable via the Events API. You can trigger a ping again by calling the ping endpoint.” A pretty simple, but handy feature when it comes to getting up and going with webhooks, making sure everything is working properly out of the gate–something that clearly comes from experience, and listening to the problems your consumers are encountering.
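
As a rough illustration, here is a minimal sketch of a webhook receiver that acknowledges Github’s ping event before handling anything else. It assumes a Flask application, and checks the X-GitHub-Event header that Github sends with each webhook delivery; the route path is an arbitrary choice on my part.

```python
# Minimal sketch of a webhook receiver that handles Github's ping event.
# Assumes Flask; the /webhook path is an arbitrary choice for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/webhook", methods=["POST"])
def receive_webhook():
    event = request.headers.get("X-GitHub-Event", "")

    if event == "ping":
        # Github sends a ping when the webhook is first created, so a simple
        # acknowledgement confirms the endpoint is wired up correctly.
        return jsonify({"msg": "pong"}), 200

    # Handle the events you actually care about here (push, issues, etc.).
    payload = request.get_json(silent=True) or {}
    return jsonify({"received": event, "keys": list(payload.keys())}), 202


if __name__ == "__main__":
    app.run(port=8080)
```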

These types of subtle webhook features are exactly the types of building blocks I’m looking to aggregate as part of my research. As I do with other areas of my research, at some point I will publish all of these into a single, (hopefully) coherent guide to webhooks. After going through the webhook implementations across the leading providers like Github, I should have a wealth of common patterns in use. Since webhooks aren’t any formal standard, it is yet another aspect of doing business with APIs we have to learn from the healthy practices already in use across the space. It helps to emulate providers like Github, because developers are pretty familiar with how Github works, and when your webhooks behave in similar ways it reduces the cognitive load API consumers face when they are getting started.

One other thing to note in this story–my link to Github’s documentation goes directly to the section on webhook ping events, because they use anchors for all titles and subtitles. This is something that makes storytelling around API operations soooooooooo much easier, and more precise. Please, please, please emulate this in your API operations. If I can directly link to something interesting within your API documentation, the chances are much greater I will tell a story, and publish a blog post about it. If I have to make a user search for whatever I’m talking about, I’m probably just gonna pass on it. One more trick for your toolbox, when it comes to getting me to tell more stories about what you are up to.


Using APIs To Enrich The Data You Have In Spreadsheets

As my friend John Sheehan over at Runscope says, “the spreadsheet is the most underrated API client”. The spreadsheet is where a significant amount of business gets done each day in the business world, so it makes sense that we should be integrating APIs at this level whenever we possibly can. The best tool for doing this today is Blockspring, which provides non-developers (and developers) with the tools they need to integrate APIs into spreadsheets, either Microsoft Excel or Google Sheets–putting the power of APIs directly into the hands of average business folk.

Blockspring has over 100 APIs available for integration into your spreadsheets, but I wanted to highlight their recent release of bulk connectors, which currently provides 10 separate data enrichment features from a handful of API providers:

  • Bing - Company Domain Lookups
  • Bing - Search Query Lookups
  • FullContact - Email Lookups
  • FullContact - Company Lookups
  • FullContact - Twitter Lookups
  • FullContact - Phone Lookups
  • Mailgun - Validate Emails
  • Clarifai - Deep Learning Image Tagging
  • Google - Shorten URLs
  • Google - Expand URLs

These bulk connectors are meant to help you work with bulk data you have stored in spreadsheets and CSV files by enriching your data using these valuable API services. These connectors are all features I’ve custom developed as part of my internal systems, to help me monitor the world of APIs. They are simple, useful data management features, but instead of having to custom integrate with each API as I had to, anyone can use Blockspring to deliver these features within their own spreadsheets. Making the spreadsheet act as an API client, but for any average business user, not just for developers who are API savvy, and have the skills to deliver custom integration.

I’m a big advocate of companies publishing APIs. I also do a lot of pushing for API providers to make sure their APIs are available to non-developers using Zapier. I consider Blockspring to be on this list of essential API services you should be working with as an API provider. This approach to consuming APIs will increasingly be how business gets done. As much as many of us developers would love for spreadsheets to go away, they ain’t going anywhere. Most normal folk in the world of business live and breathe within their spreadsheets, and if we want to deliver our API services to them, it will have to be through brokers like Blockspring. Just pause for a moment and think about the potential for delivering machine learning models via APIs within the spreadsheet like this, and you’ll begin to realize what an underserved aspect of API consumption spreadsheets are.


Importing OpenAPI Definition To Create An API With AWS API Gateway

I’ve been learning more about AWS API Gateway, and wanted to share some of what I’m learning with my readers. The AWS API Gateway is a robust way to deploy and manage an API on the AWS platform. The concept of an API gateway has been around for years, but the AWS approach reflects the commoditization of API deployment and management, making it a worthwhile cloud API service to understand in more depth. With the acquisition of all the original API management providers in recent years, as well as Apigee’s IPO, API management is now a default element of major cloud providers. Since AWS is the leading cloud provider, AWS API Gateway will play a significant role in the deployment and management of a growing number of the APIs we see.

Using AWS API Gateway you can deploy a new API, or you can use it to manage an existing API–demonstrating the power of a gateway. What really makes AWS API Gateway reflect where things are going in the space, is the ability to import and define your API using OpenAPI. When you first begin with the new API wizard, you can upload or copy / paste your OpenAPI, defining the surface area of the API, no matter how you are wiring up the backend. OpenAPI is primarily associated with publishing API documentation because of the success of Swagger UI, and secondarily associated with generating SDKs and code samples. However, increasingly the OpenAPI specification is also being used to deploy and define aspects of API management, which is in alignment with the AWS API Gateway approach.

I have server side code that will take an OpenAPI and generate the server side code needed to work with the database, and handle requests and responses using the Slim API framework. I’ll keep doing this for many of my APIs, but for some of them I’m going to be adopting an AWS API Gateway approach to help standardize API deployment and management across the APIs I deliver. I have multiple clients right now whose APIs I am deploying, and helping them manage their API operations using AWS, so adopting AWS API Gateway makes sense. One client is already operating on AWS, which dictated that I keep things on AWS, but the other client is brand new and looking for a host, which also makes AWS a good option for setting up, and then passing over control of their account and operations to their on-site manager.

Both of my current projects are using OpenAPI as the central definition for all stops along the API lifecycle. One API was new, and the other is about proxying an existing API. Importing the central OpenAPI definition for each project to AWS API Gateway worked well to get both projects going. Next, I am exploring the staging features of AWS API Gateway, and the ability to overwrite or merge the next iteration of the OpenAPI with an existing API, allowing me to evolve each API forward in coming months, as the central definition gets added to. Historically, I haven’t been a fan of API gateways, but I have to admit that the AWS API Gateway is somewhat changing my tune. Keeping the service doing one thing, and doing it well, with OpenAPI as the definition, really fits with my definition of where API management is headed as the API sector matures.
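
For those who want to script this rather than use the console wizard, here is a rough sketch of the import and update flow using boto3. It assumes configured AWS credentials, and the file names and stage name are placeholders for whatever your project uses.

```python
# Sketch: import an OpenAPI definition into AWS API Gateway, then push a
# later revision with overwrite mode. Assumes boto3 and configured credentials.
import boto3

client = boto3.client("apigateway")

# Initial import of the OpenAPI definition as a brand new REST API.
with open("openapi.json", "rb") as f:
    api = client.import_rest_api(failOnWarnings=True, body=f.read())
print("Created API:", api["id"])

# Later, overwrite (or merge) the API with the next iteration of the definition.
with open("openapi-v2.json", "rb") as f:
    client.put_rest_api(restApiId=api["id"], mode="overwrite", body=f.read())

# Deploy the updated API to a stage.
client.create_deployment(restApiId=api["id"], stageName="prod")
```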


Most API Developers Will Not Care As Much As You Do

I believe in the potential of what APIs can do, and care about learning how we can do things right. Part of it is my job, but part of it is me wanting to do things well. Mastering my approach to delivering APIs, using my well-rounded API toolbox. Studying the approaches of other leading API providers, and honing my understanding of healthy, and not so healthy practices. I thoroughly enjoy studying what is going on and then applying it across what I do. However, I am reminded regularly that most people are not interested in knowing, and doing things right–they just want things done.

As many of us discuss the finer details of API design, and the benefits of one approach over the other, other folks would rather we just point them to the solution that will work for them. They really don’t care about the details, or mastering the approach, they just want it to work in their situation. Whether it is the individual, or the project or organization they are working in, the environment is just not conducive to learning, understanding, and growth. They are just interested in services and tools that can deliver the desired solution as cheaply as possible–free, and open source whenever available.

You can see this reality playing out across the space. An example is OpenAPI (fka Swagger). It has largely been successful because of Swagger UI. Most people think OpenAPI is all about documentation, with their understanding reflecting the solution they delivered, not the full benefits brought to the table as part of the process of implementing the specification. This is just one example of how folks across the API space are interested in solutions, rather than the journey. This is why many API programs will stagnate and fail, because folks do them thinking they’ll achieve some easier way of doing things, easy integrations, effortless innovation, or some other myth around API Valhalla.

I feel like much of this reality is set into motion through our education system. We don’t always teach people to learn, we tend to teach them job skills that we (companies) perceive are relevant today. Technology training tends to become about software solutions, brands, and platforms, and rarely concepts. Then I feel like this is baked into company operations because of the escalated pace introduced by startup funding, all the way up to the constant quest for profits of publicly traded companies. An incentive model that encourages folks to just do, not necessarily do right. Achieve short term goals, quarterly and annual metrics, and ignore the technical debt we created–that will be someone else’s problem.

Helping developers learn about APIs feels like our healthcare system sometimes to me. Developers are focused on fixing problems, treating symptoms, and living with good enough outcomes. Nothing preventative. No healthy diet or exercise. Just solving the problems that are in front of us. Looking for solutions to these immediate needs. I need API documentation. I need some sort of automated testing, or security scanner. I just need that API to be performant. I’m not interested in learning about the web, standards, or other things that would help me in my work. I just want this to work, so I can move on to the next problem. It’s a pretty big problem in the API space which I really don’t have any solution for. I’ll just keep learning, educating, and sharing with folks, but regularly reminding myself that not everyone will care about this stuff as much as I do, and that is just the way things are.


API Design Maturity At Capital One

API design is something that many have tried to quantify and measure, but in my experience very few ever establish any meaningful way of doing so properly. I’ve been learning about the approach to API governance from the Capital One DevExchange team, and found their approach to defining API design maturity pretty interesting. I’m mostly interested in their approach because it speaks to actual business objectives, and isn’t about the common technical aspects we see API design being quantified by across the community each day.

Capital One breaks things down into five distinctive layers that offer value to any organization doing APIs. Starting at the bottom of their maturity model, here are the levels of maturity they are measuring things by:

  • Functional - Doing the basics, providing some low-level functionality, and nothing more.
  • Reliable - An API that is reliable, and scalable, beyond just basic functionality.
  • Intuitive - Where an investment in developer experience is made, further standardizing and streamlining what an API does.
  • Empowering - Where an API really begins to deliver value to an organization by being functional, reliable, and intuitive, which all contributes significantly to operations.
  • Transformative - APIs that are game changers. Few APIs ever rise to this level, but all should aspire to this level of maturity.

It provides a whole other lens to look at API design through. Moving beyond just, is it RESTful? Hypermedia, GraphQL, gRPC, or other emerging approaches. It also understands that not all APIs will be equal, and that we should be standardizing the value the design of our APIs delivers to our business operations. Forcing us to ask some pretty simple questions about the API design patterns we are using, and the actual value they bring to the table. Moving beyond API design being about technical details, and considering the business, and other political aspects of doing APIs.

The Capital One API design maturity definition also demonstrates another important aspect of all of this. It takes work, time, and experience to properly design an API that will rise to an empowering or transformative level. It can be easy to deliver functional APIs, but making them reliable, and intuitively getting the job done, will take investment. I’m adding this definition to my API design research, as well as my upcoming API governance research. I’m looking to evolve my definition of what API design looks like beyond just REST, and I’m wanting to move API governance beyond some IT led concept, toward something that is in sync with business objectives, like I’m seeing from the Capital One team. It is easy to think of API design as mature once it enters production, but I’m thinking there is a lot more we should be considering before we consider our approach to API design mature.


AdWords API Release and Sunset Schedule For 2018

APIs are not forever, and eventually will go away. The trick with API deprecation is to communicate clearly, and regularly with API consumers, making sure they are prepared for the future. I’ve been tracking on the healthy, and not so healthy practices when it comes to API deprecation for some time now, but felt like Google had some more examples I wanted to add to our toolbox. Their approach to setting expectations around API deprecation is worthy of emulating, and making common practice across industries.

The Google Adwords API team is changing their release schedule, which in turn impacts the number of APIs they’ll support, and how quickly they will be deprecating their APIs. They will be releasing new versions of the API three times a year, in February, June and September. They will also be supporting only two releases concurrently at all times, and three releases for a brief period of four weeks, pushing the pace of API deprecation alongside each release. I think that Google’s approach provides a nice blueprint that other API providers might consider adopting.

Adopting an API release and sunset schedule helps communicate the changes on the horizon, but it also provides a regular rhythm that API consumers can learn to depend on. You just know that there will be three releases a year, and you have a quantified amount of time to invest in evolving integration before any API is deprecated. It’s not just the communication around the roadmap, it is about establishing the schedule, and establishing an API release and sunset cadence that API consumers can be in sync with. Something that can go a lot further than just publishing a road map, and tweeting things out.

I’ll add this example to my API deprecation research. Unfortunately the topic is one that isn’t widely communicated around in the API space, but Google has long been a strong player when it comes to finding healthy API deprecation examples to follow. I’m hoping to get to the point soon where I can publish a simple guide to API deprecation. Something API providers can follow when they are defining and deploying their APIs, and establish a regular API release and deprecation approach that API developers can depend on. It can be easy to get excited about launching a new API, and forget all about its release and deprecation cycles, so a little guidance goes a long way to helping API providers think about the bigger picture.


Operating Your API Portal Using Github

Operating on Github is natural for me, but I am regularly reminded what a foreign concept it is for some of the API providers I’m speaking with. Github is the cheapest, easiest way to launch a public or private developer portal for your API. With the introduction of Github Pages, each Github repository can be turned into a place to host any API related project. In my opinion, every API should begin with Github, providing a place to put your API definition, portal, and other elements of your API operations.

If you are just getting going with understanding how Github can be used to support your API operations, I wanted to provide a simple checklist of the concepts at play that will lead to you being able to publish your API portal to Github.

  • Github Account - You will need an account to be able to use Github. Anything you do on Github that is public will be free. You can do private portals on Github, but this story is about using it for a public API portal.
  • Github Organization - I recommend starting an organization for your API operations, instead of working under just a single user’s account. Then you can make the definition for the API the first repository, and possibly the portal the second repository you create.
  • Github Repo - A Github repository is basically a folder on the platform where you can store the code, pages, and other content used as part of API operations.
  • Github Pages - Each Github repository has the ability to turn on a public project site, which can be used as a hosting location for a developer portal.
  • Jekyll - The static website engine behind Github Pages, which turns any Github repository into a website hosting location that you can access via your Github account’s subdomain, or even at an address using your own domain.
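
If you prefer to script the setup, here is a rough sketch using the Github REST API with the requests library. The organization name, repository name, and token are placeholders, and it assumes the endpoints for creating an organization repository and a Pages site behave as shown; everything here can also be done through the Github web interface.

```python
# Sketch: create an organization repository and turn on Github Pages via the
# Github REST API. Org name, repo name, and token are placeholder assumptions.
import requests

GITHUB_API = "https://api.github.com"
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder
HEADERS = {
    "Authorization": f"token {TOKEN}",
    "Accept": "application/vnd.github+json",
}

# Create a public repository under the organization for the portal.
repo = requests.post(
    f"{GITHUB_API}/orgs/your-api-org/repos",
    headers=HEADERS,
    json={"name": "developer-portal", "description": "API developer portal"},
).json()

# Enable Github Pages for the repository, serving from the main branch root.
requests.post(
    f"{GITHUB_API}/repos/your-api-org/developer-portal/pages",
    headers=HEADERS,
    json={"source": {"branch": "main", "path": "/"}},
)
```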

I recommend every API provider think about hosting their API portal on Github. The learning curve isn’t that significant to get up and running, and if your portal is public, it is free. You can version control, and leverage other key aspects of Github for evolving and managing your API portal. There are a growing number of examples of forkable API portals like from the GSA, or an interesting minimum viable API documentation template from my friend James Higginbotham (@launchany). Demonstrating that the practice is growing, with the number of healthy examples to build on diversifying.

If you need help understanding how to use Github for hosting your API developer portal, feel free to reach out. I am happy to see where I can help. Another thing to note is that this approach to running a Jekyll static website isn’t limited to Github. You can always start the project there, and move off to any Jekyll enabled hosting provider. I run my entire network of websites and API projects this way, leveraging Github as my plan A, and AWS as my plan B, with a server image ready to go when I need it. Github just provides a number of bells and whistles that make it much more usable, as well as something others can collaborate around, enjoying the network effects that come with using the platform.


The Basics Of API Management

I am developing a basic API management strategy for one of my clients’ APIs. With each area of their API strategy I am taking what I’ve learned monitoring the API sector, pausing for a moment to think about it again, and then applying it to their operations. Over the years I have separated out many aspects of API management, distilling it down to a core set of elements that reflect the evolution of API management as it has evolved into a digital commodity. It helps me to think through these aspects of API operations in general, but also to apply them to a specific API I am working on, helping me further refine my API strategy advice.

API management is the oldest area of my research. It has spawned every other area of the lifecycle I track on, but is also the most mature aspect of the API economy. This project I am working on gives me an opportunity to think about what API management is, and what should be spun off into separate areas of concern. I am looking to distill API management down to:

  • Portal - A single URL to find out everything about an API, and get up and running working the resources that are available.
  • On-Boarding - Think about how you get a new developer from landing on the home page of the portal to making their first API call, and then to an application in production.
  • Accounts - Allowing API consumers to sign up for an account, either for individual, or business access to API resources.
  • Applications - Enable each account holder to register one or many applications which will be putting API resources to use.
  • Authentication - Providing one, or multiple ways for API consumers to authenticate and get access to API resources.
  • Services - Defining which services are available across one or many API paths providing HTTP access to a variety of business services.
  • Logging - Every call to the API is logged via the API management layer, as well as the DNS, web server, file system, and database levels.
  • Analysis - Understanding how APIs are being consumed, and how applications are putting API resources to use, identifying patterns across all API consumption.
  • Usage - Quantifying usage across all accounts, and their applications, then reporting, billing, and reconciling usage with all API consumers.
  • APIs - API access to accounts, authentication, services, logging, analysis, and usage of API resources.

There are other common elements bundled with API management, but this reflects the core of what API management is about–the business of APIs. Keeping track of who has access to what, and how much they are using. There are a number of other aspects that many will consider under the API management umbrella, but that I’ve elevated to being their own stops along the API lifecycle. Some areas are:

  • Documentation - Static or interactive documentation for all available API paths, parameters, headers, and other details of the request and response surface area of the API.
  • Support - Self-service, or direct support channels that API consumers put to use to get help along the way.
  • SDKs - The SDKs, samples, libraries, and other supporting code elements for web, mobile, or other types of applications.
  • Road Map - Communicating what the future holds when it comes to the API.
  • Issues - Notification of any open issues with the availability of the API.
  • Change Log - A history of what has happened when it comes to changes to the API.

These areas complement API management, but should be approached beyond the day to day management aspects of API operations. I’d also consider authentication, logging, and analysis to be bigger than just API management, as all three areas should cover more than just the API, but they are still very coupled to the core aspects of API management. In my definition, API management is very much about managing the consumption of resources, and not always the other aspects of API operations. This isn’t just my definition, it is what I’m seeing with the commoditization of API management like we see over at Amazon Web Services.

AWS API Gateway is really about accounts, applications, authentication, and services. Logging and analysis are delivered by AWS CloudWatch. For this particular project I’m using Github and Jekyll as the portal, and custom delivering on-boarding, usage, and supporting APIs separately. Further narrowing down my definition of just what API management is. I’d say that AWS represents this evolution in API management well, with the decoupling of concerns between AWS API Gateway and AWS CloudWatch. If you apply AWS Cognito to authentication, you can separate out another concern. I do not see any viable solution for handling usage, billing, and what used to be the business and monetization side of API management.
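
To ground the accounts and consumption side of this in AWS, here is a rough boto3 sketch of the piece API Gateway does cover: creating an API key for a consumer, defining a usage plan with throttling and quota limits, and tying the two together. The API id, stage, names, and limits are all placeholder assumptions, and the billing and reconciliation side still has to be delivered separately.

```python
# Sketch: the consumption side of API management on AWS API Gateway.
# API id, stage, limits, and names are placeholder assumptions.
import boto3

client = boto3.client("apigateway")

# An API key represents a single consumer (account or application).
key = client.create_api_key(name="acme-web-app", enabled=True)

# A usage plan defines how much of the API that consumer can use.
plan = client.create_usage_plan(
    name="basic-tier",
    apiStages=[{"apiId": "abc123", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 10000, "period": "MONTH"},
)

# Associate the key with the plan, linking identity to consumption limits.
client.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```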

This API project I am working on is already using AWS for the backend of their operations, so I’m investing cycles into better understanding the moving parts of API management in the context of the AWS platform. It makes sense to think some more about the decoupling of API management in the context of AWS, since they are a major player in the commoditization, and maturing, of the concept. Once I’m done with AWS, I’m going to take another look at Google, then Azure, who are the other major players who are defining the future of API management.


Bots, Voice, And Conversational APIs Are Your Next Generation Of API Clients

Around 2010, the world of APIs began picking up speed with the introduction of the iPhone, and then Android mobile platforms. Web APIs had been used for delivering data and content to websites for almost a decade at that point, but their potential for delivering resources to mobile phones is what pushed APIs into the spotlight. The API management providers pushed the notion of being multi-channel, and being able to deliver to web and mobile clients, using a common stack of APIs. Seven years later, web and mobile are still the dominant clients for API resources, but we are seeing a next generation of clients begin to get more traction, which includes voice, bot, and other conversational interfaces.

If you deliver data and content to your customers via your website and mobile applications, the chance that you will also be delivering it to conversational interfaces, and the bots and assistants emerging via Alexa and Google Home, as well as on Slack, Facebook, Twitter, and other messaging platforms, is increasing. I’m not selling the idea that everything will be done with virtual assistants and voice commands in the near future, but as a client we will continue to see mainstream user adoption, with voice being used in automobiles and other Internet connected devices emerging in our world. I am not a big fan of talking to devices, but I know many people who are.

I don’t think Siri, Alexa, and Google Home will live up to the hype, but there are enough resources being invested into these platforms, and the devices that they are enabling, that some of it will stick. In the cracks, interesting things will happen, and some conversational interfaces will evolve and become useful. In other cases, as a consumer, you won’t be able to avoid the conversational interfaces, and will be required to engage with bots, and use voice enabled devices. This will push the need to have conversationally literate APIs that can deliver data to people in bite-size chunks. Sensors, cameras, drones, and other Internet-connected devices will increasingly be using APIs to do what they do, but voice, and other types of conversational interfaces, will continue to evolve to become a common API client.

I am hoping at this point we begin to stop counting the different channels we deliver API data and content to. Despite many of the Alexa skills and Slack bots I encounter being pretty yawn-worthy, I’m still keeping an eye on how APIs are being used by these platforms. Even if I don’t agree with all the uses of APIs, I still find the technical, business, and political aspects behind them worth tuning into as they evolve. I tend to not push my clients to work on voice or bot applications if they aren’t very far along in their API journey, but I do make sure they understand that one of the reasons they are doing APIs is to support a wide and evolving range of clients, and that at some point they’ll have to begin studying how voice, bots, and other conversational approaches will be a client they have to consider a little more in their overall strategy.


Air, An Asthma API

You don’t find me showcasing specific APIs often. I’m usually talking about an API because of their approach to the technology, business, or politics of how they do APIs. It just isn’t my style to highlight APIs unless I think they are interesting, and delivering value that is worth talking about, or possibly reflecting a meaningful trend that is going on. In this case it is a useful API that I think brings value, and also one I can showcase to non-developer folks as a meaningful example of what an API can do.

The API I’m talking about today is the Air API, an asthma API from Propeller, which provides a set of free tools to help people understand current asthma conditions in their neighborhoods. The project is led by the Propeller data scientists and clinical researchers, who are looking to leverage the Air API to help predict how asthma may be affected by local conditions, and it includes a series of tools that share local asthma conditions, ranging from an email or text subscription, to an embeddable Air Widget for other websites.

The Air API provides an easy to explain example of what is possible with APIs. Environmental APIs will continue to be an important aspect of doing APIs, aggregating sensor and other data to help us understand the air, water, weather, and other critical environmental factors that impact our lives each day. I like the idea of these APIs being open and available to 3rd party developers to build tools on top of, while the platforms use them as a marketing vehicle for their other products and services, all while making sure to keep the valuable data accessible to everyone.

I’ll put the Air API into my toolbox of APIs I use to help onboard folks with APIs. If they are impacted by asthma, or know someone who is, it helps make the personal connection, which can be important when on-boarding folks with the abstract concepts surrounding APIs. People tend to not care about technology until it makes an impact on their world. Which is one reason I think healthcare and environment APIs are going to play an important role in the sector for years to come. They provide a rich world of data, content, and algorithms, that can be exposed via APIs, and be applied in meaningful ways in people’s lives. Leaving a (hopefully) positive impression on folks about what APIs can do.


Everything Is Headless In A Decoupled API World

It is always funny how long some concepts take to fully capture my attention. Sometimes I understand a concept on the surface, but never really invest the time into thinking deeply about how it actually fits into the big picture of my API research. One of these concepts is “headless”, most commonly applied to the “headless CMS”. The wikipedia entry for headless CMS proclaims, “a Headless CMS is a back-end only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device.”

A headless CMS is basically API-first, but instead of the API being the first focus, the entire CMS, and the administrative system for managing the content, gets top billing, with the API playing a critical supporting role. It’s what I am always prescribing as a way for API providers to consider the relationship between their applications and their backend API resources. Decoupling your apps from the backend resources. The administration interface for the content management system is one application, and each of your web, mobile, or other applications act as independent solutions. I see headless as just a business view of doing APIs, which makes sense when selling the concept to normals.
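
As a simple way to picture the decoupling, here is a sketch of a front-end build step pulling content from a hypothetical headless CMS API and writing it out as a data file a static site generator could render. The endpoint and response shape are entirely made up for illustration.

```python
# Sketch: a static site pulling content from a hypothetical headless CMS API.
# The endpoint and JSON shape are made-up assumptions for illustration only.
import json
import pathlib
import requests

CMS_API = "https://cms.example.com/api/posts"  # hypothetical endpoint

posts = requests.get(CMS_API, params={"status": "published"}).json()

output_dir = pathlib.Path("_data")
output_dir.mkdir(exist_ok=True)

# Write the content as a data file the website build can consume, keeping the
# CMS, the API, and the rendered site decoupled from one another.
(output_dir / "posts.json").write_text(json.dumps(posts, indent=2))
```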

I’m adding a research area for headless that augments my API deployment research. Some of the open source implementations I’ve come across like Directus are pretty slick. It’s a pretty quick way to go from API to something that immediately benefits the average user in as short a time as possible. Headless is just another way to frame the API conversation in the context of delivering an internal content management system, over getting 3rd party developers to build applications on top. It is something that is still possible because there is an API behind it, but it is not the primary objective in any headless implementation.

While the main definition of headless is about having an API behind everything, I am also finding examples where Github is the backend, instead of just a RESTful API. Prose.io, which is actually the CMS I use for API Evangelist, is considered a headless CMS, but Github acts as my backend. Some of the headless CMS solutions I’ve used that depend on Github use Git as the connection, while others depend on the Github API for reading and writing data and content. This reflects how I use Github across my projects, beyond just Prose.io, except I am depending on web APIs, Github, as well as Google Sheets for the backend of Jekyll driven websites and applications.

I have been tracking on headless CMS stories for almost two years now, but after diving a little deeper this last week, it finally clicked for me. I feel headless is a good way to help the world move past WordPress, and embrace a more decoupled way of delivering websites, mobile, and other types of applications. I’m going to deploy Directus so that I can better understand headless as an approach to deploying APIs, and see about deploying basic demo implementations that I can point to. I’m hoping to use it as a way to introduce folks to APIs where a public API is not the first objective, allowing them to get their feet wet with a simple API and website implementation. Helping deliver value without all the immediate risk, allowing folks to learn about how APIs can drive one or many applications in a more controlled internal environment.


Obfuscating The Evolving Code Behind My API

I’m dialing in a set of machine learning APIs that I use to obfuscate and distort the images I use across my storytelling. The code is getting a little more hardened, but there is still so much work ahead when it comes to making sure it does exactly what I need it to do, with only the dials and controls I need–nothing more. I’m the only consumer of my APIs, which I use daily, with updates made to the code along the way, evolving the request and response structures until they meet my needs. Eventually the APIs will be done (enough), and I’ll stop messing with them, but that will take a couple more months of pushing the code forward.

While the code for these APIs is far from finished, I find the API helps obfuscate and diffuse the unfinished nature of things. The API maintains a single set of paths, and while I might still evolve the number of parameters it accepts, and the fields it outputs, the overall interface will keep a pretty tight surface area despite the perpetually unfinished backend. I like this aspect of operating APIs, and how they can be used as a facade, allowing you to maintain one narrative on the front-end, even with another occurring behind the scenes. I feel like API facades really fit with my style of coding. I’m not really an A grade programmer, more a B- level one, but I know how to get things to work–if I have a polished facade, things look good.
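
Here is a minimal sketch of the facade idea: a public-facing function that keeps its request and response structure fixed while the internal implementation it wraps keeps changing. The names and fields are hypothetical, standing in for the evolving image manipulation code behind the scenes.

```python
# Sketch of an API facade: the public contract stays stable while the
# internal implementation churns. Names and fields are hypothetical.
from typing import Dict


def _transform_image_v3(image_url: str, intensity: float) -> str:
    # Stand-in for the evolving backend code (model wrappers, image pipeline).
    # This function gets rewritten often; callers never see those changes.
    return f"{image_url}?distorted={intensity}"


def obfuscate_image(payload: Dict) -> Dict:
    """Public facade: accepts and returns the same structure release after release."""
    result_url = _transform_image_v3(
        image_url=payload["image_url"],
        intensity=float(payload.get("intensity", 0.5)),
    )
    # The response keys are part of the contract and do not change,
    # even when the code behind them does.
    return {"image_url": payload["image_url"], "result_url": result_url, "status": "ok"}


print(obfuscate_image({"image_url": "https://example.com/cat.jpg"}))
```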

Honestly, I’m pretty embarrassed by my wrappers for TensorFlow. I’m still figuring everything out, and finding new ways of manipulating and applying TensorFlow models, so my code is rapidly changing, maturing, and not always complete. When it comes to the API interface I am focused on not introducing breaking changes, and maintaining a coherent request and response structure for the image manipulation APIs. Even though the code will at some point be stabilized, with updates becoming less frequent, the code will probably not see the light of day on Github, but the API might eventually be shared beyond just me, providing a simple, usable interface someone else can use.

I wouldn’t develop APIs like this outside a controlled group of consumers, but I find being both API provider and consumer pushes me to walk a line between an unfinished and evolving backend, and a stable, simple API without any major breaking changes. The fewer fixes I have to make to my API clients, the more successful I’ve been at pushing forward the backend, while keeping the API in forward motion without the friction. I guess I just feel like the API makes for a nice wrapper for code, acting as a shock absorber for the ups and downs of evolving code, and getting it to do what you need. Providing a nice facade that obfuscates the evolving code behind my API.


Provide An Open Source Threat Information Database And API Then Sell Premium Data Subscriptions

I was doing some API security research and stumbled across vFeed, a “Correlated Vulnerability and Threat Intelligence Database Wrapper”, providing a JSON API of vulnerabilities from the vFeed database. The approach is a Python API, and not a web API, but I think it provides an interesting blueprint for open source APIs. What I found (somewhat) interesting from the vFeed approach was the fact that they provide an open source API and database, but if you want a production version of the database with all the threat intelligence you have to pay for it.

I would say their technical and business approach needs a significant amount of work, but I think there is a workable version of it in there. First, I would create Python, PHP, Node.js, Java, Go, and Ruby versions of the API, making sure it is a web API. Next, remove the production restriction on the database, allowing anyone to deploy a working edition, just minus all the threat data. There is a lot of value in there being an open source set of threat intelligence sharing databases and APIs. Then after that, get smarter about having a variety of different free and paid data subscriptions, not just a single database–leverage the API presence.

You could also get smarter about how the database and API enables companies to share their threat data, plugging it into a larger network, making some of it free, and some of it paid–with revenue share all around. There should be a suite of open source threat information sharing databases and APIs, and a federated network of API implementations. Complete with a wealth of open data for folks to tap into and learn from, but also with some revenue generating opportunities throughout the long tail, helping companies fund aspects of their API security operations. Budget shortfalls are a big contributor to security incidents, and some revenue generating activity would be positive.

So, not a perfect model, but enough food for thought to warrant a half-assed blog post like this. Smells like an opportunity for someone out there. Threat information sharing is just one dimension of my API security research where I’m looking to evolve the narrative around how APIs can contribute to security in general. However, there is also an opportunity for enabling the sharing of API related security information, using APIs. Maybe also generating revenue along the way, helping feed the development of tooling like this, maybe funding individual implementations and threat information nodes, or possibly even funding more storytelling around the concept of API security as well. ;-)


Their Security Practices Are Questionable But Their Communication Is Unacceptable

I study the API universe every day of the week, looking for common patterns in the way people are using technology. I study almost 100 stops along the API lifecycle, looking for healthy practices that companies, organizations, institutions, and government agencies can follow when dialing in their API operations. Along the way I am also looking for patterns that aren’t so healthy, which are contributing to many of the problems we see across the API sector, but more importantly the applications and devices that they are delivering valuable data, content, media, and algorithms to.

One layer of my research is centered around studying API security, which also includes keeping up with vulnerabilities and breaches. I also pay attention to cybersecurity, which is a more theatrical version of regular security, with more drama, hype, and storytelling. I’ve been reading everything I can on the Equifax, Accenture, and other scary breaches, and like the other areas of the industry I track on, I’m beginning to see some common patterns emerge. It is something that starts with the way we use (or don’t use) technology, but then is significantly amplified by the human side of things.

There are a number of common patterns that contribute to these breaches on the technical side, such as not enough monitoring, logging, and redundancy in security practices. However, there are also many common patterns emerging from the business approach by leadership during security incidents, and breaches. These companies’ security practices are questionable, but I’d say the thing that is the most unacceptable about all of these is the communication around these security events. I feel like they demonstrate just how dysfunctional things are behind the scenes at these companies, but also demonstrate their complete lack of respect and concern for the individuals who are impacted by these incidents.

I am pretty shocked by seeing how little some companies are investing in API security. The lack of conversation from API providers about their security practices, or lack of, demonstrates how much work we still have to do in the API space. It is something that leaves me concerned, but willing to work with folks to help find the best path forward. However, when I see companies do all of this, and then do not tell people for months, or years after a security breach, and obfuscate, and bungle the response to an incident, I find it difficult to muster up any compassion for the situations these companies have put themselves in. Their security practices are questionable, but their communication around security breaches is unacceptable.


The gRPC Meetup Kit

I wrote about Tyk’s API surgery meetups last week, adding a new approach to our API event and workshop toolbox, and next I wanted to highlight the gRPC Meetup Kit, a resource for creating your own gRPC event. gRPC is an approach out of Google for designing, delivering, and operating high performance APIs. If you look at the latest wave of APIs out of Google you’ll see they are all REST and/or gRPC. Most of them are dual speed, providing both REST and gRPC. gRPC is an open source initiative, but very much a Google led effort that we’ve seen picking up momentum in 2017.

While I am keeping an eye on gRPC itself, this particular story is about the concept of providing a Meetup kit for your API related service or tooling, providing an “In a Box” solution that anyone can use to hold a Meetup. The gRPC team provides three groups of resources:

gRPC 101 Presentation

  • Talk - A 15 minute course introduction video.
  • Slides - Slides that go along with the talk.
  • Codelab - A 45m codelab that attendees can complete using Cloud Shell.

Resources and community

Request Support for Your Event

It provides a nice blueprint for what is needed when crafting your own Meetup Kit as well as some material you could weave into any other type of Meetup, or workshop that might contain gRPC. Maybe an API design and protocol workshop, where you cover all of the existing approaches out there today like REST, Hypermedia, gRPC, GraphQL, and others. If nothing else the gRPC Meetup Kit provides a nice forkable project, that you could use as scaffolding for your own kit.

As I mentioned in my piece about Tyk, I don’t think Meetups are going anywhere as a tool for API providers, and API service providers, to reach a developer audience. However, I think we are going to have to get more creative about how we organize, produce, and incentivize others to put them on. They are a great vehicle for bringing together folks in a community to learn about technology, but we have to make sure they are delivering value for people who show up. I am guessing that a little planning, and evolving a toolkit using Github, is a good way to approach putting on Meetups, workshops, and other small events around your products, services, and tooling.


The API Coaches At Capital One

API evangelism, and even advocacy, at many organizations has always been a challenge to introduce, because many groups aren’t really well versed in the discipline, and oftentimes it tends to take on a more marketing or even sales like approach, which can hurt its impact. I’ve worked with groups to rebrand, and change how they evangelize APIs internally, with partners, and the public, trying to ensure the efforts are more effective. While I still bundle all of this under my API evangelism research, I am always looking for new approaches that push the boundaries, and evolve what we know as API evangelism, advocacy, outreach, and other variations.

I was introduced to a new variation of the internal API evangelism concept a few weeks back while at Capital One talking with my friend Matthew Reinbold (@libel_vox) about their approach to API governance. His team at the Capital One API Center of Excellence has the concept of the API coach, and I think Matt’s description from his recent API governance blueprint story sums it up well:

At minimum, the standards must be a journey, not a destination. A key component to “selective standardization” is knowing what to select. It is one thing for us in our ivory tower to throw darts at market forces and team needs. It is entirely another to repeatedly engage with those doing the work.

Our coaching effort identifies those passionate practitioners throughout our lines of business who have raised their hands and said, “getting this right is important to my teams and me”. Coaches not only receive additional training that they then apply to their teams. They also earn access to evolving our standards.

In this way, standards aren’t something that are dictated to teams. Teams drive the standards. These aren’t alien requirements from another planet. They see their own needs and concerns reflected back at them. That is an incredibly powerful motivator toward acceptance and buy-in.

A significant difference here between internal API evangelism and API coaching is you aren’t just pushing the concept of APIs (evangelizing), you are going the extra mile to focus on healthy practices, standards, and API governance. Evangelism is often seen as an API provider to API consumer effort, which doesn’t always translate to API governance internally across organizations who are developing, deploying, and managing APIs. API coaches aren’t just developing API awareness across organizations, they are cultivating a standardized, bottom up, as well as top down awareness around providing and consuming APIs. Providing a much more advanced look at what is needed across larger organizations, when it comes to outreach and communication.

Another interesting aspect of Capital One’s approach to API coaching is that this isn’t just about top down governance, it has a bottom up, team-centered, and very organic approach to API governance. It is about standardizing, and evolving culture across many organizations, but in a way that allows teams to have a voice, and not just be mandated what the rules are, and required to comply. The absence of this type of mindset is the biggest contributor to the lack of API governance we see across the API community today. This is what I consider the politics of APIs, something that often trumps the technology of all of this.

API coaching augments my API evangelism research in a new and interesting way. It also dovetails with my API design research, and begins rounding off a new area I’ve wanted to add for some time, but just have not seen enough activity in to warrant doing so–API governance. I’m not a big fan of the top down governance that was given to us by our SOA grandfathers, and the API space has largely been doing alright without the presence of API governance, but I feel like it is approaching the phase where a lack of governance will begin to do more harm than good. It’s a drum I will start beating, with the help of Matt and his team’s work at Capital One. I’m going to reach out to some of the other folks I’ve talked with about API governance in the past, and see if I can produce enough research to get the ball rolling.


Explore, Download, API, And Share Data

I’m regularly looking through API providers, service providers, and open data platforms looking for interesting ways in which folks are exposing APIs. I have written about Kentik exposing the API call behind each dashboard visualization for their networking solution, as well as CloudFlare providing an API link for each DNS tool available via their platform. All demonstrating healthy ways we can show how APIs are right behind everything we do, and today’s example of how to provide API access is out of New York Open Data, providing access to 311 service requests made available via the Socrata platform.

The page I’m showcasing provides access to 311 service requests from 2010 to present, with all the columns and metadata for the dataset, complete with a handy navigation toolbar that lets you view data in Carto or Plot.ly, download the full dataset, access it via API, or simply share it via Twitter, Facebook, or email. It is a pretty simple example of offering up multiple paths for data consumers to get what they want from a dataset. Not everyone is going to want the API. Depending on who you are you might go straight for the download, or opt to access via one of the visualization and charting tools. Depending on who you are targeting with your data, the list of tools might vary, but the NYC OpenData example via Socrata provides a nice example to build upon. With the most important message being do not provide only the options you would choose–get to know your consumers, and deliver the solutions they will also need.
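
For the API path specifically, here is a rough sketch of pulling a few rows from that dataset through Socrata’s SODA API. The dataset identifier and field names are assumptions on my part, so use whatever the API tab on the page actually gives you.

```python
# Sketch: pulling a page of NYC 311 service requests from the Socrata SODA API.
# The dataset identifier and field names are assumptions; check the API tab.
import requests

DATASET_ID = "erm2-nwe9"  # assumed id for the 311 Service Requests dataset
url = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json"

params = {
    "$limit": 5,
    "$order": "created_date DESC",  # field name assumed from the dataset columns
}

for row in requests.get(url, params=params).json():
    print(row.get("complaint_type"), "-", row.get("created_date"))
```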

It provides a different approach to making the APIs behind things available to users than the Kentik or CloudFlare approaches do, but it adds to the number of examples I have to show people how APIs, and API enabled integration, can be exposed through the UI, helping educate the masses about what is possible. I could see standardized buttons, drop downs, and other embeddable tooling emerge for helping deliver solutions like this for providers. Something like we are seeing with the serverless webhooks out of Auth0 Extensions. Some sort of API-enabled goodness that triggers something, and can be easily embedded directly into any existing web or mobile application, or possibly a browser toolbar–opening up API enabled solutions to the average user.

One of the reasons I keep showcasing examples like this is that I want to keep pushing back on the notion that APIs are just for developers. Simple, useful, and relevant APIs are not beyond what the average web application user can grasp. They should be present behind every action, visualization, and dataset made available online. When you provide useful integration and interoperability examples that make sense to the average user, and give them easy to engage buttons, drop downs, and workflows for implementing, more folks will experience the API potential in their world. The reasons us developers and IT folk keep things complex, and outside the realm of the normal folk, are more about us, our power plays, and our inability to simplify things so that they are accessible beyond those in the club.


Connecting Service Level Agreements To API Monitoring

Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don’t, then you should not be reinventing the wheel, and you should be leveraging one of the existing API monitoring services on the market. When you are getting started with monitoring your APIs I recommend you begin with uptime and downtime, and once you deliver successfully on that front, I recommend you work on API performance, and the responsiveness of your APIs.

You should begin with making sure you are delivering the service level agreement you have in place with your API consumers. What, you don’t have a service level agreement? No better time to start than now. If you don’t already have an explicitly stated SLA in place, I recommend creating one internally, and seeing what you can do to live up to your API SLA, then once you ensure things are operating at acceptable levels, you can share it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.
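Even before bringing in a service provider, an internal SLA check can be as simple as periodically hitting an endpoint and comparing what you see against the targets you wrote down. Here is a minimal Python sketch of that idea; the endpoint URL and the availability and latency thresholds are hypothetical placeholders, not recommendations.

```python
# A minimal sketch of checking an API against an internally defined SLA.
# The endpoint URL and thresholds are hypothetical placeholders.
import time
import requests

ENDPOINT = "https://api.example.com/v1/status"   # hypothetical endpoint
SLA_AVAILABILITY = 0.995                          # 99.5% of checks must succeed
SLA_LATENCY_MS = 500                              # responses should come back in under 500ms

def run_checks(count=100):
    successes, fast_enough = 0, 0
    for _ in range(count):
        start = time.time()
        try:
            response = requests.get(ENDPOINT, timeout=5)
            elapsed_ms = (time.time() - start) * 1000
            if response.ok:
                successes += 1
                if elapsed_ms <= SLA_LATENCY_MS:
                    fast_enough += 1
        except requests.RequestException:
            pass  # count as a failed check
        time.sleep(1)  # space the checks out
    return successes / count, fast_enough / count

availability, responsiveness = run_checks()
print(f"availability: {availability:.3f} (target {SLA_AVAILABILITY})")
print(f"within latency target: {responsiveness:.3f}")
```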

To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, who is obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you’ve met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time you have the tools to help you regularly report internally, as well as externally to your API consumers.

I wish that every stop along the API lifecycle had a common definition for describing that specific aspect of a service level agreement, and was something multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I’d like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I’d love for API plans, pricing, and even terms of service to be measurable and reportable in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I’d love to see the concept of the SLA evolve to cover all aspects of the quality of service beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something we should be emulating more across our API operations.


Algorithmic Observability Should Work Like Machine Readable Food Labels

I’ve been doing a lot of thinking about algorithmic transparency, as well as a more evolved version of it I’ve labeled as algorithmic observability. Many algorithmic developers feel their algorithms should remain black boxes, usually due to intellectual property concerns, but in reality the reasons will vary. My stance is that algorithms should be open source, or at the very least have some mechanisms for auditing, assessing, and verifying that algorithms are doing what they promise, and that algorithms aren’t doing harm behind the scenes.

This is a concept I know algorithm owners and creators will resist, but algorithmic observability should work like food labels, just in a more machine readable way, allowing them to be validated by other external (or internal) systems. Similar to food you buy in the store, you shouldn’t have to give away the whole recipe and secret sauce behind your algorithm, but there should be all the relevant data points, inputs, outputs, and other “ingredients” or “nutrients” that go into the resulting algorithm. I have talked about algorithm attribution before, and I think there should be some sort of algorithmic observability manifest, which provides the “label” for an algorithm in a machine readable format. It should give all the relevant sources, attribution, as well as inputs and outputs for an algorithm–with different schema for different industries.
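To make the idea a little more concrete, here is a sketch of what such a machine readable "label" might contain, expressed as a simple Python structure. The schema, field names, and audit endpoint are entirely hypothetical, just to illustrate the kinds of "ingredients" a label could disclose and how an external system might run a basic check against it.

```python
# A hypothetical sketch of a machine readable "label" for an algorithm.
# The schema here is made up purely for illustration.
algorithm_label = {
    "name": "image-classification-v3",
    "provider": "example.com",
    "attribution": [
        {"source": "ImageNet", "role": "training data"},
        {"source": "open source base model X", "role": "base model"},
    ],
    "inputs": [
        {"name": "image", "type": "jpeg/png", "personal_data": False},
    ],
    "outputs": [
        {"name": "labels", "type": "list of strings with confidence scores"},
    ],
    "intended_use": "tagging product photos",
    "excluded_use": ["facial recognition", "surveillance"],
    "audit_endpoint": "https://api.example.com/v3/audit",  # sandboxed instance for verification
}

def required_fields_present(label):
    """A trivial check an external auditor's system might run against the label."""
    required = {"name", "provider", "attribution", "inputs", "outputs", "intended_use"}
    return required.issubset(label.keys())

print(required_fields_present(algorithm_label))
```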

In addition to there being an algorithmic observability “label” available for all algorithms, there should be live, or at least virtualized, sandboxed instances of the algorithm for verification, and auditing of what is provided on the label. As we saw with the Volkswagen emissions scandal, algorithm owners could cheat, but it would provide an important next step for helping us understand the performance, or lack of performance when it comes to the algorithms we are depending on. Why I call this algorithmic observability, instead of algorithmic transparency, is that each algorithm should be observable using its existing inputs and outputs (API), and not just be a “window” you can look through. It should be machine readable, and audit-able by other systems in real time, and at scale. Going beyond just being able to see into the black box, to being able to assess and audit what is occurring in real time.

Algorithmic observability regulations would work similar to what we see with food and drugs, where if you make claims about your algorithms, they should have to stand up to scrutiny. Meaning there should be standardized algorithmic observability controls for government regulators, industry analysts, and even the media to step up and assess whether or not an algorithm lives up to the hype, or is doing some shady things behind the scenes. Ideally this would be something that technology companies would do on their own, but based upon my current understanding of the landscape, I’m guessing that is highly unlikely, and it will be something that has to get mandated by government in a variety of critical industries incrementally. If algorithms are going to be impacting our financial markets, elections, legal systems, education, healthcare, and other essential industries, we are going to have to begin the hard work of establishing some sort of framework to ensure they are doing what is being sold, and not hurting or exploiting people along the way.


A Guest Blogger Program To Create Unique Content For Your API

Creating regular content for your blog is essential to maintaining a presence. If you don’t publish regularly, and refresh your content, you will find your SEO, and wider presence, quickly becoming irrelevant. I understand that unlike me, many of you have jobs, and responsibilities when it comes to operating your APIs, and carving out the time to craft regular blog posts can be difficult. To help you in your storytelling journey I am always looking for other ways to help alleviate this pain, while helping keep your blog active, and ensuring folks will continue stumbling across your API, or API service, while Googling, or on social media.

Another interesting example of how to keep your blog fresh came from my partners over at Runscope, who conducted a featured guest blog post series, where they were paying API community leaders to help “create an incredible resource of blog posts about APIs, microservices, DevOps, and QA”–which has produced a handful of interesting posts.

One thing to note is that Runscope paid $500.00 per post to help raise the bar when it comes to the type of author that will step up for such an effort. I’ve seen companies try to do this before, offering gift cards, swag, and even nothing in return, with varying grades of success and failure. I’m not saying a guest author program for your blog will always yield the results you are looking for, but it is a good way to help build relationships with your community, and help augment your existing workload, with some regular storytelling on the blog.

A guest blogger program is a tool I will be adding to my API communications research, expanding on the tools API operators have in their toolbox to keep their communication strategies active. An active blog does more than just educate your community, and boost your SEO. An active blog that is informative and relevant shows that there is someone home behind an API, and that they are investing in the platform. While there are exceptions, the clearest sign that an API will soon be deprecated, or does not have the resources to support consumers properly, is when the blog hasn’t been updated in the last six months. While I’m reviewing, indexing, and learning about different APIs, when I come across an inactive blog, or Twitter account for an API, I’ll almost always keep moving, feeling like there really isn’t much worthwhile there, as it will soon be gone.


Treating Your APIs Like They Are Infrastructure

We all (well, most of us) strive to deliver as stable an API presence as we possibly can. It is something that is easier said than done. It is something that takes caring, as well as the right resources, experience, team, management, and budget to do APIs just right. It is something the API idols out there make look easy, when they have really invested a lot of time and energy into developing an agile, yet scalable approach to ensuring APIs stay up and running. Something that you might be able to achieve with a single API, but can easily be lost between each API version, as we steer the ship forward.

I spend a lot of time at the developer portals of these leading API providers looking for interesting insight into how they are operating, and I thought Stripe’s vision around versioning their API was worth highlighting. Specifically their quote about treating your APIs like they are real life physical infrastructure.

“Like a connected power grid or water supply, after hooking it up, an API should run without interruption for as long as possible. Our mission at Stripe is to provide the economic infrastructure for the internet. Just like a power company shouldn’t change its voltage every two years, we believe that our users should be able to trust that a web API will be as stable as possible.”

This is possible. This is how I view Amazon S3, and Pinboard. These are two APIs I depend on to make my business work. Storage and bookmarking are two essential resources in my world, and both these APIs have consistently delivered stable API infrastructure, that I know I can depend on. I think it is also interesting to note that one is a tech giant, while the other is a viable small business (not startup). Demonstrating for me that there isn’t a single path to being a reliable, stable, API provider, despite what some folks might believe.
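From the consumer side, Stripe's date-based versioning is simple to see in practice: you pin the version your integration was built against, and the API keeps behaving that way. Here is a minimal Python sketch of that pattern; the API key and the version date below are placeholders, not working values.

```python
# A minimal sketch of pinning a date-based API version on each request,
# the way Stripe's versioning works from the consumer side. The key and
# version date are placeholders.
import requests

response = requests.get(
    "https://api.stripe.com/v1/charges",
    headers={
        "Authorization": "Bearer sk_test_your_key_here",  # placeholder key
        "Stripe-Version": "2017-08-15",                   # pin the version you built against
    },
    params={"limit": 3},
)
print(response.status_code)
```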

I am spending a lot of time lately thinking of API infrastructure along the lines of our energy grid, and transit system. The analogies are not perfect, but I do feel like as time moves on, some of our API infrastructure will continue to become commoditized, deemed essential, and something we depend on just as much as power, gas, and other utilities. These are the APIs that will be sticking around, the ones that can prove their usefulness, and deliver reliable integrations that do not change with each funding season, or technology trend. I’m looking forward to getting beyond the wild west days of APIs, and moving into the stage where APIs are treated like they are infrastructure, not just some toy, or latest fad, and we can truly depend on them and build our businesses around them.


Learning About API Governance From Capital One DevExchange

I am still working through my notes from a recent visit to Capital One, where I spent time talking with Matthew Reinbold (@libel_vox) about their API governance strategy. I was given a walk through of their approach to defining API standards across groups, as well as how they incentivize, encourage, and even measure what is happening. I’m still processing my notes from our talk, and waiting to see Matt publish more on his work, before I publish too many details, but I think it is worth looking at from a high level view, setting the bar for other API governance conversations I am engaging in.

First, what is API governance? I personally know that many of my readers have a lot of misconceptions about what it is, and what it isn’t. I’m not interested in defining a single definition of API governance. I am hoping to help define it so that you can find a version of it that you can apply across your API operations. API governance, at its simplest, is about ensuring consistency in how you do APIs across your development groups. A more robust definition might be about having an individual or team dedicated to establishing organization-wide API standards, helping train, educate, enforce, and in the case of Capital One, measure their success.

Before you can begin thinking about API governance, you need to start establishing what your API standards are. In my experience this usually begins with API design, but should quickly also be about consistent API deployment, management, monitoring, testing, SDKs, clients, and every other stop along the API lifecycle. Without well-defined, and properly socialized API standards, you won’t be able to establish any sort of API governance that has any real impact. I know this sounds simple, but I know more API providers who do not have any kind of API design guide, or other guide for their operations, than I know API providers who have consistent guides to design, and other stops along their API lifecycle.

Many API providers are still learning about what consistent API design, deployment, and management looks like. In the API industry we need to figure out how to help folks begin establishing organization-wide API design guides, and get them on the road towards being able to establish an API governance program–it is something we suck at currently. Once API design, then deployment and management practices get defined, we can begin to realize some standard approaches to monitoring, testing, and measuring how effective API operations are. This is where organizations will begin to see the benefits of doing API governance, and it not just being a pipe dream. It is something you can’t ever realize if you don’t start with the basics, like establishing an API design guide for your group. Do you have an API design guide for your group?
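One way design guidance stops being a PDF nobody reads and starts becoming something you can actually measure is to turn a few of the rules into checks that run against your API definitions. Here is a minimal Python sketch of that idea; the two rules and the openapi.yaml filename are illustrative examples I made up, not anyone's official governance standard.

```python
# A minimal sketch of turning a couple of hypothetical API design guide
# rules into something enforceable, by linting an OpenAPI definition.
import re
import yaml  # pip install pyyaml

def lint_openapi(path):
    with open(path) as handle:
        spec = yaml.safe_load(handle)

    problems = []
    for route, operations in spec.get("paths", {}).items():
        # Rule 1: static path segments should be lower case and hyphenated.
        static_part = re.sub(r"\{[^}]+\}", "", route)  # ignore path parameters
        if not re.fullmatch(r"[/a-z0-9\-]*", static_part):
            problems.append(f"{route}: path should be lower case and hyphenated")
        for verb, operation in operations.items():
            if verb not in {"get", "post", "put", "patch", "delete"}:
                continue
            # Rule 2: every operation needs a summary so documentation stays useful.
            if not operation.get("summary"):
                problems.append(f"{verb.upper()} {route}: missing summary")
    return problems

for problem in lint_openapi("openapi.yaml"):
    print(problem)
```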

While talking with Matt about their approach at Capital One, he asked if it was comparable to what else I’ve seen out there. I had to be honest. I’ve never come across an organization that had established API design, deployment, and management practices, was actively educating and training their staff, and was then actually measuring the impact and performance of APIs, and the teams behind them. I know there are companies who are doing this, but since I tend to talk to more companies who are just getting started on their API journey, I’m not seeing any organization that is this advanced. Most companies I know do not even have an API design guide, let alone measure the success of their API governance program. It is something I know a handful of companies would like to strive towards, but at the moment API governance is more talk than it is reality.

If you are talking API governance at your organization, I’d love to learn more about what you are up to, no matter where you are at in your journey. I’m going to be mapping out what I’ve learned from Matt, and compare it with what I’ve learned from other organizations. I will be publishing it all as stories here on API Evangelist, but will also look to publish a guide and white papers on the subject as I learn more. I’ve worked with some universities, government agencies, as well as companies on their API governance strategies. API governance is something that I know many API providers are working on, but Capital One was definitely the furthest along in their journey that I have come across to date. I’m stoked that they are willing to share their story, and don’t see it as their secret sauce, because this isn’t just something that needs sharing–we need leaders to step up and show everyone else how it can be done.


Publishing Your API Road Map Using Trello

I consider a road map for any API to be an essential building block, whether it is a public API or not. You should be in the business of planning the next steps for your API in an organized way, and you should be sharing that with your API consumers so that they can stay up to speed on what is right around the corner. If you want to really go the extra mile I recommend following what Tyk is up to, with their public road map using Trello.

With the API management platform Tyk, you don’t just see a listing of their API road map, you see all the work and conversation behind the road map, using the visual collaboration platform Trello. Using their road map you can see proposed features, which is great for checking whether something you want has already been suggested, and you can get at a list of what the next minor releases will contain. Plus, using the menu bar you can get at a history of the changes the Tyk team has made to the platform, going back for the entire history of the Trello board.

Using Trello you can subscribe to, or vote up any of the cards. If you want to submit something you need to sign up and post it to the Tyk community, and then they’ll consider adding it to the proposed road map features. It is a pretty low cost, easy to make public approach to delivering a road map. Sometimes this stuff doesn’t need a complex solution, just one that provides some transparency, and helps your customers understand what is next. Tyk provides a nice example of a road map that any other API provider, or service provider can follow.
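A nice side effect of running your road map on a SaaS tool like Trello is that the road map itself becomes available via an API. Here is a minimal Python sketch of reading the lists and cards off a board through the Trello API; the board id, key, and token are placeholders you would swap in from your own Trello account, and the exact parameters are worth confirming against Trello's API documentation.

```python
# A minimal sketch of reading a public road map board through the Trello
# API. The board id, key, and token are placeholders.
import requests

BOARD_ID = "your_board_id"       # placeholder, taken from the board URL
API_KEY = "your_trello_key"      # placeholder
API_TOKEN = "your_trello_token"  # placeholder

response = requests.get(
    f"https://api.trello.com/1/boards/{BOARD_ID}/lists",
    params={"cards": "open", "key": API_KEY, "token": API_TOKEN},
)
response.raise_for_status()

for board_list in response.json():
    print(board_list["name"])
    for card in board_list.get("cards", []):
        print("  -", card["name"])
```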

Another interesting approach to delivering an API road map that I can add to my research. I’m a big fan of having many different ways of delivering the essential building blocks of API operations, using a variety of simple free or paid SaaS tools. You’d be surprised at how useful an open road map can be for your API. Even if you aren’t adding too many new features, or have a huge number of people participating, it provides an easy reference showing what is next for an API. It also shows someone is home behind the scenes, and that an API is actually active, alive, and something you should be using.


Communication Strategy Filler Using Sections Of Your API Documentation


Coming up with creative things to write about regularly on the blog, and on Twitter, when you are operating an API is hard. It has taken a lot of discipline to keep posts going up on API Evangelist regularly for the last seven years–totaling almost 3K stories told so far. I don’t expect every API provider to have the same obsessive compulsive disorder that I do, so I’m always looking for innovative things that they can do to communicate with their API communities–something Amazon Web Services is always good at, providing healthy examples that I feel I can showcase.

One thing the AWS team does on a regular basis is tweet out links to specific areas of their documentation that help users accomplish specific things with AWS APIs. The AWS security team is great at doing this, with recent examples focusing on securing things with the AWS Directory Service, and AWS Organizations. Each contains a useful description, an attractive looking image, and a link to a specific page in the documentation that helps you learn more about what is possible.

I have been pushing myself to make sure all headers, and sub headers in my API documentation have anchors, so that I can not just link to a specific page, but link to a specific section, path, or other relevant item within my API documentation. This helps me in my storytelling when I’m looking to reference specific topics, and would help when it comes to tweeting out regular elements across my documentation. I’m slowly going to phase out some of the lower grade tweets of curated news that I push out, and replace them with relevant work I do in specific areas of my research–using my own work to fill the cracks over the less than exciting things I may come across in the API space.
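If your documentation is markdown driven, generating those anchors can be automated. Here is a minimal Python sketch that walks the headers in a markdown page and prints a slug for each one; the slug rules only approximate what Jekyll or GitHub generate, so treat it as an illustration rather than an exact match.

```python
# A minimal sketch of generating anchor slugs for the headers in a markdown
# documentation page, so each section can be linked to (and tweeted) directly.
import re

def slugify(header_text):
    slug = header_text.strip().lower()
    slug = re.sub(r"[^\w\s-]", "", slug)   # drop punctuation
    slug = re.sub(r"\s+", "-", slug)       # spaces become hyphens
    return slug

markdown = """
# Authentication
## Getting Your API Keys
## OAuth 2.0 Flows
"""

for line in markdown.splitlines():
    match = re.match(r"(#+)\s+(.*)", line)
    if match:
        title = match.group(2)
        print(f"{title} -> #{slugify(title)}")
```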

Tweeting out what is possible with your API, with links to specific sections of your API documentation, is something I’m going to add to my API communication research. It provides a common building block that other API providers, or even API service providers, can consider when looking for things to fill the cracks in their platform communication strategy. It is simple, useful to API consumers, and is an easy way to keep the Tweet stream regularly flowing, helping developers understand what is possible. I feel it should also impact how we craft our documentation, as well as our communication strategies, making sure we are publishing with appropriate titles and anchors so we can easily reference and cite the valuable information we are making available across our platforms.


Publish, Share, Monetize Machine Learning APIs

I’ve been playing with TensorFlow for over a year now, specifically when it comes to working with images and video, and it has been something that has helped me understand what things look like behind the algorithmic curtain that seems to be part of a growing number of tech marketing strategies right now. Part of this learning is exploring beyond Google’s approach, who is behind TensorFlow, and understanding what is going on at AWS, as well as Azure. I’m still getting my feet wet learning about what Microsoft is up to with their platform, but I did notice one aspect of the Azure Machine Learning Studio that encourages developers to “publish, share, monetize” their ML models. While I’m sure there will be a lot of useless vaporware being sold within this realm, I’m simply seeing it as the next step in API monetization, and specifically the algorithmic evolution of being an API provider.

As the label says in the three ML models for sale in the picture, this is all experimental. Nobody knows what will actually work, or even what the market will bear. However, this is something APIs, and the business of APIs excel at. Making a digital resource available to consumers in a retail, or even wholesale way via marketplaces like Azure and AWS, then playing around with features, pricing, and other elements, until you find the sweet spot. This is how Amazon figured out the whole cloud computing game, and became the leader. It is how Twilio, Stripe, and other API as a product companies figured out what developers needed, and what these markets would bear. This will play out in marketplaces like Azure and Google, as well as startup players like Algorithmia–which is where I’ve been cutting my teeth, and learning about ML.

The challenge for ML API entrepreneurs will be helping consumers understand what their models do, or do not do. I see it as an opportunity, because there will be endless amounts of vaporware, ML voodoo, and smoke and mirrors trying to trick consumers into buying something, as well as endless traps when it comes to keeping them locked in. If you are actually doing something interesting with ML, and it actually provides value in the business world, and you provide clear, concise, no BS language about what it does–you are going to do well. The challenge for you will be getting found in the mountains of crap that is emerging, and differentiating yourself from the smoke and mirrors that we are already seeing so much of. Another challenge you’ll face is navigating the vendor platform course set up by AWS, Google, and Azure as they battle it out for dominance–a game that many of us little guys will have very little power to change or steer.

It is a game that I will keep a close eye on. I’m even pondering publishing a handful of image manipulation models I’ve been working on. IDK. I do not think they are quite ready, and I’m not even entirely sure they are something I want widely used. I’m kind of enjoying using them in my own work, providing me with images I can use in my storytelling. I don’t think the ROI is there yet in the ML API game, and I’ll probably just keep being a bystander, and analyst on the sideline until I see just the right opportunity, or develop just the right model I think will stand out. After seven years of doing API Evangelist I’m pretty good at seeing through the BS, and I’m thinking this experience is going to come in handy in this algorithmic evolution of the API universe, where the magic of AI and ML put so many people under their spell.


Thinking About Why We Rate Limit Our APIs

I am helping a client think through their API management solution at the moment, so I’m working through all the moving parts of the how and why of API management solutions. The API management landscape has shifted since the last time I helped a small company navigate the process of getting up and running, so I wanted to work through each aspect and think critically before I make any recommendations. My client has a content API, which isn’t very complex, but possesses some pretty valuable data they’ve aggregated, curated, and are looking to make available via a simple web API. It is pretty clear that all developers will need a key to access the API, but I wanted to pause for a moment and think more about API rate limiting.

Why do we rate limit? The primary reason is to help manage the compute resources available for all API consumers. You don’t want any single user hitting the server too hard, and taking things down for everyone else. I’d say after that, the next major reason is to enforce API access tiers, and ensure API consumers are only consuming what they should be. Both seem like pretty dated concepts that might need re-evaluation in general, but also in the context of this particular project. There is no free access to this API. I believe there will be a public account for test driving (making a very limited number of calls), and some keys that drive their embeddable strategy, but for access to the majority of content, developers will have to register for a key, and provide a credit card to pay for their consumption. Which leaves me with the question, should we be rate limiting at all?

If users are paying for whatever they consume, and there is a credit card on file, do we want to rate limit? Why are we so worried about server capacity in a cloud world? It seems like rate limiting is a legacy constraint that has continued to live on unquestioned, and even been propped up by accounting and business decisions over simple technical ones. API access tiers with varying rate limits are sometimes imposed as part of identity and access control, limiting what new users have access to, but often they are used to corral and route users into specific, measurable account plans that help startups predict and articulate revenue to investors. I know many of my friends disagree with my thoughts on this, but I feel these accounting decisions behind rate limiting are hurting the bottom line more than they are helping. If you are focused on your API being the product, rate limiting is hurting it; if you are focused on your API consumers being your product, then it is helping it.

My client in question is looking to build an actual business that sells a product to customers, without an exit strategy, so I want to do my best to help them understand how they can reduce technical and business complexity, while maximizing revenue around the API services they are offering. If we have the API properly resourced with scalable compute, load balancing, monitoring, and checks and balances, and we also have a verified credit card on file for each API key holder, why do we want to rate limit? It seems like an unnecessary complexity for API consumers to have to wrestle with. Let’s just allow them to register, make API calls, measure, and bill accordingly. Amazon provides a clear precedent for how this works, and from my experience I tend to spend more on my AWS bill than I do with services that keep me in tiered access plans. I’m not saying tiered access plans don’t have their place, I’m saying we should be questioning their value each time we construct them, and not just assuming they should be there by default.
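To show the difference between metering and rate limiting in the plainest possible terms, here is a minimal Python sketch of metering API calls per key for billing, without ever blocking a request. The key store, pricing, and endpoint are hypothetical placeholders; in a real deployment the counters would live in the API management layer, not in memory.

```python
# A minimal sketch of metering API calls per key for billing, with no rate
# limiting. The keys, per-call price, and endpoint are hypothetical.
from collections import defaultdict

PRICE_PER_CALL = 0.001                      # hypothetical price per call
VALID_KEYS = {"key-abc", "key-def"}         # would come from the management layer
usage = defaultdict(int)                    # in production this lives in a durable store

def handle_request(api_key, endpoint):
    if api_key not in VALID_KEYS:
        return 401, {"error": "invalid API key"}   # keys can still be revoked
    usage[api_key] += 1                            # meter the call, but never block it
    return 200, {"endpoint": endpoint, "data": "..."}

def monthly_invoice(api_key):
    return usage[api_key] * PRICE_PER_CALL

handle_request("key-abc", "/content/search")
handle_request("key-abc", "/content/search")
print(monthly_invoice("key-abc"))  # 0.002
```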

A by-product of noticing how much the API management landscape has shifted recently is that I am reassessing each of the common building blocks of API management, and thinking more critically about the how and why behind their existence. There is a significant difference between rate limiting and metering API calls, and I don’t think we always have to do both. We still need the ability to turn off keys, and block specific user agents and IP addresses, but in some cases I think rate limiting shouldn’t be part of API management operations. We have the compute, storage, and database resources at our disposal to scale as we need to meet demand, and we have a credit card verified, and on file to bill against, so let’s just get out of API consumers’ way. In the case of this particular project I’m working on, I think this will be my recommendation: focus on reducing the amount of API management overhead, and simplify the load for API consumers along the way.


The API Management Landscape Has Shifted More Than I Anticipated

It is interesting to take a fresh look at the API management landscape these days. It has been a while since I’ve looked through all the providers to see where their pricing is at, and what they offer. I’d say the space has definitely shifted from what things looked like from 2012 through 2015. There are still a number of open source offerings, which there weren’t in 2012, but the old guard has solidly turned their attention to the enterprise. There are cloud solutions like Restlet, and SlashDB, which really help you get up and running from existing data sources in the cloud, but for this particular project I am looking for a simple proxy and connector approach to deploying on any infrastructure, and they don’t quite fit the requirements.

Apigee, and the other more enterprise offerings have always been out of my league, and 3Scale’s entry level package is up to $750, which is a little out of my reach, but I do know they are open sourcing their offering, now that they are part of Red Hat. There is API Umbrella, APIMan, Fusio, Monarch, and a handful of other solutions that will work, but they require a certain platform, or a specific language commitment, that doesn’t work for this project. Everything else is of the enterprise caliber, nothing really that I would recommend to my customers who are just getting started on their API journey. I’m really left with the cloud giants, which I guess is one of the main reasons we are at this junction in the evolution of API management. API management becoming a commodity has really shifted the landscape, making it more difficult to be a strong independent player, something Tyk and Kong are managing to pull off.

If my customer was looking to launch a data API from an existing database I’d point them to SlashDB or Restlet. If they are an enterprise customer I’d point them to 3Scale. Tyk is pretty much my go-to for the lower end of the market, with Kong as the alternate. If my customer is already running on Google, Azure, or AWS, then I’m pretty much telling them to stay put, and use the tooling that is available to them. Another thing I’m noticing has dropped out of prominence is the billing for API usage aspect of API management. It’s in Tyk, and 3Scale, but really wasn’t a component of many of the other solutions I’ve been looking at. Overall things seem scattered out there after the last round of API management acquisitions, the VC funding shifted, and the cloud giants stepped into the game with their solutions. I’m guessing that is just the circle of life and all.

In this new landscape I am going to have to spend more time playing with the low-end solutions that are available. It is essential that there are solutions that are accessible to folks who are just starting on their API journey. It is something that requires a climbable ladder, meaning you need to be able to afford the reach to the next rung, otherwise it can become quite a blocker. I was a big advocate for this in the early days, but stopped pushing on it because there were so many options out there. It will take some playing around to get a better feeling for where we are, before I feel good about making recommendations to new players again. A process I should probably be repeating each year, because things seem to be shifting a little more than I anticipated.


A Couple More Questions For The Equifax CEO About Their Breach

Speaking to the House Energy and Commerce Committee, former Equifax CEO Richard Smith pointed the finger at a single developer who failed to patch the Apache Struts vulnerability, saying that protocol was followed, and a single developer was responsible, shifting the blame away from leadership. It sounds like a good answer, but when you operate in the space you understand that this was a systemic failure, and you shouldn’t be relying on a single individual, or even a single piece of scanning software, to verify the patch was applied. You really should have many layers in place to help prevent breaches like we saw with Equifax.

If I was interviewing the CEO, I’d have a few other questions for him, getting at some of the other systemic and process failures based upon his lack of leadership, and awareness:

  • API Monitoring & Testing - You say the scanner for the Apache Struts vulnerability failed, but what about other monitoring and testing? The plugin in question was a REST plugin that allowed for API communication with your systems. Due to the vulnerability, extra junk information was allowed to get through. Where were your added API request and response integrity testing and monitoring processes (see the sketch after this list)? Sure you were scanning for the vulnerability, but were you keeping an eye on the details of the data being passed back and forth? API monitoring & testing has been around for many years, and service providers like Runscope do this for a living. What other layers of monitoring and testing were in place?
  • API Management - When you expose APIs like you did from Apache Struts, what does the standardized management approach look like? What sort of metering, logging, rate limiting, and analysis occurs on each endpoint, and what verification ensures that only approved clients have access? API management has been standard procedure for over a decade now for exposing APIs like this, both internally and externally. Why didn’t your API management process stop this sort of breach after only a couple hundred records went out? API management is about awareness regarding access to all your resources. You should have a dashboard, or at least some reports, that you view as a CEO on this aspect of operations.
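For readers wondering what "response integrity testing" looks like in practice, here is a minimal Python sketch of the kind of assertion a monitoring layer might run on a schedule, flagging when a response starts carrying fields it shouldn't. The endpoint and the expected field names are hypothetical.

```python
# A minimal sketch of a scheduled response integrity check. The endpoint
# and expected fields are hypothetical placeholders.
import requests

ENDPOINT = "https://api.example.com/v1/accounts/123"   # hypothetical endpoint
EXPECTED_FIELDS = {"id", "status", "created_at"}

def check_response_integrity():
    response = requests.get(ENDPOINT, timeout=5)
    assert response.status_code == 200, f"unexpected status {response.status_code}"
    payload = response.json()
    missing = EXPECTED_FIELDS - set(payload.keys())
    unexpected = set(payload.keys()) - EXPECTED_FIELDS
    assert not missing, f"missing fields: {missing}"
    # Extra fields can be an early sign something upstream changed, or that
    # junk data is leaking through -- worth alerting on either way.
    assert not unexpected, f"unexpected fields: {unexpected}"

check_response_integrity()
```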

These are just two examples of systems and processes that should have been in place. You should not be depending on a single person, or a single tool, to catch this type of security incident. There should be many layers in place, with security triggers and notifications. Your CTO should be in tune with all of these layers, and you as the CEO should be getting briefed on how they work, and have a hand in making sure they are in place. I’m guessing that your company is doing APIs, but is dramatically behind the times when it comes to commonplace API management practices. This is your fault as the CEO. This is not the fault of a single employee, or any group of employees.

I am guessing that as a CEO you are more concerned with the selling of this data than you are with securing it in storage, or transit. I’m guessing you are intimately aware of the layers that enable you to generate revenue, but you are barely investing in the technology and processes to do this securely, while respecting the privacy of your users. They are just livestock to you. They are just products on a shelf. It shows your lack of leadership to point the finger at a single person, or single piece of technology. There should have been many layers in place to catch this type of breach beyond a single vulnerability. It demonstrates your lack of knowledge regarding modern trends in how we secure and provide access to data, and you should never have been put in charge of such a large data brokerage company.


Teaching My Client Three Approaches To Modular UI Design Using Their APIs

I am working with a client to develop a simple user interface on top of a Human Services Data API (HSDA) I launched for them. They want a basic website for searching, browsing, and navigating the organizations, locations, and services available in their API. Part of this work is helping them understand how modular and configurable their web site is, with each page, or portion of a page, being a simple API call. It is taking a while for them to fully understand what they have, and the potential of evolving a web application in this way, but I feel like they are beginning to understand, and are taking the reins a little more when it comes to dictating what they want within this new world.

When I first published a basic listing of human services they were disappointed. They had envisioned a map of the listings, allowing users to navigate in a more visual way. I got to work helping them see the basic API call(s) behind the listing, and how we could use the JSON response in any way we wanted. I am looking to provide three main ways in which I can put the API data to work in a variety of web applications:

  • Server-Side - A pretty standard PHP call to the API, taking the results and rendering to the page using HTML.
  • Client-Side - Leveraging JavaScript in the browser to call the API and then render to the page using jQuery.
  • Static Push - Calling the API using PHP, then publishing as YAML or JSON to a Jekyll site and rendering with Liquid and HTML (see the sketch after this list).
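The author describes doing the static push with PHP, but the pattern itself is language agnostic, so here is a minimal Python sketch of the same idea under a few assumptions: call the API, write the results as YAML into a Jekyll site's _data folder, and let Liquid render the HTML. The API URL, the /organizations path, and the query parameters are assumptions for illustration.

```python
# A minimal sketch of the "static push" pattern: call the API, write YAML
# into a Jekyll _data folder, and let Liquid templates render the HTML.
# The API URL and parameters are assumptions.
import requests
import yaml  # pip install pyyaml

API_URL = "https://example.com/api/organizations"   # assumed HSDA endpoint

response = requests.get(API_URL, params={"page": 1, "per_page": 50})
response.raise_for_status()

with open("_data/organizations.yml", "w") as handle:
    yaml.safe_dump(response.json(), handle, default_flow_style=False)

# A Jekyll template can then loop over site.data.organizations with Liquid
# and render plain HTML, keeping the public site fast and easy to index.
```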

What the resulting HTML, CSS, and JavaScript looks like in all these scenarios is up to the individual who is in charge of dictating the user experience. In this case, it is my client. They just aren’t used to having this much control over dictating the overall user experience. Which path they choose depends on a few things, like whether they want the content to be easily indexed by search engines, or if they want things to be more JavaScript enabled magic (ie. maps, transitions, flows). All the API does is give them full access to the entire surface area of the human services schema they have stored in their database.

After moving past the public website, I’m beginning to show them the power of not just GETs via their API, but also the power of POST, PUT, and DELETE. I find it is easy to show them the potential in an administrative system using a basic form. I find people get forms. Once you show them that in order to POST they have to have a special token or key, otherwise the API will reject the request, they feel a whole lot better about the process. I find the form tends to put things into context for them, beyond displaying data and content, allowing them to actually manage all this data and content. I find the modularity of an API really lends itself to giving business users more control over the user interface. They may not be able to do everything themselves, but they tend to be more invested in the process, and enjoy more ownership over it–which is a good thing.


How API Evangelist Works

I’ve covered this topic several times before, but I figured I’d share again for folks who might have just become readers in the last year, providing an overview of how API Evangelist works, to help eliminate confusion as you are navigating around my site, as well as to help you find what you are looking for. First, API Evangelist was started in the summer of 2010 as a research site to help me better understand what is going on in the world of APIs. In 2017, it is still a research site, but it has grown and expanded pretty dramatically into a network of websites, driven by a data and content core.

The most important thing to remember is that all my sites run on Github, which is my workbench in the API Evangelist workshop. apievangelist.com is the front door of the workshop, with each area of my research existing as its own Github repository, at its own subdomain within the apievangelist.com domain. An example of this can be found in my API design research, which you will find at design.apievangelist.com. As I do my work each day, I publish my research to each of my domains, in the form of YAML data for one of these areas:

  • Organizations - Companies, organizations, institutions, programs, and government agencies doing anything interesting with APIs.
  • Individuals - The individual people at organizations, or independently doing anything interesting with APIs.
  • News - The interesting API related, or other news I curate and tag daily in my feed reader or as I browse the web.
  • Tools - The open source tooling I come across that I think is relevant to the API space in some way.
  • Building Blocks - The common building blocks I find across the organizations, and tooling I’m studying, showing the good and the bad of doing APIs.
  • Patents - The API related patents I harvest from the US Patent Office, showing how IP is impacting the world of APIs.

You can find the data for each of my research areas in the _data folder for each repository, which is then rendered as HTML for each subdomain using Liquid, via each Jekyll driven website. All of this is research. It isn’t meant to be perfect, or a comprehensive directory for others to use. If you find value in it–great!! However, it is just my research laying on my workbench. It will change, evolve, and be remixed and reworked as I see fit, to support my view of the API sector. You are always welcome to learn from this research, or even fork and reuse it in your own work. You are also welcome to submit pull requests to add or update content that you come across about your organization or open source tool.

The thing to remember about API Evangelist is it exists primarily for me. It is about me learning. I take what I learn and publish it as blog posts to API Evangelist. This is how I work through what I’m discovering as part of my research each day, and it is the vehicle I use to move my understanding of APIs forward. This is where it starts getting real. After seven years of doing this I am reaching 4K to 7K page views per day, and clearly other folks find value in reading, and sifting through my research. Because of this I have four partners, 3Scale, Restlet, Runscope, and Tyk, who pay me money to have their logo on the home page, in the navigation, and via a footer toolbar. Runscope also pays me to place a re-marketing tag on my site so they can show advertising to my users on other websites, and Facebook. This is how I pay my rent, and how I eat each month.

Beyond this base, I take my research and create API Evangelist industry guides. API Definitions, and API Design are the two most recent editions. I’m currently working on ones for data, database, as well as deployment, and management. These guides are sometimes underwritten by my partners, but mostly they are just the end result of my API research. I also spend time and energy taking what I know and crafting API strategy and white papers for clients, and occasionally I actually create APIs for people–mostly in the realm of human services, or other social good efforts. I’m not really interested in building APIs for startups, or in service of the Silicon Valley machine, even though I enjoy watching, studying, and learning from this world, because there are endless lessons regarding how we can use technology in this community, as well as how we should not be using technology.

That is a pretty basic walk through of how API Evangelist works. It is important to remember I am doing this research for myself. To learn, and to make a living. API Evangelist is a production, a persona I created to help me wade through the technology, business, and politics of APIs. It reflects who I am, but honestly is increasingly more bullshit than it is reality, kind of like the API space. I hope you enjoy this work. I enjoy hearing from my readers, and hearing how my research impacts your world. It keeps me learning each day, and keeps me from ever having to go get a real job. It is always a work in progress, and never done. Which I know frustrates some, but I find endlessly interesting, and is something that reflects the API journey, something you have to get used to if you are going to be successful doing APIs in this crazy world.


Show The API Call Behind Each Dashboard Visualization

I am a big fan of user interfaces that bring APIs out of the shadows. Historically, APIs are often a footnote in the software as a service (SaaS) world, available as a link way down at the bottom of the page, in the settings, or help areas. Rarely are APIs made a first class citizen in the operations of a web application, which really just perpetuates the myth that APIs aren’t for everybody, and that the “normals” shouldn’t worry their little heads about them. When in reality, EVERYBODY should know about APIs, and have the opportunity to put them to work, so we should stop burying the links to our APIs, and our developer areas. If your API is too technical for a business user to understand what is going on, then you should probably get to work simplifying it, not burying it and keeping it in the developer and IT realm.

I have written before about how DNS provider CloudFlare provides an API behind every feature in their user interface, and I’ve found another great example of this over at the network API provider Kentik. In their network dashboard visualization tooling they provide a robust set of options for accessing the data behind the visuals, allowing you to export, view SQL, show the API call, and enter share view. In their post, they go on to explain how you can get your API key as part of your account, as well as providing a pretty robust introduction into why APIs are important. This is how ALL dashboards should work in my opinion. Any user should be introduced to APIs, and have the ability to get at the data behind them, and export it, or directly make an API call in their browser or at the command line.
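To sketch the general pattern (not Kentik's or CloudFlare's actual implementation), a dashboard backend can return the underlying API call alongside the data it renders, so the UI has everything it needs to offer a "show API call" option. Here is a minimal Python sketch of that idea; the API base URL, parameters, and timeseries helper are hypothetical.

```python
# A minimal sketch of the pattern: the dashboard endpoint returns the
# underlying API call alongside the data, so the UI can surface it.
# The API base URL and parameters are hypothetical.
from urllib.parse import urlencode

API_BASE = "https://api.example.com/v1"   # hypothetical API

def dashboard_widget(metric, start, end):
    params = {"metric": metric, "start": start, "end": end}
    api_url = f"{API_BASE}/timeseries?{urlencode(params)}"
    data = fetch_timeseries(params)        # however the backend gets the data
    return {
        "data": data,
        "api_call": {                       # what the UI shows under "show API call"
            "method": "GET",
            "url": api_url,
            "curl": f"curl -H 'Authorization: Bearer $API_KEY' '{api_url}'",
        },
    }

def fetch_timeseries(params):
    # Stand-in for the real backend query.
    return [{"timestamp": params["start"], "value": 42}]

print(dashboard_widget("requests_per_second", "2017-10-01", "2017-10-02"))
```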

Developers like to think this stuff should be out of reach of the average user, but that is more about our own insecurities, and power trips, than it is about the average user’s ability to grasp this stuff. There is no reason why ALL user interfaces can’t be developed on top of APIs, with native functionality for getting at the API call, as well as the data, content, or algorithms behind the user interface feature. It makes for more powerful user interfaces, as well as more educated, literate, and efficient power users of our applications. If all web applications operated this way, we’d see a much more API literate business world, where users would be asking more questions, curious about how things work, and experimenting with ways they can be more successful in what they do. While I do see positive examples like Kentik out there, I also find that many web application developers are further retreating from APIs being front and center, preferring to keep them in the shadows of web and mobile applications, out of the reach of the average user. Something we need to reverse.


Big Data Is Not About Access Using Web APIs

I’m neck deep in research around data and APIs right now, and after looking at 37 of the Apache data projects it is pretty clear that web APIs are not a priority in this world. There are some projects that have web APIs, and there are a couple of projects that look to bridge several of the projects with an aggregate or gateway API, but you can tell that the engineers behind the majority of these open source projects are not concerned with access at this level. Many engineers will counter this point by saying that web APIs can’t handle the volume, and that it shows the concept isn’t applicable in all scenarios. I’m not saying web APIs should be used for the core functionality at scale, I’m saying that web APIs should be present to provide access to the result state of the core features for each of these platforms, whatever that is–something that web APIs excel at.

From my vantage point the lack of web APIs isn’t a technical issue, it is a business and political motivation. When it comes to big data the objectives are all about access, but definitely not the wide audience access that comes when you use HTTP, and the web, for API access. The objective is to aggregate, move around, and work with as much data as you possibly can amongst a core group of knowledgable developers. Then you distribute awareness, access, and usage to designated parties via distilled analysis, visualizations, or in some cases to other systems where the result can be accessed and put to use. Wide access to this data is not the primary objective, paying forward much of the power and control we currently see around database to API efforts. Big data isn’t about democratization. Big data is about aggregating as much as you can and selling the distilled down wisdom from analysis, or derived as part of machine learning efforts.

I am not saying there is some grand conspiracy here. It just isn’t the objective of big data folks. They have their marching orders, and the technology they develop reflects these marching orders. It reflects the influence money and investment have on the technology, the ideology that drives how the tech is engineered, and how the algorithms handle specific inputs, and provide intended outputs. Big data is often sold as data liberation, democratization, and access to your data, building on much of what APIs have done in recent years. However, in the last couple of years the investment model has shifted, the clients who are purchasing and implementing big data have evolved, and they aren’t your API access type of people. They don’t see wide access to data as a priority. You are either in the club, and know how to use the Apache X technology, or you are sanctioned one of the dashboard, analysis, visualization, machine learning wisdom drips from the big data. Reaching a wide audience is not necessary.

For me, this isn’t some amazing revelation. It is just watching power do what power does in the technology space. Us engineers like to think we have control over where technology goes, yet we are just cogs in the larger business wheel. We program the technology to do exactly what we are paid to do. We don’t craft liberating technology, or the best performing technology. We assume engineer roles, with paychecks, and bosses who tell us what we should be building. This is how web APIs will fail. This is how web APIs will be rendered yesterday’s technology. Not because they fail technically, but because the ideology of the hedge funds, enterprise groups, and surveillance capitalism organizations that are selling to law enforcement and the government will stop funding data systems that require wide access. The engineers will go along with it because it will be real time, evented, complex, and satisfying to engineer in our isolated development environments (IDE). I’ve been doing data since the 1980s, and in my experience this is how data works. Data is widely seen as power, and all the technical elements, and many of the human elements involved, often magically align themselves in service of this power, whether they realize they are doing it or not.


APIs Used To Give Us Access To Resources That Were Out Of Our Reach

I remember when almost all the APIs out there gave us developers access to things we couldn’t ever possibly get on our own. Some of it was about the network effect with the early Amazon and eBay marketplaces, or Flickr and Delicious, and then Twitter and Facebook. Then what really brought it home was going beyond the network effect, and delivering resources that were completely out of our reach like maps of the world around us, (seemingly) infinitely scalable compute and storage, SMS, and credit card payments. In the early days it really seemed like APIs were all about giving us access to something that was out of our reach as startups, or individuals.

While this still does exist, it seems like many APIs have flipped the table and it is all about giving them access to our personal and business data in ways that used to be out of their reach. Machine learning APIs are using parlour tricks to get access to our internal systems and databases. Voice enablement, entertainment, and cameras are gaining access to our homes, what we watch and listen to, and are able to look into the dark corners of our personal lives. Tinder, Facebook, and other platforms know our deep dark secrets, our personal thoughts, and have access to our email and intimate conversations. The API promise seems to have changed along the way, and stopped being about giving us access, and is now about giving them access.

I know it has always been about money, but the early vision of APIs seemed more honest. It seemed more about selling a product or service that people needed, and was more straight up. Now it just seems like APIs are invasive. Being used to infiltrate our professional and business worlds through our mobile phones. It feels like people just want access to us, purely so they can mine us and make more money. You just don’t see many Flickrs, Google Maps, or Amazon EC2s anymore. The new features in mobile devices we carry around, and the ones we install in our home don’t really benefit us in new and amazing ways. They seem to offer just enough to get us to adopt them, and install in our life, so they can get access to yet another data point. Maybe it is just because everything has been done, or maybe it is because it has all been taken over by the money people, looking for the next big thing (for them).

Oh no! Kin is ranting again. No, I’m not. I’m actually feeling pretty grounded in my writing lately, I’m just finding it takes a lot more work to find interesting APIs. I have to sift through many more emails from folks telling me about their exploitative API before I come across something interesting. I go through 30 vulnerability posts in my feeds before I come across one creative story about something a platform is doing. There are 55 posts about ICOs before I find an interesting investment in a startup doing something that matters. I’m willing to admit that I’m a grumpy API Evangelist most of the time, but I feel really happy, content, and am enjoying my research overall. I just feel like the space has lost its way with this big data thing, and is using APIs more for infiltration and extraction than for delivering something that actually gives developers access to something meaningful. I just think we can do better. Something has to give, or this won’t continue to be sustainable much longer.


API Providers Should Provide Observability Into Government Developer Accounts

I’ve talked about this before, but after reading several articles recently about various federal government agencies collecting, and using social media accounts for surveillance, it is a drum I will be beating a lot more regularly. Along with the transparency reports we are beginning to see emerge from the largest platform providers, I’d like to start seeing more observability regarding which accounts, both user and developer, belong to government agencies. Some platforms are good at highlighting how governments of all shapes and sizes are using their platform, and some government agencies are good at showcasing their social media usage, but I’d like to understand this from purely an API developer account perspective.

I’d like to see more observability into which government agencies are requesting API keys. Maybe not specific agencies, groups, and account details, although that would be a good idea as well down the road. I am just looking for some breakdown of how many developer accounts on a platform are government and law enforcement. What does their API consumption look like? If there is OAuth via a platform, is there any bypassing of the usual authentication flows to get at data differently than regular developers would, or without requiring user approval? From what I am hearing, I’m guessing that there are more government accounts out there than platforms either realize, or are willing to admit. It seems like now is a good time to start asking these questions.

I would add another layer to this. If an application developer is developing applications on behalf of law enforcement, or as part of a project for a government agency, there should be some sort of disclosure at this level as well. I know I’m asking a lot, and a number of people will call me crazy, but with everything going on these days, I’m feeling like we need a little more disclosure regarding how government(s), as well as their contractors, are using our platforms. The transparency reporting that platforms have been engaging in is a good start on the legal side of this conversation, but I’m looking at the darker, lower-level surveillance that I know is going on behind the scenes. The data gathering on U.S. citizens that doesn’t necessarily violate any particular law, because this is such new territory, and the platform terms of service might sanction it in some loopholy kind of way.

This isn’t just a U.S. government type of thing. I want this to be standard practice for all forms of government on the platforms we use. A sort of UN-level General Data Protection Regulation (GDPR). Which reminds me, I am woefully behind on what GDPR outlines, and how the rollout of it is going. Ok, I’ll quit ranting now, and get back to work. Overall, we are going to have to open up some serious observability into how the online platforms we are depending on are being accessed and used by the government, both on the legal side of things, as well as just general usage. Seems like the default on the general usage should always be full disclosure, but I’m guessing it isn’t a conversation anyone is having yet, which is why I bring it up. Now we are having it. Thanks.


Letting Go In An API World Is Hard To Do

I encounter a number of folks who really, really, really want to do APIs. You know, because they are cool and all, but they just can’t do what it takes to let go a little, so that their valuable API resources can actually be put to use by other folks. Sometimes this happens because they don’t actually own the data, content, or algorithms they are serving up, but in other cases it is because they view their thing as being so valuable, and so important, that they can’t share it openly enough to be accessible via an API. Even if your APIs are private, you still have to document them and share access with folks, so they can understand what is happening, and have enough freedom to put them to use in their applications as part of their business, without too many constraints and restrictions.

Some folks tell me they want to do APIs, but I can usually tell pretty quickly that they won’t be able to go the distance. I find a lot of this has to do with perceptions of intellectual property, combined with a general distrust of EVERYONE. My thing is super valuable, extremely unique and original, and EVERYONE is going to want it. Which is why they want to do APIs, because EVERYONE will want it. Also, once it is available to EVERYONE via an API, competitors, and people we don’t want getting at it, will now be able to reverse engineer and copy this amazing idea. However, if we don’t make it accessible, we can’t get rich. Dilemma. Dilemma. Dilemma. What do we do? My answer is that you probably shouldn’t be doing APIs.

You see, doing APIs, whether publicly or privately, requires letting go a bit. Sure, you can dial in how much control you are willing to give up using API management solutions, but you still have to let go enough so that people can do SOMETHING with your valuable resource. If you can’t, APIs probably aren’t your jam. They just won’t work if you don’t give your API consumers enough room to breathe while developing and operating their integrations and applications. I understand if you can’t let go. The API game isn’t for everyone, or maybe there is some other data, content, or resource you don’t feel so strongly about that you could start with, and get the hang of doing APIs before you jump in with your prized possessions.
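For what it is worth, the dialing in I’m talking about is just standard API management. Here is a rough sketch, with made-up plan names, scopes, and limits, of how you can hold plenty back while still giving consumers something to work with:

```python
# A minimal sketch of dialing in access with API management style plans.
# Plan names, scopes, and rate limits are hypothetical examples.
access_plans = {
    "public": {
        "scopes": ["read:catalog"],           # a read-only slice of the resource
        "rate_limit_per_hour": 1000,
        "approval": "automatic",
    },
    "partner": {
        "scopes": ["read:catalog", "write:orders"],
        "rate_limit_per_hour": 25000,
        "approval": "manual",                 # a human reviews the application first
    },
}

def scope_allowed(plan: str, scope: str) -> bool:
    """Check whether a consumer on a given plan can use a given scope."""
    return scope in access_plans.get(plan, {}).get("scopes", [])

print(scope_allowed("public", "write:orders"))  # False, the public plan only lets go a little
```

You keep the keys, the metering, and the ability to turn anyone off, but consumers still get enough room to actually build something.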

Another thing I might suggest is that maybe you should think twice about why these digital things are so important to you. Is it because they really matter to you, or is it because you think they’ll make you a lot of money? If it is just the latter, they are probably not very valuable if you keep them locked up. The best ideas are the ones that get used. The things that make the biggest impact get shared, and are usually pretty accessible. I’m guessing that most of your anxiety does not come from APIs, and what will happen when you launch them. I’m pretty sure it comes from some unhealthy views about what you have, the stories you’ve been told about intellectual property, and your obsession with getting rich. Again, which leaves us at the part of the story where you probably shouldn’t do APIs. I don’t think they are your jam.


Sharing Top Sections From Your API Documentation As Part Of Your Communications Strategy

I’m always learning from the API communication practices of the different AWS teams. From the regular storytelling coming out of the Alexa team, to the mythical tales of leadership at AWS that have contributed to the platform’s success, the platform provides a wealth of examples that other API providers can emulate.

As I talked about last week, finding creative ways to keep publishing interesting content to your blog as part of your API evangelism and communications strategy is hard. It is something you have to work at. One way I find inspiration is by watching the API leaders, and learning from what they do. An interesting example I recently found from the AWS security team was their approach of showcasing the top 20 AWS IAM documentation pages so far in 2017. It is a pretty simple, yet valuable way to deliver some content for your readers, one that can also help you expose the dark corners of your API documentation, and other resources, on your blog.

The approach from the AWS security team is a great way to generate content without having to come up with good ideas, and it will also help with your SEO, especially if you can cross-publish or promote through other channels. It’s pretty basic content stuff that helps with your overall SEO, and if you play it right, you could also get some SMM juice by tweeting out the story, as well as maybe a handful of the top links from your list. It is pretty listicle type stuff, but honestly, if you do it right, it will also deliver value. These are the top answers, in a specific category, that your API consumers are looking for. Helping these answers rise to the top of your blog, search engine, and social media does your consumers good, as well as your platform.
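If you don’t have a dedicated analytics team like AWS does, the same listicle can come straight out of your access logs. Here is a rough sketch that assumes a common log format file and a /documentation/ URL prefix; adjust it for however you actually track page views:

```python
# A rough sketch of surfacing the top documentation pages from web server
# access logs. The log file name and URL prefix are assumptions.
from collections import Counter

DOCS_PREFIX = "/documentation/"
hits = Counter()

with open("access.log") as log:
    for line in log:
        parts = line.split()
        # In common log format the request path is the 7th whitespace-delimited field.
        if len(parts) > 6 and parts[6].startswith(DOCS_PREFIX):
            hits[parts[6]] += 1

# The top 20 pages become the skeleton of a blog post.
for path, count in hits.most_common(20):
    print(f"{count:8d}  {path}")
```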

One more tool for the API communications and evangelism toolbox. Something you can pull out when you don’t have any storytelling mojo, and something you will need on a regular basis as an API provider, or service provider. It is one of the tricks of the trade that will keep your blog flowing, your readers reading, and hopefully your valuable APIs, products, services, and stories floating to the top of the heap. And that is what all of this is about: staying on top of the pile, keeping things relevant, valuable, and useful. If we can’t do that, it is time to go find something else to do.


Clearly Designate API Bot Automation Accounts

I’m continuing my research into bot platform observability, and how API platforms are handling (or not handling) bot automation on their platforms, as I try to make sense of each wave of the bot invasion on the shores of the API sector. It is pretty clear that Twitter and Facebook aren’t that interested in taming automation on their platforms unless there is more pressure applied to them externally. I’m looking to make sure there is a wealth of ideas, materials, and examples of how any API-driven platform can (and does) control bot automation on their platform, as the pressure from lawmakers and the public intensifies.

Requiring that users clearly identify automation accounts is a good place to start. Establishing a clear designation for bot users has its precedents, and requiring developers to provide an image, description, and some clear check or flag that identifies an account as automated just makes sense. Providing a clear definition of what a bot is, with articulate rules for what bots should and shouldn’t be doing, is next up on the checklist for API platforms. Sure, not all users will abide by this, but it is pretty easy to identify automated traffic versus human behavior, and having a clear separation allows accounts to be automatically turned off when they fit a particular fingerprint, until a user can pass a sort of platform Turing test, or provide some sort of human identification.
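As a sketch of what that designation could look like in practice, here is a hypothetical account flag paired with a crude traffic fingerprint. The field names and thresholds are made up; a real platform would tune them against its own traffic:

```python
# A hypothetical bot designation on developer accounts, plus a crude
# automation fingerprint. Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class DeveloperAccount:
    name: str
    description: str
    avatar_url: str
    is_bot: bool  # the explicit, user-declared automation flag

def looks_automated(requests_last_hour: int, distinct_endpoints: int) -> bool:
    """Very rough heuristic: high, narrowly focused volume suggests automation."""
    return requests_last_hour > 2000 and distinct_endpoints <= 3

def enforce(account: DeveloperAccount, requests_last_hour: int, distinct_endpoints: int) -> str:
    if account.is_bot:
        return f"{account.name}: declared bot, apply the platform bot policy"
    if looks_automated(requests_last_hour, distinct_endpoints):
        return f"{account.name}: undeclared automation, suspend until human verification"
    return f"{account.name}: looks human"

print(enforce(DeveloperAccount("weather-updates", "Hourly weather summaries",
                               "https://example.com/bot.png", True), 3000, 1))
print(enforce(DeveloperAccount("casual-user", "Personal account",
                               "https://example.com/me.png", False), 5000, 1))
```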

Automation on API platforms has its uses. However, unmanaged automation via APIs has proven to be a problem. Platforms need to step up and manage this problem, or the government eventually will. Then it will become yet another burdensome regulation on business, and there will be nobody to blame except for the bad actors in the space (cough Twitter & Facebook, cough, cough). Platforms tend to not see it as a problem because they aren’t the targets of harassment, and it tends to boost their metrics and bottom line when it comes to advertising and eyeballs. Platforms like Slack have a different business model, which dictates more control, otherwise automation will run their paying customers off. The technology and practices for effectively managing bot automation on API platforms already exist; they just aren’t ubiquitous because of the variances in how platforms generate their revenue.

I am going to continue to put together a bot governance package based upon my bot API research. Regardless of the business model in place, ALL API platforms should have a public bot automation strategy, with clear guidelines for what is acceptable and what is not. I’m looking to provide API platforms with a one-stop source for guidance on this journey. It isn’t rocket science, and it isn’t something that will take a huge investment if approached early on. Once it gets out of control, and you have Congress crawling up your platform’s ass on this stuff, it probably is going to get more expensive, and also bring down the regulatory hammer on everyone else. So, as API platform operators, let’s be proactive, take on the problem of bot automation directly, and learn from Twitter and Facebook’s pain.

Photo Credit: ShieldSquare

