API Evangelist

Providing APIs.json As A Discovery Media Type For Every One Of My API Endpoints

It can be easy to stumble across the base URL for one of my APIs out on the open Internet. I design my APIs to be easily distributed, shared, and as accessible as possible--based upon what I feel the needs for the resource might be. You can find most of my APIs as part of my master stack, but there are other APIs, like my screen capture API or my image manipulation API, that are often orphaned--I know some people could use help identifying the resources behind these API operations.

To help support discovery across my network of APIs, I'm going to be supporting requests for Content-Type: application/apis+json for each endpoint, as well as an apis.json file in the root of each API and its supporting portal. You can see an example of this in action with my blog API: look in the root of the portal for the API (kin-lane.github.io/blog/apis.json), or in the root of the base URL for the API (blog.api.kinlane.com/apis.json). For each individual endpoint, like the (blog.api.kinlane.com/blog/) endpoint, you can request the Content-Type: application/apis+json and get a view of the APIs.json discovery file.
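To make the content negotiation concrete, here is a minimal Python sketch of how an endpoint might branch on the requested media type--the handler and the discovery document fields below are illustrative, not my production code:

```python
# Hypothetical sketch: how an endpoint could answer requests for the
# APIs.json discovery media type alongside its normal JSON response.
APIS_JSON_TYPE = "application/apis+json"

# A minimal APIs.json discovery document for the blog API (fields follow
# the APIs.json format: name, url, and a list of APIs with base URLs).
DISCOVERY_DOC = {
    "name": "Blog API",
    "url": "http://blog.api.kinlane.com/apis.json",
    "apis": [
        {
            "name": "Blog",
            "baseURL": "http://blog.api.kinlane.com/blog/",
        }
    ],
}

def respond(accept_header, regular_payload):
    """Return (content_type, body) based on the requested media type."""
    if APIS_JSON_TYPE in accept_header:
        return (APIS_JSON_TYPE, DISCOVERY_DOC)
    return ("application/json", regular_payload)
```

The same pattern applies at the portal, API, and endpoint levels--the discovery document just gets scoped differently at each level.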

It will take me a while to get this rolled out across all of my APIs, but I have worked out the details with my blog APIs. Providing discovery at the portal, API, and endpoint level just works. It provides not just access to documentation, but to the other critical aspects of API operations, in a machine readable way, wherever you need it. It is nice to be on the road to having APIs.json exist as the media type (application/apis+json)--something that isn't formal yet, but we are getting much closer with the latest release, and planned releases.

Next, I will push this out across all my APIs, and do another story to capture what things look like at that point. Hopefully it is something I can encourage others to do eventually, making API discovery a little more ubiquitous across API operations.

I Am Thankful For Another Amazing APIStrat

I am back home in Los Angeles, after another great edition of API Strategy & Practice--this time in Austin, TX. I have had a few days to decompress, and took a day to reboot my brain by crafting 235K+ API definitions for the English language, which gave me some time to reflect on what happened last week in Austin.

First of all, I want to thank 3Scale. Without the API infrastructure provider, APIStrat would not happen. Second, I want to thank all the sponsors who get involved--without your support the conversation wouldn't happen. Third, I want to thank the speakers and attendees for making it such a meaningful conversation.

I tend to use that word, "conversation" a lot when describing APIStrat, but I feel pretty strongly that it is the conversation that occurs at APIStrat that helps move the entire API community in a very meaningful way. The conference always renews my energy, and strengthens my relationships with other important folks in the space that I rely on for my research and storytelling.

Thank you so much to everyone who came to Austin last week and participated, and special thanks to Steve and the 3Scale team for investing so much into the API community, while asking for so little in return.

Sharing 235K API Definitions With The English Language API Recipe Book

I needed a side project to reboot my mind after @APIStrat this last weekend, so I opened up my notebook and picked a project that I've been meaning to give some attention to, one that would help me clean my slate, and let me get back to my regular work levels. The project I picked is one that I came up with a little over a year ago, but recently fleshed out further, as I hung out at my favorite watering hole drinking an IPA.

It took me several iterations before I landed on a name for this project, but my working title is the English Language API Recipe Book. I find myself in an awkward position these days when it comes to the concept of API copyright--something I have taken a firm stance on with my work around the Oracle v Google Java API copyright case, and the release of the API licensing format API Commons--but it is something, in the end, I just do not believe in.

You see, in my opinion, API definitions should NOT fall under copyright. Like recipes and menus, API definitions should be open for anyone to use. To help me make my point, I wanted to craft the English Language API Recipe Book, publishing an open API definition for almost every word in the English dictionary. I found a reasonably complete list of English words, and auto-generated an Open API Definition Format (OADF) specification for each of the 235K+ words.

For each API definition, I cover the base GET, POST, PUT, and DELETE verbs for each word, providing a basic query via a parameter, and returning a name and description as the basic underlying data model. I am already playing with other variations of data models, and have also generated another dimension for each word, by again iterating through each word, and adding it as a secondary level resource. I am also playing with other relationships, and ideas for expanding the dimensions of this recipe book, but wanted to get this first version out the door.
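To give a sense of how easily these definitions can be generated, here is a rough Python sketch of this kind of generator--the field layout follows the familiar Swagger 2.0 conventions, and my actual OADF output may differ in its details:

```python
def word_to_definition(word):
    """Generate a minimal Swagger/OADF-style API definition for one word,
    covering GET, POST, PUT, and DELETE, a basic query parameter, and a
    name/description data model (a sketch of the approach, not the exact
    output of my generator)."""
    path = "/" + word.lower()
    verbs = {}
    for verb in ("get", "post", "put", "delete"):
        verbs[verb] = {
            "summary": "%s %s" % (verb.upper(), word),
            "parameters": [
                {"name": "q", "in": "query", "type": "string",
                 "description": "Basic query against the %s resource" % word}
            ],
            "responses": {"200": {"description": "Successful response"}},
        }
    return {
        "swagger": "2.0",
        "info": {"title": "%s API" % word.capitalize(), "version": "1.0"},
        "paths": {path: verbs},
        "definitions": {
            word.lower(): {
                "properties": {
                    "name": {"type": "string"},
                    "description": {"type": "string"},
                }
            }
        },
    }
```

Iterating this over a 235K word list is just a loop, which is exactly the point--these definitions are mechanical, not creative works.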

Overall, I just want to show how easy it is to programmatically generate API definitions, and add this English Language API Recipe Book to my already growing number of API definitions from popular APIs that I include in the API Stack. Through this work, I wanted to emphasize that no matter how much work you put into the naming, ordering, and design of your API definitions, they are not creative works that you should lock up and defend--your API definitions should be open, easily accessible, shared, and designed for reuse.

While I do not think any of the 235K+ API definitions should have copyright applied, I will be putting all of these into the public domain, using a Creative Commons license, as act two of this production. This is more theater than anything, but using API Commons, I will make sure every word in the English dictionary, crafted as a basic web API, is available for anyone to use, anytime, anywhere (as it should be, DUH). I recently stated in a keynote, after launching API Commons, that I was going to be the "Johnny Fucking Appleseed" of publishing openly licensed API definitions, out in front of slower moving corporations like Oracle--the English Language API Recipe Book is just the beginning of this.

Next up, I will be crafting a series of OADF API definitions for Schema.org, and using APIs.json to bundle them as a more meaningful collection. I will be using these data models to further automate the English Language API Recipe Book, and establish additional dimensions to this collection. You can find the English Language API Recipe Book on Github. I have published it as a Github organization, with a separate sharded repository for each letter of the alphabet, each containing a separate OADF definition for each word in the dictionary, and using APIs.json to index each letter, as well as the overall recipe book collection--making it all machine readable by default.
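Here is a rough sketch of how an APIs.json index for a single letter's repository can be assembled--the Github organization URL and repository layout in this example are hypothetical placeholders, not the actual published locations:

```python
def letter_index(letter, words):
    """Build an APIs.json index for one letter's sharded repository,
    pointing each word at its OADF/Swagger definition (a sketch; the
    base URL below is a made-up placeholder)."""
    base = "https://example.github.io/recipe-book/%s" % letter.lower()
    return {
        "name": "English Language API Recipe Book - %s" % letter.upper(),
        "url": base + "/apis.json",
        "apis": [
            {
                "name": word,
                "humanURL": "%s/%s/" % (base, word),
                "properties": [
                    {"type": "Swagger",
                     "url": "%s/%s/swagger.json" % (base, word)}
                ],
            }
            for word in words
        ],
    }
```

A top-level apis.json for the whole collection can then simply include each letter's index, keeping the entire recipe book machine readable by default.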

The APIStrat Austin Schedule Has Reached That Level Of Amazing For Me Again

This is the 6th edition of API Strategy & Practice, happening in Austin, TX next week. As one of the organizers, I can say that pulling together the perfect lineup of speakers and topics is always a daunting challenge, but then at some point before the event happens, the schedule always seems to take on a life of its own.

The APIStrat Austin schedule has reached that point again. We have enough killer speakers and companies present that it has attracted other killer speakers and companies, resulting in a mindblowing 3 days of workshops, keynotes, panels, and sessions--if you haven't taken a look at the schedule lately, take a few moments.

I was going through the schedule, looking for problems, missing photos, etc., and it just struck me how amazing the scope of the people and companies present has become--I had to share. If you aren't registered, make sure and do so, and if there is someone you think should be in attendance, feel free to ping me directly--you won't want to miss it.

See all y'all in Austin next week!

Thinking Through The Licensing For An API Stack

I've spent a lot of time thinking through the licensing we apply to APIs, as part of my work on the Oracle v Google API copyright case. The licensing around APIs is still in flux, with the current precedent being that APIs are copyrightable. Even though I do not agree with this stance, I encourage API designers to make sure and apply one of the more liberal Creative Commons licenses to your API definitions, taking a pre-emptive stance in the conversation.

In my experience most API providers, let alone consumers and the public at large, do not understand the separation between an API's definition, the code that runs the API, and often even the code that consumes an API. To help us visualize the separation, as well as think through the licensing implications of each layer, I have set up a specific research project that addresses API licensing, in hopes of spending time regularly researching the topic, as well as telling stories that help people navigate how to license their APIs.

Here is how I'd break down the five most common layers of the API licensing stack, and some ideas for how you can apply licenses to these layers of API operations.

Server Code - For many APIs, your server code will be your secret sauce and kept proprietary, but for those of you who wish to open source this critical layer, here are some options. To help you navigate the licensing, I recommend using Github's Choose a License.

  • Apache - The Apache License is a free software license written by the Apache Software Foundation (ASF). The Apache License requires preservation of the copyright notice and disclaimer. Like other free software licenses, the license allows the user of the software the freedom to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software, under the terms of the license, without concern for royalties.
  • GPL - The GNU General Public License (GNU GPL or GPL) is the most widely used free software license, which guarantees end users (individuals, organizations, companies) the freedoms to run, study, share (copy), and modify the software. Software that allows these rights is called free software and, if the software is copylefted, requires those rights to be retained.
  • MIT - The MIT License is a free software license originating at the Massachusetts Institute of Technology (MIT). It is a permissive free software license, meaning that it permits reuse within proprietary software provided all copies of the licensed software include a copy of the MIT License terms and the copyright notice.

Data - Serving up data is one of the most common reasons for deploying an API, and the Open Data Commons provides us with some licensing options.

Content - Separate from the data, APIs are being used to serve up short and long form content, where liberal Creative Commons licenses should be considered.

  • CC BY - This license lets others distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation. This is the most accommodating of licenses offered.
  • CC BY-SA - This license lets others remix, tweak, and build upon your work even for commercial purposes, as long as they credit you and license their new creations under the identical terms. This license is often compared to copyleft free and open source software licenses. All new works based on yours will carry the same license, so any derivatives will also allow commercial use.
  • CC0 - Use this universal tool if you are a holder of copyright or database rights, and you wish to waive all your interests in your work worldwide.

API Definition - This part of the discussion is being defined (unfortunately) by the Oracle v Google Java API copyright legal battle, and in light of the ruling, I urge you to consider one of the more liberal Creative Commons licenses.

  • CC BY - This license lets others distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation. This is the most accommodating of licenses offered.
  • CC BY-SA - This license lets others remix, tweak, and build upon your work even for commercial purposes, as long as they credit you and license their new creations under the identical terms. This license is often compared to copyleft free and open source software licenses. All new works based on yours will carry the same license, so any derivatives will also allow commercial use.
  • CC0 - Use this universal tool if you are a holder of copyright or database rights, and you wish to waive all your interests in your work worldwide.

Client Code - Separate from your server side code, you should make sure all of your client side SDKs, PDKs, and starter kits have an open source license applied--remember you are asking consumers to potentially integrate this code into their business operations. Again, I recommend using Github's Choose a License to help you navigate this decision.

  • Apache - The Apache License is a free software license written by the Apache Software Foundation (ASF). The Apache License requires preservation of the copyright notice and disclaimer. Like other free software licenses, the license allows the user of the software the freedom to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software, under the terms of the license, without concern for royalties.
  • GPL - The GNU General Public License (GNU GPL or GPL) is the most widely used free software license, which guarantees end users (individuals, organizations, companies) the freedoms to run, study, share (copy), and modify the software. Software that allows these rights is called free software and, if the software is copylefted, requires those rights to be retained.
  • MIT - The MIT License is a free software license originating at the Massachusetts Institute of Technology (MIT). It is a permissive free software license, meaning that it permits reuse within proprietary software provided all copies of the licensed software include a copy of the MIT License terms and the copyright notice.

This list does not reflect all of the licensing options available to you--these are the options I recommend you consider as part of your API operations. I want as many API providers as possible to thoughtfully consider the licensing for their entire API stack, and I'm kick-starting this research to provide a short, concise guide for API providers to consider.

If I get my way, every layer of the API stack will be licensed as openly as possible; however, I have some extra concern for the API definition layer. I know many of my readers will argue that this is just code, and should be licensed alongside your server side code, but this is not true--modern API definitions are increasingly available as JSON, YAML, and Markdown, telling a very important story that is having a significant impact on how we do business and live our personal lives--a story that should be able to be retold and echoed across the digital landscape, without licensing restrictions.

My API licensing research is just getting going. I will be rounding it off with examples of the approaches used by leading API providers, news and stories I curate from across the space, as well as more API Commons tooling that helps you define the licensing for your API, and share it in a machine readable way.

The Swagger Spec Is Reborn As Open API Definition Format (OADF) After Being Put Into Open API Initiative (OAI)

We reached another significant milestone in the API space today: after being acquired by SmartBear this spring, the Swagger specification is being moved into a Linux Foundation group called the Open API Initiative (OAI).

SmartBear has been working with the core group of vendors including 3Scale, Apigee, Capital One, Google, IBM, Intuit, Microsoft, PayPal, and Restlet over the summer to hammer out the details of the organization, and the charter that drives the group forward.

It's no secret that I am a pretty big supporter of the specification, and I am happy to see it reborn, within a community driven group, as a community driven open spec. I'm looking forward to continuing to build interesting things on top of the OADF specification, and telling stories about what the community is building as well.

If you have a cool tool or service that you have built on the spec, make sure and let me know, so I can share the story with my network.

Contemplating Hypermedia When My Focus Is On Experience

I wish I had more time to spend designing and deploying APIs the way I desire. Without any real funding for individual APIs, I can only go so far with them, which usually doesn't go beyond the minimal viable API. However, even with this reality, I have two APIs I would love to see done right, and they keep nagging at me.

One API is my curated news API, where I currently have a pretty bare bones JSON definition to represent each news article I curate from the API space (date, title, author, body, url). As I have time, I've been trying to craft a Siren representation of this same resource. The opportunity for exploration of my archive of curated API news, going back to 2011, is pretty huge (in my opinion).
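As a sketch of where I am headed, here is what one curated news article could look like as a Siren entity, built in Python--the link relations and the search action are illustrative choices of mine, not a finished design:

```python
def article_to_siren(article):
    """Render a curated news article (date, title, author, body, url) as
    a Siren entity, with a self link and a hypothetical archive-search
    action (a sketch, not the final representation)."""
    return {
        "class": ["article"],
        "properties": {
            "date": article["date"],
            "title": article["title"],
            "author": article["author"],
            "body": article["body"],
        },
        "links": [
            {"rel": ["self"], "href": article["url"]},
        ],
        "actions": [
            {
                "name": "search-archive",
                "method": "GET",
                "href": "/articles",
                "fields": [{"name": "q", "type": "text"}],
            }
        ],
    }
```

The win over the bare bones JSON is that the links and actions travel with each article, so a client can explore the archive without hardcoding any URLs.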

Recently, I've also been having delusions of a hypermedia enabled version of my audio API, which I use to store audio files I find, create, or publish. It is currently a CRUD version of what I'd like to see, but I've been thinking of ways I could craft a more audible version of API Evangelist, and in order to do it right, I need a hypermedia enabled API experience.

For me, this takes hypermedia beyond many of the discussions I have been exposed to, moving my motivations into the realm of user experience--not about building a client that will rule them all, or just a matter of principle--I just want to deliver the right experience. How do I allow consumers of my audio API to define their own experience? How do I craft some experiences based upon my view of the API space--as I see the world?

This is where hypermedia goes beyond the technical, and moves into the experiential realm, where I'm hoping that I can manifest some more money to pay for these API designs to become a reality. ;-) Thank you for your support!

The 30 Areas I Am Working To Define In The API Space

When I started API Evangelist in 2010, I tracked on one area--API management. Over the years this expanded to include API deployment and design, and most recently monitoring and discovery. As I approach the end of 2015, I've expanded this to 30 separate areas of research.

I have almost 200 projects I'm pushing forward in one way or another, but these 30 areas reflect the API space I am working so hard to make sense of in 2015. While all of my research is a work in progress, I have these core projects as part of my regular monitoring, and I will be updating them as much as possible.

While this may seem like a lot of areas to keep track of, I'm finding it easier and easier to do, as it all continues to come into focus for me. I also have other research areas I'd like to merge in here, and maybe some areas I'd like to migrate out of existing research into new areas.

As I'm preparing for my keynotes at Defrag in Colorado, and APIStrat in Austin this month, I'm refreshing all of my research, and trying to use my work to craft a hopefully interesting talk, while also sharing some nuggets of wisdom from the vantage point I enjoy.

All of my research is licensed CC-BY, is machine readable by default, and runs on Github, so if you have any questions you can submit an issue, or ping me directly. If you are feeling adventurous you are also welcome to fork any of my work and incorporate it with your own work, or even submit a pull request and contribute your own thoughts to my research.

As always, you can find this working list of research on the home page of API Evangelist--thanks for tuning in!

@SlashDB Created The Ranking Digital Rights Corporate Accountability Index API I Was Asking For

I read a lot of blog posts and press releases about open data these days, and when I find a dataset I think offers a lot of value, or is just interesting enough to help push forward, I either try to incorporate it into my Adopta open data work, or I put it out to my followers to see if anyone can help.

As I was monitoring the space yesterday I came across the Ranking Digital Rights 2015 Corporate Accountability Index, which "evaluates 16 of the world's most powerful Internet and telecommunications companies on their disclosed commitments, policies, and practices that affect users' freedom of expression and privacy." I saw there were Excel and CSV versions of the report, but I didn't see an API, so I tweeted out:

Victor Olex (@agilevic) from SlashDB turned around an API endpoint in a matter of a couple of hours. Here is what Victor sent me:

I took your Twitter challenge and created the API for Ranking Digital Rights data. The data model does not include scores for individual lines of business, but it does have all qualitative data needed to make sense of it. I did not write any data aggregation queries, but we can add those later. The whole thing works off a MySQL database model, which I designed and fed with data from the spreadsheet.

Using SlashDB, Victor quickly generated a set of endpoints for quick access to the digital rights data behind the report.

These are very resource oriented API endpoints, meaning they look like the raw resources they are generated from, but they demonstrate the value of using cloud services like SlashDB to do a lot of the heavy lifting when deploying an API. Getting an API up and running, so that you can start building web, mobile, data visualization, and other tooling, is an important first step--then you can start iterating on it, and begin the process of defining the next generation of endpoints.

As you can see, the SlashDB interface allows me to drill down using the URL, referencing the databases, relationships, and other resources that are exposed. What I like about SlashDB is that it lets you navigate the resources and relationships through the generated HTML pages, and then retrieve the XML or JSON you need.
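The drill-down works by composing the database, table, and filter values into the URL path itself. Here is a small Python sketch of that style of convention--the database and table names below are hypothetical, and this is not an official SlashDB client:

```python
def drilldown_path(database, *segments, fmt="json"):
    """Compose a resource oriented drill-down URL path, where each added
    segment narrows the resource, and the extension picks the format
    (a sketch of the convention, with made-up names)."""
    parts = [database] + [str(s) for s in segments]
    return "/" + "/".join(parts) + "." + fmt
```

For example, a path like /rankingrights/companies/name/Google.json would narrow from the whole database, to the companies table, to a single company, returning JSON at each step.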

Next steps for me: I'd like to craft a simple Github Pages hosted landing page, and define a base set of meaningful API endpoints that would allow potential API consumers to quickly understand what is contained in the Ranking Digital Rights 2015 Corporate Accountability Index, and hopefully incentivize someone to build something cool on top of it.

Thanks to Victor Olex (@agilevic) and SlashDB for doing this important work--you guys rock!!

Educating API Developers With Each Login Over At @CloudElements

I am a big advocate for making sure the on-boarding process for developers is as friction-less as possible. Developers should be able to signup and login without anything getting in their way. This is why I normally wouldn't suggest adding anything unnecessary to the signup and login process, but I saw something over at Cloud Elements that I thought was interesting.

First, let me note that Cloud Elements gets bonus points because they emphasize signing in with your Github or Google account. I am a big fan of keeping all my API accounts linked to my Github profile--it just makes sense to me. I'd love it if API providers allowed me to store keys, SDKs, and other resources within a private Github repository that I use across all the APIs I depend on. (I'm sure someone will tell me not to do this, but nobody has convinced me why not yet.)

What really caught my eye when logging in was the updates they provided at the bottom of the login screen. There was an image with a simple message telling me that v1 of the Cloud Elements API was sunsetting, with a simple "Read More" link so I could get more details. I like this concept of adding these types of critical updates to the login screen for developers.

I am going to add a flexible messaging area for signup and login screens, as part of my suggested on-boarding building blocks. Developers may not regularly log into their API developer portals, so I wouldn't depend on this channel for everything, but in addition to the blog, Twitter, and other channels, it might make sense as a way to share important information with API consumers.

Adding An OAuth Scope Page As One Of My API Management Building Blocks

I've had a handful of suggested building blocks when it comes to authentication, as part of my API management research, but after taking a look at the OAuth Scopes page for the Slack API, I'm going to add another building block just for listing out OAuth scopes.

For platforms that provide OAuth, scopes are how access to users' content and data is broken down and negotiated. At the industry level, OAuth scopes are how power and influence are being brokered, so I'm going to start tracking how leading providers define their scopes--I am sure there are some healthy patterns we can all follow here.

I have had the pleasure of sitting in on OAuth negotiations between major utility providers, as part of my past work with the White House and Department of Energy. This work has given me a glimpse into how access to and sharing of data will be negotiated in the future, with OAuth scopes and APIs playing a central role.

It will take me some time to standardize how I gather, store, and publish the OAuth scopes for each API, but I can get started by bookmarking any provider who shares their OAuth scopes, and encourage other API providers to do the same, by suggesting a formal OAuth scopes page as one possible building block you should consider when crafting your API strategy.
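As a starting point for standardizing this, here is a sketch of what a machine readable OAuth scopes listing could look like, in Python--the schema is my own invention, with a few Slack-style scope names used as examples:

```python
# Hypothetical machine readable scopes listing; the scope/description
# schema is an invented sketch, not a standard format.
OAUTH_SCOPES = [
    {"scope": "channels:read",
     "description": "Access information about a user's channels"},
    {"scope": "chat:write",
     "description": "Post messages on a user's behalf"},
    {"scope": "users:read",
     "description": "Access a team's profile information"},
]

def scopes_requiring(keyword):
    """Filter scopes whose description mentions a keyword, e.g. for
    auditing what access an application is asking for."""
    return [s["scope"] for s in OAUTH_SCOPES
            if keyword.lower() in s["description"].lower()]
```

Publishing something this simple alongside the human readable scopes page would make it much easier to track and compare how providers define their scopes.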

A Social API Performance Report From @APIMetrics

APImetrics just released their second API Performance Report for Social Networks, aggregated from data they have been gathering from monitoring social networks since 2014. APImetrics is publishing the report to "...understand the impact these APIs were having on social media based on geographic location and specific cloud service provider."

I'll let you read the report yourself; I just wanted to highlight the importance of this type of API monitoring from 3rd party services like APImetrics. The other providers I watch closely, like Runscope and API Science, also monitor 3rd party APIs like this, but I think publishing formal reports on a regular basis, like APImetrics is doing, is healthy for the space.

Eventually, I would like to see an aggregate location where all API monitoring service providers can publish their data in a common format, so the larger API community could process it, and help establish an API rating solution that we can all take advantage of. Historical and real-time data will be key to establishing the open rating system that we need.

I would say that social, cloud, and messaging apps top the list of APIs we should be monitoring and rating, but eventually it would be nice for this to be commonplace for any public API in the space. Part of helping us evolve the API discovery conversation is establishing a baseline for rating the good APIs against the bad ones, and work like the social network report from APImetrics helps us get closer to this possible future.

To Incentivize API Performance, Load, And Security Testing, Providers Should Reduce Bandwidth And Compute Costs Associated

I love that AWS is baking monitoring and testing in by default in the new Amazon API Gateway. I am also seeing new services from AWS and Google providing security and testing services for your APIs and other infrastructure. It just makes sense for cloud platforms to incentivize the security of their platforms, but also to ensure wider success through the performance and load testing of APIs as well.

As I read through recent releases and posts, I'm thinking about the growth in monitoring, testing, and performance services targeting APIs, and its convergence with the growth in approaches to API virtualization, and what containers are doing to the API space. I feel like Amazon is baking monitoring and testing into API deployment and management because it is in their best interest, but it is also something I think providers could take even further when it comes to investment in this area.

What if you could establish a stage of your operations, such as QA or production testing, where the compute and bandwidth costs associated with operations were significantly discounted? Kind of like the difference in storage tiers between Amazon S3 and Glacier, but designed specifically to encourage monitoring, testing, and performance work on API deployments.
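To illustrate the idea, here is a toy Python sketch of stage-based pricing--every rate and the discount here are made-up numbers, purely for illustration of the incentive:

```python
def stage_cost(gb_transferred, compute_hours, stage,
               bandwidth_rate=0.09, compute_rate=0.05,
               testing_discount=0.70):
    """Estimate the cost of running traffic through an API stage,
    heavily discounting QA/testing stages to encourage their use
    (all rates and the discount are hypothetical numbers)."""
    base = gb_transferred * bandwidth_rate + compute_hours * compute_rate
    if stage in ("qa", "testing"):
        return round(base * (1 - testing_discount), 2)
    return round(base, 2)
```

Under a model like this, running the same load test against a QA stage would cost a fraction of production rates, nudging teams toward more thorough testing.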

Maybe AWS is already doing this and I've missed it. Regardless, it seems like an interesting way that any API service provider could encourage customers to deliver better quality APIs, as well as help give a boost to the overall API testing, monitoring, and performance layer of the sector. #JustAThought

API Consumption Moves To The Main Stage At @APIStrat Austin This Month

When planning the API Strategy & Practice Conference, the team works very hard to make the speaker and session lineup reflect what we see across the API space. The conference is meant to be an open, non-vendor, non-product focused discussion about what individuals and companies are facing as API providers, as well as API consumers.

When we closed the call for papers for APIStrat this year, it was clear that API consumption would be one of the sessions, but as we progressed through the process of locking down talks, it became clear that API consumption should move to the main stage. To help highlight the importance of this discussion, I asked Mark Boyd (@mgboydcom), the conference chair for APIStrat, his thoughts on why we did this:

This year has seen two major API consumption challenges: For API providers looking to continue their growth, 2015 has often been about putting their API in front of non-dev users and making their API accessible to a wider audience. We're seeing an increasing use of tools like Google Sheets and slackbot integrations trying to make API functionality available to people who don't code. For businesses and enterprises generally, the increasing use of external APIs has meant more challenges for dev teams in aggregating APIs and using multiple APIs consistently. Each API they are using might come with different terms of service, differing rate limits, different terminology for the same subject matter, and different ways of measuring units. For example, most API consumers I hear from are frustrated that each API they use seems to have a different way of measuring and displaying time formats!

As APIs keep growing in mainstream business use, there becomes a tipping point in scaling the API economy where it starts adding complexity rather than removing it. We think that 2015 is that tipping point and we need to start solving how end users can consume APIs without it creating new challenges for them. We've got some of the best minds who are thinking through how to solve these issues - Mark Geene from Cloud Elements, Paul Katsen from Blockspring, Taylor Barnett from Keen IO, Kirsten Hunter from Akamai, Noam Schwartz from Similar Web, and Todd Sundsted from SumAll have all navigated their way through making their APIs accessible to others or building an abstraction layer so that all the APIs they are consuming are working in unison. This will be a great panel session for API providers who want to make it easier for their developer community to consume their APIs, and great for API consumers to learn some of the tools and tricks that can make it easier to integrate external APIs into their own work.

I think Mark nails it. For me, this session moving to the main stage demonstrates that APIs are going mainstream, as more and more of the "normals" tune into the API conversation. The API consumption panel at APIStrat this month includes the folks Mark mentioned above.

All of these panel participants are API providers, service providers, as well as consumers -- making for a pretty potent discussion about APIs. These companies are all very aware of the challenges API consumers face, because they are doing it at scale. If you are an API provider looking for the best advice on how to streamline use of your API, these are the people you want to tune into.

As part of the storytelling around @APIStrat, I will be digging deeper into each of these companies, and profiling what they contribute to the API space. APIStrat is a little over two weeks away, so make sure you are registered before all the tickets are sold--take a look at the schedule, and make sure you are at the main stage to listen to Mark, Paul, Taylor, Kirsten, Todd, and Noam discuss API consumption in Austin.

Please Tell Your API Stories

Many of you who have attended my talks have heard me tell the audience about the importance of sharing your API stories. As an API provider, it is the most important tool in your toolbox, above and beyond any technical or business advantage you have. I'm spending more time lately gathering up the things I say over and over in my talks, and other in-person conversations, and crafting simple stories that echo these little nuggets of wisdom on the blog.

If you do not tell the story of what your API is doing, nobody will know--it really is that simple. The drumbeat from your blog, should echo the activity that is going on via your API. Your day might be filled with a hectic stack of activity from your view, but your API consumers, analysts, and storytellers like me, we do not see any of this activity, and we all need to hear about your day on your blog, amplified by your active Twitter account.

I can tell you that the API providers who are doing interesting things, and are telling the story of these things, end up on my blog, and in the major tech blogosphere, much more often than those who do not. I'm not telling you to craft amazing essays about your operations, I'm just asking that you blog daily, or every couple of days, on the often mundane, but potentially highly valuable, stories from the trenches of your API operations--it is something that, the more you do, the easier it gets.

While the goal of blogging is to communicate externally with your API consumers, and the public at large, my secondary motive is to expose you to a very beneficial by-product of blogging in this way. When you get into a rhythm where you are working through your ideas, operations, and API road-map, in such a public way, you develop a very different view of your platform, one that is closer aligned with your community, and potentially the industry you are looking to play a role in.

I want you to share your API stories, but more importantly I want your API to be successful, and regularly telling stories to your community, and the public, will increase these odds significantly, in my experience.

Expanding My API Partner Research Into Its Own Project

As I explore API portals, looking for the successful approaches to APIs, I'm increasingly seeing formal partner programs, resulting in me expanding the topic into its own research area. My objective is to keep track of how the APIs I track operate their partner programs, resulting in the list of organizations I reference in my research. I also want to try and identify what some of the common building blocks of API partner programs are, in addition to the news stories that I have curated about partnerships--which are the fuel of my research.

I have recorded 31 separate partner programs, which I found through the course of my API monitoring and research. 

These are all companies who have formal partner efforts that provide a higher level of access to their API. There are many companies who feel their developer program is a partner program, but what I am looking for is a partner program that looks to certify developers, and give them more benefits through deeper participation with a platform.

After looking through these companies I've recorded a handful of what I'd consider to be some of the common building blocks of these API partner programs. 

Program Details

  • Landing Page - An official landing page for the partner program.
  • Program Details - Short, concise information about the program.
  • Program Requirements - Information about what it takes to participate in the program.
  • Program Levels - Details about what the different levels of the partner program are.

Partner Showcase

  • List of Partners - A list of partners, usually name and logo, maybe with some description.
  • Partner Stories - Stories of the partners and how they use the platform.
  • Partner Search - A keyword search for discovering partners.

Partner Program

  • Application - The actual application for becoming a partner.
  • Private Portal - A private portal for partners to login to.
  • Certification - Official certification showing that a partner is officially approved and vetted.


API Access

  • Quota Increase - Increasing the rate limit for existing APIs.
  • Additional APIs - Provide access to APIs that are only accessible to their partner tiers.
  • Read / Write APIs - Not only read access, but also write, update, and possibly delete access to APIs.

Early Access

  • Early Communication - Allow for partners to get early access to platform communications.
  • Early Opportunities - Access to early platform opportunities meant just for partners.
  • Beta Access - Allow for beta access to new platform products.


Legal Department

  • Agreement - The legalese for the partner program agreement.
  • Privacy Policy - The privacy policy governing the partner program.
  • Code of Conduct - A code of conduct targeting partners, explaining what is expected.

Marketing Activities

  • Blog Posts - Provide blog posts for partners to take advantage of one time or recurring.
  • Press Release - Provide press releases for new partners, and possibly recurring for other milestones.
  • Facebook Post - Post updates to the platforms Facebook account.
  • Twitter Post - Post updates to the platforms Twitter account.
  • Google Plus - Post updates to the platforms Google Plus account.


Direct Support

  • Discounts - Provide discounts on direct support for partners.
  • Office Hours - Provide virtual open office hours just for partners.
  • Training - Offer direct training opportunities designed just for partners.
  • Advisors - Provide special advisors that are there to support partners.


Endorsements

  • Quotes - Allow partners to provide quotes that can be published to relevant properties.
  • Testimonials - Have partners provide testimonials that get published to relevant sites.
  • Use of Logo - Allow partners to use the platform logo, or special partner platform logo.


Communication

  • Blog - Have a blog that is dedicated to providing information for the partner program.
  • Spotlight - Have a special section to spotlight partners.
  • Newsletter - Provide a dedicated partner newsletter.


Revenue Opportunities

  • Revenue Sharing - Offer revenue sharing for partners.
  • Reseller Discounts - Offer reseller discounts on referrals partners make.

This is just a start; after I spend more time looking through these programs, and learning more, I will evolve this list further. As I do with my other areas of research, I will publish all of this as a white paper when I can. You can stay in tune with the API partner related news I find on the research site under news, the official company partner programs I've flagged under organizations, and the common elements I have recorded under building blocks--it will change as I have time, so check back regularly.

Next, I think I will look at the types of partnerships that these platforms are offering, and study the relationship between their partner programs, and their general API access. If I can, I will also track which partners are showcased as part of each program, to try and get a vibe for which companies are working hard at establishing partnerships across the space. There is quite a bit of curated news still to go through, to make sense of all the partner related activity I have come across over the last couple of years. It is always hard to get my research up to speed with how much information I have curated over time, and the more I research, the more information I seem to collect.

If you know of any partner programs I don't have listed, feel free to let me know.

Even Non-Developers Can Create An API Using Popular Form Services, Zapier, and Restlet

I have a notebook full of story ideas I can draw from at any moment, but I find that I often need some sort of inspiration to kick them off properly. I got that this afternoon in a Tweet from Leah Bannon, reminding me how important it is for me to have stories for non-developers.

There are many tools available right now that let anyone publish an API--you just have to know the right tools, and the right process, to get the results you need. Thanks to Zapier, you can connect to the API driven services of a wide variety of cloud tools, and using Google Spreadsheets and Zapier, you can easily store data and content, to be used in a simple API--all without writing any code.

The most useful example of this uses popular form services like Wufoo, Gravity Forms, Typeform, and Survey Monkey: you can collect just about any information you want, crafting exactly the form and fields you need, then using Zapier you can route the form results to a Google Spreadsheet. Once you have your Google Spreadsheet, you can then easily publish an API, complete with documentation, using Restlet's cloud API deployment, and management services.

I'm going to save the detailed how-to for a future post, but I wanted to kick-start what I hope to be an ongoing series of simple examples of how non-developers can consume and publish their own APIs, without being a programmer. My goal is to help encourage anyone to understand the value of aggregating valuable data and content using Google Sheets, and then allowing it to be put to use anywhere, by deploying an API with Restlet.

If you need help with this example, let me know and I will expedite my next how-to post.

Disclosure: Restlet is an API Evangelist partner.

I Have A Bunch Of API Resources, Now I Need A Plan, Or Potentially Several Plans

I have some really amazing resources exposed as APIs. Everyone is doing it these days, and I have some good ones--now I just need a business plan. You know, actually, I need several plans, that will help me expose these resources to the right people, while retaining as much control as I need, and also generating revenue from these resources, tailored to whoever I am offering them to.

Starting With The Essential API Building Blocks
I want to allow anyone to access my valuable APIs, however I want to control exactly how much they can access, and even limit it to a free trial, with a small number of calls to the API. I am going to call this Plan A, also known as my freemium layer, meant to just whet the appetite of any potential API consumer. I now have a basic level of access to my API resources, that anyone can sign up for 24/7, yet I get to dictate exactly how many hits on the API they can make, and who gets access--something I can revoke at any time.

Moving Beyond Plan A For My API Strategy
Plan A will only get me so far; I also need to be able to establish additional plans that help me cover the costs of my operations, and hopefully also generate some revenue from API access and consumption. Each additional plan that I offer--let's call them plans B, C, and D--should have the ability to sign up 24/7 without approval, possess a trial version option if necessary, and allow for charging of setup costs to get going--if I so choose.

Defining Usage Metrics That Are Meaningful To Me
All of my plans will start with these elements, but then should also allow me to set any other metric that I desire. I want to track by API call, by message, according to bandwidth consumed or stored, the time period at play, the scope of compute being applied, and much, much more. Any of my plans should be able to be measured using these metrics, or anything else I want to define. If an API plan provides access to a blog, or my product catalog, the requirements will be different than when I provide access to images, videos, podcasts, or other heavy objects. I should be able to define the metrics, and set a price per metric that can be applied to each API I am making available.
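To make this concrete, here is a minimal sketch of how plans and their metered metrics might be modeled--all plan names, metrics, and prices are hypothetical, purely for illustration:

```python
# Hypothetical sketch: modeling API plans as a set of metered metrics,
# each with its own unit price and included allowance. All names and
# prices here are illustrative, not from any real platform.

PLANS = {
    "plan_a": {  # freemium tier: everything included, nothing billed
        "metrics": {
            "api_call": {"unit_price": 0.0, "included": 1000},
            "bandwidth_mb": {"unit_price": 0.0, "included": 100},
        }
    },
    "plan_b": {  # paid tier: every unit is billed
        "metrics": {
            "api_call": {"unit_price": 0.01, "included": 0},
            "bandwidth_mb": {"unit_price": 0.05, "included": 0},
            "storage_gb_day": {"unit_price": 0.02, "included": 0},
        }
    },
}

def monthly_cost(plan_name, usage):
    """Charge each metric's overage (usage beyond what is included)."""
    plan = PLANS[plan_name]
    total = 0.0
    for metric, counted in usage.items():
        meter = plan["metrics"][metric]
        billable = max(0, counted - meter["included"])
        total += billable * meter["unit_price"]
    return total
```

With this shape in place, adding a new metric to a plan is just another entry in the dictionary, and the price per metric stays independent for each API being made available.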

Establishing Limitations For API Access
Each of my plans should have well defined units of operation (metrics), costs associated with each metric, as well as volume pricing levels, but should also possess limitations on how my resources can be accessed. I need to restrict some plans to a daily amount, and limit server loads by only allowing a certain number of requests, or amount of bandwidth, per second, minute, hour, day, week, or month. If I want, I should be able to leave API access open ended as well. How I define the access for each of my API plans should be tailored exactly for the intended audience of each plan, with an infinite number of plans possible.
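A limitation like a daily request cap can be sketched in just a few lines--this is a hypothetical illustration, not production rate limiting, which would live in the API management layer with a shared counter store behind it:

```python
# Hypothetical sketch: enforcing a per-plan daily call limit.
# A real API gateway would keep counters in a shared store (e.g. Redis).
from collections import defaultdict

DAILY_LIMITS = {"plan_a": 100, "plan_b": 10_000, "plan_c": None}  # None = open ended

_counters = defaultdict(int)  # (consumer_id, day) -> calls made so far

def allow_request(consumer_id, plan_name, day):
    """Return True and count the call if the consumer is under quota."""
    limit = DAILY_LIMITS[plan_name]
    key = (consumer_id, day)
    if limit is not None and _counters[key] >= limit:
        return False  # over quota for the day
    _counters[key] += 1
    return True
```

The same pattern extends to per-minute or per-month windows by changing what goes into the counter key.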

Which APIs Methods Are Available In Each Of My Plan(s)
No plans are created equal. Which API resources, methods, and verbs are available in any given plan is also tailored to the intended audience of each plan. My Plan A allows limited reading from just a handful of resources for everyone, while my Plan C allows for reading and writing across a variety of API resources, designed for a specific group of partners. My public facing plans encourage consumption of specific resources that I have crafted, while my partner plans encourage two way usage, incentivizing my partners to add, update, and curate information available via the API resources I have made available.

All The Variables I Need To Compose The Plans I Need
I can create as many APIs, and individual methods, as I need, package them up into plans, and measure their access by call, bandwidth, storage, time period, compute, and other critical metrics. It is within my control to set limitations, and volume levels of access, across any of these metrics, charging exactly what I need to incentivize API usage, while also covering the costs of my operations, and bringing a healthy bit of revenue in the door as well--I have a full toolbox of what I need to orchestrate the API driven business model(s) behind my products and services.

Provide The Features My Consumers Will Need Along With Each Plan
Along with the access to API resources within each plan, I need to also bundle other features, like support, a service level agreement (SLA), and other resources that consumers will need to be successful with integration. Once again, each plan should be tailored for its intended audience, providing the API access they need, while also making sure all their adjacent needs are met along the way.

Providing Variable Unit(s) of Value For All API Transactions
Whether it's an API call, bandwidth transmitted, or duration of resource usage, you have measurable units of value, where a price can be set, in a way that lets it be adjusted by volume, or by plan level. An API call might be one cent (for sake of discussion), but if you access more than 10K per day, it goes down to half a cent per API call. For partner plans, each API call might be 1/10th of a cent by default, with 1/100th of a cent if you access more than 10K per day--for the exact same resource. Each API has its unit of value, with the price for this unit of value varying depending on which plan you are in, how much you consume, and where you are positioned in the API supply chain.
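The volume pricing described above can be sketched as graduated tiers--an assumption on my part, since the discounted rate could also be applied retroactively to every call; the rates are the hypothetical ones from the paragraph:

```python
# Sketch of the volume pricing described above, assuming graduated
# (marginal) tiers: the discounted rate applies only to calls past
# the 10K/day mark, not retroactively to all calls.

TIERS = {
    "retail":  [(10_000, 0.01),  (None, 0.005)],   # 1 cent, then half a cent
    "partner": [(10_000, 0.001), (None, 0.0001)],  # 1/10 cent, then 1/100 cent
}

def daily_charge(plan, calls):
    """Walk the plan's tiers, charging each span of calls at its rate."""
    total, used = 0.0, 0
    for ceiling, rate in TIERS[plan]:
        span = (calls - used) if ceiling is None else min(calls, ceiling) - used
        if span <= 0:
            break
        total += span * rate
        used += span
    return total
```

So 15,000 retail calls charge 10,000 at one cent plus 5,000 at half a cent, while the same volume on the partner plan costs a fraction of that--the exact same resource, priced by position in the supply chain.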

How Do We Know What Price To Set For Each Unit Of Access?
You have an API, but where do you even start understanding where to price this resource, which you feel is really cool and valuable, but might be completely meaningless to others. You can start by looking at competing API providers, and follow any precedent that has been set in the industry to date. Beyond following the lead of others, you can break down your own story, and dial in exactly what it costs to deliver an API, and set the price for the industry, and lead others. 

What Did It Cost To Acquire What Is Needed For An API Resource?
Every API starts somewhere. What did it take to discover the idea for an API, and negotiate or license its usage? Maybe you had to purchase some content, data, or access to other software resources. Before you get to work developing an API, there will be some investment required to bring an API idea to the table.

What Has Gone Into Developing An API?
Beyond the acquisition of an API resource, development might be a two way street--not just costs, but also investment from other partners, investors, or otherwise. What has gone into normalizing resources, and designing and developing the database, server, and code for the API itself? Consider the network too--what will it take when it comes to network bandwidth, and what are the DNS considerations for managing traffic? Thanks to cloud computing, there are many ways to meter areas like compute, storage, and bandwidth--make sure and put these tools to work for you.

What Are The Realities Of What It Will Cost To Operate An API?
What are the costs associated with maintaining the central truth of an API, its definition? What are the hard compute, storage, and bandwidth costs associated with API operations and scaling? I'm sure you have a good handle on what these are. How much do you have invested in management, monitoring, security, and evangelism? There are a lot of costs to consider when looking at the technical aspects of API operations, but there are also a lot of associated costs that often get left behind like support, the creation of additional resources, and evangelism.

Let's Discuss Who Will Be Accessing An API, And Our Plans For Different Levels
Now that we have a better handle on what goes into acquiring, developing, and operating an API, who will actually be accessing the API, and who do we need to offer different plans to, tailored for their relationship, as well as their unique needs? We have discussed the opportunities around free, and free trial, access plans, but what about other pro bono approaches like not-for-profit, educational, or research plans, that help eliminate costs for consumers, and encourage meaningful API access and innovation?

Once we have established the entry levels of access, we can begin crafting the additional retail, partner, or internal tiers of API access. It doesn't need to stop there--you can craft as many plans for API access as needed to meet the needs of every potential group, inside or outside of a company or government agency. API providers should work to be as transparent as possible with available plans, and what is available within each plan level, all the way up to partner tiers, and potentially reseller or private label tiers of access.

Now That We Have A Better Understanding Of What Goes Into Our APIs, How Do We Set Pricing?
Even though we know what went into an API, we don't always know how API consumers will perceive the value around it, so we must be ready to adjust pricing based upon this perception, and real world influences. How we limit, and incentivize, usage needs to reflect this value, and the relationship with each group of consumers. There should be paths that incentivize usage, and encourage the purchase of resources in bulk, by time-frame (monthly, quarterly), or maybe at specific times that benefit the provider, or the consumer. Remember, pricing is a two-way street that can benefit the provider, as well as the consumer--striking this balance is what makes API platforms go round.

Let's Remember There Isn't Always Direct Value Generation From APIs
Something that often gets overlooked in API operations is the indirect value APIs can generate. APIs are a potential marketing vehicle when done right, pushing forward awareness around a brand, driving web or mobile traffic, or generating valuable data and content through network and application activity. These activities can be measured, just like direct API consumption, but should be treated differently than commercial consumption--these areas act as marketing, advertising, and word of mouth around the value an API brings to the table.

Greasing The Wheels With Valued Partners
Public APIs have dominated the conversation for some time now, but web APIs bring just as much, or more value to trusted partners. APIs make it so I can quickly share the valuable resources I possess with those I feel would benefit from them most. While I try to make all of my APIs publicly available, it is my partners who get preferred access, and if I do it right, they can generate value via my APIs, and benefit what I am doing. If I do this the right way, I should be able to generate revenue from my APIs, while also sharing that exhaust with my trusted partners, and if they are the right kind of partners, they kick exhaust back my way as well--ecosystem effect.

Realizing The Value Of APIs Internally
As we've learned from Steve Yegge's rant about APIs at Amazon, and as the myth tells us, Bezos ordered everyone at Amazon to use APIs for the exchange of internal resources. Amazon understood that APIs bring agility, and efficiency, when done well, and can help internal groups work together--and like public API access, internal consumers can also have plans tailored specifically for their needs. This is where the true benefits of APIs are evident, and unfortunately it is the aspect of APIs that is least discussed out in the open for everyone to learn from.

Units of Value Can Go Both Ways, Depending On Who Is Accessing An API
It is very common for APIs to charge for access, restricting access by some of the common units of value listed above. It is less common for APIs to pay consumers for accessing APIs, incentivizing the publishing of content, posting of images and videos, and other value generating ways of putting APIs to work. What is the value of the first image added to an API for a business location vs. the 10th photo added? What is the value of encouraging API consumers to add their own content to a system, augmenting existing information? There are endless ways to encourage developers to contribute to a platform--the only limit is your own plan for defining the boundaries of this participation.

We Should Be Making Value Transferable Across API Providers
All of this provides a standardized way for API providers to define the value of API resources, and incentivize usage, while also generating needed revenue. Money is spent accessing some resources, while money is generated by publishing or refining other resources. This two way value generation shouldn't be locked up within each API provider's silo, and should be transferable between API providers. In 2015, developers are using not just one or two APIs, they are putting many different APIs to work, and while API providers need to cover costs, and generate revenue, so do API consumers. It is two sides of the same coin--the API balance.

How Do We Compare The Value Of Resources Across API Platforms?
The first challenge we will face in transferring this value from one silo to another is a lack of shared metrics between platforms. Even when platforms have comparable API resources available, rarely will it be an apples to apples comparison. While this post gives us a better look into how resources can be defined, and have a price applied to their consumption across multiple plan levels, this is limited to discussion around each individual platform. Before we can compare value across API platforms, we need to standardize the business model for each API, and establish a ranking system for each provider, that will act as a weight when comparing pricing from provider to provider, and across many providers within an industry.

This story was just my way of exercising my understanding around the development of API business models. Next I will explore the concept of API rating in 2015 as I see it, and try to find the linkage to the unit price established as part of API management operations in this post. Everything I discussed above is not hypothetical--it is rooted in API infrastructure solutions provided by companies like 3Scale. To craft this story, I had my 3Scale administrative console open for my own APIs, and theoretically walked through some of the known approaches to API monetization from across the space.

Once I am done playing with my thoughts around rating APIs, I think I'll revisit this post, and take a look at what Amazon, Twilio, and other well known APIs deliver when it comes to API plans, to help me provide a ready-to-go API business model blueprint that others can follow. While there are endless ways to experiment and play here, I think most of the API space is just looking for ready-to-go business models, that are proven, that they can plug into their operations without much planning.

How Are We Going To Create The Standard And Poors And Moodys For The API Economy?

API rating is a conversation I have several times a year, with different groups, and it is a conversation that seems to be occurring with a little more frequency in recent months. How do we know which APIs are the good ones? What constitutes a good API? How do we measure whether a contract (the API) is being adhered to, by both the API provider, and potentially the consumer as well? How will we establish the Standard & Poor's or Moody's for the API economy? These are the API rating conversations I am having with folks.

These are all great questions, and something many companies have approached me looking for help in solving, with nobody moving forward on this in a meaningful, and sustainable way. So to help apply some CPR to the subject, I am going to explore API rating some more, from my own point of view, in hopes of guiding some of the existing conversations I'm already having on the subject, and hopefully generating some new ones. I want to quantify where we are at with this problem, what work I've put into it, and maybe establish an open blueprint that the community might be able to help run with.

It Starts With API Discovery
Before we will ever have a usable approach to API ratings, we are going to have to also work on our API discovery problems. We have one open format, designed to not just index the technical side of APIs, but also the business, and political side of operations--APIs.json. The problem is we haven't reached a critical mass where a machine readable index of API operations is available for most of the public APIs available, as well as private indexes of the APIs that exist behind the firewall. It will take time, but soon, every actor in the API economy, from Fitbit, up to the International Trade Administration, will have APIs.json indexes of their API operations. 
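For a sense of what one of these indexes looks like, here is a minimal, hypothetical APIs.json file sketched in Python--the field names follow the APIs.json format at a high level, but every value here is made up:

```python
# A minimal, hypothetical APIs.json index. Field names follow the
# APIs.json format at a high level; all URLs and values are made up.
import json

index = {
    "name": "Example API Operations",
    "description": "Machine readable index of the APIs for example.com",
    "url": "http://example.com/apis.json",
    "apis": [
        {
            "name": "Blog API",
            "description": "Read and write access to blog posts",
            "humanURL": "http://example.com/blog/",
            "baseURL": "http://api.example.com/blog/",
            "properties": [
                {"type": "Swagger",
                 "url": "http://example.com/blog/swagger.json"}
            ],
        }
    ],
}

# Serialize for publishing at the root of a portal or API domain.
print(json.dumps(index, indent=2))
```

The point is that the index covers more than the technical surface area--each API entry can carry properties pointing at documentation, plans, terms of service, and the other building blocks of operations.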

Next, What Makes A Good API?
There are many things that go into making an API good, or bad, and unfortunately many of these characteristics are difficult to track in any sort of automated way. I'd say that API design is the number one complaint against APIs, but until recently there were not really any good ways to measure API consistency across many disparate APIs. I'd say the number two complaint against an API is around the documentation--good, clean, intuitive, and up to date documentation is always telling. However, we can talk about API design and API documentation until the cows come home, and will make no progress on solving the question of how to measure, and quantify, what a good API is, and what a bad API is.

In reality, there are a number of criteria that could be used to help separate the good APIs from the bad APIs. I've worked to aggregate what I consider to be some of the common building blocks across API design, deployment, management, and almost 20 other stops along the API life-cycle. From this research, I can provide a pretty good starting point for establishing a baseline rating that can be applied against all types of APIs. Part of my motivation in crafting this post is to organize these thoughts, and publish them as a potential v1 open blueprint, to help move conversations forward. I'm not saying this is what should be--it is just a list of things to consider as the community contributes.

The Technology of APIs
This is where all API operations, good or bad, begin--with the technical implementation of the API itself. When you are dealing with something as technical and abstract as an API, it can be hard to separate the technology from everything else, but I feel it is an important exercise, even if I don't always get it right. Here are some of the technological aspects to consider when rating APIs from the technical view:

  • Design - How is the API designed? Is it simple, web-focused, following modern conventions, and leveraging HTTP over language specific characteristics?
  • Authentication - What sort of authentication technology is employed? Are modern approaches, from basic keys, to tokens, and OAuth, employed?
  • Availability - Is the API available? Are there significant outages in recent times, and what is the overall track record for the platform?
  • Scalability - Will the API scale technically, and meet the demand of its users? Does the underlying code, server, and database scale with the demand?
  • Code Resources - What extra code resources are available around an API, from CLIs to SDKs, and starter kits? Code resources play a big role in API success.
  • Webhooks - The presence of webhooks tells me that a platform operates as a two way street, contributes to availability and scalability, and is a nod towards real-time.

To establish a true rating criteria, you would need to break many of these things down, and establish a way of actually validating in a programmatic way. You would need to be able to look for consistent API design patterns, usage of HTTP status codes, and how available and scalable a platform is. With most of the technology of an API hidden behind the facade, you are left with little to go on, but the design, authentication pattern, availability, scalability, and any other technical resources that are made available.
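As a starting point, a baseline technical rating could simply weight the criteria listed above--the weights, and the 0-5 scoring scale, are assumptions of mine for illustration, not an established standard:

```python
# Hypothetical sketch: a baseline rating that weights the technical
# criteria above. The weights and 0-5 scores are illustrative only.

WEIGHTS = {
    "design": 0.25,
    "authentication": 0.15,
    "availability": 0.20,
    "scalability": 0.15,
    "code_resources": 0.15,
    "webhooks": 0.10,
}  # weights sum to 1.0

def technical_rating(scores):
    """Scores are 0-5 per criterion; returns a weighted 0-5 rating.

    Criteria missing from `scores` count as zero, which penalizes
    platforms that hide everything behind the facade.
    """
    return sum(WEIGHTS[k] * scores.get(k, 0) for k in WEIGHTS)
```

The hard part is not the arithmetic, but turning each criterion into something that can actually be validated programmatically, rather than hand-scored.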

The Business of APIs
I have worked hard to understand the line between the tech, and the business, of API operations. I feel this separation is critical to truly understanding why APIs work. Like the technology behind an API, the business approaches to API operations can be subjective, and vary from provider to provider. However, in my work I have found some ways of identifying the healthy business practices, as well as some unhealthy business practices, that exist in the space.

  • Documentation - The availability of simple, intuitive, up to date API documentation, usually of the interactive kind is standard operating procedure of good APIs.
  • Self-Service Registration - Self-service, 24/7 registration for API access is a hallmark of good APIs--leaving access considerations dealt with via multiple tiers of API plans.
  • Business Model - A clear business model, and pricing is always present in the good APIs. It can be easy to spot the bad apples, because they tend to only be free.
  • Support - What support resources are available? Are there multiple channels of self-service, or direct support, with a real person behind the scenes supporting them?
  • Communication - Does an API platform communicate? Are there multiple channels like a blog, Twitter, or other social accounts available? 
  • Change - How often, and how rapidly, does the platform change, and are changes communicated through a road-map, and other available channels? 

An API platform reflects the core business values of the company that operates it. The business of an API sets the tone for their entire community, something API consumers can contribute to, but it is ultimately left to the platform provider to set overall. It is fairly easy to identify bad actors, because of significant gaps in the business model, support, and communication around platform operations. There are many other aspects of API business operations to consider, but this provides a good base to start.

The Politics of APIs
In the last three years, another significant line has emerged when determining which APIs are good ones, and which are the bad ones. While much of this area depends on legal building blocks of API operations, I call it the politics of APIs, because it tends to reflect the politics of a company, or maybe the industry it resides in. Some of these areas will make or break APIs, and definitely put them in the good or bad bucket.

  • Terms of Service - What is in the terms of service? Are they heavily weighted towards the platform, or do they allow for innovation by consumers, within the ecosystem?
  • Privacy Policy - What does privacy look like on a platform? What do official policies state, and what does actual practice look like on the platform, and among consumers?
  • Platform Security - How is security handled across platform operations? How open is a platform about its security policies? This will be the number one criteria going forward.
  • Transparency - How transparent is a company with platform operations, and the overall roadmap? Is there an adversarial relationship with the community, or an open, collaborative one?
  • Licensing - What software, content, and data licenses are in place? How are the API interface, server, and client code licensed, as well as the supporting content, and resources for the platform?
  • Rate Limits - What rate limits are imposed by the provider, and are they consistent and fair across access plans? Do rate limits favor the platform or the consumer?
  • Investment - Who is invested in the company behind an API platform? Where in the investment cycle is a company, and how much money has been invested?

Those are just a handful of the political elements of API platform operations, but they are the usual suspects when it comes to determining the good APIs from the bad ones. In my experience, this tends to be the hardest area to nail down when rating API providers. This is where the most smoke and mirrors is deployed, and nothing is ever what it seems, but this area makes or breaks many API operations, in more situations than the other technical and business elements listed.

The areas I provided across the technical, business, and political considerations for rating APIs as good or bad can be broken down much further. I have specific questions that should be asked about API design, communication, and platform licensing, but for this discussion I just wanted to highlight the core groups of considerations. The separation between these areas, as well as the overlap, is tough to see sometimes, but it is something that is coming into focus, the more APIs I look at through this lens.

How Do We Make One Such API Rating Model Work?
With a rating definition in hand (theoretically), how do we move forward with actually making this a reality? This is a huge problem, one very few people or companies truly grasp the scope of, and one that a wave of new startups think they can solve, only to watch ALL of them fail. This is a problem that I feel will take four distinct groups to make happen, each bringing their own assets to the table, when it comes to making one such API rating model work.

The Essential API Provider Signals
What information can API providers share, and what signals are they sending, either consciously, or unconsciously, that can be included as part of a rating system? There are numerous aspects of API operations on the open Internet that are ripe for discovery, monitoring, and ultimately rating how a provider approaches their operations.

  • Blog - An active blog is a great way to understand the viability, tone, and overall presence of an API provider, with an RSS bonus for monitoring.
  • Twitter - An active Twitter account is second to a blog, when it comes to understanding the viability, tone, and overall presence of an API provider.
  • Github - The social coding platform is quickly surpassing Twitter and a blog as the most important signal an API can send about its viability as a platform.
  • Press - A regular PR drumbeat out of a company is a great way to understand the investment in any platform, and the tone that it will take in the community.
  • Road-map - The regular updating, and sharing of the platform's road-map, providing as much information as possible about what is coming down the road for the platform.

It may sound silly to track on such basic things, but when a platform is about to die, the blog and Twitter accounts go silent, and you see diminished activity on Github. These content and social signals are the heartbeat of modern API operations. If you aren't telling stories, being social, and pushing code on a regular basis, your overall rating in the space goes down.
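
To make that heartbeat idea concrete, here is a minimal sketch that classifies a provider's activity from its most recent blog post and Github commit dates. The 90-day quiet threshold is my own assumption:

```python
from datetime import date

def heartbeat(last_blog_post, last_commit, today, quiet_days=90):
    """Classify a provider's content and code heartbeat.

    Any channel silent longer than `quiet_days` counts against the
    platform; both channels going silent is the classic sign of a
    dying API.
    """
    silent = [name for name, last in (("blog", last_blog_post),
                                      ("github", last_commit))
              if (today - last).days > quiet_days]
    if len(silent) == 2:
        return "possibly dying"
    if silent:
        return "quiet on " + silent[0]
    return "active"
```

For example, a provider whose last blog post was in January, but who committed code two weeks ago, would come back as "quiet on blog".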

The Signals From The Community, Industry, and Open Web
What is the community saying about a platform? One of the benefits of being a public API operation is that you can generate the right signals across the space that attract other developers, businesses, and analysts, and generally open you up for inclusion in any potential API rating systems. What are some of the common signals the community sends, that could be gathered as part of an open API rating system?

  • Blog Sharing - Which blogs posts were shared across the community, and at large on social networks, and bookmarking sites?
  • Twitter Followers - How many Twitter followers does a platform have? Did they buy them?
  • Twitter Retweets - What gets retweeted by the Twitter community? What is the echo from platform operations?
  • Twitter Favorites - How many tweets have been favorited? Even if this is just for bookmarking, it acknowledges people are paying attention.
  • Github Followers - How many Github followers does a platform have?
  • Github Repo Stars - How many stars have been received across a platform's repositories?
  • Github Repo Forks - How many repositories have been forked, and how many times?
  • Github Repo Watchers - Are there watchers of a platform's Github repositories? How many?
  • Platform Stories - What other stories are told about an API provider, via popular tech blogs, and other industry news.
  • QA & Forum Discussions - What is the presence, and conversation on sites like Stack Exchange and Quora, and what is the overall sentiment?

These are just a handful of the signals I tune into, when it comes to monitoring what API providers are up to across the sector. While there are no absolutes when it comes to these data points, they provide me a buffet of information to make my own decisions around, when it comes to what the community is thinking regarding any particular platform.
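
There is no official formula for rolling these signals up, but a naive one could cap each raw count, normalize it, and take a weighted average. Every cap and weight below is purely illustrative:

```python
def community_score(signals, weights=None):
    """Roll raw community signal counts up into a single 0-100 score.

    Each signal is capped, normalized to 0-1, then combined as a
    weighted average. The caps and default weights are made up.
    """
    caps = {"twitter_followers": 10000, "twitter_retweets": 500,
            "github_stars": 1000, "github_forks": 500}
    weights = weights or {name: 1.0 for name in caps}
    total_weight = sum(weights.values())
    weighted = sum(weights[name] * min(signals.get(name, 0), cap) / cap
                   for name, cap in caps.items())
    return round(100 * weighted / total_weight, 1)
```

A platform with 500 Github stars and nothing else would score 12.5 under these defaults, which is exactly the point--no single signal should be able to carry a rating on its own.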

What The Almighty Analyst Gods Think About An API Platform

The third area that has a significant influence on the API space, is the analyst perspective. I put myself in this category, as I make a living analyzing the API sector. If we are going to strike a balance across the internal provider, and the public view of any single company, and its API offerings, we will need an informed analyst community to weigh in. Here are just a handful of the data points I include in my own analyst ranking variable.

  • My Notes - How many occurrences of a company are there in my notebook?
  • My Curation - How many times have I bookmarked and curated a post from, or about a platform?
  • My Posts - Have I written stories about a company? How many?
  • My White Papers - Has a platform been referenced, or featured in one of my white papers or guides?
  • My Tweets - Have I tweeted anything from a platform's Twitter account? Or a mention about them?
  • My Gut - What is my general feeling, and gut reaction to an API platform?

This reflects my own criteria that goes into my analyst ranking for a company, and yes, I have a tool that helps me calculate this across the companies, and individuals I track on. Other analysts are probably going to have their own formula, but it is something I'd love to start seeing shared by the community--how do experts in the space feel about existing APIs, and their technical, business, and political approach to API operations?
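
My actual tool is not public, but a sketch of the general shape of such a tally might look like this. The point values and the 0.5-1.5 gut multiplier are hypothetical stand-ins:

```python
def analyst_rank(notes=0, curated=0, posts=0, papers=0, tweets=0, gut=1.0):
    """Tally notebook-style analyst signals into a single number.

    Each kind of engagement earns points (white papers the most, as
    they imply the deepest research), and `gut` is a 0.5-1.5
    multiplier for the general feeling a platform leaves behind.
    """
    points = 1 * notes + 2 * curated + 5 * posts + 10 * papers + 1 * tweets
    return round(points * gut, 1)
```

So a platform with three notebook entries, two curated posts, one story, and four tweets would tally 16.0 before any gut adjustment.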

There Is Always A Role For Government To Play In All Of This
Whether you believe in the power of government or not, there will always be a role, both positive and potentially negative, for government of all shapes and sizes to play in this API rating system. Like other data points considered as part of this rating system, the following data points should be considered, but will mean different things to different folks, and APIs.

  • Filings - What are the SEC, IRS, and other government filings by companies who operate API platforms?
  • Investigations - What are the ongoing investigations by government agencies into companies who operate API platforms?
  • Stories - What are some of the leaks, and other stories being told in media and government circles about how companies are viewed?
  • Patents - Are there patent filings by specific platforms that impact the overall API space, down to specific industries?
  • Legal - What court battles are being waged, at any level of our legal system, that impact specific platforms, and the industries they operate in?
  • Legislation - What legislation is being planned, or is on the books, that impacts any specific sector an API provider operates in?

There are a number of government induced influences that could impact the rating any particular API platform may receive. Patents could be seen as strengths or even weaknesses, and existing court battles, filings, investigations, and other government influenced decisions may weigh into rating decisions as well. The involvement by the government of all sizes is only going to increase in the API economy.

It Will Take Everyone Working Together To Make This Work
A realistic API rating system or systems, will take work between all the groups listed above. API providers will have to step up and be more transparent, and share the data they possess. The developer community, API service providers, and the public at large will have to share information, and knowledge they accumulate from across the space. Analysts are going to have to step up, providing leadership and analysis across all industries, as well as within specific industries. The government will play a role in not just regulating industries, but also being API providers, and consumers themselves, providing valuable data, and aggregate resources, as well as informed analysis from the vital public sector perspective.

How Do We Make Sure A Shared, Open Blueprint For Rating Algorithm(s) Exists?
This post is just an executive summary of the data points I'd like to see in a potential open API rating algorithm. Ultimately I'd like to see an API-Rating outline published to Github, outlining all of the areas I track on. I'm happy to be the caretaker of these open blueprints, and use Github to store the algorithm, and the public or proprietary data that goes into the end rating. It should be an open, and transparent blueprint that anyone can follow, or contribute to.

What Are The Models For Incentivizing The Investment Of Open And Proprietary Data In Rating Algorithm(s)?
In establishing an open blueprint, and housing the algorithm for an open API rating system, I have ideas about how we can store the open, and even proprietary, datasets that go into different areas of the API rating system. Using a Github organization, and a series of public, and private repositories, it is easy enough to provide a central place for everything to live. I've seen evidence of strong currents in the space that will swoop in and snatch up tooling and specifications that gain adoption, and know there will be many groups who seek to influence the rating system disproportionately--it has to be an open blueprint, and have the ability to support public and private data, from a variety of players, that will drive the rating results.

This Will All Take Time, But We Need To Start Somewhere
As with the API discovery work I'm doing with APIs.json, I know this is a long game, and will not happen overnight. With APIs as the driving force behind the growth in online commerce, social, cloud, mobile, and devices, touching almost every industry in 2015, the need for not just discovery of high quality APIs, but also being able to rate them, is only going to increase. We have to begin somewhere, and while some of what I propose seems far-fetched, it represents my efforts to define a rating algorithm, and I am the only person to put anything forward, beyond thinking it should be a startup.

I have most of the v1 algorithm available, it would just take some work to hammer it into a coherent road-map. I have some of the data needed to get a v1 rating off the ground, going back to 2011 and 2012. With some work we could pull together other missing data. However, to make this work, it would require API providers to step up and share data, and API service providers to do the same. Many leading API providers will be unwilling to participate, because they are rated highly without a rating system. It will be the new APIs that will be willing to contribute to such a system, in hopes they can compete with the existing leaders.

I see what I'm calling API Rating as something that needs to evolve alongside my API discovery, and API monetization research. I will set up a project to help me tune into the bigger picture. I will also formalize my look into crafting API plans, that are driven by the business model, and monetization goals of companies playing in the API game. Alright, moving on to crafting some blueprints for Twilio, SendGrid, and other common messaging APIs for their API plans, and API rating...then we'll do some comparison, and see how any of this will actually make sense in the wild.

After Combining My API Plans, Pricing, And Rating Research I See Hints Of An API Industry Economic Engine

After writing I Have A Bunch Of API Resources, Now I Need A Plan, Or Potentially Several Plans, and How Are We Going To Create The Standard And Poors And Moodys For The API Economy, I wanted to combine what I had learned while crafting these stories, and try to look at how these two areas could work together. The API plan and pricing research is derived from existing approaches to API service composition introduced by providers like 3Scale, however the rating portion is fresh territory for me, with very few precedents to follow.

As I start wading through a structured approach for API providers to craft meaningful API plans to serve up their API resources, and for how developers will be paying for API usage via the apps they build, I begin to see the potential of a structured approach to API plans, and pricing. When you start thinking of the implications across providers, and consider the opportunities for developers to manage API consumption across API providers, and exchange the credits they purchased, or generated via API usage--a potential blueprint for an economic engine for the API space begins to emerge.

I wanted to explore this concept, by crafting a visualization, and ponder how common approaches to API plans, and pricing, could be complemented by a standardized API industry rating system.

The only thing really original in this diagram is the introduction of an API rating system, and the potential for developers being able to exchange credits between the API service providers they depend on. The rest reflects standard API management approaches, as defined by API providers like 3Scale. If you aren't familiar with modern approaches to API service composition, API providers can have many different API resources, as well as many different plans for subscribing to these API services, which provide a wealth of dimensions for API providers to define, price, and limit how developers put API resources to use in applications.

The T Circle in the above diagram is where the current magic happens, when you put modern API management solutions to work for API operations. This allows you to mix and match access to your API resources, charging different prices, to different developer groups, introduce volume usage levels, and measure as many dimensions of consumption as you desire. Rarely do successful APIs have just one rate of access to them, this approach to API management allows providers to maximize access to resources, while also maximizing potential revenue around subscriptions, and usage consumption.
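
As a rough sketch of this kind of service composition, here is a hypothetical set of plans with volume tiers and overage pricing. Every number below is made up for illustration:

```python
# Hypothetical plans in the style of modern API service composition.
PLANS = {
    "free":  {"monthly_fee": 0,  "included_calls": 1_000,   "overage_per_call": None},
    "basic": {"monthly_fee": 20, "included_calls": 50_000,  "overage_per_call": 0.001},
    "pro":   {"monthly_fee": 99, "included_calls": 500_000, "overage_per_call": 0.0005},
}

def monthly_cost(plan_name, calls):
    """Price a month of consumption: base fee plus any overage."""
    plan = PLANS[plan_name]
    overage = max(0, calls - plan["included_calls"])
    if overage and plan["overage_per_call"] is None:
        raise ValueError("the free plan hard-limits at its included calls")
    return plan["monthly_fee"] + overage * (plan["overage_per_call"] or 0)
```

Under these made-up numbers, a "basic" subscriber making 60,000 calls would pay $30 for the month--the $20 base, plus 10,000 overage calls at a tenth of a cent each.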

When you start putting API providers into a standard API plans, and pricing framework like this, you start seeing intra-provider opportunities, industry wide benefits, as well as the potential to really make API consumers' lives much easier. If there were standardized ways to understand how API providers were pricing their resources, and how they tiered access, and adjusted pricing between these tiers, the API industry competition would heat up significantly. The problem comes in when you start allowing developers to transfer credits from provider to provider--while it may seem like you are transferring common units of value, in many cases there are big differences between each API provider, and what the value of one transaction may be.

Once you introduce an API rating system to the equation, it provides one possible way of rating each provider, and setting a benchmark that can be used to exchange credits between each platform. If each provider operated in a credit format, that could be cashed out using an exchange rate, or possibly transferred from one platform to another, you'd start to see some potentially interesting market effects. I think you'd quickly see some negative, as well as positive effects, but I think some balance could be struck between some of the common API resources being served, and consumed across the API space. Newer resources, with fewer precedents, would be very volatile for a period of time.

The R Circle in the above diagram is meant to reflect the potential of an industry wide API rating system, when you have standardized information on API pricing, across API providers. This pricing would vary across the plans each provider offers, but you could still come up with a median price for individual providers, and even entire industries. When you apply the rating system you could provide a potential exchange rate that could be applied when moving credits bought, and earned via each API platform, between API providers. Developers could explore which platforms allow them to generate credits by engaging with API resources, exchange them with other providers, and then spend the credits on other services they need, in lieu of cash. This also opens up the possibility of markets for API credits exchanged across business sectors being impacted by APIs.
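
One naive way such an exchange rate could be computed--scaling each platform's median price-per-call by its 0-100 rating--might look like this. Both the formula and the field names are my own invention:

```python
def credit_value(provider):
    """Per-credit value: median price-per-call scaled by a 0-100 rating."""
    return provider["median_price_per_call"] * provider["rating"] / 100

def exchange_rate(provider_a, provider_b):
    """How many of B's credits one of A's credits would buy."""
    return credit_value(provider_a) / credit_value(provider_b)
```

A highly rated provider with cheap calls could end up at parity with a poorly rated provider charging twice as much, which hints at exactly the kind of market effects, positive and negative, described above.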

I'm just exploring concepts involved with common approaches to API plans and pricing, and brainstorming a potential API rating system, then using my imagination to understand the developer, and industry implications of this one possible future. Only one part of this equation exists, and it would take some significant work to bring such a thing into reality, but it is fun to explore, and consider one possible design for an economic engine, that could possibly scale, and drive the API industry.

API Visualization Exploration Using API Definitions

Swagger is now Open API Definition Format (OADF) -- READ MORE

There are a number of areas across the API life-cycle that are being expanded upon in the current space, thanks to the evolution of API definition formats like Swagger, API Blueprint, and RAML. One area where I haven't seen as much growth as I'd like is visualizations driven by API definitions.

There are two distinct pools of API definition driven visualization: 1) letting you visualize the surface area of an API, and 2) letting you visualize the resources made available via an API. One area my friend @APIHandyman has been exploring is the surface area of APIs.

@APIHandyman has a nice prototype created that he is calling "Swagger Specification Visual Documentation". The API definition driven visualization uses a D3.js visualization to help you explore the surface area of any API that is defined using Swagger. I have written about API definition driven visualizations before, so I am happy to see the concept being pushed forward, as we have a lot of iterations to cycle through before we find a visualization format(s) that works for different API designers, architects, and developers.
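
As a taste of what feeds such a visualization, here is a minimal sketch that flattens a Swagger 2.0 document (parsed into a Python dict) into the (method, path) node list a D3.js layout would render:

```python
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def surface_area(swagger):
    """Flatten a Swagger 2.0 dict into sorted (METHOD, path) pairs."""
    nodes = []
    for path, operations in swagger.get("paths", {}).items():
        for method in operations:
            if method.lower() in HTTP_METHODS:
                nodes.append((method.upper(), path))
    return sorted(nodes)
```

The list of pairs is the node set; edges, grouping by tag, and parameter detail are where the real visualization work begins.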

The visual documentation that @APIHandyman created runs on Github, and he is looking for feedback on the micro tool, and where he should take it next. He recently added a bigger information display area, but could use the community's ideas on how to make it more useful. This type of work is a time drain. Every time I started playing with Swagger + D3.js I would lose an entire evening, and have very little to show for my work, so I know how valuable feedback can be.

I strongly feel that API definition driven D3.js visualizations will be the future of API design, management, and orchestration. APIs are going to continue to grow in number, and scope, and we will need simple, visual ways we can quickly traverse the landscape, and make sense of things. If you are working on an API definition driven visualization tool, either for the surface area of an API, or helping visualize the actual resources being served up, please let me know so I can showcase it.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

An API Idea: HTTP Status Code Clinic

This is one of my ideas for an API service provider, that I will never get time to do, but would love to see exist in the space, so I am happy to put out there for someone else to do. I'm calling this one an HTTP Status Code Clinic, which would be an API definition driven API service provider (why would you do it any other way in 2015), that would help API providers tailor more useful, and meaningful HTTP status codes.

HTTP Status Code Clinic would be a simple software as a service, complete with its own API (why would you do it any other way in 2015), and CLI for operating as well. You would pass it a Swagger, API Blueprint, RAML, or any other API definition format, and it would return a bunch of feedback on HTTP status codes returned (or not). The process could anchor the feedback in the current API design, while also providing HTTP status code education along the way.

The service would have to get crafty about how it walked through an API, requesting sample data, or requiring default values for query parameters, headers, body, etc. If it was done well it could be a kind of wizard that walked the user through each endpoint, what was returned, and what might be better solutions, as well as additional status codes that could be used. You could also include examples from popular APIs like Twilio, Twitter, and others, to show what is possible.
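
A first cut of the clinic's core check might simply walk the declared responses in a definition and flag the thin ones. The heuristics here are only a starting point, assuming a Swagger 2.0 document parsed into a dict:

```python
METHODS = {"get", "post", "put", "patch", "delete"}

def status_code_feedback(swagger):
    """Flag operations in a Swagger 2.0 definition with thin responses."""
    feedback = []
    for path, operations in swagger.get("paths", {}).items():
        for method, operation in operations.items():
            if method.lower() not in METHODS:
                continue  # skip path-level keys like "parameters"
            codes = set(operation.get("responses", {}))
            label = method.upper() + " " + path
            if codes <= {"200"}:
                feedback.append(label + ": only 200 declared -- consider "
                                "400, 401, 404, and 500 at a minimum")
            elif not any(code.startswith("4") for code in codes):
                feedback.append(label + ": no 4xx responses declared")
    return feedback
```

Static analysis of the definition is only half the clinic; the wizard would then make live requests to compare what the API actually returns against what it declares.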

I recommend you do this as a standalone service. While it might be nice to have as part of an existing service provider stack, I think it makes more sense to offer it as a standalone clinic / service that API providers could use when they are ready. API designers and architects could make it part of the API design and definition portion of their life-cycle, as well as integrate it into their existing systems using the HTTP Status Code Clinic API--other API service providers could also integrate the features into their tooling using the API.

This is an open API idea, yours for the taking. If you want more detail from me on how I'd approach it, I would be willing to do some tech investment for a piece of the action. Good luck!

Hand Crafted Or Generated SDKs For Your API?

One of the discussions we are having behind the scenes at APIWare.io, is whether we should be hand-crafting, or auto-generating SDKs for the clients we are working with on the design, deployment, and management of their APIs. 

The answer is easy--we do both! 

There are many situations where I would push for the API SDKs to be hand-crafted, and designed for a specific API, as well as specific language or platform. However, with the increase in the number of APIs, and simplicity of some of the APIs I work with, crafting an API definition, and auto-generating your SDKs using a service like APIMATIC, makes sense.

I've cracked open the APIMATIC SDKs, and only being fluent in JavaScript, PHP, Python, Ruby, and C#, I can only speak to these realms of operation, but I'm happy with what the APIMATIC team is producing. For me, this isn't a black and white game, it isn't hand-crafted vs. autogenerated--it is how you employ both approaches across your API operations.

APIMATIC provides high quality SDKs in 10 languages, some languages I do not speak. The APIWare.io team has a wealth of programming talent on the team, and while we could hand code libraries across these areas, for all of our clients, for many of the more straightforward APIs, it just makes sense to generate. We are already an API-definition first shop, where we craft Swagger, API Blueprint, RAML, or other required API definition format, so using this to generate SDKs makes sense in many situations.

Are you trying to figure out the best path forward for providing high quality SDKs for your API? Let me know how my APIWare team can help. They are willing to help you figure out the best route for providing the SDKs you need, but also help maintain them as part of the versioning, and overall roadmap for your API operations. Even if you just need someone to talk through the pros and cons, I am here to help--let me know.

Adding A New Research Area To API Evangelist I Am Calling #APIDesignFiction

I have been writing some fictional stories on a project site I have called Alternate Kin Lane for some time now. Writing fictional stories in the tech space has provided a sort of pressure release valve for me, giving me a quick creative outlet throughout the week. Honestly, there are more stories in my notebook, than there are on the actual blog, but this is a sign for me that I need to spend more time working in this area.

I am making several shifts in my work lately, in response to kind of hitting the wall recently, a process that includes rethinking how I work on projects, and partner with folks in the API space, as well as an increased focus on the fictional side of my writing. I do not just want to write fiction that is derived from the tech space. I want to write fiction that helps me formulate my own view of the space, and who knows, maybe influence the space in similar ways my regular tech blogging has over the last five years.

If you aren't familiar with the concept of design fiction, it "is a method of critical design that uses fictional and narrative scenarios to envision, explain and raise questions about possible futures for design and the society." Up until now I was just dumping some random, crazy ideas into my fiction notebook, and taking the time to publish a few of these on Alternate Kin Lane. Now I'd like to try and bring it a little closer to my daily API monitoring and research, and see if I can use it to explore concepts around the tech, business, and politics of the API space--which is touching pretty much every aspect of our personal and business lives in 2015, leaving a pretty wide playing field.

I think some of my writing on Alternate API Evangelist will be just as dark as some of the stuff I've written on Alternate Kin Lane, but I am hoping my voice will evolve, and become more nuanced with time. The medium allows me to work through some potentially volatile concepts in a hopefully safer space--meaning, it gives me a quick out with people who disagree. It is fiction! ;-) I'm hoping this helps me work through an unlimited number of scenarios that apply to the API space, but also gives me a place to vent, when the tech space gets me down--which seems to be quite often lately.

While I want this fictional layer of my research to be tightly woven to my regular API industry research, I also want the separation between fact and fiction to be very clear. I will put the hashtag #apidesignfiction in each story title, as well as clearly stating that it is fiction on the page, like I do with Alternate Kin Lane. This way any tweets, or social shares set expectations right away, clearly showing this is a different way of exploring the API space for me--not the API space we have, but the one we might have, or maybe should have, or maybe will have if we don't get our shit together!

Which API Service Providers Across The 20 Areas I Track On Have APIs?

As I go through each of the 20 core areas of the API sector that I am keeping an eye on, in preparation for my keynotes at @DefragCon and @APIStrat, I'm taking a fresh look at which of them have APIs. When you think about it, if a company is selling a product or service to API providers, encouraging an API focus--they should probably have APIs. :-)

As I was updating some of the news, companies, and reviewing the common building blocks I have aggregated across the 20 areas of research, I just poked my head around to see if I could find an API for each service provider. There are many things that look like a language specific API, but I am looking for a clear developer / API area, with a coherent web API available, as we'd expect from API providers in 2015.

After spending a couple of hours looking through my core research, I found the following service providers, in the following areas, actually had web APIs available:

This was my first pass through all of my core research looking for APIs, so I am sure I missed some--let me know which ones I missed. My design, hypermedia, performance, monetization, evangelism, embeddable, client, SDK, visualization, and virtualization research areas didn't have any clear leaders with easy to find APIs. I will update as I find them--which smells like an opportunity to me.

Eventually I'd like to see some common definitions established across all these areas of the API space, but as you can see, like the rest of the API space, there is a lot of work to be done. Now that I have a list of API service providers who have APIs, I can start crafting, and aggregating API definitions for all of their APIs. Once I've done this I can start looking for the overlap across them, and see if I can highlight the common patterns, and establish some bridges for the differences between them.
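
The overlap hunt could start as simply as comparing the path surface of two definitions. This is a hypothetical first step, assuming Swagger 2.0 style documents parsed into dicts:

```python
def definition_overlap(spec_a, spec_b):
    """Report shared and unique paths between two Swagger 2.0 definitions."""
    paths_a = set(spec_a.get("paths", {}))
    paths_b = set(spec_b.get("paths", {}))
    return {"shared": sorted(paths_a & paths_b),
            "only_a": sorted(paths_a - paths_b),
            "only_b": sorted(paths_b - paths_a)}
```

Exact path matches are only the crudest signal--the interesting patterns will come from comparing parameters, response shapes, and naming conventions, which is where the bridge-building work lives.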

Every potential stage of the API life-cycle will have to be automated in the future, and ironically APIs will be how we do that.