{"API Evangelist"}

Realizing I Need Hypermedia To Bring My API Lifecycle Vision To Life

I have been learning about hypermedia for the last three years now, and only earlier this year did I begin playing with Siren to help me craft a better experience around my API industry news and link curation API. My motivation in going down this hypermedia road was never about easing my client-side pain, or helping me with future versions of my API--I am just not that sophisticated of an operation.

I started playing with hypermedia to help evolve the experience around the API news I was curating each week, making it so you could browse week by week, month by month, but also by topic, company, author, etc. I'm still trying to figure it all out, and honestly the project is currently in the ditch after hitting the wall this fall, and not really giving a shit about the flow of API news. (I am better now, thx!)

Now in December, I'm trying to take my building block API, which provides access to over 600 of the common patterns I've tracked on across the API space, further down this road. These are the features offered by API service providers, and the competitive advantages brought to the table by the successful API providers I keep an eye on--they are all potential stops along the API life-cycle I am working to define.

My building block API is a pretty standard content API, providing each element broken down by category and type, with other supporting details. However, now I need to be able to plot them on a subway map, with an endless number of configurations and dimensions to the journey. Via my building block API, I need to return any single stop along the API life-cycle, but along with it I need to provide the next stop in the line, and the previous stop (paging 101)--and if I hit a transfer station, or other element, I need to offer an unlimited number of dimensions.
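To make that a little more concrete, here is a rough sketch of what a single stop could look like as a Siren entity, where the client follows link relations like next, prev, and transfer instead of hard-coded URLs. The resource names, fields, and URLs are hypothetical, not my actual building block API:

```python
# A minimal sketch, assuming a hypothetical /stops/{id} resource--not the
# actual building block API. The point is that the client navigates link
# relations (next, prev, transfer), not hard-coded URLs.
def stop_to_siren(stop, base_url="https://api.example.com"):
    links = [
        {"rel": ["self"], "href": f"{base_url}/stops/{stop['id']}"},
        {"rel": ["next"], "href": f"{base_url}/stops/{stop['next_id']}"},
        {"rel": ["prev"], "href": f"{base_url}/stops/{stop['prev_id']}"},
    ]
    # A transfer station exposes additional lines the client can follow.
    for line in stop.get("transfers", []):
        links.append({"rel": ["transfer"], "href": f"{base_url}/lines/{line}"})
    return {
        "class": ["stop"],
        "properties": {"name": stop["name"], "category": stop["category"]},
        "links": links,
    }

example = {"id": 42, "name": "Documentation", "category": "management",
           "next_id": 43, "prev_id": 41, "transfers": ["design", "testing"]}
print(stop_to_siren(example))
```

The payload stays a pretty standard content API response, but the links are what turn it into a transit experience.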

While I am concerned with the behavior of my client, my concerns aren't the normal hypermedia-focused arguments I usually hear. I need to be able to deliver a subway-like transport experience for over 600 stops along the API lifecycle, for any possible API you can imagine. A simple API design just isn't going to cut it; I need a request and response model that can dynamically deliver the API life-cycle transport experience that I am looking to extend to my users.

I wish I had thought this out more, when I first got started designing my building block API. ;-(

See The Full Blog Post


API Economy Tooling For The Business Masses

Most of the tooling and services I come across in the API space are designed for developers. As I play with more services, and put tools to work, trying to understand their role in the API space, some take a lot of work to figure out, while others are pretty clear--it will be these tools and services that the masses of business users adopt, as part of the API evolution that is occurring online currently.

There are just a handful of services out there right now that I think are mainstream ready, and are something that the average business user of the web should be playing with--here are three of them:

  • Restlet - Deploy an API from a Google Spreadsheet.
  • Blockspring - Use APIs in a Google Spreadsheet.
  • Form.io - Deploy an API as a form, via a single page app.

There are other services and tools out there that will help you deploy and consume APIs, but these stand out, because they help the average business user solve real world business problems they are facing, without needing to write code. They also anchor their solutions in existing tooling that is ubiquitous, and part of everyday business operations--the spreadsheet and the form.

Developers might care about the technical details of APIs, and evangelists like me might be able to convince some of the average business users of the potential of APIs, but before we can bring the masses on-board, we will need the right tools. Restlet, Blockspring, and Form.io have the potential to be these tools, and help the "normals" become active participants in the API economy, but more importantly find the (API driven) solutions they need for the problems they face every day.

See The Full Blog Post


The Growing Need For API Virtualization Solutions

The concept of API virtualization solutions has come up over 10 times this month, at Defrag, APIStrat, and in online conversations via Skype and GHangouts. I am not talking about virtualization in the cloud computing and Docker sense, although that will play a fundamental role in the solutions I am trying to articulate. What I am focusing on is providing sandbox, QA environments, and other specialized virtualization solutions for API providers, that make API consumers' worlds much easier.

I've touched on examples of this in the wild, with my earlier post on the API sandbox and simulator from Carvoyant, an example of the need for virtualization solutions tailored for connected automobiles. Think of this, but for every aspect of the fast-growing Internet of Things space. The IoT demand is about the future opportunity; I've talked about the current need before, when I published I Wish All APIs Had Sandbox Environments By Default.

I envision 1/3 of these solutions being about deploying Docker containers on demand, 1/3 being about virtualizing the API using common API definitions, and the final 1/3 being about the data provided with the solutions. This is where I think the right team(s) could develop some pretty unique skills when it comes to delivering specialized simulations tailored for testing home, automobile, agriculture, industrial, transportation, and other unique API solutions.
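To show what I mean by the definition-driven third, here is a toy sketch of "virtualizing" an API from a (greatly simplified) machine-readable definition--the kind of thing you could package in a Docker container and spin up on demand. The paths and sandbox data are hypothetical, loosely inspired by the connected car example above:

```python
# A toy sketch, not a real product: read a simplified API definition and
# serve canned sandbox responses, the sort of mock you could bake into a
# throwaway Docker container. Paths and example data are hypothetical.
from flask import Flask, jsonify

definition = {
    "/vehicles": {"example": {"vehicles": [{"id": 1, "vin": "SANDBOX-VIN-0001"}]}},
    "/vehicles/1/trips": {"example": {"trips": [{"id": 9, "miles": 42.3}]}},
}

app = Flask(__name__)

def make_view(payload):
    # Factory avoids the late-binding gotcha when registering views in a loop.
    return lambda: jsonify(payload)

for path, spec in definition.items():
    # One sandbox endpoint per path in the definition, returning example data.
    app.add_url_rule(path, endpoint=path, view_func=make_view(spec["example"]))

if __name__ == "__main__":
    app.run(port=8080)  # e.g. the entrypoint of an on-demand sandbox container
```

Swap the hand-rolled definition for a full OADF file, and the canned payloads for specialized simulation data, and you have all three thirds working together.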

We are going to need "virtualized" versions of our APIs, whether it is for web, mobile, devices, or just for managing APIs throughout their life cycle. You can see a handful of the current API virtualization solutions out there in my research in this area, but I predict the demand for more robust, and specialized API virtualization solutions is going to dramatically increase as the space continues its rapid expansion. I just wanted to put it out there, and encourage all y'all to think more about this area, and push forward the concept of API virtualization as you are building all the bitch'n tooling you are working on.

See The Full Blog Post


Freemium Access For Your API Is Not Bad, It Is Just One Tool In Your Provider's Toolbox

I get regular links sent to me, and folks telling me that freemium API access is a bad idea. That it doesn't help your API sales funnel, and was something that was just a thing, for a brief moment in time. The folks who always bring me these stories are not API consumers--they are business folks.

I agree, there are some really bad examples of freemium in play across the API space, and with the bad behavior we see from API consumers, I fully understand why these stories make their rounds--leaving a bad taste in the mouths of API providers when it comes to freemium access.

The disconnect that allows this to happen, in my opinion, is folks not thinking through their monetization, plans, and pricing in an API-centric way. Folks approach it as an across-the-board monetization approach, and do not think of it as a tool to apply on a resource by resource basis.

Freemium is just one tool in your toolbox, and should be evaluated at every step in the monetization planning for your APIs, and used, or not used, based upon your well-planned on-boarding funnel for your API consumers.

Consider educational access to your API resources (also one tool in your toolbox), which is a common offering from API providers. You wouldn't offer this in all scenarios, it just doesn't make sense, but you also wouldn't just dismiss it as a bad idea--you keep it in your toolbox for when it makes sense.

I see API operations led by business folks dismiss freemium as a bad idea because they don't have experience with how it influences the on-boarding process, but I also see API operations led by developers dismiss the need for traditional sales approaches, and hurt themselves in similar ways.

Ultimately, I recommend iterating through the list of common API monetization building blocks I've aggregated from leading API providers, for each resource you are planning to release. If you have a modern API infrastructure provider like 3Scale, you will be able to define individual plans, using a variety of units of value, at different rates, rooted in who you are targeting with your API resources--make sure you have as many tools as possible in your toolbox, and don't throw any away.
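To make the per-resource idea a little more tangible, here is a hypothetical sketch of how that iteration could be captured--resource names, tiers, and rates are purely illustrative, not anyone's actual pricing:

```python
# A hypothetical sketch of per-resource monetization planning: each resource
# gets its own mix of tools from the toolbox, rather than one blanket model.
# Names, limits, and rates are illustrative only.
resources = {
    "images": {
        "free": {"rate_limit": "1,000/day", "price_per_call": 0.0},
        "educational": {"rate_limit": "10,000/day", "price_per_call": 0.0,
                        "requires": "verified school account"},
        "paid": {"rate_limit": "unmetered", "price_per_call": 0.002},
    },
    "screen-capture": {
        # No freemium tier here; this resource is costly to serve.
        "trial": {"rate_limit": "100 total", "price_per_call": 0.0},
        "paid": {"rate_limit": "unmetered", "price_per_call": 0.01},
    },
}

def plans_for(resource):
    """Which tools from the toolbox apply to this particular resource."""
    return resources.get(resource, {})

print(plans_for("images"))
```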

See The Full Blog Post


Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information

My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I published APIs.json indexes for the almost 1000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission around the project so far has been to create an open source, machine readable repo for the API space.

I have had two recent occurrences that are pushing me to expand on my API Stack work. First, I have other entities who want to contribute monitoring data and other elements I would like to see collected, but haven't had time to collect. The other is that I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes, so that others can access them.

Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially make monitoring, testing, and performance data available.
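As a sketch of what a single entry in that public repo could look like, here is a rough APIs.json index pointing at the machine-readable artifacts--the property type names and URLs are illustrative assumptions, not a final format:

```python
# A rough sketch of an APIs.json index entry for one company in the repo,
# pointing at its OADF definition, pricing, TOS, and monitoring data. The
# property type names and URLs here are assumptions for illustration.
import json

entry = {
    "name": "Example API Provider",
    "description": "Profile collected as part of the API Stack",
    "url": "http://theapistack.com/example/apis.json",
    "specificationVersion": "0.14",
    "apis": [{
        "name": "Example API",
        "humanURL": "https://developer.example.com",
        "baseURL": "https://api.example.com",
        "properties": [
            {"type": "Swagger", "url": "http://theapistack.com/example/oadf.json"},
            {"type": "X-pricing", "url": "http://theapistack.com/example/pricing.json"},
            {"type": "X-terms-of-service", "url": "https://example.com/tos"},
            {"type": "X-monitoring", "url": "http://theapistack.com/example/monitoring.json"},
        ],
    }],
}

with open("apis.json", "w") as f:
    json.dump(entry, f, indent=2)
```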

Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo, for some of the top APIs out there, and start playing with the best way to store the JSON and other files, and provide an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked on, let me know, and I'll add it to the road map.

My goal is to go for quality and completeness of data there, before I look to scale, and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.

See The Full Blog Post


Each Of My APIs Has Its Own Github Repo With Two Branches, One Private And One Public

While it will never be 100% complete or perfect, I finally have my API stack in a way that lets me add, evolve, scale, and deprecate the endpoints as I need. I've been centralizing all of my APIs, underneath a single Github organization, with each API as a single repository. 

Each API has from one to around 50 endpoints, existing in its own repo, with the master branch kept private, and the gh-pages branch acting as the public face of the API (aka the portal). This approach gives me a standardized way to manage my fast-growing API stack, in a way where I know how to find anything, anytime.

While the API itself runs on a series of AWS EC2 instances, the private repository is the blueprint for its operation, with server side code, database backups, and details of its overall configuration. The public, or Github Pages branch of the API, acts as the public portal, with docs, client code, OADF file, and of course an APIs.json file.

Within my master API Github organization I have 33 separate API repos, with a single master public page, which acts as the central index of all APIs--a single portal with its own APIs.json file. I use this page to navigate to all my APIs, and I rely on Github heavily to manage the public and private sides of my API operations (emphasis on public).

Now that I have my house somewhat in order, when I'm looking to start a new API, I simply create a new repo, add an APIs.json and OADF file, and begin crafting the API resource that I need. Every other stop along the API life cycle, from deployment to deprecation, uses this central Github repo, and its API definitions, as its central source of truth and operations.
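Here is a rough sketch of what that "new API" step could look like if it were scripted--branches are modeled as folders purely for illustration, and the file names are just one way to lay it out:

```python
# A hypothetical scaffold for the approach described above: one repo per API,
# a private master branch for operations, and a public gh-pages branch that
# carries the portal, APIs.json, and OADF file. Branches are shown here as
# folders purely for illustration.
import json, os

def scaffold_api(name, base_url):
    # Public side (gh-pages): the portal, discovery, and definition files.
    os.makedirs(f"{name}/gh-pages", exist_ok=True)
    with open(f"{name}/gh-pages/apis.json", "w") as f:
        json.dump({"name": name, "apis": [{"name": name, "baseURL": base_url,
                   "properties": [{"type": "Swagger", "url": "oadf.json"}]}]}, f, indent=2)
    with open(f"{name}/gh-pages/oadf.json", "w") as f:
        f.write("{}")  # filled in as the API gets designed
    # Private side (master): server-side code, config, and database backups.
    for folder in ("server", "config", "backups"):
        os.makedirs(f"{name}/master/{folder}", exist_ok=True)

scaffold_api("blog", "https://blog.api.example.com")
```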

I still have a lot of housekeeping left to do, to get things in order, but I can't help myself, and I keep adding new APIs. I'm hoping I can keep automating my API lifecycle using this approach, allowing me to tackle a growing API stack, while staying a one person operation.

See The Full Blog Post


Deploying An API From Your Critical Twitter Data Without Being A Programmer

I am continuing my series on helping non-developers realize they can publish, and put APIs to work, without having an API expert in their pocket, using Zapier, Google Sheets, and Restlet. It's no secret that Restlet is an API Evangelist partner, and they are my partner because they are the easiest way to deploy a web API--something I am trying to help the "normals" understand the untapped potential of.

As I finish up my @APIStrat conference again, for the sixth time, I'm reminded that I need to harvest the essential Twitter exhaust from the conference--otherwise, if I wait too long, I won't be able to get it. You see, Twitter limits what you can grab from the Twitter API by either time or number of Tweets, so I can only gather so many Tweets, and if I wait too long, I'm often out of luck. It is critical that I do this right away, and depending on how loud the Twitter exhaust is, I need to do it while the event is still going on.

To start, you obviously need a Twitter account, but you will also need Zapier, Google, and Restlet accounts. Then, using Zapier, you can gather the following Twitter data points, and send them to a Google Sheet:

  • New Mentions
  • New Followers
  • New Favorites
  • New Tweets

How you store these in Google Sheets is up to you, but I recommend breaking each data point down as its own sheet, and even by separate time-frame like day, week, or month. It helps to keep things broken up into smaller chunks, when you are using Google Sheets as a data store--tricks of the trade!

Zapier gives you everything you need to set up the Zaps that route mentions, followers, favorites, and your tweets to the Google Sheet(s), and once they are there, you can clean up, edit, and organize as you see fit. Next, Restlet comes into the picture, allowing you to deploy an "entity store" from your Google Spreadsheet. Once you have your entity store connected to the Google Spreadsheet, using their simple wizard, you can then deploy it as a web API--Restlet walks you through the entire process, no code necessary.

What you do with your API is up to you, but the important part for me, is I have the references to the Tweets and followers from Twitter that are important to me. When I need to reference the mentions for an event back in 2013, I can do it. When I need to create a special outreach list to anyone who Tweeted at our most recent event, I can do it. 

Simple API solutions, using Zapier, Google Sheets, and Restlet, are empowering to the average business and organizational user who is looking to just get their work done, and track on what is important to them, but in an API-centric way that will then allow the information to be used in web and mobile apps, as well as spreadsheets, widgets, and other API driven goodies.

See The Full Blog Post


When Intelligent Programmers Realize They Do Not Understand HTTP And The Web That They Use Daily

I've seen something at an ever increasing pace lately, situations where very intelligent software engineers hit a wall, and realize they do not understand the fundamental building blocks of HTTP, and the web that we are all using daily. It makes my heart ache, because I remember when I found myself in the same place, and still suffer from the deprivation I experienced.

Whether it was my time programming in Microsoft-land, Drupal, WordPress, or any other Web 2.0-land, I eventually realized how much was hidden from me behind the curtain. When you bundle this with the fact that I'm a fairly privileged white male software engineer who can be fairly clueless about what is around me, I missed a lot. Something I still find myself recovering from in 2015, even after five years of exclusively studying web APIs--you know, the ones that use HTTP for transport?

I'm not stupid, but with my lazerfocus ™, I often miss a lot. I feel that many of the smart people I know suffer from a similar illness, but do not have the fortitude for transparency, or the humility to accept it, which would allow them to move on. Programming language dogma, and platform or framework dependencies, can be a powerful thing, but they can also do a lot to protect us from what we need to learn to actually be successful, and grow.

At this point, I question everything. What I know. What a platform might be hiding from me, to sell back to me as a feature. And what is the bigger picture of the Internet that I use every day, and often take for granted.

See The Full Blog Post


Making Sure Everything You Offer As An API Service Provider Is Portable

Runscope added the ability to import and export your API tests as JSON, helping make API monitoring a much more flexible and portable thing. You can import and export using the Runscope UI, as well as import via their API, helping you automate the setup of your API tests.
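Here is a rough sketch of what that portability could look like in practice--pull a test definition down as JSON, keep it in version control, and push it back through the API. The endpoint paths and payload shape are assumptions for illustration only; check the Runscope API docs for the real import and export calls:

```python
# A rough sketch of test portability: export a test definition as JSON, keep
# it alongside your code, and push it back through the provider's API. The
# endpoint paths and payload shape here are hypothetical, not the documented
# Runscope calls.
import json, requests

API = "https://api.runscope.com"      # Runscope's API base
BUCKET = "example-bucket-key"         # hypothetical bucket key
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

# Pull down a test definition and store it in version control.
resp = requests.get(f"{API}/buckets/{BUCKET}/tests/example-test-id", headers=HEADERS)
test_definition = resp.json()
with open("api-monitoring-test.json", "w") as f:
    json.dump(test_definition, f, indent=2)

# Later (or in another account), re-create the test from the saved JSON.
with open("api-monitoring-test.json") as f:
    saved = json.load(f)
requests.post(f"{API}/buckets/{BUCKET}/tests", headers=HEADERS, json=saved)
```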

I judge API service providers based upon whether they have an API, and I am increasingly encouraging my readers to do the same. I will also be studying the portability of services that are being sold to API providers, and pushing for more import / export features like Runscope offers.

I will be tracking on the API definitions used by service providers like Runscope, across the 26+ areas I monitor, and trying to better understand the JSON schemas they are using to encourage the portability of their services. The API definition, response, and request models put forth by companies tell a lot about the service they offer, in my opinion, and I'd say the overall portability of services is another strong characteristic I will be keeping an eye out for.

See The Full Blog Post


I Like Being Able To Verify A Developer Is Real Before Giving Them Access to My APIs

As I think about the bad behavior that occurs on the API consumption side of API operations, I'm considering ways that I can help API providers address these problems when they arise within their ecosystems. What can you do when bad actors have access to your APIs? Also more critically for some providers, what can you do to prevent bad actors from on-boarding with your API program at all?

I strongly believe that companies should be as public with their API efforts as possible, but when it comes to which developers you let in, and which ones you don't, I'm finding I'm becoming more conservative in my thoughts--as long as you are transparent about the process. I'm still forming all of my thoughts around this (hence the blog post), and I'm sure it is something that will keep evolving as I continue to push forward my awareness of the API space.

When I see a new sign-up for my own APIs, I like to be able to verify who the new consumer is. I like to see a real name, and potentially a company name, but also when I Google the combination, I like to see an active Twitter, LinkedIn, or Github account. It is easy to tell real people from personas that live in the shadows, and I prefer that verifiable people use my APIs.

If you are a public API consumer, I do not think it is unreasonable to ask you to maintain some sort of public presence, to verify who you are, and what you do. I know for many enterprise developers this is insanity, which is why I put LinkedIn profiles in the mix--I do not expect everyone to be super popular on Twitter, and a die-hard Github user. However, in 2015, you really should consider it!

As I'm going through my own API on-boarding process, trying to make it smoother (it isn't the best right now), I am considering how I will articulate what behavior I expect of my API consumers--in plain English. This post is just part of my iteration, putting my thoughts out to the universe, getting feedback when I can, but ultimately knowing that I am the sole decision maker when it comes to setting the tone in my own API community.

What tone are you setting? Do you verify your API consumers? What is the bar you set for a place at the table?

See The Full Blog Post


How Do I Price My API Resources?

I am continuing to push my research around API monetization, plans, and partners forward, while preparing for my API lifecycle keynote at @Defrag and @APIStrat. Along the way, I am also exercising some of my API pricing and planning strategies with my partner in crime at APIware, as we think through some new products that we are developing.

I approach pricing for API products on a resource-by-resource basis, considering all of the API monetization tools in my toolbox. For example, when I launch two new API endpoints for exploding API definitions, and telling me how big my APIs are, I quickly run down my list of monetization building blocks to see which areas will apply to these new resources. The reality is, until these APIs leave an alpha stage, they will just exist in my internal consumption tier.

Even though these APIs aren't ready for prime time yet, I am already thinking through which of my API monetization building blocks will apply. Will I charge for access? Are they available in the freemium tier? Do I offer only a trial version of them? Ultimately it all comes down to who I will be targeting with these resources, and in this case it will be primarily hardcore API architects--something that helps me define how I will price these resources, and which service tiers I make them available in.

When it comes to generating revenue from APIs, there is rarely a one-size-fits-all solution. You should be considering each resource based upon the value it brings to the table, who you are targeting with this resource, and weighing all the tools in a modern API monetization toolbox. Remember, the business of APIs resembles the technology of APIs, where you do things in small chunks, and you iterate, until you find the optimal value proposition that works for the platform, and your consumers.

See The Full Blog Post


Bridging How We Document Our APIs Now With How We Should Be Experiencing APIs Via Hypermedia

I am still catching up on my feeds, and open browser tabs, and one tab that has been open for a couple of weeks is Why Your Colleagues Still Don’t Understand Hypermedia APIs, by Luke Stokes (@lukestokes) of FoxyCart. The post is very thought provoking for me, and represents what I feel is the very pragmatic front of the hypermedia movement, from someone who has helped move the concept of a hypermedia API from academic discussion to reality, with the FoxyCart API.

His challenges at the end of his post really set the stage for me:

So how do we find a balance between idealism about what Hypermedia API documentation systems “should” be and what they practically are? How can we move the whole ecosystem forward by encouraging client developers to code to link relationships instead of hard-coded URLs? How do we help pave the way for the future but not look like unsophisticated outsiders in the process? What pragmatic steps should we take to be like the other cool kids using standard documentation presentations while at the same time saying, “Um, yeah, don’t do that. Don’t code to the URL, code to the link relationship instead.”

For me, his questions illuminate the canyon between where the API community is currently with API design, and the vision of where we should be going with our API design practices. The more I play and learn with hypermedia, the less I see it as the lofty vision for the future, and the more I see it as a set of practical design patterns that will make my API work more meaningful.

The shortcoming of API definitions like Swagger (now OADF) is that it was designed for documenting your API after it was already designed--awesome, but not remotely about API design. The concept of hypermedia is all the way back in the other direction, where you have to put some serious thought into your API design before you ever roll up your sleeves to write code--a big gap between these worlds.

Jakub, Z, and the Apiary.io team have done some amazing work to bring the mainstream conversation closer towards API design, much more than hypermedia folks had ever done--until recently. What Luke asks reflects the work we have left to bridge the divide, and emphasizes that it has to be something that scales. This is why Swagger was successful: it provided the section of the bridge that API architects needed to solve an immediate problem that was on the table, at scale--documentation.

The problem is, the masses are pretty short-sighted, and often unwilling to do the heavy learning and lifting needed to "code to link relationships, instead of hard-coded URLs". Unfortunately we will need more tooling to help us build more sections of the bridge (beyond Swagger), as we won't be able to do it all at once. I think the important thing with this tooling is that we need to make sure, with each section of the bridge that we build, we are also educating API architects, and client developers, around HTTP, media types, link relations, and other empowering hypermedia concepts.

Anyways, Luke got me pumped with his questions. I do not know what the answers are either, but I want to lend a hand in figuring it out, and will be asking these questions over and over in my head, showcasing solutions that y'all are working on, and telling stories here on the blog, until we make some progress.

See The Full Blog Post


APIs Dedicated To Elections At the City, County, State, Or Federal Level

I'm neck deep in government open data again, and as we are gearing up for the presidential election, you really begin to see the potential for accurate, real-time election data via APIs. There are a number of leading election-related APIs at the federal level, like we have from the Sunlight Foundation, and you see the emergence of high value APIs out of government, like the Federal Election Commission (FEC) API from 18F, but with the amount of money in politics, and the scope of what is at stake, I can't help but feel there is a huge opportunity out there for more election APIs.

Seems to me that there is an opportunity for some API savvy activistpreneur to step up at the city, county, state, and even the federal level. I'm not just talking about election districts, candidates, and other common building blocks of elections we experience, but real-time sentiment, super PAC spending, and other influential information that, if available for distribution via APIs, could shift the balance one way or another. It would take a healthy assessment of the current landscape, but I think you could identify some low-hanging fruit to get started with.

Once you get going on the API journey around election data, if you do it right, you will learn a lot, discover other valuable content or data sources, and see patterns that the rest of us might not see. The use of social platforms over the last 8+ years of elections was all API driven, and is just the tip of the iceberg--an API dedicated to elections seems like a huge opportunity to me, in the coming months and years.

Let's get to work!

See The Full Blog Post


The Bad Actors On Both Sides Of The API Fence

I've always been a strong advocate for the API consumer, which is one of the primary motivations for me working to define best practices that API providers can follow across their operations. The majority of my negative experiences, when it comes to APIs, have been as an API consumer, not as an API provider.

As I do this API Evangelist thing longer and longer, the bad behavior by API consumers becomes more clear to me, and I'd say it rivals much of the bad behavior by API providers that I have seen, and in some cases actually helps drive it. I do not have any bad actors in my API community, but through conversations I have had with leading API providers, I'm hearing some pretty crazy stories.

Badly behaved API consumers range from signing up for multiple accounts, rather than paying for higher levels of access, to trolling within the community, treating other developers badly, platform owners horribly, and just being a shitty API community citizen. You will never truly understand how badly API consumers can behave, until you've operated an API platform, and had a large number of consumers putting an API to use. 

While I will always keep my critical stance towards API providers, as I feel they often set the tone for a community, I am increasingly more understanding when platforms have to tighten things down, and get more critical when it comes to the public availability of API resources. In the end, I will always push for more public transparency around every aspect of API operations, but I am increasingly advising companies to have a tight grip on their API service composition, and what resources developers get access to, before they prove they are trustworthy.

See The Full Blog Post


The API Lifecycle (My Talk From @Defrag and @APIStrat)

I recently told the story of how I view the API life-cycle, based upon my research across the space, at the Defrag Conference in Broomfield, CO, and at my API Strategy & Practice conference in Austin, TX. I spent two weeks pushing my research forward in preparation for these talks, and wanted to take a moment to gather my thoughts, and share the narrative of my talk.

When I first gave the title and abstract for both the Defrag and APIStrat keynotes, I called it "the 17 stops along a modern API lifecycle". After pushing my research forward, to support these talks, it became 26 stops, then those became what I am calling "lines", resulting in me just calling the talk "the API life-cycle".

I use my public speaking as a vehicle for my API research, helping me polish my work, and ultimately pushing me to craft better narratives around my work, but most importantly, make it more coherent, and make sense to the average individual. One way that I do this, is to root my stories in history, build upon the earlier work in the tech space, and anchor them to other relevant areas of our everyday lives.

Everything I do as the API Evangelist is built on the hard work of men and women who came before me.

From the 1940s...

The 1950s...

The 1960s...

The 1970s...

The 1980s...

The 1990s...

The 2000s...

Getting us to the current decade, where I saw the potential of delivering the compute resources we needed for the mobile devices, that were quickly becoming ubiquitous in our daily personal as well as business lives. Designing, deploying, and managing APIs in support of mobile was the catalyst for my research as the API Evangelist in the summer of 2010.

As the API Evangelist, all that I do is map out what I am seeing across the API space--what API pioneers like Amazon and Salesforce are doing, as well as how recent API darlings like Twilio and SendGrid are executing on their API operations. I take this awareness, and map it against the services that leading API service providers are offering, in hopes of establishing a more coherent view of the overall API space.

I conduct my research in hopes of helping API providers better understand how to establish a more successful strategy, but with a focus on being able to articulate this not just internally, but with developers, partners, and potentially the public at large. APIs are important. They are increasingly the pipes that are driving our personal, and professional worlds, and it is important that we do them as well as possible, so that they benefit everyone involved.

In 2015, I feel like I am the Henry Beck of the API space. If you aren't familiar with his work, he was one of the original minds behind a different way of thinking about mapping, designed for public transportation, that is still used today to describe major transit operations in cities around the globe.

In 1933, Henry Beck created a map of the London Underground, that was decoupled from earlier ways of mapping and communicating around transit resources, in a way that was focused on how the resources would be experienced by end-users, evolving beyond a focus on the transit resources themselves, and their location in our physical world.

Earlier approaches to mapping out the subway reflected traditional mapping techniques, and were bound to legacy physical elements like roads, rivers, mountains, and buildings. While maps like this one from 1889, were technically accurate, they didn't always convey an increasingly complex subway system to end-users as they experienced it.

Even into the new century, subway maps were plotted along roads and rivers, leaving them coupled to legacy elements and ways of thought, ignoring the fact that end-users of the subway were not thinking in these terms as they put the transit system to use. Riders just wanted to know how to get where they were going, and not be burdened with legacy elements they often do not even see.

By 1915, public transit engineers like Henry Beck were rethinking how they mapped the transit infrastructure, in a way that helped them better communicate their increasingly complex infrastructure internally, but most importantly, in a way that helped them communicate externally to the public.

In 2015, this is what I feel like I am doing. Mapping out all the stops along the increasingly complex API life-cycle, in a way that helps API providers better understand their own life-cycle, but also like the subway map infrastructure, in a way that helps them communicate to consumers, and end-users.

At this point in my research, I am asking myself how I take the 26+ common areas of an API life-cycle and create standardized lines that could be communicated in a common way, using the subway map analogy. As I approach the end of 2015, I have 809 building blocks, in 149 categories, across these 26 common areas--I need a more universal approach to plotting these out, and the subway map is providing me with one possible way of doing this.

  • Design
  • Hypermedia
  • Definition
  • DNS
  • Containers
  • Virtualization
  • Deployment
  • Management
  • Monitoring
  • Testing
  • Performance
  • Security
  • Terms of Service
  • Privacy
  • Licensing
  • Branding
  • Discovery
  • Client
  • IDE
  • SDK
  • Embeddable
  • Webhooks
  • Monetization
  • Plans
  • Partners
  • Evangelism

 

The problem I am facing is how to properly map out these lines, in a way that reflects the role they play in an overall life-cycle. This is the difficult place I find myself currently, but one I will be working to solve using the subway map tools I am developing. The beauty of this analogy is that I will be able to create multiple iterations over time, as well as fork and evolve maps for different platforms--similar to how multiple cities use the same subway map approach for their own unique infrastructure.


Even once I find a rhyme and rhythm to mapping out the lines, stops, stations, and flows for my version of the API life-cycle, how will I do the same for the actual APIs themselves? I will need each API to follow a version of the API lifecycle, but also potentially possess its own, entirely separate journey, unique to a single API or potentially a stack of APIs.


Eventually I envision a world where I don't just have my entire API life-cycle mapped out, as well as the journey involved with each of my APIs and the API collections I define--I want a real-time visualization of my entire infrastructure. I want to see where each API is on the API life-cycle, as well as where each of my API consumers, and even API users, are on their own journey.

 

This might seem like a pretty tall order, but when you take a closer look at things, you realize that over 75% of the elements present in my API life-cycle definition also have an API. My API management with 3Scale has an API, and my API monitoring with API Science has an API. I'm increasingly choosing API service providers that have an API, because I need them to be a real-time contributor to my overall API lifecycle.


With my API infrastructure being API driven, I can now think of my API life-cycle subway map in API terms. I can dynamically plot out each API I operate as a journey, depicted as its own subway line, and also make sure each API interacts with its respective API life-cycle journey--all using APIs. *mind blown*


But wait there is more! It gets even better--it just so happens that I also have an API for the subway map. It is just an alpha version, but using this API, I can dynamically plot out the map of each area of my API life-cycle, as well as for each individual API, and its own journey on the life-cycle.

I'm not sure where all of this API life-cycle mapping will go. I'm hoping it will help me think through the API life-cycle, in the context of my overall API industry research, as well as my own APIs that I operate. I am hoping the process will help me decouple the way I approach my APIs from some of my legacy ways of thinking, built up over the last 25+ years as a software architect.

I am hoping I will start to see my APIs in a new light, breaking away from thinking about them in a linear way, through the lens of the applications I build on top of them, as well as the resources they are developed from. I want my APIs to be decoupled from much of their legacy constraints, and reflect more how they will be consumed by developers, and experienced by end-users.

I am not under any illusions that this process will be easy, and it will probably take a great deal of time, and iterations, much like it did for early subway architects, when it came to mapping out early transit infrastructure.

Until finally, I'm hoping I will find a way to describe my own API life-cycle, and APIs, in a universal way, that could work for other API providers, and even API service providers. It took over 15 years of iterating on early London Underground maps, before they finally landed on this design in 1933.

Beck developed an approach to mapping transit infrastructure that would also work for defining complex transit infrastructure over almost a century later. The approach he helped define, is still applied as part of how we map out, as well as communicate transit infrastructure, around the globe in 2015. I hope I can apply some of the same principles to begin defining the virtual API infrastructure we are increasingly depending on in 2015.

While Ian Bogost was tweeting in jest about using a subway map for all written and visual communications, the tweet was very timely and relevant for me. It came across my Twitter timeline, just as I was crafting my subway map keynotes for Defrag and APIStrat. While I do not favor the subway approach for "all written and visual communications", I do think it has potential for "discursive synthesis" of the potentially complex API infrastructure that we are building in 2015.

Through my work, I am hoping to establish a sort of subway map line "toolbox" for communicating around the API life-cycle that I have spent the last five years defining. I am looking for this toolbox to give me a way of communicating  and collaborating with others in a way that reflects common API industry approaches, while also being able to customize for specific platform needs.

In addition to having a toolbox that reflects a modern API life-cycle, I am hoping the subway map approach gives me a set of tools I can use to visualize the life-cycle of every individual API within my control. I desperately need a common way to visualize my APIs, as well as better understand where they exist within my overall API life-cycle.

Having a common way to describe and communicate around transit infrastructure is critical to society operating, and the global economy. I feel pretty strongly that having a common way to describe and communicate around the API infrastructure we are increasingly depending on is just as critical to society operating, and the global (API) economy.

We need an approach that reflects how companies around the globe are already operating their APIs, acknowledging the value of existing successful patterns of interoperability, established by the Amazons and Twitters of the world, but also allows for the flexibility, customizability, and unique needs of individual companies, and industries.

The standardization, as well as uniqueness, of what I'm talking about can be seen in the beautiful array of subway maps for major cities around the globe. These amazing maps meet the transit needs of each city, while also embracing the principles laid out by early transit planners like Beck.

My storytelling at events like Defrag and APIStrat, as well as on my blog, always reflects where I am at with my research. Next I will be crafting more meaningful version of subway maps for each area of the API life-cycle, as well as an overall view showing all 809 building blocks, in 149 categories, across the 26 areas I track on as part of my research.

I am anticipating that I will hit some areas where the subway map analogy fails me. I am already thinking about some ways of augmenting API specific elements on the approach, to help me better communicate specifically about API infrastructure. An example of where I'm bending this, is the concept that an individual API can have its own map, and line, as well as be a passenger on other lines--I just need to come up with a sort of wormhole station connector element that helps communicate this relationship and / or transition. 

As always, you can find my research on the home page of API Evangelist. As I complete subway maps for each of the 26 areas, as well as the overall map, I will make sure they are available as part of the navigation between each area of my research. I am hoping to also use some JavaScript magic, in conjunction with the subway map analogy, to make navigating my research more of a journey, than just clicking around my site.

See The Full Blog Post


Keeping A Window Open Into How Power Flows Within Algorithms Using APIs

I just read The Pill versus the Bomb: What Digital Technologists Need to Know About Power, by Tom Steinberg (@steiny), and I'm reminded of the important role APIs will (hopefully) continue to play in helping provide a transparent window into some of the power structures being coded into the algorithms we are increasingly relying on in this digital world we are crafting.

In this century, we are seeing a huge shift in how power flows, and despite the rhetoric of some of the Silicon Valley believers, this power isn't always being democratized along the way. Many of the older power structures are just being re-inscribed into the algorithms that drive network switches, decide pricing when we purchase online, run our online banking, and virtually every other aspect of our personal and business worlds.

APIs give us a window into how these algorithms work, providing access to 3rd party developers, government regulators, journalists, and many other essential actors across our society and economy. Don't get me wrong, APIs are no magic pill, or nuclear bomb, when it comes to making algorithmic power flows more transparent and equitable, but when they are done right, they can have a significant effect.

If APIs are a complete (or near complete) representation of the algorithms that are driving platforms, they can be used to better understand how decisions behind the algorithmic curtain are made, and exactly how power is flowing (or not) on web, mobile, and increasingly connected device platforms--APIs do not equal perfect transparency, but they will help prevent all algorithms from being black boxes.

We may not fully understand Uber's business motivations, but through their API we can test our assumptions. We may not always trust Facebook's advertising algorithm, but using the API we can develop models for better understanding why they serve the ads they do. Drone operators may not always have the best intentions, but through mandatory device APIs, we can log flight times and locations. These are just a handful of examples of how APIs can be used to map out digital power.

All of this is one of the main reasons that I do API Evangelist. I feel like we have a narrow window of opportunity to help ensure APIs act as this essential transparent layer for ALL API operations across industries. As the established power structures (the eye of Sauron) turn their attention to the web, and increasingly APIs, these powers of transparency are being diminished. It is up to us API evangelists to help make sure APIs stay publicly available to 3rd party developers, government, journalists, end users, and other key players--providing much needed transparency into how algorithms work, and how power is flowing on the web and mobile Internet.

See The Full Blog Post


Providing APIs.json As A Discovery Media Type For Every One Of My API Endpoints

It can be easy to stumble across the base URL for one of my APIs out on the open Internet. I design my APIs to be easily distributed, shared, and as accessible as possible--based upon what I feel the needs for the resource might be. You can find most of my APIs as part of my master stack, but there are other APIs, like my screen capture API or my image manipulation API, that are often orphaned--and I know some people could use help identifying more of the resources that are behind my API operations.

To help support discovery across my network of APIs, I'm going to be supporting requests for Content-Type: application/apis+json for each endpoint, as well as an apis.json file in the root of the API, and its supporting portal. An example of this in action can be seen with my blog API, where you can look in the root of the portal for the API (kin-lane.github.io/blog/apis.json), in the root of the base URL for the API (blog.api.kinlane.com/apis.json), and for each individual endpoint, like the (blog.api.kinlane.com/blog/) endpoint, where you can request the Content-Type: application/apis+json, and get a view of the APIs.json discovery file.
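Here is a minimal sketch of what that request could look like from a client. Whether the server negotiates on the Accept header or something else is an implementation detail of my setup, so treat the header handling here as an assumption:

```python
# A minimal sketch of asking an endpoint for its APIs.json discovery document,
# assuming the server negotiates on the Accept header for application/apis+json.
import requests

url = "http://blog.api.kinlane.com/blog/"  # the endpoint from the example above
resp = requests.get(url, headers={"Accept": "application/apis+json"})
discovery = resp.json()  # the APIs.json index for this API
print(discovery.get("name"), [api.get("name") for api in discovery.get("apis", [])])
```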

It will take me a while to get this rolled out across all of my APIs, but I have worked out the details on my blog, and API APIs. Providing discovery at the portal, API, and endpoint level just works. It provides not just access to documentation, but the other critical aspects of API operations, in a machine readable way, wherever you need it. It is nice to be on the road to having APIs.json exist as the media type (application/apis+json), something that isn't formal yet, but we are getting much closer with the latest release, and planned releases.

Next, I will push this out across all my APIs, and do another story to capture what things look like at that point. Hopefully it is something I can encourage others to do eventually, making API discovery a little more ubiquitous across API operations.

See The Full Blog Post


I Am Thankful For Another Amazing APIStrat

I am back home in Los Angeles, after another great edition of API Strategy & Practice--this time in Austin, TX. I have had a few days to decompress, and took a day to reboot my brain by crafting 235K+ API definitions for the English language, resulting in some time to reflect on what happened last week in Austin.

First of all, I want to thank 3Scale. Without the API infrastructure provider, APIStrat would not happen. Second, I want to thank all the sponsors who get involved--without your support the conversation wouldn't happen. Third, I want to thank the speakers and attendees for making it such a meaningful conversation.

I tend to use that word, "conversation" a lot when describing APIStrat, but I feel pretty strongly that it is the conversation that occurs at APIStrat that helps move the entire API community in a very meaningful way. The conference always renews my energy, and strengthens my relationships with other important folks in the space that I rely on for my research and storytelling.

Thank you so much to everyone who came to Austin last week and participated, and special thanks to Steve and the 3Scale team for investing so much into the API community, while asking for so little in return.

See The Full Blog Post


Sharing 235K API Definitions With The English Language API Recipe Book

I needed a side project to reboot my mind after @APIStrat this last weekend, so I opened up my notebook and picked a project that I've been meaning to give some attention to, one that would help me clean my slate, and let me get back to my regular work levels. The project I picked is one that I came up with a little over a year ago, but had recently fleshed out my vision further, as I hung out at my favorite watering hole drinking an IPA.

It took me several iterations before I landed on a name for this project, but my working title is the English Language API Recipe Book. I find myself in an awkward position these days when it comes to the concept of API copyright, which is something I have taken a firm stance on with my work around the Oracle v Google Java API copyright case, and the release of the API licensing format API Commons, but it is something, in the end, I just do not believe in.

You see, in my opinion, API definitions should NOT fall under copyright. Like recipes and menus, API definitions should be open for anyone to use. To help me make my point, I wanted to craft the English Language API Recipe Book, publishing an open API definition for almost every word in the English dictionary. I found a reasonably complete list of every English word, and auto-generated an Open API Definition Format (OADF) specification for each of the 235K+ words.

For each API definition, I cover the base GET, POST, PUT, and DELETE verbs for each word, providing a basic query via a parameter, and returning a name and description as the basic underlying data model. I am already playing with other variations of database models, and have also generated another dimension for each word, by again iterating through each word, and adding it as a secondary level resource. I am also playing with other relationships, and ideas for expanding the dimensions of this recipe book, but wanted to get this first version out the door.
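For a sense of how the generation works, here is a simplified sketch of producing a definition for a single word--the structure loosely follows a Swagger 2.0 / OADF document, but the field choices are illustrative, not the exact files in the recipe book:

```python
# A simplified sketch of generating a per-word API definition: the four base
# verbs, a basic query parameter, and a name/description model. Field choices
# here are illustrative, not the exact recipe book files.
import json

def word_definition(word):
    model = {"type": "object", "properties": {
        "name": {"type": "string"}, "description": {"type": "string"}}}
    query = {"name": "q", "in": "query", "type": "string",
             "description": f"Search {word} entries"}
    ok = {"200": {"description": "successful operation",
                  "schema": {"$ref": f"#/definitions/{word}"}}}
    return {
        "swagger": "2.0",
        "info": {"title": f"{word} API", "version": "1.0"},
        "basePath": "/",
        "paths": {f"/{word}/": {
            "get": {"parameters": [query], "responses": ok},
            "post": {"responses": ok},
            "put": {"responses": ok},
            "delete": {"responses": ok},
        }},
        "definitions": {word: model},
    }

with open("aardvark.json", "w") as f:
    json.dump(word_definition("aardvark"), f, indent=2)
```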

Overall, I just want to show how easy it is to programmatically generate API definitions, and add this English Language API Recipe Book to my already growing number of API definitions, from popular APIs that I include in the API Stack. Through this work, I wanted to emphasize, that no matter how much work you put into the naming, ordering, and design of your API definitions, they are not creative works that you should lock up and defend--your API definitions should be open, easily accessible, shared, and designed for reuse.

While I do not think any of the 235K+ API definitions should have copyright applied, I will be putting all of these into the public domain, using a Creative Commons license, as act two of this production. This is more theater than anything, but using API Commons, I will make sure every word in the English dictionary, crafted as a basic web API, is available for anyone to use, anytime, anywhere (as it should be, DUH). I recently stated in a keynote, after launching API Commons, that I was going to be the "Johnny Fucking Appleseed" of publishing openly licensed API definitions, out in front of slower moving corporations like Oracle--the English Language API Recipe Book is just the beginning of this.

Next up, I will be crafting a series of OADF API definitions for Schema.org, and using APIs.json to bundle them into a more meaningful collection. I will be using these data models to further automate the English Language API Recipe Book, and establish additional dimensions to this collection. You can find the English Language API Recipe Book on Github. I have published it as a Github organization, with a separate sharded repository for each letter of the alphabet, each containing a separate OADF definition for each word in the dictionary, with APIs.json used to index each letter, as well as the overall recipe book collection--making it all machine readable by default.

See The Full Blog Post


The APIStrat Austin Schedule Has Reached That Level Of Amazing For Me Again

This is the 6th edition of API Strategy & Practice, happening in Austin, TX next week. As one of the organizers, I can say that pulling together the perfect lineup of speakers and topics is always a daunting challenge, but then at some point before the event happens, the schedule always seems to take on a life of its own.

The APIStrat Austin schedule has reached that point again. We have enough killer speakers and companies present that it has attracted other killer speakers and companies, resulting in a mind-blowing 3 days of workshops, keynotes, panels, and sessions--if you haven't taken a look at the schedule lately, take a few moments.

I was going through it, looking for problems, missing photos, etc., and the scope of the people and companies present just struck me with how amazing it has become, and I had to share. If you aren't registered, make sure and do so, and if there is someone you think should be in attendance, feel free to ping me directly--you won't want to miss it.

See all y'all in Austin next week!

See The Full Blog Post