The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Github Is Quickly Becoming My Most Important Discovery Source For The API Space

I have monitored the Github accounts and organizations of individuals and companies doing interesting things with APIs for some time now. However, recently this channel is increasingly becoming the way that I discover truly interesting companies, individuals, specifications, tools, and even services. The most interesting people and companies doing things with APIs usually understand the importance of being transparent and aren't afraid of publishing their work on Github.

Developers are often very poor at blogging, tweeting, and sharing their work, but because Github allows me to follow their work, and provides additional ways to surface things using Github trending, I'm able to find things often before they show up on other common channels like Twitter, LinkedIn, etc--if they show up at all. You can subscribe to the changes for a Github user or organization using RSS, or you can do like I do, and use the API to dial in what you are following, and identify some pretty interesting relationships and patterns.
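To give a rough sense of what I mean, here is a minimal sketch in Python, using the requests library and Github's public events API--a similar endpoint exists for organizations at /orgs/{org}/events:

```python
import requests

def recent_activity(username):
    """Pull the latest public events for a Github user."""
    url = "https://api.github.com/users/{0}/events/public".format(username)
    response = requests.get(url)
    response.raise_for_status()
    for event in response.json():
        # Each event carries a type (PushEvent, CreateEvent, etc.) and the
        # repository it happened in--enough to spot new projects early.
        print(event["created_at"], event["type"], event["repo"]["name"])

recent_activity("kinlane")
```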

The interesting things I'm discovering aren't always directly code related either. With the increased usage of Github for publishing API portals, documentation, and other resources, I am increasingly finding valuable security guides, white papers, presentations, and much more. All of this makes Github an important place to discover what is going on, while also helping ensure what you are working on around your API is being discovered. I'm thinking it is time for a refresh of my Github guide for API management, which I published a couple years back, providing a fresh look at how successful API providers are using Github.


Beyond Just API Discovery: The Technical, Business & Political Decisions Needed At Runtime

I was included in a conversation the other day on Twitter about runtime API discovery which reminded me of some thoughts I was processing before I walked away from work this summer, and before I dive back into the technical work, I wanted to refresh these thoughts and bring them to the surface. Blogging on API Evangelist, and the other channels on which I publish my work, is how I work through these ideas out in the open, something that saves me expensive time and research bandwidth when I'm down in the trenches doing the coding and API definition work.

The Wider Considerations Of What Is API Discovery
Like APIs themselves, the concept of API discovery means a lot of different things to different people. I find that broadly it means actually finding an API (ie. searching on Google or ProgrammableWeb), but once you talk to a more technical API crowd, it often means the programmatic discovery of APIs. Ideally, this is something that is done using hypermedia-supported discovery, but it can also be achieved by applying a standard like JSON Home, or APIs.json. There are also many folks who are thinking about programmatic API discovery using OpenAPI Spec, API Blueprint, and other common API specification formats.
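As a concrete example of the APIs.json side of this, here is a rough sketch of what programmatic discovery can look like, assuming a domain publishes an index following the draft APIs.json format:

```python
import requests

def discover_apis(domain):
    """Fetch a domain's apis.json index and list the APIs it exposes."""
    index = requests.get("http://{0}/apis.json".format(domain)).json()
    for api in index.get("apis", []):
        print(api.get("name"), "-", api.get("humanURL"))
        for prop in api.get("properties", []):
            # Properties point at machine-readable definitions, like an
            # OpenAPI Spec or API Blueprint, plus docs and other resources.
            print("   ", prop.get("type"), prop.get("url"))

discover_apis("apievangelist.com")
```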

Some Thoughts On API Discovery At Runtime Today
The conversation I was pulled into was between some of the leading minds in the area of not just defining what APIs are, but also how we truly can scale, and conduct API discovery, consumption, and evolution of our resources in a logical way. This discussion is pushing forward how our web, mobile, and other systems can discover, put to work, and roll with the changes that occur around critical API resources. How a human finds a single API for their use is one thing, but how a system and application finds a single API and puts it to work at runtime is a whole other conversation.

The Hard Work To Define Runtime Discovery of APIs
Separating out the human and programmatic discussions around what is involved with the runtime discovery of APIs is just the first line of challenges we face. The second layer of challenges is often about cutting through dogma and ideology around specific approaches to defining an API. The third layer, I'd say, is the just plain hard work of separating out the numerous differences between APIs, each often possessing its own nuances and differing approaches to authentication. As with every other aspect of APIs, the challenges are both technical and human-centered, which tempers expectations around the progress we make, but I trust the community will ultimately execute on this properly.

The Even Harder Work To Define Runtime Discovery Of Many APIs
While I'm actively participating in the current discussions around runtime API discovery using both hypermedia, as well as other approaches, I can't help but keep an eye out for the future of how we are going to do the same thing across many APIs--this is what I do as the API Evangelist. We have a lot of work ahead of us to make each individual API discoverable at runtime, but we also have a significant amount of work to harmonize this at web scale across ALL APIs--which is why so many hypermedia evangelists are so passionate about their work.

The Technical Considerations Of API Discovery At Runtime
98% of the discussions around API discovery at runtime focus on the technical--as they should at this phase. Hypermedia design constraints, leading API definition specifications like OpenAPI Spec and API Blueprint, and API discovery formats like JSON Home and APIs.json are providing us with vehicles for moving this technical discussion forward. Ideally, our APIs should reflect the web, and when you land on the "home page" of an API, you should be presented with a wealth of links reflecting what the possibilities are (does your API have a navigation?). Secondarily, if hypermedia is not desired or feasible, JSON Home and APIs.json should be considered, providing a machine-readable index of what APIs are available within any domain, as well as additional details on what is possible using OpenAPI Spec and API Blueprint.
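To make the hypermedia version of this concrete, here is a small sketch of a client landing on an API's "home page"--it assumes a HAL-style response, where the available links show up under a _links collection:

```python
import requests

def list_possibilities(api_root):
    """Land on an API's "home page" and print the links it advertises."""
    response = requests.get(api_root,
                            headers={"Accept": "application/hal+json"})
    for rel, link in response.json().get("_links", {}).items():
        print(rel, "->", link.get("href"))

# The client doesn't hardcode paths--it navigates whatever the API presents.
list_possibilities("https://api.example.com/")
```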

The Business Considerations of API Discovery At Runtime
As technologists, we often fail when it comes to considering the business implications of our solutions, ranging from making sure we make money to keep them operational, all the way to the industry-wide influences we should be aware of. I see many discussions amongst API specialists fall short in this area, which is why I started API Evangelist in the first place, and why I'm pushing these thoughts forward and sharing them with the public, even before they are fully baked.

At runtime, the technical considerations of where an API is, how to authenticate, and what parameters and other details are required all need to be clear. However, when you elevate this process to operate across many APIs, important business criteria also come into play--things like what plans are available, what API resources cost, and whether there are volume options available. The example I like to use in this scenario is from the world of SMS, and making runtime business decisions across nine separate SMS APIs.

At runtime, I may have different business concerns with each execution, even after I know where the APIs exist. For some SMS blasts I may want to use the cheapest provider, while in other campaigns I may choose a higher-priced, more trusted provider. These considerations are difficult for a human to make in 2016, let alone in a programmatic way at runtime--something I've spent some cycles developing schemas and tools to help me sort through. I have been able to establish patterns across some of the more mature API areas like SMS, email, search, and compute, but we are going to have to wait for other areas to evolve before this is even feasible.
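None of this plan metadata is published in a consistent, machine-readable way today, so everything in the sketch below--the providers, costs, and trust scores--is hypothetical, but it shows the shape of the business decision I want to be able to make programmatically at runtime:

```python
# Hypothetical, machine-readable plan metadata for SMS providers.
providers = [
    {"name": "provider-a", "cost_per_sms": 0.0075, "trust_score": 0.72},
    {"name": "provider-b", "cost_per_sms": 0.0100, "trust_score": 0.95},
    {"name": "provider-c", "cost_per_sms": 0.0090, "trust_score": 0.88},
]

def choose_provider(priority="cheapest", min_trust=0.0):
    """Make the business decision at runtime, on a per-campaign basis."""
    eligible = [p for p in providers if p["trust_score"] >= min_trust]
    if priority == "cheapest":
        return min(eligible, key=lambda p: p["cost_per_sms"])
    # Otherwise favor the most trusted provider, regardless of price.
    return max(eligible, key=lambda p: p["trust_score"])

print(choose_provider("cheapest"))                # a bulk SMS blast
print(choose_provider("trusted", min_trust=0.9))  # a sensitive campaign
```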

There is a reason why I call my research in this area API plans and not simply API pricing. I feel this label reflects the future of business decisions we will have to make at runtime, which won't always be simply about pricing, and will hopefully reflect our overall business plans--executed in real time, at runtime, in milliseconds. Sadly the old ways of doing business by the enterprise continue to cast a shadow on this area, with companies hiding their pricing pages behind firewalls, and not sharing the algorithms behind pricing decisions, let alone looking outward and following common industry patterns--beliefs around intellectual property and what is secret sauce will continue to hinder this all moving forward.

The Political Considerations of API Discovery At Runtime
Another area I have found myself paying attention to as the API Evangelist, beyond just the technology and business of APIs, is what I call the politics of APIs. Alongside the technical and business considerations, these often politically charged areas will have to be considered at runtime. Which API has the terms of service and privacy policies that reflect my company's strategy? Which API is the most reliable and stable? Can I get support if something fails? Is the long-term strategy of an API in alignment with our long-term strategy, or will they be gone within months due to funding and investment decisions (or the lack of them)? There are many political considerations that will have to be made at the programmatic level and included in runtime discovery and decision making around API integration(s).

Similar to the business considerations, I have also invested some cycles into understanding the variability some providers are applying when it comes to the politics of APIs, like variability in terms of service and pricing, and how pricing, plan availability, stability, and other ranking criteria can be made more machine readable and applied at runtime. As with the business concerns around API integration, there are many obstacles present when we are trying to make sense of the political impact at runtime. As more API providers emerge who are not resistant to sharing their API plans, I am able to document the variables at play in these algorithms and share them with the wider industry, but alas, many companies are holding these elements too close to their chest for the conversation to move forward in a healthy manner.

It is easy to think about the political runtime decisions that need to be made around APIs as purely being about terms of service, but there are much grander considerations emerging, like which country and region we deploy into, and regulatory considerations that will have to be followed when putting API resources to work, or possibly injected at runtime like we are seeing within the drone space. Just as terms of service are guiding almost everything we do online today, the politics of APIs will govern the runtime decisions that are made in the future.

Beyond Discovery And Considering The Technical, Business And Political Decisions Needed At Runtime
This is just a glimpse at the long road we have ahead of us when it comes to truly reaching the API economy we all like to talk about in the sector. Unfortunately, there are many obstacles in the way of us getting to this possible future. We have to increase our investment in hypermedia and web-centric API solutions, and not just vendor-driven API solutions, if we are going to move down this road. We have to be more transparent about our API plans, pricing, and the variables that go into the human and algorithmic business decisions that are driving our API platforms. We also have to start having honest discussions about the terms of service, privacy policies, service level agreements, and regulation that are increasingly defining the API space.

I am optimistic that we can move forward on all of this, but current beliefs around what is intellectual property, something that is fueled by venture capital, and further set back by legal struggles like the Oracle v Google API copyright case, are seriously hurting us. The definition of your API is not IP or secret sauce. Your pricing and plan variables are not your secret sauce either, and should not be hidden behind the firewall in the Internet age--regardless of your enterprise sales beliefs. The only way we are going to continue meaningful automation of the growing number of resources being made available via APIs using Internet technology is to share vital metadata out in the open, so we can make sure we are all making proper, consistent decisions at runtime--not just technically, but also the right business and political decisions that will make the API economy go round.


APIs Are Not Just Meant For Killer Apps, They Can Also Be A Lifeline For Users

In the Silicon Valley rat race, users often become collateral damage amidst the entrepreneurial quest to get rich building the next killer startup. I've heard many startups like Snapchat and Pinterest state that the reason they don't want to do APIs is they don't want developers building unwanted applications on their services, something that stems from a mix of not understanding modern approaches to API management, and not really thinking about their end-users' needs (both these companies now have APIs, but for different reasons).

I am sure that these platforms are often more concerned with locking in their userbase than allowing them to migrate their data, content, and other media off the platform for their own interests and protection. As companies race forward towards their exits, or in many cases their implosions, users often lose everything they have published on a platform, many times even if they've been paying for the service.

An API is not always meant just for developers to build the next killer website or mobile application integration that benefits themselves and the platform. Sometimes these applications are focused on providing data portability, syncing, and important backup solutions for users--allowing them to minimize the damage in their personal and professional worlds when things go wrong with startups. While data portability and data dumps can alleviate some of this, often what they produce is unusable, and an API allows for more usable, real-world possibilities.

As an API provider, you do not have to approve every developer and application that requests access. If an application is in direct competition, or does not benefit your platform and its users--you can say no. I encourage ALL platforms to have a public presence for their APIs (you know you have them) and incentivize developers to build data portability, syncing, and backup solutions for users. APIs are not just for encouraging developers to build the next killer startup; sometimes they will just help protect your users when things go wrong with your startup vision--make sure to think beyond just your desires and remember that there are people who depend on your service.


Add A Prominent Icon Link To Your API Definition On Your Documentation Page

In an effort to help folks understand the many layers of just exactly what an API is, and how people are using them, I'm going to emphasize (again) the importance of sharing your API definition publicly. I'm not going to talk about why you should have an API definition for your API; if you need a reason, go look at the growing number of ways that API definitions are driving a modern API life cycle--this post is about making sure you are sharing it properly once you have one crafted.

I'm increasingly stumbling across OpenAPI Spec-driven Swagger UI documentation for APIs, for which I then have to fire up my Chrome developer tools to reverse engineer the path of the OpenAPI Spec--this is dumb. If you have an API definition available for your API, make sure it is available in a prominent location within your API portal, preferably using an easy-to-find icon and supporting link.
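When the link isn't prominent, consumers end up playing a guessing game like the sketch below--the candidate paths here are just conventions I regularly run into, not any kind of standard:

```python
import requests

# Common locations where providers publish their API definitions.
CANDIDATE_PATHS = ["/swagger.json", "/openapi.json", "/api-docs", "/apis.json"]

def find_api_definition(base_url):
    for path in CANDIDATE_PATHS:
        response = requests.get(base_url.rstrip("/") + path)
        content_type = response.headers.get("Content-Type", "")
        if response.status_code == 200 and "json" in content_type:
            return response.url
    return None

print(find_api_definition("https://api.example.com"))
```

A prominent icon and link on your documentation page makes all of this unnecessary.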

Your API definition isn't just driving your API documentation. API definitions are being used by API discovery search engines like APIs.io, to get up and running in API clients like Postman, and to help me monitor, test, and troubleshoot my API integrations using Runscope. Please stop hiding them! I know many of you think this is some secret sauce, but it isn't. You should be proudly sharing your definitions, and making them available to your consumers with one click, so they can more quickly integrate, as well as successfully manage their ongoing integration.


Using Anchors In Your FAQ And Other API Support Pages

I was going through some of the Twitter feeds of the APIs that I track on and noticed Spotify's team providing support to some of their API users with quick links / anchors to the answers in their API user guide available at developer.spotify.com. This might sound trivial, but having an arsenal of these links, so you can tweet them out like Spotify does, can be a real time saver.

This is pretty easy to do with a well-planned API portal and developer resources, but it is also something you can rapidly add to and change using a frequently asked questions page for your API. The trick is to make sure you have anchors to the specific areas you are looking to reference when providing support for your community.
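Anchors are just ids in your page markup, and they are easy to generate in a predictable way--here is a rough sketch, with made-up FAQ questions, of producing stable anchors and the deep links to keep handy:

```python
import re

def slugify(question):
    """Turn an FAQ question into a stable, predictable anchor id."""
    return re.sub(r"[^a-z0-9]+", "-", question.lower()).strip("-")

faqs = ["What are the rate limits?", "How do I reset my API key?"]
for question in faqs:
    anchor = slugify(question)
    # Emit the anchored heading, plus the deep link to use in support tweets.
    print('<h3 id="{0}">{1}</h3>'.format(anchor, question))
    print("https://developer.example.com/faq#{0}".format(anchor))
```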

Another benefit of doing this, beyond just developer support, is in the name of marketing and evangelism. I'm often looking for specific concepts and topics to link to in my stories, and if an API doesn't have a dedicated page or an anchor for it, I won't link to it--I do not want my readers to have to dig for anything. The trick here is you need to think like your consumers, and not just wear your provider's hat all the time.

When crafting your API portal and supporting resources, make sure you provide anchors for the most requested resources and other information related to API operations, and keep the links handy so you can use them across all your support and marketing channels.


Is Your Sales Deal Size Just Too Big To Be Reading API Evangelist?

I am blessed to have people in the space who have supported what I do for the last six years. Companies like 3Scale, Restlet, WSO2, Cloud Elements, and others have consistently helped me make ends meet. Numerous individuals stepped up in May to help me make it through the summer--expecting nothing in return, except that I continue being the API Evangelist.

I do API Evangelist because I enjoy staying in tune with the fast-growing landscape of industries being touched by APIs. I believe in what is possible when individuals, companies, organizations, institutions, and government agencies embark on their API journey (aka digital transformation). I do not operate as the API Evangelist to sell you a product, service, or to get rich. Don't get me wrong, I do ok, but I definitely am not getting rich--all I got is the domain, Twitter account, and my stories.

The prioritization of sales and profits over what is really important in the space always blows my mind, but rarely ever surprises me. I find myself regularly worrying about the companies and individuals who focus on sales over actual transformation, but I have to admit my friend Holger Reinhardt's post about the motivations behind their Wicked (cool) open source API management solution made me chuckle. Their API management work was in response to a sales lead that "felt that our focus on ‘just enough API management’ was too narrow and not addressing the larger needs (and bigger deal) of the ‘Digital Transformation’ of the Haufe Group." << I LOVE IT!!!

I've been through hundreds of enterprise sales pitches, sitting on both sides of the table, and experiencing this bullshit song and dance over and over was one of the catalysts for leaving my work with SAP in 2010 and starting API Evangelist. I just wanted to tell honest, real stories about the impact technology could make--not scam someone into signing a two or three-year contract, or be duped by a vendor into doing the same. Granted, not all sales people are scammers, but if you are in the business, you know what I'm talking about.

All I can say is I am very glad I do not have to live in a sales deal-driven world, and I refuse to go back. To brag a little, I know that a significant portion of my readers are in the enterprise. People who work at IBM, SAP, Oracle, SalesForce, Microsoft, Capital One, and on, and on, read my blog, and I want you all to know: NONE of your deal sizes are too big, or too small, to be reading my blog--I give a shit about all of you. However, maybe you could let me know what your expected budget might be? ;-)


I Am Digging Stripe's New Interactive API Documentation Walkthrough

I am digging Stripe's new documentation release, and specifically their interactive API documentation walkthrough. The new "try now" section of their documentation provides an evolved look at what is possible when it comes to providing your API consumers the documentation they need to get up and running.

The new documentation doesn't just provide a code example of processing a credit card charge--it walks you through accepting a credit card, creating a new customer, charging the card, establishing a recurring plan, and establishing a recurring customer subscription.
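As a rough Python translation of the kind of curl steps the walkthrough covers--the test key below is a placeholder, and tok_visa is one of Stripe's documented test tokens:

```python
import requests

STRIPE = "https://api.stripe.com/v1"
auth = ("sk_test_YOUR_KEY", "")  # placeholder test secret key

# Create a customer from a card token produced by Stripe.js (or a test token).
customer = requests.post(STRIPE + "/customers", auth=auth,
                         data={"source": "tok_visa"}).json()

# Charge the customer's card for $20.00 (amounts are in cents).
charge = requests.post(STRIPE + "/charges", auth=auth,
                       data={"amount": 2000, "currency": "usd",
                             "customer": customer["id"]}).json()
print(charge["status"])
```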

The walkthrough is simple, informative, and helpful. It helps you understand the concepts at play when integrating with the Stripe API, in a language-agnostic way. I was super impressed with the ability to copy, paste, and run the curl commands at the command line, and when I came back to the browser--it had moved to the next step in the walkthrough.

The new Stripe API documentation walkthrough is the most sophisticated step forward in API documentation I've seen since Swagger UI. It isn't just documentation in an interactive way--it walks you through each step, bordering on what I'd consider API curriculum. All without needing an actual live token--I wasn't even logged in. Additionally, Stripe made sure they tweeted out the changes and included a slick GIF to demonstrate the new interactive capabilities of their documentation.


Going The Distance To Help API Consumers Find Their API Keys And Tokens

I am always amazed at how difficult it can be to obtain API keys, or fire up an initial set of OAuth tokens, when kicking the tires on a new API. I would also say that I am regularly impressed by the distance API providers will go to help API consumers obtain the keys they need to make a successful API call.

One example of this is present in the new Stripe API documentation. Their new code samples give you a slick little alert every time you see a demo key and mouse over it. The alert gives you a quick link to log in and obtain the keys you need to make an actual call.

While I like this approach, I also like the way Twitter does this, giving me a dropdown listing all of my applications, allowing me to choose from any of the current apps I have registered--maybe the two approaches could be merged?

Both are great examples of API providers going the extra distance to make sure you understand how to authenticate with an API, and get your API keys and OAuth tokens. If you know of other good examples of how API providers are working to make sure authentication is as frictionless as possible, making API keys and OAuth tokens more accessible directly within API docs--let me know.

This is an area I think interactive documentation has made significantly easier, but things seem to have stagnated since. It is definitely an area I'd like to see move forward, eventually providing cross-API provider solutions that developers can put to use.


Watching Out For Your API Keys & Tokens On The Open Internet

I was just learning about Auth0's new password breach detection service, adding to the numerous reasons why you'd use their authentication service instead of going at it on your own. It's an important concept I wanted to write about so that it gets added to my research, and is present in my thinking around API authentication and security going forward.

Keeping an eye out for the important identity and authentication related information used as part of my API consumption is a lot of work--it is something that I'd love to see more platforms assist me with. I've written about AWS communicating with me around my API keys, and I could see an API key and token management solution being built on top of their AWS Key Management Service. I've also received emails from Github about my OAuth tokens showing up in a public repo (happened once ;-( ).

Many application developers do not have the discipline to always manage API keys & tokens in a safe and secure way (guilty). It seems like something that could become a default for API providers--if you issue keys and tokens, then maybe you should be helping consumers keep an eye out for them on the open Internet << Which smells like an opportunity for some API-focused security startup.

Have you seen any other API providers provide key and token monitoring services? Is there anything that you do as an API consumer to keep an eye out for your own keys and tokens? Search for them on Github via the API? Manually search on Google? I am curious to learn more about what people are doing to manage their API keys and tokens.
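To partially answer my own question, here is a rough sketch of the Github approach, using their code search API (which requires an authenticated request) to look for a fragment of one of your keys in public code:

```python
import requests

def search_for_leaked_key(fragment, github_token):
    """Search public code on Github for a fragment of one of your keys."""
    response = requests.get(
        "https://api.github.com/search/code",
        params={"q": '"{0}" in:file'.format(fragment)},
        headers={"Authorization": "token {0}".format(github_token)},
    )
    results = response.json()
    for item in results.get("items", []):
        print(item["repository"]["full_name"], item["path"])
    return results.get("total_count", 0)

# Never search for the full secret--use a distinctive fragment instead.
print(search_for_leaked_key("AKIA_EXAMPLE_FRAGMENT", "my-github-token"))
```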


Providing A Dedicated Mobile SDK Page For Your API

Every API provider will have slightly different needs, but there are definitely some common patterns which providers should be considering as they are kicking off their API presence, or looking to expand an existing platform. While there are some dissenting opinions on this subject, many API providers offer a range of language-specific, mobile, and platform SDKs for their developers to put to use when integrating with their platforms.

A common approach I see from API providers when it comes to managing their SDKs is to break out their mobile SDKs into their own section, of which the communications API platform Bandwidth has a good example. Bandwidth provides iOS and Android SDKs, along with a mobile SDK quick start guide to help developers get up and going. This approach gives their mobile developers a dedicated page to get at available SDKs, as well as other mobile-focused resources that will make integration as frictionless as possible.

Unless you're anti-SDK, you should at least have a dedicated page for all your available SDKs. I would also consider putting all of them on Github, where you will gain the network effect brought by managing your SDKs on the social coding platform. Then, when it makes sense, also consider breaking out a dedicated mobile SDK page like Bandwidth--I will also work on a roundup of other providers who have similar pages, to help understand the wide variety of approaches when it comes to mobile SDK management.


More Considerations When Providing An Anonymous App For Your API Service

I wrote a post the other day about Postman.io having a limited, anonymous version of their API modeling tool. I stumbled across it while I was trying to upgrade my Stoplight.io account. Shortly after I tweeted out the blog post, John Sheehan (@johnsheehan) from Runscope chimed in with some wisdom on the subject.

Definitely something to consider. In the current online environment, it might become quite a pain in the ass to maintain an anonymous app, as John points out. This is one reason I work to publish my API tooling as standalone JavaScript applications which run 100% on Github. First off, they run on Github infrastructure, and use Github's bandwidth. Second, this type of app is forkable, and people can choose to run them wherever they desire--on Github, or any other site they wish.

I'll keep an eye out for other anonymous apps built on top of API service providers, or individual APIs--maybe there are other successful models out there, or maybe there are also some other cautionary tales we should hear.


Managing The Apps Across All My API Accounts

I am going through all of my online accounts changing passwords, and one of the things I do along the way is check which applications have access to my digital self. Increasingly my accounts have two dimensions of applications: 1) apps I have created to allow me to make API calls for my own system(s), and 2) apps I have given access to my account using OAuth. This is a process that can take quite a bit of time, something that is only going to grow in coming years.

The quickest example of this in the wild is Twitter. I have authorized 3rd party applications to access my account, and I have also developed my own applications, which have various types of access to my profile--this is how I automate my tweets, profiling of the space, etc. I'm regularly deleting apps in both of these dimensions, which tend to accumulate as I test new services and build prototypes.

I really wish the platforms I depend on would allow me to manage my internal and 3rd party applications via an API. If I could aggregate applications across all the accounts I depend on, manage the details of these applications (including keys & tokens), and add and remove them as needed--that would be awesome! If nothing else, maybe this will put the bug in your ear to consider this for your own world, and you can help put the pressure on existing API providers to open up OAuth and app management APIs for us, to help automate our operations.
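If such APIs existed, the aggregation could be as simple as the sketch below--to be clear, the endpoints here are entirely imagined, which is exactly the point of this post:

```python
import requests

# Entirely imagined app management endpoints--no major platform offers
# these today, which is exactly the problem.
ACCOUNTS = {
    "twitter": "https://api.twitter.com/apps",
    "github": "https://api.github.com/user/apps",
}

def audit_applications(tokens):
    """Aggregate internal and 3rd party apps across all of my accounts."""
    for platform, url in ACCOUNTS.items():
        headers = {"Authorization": "Bearer " + tokens[platform]}
        for app in requests.get(url, headers=headers).json():
            print(platform, app["name"], app["scopes"])
```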


Adding An Atom Feed For The API Evangelist Blog

The API Evangelist platform is far from perfect; there are always portions of it that just aren't finished yet (always a work in progress). I am always thankful that people put up with my API Evangelist workbench, which is always changing and evolving. Even with this unfinished status, there are some unfinished or broken elements that are just unacceptable--one of these is the lack of an Atom feed for my blog.

Thankfully I have other folks in the space who are kind enough to remind me of what's broken when it comes to specifications, and ultimately what is broken on my website.

Thanks Erik for gently pushing back. In response, I went ahead and added an Atom feed for the API Evangelist blog, to complement the existing RSS feed. I made sure the Atom feed validated, and added a link relation to the header of the blog. I am going to do the same for all my individual research areas with the next push of their website template.
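For anyone else closing this same gap, the validators mostly care about a handful of required elements--here is a bare-bones sketch of generating a valid Atom feed with nothing but the Python standard library, using placeholder entry data:

```python
from xml.sax.saxutils import escape

def atom_feed(title, author, site_url, feed_url, entries):
    """Build a minimal, valid Atom feed: id, title, updated, and author at
    the feed level, plus id, title, link, and updated on each entry."""
    items = ""
    for e in entries:
        items += ('<entry><id>{0}</id><title>{1}</title><link href="{0}"/>'
                  '<updated>{2}</updated></entry>').format(
                      escape(e["url"]), escape(e["title"]), e["updated"])
    return ('<?xml version="1.0" encoding="utf-8"?>'
            '<feed xmlns="http://www.w3.org/2005/Atom">'
            '<id>{0}</id><title>{1}</title>'
            '<author><name>{2}</name></author>'
            '<updated>{3}</updated><link rel="self" href="{4}"/>{5}</feed>'
            ).format(site_url, escape(title), escape(author),
                     entries[0]["updated"], feed_url, items)

print(atom_feed("API Evangelist", "Kin Lane", "http://apievangelist.com/",
                "http://apievangelist.com/atom.xml",
                [{"url": "http://apievangelist.com/2016/08/example/",
                  "title": "Example Post",
                  "updated": "2016-08-20T00:00:00Z"}]))
```

The link relation in the site header is just `<link rel="alternate" type="application/atom+xml" href="/atom.xml">`, which lets readers and crawlers auto-discover the feed.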

Syndication of my writing is important, so my blog is now available via RSS, Atom, and JSON. Thanks Erik for helping make sure the web is not entirely broken. ;-)


You Can Make Money While Also Doing Important Work For The API Space

I see a lot of companies doing things with APIs, and I often find myself struggling to find companies who are doing important things that benefit the community, have a coherent business model, and provide clear value via their services. In the drive to obtain VC funding, or after several rounds of funding, many companies seem to forget who they are, slowly stop doing anything important (ie. research, open source, etc.) with their platform, and seem to focus on just making money.

One phrase I hear a lot from folks in the space is "it's just business", and that I should stop expecting altruistic behavior around APIs, and within the business sectors which they are impacting--APIs are about making money, and building businesses, hippie! Often I begin to fall for the gaslighting I experience from some in the API space, and then I engage with services like CloudFlare.

I use CloudFlare for all my DNS, but I also stay in tune with their operations because of what they do to lead the DNS space, and because of their DNS API. I was going to craft this post after reading their blog post on the Cuban CDN, then I read their post on an evenly distributed future, and I'm renewed with hope that the web just might be ok--things might not be as dark as they feel sometimes.

I follow what CloudFlare is doing because their work represents the frontline of the API sector--DNS. This makes it not just about DNS; it also becomes about security, and potentially one of the most frightening layers of security--the distributed denial of service (DDoS) attack. CloudFlare clearly gets DNS, and cares so much that they have become super passionate about understanding the web as it exists (as messy as it is), and pushing the conversation forward when it comes to DNS, performance, and security.

CloudFlare makes DNS accessible for me, and for other less-technical professionals like my partner in crime Audrey Watters (@audreywatters), who also uses CloudFlare to manage her DNS, with no assistance from me. I operated my own DNS servers from 1998 until 2013, and it is something that I will never do again, as long as CloudFlare exists. CloudFlare knows their stuff and they help me keep the frontline of my domains healthy and secure.

There are a number of companies I look up to in the space, and CloudFlare is one of them. For me, they prove that you can build a real business, do important work that moves the web forward, be passionate about what you do, while also being transparent along the way. Knowing this is possible keeps me going forward with my own research, and optimistic that this experiment we call the web might actually survive.


If You Use API Definitions There Is No Excuse For Not Having An API Sandbox

I have long been a proponent of using API definitions, not just because you can deploy interactive API documentation, but because they open up almost every other stop along the API life cycle. Meaning, if you have an OpenAPI Spec definition for your API, you can also generate SDKs using APIMATIC, and API monitors using Runscope.

One of the examples I reference often is the API sandbox solution appropriately named Sandbox. I use Sandbox in this way partly because API mocking using API definitions is a pretty easy concept for developers to wrap their heads around, but also because their home page is pretty clear in articulating the opportunities opened up for your API when you have machine-readable definitions available.

Their opening text says it well, helping you understand that because you have API definitions you can "accelerate application development", and provide "quick and easy mock RESTful API and SOAP webservices". The presence of common API definition icons, including API Blueprint, OpenAPI Spec, RAML, and WSDL, then provides a visual reinforcement of the concept.

Sandbox opens up mocking and sandbox capabilities, which I lump together under one umbrella I call API virtualization. You can easily create, manage, and destroy sandboxes for your APIs using their API and your API definitions. I envision API providers following Cisco's lead, and having any number of different types of sandboxes running for developers to put to work, using server virtualization (virtualization on virtualization).
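To show the mechanics at their simplest, here is a toy mock server built from a definition-like structure using only the Python standard library--services like Sandbox do this from full API definitions, with much more capability:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A definition-like structure mapping paths to canned example responses,
# the kind of thing you could generate from an OpenAPI Spec definition.
EXAMPLES = {
    "/customers": [{"id": 1, "name": "Example Customer"}],
    "/customers/1": {"id": 1, "name": "Example Customer", "plan": "basic"},
}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        example = EXAMPLES.get(self.path)
        self.send_response(200 if example is not None else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        body = example if example is not None else {"error": "not mocked"}
        self.wfile.write(json.dumps(body).encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8010), MockHandler).serve_forever()
```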

With the evolution of API definition-driven solutions like Sandbox for providing virtualized instances of your APIs, there really isn't any excuse for not having a sandbox for your API. For device-focused APIs, a sandbox is essential, but even for web and mobile-focused APIs you should be providing places for your API consumers to play, and not requiring them to code against production environments by default.


CRX Extractor Wins For The Best Customer Quote Ever

Having quotes from your customers on your company website is a no-brainer. Finding the best examples of brands and companies putting your valuable service or tool to work demonstrates it has value, and that people are using it.

While playing around with a new Chrome add-on reverse engineering tool called CRX Extractor, I noticed the quote at the bottom of their page:

They win in my book for having a funny, but also pretty realistic, endorsement of why you should be using a product. I'm using the tool to better understand how browser add-ons are putting APIs to work, and to evolve my own creations as well, but I can see that reverse engineering them to make sure they are secure is a pretty important aspect of operating your company securely online.

When it comes to marketing your API, making sure you have quotes from smart people, as well as brands that people know, makes sense, but I would also add that making them funny, and allowing ourselves to laugh along the way, can make a significant impact with the right people as well.


An OpenAPI Spec For A Building Permits API

One of the reasons for crafting API definitions like OpenAPI Spec for our APIs, and openly sharing them on the web, is so that the pattern will get used and reused by other API providers. That might sound scary to some companies, but really that is what you want--your API design used across an industry. Your API definition is not your IP; the magic is behind your API, in the way you approach all the supporting elements around your API operations.

There are numerous industries where I'd like to see a common API definition emerge and get reused, and one of the more obvious ones is in the area of building permits. Open Permit has shared their API definition, publishing the OpenAPI Spec that drives their Swagger UI documentation. This is a great example of an API definition that should be emulated across the industry, because the money to be made is not in the API design, but in the portion of our economy that the API will fuel when it is in operation.
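I don't want to misquote Open Permit's actual definition here, so the fragment below is a hypothetical sketch of what a shared permits pattern could look like in OpenAPI Spec (Swagger 2.0) terms, expressed as a Python structure:

```python
# Hypothetical OpenAPI Spec (Swagger 2.0) fragment for a shared building
# permits pattern--not Open Permit's actual definition.
permits_api = {
    "swagger": "2.0",
    "info": {"title": "Building Permits API", "version": "1.0"},
    "paths": {
        "/permits": {
            "get": {
                "summary": "Search issued and pending building permits",
                "parameters": [
                    {"name": "status", "in": "query", "type": "string"},
                    {"name": "address", "in": "query", "type": "string"},
                ],
                "responses": {"200": {"description": "A list of permits"}},
            }
        }
    },
}
```

If every city published against a shared definition like this, a single client could work across all of them.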

Can you imagine if all cities, contractors, and vendors who service the construction industry could put APIs to use, and even better, put common patterns to use? If you have ever tried to build something residential or commercial and had to pull a permit, you understand. This is one industry where APIs need to be unleashed, and we need to make sure we share all possible API definitions so that they can get used, and we aren't ever re-inventing the wheel.


A Wicked (Good) Open Source API Deployment And Management Stack

I was introduced to a new open source, Dockerized API operations solution called Wicked, developed by the integrated cloud and desktop solutions provider, the Haufe Group. There are a number of open source API management solutions out there, and an even greater number of API frameworks that can help you deploy your APIs, but Wicked is the first to span several areas of the API life cycle, including DNS, deployment, containers, authentication, management, and documentation.

Built On Existing Open Source API Gateway
The Haufe Group built the core of Wicked on top of an existing open source API management solution--Mashape's Kong API gateway--further augmenting, evolving, and improving on what was already there.

Why reinvent the wheel? It makes sense to build on existing solutions for API management, developing on top of what is already being used by API architects and developers.

Simple Developer Onboarding
Wicked employs the latest approaches to allowing developers to onboard with an API, sticking to what is already working for API providers, and what developers expect: 

  • Authenticate with email and password - Let your users sign up with their email address and a password. Email addresses will be automatically validated by sending out verification emails.
  • Authentication with GitHub or Google - You may also configure signup and login using OAuth2 with GitHub and/or Google. These identities will be treated as 'verified' automatically.

I like that they let you use Github or Google on top of the standard email and password setup. I've been aggregating all my personal API developer accounts under my single @kinlane Github account, and when I set up a business account I authenticate using my @apievangelist Github account--more API providers should offer this, to help us all organize our accounts.

Using Modern API Authentication
There are a handful of proven approaches out there for allowing developers to authenticate against an API, and Wicked allows for two of the most common approaches:

  • API Key or OAuth 2 - Out of the box, wicked enables fast securing of your API using API Key authentication or OAuth 2 Client Credentials Flow.

Allowing for either API keys or OAuth will cover 75% of the use cases companies are looking for when securing their digital resources. Most public resources will just need an API key, which acts as the identifier, but for personally identifiable information, OAuth is essential.
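The OAuth 2 Client Credentials Flow that Wicked supports is a well-worn standard (RFC 6749)--here is a minimal sketch, with a placeholder token endpoint:

```python
import requests

def client_credentials_token(token_url, client_id, client_secret):
    """Trade a client id and secret for a short-lived access token."""
    response = requests.post(token_url,
                             data={"grant_type": "client_credentials"},
                             auth=(client_id, client_secret))
    return response.json()["access_token"]

token = client_credentials_token("https://api.example.com/oauth2/token",
                                 "my-client-id", "my-client-secret")
# The token then rides along on each API call, instead of a raw API key.
headers = {"Authorization": "Bearer " + token}
```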

Enabling API Service Composition
Every successful API provider knows that you don't provide the same access for all developers, and service composition is an essential way to approach this--Wicked provides the essentials in this area: 

  • Implement Rate Limiting - Using Mashape Kong's rich functionality, implement rate limiting for your APIs, wherever needed.
  • Subscription Plans - API definitions can be associated with subscription plans, which can carry additional settings, e.g. different rate limits for different users.
  • Group based rights to APIs - Define custom user groups and assign those groups to users in order to limit access to specific APIs to specific groups. The Admin group can also be assigned.
  • Group based rights to custom content - The content section also supports group-based access, e.g. to How-tos or tutorials.
  • Subscription Approval Workflow - API Plans can be configured to require an approval of subscription; an email with the approval request will be sent to a predefined email address.

While I am not a big fan of API approval workflows, as I prefer resources to be self-service, I was intrigued by the email approval feature, allowing for a (hopefully) frictionless onboarding flow that can add an additional layer of security for our most valuable API resources.

Providing The Necessary Application Management
APIs are all about developing applications, and Wicked allows for the identification of apps, and the incorporation of these apps into the service composition workflow: 

  • Application Concept - In order to subscribe to an API, a user needs to create an application (which is the client of the API); APIs are coupled with applications, not users.
  • Application Owner Roles - Applications can be shared among users, using different roles on the application: Admin/Owner, Collaborator, and Reader.

Users may have one or many apps, which integrate with one or many APIs. This many-to-many relationship provides a robust way to manage API consumers, and potentially the multiple applications which they will develop.

Interactive API Documentation
No API in 2016 is complete unless it has interactive documentation, and Wicked sticks with what works in this area, providing documentation for APIs using Swagger UI and OpenAPI Spec:

  • Swagger UI Integration - In order to view the APIs in more detail, wicked has integrated Swagger UI, with configurable direct access to the backend services.

Using open API definitions like OpenAPI Spec, as well as providing up-to-date interactive API documentation, is pretty much the new baseline for APIs these days, and Wicked keeps up.

Scalable Deployment
Next, as if that wasn't enough, you get the scalable deployment of APIs using Docker. Wicked weaves together the DNS, deployment, and management of your APIs, and allows for modular deployment with Docker, and scaling with Docker Compose:

  • Docker Deployment - The entire APIm solution is deployed using docker; everything runs in docker, enabling deployments to whatever infrastructure supports it.
  • Scaling With Docker Compose - By using docker-compose, the deployment of your API Management solution can be easily scaled to use multiple instances of Kong, behind a powerful HAproxy.

This type of API deployment is how all APIs will be deployed in the future. We have a lot of work ahead of us when it comes to decoupling our legacy infrastructure, but Wicked gives us the tools we need to get this done--providing a fuller open source stack which we can more confidently bake into our infrastructure.

There are two things that stand out for me about Wicked: 1) it spans deployment and management in a scalable way, and 2) it is built using the best of breed open source tooling, specifications, and standards available out there right now--Kong, HAproxy, OpenAPI Spec, Swagger UI, and Docker.

I'm just getting going with Wicked, and it makes me happy to see API operations solutions like this come together. I'm still reviewing the stack, and I am really liking the motivations behind why they did it, and how they are doing it--more to come.


Who Is Going To Do The DevOps Aggregation API Platform?

There are two distinct types of APIs I keep an eye on. One is what I call my life cycle APIs, which are the APIs of the service providers who are selling services and tools to API providers and developers. The second category is what I call my stack network, and these are the individual API providers who offer a wide range of API resources--you can find both of these types on the home page of API Evangelist.

The 50+ life cycle APIs I track on can be used by companies to manage almost every stop along a modern API life cycle. In theory, all of these service providers have APIs. In reality they do, but they do not practice what they preach, and often do not make their APIs easily discoverable. I have said it a thousand times before--if you sell online services to API providers, you should have an API. Period.

At some point in the future, I will have profiled all of the companies included in my API life cycle research, like I did for API monitoring, and will be able to provide a comprehensive API stack across all the providers, for all stops along the life cycle. Ideally, each provider would have their own OpenAPI Spec, but I'm still getting many of them to make their APIs public--convincing them of the importance of also having an API definition for their API will come next. Then I'll continue pushing on them to allow for the import / export of API definitions, so their customers can more easily get up and running with their services--if you need an example of this in the wild, take a look at Sandbox, or over at API Metrics.

I'd love to see someone take this idea and run with it beyond what I'm able to do as a one-man act. There are numerous API aggregation solutions already out there for financial, healthcare, images, documents, and more. What about an aggregated API across providers in the name of DevOps or microservices orchestration? An aggregated solution would allow you to automate the defining of your APIs in multiple formats with API Transformer, deploy them using the Docker or Heroku APIs, manage them with the 3Scale APIs, deploy sandboxes with Sandbox, monitor with Runscope, and cover almost every other stop along the life cycle.

I'm sure I've written this one up before, but I couldn't find it, so I wanted to get a fresh post up on the subject. With all the agile, orchestration, DevOps, microservices, and continuous integration going on, having a coherent, cross-vendor API stack, and a suite of the usual analytics, billing, and other vital middleware services, just makes sense. Let me know when you get up and running, and I'll send over my bank account information for the royalty payments. ;-)


The Expanding API Layers That Overlap Our Physical And Virtual Worlds

I wrote the other day about the interesting opportunity opening up within the satellite imagery API layer, and earlier about the similar opportunity being expanded within the fast-growing dimension of our world being opened up by drones. Layers within maps are nothing new, and are something Google pushed forward early on in the history of APIs with Google Maps, but I feel they are being expanded on further as APIs open new dimensions like satellites and drones. This is then being expanded on again by adding API access to each layer, for augmenting and injecting other valuable API resources into these newly created API dimensions.

Let's see if I can successfully describe the multiple API dimensions being opened up here. APIs are providing access to maps of our physical world, whether it is on the ground, from the air with drones, or from space with satellites. These API-driven maps have layers, which are also made available via APIs, allowing other API-driven resources like weather, forest fires, restricted spaces, and temporary or permanent elements to be injected. Once injected, these API-driven mapping resources, with their API-injected resources, are also being made available via APIs, providing entirely new, specialized resources--that is a lot of APIs!

I am not even touching on the physical devices that put these maps to work, which also possess APIs--the drones, GPS units, cars, etc. This is just the expanding layer that is opening up via the multitude of API-driven mapping resources, and it is further expanded when you look at the video layer which drones, mobile phones, automobiles, security cameras, and other Internet-connected devices are opening up. Drones, automobiles, and others will share layers with the mapping resources, but other video resources will also possess their own layers for augmenting the experience on the web, mobile, and television.

The part of this that is really striking to me isn't just the overlapping layers between mapping, video, and other channels, it is the overlap between our physical and virtual worlds. Think what Pokemon Go has done for gaming, but now consider drones, consumer automobiles, as well as commercial fleets. It can be very difficult to wrap your mind around the different dimensions of opportunities opening up, but it doesn't take much imagination to understand that there is a growing opportunity for APIs to thrive in this expanding universe.


Sharing Your API Platform Road Map And Telling The Story Like Readme.io

Sharing your platform's road map with the public and your community is an often overlooked aspect of API operations, but one that can go a long way toward communicating your plans for the future with your community. This is why I carved the concept out into its own research area, to help me better understand how successful API providers, as well as API service providers, are publishing, sharing, and communicating around their road maps.

One recent example of this comes out of the API documentation service provider Readme.io, who didn't just publish a nice, simple, and clean road map for their platform--they also told the story of the process. This is a great way to announce that you have a road map for the platform, but it is also something you should repeat as often as possible, telling the story of what you just added to the road map, with as much detail as possible on why.

Sharing your road map, and the story behind it, goes a long way in lowering the anxiety around what the future holds for your API consumers, letting them know that you care about them enough to share what you have planned. In my opinion, a road map for an API platform shows that you have empathy for your community, and is something I like to encourage and support by showcasing the process of your road map storytelling here on my blog when I can.


Prioritizing Commonly Requested Information With Your API Deployment

I was reading the post from open data service provider Socrata about "putting citizens first" when it comes to opening up city, county, state, and federal government data. One of the headlines they showcased was "Texas overhauls open data portal, prioritizes commonly requested info"--which is a pretty sensible thing to consider, not just for government, but also for companies thinking about what to open up next.

First, let me emphasize that I am talking about open data that is already published on the web in another format (or should be). What the State of Texas is doing is what I call the low-hanging fruit of API deployment--if it is on your website, it should also be available in a machine-readable format. Ideally, you offer HTML, as well as JSON, XML, and other relevant formats, side by side within a single domain using content negotiation, but no matter how you accomplish it, the priority is making sure that commonly requested information is accessible to those who need it.
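Content negotiation is straightforward to wire up--here is a minimal sketch using Python and Flask, with placeholder records standing in for the commonly requested information:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder records standing in for commonly requested open data.
RECORDS = [{"id": 1, "title": "Commonly Requested Dataset"}]

@app.route("/data/commonly-requested")
def commonly_requested():
    # One URL, multiple representations--the format rides on the Accept header.
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "application/json":
        return jsonify({"records": RECORDS})
    rows = "".join("<li>{0}</li>".format(r["title"]) for r in RECORDS)
    return "<ul>{0}</ul>".format(rows)

if __name__ == "__main__":
    app.run()
```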

It is a shame that Texas is only now considering this with the latest revision of their portal; ideally, government agencies and companies would apply this way of thinking by default. If it is on your website as HTML, most likely it has already been deemed important, which is why it was made self-service on the open web in the first place. If you are planning on deploying an API or open data portal, and you are just wondering where you should start, make sure to learn from the State of Texas, and prioritize the commonly requested information.


API Providers Could Add A Page To Showcase Their Bots

I am coming across more API providers who have carved off specific "skills" derived from their API, offering them up as part of the latest push to acquire new users on Slack or Facebook. Services like Github, Heroku, and Runscope that API providers and developers are putting to work increasingly have bots they employ, extending their API-driven solutions to Slack and Facebook.

Alongside having an application gallery, and an iPaaS solution showcase, maybe it's time to start having a dedicated page to showcase the bot solutions that are built on your API. Of course, these would start with your own bot solutions, but like application galleries, you could showcase bots that were built within your community as well.

I'm not going to add a dedicated bot showcase page to my research until I've seen at least a handful in the wild, but I like documenting these things as I think of them. It gives me some dates to better understand at which point certain things in the API universe began expanding (or not). Also, if you are doing a lot of bot development around your API, or maybe your community is, this might be the little nudge you need to be one of the first APIs out there with a dedicated bot showcase page.


What Is A RESTful API And Why Does It Matter To IoT?

I'm pretty skeptical about many of the reasons behind why companies are connecting devices to the Internet using APIs--I am just not convinced this is the best idea when we already have so many security issues with the standard and mobile web. Regardless, I'm constantly working to understand the motivations behind a company's decision to do APIs, as well as what they are telling their customers.

I published a story last week about defining the industrial programmable automation controller (PAC) strategy using an API, which focuses on the approach taken by Opto 22. To support their efforts, the industrial automation provider offers up a dedicated page educating their customers on why you would want to use REST, providing some bullets:

  • Archive I/O and variable data from the PAC directly into Microsoft SQL Server using Microsoft's T-SQL—no OPC or ODBC required
  • Read data from and write data to the PAC from your browser or web-based application using JavaScript.
  • Read or write PAC data using your favorite programming language—C, C++, C#, Java, PHP, Python, and many more
  • Build a mobile application that directly accesses data on your PAC—using Java, Swift, or Xcode 
  • Build a data flow application for communicating with cloud platforms and cloud APIs, using Node-RED and our new SNAP PAC Nodes.

Each of the industrial controllers "includes an HTTP/HTTPS server and RESTful API, compatible with any programming language that supports JavaScript Object Notation (JSON)". In my opinion, this reflects the wider API space that is serving the web and mobile objectives, allowing for integration using any programming language, as well as opening up the devices to API orchestration solutions using iPaaS, and the variety of other API service provider solutions available in the market.
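I haven't verified the exact paths in Opto 22's documentation, so treat the endpoints and credentials below as hypothetical--the point is that reading from, and writing to, an industrial controller becomes plain HTTP and JSON:

```python
import requests

# Hypothetical controller address, paths, and credentials--check the
# vendor's documentation for the real ones.
PAC = "https://pac.example.com/api/v1"
AUTH = ("api-user", "api-key")

# Read an analog input from the controller; verify=False because many
# controllers ship with self-signed certificates.
reading = requests.get(PAC + "/analog-inputs/temperature",
                       auth=AUTH, verify=False).json()
print(reading)

# Writing a setpoint back to the controller is just another HTTP call.
requests.post(PAC + "/analog-outputs/setpoint",
              json={"value": 72.5}, auth=AUTH, verify=False)
```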

Ultimately, I think using web technology is inexpensive, and avoids the usage of proprietary, vendor-specific solutions. As the ability to offer up a web server on any physical object becomes easier and cheaper, the usage of web APIs to interact, integrate, and orchestrate around physical objects will only increase, for better or worse.


Thinking In Terms of API Skills And Moving Beyond Just API Resources

The APIs which have seen the greatest adoption across the API space always provide the functionality that developers need in their applications. It is either because the platform is already in use by their users (ie. Twitter, Facebook), or because it provides the core feature that is required (ie. SMS, email). There are an unprecedented number of high-value APIs out there, but I think many API providers still struggle when it comes to defining them in a way that speaks to the needs of web, mobile, and device app developers.

I have explored this topic before, discussing the importance of exposing the meaningful skills our APIs possess for use in the next generation of messaging and voice apps, as well as asking whether or not our APIs have the skills they need in a voice and bot enabled world. I am not 100% behind the concept that voice and bots are the future, but I am 100% behind defining our API resources in a way that immediately delivers value like they are doing in these environments.

The approach used by Alexa when it comes to developing "skills" is an important concept for other API providers to consider. Even if you aren't targeting voice enablement with your APIs, the model provides many positive characteristics you should be emulating in your API design, helping you deliver more meaningful APIs. For me, thinking in terms of the skills that your APIs should be enabling better reflects the API journey, where we move beyond just database and other very technical resources, and provide the meaningful skills developers need for success, and end-users (aka humans) are desiring.
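
To make the contrast concrete, here is a minimal sketch in Python, with entirely hypothetical endpoints and payloads, of the same capability exposed as a raw technical resource versus a skill:

    import requests

    BASE = "https://api.example.com"    # hypothetical provider

    # Resource-oriented: the developer has to know the schema and assemble the pieces
    requests.post(BASE + "/messages", json={
        "channel": "sms",
        "to": "+15551234567",
        "template_id": "reminder-01",
        "variables": {"time": "3pm"},
    })

    # Skill-oriented: the API speaks in terms of the outcome being delivered
    requests.post(BASE + "/skills/send-reminder", json={
        "to": "+15551234567",
        "when": "3pm",
    })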


The Racial Bias Being Baked Into Our Algorithms

My "fellow" Presidential Innovation Fellow Mollie Ruskin (@mollieruskin), was doing some work with veterans recently and stumbled across a pretty disturbing example of how racial bias is being baked into the algorithms that are driving our online, and increasingly offline worlds.

This morning I was searching #Google for images of #Veterans for a project. I stumbled upon a photographer who had taken hundreds of beautiful photographs of Veterans all in the same style.

I clicked on a few striking portraits...I quickly noticed something very troubling.

When doing image searches, I often use the 'related images' feature to uncover more pictures relevant to what I'm hunting for, as was the case this time around. Where most of the photos returned related images of other veterans, one photo of a smiling black male vet in his uniform fatigues, garnered a series of related images that were all mugshots of CRIMINALS.

The tools we use to fuel our 21st century lives are not the seemingly neutral blank slates we imagine them to be. They are architected and shaped by people, informed by our conscious and unconscious biases. Whether this is reflecting back a dark mirror on what people click on or surfacing a careless design in an algorithm, this random search result shines a little more light on the more subtle and insidious ways racism is baked into our modern lives.

For my friends who work at the big tech giants which are increasingly the infrastructure to our lives, please help make sure your institutions are addressing this stuff. (And thanks to those of you who already have been.)

#BlackLivesMatter

PS: Recently saw a great talk about this idea of 'oppression' in our algorithms. Def worth a watch: https://www.youtube.com/watch?v=iRVZozEEWlE

This isn't some weird edge case, this is what happens when we craft algorithms using development teams that lack diversity. Racial bias continues to get baked into our algorithms because we refuse to admit we have a problem, or are unwilling to actually do anything about it. Sadly, this is just one of many layers in which bias is being built into our algorithms, which are increasingly deciding everything from what shows up on your Facebook wall, all the way to which criminals will commit crimes in the future.

You will hear more stories like this on API Evangelist as I push forward my APIs and algorithms research, working to identify ways we can use open source, and open APIs, to make these often black box algorithms more transparent, so we can potentially identify the bias inside. Even with this type of effort, we are still left with having to do the hard work of changing the culture that perpetuates this--I am just focusing on how we crack things open, and more easily identify the illness inside.


The Expanding World of Technology Evangelism

Technology evangelists are nothing new, but evangelism is something I think will continue to expand as the Internet continues to crack open more of the core areas of the tech sector. I specifically chose the term API Evangelist to define what I did evangelizing for all APIs, but all I was really doing was following the lead of evangelism pioneers like Amazon, Google, and even Microsoft.

There has long been discussion around evangelists vs. advocates, and I've seen companies also choose to adopt an ambassador format. I have also been interested to see the evolution of Docker's Captains program, whose members are "Docker experts and leaders in their communities who demonstrate a commitment to sharing their Docker knowledge with others".

I also stumbled across a post from Compose, the MongoDB-as-a-service provider, showcasing what they call the database advocate, whose "job is not to guard the database from the world but to advocate the best ways to use it and offering the tools to optimize that usage". In their view, the outdated DBA is going away, with the database advocate emerging as a much more friendly, outward facing, pragmatic gatekeeper for databases within the enterprise.

It makes me happy to see the open ethos brought to the table by web APIs spreading to all the layers of the tech stack, making the world of virtualization and containers more accessible. As an old database guy, it makes me really, really happy to see it also spread to the world of databases--and I am hoping it keeps spreading.


Delivering API Docs Using OpenAPI Spec Driven Templates For Angular

I have been talking with Nick Houghton over at Sandbox about the state of OpenAPI Spec driven API documentation, and the lack of a machine-readable core when you deploy Slate-driven documentation. He wanted the same thing I did--good looking, dynamic API documentation that is OpenAPI Spec driven.

He recently got back to me and found a solution that worked for them: "Ended up just templating the Swagger JSON myself rather than relying on Slate etc to do it. So model/resources are Swagger annotated, CI pushes out Swagger JSON and Angular UI parses in the browser, works quite well I think".

Nick is on a similar path to mine, as I work to simplify API documentation using OpenAPI Spec, and provide specialized views of APIs using Liquid. We are looking for the simplicity, control, and beauty of Slate, combined with the machine-readable core of OpenAPI Spec--allowing us to keep the core specification for the API up to date, so the documentation is always the latest.
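
As a rough illustration of the pattern, here is a server-side sketch of the same idea in Python using Jinja2--their version parses the Swagger JSON in the browser with Angular, and the file name and template here are just placeholders:

    import json
    from jinja2 import Template

    DOCS = Template("""
    {{ spec.info.title }} (v{{ spec.info.version }})
    {% for path, methods in spec.paths.items() %}{% for verb, op in methods.items() %}
    {{ verb | upper }} {{ path }}
      {{ op.summary | default("") }}
    {% endfor %}{% endfor %}
    """)

    # The Swagger JSON is the machine-readable core; the template is just a view of it
    with open("swagger.json") as f:
        spec = json.load(f)

    print(DOCS.render(spec=spec))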

They are going to write up their journey on their blog (as all API service providers should), and share it with us. I'll probably do another write-up once I get more details on how they created the templated API docs using OpenAPI Spec and Angular. I also like how they have the OpenAPI Spec JSON pushed out as part of the CI life-cycle--something I'll cover more as part of my API life-cycle orchestration research.


When Your API Consumption Influences The Acquisition Of Your Startup

I saw that the contact API solution FullContact recently purchased the professional network management solution Conspire. Thankfully FullContact is good about blogging about the move, and the details of the motivations behind their decision -- without this type of storytelling I wouldn't even have known it happened.

One thing I noticed in the blog post was that "Paul, Alex, and the entire Conspire team have been fabulous partners with FullContact, having utilized our Person API as a part of the Conspire offering". Acquisitions from within an API ecosystem are not new. It is why many companies do APIs in the first place, to help identify talent within their ecosystem like Paypal does, and complementary companies and teams like FullContact has done.

There are many mating rituals you can perform as a startup these days, and building an interesting product, service, and company on top of an API is a pretty cool way to accomplish this. Obviously, it is easier said than done, but if you can identify a real-world business problem, develop a solution to solve this problem on top of an API, and get some real traction--it can lead to some pretty interesting outcomes.


Providing Multiple Types of API Sandboxes To Develop Against

I was going through the Cisco Devnet ecosystem and stumbled across their sandbox environment. I thought it was worth noting that they provided several different types of sandbox environments, with a rolling list of available sandbox instances at any point in time.

Cisco provides seven different types of sandboxes:

  • Networking - The Networking Sandbox allows you to remotely access Cisco Networking technologies. Each Sandbox contains either simulated or physical network elements as well as access to developer tools. Some Sandboxes also provide for the creation of synthetic traffic.
  • Collaboration - The Communication and Collaboration Sandbox allows you to remotely access Cisco Collaboration technologies in a cloud lab. Labs contain Cisco UC services: Unified Communications Manager, Unified Presence, and Unity Connection. In these labs you can build and test integrations to support features such as Instant Messaging/presence, voicemail, and conferencing services in your application, for example using the Jabber Web SDK.
  • Compatibility Testing - The DevNet Sandbox IVT program allows users to complete Interoperability Verification Tests (IVT) in our labs with your engineer, as an option to using an authorized Cisco IVT partner services lab. Cisco Solution Partner Program members will be eligible for a Cisco Compatible logo once testing is complete and deemed passed. The labs contain the architecture, configuration, and products needed to complete an IVT for supported products and categories.
  • IoT - The IoT Sandbox allows you to remotely access Cisco IoT technologies. Labs contain architectures with products from the DevNet IoT product portfolio, with simulated and actual hardware elements as well as access to tools and synthetic traffic.
  • Cloud - The Cloud Sandbox allows you to remotely access Cisco Cloud technologies. Labs contain architectures with products from the DevNet Cloud product portfolio.
  • Security - The Security Sandbox allows you to remotely access Cisco Security technologies. Labs contain architectures with products from the DevNet Security product portfolio, with simulated and actual hardware elements as well as access to tools and synthetic traffic.
  • Datacenter - The DataCenter Sandbox allows you to remotely access Cisco DataCenter technologies. Labs contain architectures with products from the DevNet DataCenter product portfolio, with simulated and actual hardware elements as well as access to tools and synthetic traffic.

I like the diverse number of environments represented here. I've been seeing more virtualized environments show up in support of device-based API integrations--you just can't expect everyone to develop and test against the real thing. The significant areas represented here for me are the compatibility and security testing sandbox environments--important areas if we are going to harden integrations.

API definitions like OpenAPI Spec and API Blueprint, combined with recent advances in virtualization (aka containers), make for a pretty rich environment for pushing forward the number of available sandbox environments that developers can take advantage of. I'd like to see more API providers offer sandbox environments, build up their capacity in this area, and get to the level where Cisco is already operating--offering a rich variety of virtualized environments for developers to test their integrations against.


Providing An Anonymous Layer To Your API Provider Service Like Stoplight.io

I was playing around with the free, and the now paid, layers of Stoplight.io, having written a previous piece about their lack of a public pricing page, and I noticed they provide an anonymous layer to their API modeling service--without logging in, you can play around with their HTTP client tool and make requests to an API.

The anonymous version is super limited compared to their full solution, but I think the presence of an anonymous edition opens up an interesting discussion. It appears Stoplight.io has done a lot of work lately to separate the layers of v2 of their service, and provide public, free, paid, and enterprise editions of their API modeling solution.

With the shrinkage of freemium these days in the API space and the tightening down on free trials, an anonymous layer is compelling. It isn't something that would work for all API service providers, but it is at least something to consider as you are working to define the layers like Stoplight.io has been doing.


I Know It Is Hard When You Are Just Getting Started, But Please Make Your Pricing Page Public

I received an email from Stoplight.io about their version updates, which included the phasing out of the free beta period--makes sense. I clicked on the "you can view pricing, and setup billing, on your account billing page" in the email, and was taken to the register page. 

To clarify a little bit, I have an account with Stoplight.io, which I registered using my @kinlane Github account. I'm logged in as my @apievangelist Github account presently as I'm doing some work with multiple repos, and I really didn't want to log out of @apievangelist and log in with my @kinlane just to see the pricing.

So I headed directly to the public website of Stoplight.io to look for pricing--which I couldn't find within 30 seconds (my standard approach). When this happens I will then Google for the [API name] + "pricing"--nothing. Ultimately I did log in with my @kinlane Github account so that I could see the pricing, because I genuinely want to keep my account, as Stoplight.io made it into the highly useful tool category for my world.

I just wanted to articulate the friction I experienced, so Stoplight.io can consider it, but also so the rest of you can. My preference is that you always make your pricing page public. I understand that this is difficult for startups that are just getting going, are in a beta phase, etc., but I feel like in 2016 it should be the default practice for API providers, as well as service providers.

Even if your service is in beta, and you aren't charging for it yet, you should have a dedicated page to explaining this, and keep it updated as you evolve. Please do not make me log in just to review your service, understand what it does, and find your pricing. This is bad for helping analysts like me understand what you are up to (no I don't want a briefing), and is bad for your customers who are trying to understand where their accounts stand, and whether we can afford to move forward. 


Tweeting Out The iPaaS Opportunities That Are Available For Your API

I've been advocating for API providers to embrace integration platform as a service (iPaaS) providers for three years now, encouraging them to make sure their API is accessible via popular platforms like Zapier. While I don't push these as a required building block for all providers, they definitely are what I'd consider a common building block across many of the successful APIs I keep an eye on.

Making sure your API is available on iPaaS platforms, and showcasing these opportunities, is becoming more and more important. Another positive move you can make when it comes to iPaaS is to tweet out what is possible on a regular basis, like the transactional and marketing email API provider Mailjet does.

I may add this type of activity to my list of common API evangelism building blocks. It is something that I think complements the fact that you have iPaaS integration solutions, and that you are showcasing them within your API portal. It just makes sense that you should also be regularly tweeting out these opportunities, making them known to your followers, and hopefully your API consumers.

Tweeting out the iPaaS opportunities available for an API is a great way to reach beyond just the developer API consumer, and potentially reach the actual business consumer, who is probably going to have a real world problem that they hopefully will be able to solve using your API--opening up a whole other dimension to how your API can be put to work.


The Historic Newspaper API From The Library Of Congress

It always bums me out that the cool kid startup APIs always get the lion's share of the attention when it comes to APIs in the tech news. Which I guess makes it my responsibility to show the ACTUAL cool kid APIs, like the Chronicling America API from The Library of Congress, which provides access to information about historic newspapers and select digitized newspaper pages.

The Library of Congress provides APIs for searching the newspaper directory and digitized page contents using OpenSearch, auto-suggesting of newspaper titles, links using a stable URL pattern, and JSON views of all newspaper resources. They also provide linked data allowing you to process and analyze newspaper information with "conceptual precision" (oooooh I like that), as well as bulk data for your deeper research needs.
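
For a taste of what is possible, here is a small Python sketch that searches the digitized newspaper pages using the API's JSON format--the parameters reflect my read of the search endpoint, so treat them as approximate:

    import requests

    resp = requests.get(
        "https://chroniclingamerica.loc.gov/search/pages/results/",
        params={"andtext": "veterans", "format": "json", "rows": 5},
    )
    resp.raise_for_status()

    # Each item is a digitized newspaper page matching the search
    for item in resp.json().get("items", []):
        print(item.get("title"), "-", item.get("date"))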

I wish that ALL newspapers had APIs like the New York Times and The Guardian do, that information was available to the public by default, and that everything was automatically synced to any archive or collection that was interested in it, like the Library of Congress. I know that newspapers are having a hard time in the digital age, and I can't help but feel that APIs would help them evolve and shift until they find their voice.


Providing Additional Support Options For Your API In Your Twitter Bio

As I was writing up a story on Mailjet tweeting out the iPaaS opportunities around their email API, I noticed their Twitter bio. It is subtle, but having spent a great deal of time looking for the support channels for an API, this is a potentially huge time saver. It is what I do best, discovering these simple, subtle things that the successful API providers are doing.

I always encourage API providers to use Twitter as a support channel because it doesn't just provide support, it also is a public demonstration that you give a shit about your API consumers, and will actively work to help them solve their problems. I've seen API providers who offer no support, or a minimal number of support channels, while also seemingly working very hard to hide them--leaving you feeling like you will never get help when you need it.

Mailjet provides links to their status page, as well as a link where you can submit a trouble ticket, in their Twitter bio. I would consider their Twitter bio pretty well crafted, with the first half explaining what they do (for new users), and the second half providing information on how to get support (for existing users). It is a subtle, positive thing that all API providers should consider doing--thanks for the lead Mailjet.


Maybe A Save As JSON Option For Excel Wasn't Forward Thinking Enough

In September of 2015, I asked when we are going to get a save as JSON option in our spreadsheets. I was doing a lot of work saving spreadsheets as CSV files, something I can easily do programmatically, but I was doing it manually as part of a workshop. After I downloaded each CSV file, I then converted it to a JSON file--leaving me asking, "where is the save as JSON"?
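
For reference, the programmatic version of that manual workflow is only a few lines of Python using the standard library (the file names are just placeholders):

    import csv
    import json

    # Each row of the CSV becomes a dictionary keyed by the header row
    with open("spreadsheet.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # The JSON file the "save as JSON" button would have produced
    with open("spreadsheet.json", "w") as f:
        json.dump(rows, f, indent=2)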

As I've been reviewing the new Microsoft Excel API, I got to thinking about the need for a save as JSON option, and now I think that this line of thought was not forward thinking enough. A "save as" just does not speak to the future of machine-readable spreadsheet interactions in an online world. Save as CSV or TSV is a very desktop-oriented vision of using Excel, and in 2016 we need more.

The Microsoft Excel API plus OAuth opens up an endless number of opportunities for working with data available in spreadsheets. Microsoft will have to open up the navigation in the online version of Microsoft Excel to the API developer community, allowing users to subscribe to 3rd party API driven solutions like save as JSON, open in Github as CSV, visualize with D3.js, and anything else that developers dream up via the Microsoft Excel API.
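
A 3rd party save as JSON could be as simple as the following sketch--the host and endpoint path are hypothetical placeholders I made up for illustration, not Microsoft's documented routes, with the OAuth piece represented by a bearer token:

    import json
    import requests

    TOKEN = "OAUTH-ACCESS-TOKEN"    # obtained via the platform's OAuth flow

    # Hypothetical endpoint for reading the used range of a worksheet
    resp = requests.get(
        "https://api.example.com/excel/workbooks/budget.xlsx/worksheets/Sheet1/usedRange",
        headers={"Authorization": "Bearer " + TOKEN},
    )
    resp.raise_for_status()

    # Write the cell values out as the JSON file the user asked for
    with open("sheet1.json", "w") as f:
        json.dump(resp.json().get("values", []), f, indent=2)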

Maybe this is already possible in the Microsoft Excel online navigation, regardless, there will also be opportunities for extending these options via browser add-ons, as well as integration directly within 3rd party solutions, that use OAuth and the Microsoft Excel API to access valuable data that is locked up in spreadsheets. I enjoy that APIs are constantly pushing me to re-evaluate my legacy ways of thinking, and help me look more towards the future.


Expanding My Awareness Of How APIs Are Being Used At The Network Level

I work as hard as I can to understand every sector being opened up using web APIs, and the network level is one where I need to push my awareness further, partially because I find it interesting, but mostly because of the impact it can have on every other aspect of how the Internet works (or doesn't).

To get started I went over to Cisco Devnet, and took a look at their pxGrid solution, which is a "multivendor, cross-platform network system that pulls together different parts of an IT infrastructure such as security monitoring and detection systems, network policy platforms, asset and configuration management, identity and access management platforms", which provides "you with an API that will open up a unified framework that will enable you to integrate to pxGrid once, then share context with any other platform that supports pxGrid". 

I'd categorize pxGrid as a network API aggregator in my world, providing a single, seamless API to access resources at the network level. Of course, all the network endpoints have to speak pxGrid, but the platform provides me with an introductory blueprint for how web APIs are being applied to network resources, for device configuration and management, as well as identity and security services. I'm not a network professional, but I do know that Cisco devices are pretty ubiquitous, and with the company historically having over 50% market share, I'm betting it is a good place to kick off my learning.


Continuing My Struggle For Reciprocity As ETL Evolves Into The Cloud As iPaaS

Early on in 2013, I started a research project to keep an eye on a specific type of API driven service provider, like IFTTT and Zapier, who were enabling individuals and businesses to move data around in the cloud. This new wave of startups was taking what we traditionally called ETL in the enterprise--extracting, transforming, and loading data between various systems--and moving it into the cloud era.

I'm an old enterprise database guy, and ETL has been an essential tool in my toolbox for quite some time--according to Wikipedia ETL is:

Extract, Transform and Load (ETL) refers to a process in database usage and especially in data warehousing that performs: Data extraction – extracts data from homogeneous or heterogeneous data sources.

The IT teams that I have historically been a part of have employed ETL to make sure data was where it was needed within the enterprise. As IT began its evolution to the web, the need to migrate data between systems outside the firewall increased. We were increasingly extracting, transforming, and loading data via FTP, web services, and web APIs outside the firewall, and even migrating data between systems that exclusively existed in the cloud.

Increasingly the data we were migrating was using web APIs, and began employing a more granular approach to how authentication occurred for each "extract and load"--OAuth. With the growth of shadow IT, and the adoption of software as a service (SaaS) solutions, individuals and businesses needed to move documents, media, and other vital data or content between the cloud services which we were increasingly depending on.
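
In its most bare-bones form, this new breed of ETL looks something like the following Python sketch, where both services, their endpoints, and the schemas are hypothetical placeholders, and each side is authorized with its own OAuth token:

    import requests

    SOURCE_TOKEN = "OAUTH-TOKEN-FOR-SOURCE"
    DEST_TOKEN = "OAUTH-TOKEN-FOR-DESTINATION"

    # Extract: pull contacts from the source service
    contacts = requests.get(
        "https://source.example.com/v1/contacts",
        headers={"Authorization": "Bearer " + SOURCE_TOKEN},
    ).json()

    # Transform: reshape each record to match the destination's schema
    payloads = [{"full_name": c["name"], "email_address": c["email"]} for c in contacts]

    # Load: push each record into the destination service
    for p in payloads:
        requests.post(
            "https://destination.example.com/v1/people",
            headers={"Authorization": "Bearer " + DEST_TOKEN},
            json=p,
        )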

APIs were enabling savvy tech users to migrate and sync their data between systems, and startups like IFTTT and Zapier saw the opportunity and jumped in with their new offerings. I do not talk about IFTTT, as they choose to not pay all of this forward by offering an API, and also opt to ignore the transparency that APIs bring to the table--so I will simply refer to Zapier as my example of this new breed of service provider. ;-) 

In addition to evolving beyond FTP and ODBC as the primary channels, and being about the migration of information in the cloud, the other significant characteristic that stood out for me is that this new approach was also paying attention to the individual needs of the stakeholders, owners, and people to whom the migration of information was important. This information was also increasingly being migrated between multiple systems while adhering to the terms of service, as well as the privacy, of each party involved.

When I launched my research back in 2013, I called it reciprocity, which the dictionary defined as:

  • the quality or state of being reciprocal : mutual dependence, action, or influence
  • a mutual exchange of privileges; specifically : a recognition by one of two countries or institutions of the validity of licenses or privileges granted by the other

When I looked in the thesaurus, reciprocity also had a definition of "interchange", with synonyms of cooperation, exchange, mutuality, and reciprocation. Reciprocity is also a synonym of connection, with a definition of "person who aids another in achieving goal", and with synonyms including acquaintance, agent, ally, associate, association, contact, friend, go-between, intermediary, kin, kindred, kinship, mentor, messenger, network, reciprocity, relation, relative, and sponsor.

All of these terms apply to what I was seeing unfold with this new generation of ETL providers. ETL was moving into the clouds, and out from behind the firewall, using the open web, cloud platforms and APIs, and now we have to rethink ETL, and make it accessible to the masses--putting it within reach of the everyday problem owners.

After three years, I have seen this area continue to grow, but the growth in my website traffic to this research was not in alignment with what I saw in other areas. Over time I realized the area had been dubbed integration platform as a service (iPaaS), and while I have resisted using this term for a while, as I wanted to emphasize the human aspect of this, I am now giving in. Reciprocity was the only term I have ever tried to push on the API community, and I have to admit it will be the last term or phrase I ever try to brand in this way.

Sadly iPaaS has become the leading acronym to talk about this evolution, and because IT vendors and analysts couldn't give a shit about the users whose information is being moved around, they have defined it simply as:

Integration Platform as a Service (iPaaS) is a suite of cloud services enabling development, execution and governance of integration flow connecting any combination of on premises and cloud-based processes, services, applications and data within the individual or across multiple organizations.

In classic IT fashion, no emphasis has been placed on the humans in this equation. In the early days of web APIs, startups had focused on what the people who were using their solution needed. Then with each wave of VC investment into the space, with enterprise vendors shifting their focus to this new world, and the industry analyst pundits realizing the value that lies in this new era, we are going back to the old ways of thinking about IT--one that rarely ever focuses on the human aspect of the bits and bytes we are shuffling around.

I am losing this same fight in almost every area of the API space that I keep an eye on. I'm not naive enough to think I can cut through the noise of the space, and truly take on the IT and developer class's obsessive belief in technology, and the blindness that comes from a focus on the money, but I do believe I can at least influence some of the conversations. This is why I'll keep trying to make sure there is reciprocity across the API space, and that iPaaS pays attention to the human aspect of integration, migration, and keeping our increasingly online world in sync.


Zapier and The Excel API

I have been finding quite a few nuggets of wisdom in the recent release of the Microsoft Excel API. This is what I enjoy doing as the API Evangelist: evaluating and gathering any positive or negative activities performed by the leading API players, and crafting blog posts in hopes that other API providers will read them, enabling them to be more successful in their own API operations.

As part of my monitoring, I am also looking for significant trends that may reflect other wider industry opportunities, beyond any individual API. I am always telling API providers to think about integrating iPaaS solutions like Zapier into their operations as early as possible, and I think Microsoft's acknowledgment in their release emphasizes the importance of iPaaS.

Here is the language, directly out of Microsoft's release post:

Zapier lets users easily automate tedious tasks. Zapier recently announced a new Excel integration, powered by Excel REST API, with near-infinite use cases, like simplifying a data collection process. Users can now build zaps where data is automatically added into Excel from other services, like emails and surveys, making Excel the data repository for all your connected services.

I am guessing that the Microsoft Excel API team did their homework, and saw the number of Google Sheets integrations available on Zapier. When it comes to integration platform as a service, being able to take data, content, and other resources and put them into a spreadsheet has to be in the top 10 use case scenarios. Moving the bits and bytes we are creating on the web daily, and making them available via the familiar spreadsheet UI, is increasingly how business gets done on the ground at the average organization.

I guess there are two nuggets here for my API provider audience: 1) iPaaS, and specifically Zapier integration, is growing in importance, and 2) making sure your API is available as a Zapier integration with Excel and Google spreadsheets might be something else you will want to consider too!


The Support Flow Over At The Microsoft Excel API

I am pretty impressed with the casual release of the Microsoft Excel API, which I think is a pretty significant milestone for the world of APIs. One of the subtle elements of their API release that I think is worth noting, is the support flow they provide. Here is the actual text from the release blog post for the Excel API:

"Give us your feedback on the API and documentation through GitHub and Stack Overflow, or make new feature suggestions on UserVoice."

This is the most open, and casual I think I've seen Microsoft ever be. They are allowing the community to suggest edits to their API documentation using Github, encouraging the conversation on their own forum, as well as outside their ecosystem, on Stack Overflow. 

I don't know. Maybe I'm reading too much into it. They just seem like they are getting better with each API they have released, and the Excel API release was much more casual, natural, and less rigid than I'm used to from Microsoft.

I can't speak to the technical design of each endpoint yet, as I haven't played with them enough, but the overall experience of the Microsoft Office Dev Center is surprisingly modern, simple, and intuitive. There are still a lot of places where it feels like a knowledgebase and not an API community, but overall it exceeds what I'm used to from Microsoft when it comes to APIs.

I have to admit, I didn't hold out much hope for Microsoft when it came to catching up on cloud and APIs--something that is changing with each API they release.


Not Having RSS For Your Blog And Just Relying On Tweeting

I regularly come across organizations who have blogs without RSS feeds. Sometimes I will drop people a line and ask if they have one, or let them know it would be very useful to have an RSS feed that is easy to find. This is also a topic I write about regularly to help remind folks that RSS is not dead.

Almost every time I write about the topic I get a handful of people who tell me you don't need RSS anymore, you just tweet out your blog posts--nobody uses RSS readers after Google Reader went away. While it might be true that some folks have abandoned their feed readers, I think they were casual readers in the first place, and serious analysts still very much depend on RSS.

I wish I could help folks understand the power they are giving away to Twitter by thinking like this. I am happy to share a piece of my digital self with Twitter, but as with all platforms I use, I will be considering at every turn the benefit or damage to my own brand. RSS from your blog, at your domain, strengthens your brand and puts you in control of the syndication.

You can still use services like Zapier to take each new RSS entry and automatically tweet it out--you don't have to relinquish control of distribution to Twitter. This is the danger of these platforms: every platform wants you to spend as much time as possible generating content and media on their platform. This is how they make money, off your activity and hard work--please don't give it up for free, or at the cost of your own brand.
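
The Zapier recipe in miniature looks something like this Python sketch, using the feedparser library--the feed URL is a placeholder, and post_tweet() stands in for whatever Twitter client you use:

    import feedparser

    seen = set()    # in practice, persist this between runs

    def post_tweet(text):
        print("tweeting:", text)    # swap in a real Twitter API call here

    feed = feedparser.parse("https://example.com/blog/feed.xml")
    for entry in feed.entries:
        entry_id = entry.get("id", entry.link)
        if entry_id not in seen:
            seen.add(entry_id)
            post_tweet(entry.title + " " + entry.link)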

If you are a professional, and expect to make a living from your business, products, services, and brand--you need to maximize the control, licensing, and distribution of your content, giving up only what you need to. There is no reason to make a deal with Twitter that exclusively gives them full control over your syndication. RSS and APIs should always be your primary content distribution channels, with Twitter, Facebook, LinkedIn, and others following after that.


Not All APIs Are Created Equal So Make Sure You Set Expectations Properly

I've heard of numerous API providers shutting down their API programs after a couple months because they didn't see the number of new users, and integrations they had hoped for. It is so easy for us technologists to shoot for the moon, while also simultaneously shooting ourselves in the foot when it comes to defining where the bar should be for our operations. 

Remember, we are not all Twilio. For most of us our API will never be our primary product, and operating it will be less than a glorious affair--meaning you will never be written up in Techcrunch ;-). We are entering into the really mundane, business as usual phase of the API industry, and things will not be as exciting or sexy as they have been, and we need to make sure our APIs stay real, and reliable amidst all of the tech delusions we have on a regular basis. 

Even if your API is not the next amazing tech sensation, it doesn't mean you can't operate following the lead of the giants like Twilio, Amazon, and Google. You can run a first class shop, even if you don't make the front page of Reddit for the launch, and get all the hacker kids building amazingly cool stuff in the first weeks. To do this you need to set some realistic goals for operations, that aren't based on the often fantasy world of Silicon Valley.

It is likely your company's website wasn't a major hit when it first launched either, and you didn't ditch it because it was underperforming. I'm guessing your goals were probably a little more realistic, and in alignment with your primary business objectives. Your API is no different than your web efforts, it will take time to do it properly, and probably require a few adjustments along the way--do not give up before you have a chance to find success.


Instead of Just Discussing Via Phone You Should Publish Your API Goings On To Your Blog

At any point in time, there are numerous emails in my inbox, LinkedIn messages, and DMs, asking me if I would "just jump on the phone to discuss the latest" about an API. I get a regular stream of these, which makes it pretty impossible for me to actually make time for them all, and if I did make time to talk to each of these API providers, I wouldn't have time to actually write up stories and guides for my site.

I'm sure it can be frustrating for folks who just want to share the latest with me, but I hope you will understand that I am just a one man show, and I have to prioritize. I recommend that you also get better at managing your time, and write up each of the things you wanted to talk to me about as a blog post, and then share out via your API platform's Twitter, LinkedIn, and Facebook (you have all those right?). 

I am not a fan of being the recipient of embargoed or top-secret information, as EVERYTHING I do tends to be very public. If you publish your information as a blog post, it is much more likely that I will see it, and read it, than I will if it is an email. Plus you get the bonus of me being able to share out the link, and the wider API community getting to learn about it as well. It is just way more efficient for you to tell your story publicly than it is for us to jump on the phone--for both of us.

Handling yourself this way will continue to pay dividends, as the chances I will write about something in the future increase if I have the link bookmarked--something that will rarely happen if we talk about it on the phone. This played out in my story on the role API definitions are playing with API integrations, where I linked to the Clearbit, Cronofy, Best Buy, and SparkPost stories on their release of Run in Postman buttons. If they hadn't told the story on their blogs, I would never have had a link to share in that story, and now this story--a bonus of two separate posts on API Evangelist, just because they had the forethought to tell the story of what they did.


Defining The Industrial Programmable Automation Controller (PAC) Strategy Using An API

I was learning more about the Programmable Automation Controller (PAC) API from Opto 22 and found myself intrigued by their usage of the word strategy to describe the role a web API can play when it comes to making the industrial automation process more programmable. I'd say the overall API design is still very rough, and represents the engineer's view of a PAC, but the potential for industrial IoT strategy orchestration is still present.

I'm learning about PAC APIs through the lens of my drone API research, where I'm exploring the role an API can play in a device's strategy creation, as well as its execution. Meaning, with a drone, I can use the API to get at the data from one or many drone flights, use that data, and then use the API again to help me execute on the strategy. When this line of thought is applied in an industrial setting, the potential for an API driven strategy increases pretty dramatically.

A PAC API takes this strategy concept further down the road for me than it did with the drone alone. Each PAC can have its own DNS, and its own API, and the overall industrial process I am building a strategy for might contain many different PACs--allowing me to orchestrate an unlimited number of industrial scenarios. I guess the API surface area for a PAC-enabled industrial workflow expands, contracts, and communicates very differently than it has for a single drone, or even a drone fleet. I will have to take what I've learned from PACs and apply it to drones.

This is the part of my API Evangelist existence I enjoy the most--the cross-pollination between the different sectors I am learning about.


API Definitions Are Slowly Becoming More Important Than Having SDKs

As the debate over whether you need an SDK for your API or not has rumbled on over the last couple of years, API specification formats like OpenAPI Spec, Postman, and API Blueprint have been gaining traction. As this has progressed, I've asked myself several times whether or not API providers even need SDKs anymore. Not just because of the complexities of developing and maintaining them, but because more developers are using web clients like Postman and DHC to evolve their integrations.

Apigee Explorer and Swagger UI documentation demonstrated that many developers needed to play with an API as they were learning about what resources were available, and how to use them. I think the evolution of API clients like Postman, DHC, PAW, and others has shown that this phase of playing, exploration, and non-code integration can go well beyond the initial integration, and should actually be never-ending across stops along the API life cycle.

It is just anecdotal at the moment, but looking at the API providers who have embedded the Get Postman Button on their sites like Clearbit, Cronofy, Best Buy, and SparkPost, it seems to me that having your API definition available for your developers is growing more important than having SDKs. Developers are getting more traction by loading the Postman Collection, and other common specification format into their client, testing, and other tooling, than they are cracking open the language library of their choice.
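
To show why the definition itself carries so much weight, here is a small Python sketch that walks a Swagger (OpenAPI 2.0) file and lists every available call--the same file could just as easily drive documentation, testing, or client generation (the file name is a placeholder):

    import json

    with open("swagger.json") as f:
        spec = json.load(f)

    # Every operation the API exposes, straight from the machine-readable core
    base = spec.get("host", "") + spec.get("basePath", "")
    for path, methods in spec["paths"].items():
        for verb in methods:
            print(verb.upper(), base + path)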

This is something I'll further validate through my research. One critique I have is that I'm seeing the benefits of this approach centered around Postman, which I'm a big fan of, and a user, but which represents just a handful of stops along the API life cycle (client, testing), and not the wider spectrum that OpenAPI Spec and API Blueprint would offer. I have yet to see anyone follow my advice like Postman did and get to work on an embeddable solution for OpenAPI Spec and API Blueprint--giving Postman a significant head start.

API providers need to be openly sharing their API definitions, API service providers need to allow for run-in and import using the common API specification formats, and developers need to be fluent in these formats and their functionality, as they will play an important role in the evolution of the space--one that I think will be more significant than the role SDKs have played, after this all plays out.


Putting The Industrial Into IoT With Programmable Automation Controllers (PAC) APIs

When you hear about the Internet of Things (IoT) you often hear about the hopeful consumer side of things, like with the Nest thermostat, and the next wave of Internet-connected devices that will change our personal worlds forever. Personally, when I think about the concept of Internet-connected devices actually seeing adoption, and getting traction, I think of it being applied in an industrial setting.

The RESTful programmable automation controller (PAC) APIs out of Opto 22 resemble the vision of IoT which I have in my head. APIs make everyday objects used in a variety of industrial processes programmable, and accessible over wired or wireless networks. Making everything from manufacturing to water and energy facilities more automated and efficient, while potentially generating data that can be used to monitor and optimize these industrial workflows--this is IoT.

The dream of home automation just doesn't do it for me. I just don't buy the Jetsons vision of the home, but I can buy into the potential for IoT making the industrial processes which we depend on to make our world operate more efficient. Consumer IoT seems like a bandwagon to me, but proven industrial equipment manufacturers like Opto 22 realizing the potential of web API infrastructure, and baking it into the devices they manufacture, seems like real world IoT business to me.


Introducing Vendorless APIs and Microservices

I'm a big fan of the concept of serverless APIs and microservices, but not so much of the name. I get it, the space needs new concepts to rally around, and I'm the first to admit even the concept of an API is bullshit, but when people say serverless, it always makes me chuckle--especially when people work so hard to sell it (who am I to make fun of that).

When I come across a serverless solution, it is often bundled with the phrase "100% serverless", a description which usually then contains the 2-6 vendor solutions used to deploy said serverless solution. I know there is no use in pointing out there are always servers behind serverless solutions, but can I at least point out the vendor dependencies involved? Are the vendor dependencies better or worse than our dependencies on servers (which never went away)? IDK. WTF. BBQ.

In support of this line of thinking, I want to start promoting the concept of vendorless APIs and microservices. You know, the ones that employ web and open source technology? Then I can label my microservices as vendorless, made from the finest gluten-free, organic HTTP, and free-range web concepts available today!


API Service Composition Baked Into The Cloud With Usage Plans For Amazon API Gateway

Being able to provide different levels of access for a single API has been one of the telltale characteristics of any modern web API. Savvy API providers know they shouldn't just make their valuable API resources publicly available for anyone to use--they craft a logical set of plans that are in alignment with their wider business objectives, outlining how any developer can put an API to use. This is the essential business of APIs.

Mashery was the first API management provider to standardize this approach to API access, something further evolved by 3Scale, Apigee, and others. Amazon's release of their API gateway wove API management into the fabric of what we call the cloud, and the introduction of usage plans, does the same for API service composition. Making the identification, metering, limiting, and monetization of resources made available via APIs, a default function of operations in the cloud.

Being able to take any digital asset, whether it is data, content, or an algorithmic resource, and make available via a URL, and control who has access, while also metering their usage, and charging different rates for this usage, is where the business of APIs rubber meets the road. API service composition lets you dial in exactly the right levels of access, and usage, required to fulfill a business contract, delivering precisely the service that customers are wanting for their web, mobile, and device apps.
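
For a sense of what this looks like in practice, here is a sketch of composing a basic access tier using boto3--the IDs are placeholders, and while create_usage_plan and create_usage_plan_key are part of the API Gateway interface as I understand it, treat the details as approximate:

    import boto3

    apigateway = boto3.client("apigateway")

    # Define a plan: 5 requests/second with bursts of 10, capped at 10,000 calls a month
    plan = apigateway.create_usage_plan(
        name="starter",
        description="Free tier",
        apiStages=[{"apiId": "YOUR-API-ID", "stage": "prod"}],
        throttle={"rateLimit": 5.0, "burstLimit": 10},
        quota={"limit": 10000, "period": "MONTH"},
    )

    # Associate an existing API key with the plan to grant a consumer this tier
    apigateway.create_usage_plan_key(
        usagePlanId=plan["id"],
        keyId="YOUR-API-KEY-ID",
        keyType="API_KEY",
    )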

It's taken a decade for this key element of doing business on the web to mature beyond just a handful of vendors, then into an assortment of open source solutions, and now something that is just baked into what we know as the cloud--allowing us to plan API access consistently and universally across all the digital resources we are increasingly storing and operating in the cloud.


Constant Contact Provides Good Blueprint For An API Getting Started Page

I was going through the getting started pages for the APIs that I keep an eye on, pulling together an outline of what I'd consider to be some of the best elements across all the API providers. Then I came across the getting started page from Constant Contact, and I'd say they win for having the clearest, most concise API getting started page of them all.

Constant Contact's approach to their getting started page has given me a good start for my outline, including essential links to set up your account, create a new application, and get your keys. Constant Contact also provides required documentation, an API tester, and supporting code libraries. Additionally, they encourage you to certify your integration as a partner and get published in their integration marketplace--providing a pretty well thought out getting started page in my opinion.

I am going to take what I've learned here, and craft a sample getting started page for my minimum viable API portal definition. I'm sure some of you are snickering at me paying attention to API operations at this level of detail, but I wouldn't underestimate the ability of a well crafted API getting started page to reduce friction with developers. I want to give API providers a simple template they can follow when publishing a getting started page.

With the next release of my API portal, I'll have a good example of a getting started page, which will be heavily influenced by Constant Contact.


The Release Of The Microsoft Excel API Is A Pretty Significant Milestone

I am all about marking down the important milestones that help define the API sector. It is what I've been working to define as my history of web APIs for the last six years. An API has to make its mark in a pretty big way before I'll add it as an official milestone in my version of the last 17 years of web APIs.

I'll give it some time, but I'm thinking the recently released Excel API from Microsoft is going to get added pretty quickly. I think the next five years of API evolution is going to be much less exciting than the previous five years, with numerous small businesses publishing their valuable data, content, and complex algorithms via the Excel API.

This stage of API deployment won't be filled with API heroes like Amazon and Twilio, but will still make a seismic shift in the landscape just through the pure volume of data that is made available--we won't be able to keep up. I don't think all the DB-to-API connectors collectively add up to the number of spreadsheet-driven data, content, algorithmic, and visualization APIs that will be driving business decisions out there in the future.

I am not a diehard fan of the spreadsheet, as I'm a database guy, but in my 25+ year career, there is no single more important business tool out there than the spreadsheet. Giving Excel native API capabilities by default will dramatically increase the number of high-value APIs available. It will be something that I cannot ignore, and I will have to do some playing around with the API to see what is possible.