29 Sep 2015
This is a review of the communication API platform CallFire, crafting a snapshot of platform operations from an external viewpoint, and providing the CallFire platform team with a fresh take on their API from the outside-in. The criteria applied in this review are gathered from looking at API operations across thousands of API providers, and aggregating best practices into a single, distilled review process.
This review has been commissioned by CallFire, but as anyone who's paid me for a review knows, money doesn't equal a favorable review—I tell it like I see it. My objective is to help CallFire see their platform through a different lens, as developers might see their platform. It can be hard to see the forest for the trees, and I find the API review is a great way to help API providers see the bigger picture.
I prefer to share my API reviews in a narrative format, walking through each of the important aspects of API management, telling the story of what I see, and what I don't see. Here is the story of what I found while reviewing the communication API platform for CallFire.
CallFire makes for an easier review than some API operations, because the API is the product. As soon as you land on the homepage of the website, you begin your API journey as a potential API consumer. The first thing you read is “Over 2 Billion Messages Delivered”, so you immediately understand what CallFire does, and the next thing that grabs your eye is “Signup For Free”—way to not miss a beat, CallFire.
Next you see five simple icons, with simple text, breaking down what CallFire does: Text Messaging, Call Tracking, Video Broadcast, Cloud Call Center, IVR. Within the first five seconds you fully understand what is being offered, and are given the opportunity to sign up. If that is not enough, you are also told the reasons why: Engage Your Customers, Save Valuable Time, Increase Revenue.
After you look at thousands of APIs, nothing is more frustrating than having to figure out what an API does. CallFire gives me what I need, in five seconds or less, without clicking or scrolling. This is the way all APIs should be, if not on the homepage of the website, then on the landing page of the API developer portal. The main page of the CallFire website is well designed, and organized in a simple, and robust way, giving you one-click access to everything you need to get going with the platform--no other feedback required.
This is the part of the review where I dive into the actual design of the API, and provide some feedback on the technical endpoints of the APIs themselves. CallFire is unique because it has a REST and a SOAP version of the API available, which I think is important in today's business climate, where APIs are targeting open developers, as well as those within the enterprise.
The CallFire API is very robust, with a wide range of endpoints / methods for the most basic text and call features, all the way up to campaign, subscription, and contact management. You can tell the system is well thought out, providing a full suite of communication resources for all types of developers.
Once you dig into the REST API you begin to see quite a bit of SOAP residue, and while the API has a well thought out list of endpoints, many elements of the parameters, requests, and responses feel SOAPy, including the XML responses. There is also a lack of consistent response codes and a defined data model, giving the REST API an unfinished, empty bottom feeling.
Overall I give the API a solid B, in that it is a very robust stack, but I'd say it needs some hard use and integrations before some of the rough edges are hammered off, parameters become more intuitive, and request and response structures normalize. Much of this just comes with usage, and requires getting closer to real world use cases and end users, before it becomes more of an experience based design vs. the resource based design it currently is.
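To illustrate what I mean by SOAP residue, here is a small sketch of the kind of namespace-heavy XML response that tends to leak out of a SOAP-rooted service, reshaped into the plain JSON a REST consumer expects. The endpoint shape, field names, and namespace are my own illustrative assumptions, not CallFire's actual responses.

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical SOAP-flavored XML response, of the kind you often
# see leaking into a REST API that grew out of a SOAP service.
soapy_xml = """
<r:ResourceList xmlns:r="http://example.com/callfire/api/data">
    <r:Call id="123">
        <r:FromNumber>15551230001</r:FromNumber>
        <r:ToNumber>15551230002</r:ToNumber>
        <r:State>FINISHED</r:State>
    </r:Call>
</r:ResourceList>
"""

ns = {"r": "http://example.com/callfire/api/data"}
root = ET.fromstring(soapy_xml)
call = root.find("r:Call", ns)

# The same resource, reshaped into the flat JSON a REST consumer expects,
# with no namespaces or envelope to dig through.
rest_style = {
    "id": int(call.get("id")),
    "from": call.find("r:FromNumber", ns).text,
    "to": call.find("r:ToNumber", ns).text,
    "state": call.find("r:State", ns).text,
}

print(json.dumps(rest_style, indent=2))
```

It is the difference between a developer writing one line of dictionary access, versus wrangling namespaces and envelopes on every call.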
I could easily go through the entire Swagger definition for CallFire and make recommendations on naming conventions, and help craft the resource definitions for the underlying data models, but this is best worked out with the community, iterating, and communicating with developers, and learning more about what they truly need. Think of it as kiln firing of the API, through developer execution, and robust platform feedback loops.
On-boarding with the CallFire API was frictionless. I went one click from the home page, authenticated with my Google Account, and was immediately dropped into my account dashboard, with a helpful intro screen showing me where things are. I easily found the area in my account to add an application, and get my API keys, then stumbled into the overview of how to activate your account as well—account management was intuitive.
The intuitive and informative CallFire home page made the API easy to find, and with frictionless account signup, and standard API app management, I was ready to make my first call on the API within 10 minutes. The only thing I would consider adding as part of the process is an option for signup and login using my Github credentials, in addition to Facebook, and Google.
On-boarding with an API is often the most frustrating part of API integration, and it wasn't something I worried about at all with CallFire. The process was intuitive, smooth, and didn't leave me trying to understand what the API does, and how I am supposed to make it work. Solid A on the on-boarding process for the CallFire API.
Documentation is one of the most critical aspects of API integration, making or breaking many integration efforts by developers. CallFire has double duty, in that it needs to provide documentation for the REST and SOAP versions. Something CallFire manages to deliver with no problem, providing clean, easy to follow documentation for both APIs they offer.
The SOAP API documentation provides a simple breakdown of operations and methods, with easy to follow descriptions for everything. Ultimately the SDKs do most of the heavy lifting here, but the SOAP docs provide a nice overview of the CallFire platform.
The REST API for CallFire is defined using the machine readable API definition format Swagger, and uses Swagger UI to generate documentation, making learning about the API more interactive. Swagger provides a machine readable overview of the CallFire API, and is an approach to delivering API documentation that keeps pace with modern approaches.
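For anyone unfamiliar with the format, here is a minimal sketch of what a Swagger 2.0 definition for a single text-messaging endpoint might look like--the path, model, and fields here are my own assumptions for illustration, not CallFire's actual definition. Note the explicit data model and response codes, which are exactly the details I'd like to see more of:

```python
import json

# A minimal, hypothetical Swagger 2.0 definition for a single
# text-messaging endpoint -- illustrative only, not CallFire's.
swagger = {
    "swagger": "2.0",
    "info": {"title": "Example Messaging API", "version": "1.0"},
    "basePath": "/api/rest",
    "paths": {
        "/texts": {
            "post": {
                "summary": "Send a text message",
                "parameters": [{
                    "name": "body",
                    "in": "body",
                    "required": True,
                    "schema": {"$ref": "#/definitions/Text"},
                }],
                # Explicitly defined response codes, so consumers know
                # what to expect from both success and failure.
                "responses": {
                    "201": {"description": "Text created"},
                    "400": {"description": "Invalid request"},
                },
            }
        }
    },
    # An explicit underlying data model -- the kind of detail that
    # makes Swagger UI documentation genuinely useful.
    "definitions": {
        "Text": {
            "type": "object",
            "required": ["to", "message"],
            "properties": {
                "to": {"type": "string"},
                "message": {"type": "string"},
            },
        }
    },
}

print(json.dumps(swagger, indent=2))
```

Because the definition is machine readable, the same JSON drives the interactive docs, and can drive SDK generation and other tooling down the road.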
I do not have much feedback for the documentation side of CallFire. I'd like to see more information about the underlying data model described in the Swagger definition, as well as more detail about the response codes, but I give the platform documentation a solid B for being simple, clean, and complete--the API just needs some hardening, and the documentation will improve.
On-boarding with the CallFire API is frictionless, and adding an app, and finding your API keys are intuitive enough. The platform also provides a nice overview on how to enable the API on your CallFire account, but ultimately the topic of authentication is neglected.
Authentication for SOAP interactions, and REST for that matter, is abstracted away by each of the SDKs. However, one of the elements of RESTful APIs is that authentication should be clearly defined as part of the documentation. I suggest adding a page, or a section on an existing page, dedicated to authentication, explaining the BasicAuth used to secure the CallFire API. For any experienced API consumer it isn't difficult to navigate, but the app manager, with its login and password, gives the appearance that authentication may be via an app key in a header or parameter, not BasicAuth--a dedicated authentication overview page would help clear this up.
As part of an authentication review, I do not usually advocate for a specific authentication approach, when the choices are BasicAuth, or using the app id and keys in the header or parameters. The best option is to pick one, be consistent, and explain it clearly, on a page that stands out. Overall the authentication for CallFire is intuitive, it just needs a little bit of information to make things 100% clear when you first find yourself making that first API call.
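A dedicated authentication page doesn't need much; BasicAuth is just the app login and password, colon-joined and base64-encoded, sent in the Authorization header. Here is a minimal sketch--the credentials and endpoint path are placeholders, not CallFire's documented values:

```python
import base64
from urllib.request import Request

# Placeholder credentials -- in CallFire's case, the login and
# password generated in the account's app manager.
api_login = "my-app-login"
api_password = "my-app-password"

# BasicAuth: base64-encode "login:password" and send it in the
# Authorization header with the "Basic" scheme.
token = base64.b64encode(f"{api_login}:{api_password}".encode()).decode()

# The endpoint path here is illustrative, not the documented one.
request = Request("https://www.callfire.com/api/rest/texts")
request.add_header("Authorization", f"Basic {token}")

print(request.get_header("Authorization"))
```

Spelling this out on one clearly labeled page removes any ambiguity about whether the credentials go in a header, a parameter, or a BasicAuth challenge.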
UPDATE: Since I wrote this review, and the time of publishing, CallFire has updated their portal to include a well formed authentication overview for the platform.
After documentation, code samples, libraries, and SDKs are key to a painless API integration. For the CallFire API, there are only two SDKs available currently, for the PHP, and .NET platforms. It is common that platforms who are just getting going only have a handful of SDKs in specific languages, which is forgivable, but is also a sign of platform immaturity (aka lots of work still to be done).
Another aspect of SDK design for CallFire, that I'd like to bring up, is the cohabitation of REST and SOAP in a single SDK. I'm not sure this type of cross pollination is ideal for all integrations. Maybe it is just my architectural style, but I like seeing each SDK have as small a footprint as possible, and meet the needs of developers without any extraneous bloat.
Moving beyond SDKs, into what I call Platform Development Kits (PDK), CallFire does well in providing two distinct platform plugins for WordPress and Drupal. I recommend bringing these PDKs to the surface, and showcasing them on a full SDK and PDK showcase page—showing what is possible. Maybe even consider the next step: what is the 3rd PDK that could be developed? SalesForce? Heroku?
The usage of Github by CallFire is another important signal, showing the platform is progressive, and something that developers can depend on. I recommend further bringing Github into the site, linking to accounts, providing direct links to SDK and PDK repositories from an official page, and add Github authentication for developers to be able to create and manage their accounts. Github isn't just about code, it is a potentially important social layer to the CallFire API ecosystem.
There really is no evidence of any mobile SDKs, or information for mobile developers available on the CallFire platform. It is common to find entire sections dedicated to mobile developers, or at least links to mobile platform specific code libraries. I recommend establishing a mobile focused section of the platform, and invest the resources necessary to help developers build iOS, Android, and Windows mobile applications using CallFire.
CallFire is doing well on the support front, providing building blocks for both direct, and self-service support. I like to see a mix of support services that developers can find on their own, getting the help they need 24/7, but they also need to be able to get direct support when they get stuck.
When it comes to direct support, CallFire is rocking it, with a support email, live chat, phone number, contact form, and ticketing system tied to your account messaging area. The only additional things I could recommend CallFire offering is paid support plans, allowing developers to pay for one-on-one support via chat, online hangout, or other means.
With self-service support, there is the CallFire FAQ, which I'd call more of a knowledge base than a simple FAQ, providing a wealth of knowledge about the platform. The only common element I see missing is more of a community element, with a forum, or usage of Stack Overflow to engage with the wider developer community. The current FAQ is very robust, and with the integrated ticketing system, the potential is great, but it is all missing that community piece.
Overall, CallFire support is as robust as you'd expect from any API platform. When you combine this with the social media presence the platform has, which I'll cover as part of the communication strategy, the platform has all of its bases covered. A+ on support effort.
When it comes to my review criteria for communications, other than a newsletter, CallFire nailed every one of them. The platform has all of the expected social media platforms, has an active blog with RSS feed, and is very accessible with email, phone, and chat. All of this sends the right signals to the community, and potential API consumers, that someone is home. I have nothing to contribute when it comes to communications; as long as all channels are kept active, CallFire is doing everything it can in my mind.
As platform providers, we are asking developers to depend on us, and integrate our resources into their applications, and businesses. That is asking a lot, and we need to provide as much information as possible about what the future holds, to help build trust with developers. There are a handful of proven ways of doing this, established by leading API platforms.
- Roadmap - API roadmaps are usually a simple, bulleted list, derived from the APIs own internal roadmap, showing what the future holds for the platform. Transparency around an APIs roadmap is a tough balance, since you don't want to give away too much, alerting your competitors, but your developer ecosystem needs to know what is next.
- Status Dashboard - Status dashboards are a common way for API platforms to communicate the availability of an API, but also show the track record of a platform, helping developers understand the history of an API they are about to integrate with. There are several simple services that help API providers do this, without investment in new tools and systems.
- Change Log - Knowing the past is a big part of understanding what is in store for the future. A change log should work in sync with the API roadmap building block, but provide much more detailed information about changes that have occurred with an API. Developers will not always pay attention to announced updates, but can use a change log as a guide for deciding what changes they need to make in their applications and how they will use an API.
Sharing the change history of a platform, a roadmap to the future, and the status of API operations at the moment go a long way in helping build trust with developers. Transparency in the development of any platform is essential in helping developers feel comfortable that a platform will be around to support their needs, and is worthy of their time.
When reviewing APIs, the overall business model is usually one of the most incomplete aspects of operations, in my experience. This is ok, as many platforms are still figuring this out, however this is not the case with CallFire. The business model for the platform isn't just well defined, it provides me with an example to use when helping other API providers visualize what is possible.
The pricing page for CallFire is clean, well thought out, and provides sensible tiers of operation, with clear units of measurement, letting me know what I get for each level. I can easily upgrade the access tier directly from my account settings, and I can get volume pricing if needed. This is how APIs should work, allowing me to easily calculate what I'll need, and figure out which tier I will be operating within, complete with a self-service option for scaling as I need, paying for what I use, as I go.
The billing management and credit system for CallFire is superior to most of even the most thought out API billing and pricing models. It is clean, well thought out, and makes sense from a user perspective—which is the most important aspect. I'll be using the CallFire credit system as a reference when I talk about how platforms should build tooling that supports the underlying API business model.
I can't articulate enough how well done the business model, pricing, supporting billing, and the other business elements of the CallFire API are.
When it comes to available support resources, this is another area where CallFire does very well. The platform has heavily invested in case studies, videos, webinars, and a tour of the platform. They even have a communications and marketing glossary developers can use to get up to speed. CallFire does a good job of providing valuable resources to help developers quickly understand all aspects of platform operations.
A couple of areas where I could provide suggestions for improvement: more industry level white papers, and when the evangelism side of things kicks in, maybe consider posting slide decks from events CallFire presents at, as well as a calendar of interesting events. These things will happen I'm sure once an API evangelism strategy is kicked into full gear, but for now just keep doing more of the same--providing lots of rich resources for devs.
Research & Development
I'd file R&D in the same category as mobile: non-existent. API ecosystems are essentially external R&D labs for companies, and general operations are about exploring ideas of what can be built with an API, but it helps to have some elements available to stimulate overall R&D via the platform.
Some of these elements are:
- Idea Showcase - A place the community can share ideas of what could be built with the CallFire platform.
- Labs Environment - A workbench showing what CallFire is working on when it comes to their own integration.
- Opportunities - Available opportunities to build things like SDKs, PDKs, or specific projects.
These are just three things that help stimulate the innovation around an API. Sometimes developers just need something to spark the imagination, or possibly see an existing labs project to help them see something in their own work. These rich R&D environments can provide a great opportunity to help meet the needs of CallFire, and its partners.
A couple of items I'd recommend also considering, based upon what I've seen on other platforms:
- Code License - The PHP SDK has an MIT license, but the .NET SDK didn't have anything. A centralized code licensing page could help as well.
- API License - A license for the API itself, applied to the REST API interface that is defined by Swagger, using API Commons format.
- Service Level Agreement (SLA) - Provide a service level agreement for API consumers to take advantage of, and understand service level commitments.
- Branding - There are no branding or style guidelines with support resources like logos, etc—missed opportunity for spreading word, and steering developers in the right direction.
Just a couple of things to think about. All of these would go a long way in building trust with developers, and the branding thing is a huge missed opportunity in my opinion. When you bundle these with the TOS, privacy, and compliance information already provided by the platform, it would round off the legal department of the CallFire API nicely.
Embeddable tooling is another area that is non-existent for the CallFire API. There are no embeddable tools like widgets, buttons, etc that allow the average end-user, and developer to put the API to use in web pages, and applications. I'm not sure what an embeddable suite of tools would look like for CallFire, that would need to be a separate brainstorming process.
When it comes to communication platforms, especially ones involving media, and deep social interactions, embeddable tools are a proven way to grow a platform, expand the network effect, and potentially bring in new developers. I recommend including an embeddable section on the site, with a handful of embeddable tooling to complement the SDK and PDK resources already available.
One area I consider when looking through API operations is the environment itself. By default many APIs are live, ready for production use, but increasingly platforms are employing alternate environments for development, QA, and potentially variant product environments.
- Sandbox - With the sensitive information available via many APIs, providing developers a sandbox environment to develop and test their code might be a wise idea. Sandbox environments will increase the overall cost of an API deployment, but can reduce headaches for developers and can significantly reduce support overhead. Consider the value of a sandbox when planning an API.
- Production - When planning an API, consider whether all deployments need to have access to live data in real-time, or whether developers should be required to request separate production access for their API applications. In line with the sandbox building block, a separate API production environment can make for a much healthier API ecosystem.
- Simulator - Providing an environment where developers can find existing profiles, templates, or other collections of data, as well as sequences for simulating a particular experience via an API platform. While this is emerging as a critical building block for Internet of Things APIs, it is something other API providers have been doing to help onboard new users.
- Templates - Predefined templates of operation. When a new environment is set up, either sandbox, production, or simulator, it can be pre-populated with data and other configuration, making it more useful to developers. These templates can be used throughout the API lifecycle from development, QA, all the way to simulation.
This approach to delivering an environment for the CallFire API is not essential, but I could see it providing some interesting scenarios for communication campaigns, and the deployment of messaging infrastructure in a containerized, Internet of Things rich environment. Deploying CallFire communication infrastructure should be as flexible as possible to support the next generation of Internet enabled communication, both in the cloud and on-premise.
An often overlooked aspect of API operations is the tools provided to API consumers. CallFire is in a fortunate position as the API is core to their product, and the API integration is an extension of a primary CallFire user account. The account area for CallFire is well done, clean, and gives users, and those who choose to be API consumers, quite a bit of control over their communication infrastructure.
CallFire nailed almost every API account area I like to see in any API platform:
- Account Dashboard - The dashboard for the CallFire account is well done, and informative.
- Account Settings - CallFire provides a high level of control over account settings.
- Reset Password - Resetting passwords is important, and something I like to highlight separately.
- Applications - The app management for CallFire is on par with the rest of the industry.
- Service Tiers - The ability to change service tiers, and scale, is pretty significant.
- Messaging - An important part of the communication and support strategy of the platform.
- Billing - Essential to the economy portion of platform operations, well executed.
There are a couple of areas I'd like to see, to round off developer account operations:
- Github Authentication - It would fit nicely with Facebook, and Google Auth--I prefer authenticating for APIs with my Github.
- Delete Account - Maybe it is in there, but I couldn't find it. The ability to delete an account is important in my book.
- Usage Logs & Analytics - I'd like to see application specific analytics on the dashboard, showing usage per app.
- Account API - Allow API access to all account operations, allowing access to account settings, usage limits, billing, messaging and other areas.
The CallFire account management for users and developers is much more robust than I see in many of the APIs I review. Like I said before, the monetization portion is something to be showcased, and all the most important aspects of account management for API operations are present. It wouldn't take much to round things off: a couple more features, some more analytics, and an account management API would really take things to the next level.
I always enjoy it when I find consistent design and function across API operations. This is what I found with CallFire. The API isn't an afterthought like on other platforms; it is their product, and the site design, messaging, and content are consistent across the platform.
The API design is consistent, and the supporting documentation is as well. The only thing I'd add is that design patterns across the SOAP and REST APIs should be less consistent with each other, and stay true to their own design constraints. The details of the REST API could be tightened, to be more consistent in how parameters are formed, and how response formats and error codes are presented.
Usually when reviewing APIs, I look for fractures between API operations, like clunky UI between website sections, or incomplete documentation, often created by disparate teams. This doesn't exist with CallFire, and while there are many details that could be cleaned up, the consistency is all there.
This is a term thrown around a lot in the space, and very seldom do sites live up to it. There are many things that contribute to whether or not an API is truly open. CallFire delivers on all of the important areas, making open a word I'd apply to CallFire.
One of the things I think contributes heavily to the openness of the CallFire platform is the business model. The monetization strategy is well formed, with pricing and service tiers well defined. You know what things cost, and how to get what you need. This type of approach eliminates the need for other extraneous rate limits, or restrictions—this type of balance is important to truly achieving openness.
After this review I'd call CallFire an open API, but only time will tell if the platform is also stable, support channels are truly supportive, and other aspects of open that only CallFire can deliver on hold up. For right now I consider them open for business, and open for developers, but ultimately whether or not CallFire is willing to share this review will put the exclamation point on the platform openness definition, won't it! ;-)
The usual footprint you'd see when an API platform has an active evangelism program doesn't exist for CallFire, but that is part of the motivation of this review. We are looking to take a snapshot of where the platform is at, in hopes of providing much needed input for the roadmap, as well as establish a version 1.0 evangelism strategy--we will revisit the evangelism portion of this review in a couple months.
In short, CallFire passes my review. There are several key areas, like mobile and roadmap communication, that are missing from the platform entirely, but in other areas CallFire nails almost every one of my review criteria. The API is robust, the documentation is complete, and they provide all the essential support building blocks.
One of the things that really stands out for me is the CallFire business model, something that I think really cuts through the BS of many APIs I look at. CallFire has a clear business model, and the tools to manage your API usage. There is no grey area with the business model for CallFire, which is something I just don't see a lot of.
I'd say my biggest concern with the platform is the lack of diverse code resources. I can't tell if they are just getting going, or maybe a lack of developer skills is slowing the diversity of available coding resources--I am just not sure. My guess is there is a lack of diverse developer skills on staff, which explains the lack of mobile SDKs, and the SOAP residue on the REST API. My advice is to invest in the developer resources necessary to load the platform up with a wide variety of coding resources that developers can put to work in their projects.
Beyond the code resources, it is really just a bunch of smaller items that would bring the platform into better form. CallFire definitely reflects everything I'm looking for in an API platform, and is something I've included in the top APIs I track as part of my API Stack. Additionally, I've gathered a couple of other stories while doing this review, including the overall monetization strategy, the notification system under account settings, and their usage of Swagger—which is always another good sign of a healthy platform, and a positive review.
Lots going on with the CallFire platform, I recommend taking a look at what they are up to.
This was a paid review of the CallFire platform. If you'd like to schedule a review of your platform, please contact me, and we'll see if I can make time. A paid review does not equal a good review; it is my goal to give as critical and constructive feedback as I can to help API providers enhance their roadmap and better serve their consumers.
26 Sep 2015
I have a whole bunch of APIs I want to deploy. There is a queue of APIs that I will never get to, but I can't help myself, and when I am tired of watching what everyone else is doing, and want to get busy actually building things, I crack open my queue of ideas. This weekend I launched four new APIs, which will be in the service of some very different objectives that I have.
- Low Hanging Fruit - Identifying the potential CSV, XLS, XML, and table resources that exist in any domain, and publish them to Github as a list for deploying as APIs - aka the low hanging fruit.
- TSA.Report - A simple form for submitting your thoughts as you are going through the airport line.
- APIs.How - The URL shortener for the API Evangelist and Kin Lane network--getting off Bitly.
- Magnetic Jargon - A dictionary creation API, allowing the building of word lists to use for a magnetic fridge web and mobile app I am building.
Each of these APIs has a specific purpose. Low Hanging Fruit and APIs.How are both very back-end system tools, whereas TSA.Report and Magnetic Jargon are meant to drive simple web and mobile applications. The purpose of these applications is not the point of this story; the minimum viable existence of these four new APIs is what I want to talk about.
These four APIs are far from complete. I anticipate that TSA.Report and Magnetic Jargon will evolve based upon the apps that are developed on them, but Low Hanging Fruit and APIs.How are core to how I do business, and will continue to evolve as far as I can afford to scale them (i.e. how many sites and links I can index).
My goal this weekend was to just get the APIs up and running. Their overall relevance and productivity in my world will be decided by their roadmap. I just needed to establish them as a real project, and get a minimum viable definition up and running. Here is what defines a minimum viable API in my world right now:
- API - One or many endpoints that serve data, content, or other resources via one of my domains.
- Swagger - A JSON definition of the surface area, and underlying data model for any API I provide.
- API Blueprint - A markdown definition of the surface area, and underlying data model for any API I provide.
- Postman - A Postman Collection allowing anyone to quickly deploy an API in a Postman Client.
- API Science Monitor - At least one monitor of one of the endpoints, acknowledging it is up and running.
- APIs.json - Providing an index of each project, giving me machine readable access to each API, as well as its operations.
- Github Repo - A repository to house the server code (privately), but also publish a developer portal with API definitions available via a known location.
This is the current minimum viable definition of an API in my world. I will be adding more endpoints to each of these APIs, as well as other building blocks to support their operations, but this represents the start for each of these projects.
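The APIs.json building block above is what ties the rest together. Here is a stripped-down sketch of what one of these project indexes might look like, with placeholder names and URLs standing in for wherever the actual definitions live:

```python
import json

# A minimal, hypothetical APIs.json index for one project -- every
# name and URL here is a placeholder, not a real published index.
apis_json = {
    "name": "Example Project",
    "description": "A minimum viable API definition index.",
    "url": "http://example.com/apis.json",
    "apis": [{
        "name": "Example API",
        "baseURL": "http://api.example.com",
        # Machine readable pointers to the other building blocks:
        # Swagger, API Blueprint, and the Postman Collection.
        "properties": [
            {"type": "Swagger", "url": "http://example.com/swagger.json"},
            {"type": "API Blueprint", "url": "http://example.com/api.md"},
            {"type": "PostmanCollection", "url": "http://example.com/postman.json"},
        ],
    }],
}

print(json.dumps(apis_json, indent=2))
```

Published at a known location in each Github repo, an index like this gives me machine readable access to every API, and everything needed to operate it.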
Honestly I do not know if each of these APIs will survive. They will each have their own roadmaps, driven based upon how much I evangelize, and ultimately attract interest from consumers. Each API will have its own monetization strategy, something I will work on sharing via the API definitions that I've published in their API portals.
Two of these APIs are core to my operations as API Evangelist, while two of them are just really for fun, and are in support of simple micro apps I wish to deploy. Low Hanging Fruit will depend on how much funding I can get for each domain index, and APIs.How will thrive based upon the growth of my own writing and storytelling. The other two will be driven by advertising revenue, and I do not expect anyone to actually build anything else on the APIs; they only exist because I am API-first in everything I do.
Now that I have a minimum viable existence for all four of these APIs, I can move into the next steps for each project, while also addressing other items on my task list--at least I know these projects are up and running, and discoverable when I get time to get back to them.
See The Full Blog Post
24 Sep 2015
I was gathering my thoughts today around how API management solutions can better work together, in response to an industry discussion going on between several creators of open source tooling. The Github thread is focused on proxies, and how they can work more closely together to facilitate interoperability in this potentially federated layer of the API economy, but as I do, I wanted to step back and look at the bigger picture before I responded.
As I was gathering my thoughts, I also had an interesting conversation with the creator of API Garage, in which one of the topics was how we encourage API service providers to work closer together, but this time from the perspective of an API client workspace. This made me think that the thoughts I was gathering about how open source proxies can better work together should be universally applied to every step along the API lifecycle.
It should be a no-brainer for API service providers--have an API! One of the best examples of this is 3Scale API management--they have a very robust API that represents every aspect of the API management layer. Other service providers like API Science, and APIMATIC, who serve other stops along the API life-cycle, also have their own APIs. If you want your API tooling to work with other API tooling, have an API--think about Docker, and their API interface, and make your tools behave this way.
Provide openly licensed tooling around any service you provide. Make your tooling as modular as you can, and apply open source licenses wherever it makes sense. Open licenses facilitate people working together, and break down silos. This is obvious to the folks on the API proxy thread I'm referencing, but will not be as obvious to other service providers looking to break into the market.
Speak common API definition formats like Swagger, API Blueprint, and Postman Collections, and provide indexes using APIs.json. If consumers can import and export in their preferred definition format, they will be able to get up and running much quicker, sharing patterns between the various proprietary services and the open tooling they are putting to work. There are a lot of opportunities to partner around common API definitions for API deployment and management, which would open up a potentially new world of API services that aggregate, integrate, and sync between common API platforms throughout the life-cycle.
Developers never want their tooling to be a silo. Allow any tooling or service to be extended, from design, to management, to client, using common approaches to connectors, plugins, etc. Make plugins API-first, so that other API service providers can easily take their own APIs, craft a plugin, and quickly bring their value to another platform's ecosystem. Plugins are the doorway to the interoperability that open APIs, source code, and definitions bring to the table for platform providers.
Open APIs, source code, definitions, and plugins all set the stage for ever deeper levels of partnership. APIs are the pipes, with open source code and definitions providing the grease, and if all API design, deployment, management, monitoring, testing, performance, security, virtualization, and discovery providers allow for plugins, strategic partnerships can occur organically.
Does all of this sound familiar? It is exactly what the API space is telling the rest of the business world, so I can't help but see the irony in giving API service and tooling providers this same advice. If you want more partnerships to happen, expose all your resources as APIs, provide open tooling and definitions, allow other companies to plug their features into your solutions, and a whole new world of business development will emerge.
With the mainstream growth we are seeing in the API space in 2015, there are some pretty significant opportunities for partnership between API service providers right now--that is, if we follow the same API-first approach, currently being recommended to API providers.
See The Full Blog Post
22 Sep 2015
My own API management system allows me to import Postman collections, HAR files, Charles Proxy XML files, and Swagger version 1.2, but when it comes to output, it only speaks Swagger 2.0. I've been wanting to create a tool for outputting my definitions as API Blueprint for some time now, but just haven't had the time to do the work.
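To give a sense of what this kind of translation involves, here is a simplified sketch of converting a single Postman collection request into a Swagger 2.0 path entry--the field names follow the two formats, but this is a toy illustration, not my actual tooling:

```python
from urllib.parse import urlparse

def postman_item_to_swagger_path(item):
    """Convert one Postman collection request into a Swagger 2.0 path entry."""
    request = item["request"]
    path = urlparse(request["url"]).path          # keep just the resource path
    method = request["method"].lower()            # Swagger methods are lowercase
    operation = {
        "summary": item.get("name", ""),
        "responses": {"200": {"description": "Successful response"}},
    }
    return {path: {method: operation}}

# A minimal Postman-style request entry
item = {
    "name": "List blog posts",
    "request": {"method": "GET", "url": "http://api.example.com/blog/posts"},
}
print(postman_item_to_swagger_path(item))
```

The real work, of course, is in mapping parameters, headers, bodies, and data models between formats, which is exactly why a dedicated translation service is so valuable.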
I have been secretly hoping someone would build a good quality solution, so I wouldn't have to do this work myself. Now I have API Transformer, an API definition translation platform, developed by the APIMATIC team. Using API Transformer you can upload or pull API definitions in the following formats:
- API Blueprint
- Swagger 1.0 - 1.2
- Swagger 2.0 JSON
- Swagger 2.0 YAML
- WADL - W3C 2009
- Google Discovery
- RAML 0.8
- I/O Docs - Mashery
- APIMATIC Format
Then you can output API definitions in the following formats:
- API Blueprint
- Swagger 1.2
- Swagger 2.0 JSON
- Swagger 2.0 YAML
- WADL - W3C 2009
- APIMATIC Format
With API Transformer, they are offering a pretty valuable service that any API service provider or API consumer can put to work right away. I quickly generated four API Blueprints, for my API, audio, blog, and building block APIs, which I also indexed as part of each APIs.json file.
As any good API service provider should, API Transformer has an API. You can build API transformations between definition formats into any service or tooling. In my opinion, every API service provider in 2015 should speak as many API definitions as possible, always allowing customers to import and export in all of the formats offered above.
API Transformer reflects how API service providers should work--it does one thing and does it well, and it provides a simple web interface, as well as a dead simple API. There is no way I will be building out my own service now that APIMATIC has launched API Transformer.
See The Full Blog Post
22 Sep 2015
I have a research project dedicated to trying to understand all things Swagger. I try to add any new research or tooling there when I can. The latest thing I added was a page to list out Swagger extensions that I find in the wild.
I knew Apigee had extended Swagger in some interesting ways, but I was coming across other interesting examples, and wanted to try to aggregate them into a single location, so that others can reference and build upon them.
There is now an extensions page on my Swagger research. I have added APIMATIC's x-codegen-settings, plus a couple from Apigee, to kick things off. If you know of any interesting examples of Swagger extensions, please let me know via the project's Github issues, or feel free to fork the project and add to the extensions page using the _config.yml file for the Github project, where they are stored as a YAML collection.
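For anyone unfamiliar with the mechanics, a vendor extension is just an `x-` prefixed property dropped into a Swagger definition, something like this simplified sketch--the settings and values shown under x-codegen-settings are illustrative assumptions, not APIMATIC's documented options:

```yaml
swagger: "2.0"
info:
  title: Example API
  version: "1.0"
# Hypothetical vendor extension values, for illustration only
x-codegen-settings:
  namespace: ExampleApi
  enableLogging: true
paths:
  /posts:
    get:
      responses:
        "200":
          description: Successful response
```

Tooling that doesn't understand an `x-` property simply ignores it, which is what makes extensions a safe place for this kind of innovation.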
Hopefully we can start centralizing all the innovative extensions of Swagger into a single location, helping us all not re-invent the wheel when it comes to extending the popular API definition format, affectionately known as Swagger.
See The Full Blog Post
21 Sep 2015
I just sat in on an APIWare call with a fast growing startup, developing a better understanding of how we can assist their team. They have a pretty solid development team behind their API, so providing core API development resources is not in the cards.
Where this startup is needing the most help, is where the APIWare team shines:
- API Operations Review - Taking a look at how an API platform works from the inside out, preferably starting with an outside perspective, so we can bring that fresh perspective to the table, and immediately begin adding value to the road-map.
- Crafting API Strategy - With a better understanding of how an API works, inside and out, we can take a walk through our massive list of best practices, and see what we can apply when it comes to bringing together a strategy.
Providing companies with a comprehensive review of their API operations from an outside perspective, but also walking through their architecture, team, and other internal resources, is a valuable process on its own. Having a team of experienced API professionals listening, and walking through the current makeup of your API operations, helps you better articulate your vision while also helping us define how things work, so a coherent API strategy can be crafted by our team.
This is the stage we are in with this particular startup conversation--kicking off a formal review, so we can help them craft a coherent API strategy. Once this is done, we can re-assess how we can help them, and probably begin telling more public stories about the project. This conversation reflects other conversations the APIWare team is having with other startups, SMBs, and enterprise groups. However, one thing that did present itself on this call is how APIWare is also becoming a burst-able development team that can act as a backup when small startups are faced with the potentially large enterprise relationships that come with API success.
This particular startup just had a new enterprise customer come in through their API, the dream of any small API, until you actually have to scramble to meet the demands. I have seen the interest of the enterprise be both a blessing and a curse in the past for small teams, something that has the potential to derail a startup, taking them away from the daily work that matters. The enterprise has the resources to throw at these projects, but small startup teams do not always have what it takes to keep day to day operations running smoothly, while also meeting the needs of a large project or partnership.
This is APIWare. In addition to helping this startup review their current API operations, and craft a coherent API strategy, the APIWare team is here to help their core API developer team expand and contract as necessary. We are getting to know the company's existing infrastructure and operations, primarily to be able to help craft the overall API strategy, but this awareness also puts us in a good position to step in, either to help with a larger project, or to augment the core team while they address the needs of larger, temporary projects.
The ability for APIWare to be there for a company in this capacity always starts with an API operations review. We can't be waiting in the wings, ready to work, if we aren't up to speed on how things are currently operating, and in tune with the road-map. It is nice to be starting these conversations early, so we are up to speed when we are needed most. I am happy to see that staying specialized in the areas of API design, deployment, and management is proving to be a valuable approach to the startups we are talking with so far.
See The Full Blog Post
21 Sep 2015
Sometimes I have ideas that are sticky, meaning they won't leave my brain, but they are not always concepts I personally enjoy exploring. This is just a little insight into the madness that is my brain--focus Kin...FOCUS! So, I found myself thinking last week about API monetization, while I was also updating my API design and hypermedia research--then, during this Reese's peanut butter cup moment, I came up with an idea for a suggested or sponsored link relation engine for hypermedia APIs.
First, I'd prefer to start with what I'd consider to be a hypermedia suggestion engine, that could provide external suggestions for related links to any structured data that is being returned via an API. It is a suggestion engine that an API provider, who has followed hypermedia principles as part of their API design, could use to augment the resources they are serving up. Such a suggestion engine would have to be pretty smart, and work from an existing index that API providers could train via an external API.
One possible example of this from my existing work is a scientific research API that might be serving up research papers, where with each API call you could potentially get some native related links for annotating, tagging, and other opportunities dictated by the API provider. But what if the institution where the research occurred could provide related links to other research going on at the institution, or maybe a governing scientific organization could suggest other related research or resources, and it was up to the API provider to whitelist or blacklist the link relations that were included?
Taking this to the next level, which is inevitable, and I might as well be the asshole who suggests it: what if API providers could approve a paid index of links to suggest as link relations for any product in the catalog? Sure you can add this product to your wishlist, or shopping cart, but you could also donate it to a non-profit organization, or buy it through a partner who will modify it for you, and deliver it in a way that improves on the original product experience. Such a link relation engine could inject valuable links into the API stream, and the API provider, and potentially developers, could tap into this as a potential revenue stream. Of course, all links included would be fully vetted, and certified secure using a service like Metacert.
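In a HAL-style response, the injected suggestions might look something like this sketch--the link relation names, prefixes, and URLs are entirely made up for illustration:

```json
{
  "title": "Example Research Paper",
  "_links": {
    "self": { "href": "/papers/123" },
    "annotate": { "href": "/papers/123/annotations" },
    "suggested:related-research": { "href": "https://institution.example.edu/projects/456" },
    "sponsored:partner-offer": { "href": "https://partner.example.com/offers/789" }
  }
}
```

The native links stay under the provider's control, while the suggested and sponsored relations are the ones that would flow in from the external engine, subject to whitelisting.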
If you are still reading this, this link relation engine does not exist. This is just a random idea derived from the collision of several of my active projects, with a little forward thought about what one possible dystopian, post-advertising API monetization world might look like. I'm sure, like current advertising, 90% of the links in a link relation engine would be total shit, but who knows, they might also add value to the increasing number of structured objects being served up via APIs, and maybe act as a next generation financial engine that is tailored specifically for the API economy.
See The Full Blog Post
15 Sep 2015
If you are in an industry being impacted by technology, you have probably become very aware of the term Application Programming Interfaces, more widely known as APIs, and how they are driving web applications, mobile applications, and increasingly everyday objects in our homes, cars, businesses, and across the public landscape. If you are finding yourself part of this growing conversation, you have most likely also heard talk of a new breed of API definition formats that are becoming ubiquitous, like Swagger and API Blueprint.
API definitions are a way to describe what an API does, providing a machine readable blueprint of how to put the digital resource to work. API definitions are not new, but this latest round of available formats is taking the API conversation out of just IT and developer groups, enabling business units, and other key stakeholders, to participate throughout the API life-cycle. Much like the latest wave of web APIs has made data, content, and other digital resources like video and images more accessible, API definitions are making APIs more accessible across the rapidly expanding digital business landscape.
The first widely available API definition format was the Web Services Description Language (WSDL), an XML format established in 2001 that described web services. Much like web services (an API predecessor), WSDL represented a very technical vision of APIs, something dictated by IT and developer groups, with heavy top down governance from business and industry leadership. While web services and WSDL are still ubiquitous across the enterprise, they are rapidly being replaced with much lighter weight, simpler web APIs that use the Internet to deliver the digital data, content, and resources that web, mobile, and devices are demanding in 2015.
Along the way, newer, more web friendly API definition formats emerged, such as the Web Application Description Language (WADL), but ultimately WADL never took root, suffering from many of the same illnesses as its predecessor WSDL. It wasn't until a new format called Swagger was born that we started to see the conversation around how we define, communicate, and develop standardized tooling around APIs evolve, providing an open specification for defining all the details that go into an API.
Swagger provided developers a way to describe an API that was more in sync with everything else modern API developers were used to, including using JSON rather than the XML of previous web services, WSDL, and WADL. Swagger gave us something more than just a way to define APIs--it gave us Swagger UI, an interactive version of API documentation that made learning what an API does, and how to integrate with it, a hands-on, interactive experience. This new approach to documentation gave us a solution to the number one problem plaguing API providers: out of date documentation that confused consumers.
Shortly after Swagger began seeing wide adoption because of the interactive documentation it provided for APIs, a new API definition format emerged called API Blueprint, which also provided interactive documentation, but rather than using JSON, it used Markdown, making the process of defining APIs a little less intimidating for non-developers. Apiary, the makers of API Blueprint, did another thing that would move the conversation forward again, making the reason for defining APIs in these formats more about API design than just delivering up-to-date documentation.
Using API Blueprint, API designers could define an API before any code was actually written. Developers could craft an API using Apiary's tooling, then a mock version of the API could be generated and shared with other project stakeholders, from business users to potential web or mobile developers. This process saves considerable time, money, and other resources in ensuring that an API will be something web, mobile, and device developers can actually put to use. With two new API definition formats, Swagger and now API Blueprint, the process of defining and designing APIs in a machine readable way was accessible to everyone, across a rapidly expanding API life-cycle.
This evolution has all occurred over the last 4 years, a period which has also produced other API definition formats like RSDL, RADL, RAML, I/O Docs, and MSON--just to name a few. These API definition formats are quickly becoming the preferred way not just to define and design APIs, but also to deliver documentation. Another positive by-product has been that a new breed of API service providers are using them as the central definition for quickly putting their services to work on any API--for documentation, mocking and virtualizing APIs, generating server code, producing software development kits (SDK), and setting up essential testing and monitoring to keep APIs stable and reliable for consumers. The API definition driven life-cycle continues to expand.
In 15 years, like APIs, API definition formats have moved out of the realm of the technical, and are providing vital business interactions that ensure APIs meet critical internal, partner, and public needs. They are also being applied to bring much needed balance to the political side of API operations, from making sure APIs are stable and available, to defining pricing, rate limits, terms of service, and even helping secure APIs that operate on the open Internet.
API definitions have become a machine readable contract that defines the boundaries of the relationship between an API provider and its consumers. They act as a central truth, crafted by developers and API architects to govern how an API operates, from mocking to client integration, but in a way that also sets the technical, business, and legal expectations of consumers. This API definition-driven contract is transcending the often proprietary, black box algorithms that make an API function behind the scenes, providing a portable, shareable, machine readable contract that can be shared internally, and externally with partners or the general public.
The importance of this new layer, and its role in the future of software development, can be seen playing out in the Oracle v Google API copyright case, where Oracle (using the courts) has set the precedent that the naming and ordering of your interface is separate from the code, and falls under copyright protection. Beyond the core legal case, the questions around exactly what counts as code have been even more interesting. Many API architects do not see APIs as anything but code, having not yet seen the impacts of the modern API definition movement within their architecture.
API definitions aren't just about defining the URLs, parameters, headers, and other aspects of API operations that developers need to know--they are also bringing much needed clarity and awareness of the value generated by APIs among business users, and the end-users of the applications that APIs are powering. API definitions provide a common format in markdown, YAML, or JSON that describes the technical surface of an API, and then allow this technical specification to be applied across every stop along the API life-cycle, from idea, to deployment, to resulting integration with web, mobile, and device applications.
As APIs make their way into almost every aspect of our business and personal lives--driving our social relationships with family and friends, metering our connections to our utility companies, connecting us to educational and healthcare opportunities--this touch-point between a platform and the web, mobile, and device applications it powers is becoming increasingly critical. To businesses this layer represents critical supply chains, but to each individual this touch point is where all of our life bits flow--further emphasizing the importance of, but also the sensitivity required in, defining APIs in a meaningful way that makes sense to EVERYONE involved.
In 2001, a WSDL definition was very much about communicating what a service did between a platform and the system being integrated, something that only involved IT and developers. In 2015, a modern API definition format provides the same benefits that WSDL has historically delivered, but it is also addressing the business and political elements of how Internet enabled software works. A modern API definition provides:
- a medium for API designers, architects, and business stakeholders to craft exactly the API that is needed, before any production code is written
- the instructions a quality assurance (QA) team needs to make sure an API meets business requirements
- a definition of the sandbox, mocking, simulation, and virtualization environments that developers may need to be successful
- what a developer needs to integrate with another system, or build an application, through interactive documentation, and even complete Software Development Kits (SDK)
- what the API testing, monitoring, and performance groups will need to ensure service level agreements are met or exceeded
- the known surface area that security auditors will need to properly secure the infrastructure web, mobile, devices, and ultimately users will depend on
- a map that government regulators can use to understand the industry landscape, and help keep all players in alignment with the nation's priorities
This is just a sampling of how API definitions are being used as a driver for what is widely being called the API economy, which is at the heart of cloud, mobile, big data, Internet of Things (IoT), and almost every other technical trend of the last ten years. While API definitions provide the much needed machine readable instructions for computers to understand what occurs at these vital API touch-points, they also provide the much needed human readable instructions that people can use to interpret the business agreements and individual relationships playing out across our increasingly digital lives.
See The Full Blog Post
15 Sep 2015
When it comes to the API space, it always takes numerous conversations with API providers and practitioners before something comes into focus for me. I've spent five years having API management conversations, an area that is very much in focus for me when it comes to my own infrastructure, as well as a metric I apply when reviewing the other public and private APIs I look at regularly.
While I have been paying attention to API monetization for a couple years now (thank you @johnmusser), in 2015 I find myself involved in 20+ conversations, forcing the topic to slowly come into focus for me, whether I like it or not. When talking to companies and organizations about how they can generate revenue from their APIs, I generally find the conversation going in one of two directions:
- Resource - We will be directly monetizing whatever resource we are making available via the API. Charging for access to the resource, and composing of multiple access tiers depending on volume, and partnerships.
- Technology - We believe the technology behind the API platform is where the money is, and will be charging for others to use this technology. Resulting in a growing wholesale / private label layer to the API economy.
90% of the conversations I engage in are focused on the first area, and how to make money off API access to a resource. The discussion is almost always about what someone will pay for a resource, something that is more art than science--even in 2015. The answer is, we don't know until there is a precedent, resulting in an imbalance where developers expect things for free, and providers freak the fuck out--then call me. ;-)
As more of my thoughts around API monetization solidify, a third dimension is slowly coming into focus, one that won't exist for all API providers (especially those without consumers), but is something I think should be considered as part of a long term roadmap.
- Exhaust - Capture the logs, usage, tooling, and other resources that are developed through the course of API operations, and make them accessible in a self-service, pay as you go approach.
There are many ways you can capture the exhaust around API operations, and sell access to it. This is where the ethics of APIs come into play--you either do this right, or you do it in a way that exploits everything along the way. This could be as simple as providing an API endpoint for accessing the search terms executed against an API, all the way to providing a franchise model around the underlying technology behind an API, with all the resources someone needs to get up and running with their own version of the API. If you are very short-sighted, this could be just about selling all your exhaust behind the scenes to your partners and investors.
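As a concrete, if simplistic, sketch of that first example, here is a hypothetical aggregation of raw API search logs into a dataset you could then meter access to--the log structure and field names are invented for illustration:

```python
from collections import Counter

def aggregate_search_exhaust(log_entries):
    """Aggregate raw API search logs into per-term counts--the kind of
    by-product dataset that could be served up via its own paid endpoint."""
    counts = Counter(entry["term"].lower() for entry in log_entries)
    return [{"term": term, "searches": n} for term, n in counts.most_common()]

# Hypothetical raw log entries captured during API operations
logs = [
    {"term": "hydrology", "consumer": "app-1"},
    {"term": "Hydrology", "consumer": "app-2"},
    {"term": "rainfall", "consumer": "app-1"},
]
print(aggregate_search_exhaust(logs))
```

Notice the aggregation strips the consumer identity out--which side of that line you land on is exactly the ethical question at play here.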
To me this is all about stepping back, and looking at the big picture. If you can't figure out a theoretical, 3rd dimension strategy for making money off the exhaust generated by the resource you are making available via an API, and the underlying technology used to do so--there probably isn't a thing there to begin with. Put another way, if you can't do this in an ethical way that you will want to talk about publicly, and with your grandmother, you probably shouldn't be doing it in the first place. I'm not saying there isn't money to be made, I'm saying there isn't real value, and money to be made, that also lets you sleep at night.
This opens up a number of ethical benchmarks for me. If you are looking at selling the exhaust from everything to your VC partners, and never open it up via your public APIs, you probably are going to do very well in the current venture capital (VC) driven climate. What I'm talking about is how you generate a new layer of revenue based upon the legitimate exhaust that is generated from the valuable resource you are making available, and the solid technological approach that is behind it. If there is really something there, and you are willing to talk about it and share publicly, the chances I'm going to care and talk about it on my blog increase dramatically.
If you do not have a clue what I'm talking about, you probably aren't that far along in your API journey. That is fine. This isn't a negative. Just get going as soon as you can. If you are further along, and have people and companies actually using your API, there is probably a lot of value already being generated. If you partner with your ecosystem, and educate, as well as communicate with end-users properly--I am just highlighting that there is a lot of opportunity to be captured in this 3rd dimension.
See The Full Blog Post
14 Sep 2015
API design and definitions are the number one area when it comes to talks submitted for APIStrat 2015 in Austin, and when it comes to traffic across the API Evangelist network in 2015. After diving into the Amazon API Gateway a little more over the weekend, I was reminded of the opportunity out there when it comes to API design tooling.
Amazon did a good job, providing a GUI interface for crafting the methods, resources, and underlying data models for APIs you are deploying using the gateway. However, when you compare it to some of the GUI API design editors I mentioned in my last post on this subject, from Restlet, APIMATIC, and Gelato, the Amazon interface clearly needs to evolve a little more.
AWS is just getting started with their solution, so I'm not knocking what they have done. I just want to keep comparing all of the solutions as they emerge, and highlight the opportunity for some standardization in this layer of API tooling. I see a pretty big opportunity for some player to step up and provide an open source API design editor that provides a seamless experience across API service providers.
This post is nothing new. I am just trumpeting the call for open API design tooling each time I see another new interface introduced for crafting APIs--their paths, resources, parameters, headers, authentication, and underlying data models. At some point, a new player will emerge with the open source API design editor I am looking for, or one of the existing players will open source their current offering, and evolve it in the context of the wider API design space, providing an abstracted layer that supports all API definition formats.
With the growth in the number of service providers I see stepping up to serve the API space, the need for common, open tooling when it comes to API design is only going to grow. It took almost four years of waiting for the API management space to figure this out--I'm hoping I don't have to wait as long on the API design side of things.
See The Full Blog Post
14 Sep 2015
One of my readers recently reached out to me, in response to some of my recent stories about monetization opportunities around government and scientific open data and APIs. I'm not going to publish his full email, but he brought up a couple of key, and very important, realities of open data and APIs that I don't think get discussed enough, so I wanted to craft a story around them, to bring them front and center in my work.
- Most open data published from government is crap, and requires extra work before you can do anything with it
- There currently are very few healthy models for developers to follow when it comes to building a business around open data
- Business people and developers have zero imagination when it comes to monetization -- aka ads, ads, ads, and more ads.
My reader sums it all up well with:
I don't dispute that with some pieces of government data, they can be integrated into existing businesses, like real estate, allowing a company to value add. But the startup space leveraging RAW open, or paid government data is a lot harder. Part of my business does use paid government data, accessible via an API, but these opportunities the world over are few and far between in my mind.
I think his statement reflects the often unrealized challenges around working with open data, but in my opinion it also reflects the opportunity of the API journey, when applied to this world of open data.
APIs do not equal good, and if you build a simple API on top of open government data, it does not equal instant monetization opportunity as an API provider. It will take domain experts (or aspiring ones) to really get to know the data, make it accessible via simple web APIs, and begin iterating on new approaches to using the open data to enrich web and mobile applications in ways that someone is willing to pay for.
Taking an open data set, cleaning it up, and then being able to monetize access to it directly via an API is simply not a reality, and is something that will probably only work in less than 5% of the scenarios where it is applied. However this doesn't mean that there aren't opportunities out there when it comes to monetizing adjacent to, and in relationship to, the open data.
Before you can develop any APIs that any business or organization would want to pay for, you have to add value. You do this by adding more meaningful endpoints that do not just reflect the original data or database resources, and provide actual value to end users of the web and mobile applications being built--this is the API journey you hear me speak of so often.
You can also do this by connecting the dots between disparate data-sets, in the form of crosswalks, and by establishing common data formats that can be applied across local and regional governments, or possibly an industry. Once common data formats and interface models are established, and a critical mass of high value open data exists, common tooling can begin to evolve, creating opportunities for further software, service, and partnership revenue models.
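As a minimal sketch of the kind of crosswalk I am describing, assuming two hypothetical city permit datasets with differing field names, mapping both into a shared format might look something like this:

```python
import json

# Hypothetical field mappings from two city datasets into a common format
CROSSWALKS = {
    "city_a": {"permit_no": "permit_id", "addr": "address", "issued": "issue_date"},
    "city_b": {"id": "permit_id", "site_address": "address", "date_issued": "issue_date"},
}

def to_common_format(record, source):
    """Rename source-specific fields to the shared schema."""
    mapping = CROSSWALKS[source]
    return {common: record[original] for original, common in mapping.items()}

record_a = {"permit_no": "A-100", "addr": "1 Main St", "issued": "2015-09-01"}
record_b = {"id": 42, "site_address": "9 Elm Ave", "date_issued": "2015-08-15"}

print(json.dumps(to_common_format(record_a, "city_a"), sort_keys=True))
print(json.dumps(to_common_format(record_b, "city_b"), sort_keys=True))
```

Once a handful of cities share one format like this, the common tooling I mention above becomes possible.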
The illness that exists when it comes to the current state of open data is partly shared between early open data advocates, when it came to over-promising the potential of open data and their own under-delivery, and governments' under-delivery when it came to the actual roll-out and execution of their open data efforts. Most of the data published cannot be readily put to work, requiring several additional steps before the API journey even begins--making more work for anyone looking to develop around it, and putting up obstacles instead of removing them.
There is opportunity to generate revenue from open data published by government, but it isn't easy, and it definitely isn't a VC scale opportunity. For companies like HDScores, when it comes to selling aggregate restaurant inspection data to companies like Yelp, there is a viable business model. Companies that are looking to build business models around open data need to temper their expectations of being the next Twitter, and open data advocates need to stop trumpeting that open data and APIs will fix all that is wrong with government. We need to lower the bar, and just get to work doing the dirty work of exposing, cleaning up, and evolving how open data is put to work.
It will take a lot of work to find more of the profitable scenarios, and it will take years and years of hard work to get government open data to where it is the default, and the cleanliness and usefulness levels are high enough, before we see the real potential of open data and APIs. All this hard work, and the shortage of successful models, doesn't mean we shouldn't do it. For example, just because we can't make money providing direct access to the Recreation Information Database (RIDB), doesn't mean there aren't potentially valuable APIs when it comes to understanding how people plan their summer vacations at our national parks--it will just take time to get there.
My Adopta.Agency project is purely about the next steps in this evolution, and making valuable government "open data" that has been published as CSVs and Excel, more accessible and usable, by cleaning them up and publishing them as JSON and / or APIs. I am just acknowledging how much work there is ahead of us when it comes to making the currently available open data accessible and usable, so we can just begin the conversation about how we make them better, as well as how we generate revenue to fund this journey.
See The Full Blog Post
14 Sep 2015
The Element Loader is interesting to me as an evolution of the concept of what I’ve long called API reciprocity, where companies like Zapier allow you to migrate your bits and bytes between the platforms we are all increasingly finding ourselves dependent on.
I think Cloud Elements is moving the needle forward just a bit, by formalizing a tool that is dedicated to real-time sync between the platforms you depend on. You can accomplish similar things with Zapier, but I think looking at it purely as sync of specific life bits (objects) can be a very valuable exercise.
I have been calling this reclaim your domain for a couple years now, where I think the process of identifying the services we depend on is extremely valuable, and one where establishing a plan for how your bits and bytes work in concert, really pushes things into the realm of actually healthy IT operations--for both individuals and businesses.
I do not have my world synced. My contacts on Google, LinkedIn, and other platforms are totally out of sync, and my documents are spread between Google, Amazon, and Dropbox, without any coherency at all. Don’t get me started on my images. This is a real problem, that is only growing, and a segment where I'd like to see more solutions like Element Loader emerge.
I’ll start tracking reciprocity providers like Cloud Elements who are doing specific things like sync, which will become one of the common building blocks I’ll add to my research when I get the time to update it. Hopefully I will find some more time soon to take a deeper look at my API automation and interoperability research--it has been a while.
See The Full Blog Post
12 Sep 2015
I am preparing a talk this week in Portland, OR, at the IDX Developer Summit. IDX serves the real estate industry, providing real estate professionals access to hundreds of multiple listing service (MLS) groups from around the United States. If you are a real estate broker or agent, and you need real estate listings on your site, IDX is how you do this--they are the leading player in the space.
If you aren't familiar with the world of real estate data, it has long been controlled by a network of MLS groups, totaling almost 2000 (I think), spread around the nation. These MLS organizations tightly control their data, deciding exactly who has access to it, exactly how it can be used, and how it MUST be displayed in print and across the web. This is a process that long existed prior to the web, but since 1995, it is a mechanism that has gotten even more strict, and litigious, seeking to maintain control over a very valuable, and increasingly digital, layer of our physical world.
When it comes to APIs, the real estate industry is the OG API provider, making data available via FTP locations as soon as the web was a thing. However, when it comes to the core principles of what makes APIs work, the real estate industry is the anti-API. MLS groups hoard facts, something that cannot have copyright applied, but if you are litigious enough, it is something you can defend. The address and details of residential and commercial property is data that should be accessible to everyone, but MLS groups, and the National Association of Realtors (NAR), have created a cartel that prevents this from ever being a reality. Think what the Recording Industry Association of America (RIAA) and record labels have done to music--the MLS and NAR do this to real estate.
IDX has struck a balance between hundreds of these MLS organizations, allowing them to process their prized data, and enabling real estate agents and brokers to publish this data on their websites using seamless, and often embeddable, tooling that adheres to the distribution and branding guidelines set by the MLS. IDX provides a bridge between the online digital world and this legacy world of data control, potentially providing the real estate industry with the online tooling it will need to be successful.
Ok, if the MLS, NAR, and real estate industry is the anti-API, what the hell am I doing speaking at a real estate industry developer conference? Well, I was the original tech founder of IDX. ;-) I built the original system for pulling real estate data, targeting the first handful of MLS organizations. I exited the company sometime around 2005, my technology is long gone (thank god), and my two co-founders have gone on to do some very interesting things, building a thriving company in a very difficult space.
I won't be going to the IDX Developer Summit to talk shit on the MLS and NAR, I will be helping to inspire the developers, about how much opportunity is available out there right now, even in the real estate industry. There are a lot of open data, and Internet of Things related opportunities emerging when it comes to residential, and commercial buildings, as well as neighborhood, and city level development possibilities. My objective is to help them understand the realities of the space they exist in, and still build value within, and around an industry that is so entrenched when it comes to data sharing--it can be done!
I also hope to get them to use my nationwide MLS API, and bypass the MLS and NAR system. Just kidding!! If you have ever worked in the industry, this is the question every newbie asks, "can't I just get access to the nationwide MLS?". ;-)
I am actually really honored to be speaking at the IDX Developer Summit this week. My buddies Chad and Jeff have done some good work, in a very difficult industry--I am proud of them.
See The Full Blog Post
11 Sep 2015
I sat down for a second, more in-depth look at the Amazon API Gateway. When it was first released I took a stroll through the interface and documentation, but this time I got my hands dirty playing with the moving parts, and considering how the solution fits into the overall API deployment picture.
API Design Tools
As soon as you land on the Amazon API Gateway dashboard page, you can get to work adding APIs by defining endpoints, crafting your resources (paths) and their specific methods (verbs), and rounding off your resources with parameters, headers, and underlying data models. You can even map the custom sub-domain of your choosing to your Amazon API Gateway generated API, giving it exactly the base URL you need.
API Mapping Templates
One feature provided by the Amazon API Gateway that I find intriguing is the mapping templates. Using the data models and the mapping template tool, you can transform data from one schema to another. This is very interesting when you are thinking about evolving your own legacy APIs, but I'm also thinking it could come in real handy for mapping to public APIs, and demonstrating to clients what is possible with a next version--designed from the outside-in. Mapping is something I've wanted to see for some time now.
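The actual mapping templates use Amazon's own template syntax, but the underlying idea--reshaping a legacy back-end payload into a cleaner model--can be sketched in a few lines of Python (the field names here are hypothetical):

```python
def map_response(legacy):
    """Transform a hypothetical legacy payload into a friendlier model,
    the way a mapping template reshapes a back-end response."""
    return {
        "id": legacy["CUST_ID"],
        "name": f'{legacy["FIRST_NM"]} {legacy["LAST_NM"]}',
        "email": legacy["EMAIL_ADDR"].lower(),
    }

legacy_record = {"CUST_ID": 7, "FIRST_NM": "Ada", "LAST_NM": "Lovelace",
                 "EMAIL_ADDR": "ADA@EXAMPLE.COM"}
print(map_response(legacy_record))
```

This is exactly the kind of outside-in redesign I mention above: the ugly back-end stays put, and only the mapped model is exposed to consumers.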
API Integration Types
Up until now, in this review, we have just been talking about designing APIs, and possibly mapping our data models together. There are many other ways you can gateway existing systems, databases, and other resources using Amazon API Gateway, but the one that seems to be getting the lion's share of the discussion is deploying APIs with Lambda functions as the back-end.
API Integration Using Lambda Functions
Lambda functions give you the ability to create, store, and manage Node.js and Java code snippets, and wire up these resources using the Amazon API Gateway. When you create your first Lambda function, you are given a small selection of blueprints, like a microservice or db connection, and you can also edit your code inline, upload a .zip file, or pull a .zip file from Amazon S3 (where is the Github love?).
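As a rough sketch of the handler pattern a Lambda function follows--shown in Python purely for illustration, since the service currently lists Node.js and Java, and the response shape here is my own assumption:

```python
import json

def handler(event, context):
    """Minimal handler: read a name from the incoming event and return
    a response the Amazon API Gateway could hand back as JSON."""
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name + "!"})}

# Invoking locally, the way Lambda would with an event payload
print(handler({"name": "API Evangelist"}, None))
```

The important part is the shape: an event in, a context alongside it, and a serializable response out.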
Identity & Access Management (IAM)
The Amazon API Gateway gives you some pretty simple ways to secure your APIs using API keys, but then also gives you the whole AWS IAM platform and its resources to leverage as well. I think IAM will be more than many API providers will need, but for those that do need it, I can see this part of the gateway solution sealing the deal.
Scaling Lambda Functions Behind Your APIs
Being scalable is one of the promises of a Lambda-backed API deployed using Amazon API Gateway, which I can see being pretty alluring for devops focused teams. You can allocate to each Lambda function the memory it needs, and individually monitor and scale each one as needed. While I see the recent containerization movement taking care of 50% of API back-end needs, I can also see being able to quickly scale individual functions in the cloud taking care of the other 50%.
Events For Lambda Functions
Another powerful aspect of a Lambda function is that you can engineer it to respond to events. Using the interface, command line, or API, you can define one or many event sources for each Lambda function. Amazon provides some pretty interesting sources for triggering each Lambda function.
These event sources provide some pretty potent ways of triggering specific functions in your vast Lambda code library. You can rely on running code stored as Lambda functions using the API you deploy with Amazon API Gateway, and / or you can have your code run in response to a variety of these events you define.
When it comes to defining a back-end for the APIs you deploy using Amazon API Gateway, Lambda is just the beginning. Amazon provides three other really interesting ways to power APIs. I see a lot of potential in managing code using Lambda, and using it to scale the back-end of many APIs pretty quickly, but these other areas provide some pretty interesting potential as well.
HTTP Proxy
A quick way to put Amazon API Gateway to use is as a proxy for an existing API. Think about the potential in this area when you put mapping templates to work, transforming the methods, resources, and models. I haven't mapped it to any existing APIs yet, but will make sure and do so soon, to better understand the HTTP proxy potential.
Mock Integration
Another way to quickly deploy an API is to mock your integration, providing a quick API that can be hacked on, making sure an API will meet developers' needs. You may even want to mock an existing public API, rather than use a live resource while you are developing an application. There are many uses for mock integration.
AWS Service Proxy
The final way Amazon provides for you to power your API(s) is by proxying an existing AWS service. This opens up the entire AWS cloud stack to being exposed as API resources, using the Amazon API Gateway. This reminds me of other existing API gateway solutions, except instead of your on-premise, legacy infrastructure, this is your in-the-cloud, more recent infrastructure. I'm guessing this will incentivize many companies to migrate their legacy infrastructure into the cloud, or at least make it cloud friendly, so they can put the AWS service proxy to use--lots of possibilities here.
Defining The Stages Of Your Lifecycle
Going beyond the types of integration you can employ when crafting and deploying APIs using the Amazon API Gateway, the platform also provides a way to define the stages that APIs will exist in, from design, development, and QA to production, or any other stage you wish. I like the concept of having a stage defined for each API, designating where it exists in the API life-cycle. I tend to just have dev and prod, but this might make me consider it a little more deeply, as it seems to be a big part of defining the API journey.
API Monitoring By Default
Amazon has built monitoring into the API Gateway, and Lambda functions, by default. You can connect APIs, and their designated integration back-end, to CloudWatch, and monitor everything about your operations. CloudWatch is very much a cloud infrastructure monitoring and logging solution rather than an API analytics solution, but I could see it evolve into something more than monitoring and logging, providing an overall awareness of API consumption. Maybe an opportunity for the ecosystem to step in via the API(s).
CLI And API For The API Gateway
You have to admit, Amazon gets interfaces, making sure every service on the platform has a command line interface as well as an API. This is where a lot of the API orchestration magic will come into play in my opinion. The ability to automate every aspect of API design, deployment, management, and monitoring, across your whole stack, using an API is the future.
There Are Some Limitations
There are some current limitations of the Amazon API Gateway. It limits things to a maximum of 60 APIs per AWS account, 300 resources per API, 10 stages per API, and a 10-second timeout for both AWS Lambda and HTTP back-end integrations. They are just getting going, so I'm sure they are still learning how people will use API deployment and management infrastructure in the cloud, and we'll see this evolve considerably.
What Will This Cost?
Lambda is providing the first 1 million requests per month for free, and $0.20 per 1 million requests thereafter, or $0.0000002 per request. The Amazon API Gateway costs $3.50 per million API calls received, plus the cost of data transfer out, in gigabytes. It will be interesting to see what this costs at scale, but I'm sure overall, it will be very inexpensive to operate like other AWS services, and with time the cost will come down even further as they dial it all in.
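Using the prices above, a quick back-of-the-envelope calculation for a hypothetical month of ten million API calls looks like this:

```python
requests = 10_000_000

# Lambda: first 1 million requests free, $0.20 per additional million
lambda_cost = max(requests - 1_000_000, 0) / 1_000_000 * 0.20

# Amazon API Gateway: $3.50 per million calls received (data transfer out not included)
gateway_cost = requests / 1_000_000 * 3.50

print(f"Lambda:  ${lambda_cost:.2f}")   # $1.80
print(f"Gateway: ${gateway_cost:.2f}")  # $35.00
```

Even at ten million calls, the gateway fee dwarfs the Lambda compute fee, and data transfer out would be on top of both.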
AWS API Gateway Has Me Thinking
I won't be adopting the Amazon API Gateway right away, I'd prefer to watch it evolve some more, but overall I like where they are taking things. The ability to quickly deploy code with Lambda, and use blueprints to clone and deploy the code behind APIs, has a lot of potential. Most of my APIs are just simple code that either returns data from a database, or conducts some sort of programmatic function, making Lambda pretty attractive, especially when it comes to helping you scale and monitor everything by default.
My original criticism of the platform still stands. Amazon is courting the enterprise with this, providing the next generation of API gateway for the legacy resources we have all accumulated, now in the cloud. It really doesn't help large companies sort through their technical debt, allowing them to just grow it, and manage it in the cloud. A win for AWS, so honestly it makes sense, even though it doesn't deliver the critical API life-cycle lessons the enterprise will need along the way to actually make change.
This is a reason I won't be getting hooked on Lambda + Amazon API Gateway anytime soon, because I really don't want to be locked into their services. I'm a big fan of my platform employing common, open server tooling (Linux, Apache, NGINX, MySQL, PHP), and not relying on specialty solutions to make things efficient--I rely on my skills, and experience and knowledge of the resources I'm deploying, to deliver efficiency at scale. My farm to table approach to deploying APIs, keeps me in tune with my supply chain, something that may not work for everyone.
While the tooling I use may not be the most exciting, it is something I can move from AWS, and run anywhere. All of my APIs can easily be recreated on any hosting environment, and I can find skills to help me with this work almost anywhere in the world. After 25 years of managing infrastructure, I'm hyper-aware of lock-in, even the subtle moves that happen over time. However, my infrastructure is much smaller than many of the companies who will be attracted to AWS Lambda + API Gateway, which actually for me, is another big part of the API lesson and journey, but if you don't know this already, I'll keep it to myself.
I'd say AWS gives a healthy nod to the type of platform portability I'm looking for, with the ability to import and export your back-end code using Lambda, and the ability to use API definitions like Swagger as part of Amazon API Gateway. These two things will play a positive role in the overall portability and interoperability of the platform, but the deeper connections made with other AWS services will be a lot harder to evolve away from if you ever have to migrate off AWS.
For now, I'll keep playing with Amazon API Gateway, because it definitely holds a lot of potential for some very powerful API orchestration, and while the platform may not work for me 100%, AWS is putting some really interesting concepts into play.
See The Full Blog Post
08 Sep 2015
There is a little more than 24 hours left for you to submit your talk for APIStrat in Austin, TX, this November 18th, 19th, and 20th. With this sixth edition of APIStrat, we are taking things back to our roots, and not choosing a theme, but making it a conversation about the most important topics in the space facing API providers and consumers in 2015.
From looking at the talks that have been submitted so far, API definitions, design, and the Internet of Things seem to be leading the pack. We've also seen a couple of session talk submissions that we think are probably more worthy of being keynotes, because they are just that good.
The APIStrat team feels like 400 people is the sweet spot when it comes to having a productive API discussion, so we chose a venue that fits this vision, which will most definitely sell out. Check out the sponsors who have already lined up, and we have only announced two keynotes (more coming this next week).
Make sure and submit your talk before tomorrow night, so that you are part of the conversation. Also, make sure you jump in and sponsor the event, as we are already looking at closing off one tier of sponsorship--contact us today if you want to get in on the action.
We'll see you in Austin!
See The Full Blog Post
08 Sep 2015
My last rant of the evening, I promise. Then I will shut up and move back to actual work, instead of telling stories. I'm working on my Adopta.Agency project, processing a pretty robust spreadsheet of Department of Veterans Affairs expenditures by state. As I'm working to convert yet another spreadsheet to CSV, and then to JSON, and publish it to Github, I can't help but think, "where is the save as JSON" in Microsoft Excel or Google Spreadsheets?
I can easily write scripts to help me do this, but I'm trying to keep the process as close to what the average person, who will be adopting a government agency data set, will experience. I could build a tool that they could also use, but I really want to keep the tools required for the work as minimal as possible.
It would just be easier if Microsoft and Google would get with the program, and give us a built in feature for saving our spreadsheets as JSON.
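Until they do, a minimal conversion is only a few lines of Python (the sample data here is made up):

```python
import csv, io, json

def csv_to_json(csv_text):
    """Convert CSV text (first row as headers) into a list of JSON-ready dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# A tiny sample standing in for an exported government spreadsheet
sample = "state,amount\nOR,1200\nWA,3400\n"
print(json.dumps(csv_to_json(sample), indent=2))
```

The point stands, though: the average person adopting a government data set shouldn't need a script for this step.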
See The Full Blog Post
08 Sep 2015
I had a reminder on my task list to check in on where some of the common database platforms were when it came to APIs. I think it was a Postgres announcement from a while back that put the thought in my notebook, but as an old database guy I tend to check in regularly on the platforms I have worked most with.
The point of this check-in is to see how far along each of the database platforms is when it comes to easy API deployment, directly from tables. The three relational database platforms I'm most familiar with are:
- SQL Server - The platform has APIs to manage it, and you can deploy an OData service, as well as put .NET to work, but there is nothing really straightforward that would allow any developer to quickly expose a simple RESTful API.
- PostgreSQL - I'd say PostgreSQL is furthest along with their "early draft proposal of an extension to PostgreSQL allowing clients to access the database using HTTP", as they have the most complete information about how to deploy an API.
- MySQL - There was a writeup in InfoQ about MySQL offering a REST API, but from what I can tell it is still in MySQL Labs, without much movement or other stories I could find to show any next steps.
The database that drives my API platform is MySQL, running via Amazon RDS. I haven't worked on Postgres for years, and I jumped ship on SQL Server a while back (my therapist says I cannot talk about it). I automate the generation of my APIs using Swagger and the Slim framework, then do the finish work, like polishing the endpoints to look less like their underlying database, and more like how they will actually be used.
Maybe database platforms shouldn't get into the API game, leaving API deployment to gateway providers like SlashDB and DreamFactory? It just seems like really low hanging fruit for these widely used database solutions to make it dead simple for developers to expose and craft APIs from existing data sources.
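To illustrate just how low hanging this fruit is, here is a sketch of a dead simple table-to-JSON layer, using Python and SQLite purely for illustration:

```python
import json, sqlite3

def table_endpoint(conn, table):
    """Return a table's rows as the JSON body a simple read-only API might serve."""
    conn.row_factory = sqlite3.Row
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()  # table name is trusted here
    return json.dumps([dict(row) for row in rows])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inspections (id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO inspections VALUES (?, ?)", [(1, 95), (2, 88)])
print(table_endpoint(conn, "inspections"))
```

A real deployment would add the polish I describe above--endpoints that look less like the underlying tables--but the raw mechanics are this simple.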
If you are using any database-to-API solutions for SQL Server, PostgreSQL, or MySQL, please let me know.
See The Full Blog Post
08 Sep 2015
There was a pretty interesting conversation around API design going on in one of my API Slack channels over the last couple of days, about what API design is, and what is needed to make developers, and even non-developers, more aware of the best practices that exist. It is a private group, so I won't broadcast it as part of this post, but I did want to extract a narrative from it, and use it to help set a bar for other API design discussions I am having.
The Restafarian, hypermedia, and linked data folks have long been frustrated by the lack of adoption of sensible API design practices across the sector, and the lack of basic HTTP literacy among developers, and non-developers, is at dangerously high levels. The good news is that some of this is beginning to change, but we still have so much work to do, something that won't be easy, and unfortunately it won't always have the clean outcomes leaders in the space are looking for.
APIs returning JSON is just the next step in the evolution of the web, and when you consider how much work it took to get everyone on-board with HTML, and building web pages, then web apps, and recently mobile apps, you can begin to understand the work that still lies ahead. We have to help the next generation of developers be more HTTP literate (something the previous generations of developers aren't), and possess a baseline knowledge of common API design best practices. This needs to be done in a world where many of these developers really aren't going to care about the specifics of good API design, like us API architects and early pioneers have.
The average API designer of the future will not always be willing to argue about the nuances of how to craft URLs, whether to use a header or parameter, caching, and how to version. They just want to get the outcome they seek, accomplish their project, and get paid for their work. Consider the solar industry as a quick comparison. The first wave of installers were domain experts, while the following waves of installers, who will be focused on scaling and maintaining the industry, will only need to be trained on what is required to get the job done in a stable, profitable way.
Ok. So how do we do this right? I feel like we are already on a good path. We just need you to publish your own API design guide somewhere that we can all learn from, like other leading API providers already present in my API design research. As we build a critical mass of these, we need to also work to aggregate the best practices across them, so that instructors and textbook publishers can incorporate them into their curriculum. If you have an API platform, and have ever wished that there were more highly skilled API designers out there, make sure you have your API design practices documented, and shared with the world.
This will get healthy API design practices out of the trenches of startups, SMBs, the enterprise, and government agencies, and into educational institutions around the world. Then we can start equipping the next generation of programmers with the knowledge they will need to be successful in delivering the resources needed for the next generation of Internet powered apps, networks, and devices.
I want to add one more thing. API service companies who are looking to provide tooling that API providers can use to deploy APIs will have to share in the load here. This is core to my criticism of the AWS API Gateway, in that I applaud their use of HAL, but please make sure you also provide a healthy dose of hypermedia literacy along the way--don't just hide it behind the curtain. I really do not want to see another FrontPage for APIs, so if you are building an API editor, let me know so I can provide you with some ideas. (1) (2).
We all have a lot of work to do in preparing the next generation of developers, and business users, when it comes to a baseline of HTTP literacy, as well as a healthy dose of API awareness. We are going to need an army of API designers to help us deliver on the API economy we are all seeing in our heads--so let's get to work. If you do not have a formal API design strategy, get to work on one (let me know if you need help). If you have one, please share it, so I can add it to my API design research for others to reference.
See The Full Blog Post
08 Sep 2015
I have two more conversations kicking off on the topic of API monetization, so I just needed to take a moment to gather up the last wave of posts on the subject, catch my breath, and refresh my overall thoughts in the area. What I really like about this latest wave is that these conversations are about providing much needed funding for some potentially very important API driven resources. Another thing is that they are pretty complicated, unproven approaches to monetizing APIs--breaking ground!!
Over the last couple of weeks, I have been engaged in four specific conversations that have shifted my gaze to the area of API monetization:
- Audiosear.ch - Talking with the PopupArchive team about making money around podcast search APIs.
- Department of Interior - Providing feedback on the Recreation Information Database (RIDB) API initiative.
- Caltech Wormbase - Helping organize a grant effort to fund the next generation of research from Wormbase, and other scientific databases.
- HDScores - Mapping out how HDScores is funding the efforts around aggregating restaurant inspection data into a single, clean API.
As I think through the approaches above, I'm pushed to exercise what I can from these discussions, on my own infrastructure:
- My API Monetization - As I look to add more APIs to my stack, I'm being forced to clearly define all the moving parts of my API monetization strategy.
- Blockchain Delusions - While thinking again about my API pricing and credit layer, I'm left thinking about how the blockchain can be applied to API monetization.
The API Evangelist network is my research notebook. I search, re-read, and refine all the thoughts curated, and published here. It helps me to aggregate areas of my research, especially in the fast moving areas, where I am receiving the most requests for assistance. Not only does it refresh my memory of what the hell I've written in the last couple weeks, I also hope it gives you a nice executive summary in case you missed anything.
If you are looking for assistance in developing your API monetization strategy, or have your own stories you'd like to share, let me know. If you have any feedback on my stories, including feedback for the folks I'm talking to, as well as items missing from my own API monetization approach, or blockchain delusions--let me know!
See The Full Blog Post
07 Sep 2015
I believe in the ability of APIs to pull back the curtain of the great OZ, that we call IT. The average business and individual technology consumer has long been asked to just believe in the magic behind the tech we use, putting the control into the hands of those who are in the know. This is something that has begun to thaw, with the introduction of the Internet, and the usage of web APIs to drive applications of all shapes and sizes.
It isn't just that we are poking little holes in the corporate and government firewall to drive the next generation of applications; it is also that a handful of API pioneers like Amazon, Flickr, Twitter, Twilio, and others saw the potential of making these exposed resources available to any developer. The curtain was pulled back via these two acts: exposing resources using the Internet, and inviting in 3rd parties to learn about, and tap into, those resources.
Something that is present in this evolution of software development is trust. API providers have to trust that developers will respect their terms of service, and API consumers have to trust that they can depend on API providers. To achieve this, there needs to be a healthy dose of transparency, so API providers can see what consumers are doing with their resources, and API consumers can see into the operations and roadmap of the platform.
When transparency and trust do not exist, the impact of APIs begins to break down, and they become simply another tech tool. If a platform is up to no good, has ill intentions, is selling vaporware, or there is corruption behind the scenes, the API concept is just going to create problems for both provider and consumer. How much is exposed via an API interface is up to the API designer, architect, and ultimately the governing organization.
There are many motivations behind why companies open up APIs, and behind the reasons they make them public or not. APIs allow companies to keep control over their data, content, and algorithmic resources, while also opening them up so "innovation" can occur, or simply making them accessible to 3rd parties, bypassing the historical friction or bottleneck that is IT and developer groups. Some companies I work with are aware of this balance being struck, while many others are not aware at all--they are simply trying to make innovation happen, or provide access to resources.
As I spend some brain cycles pondering algorithmic transparency, and the recent concept of "surge pricing" used by technology providers like Uber and Gogo, I am looking to understand how APIs can help pull back the curtain that is in front of the many algorithms impacting our lives, in the same way APIs have pulled back the curtain on traditional IT operations and software development. As part of this thought exercise I'm thinking about the role Docker and other virtualized containers can play in providing us with more transparency into how algorithms make decisions around us.
When I deploy one of my APIs using my microservices model, it has two distinct API layers: one for the container, and one for what runs inside of it. Docker comes ready to go with an API for all aspects of its operations--here is a Swagger definition of it. What if all algorithms came with an API by default, just like each Docker container does? We would put algorithms into containers, and each would have an interface for every aspect of its operation. The API wouldn't expose the actual inner workings of the algorithm and its calculations, but would provide a complete interface for all of its functionality.
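To make the idea concrete, here is a minimal sketch of an algorithm wrapped in a container-style interface that exposes its operational surface--metadata, inputs, outputs, and invocation--without revealing the calculation inside. Every name here is hypothetical and purely illustrative; the surge-pricing function stands in for whatever proprietary logic a provider ships in the container.

```python
# Hypothetical sketch: an algorithm in a "container" that exposes an
# interface for its operations, but never its internals.

class AlgorithmContainer:
    """Expose an algorithm's operational surface, not its inner workings."""

    def __init__(self, name, version, pricing_fn):
        self.name = name
        self.version = version
        self._pricing_fn = pricing_fn  # hidden, like code inside a container

    def describe(self):
        """Metadata endpoint: what this algorithm claims to do."""
        return {
            "name": self.name,
            "version": self.version,
            "inputs": ["base_fare", "demand_ratio"],
            "output": "final_fare",
        }

    def invoke(self, base_fare, demand_ratio):
        """Invocation endpoint: run the algorithm, return only the result."""
        return self._pricing_fn(base_fare, demand_ratio)


# An illustrative surge-pricing algorithm the provider keeps private,
# with a claimed cap of 3x the base fare.
def surge_pricing(base_fare, demand_ratio):
    multiplier = max(1.0, min(demand_ratio, 3.0))
    return round(base_fare * multiplier, 2)

container = AlgorithmContainer("surge-pricing", "1.0", surge_pricing)
print(container.describe()["output"])  # final_fare
print(container.invoke(10.0, 2.5))     # 25.0
```

A consumer of this interface learns what goes in, what comes out, and which version answered--enough to build trust--while the multiplier logic stays as opaque as any code running inside a Docker container.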
How much of this API a company wished to expose would vary, just like with APIs today, but companies who care about the trust balance between themselves, their developers, and end-users could offer a certain amount of transparency to build trust. The API wouldn't give away the proprietary algorithm, but would give 3rd party groups a way to test assumptions, and verify the promises made around what an algorithm delivers, thus pulling back the curtain. With no API, we have to trust Uber, Gogo, and other providers about what goes into their surge pricing. With an API, 3rd party regulators, and potentially any individual, could run tests, validating what is being presented as algorithmic truth.
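The third-party verification described above could be sketched as a black-box probe: a regulator or consumer queries the pricing interface across a range of inputs and checks the provider's public claim--say, "surge never exceeds 3x the base fare"--without ever seeing the algorithm itself. The function below stands in for a remote API call, and all names and numbers are illustrative assumptions, not any real provider's API.

```python
# Hypothetical sketch: black-box verification of an algorithmic claim,
# using only the exposed interface.

def query_fare_api(base_fare, demand_ratio):
    """Stand-in for an HTTP call to a provider's pricing endpoint."""
    multiplier = max(1.0, min(demand_ratio, 3.0))
    return round(base_fare * multiplier, 2)

def verify_surge_cap(base_fare, claimed_cap, samples):
    """Probe the API across demand levels; flag fares above the claimed cap."""
    violations = []
    for ratio in samples:
        fare = query_fare_api(base_fare, ratio)
        if fare > base_fare * claimed_cap:
            violations.append((ratio, fare))
    return len(violations) == 0, violations

ok, violations = verify_surge_cap(
    base_fare=10.0, claimed_cap=3.0, samples=[0.5, 1.0, 2.0, 4.0, 10.0]
)
print("claim holds" if ok else f"claim violated: {violations}")  # claim holds
```

The point is that the verifier never needs the multiplier formula--only a dependable interface and a published claim to test it against, which is exactly the transparency-for-trust trade being described.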
I know many companies, entrepreneurs, and IT folks will dismiss this as bullshit. I'm used to that. Most of them don't follow my beliefs around the balance between the tech, business, and politics of APIs, as well as the balance between platform, developers, end-users, and what I consider to be an inevitable collision with government regulation. For now this is just a thought exercise, but it is something I will be studying and pondering more, and as I gather more examples of algorithmic, or "surge", pricing, I will work to evolve these thoughts, and solidify them into something more presentable.
See The Full Blog Post