The API Evangelist Blog
This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.
When I am reviewing API services and tooling, the majority of what I see is targeting the API elite--the most technical, and specialized of us in the API space. Rarely do I come across approaches that really speak to the average business person, but each time I talk to the Form.io team I am reminded that these tools can, and do, exist in the wild.
I regularly get walkthroughs from the team, so what they do is no surprise, but each time I see it in action, I'm moved by what is possible, and the inroads they are building into everyday businesses. Using Form.io you can build forms, but these aren't just forms--they are apps, and they are APIs, as well as being forms. When you are building a form, you are building an API, and when you hit publish, it publishes the API, with the form as the first web or mobile app that consumes the API.
Form.io gives you the capability to publish one or many of these apps (forms), and manage the data that is pushed out, pulled in, and aggregated from across your network of apps. Form.io does all the heavy lifting for you, while still speaking the common language of the form -- which almost everyone in the business world understands. I walked through a couple of Form.io's most recent case studies (which I'll talk about in future posts), but I wanted to take a moment to highlight that Form.io has made an API design, deployment, and management tool--except you don't realize that is what you are doing; as an end user, you are just building a form that meets your business need.
I'm in the middle of a sprint, where I am going through 50 of my main API stacks, to see what has changed, and who is still home. I'm always fascinated by the number of APIs that just fade away into a 301 redirect to a domain's home page. Some projects get gobbled up by domain squatters, while others almost rise to the level of API deprecation art.
I might get this one framed. I thought the background, combined with the message, was a great representation of the current state of API affairs. It's getting harder and harder to operate an API, keep it up and going, and live up to the hype and expectations.
I am going to start saving more snapshots of what happens when an API goes away--who knows, maybe someday I'll have an interesting collection.
I was going through the list of APIs that I depend on, auditing the services that I'm paying for, and trimming the budget where I can--a process that involves spending time on the pricing and plan pages for the APIs I pay for, understanding pricing. I have all of my internal APIs defined as an APIs.json collection, as well as the 3rd party APIs which I depend on. This allows me to easily navigate all the moving parts of these APIs, and quickly get at what I need during times like this.
As I was browsing the pricing pages, and evaluating the replacement of a couple of these systems, I wanted to better understand the terms of service, and licensing, for these APIs I am employing. I try to keep the legal department for all of the APIs I use indexed with APIs.json as well, something that allows me to then navigate to the legal side of these platforms. My APIs.json index for this IT collection gives me easy access to the legal side of my API integrations.
Amazon Web Services
Pop Up Archive
While these building blocks are not machine readable like the OpenAPI Spec, Postman, and other items I have indexed with APIs.json, they do provide me with a single place to go and find the pricing, TOS, privacy policies, trademark, branding, security, and support for the APIs I depend on to run my businesses.
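Since I lean on this index so heavily, here is a minimal sketch of what walking an APIs.json collection for these business and legal links might look like. The property type names (X-pricing, X-terms-of-service, X-privacy), and all of the names and URLs in the sample, are illustrative assumptions, not the actual index I maintain.

```python
import json

# A minimal sketch of walking an APIs.json index to collect the
# business and legal links for each API. The property type names
# checked here are illustrative assumptions.
def legal_links(apis_json):
    index = json.loads(apis_json)
    links = {}
    for api in index.get("apis", []):
        for prop in api.get("properties", []):
            if prop.get("type") in ("X-pricing", "X-terms-of-service", "X-privacy"):
                links.setdefault(api["name"], []).append((prop["type"], prop["url"]))
    return links

# A hypothetical, stripped-down APIs.json collection
sample = """
{
  "name": "My IT Stack",
  "apis": [
    {
      "name": "Amazon Web Services",
      "properties": [
        {"type": "X-pricing", "url": "https://example.com/pricing"},
        {"type": "Swagger", "url": "https://example.com/openapi.json"}
      ]
    }
  ]
}
"""

print(legal_links(sample))
```

The point is just that once these links live in one machine readable index, getting at the legal side of every integration becomes a single loop, instead of a scavenger hunt across provider sites.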
I use OpenAPI Specs to help guide my integrations, from the generation of client code, to loading into my Postman or other web API client. I'm also increasingly using other secondary machine readable elements like Twitter users, Github users, and Blog RSS to stay in tune with the heartbeat of these APIs. The Twitter and Github APIs allow me to pull details about these companies' operations in real time, something I'm hoping I can do with pricing, security, TOS, and privacy updates some day. #hopeful
Wearing my old IT director hat (which my therapist says I'm not supposed to), it just makes sense to have an up to date, machine readable APIs.json index of the APIs I depend on. In my opinion, we should have real time monitoring data from my API Science account, performance data from my APIMetrics account, and accurate data on plans and pricing for services, and what I am spending. Currently, only a handful of these providers give me analytics and API usage data, let alone other vital details on spend, errors, security, and other critical aspects of my integration--if I do get it, I have to go get it manually, and only a small group have APIs for these data points.
Do you have a single dashboard, list, or napkin with the critical business and legal aspects of API integrations? Or are you just leaving the management of these details to your IT and developers?
I see a lot of APIs in my daily work. The diverse number of ways in which APIs are being used is one of the things that keeps my ADD brain interested in all things APIs. While the technical, business, and politics of the API game have infinite ways to keep me paying attention, I find myself engaged more lately, not just because of my belief in the potential of APIs for good, but because of the potential for misuse, which is growing at a troubling pace.
Increasingly I am stumbling across API implementations that, when I am initially learning about them, immediately interest the 12 year old boy in me, but then make the 43 year old skeptic in me recoil, like I am seeing a car accident on the highway. Insitu's Tungsten Software Development Kit has this effect on me -- Insitu makes technology for unmanned aerial vehicles (UAV).
I think the tagline on their home page sums it up well--the API opportunity is at this layer of civil, commercial, and defense deployment of UAVs.
I think their own description of the Tungsten Software Development Kit speaks well to what APIs can do for almost any device, including UAVs:
Our Tungsten Media Toolkit offers the ideal software development kit (SDK) for companies and platforms that demand flexible, superior, real-time digital media solutions. Designed by software developers, Tungsten can be used for a wide array of media purposes and applications and offers developers an extensive application programming interface (API) to operate on media from a variety of sources, be it cameras, network streams, archives, and beyond. Insitu Mission Systems addresses your metadata-rich media challenges by applying the Tungsten toolkit and deep expertise in tactical data collection and processing.
With the awareness I've seen realized from using APIs between the back-end and front-end web and mobile applications, I can only imagine what is possible when you open up API access to a drone, its camera, and all its information gathering capability. It is exciting to think about from a technical perspective--that is, when I have my technological blinders on, which is default mode for me, as a white male software architect living in the US.
However, when I take these blinders off, and think about the potential for abuse in all three areas listed above, I can't help but worry for our future. APIs are a win for these platforms, and their partner developers, but when it comes to transparency and accountability at this layer I've seen very little action. Which is part of the reason I am writing about it. I'm all for APIs being a thing in UAVs, but part of my motivation isn't about the drone opportunity, it's the opportunity for transparency and accountability to be baked into this layer by default, so that we help minimize the number of negative outcomes.
While UAV usage worries me, I think the cat is out of the bag, and there is no stopping this type of technology from being a thing. We just have to get better at ensuring the mechanisms are in place to ensure platform providers, and their developers, are doing the right thing. The opportunity for misuse is extremely high, but I am hoping it is something that can be reduced with an open API approach. #NotHoldingBreath
I just published the OpenAPI Spec I created for the Human Services Data Specification (HSDS) into one of my default portals, where, once the OpenAPI Spec is indexed via the portal's APIs.json, I get a ready to go landing page, documentation, and other tooling for supporting the API. I have been pushing my API documentation beyond the (now) standard issue Swagger UI, keeping the OpenAPI Spec core, but evolving the UI portion using Jekyll and Liquid.
I have a default API docs implementation, which loops through all OpenAPI Specs included within a project, and renders HTML documentation for each path, verb, etc. I'm still working on how I recreate the dynamic functionality brought to the table by Swagger UI, but so far I'm really, really enjoying the flexibility with the user interface, and overall experience I get using this approach--I know the interactive portion will come.
One of the things I'm enjoying being able to do is apply additional, external elements to the OpenAPI Spec, by augmenting it with independent, APIs.json defined schema. One example of this I am calling API filters--a simple filter collection defined in both YAML and JSON, that is indexed along with the OpenAPI Spec, within the APIs.json index. For the Open Referral API I wanted to provide the complete documentation, but also play around with different, filtered views of the API, designed for specific audiences.
The first entry I made into API filters, I called "learn the basics", only showing the core resources that were available via the Open Referral API, eliminating all of the noise.
This API docs view gives me just the handful of core endpoints, filtering out the other 60+ APIs that someone who is just learning about the Open Referral API does not need to be bothered with. Next, I wanted to focus in on a specific area of the API, like services, and only show the summary elements that most people will care about when it comes to these services. I'm calling this one "services summary":
The Liquid driven HTML API docs are the same for the full documentation, for learn the basics, and for the services summary. All three documentation elements are also driven from the same OpenAPI Spec. The only thing I did was specify one of the API filters by name at the top of each specialized docs page--the Liquid template handles the rest. It's still pretty crude, and I'm sure it will need a lot of polishing, but it provides me with a simple, machine readable way to filter out the endpoints I do not need, accomplishing what I set out to do.
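To illustrate the idea behind API filters, here is a rough Python sketch of applying a named filter to an OpenAPI Spec, producing a reduced "learn the basics" view. The filter file structure, the spec fragment, and the path names are my own illustration--in my actual setup the filtering happens in Liquid templates, not Python.

```python
# A rough sketch of applying a named "API filter" to an OpenAPI Spec,
# producing a reduced view of the paths for a specific audience.
def apply_filter(spec, filters, filter_name):
    allowed = set(filters[filter_name]["paths"])
    filtered = dict(spec)
    filtered["paths"] = {path: ops for path, ops in spec["paths"].items()
                         if path in allowed}
    return filtered

# Illustrative fragment of a spec (not the actual Open Referral definition)
spec = {
    "swagger": "2.0",
    "info": {"title": "Open Referral API (illustrative)"},
    "paths": {
        "/organizations/": {"get": {"summary": "List organizations"}},
        "/services/": {"get": {"summary": "List services"}},
        "/services/{id}/complete/": {"get": {"summary": "Complete service record"}},
    },
}

# The filter collection, which would live in its own YAML / JSON file,
# indexed alongside the spec within the APIs.json
filters = {
    "learn-the-basics": {"paths": ["/organizations/", "/services/"]},
}

basics = apply_filter(spec, filters, "learn-the-basics")
print(sorted(basics["paths"]))
```

The full OpenAPI Spec stays untouched--each filtered view is just a different lens over the same definition, which is exactly why the filters can live outside of the spec itself.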
I am using schema approaches like this more lately. An approach that, when indexed along with the OpenAPI Spec in an APIs.json file, gives me other layers and dimensions that I can apply to not just documentation, but almost any other stop along the life cycle. To me, an important part of this is that these elements do not live embedded within the OpenAPI Spec, living independent of the machine readable definition of the API. Each one lives in its own schema file, and when indexed alongside each API definition, within the APIs.json, interesting things happen--all without cluttering up the OpenAPI Spec, keeping it purely about the API.
My goal in this work is to equip the other, less technical advocates for the Open Referral format with a toolbox of documentation, visualizations, and other snippets that will help them articulate what the API can do. Since all of these API documentation snippets are APIs.json and OpenAPI Spec driven, using Liquid and Jekyll to render, they are something that anyone could put to use, for any API.
As I was preparing for my talk with Dan from Open Referral, I published some of my thoughts on the organization, and the Human Services Data Specification (HSDS). One of the things I did as part of that work was generating a first draft of an OpenAPI Spec for the Open Referral API. To create that draft, I used the existing Ohana API as the base, exposing the same endpoints as the Code For America project did.
Over the last couple of days, I spent some more time getting to know the data model set forth by HSDS, and got to work evolving my draft OpenAPI Spec to be closer in alignment with the data schema. To do this I took the JSON Schema for HSDS that was available on Github, and used it as a framework to add any missing elements to the API definition, resulting in almost 70 API paths, in support of almost 20 separate entities defined in HSDS.
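The formulaic generation I describe might be sketched like this, with a few illustrative entity names standing in for the ~20 HSDS entities--the naive pluralization and path naming here are my own assumptions, not the actual HSDS schema or the design the community should settle on.

```python
# An illustrative sketch of formulaic path generation: for each entity
# pulled from the JSON Schema, emit standard CRUD paths for the
# OpenAPI definition. Pluralization and naming are naive assumptions.
def crud_paths(entity):
    plural = entity + "s"  # naive pluralization, illustration only
    return {
        f"/{plural}/": {
            "get": {"summary": f"List {plural}"},
            "post": {"summary": f"Add a {entity}"},
        },
        f"/{plural}/{{id}}/": {
            "get": {"summary": f"Get a {entity}"},
            "put": {"summary": f"Update a {entity}"},
            "delete": {"summary": f"Delete a {entity}"},
        },
    }

# A few example entity names standing in for the full HSDS entity list
paths = {}
for entity in ["organization", "location", "service"]:
    paths.update(crud_paths(entity))

print(sorted(paths))
```

Run across all of the entities in the schema, a loop like this is how you end up with a formulaic set of nearly 70 paths--a starting point to evolve, not a finished design.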
| Open Referral API | OpenAPI Spec |
This is a very formulaic, generated, representation of what the Open Referral API could look like. While I have lots of ideas on how to improve on the design, I want to be cautious to not project too much of my own views on the API design--something the community should do together. I can tell a lot of work went into the current specification, and the same amount of energy should go into evolving the API design.
I accomplished what I wanted: learning more about HSDS, and getting more familiar with the entities at play, while also producing a fairly robust representation of what an API could look like for Open Referral. It has way more details than the average implementation will need, but I wanted to cover all the bases, providing full control over every entity and relationship represented in HSDS. Most importantly, I was able to get more intimate with the specification, while also producing an OpenAPI Spec that will play a central role throughout this project.
Next I'm going to play with some minimum viable representations, and other ways to tell stories and talk about HSDS. I'd like to eventually have a whole toolbox of YAML / JSON driven UI elements, like the one I pasted in this post, allowing me to describe all the moving parts of the Open Referral work. More posts to come, as I work through my thoughts, and play with possible designs for the Human Services Data Specification (HSDS).
I had another conversation with an API service provider today about their freemium accounts not converting. I've been sharing my thoughts about these freemium service account conversions, as I work to understand a shift that is occurring, and what my own feelings are. At this point, I feel like we (providers and consumers) are all more responsible for freemium not working than the concept itself is.
Some of the poor behaviors of service providers:
- No Business Models - Everything being free, and not having a coherent business model.
- No Ascension Possible - There is a free account, maybe one more paid level, and then nothing attainable beyond that.
- Credit Card Trap - Requiring a credit card for demos and trials, without an easy way to avoid being billed.
Some of the poor behaviors of service consumers:
- No Communication - Never actually engaging in a conversation with platform providers.
- Multiple Accounts - Using multiple accounts to get around limitations of individual accounts.
- Drop Service - Only using the service to get what you need, then migrating off of it.
I feel that the freemium model, when it's bundled with a coherent monetization strategy, can work. However, I feel like service providers will ultimately have to tighten down their controls, and do like Best Buy has done, requiring that developers get to know them before they get any meaningful API access. #MakesSenseToMe
Sadly, when the bad behavior by developers and API consumers is coupled with a push by investors to focus on selling to the enterprise, we are going to see free and freemium tiers go away. We'll see plenty of quality services emerge, but unless you have enterprise resources behind you, you won't be getting access to them. Which in my opinion will further shift the way software is developed and consumed, back toward the old way we did things before SaaS.
I'm guessing many of you business folks will just file this under markets working things out. However I think it is more about those with resources maintaining the upper hand. It bums me out that 1000lb gorillas can offer free services that lower the bar so low nobody else can survive. These aren't healthy market behaviors or forces--it is just bad for everyone without the resources to survive in these environments.
Another realization I've had about all of this is that a whole generation of developers have grown up in this environment. I talked to a "senior engineer" yesterday, who told me about his long career, which started in 2007. It left me realizing that many developers think free software is normal, and that 99 cent apps are a good thing--buying into the unrealistic, market disrupting, and dominating behavior of the Googles out there.
Honestly I don't give a shit anymore. I won't be writing about any solutions that my small business can't afford, and can't kick the tires on before swiping my business credit card. Who am I though? Nobody. As a real small business, I will always spend the time to get to know my service providers, as well as my service consumers, and I will be able to find money in the cracks--as I have since 1988. #onward
I am spending time evaluating the evolution of the three applications offered by Restlet, as they work to bring the experience across API Spark, Restlet Studio, and DHC into closer alignment. To describe what Restlet does in my API terminology, API Spark is an API deployment solution, Restlet Studio is an API design solution, and DHC is an API client solution. These are extreme simplifications, but they help me keep the fast moving API space somewhat organized (for me), and help me share stories that (hopefully) make sense (for you).
Every couple of weeks I spend an hour or two talking with Jerome Louvel (@jlouvel), the founder of Restlet, about their road map, and where the wider API space is going. Similar to my own work trying to map out the API life cycle, Restlet is trying to evolve their own suite of API solutions into more of a life cycle management solution. Restlet touches all of the core areas of the life cycle, including design, definitions, deployment, management, and client, while also making moves into testing, and beyond. I told Jerome I would spend more time thinking about this journey that they are on, and provide any thoughts I can.
While I was playing with DHC, making some calls to my blog API, I kept being pulled down to the bottom tab, below my API response information, where there are tabs for history, assertions, HTTP, and docs. While the API request and response is very technical, at the bottom of the DHC client, I see elements of the business and politics of APIs.
For me, the features Restlet has exposed as tabs in the DHC footer resemble how APIs are acting as business contracts. The docs represent the contract language, the HTTP tab is the details of the exchange, assertions are what is expected of the contract, and history is the recording of the contracted exchange (or lack of one). What kept pulling my eyes to the bottom of the screen in DHC was the concept of assertions, which the dictionary defines as:
- the act of stating clearly and strongly or making others aware
- something stated as if certain
What assertions are being made of any API? Assertions span the three tiers of API operations for me, crossing the technical, business, and politics. In theory assertions shouldn't just be API developer-centric, they should also reflect the needs of API consumers, business, and end-users as well. As I'm playing with assertions in DHC, I'm thinking how can we open up these definitions to business groups? How do we collaborate and share assertions across our API teams? Or possibly with an entire API community or industry?
While assertions are applied at the specific API level in DHC, I can't help but see the need for a library of company-wide assertions, some unique to specific APIs, with others existing more universally and applied across many APIs. This leaves me thinking that assertions need to stand on their own, independent of any single API, be portable and shareable, and always be written in as plain English as possible--something that can be agreed upon across all stakeholders involved with an APIs contract.
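To make the idea of portable, plain English assertions a little more concrete, here is a hypothetical sketch of what such a library could look like, applied to an API response. The structure is entirely my own illustration of the concept--it is not the DHC assertion format, or any existing standard.

```python
# A hypothetical sketch of portable assertions that live independent
# of any single API, written plainly enough that business stakeholders
# could read, share, and agree on them.
assertions = [
    {"name": "Responds successfully", "field": "status", "op": "eq", "value": 200},
    {"name": "Returns JSON", "field": "content_type", "op": "contains", "value": "json"},
]

def check(response, assertion):
    # Evaluate one assertion against a response, field by field
    actual = response.get(assertion["field"])
    if assertion["op"] == "eq":
        return actual == assertion["value"]
    if assertion["op"] == "contains":
        return assertion["value"] in str(actual or "")
    return False

# A simplified stand-in for an actual HTTP response
response = {"status": 200, "content_type": "application/json"}
results = {a["name"]: check(response, a) for a in assertions}
print(results)
```

Because the assertions are just data, the same library could be applied across many APIs, versioned, and shared with an entire team or community--which is the portability I am after.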
I feel like assertions will be a growing dimension of APIs. Similar to APIs needing the "skills" required for simple, meaningful implementations in bot and voice enabled applications, assertions are the contract level promise of what an API can do, will do, and then actually is proven to do reliably. Anyways, just thinking through my ideas, in preparation for my next conversation with Jerome. Having partners that I can brainstorm with like this helps me work through my ideas, and hopefully they can cherry pick some useful items from my ramblings for their road map.
It always makes me smile, when I talk to someone about one or many areas of my API research, sharing how I conduct my work, and they are surprised to find how many areas I track on. My home page has always been a doorway to my research, and I try to keep this front door as open as possible, providing easy access to my more mature areas like API management, all the way to my newer areas like how bots are using APIs.
From time to time, I like to publish my API life cycle research to an individual blog post, which I guess puts my home page, the doorway to my research, into my readers' Twitter streams and feed readers. Here is a list of my current research for April 2016, from design to deprecation.
I am constantly working to improve my research, organizing more of the organizations who are doing interesting things, the tooling that is being deployed, and relevant news from across the area. I use all this research, to fuel my analysis, and drive my lists of common building blocks, which I include in my guides, blueprints, and other white papers and tutorials that I produce.
I am currently reworking all the PDF guides for each research area, updating the content, as well as the layout to use my newer minimalist guide format. As each one comes off the assembly line, I will add to its respective research area, and publish an icon + link on the home page of API Evangelist--so check back regularly. If there is any specific area you'd like to see get more attention, always let me know, and I'll see what I can do.
API definitions like OpenAPI Spec, API Blueprint, and Postman have been gaining in popularity over the last couple of years, mostly because of their ability to deploy interactive documentation like Swagger UI. However, the API providers who have been using them the longest have also realized these machine readable definitions can be applied effectively at almost every step along a modern API life cycle, from design to deprecation.
I'm always encouraging companies who are selling software services to the API space (aka API service providers) to make sure they have APIs for their entire stack, as well as speak in as many of the leading API definition formats as they possibly can. To help in this effort, I try to regularly showcase the API service providers who are doing it right--this round, it's the folks over at APImetrics.
The screen that comes up, when you go to add new API calls to the monitoring service is exactly what I am talking about, allowing me to get up and running using the API definition format of my choosing.
I am given the option to manually add an API, or do a bulk import of the APIs I wish to monitor using the service. I'm given the option of importing WSDL, OpenAPI Spec, RAML, Blueprint, and Postman, which reflects the leading API definition formats any API service provider should be speaking by default. If you need help enabling this in your services, I recommend talking to the APIMATIC folks about using their API Transformer.
API definitions are quickly becoming the central contract that gets passed around among technical folks, and increasingly with business units as well. No matter where you exist on the API life cycle, from API design all the way to deprecation, your customers should be able to onboard and offboard using all of the modern API definition formats--enabling your users to get up and running in seconds, rather than minutes, hours, or not at all.
I see quite a few rogue APIs, and often rogue SDKs, but this is the first time I've come across a rogue embeddable button. While browsing Product Hunt this morning I came across this rogue Snapchat embeddable button, which allows you to promote your Snapchat account on any website.
It just makes me sad that platforms not only ignore platform fans and advocates like this, but actively work to lock things down to prevent this kind of serendipity from happening. Why would you want to shut down people who are looking to promote your tool? You should be enabling them.
If Snapchat had a proper API portal, they would take this signal, internalize it, and turn it into an official set of embeddable tools for the messaging platform. #derp
One common thing you hear from the growing number of integrations and bots that are leveraging the Slack API is all about injecting some specific action into the platform and tooling we are all already using. Startups like Current, who are providing payments within your Slack timeline, use the slogan, "transact where you interact". I began to explore this concept, which I call API injection, and it is something I'm sure I'll be talking about over and over in the future, with the growth in bot, voice, and other API enabled trends I am following.
The concept is simple. You inject a valuable API driven resource, such as payments, knowledge base, images, video, or other, into an existing stream, within an existing platform or tool you are already putting to use. It is not a new concept--it is just seeing popularity when it comes to Slack, but it has really been happening for a while with Twitter Cards, and chatbots, which have been around for some time.
I'm seeing another incarnation of this coming from my friends over at Blockspring, who I've showcased for some time for bringing valuable API resources to spreadsheet users. Blockspring just released Google Sheets Templates, which enables your spreadsheets to "do useful things", and "get data, use powerful services, and do things you didn't think were possible in a spreadsheet." Instead of Slack being the "existing tool", it is Google Spreadsheet, but the approach is the same--bring useful API driven resources to users, where they already exist and work.
As I continue to watch the latest bot evolution, a stark contrast has emerged between the types of bots we've seen on Twitter and Slack--a contrast I feel is being defined by the tone and business viability of each platform. Twitterbots are much more whimsical, about stories, poetry, and other information, where Slackbots are very much productivity and business focused, making Slack a prime target for the next wave of startups, and VCs.
I think there is a big opportunity to deliver valuable API resources into the timeline of Slack users. I think there is an even bigger opportunity to deliver valuable API resources into the worksheet of the average business spreadsheet user. The problem is that these opportunities also mean there will be a significant amount of crap, noise, and pollution injected into these channels. I'm just hoping there will be a handful of providers who can do this right, and actually bring value to the average Slack user, as well as the every day spreadsheet worker--you know who you are.
I am regularly reminded of the wide spectrum of what API means to any single person. What is API, and what APIs enable, are all in the eye of the beholder, with only a handful of common aspects shared by any single group of people. This is one of the things that make it very difficult to answer the common newcomer question of where they should start with APIs. This is what makes it so difficult for APIs to ever rise to the expectations of leading architects, and API visionaries.
For me, APIs are about enabling API providers to open up access to their resources, and empowering API consumers to get at the resources they need to be successful in their every day worlds. Some folks out there enjoy regularly reminding me that APIs are not for everyone, and that they should only be used by a handful of sanctioned tech practitioners, to facilitate the technical, business, and political / ideological motivations of these pre-ordained--my dreams of enablement and empowerment are nice, but they are just not reality.
I've heard this from day one of API Evangelist in 2010, and I'm sure I will hear it for some time to come. Some days these currents are strong, and get me down, but other days there are small rays of light that keep me hopeful (enough) to keep on, keep'n on. One of these rays of light in my world currently is watching my friend Tom Woodward (@twoodwar) work through his world, after the university API workshop I did in North Carolina.
Tom has been working through his own thoughts on what a personal API means to him, via his blog. Something that has seriously turned him on to the potential of APIs:
The folks I talk about above, who simply see APIs as a tool of the API architect, developers, and IT across startups and the enterprise, will not understand what I am talking about. One person using APIs? They see no opportunity. This will always be dismissed as an anomaly. This isn't how APIs are done, this isn't how technology happens. For me, this is API. This is how APIs will enable and empower, and help individuals be more successful in the companies they work at, the small businesses they run, and across their professional existence--what Tom is doing is API in my book.
This post is just a reminder to me, to not dwell on the people who see APIs as the pipes in which all our digital resources flow, but are pipes that SHOULD NOT be visible to everyone, allowing only the sanctioned class of API elite, developers, startup and enterprise to understand. This vision of API, which is quickly spreading as one popular view, as Silicon Valley continues its shift in focus towards selling to the enterprise, is not a future of APIs that I will accept, anymore than Facebook being the future of the web (OMG, have you seen the amazing things they are doing in Africa, with those poor people -- ack).
There is no right or wrong here. I am just reminded that there are some very differing views of what APIs are, born out of some very different motivations which I do not share. There are many, many reasons why we need APIs, resulting in some very different concepts around what an API is, and what an API can do. I just need to regularly renew my faith in what APIs are all about, and Tom's journey is giving me one chance to do so.
Many of the core areas of my API research, and the common building blocks of the API life cycle that I talk about regularly, often seem trivial to the technically inclined, or the purely business focused segments of my audience. To many, having a road map might be a thing you have when developing and deploying an API, but whether you share it publicly really doesn't matter. I'd say the technical folks often don't even think of it, while the more business focused individuals often deliberately choose not to, seeing it as giving away too much information to your competition.
I think Slack nails the reason why you want to open up and share your API road map. I can talk about this kind of stuff until I'm blue in the face, but people just don't listen -- they need leadership like Slack brings to the table. In their post today, they open with:
We know that being a developer is hard, and building on a platform is not a decision to be made lightly. Many platforms have burned developers and we frequently see that risk highlighted. This is our response.
They nail the (often elusive) promise of the API ecosystem:
An ecosystem, a real platform, is shared. We are growing fast, but no one company alone could grow fast enough to meet the amount of potential in front of us. We are working hard, but no one team can work hard enough to meet the demand that lies before us all. Instead, we are building a platform where this potential, this demand, is shared. As we grow, developers are able to succeed with us; sharing our customers and joining us in changing the way people do work. In turn, our customers are delighted, new customers have even more reason to use Slack, and the cycle continues.
They nail the reality of the API ecosystem, and how it is a shared experience:
An ecosystem in its healthiest form creates a virtuous cycle. Platforms do fail. We’ll make mistakes, but we’re building for something much greater. We are building for a future where Slack is dwarfed by the aggregate value of the companies built on top of it. This is our success as a platform -- when the value of the businesses built on top of us is, in sum, larger than we can ever be.
They connect the reality of an ecosystem with having a public road map:
So, today we’re sharing our platform product roadmap. It is a small step in equipping you to claim this opportunity with us. There are three major platform product themes to highlight: app discovery, interactivity and developer experience -- you can see more on this card.
Slack doesn't stop there, throwing in an idea showcase to help as well:
We’re also sharing what we’re calling an Ideaboard -- a list of useful ideas that could be built into Slack apps per conversations we’ve had with our customers. [...] The goal of the Ideaboard is to continue this momentum by creating a bridge between our developer community and our customers’ needs.
As Slack mentions, this won't be perfect. They don't have all the answers. A public road map and idea board might not ultimately make or break Slack as a platform, or ensure the community continues to evolve into a full blown ecosystem. Shit can go wrong at any point, but having a public road map, and an active, truly contributing idea board, will go a long way towards positive outcomes.
If you aren't allowing input from your API community into your road map, truly considering this input as you craft your road map, and then sharing the resulting plan with your community, how can you ever expect them to stay in sync? It doesn't mean that Slack will listen to everything, and every developer's opinion will be included in the road map. It will, however, put Slack into a more open state of mind, and go a long way to set the right tone in the API ecosystem, building trust with the individuals and businesses who are building on their platform.
What tone has been set in the community around the APIs you provide? What is the tone in the communities you exist in as an API consumer?
It is pretty easy to design, define, and deploy APIs these days, and I get a number of folks who approach me with questions about how to get going with the operations and management side of things. While each company, and API provider, will have different needs, I have a general list of the common building blocks used by the leading API providers I track on across the API sector.
So that I have an up to date URL to share with a couple of my partners in crime, I wanted to organize some of the common building blocks across my almost 50 areas of API research into a single list that can be considered when anyone is planning on deploying an API. For this guide, I wanted to touch on some of the building blocks you should consider as part of your central API developer portal, documentation, and other elements of management and operations--what I feel should be a minimum viable presence for successful API providers.
Taking API Inventory
Taking inventory of what web services, and APIs may already exist, be in use, or are available within an organization, providing a master catalog of current resources, that can be put to use, and evolved.
- Internal APIs - What existing APIs are in operation and use by internal groups?
- Public APIs - What public APIs are currently available for use?
What is the process for on-boarding new users? Walk through what a new user will experience, looking at each step from landing on the home page, to having what I need to make my own API call. Reduce as much friction as you can, making on-boarding as fast as possible.
- Portal - Is the portal publicly available, or just a central portal on a private network?
- Getting Started - Does this API have a getting started guide applied to its operations?
- Self-Service Registration - Is this API available for self-service registration?
- Sign Up Email - Do API consumers receive an email upon signup for an account?
- Best Practices - Does this API have a best practices page applied to its operations?
- FAQ - Does this API have a frequently asked questions (FAQ) page applied to its operations?
- Google Authentication - Is Google Authentication available for platform signup and login?
- Github Authentication - Is Github Authentication available for platform signup and login?
- Facebook Authentication - Is Facebook Authentication available for platform signup and login?
The on-boarding experience has to have as little friction as possible, and feel like what API consumers are already used to when they are putting other leading API platforms to use. Do not re-invent the wheel, or introduce obstacles into the API on-boarding process.
What is provided when it comes to documentation for the platform? There are a number of proven building blocks available when it comes to API documentation, providing the technical details of what an API can do.
- List of Endpoints - Is there a simple list of endpoints available?
- Static Documentation
- Is there documentation for the API?
- Is Slate used for API documentation?
- Interactive Documentation - Is there interactive documentation available for the API?
- Error Response Codes - Are error response codes and detail documented anywhere?
- Crowd Sourced Updates - Does the platform allow the community to edit, and submit changes to documentation using Github, or other mechanism?
- Notifications - Are there notifications that are sent out as part of any change that is made to documentation?
Are there small, simple, usable samples in a variety of programming languages, and potentially for a variety of platforms, demonstrating each API call available via the platform?
- PHP - Are there PHP samples for each endpoint?
- Python - Are there Python samples for each endpoint?
- Ruby - Are there Ruby samples for each endpoint?
- Node.js - Are there Node.js samples for each endpoint?
- C Sharp - Are there C Sharp samples for each endpoint?
- Java - Are there Java samples for each endpoint?
- Go - Are there Go samples for each endpoint?
- Scala - Are there Scala samples for each endpoint?
Generally samples will have minimal authentication elements, and reduce any external dependencies, focusing in on a specific API endpoint call, in a particular programming language.
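To make this concrete, here is a minimal sketch of what one of these per-endpoint samples might look like, in Python. The host, the /v1/photos endpoint, and the api_key query parameter are all hypothetical placeholders I made up for illustration, not any real provider's API.

```python
# A minimal, self-contained sketch of a per-endpoint code sample.
# API_HOST, the endpoint path, and "api_key" are hypothetical.
from urllib.parse import urlencode, urlunparse

API_HOST = "api.example.com"   # hypothetical host
API_KEY = "YOUR_API_KEY"       # consumers supply their own key

def build_request_url(endpoint, params=None):
    """Assemble the full URL for a single API call."""
    query = dict(params or {})
    query["api_key"] = API_KEY  # simple key-based auth in the query string
    return urlunparse(
        ("https", API_HOST, endpoint, "", urlencode(sorted(query.items())), "")
    )

# One sample per endpoint, each focused on a single call:
print(build_request_url("/v1/photos", {"page": 1}))
```

A real sample would then hand this URL to whatever HTTP client the language favors, but keeping the sample down to one focused call, with no external dependencies, is the point being made above.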
What SDKs are available? These SDKs might be hand crafted, or auto generated, but should be available in a variety of languages, encouraging the jumpstarting of integrations by as wide an audience as possible.
- PHP - Is there a PHP SDK for the API?
- Python - Is there a Python SDK for the API?
- Ruby - Is there a Ruby SDK for the API?
- Node.js - Is there a Node.js SDK for the API?
- C Sharp - Is there a C Sharp SDK for the API?
- Java - Is there a Java SDK for the API?
- Go - Is there a Go SDK for the API?
- Scala - Is there a Scala SDK for the API?
It is getting more common for API providers to use an SDK generation service, using the machine readable definitions of APIs as the contract to follow. Even with the overhead of SDK generation, development, and support, they are still widely used to help speed up application and system integration.
There are many overlaps with mobile in the regular SDK portion of this research, but some providers are publishing more resources specifically dedicated to the support of mobile integrations.
- Mobile Overview - Is there a page dedicated to the platform's mobile integration resources?
- iOS SDK - Is there an iOS SDK?
- Android SDK - Is there an Android SDK?
Not all platforms will need to support mobile integrations, but a growing number of the APIs being deployed are deployed to support mobile efforts. There are a number of other considerations, but these three areas represent the minimum viable considerations.
What resources are available for managing code across the platform? This area focuses on just the services, tooling, and processes associated with code management, not always the code itself.
- Code Page - Is there a page in the portal dedicated to the code available for a platform?
- Github - Is Github used to manage code that is part of API operations?
- Application Gallery - Is there an application gallery available for applications that are built on top of the API?
- Open Source - Is there open source code, and are there applications available as part of API operations?
- Community Supported Libraries - Does the platform accept and list community supported libraries?
Github should play a central role in the code management of any modern API platform. Much like Twitter, Facebook, and LinkedIn will play an important role in your communication and support efforts, Github is key to the management of code resources at all levels of operations.
What support services are available 24/7, that developers can take advantage of without requiring the direct assistance of platform operators?
- Forum - Is there a forum available that provides self service support options?
- Forum RSS - Does the forum have an RSS feed?
- Stack Overflow - Is Stack Overflow used as part of the support strategy for the platform?
- Knowledge base - What sort of content directory and knowledge base is available to search and browse?
Self-service support is always present in successful API platforms. Like the web, APIs are a 24/7 operation, and if developers cannot get direct support 24/7, there should be a wealth of self-service items available.
What support services are available that developers can take advantage of, that involve direct employee attention? Even though APIs should be self-service where it makes sense, direct support will always play an important role in setting the tone for the community.
- Email - Is there an email for API consumers to receive direct support?
- Contact Form - Is there a contact form for API consumers to receive direct support?
- Phone - Is there a phone number available for API consumers to receive direct support?
- Ticket System - Is there a ticketing system available for API consumers to receive direct support?
- Social - Is community support also offered via existing social network profiles and channels?
- Office Hours - Are office hours available, and posted for API consumers to take advantage of?
- Calendar - Is there a calendar of events for office hours, and other support related events?
- Paid Support Plans - Are there paid support plan options available for the platform?
APIs are a business, and you have to provide support. Many savvy API consumers will browse the blog, Twitter account, and other support channels looking for the right amount of activity and assistance present -- if it's not there, they'll move on.
The Road Map
How are we planning, and communicating, updates to the platform? Providing a map of how things will change across the platform, from versioning of the API itself, to documentation, and other aspects of platform operations.
- Road Map - Is there a road map shared with API consumers?
- Idea Submission - Can API consumers and partners submit ideas for inclusion in the road map?
A road map plays a critical role as a sort of valve or joint where platform provider and platform consumer engage. It pushes the provider to consider ideas from the community, bringing the platform into closer alignment with consumers, and goes a long way in building trust with the community.
What is currently happening on an API platform, providing a real time heart beat of the current status of API resources. There are a handful of common elements platforms use to stay in tune with their platform operations, while also sharing with the community.
- Status Dashboard - Is there a status dashboard available to API consumers?
- Status RSS - Does the status dashboard have an RSS feed?
- Status History - Is status history archived, and available for review alongside the current status?
When done right, platform status shared with the community can send the important signal, on a regular basis, that all is well on a platform -- something that will be echoed across the platform and social web, eventually reaching others who might become new consumers.
What has already happened with a platform, providing a single archive of all changes made to the platform, for consumers to review at any time, in an easy to find location.
- Change Log - Is there a change log available for API consumers to review, to better understand what changes have been made?
- RSS Feed - Is there an RSS feed for the platform change log, allowing users to subscribe to changes as they are made?
- Notifications - Are notifications sent out about changes to the road map, and the status of overall operations, that will impact API consumers?
- Emails - Are email notifications sent to API consumers when there is a change in the road map or status of the API platform?
An active change log is one of the clear signs that a platform is active, and something you can depend on. The record that exists across a platform's road map, status, and change log will help set the tone for an API community. A platform where these elements are missing, have big gaps in information, or are out of sync shows signs of wider illnesses that may exist across platform operations.
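Since both the status dashboard and the change log above call for an RSS feed, here is a hedged sketch of how a consumer might watch one programmatically. The feed below is a made-up example I wrote for illustration; a real integration would fetch the platform's actual feed URL.

```python
# Parsing a platform change log RSS feed so API consumers can watch
# for changes programmatically. SAMPLE_FEED is illustrative only.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example API Change Log</title>
  <item><title>v1.2: added /photos endpoint</title><pubDate>Mon, 06 Jun 2016 00:00:00 GMT</pubDate></item>
  <item><title>v1.1: deprecated XML responses</title><pubDate>Mon, 02 May 2016 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def changelog_entries(feed_xml):
    """Return (title, pubDate) pairs for each change log item."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("pubDate"))
            for item in root.iter("item")]

for title, date in changelog_entries(SAMPLE_FEED):
    print(date, "-", title)
```

Anything this simple to consume lowers the cost of staying in sync, which is exactly why the RSS building block shows up so often across the platforms I track on.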
What communication elements are available as part of the overall feedback loop for an API platform? There should be at least a minimum viable communications presence, otherwise it is unlikely anyone will learn that a platform exists.
- Blog - Is there a blog for API communications?
- Blog RSS Feed - Does the blog have an RSS feed?
- Twitter - Is there a Twitter account for API communications?
- Email - Is there an email account for API communications?
- LinkedIn - Is there a LinkedIn account for API communications?
- Slack - Is there a Slack channel for API communications?
- Email Newsletter - Is there an email newsletter dedicated to API communications?
Like support, road map, status, and change logs, an active and informative communication strategy will set the tone of an API community, and build trust amongst consumers. It also provides clear signals of when a platform is healthy, or should be avoided.
What other resources are available for API consumers to take advantage of? Common resources provide a wealth of usually self-service knowledge that API consumers can consume on demand, as part of their API integration journey.
- Case Studies - Are there case studies available showcasing how APIs can be put to use?
- How-to Guides - Are there how-to guides assisting consumers in understanding how to integrate with an API?
- Webinars - Are webinars conducted, introducing consumers to platform operations?
- Videos - Are there videos available to assist consumers in understanding what a platform does, and how to integrate with it?
API consumers will learn in different ways. Not all will need how-to guides, and videos, but many users will prefer them. Make sure to provide a wealth of up to date, and informative resources.
Consumers of an API platform always need an account where they can get access to API authentication, usage reports, and other common elements of API operations. What does the developer account, or area, look like, and what resources are available for developers to take advantage of?
- Developer Dashboard - Is there a dashboard for API consumers?
- Account Settings - Can API consumers manage their account settings?
- Reset Password - Can API consumers reset their passwords to their account?
- Application Manager - Can API consumers manage the applications set up to integrate with the API?
- Usage Logs & Analytics - Can API consumers access logs and analytics for their API consumption?
- Billing History - Can API consumers see billing history for their accounts?
- Message Center - Is there a messaging center for API consumers to communicate with the platform, and receive notifications?
- Delete Account - Can API consumers delete their account?
- Service Tier Management - Can API consumers change / update the tier of service their account exists in?
As mentioned in on-boarding, make sure the developer account acts, and feels, like other modern SaaS and online accounts. Don't make it difficult for API consumers to manage their profile and account on an API platform. There is a wealth of healthy examples of how to do this right across the API landscape.
It may seem silly, but what APIs are available for managing API management related elements? API consumers increasingly need programmatic control over all aspects of their API accounts, as the number of APIs they use increases. There are a number of API platforms that provide API management APIs, something that is easy to do with modern API management infrastructure.
- User Management - Is there an API for managing users who have access to any API?
- Account Management - Is there an API for managing account level information?
- Application Management - Is there an API for managing applications that have access to any API?
- Service Management - Is there an API for accessing service level details for available APIs?
Remember, you may be one of multiple APIs that API consumers are using to drive their web, mobile, and device applications, as well as systems integrations. Allow for the automation of all aspects of their accounts, user details, applications, and service management.
When it comes to API operations, what is needed to reach an international audience? There are a number of building blocks emerging that are being used by leading platforms to make sure they're properly internationalized for a global audience.
- Language - Are there multiple language versions of the portal available?
APIs are global resources, and are increasingly being deployed to support multiple regions around the world. Even if internationalization is a down the road concern, take a moment to understand how far down the road it is.
Authentication is central to many other lines of the API life cycle. There are several common elements present in modern API solutions that address authentication.
- Overview - Is there an authentication overview available?
- Basic Auth - Does the platform employ basic authentication for accessing API resources?
- Key Access - Does the platform require API keys for accessing API resources?
- JSON Web Token - Does the platform require JSON Web Tokens for accessing API resources?
- OAuth - Does the platform require OAuth for accessing API resources?
- Tester - Is there an authentication tester available?
- Scopes - If OAuth is employed, is there a page dedicated to sharing OAuth scopes?
- Two Factor Authentication - Is two factor authentication available for the platform?
While not the perfect identity and access management stack, there are plenty of proven approaches to handling API authentication. Carefully consider how much authentication is necessary, based upon what resources are made available, and the expectations around API integration. Do not overdo authentication when it is not necessary, but also make sure not to under invest in this area, as it will bite you in the ass down the road.
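To ground the list above, here is a small sketch of the three header styles consumers will most commonly run into: Basic Auth, an API key header, and a bearer token (used for OAuth 2.0 access tokens and JSON Web Tokens alike). The X-Api-Key header name is just one common convention, not a standard; always check the provider's authentication overview.

```python
# Three common ways API credentials travel in HTTP headers.
# The X-Api-Key header name is illustrative; providers vary.
import base64

def basic_auth_header(user, password):
    """Basic authentication: base64 of user:password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def api_key_header(key):
    """Key access via a custom header, a common (not standard) convention."""
    return {"X-Api-Key": key}

def bearer_header(access_token):
    """Bearer tokens, as used by OAuth 2.0 and JWT-based schemes."""
    return {"Authorization": f"Bearer {access_token}"}

print(basic_auth_header("demo-user", "demo-pass"))
```

Note that Basic Auth only encodes credentials, it does not encrypt them, which is one more reason everything should travel over HTTPS.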
The details of securing an API platform. Since web APIs often use the same infrastructure as websites and applications, some of the approaches to security can be shared.
- Security Practices Page - Is there a page dedicated to providing an overview, and sometimes the details, of security practices?
- Security Contact - Is a security contact published as part of platform operations?
Share as much detail as possible about what is being done to mitigate threats at all layers of your stack. This should be just as much an admission that you know what you are doing, as it is an important detail for API consumers around security. When security details are missing from an API platform's presence, I find it is usually because this area wasn't considered, more than anything else.
The lawyers are driving and guiding almost all the value being generated and captured via the growing number of online services we depend on. Any savvy API consumer will be looking to understand the legal requirements that surround integration, so make sure there are a handful of building blocks present.
- Terms of Service - A terms of service that guides platform operations, and developer integrations.
- Licensing - What the licensing considerations are for all data, content, server & client code, as well as APIs.
- Branding - The branding requirements, guides, and other assets to be considered as part of company branding.
The best API providers have a legal department that is more human than lawyer, speaking plain English over legalese. A simple, comprehensive, and understandable legal department is a sign of a healthy platform, with nothing to hide from its consumers.
What are the units of currency the platform uses? What are the individual value units applied to each API, and how are things calculated? Most likely this is done in dollars, or euros, but other units are emerging as well.
- Value - What is the direct value associated with an API?
- Usage - What direct value does API usage deliver?
- Volume - How does volume usage of an API deliver value?
- Limits - How is value maintained by imposing limitations?
- Users - How does having more users generate value?
- Applications - How does having more applications generate value?
- Integrations - How can more integrations with other systems generate value?
You would be surprised how many existing API platforms I speak with cannot answer many of these questions. They feel their APIs are valuable, but cannot articulate the value they bring to their consumers in a coherent way. Understanding the direct value an API generates is something that should be discussed as part of every API deployment, and shared with API consumers, and other key stakeholders, in a coherent way.
Beyond the obvious, APIs are generating a lot of value for platform providers, and consumers. What are some of the common ways to look at indirect value generation?
- Marketing Vehicle - How are APIs used as a marketing vehicle for an organization, products or services?
- Traffic Generation - How is an API used for generating traffic to other websites, mobile applications, or devices?
- Brand Awareness - How is an API used for increasing brand awareness of an organization, and its products or services?
- Data & Content Acquisition - How does the acquisition of data or content via an API generate value?
- Syndication - How does the API generate value through the syndication of data, content, and other digital resources?
There are numerous APIs that prevent any indirect value from occurring by tightening down on other aspects of API operations. It takes a savvy API provider to be in tune with the indirect value generated via an API, and see the big picture of what is possible with an API presence.
These are the key elements of API plans that I have gathered from across hundreds of API providers. These elements can be associated with specific plans that are available, but they do not have to be, and I often use them to generally describe the plans, or perceived plans, behind API operations. These are the elements you should be considering as part of your own plans. You do not have to use all of them, but hopefully they will help you better understand the possibilities when it comes to API planning.
- Overview - Is there a page dedicated to providing an overview of all the plans available via the API platform?
- Private - Are there private APIs available via the platform?
- Internal - Are APIs available via the platform used internally?
- Partner - Are APIs available via the platform used by partners?
- Public - Are APIs available via the platform available publicly?
- Free - Is there free API access via the platform?
- Commercial - Is there commercial usage of API resources?
- Non-Commercial - Is there non-commercial usage of API resources?
- Educational - Is there educational access to API resources?
Using HTTP as the transport for your API does not mean it is a public API by default, but there are a number of technical, business, and political elements to be considered when planning the internal, partner, and public access to API resources. Have a plan, share the plan, and use it to guide platform discussions.
Beyond the overall access considerations, what are the specific metrics being applied to overall API operations, as well as individual plans and access tiers? Depending on the resource, there are a number of metrics being used across the API space, by leading API providers. This layer of the journey is meant to walk through the metrics you will want to consider in your API journey, allowing you to cherry pick the ones that are most important to you. Not all metrics apply in all situations, but they are the building blocks of good API plans.
- Access - Is access (or lack of access) used as a metric in monetization, or can you buy access to some API resources?
- Calls - Are API plans metered by individual API call?
- Transaction - Are API plans measured by overall transactions completed?
- Message - Are API plans measured by number of messages sent?
- Compute - Are API plans metered by the amount of compute resources available?
- Storage - Are API plans metered by the amount of storage used?
- Bandwidth - Are API plans metered by the amount of bandwidth used?
Metrics are often rooted in what the hard costs are with deploying, managing, and operating an API. Once they are well defined, and you get more in tune with platform operations, and what value is being generated, and what operational costs are, you will begin to see things in new ways. Think about what Amazon Web Services has done with APIs, pushing the concept of how we measure the access of valuable digital resources.
What limitations and constraints are applied as part of API planning operations? How are these crafted, applied, and reported upon? All APIs will have limitations. Even with the wealth of scalable tooling available today, there are still a handful of areas where limitations are being applied to keep platforms healthy and stable.
- Overview - Is there a page dedicated to helping understand API limits in place?
- Range - Are API rate limits based upon limits of metrics applied to API resources?
- Resources - Are API rate limits applied to individual API resources?
- Unlimited - Are there places where there are no limits applied?
- Increased - Can rate limits be increased?
- Inline - Are API rate limits available inline for each API in the documentation?
The primary reason for setting limitations is to keep API resources available to the entire community, helping ensure stability, and keeping operational costs within reasonable realms. However, limitations are also used for business, and political, goals, going well beyond the common technical restrictions in place.
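On the consumer side, honoring rate limits usually comes down to reading a couple of response headers. Here is a hedged sketch using the X-RateLimit-Remaining and X-RateLimit-Reset header names, a widely used convention rather than a standard; individual providers name and format these differently, so check the documentation.

```python
# Deciding how long to pause before the next API call, based on
# common (but non-standard) X-RateLimit-* response headers.
def seconds_to_wait(headers, now):
    """Return seconds to pause before the next call, 0 if within limits."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0  # calls left in the current window, proceed
    # exhausted: wait until the window resets (epoch seconds)
    reset_at = int(headers.get("X-RateLimit-Reset", now))
    return max(0, reset_at - now)

# Simulated responses: one with calls left, one exhausted.
print(seconds_to_wait({"X-RateLimit-Remaining": "10"}, now=1000))
print(seconds_to_wait({"X-RateLimit-Remaining": "0",
                       "X-RateLimit-Reset": "1060"}, now=1000))
```

Publishing limits inline in documentation, as the checklist above suggests, is what makes this kind of polite client behavior possible in the first place.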
The consumption of API resources is often measured within timeframes, in addition to the wide number of other metrics that can be applied. Having meaningful timeframes defined for evaluating how APIs are consumed, and using them as part of overall planning, matters for everything from rate limits to billing.
- Seconds - Are elements of plans metered in seconds?
- Minutes - Are elements of plans metered in minutes?
- Hourly - Are elements of plans metered by the hour?
- Daily - Are elements of plans metered daily?
- Monthly - Are elements of plans metered monthly?
- Annually - Are elements of plans metered annually?
At first these seem like they shouldn't be included in the minimum viable presence for API operations, but in reality, these timeframes are core to everything we do. We limit API calls by the second, minute, and hour, and we often clear limitations each day or monthly, as we bill for usage or just allow the amount of consumption we can afford as platform providers.
The communication around partner levels of access is critical to overall health and balance with other tiers of access. Providing as much detail as possible for partners, but also potentially for other levels of access, is important. Here are a few of the building blocks employed to help manage partner details.
- Landing Page - Is there a landing page dedicated to the partner program?
- Program Details - Are the program details available via a landing page, as well as in a portable, shareable format?
- Program Requirements - What are the requirements to be part of the partner program?
- Program Levels - What are all the levels of the partner program, and what are the details?
- Application - Is there a partner program application available for prospects to fill out?
- List of Partners - Will there be a list of partners available for other partners and consumers to view?
How Are APIs Found?
How are APIs being discovered across the current API landscape? How are APIs being found by developers, and application architects, at all stages of development?
- API Directory - What API directories are in use?
- APIs.json - Is APIs.json in use to provide metadata indexes for API discovery?
This provides one possible base for the average API operation. Granted, not every element here should be implemented by all APIs, but it does provide a healthy checklist that can be considered as part of any API's life cycle. I'm sharing this so that my partners may consider it as part of their own operations, and to use as a draft for a future white paper that any company can use as a guide in their own API journey.
My goal in assembling this information is to help shape what the portal, and potentially the wider online API presence, might be for an API. I also want to provide a nice checklist that anyone can just run down, making sure no important element was overlooked. It is easy to miss things while you are down in the weeds making an API happen. This is why I'm here, to help keep an eye on the bigger picture, and provide you with what you need to be as successful as you can in your own API efforts.
I find it very tough to provide just enough information to people, without going into areas of the API life cycle that do not apply. This guide is meant to address what is needed as you prepare to launch a new API, but could also be used by API providers with existing APIs and portals, who are looking to consider what might be next for a road map. You can find all of these elements as part of my overall research into the API space, as well as additional areas, and building blocks, that didn't quite fit into this particular perspective.
To augment my last post, about when you have an API but need some help identifying what is needed to manage your presence, I wanted to talk about some of what you can do once you've established your base API management, operations, and presence--now you need to get the word out, and get people using your APIs. Whether your APIs are intended for a public, partner, or internal group, there are many well established techniques employed by successful API platforms that you can put to work.
When I first sat down to write this guide, I was going to label it simply as API evangelism, but after pulling together the building blocks I needed from across my research to support a specific point, I shifted to make sure it was more of a week to week guide of things that should be considered. I wanted to share my own list, which I use regularly to help make sure I'm not forgetting any important things, and to re-evaluate existing practices to make sure they are still relevant.
I wanted this API evangelism, outreach, and maintenance guide to speak to as wide of an audience as possible. To help do this, I wanted to focus on some simple goals, and building blocks that could be employed as part of the day to day, week to week, and month to month of the average API operations--here is what I have so far.
What Are The Goals
What are the core goals of the API operation? These need to be precise, measurable, and attainable goals. While there may be ones unique to your situation, these are some of the common ones I see employed regularly.
- Growth in New Users - Is growth in the number of new users a goal?
- Growth in Existing User API Usage - Is growth in usage by existing users a goal?
- Brand Awareness - Is increased brand awareness a goal of evangelism?
- More Applications - Is growth in the number of applications integrated with the API a goal?
- New System Integrations - Is a growth in the number of system integrations a goal of evangelism?
- Other Goals - What other goals are there around evangelism of the API?
There should be strategic goals established, as well as shorter term tactical goals that fluctuate from week to week. Some weeks, outreach at events might be a priority, while others may involve content creation, development of how-to guides, videos, or other resources. Avoid only focusing on the usual goals listed above, and try to set other simple, more relevant goals that you truly can achieve.
Outreach & Engagement
Reaching out to API consumers is essential, not just to attract them as new users, but after they've registered, and as they are putting the platform to work. Outreach will look different at various stages of the API life cycle, and may vary between your different groups of consumers, but here are a few areas to look at.
- Fresh Engagement - How are new developers engaged after they sign up for API access?
- Active User Engagement - What does the process look like to engage existing users and get them more active?
- Historical Engagement - How are inactive users engaged, either to reactivate them, or verify for removal?
- Social Engagement - What is the established tone of social engagement when it comes to outreach?
I've experienced APIs where everything is automated and distant. You receive regular emails, and see the occasional tweet, but there really is nobody home, nobody reaching out and making sure things are good. I'm not talking about sales in this arena, I'm talking about genuine outreach, and engagement to see what API consumers are up to.
Blogging & Storytelling
How does blogging occur via the platform? What approaches are being used to generate, produce, and syndicate stories, keeping a regular stream of information flowing from the platform? There are some common areas to consider when planning this portion of operations, that can be applied on a regular basis, establishing a regular drumbeat of valuable content coming from the platform.
- Projects - What projects are occurring that can be showcased as part of the API effort?
- Stories - Is storytelling a regular thing that occurs on blog(s)--with dedicated resources?
- Syndication - How will blog posts be syndicated out?
Platform communication and support should be done in a way that gives a personality to an API platform. Be genuine, don't market to consumers. Build things that highlight the value an API delivers, tell the stories of how and why you did it, and make sure to spread the word, doing the hard work to syndicate to the most relevant platforms.
Partner Storytelling Activities
Your partners are always looking to get some special attention when it comes to blogging, storytelling, and outreach. What are some of the elements that can be considered when looping partners into your outreach and engagement efforts?
- Blog Posts - Are there blog post activity opportunities available to partners?
- Press Release - Are there press release opportunities available to partners?
- Facebook Post - Are there Facebook post activity opportunities available to partners?
- Twitter Post - Are there Twitter activity opportunities available to partners?
Hopefully you have the right partners on board, letting in the ones that will benefit your existing blogging, storytelling, and outreach efforts. It shouldn't be extra work to include partners in your regular operations--they should fit in nicely, or maybe you should be considering whether they are the right fit. Having the right partners will make all the difference.
Partner Content Acquisition
What kind of content relationships can be established as part of partnership activities? Content generated from existing, successful relationships can be a big driver in attracting new partners, as well as keeping existing ones healthy and happy, feeling like they are getting value from the arrangement as well.
- Quotes - Are quotes from partners being gathered?
- Testimonials - Are testimonials from partners being gathered?
- Use of Logo - Are partners given different usage permissions around logos?
Showcasing the right partners, in just the right way, can send the signals you need to potential new customers, and partners. Make a regular habit of asking partners for quotes, testimonials, and permission to use their logo. Maybe even make it a default requirement as part of the partner application process.
Every API operates within a specific space, and understanding the landscape of the space is very important to the health and effectiveness of evangelism efforts. Because APIs are very technical in nature, it can be easy to keep your head down, operate within a silo, and ignore the world at large. There are some common ways you can tune into the landscape in which you are operating, and better understand the role your API will play.
- Competition Monitoring - Who is the competition? What are they up to? How do we compare?
- Industry Monitoring - What industry organizations and resources are available?
- Keywords - What are the top key words and key phrases that apply to this effort?
Landscape awareness isn't about the numbers, it is about awareness. It is about knowing what your industry is up to, what matters, and staying in tune with what the competition is doing. Most importantly, it is about picking up your head, looking beyond your firewall, and seeing the bigger picture.
Forums play a big role in the self-service, and ecosystem nature of API operations. Forums can be within a platform, as well as on existing public forums. Healthy API communities encourage, and engage in conversation across their platform, and within the communities of others. These are some of the considerations with forums when it comes to evangelism.
- Forum Conversations - Are the conversations that occur on forums considered as part of overall evangelism and storytelling?
- Forum Posting - Are stories, and conversations from other channels posted on the forum, to help stimulate conversations?
- Stories - What stories are being told, derived from forum activity, or monitoring?
Not all API platforms will have their own forum, with some leaning on their social presence, and relying on existing communities like Stack Exchange, and Quora. Even if you don't operate your own dedicated community, participating in other communities, and weaving these experiences into your own planning, storytelling, and outreach is important.
What role does support play in the overall evangelism workflow? Evangelism and outreach is not just about marketing and sales, with much of the tone of evangelism being set by the overall approach to support (or the lack of one).
- Email Coordination - Are there resources dedicated to email coordination with the platform community, and the public?
- Email Needs Tracking - Are issues, and conversations that occur within email support considered as part of other activities like the roadmap, and blogging?
Not all individual support scenarios should be shared as part of communication, storytelling, and outreach, but the valuable ones should be generalized, and included in regular outreach materials. When possible, ask your consumers if it is ok to share their situation with others. You never know when it could impact how others will view the platform, and help improve their own situation.
How are SDKs discovered by developers during development? What are the considerations for making sure existing SDK efforts get found? Are there considerations for API discovery being more about someone looking for an SDK solution, than someone looking for an API? Here are some common considerations.
- List SDK - A listing of available SDKs.
- Search SDK - A search tool for available SDKs.
- Browse SDK - The ability to browse available SDKs by category or tag.
- Rating - Providing a rating system for SDKs.
Telling stories, and providing rich content around API operations, is more about SEO and marketing--most of the people looking for the solution your API provides will never be aware they are looking for an API. Many potential developers will be looking for an SDK that meets their needs, and the fact that it is an API might be a separate consideration--make sure code is discoverable.
Github plays a central role in many areas of the API life cycle, but the social nature of Github lends itself well to evangelism efforts. When it comes to services I encourage API providers to use, Github is #1. The platform is much more than just code, and the social aspects can significantly benefit platform operations -- here are some of the common aspects of outreach on Github I am seeing.
- Github Repository - Are Github repositories used for code, support, content, and other parts of outreach?
- Github Relationship - Does the platform engage with other users through issues, wikis, and other social channels available on Github and around repositories?
- Github Organization - Is there a Github Organization dedicated to this API effort?
As with any other platform you use, your profile should be up to date, and code, wikis, and issue management should be active, and included in all other aspects of operations, and projects. Github should touch almost every aspect of API operations, playing a central role in how you engage with API consumers.
Social media plays a big role in business operations, and is just as critical to API evangelism efforts. As with any other business in operation today, a healthy, active social presence needs to exist. Here are some of the common social channels API providers are using to engage consumers.
- LinkedIn - Is there a LinkedIn user or page associated with API efforts?
- Twitter - Is there one or more Twitter accounts associated with API efforts?
- Facebook - Is there a Facebook user or page associated with API efforts?
I do not push every social network in every scenario. Sometimes Facebook makes sense, and sometimes it doesn't. You will know your audience better than I will, but either way it shakes out, make sure to have a healthy, active, and valuable presence on social media to stay connected with your audience.
Beyond just social networks, some social bookmarking sites can also be important to the API evangelism workflow. Here are some of the social bookmarking sites in use today, that you should be considering.
- Reddit - Is Reddit used for discovery, and sharing of news and stories?
- Hacker News - Is Hacker News used for discovery, and sharing of news and stories?
- Product Hunt - Is Product Hunt used for discovery, and sharing of new products, services, and tooling?
Like other channels, these bookmarking sites are not just one way channels to share links out. The management of profiles, engagement with other users, and discovery of new sites, applications, APIs, and other interesting elements is a required part of the dance.
- Widgets - Are there widgets available for consumers to embed on websites, that use the API?
- Buttons - Are there buttons available for consumers to embed on websites, that use the API?
- Badges - Are there badges available for consumers to embed on websites, that use the API?
- Bookmarklet - Are there bookmarklets available for consumers to use, that put the API to work?
Think about what the Facebook Connect, and Twitter share buttons have done to bring new users, and awareness, to these social platforms. Embeddable tooling should always be considered as part of API outreach, providing simple, embeddable goodies that anyone can use, allowing the community to share the load when it comes to outreach.
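To make this concrete, here is a minimal sketch of how a provider might generate a copy-paste badge snippet for consumers. Every URL and parameter here is a hypothetical placeholder, not any real platform's embed API.

```python
# Generate a copy-paste HTML snippet for an embeddable, API-driven badge.
# All URLs below are hypothetical placeholders, not a real platform's endpoints.
from html import escape

def badge_snippet(api_key: str, label: str,
                  base_url: str = "https://example.com") -> str:
    """Return an HTML snippet a consumer can paste into their site.

    The badge image would be rendered server-side by the (hypothetical)
    API, so it always reflects live data without the consumer writing code.
    """
    img_src = f"{base_url}/badge?key={escape(api_key)}"
    link = f"{base_url}/signup?ref={escape(api_key)}"
    return (f'<a href="{link}">'
            f'<img src="{img_src}" alt="{escape(label)}"/></a>')

snippet = badge_snippet("abc123", "Powered by Example API")
print(snippet)
```

The point is that the consumer only copies one line of HTML, while the provider keeps control of what the badge shows.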
How are evangelism activities being reported upon? Reporting isn't just for your bosses and management. Reporting helps everyone involved better understand the day to day, and month to month details, and helps keep efforts organized. What are some of the common approaches to reporting on API evangelism efforts?
- Activity By Group - What are the activities going on around evangelism, broken down by group?
- New Registrations - What do new registrations look like?
- Number of Applications - What does growth in the number of applications look like?
- Volume of Calls - What does API activity look like in general, by the number of calls?
Reporting should reflect your goals for outreach, and day to day operations. Why are we doing this? Are the goals and outcomes lining up? In addition to internal reporting, and sharing among key stakeholders like partners, consider what should also be shared with the API community, and the public at large.
Events & Gatherings
Events are the in-person, retail face of any API platform. While much work can be done in an online environment, also make sure you are available at events where API consumers will already be. There are a number of proven types of events that work for API evangelism.
- Conferences - What conferences are being attended, spoken at, and sponsored?
- Meetups - What local Meetups are being attended, spoken at, and sponsored?
- Hackathons - What hackathons are being attended, presented at, sponsored, or put on?
It can be easy to spend a lot of money in this area, in an effort to be everywhere. A successful, in-person API outreach strategy is always in sync with a robust, active, online presence. Make sure your efforts here are genuine, and in alignment with the other virtual elements discussed here.
Evangelism is not just an external thing. How is the platform being evangelized internally, even for publicly available APIs? Internal evangelism is very important for maintaining trust, and the continuation of the funding necessary to operate. There are some common patterns I've seen to help stimulate internal involvement.
- Storytelling - What sort of storytelling about the platform occurs internally with leaders, and other stakeholders?
- Participation - What sort of participation internally occurs to get people involved in platform operations?
- Hackathons - Similar to the public hackathon movement, internal hackathons are increasingly a thing.
- Process Jams - Instead of gatherings to build applications, some folks are coming together to just improve on the process.
- Brown Bag Lunches - Gathering people together to just talk, can be a powerful thing, helping educate and spread the word.
- Reporting - What kind of reporting happens internally, keeping people in tune with what is going on with the platform?
A lack of internal engagement is the number one reason I see APIs fail. The most successful APIs I've seen enjoy wide support internally, and are owned beyond just a single group. A lack of internal buy-in, and of the investment and resources required for success, will kill even the best planned API.
How Are APIs Found?
How are APIs being discovered across the current API landscape? How are APIs being found by developers, and application architects, at all stages of development? Are APIs being shared in a way that they will be found by the developers, and the people, who will need them most?
- API Directory - What API directories are in use?
- IDE Integration - Are APIs available in common IDE platforms?
Whether they are internal, or public, APIs shouldn't be hard to find. They should be browseable, searchable, and available where the consumers will exist. They should be woven into all other areas discussed in this guide, opening up the opportunity for engagement across all aspects of outreach and engagement.
API Definition Formats
A machine readable specification is designed to assist in the area of API discovery, allowing APIs, and their supporting operations, to be described in a way that can be ingested, and indexed, by API search engines, and directories.
- APIs.json - Is APIs.json in use to provide metadata indexes for API discovery?
- OpenAPI Spec - Are API definitions available to understand what APIs do?
These formats make APIs discoverable by other systems, and tools, but also open them up for sharing, collaboration, and syndication. Think of these API definitions like little machine readable business cards, that can be left anywhere for potential consumers to find.
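As a rough illustration, a minimal APIs.json-style index might look something like the sketch below, built and serialized here in Python. The field names approximate the APIs.json format at a high level, and the company, URLs, and API shown are invented examples; check the official specification before publishing a real index.

```python
import json

# A minimal APIs.json-style discovery index. Field names approximate the
# APIs.json format; the company and URLs are invented for illustration.
index = {
    "name": "Example Company",
    "description": "Machine readable index of our public APIs.",
    "url": "https://example.com/apis.json",
    "apis": [
        {
            "name": "Products API",
            "description": "Read and search the product catalog.",
            "baseURL": "https://api.example.com/products",
            "properties": [
                # Pointing at an OpenAPI definition lets search engines
                # and directories index what the API actually does.
                {"type": "Swagger",
                 "url": "https://example.com/products-openapi.json"},
            ],
        }
    ],
}

print(json.dumps(index, indent=2))
```

Dropping a file like this at the root of a domain is what lets crawlers and search engines find the "business card" without any human intervention.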
Next up are some API directories that exist publicly, providing listings that developers can browse, and search for APIs by keyword. There are a handful of relevant API directories you should be considering today.
- ProgrammableWeb - Are internal users aware of ProgrammableWeb, and do they put it to use?
- Mashape - Are internal users aware of Mashape, and do they put it to use?
- APIs.guru - Are API definitions available in APIs.guru, the Wikipedia for APIs?
This area is meant to serve the public APIs that are embarking on this journey. These are the leading API directories out there, that allow API platforms to list their APIs, and benefit from the discovery, and other network effects, these platforms bring to the table.
API Search Engines
Beyond just API directories, there is also a new breed of API search engines emerging, allowing for API discovery that goes beyond just static API browsing and search. Currently there is just one API search engine.
- APIs.io - Are all public APIs registered with the APIs.io search engine?
Contrasting with API directories, API search engines allow you to maintain control over the index for the APIs that are included in their collections. Eventually open API directories will allow for new ways of API discovery, via fast growing channels like IDEs, spreadsheets, messaging, and other platforms where API consumers are already operating.
There are also business directories that allow for additional information related to APIs, as well as having APIs themselves, allowing for the discovery of APIs from companies who list themselves in these directories.
- Crunchbase - Is the business and its public APIs documented with Crunchbase?
- AngelList - Is the business and its public APIs documented with AngelList?
APIs are hot in the startup, and business arenas. Many other companies pay attention to these business directories looking for interesting API implementations. A healthy, active presence in these spaces can help in the evangelism and outreach process, and help you be more aware of some of the things going on in the space.
I have compiled this list from the evangelism efforts I've helped craft in the past, and from keeping an eye on the activities of other evangelists, and I use it to cherry pick tasks each day, and week, as part of my own API Evangelist network of sites and APIs. When I am looking for something to do that will help engage my existing audience, as well as bring in new folks--I work from this list.
In 2016, I'm opting to focus more on a digital presence for my operations, but only after five years of heavy, on the road evangelism, as well as maintaining the projects, and storytelling that I do regularly. Consumption of my APIs isn't my #1 priority, as this isn't my main way of generating revenue, but in 2016 this will slowly be shifting--in addition to evangelizing the API efforts of others, I will be stepping up the evangelism of my own APIs, further expanding this evangelism, outreach, and maintenance strategy.
Hopefully this guide provides you with some ideas for how you can formalize your own approach to getting the word out about your API. Whether you are evangelizing them publicly, stimulating partner interest, or making them known within internal groups, you should be able to find something here you can get to work on. After five years of beating my own drum in these ways, I can say that some weeks you will have highs, other times you will experience lows, but if you keep doing it consistently, in a genuine way, it will pay off.
I process many press releases to feed the API.Report beast. The primary reason I do this work each week is to identify new APIs being done in interesting business sectors. One common thing I see is companies that reference their API in their press release, and when I head over to their website--no API, or information, to be found.
Ok, I only spend about 30-60 seconds looking for a link to a dedicated API area, but it's pretty clear that people either haven't thought about making it accessible, or are holding their cards close to their chest. I'm sure there is a whole spectrum of reasons why people do this, but it just doesn't make sense to talk publicly about your API, while also making it so difficult to learn more--it is not very API-like.
I'm all for companies intelligently approaching how they tailor API efforts, but if it's something you are showcasing as a selling point in your PR, you should also be making it easy for us analysts, bloggers, and potential customers to get additional information, without asking for help. If I can't find a dedicated API portal, and easily discover information about what an API does, I won't add it to my queue of research, and stories--I'm sure this is the case for other tech writers.
As I study the approach of bot, and messaging platform integrations like Current, I keep thinking about the potential for API injection at this layer of messaging. In this scenario I am thinking about Slack, and how you craft messages, approach the unfurling of links, and customize attachments.
I am envisioning something like Twitter Cards, but with a more open approach, that can help companies deliver nice looking, well designed, and media rich messages that deliver what messaging users are looking for. The service would be cross platform, allowing me to design for Twitter Cards, Slack messages, or any other messaging platform that finds its 15 minutes of tech fame. With just a little bit of exploration of how to craft a good looking, and functional Slack message, I can see this being a full time job for someone.
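For a sense of what crafting one of these messages involves, here is a minimal sketch that builds a rich Slack message payload and posts it to an incoming webhook. It uses a small subset of Slack's attachment fields; the webhook URL is a placeholder you would get from your own Slack configuration, and the card contents are invented examples.

```python
import json
from urllib import request

def build_card(title, link, summary, image_url=None):
    """Build a Slack message payload with a single rich attachment.

    Only a minimal subset of Slack's attachment fields is shown here;
    consult Slack's message formatting docs for the full set.
    """
    attachment = {
        "title": title,
        "title_link": link,
        "text": summary,
        "color": "#36a64f",  # accent bar rendered alongside the attachment
    }
    if image_url:
        attachment["image_url"] = image_url
    return {"text": "New from the API:", "attachments": [attachment]}

def post_to_slack(payload, webhook_url):
    # webhook_url is the incoming webhook configured in Slack,
    # e.g. "https://hooks.slack.com/services/..." (placeholder here).
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

card = build_card("Weekly API report", "https://example.com/report",
                  "New endpoints, and a breaking change heads-up.")
```

Even this small example shows why message design is real work: each field changes how the card renders, and every platform has its own equivalent vocabulary.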
My goal when building my own messaging integration or bot solutions is to not vomit up text, images, or functionality into people's message timelines. I would like to understand how to deliver exactly what they need, in a visually appealing way. Like every other channel I've developed a competency around, I want to understand the best approaches employed by the savviest of the channel operators, and if I can encourage them to offer their solutions as a service, I will. I don't have the time or patience to master every channel (ie. mobile), and will gladly pay for the right service that actually helps me do this right.
If you are working on any simple solutions that touch on helping inject API resources, and craft them into meaningful messages and experiences, via platforms like Slack, let me know. If you aren't, but want to do it as a startup, get going, send me a beta invite, and I'll make sure you have my address so you can send the royalty checks.
It has been acceptable for integration platform as a service (iPaaS) providers like IFTTT and Zapier to focus on delivering the end-solutions their consumers have needed, requiring them to visit their domains to discover and then make the magic happen. While I have always been an advocate for more accessibility, I have to admit that the current "closed" approach has brought a number of new people to the world of APIs, as people realize that IFTTT and Zapier are possible because of APIs!
However, to me it is a no-brainer that if you build a business on top of public APIs, you should pay it forward by providing an API, but I understand many of you startups follow a different philosophy than I do. One that is more about trapping, locking up, and owning these activities. You wouldn't want someone building on what you've done via an API, and potentially offering more value than you do!! Kind of like you did. No, we wouldn't want to do that. Derp! #SelfishBusiness
As I see it, the altruistic reasons for offering an API, if you are an API integration service provider, are being pushed aside for more business centric reasons that these startups will have to take notice of. If you do not provide API access to your API driven recipes, they will not be available to discover, and execute, for the growing number of bot enabled solutions we are seeing emerge. Your really useful iPaaS recipes will not play a role in this new bot driven phase of API consumption, and will be left in the ditch of yesterday's tech.
If you are an iPaaS provider, please make your recipes discoverable, and executable, via an API. I have long complained that if I can't search for recipes via an API, I'm less likely to know they exist. I'm also less likely to embed your meaningful actions into my website or application, if there is no API for me to drive embeddable tools. I just wanted to take a moment and point out that your valuable iPaaS solutions will not be part of the bot evolution if you don't also have an API for my bots to discover and execute your valuable integration(s).
I am spending time talking to more API providers, and API service providers, about the challenges they face while reaching out to potential customers, thanks to the support of my partners Cloud Elements. One of the conversations I had last week was with Diego Oppenheimer (@doppenhe) of Algorithmia (@algorithmia), who shared with me the challenges he faces in getting senior engineers to realize the potential of APIs, and the value API driven platforms like Algorithmia bring to the table.
Diego expressed that the biggest thing they face is convincing their engineering, senior dev, and other tech-focused consumers that Algorithmia isn't just something new they need to add to their existing stack, and that it is more about enabling what is already in place. While some folks will benefit from discovering entirely new algorithmic approaches in Algorithmia's marketplace, the biggest impact will come from the platform's approach to defining, scaling, and stabilizing the algorithms developers and IT folks are already putting to work.
These are the content, data, and other resources you are already putting to work, the algorithms in your business life that already have relevance in your operations. I'm constantly working to focus on the fact that APIs are all about making these resources better defined, accessible and more discoverable, but when you also leverage what's being called "serverless" approaches like Algorithmia, you are also making them more scalable, more stable, and usable as well.
Diego said he's always trying to reassure senior tech folks that the platform isn't pointing out that they don't have the skills needed to define, deploy, and scale the bits of code (algorithms) that are making all of our worlds go around. It is about employing APIs, and the cloud, making your existing algorithms more agile, flexible, and scalable, and augmenting your existing world with tangible benefits--ultimately making you better at what you are already doing.
I've talked about this concept before within my own operations. As the API Evangelist, I will not scale what I do unless I can find a service that augments what I already do, justifying the added cost only by truly achieving relevance in my daily operations. Little API driven algorithmic nuggets are how I do this. All you have to do as an API service provider and enabler is convince me of the tangible benefit you deliver in my operations, and your products, services, and tooling will naturally become more relevant.
During the latest IFTTT flareup, I realized how much I haven't written about my feelings surrounding API integration service providers, iPaaS, or whatever else you call it. This always frustrates me later, when I am unable to reference my earlier thoughts with a specific URL. So while I am ranting about the lack of APIs for these integration platform as a service (iPaaS) providers, let me add to my list of critical elements I feel are missing from the space: Embeddability!
As a user of your service, provide me an API, so I can embed your API recipes into any website, or web and mobile application. While Zapier does have a public API, where IFTTT does not have one at all, the Zapier API does not provide me any access to which API integration recipes (Zaps) are available (ie, the core business value). I cannot automatically search for all the Google Spreadsheet integrations. I cannot search for all the Twitter API integrations. Let alone embed the actions enabled by these recipes on any of my websites.
When preparing any list of meaningful actions you can take with APIs, like I did for my university workshop a couple weeks ago, I have to manually go to Zapier, search for actions, and then craft my own link to the detail page (which isn't public BTW). The response from Zapier to why there is no API for this, each time I ask, is that nobody has ever asked for it. Which is starting to smell more and more like business lock-in to me, in the wake of the IFTTT / Pinboard shitstorm.
It isn't like the concept of an API driven, embeddable button is anything new. Facebook Share? Twitter Tweet? I just can't buy the fact that nobody has asked for a Run Zapier Zap button. I'm just guessing that folks aren't seeing the embed opportunity when they are down in the weeds, just like it was with the Run in Postman button. It just takes time for it to happen, a process I am always looking for ways to expedite. ;-)
Anyways. If you are running an API integration service provider, please:
- Pay APIs forward by having an API for your platform.
- Make all aspects of your service available via API.
- Provide embeddable buttons, and linkable hooks at any point in the action.
If you do this, your service will become more than just a destination for discovering and enabling API driven recipes. With a complete API, your platform can become baked into the API enabled fabric of our web, and mobile world(s). As an API integration platform as a service (iPaaS) provider, it is your responsibility to pay the API concept forward (hear the laughter of my haterz), and not terminate, and capture, all the value generated via APIs. The reasons for this are much more than the altruistic visions you may think I'm speaking of, and touch on real world business advantages that you are missing.
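To illustrate the kind of recipe discovery being asked for here, below is a hypothetical, in-memory sketch. None of the recipes, endpoints, or fields belong to Zapier or IFTTT; it just shows the keyword search, and embeddable action link, that an iPaaS API could expose.

```python
# A hypothetical sketch of the recipe-discovery API this post asks iPaaS
# providers for. Nothing here reflects a real Zapier or IFTTT API.
RECIPES = [
    {"id": "r1", "name": "Tweet new blog posts",
     "services": ["wordpress", "twitter"]},
    {"id": "r2", "name": "Save tweets to a spreadsheet",
     "services": ["twitter", "google-sheets"]},
    {"id": "r3", "name": "Back up bookmarks",
     "services": ["pinboard", "dropbox"]},
]

def search_recipes(keyword):
    """Return recipes whose name or connected services mention the keyword."""
    kw = keyword.lower()
    return [r for r in RECIPES
            if kw in r["name"].lower()
            or any(kw in s for s in r["services"])]

def embed_link(recipe_id):
    """A 'run this recipe' link a site or bot could use (hypothetical URL)."""
    return f"https://ipaas.example.com/run/{recipe_id}"

twitter_recipes = search_recipes("twitter")
```

With something like this exposed publicly, a bot, a website button, or an IDE plugin could find and trigger recipes without ever visiting the provider's destination site, which is exactly the embeddability argument above.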
Here is the email I received from the CEO of IFTTT, in response to the whole Pinboard kerfuffle, a few minutes ago. It looks like they've done a little soul searching, and wanted to apologize:
Hello Pinboard Customers,
We've made mistakes over the past few days both in communication and judgment. I'd like to apologize for those mistakes and attempt to explain our intentions. I also pledge to do everything we can to keep Pinboard on IFTTT.
IFTTT gives people confidence that the services they love will work together. There are more services in the world than IFTTT can possibly integrate and maintain alone. We are working on a developer platform that solves this by enabling service owners to build and maintain their integration for the benefit of their customers.
The vast majority of Channels on IFTTT are now built on that developer platform by the services themselves. We made a mistake in asking Pinboard to migrate without fully explaining the benefits of our developer platform. It's our responsibility to prove that value before asking Pinboard to take ownership of their Channel. We hope to share more on the value of our platform soon.
I also want to address Pinboard’s concerns with our Developer Terms of Service. These terms were specific to our platform while in private beta and were intended to give us the flexibility to evolve our platform in close partnership with early developers. We’ve always planned to update and clarify those terms ahead of opening our platform and we are doing so now. We are specifically changing or removing areas around competing with IFTTT, patents, compatibility and content ownership. The language around content ownership is especially confusing, so I'd like to be very clear on this: as a user of IFTTT you own your content.
I truly appreciate all of your feedback, concerns and patience. Helping services work together is what IFTTT does. We respect and appreciate the open web. This very openness has been instrumental in enabling us to build IFTTT and we fully intend to pay it forward.
I want to believe. I want to believe. However, when you build a company on top of public APIs, and you do not have a public API paying it forward, you do not have ANY transparency around your partner program, and there isn't even a public URL for me to reference as part of your apology -- I just can't believe. I'm sorry.
You have a great idea IFTTT. You have captured the imagination of the average person when it comes to the potential of APIs. The problem is you don't have API in your DNA. You just don't understand how APIs can enable partnerships like the one you enjoyed with Pinboard at one point in time, and you took it for granted once you found success. I've seen many API driven companies make this same mistake.
Without a public API, a transparent partner program, and a public communication strategy, you won't recover from this.
I am working on several very rewarding API efforts lately, but one I'm particularly psyched about is Open Referral. I'm working with them to help apply the open API format in a handful of implementations, but to also share some insight on what the platform could be in the future. I have been working to carve out the time for it, and finally managed to do so this week, resulting in what I am hoping will be some rewarding API work.
As I do, I wanted to explore the project, work to understand all the moving parts, as well as what is needed for the future, using my blog. I am not recommending that Open Referral tackle all of this work right now, I am just trying to pull together a framework to think about some of the short, and long term areas we can invest in together. I intend to continue working with Greg, and the Open Referral team to help spread awareness of the open API specification, and help build the community.
Open Referral is all about being an open specification, dedicated to helping humans find services, and helping even more humans help other humans find the services they need -- I can't think of a more worthy implementation of an API. In my opinion, this is what APIs are all about -- providing open access to information, while also allowing for commercial activity. To help prime the pump, let's take a look at the specification, and think more about where I can help, when it comes to the Open Referral organization, and eventually, the Open Referral platform.
Human Services Data Specification (HSDS)
"The Human Services Data Specification (HSDS) defines content that provides the minimum set of data for Information and Referral (I&R) applications as well as specialized service directory applications." Which represents a pretty huge opportunity to help deliver vital information around public services, to those who need them, where they need them, using an open API approach.
Currently there is an existing definition for HSDS available on Github, but I'd like to see the presence of HSDS elevated, showcasing it independently of any single implementation of the API, or the web, and mobile applications that are built on top of it. It is important that new people, who are just learning about HSDS, understand that it is a format, independent of any single instance. Here is a breakdown of the HSDS presence I'd like to see.
- Website - Establish a simple, dedicated website for just the specification.
- Twitter - Establish a dedicated Twitter account for the specification.
- Github Repo - Can the repo be moved under the Open Referral Github organization?
- Partners - Link to the Open Referral partner network.
- Road Map - What is the road map for the specification?
- Change Log - What is the change log for the specification?
- Licensed - CC0 License
I want to help make sure HSDS is highly available as an OpenAPI Specification, as well as the API Blueprint format. Both of these formats will enable anyone looking to put HSDS to work, to use the definition as a central reference for their API implementation, that can drive API documentation, code samples, testing, and much more.
I do not know about you, but having an open standard for finding and managing open data about human services, that can be used across cities, regions, and countries, seems like a pretty vital API design pattern -- one that could make a significant impact in people's lives. When you are talking about helping folks find food and health services, making sure the disparate systems all speak the same language matters, and could be the difference between life and death, or at least just make your life suck a little less.
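To make the "everyone speaks the same language" idea a little more concrete, here is a minimal sketch of what exchanging an HSDS-style service record between systems could look like. The field names below are illustrative and heavily simplified -- they are not the normative HSDS specification.

```python
import json
from dataclasses import dataclass, asdict

# A hypothetical, simplified record inspired by the HSDS "service" object.
# These field names are illustrative, not the normative specification.
@dataclass
class Service:
    id: str
    organization_id: str
    name: str
    description: str = ""
    url: str = ""
    status: str = "active"

def to_hsds_json(service: Service) -> str:
    """Serialize a service record to JSON for exchange between systems."""
    return json.dumps(asdict(service), sort_keys=True)

meals = Service(id="svc-1", organization_id="org-1",
                name="Community Meals",
                description="Free hot meals, weekdays at noon.")
print(to_hsds_json(meals))
```

The value of a shared format is exactly this: any city or agency that emits records shaped this way can be read by any client built against the same definition, with no one-off integration work.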
While Open Referral, and HSDS, were born out of Code for America, there is an organization in place to use as a base for evolving the format, and building a community of implementations around this important specification. I wanted to take some time and organize some of the existing moving parts of the Open Referral organization, while also exploring what elements I feel will be needed to help evolve it into a platform.
The Open Referral Organization
As I mentioned, there is an organization already setup to guide the effort, "the Open Referral initiative is developing common formats and open platforms for the sharing of community resource directory data — i.e., information about the health, human and social services that are available to people in need." -- You can count me in, helping with that. Right up my alley.
Right now Open Referral is a nice website, with some valuable information about where things are today. The "common formats" portion of that vision is in place, but how do we help scale Open Referral toward being an open platform, while also enabling others to deploy their own open platforms, in support of their own human services project(s)? Some of these projects will be open civic projects by government and non-governmental agencies, while some will be commercial efforts -- both approaches are acceptable when it comes to Open Referral, and HSDS.
Let's explore what is currently available for the Open Referral organization, and what is needed to help evolve it towards being a platform enabler. Here is what I have outlined so far:
There is already a basic web presence for the organization, it just needs a little help to look as modern as it possibly can, and assume the lead role in getting folks as aware of, and involved with, Open Referral and HSDS as possible.
- Website - Having a simple, modern web presence for the Open Referral organization.
OpenReferral.org is the tip of the platform, but if we want to increase the reach of the organization, and take the conversation to where people already exist, we'll need to think more multi-channel when it comes to the organizational presence.
There is already a great presence in place, an active blog, Twitter, and Google Group. Based upon the approach of other open formats, and software efforts, there are a number of other platforms we should be looking to spread the Open Referral presence to.
- Twitter - Managing an active, human presence on Twitter.
- LinkedIn - Managing an active, human presence on LinkedIn.
- Facebook - Managing an active, human presence on Facebook.
- Blog - Having an active, informative blog available.
- Blog RSS - Providing a machine readable feed from blog.
- Medium - Publishing regularly to Medium as well as blog.
- Google Group - Maintaining community and discussion on Google Groups.
- Newsletter - Provide a dedicated partner newsletter.
So far we are just talking about marketing, and social media basics for any organization. We will need to make sure the overall organizational presence for Open Referral dovetails seamlessly with the more technical side of things, staying friendly to non-developers, while still speaking to a more technical, developer, and IT focused audience.
Open Referral Developer Portal
I suggest following the lead of other successful open standard, and software efforts, and establish a dedicated portal for the platform at http://developer.openreferral.org. This central portal will not provide access to a working implementation of the API, but focus instead on the community resources it will take to help ensure the widespread adoption of HSDS.
Right now, there is only the Ohana API, and supporting client tools that have been developed by Code for America. This is a great start, but Open Referral needs to evolve, making sure there are a wealth of language and platform formats available for supporting any implementation. I went to town thinking through what is possible with the Open Referral developer portal, based upon other open API, specification, and software platforms I have studied. Not everything here is required to get started with a minimum viable developer portal, but it provides some food for thought around what could be.
- Landing Page - A simple, distilled representation of everything available.
- HSDS Specification - Link to separate site dedicated to the specification.
- Github - The Github organization as an umbrella for the developer presence.
- Server Implementations (PHP, Python, Ruby, Node, C#, Java)
- Server Images (Amazon, Docker, Heroku Deploy)
- Database Implementations (MySQL, PostgreSQL, MongoDB)
- Client Samples (PHP, Python, Ruby, Node, C#, Java)
- Client SDKs (PHP, Python, Ruby, Node, C#, Java)
- White Label Apps
- Platform Development Kits
- WordPress (PHP)
- Spreadsheet Connector(s) (Google, Excel)
- Database Connector(s) (MySQL, SQL Server, PostgreSQL)
- Widgets (ie. Search, Featured)
- Buttons (ie. Bookmarklet, Share)
- Visualizations (ie. Graphs, Charts)
- Email - The email channels which the organization provides.
- Github Issues - Setup for platform, and aggregate across code projects.
- Google Group - Setup specific threads dedicated to the developers.
- Legal - The legal department for the Open Referral organization and platform.
- Terms of Service - What are the terms of service set by the Open Referral organization.
- Licensing (Data, Code, Content) - What licensing is applied to content, data, and code resources.
- Branding - What are the branding guidelines and assets available for the Open Referral platform.
The Open Referral developer portal really is just a project website which organizes links, and meta information about any valuable code that is developed, that uses HSDS as its core. The ultimate goal is to provide a rich marketplace of server, client-side, platform, and language resources that can be applied anywhere. Some of it will be officially platform supported, while the rest will be partner and Open Referral community supported. The central portal is purely there to help organize all the valuable resources generated by the community, and make them easy for the community to find.
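As an example of the kind of client sample the portal could host, here is a minimal sketch of a Python client for a hypothetical HSDS search endpoint. The base URL, the /services/search path, and the query parameters are all assumptions for illustration, not anything the specification defines.

```python
from urllib.parse import urlencode, urljoin

# A minimal sketch of an HSDS client sample. The base URL, the
# /services/search path, and the query parameters are hypothetical;
# a real client would follow the paths the deployed API exposes.
class HSDSClient:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/") + "/"

    def search_url(self, keyword: str, page: int = 1) -> str:
        """Build the request URL for a keyword search of services."""
        query = urlencode({"keyword": keyword, "page": page})
        return urljoin(self.base_url, "services/search") + "?" + query

client = HSDSClient("https://api.example.org/v1")
print(client.search_url("food", page=2))
```

Even a tiny sample like this lowers the bar for a city IT shop evaluating the format, which is exactly what the marketplace of samples and SDKs is for.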
Open Referral Demo Portal
I have assembled this outline, based upon the portal presence of leading API platforms like Twitter, Twilio, and Stripe. As with every other area, not all these elements will be in the first iteration of the Open Referral demo portal, but we should consider what the essentials should be in a minimum viable definition for an Open Referral demo portal.
- Landing Page - A simple, distilled down version of portal into a single page.
- Getting Started
- Overview - What is the least possible information we need to get going.
- Registration / Login - Where do we signup or login for access.
- Signup Email - Providing a simple email when signing up for access.
- FAQ - What are the most commonly asked questions, easily available.
- Overview - Provide an overview of how to authenticate.
- Keys - What is involved in adding an app, and getting keys.
- OAuth Overview - Provide an overview of OAuth implementation.
- OAuth Tools - Tools for testing, and generating OAuth tokens.
- Interactive (Swagger UI) - Providing interactive documentation using Swagger UI.
- Static (Slate) - Providing more static, attractive version of documentation in Slate.
- Schemas (JSON) - Defining all underlying data models, and providing as JSON Schema.
- Pagination - Overview of how pagination is handled across API calls.
- Error Codes - A short, concise list of available error codes for API responses.
- Samples (PHP, Python, Ruby, Node, C#, Java) - Simple code samples in variety of languages.
- SDKs (PHP, Python, Ruby, Node, C#, Java) - More complete SDKs, with authentication in variety of languages.
- Widgets (ie. Search, Featured) - Simple, embeddable widgets that make public or authenticated API calls.
- Buttons (ie. Bookmarklet, Share) - Simple browser, web, or mobile buttons for interacting with APIs.
- Visualizations (ie. Graphs, Charts) - Provide a base set of D3.js or other visualizations for engaging with platform.
- Outbound - Allow for outbound webhook destinations and payloads to be defined.
- Inbound - Allow for inbound webhook receipts and payloads to be defined.
- Analytics - Offer analytics for outbound, and inbound webhook activity.
- Alerts - Provide alerts for when webhooks are triggered.
- Logging - Offer access to log files generated as part of webhook activity.
- Limits - What are the limits involved with accessing the APIs.
- Pricing - At what point does API access become commercial.
- Road Map - Providing a simple road map of future changes coming for the platform.
- Issues - A list of current issues that are known, and being addressed as part of operations.
- Change Log - Providing a simple accounting of the changes that have occurred via the platform.
- Status - A real time status dashboard, with RSS feed, as well as historical data when possible.
- Github Issues - Provide platform support using Github issues, allowing for public support.
- Email - Provide an email account dedicated to supporting the platform.
- Phone - Provide a phone number (if available) for support purposes.
- Ticket System - Providing a more formal ticketing system like ZenDesk for handling support.
- Blog w/ RSS - Providing a basic blog for sharing stories around the platform operations.
- Slack - Offering a slack channel dedicated to the platform operations.
- Developer Account
- Dashboard - An overview dashboard providing a snapshot of platform usage for consumers.
- Account Settings - The ability to manage settings and configuration for platform.
- Application / Keys - A system for adding, updating, and removing applications and keys for the API.
- Usage / Analytics - Simple visualizations that help consumers understand their platform usage.
- Messaging - A basic, private messaging system for use between API provider and consumer(s).
- Forgot Password - Offering the ability to recover and reset account password.
- Delete Account - Allow API consumers to delete their API accounts.
- Terms of Service - A general, open source terms of service that can be applied.
- Licensing (Data, Code, Content) - Licensing for the data, code, and content available via the platform.
- APIs.json - Providing a machine readable APIs.json index for the API implementation.
- APIs.io - Registering of the API with the APIs.io search engine via their API.
This base portal design will act as a demo implementation, with an actual functional API operating behind it. It could also be forked, and used as a base for other Open Referral API implementations, one that can be customized, and built upon for each individual deployment. Github, using Github Pages along with Jekyll, allows for the easy design, development, and then forking of an open portal blueprint. I'd like to see all the project sites that support the Open Referral effort operate in this fashion, which isn't unique to Github, and can run on Amazon S3, Dropbox, and almost any other hosting environment.
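The webhook items in the demo portal outline above imply that implementations will need a way to secure inbound and outbound payloads. One common approach is an HMAC signature over the payload, sketched below; the shared secret scheme is an assumption on my part, not anything Open Referral has specified.

```python
import hashlib
import hmac

# One common way a platform secures webhooks: sign the payload with a
# shared secret, and send the hex digest along in a header. The scheme
# here is an assumption, not anything Open Referral has specified.
def sign_payload(secret: bytes, payload: bytes) -> str:
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    """Constant time comparison guards against timing attacks."""
    return hmac.compare_digest(sign_payload(secret, payload), signature)

secret = b"shared-webhook-secret"
body = b'{"event": "service.updated", "id": "svc-1"}'
signature = sign_payload(secret, body)
print(verify_webhook(secret, body, signature))        # True
print(verify_webhook(secret, b"tampered", signature)) # False
```

Baking something like this into the demo portal would let every downstream implementation inherit a sane default, rather than each deployment inventing its own.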
One of the strengths of the Open Referral organization, and something essential to its evolution into a platform, is the availability of a formal partner program to help manage the variety of partners who will be contributing in different ways. I suggest operating a site dedicated to the Open Referral partner program, located at the sub domain http://partner.openreferral.org. This provides a clear location to visit to see who is helping build out the Open Referral platform, and get involved when it makes sense.
- Overview - An overview of the Open Referral partner program.
- Gallery of Partners - Who are the Open Referral Partners.
- Gallery of Applications - What are the Open Referral implementations.
- Partner Stories - What are the stories behind the partner implementations.
- Types - The types of partners involved with platform.
- Application - The partners who are just deploying single web, or mobile applications.
- Integration - The partners who are just deploying a single API, and portal.
- Platform - The partners who are implementing many server, and app integrations.
- Investor - Someone who is investing in Open Referral and / or specific implementations.
- Registration / Form - A registration form for partners to submit and join the program.
- Marketing Activities
- Blog Posts - Provide blog posts for partners to take advantage of one time or recurring.
- Press Release - Provide press releases for new partners, and possibly recurring for other milestones.
- Discounts - Provide discounts on direct support for partners.
- Office Hours - Provide virtual open office hours just for partners.
- Training - Offer direct training opportunities that are designed just for partners.
- Advisors - Provide special advisors that are there to support partners.
- Quotes - Allow partners to provide quotes that can be published to relevant properties.
- Testimonials - Have partners provide testimonials that get published to relevant sites.
- Use of Logo - Allow partners to use the platform logo, or special partner platform logo.
- Blog - Have a blog that is dedicated to providing information for the partner program.
- Spotlight - Have a special section to spotlight on partners.
- Newsletter - Provide a dedicated partner newsletter.
Formalizing the partner program for Open Referral will help in organizing for operation, but also provide a public face to the program, lending credibility to the platform, as well as to its trusted partners. Not all partnerships need to be publicized, but it will lend some potential exposure to those that want it. Not every detail of Open Referral partnerships needs to be present, but operating in the open, being as transparent as possible, will help build trust in a potentially competitive environment.
There will be some HSDS API implementations, as well as potentially web or mobile applications, that are developed by Open Referral, with some developed and operated by partners. Whenever possible, being transparent about this will help build trust, and reduce speculation around the organizational mission. Formalizing the approach to platform partnerships will help set a positive tone for the community, and help Open Referral go from just a site, to a community, to a true platform.
I wanted to explore some of the services that will be needed in support of the Open Referral format specification, open source software development, as well as specific implementations. Not all of these services will be executed by Open Referral, with partners being leveraged at every turn, but it will also be important for Open Referral to develop internal capacity to support all areas, and as many types of implementations as possible. This internal capacity will be necessary to help move the specification forward in a meaningful way.
Here are some of the main areas I identified that would be needed to help support core API implementations, as well as some of the web and mobile application implementations that will use HSDS.
- Server Side
- Deployment - The deployment of existing or custom server implementations.
- Hosting - Hosting and maintenance of server implementations for customers.
- Operation - The overseeing of day to day operations for any single implementation.
- Data Services
- Acquisition - The coordination, access, and overall acquisition of data from existing systems.
- Normalization - The process of normalization of data as part of other data service.
- Deployment - The deployment of a database in support of implementation.
- Hosting - The hosting of database, APIs, and applications in the support of implementations.
- Backup - The backing up of data, and API, or application as part of operations.
- Migration - The migration of an existing implementation to another location.
- Development - The development of an application that uses an Open Referral API implementation.
- Hosting - The hosting of a web or mobile application that uses an Open Referral API implementation.
- Management - The management of an existing web or mobile application that uses an Open Referral API implementation.
- UI / UX - There will be the need to create graphics, user interface, and drive usability of end-user applications.
- Developer Portal
- Deployment - The demo portal can be used as base, and template for portal deployment services.
- Management - Handling the day to day operations of a developer portal.
- Registration - Registering for the domains used as part of implementations.
- Management - Running the day to day management of DNS for implementations.
- App Monitoring - The monitoring of apps that are deployed.
- API Monitoring - The monitoring of APIs that are deployed.
- API - Initial, and regular evaluation of the security of the API.
- Application - Initial, and regular evaluation of the security of applications.
In some of these areas I want to offer API Evangelist assistance as a partner, while in others I will be looking for partners to step up. I will also be looking at what cloud services, or open source software can assist in augmenting needs in these service areas. These are all areas that Open Referral will not be able to ignore, with many projects needing a variety of assistance in any number of these areas. Ideally Open Referral develops enough internal capacity to play a role in as many implementations as possible, even if it is just part of the platform storytelling, or support process.
What service providers will be used as part of operations? Throughout this project exploration I've mentioned the usage of Github, a potentially free, and paid solution to multiple service areas. I've listed some of the other common service providers I recommend as part of my API research, and would be using to help deliver some of my contributions to the platform, and specific projects.
- Github - Github is used for managing code, content, and project sites.
- Amazon - AWS is used as part of database, hosting, and storage.
- CloudFlare - Used for DNS services, and DNS level security.
- Postman - Applied as part of on boarding, testing, and integrating with APIs.
- 3Scale - A service that can be used as part of the API management.
- API Science - A service that can be used as part of API monitoring.
- APIMATIC - A service that can be used to generate SDKs.
I recommend that Open Referral strike a balance in the number of services it uses to operate the platform, and what it suggests for partners, and specific implementations. If possible, it would be nice to have one or more cloud services identified, as well as some potentially open source tooling that might be able to help deliver in each specific area.
Open Source Tooling
What tools will be used as part of operations? Complementing the services showcased above, let's explore some of the open source tooling that will be used as part of Open Referral platform operations. This should be a growing list, hopefully outweighing the number of cloud services listed above, providing low cost options to tackle much of what is needed to stand up, and operate an Open Referral, HSDS driven solution.
- Slate - A static, presentation friendly version of API documentation.
- Jekyll - An open source content management system used for project sites.
I have only gotten started here. There are no doubt other open tools already in use, as well as some we should be targeting. What are they, what will they be used for, and do their licensing and support reflect the Open Referral mission? Each of these solutions should be forked, and maintained alongside other organizationally developed or managed software.
HSDS is an open definition, built on the back of, and supporting, other existing open definition formats. Let's showcase this heart of what Open Referral, and HSDS, are, by providing an up to date list of all the open definition formats, and standards in use.
- OpenAPI Spec - An open source, JSON API definition format for describing web APIs.
- APIBlueprint - An open source, Markdown API definition format for describing web APIs.
- MSON - An open source, markdown data schema format.
- JSON Schema - An open source, JSON data schema format.
- The Alliance of Information and Referral Systems XSD and 211 Taxonomy
- Schema.org - Civic Services Schema (at the W3C)
- The National Information Exchange Model - via the National Human Services Information Architecture - logic model here.
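To show how the JSON Schema format in the list above could pin down an HSDS-style record, here is a hand-rolled sketch of a schema, and a simple required-field check. The schema, its fields, and the helper are illustrative only, not drawn from the actual specification.

```python
# A hand-rolled sketch of how a JSON Schema style definition could
# describe a minimal HSDS-like organization record. The schema, and
# the required-field check, are illustrative only.
ORGANIZATION_SCHEMA = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
        "description": {"type": "string"},
    },
}

def missing_required(schema: dict, record: dict) -> list:
    """Return any required fields that are absent from the record."""
    return [field for field in schema.get("required", []) if field not in record]

print(missing_required(ORGANIZATION_SCHEMA, {"id": "org-1"}))  # ['name']
```

In practice a full JSON Schema validator library would do this work, but the point stands: a machine readable schema lets every implementation check incoming data the same way.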
Open source software, and open definitions are the core of Open Referral. The goal is to provide open formats, APIs, data, and tools that can be easily replicated by cash strapped municipalities, government agencies, and other organizations. However software development, and operation takes money, and resources, so there will be a monetization aspect to Open Referral, which will need to be explored, and planned for.
I wanted to take what I've learned in the API sector, and put it towards the evolution of a monetization framework that can be applied across the Open Referral platform, down to the individual project level. Most monetization planning will be at the project level, with some of these considerations when it comes to thinking about generating revenue.
- Acquisition - What does it cost to get everything together for a project from first email, right before development starts.
- Development - What person hours, and other costs associated with development of a project.
- Operations - What goes into the operation of APIs, portals, and other applications developed as part of integration.
- Direct Value
- Services - What revenue is generated as part of services.
- Grants - What grants have been received, and being applied to projects.
- Investment - What investments have been made for platform projects.
- Indirect Value
- Branding - What branding opportunities are opened up as part of operations.
- Partners - What partnerships have been established as part of operations.
- Traffic - What traffic is generated to the website, project sites, and other properties.
- Internal - What internal reporting is needed as part of platform monetization?
- Public - What reporting is needed to fulfill public needs?
- Partners - What partner reporting is needed as part of the program.
- Investment - What reporting is needed for investors?
- Grants - What grant reporting is required for grants.
Most of these areas will be applied to each project, but they will no doubt need to be rolled up, reported on, and understood across projects, as well as by the other areas listed above. Open Referral will not be a profit driven platform, but it will be looking to revenue generation to not just develop the open specification further, but also push for the development of open tooling, and other resources.
Monetization strategies applied to Open Referral will heavily drive the plans for API access that are applied to each individual implementation. While not everything will be standard across HSDS supporting implementations, there should be a base set of plans for how partners can operate, and generate their own revenue to support operations.
Platform API Plans
What are the details of API engagement plans offered as part of operations? I wanted to explore the many ways that leading API platforms open up access to their resources, and hand pick the ones that made sense for a minimum set of plans that could be inherited by default, within each implementation. Of course, each potential implementation might be different, but these are some of the essential platform plan considerations.
- Public - What are the details of public access.
- Commercial - At what point does access become commercial.
- Sponsor - How much access is sponsored by partners?
- Partner - Which plans are only available to partners?
- Education - Is there educational and research access?
- Time Frames
- Seconds - Resources are restricted by the second.
- Daily - Resources are restricted by the 24 hour period.
- Monthly - Resource access is reported on a monthly timeframe.
- Calls - Individual API calls are measured.
- Support - Support time is measured.
- Writes - The ability to write data to platform is measured.
- Country - In country deployment opportunities are available.
- On-Premise - On-premise options are available for deployment.
- Regions - The deployment in predefined regions are available.
- Range - API access limitations are available in multiple ranges.
- Minutes - Support access is limited in minutes.
- Hours - Support access is limited in hours.
- Endpoints - There are access limitations applied to specific API paths.
- Verbs - There are access limitations applied to the method / verb level.
While it is ideal that HSDS implementations provide public access to the vital resources being made available, it is not a requirement, and some implementations might severely lock down the public access elements of the platform. Regardless, all of the items listed should be considered when crafting one to five separate API access plans. The plans should cover hard infrastructure costs like compute, storage, and bandwidth, while also providing other commercialization opportunities that support revenue generation as well.
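To make the plan and rate limit considerations above a little more concrete, here is a sketch of how a daily call quota might be enforced per access plan. The plan names and quotas are made up for illustration, not anything Open Referral has defined.

```python
# A sketch of how the daily call limits outlined above might be
# enforced per access plan. Plan names and quotas are made up for
# illustration, not anything Open Referral has defined.
PLANS = {
    "public": {"calls_per_day": 1000},
    "commercial": {"calls_per_day": 100000},
}

class DailyQuota:
    def __init__(self, plan: str):
        self.limit = PLANS[plan]["calls_per_day"]
        self.calls = 0

    def allow(self) -> bool:
        """Record one API call, returning False once the quota is spent."""
        if self.calls >= self.limit:
            return False
        self.calls += 1
        return True

quota = DailyQuota("public")
print(all(quota.allow() for _ in range(1000)))  # True
print(quota.allow())                            # False
```

In a real deployment an API management layer would track this per key, and reset counters on the timeframe each plan defines, but the shape of the decision is the same.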
These are mostly the resources that currently exist on the public website, but I wanted to also make sure and provide other details about the organization, and the team behind the efforts. These are a few other resources that shouldn't be forgotten.
- FAQ - Providing an organized list of the frequently asked questions for the platform.
- History - Provide the history necessary to understand the background of the project.
- Strategic - What are the strategic objectives of the organization and specification.
- Technical - What are the technical details of the organization and specification.
- Organization - Description of the organization.
- Team - Description of the team involved.
- Specification - Description of the HSDS.
I can keep adding to this list, but I think this represents a pretty significant v2 presence for Open Referral, as well as the Human Services Data Specification (HSDS) format. This isn't just a suggested proposal. I needed to think about what was needed, and what is next, to help support projects on the table, and proposals that are in the works for specific implementations. I couldn't think about any single project without exploring the big picture.
Now I'm going to share this with Greg Bloom, the passionate champion behind Open Referral, and HSDS. I just needed to make sure everything was in my head, in support of our discussion in person tomorrow. We'll be looking to move the needle forward on this vision, in conjunction with the implementations on the table. Exploring the big picture on my blog is how I put my experience on the table, work through all of its moving parts, and make sure I've covered all the ground I need to discuss.
What Does The Road Map Look Like?
Greg and crew are in charge of the road map. I just need to get more intimate with the specification. I have already created a v1 draft, scraped from the Slate documentation for the existing Ohana API implementation, using OpenAPISpec. I have the PDF documentation for an Open Referral partner to convert to a machine readable OpenAPI Spec as well. The process will help me further build awareness around the specification itself. This post has helped me see the 100K view, crafting the OpenAPI Spec will help me dive deep down into the weeds of how to deliver a human services API using the HSDS standard.
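To illustrate the kind of machine readable definition I'm talking about, here is a minimal OpenAPI (Swagger 2.0) skeleton for a single human services search path, expressed as a Python dictionary. The path and parameter names are placeholders of my own, not the official HSDS definition or the Ohana API contract.

```python
# A minimal, illustrative OpenAPI (Swagger 2.0) skeleton for a
# hypothetical human services search endpoint. Everything here is
# a placeholder sketch, not the actual HSDS / Ohana API definition.

import json

definition = {
    "swagger": "2.0",
    "info": {"title": "Human Services API (sketch)", "version": "1.0"},
    "basePath": "/api",
    "paths": {
        "/search": {
            "get": {
                "summary": "Search human services locations",
                "parameters": [
                    {"name": "keyword", "in": "query", "type": "string"},
                    {"name": "location", "in": "query", "type": "string"},
                ],
                "responses": {"200": {"description": "Matching locations"}},
            }
        }
    },
}

# Serializing the definition is all it takes to hand it to doc
# generators, mock servers, and other API life cycle tooling.
print(json.dumps(definition, indent=2))
```

Even a skeleton this small is enough to drive interactive documentation, which is a big part of why I start every one of these projects with the definition.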
A Model For Human Services API And Hopefully Other Public API Services
I'm pretty stoked with the potential for working on Open Referral, and honored Greg has invited me to participate. This is just a first draft, tailored for what I would like to see considered for Open Referral / HSDS API, and for a couple of immediate implementations. However the model is something I will keep evolving alongside this project, as well as a more generic blueprint for how public service APIs could possibly be implemented.
There are several other API implementations that have come across my desk, to which I've felt a model like this should be applied. I was thinking about applying this to the FAFSA API, to help develop a student aid API community. I also thought it could be applied around the deployment of the RIDB API, in support of our national park system. In both of these environments a centralized, common, open API definition, with supporting schema and dictionaries, and a healthy selection of open source server, and client side web or mobile app implementations, would have gone a long way.
Anyways, I have what I need in my head so that I can talk with Greg, and coherently discuss what could be next.
I'm engaging in another conversation with a higher education institution about where to start with APIs on campus. A new CIO has assumed a leadership position, and some very forward thinking folks on campus asked me to come visit, talk with IT about APIs, and try to understand where they can get to work with APIs on campus--a perfect opportunity to get to work developing an API culture at Davidson College.
As their Twitter profile says, "Davidson is a highly selective independent liberal arts college committed to access and equal opportunity." Sounds like a rich environment to get to work empowering students, teachers, and faculty using APIs. Which leaves us at the familiar place of being asked by folks on campus: where do we start with APIs?
First we need a place to coordinate the getting started with APIs effort at Davidson with:
- Creating a Github Repo - To help facilitate the discussion, as well as store any code, data, and content that is generated.
- Publish a Github Pages - To help provide a web friendly destination to share content, information, and act as public face.
- Leverage Issue Management - To help establish an asynchronous communication stream around the initial API effort.
Second, let's start mapping out all the moving parts, that will be meaningful to APIs on campus:
- Who Are The Key Stakeholders?
- What Are The Campus Run Systems Used?
- What Are The Cloud Services Being Put To Use?
- What Open Source Software Tools Are Used?
- What Are The Bits and Bytes That Are Involved?
- Who Are The Key Individuals Involved With API Efforts?
- What Is This Low Hanging Fruit On The Public Website?
Everyone involved, from IT to students, including myself, can use the Github repo, and the pages set up to support each of these areas, to collectively establish a map of where to begin with APIs. Once a map of all these areas begins to come into focus, individual API opportunities will emerge, and become more clear. We are looking for the low hanging fruit when it comes to deploying APIs on campus.
We will use each page of this low hanging fruit project website as a workbench, and we'll use the Github Issues management as a communication channel for the work, providing an asynchronous, centrally located place to identify individual micro-projects that can be executed. Hopefully this work will spawn other Github repositories for each individual project, putting them into any Github Organization that makes sense -- this one lives in a low hanging fruit organization managed by API Evangelist.
This project is not an official project by Davidson College. The goal is to involve several key individuals on campus to help identify the low hanging fruit, then deploy individual projects within the institutional domain that can be pushed forward by campus IT, faculty, and students.
I always dig it when API stories spin out of control, and I end up down story holes. I'm sure certain people waiting for other work from me do not appreciate it, but these are where some of the best stories in my world come from. This time, writing the story about Best Buy limiting access to their API when you have a free email account resulted in the story about Best Buy using Medium for their API platform blog presence, which in turn pushed me to read Medium's terms of service.
Maintaining the legal side of your platform operations on Github, taking advantage of the version control built in, makes a lot of sense. It also opens the door for using Github Issue Management, and the other more social aspects of Github, to assist in the communication of legal changes, as well as facilitating ongoing conversations around changes in real time. I can see eventually working this into some sort of rating system for API providers, a sort of open source regulatory consideration, that is totally opt in by API platforms -- if you give a shit you'll do it, if you don't, you won't.
One thing that struck me as I wrote my post about Best Buy stopping issuing API keys to free email accounts, was the fact that Best Buy operates their developer blog on Medium--something I am seeing more of. As I discover new API-centric companies via my blog, Twitter, Product Hunt, AngelList, and the many other ways I tune into the space, I'm seeing more companies operating the blog portion of their presence in this way.
One of the voices in my head points out that this just doesn't seem like a good idea. It reminds me of hosting our blogs on Posterous. Medium doesn't let you map your domain, or sub-domain, to your blog (invite only), but I'm sure that is something they'll do soon. They don't have RSS, and they don't have a read API either? *warning bells* While I get the Medium thing, it seems like one of those neatly tended gardens, where many of the roads out are gated off with friendly hand-painted signs.
However, some of the other voices in my head chimed in that Medium is just asking for a non-exclusive license to use your content. They let you delete your account, and I know they are working hard on their API strategy. Also, I'm a big supporter of API providers being as scrappy as possible, as many of us have to do as much as we can with often non-existent budgets. Best Buy rocks it at leveraging Github, Medium, Twitter, and all the channels I usually recommend. So in the end I really can't bitch about API platforms using Medium for their blog presence.
When you use Medium you get the network effect for your blog presence--something I'm working to understand better, and leverage more as part of my own work. I'll keep tracking on the APIs I find that use Medium for their blog presence. Right now my only complaint is simple -- THERE IS NO RSS!!! Beyond that, I'll just track on and see how it all plays out. My advice is still to utilize WordPress, Jekyll, Blogger, or another hosted service where you can map a subdomain, keeping all the valuable exhaust within your domain. Then follow the POSSE principles when incorporating Medium into the communication strategy for your API platform.
Best Buy is one of many recent responses I am seeing from public API providers, as they work to strike a healthy balance within their API community. In an attempt to incentivize the behavior they desire within the Best Buy API community, the platform will not be issuing API keys to any email address that comes from the popular free email platforms (@google.com, @yahoo.com, etc.). While I hate seeing any public API access be tightened up, I can't help but sympathize with their move, and I have to support any API provider who works to set a healthy balance within their community.
While I am not sure limiting access based upon email account is the solution they are looking for, they hit all the right notes for me:
- Develop Human Relationships >> If we want to have a better relationship with you, our active users, we need to make better connections between your alpha-numeric key and the services we provide to you
- Respect For What You Build >> If we disable a key because the email address is old, we may break an app. We don’t like breaking things.
- Business With Companies >> Over the next couple of months we will transition to a new system that will associate API keys with a company and not an individual.
- Empowering Education usage >> We are developing a program that we intend to have up and running before the start of the next school year that will accommodate educational use.
- Allow For Play & Exploration >> Similarly, we have ideas for how to accommodate events, hackathons and developer sandboxes to allow folks to test the waters without needing to go through a formal key sign up process.
I will always encourage companies, organizations, institutions, government agencies, and individuals to be as public as possible. More importantly, I will always encourage them to do it in the safest, most meaningful way possible, and when they are just working to cultivate and get to know their community, I can only lend my support. Looking beyond the tech, APIs are all about hammering out how you discover and maintain digital relationships and partnerships, via your website, web and mobile applications, as well as your API platform.
Not every company will be able to do APIs the way Best Buy is. It's just not in the DNA of every company. The ones that will be most successful with it will be the ones that do the hard work of getting to know their community and establish sensible ground rules, but in my opinion the most critical part of it all is being as communicative and transparent about it as you possibly can. The Best Buy API team does this well, by sharing the thinking behind their difficult decision to lock down on API keys issued to users with free email accounts.
I am always trying to identify the common building blocks employed by leading API providers, and Twilio is one of the usual suspects I showcase. This time I am focusing on their annotated code walk-throughs and tutorials, which provide a pretty good model that other API providers can follow when planning their own tutorials.
Twilio offers up a tutorial for almost every API endpoint they offer. Some of the more popular tutorials have versions for almost every programming language, while some of the lesser traveled ones only have a single version. Once you click into each tutorial, it delivers as the title implies: an annotated walk-through, showing you how to get up and running making the API call, in the language of your choice.
Each tutorial provides you a direct link to the code libraries, available on Twilio's Github account. It can get pretty busy in the walk-through section for Twilio, but the value is clear. I could envision a more portable, embeddable, and machine readable tool, that would help API providers do this as well as Twilio, but maybe in a cleaner, plug and play way.
I've had tutorials as a common API management building block for some time, maybe I'll take Twilio's model and expand on it, and provide a more detailed blueprint that API providers can follow when planning their own approach. Maybe someone could also turn it into a simple service, or open source solution that the API space could put to use.
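As a starting point for that blueprint, here is a sketch of what a machine readable, portable tutorial entry might look like. The schema below is entirely my own invention, inspired by Twilio's annotated walk-throughs, and is not anything Twilio actually publishes.

```python
# Hypothetical schema for a machine readable tutorial entry, pairing
# each annotation with a reference to a code sample. All names and
# values here are illustrative placeholders, not Twilio's format.

tutorial = {
    "title": "Send an SMS",
    "endpoint": "/Messages",
    "languages": ["python", "php", "ruby"],
    "steps": [
        {"annotation": "Authenticate with your account credentials",
         "code_ref": "auth.py"},
        {"annotation": "Make the API call to send the message",
         "code_ref": "send.py"},
    ],
    "repository": "https://github.com/example/tutorial-sketch",
}

# An embeddable widget or static site generator could render each
# step's annotation alongside the referenced code sample.
for step in tutorial["steps"]:
    print(f"{step['annotation']} -> {step['code_ref']}")
```

Something this simple, stored alongside an API definition, would let any provider publish Twilio-style annotated walk-throughs without building custom tooling.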
The Lack Of An API And Healthy Partner Integrations Is An Early Warning System For Service Providers
28 Mar 2016
I was disappointed to see the email in my inbox this morning from IFTTT about their Pinboard integration. I also helped amplify Pinboard when its creator was tweeting up a storm earlier, and I recommend you read his post: My Heroic and Lazy Stand Against IFTTT.
However I had work to get done on an essay about the API effort over at Brigham Young University, and prepare for some meetings I have this week around the Open Referral API definition, to help people find government services--IFTTT bums me out, but priorities.
The loss of Pinboard integration in IFTTT sucks, but there is always Zapier. ;-) I began moving away from my IFTTT support back in 2014, after I began seeing their lack of an API, and the absence of a forward-facing business model, as early warning signs about the sustainability of IFTTT as a service provider.
I prefer having the option of paying for services on which I have a growing dependence for my personal digital presence, and it is vital when it comes to my business presence. This was a deal breaker for me when it comes to IFTTT being a service I could support.
I also strongly believe, that if you are building a startup using the APIs of other companies, you should be offering an API for your own aggregation, automation, interoperability, and any other aspect of your tech--otherwise it isn't a service I want to support.
There are many other approaches to integrating your cloud services, and the concept of iPaaS is a growing layer of the API space. Unfortunately there will also continue to be incentives for startups to not offer an API, be secretive and shady about their operations, not communicate honestly with their community, and have fucked up terms of service.
All we can do to combat this is to make sure to only use the services who have a sensible business model in alignment with their community, provide a truly open API, possess terms of service that are more human than lawyer, and know how to actually communicate with their community.
Which is why I support API providers like Pinboard, a service that plays a central role in the operation of API Evangelist.
The news out of Runscope makes today a good day to kick off discussion around a project that I've been helping push forward with the API Garage team, assisting them in finding the healthiest path forward for their API client tooling. As Runscope demonstrates, it is a tough time for API startups, something that adds fuel to my personal mission to do what I can to help startups find success.
First, what is API Garage? It is one of the HTTP / web API client tools available today, alongside Postman, Paw, DHC, and Stoplight. API Garage is an Electron based solution, which you can download and put to work in helping you integrate with web APIs, allowing you to make calls, and see the requests / responses, all without having to write code. This approach to working with APIs has evolved beyond use by just API consumers, and is quickly becoming the tool of choice for API development teams, being applied at almost every stop along the API life cycle.
The API Garage team approached me a couple months back to discuss the roadmap, and figure out how we can evolve it beyond just being a web API client, and help it emerge as a "garage environment" where individuals and teams can work on APIs, at any stage of the API life cycle. Inevitably this discussion led us to talking about how API Garage would be licensed, and generate revenue to sustain their vision. At this point, the API Garage team expressed interest in open sourcing the solution, and focusing on alternative approaches to generating the money they needed to sustain the solution, and keep them working on meaningful API projects.
After a number of conversations, they settled on open sourcing API Garage, and began focusing revenue on several key areas:
- Sponsorship - The API Garage website, and the open source download will have a handful of sponsorship spots, ranging from the default home page, to a well placed banner location, which they will be filling with quality, complementary services and tooling providers.
- Default APIs - When you download the API Garage, there will be a handful of default APIs available. The team will be carefully considering a small group of valuable, relevant, and usable API partners to fill these slots.
- API Services - The current solution provides API testing and mocking services, and the new open source solution will offer opportunities for a diverse range of API services, that serve almost every stop of the API life cycle. There will be a handful of default slots available for quality API services to sponsor.
- Private Label - With the focus on the API Garage being open source, the team is committed to helping deploy it as custom, and private label solutions for companies who would like to establish an API Garage within their local organizational environment.
- Consulting - The team is also opening up to API consulting services, helping companies of all shapes and sizes with their API design, development, management, testing, strategy, and educational needs.
These are just a handful of the approaches we've sketched out to help the team make ends meet. The team has a short runway to focus on transforming the current API Garage into the open source version (targeting summer 2016), and will need to bring in additional revenue by the time they launch, if they are going to make all of this work.
There are two things that attracted me to this project. 1) The opportunity for an open source API life cycle solution to be a window for managing all stops. 2) The opportunity to help cultivate alternative, collaborative approaches to funding this window to the API life cycle, as well as potentially the APIs, and API services that need to exist within any thriving API Garage. As demonstrated by Runscope's announcement, delivering the valuable API services we need is challenging, and we need to be pushing forward open source software, and open community revenue models, alongside the VC funded vision that is dominating the sector right now.
I have a notebook full of stories to publish, from weeks of conversations with the API Garage team, and lots of work to help flesh out what exactly an API Garage is. Think about the tech startup myth stories of the HP garage, and imagine how we can create an API focused environment where API engineers can craft their solutions. If you would like to know more about what the API Garage team is up to, head over to their website and contact them directly, or feel free to ping me directly, and I'll get you plugged in.
I'm curious to see what the community can do with an open, collaborative API Garage, where we can share our API designs, and put to use the best of breed API services and tooling, in the service of a modern API life cycle.
I recently caught a glimpse of how APIs are going to deliver the change we need in this world. It began while I was attending a gathering of indie ed-tech folks on the campus of Davidson College in North Carolina, where 20-30 mostly non-developers discussed what indie ed-tech is, something which included many visions of what was dubbed "the personal API". While the gathering was very enlightening, it is what this gathering has set into motion, after we all parted ways, that I think has the most potential.
Since the gathering occurred, a rolling wave of API driven awareness has been picking up speed, with Indie Ed Tech, Colleagues/Friends, APIs, Unexpected Emergent Ideas, and dot dot dot and Can The JED-API Power a Certification with a fellow who barks about and plays with web tech. The Indie Ed-Tech: Revue/Reflections from the ed-tech Cassandra. I took a Journey to discover what is Indie Ed-tech with an expert generalist, and heard about Indie Educational Technology from a university chief information officer (CIO). I then read about how we are Pushing/Pulling Data – Thinking Computationally? Differently? from someone focused on technology integration in K12 and higher ed. Then I explored Lo-fi Ed”-“Tech and The Personal in Indie from an edupunk with a mountaintop compound in Italy, and enjoyed the Reflections on Indie Ed-Tech from his partner in crime, who is someone who is just making things up as he goes. I enjoyed the recap of the #IndieEdTech Design Sprint and #IndieEdTech Personal APIs & The Current State of Ed-Tech and IndieEdTech Keynote Reflections from an instructional psychology and technology graduate student. I thoroughly enjoyed the Framing Indie EdTech and Indie EdTech Design Sprint and Indie EdTech: Future and Funding and finally listening to the Vinyl API of One’s Own from a director of digital learning.
This collective vision of Indie Ed-Tech, which includes some very personal views of a what an API is, is how APIs will deliver the change we need.
It will not be the e-commerce vision of SalesForce, eBay, Commerce, Paypal, and other API pioneers that will move the needle with APIs. It will not be the following wave of social API leaders like Twitter and Facebook who connect all of us together using APIs. It will not be the API as a product vision of startups like Twilio, SendGrid, and Stripe who shift the landscape with a complete API package.
A perfectly designed REST API, that follows all the hypermedia rules, and the linked data vision of API visionaries will not save us. No single API specification, schema or standard handed down from above will provide the framework we will need to make the change necessary. Our belief in the perfect API implementation will not ever unite, and connect humans in the ways we need, and bring the balance that is necessary.
The well oiled API platforms of the big five: Amazon, Apple, Google, Microsoft, and Facebook will not bring the collective power needed to make web scale API change. Their API platforms, organizational-wide unity, design strategies, and CEO mandates, will never support the API power that is needed to achieve the global scale we will need.
The better late than never API implementations of last generation tech gorillas like Oracle, IBM, SAP, and AT&T will not begin to power even 5% of the potential that APIs will deliver. These 1000 lb legacy tech giants like to talk API, and like to pretend they get what APIs can do, but they will never realize what is actually needed to be API, beyond their own selfish needs.
Even the mighty US, UK, and other top governments, with their all knowing, all seeing, all mighty NSA, military, and bureaucratic institutions will never fully realize API, with all their open data efforts, and global surveillance networks. Their belief that they have all the data, intelligence, network, and mobile access that they will need, will only distract them from what is actually possible with APIs.
It won't even be the over eager API Evangelists, who spend all their time understanding everything that is API, that will change the world using APIs. These evangelists will only be the channel through which each individual receives the information and awareness about APIs each day, setting the stage for what will bring the change we need. Evangelists can spend the next 33 years watching, writing, speaking, and hacking on APIs, and still not move the needle in the same way that the API literacy showcased above will set into motion.
It will be the API literate individual, who understands that they can get access to their own data, and information from any website, system, application, connected device, company, and institution, using APIs. It will be people who understand that they can make their education, career, and the web into what they want, using APIs. That the web is programmable. A digitally aware individual who assumes full control over their online self, taking it back from the tech giants, understanding that they own all the exhaust from their online (and increasingly offline) personal, and professional life.
I have been excited about some of what I've seen while monitoring the API space over the last five years. Something that is getting harder and harder to find each year. However, nothing I have seen makes me more hopeful and optimistic about what APIs can do, than reading about each of these individuals, who have been turned on to what an API is. It is not my vision. It's not IT's vision. It's not Silicon Valley's vision. It is their own vision of what is API, an understanding that APIs are all around them, and developing their own interpretation of what APIs can do.
No single API implementation, tool, service, specification or standard will set into motion the change that is needed. The real API story is about empowering every single individual to take control over their own digital self using APIs--something I believe should begin in education at the K-12, as well as University level. This is the front-line of APIs. This is how we will push back on Silicon Valley, technology, digital exploitation, and the NSA's of the world.
This is how APIs will help to deliver the change we need. #IndieEdTech
History is everything. Understanding where we have come from is critical to knowing where we are going. While pushing forward with the latest technology, it is always healthy to pause and take a look at that past. Someone Tweeted the link to my history page, and I realize it has been three years since I refreshed my view of the overall history of the space, so I wanted to take some time and add a few other milestones that I feel were significant along the way.
When I talk about APIs, I'm focused on the version that was born out of the enterprise, during the Service Oriented Architecture (SOA) movement. Sometime around 2000, a portion of the SOA experiment left the enterprise and found a more fertile environment in the world of start-ups. In 2016, this version of API has re-captured the attention of the enterprise, as they see them being used in popular, public API driven services, and in the startups they are acquiring and gobbling up.
Where we stand in 2016, there are some obvious technical reasons why web APIs are finding success in companies of all shapes and sizes, and even within government; but not all the reasons for this success are technical. There are many other, less obvious aspects of web APIs that have contributed to their success, things we can only learn by closely studying the past and looking at why some of the pioneers of web APIs were successful, and have continued to be successful over the years.
In 2016, it is critical that we emulate the best practices that have been established over the last 16 years, following the lead of early API providers like Amazon, Salesforce, eBay and Twitter--much of which is still being emulated by new API practitioners in 2016. As a startup, SMB, enterprise, institution, or government agency, you don't have to follow every example set in this 16 year history, but you should be aware of this history, and understand your place in the sector.
As I look back each year, I see some clear patterns emerge that have defined the industry--patterns that need to be emulated, and some that should be avoided, as we plan our own API strategy and presence.
As the first .COM bubble was bursting, platforms were looking for innovative ways to syndicate products across e-commerce web sites, and web APIs, built on the backs of existing HTTP infrastructure proved to be the right tool for the job.
With this in mind, a handful of tech pioneers stepped up to define the earliest uses of APIs as part of sales and commerce management, kicking-off a ten year evolution that I consider as the early history of web APIs, defining the sector we all enjoy today.
However, even with the early success of APIs, the sector would struggle to reach a mature point, without several other critical ingredients that would prove to be as important as essential commerce variables like social, payments, and messaging.
February 7th, 2000 Salesforce.com officially launched at the IDG Demo 2000 conference.
Salesforce.com launched its enterprise-class, web-based, sales force automation as "Internet as a service". XML APIs were part of Salesforce.com from day one. Salesforce.com identified that customers needed to share data across their different business applications, and APIs were the way to do this.
Marc R. Benioff, chairman and founder of salesforce.com stated, "Salesforce.com is the first solution that truly leverages the Internet to offer the functionality of enterprise-class software at a mere fraction of the cost."
Salesforce.com was the first cloud provider to take an enterprise class web application and API and deliver what we know today as Software-as-a-Service.
Even with SalesForce being the first mover in the world of web APIs, they are still a powerhouse in 2016. SalesForce continues to lead when it comes to real-time APIs, testing, deployment and most recently taking a lead when it comes to mobile application development and backend as a service (BaaS).
The eBay API was originally rolled out to only a select number of licensed eBay partners and developers.
As eBay stated:
"Our new API has tremendous potential to revolutionize the way people do business on eBay and increase the amount of business transacted on the site, by openly providing the tools that developers need to create applications based on eBay technology, we believe eBay will eventually be tightly woven into many existing sites as well as future e-commerce ventures."
The launch of the eBay API was a response to the growing number of applications that were already relying on its site either legitimately or illegitimately.
The API aimed to standardize how applications integrated with eBay, and make it easier for partners and developers to build a business around the eBay ecosystem.
eBay is considered the leading pioneer in the current era of web-based APIs and web services, and still leads with one of the most successful developer ecosystems today.
On July 16, 2002, Amazon launched Amazon.com Web Services allowing developers to incorporate Amazon.com content and features into their own web sites.
Amazon.com Web Services (AWS) allowed third party sites to search and display products from Amazon.com. Product data was made accessible using XML and SOAP.
Internet visionary Tim O'Reilly was quoted in original Amazon Web Services press release saying, "This is a significant leap forward in the next-generation programmable internet."
APIs and Amazon both have roots in e-commerce, but APIs were quickly applied to other areas resulting in the social media, cloud computing, and almost every single component necessary to build the web, and mobile Internet that we all use every day.
As API driven commerce platforms were still finding their footing, working to understand the best way to put APIs to work, a new breed of technology platforms emerged when it came to using content, media, and messaging on the web, in a way that was very user centric and socially empowering for individuals and businesses.
Publishing user generated content, and the sharing of web links, photos and other media via APIs emerged with the birth of new social platforms between 2003 and 2006. This was an entirely new era for APIs, one that wasn't about money, it was about connections.
These new API driven, social platforms would take technology to new global heights, and ensure that applications from here forward, would all always contain essential social features, that were defined via their platform APIs.
Social, was an essential ingredient the API industry was missing.
del.icio.us is a social bookmarking service for storing, sharing, and discovering bookmarks to web pages, founded by Joshua Schachter in 2003.
Del.icio.us implemented a very simple tagging system which allowed users to easily tag their web bookmarks in a meaningful way, while also establishing a kind of folksonomy across all users of the platform, which proved to be a powerful way of cataloging and sharing web links.
The innovative tagging methodology used by del.icio.us allowed you to pull a list of your tags, or public web bookmarks, by using the URL http://del.icio.us/tag/[tag name]/. So if I was searching for bookmarks on airplanes, I could request http://del.icio.us/tag/airplane and GET a list of all bookmarks that had been tagged airplane. It was that simple.
When it came to the programmatic del.icio.us interface, the API was built into the site, creating a seamless experience--if you wanted the airplane tags via HTML you entered http://del.icio.us/tag/airplane, if you wanted RSS of the tags you entered http://del.icio.us/rss/tag/airplane, and if you wanted XML returned you used http://del.icio.us/api/tag/airplane. This has changed with the modern version of the Delicious API.
del.icio.us was the first concrete example of how the web could deliver HTML content alongside machine readable formats like RSS and XML, using a URL structure that was simple and human readable. This approach to sharing bookmarks would set the stage for future APIs, making APIs easy to understand for developers and non-developers alike. Any slightly technical user could easily parse the XML or RSS, and develop or reverse engineer widgets and apps around del.icio.us content.
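The elegance of that scheme is easy to appreciate in code. This is only an illustration of the historical URL pattern described above, since the service no longer works this way:

```python
# Sketch of the original del.icio.us URL scheme: the same tag could be
# requested as HTML, RSS, or XML just by changing one path segment.
# Illustrative only -- the modern Delicious API abandoned this layout.

BASE = "http://del.icio.us"

def delicious_tag_url(tag, fmt="html"):
    """Build the del.icio.us URL for a tag in the requested format."""
    prefixes = {
        "html": "/tag/",      # human-readable page
        "rss": "/rss/tag/",   # RSS feed of the same bookmarks
        "xml": "/api/tag/",   # raw XML via the API
    }
    if fmt not in prefixes:
        raise ValueError(f"unknown format: {fmt}")
    return BASE + prefixes[fmt] + tag

# The three views of the same "airplane" bookmarks:
for fmt in ("html", "rss", "xml"):
    print(delicious_tag_url("airplane", fmt))
```

The same human readable convention lives on today in APIs that offer `.json` and `.xml` variants of the same resource path.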
del.icio.us has been sold twice since its early popularity, first to Yahoo! in 2005, and then to AVOS Systems in April 2011. However, del.icio.us was one of the pillar platforms that ushered in the social era of the API movement, establishing sharing via APIs as critical to the API economy, while also showing that simplicity rules when it comes to API design.
Flickr was originally created as an online game, but quickly evolved into a social photo sharing sensation.
The launch of the RESTful API helped Flickr quickly become the image platform of choice for the early blogging and social media movement by allowing users to easily embed their Flickr photos into their blogs and social network streams.
The Flickr API is the driving inspiration behind the concept of BizDev 2.0, a term coined by Flickr co-founder Caterina Fake. Flickr couldn't keep up with the demand for its services, and established the API as a self-service way to deal with business development.
The core concepts established by Flickr using its API would transcend the company and its acquisition by Yahoo. Business development using APIs is now embedded in the philosophy of the business of APIs, pushing APIs beyond the purely technical.
APIs became something that any company could use to actually conduct business with its partners and the public, but we still had a ways to go before APIs would grow up.
On August 15th, 2006, Facebook launched its long-awaited development platform and API. Version 1.0 of the Facebook Development Platform allowed developers access to Facebook friends, photos, events, and profile information.
The API used REST, and responses were available in an XML format, following common approaches by other social API providers of the time.
Almost immediately, developers began to build social applications, games, and mashups with the new development tools.
While the Facebook API and platform is considered by many developers to be unstable, it continues to play a significant role in the evolution of the entire platform with applications and partnerships that drive new features and experiences on Facebook.
On September 20, 2006 Twitter introduced the Twitter API to the world.
Much like the release of the eBay API, Twitter's API release was in response to the growing usage of Twitter by those scraping the site or creating rogue APIs.
Twitter exposed the Twitter API via a REST interface using JSON and XML.
In the beginning, Twitter used Basic Auth for API authentication, resulting in the now infamous Twitter OAuth Apocalypse almost four years later, when Twitter forced everyone using the API to switch to OAuth.
In four short years, Twitter's API had become the center of countless desktop clients, mobile applications, web apps, and businesses -- even Twitter itself consumed it, in its iPhone, iPad, and Android apps, and via its public website for much of its existence (no longer true today).
Twitter is one of the most important API platforms available, showing what is possible when a dead simple platform does one thing well, then opens up access via an API and lets an open API ecosystem build the rest.
Twitter is also one of the most cautionary tales of how your API ecosystem can begin to work against you, unless you properly address the political considerations of an API ecosystem as it grows.
Business and Marketing
As APIs evolved from commerce through social, it was clear that the industry was going to need some standardization, including common business practices. The industry needed to standardize how APIs were deployed, as well as provide marketing to help get the word out about the potential of APIs.
Establishing common business and marketing practices for the API space took a lot of grassroots outreach, as well as storytelling on behalf of APIs, the companies behind them, and the industries they rose out of. Two separate API pioneers stepped up to help define the API industry we know today, between 2005 and 2012.
While writing about the history of APIs, it is easy to be so focused on just APIs, that you overlook the single most important player in the entire history of the web API--ProgrammableWeb.
In July 2005, John Musser started ProgrammableWeb. According to his original about page:
ProgrammableWeb is a web-as-platform reference site and blog delivering news, information and resources for developing applications using the Web 2.0 APIs.
I started this site because I couldn't find what I was looking for: a technology focused starting point for web platform development. (For a bit more see my initial post.) Although no guarantees, the last time I started a reference site it somehow became Google's highest rated link on the topic. Given that this site will be a collaborative effort with community input as well, this can be what we make it.
I hope you find the site useful.
John Musser - Seattle, August 2005
John’s original blog post on why he started ProgrammableWeb, says it all: Why? Because going From Web Page to Web Platform is a big deal.
Web APIs are a big deal! Whether its social networking, government, healthcare or education--having a programmable platform to make data and resources available will be a critical part of how commerce and society operates from here on forward.
John made an early decision to showcase open and RESTful approaches to deploying APIs over parallel attempts at Service Oriented Architecture (SOA) and Web Services, and focused on telling stories about open APIs--way before it was the thing to do in Silicon Valley.
When I started API Evangelist in July 2010 (five years after ProgrammableWeb), and started talking about the business of APIs, the technology of web APIs was already widely accepted in Silicon Valley, because of the stories that had been told on ProgrammableWeb.
As we progress through 2013, a year in which I think we can confidently say APIs are moving mainstream, I feel we owe much of that success to ProgrammableWeb. The stories John, Adam, and other writers have been telling on ProgrammableWeb have been crucial to quantifying and defining the API industry--allowing us all to build, iterate, and move things forward.
Without stories around the technical, business and politics of APIs, these virtual interfaces would not have been able to find a place in our real life worlds.
In November 2006, the first API service provider, Mashery, came out of "stealth mode" to offer documentation support, community management, and access control for companies wishing to offer public or private APIs--announced in a TechCrunch blog post titled API Management Service is Open for Business.
At this point in time, in 2006, we were moving from the social period of APIs into the cloud computing phase with the introduction of Amazon Web Services. It was clear that the world of web APIs was getting real, and there was opportunity for companies to offer API management as a service.
While there were tools for deploying APIs, there was no standard approach to managing an API deployment. Mashery was the first to bring a standard set of services to API providers, helping set the stage for the future growth of the API industry.
It would take almost six more years before the API industry would come of age, a process Mashery significantly helped contribute to. The space we all know today was defined by early API commerce pioneers like SalesForce and Amazon, social pioneers like Flickr and Delicious, and by Mashery, who helped define what is now known as the business of APIs.
In 2013, Mashery was acquired by Intel, and then by Tibco in 2015, further validating that the API industry truly is coming of age.
Web developers quickly saw the potential of embeddable maps, and found ways to hack these mapping sources to innovate and build the web properties users desired, focused on solving the local problems we all face daily.
This early use of APIs in providing mapping tools and services for developers laid the groundwork for much of the early mobile developer talent that would drive the coming mobile API period.
Google Maps API started a trend of API mashups with its valuable location based data, with over 2000 mashups to date.
The API demonstrates the incredible value of mapping APIs, as well as the power users can have in influencing the direction an application or API takes. Lars Rasmussen, the original developer of Google Maps, commented on how much he learned from the developer community by watching how they hacked the application in real time; Google took what it learned and applied it to the API we know today.
Few other companies have the resources to tackle a problem like mapping the world's resources and delivering a reusable, API driven resource, like Google did. Google has played many roles in moving the API space forward, but Google Maps played a pivotal role in the history of APIs.
As APIs were generating social buzz across the Internet, Amazon saw the potential of a RESTful approach to business, internalized it, and saw APIs in a way that nobody had before--giving birth to an approach to using APIs that was much more than just e-commerce; it would re-invent the way we compute.
Amazon transformed the way we think about building applications on the web, delivering one of the essential ingredients we needed for APIs to work, by putting APIs to work. What we now know as cloud computing changed everything, and made the mobile, tablet, sensor, and other API driven realms possible.
In March, 2006 Amazon launched a new web service, something completely different from the Amazon bookseller and e-commerce site we've come to know. This was a new endeavor for Amazon: a storage web service called Amazon S3.
Amazon S3 provides a simple interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives developers access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites.
Amazon S3 or Simple Storage Service was initially just an API. There was no Web interface or mobile app. It was just a RESTful API allowing PUT and GET requests with objects or files.
Developers using the Amazon S3 API were charged $0.15 a gigabyte per month for storing files in the cloud.
With this new type of API and billing model, Amazon ushered in a new type of computing we now know as cloud computing.
This also meant that APIs were no longer just for data or simple functionality. Now they could be used to deliver computing infrastructure.
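To make that simplicity concrete, here is a rough sketch of what those raw S3 requests looked like on the wire. The bucket name, key, and body are made up, and the required signed Authorization header is omitted:

```python
# Sketch of the request shapes behind the original S3 REST API: storing
# and retrieving a file is just an HTTP PUT and GET against a
# bucket/key path. Real requests also carry a signed Authorization
# header (omitted here); this only illustrates the interface.

def s3_request(method, bucket, key, body=b""):
    """Build the raw HTTP request text for a simple S3 PUT or GET."""
    lines = [
        f"{method} /{key} HTTP/1.1",
        f"Host: {bucket}.s3.amazonaws.com",
    ]
    if method == "PUT":
        lines.append(f"Content-Length: {len(body)}")
    head = "\r\n".join(lines) + "\r\n\r\n"
    return head.encode() + body

# Store an object, then fetch it back -- the entire storage interface.
put = s3_request("PUT", "my-bucket", "notes.txt", b"hello s3")
get = s3_request("GET", "my-bucket", "notes.txt")
print(put.decode())
```

That the whole storage service could be reduced to two HTTP verbs is exactly what made it feel so different from the enterprise storage systems of the day.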
In August 2006, shortly after Amazon launched its new cloud storage service Amazon S3, the company followed with a new cloud computing service dubbed Amazon EC2 or Elastic Compute Cloud.
Amazon EC2 provides re-sizable compute capacity in the cloud, by allowing developers to launch different sizes of virtual servers within Amazon data centers.
Just like its predecessor Amazon S3, Amazon EC2 was just a RESTful API. Amazon wouldn't launch a web interface for another three years.
Using the Amazon EC2 API developers can launch small, large and extra large servers and pay for every hour that the server is running.
Amazon EC2, combined with Amazon S3 has provided the platform for the next generation of computing with APIs at the core.
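Launching a server was similarly just an HTTP request. Here is a minimal sketch of an EC2 RunInstances request URL, with illustrative parameter values and the required signature and credential parameters omitted:

```python
from urllib.parse import urlencode

# Sketch of an early-style EC2 "RunInstances" request URL. The image id
# and instance type values are illustrative, and the signature and
# credential parameters a real request requires are left out.

def ec2_run_instances_url(image_id, instance_type, count=1):
    params = {
        "Action": "RunInstances",       # which API operation to perform
        "ImageId": image_id,            # machine image to boot
        "InstanceType": instance_type,  # small, large, extra large, etc.
        "MinCount": count,
        "MaxCount": count,
    }
    return "https://ec2.amazonaws.com/?" + urlencode(params)

print(ec2_run_instances_url("ami-12345678", "m1.small"))
```

One request to create a server, and billing by the hour it runs: that combination is what turned infrastructure into a programmable resource.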
Cloud computing was an essential ingredient the API industry was missing, and would grow into what was needed for almost every aspect of growth during the next 10 years. The most significant part of this story is that Amazon's cloud APIs were not just making the company's digital resources available to other businesses, they were also driving much of the growth across every other sector of the API space.
A Mobile World
With the introduction of iPhone and Android smart phones, APIs evolved from powering e-commerce, social, and the cloud, to delivering valuable resources to the mobile phones in our pockets, which were quickly becoming commonplace around the globe.
APIs make valuable resources modular, portable, and distributed, making them a perfect channel for developing mobile and tablet applications of all shapes and sizes.
A small group of API driven technology platforms have helped define the space and won over the hearts and minds of both developers and the end users of the applications they develop.
In March 2009 Foursquare launched at the SXSW interactive festival in Austin, TX.
Foursquare is a location-based mobile platform that makes cities more interesting to explore; by checking in via a smartphone app or SMS, users share their location with friends while collecting points and virtual badges.
Even with growing competition from early mover Gowalla and major players like Facebook and Google, Foursquare has emerged as the dominant mobile location-sharing and check-in platform.
On October 6, 2010 Instagram launched its photo-sharing iPhone application.
Less than three months later, it had one million users.
Kevin Systrom, the founder of Instagram, focused on delivering a powerful but simple iPhone app that solved common problems with the quality of mobile photos and users' frustrations with sharing.
Almost immediately, many users complained about the lack of a central Instagram web site or an API, with Instagram remaining firm on focusing its energy on the core iPhone application.
In December, a developer named Mislav Marohnić took it upon himself to reverse engineer how the iPhone app worked, and built his own unofficial Instagram API.
By January Instagram shut down the rogue API and announced it was building one of its own.
Then in February of 2011, Instagram released the official API for the photo platform.
Within days, many photo applications, photo-sharing sites, and mashups built around the API started showing up.
Instagram became a viral iPhone app sensation, but quickly needed an API to realize its full potential, asserting the platform's place in history as one of the defining players in the mobile period of APIs.
In 2007, a new API-as-a-product platform launched, called Twilio, which introduced a voice API allowing developers to make and receive phone calls from any cloud application. In recent years, Twilio has also released text messaging and SMS short code APIs, making itself a critical telephony resource in many developers' toolboxes.
Twilio is held up as a model platform to follow when evangelizing to developers. Twilio has helped define which technical and business building blocks are essential for a healthy API driven platform, set the bar for on-the-ground evangelism at events and hackathons, and worked hard to showcase, support, and invest in its developer ecosystem.
Alongside Foursquare and Instagram, Twilio has come to define mobile application development, helping push APIs into the mainstream. While Twitter has sometimes been held up as a cautionary tale when it comes to APIs, Twilio has demonstrated that, when done right, API driven ecosystems do work.
By 2011, the bar for delivering APIs via HTTP had been well established by early pioneers like SalesForce and Amazon, but Twilio has shown how mature the business of APIs became with its evolution into the mobile period. However, mobile development via APIs owes its roots to the foundation laid by the commerce, social, and cloud API pioneers.
JSON use evolved out of a need for stateful server-to-browser communication, without using browser plugins such as Flash or Java applets, which had been the dominant methods in the early 2000s. The JSON organizational website was officially launched in 2002, but it wasn't until Yahoo! began offering some of its Web services in JSON in 2005 and then Google used it for its GData protocol in 2006, that we started to see widespread adoption of the format by API providers, and consumers.
The switch from XML to JSON marked the maturing of the web API space, going from hobby to an actual business solution that could be used to describe essential business resources--resulting in near complete adoption of JSON by 2016.
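The difference is easy to see by serializing the same record both ways; the product record here is made up:

```python
import json
import xml.etree.ElementTree as ET

# The same product record in both formats, to show why JSON won over
# many API providers: less ceremony, and it maps directly onto the
# data structures client code already uses.

product = {"id": 42, "name": "History of APIs", "price": 9.99}

# JSON: one call, and the result parses straight back into a dict.
as_json = json.dumps(product)

# XML: the data has to be mapped in and out of an element tree,
# and everything becomes a string along the way.
root = ET.Element("product")
for key, value in product.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```

For browser-based consumers the asymmetry was even starker, since the JSON version deserializes natively in JavaScript while the XML version needs a parsing layer.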
The Ongoing Evolution of Online Commerce
Over the first decade of the 21st century, online commerce APIs were still evolving, with essential elements like products, sales, auctions, shopping carts, and payments playing a central role. Many API providers would come and go, but only a handful delivered an approach to APIs precise enough to elevate their offering, making an impact on how we approach commerce APIs, as well as almost any other digital resource.
By September 2011, startups and investors had read the writing on the wall, and the proven "API as a product" model began being applied to disrupt the payment industry, with the launch of Stripe. Like Twilio, Stripe was built for developers, and did everything right--from API design to documentation, support, and pricing that worked for web and mobile application developers integrating payments into their business and consumer solutions.
Right along with compute, storage, location, and messaging, payments are an essential resource for any commercial web or mobile application, and a simply priced, easy-to-adopt payment API proved an instant hit with developers. I considered adding Authorize.net and Paypal to the history of APIs, but in my opinion it took 10 years for digital commerce to evolve via APIs, with providers like Amazon and eBay, and the API-as-a-product business model established by Twilio, before a standalone payment provider like Stripe could exist and make the impact that it has.
Payments are a mission critical resource for developers, and will continue to be in the future. Stripe continues to set the bar for how you do payment APIs, as well as how you do APIs in general, and is held up as a model by the entire API industry. Stripe continues to do one thing (payments), and do it well, setting the tone for what APIs can do to disrupt a well established industry like online payments.
Hardening Security Practices
As more companies looked to open up their digital assets via web APIs, the need to harden security practices emerged, but at the same time these practices needed to reflect the simple nature of the modern web API that developers expected. Traditional enterprise approaches to identity and access management would not always fly within web API implementations, with the majority of providers opting for basic auth or API keys when securing their APIs, but two approaches to securing APIs have evolved along the way.
In 2006, a movement was born out of Twitter and the social bookmarking site Ma.gnolia, out of frustration that there were no existing standards for platforms, developers, and users to manage API access and resource delegation. By 2007, a small group gathered to draft a proposal for a new protocol, resulting in what became the OAuth Core 1.0 draft, which then evolved into an OAuth working group within the Internet Engineering Task Force (IETF).
By October 2012, OAuth 2.0 had emerged as the next evolution of the protocol, focusing on client developer simplicity while also providing specific authorization flows for web applications, desktop applications, mobile phones, and devices. OAuth 2.0 has seen wide adoption by leading API providers, quickly establishing it as one of the first major open standards that the web API community would embrace.
While OAuth can be celebrated as a security standard for the API space, the evolution hasn't been without its problems. In July 2012, one of the original OAuth champions Eran Hammer resigned his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, removing his name from the specification, citing a conflict between the open web and enterprise cultures, stating that the IETF as a community is "all about enterprise use cases", and "not capable of simple." What is now offered is a blueprint for an authorization protocol, he says, and "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions."
While OAuth 2.0 is not the perfect solution for delegating access to resources via APIs, it is the best we have at the moment. The approach provides a viable solution that allows API platform providers to secure resources in a way that enables developers to easily access them, with the involvement of end users. Even if OAuth 2.0 has become a tool of the enterprise, it provides meaningful delegation, and has enabled the space to safely and securely expand and integrate at a steady pace for the last few years.
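As a rough sketch of what delegation looks like from the client's side, here is the token request a client POSTs in an OAuth 2.0 authorization code flow, after the user approves access and the client receives a short-lived code. All endpoint and credential values below are hypothetical:

```python
from urllib.parse import urlencode

# Sketch of the second leg of an OAuth 2.0 authorization code flow:
# the client exchanges the one-time code from the redirect for an
# access token. Endpoint and credential values are hypothetical.

def build_token_request(code):
    token_url = "https://provider.example.com/oauth/token"
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,  # one-time code returned via the redirect
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    })
    return token_url, body

url, body = build_token_request("abc123")
print(url)
print(body)  # POSTed as application/x-www-form-urlencoded
```

The key property is that the end user's password never passes through the client application; the client only ever holds the delegated token the provider issues in response.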
JSON Web Tokens (JWT)
At the same time OAuth has been maturing, another industry standard (RFC 7519) evolved, called JSON Web Tokens, providing an open way to securely represent online exchanges between two parties. The tokens are designed to be compact, URL-safe, and usable in single sign-on (SSO) contexts on the web. JWT claims are typically used to pass the identity of authenticated users between an identity provider and a service provider, or to carry any other types of claims as part of regular business activity.
Work on JWT began in September 2010, with the first draft becoming available in July 2011. A growing number of API providers are using JWT as a middle ground between simple API keys and the sometimes overwhelming OAuth implementations that can create friction for developers.
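The format itself is simple enough to sketch with nothing but the standard library. This is only an illustration of the HS256 token structure, not something to use in place of a vetted library:

```python
import base64
import hashlib
import hmac
import json

# A minimal HS256 JWT encoder: a token is two base64url-encoded JSON
# segments (header and claims) plus an HMAC signature, joined by dots.

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + signature).decode()

token = make_jwt({"sub": "user-123", "admin": False}, b"shared-secret")
print(token)  # header.payload.signature
```

Because the claims are plainly readable by anyone who splits the token, the signature is what makes it trustworthy; the shared secret proves which party issued it.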
I think JWT has the potential to flourish outside of the challenges OAuth has faced from the enterprise, at least for a couple more years, until it sees the same amount of adoption as OAuth.
Both OAuth and JWT have helped round out the API security stack, where along with Basic Auth and API keys, API providers now have a robust set of tools that allow them to secure the valuable resources being made available via web APIs.
The Now Infamous Yegge Rant
Echoing the API history Amazon had been putting down, in 2011 there was an accidental post from a Google employee about Google+. The internal rant was accidentally shared publicly, and provides some insight into how Google approached APIs for its new Google+ platform, as well as insight into how Amazon adopted an internal service oriented architecture (SOA).
The insight about how Google approached the API for Google+ is interesting, but what is far more interesting is the insight the Google engineer who posted the rant, Steve Yegge, provides about his time working at Amazon, before he was an engineer with Google.
During his six years at Amazon, he witnessed the transformation of the company from a bookseller to the almost $1B Infrastructure as a Service (IaaS) and cloud computing leader. As Yegge recalls, one day Jeff Bezos issued a mandate, sometime back around 2002 (give or take a year):
- All teams will henceforth expose their data and functionality through service interfaces.
- Teams must communicate with each other through these interfaces.
- There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
- It doesn't matter what technology they use.
- All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
The mandate closed with:
Anyone who doesn't do this will be fired. Thank you; have a nice day!
Everyone got to work, and over the next couple of years, Amazon transformed itself internally into a service-oriented architecture (SOA), learning a tremendous amount along the way. While this story has been shown to be more myth than reality, I think its real impact lies in how the myth has been heard and passed around the API sector, told and retold by IT, developer, and business people around the globe.
This story came at a time when many companies were struggling with the scary possibility of operating public APIs, and has allowed them to refocus on much of the value that APIs bring to the table internally. The Yegge rant provides an important story that companies can tell themselves as they begin their API journey, keeping things internal in the beginning, but with hopes that someday they can go public and find the success that Amazon has with its API platform.
Open Source Software and Now APIs With Github
In tandem with the evolution of the cloud, another company was being born that would make yet another monumental impact across the API space. In late 2007, Tom Preston-Werner and Chris Wanstrath came together to improve on the open source distributed version control system, Git. The pair were looking to improve on the existing Git experience and develop a hub for coders, and by mid-January of 2008, after three months of nights and weekends, they launched Github into private beta.
Along with simplification of using Git at the heart, and the social network that brought together coders, Github also has leveraged APIs all along the way. The Github API provides developers with access to all aspects of the Github platform, providing the ability to manage the software development life cycle, while also building community along the way.
As the potential of Github in software development was being realized, Github did another seemingly simple thing, which would further expand its use across the API sector, by launching Github Pages. The new solution would allow project websites to be deployed alongside Github master repos, something that would tweak the meaning of exactly what a repo could be used for.
API providers would begin using Github Pages to host their API developer portals, API SDKs, and code samples, and even to publish event presentations and manage the publishing of open data. Github has emerged as the platform of choice in the API space, and is used at almost every stop along the API life cycle, leveraging Git and a robust API to orchestrate and automate the API driven backend of the latest wave of web, mobile, and device-based solutions.
Changing The Way We Communicate Around Our APIs With Swagger And The Swagger UI
In 2010 and 2011, a new way to approach the old SOA practice of describing services emerged, called Swagger. The new API definition format was developed by Tony Tam (@fehguy) to meet Wordnik's API needs, helping manage the evolution of its dictionary API. Swagger gives API providers a new way to describe the surface area of any web API, allowing for the generation of documentation, code libraries, and many other things developers need to understand what an API does and how to put it to work.
Swagger is often known for its tooling for deploying a new type of API documentation, one that is interactive, allowing developers to make API requests and see the details of the request and the results before they ever write any client code. However, the interactive API documentation was just the beginning, and the API definition format would eventually be applied to almost every stop along the API lifecycle.
Swagger has matured to version 2.0, and has become the central contract that defines the arrangement between API provider and consumer. In 2015, Swagger was acquired by SmartBear Software, with the specification put into the Linux Foundation. In 2016, the specification re-emerged as the OpenAPI Spec, and is now governed by the Open API Initiative (OAI), the organization formed as part of the move to the Linux Foundation. Even amidst all the change, the OpenAPI Spec is still rapidly expanding in use across the web, providing a machine readable way for API providers, consumers, and even business stakeholders to describe the valuable API resources being exchanged as part of the API economy.
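A minimal, entirely hypothetical Swagger 2.0 definition shows how little is needed to describe an API in a machine readable way; even a contract this small is enough for tooling to render interactive docs or generate client code:

```python
import json

# A tiny Swagger 2.0 definition for a made-up "Hello API", built as a
# plain dict and printed as the JSON a toolchain would consume.

spec = {
    "swagger": "2.0",
    "info": {"title": "Hello API", "version": "1.0.0"},
    "paths": {
        "/hello": {
            "get": {
                "summary": "Return a greeting",
                "responses": {
                    "200": {"description": "A greeting message"},
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

The same document can drive documentation, mock servers, client SDK generation, and testing, which is why the definition, not the docs, became the central contract.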
Apiary Teaches Us To Be API Design First
Swagger gave us a way to describe our APIs, but many API providers still apply it after an API has been developed--until one company came along and helped move the conversation earlier in the API life cycle. The Apiary.io team used their own API definition format, called API Blueprint, not just to describe and document an API, but also to allow designers to mock it before anyone gets their hands dirty writing code. This API design first approach to API development has had a profound effect on how we look at the API life cycle, allowing us to make mistakes and bring in key stakeholders before things ever get costly.
What Apiary brought to the table wasn't just about making it easier to design, mock, develop, and document our APIs; they pushed the space to open up the API conversation with consumers and key business stakeholders much earlier in the life cycle, before things went down a bad road and were set in stone. This process allows everyone involved to get to know the resources being made available via APIs, and design a solution that better matches how the resources will be experienced, not just how they are stored.
API design first has become a mantra for many companies, and API service providers. While it isn't truly a reality for all who recite the phrase, it provides a healthy focus for API designers, architects, and business stakeholders, at varied stages in their API journey. Many companies will need this focus to get them through many of the challenges they face along the way, as they try to operate in this new online, API driven web, mobile, and device driven world.
A Glimpse At The Internet of Things From Fitbit
By 2009 and 2010, it was becoming clear that APIs could be used to deliver the resources we need for the increasing number of mobile phones that were becoming ubiquitous. Amidst this rapid growth of mobile, another company popped up that would see the potential of connecting devices to the Internet, with the birth of the Fitbit. The new fitness and health tracking device would allow users to track their activity, health, and other key wellness indicators, which could then be connected to our mobile phones, helping plant the seeds for what we now call the Internet of Things (IoT).
In February 2011 Fitbit quietly launched their API, providing connectivity to the data that was uploaded to the Internet from the tracking device, via our mobile phones. Two months after Fitbit launched their API, they announced the first wave of partners who had integrated with the fitness and health device. This partner potential is why companies of all shapes and sizes were beginning to deploy APIs, allowing for 3rd party companies to tap into the growing number of valuable resources being made available online.
While Fitbit is not responsible for the Internet of Things, as devices being connected to the Internet via wifi and bluetooth is nothing new, they do provide a solid example of IoT in action, one that is publicly traded, and has seen both consumer, and commercial success. Whether you call it the quantified self, wearables, or Internet of Things, Fitbit has captured the imagination when it comes to Internet connected devices.
Integration Platform as a Service (iPaaS)
As developers were realizing the potential of web APIs, a wave of new companies was also emerging that saw the potential for non-developers to put APIs to work in the everyday business and consumer world. In November 2011, Zapier began publishing simple connectors between popular cloud platforms that would allow anyone to put APIs to work in managing their increasingly online world.
By June of 2015, Zapier had launched its third-party developer platform, which allowed API providers to build their own connectors. The connectivity that companies like Zapier offer reflects older, more enterprise approaches like Extract, Transform, and Load (ETL), which helped businesses move data and information around on their networks. The big difference with this new breed of provider is that the connectors employ simple icons representing popular API driven services, and focus on the API driven cloud, moving beyond the company network.
There are more than 50 iPaaS providers that I track on, of all shapes and sizes, continuing to legitimize the concept, but not all pay it forward by providing an API as well--a significant part of making the concept work. While iPaaS helps smooth over some of the more difficult aspects of API integration, it shouldn't hide them altogether, and eliminate the possibility of API access by consumers.
iPaaS isn't just about moving data and content from point A to point B, it is about aggregating, syncing, and migrating valuable API driven resources. As the number of APIs grows, the number of iPaaS providers also increases, providing a wealth of API driven resources that any business user, or even developer, can put to work for them.
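To make the pattern concrete, here is a rough sketch of what a single iPaaS style connector does under the hood: poll a trigger service for new items, map their fields, and hand them to an action service. The service calls are injected as plain functions, and all names and field mappings here are hypothetical, not any specific provider's API:

```python
# A minimal sketch of the iPaaS pattern: poll a "trigger" API for new
# items, transform each one, and push it to an "action" API.

def run_zap(fetch_new_items, send_to_action, seen_ids):
    """Move any unseen items from the trigger service to the action service."""
    delivered = []
    for item in fetch_new_items():
        if item["id"] in seen_ids:
            continue  # already processed on a previous poll
        # Transform: map the trigger service's fields onto the action's fields
        payload = {"title": item["subject"], "body": item["text"]}
        send_to_action(payload)
        seen_ids.add(item["id"])
        delivered.append(payload)
    return delivered
```

Real providers layer authentication, retries, and webhooks on top, but the core loop of poll, transform, deliver is this simple, which is exactly why non-developers can wire it together with icons.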
Obama Mandates Federal Government To Go Machine Readable By Default
As a follow-up to the Executive Order 13571 issued on April 27, 2011, requiring executive departments and agencies to identify ways to use innovative technologies to streamline their delivery of services to lower costs, decrease service delivery times, and improve the customer experience--Barack Obama has directed federal agencies to deploy Web APIs.
The White House CIO released a strategy, entitled "Digital Government: Building a 21st Century Platform to Better Serve the American People", providing federal agencies with a 12-month plan that focuses on:
- Enabling more efficient and coordinated digital services delivery
- Encouraging agencies to deliver information in new ways that fully utilize the power and potential of mobile and web-based technologies
- Requiring agencies to establish central online resources for outside developers and to adopt new standards for making applicable Government information open and machine-readable by default
- Requiring agencies to use web performance analytics and customer satisfaction measurement tools
While the mandate itself didn't do much to move the open data and API needle in the federal government, it did mobilize many people who were looking to make change in government. In addition to the mandate, a wave of open data and API savvy CTOs and CIOs have led the charge at the White House, and groups like 18F have taken up the cause of open data and APIs across the federal government.
At the same time this change was happening at the federal level, open data and APIs were also making change on the ground in city, state, and county governments across the country. While not all early visions of open data have been realized, the Obama mandate marked a major milestone in how our government works, thanks in part to the concept of the web API.
Setting A Very Negative Precedent In The Oracle v Google API Copyright Case
Even with all the gains the API industry has made in the last 15 years, it hasn't been without its major potholes, speed bumps, toll booths, detours, and disruptions. Just as the API space was seeing some amazing contributions and growth, a chill was sent across the industry by a court case brought by Oracle against Google, which claimed that the Java API had been copied by Google, and was something protected under copyright.
In May 2012, a jury in the case found that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable. However, by 2014, the Federal Circuit court partially reversed the district ruling, ruling in Oracle's favor that the APIs were indeed protected under copyright. A petition was submitted to the United States Supreme Court on June 29, 2015, but was denied, sending the remaining issue of fair use back down to the district court.
While the Java API is a different breed of API, than the web APIs that have gained momentum, and there remains the fair use discussion, the court case has sent shockwaves across the API sector. There is a lot of uncertainty involved with companies doing APIs, and the API copyright precedent adds yet another concern for both API providers, and consumers, adding unnecessary strain to the space. Web APIs flourish when they are used as an external R&D lab between a company, its partners, and the public, and the dark cloud of API copyright threatens this balance.
Twitter Sends All The Wrong Signals To Its Community in 2012
At the same time we were dealing with the fallout from the Oracle v Google case, one of the poster children of the modern API movement sent a series of chilling messages, and veiled threats, to its then fast growing API ecosystem. In June 2012 Twitter published a post explaining the need for delivering a consistent Twitter experience for users, followed up by a very ominous post in August of 2012 talking about changes coming down the line for the Twitter API.
While Twitter was just tightening up its control over its brand, applications, and community, something all API providers face, the way it approached the situation sent such a negative vibe to its community that the developers revolted. Twitter made it clear that it was in competition with its API ecosystem, and was trying to take back control over some of the more successful areas of development that had been occurring within the ecosystem, areas already being served by businesses built by API developers.
Everything we know of as Twitter was built by its developer ecosystem, a relationship that was very public, and encouraged by Twitter, until the company no longer needed the free labor, and took on a significant amount of funding, requiring it to shift course. Twitter needed to generate revenue, and made it very clear that it was taking back the most successful areas of the platform, something that had a very chilling effect on the API community, and something the company has never recovered from.
Even though Twitter co-founder Jack Dorsey reassured developers that Twitter cares about its developers, as he retook the reins of his stumbling company in 2015, the trust had already been broken. This proves that trust is one of the most important aspects of API platform operations, and once it has been broken, it is almost impossible to recover.
As the momentum in the API space grew in 2011 and 2012, the traction API service providers were seeing caught the attention of some of the more established tech giants who have dominated the tech sector for decades. The well defined discipline of API management, set into motion by Mashery, who is showcased above, had ripened to the point of making these companies very attractive acquisition targets for the enterprise, and we saw a handful of acquisitions ring out across the space in 2013.
In late 2012 we saw the first acquisition, of Vordel by Axway, which set off a series of high profile API management provider acquisitions in 2013, beginning with Mashery being purchased by Intel, then Layer 7 by CA Technologies, and Apiphany by Microsoft later in the year. The acquisitions sent the signal to markets that the API space had come of age, the space was maturing, and the big boys were taking notice.
In a little less than a decade, API management had grown up to be a legitimate business, and proven to be one that would attract the attention of the biggest tech companies in the space. While the acquisitions have legitimized the value of API management solutions, it hasn't all been good, as the attention from the enterprise has also meant a shift in focus by the investors of popular API service providers, looking for the big pay-off, and shifting away from many of the priorities that have made APIs successful--operating on the open Internet, as opposed to behind a corporate firewall.
Even with all the acquisitions in 2013, the biggest milestone for the API management space was the IPO of one of the API pioneers, Apigee. In May of 2015, Apigee Corp., the developer of an API-based software platform, filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission (SEC) relating to a proposed initial public offering of shares of its common stock.
The API management acquisitions were validating, but one of the leading companies going public was a significant milestone, marking that the space was indeed a real thing (we hope), and potentially something that mainstream markets now acknowledged. In the tech sector we are all surrounded by like-minded folks who are usually believers by default, while out in the real world there are large sectors of business that are much more skeptical about what is relevant.
While the Apigee IPO performance has been mild at best, it still legitimizes not just API management, but also brings validation to the wider concept that the web can be used as a driver of real world business, not just mashups and online play. In 2015, after fifteen years of evolution, web APIs now have a representative on Wall Street, setting the stage for wider growth in many established industries like banking, insurance, health care, and beyond.
In early 2014, Stewart Butterfield, one of the original founders behind the pioneering photo sharing platform included in this history, launched a team messaging solution named Slack. After Butterfield left Yahoo (which had acquired Flickr) in 2008, he began building a game called Glitch, which, while enjoying a small cult following, was not a commercial success, and by 2012 had to shut its doors and lay off its staff.
One byproduct of the gaming platform was a messaging core they had built, which, after shutting down, they spun off into a separate product they continued to work on throughout 2013. Once released in 2014, the platform was an immediate hit with the VC and Silicon Valley community, and quickly became a huge messaging success. Equally as important, via its API the platform spawned a huge number of successful integrations, as well as a fast moving bot ecosystem.
In 2016 Slack has become the epicenter of a chat and messaging bot evolution that originally focused on the Twitter ecosystem, but has become more about business productivity and other business solutions, injected into the workplace team environment via the popular messaging platform. This bot movement has spawned a whole new wave of interest from VCs, and while the concept is nothing new, Slack, Twitter, and other API and messaging driven platforms are giving rise to this new bot as an API client environment.
Amazon API Gateway
In 2015, AWS continued to define the API space, and demonstrate their dominance, by releasing the Amazon API Gateway, which allows any AWS customer to design, deploy, manage, and monitor their APIs via their existing AWS cloud infrastructure. While many cried that this was a killer of the existing API management service providers, now that time has passed, it seems more like a natural progression of the API space, as well as telling of Amazon's role in the space.
As the AWS API Gateway press release information states:
“create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application”
The new gateway will take all that existing infrastructure you have accumulated (in the cloud), and it:
“..handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.”
Distilling down the lessons from the last five years, and selling it:
“With the proliferation of mobile devices and the rise in the Internet of Things (IoT), it is increasingly common to make backend systems and data accessible to applications through APIs.”
To me, the release of the Amazon API Gateway is a pretty significant milestone in the evolution of what is API. By 2006 the web had matured, the Internet was being used for much more than just consumption, and the API community was realizing that we could deploy vital digital resources using the Internet as a vehicle. Almost 10 years later, Amazon understands the opportunity in enabling you to do this for yourself--helping you either embark on, or speed up, your API journey, which they've been on for over 15 years.
Allowing you to manage any of your digital assets as an API, using AWS API Gateway, is just the beginning of the expertise that Amazon is packaging up for all of us in the latest release.
Delivering On Promise Of Voice Enablement With The Alexa Voice Service & Skills Kit
Joining in on the wider conversation around the Internet of Things, Amazon has released several IoT focused solutions, but none have made an impact on the space, and potentially the future of APIs, more than the Alexa Voice Service. I hesitate to include this as a milestone in what I consider to be the history of APIs, but what Amazon is doing is already making significant waves when it comes to how APIs are consumed.
In the summer of 2015, Amazon introduced their voice enabled device, the Amazon Echo, which was supported by a suite of APIs they bundled under the Alexa Skills Kit (ASK), and now also the Alexa Voice Service (AVS). Much like the rest of the IoT platforms, the Amazon Echo still has to prove itself with real world usage, but the skills kit and voice service have emerged at just the right time in the evolution of mobile and voice, as well as complementing the number of API resources available.
Just as messaging platforms like Slack are providing a potential new way to reach consumers, the Alexa Voice Service is providing a new way to access valuable API driven resources. More importantly, I feel the concept of the "Skills Kit" is providing an entirely new way for API providers to think about how they expose their valuable resources, making them available in ways that are more meaningful to home and business users. Only time will tell if Alexa becomes part of the overall API consciousness, but after less than a year of operation, I am seeing signs of the platform being a very important milestone in the evolution of the space.
Understanding history is critical to understanding where we are going. Calling this document the "history" of web APIs seems kind of silly. We are actually talking about a span of 16 years. But there are so many important lessons to be learned from the approach of these API pioneers, and the marks they've left on the space, to the point we can't ignore this history. If the technologists had their way, APIs would have purely been successful in the period of commerce, but with the radical innovation of companies like Amazon, Twitter, Twilio, Slack, and others, we now understand that APIs needed several essential ingredients to succeed: commerce, social, cloud, messaging, voice, and more.
Of course, all of this has to make money, but APIs need to be scalable, while also delivering the meaningful tools, services, and resources that are important to end users, otherwise none of this matters. As we stand solidly in the mobile period of API evolution, looking at the evolution to a period that will be more about devices and the Internet of Things (IoT), we need to understand our own history, and how we've gotten here, to make sure we make the right decisions for what is next.
Web APIs are about delivering valuable, meaningful, scalable, and distributed resources across the World Wide Web. While Silicon Valley keeps pushing forward with the next generation of technology solutions, we need to make sure that we know our past.
As I was reviewing patent #20160070605: Adaptable Application Programming Interfaces And Specification Of Same, from yet another person I know, after I pick my head up off the desk, I begin thinking about all of the unintended consequences of API patents. Here is the abstract from this work of beauty:
Aspects of the disclosure relate to defining and/or specifying an application programming interface (API) between a client and a computing device (such as a server) in a manner that the client, the computing device, or both, can evolve independently while preserving inter-operability.
This is something that WE ALL should be building into our clients, and is fundamental to all this API shit even working at scale. This particular patent is owned by Comcast, so I'm assuming it will be wielded in the courtroom, and leveraged in back room net neutrality discussions among cable and telco giants. There really aren't a lot of unintended consequences involved when two or three 1,000 lb gorillas battle it out. The little people always suffer in these battles, and there really isn't much I can do--I will leave this to my friends in the federal government to take on.
Where I feel like I should be speaking up more is when it comes to the patents being filed by my startup friends, like the hypermedia patents I talked about a few months ago. For weeks after I published that story, my friends came out of the woodwork to tell me this is what you have to do in this game, and they are well meaning, and would never ever use the patent for anything bad. Everyone I talked to I respect, and do believe they mean what they say, but it's the unintended consequences that keep me up at night.
What happens after your startup is acquired, and you have cashed out, and are long gone? Are you going to follow the company who acquired your startup, track their litigation, and speak up? Maybe spend some of your fortune to protect the little guy or gal? What happens when your startup fails, and your investors part out your company, and all of its assets (cough cough, patents)--what recourse do you have here? Your VCs will assume your friendly stance on patent usage, right? Right.
I know many of you like to brush me off as being an optimistic hippie, naive of how things work (all without actually looking at my background). My response to that is you should probably look around a little, understand the beast you work for and the damage they do in the world. I think you should examine the decisions you make when getting into bed with certain money men, and realize the machine you've become a cog in the wheel of. Only then will you realize that the stories you tell me, and yourself, to help you sleep at night, are fairy tales handed to you by the machine that will consume you.
I spend a lot of time gathering, creating, and organizing machine readable OpenAPI Specs, as part of my API Stack, and personal API stack work. I'm not insane enough to think I can create OpenAPI Specs for all of the public APIs out there, I'm just trying to tip the scales regarding how many API definitions are out there, to increase the usage and value of open tooling, which in turn will increase the number of people who will create their own OpenAPI Specs. (just kind of sort of insane)
As I do this work, I realize how easily I can obsess over unnecessary, and meaningless metrics and goals in the tech space. Two things have stood out for me as I do this work:
- Overall Number of API Specs - Much like our obsession over the completely meaningless and incorrect number of APIs that are in the ProgrammableWeb directory, the number of OpenAPI Specs means nothing. At some arbitrary point, things will change and people will get on board--it is already happening.
- The Complete OpenAPI Spec - Having a complete OpenAPI Spec that contains ALL endpoints, parameters, and underlying data schema is a myth, and really doesn't matter. You only need as much as you are going to use. Maybe to API providers this matters, but to consumers, you only need what you need. The trick is figuring out what this is, without dumping everything on a consumer.
I won't stop what I am doing. As an experienced technologist I know when I am simply a tool in a larger game, and I perform as the automaton that is expected of me. However, I will not obsess over the completeness of the API specs, or ever showcase the overall number of OpenAPI Specs that I have--I'll just keep on working.
In this data driven world, it is easy to get caught up in focusing on mythical, meaningless data points. I'm not saying we shouldn't be doing this, because I often use these data points to motivate and push my work forward, but I think the lesson for me is to know when an obsession over some arbitrary number becomes unnecessary or unhealthy. Another important aspect of this realization for me is separating out when I personally set these numbers, from when they are handed down from the wider tech space and I had no part in helping craft them--this is when I think the chance of moving into the unhealthy realm is greatest.
I get these regular updates from FullContact when there is new information available about the contacts I have added to my contact list of people I care about. Anytime there is a new photo, social network, or other element of their contact information updated, I get notified, and I can choose to update it in my CRM.
Would someone go ahead and create this, but for OpenAPI Specifications? All you have to do is use Github, and build your own index of the websites of leading companies who are publishing APIs (Common Crawl is a good start), and begin keeping track of ALL of the OpenAPI Specs and API Blueprints that are increasingly spread across the web. Then you will need to develop some sort of API definition diff solution (which I've talked about before), and then send me any changes or updates that I do not have in my directory of API definitions--which you know, because you have already indexed it.
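The API definition diff piece is the most mechanical part. As a rough illustration, assuming the specs have already been fetched and parsed into dictionaries, a diff could start as simply as comparing the sets of operations between two crawls (the function name and spec shapes here are hypothetical, just following the OpenAPI `paths` structure):

```python
# A sketch of an API definition diff: given two parsed OpenAPI
# documents, report which operations were added or removed.

def diff_openapi(old_spec, new_spec):
    """Return (added, removed) sets of 'METHOD /path' operation keys."""
    def operations(spec):
        ops = set()
        for path, methods in spec.get("paths", {}).items():
            for method in methods:
                ops.add(f"{method.upper()} {path}")
        return ops

    old_ops, new_ops = operations(old_spec), operations(new_spec)
    return new_ops - old_ops, old_ops - new_ops
```

A real service would also diff parameters, schemas, and descriptions, but even a path-level diff like this is enough to power the kind of change notifications described above.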
You can offer premium services, like private repositories, and pathways to the wealth of API definition driven tooling out there. Again, this is something I could very well do, except I am one person, and not really interested in doing anything beyond all the work I already do. Especially if it involves taking VC money, scaling, and being any closer to the machine than I already am. So if you could get to work on this for me, and help solve this growing need in the API space, that would be great! Yeah. Also make sure to cut me in for 10% of all the $$$ you are going to make.
I play with a lot of services that are looking to provide solutions to the API industry, and I'm always looking to better understand what leading API services providers are using to deploy their warez. I was test driving the testing and monitoring solutions from Opsee this week, and separate from the solutions they provide (which I'll talk about later), I thought the deployment of their API testing and monitoring solutions was worthy of talking about all by itself.
Opsee deploys as a micro-instance within my AWS stack, and gets to work testing and monitoring the APIs that I direct it to, providing a very precise, and effective way of doing monitoring.
I do not think this approach will work in all scenarios, for all API providers, but I think packaging up the services, so that API providers can deploy within their stack, and run within the cloud or on-premise environment they choose, is a potentially very powerful formula.
I have written before about offering up your APIs as wholesale or private label solutions, and I would categorize what Opsee is doing as offering up your API industry services as wholesale or private label solutions. Many companies will do just fine consuming your SaaS or publicly available API driven solution, but more sophisticated operations, and potentially regulated companies, are going to need a solution that will run within existing infrastructure, not outside the firewall.
I could see bandwidth and CPU intensive situations also benefiting from this approach. Opsee's way of doing things has gotten me thinking more about how we package up and deploy the services we are selling to API providers. Once Opsee was up and running in my stack, using a set of keys I set up and configured especially for it, it got to work monitoring the endpoints I told it to. I could see this approach also working as a locally available API, where I tell my systems to integrate and work with an API made available from the deployed instance as well--either permanently or on a more ephemeral time frame.
There is lots to consider, but with the evolution in container tech, I could see this approach being applied in a lot of different ways, allowing companies to pick exactly the API services they need (a la carte), and deploy them exactly where they are needed, eliminating the need to depend on services outside the firewall.
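To illustrate the pattern (not Opsee's actual implementation, which I have no visibility into), a self-hosted monitoring agent boils down to a loop over the endpoints you point it at, recording up or down. In this sketch the HTTP fetch is injected as a function, so the check logic itself has no network dependencies:

```python
# A minimal sketch of a self-hosted API monitoring check. The fetch
# function would normally wrap an HTTP client, e.g.:
#   lambda u: requests.get(u, timeout=5).status_code

def check_endpoints(urls, fetch):
    """Return a dict of url -> 'up' or 'down' based on HTTP status."""
    results = {}
    for url in urls:
        try:
            status = fetch(url)
            results[url] = "up" if 200 <= status < 400 else "down"
        except Exception:
            # Connection failures, timeouts, etc. all count as down
            results[url] = "down"
    return results
```

Packaged into a container and run inside your own AWS account, a loop like this only ever needs outbound access to your own endpoints, which is the appeal for regulated, behind-the-firewall environments.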
As I'm working through my morning monitoring of the API space, I'm processing stories about the availability of valuable resources, like the House Rules Committee data being released in XML formats, and ExoMol, the molecular line lists database used in simulation of atmospheric models of exoplanets, brown dwarfs, and cool stars.
I feel fortunate to live in a time where the world is opening up such valuable resources, making them available online--available for anyone to use, remix, improve, and make better. My faith in APIs doesn't come from any single API, it comes from the possibilities that will exist when individuals, companies, organizations, institutions, and government agencies all publish valuable resources using APIs.
While there is still a lot of work ahead, I'm seeing the early signs of this reality emerging across my API monitoring in 2016. I'm coming across so many extremely valuable, openly licensed, machine readable resources that can be used in some very interesting ways. The trick now is how we expose the most meaningful parts of these resources, and make sure they get found by the people who will actually put them to use. As the number of APIs increases, this is something that is going to get harder and harder, and the need for value even more critical.
Another dimension to this discussion is the growing number of channels we need to make our API resources available in. Web and mobile are still king when it comes to consuming APIs, but devices, messaging, voice, bots, and other channels are quickly growing in use. The next wave of API evangelism is going to require that the right people (domain experts) are available to help expose the most meaningful skills that our APIs possess, via this growing number of fast moving channels.
An example of this in action, using one of the valuable resources above, could involve making the Congressional activity that is most relevant and important to me available in my Slack channel (or messaging app of choice), or even available via my Amazon Echo, using Alexa Voice Skills. How do we start carving out meaningful skills from government, and other open data, using simple APIs? How do we use these to educate individuals, either as average citizens, or maybe in professional or commercial scenarios?
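As a rough sketch of the Slack side of that idea: Slack's incoming webhooks accept a simple JSON payload with a "text" field, so publishing a bill update to a channel is mostly a formatting exercise. The bill fields here are hypothetical, and the actual POST to your webhook URL is left as a comment:

```python
# Format a piece of open government data as a Slack incoming-webhook
# payload. Slack's incoming webhooks accept JSON like {"text": "..."}.
import json

def bill_update_payload(bill):
    """Build a Slack incoming-webhook message for a bill status change."""
    text = f"{bill['number']}: {bill['title']} is now '{bill['status']}'"
    return json.dumps({"text": text})

# The resulting JSON would be POSTed to your webhook URL, e.g.:
# requests.post(webhook_url, data=bill_update_payload(bill),
#               headers={"Content-Type": "application/json"})
```

The same formatting function could just as easily feed an Alexa skill response, which is the point: once the open data is behind a simple API, each new channel is a thin layer on top.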
We have many, many years ahead of us, helping individuals, companies, institutions, and government understand why they need to be exposing valuable data, content, and other digital resources via simple web APIs. However, alongside these efforts, we are going to need armies of other individuals who have the ability to identify valuable resources, and help craft simple, usable, and meaningful endpoints, that can be added as skills within the web, mobile, device, messaging, bot, and voice apps of the future.
I have self-censored stories about microservices, because I have felt the topic is as charged as linked data, REST, and some parts of the hypermedia discussion. Meaning there are many champions of the approach who insist on telling me, and other folks, how WRONG we are in our approach, as opposed to helping us work through our understanding of exactly what microservices are, and how to do them well.
For me, tech layers that feel like this are usually very tech saturated, with the usual cadre of tech bros leading the charge, often on behalf of a specific vendor, or specific set of vendor solutions. Even with this reality, I've read a number of very smart posts and white papers on microservices in the last year, outlining various approaches to designing, engineering, and orchestrating your business using the "microservices diet".
Much of what I read nails the technology in some very logical and sensible ways--crafted by some people with mad skills when it comes to making sense of very large companies and software ecosystems. Even with all of this lofty thinking, I'm seeing one common element missing in most of the microservices approaches I have digested--the human element.
I hear a lot of discussion about technical approaches to unwinding the bits and bytes of technical debt, but very little guidance for anyone on how to unwind the human aspect of all of this, acknowledging the years of business and political decisions that contributed to the technical debt. It's one thing to start breaking apart your databases and systems, but it's another thing to start decoupling how leadership invests in technology, the purchasing decisions that have been made, and the politics that surround these existing legacy systems.
I don't know about you, but every legacy system I've ever come across has almost always had a gatekeeper, an individual or group of individuals who will fight you to the death to defend their budget, and what they know about tech (or do not know). I've encountered systems that have a dedicated budget, which only exists because the system is legacy, and with that gone, the money goes away too--sell me a microservices solution for this scenario!
Another dimension to this discussion is that investors in microservice solutions are not interested in their money being used in this area. It just isn't sexy to be spending money on dealing with corporate politics, unwinding years of piss poor business decisions, and educating and training the average business user around Internet technology. If you do not unwind these very human led, politically charged business environments, you will never unwind the systems that exist within their tractor beams. Never. I don't care how much YOU believe.
In the end, I'm not trying to make you feel like you are going to fail. My goal is to encourage more investment in this area by the microservice pundits, vendors providing solutions to the space, and VC's who are pouring money into these solutions. Many of the young, energetic folks at the helm of startups do not fully grasp the human side of corporate operations, and the potential quagmire that exists on the ground in front of them.
I am hoping that a handful of service providers out there can lower the rhetoric around their services and tooling, so that expectations get set at more realistic levels. Otherwise the push-back against the first couple failed waves of microservice implementations will become impenetrable, and blow any chance of making it work.
I have looked at way more Bots than I should have in the last couple days, and I'm beginning to see similar patterns emerging across bot implementations, in sync with what I shared as part of my advice to API service providers. After you look at hundreds of APIs, and now a couple hundred bot implementations, you really begin to see what some of the common building blocks of the successful bot implementations are:
- Domain - Having a domain dedicated to the bot, its operations, and the community around it.
- Website - Simple, modern, and informative website for your folks to discover, and put your bot to work.
- Logo - Having a simple, modern, and often clever logo and overall branding for your bot's presence.
- Twitter - Have a genuine, active Twitter account that actually engages in conversations with the community.
- Github - Establish an active, and useful Github presence via a dedicated user account or organization.
- Blog - Provide a thoughtful, active, and informative blog that engages with a community, customers, and the public at large.
- API - You are using open APIs, and messaging formats to make your bot work, pay it forward with APIs and Webhooks.
- Monetization - You gotta make some money, how are you going to keep the lights on, feed and clothe your bot. ;-)
- Support - Every bot will need support, even if it is automated. Make sure you are there to answer questions via Twitter, and Github.
The most interesting bots I came across, whether they are Twitter, Slack, or Telegram bots, all had at least half of the items listed above. I'm guessing we are going to see a huge surge in the number of bots that are available, as well as the platforms on which bots can operate, and I am thinking that the bots that follow these patterns will be floating to the top of the churn.
I'm just getting started documenting the common building blocks in this recent surge in API driven bot activity. I'll keep adding the most interesting bot solutions to my research, and keep track of what I feel the best parts of the sector are. I'm curious to see where all of this goes. Not everything bot is capturing my attention right now, but I am seeing enough interesting approaches to using APIs for bot delivery, as well as providing the resources bots will need, to keep my attention.
I'm constantly working to hand-craft, scrape-craft, and auto-generate OpenAPI Specs, and APIs.json files for as many of the top APIs as I can. It is something Steve Willmott (@njyx), the CEO of 3Scale, always flicks me shit about, saying I shouldn't have to do that--API providers should be doing this! While I agree, I feel like we haven't reached the point where all providers understand the importance of having an up to date OpenAPI Spec available for their API (some even have them, and work to hide them!)
This is something the savvy API providers like SendGrid are doing, rolling around Github, making sure copies of their OpenAPI Specs are up to date.
Thanks Elmer, you da man. Now I just need to convince another 2K API providers of the importance of having up to date machine readable API definitions available, and actively maintaining them. Having your definitions up to date, and easy to find, increases the chance a developer will load them up in their favorite HTTP client like Postman, PAW, or API Garage.
It's going to happen Steve! I tell you! Some day, all the good API providers will maintain their own API definitions. You should always have an easy to find copy available as part of your API documentation, but you should also search for them on Github, and submit pull requests to keep them up to date. It is like having little machine readable business cards for your API, sitting on the desks of developers, except this business card allows them to submit a pull request to update it.
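To show what I mean by a machine readable business card, here is a rough sketch of what a developer can do the moment they have your definition in hand--enumerate every call your API offers, without reading a single page of docs. The spec below is a hypothetical, trimmed-down Swagger 2.0 fragment, not any real provider's definition:

```python
# A hypothetical, minimal Swagger 2.0 definition for an imaginary API.
SAMPLE_SPEC = {
    "swagger": "2.0",
    "info": {"title": "Example SMS API", "version": "1.0"},
    "paths": {
        "/messages": {
            "get": {"summary": "List messages"},
            "post": {"summary": "Send a message"},
        },
        "/messages/{id}": {
            "get": {"summary": "Get a single message"},
        },
    },
}

HTTP_VERBS = {"get", "post", "put", "patch", "delete", "head", "options"}

def list_endpoints(spec):
    """Return (VERB, path, summary) tuples for every operation in a spec."""
    endpoints = []
    for path, operations in spec.get("paths", {}).items():
        for verb, details in operations.items():
            if verb.lower() in HTTP_VERBS:
                endpoints.append((verb.upper(), path, details.get("summary", "")))
    return sorted(endpoints)

for verb, path, summary in list_endpoints(SAMPLE_SPEC):
    print(f"{verb:6} {path:18} {summary}")
```

Every HTTP client, documentation generator, and discovery tool out there is doing some variation of this walk, which is why keeping your definition up to date, and easy to find, matters so much.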
P.S. While on the subject--PUBLISH AN RSS FEED FOR YOUR BLOG!!! ;-)
I'm seeing a significant shift in the conversations around how SaaS, and API-first platforms are planning access to their APIs. I'm seeing some pretty significant back-pedaling around free, and freemium access levels. I'm trying to keep notes on what I'm hearing, so that I can better understand what is causing this, and see if I can identify where the balance might exist in providing self-service access to our valuable API resources.
Let me explore some of the main reasons I'm hearing for reducing, restricting, or completely doing away with these lower level areas of access:
- Too Many API Freeloaders - There is a growing number of poorly behaved API consumers who are just looking for a free ride.
- At Odds With Sales Teams - Free layers of access cannibalize our sales cycle, and make it harder for sales teams to close the deal.
- These Layers Do Not Convert - The users coming in at these layers are just not converting, and becoming paid customers.
- We Built It And Nobody Came - Nobody seems to care about the API, and nobody signed up for access, so we are shutting down.
- No More Money To Support Free - We just don't have the money to pay for the infrastructure and support it takes to maintain free access.
- VC's Told Us To Focus On Enterprise - Our investors told us to focus all our attention on selling to the enterprise, consumer focus is gone.
When it comes to providing and consuming APIs, I've seen it all. I sympathize with many of these reasons for shrinking the free tiers of access to APIs. There will be many contributing factors to why things might be off in an API community. As my friend Ed Anuff (@edanuff) focused on in his latest post, a large number of us are doing API wrong, with many companies' approach to APIs being fundamentally at odds with their ecosystem.
Ultimately I think that API providers WILL need to tighten down their access levels, but this can't be done without properly thinking things through. You need to consider the bigger picture, around how you have planned API access, communicated and engaged with your consumers, and be honest with yourself about what you've done right, and what you've done wrong. While it might be a lot of work to manage this free level of access, and do it right, you want to make sure you still maintain an environment where serendipity can happen.
Then again, maybe you weren't actually interested in this happening in the first place. You were just looking to get someone to build some things for free on your resources, looking to offload the hard work on an external community. I'm not saying everyone who has a self-service, publicly available API will find success, but the ones that work hard to strike a balance here are more likely to have invested in all the right areas--setting the stage for a healthier balance between API provider and consumer.
I am slowly getting sucked into the world of bots. I've been tagging stories related to Twitter bots for some time, but it was the growing buzz of Slack bots that has really grabbed my attention. It pushed me to light up a research area, so that I can begin to look at things closer, and work to understand the common building blocks, like I do for the other areas of the API space.
The world of bots intrigues me, from the perspective of how APIs can be used to execute bots, but also provide the valuable resources needed to deliver bot functionality. I feel like the list of categories of available Slack bots is somewhat telling of the business potential for bots.
While I find Twitter bots creative and interesting, there are many I also found annoying as hell. I've had similar responses to some of the bots I've encountered on Slack. I tend to be pretty boring when it comes to goofy shit online, so my threshold for bot silliness is low. I don't care what others do, I just don't go there that often, so you'll see me highlighting more of the business productivity bots, or the more creative ones.
Another thing I think is significant, is the growing number of platforms in which bots are becoming common practice, with many of the bots operating on multiple platforms.
Another interesting part of this, is that not all bots are executed via API. Some bots simply use the chosen protocol of popular messaging applications, and operate via an account setup on the platform. However, even with these implementations, APIs still come into play in providing the bot with its required skills.
I think bots are significant to API providers, and they should be working to better understand how their APIs can be used to drive bot behavior via popular messaging platforms, as well as how bots can operate on their platform, either using APIs, or a common messaging format. There is a lot of chatter around bots online to sort through, and it will take me a while to produce some coherent research, but you can keep an eye on it as I progress via my Github research project for bots.
For some workshop preparation this week, I needed to isolate just the best of the API calls and documentation from a handful of APIs I am trying to teach my intended audience about. I have almost twenty separate companies targeted, with a couple hundred individual endpoints across the APIs served up by these companies. I needed a way to easily define, organize, and present a subset of API samples, intended for a specific purpose and audience.
In this workshop, I need the simplest, most intuitive samples possible across these popular APIs. For Twitter I need to be able to just send a tweet, or list friends, and on Facebook I need to post to my wall, and search for users. I need simple actions that will be meaningful to my higher education audience--most likely students. I don't want to bury all of them with the endless API possibilities that surround the APIs I'm showcasing, I need the twenty use cases that they'll give a shit about, and result in them being interested in what an API is.
As I do with all my research, I organize my lists of APIs into APIs.json collections, and publish the results as a Github repository. This allows me to quickly assemble relevant collections of APIs, designed for specific audiences, in a way that I can wrap with informative content and stories, helping them on-board with the importance of APIs, as well as the individual APIs. This got me thinking: because as part of an APIs.json index I have all the moving parts identified, I just need a way to simplify, and distill things down into little API samples that I can offer up through the experience I am crafting.
To help me in my effort I started defining a new APIs.json API type schema, which allows me to define a sample subset of any API I have defined using OpenAPI Spec. Within each schema, I can map to a specific endpoint + method present in the OpenAPI Spec, select which parameters will be used, and which default and enum values will be applied. I also provide a title, and description, and now I have a machine readable schema for my API sample. Next, I just need to craft a simple API Samples JS library for rendering these samples into widgets and other embeddable goodies.
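To make this a little less abstract, here is a rough sketch of what one of these API samples might look like. The property names (path, method, parameters) are just my working draft of the schema, and the Twitter spec fragment is hypothetical--the point is that a sample simply pins one endpoint, one method, and a handful of pre-filled parameters from a larger OpenAPI Spec:

```python
# One hypothetical API sample, pinned to a single endpoint + method.
sample = {
    "title": "Send A Tweet",
    "description": "Post a simple status update to your Twitter account.",
    "path": "/statuses/update.json",
    "method": "post",
    "parameters": [
        {"name": "status", "default": "Hello from my first API call!"},
    ],
}

# Hypothetical, trimmed-down OpenAPI (Swagger 2.0) fragment the sample maps to.
spec = {
    "paths": {
        "/statuses/update.json": {
            "post": {
                "parameters": [
                    {"name": "status", "type": "string"},
                    {"name": "lat", "type": "number"},
                    {"name": "long", "type": "number"},
                ]
            }
        }
    }
}

def resolve_sample(sample, spec):
    """Validate a sample against its spec, returning only the chosen
    parameters with their pinned defaults -- what a widget would render."""
    operation = spec["paths"][sample["path"]][sample["method"]]
    allowed = {p["name"] for p in operation["parameters"]}
    request = {}
    for param in sample["parameters"]:
        if param["name"] not in allowed:
            raise ValueError(f"{param['name']} is not in the OpenAPI Spec")
        request[param["name"]] = param.get("default")
    return {"method": sample["method"].upper(),
            "path": sample["path"],
            "params": request}

print(resolve_sample(sample, spec))
```

The sample deliberately hides the lat and long parameters the full spec offers--that is the whole idea, distilling an endpoint down to the one meaningful action a newcomer will care about.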
Each sample will use the security definition defined within the OpenAPI Spec to get the authentication details it needs, render it as a simple widget, which allows users to quickly make a relevant API request, and see the results immediately. This really is no different than API explorers and interactive docs like Swagger UI, but rather than providing a complete UI or docs, it provides just a sample, to help educate the API consumer around what is an API, as well as what any single API does.
While I think samples can live alongside the regular API docs, and explorers, within the portal of each API, I'm thinking they will become exponentially more valuable when used in a hacker storytelling, API broker, and evangelism setting. I have the base JSON schema defined for my API Samples, and once I get a working prototype for the JS client, I will publish a simple page demonstrating a few use cases. At first I'll just be using it to help people understand the value of APIs in general, as well as specific APIs, but I think in the future I will be able to get more sophisticated in how I tell stories around stacks of APIs, and how APIs can be assembled to accomplish bigger things than any single API can accomplish on its own.
Hopefully in the near future, I will have a wealth of API Samples for the most important APIs I profile as part of my API Stack work, providing me with some valuable education and storytelling tooling to assist me in my evangelism work.
I am preparing a project for the conversations, and a workshop I have on my schedule this week at Davidson College, called: Indie EdTech & The Personal API. I'll be going on campus, talking to campus leadership, administrators, teachers, and students about APIs. To put myself into the right frame of mind, I wanted to explore the concept of a personal API.
The Concept Of Personal APIs Is Ridiculous
First, I'll set the stage with what is a common reaction when I mention the term "personal API" to other API folk in the space: "It's a nice idea, but it just isn't something the average person will ever need, let alone care about what an API is--it is a non-problem." To me, that response sounds just like what you'd hear in the early 2000s when asked if any single individual would ever need a web presence--something that blogging and the rise of social media have continued to evolve, while also proving the naysayers wrong.
At first glance, a personal API seems like it would be a stack of APIs that you would setup, and manage yourself. Again, something the average developer or IT person would dismiss as out of the realm of reason for the average person--there just isn't a need. I would probably agree here. While having a set of APIs that could help you manage your life bits sounds like a great idea, I just don't think the average Internet user is going to care about their digital life bits at this level--the average person will have to have an actual problem to solve, before they will ever need and care about an API (the secret is that this is true for business too).
Everybody Already Has Personal APIs
I prefer looking at this topic as something that has already been answered--everyone already has personal APIs! I manage all my public messages and social network using the Twitter API, my photos using the Flickr and Instagram APIs, my documents with the Google Drive and Dropbox APIs, and my email with the Gmail API. I already have a stack of personal APIs I depend on each day to drive the web and mobile applications I use for my personal and professional existence, they just are spread all over the Internet, and not owned and operated by yours truly.
OK, sure. These APIs aren't technically personal APIs, but they do provide API access to my personal information. For me, personal APIs are going to be a very personal API focused journey, as well as a local destination. Even in the business API world, where having an API is not a ridiculous notion, what an API actually is varies widely. Rarely is there a coherent stack of APIs within a company, and the reality of providing and consuming APIs is actually a spaghetti mess of services, open source tooling, and custom code. The API journey is always about pulling together this vision, organizing it, discovering new APIs, evolving and deprecating old ones, while also having a plan to actually conduct business in this volatile online environment--a journey that will be no different for the individual.
In My World There Are Two Lists Of My Personal APIs
I am an API professional, making my living with APIs, and I have two lists of what I would consider my "personal APIs". First, there is my master list of APIs I have hand developed, then there are the public APIs I depend on, that are developed by external people and organizations. I'm guessing the first list comes closest to what someone would consider to be a stack of personal APIs. While I do prioritize the development of APIs in my own stack, the reality of a modern business owner is that you depend on a whole suite of API driven services that accomplish specific business objectives, something that is often done via integration using APIs. I also have to note that the overlap between these two spheres is huge. (Bernie Sanders huge!)
In the end, my point is that the lines between personal and other APIs are very blurred, and will always be a mix. I have a bunch of personal and professional life bits I create, move around, and share as I need throughout any given day or week. My personal storage API is always Amazon S3, Dropbox, or Google Drive, no matter how you look at it. I make decisions about which API I use to publish, store, and manage files, documents, and other heavy objects, based upon where I need them, the cost of storage, and any number of other factors. The leading storage API providers are my personal APIs any way you look at it, a theme that comes up again and again, blurring the concept of what a personal API is for me.
Breaking Down The Core Personal API Stack
To help think through this, I wanted to take a moment and break down what the core personal API stack might be. With my experience it is pretty easy to identify, as one area of my API research, the backend as a service (BaaS) space, works hard to deliver many of the common objects that are used across the mobile apps developers are building today. These are just a handful of the common, end-user facing API resources that are available as part of the standard BaaS offering:
- Profiles - The account and profile data for users.
- People - The individual friends and acquaintances.
- Companies - Organizational contacts, and relationships.
- Photos - Images, photos, and other media objects.
- Videos - Local, and online video objects.
- Music - Purchased, and subscription music.
- Documents - PDFs, Word, and other documents.
- Status - Quick, short updates on your current situation or thoughts.
- Posts - Wall, blog, forum, and other types of posts.
- Messages - Email, SMS, chat, and other messages.
- Payments - Credit card, banking, and other payments.
- Events - Calendar, and other types of events.
- Location - Places we are, have been, and want to go.
- Links - Bookmarks and links of where we've been, and where we're going.
Obviously there are many other objects that represent our digital existence, but to help keep things focused, I will only highlight these very common life bits we generate, store, move around, and share online each day. Depending on what we do for a living, the bits and bytes we will be managing using APIs will vary widely, but for the most part all users will have something to put into one or more of these areas of one possible personal API stack.
Understanding Where Our Personal APIs Operate and Store Information
With a core stack defined, let's think about where this personal API stack will live. There is no way all of these resources could possibly live in a single location, and we will always need the help of a variety of companies, organizations, government agencies, and other individuals, to help realize even this core set of personal APIs. Let's spend a moment exploring where each of these life bits already exist, and considering what our motivations around storing, syncing, syndicating, and backing up might be.
Profiles:
- Facebook - Your account and profile on Facebook.
- Twitter - Your account and profile on Twitter.
- Instagram - Your account and profile on Instagram.
- Tinder - Your account and profile on Tinder.
- Yik Yak - Your account and profile on Yik Yak.
- Snapchat - Your account and profile on Snapchat.
People:
- Facebook - Maintaining connections and friends on Facebook.
- Twitter - Maintaining connections and friends on Twitter.
- Instagram - Maintaining connections and friends on Instagram.
- Tinder - Maintaining connections and friends on Tinder.
- Yik Yak - Maintaining connections and friends on Yik Yak.
- Snapchat - Maintaining connections and friends on Snapchat.
Companies:
- Facebook - Managing your own, and engaging with other business profiles & pages.
- Twitter - Managing your own, and engaging with other business accounts.
- LinkedIn - Managing your own, and engaging with other business profiles & pages.
Photos:
- Facebook - Managing the photos that are primarily published, or syndicated to Facebook.
- Instagram - Managing the photos that are primarily published, or syndicated to Instagram.
Videos:
- Youtube - Managing your own videos, and the videos curated from other users on Youtube.
- Facebook - Managing your own videos, and the videos curated from other users on Facebook.
- Instagram - Managing your own videos, and the videos curated from other users on Instagram.
Music:
- Spotify - Accessing your music, and experiencing music discovery via the Spotify API.
Documents:
- Dropbox - Using the Dropbox platform for the storage, management, and sharing of documents.
- Google Drive - Using the Google Drive platform for the storage, management, and sharing of documents.
- Google Sheets - Using the Google Sheets platform for the storage, management, and sharing of documents.
Status:
- Facebook - The current status, as well as the historical archive, on Facebook.
- Twitter - The current status, as well as the historical archive, on Twitter.
- Instagram - The current status, as well as the historical archive, on Instagram.
Posts:
- Facebook - Managing longer form content published on Facebook.
- Twitter - Managing longer form content published on Twitter.
- WordPress - Managing longer form content published on WordPress.
- Blogger - Managing longer form content published on Blogger.
Messages:
- Facebook - All of the public and private messaging occurring via Facebook.
- Slack - All of the public and private messaging occurring via Slack.
- SnapChat - All of the public and private messaging occurring via SnapChat.
- Yik Yak - All of the public and private messaging occurring via Yik Yak.
Payments:
- Paypal - All payments made and received via Paypal.
- Facebook - All payments made and received via Facebook.
Events:
- Facebook - Adding, updating, and deleting of Facebook events.
- Google Calendar - Adding, updating, and deleting of Google Calendar events.
Location:
- Facebook - The current and historical location, as well as the places of others, on Facebook.
- Twitter - The current and historical location, as well as the places of others, on Twitter.
- Instagram - The current and historical location, as well as the places of others, on Instagram.
Links:
- Pinboard - All bookmarked URLs added, and stored via Pinboard.
- Google URL Shortener - All URLs that were shortened via the Google URL shortener.
The APIs of these platforms are your APIs, and much of this information is never going anywhere, unless you are simply looking to backup, or sync to other locations (which you should be doing). I try to look at this as a positive: by default, you have a rich stack of APIs available to help you manage your digital information. Whether you are a company, or an individual, you will always have to make some trade-offs about how you manage your information, considering where things are stored, and how important the resources are, ultimately seeking to strike some sort of balance.
OK, So What? Nobody Cares About Their Information
Even if we look at APIs in this way, that our personal APIs will always be a mix of APIs across the services we use, the average individual doesn't know or care about APIs--this is a non-problem. #truth The average individual most likely will never care about the bits and bytes they generate online each day. The photos, videos, messages, and other exhaust from our daily lives can just be lost (and monetized and owned by someone else), along with many of our memories in the physical world--not everything needs to be saved.
Within this reality, there will always be some of the bits and bytes that we choose to look at differently. There will be situations where some videos, and some photos are more valuable than others. Maybe we are a professional speaker, and our photos and videos are used as part of our professional services. Maybe we are a writer, and we need to be paid for our words, either through advertisements, pay to download, or paywalls and subscriptions. There are many reasons why we will want to keep better track of our digital bits and bytes, and there will always be a need to educate new individuals around the opportunities to take control of our digital self, even with the majority of people never caring about their information.
Personal APIs Will Always Require Regular Doses Of API Literacy
Nobody will ever care about APIs, and put the proper thought into their existence, if they do not know about APIs. APIs are not a destination, they are a journey, and whether it is an individual, business, organization, or other entity, there has to be the proper education about what an API is, and which APIs are already in use. Throughout this journey, APIs will continue to evolve in their meaning, and as this understanding becomes more nuanced, so will the personal API stack.
Personal APIs Will Evolve And Be Strengthened By Living POSSE First
API literacy will be exercised and strengthened through operating and managing your online domain, living a Publish (on your) Own Site, Syndicate Elsewhere (POSSE) existence. POSSE is less about your bits and bytes living within your domain, than it is about thinking critically about what your bits and bytes are, and where they do, and should live. Regular POSSE rituals help you understand the API possibilities, and experience the limitations of APIs, but also hopefully the potential of APIs, always guiding you down a path, toward your fuller awareness of what your personal API can be.
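The POSSE ritual itself is simple enough to sketch in a few lines--publish the canonical copy to your own domain first, then push copies out to the silos, each pointing back home. The publisher functions below are hypothetical stand-ins for real API clients (your own site's API, the Twitter API, and so on), and example.com is just a placeholder domain:

```python
def publish_to_own_site(post, archive):
    """Store the canonical copy of a post under your own domain first."""
    url = f"https://example.com/blog/{post['slug']}"
    archive[url] = post
    return url

def syndicate(post, canonical_url, silos):
    """Push copies out to the silos, always pointing back at the original."""
    receipts = []
    for silo in silos:
        receipts.append(silo(f"{post['title']} {canonical_url}"))
    return receipts

# A day in a POSSE life: publish at home, then syndicate elsewhere.
archive = {}
post = {"slug": "personal-apis", "title": "My Personal API Stack"}
url = publish_to_own_site(post, archive)
receipts = syndicate(post, url, [lambda text: ("twitter", text),
                                 lambda text: ("facebook", text)])
print(url)
print(receipts)
```

The ordering is the whole point: the silos only ever receive a copy, while your own domain holds the original--exactly the critical thinking about where your bits and bytes live that POSSE is meant to exercise.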
Personal APIs Will Be Defined By The Services We Use (or Don't Use!)
Our personal APIs will always be defined by the services we use, or don't use. Having the control that we desire over our life bits amplifies the open (or the closed) nature of the services and platforms we are using. Once we are API literate, and have embarked on our personal API journey, platforms that do not give us the control we are used to make less sense to use. This experience will raise our expectations of the services we use, shaping our regular POSSE rituals, feeding our thoughts about what a personal API is, and driving our decisions around the online services we use, or do not use in the future.
Personal APIs Are Validated By The Solutions They Deliver For Us
Ultimately our personal APIs will always be defined by the solutions that they deliver. If we cannot use an API as part of our regular rituals, sync, share, and publish as we desire, an API will either never exist in the first place, or wither on the vine, and end up orphaned, and in disrepair. This reality will continue to define what a personal API is, and it will be a driving force for anyone to learn about what an API is, how to live POSSE, and make decisions around which services we use. Personal APIs will continually be validated, or invalidated, by the solutions they enable, or do not enable.
The Need For Personal APIs Will Grow As We Take Control
As API literacy matures, and we take more control of our world through our POSSE rituals, which are strengthened by the services and tools we use, the concept of personal APIs will grow, evolve, and take deeper root. Something that brings me back to all of the root concerns I hear from folks, about people not caring, and there being no problem or need for personal APIs. This is all true of the average employee who subscribes to current IT norms. This is all true of the average online digital citizen who subscribes to current Silicon Valley, tech industry norms. I see the naysayers of the concept of the personal API as the gatekeepers of traditional power structures, with the handful of us who strike out on our personal API journeys simply demanding to live a somewhat safer, saner, and healthier life in the cracks of this digital circus.
I'm evaluating the Alexa Voice Service ecosystem alongside leading API messaging platforms like Telegram, and Slack, which are changing the way users engage and communicate, but are also evolving how we put our API driven resources to work. As I do this research, I keep finding myself coming back to Amazon's concept of an Alexa Skill, and thinking about how it applies to average everyday APIs like mine.
Do my APIs have the skills they need to compete in this new voice and bot enabled world? It is bad enough that I don't always have the skills necessary to compete as a programmer, but now my APIs have to have the right skills? WTF ;-) Seriously though, I feel Amazon's concept of the "skill" reflects a wider experiential shift in the API space, where APIs need to deliver information and other digital resources in the context of how they will be experienced by users, and not just how they are stored and maintained by developers and IT operations.
Since there is such a diverse amount of APIs out there, what exactly constitutes a "skill" could vary widely. If you are a person or business directory, the skill might be returning the website address or phone number for an individual or business. If you are an email or SMS service it might simply be sending a message to an individual. The concept of a skill comes further into focus when you think in the context of the Alexa Voice Service, or as Amazon puts it:
Alexa, the voice service that powers Amazon Echo, provides capabilities, or skills, that enable customers to interact with devices in a more intuitive way using voice. Examples of skills include the ability to play music, answer general questions, set an alarm or timer, and more.
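To make the concept of a skill concrete for an average everyday API, here is a hedged sketch of a tiny skill registry, mapping utterance patterns to API-backed handlers. This is my own toy dispatcher, not the actual Alexa Skills Kit interface (which defines intents in JSON schemas), and the handlers are stand-ins for real API calls:

```python
import re

# Registry of skills: a regex pattern mapped to a handler function.
SKILLS = {}

def skill(pattern):
    """Register a handler for utterances matching a regex pattern."""
    def register(handler):
        SKILLS[pattern] = handler
        return handler
    return register

@skill(r"look up (?P<name>.+)")
def lookup_contact(name):
    # In a real skill, this would call the directory API behind it.
    return f"The phone number for {name} is 555-0100."

@skill(r"send (?P<name>.+) a message")
def send_message(name):
    # Here we would call an SMS or email API on the user's behalf.
    return f"OK, message sent to {name}."

def handle(utterance):
    """Route a spoken or typed utterance to the first matching skill."""
    for pattern, handler in SKILLS.items():
        match = re.fullmatch(pattern, utterance.lower())
        if match:
            return handler(**match.groupdict())
    return "Sorry, I don't have that skill yet."

print(handle("Look up Jane Doe"))
```

The API behind each handler does the storing and maintaining, while the skill layer worries about how the resource will actually be experienced--which is exactly the shift I am describing.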
How does this same way of thinking apply when we are communicating in Slack? Does my API have the same skills to identify that someone just asked a question, or possibly executed a keyboard shortcut, and can it respond intelligently, in real-time, with the expected behavior the user is anticipating? In addition to having the right skills, Slack is also asking if our APIs can enable Bot Users to be "delightful, interesting, and fun"--significantly raising the bar for what is expected.
As with the evolution of our own personal and professional skills, it will take some practice to develop the new skills that our APIs will need to be successful in this evolving landscape. Something that cannot even begin unless we have already embarked on our own API journey, exposing valuable data, content, and other resources as APIs, while also having in place an efficient way to add, and evolve our API resources. Only then can we start really polishing and honing our API skills, to operate via voice enabled platforms like Alexa, and the next generation of messaging platforms like Telegram, and Slack.
All of a sudden I feel like my APIs are just a teenager who is typing up their first resume, headed out to find their first job interview, so they can afford a car, and go out on their first date.
I was experimenting with breaking apart API definitions over the weekend, and exploring different ways of assembling the moving parts into different types of tools, visualizations, and other goodies. I do not have any particular objective with this work, other than just pushing the boundaries of how we dynamically tell the story of our APIs, and hopefully helping move forward the currently available API documentation toolbox we have for API providers, and consumers.
Yesterday I published a short piece on API definition driven tag clouds, and this morning I have an API definition driven autocomplete text box, providing access to the paths, verbs, or tags present in any single OpenAPI Spec, or multiple specs that are indexed using APIs.json.
This particular edition of the APIs.json and OpenAPI Spec autocomplete pulls across the SMS API providers included in my SMS API research, but is something that will be available for any of my research areas, as part of my ever evolving set of APIs.json tooling. I will be building these into my own custom API design tooling, allowing me to quickly recall the hundreds of endpoints available in my API stack, as well as learn about additional API endpoints available in the 3rd party APIs I already depend on.
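The mechanics behind this kind of autocomplete are pretty simple, and worth sketching out. Here is a minimal, hypothetical Python sketch of building a flat autocomplete index from an APIs.json collection: loop through each indexed API, find its OpenAPI Spec, and collect the paths, verbs, and tags into one searchable list. The `apis_json` and `specs` dicts are stand-ins for files that would be fetched over HTTP in a real implementation, and the `x-openapi-spec` property type is one common convention for pointing at a spec from APIs.json.

```python
# Hypothetical APIs.json index for a research collection.
apis_json = {
    "name": "SMS API Research",
    "apis": [
        {"name": "Example SMS API", "properties": [
            {"type": "x-openapi-spec", "url": "example-sms.json"}]},
    ],
}

# Stand-in for the OpenAPI Specs that would be fetched by URL.
specs = {
    "example-sms.json": {
        "paths": {
            "/messages": {
                "get": {"tags": ["sms", "messages"]},
                "post": {"tags": ["sms", "send"]},
            }
        }
    }
}

def build_autocomplete_index(collection, spec_store):
    """Collect paths, verbs, and tags from every spec in the collection."""
    terms = set()
    for api in collection.get("apis", []):
        for prop in api.get("properties", []):
            if prop.get("type") != "x-openapi-spec":
                continue
            spec = spec_store.get(prop["url"], {})
            for path, verbs in spec.get("paths", {}).items():
                terms.add(path)
                for verb, details in verbs.items():
                    terms.add(verb.upper())
                    terms.update(details.get("tags", []))
    return sorted(terms)

index = build_autocomplete_index(apis_json, specs)
print(index)
```

The resulting list is what gets wired into the autocomplete widget on the front end, with each keystroke filtering against it.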
I am going to also play with how I can use this across the documentation of the APIs that I include in my research. Ultimately I would like to see these types of solutions available as a suite of UI, and UX tools in my overall API driven, hacker storytelling toolbox, allowing me to better tell stories around what APIs can do.
This weekend I took my API Stack tag cloud, and made it driven by API collections defined using APIs.json and OpenAPI Spec. Instead of driving it from a simple tag JSON file, I wired it up to the APIs.json for each of my research projects, and it loops through each API that is indexed, finds their OpenAPI Spec, and uses various elements to publish as tags.
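The loop described above boils down to counting tag frequencies across a collection's specs, and using those counts as the weights in the cloud. Here is a small sketch of that idea under stated assumptions: the `specs` list is a hypothetical stand-in for the OpenAPI Specs discovered via each research project's APIs.json, and tag weights map to font sizes in the rendered cloud.

```python
from collections import Counter

# Two hypothetical OpenAPI Specs pulled from an APIs.json collection.
specs = [
    {"paths": {"/messages": {"get": {"tags": ["sms"]},
                             "post": {"tags": ["sms", "send"]}}}},
    {"paths": {"/sms": {"post": {"tags": ["sms", "send"]}}}},
]

def tag_weights(spec_list):
    """Count how often each tag appears across every path and verb."""
    weights = Counter()
    for spec in spec_list:
        for verbs in spec.get("paths", {}).values():
            for details in verbs.values():
                weights.update(details.get("tags", []))
    return weights

weights = tag_weights(specs)
print(weights.most_common())  # most frequent tags render largest
```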
Then I wanted to scale it, and see what the tag cloud would look like when applied to a larger collection:
I'm not sure if these visualizations offer me any value, but it gets me thinking about APIs at the macro level, considering different ways to slice and dice the information available as part of any of the APIs indexed. The verb tag cloud is extracted from an API I have that returns the verb count for any APIs.json collection, which gives me one possible data point to consider when quantifying how open, or closed an API is. It isn't always a reliable signal, due to the wide variety of ways people design their APIs, but when you see an API that is all GET, there is a good chance they are pretty tight with their resources.
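The verb count data point is easy to compute from any OpenAPI Spec. Here is a minimal sketch, with a hypothetical spec standing in for a real collection, that tallies the HTTP verbs and derives the share of read-only GET operations -- an API sitting near 1.0 is likely read-only.

```python
from collections import Counter

# Hypothetical OpenAPI Spec paths for a photo API.
spec = {
    "paths": {
        "/photos": {"get": {}, "post": {}},
        "/photos/{id}": {"get": {}, "put": {}, "delete": {}},
        "/tags": {"get": {}},
    }
}

def verb_counts(spec):
    """Tally each HTTP verb across every path in the spec."""
    counts = Counter()
    for verbs in spec.get("paths", {}).values():
        for verb in verbs:
            counts[verb.upper()] += 1
    return counts

counts = verb_counts(spec)
read_ratio = counts["GET"] / sum(counts.values())
print(counts, round(read_ratio, 2))
```

Run against a whole APIs.json collection, the same tally gives the verb tag cloud its weights.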
I was reading a post on Amazon's new SMART (Surveillance Marketed As Revolutionary Technology) water pitcher, which is more about Amazon's new connected device partner commerce strategy, than it is about this individual connected device example. The quote from the story explains the strategy pretty well.
Last week, Amazon.com curiously devoted an entire press release to a water pitcher. But not just any water pitcher. Rather, Amazon detailed a new partnership with Brita to bring consumers the $45 smart Brita Infinity Pitcher. Just connect it to your Wi-Fi, and the Brita Infinity Pitcher will automatically track how much water passes through its filter. Then, using Amazon Dash Replenishment, it'll automatically order a new filter when a replacement is needed.
So now, your pitcher orders its replacement filters, your lights will order new bulbs, your printer will order more of those wonderfully expensive print cartridges for you. At a consumer level, most of this seems pretty lazy to me, and unnecessary, but then again, I do not represent mainstream society in any way. Having the common devices in your life order up their replacement parts, or even servicing, seems pretty attractive, but is something I'm sure we won't consider the downside of until it's way too late.
Ok, let's stop for a second. There is your new startup idea Jane / Joe entrepreneur, "the Angie's List + Dash Replenishment == service scheduling API platform". My cable modem calls for a service provider when the modem needs to be replaced with a newer model, my refrigerator and dishwasher call the repair(wo)man when they are not running at optimal levels. When you get your funding, make sure and cut me in for a point or two. ;-)
Well, as with 96% of this API circus, I'm not highlighting this because I think it is a good idea, I am talking about it because it is happening, and will begin impacting the rest of us. As more of our everyday objects are being connected to the Internet, I want to understand the technical, business, and political aspects of what is happening behind the scenes. Honestly the consumer vision of this doesn't really get me fully interested, but I could see it being used in some pretty interesting ways in commercial, and industrial environments.
Amazon Dash Replenishment is an interesting layer of any home, or small business, and an API driven vehicle for commerce. I'm sure Amazon is going to do very well with it. I will keep an eye on what they are doing, as well as on any other implementations in commercial and industrial settings, and keep thinking about what is possible, both the good, and the bad.
I am borrowing from the very prescient post from Martin Fowler, an older post, but is a topic that should be revisited regularly. Google translate tells me Datensparsamkeit means "data minimization". I prefer Fowler's translation:
It's an attitude to how we capture and store data, saying that we should only handle data that we really need.
My partner in crime Audrey Watters (@audreywatters) sent me the link, expressing that it was very telling that we (the United States) do not have a word for this concept. I think data minimization gets the point across, but I think Fowler elevates it from being just a data process, to something that should be rooted in company culture.
I am adding Datensparsamkeit as a building block to any area of the API life cycle that will potentially store data. The goal is to help API platforms consider how they capture and store data at the strategy level, as well as tactically at every stop along the API life cycle, and only capture and store exactly what you need, and nothing more.
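To make the idea concrete, here is a minimal sketch of Datensparsamkeit applied at the API layer: before anything is written to storage, the incoming payload is reduced to an explicit whitelist of fields the service actually needs. The field names here are hypothetical, and a real service would pair this with a documented retention policy.

```python
# Only the fields the service genuinely needs ever reach storage.
STORED_FIELDS = {"email", "plan"}

def minimize(payload, allowed=STORED_FIELDS):
    """Drop every field not on the explicit whitelist."""
    return {k: v for k, v in payload.items() if k in allowed}

record = minimize({
    "email": "user@example.com",
    "plan": "pro",
    "ip_address": "203.0.113.7",   # not needed -> never stored
    "user_agent": "Mozilla/5.0",   # not needed -> never stored
})
print(record)  # {'email': 'user@example.com', 'plan': 'pro'}
```

The point is that minimization happens by default, at the edge, rather than relying on someone remembering to delete sensitive data later.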
Just because cloud storage is cheap doesn't mean we should capture and store everything we can--just because we can, doesn't mean we should. Data can just as easily be a liability as it can be an asset, and the machinations around there being an all knowing, all seeing big data future are pure fantasy. The real trouble becomes clear when you realize that the majority of the tech sector, and the NSA, kind of enjoy living in this fantasy world.
One topic that has been present in numerous discussions lately is just how much work goes into designing, deploying, and managing APIs, as well as around the integration between the growing number of APIs. It keeps coming up in conversations with existing API service providers, as well as internal actors within small business, enterprise, and government API efforts.
When you live and breathe APIs, it all just makes sense in your head, but as API architects, designers, and believers trying to shepherd your API forward within existing organizations, you will come up against many unanticipated challenges. I have discussed this before with my stories about 75% of your API efforts in the enterprise being cultural and political, not technical, and with a short one on your API strategy providing a glimpse into your company culture. In any API journey, there will always be numerous business, political, and ultimately human obstacles between you and the API success you see in your head, something that takes investment, and access to the right expertise and resources.
In conversations with API tooling and service providers lately, the need to focus more resources on training, development, and integration services has come up several times. It is good that companies in the space are recognizing the need, and investing in what is needed. Where the imbalance in all of this begins to show up is that it isn't something these companies are willing to highlight as a core competency on their sites, or in my storytelling, because it is viewed as a potential negative by investors. I am guessing that investment in training, development, and integration services just isn't trendy enough for VCs.
I get it. Investors want to put their money into the coolest tools, the ones that match all the stories they are hearing. However I can't help but feel this positioning is creating an imbalance that is preventing some very interesting tools, and API driven solutions from getting the traction they need in some very real world use cases, because they can't properly invest in these areas. If I'm seeing this behavioral adjustment from just a handful of startups, who have valuable API services, and resources, but are still seeking funding and adoption, I'm guessing there is a wider imbalance that I may not be seeing, at companies who haven't opened up to me.
I guess, ultimately I am just asking companies who are embarking on their API journey to invest more in company-wide API literacy training, development, and integration services. I'm also asking API service providers who are selling these valuable tools and services, to make sure and invest in these areas as well (publicly if you can). Then finally I am asking VCs to loosen up their views on these areas, and understand the potentially negative impact on the API focused companies in their portfolios, even if they are the next big thing. Personally, I'm betting this whole investment game ends up being like Bitcoin, with each additional block in the chain being a little harder to mine, something that takes incrementally more and more resources, and the companies who have the on-demand training, development, and integration services ready to go, will do well.