{"API Evangelist"}

Preparing Postman Collections Ahead Of Time For Developers Like JustGiving Does

I have been slowly adding Postman Collections to many of the API indexes for my new master stack. I index each API using an APIs.json file, and provide a master APIs.json that brings all of the APIs together into the single portal I've published to Github. Within some of the APIs you will see a Postman icon, allowing you to easily import the API definition into Postman, directly from each API's landing page.
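
For those who haven't seen one, here is a minimal sketch of what an individual API's APIs.json file might look like, with a property entry pointing at a Postman Collection--the property type label and example URLs are my own illustration, not an official part of the spec:

    {
      "name": "Example API",
      "description": "A single API, indexed with APIs.json",
      "url": "http://example.com/apis.json",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Example API",
          "humanURL": "http://example.com/",
          "baseURL": "http://api.example.com/",
          "properties": [
            { "type": "Swagger", "url": "http://example.com/swagger.json" },
            { "type": "X-postman-collection", "url": "http://example.com/postman.json" }
          ]
        }
      ]
    }

The master APIs.json can then use the format's include collection to pull each of these individual indexes together into the single portal, and the Postman icon on each landing page just links to the collection URL.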

When I first wrote that I was publishing these Postman Collections on my Alpha API Evangelist blog, someone from the donation API platform JustGiving said they also prepare Postman Collections for their developers, and publish them to Github for their API consumers: 

At JustGiving we give our API consumers a set of collections to import to make their jobs even easier, we also use the "Environments" feature to substitute different values depending on whether you're using our Sandbox or Production environment - it's a real time saver and also helps when supporting our API consumers.
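
To make the quote a little more concrete, here is roughly what a Postman environment export looks like--the variable names here are hypothetical, not JustGiving's actual setup:

    {
      "name": "Sandbox",
      "values": [
        { "key": "baseUrl", "value": "https://api.sandbox.example.com", "enabled": true },
        { "key": "apiKey", "value": "your-sandbox-key", "enabled": true }
      ]
    }

Requests in the collection then reference {{baseUrl}} and {{apiKey}}, and switching from a Sandbox environment to a Production one swaps in the right values without touching the collection itself.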

I feel pretty strongly that Postman is a measurable unit for transactions in the API economy, and I feel it goes a long way in helping API consumers onboard with each new API. As a developer, if I can import a Postman Collection into my client within just a couple of clicks, and be up and running with an API--I am going to be a much happier camper, and that much closer to integration.

I'm considering adding Postman Collections as a building block under API on-boarding, and encourage API providers to follow JustGiving's lead. Postman Collections are kind of the new API explorer, allowing developers to quickly test drive an API they are going to integrate with. With my own platform, I'm also going to play with publishing Postman Collections, complete with API keys that I refresh regularly, allowing immediate, but limited, access to my APIs for anyone who stumbles across them. 

Postman Collections are increasingly a machine readable definition that API providers should be producing, alongside Swagger and API Blueprint for describing the entire API surface area, and APIs.json for indexing all API operations.



On Twitter, Gnip, DataSift, And Making The Hard Platform Decision

For over a week now I have been sitting with Steve Willmott of 3Scale's response to the announcement that Twitter is cutting off firehose access, and bringing the higher level access completely in house via their Gnip acquisition. You can read the response from Gnip via their blog post Working Directly With the Twitter Data Ecosystem, and from DataSift in their The Twitter/Gnip Gap: Data Licensing vs Data Processing. Steve and I are in agreement with the points made in his How Not to be a Platform: Twitter’s Firehose Mistake post. Where I really struggled over the last week, in producing an official response, was after reading How To Be A Platform: Making Tough Partnership Choices, from my friend Tyler Singletary. 

You see, I agree with everything Tyler pointed out as well. He's right. Twitter has probably made the right decision for Twitter, and based upon their goals and understanding of the Twitter API ecosystem, did exactly what should be prescribed. I have lots of opinions around the Twitter ecosystem, and how Twitter Continues to Restrict Access to Our Tweets. However, when I step back and put on my API operations hat, everything Tyler points out is valid in my opinion. Twitter gets to decide who has access, and with respect to their goals, choosing the Gnip offering over the DataSift offering makes sense. It's all about the winning solution, and where the ROI is when it comes to platform operations. Life is good when you are on the winning team. However #winning != #good, and #winning != best outcome. I guess that #open != good, and #open != best outcome either. There are no absolutes in API land--sorry.

Where Tyler's stance really begins to lose its shine for me is when you zoom out. You see Tyler, you are on the winning team. You have access to the firehose, and you operate squarely within the #winning conversation (I know you don't have direct firehose access, and rely on Gnip). For those of us who aren't, the conversation looks very different. Let's take a look at the Gnip vs. DataSift product comparison you provide, but from my stance.

After two phone calls with the Gnip sales team, they realized I didn't have the requisite $45K to move forward--not worthy of continuing to talk to this one. All I'm left with is the Twitter REST and Streaming APIs, and DataSift. Gnip isn't even on the menu, isn't even an option in my world. So only the players who can afford to be at the table get to play--kind of sounds like a familiar story to me. Where have I heard this before? Oh yeah, in every industry, throughout history. This isn't the API promise I've been hearing! ;-)

I remember 2007 and 2008, when Twitter was built on these same principles. Only people with the big money could step up and play on the platform. Oh wait, no! It was the other way around--the people with money didn't give a shit about Twitter, and only us small fish were playing on the platform. We were building widgets, Wordpress plugins, meaningful analytics, and mobile apps, and Twitter encouraged it all. Prove what you are building on the public API, and when you are ready, we'll get rid of any rate limits, or even give you the firehose--just ask us, we'll give you the access you need. The firehose rhetoric was much different five years ago:

Full investment in this ecosystem of innovation, means all our partners should have access to the same volume of data, regardless of company size. More than fifty thousand interesting applications are currently using our freely available, rate-limited platform offerings. With access to the full Firehose of data, it is possible to move far beyond the Twitter experiences we know today. In fact, we’re pretty sure that some amazing innovation is possible.

The rush of innovation did happen, but does anyone have access to the same volume of data regardless of company size in 2015? Does the argument that Tyler lays out today apply in a 2007 or 2010 Twitter ecosystem? How does this line of thinking impact the new startups who are trying to get API developers to come build on their ecosystems, now, when the big money players don't care about them? Sure, things have changed, things have evolved, but I just don't buy the argument that the better product (Gnip) won out over the other product (DataSift). In the neatly packaged and positioned argument, sure, it makes sense, but if you zoom out, it just doesn't hold weight, now or then. Things have changed for Twitter, but things haven't changed for new startups who are building platforms--they are still depending on early adopters to build on their platform, turn them into a success story, and attract VCs and users.

This is what leaves many of us so disgruntled. It's not Twitter making the hard decisions around platform operations. It is their abrasive stance towards those of us who are not in the club, their lack of communication about the ecosystem, their sharecropping and poaching of the best ideas along the way, and the slow changing of the tune over time. Honestly, I feel like they've done a good job recently in correcting much of the earlier bad behavior, with their overall partner program, API operations, and evangelism, but their whole firehose position is lacking the bigger ecosystem vision--it is just self centered. Twitter exploited the community when they could, they hand picked a group of partners (Gnip, DataSift, DataMinr, Mass Relevance, and many more) to serve this need when they didn't have the time or resources to do it on their own, and now that they see there are more dumbass enterprise people willing to pay Gnip prices for our data than people building on the long tail, I respect the whole thing even less--to the point where I have almost stopped giving a shit, and begun moving on.

Steve is right, Tyler is right, and sure, Twitter is right. You might even be able to build an argument that Twitter doesn't owe the long tail of ecosystem developers anything for helping make Twitter what it is, which would be a tough argument in my opinion, but you can't argue that Twitter doesn't owe a certain amount of responsibility to end-users to keep platform operations real. This means not making firehose access something that only the IBMs, Intels, Oracles, and other big $$ companies of the world can afford to use to build (at scale) on Twitter. Sure, the rest of us can still build on the public REST and Streaming APIs, but what then? Why would we build on the platform if, when we reach the 100K token limit, we have no guarantees for the future--that is, unless we are in the good ol' boy club, and don't have anything to worry about.

I guess in the end, this is all about access (pun intended). Like many other areas of the world, the view depends on whether you are lucky enough to have a privileged position--and I'm not just pointing at the Klout / Twitter relationship, I'm also talking about the other cozy VC relationships I've witnessed companies enjoy with Twitter, versus the companies I've seen that do not enjoy such privileged access. The conversation cannot just be about the merits of Twitter's decision, and the performance of Gnip and DataSift, without including the bigger ecosystem picture. The hard API ecosystem decision argument is only valid if it takes into consideration all of the macro issues that truly impact a platform, not just those of the platform owners and their most cherished partners and investors. While I think Twitter and its investors feel like they own Twitter, us developers and power users who have been on the platform since the early days feel like we have a certain amount of ownership too--an ownership Twitter has shown they don't respect.

Twitter definitely holds the cards at the moment, but eventually, as more of us power users realize we are just fine without a Twitter presence, Twitter will face becoming the NBC of social media--still relevant, but are you truly relevant? Not forever.



Hypermedia APIs From Sébastien Cevey of The Guardian at @APIDaysBerlin / @APIStrat Next Week

The most important stories told across the API space, the ones that have the biggest impact on API providers, and ultimately API consumers, are the stories that come out of the trenches of API operations at the leading public API providers. This is why 3Scale and API Evangelist started APIStrat, and it is what the audiences over the last three years of operating APIDays and APIStrat have consistently requested--more stories from the trenches.

With this in mind, we've asked Sébastien Cevey (@theefer) of The Guardian to come to @APIDaysBerlin / @APIStrat Europe 2015, and share a story from API operations at the world's leading media API provider. Sébastien wanted to share his view of what hypermedia APIs are, by comparing them to a classical RPC architecture and ad-hoc "JSON APIs". He will also cover the key benefits of using hypermedia APIs (cacheability, discoverability, evolvability/extensibility, simplicity, interactive nature), presented in the context of real-world examples and the different constraints of REST (HTTP, URIs, verbs, hypermedia controls like links and forms).
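
If you are wondering what the difference looks like on the wire, here is a minimal, HAL-style sketch of a hypermedia response--my own illustration, not an example from the Guardian's actual API. Where an ad-hoc "JSON API" returns just the data, a hypermedia response also carries the links a client can follow next:

    {
      "id": "a1b2c3",
      "headline": "An example article",
      "_links": {
        "self": { "href": "/articles/a1b2c3" },
        "comments": { "href": "/articles/a1b2c3/comments" },
        "next": { "href": "/articles/a1b2c4" }
      }
    }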

What I like about Sébastien's viewpoint is 1) he works at one of the most progressive media outlets in the world, where he is actually putting APIs to work, and 2) his pragmatic approach. Sébastien will talk about what hypermedia APIs are, anchoring his lessons in real-world examples from Guardian operations, but he will also discuss the caveats of this API style, acknowledging the realities of API operations. There are no perfect solutions, but technologists like Sébastien are pushing the discipline of API design forward, while balancing it against the real-world constraints that exist in their workplaces.

The keynote from Sébastien Cevey (@theefer) of The Guardian will kick off the first day of @APIDaysBerlin / @APIStrat Europe 2015, so make sure you are registered today, so you don't get locked out when we sell out (which we are very close to doing). If you are looking to learn more about hypermedia APIs, I recommend starting with Sébastien's keynote, but also make sure to join us Saturday morning for the entire hypermedia API session, where you can learn more about this fast growing API design consideration from those pushing the conversation forward, and putting hypermedia APIs to work in the real world.

I look forward to seeing you all in Berlin, next week!



Weekly API.Report For April 13th, 2015

My Weekly API.Report represents the best of what I've read throughout the week, and is only what I personally felt should be showcased. Each news item comes with a link, and some thoughts I had after curating the piece of API related news. I'm trying to break stories down into buckets that are as coherent as I can make them; the buckets remain ever changing for me, but ultimately I will settle on a clear definition for each of the research areas.

Open with an Account Management scenario:

Some Acquisitions from the previous week:

Handful of Analytics items:

  • Facebook Launches Analytics Tool - Facebook doing more analytics. 
  • What is Tin Can API? - You should learn about Tin Can API. I see a lot of potential for this API as a layer for the API space - http://apievangelist.com/2014/07/25/a-shared-distributed-experiencemetrics-layer-for-the-api-driven-application-stack/

A kinda sorta new take on API Aggregation:

API Definitions continues to be big news:

Interesting API Deployment news:

API Deprecation showed up as story this last week:

Lots of great API Design advice:

I"ll keep owning API Discovery stories:

Activity when it comes to API Events:

I'm work'n so hard in the API Industry:

Filing these two under API Integration:

Nordic APIs dominating the API Lifecycle stories:

You know how I know API Management has come of age? Shitload of stories:

A big part of my storytelling process is API Mocking: ;-)

Two stories to highlight when it comes to API Monetization:

Handful of API Monitoring items:

Keeping the infinite API News loop going (have you seen that movie Interstellar?):

API Performance is fast becoming a major theme:

A single API Reciprocity case study to note:

This Application item was interesting:

A look behind the curation, at the Etsy Architecture:

APIs making inroads into the world of Audio Visual:

Blockchain is interesting:

Two Business Intelligence data points:

The Business of APIs:

Cameras will continue to capture our attention in good and bad ways:

A handful of Careers options:

Two Case Studies:

Two separate Chat items:

City Government was a lot shorter this week:

AWS dominating the world of Cloud Computing, as usual:

As well as the world of Cloud Storage:

Couple of Commerce related stories:

Containerization conversations:

An interesting Copyright discussion:

Just including a digest from the County Government space:

A single CRM item:

Data is short and sweet this last week too:

The Data Center discussion is continuing to be interesting:

Database APIs are important:

Breaking out of just the world of DNS:

Drones, the good and the bad:

A couple of things on the Education front caught my attention:

Embeddable, mostly highlighting what's up at SalesForce:

I <3 the Enterprise:

I don't <3 Facial Recognition:

Just three Federal Government stories:

Single Gaming tidbit:

A single Government thing to comment on:

What is shaking at Hackathons:

More Healthcare momentum:

Your weekly dose of History:

Echo dominating the Home:

Slack Integration is just cool:

Look at that, only three things in Internet of Things, and I wrote one:

A container related Investment I am tracking on:

More for my own needs, in the area of Links:

Machine Learning is a leading topic, week over week:

Two Mapping things to share:

Messaging, blah blah:

Microservices observations:

A single Mobile story:

Only one New API that I thought was cool (grumpy this week):

Roundup from the world of Open Data:

Zoom in to Ukraine, and out to the UN level, when looking at Open Government:

Couple of interesting Partners related discussions:

Just three Payment stories:

Focusing on the Platform Development Kits (PDK):

APIs in Politics:

The Politics of APIs flares up in the Twitter ecosystem:

Handful of Privacy discussions:

Not really Rate Limits, but close enough:

Change is hard in the Real Estate industry:

Single Real-Time story to tell:

Always Reclaim Your Domain when you can:

Another look at Regulation:

Keep trying Retail:

Scalability talk at Etsy:

Love me some Scraping:

A couple of interesting SDK nuggets:

A single Search story:

Security talk from the week:

Sensors are everywhere!

Showcasing the Showcase stories:

Will they really be Smart Cities?

On the Smart Watches front:

It's all about the numbers when it comes to Social:

Tracking on Software Defined Networking now too:

I spend a good amount of time each week in Space:

Hope and despair when it comes to Surveillance:

Telecommunications blah blah:

Is Television still a thing? If so when will it stop?

Two very different Transparency reports:

Keeping tabs on University discussions:

Stories all about Versioning:

The Video space:

Three interesting Visualizations this last week:

Making sure these stories stand out as Voice discussions, not just home related:

Cause we all want our Internet to be Wearables, right?

A long-term Weather forecast API, eh?

A good Webinar to attend. You know how I know? Because I am in it:

That concludes my report on what I read last week across the API space. I'm still working on pulling together a summary e-newsletter version, allowing people to get an executive summary of what I thought was important from the week--I am hoping it will be available next week. I'm also going to auto-generate some visualizations, and summary counts for each week. I'd like to see a tag cloud, and overall counts to help me understand the scope of news I cover each week.

If you know of something I missed, feel free to email or tweet at me, and make sure I know about your company, and have a link to your blog RSS. If you don't tell the story and share, the world will never know it happened.

It is a good time to be tracking on the API space--lots going on!



On APIs and Microservices

I've been monitoring an emerging slice of the API space, dubbed "microservices", for some time now, and you've even heard me explore its use when describing my architectural approach to redesigning the core API stack for my internal systems. I'm slowly redesigning my internal API stack, keeping each API as small as possible throughout all aspects of operations, and deploying each endpoint in a single dockerized container that contains the OS, database, web server, and all server side API code. I'm also using Github, in conjunction with APIs.json, to assist me in my orchestration throughout every aspect of these APIs' lifecycle, from design to testing—all a new approach for me when managing my APIs.
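
To give a sense of what one of these self-contained endpoints looks like, here is a minimal sketch of a container definition--the Apache / PHP / SQLite stack and the paths are just my illustration of the pattern, not a prescription:

    # Everything a single endpoint needs, baked into one image
    FROM ubuntu:14.04

    # Web server, server side language, and embedded database support
    RUN apt-get update && apt-get install -y apache2 libapache2-mod-php5 php5-sqlite

    # The endpoint's API code and its data travel together in the container
    COPY api/ /var/www/html/api/
    COPY data/endpoint.sqlite /var/lib/api/endpoint.sqlite

    EXPOSE 80
    CMD ["apache2ctl", "-D", "FOREGROUND"]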

Is defining my APIs in this way—so that the definition is much more than just the actual API endpoints, and also includes the entire backend and lifecycle workflow—microservices? Fuck if I know. One thing I do know is that I've had enough conversations with folks who are doing microservices, resulting in me saying several times that the microservices definition is something very personal. The first 10 times I said it, it sounded very positive, but after experiencing multiple people telling me what I'm doing isn't microservices, I can tell the term will continue to be very personal—very much in the same way conversations around REST or hypermedia have gotten personal on forums and HN threads.

You know what this tells me about microservices? It is more about power, control, and influence, everything from enterprise architects stating their endorsement of the concept, down to the individual architects who love to make sure you know that the way you are doing it is WRONG! I don't think there is a single definition of "microservice", and much like REST, I think we'll hear plenty of “right” definitions, from the enterprise justifying what they are doing, and from those who are selling something to the enterprise. Is there anything wrong with this? Nope. It is just not my style, not my game, and the last couple of months have shown me the concept is not a fit for me.

I’m going to stick to the realm of API. It is a completely bullshit term that means everything, but you know what? Somehow it escaped the IT, architect, and vendor ownership that SOA, and now microservices, possess. For me, APIs have long been about more than just the tech, they are also about the business and politics of any platform implementation, so why can’t the term contain my architectural styles as well? From here forward I will be closely paying attention to microservices conversations, but you won’t hear me use the word, as I’m more comfortable in API land, and what it has come to mean to the wider public—I just don’t give a shit about what the enterprise adopts, or doesn't.

Microservices will reflect the power and control of the entities behind them, unlike APIs, which also possess that characteristic, but quickly shift the conversation to be more about the innovation and opportunity that developers bring to the table, and the end-users who put it all to use. Let’s make sure we do not get caught up in the architectural discussions behind the technical curtain, and that we remember why all of this API stuff is working. Let's not get caught up in all the value being just about our architectural styles, and re-live a classic IT tale, ignoring that the real value is what people actually do with our services.



My Minimum Viable API Footprint Definition

This is something I talk about often, but it has been a while since I’ve done a story dedicated to it, so I wanted to make sure and take a fresh look at what I’d consider to be a minimum viable footprint for any API—I don’t care if it is public or not. This definition has grown out of five years of monitoring the approach taken by leading API providers, and is also baked into what I’d consider to be a minimum viable APIs.json definition—which provides an important index for API operations.

What do I want to see when I visit a developer area? More importantly, what does your average developer, or API consumer need when they land on your API portal? Let’s look at the basics:

  • Portal - A clean, easily accessible, prominently placed portal landing page. This shouldn’t be buried within your help section; it should be located at developer.[yourdomain].com.
  • Description - As soon as developers land, they need a simple, concise explanation of what an API does. Actually describe what the API does, not just say that it provides programmatic access to your products and services.
  • Getting Started - Give everyone, even non-developers, a place to start, helping us understand what is needed to get going with API integration, from signing up for an account to finding support.
  • Documentation - Deliver simple, clean, and up to date documentation, preferably of the interactive kind, with a Swagger or API Blueprint definition behind it.
  • Authentication - Help developers understand authentication. There are a handful of common approaches, from BasicAuth and API keys to OAuth--provide a simple overview of how authentication is handled.
  • Self-Service Registration - How do I sign up? Give me a link to a self-service account signup, and make it as easy as possible for me to create my new account and go from idea to live integration—don’t make me wait for initial approval, that can come later.
  • Code - Provide consumers with code, whether samples, libraries, or full blown Software Development Kits (SDKs) and Platform Development Kits (PDKs). Make sure as many languages as possible are provided, not just the languages you use.
  • Direct Support - Give API consumers a way to reach you via email, a ticketing system, chat, or good ol' fashioned phone.
  • Self-Service Support - Provide self-service support options via FAQs, knowledge bases, forums, and other proven ways developers can find the answers they need, when they need them.
  • Communication - Set up the proper communication channels, like a blog and PR section, as well as a healthy social presence on Twitter, LinkedIn, Facebook, or other places your audience already exists.
  • Pricing - Even if an API is free, provide an overview of how the platform makes its money and generates value — enough to keep it up and running, so I know, as an API consumer, that I can depend on it. Let me know all pricing levels, and provide insight into other partner opportunities.
  • Rate Limits - Provide a clear overview of how the platform is rate limited; even if the limits exist only to protect service availability, let consumers know what to expect.
  • Roadmap - Give consumers a look at what is coming in the future, keeping it a simple forecast of the short and long term future of an API.
  • Change Log - Provide us consumers with a list of all changes that have been made to platform operations; don’t limit it to just API changes, and include significant roadmap milestones that have been reached.
  • Status - Offer a real-time status dashboard for the platform, with a historical view of platform status that consumers can use as a decision making tool, as well as a source for current platform status.
  • Github - Use Github for all aspects of your API platform operations, from hosting code, to providing support via issues, to actually hosting your entire developer portal on Github Pages.
  • Terms of Service - Provide easy to find, and easy to understand, terms of service for platform operations, helping API consumers understand what they are in for.
  • APIs.json - A machine readable index of any API driven platform, providing a single place to find not just the API endpoints, but also all of the essential building blocks of API operations listed above--see the sketch after this list.
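
As a sketch of how that index ties the rest of this list together, here is a stripped down APIs.json with properties pointing at a few of the building blocks above--the property type labels are the ones I tend to use, so treat them as illustrative rather than a formal vocabulary:

    {
      "name": "Example API",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Example API",
          "humanURL": "https://developer.example.com",
          "baseURL": "https://api.example.com",
          "properties": [
            { "type": "Documentation", "url": "https://developer.example.com/docs/" },
            { "type": "TermsOfService", "url": "https://developer.example.com/tos/" },
            { "type": "StatusPage", "url": "https://status.example.com" },
            { "type": "Pricing", "url": "https://developer.example.com/pricing/" },
            { "type": "Swagger", "url": "https://developer.example.com/swagger.json" }
          ]
        }
      ]
    }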

This is my shortlist of common building blocks that every API platform should have. Part of the reason I’m publishing this is to provide a fresh look at what I’d consider to be the minimum viable footprint, but I’m also working to get my own API portal for my new master API stack up to snuff, meeting my own criteria. Without a checklist, I forget what some of the essential building blocks are—you know, the cobbler's kids have the worst shoes and all.

After I’m done making sure my own API portal meets these criteria, something I can programmatically measure via the APIs.json file, I will provide a self-service evaluation tool that anyone can use to measure whether or not their own portal meets my minimum viable API footprint definition.
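
As a rough sketch of the kind of evaluation I have in mind, assuming a portal publishes an APIs.json file like the one above, a simple script could check which building block properties are present--the required types here are an illustrative subset, not a formal spec:

    import json
    import urllib.request

    # An illustrative subset of the building blocks listed above
    REQUIRED = {"Documentation", "TermsOfService", "StatusPage", "Pricing"}

    def footprint_report(apis_json_url):
        """Fetch an APIs.json index and report missing building blocks per API."""
        with urllib.request.urlopen(apis_json_url) as response:
            index = json.load(response)
        for api in index.get("apis", []):
            present = {prop.get("type") for prop in api.get("properties", [])}
            missing = REQUIRED - present
            status = "ok" if not missing else "missing: " + ", ".join(sorted(missing))
            print(api.get("name", "unnamed API") + ": " + status)

    footprint_report("https://developer.example.com/apis.json")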



How Not To Onboard With Your API: Fiber Locator API

Somewhere during my weekly monitoring I found the Fiber Locator API, and like I do with all APIs, especially the ones that ask me to “request access”, I signed up for the service. Understanding where telecommunications companies have laid fiber optic cable, for me, equals a potentially valuable API resource—sure, I would consider integrating these resources into my applications and systems.

I’m never a big fan of APIs that make you request access instead of offering self-service registration, but if the API has value, I jump through the hoops so I can better understand what the API does—which, according to the Fiber Locator site, is this:

FiberLocator offers an application programming interface that gives you access to our database of over 265+ carriers, over 272,000 carrier lit buildings locations and 3,000+ data centers. We created our API services so that client companies can augment their own systems and analyses with our unique data sets.

I filled out the form, providing my name, email, phone, and company name, then provided the additional details: I am the API Evangelist, and I’d like to write a story about the Fiber Locator API. Something I do a lot. Within 24 hours I received:

Then I got this:

To both of these emails I replied with the same additional details I included with my request for API access. I received a third solicitation, but felt it didn't add any value to the conversation. From what I can tell, the Fiber Locator API is simply a lead generation form that routes to sales people—I see no evidence of an API.

Maybe there is something really there; if there is, this is not how you on-board people. I may just be writing a story about the API, which you should probably encourage, but that is just me. If you really want developers to integrate fiber location data into their applications and internal systems, you should probably let us kick the tires, and see what the service is all about.

Modern APIs are not a lead generation tool, and API evangelism does not equal sales. If I cannot understand what your API platform offers, and sign up to get at least a taste of that value, I’m moving on. I’ll keep the Fiber Locator API in my database, but I will not be tuning into what they are up to, because by my definition, they don’t fit the model of a modern API platform.



Working Toward An API Definition Driven, SEO, and Section 508 Compliant API Documentation Interface

This is a topic I’ve had an increasing number of conversations with folks about over the last couple of months, and one that a friend of mine tweeted in response to today, resulting in this lovely rant. This is about two very separate problems whose solutions, I'd argue, significantly overlap. I’m talking about the need to make sure the valuable metadata, as well as the underlying resources made available via an API, are accessible to search engines, while also making sure it is all Section 508 compliant, providing critical access for people with disabilities.

In response to my recent post, "Why The New API Blueprint Sharing Button From Apiary Is So Important”, my friend said:

First, this is another reason why the API Blueprint format being more open is important; anyone can now come along and not just build an SEO friendly API documentation solution, they can get access to the Apiary.io users who desire such a solution. OK, that might seem bad for Apiary, but in situations that Apiary is not concerned with, this allows the community to step up and serve the long tail, and if it is something that concerns Apiary, it lights the fire under their ass to consider adding it to the road map (sorry Jakub & Z, I know I'm a pain ;-).

Second, it allows the community to step up and serve what I'd consider the critical long tail—like Section 508. In our rush to deliver the latest in technological innovations, we often forget about significant portions of our society who, well, aren’t like us. How many of the leading apps and platforms that you use are Section 508 compliant? I guarantee almost all API tooling is not—something we need to address now, and a topic where discussion is currently happening in federal government circles.

I am a big fan of Swagger UI and Apiary, and what they have done for API design, deployment, management, discovery, and evangelism, but we need to keep working on the next generation of UI and UX that is easily indexed, and works well with assistive technologies. OK, at this point the hypermedia folks want to kick my ass--bring it! ;-) Kidding. What I am talking about should happen in conjunction with well designed, hypermedia fueled clients, not instead of them--both camps should be making sure all clients are SEO and 508 friendly.

I’ll close with a note on the opportunity here. There is huge potential to develop an open source API UI component that can use Swagger and API Blueprint as the core definition, implementing a more SEO and 508 compatible experience, something akin to Slate from TripIt. I’m thinking, rather than a pull technology like Swagger UI, with its JS interface pulled from a Swagger definition (and I’m guessing Apiary does a similar pull from API Blueprint docs?), more of an HTML and CSS push, or publish, of functional, static API documentation that uses JavaScript and APIs to operate—just some raw thoughts on how to take the next steps; it's now up to you.
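
As a very rough sketch of that push approach, a script could read a Swagger definition and publish plain, semantic HTML, giving search engines and assistive technologies real content to work with--the file names and markup here are just my illustration:

    import json
    from html import escape

    def swagger_to_static_html(swagger_path, out_path):
        """Render each Swagger path/verb pair as a semantic heading plus summary."""
        with open(swagger_path) as f:
            spec = json.load(f)
        title = escape(spec.get("info", {}).get("title", "API"))
        parts = ["<!DOCTYPE html>", "<html lang=\"en\">",
                 "<head><title>" + title + "</title></head>",
                 "<body>", "<h1>" + title + "</h1>"]
        for path, verbs in spec.get("paths", {}).items():
            for verb, detail in verbs.items():
                # One heading per operation keeps the docs indexable, and
                # navigable by screen readers, with no JavaScript required
                parts.append("<h2>" + verb.upper() + " " + escape(path) + "</h2>")
                parts.append("<p>" + escape(detail.get("summary", "")) + "</p>")
        parts.append("</body></html>")
        with open(out_path, "w") as f:
            f.write("\n".join(parts))

    swagger_to_static_html("swagger.json", "docs/index.html")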

I guarantee that if you made an open source set of tools to support this, you would get the attention of government at all levels, around the world, something that would open up other significant opportunities.



Opportunities In The Long Tail Of API Deployment For Non-Developers, Using Kimono Labs

I was just doing one of my regular check-ins with Pratap Ranade (@pratapranade), the CEO of Kimono. We try to make time to catch up on what each other is up to, and find where we can work together to incite API evolution. Pratap sees the space like I do: APIs are more than tech, possessing huge potential when it comes to empowering folks to make change within their companies, organizations, institutions, government agencies, and the wider industries and countries they exist in.

Unfortunately I can’t go into too much detail about the projects we are working on until they are ready, but I just spent over an hour talking through some pretty interesting use cases that have occurred via Kimono Labs, in which Kimono users are scraping data and content from existing websites, and crafting some pretty interesting APIs. I’m not just talking about the fun things like March Madness or the World Cup going on over at Kimono, I’m talking about influencing elections, addressing disaster recovery issues, and providing timely data that can influence how financial markets are working (or not).

The best part about these stories is that the work is being done, in many cases, by people who don't have traditional programming skills—they are more hackers who have identified a problem, and used Kimono to scrape the data and generate simple APIs that provide a new way of delivering API-driven solutions to the problem being targeted. Pratap said they are quickly becoming a big data company, as well as a structured data API solution, because of the number of folks stepping up to address the long tail of data.

I just wanted to provide a taste of what we are working on. I’m always left pumped up about the potential of APIs after hearing what Kimono is up to. If you want a taste of some of the stories I'm talking about, check out the Kimono blog.