{"API Evangelist"}

The Politics, Marketing, And Fear of API Security

I cringe when I think about the number of mobile applications out there that people depend on in their personal and professional lives, and that are using insecure APIs, allowing personally identifiable information (PII) to flow across the open Internet. I’m a big advocate for helping mobile developers understand the important role that a public API can play in this situation, but another side of the discussion that scares me is the fear, uncertainty, and doubt (FUD) that emerges as part of this conversation.

I’ve covered recent security and privacy incidents involving mobile usage of APIs at Snapchat and Moonpig, and was processing the API news this morning for inclusion on API.Report when I came across this press release: Wandera Finds Official NFL App to be Leaking Users' Personal Data Just Days Ahead of The Big Game. I can get behind the NFL securing their users' personal data, but I have a hard time jumping on the bandwagon when this is used as PR for a mobile security service in the lead-up to the Super Bowl.

Understanding how everyday mobile apps are using APIs to communicate in the cloud is important. Sharing stories about how to map out this very public surface area, and secure it properly, while giving end-users more awareness and control over their PII, is critical. Doing this in the name of marketing, PR, or to ride a wave of fear and hype is not OK. Yet I fear this will become commonplace in the coming months and years, as security breaches, cybersecurity, and privacy are front and center in the media.

APIs Role In Data Security And Privacy

As we get close to wrapping up the first month of 2015, it is clear that Internet security and privacy will continue to be front and center this year. As technology continues to play a central role in our personal and business lives--security, transparency, and respect for privacy are only growing more critical.

I know I'm biased in thinking that APIs will continue to take a central role in this conversation, but I feel it is true. Many of the existing security conversations around platforms like Snapchat and MoonPig are directly related to APIs, while the security exposure at companies like Sony, JP Morgan Chase, and beyond could easily be reduced with a sensible API strategy.

Companies are increasingly operating online, but do not act like any of this information lives in an online environment. Adopting an API approach to defining company resources helps map out this surface area, acknowledges it is available over the Internet, and works to define, secure, and monitor this surface in a healthier way.

Mobile users need access to their data, and by applying an API-centric approach, providing account management, data portability, and access and identity controls using oAuth, you can increase transparency while also strengthening overall security. If your company's operations center around customer and end-user data transactions, you should be making all data points available via an API, accompanied by a well-oiled oAuth layer that helps end-users manage their resources, playing a significant role in their own privacy and security.

I'm not delusional in thinking that APIs provide a perfect solution for all our security and privacy woes; they don't. But they do set the tone for a healthier conversation about how companies are doing business on the open Internet, and how we can better secure the online web, mobile, and device-based applications we are increasingly depending on in this new world we have created.

My Experiences Generating API Server or Client Code Using Swagger

Swagger is now Open API Definition Format (OADF) -- READ MORE

When you start talking about generating server or client side code for APIs using machine readable API definition formats like Swagger or API Blueprint, many technologists feel compelled to let you know that at some point you will hit a wall. There is only so far you can go when using your API definition as a guide for generating server-side or client-side code, but in my experience you can definitely save significant time and energy by auto-generating code using Swagger definitions.

I just finished re-designing the 15 APIs that support the core of API Evangelist, and to help with the work I wrote four separate code generation tools:

  • PHP Server - Generate a Slim PHP framework for my API, based upon a Swagger definition.
  • PHP Client - Assemble a custom PHP client of my design, using a Swagger definition as a guide.
  • JavaScript Client - Assemble a custom JavaScript client of my design, using a Swagger definition as a guide.
  • MySQL Database - Generate a MySQL script based upon the data models available in a Swagger definition.

Using Swagger, I can get 90-100% of the way there for most of the common portions of the APIs I design. When writing a simple CRUD API, like notes or links, I can auto-generate the PHP server, a JS client, and the underlying MySQL table structure, which in the end runs perfectly with no changes.
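
To give a sense of what the MySQL piece involves, here is a minimal sketch of the approach (not my actual generator), assuming a Swagger 2.0 style definition where the data models live under definitions, with simple, flat properties:

    // swagger-to-mysql.js: a minimal sketch, not my actual generator.
    // It assumes a Swagger 2.0 style file where data models live under
    // "definitions", and maps JSON types to rough MySQL column types.
    var fs = require('fs');

    var swagger = JSON.parse(fs.readFileSync('swagger.json', 'utf8'));
    var typeMap = {
      string: 'VARCHAR(255)',
      integer: 'INT',
      number: 'DECIMAL(10,2)',
      boolean: 'TINYINT(1)'
    };

    Object.keys(swagger.definitions || {}).forEach(function (model) {
      var props = swagger.definitions[model].properties || {};
      var columns = Object.keys(props).map(function (name) {
        return '  `' + name + '` ' + (typeMap[props[name].type] || 'TEXT');
      });
      console.log('CREATE TABLE `' + model.toLowerCase() + '` (\n' + columns.join(',\n') + '\n);');
    });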

Once I need more custom functionality, and have more unique API calls to make, I have to get my hands dirty and begin manually working in the code. However, auto-generation sure gets me a long way down the road, saving me time on the really mundane, heavy lifting of creating the skeleton code structures I need to get up and running with any new API.

I’m also exploring using APIs.json, complete with Swagger references, and Docker image references, to further bridge this gap. In my opinion, a Swagger definition for any API can act as a fingerprint for which interfaces a Docker image supports. I will write about this more in the future, as I produce better examples, but I'm finding that using APIs.json to bind a Swagger definition with one or many Docker images opens up a whole new view of how you can automate API deployment, management, and integration.
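
To make this concrete, here is a hypothetical APIs.json fragment showing the kind of binding I mean. The X-Docker-Image property is my own illustration of an x-property, not an official part of the APIs.json spec, and the URLs are placeholders:

    {
      "name": "Notes",
      "description": "A simple notes micro-service for API Evangelist",
      "apis": [
        {
          "name": "Notes API",
          "baseURL": "http://notes.example.com/api/",
          "properties": [
            { "type": "Swagger", "url": "http://notes.example.com/swagger.json" },
            { "type": "X-Docker-Image", "url": "https://registry.example.com/kinlane/notes-api" }
          ]
        }
      ]
    }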

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Finished Processing 973 API Related Patents From 2013

I’m slowly processing XML files of patents from the US Patent Office. You can read the origin of my journey here, but as of today, I finished processing 50 files of patent applications for 2013, adding 973 API related patents to the queue. I’m still making sense of the different types of patents, and whether they are specifically for APIs, more API adjacent, or just mention API as one component in the patent definition.

I figure I’ll go back to about 1990-1995, but not 100% sure at this point. So far I’ve identified 105 patent applications from 2015, 322 from 2014, and now 973 from 2013. I’m still reading the title and abstract of each patent application as they come in, building my understanding of what has been submitted, and the possible motivations behind each application.

Here are the current numbers that I have so far.

There are 6 patents so far that mention application programming interface or API directly in the title of the patent:

  • 12953023 - Providing customized visualization of application binary interface/application programming interface-related information - Red Hat, Inc.
  • 11559545 - 3D graphics API extension for a shared exponent image format - Nvidia Corporation
  • 12832025 - Application programming interface, system, and method for collaborative online applications - Apple Inc.
  • 12974934 - Method and apparatus for reverse patching of application programming interface calls in a sandbox environment - Adobe Systems Incorporated
  • 13178356 - Application programming interface for identifying, downloading and installing applicable software updates - Microsoft Corporation
  • 12716022 - Secure expandable advertisements using an API and cross-domain communications - eBay Inc.

There are 54 patents that mention application programming interface or API directly in the abstract of the patent:

  • 14477614 - System and method for connecting, configuring and testing wireless devices and applications - Jasper Technologies, Inc.
  • 13974635 - Information processing device, information processing method, and program - Sony Corporation
  • 13564610 - System and method for detecting errors in audio data - NVIDIA Corporation
  • 13744892 - Configuring and controlling wagering game compatibility - WMS Gaming, Inc.
  • 13858506 - Method and system for providing transparent access to hardware graphic layers - QNX Software Systems Limited
  • 11110293 - Directory features in a distributed telephony system - ShoreTel, Inc.
  • 12875132 - Lighting control system and method - Lumetric Lighting, Inc.
  • 13283872 - Unified transportation payment system - LG CNS Co., Ltd.
  • 13244694 - Apparatuses, methods and systems for a distributed object renderer - Zynga Inc.
  • 13743078 - System and method for processing telephony sessions - Twilio, Inc.
  • 13793712 - Cross-promotion API - Zynga Inc.
  • 13654798 - Declarative interface for developing test cases for graphics programs - Google Inc.
  • 12267828 - Method and apparatus for monitoring runtime of persistence applications - SAP AG
  • 12559750 - Graphical display of management data obtained from an extensible management server - American Megatrends, Inc.
  • 13146368 - Configuring and controlling wagering game compatibility - WMS Gaming, Inc.
  • 12957479 - User authentication system and method for encryption and decryption - Empire IP LLC
  • 12570012 - Generic user interface command architecture - Microsoft Corporation
  • 13072462 - Adaptive termination - Triune Systems, LLC
  • 12605782 - Digital broadcasting system and method of processing data in digital broadcasting system - LG Electronics Inc.
  • 13244187 - Backup systems and methods for a virtual computing environment - Vizioncore, Inc.
  • 12970873 - Shared resource discovery, configuration, and consumption for networked solutions - SAP AG
  • 12425992 - Command portal for securely communicating and executing non-standard storage subsystem commands - Siliconsystems, Inc.
  • 12535139 - Methods for determining battery statistics using a system-wide daemon - Red Hat, Inc.
  • 12940986 - Integrated repository of structured and unstructured data - Apple Inc.
  • 13354233 - Browsing or searching user interfaces and other aspects - Apple Inc.
  • 12564288 - Detecting security vulnerabilities relating to cryptographically-sensitive information carriers when testing computer software - International Business Machines Corporation
  • 12885706 - Contact picker interface - Microsoft Corporation
  • 12357979 - Computer apparatus and method for non-intrusive inspection of program behavior - Trend Micro Incorporated
  • 12512121 - Method and system for managing graphics objects in a graphics display system - Microsoft Corporation
  • 11559545 - 3D graphics API extension for a shared exponent image format - Nvidia Corporation
  • 11828957 - Apparatus, system, and method for hiding advanced XML schema properties in EMF objects - International Business Machines Corporation
  • 12829714 - Storage manager for virtual machines with virtual storage - International Business Machines Corporation
  • 13073146 - Unified onscreen advertisement system for CE devices - Sony Corporation
  • 12209267 - Method and apparatus for measuring the end-to-end performance and capacity of complex network service - AT&T Intellectual Property I, LP
  • 12832025 - Application programming interface, system, and method for collaborative online applications - Apple Inc.
  • 11633851 - Method and apparatus for persistent object tool - SAP AG
  • 12889083 - Window server event taps - Apple Inc.
  • 12974934 - Method and apparatus for reverse patching of application programming interface calls in a sandbox environment - Adobe Systems Incorporated
  • 13248633 - Ecommerce marketplace integration techniques - Microsoft Corporation
  • 11899197 - Energy distribution and marketing backoffice system and method - Ambit Holdings, L.L.C.
  • 12941026 - Extended database search - Apple Inc.
  • 12209996 - Custom search index data security - Google Inc.
  • 13481814 - Apparatus and method for circuit design - Agnisys, Inc.
  • 12536363 - Seamless user navigation between high-definition movie and video game in digital medium - Warner Bros. Entertainment Inc.
  • 12271690 - Programming APIS for an extensible avatar system - Microsoft Corporation
  • 13178356 - Application programming interface for identifying, downloading and installing applicable software updates - Microsoft Corporation
  • 12716022 - Secure expandable advertisements using an API and cross-domain communications - eBay Inc.
  • 12838128 - Method and system for improving performance of a manufacturing execution system - Siemens Aktiengesellschaft
  • 12459005 - Method and system for providing internet services - Alibaba Group Holding Limited
  • 12858814 - Interaction management - 8x8, Inc.
  • 11684351 - Administrator level access to backend stores - Microsoft Corporation
  • 13048810 - Preventing malware from abusing application data - Symantec Corporation
  • 13271978 - Interfaces for high availability systems and log shipping - Microsoft Corporation
  • 12276157 - Unified storage for configuring multiple networking technologies - Microsoft Corporation

The other 1231 mention application programming interface or API in the description of the patent. It will take time to understand the scope of the role APIs play when APIs are just a mention in the description, whereas patents that directly mention them in the title and abstract carry a little more weight (at least for now).

Another interesting layer so far is the companies behind the last two years of API related patent activity. Here are the top 25 from the list so far.

  • Microsoft Corporation with 118 patents
  • International Business Machines Corporation with 75 patents
  • Google Inc. with 52 patents
  • Apple Inc. with 41 patents
  • Amazon Technologies, Inc. with 26 patents
  • QUALCOMM Incorporated with 23 patents
  • SAP AG with 19 patents
  • Research In Motion Limited with 18 patents
  • Citrix Systems, Inc. with 17 patents
  • Hewlett-Packard Development Company, L.P. with 17 patents
  • BlackBerry Limited with 16 patents
  • Adobe Systems Incorporated with 16 patents
  • Bally Gaming, Inc. with 16 patents
  • Oracle International Corporation with 15 patents
  • Zynga Inc. with 15 patents
  • Cisco Technology, Inc. with 13 patents
  • Symantec Corporation with 12 patents
  • Red Hat, Inc. with 12 patents
  • AT&T Intellectual Property I, L.P. with 11 patents
  • Fujitsu Limited with 10 patents
  • EMC Corporation with 10 patents
  • Intel Corporation with 9 patents
  • CommVault Systems, Inc. with 9 patents
  • Canon Kabushiki Kaisha with 9 patents
  • Enpulz, L.L.C. with 9 patents

I’m still learning, understanding, and making sense of these API related patents I’m mining. I will work to read through, and separate out, the hardware related APIs, but only set them slightly off to the side, as I still think hardware APIs are extremely relevant with the latest IoT evolution. During the process, I’m sure I’ll come up with other buckets to put APIs into, but ultimately I’m looking to understand the role that APIs are playing in the patent game.

I suspect that downloading the last 20+ years of patents, and processing them looking for the phrase “application programming interface” or the acronym API, is just the beginning. As I read through the patents, and learn about the ideas, people, and companies behind them, I’m sure my understanding will expand greatly. I’ll check in with more data after each import of files, and the discovery of any new insights, and as always, you can find my patent API research repository on Github.

Doing The Research In Preparation For My Patent On A Patent API

Swagger is now Open API Definition Format (OADF) -- READ MORE

David Berlind’s (@dberlind) story from this last Friday, Amid The API Copyright Controversy, An API Patent Claim Surfaces, planted some ideas in my head around APIs and patents—something that, once it takes hold, is hopeless for me to resist. 48 hours later, here are my initial thoughts.

Before I begin, let me state that patents and APIs, much like copyright and APIs, are not concepts I subscribe to, but because they are being wielded in the API space, I am forced to enter the conversation.

I’ll let you read David’s piece on Pokitdok’s claim regarding their “patent-pending API” for yourself—something I find very troubling (about Pokitdok). My post is about my journey after I read David's story, resulting in yet another research project into the politics of patents involving APIs. After reading David's story on Saturday AM (yeah, I need a life), I started “googling” the concept of API patents, just to educate myself—my eternal mission at API Evangelist.

I was very surprised by the number of results that come up when you search Google Patents for the term “application programming interface”—about 552,000 results (0.32 seconds), to be precise. What does this mean? I start reading, and 48 hours later I’m still reading. Sure, there are a number of older patents for hardware APIs, and even many APIs that were established during the web service heyday of the late 90s and early 2000s, but there is much more than that. I was surprised to see the number of patents with APIs as the main focus from API sector leaders like AWS, Twilio, Apigee, and Google.

I have to start with two of the most ironic API patents I have found so far during my search:

  • Google (Through Unisys Patent Portfolio Acquisition) - Generation of Java language application programming interface for an object-oriented data store - An embodiment of the present invention is a technique to interface to a repository. A connection between a client and a repository database is established. The repository database has a repository application programming interface (API). The repository database contains objects related to a project management process. The repository API is communicated with to perform an operation based on parameters passed from the client via the connection. A return value associated with the operation is returned to the client.
  • CopyRightNow - CopyRightNow Application Programming Interface - Disclosed herein is a method and system for providing copyright protection for creative works from within a running content-creating software application by assembling and submitting a copyright application on the creative work to the United States Copyright Office through the use of a portable application programming interface, which may be utilized by third-party content-creating applications. Additionally disclosed is a method and system of registering a creative work in order to memorialize the creation of the creative work from within a running content creating application through the use of a portable application programming interface, which may be utilized by third-party content-creating applications. Additionally disclosed is a method and system for accessing and sharing copyright-protected creative works through a web accessible software application.

I spent most of the day Saturday looking through API patents, trying to separate the hardware from the software, and the API adjacent from the specific API patents. I began my journey at Google Patent Search, eventually found myself using the Google Patent Search API (now deprecated) until I hit the wall, and ended up at the original data source, processing XML files. I’m only now getting to the year 2013, after 24 hours of making sense of, writing a processing script for, and downloading patent XML files—so far I have 428 patents where “application programming interface” shows up in the title, abstract, description, or anywhere else in the content of a patent application.

You can view the entire patent listing here, and get the JSON file driving the listing here--as I process each year, going back to 1990, I will publish the resulting JSON.

My head is swimming. Once again I find myself neck deep in a concept I fundamentally disagree with, in hopes of understanding where the battle line exists within the API space, and educating myself enough that I can articulate a line on this battlefield, in a war I refuse to participate in, yet find myself sucked into.

From what I can see, API related patents are not something that will happen in the future; it is already happening, and has been for quite some time. Sure, there are many things that need separating when evaluating the 550K API related patents, but after 48 hours of research, I have to say that APIs are being patented at an increasing rate, and it is something we just don't talk about publicly.

As I read through many of the titles and abstracts of the patent applications within each XML download I’ve been processing, I feel like I’m reading the antithesis of API Evangelist. On my blog I work every day to better understand how APIs are being used across each business sector, while looking to educate everyone about what is possible, sharing the best practices I discover along the way—when I read through the API patent abstracts, I see some very educated, and other not-so-educated, bets on how APIs are being used, and where the future might be going, hoping to lock in these ideas, and extract value at some point in time.

Don’t get me wrong, I understand that patents are a necessary evil in the world of big business and in startup-land, but it doesn’t change my contempt for them. I’m sure many of my API heroes like AWS and Twilio are just submitting patents in a defensive stance, but yet again it doesn’t reduce my concern. To channel my worry around patents being applied in the world of APIs, I’m kicking off deeper research, and monitoring, in this dark, un-discussed alley of the API economy.

To help me with my research I downloaded all of the 2014 patent submissions, pulled out all the API related ones, and published them as a patent API. To keep up with the amount of patent absurdity that I have read over this beautiful weekend, I have to state my own intent to submit a patent on providing patent APIs. I will take my patent API design, and ensure that if you ever intend to organize your patents, and make them accessible via APIs, you have to pay the piper—cause you know, this is how this shit gets done!!

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Preparing For My Talk At API Days In Sydney With Lots of Docker, Swagger, and APIs.json Work

Swagger is now Open API Definition Format (OADF) -- READ MORE

I’m spending a lot of time lately playing around with different approaches to deploying APIs in Docker containers. Part of this is because it is the latest trend in API deployment and management, but it is also because I’m preparing for my talk at API Days in Sydney, Australia next month.

I am always working to keep my keynotes in sync with not just my own work, but in alignment with where the wider conversation is going in the API space. I’m building off my talk at Defrag last November, which was titled, "containers will do for APIs what APIs have done for companies".

The title of my talk at API Days in Sydney is, "The Programmable World With APIs And Containers". While some of my talk will be inspirational, trying to understand where we are headed with APIs and containers, much of it will also be rooted in my own work to run my own business world using APIs and Docker, and using APIs.json and Swagger to help me orchestrate my resources.

I apologize for the gaps in my normal writing, and the heavy focus on Docker lately, but it is where my mind is at, from an operational standpoint, in preparation for the conversation in Australia, and from what I’m seeing, it is in sync with the wider API and cloud computing conversation.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

As API Space Expands, So Do The Sources Of Knowledge: New YouTube and SoundCloud Channels

The API space is continuing its rapid expansion, and along with the number of API design, deployment, management, and integration providers, and the number of API conferences, there are some new sources of discussion around APIs--one as a YouTube Channel, and the other as a SoundCloud podcast.

  • API Workshop - We'll discuss topics around API Design, including: sustainability concepts: how to design APIs that last, new ideas in API design, voices and posts from the API blog world.
  • APIsUncensored - The official home of the APIs Uncensored podcast, a monthly series where Ole Lensmar and Lorinda Brandon blather about all things APIs with experts in the industry.

API Workshop is a production of J(a)son Harmon (@jharmn), and APIsUncensored is from the smart folks over at SmartBear (when describing them I always feel tricked into saying they are smart). I was honored to have mentions in both of them, and you can read my response to API Workshop here, and to APIsUncensored over here.

In my opinion, it is a healthy sign for the space to see more video and audio outlets emerge—the space needs as many open voices educating the masses as it can get. Maybe soon we'll need an aggregator of all the sources of great information! (get to work on that please)

I posted a video in response to my reference in the API Workshop episode, in addition to my blog post, and I'm working on something similar for APIsUncensored, continuing to expand my presence on these important channels. Personally I'm a big podcast listener, more than a YouTuber, but everyone has a different set of frequencies they tune into, and I think it is important to maximize the reach.

Now if only we could get the next episode of Traffic & Weather! Wink, Wink!

A Conversation About My Subway Map API On The APIsUncensored Podcast

There is a new podcast on SoundCloud called APIsUncensored, from the folks over at SmartBear. I was honored to have a mention in the first episode, where they brought up a project I did a couple of months back, which I called the Subway Map API. I published a full story on what I was doing, and launched a supporting Github repo and API, but the work was very much a weekend side project, and something that will need a lot more love before it goes anywhere.

Ole Lensmar (@olensmar) admitted that when he introduced the project, he didn’t really get it, but ultimately in their segment, they accomplished what I was looking to do—stimulate conversation. The Subway Map API isn’t about the ideas I plotted on the map for API Evangelist, it is about the format for grouping and mapping out ideas, then evolving them through conversation. Don’t get hung up on my groupings of API design, deployment, management, etc., because my mission is to help introduce people to the world of providing and consuming APIs.

The goal of the project is to provide a platform where anyone can step up and frame a conversation about the lifecycle that means the most to them, their company, their products, and their industry. When I first published the project, I quickly sent it out to two groups:

  • 3Scale - Life-cycle to 3Scale is about helping people manage the APIs they’ve deployed, and I will be helping create a v1 map, based upon the great work that Manfred has done on this subject.
  • SmartBear - Life-cycle to SmartBear is about helping people monitor, test, and understand the APIs they possess, and if you guys have any stories on your blog to help me see this vision, I can help translate them into a map.

Don’t get hung up on the maps I created, and don’t even get hung up on it being a subway map. As Ole mentioned, you can add bus routes, train routes, etc. Via the API you can control the color, shapes, direction, connectors, and labels for every aspect of the map. If you want to draw a straight line with 10 dots in a row, go for it. If you want to draw a big circle with 10 stops along the way, going all the way back to the beginning, go for it.

The Subway Map API project was a fun thing I did over the weekend, and was looking to jump-start the conversation, something I think I did--eventually I’d love for someone to build an interface for facilitating this conversation. John Musser nailed it when he articulated the overall objective, which is to create the most meaningful visual possible, to help someone understand a lifecycle related to the world of APIs. Think of John’s series of hockey stick API growth charts, or his API business model diagrams; these have become ubiquitous in API discussions, and if you create the next generation one for API design, deployment, management, integration, or other, you win!

I'll work on a video response to go with this story. I’m neck deep in API-micro-service-docker-land, and don’t have the bandwidth to pull my head up, but I love the conversation!

Simple, Intuitive API Backends With HTTPHUB

I come across a lot of API related companies during my monitoring of the space, which I queue up, and as I have time, I work to explore and understand what they do. One company that I’ve had open in a tab for the last week is HTTPHUB.

HTTPHUB is very interesting. It starts by giving you a root namespace, like https://kinlane.httphub.com, and from there I can add on any resources I want, making each either public or private—then I can POST any JSON to the resource in the body. So I could create https://kinlane.httphub.com/notes/, and craft a simple form + script to post any data I submit to my HTTPHUB resource.
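
As a rough sketch, the script half of that form + script combo could be as simple as this, with a made-up payload, and authentication details depending on whether the resource is public or private:

    // sketch of posting a note to my HTTPHUB resource from the browser;
    // the payload is made up, and auth depends on the resource settings
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://kinlane.httphub.com/notes/');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      console.log(xhr.status, xhr.responseText);
    };
    xhr.send(JSON.stringify({
      title: 'My first note',
      body: 'Posted via a simple form + script.'
    }));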

I think that HTTPHUB captures the simplicity of APIs. Obviously the system will have some limitations, but ultimately it makes on-boarding with the concept of APIs, and getting up and running with an API backend, as frictionless as possible. HTTPHUB also gives you logging and user management for your namespace, adding some other essential elements you will need for your API backend.

Here are some of the features of an HTTPHUB account:

  • Unlimited requests (rate limited)
  • 3GB of storage space
  • 300,000 resources
  • 3,000 namespace users
  • 3MB max POST size
  • Signup and quota management for namespace users
  • Authenticate with credentials or IP address
  • Register up to 5 different accounts using the same email address

HTTPHUB also provides a pretty slick demo feature, using demo test data, in conjunction with Postman. They provide you with a link to the Postman Chrome app, and a link to a collection file that you can import. This gives you a listing of all of the base endpoints for HTTPHUB, which you can then use to expand, and manage your HTTPHUB API backend--no programming necessary.

HTTPHUB is most interesting to me because it's simple, intuitive, and potentially something that lets you self-define exactly the backend resources you need, as you need them. Imagine if applications could grow their own backends, allowing end-users to add to their world, changing how they store contacts or take notes—using a backend that flexes, grows, and fits end-users' needs like a glove.

Translating Postman Collections Into APIs.json Collections And Back Again

Swagger is now Open API Definition Format (OADF) -- READ MORE

I've been a Postman user for a while, using it as a tool for quickly making API calls against the APIs I use most, and for exploring the new APIs I discover daily. As I use Postman, I can't help but think that the concept of assembling collections of API calls using Postman is in sync with part of our vision for APIs.json--we just need a common way to communicate.

APIs.json is not just for defining the API operations that exist within a specific domain, which is the most common approach, it is also about assembling collections of multiple APIs, for a specific purpose. In my mind, the motivations for assembling Postman collections, and APIs.json collections are definitely in alignment.

Similar to providing resources for translating between popular API definition formats like Swagger and API Blueprint, I want to make sure and translate between emerging API discovery and collection formats. While I’d love for APIs.json to become the default standard for API discovery, I’m happy to embrace other formats, designed for other API tools—as long as they are striving to be as open as they can.
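
As a first pass at what such a translation might look like, here is a sketch mapping a v1 style Postman collection export into an APIs.json collection. The field mapping is my own rough alignment, not an agreed-upon standard, and the X-Postman-Method property is improvised for data APIs.json has no slot for:

    // sketch: translate a v1 style Postman collection export into an
    // APIs.json collection; the mapping is my own rough first pass
    function postmanToApisJson(collection) {
      return {
        name: collection.name,
        description: 'Translated from a Postman collection',
        apis: (collection.requests || []).map(function (request) {
          return {
            name: request.name,
            baseURL: request.url,
            properties: [
              { type: 'X-Postman-Method', value: request.method }
            ]
          };
        })
      };
    }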

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

This Is How You On-Board With An API

I sign up for a lot of APIs, and when I encounter a frictionless on-boarding process, I feel the need to showcase it, and help everyone understand how important it is to make the process as easy as possible. I'm still amazed at how many new APIs make this really, really hard. I will write a separate story after this about how not to on-board with an API, but for now let's take a look at a kick-ass example.

This example comes from the National Institutes of Health (NIH) 3D Print Exchange API. When it comes to signing up for API access, the only thing that could make this easier is providing an API for the process, but right from the home page you can sign up for an API key.

Once I provide my first name, last name, email, and optionally some details on how I will use the APIs, I am given my API key. I don’t have to click again to find my API key, and I’m given an example API call, complete with my API key. 

My first click gives me the instant gratification I am looking for via a simple API call, resulting in an example JSON response. I do not have to work to find my key or figure out how to make my first API call, it is all given to me immediately.

I'm up and running with the NIH 3D Print Exchange API in two clicks. I don't have to fill out a long form, wait for access, work to find my keys, or fumble with how to make my first API call. To reinforce this, I look in my email inbox, and there is an email providing my API keys again, along with the link to make my first API call.

Like I said above, the only thing that would make this easier is for all of it to be a series of API calls, which would make the on-boarding process programmatic. Eventually I could see this being essential, but for now I'm happy to see them not only standardizing API on-boarding across federal agencies, but making it as dead simple as you possibly can.

I'm a big fan of frictionless API on-boarding, and I applaud NIH and 18F for their work on the NIH 3D Print Exchange API. I will keep an eye on future releases, for more stories on healthy patterns I'm seeing in API deployment in the federal government.

Update: I'm aware I left my keys in these pics. It was easier to reset them than to blur them out, and I think it helps with the point. ;-)

The Next Steps For The Recreation Information Database (RIDB) API

I referenced the Recreation Information Database (RIDB) in my story late last year, when I was asking for your help to make sure the Department of Agriculture leads with APIs in their parks and recreation RFP. I'm not exactly sure where it fits in with the RFP, because the RIDB spans multiple agencies.

Here is the description from the RIDB site:

RIDB is a part of the Recreation One Stop (Rec1Stop) project, initiated as a result of a government modernization study conducted in 2004. Rec1Stop provides a user-friendly, web-based resource to citizens, offering a single point of access to information about recreational opportunities nationwide. The web site represents an authoritative source of information and services for millions of visitors to federal lands, historic sites, museums, and other attractions/resources.

When I wrote the post last October, I referenced the PDF of the REST API Requirements for the US Forest Service Recreation Information Database (RIDB), but this week I got an update, complete with fresh links to a preview of a pretty well designed API, with documentation developed using Slate.

I haven’t actually hacked on the endpoints, but I took a stroll through the docs, and my first impression is that it is well designed and robust, including resources for organizations involved with the RIDB, recreational areas, facilities, campsites, permit entrances, tours, activities, events, media, and links. The RIDB documentation also includes details on errors, pagination, and versioning, plus a data dictionary, and the on-boarding was frictionless when looking to get a key.

Everything is in dev mode with the RIDB API, and the team is looking for feedback. I’m not entirely sure they wanted me to publish a story on API Evangelist, but I figure the more people involved the better, as I’m not sure when I’ll get around to actually hacking on the API. I’m happy to see such quick iteration towards the next generation of the RIDB API, and it makes me hopeful to be covering so many new API efforts out of the federal government in a single day.

REST API Design: Bridging What We Have, To The Future, By Organizing The JSON Junk Drawer

Swagger is now Open API Definition Format (OADF) -- READ MORE

API storyteller J(a)son Harmon (@jharmn) has a new YouTube channel up called API Workshop. He's going to be publishing regular API design workshop episodes, with the latest one titled REST API Design: Avoid future proofing with the JSON junk drawer. J(a)son provides a nice overview of how you should be structuring the JSON for your API, focusing on the usage of key / value stores. Ironically, he uses APIs.json as an example of why you SHOULD NOT use custom key / values within your JSON. What is ironic about this is that he makes the case for APIs.json properties, giving me a great starting point for helping folks better understand APIs.json, and why properties are key to its evolution and flexibility.

The process J(a)son outlines in the portion of the segment that refers to APIs.json describes the lifecycle of an APIs.json property, as it moves towards becoming more of a "first class property". There are three phases of an APIs.json property (I've included a simple example after the list):

  1. X-Property - A user defined property, allowing anyone to craft exactly the properties that matter to them
  2. Property - An official machine readable APIs.json property element, acknowledging its wide usage, and its potential as a common blueprint
  3. "First Class Property or Collection" - Baking a property into the specification as a default property of the API collection, or establishing it as a sub-collection for the API
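
Here is a simple, hypothetical example of phases one and two side by side, where Swagger is an official property from the spec, and the X- prefixed entry is one I made up for my own needs (the URLs are placeholders):

    {
      "name": "API Evangelist Blog",
      "baseURL": "http://example.com",
      "properties": [
        { "type": "Swagger", "url": "http://example.com/blog/swagger.json" },
        { "type": "X-Blog-RSS", "url": "http://example.com/blog/rss.xml" }
      ]
    }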

The lesson J(a)son provides describes the journey of each APIs.json property; the difference is that his lesson provides best practices for API providers who are designing new APIs, helping them avoid the creation of a junk drawer, while the APIs.json property format is being applied to define the junk drawer that we already have (aka the public API space). This represents the fundamental separation between my approach to defining the space and that of many other technologists—I am trying to map out what we have, and get us to the next step in our evolution, while others are working hard to define where we should be going.

When Steve and I originally hammered out the APIs.json format, we couldn't 100% agree on what should be first class properties and collections for each API defined using an APIs.json file—what you see in version 0.14 is what we agreed to, and the rest needs to be defined by the community, through actual implementations and discussion on the APIs.json Github repo.

When you are crafting the JSON for a new API, J(a)son’s lesson is very important, but when you are evaluating hundreds of APIs, and trying to define a common pattern for representing not just the tech, but also the business and politics of an API—you don’t have as much control over how things get defined. I’m not worried about the overhead involved with adding a little more complexity to my code, to bridge the well-defined aspects of API operations with the lesser-defined parts of the space.

Ideally, in one of the next iterations of APIs.json, the official properties for Swagger, RAML, Blueprint, and WADL can move up into a first class collection for each API called definitions, and maybe API Commons can also move up to be a first class licensing collection. I’m also looking to graduate some X-properties to official status, based upon creating APIs.json files for 700+ companies, and having identified almost 150 separate properties that can be used to describe API operations. All of this represents the API sector coming into focus, and I do not think all of these properties should become "first class" properties or collections—I am just trying to make sense of the "junk drawer" that we have, and help provide a blueprint for a machine readable format that can help us evolve it towards the future we want.

I have been asked by several folks to better explain Steve's and my APIs.json vision, and J(a)son’s story gives me the opportunity to kick off this storytelling. In an effort to keep this story short, I’m going to follow J(a)son’s lead and move the rest of it onto YouTube, so if you want to better understand APIs.json properties, you can watch my video response to J(a)son's story, which augments this post, and hopefully provides some better examples, along with narrative.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

How Not To On-Board With An API

I wrote a piece earlier today about the kick-ass on-boarding process at the National Institutes of Health (NIH) 3D Print Exchange API--within two clicks I had my API key and was making an API call. To contrast that post, I wanted to provide an example of how not to on-board with an API.

I am always amazed at how hard people make it to sign up and play with an API in 2015. Today's example of on-boarding with the most friction you can imagine comes from the Garmin Wellness API--I think their API signup form speaks for itself.

I understand you are worried about what people might do with your API, but that is kind of the whole point of doing an API. If you are looking to encourage innovation on your API, this is not how you want to on-board your developers, and make a first impression.

There are other ways to manage API integrations without restricting users from day one. Make sure and help make API on-boarding as frictionless as possible; there is no reason to use a form like Garmin has.

A New 3D Print Exchange API from the National Institutes of Health (NIH)

There is a very interesting new 3D Print Exchange API from the National Institutes of Health (NIH). The NIH 3D Print Exchange is designed for publishing "biomedically-relevant" 3D models that anyone can download and use as a blueprint for printing on their own 3D printer.

I've been tracking on 3D printing APIs since 2011, when I started profiling how APIs were being used by printing, and 3D printing, providers. It is interesting to see an industry focused 3D printing API emerge, which I feel is a sign the space is maturing, and will continue to evolve in 2015.

Another thing to note in this story is the simplistic approach the NIH took in deploying their API. The 3D Print Exchange API has all the essential building blocks to help you understand what the API does, on-board with the API (another story), and get the support you need throughout integration via Twitter, Github, and email.

The 3D Print Exchange API has all the fingerprints of 18F on it, demonstrating that APIs in the federal government will continue to pick up momentum with the assistance of 18F and the Presidential Innovation Fellowship (PIF) program out of the White House and GSA.

We Added 3 New Speakers To @APIStrat Lineup - Have You Submitted A Talk?

We just added three new speakers to the lineup for @APIStrat Berlin, this April 24th and 25th. I get pretty excited about this part of the event planning lifecycle, which is all about reviewing the talks being submitted, and working with the rest of the APIStrat team, and now the API Days team, to develop the best lineup possible.

Here are the three that we added today:

  • Antti Silventoinen (@lamantiini) of Music Kickup
  • Jordi Romero (@jordiromero) of Redbooth
  • Matt Boyle (@mboylevt) of Shapeways

These three speakers represent APIs and the music industry, project collaboration, and 3D printing—which I think is a pretty nice representation of how APIs are making an impact in 2015.

If you haven’t submitted a talk, make sure and head over to the APIStrat call for talks page, and make sure you are part of the lineup in Berlin, this April—we will be closing the call for talks soon, so make sure you do it quick!

Instant Access To APIs Via Github Profile

An open project for me this month is about better understanding how API keys are provisioned, and how developers are given access to valuable resources. As the number of APIs grows, so does the number of APIs that we depend on in any single application, forcing developers to manage many API keys, potentially from many different platforms.

It doesn’t take an API expert to see that many current practices by API providers, like requiring consumers to manually register for an account, will not scale. We need more automated ways of not just discovering, but also on-boarding with APIs, allowing API consumers to begin using an API without the current overhead required.

One idea I’m bouncing around in my own APIs is allowing for instant account creation via an API, letting you programmatically generate a new account, a new app, and API keys. Of course I do not want these new accounts to have full access to everything, and using my 3Scale API management layer I will create a specific service tier for these accounts, limiting what they can do.

I want to take this another step further though. I do not want just any spammy Johnny-come-lately to be able to create new accounts, without any sort of validation. To help filter, I’m developing a Github account ranking layer, requiring you to pass along your Github user with the generation of a new account, app, and keys. I will pull the profile for the Github user, and some statistics on their overall profile, and make some assumptions about the developer's trustworthiness.
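
Here is a rough sketch of the kind of ranking layer I have in mind. The scoring weights and threshold are completely arbitrary placeholders, not a real trust model:

    // sketch of scoring a Github profile before issuing instant keys;
    // the weights and cutoff are placeholders, not a real trust model
    var https = require('https');

    function rankGithubUser(username, callback) {
      https.get({
        hostname: 'api.github.com',
        path: '/users/' + encodeURIComponent(username),
        headers: { 'User-Agent': 'instant-api-onboarding' }
      }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          var user = JSON.parse(body);
          var ageInYears = (Date.now() - Date.parse(user.created_at)) / 31536000000;
          var score = ageInYears + (user.public_repos || 0) * 0.1 + (user.followers || 0) * 0.05;
          callback(score > 2); // arbitrary cutoff for issuing keys
        });
      });
    }

    rankGithubUser('kinlane', function (trusted) {
      console.log(trusted ? 'generate account, app, and keys' : 'require manual review');
    });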

Immediately this approach will limit API access for a number of people, which in some scenarios may not be ideal, but for my APIs, I’m willing to allow instant account creation and API access to people who have an active, verifiable Github presence. I’ve been a proponent of API providers offering Github login for developer accounts for some time now, and this seems like a logical next step.

We’ll see where I go with this. It is more an exercise than a real thing, but who knows, in addition to using Github to manage my API keys securely, maybe I can actually use it to instantly access new APIs, without the current overhead I face with each new API on-boarding experience.

Storing API Keys In The Private Master Github Repository For Use In Github Pages

My public websites have been running on Github Pages for almost two years now, and slowly the private management tools for my platform are moving there as well. Alongside my public websites, I’m adding administrative functions for each project. Most of the content is API driven already, so it makes sense to put some of the management tools side by side with the content or data that I’m publishing.

These management tools are simple JavaScript apps that use the Github API to manage the HTML and JSON files I have stored either publicly or privately within repositories. I use Github oAuth to work with the Github API, but to work with other APIs I need a multitude of other API keys, including the 3Scale generated API keys I use to access my own API infrastructure.

My solution is to store a simple api-keys.json file in the root of my private master repository, and then, again using Github oAuth and the Github API, I access this file, reading the contents of the JSON file into a temporary array I can use within my management tools. If you do not have access to the Github repository, you won’t be able to read the contents of api-keys.json, rendering the management tools useless.
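
A minimal sketch of the read side looks something like this, assuming a token from the Github oAuth flow described above, with the repository path being a placeholder for my own layout:

    // sketch: pull api-keys.json from a private master branch using
    // the Github contents API; the repository path is a placeholder
    function loadApiKeys(token, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'https://api.github.com/repos/kinlane/example-project/contents/api-keys.json');
      xhr.setRequestHeader('Authorization', 'token ' + token);
      xhr.onload = function () {
        if (xhr.status !== 200) { return callback(null); } // no access, no keys
        var file = JSON.parse(xhr.responseText);
        // file contents come back base64 encoded
        var keys = JSON.parse(atob(file.content.replace(/\n/g, '')));
        callback(keys);
      };
      xhr.send();
    }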

I will eventually develop a centralized solution for managing API keys across all my projects, allowing me to re-use keys for different projects, and easily update or remove outdated API keys. This approach to storing API keys in my private Github repository is allowing me to easily access keys in the client-side apps I run on Github Pages, as well as via server-side applications and APIs—something that I’m hoping will give me more flexibility in how I put the multiple APIs across my infrastructure to use.

Using Containers To Bridge What Swagger Cannot Define On The Server-Side For My APIs

Swagger is now Open API Definition Format (OADF) -- READ MORE

When I discuss what is possible when it comes to generating both server and client side code using machine readable API definitions like Swagger, I almost always get push-back, making sure I understand there are limitations to what can be auto-generated.

Machine readable API definitions like Swagger provide a great set of rules for describing the surface area of an API, but are often missing many other elements necessary to define what server side code should actually be doing. The closest I’ve gotten to fully generating server side APIs is with very CRUD-based APIs that possess very simple data models--beyond that it is difficult to make "ends meet".

Personally, I do not think my Swagger specs should contain everything needed to define server implementations; this would make for a very bloated, and unusable, Swagger specification. For me, Swagger is my central truth, which I use in generating server side skeletons, client side samples and libraries, and to define testing, monitoring, and interactive documentation for developers.

Working to push this approach forward, I’m now also using Swagger as a central truth for my Docker containers, allowing me to use virtualization as a bridge for everything my Swagger definitions cannot define for each micro-service or API deployed within a specific container. This approach leaves Swagger as purely a definition of the micro-service surface area, and leaves my Docker image to deliver on the back-end vision of that surface area.

These Docker images are using Swagger as a definition for their container surface area, but also as a fingerprint for the value they deliver. As an example, I have a notes API definition in Swagger, for which I have two separate Docker images that support this interface. Each Docker image knows it only serves this particular Swagger specification, using it as a kind of fingerprint or truth for its own existence, but ultimately each Docker image will deliver its own back-end experience derived from this notes API spec.

I’m just getting going with this new approach to using Swagger as a central truth for each Docker container, but as I’m rolling out each of my API templates with this approach, I will blog more about it here. Eventually I am hoping to standardize my thoughts on using machine readable API definition formats like Swagger to guide the API functionality I'm delivering in virtualized containers.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Use APIs.json To Organize My Swagger Defined APIs Running In Docker Containers

Swagger is now Open API Definition Format (OADF) -- READ MORE

I am continuing to evolve my use of Swagger as a kind of central truth in my API design, deployment, and management lifecycle. This time I’m using it as a fingerprint for defining how APIs or micro-services that run in Docker containers function. Along the way I’m also using APIs.json as a framework for organizing these Swagger driven containers into coherent API stacks or collections, that work together to accomplish a specific objective.

For the project I’m currently working on, I’m deploying multiple Swagger defined APIs, each as a separate Docker container, providing some essential components I need to operate API Evangelist, like blog, links, images, and notes. Each of these components has its own Swagger definition, a corresponding Docker image, and a specific instance of that Docker image deployed within a container.

I’m using APIs.json to daisy chain all of these APIs or micro-services together. I have about 15 APIs deployed, each a standalone service with its own APIs.json and supporting Swagger definition, but the overall project has a centralized APIs.json, which, using the include property, provides linkage to each individual micro-service's APIs.json--loosely coupling all of these container driven micro-services under a single umbrella.
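
A stripped-down, hypothetical version of that parent APIs.json looks something like this, with each include pointing at the APIs.json for a single micro-service (the URLs are placeholders):

    {
      "name": "API Evangelist Stack",
      "description": "Parent APIs.json loosely coupling my container driven micro-services",
      "include": [
        { "name": "Blog", "url": "http://blog.example.com/apis.json" },
        { "name": "Links", "url": "http://links.example.com/apis.json" },
        { "name": "Images", "url": "http://images.example.com/apis.json" },
        { "name": "Notes", "url": "http://notes.example.com/apis.json" }
      ]
    }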

At this point, I’m just using APIs.json as a harness to define, and bring together the technical aspects of my new API stack. As I pull together other business elements like on-boarding, documentation, and code samples, I will add these as properties to my parent APIs.json. My goal is to demonstrate the value APIs.json brings to the table when it comes to API and micro-service discovery, specifically when you deploy distributed API stacks or collections using Docker container deployed micro-services.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Scrubbing Individuals And Company Names From Stories I Tell

I find it more valuable to scrub the names of the APIs from about 75% of my stories, which I feel helps them be received by the widest audience possible. If I call out "API Provider X" by name, other providers just think I’m talking about that company, but when I make the story generic, I find API providers think I’m talking about them.

There are many reasons I would scrub individual or company names from a story. Most often it is because they’ve asked me not to reference them publicly, but they are fine with me telling the story in an anonymous fashion. However, it occurred to me over time that there are other, more powerful storytelling influences potentially behind leaving out specific characters.

Early on I identified that there are a lot of API lessons to be learned across business sectors, but if you reference the specific company, or business sector, many business leaders or developers will put their blinders on, feeling this doesn’t apply to them. Ideally companies would identify the potential of hearing stories from other ways of doing business, but unfortunately many lack the imagination to make the transition—luckily I have enough imagination to go around! ;-)

I thoroughly enjoy storytelling in the API space. Over the last four years I’ve developed a voice I enjoy speaking in, and have slowly built an entire toolbox of approaches, like scrubbing people or company names, to make my storytelling more impactful, and potentially reach a wider audience. I feel strongly that I would never have been able to do this if API Evangelist was purely a marketing blog for a company, but because I’m lucky enough to be able to focus exclusively on the entire API space, spanning multiple business sectors, and am left to my own devices when it comes to editorial choices--I’ve been able to grow my storytelling in ways I never imagined in 2010 when I first started.

Providing An oAuth Signature Generator Inline In Documentation

I talked about Twitter's inclusion of rate limits inline with documentation the other day, which is something I added as a new building block that API providers can consider when crafting their own strategy. Another building block I found while spending time in the Twitter ecosystem was an oAuth signature generator inline within the documentation.

While browsing the Twitter documentation, right before you get to the example request, you get a little dropdown that lets you select from one of your own applications, and generate an oAuth signature without leaving the page.

I am seeing oAuth signature generators emerge on a number of API platforms, but this is the first inline version I’m seeing. I’ve added this to my tentative list of oAuth and security building blocks I recommend, but will give it some time before I formally add it. I like to see more than one provider do something before I put it in there, but sometimes, when it is just Twitter, that can be enough.

Generating Swagger Specs For The APIs Of The 700+ Companies That I Monitor

Swagger is now Open API Definition Format (OADF) -- READ MORE

I'm about 1/3 of the way into generating Swagger specifications for the APIs at the 700+ companies that I monitor. I have Swagger specs for almost 250 APIs so far, and have no idea how many I’ll have when I'm done (ha, will I ever be done?), as the target is always moving. The only way to get to know an API better than creating a Swagger spec for it is to actually integrate with it.

Thankfully I’m not integrating with ALL of the APIs I monitor, but I do want to get more intimate with their API surface area, right up to the point of actually integrating. There are four ways that I obtain a machine readable API definition for an API:

  • Manually - Good ol' fashioned elbow grease, because there is nothing standard about an API's documentation, or even the API itself, forcing me to hand craft a Swagger definition that works.
  • Scraping - Some API documentation is pretty standardized, making it very easy to write a scrape script that harvests endpoints, headers, parameters, and other aspects of the interface, and generates a Swagger skeleton (see the sketch after this list).
  • APITools - For some APIs that I’m actually integrating with, going beyond just a review of their API, I use APITools as a proxy, making all my API calls through it, and after I hit all the endpoints, I can grab the auto-generated Swagger definition from the APITools interface.
  • Swagger - There is already a Swagger definition available for the API, created by the platform owner—this is when I’m in heaven. I love it when API providers create their own API definition, and even better when they create their own APIs.json file. ;-)
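
To show what the scraping approach produces, here is the shape of the Swagger skeleton one of my scrape scripts might emit, with the endpoint and parameter invented for illustration:

    // the kind of Swagger skeleton a scrape script might emit after
    // harvesting endpoints and parameters from standardized API docs
    var skeleton = {
      swagger: '2.0',
      info: { title: 'Harvested API', version: '1.0.0' },
      basePath: '/v1',
      paths: {
        '/notes': {
          get: {
            summary: 'Pulled from the documentation page',
            parameters: [
              { name: 'page', in: 'query', type: 'integer' }
            ],
            responses: { '200': { description: 'OK' } }
          }
        }
      }
    };
    console.log(JSON.stringify(skeleton, null, 2));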

I would like to also add API Blueprint, RAML, I/O Docs, and Apigee Explorer formats to the list, but I do not use them. While these formats are out there, it is an understatement to say they are tucked away and hidden. I’d venture to say the providers are actively assisting API providers in keeping them buried and hard to access—in my opinion each format should be front and center, accessible with a single click.

This is one of the reasons I use Swagger. If you look at Swagger UI docs, for each endpoint there is a RAW link which takes you directly to the machine readable API definition that drives it. Show me the equivalent in API Blueprint, RAML, Mashery, and Apigee API explorers and interactive documentation, and I'll eat my words. I'm creating my own API stack for converting between all of these formats, but alas this doesn't help me in my API discovery process, because these formats aren't easy to find.

This disconnect is one of the reasons there is so much fragmentation in API designs, and ultimately a lack of open tooling and services that support the entire API life-cycle—nobody wants to share their design patterns. Please help me document all the APIs in my API Stack, and if you know of an existing Swagger spec I don’t have, send it my way, or publish it to the Github repo. Also, if you are actively using an API, you can switch to using an APITools proxy, generate a Swagger spec that way, and also send it my way, or publish it to the Github repo.

I wish I could convince all y'all of the importance of this layer of the API space being not just machine readable, but also accessible. This is one reason why I’m hand rolling all of these API definitions myself: I’m just going to show you.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Using APIs To Help Achieve A More Owner-Controlled Internet of Things Experience

A blog post that caught my attention recently was Fuse, Kynetx, and Carvoyant, by Phil Windley (@windley). Phil is pushing the boundaries of connecting devices to the Internet, and is very vocal about his thoughts on the subject—something I support 100%. If you want a taste of what Phil is thinking, check out his keynote from APIStrat Chicago 2014.

His blog post is a worthy read, but what really caught my attention was in the last paragraph—the simple statement that he is:

“using Fuse as a means to explore how a larger, more owner-controlled Internet of Things experience could be supported”

I’m intrigued by this concept, and feel it reflects my mission with API Evangelist, where I’m looking to create not just more educated and informed API providers and consumers, but also more knowledgeable, and empowered, end-users.

The concept of an owner-controlled Internet of Things experience is something I will be keeping an eye on as I continue to be inundated with Internet of Things API stories. My goal is to better understand how IoT devices are being designed and deployed, and then provide some assistance in how we can ensure platforms lean towards a more “owner-controlled experience”.

The Quickest Way To Proxy, Secure, Rate Limit, and Monitor My APIs

As I am designing my APIs, one of the first things I decide is whether or not I will be making them public. If it's a simple enough resource, and doesn't put too much load on my servers, I will usually make it publicly available. However, if an API has write capabilities, could potentially put a heavy load on my servers, or just possesses some private resource that I want to keep private, I will secure the API.

I use 3Scale for my API management infrastructure--I have since 2011, long before I ever started working with them on projects, and organizing @APIStrat. When it comes time to secure any of my APIs, I have a default snippet of code that I wrap around each API, validating the application keys and recording their activity--which 3Scale calls the plugin integration approach.
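
For those wondering what that wrapping looks like, here is a rough, hand-rolled version of the pattern, calling the 3Scale authrep endpoint directly rather than using their plugin library. The keys are placeholders, and it assumes the simple user_key authentication mode:

    // rough sketch of the 3Scale check I wrap around each API, calling
    // the authrep endpoint directly instead of the plugin library; the
    // provider and user keys are placeholders, user_key auth assumed
    var https = require('https');

    function authorizeCall(userKey, callback) {
      var path = '/transactions/authrep.xml' +
        '?provider_key=MY_PROVIDER_KEY' +
        '&user_key=' + encodeURIComponent(userKey) +
        '&usage%5Bhits%5D=1'; // report one hit against this key
      https.get({ hostname: 'su1.3scale.net', path: path }, function (res) {
        // a 200 means the key is valid and within its rate limits
        callback(res.statusCode === 200);
      });
    }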

This time around, I logged into my 3Scale admin area, went to my API integration area, and saw the setup for the 3Scale cloud API proxy that they are calling APICast. I can't help but notice the simple setup of the proxy--I give it a private base URL for my API, it gives me a public base URL back, and then I can configure the proxy rules, setting the rate limits for each of my API resources.

That is it. I can set up my APIs in a sandbox environment, then take them live when I am ready. It is the quickest way to secure my APIs I've seen, allowing me to instantly lock down my APIs, require anyone who uses them to register for a key, and then track how they are being put to use—no server configuration or setup needed.

This easy setup, bundled with the fact that you can set up 3Scale for free, and get up to 50K API calls a day, makes it the perfect environment for figuring out your API surface area. Then, when ready, you can pay for heavier volume, and take advantage of the other advanced features available via 3Scale. I'm still using the plugin approach for 90% of my endpoints, but for some I will be using APICast to quickly stand up, secure, and monitor my APIs. I will publish a how-to after I finish setting this one up.

Disclosure: 3Scale is an API Evangelist partner.