{"API Evangelist"}

Adopta.Agency ClinicalTrials.gov Data And API

In the last six months I was fortunate enough to push forward one of my side projects, with the help of a prototype grant from the Knight Foundation. The mission of the project is to help move forward existing open federal government data projects, by adopting them, helping clean up the data, publishing simple APIs, generating more metadata, and telling stories around the project. Something I originally called Federal Agency Dataset Adoption, but then shortened to simply Adopta.Agency.

When I was doing my final presentation for the Knight Foundation in Pittsburgh last month, someone from the University of Miami was present, and when they got home, they contacted me about helping move forward the database available at ClinicalTrials.gov. I do not know much about the data present in the collection, but I immediately recognized it as a viable Adopta.Agency project.

To summarize their request, it went something like this :-)

I am working on a project that uses data from https://clinicaltrials.gov/. Their API is crap to say the least. I was wondering if you could help me out. Is there a tool I could use to get better access to the data? If you download the entire thing, it is an 850MB zipped file of XML. I only need a fraction of the trials in the db. I guess I am looking for advice on how to proceed.

I get questions like this a lot, something that contributed to me pushing forward my Adopta.Agency work. The Knight Foundation prototype grant was just that, the prototype funding, something I intend to keep pushing forward, targeting new data sets, and looking for more open data activists to assist in doing the heavy lifting. The ClinicalTrials.gov database seemed like an excellent candidate because it is a high value data set, and is pretty poorly presented via the download and API (?) page available at ClinicalTrials.gov.

Enough talk. I got to work downloading the ClinicalTrials.gov data file, and kicking off a new Adopta.Agency project. Here is what I've accomplished so far:

I now have 40+ separate clinical trials data files, available as JSON or CSV, and a complete API that allows for reading and writing, plus a communication and issue management system that will help me engage with others around the project. This entire process is Github driven, with APIs.json and OpenAPI Spec as its machine readable core, which indexes everything that is going on--as it happens.
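
To give you a sense of what that machine readable core looks like, here is a minimal APIs.json sketch--the URLs and names here are placeholders for illustration, not the actual project files:

    {
      "name": "Adopta.Agency Clinical Trials",
      "description": "Open clinical trials data and APIs, adopted from ClinicalTrials.gov.",
      "url": "http://example.github.io/clinical-trials/apis.json",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Clinical Trials API",
          "description": "Read and write access to the adopted clinical trials data.",
          "humanURL": "http://example.github.io/clinical-trials/",
          "baseURL": "http://example.github.io/clinical-trials/api/",
          "properties": [
            {
              "type": "Swagger",
              "url": "http://example.github.io/clinical-trials/swagger.json"
            }
          ]
        }
      ]
    }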

I think that the ClinicalTrials.gov project represents the Adopta.Agency mission well. There is a wealth of amazing open data sets available on the government perimeter. Much of it isn't well defined, lacking necessary descriptions, tagging, and other metadata to make it discoverable, let alone using machine readable, open data formats like APIs.json, OpenAPI Spec, and JSON Schema. I do not blame the folks in government; I understand that they are working with limited resources, and sometimes lack awareness of modern open data and API approaches. This is why they need our help!

If you'd like to get involved with the clinical trials open data and API project, head over to the project site, and drop me a line using the project's Github issue management. I am not sure what is next. I will be looking for feedback from the folks who tuned me into the data set, as well as from my own network--then I can spend some more time on the road map, and see what we can make happen.

See The Full Blog Post


My Hangout With Wade Foster Of Zapier: It's About The Process

I had the pleasure of hanging out with Wade Foster (@wadefoster), co-founder and CEO of Zapier, recently. (How do you pronounce Zapier? It rhymes with happier. :-) As I travel less, I'm looking at doing more of these Google Hangouts, to fill my need for hanging out with super smart people, doing really cool things with APIs.

I am a big fan of what Wade and the Zapier team are up to. His perspective on helping us define "the process" is extremely empowering for EVERYONE and ANYONE, especially non-developers.

What Zapier enables is very much aligned with how I would like the API economy to end up operating. System to system, website, mobile, and device based API integration is important, but platform interoperability for the masses, I feel, is much more critical. Zapier is how the average person will take control over their digital self, and move the growing amount of data and content we are generating across the growing number of services we depend on daily.

Thanks to Wade, and the Zapier team for hanging out with the API Evangelist, and doing such important, and cool work with APIs.

See The Full Blog Post


Exploring My Thoughts Around API Injection Into Messaging, Voice, And Other Online Experiences

As I listen to my hangout with Wade Foster of Zapier, I'm considering the overlap between my API reciprocity, bots, virtualization, containerization, webhooks, and even voice research. At the same time I'm thinking about how APIs are being used to inject valuable data, content, and other API driven resources into the stream of existing applications we use every day.

Some of this thinking is derived from my bot research, where bots impersonate Twitter users, or respond to specific keywords, phrases, or keyboard shortcuts in Slack. Some of this thinking comes from learning more about the ability to inject "code steps" into multi-step workflows with Zapier. Then, as I continue my curation of news, I read about Uber allowing developers to create trip experiences, opening up another window for potential API driven injection into the Uber "experience".

It got me thinking, where else is this happening? I would say Twitter Cards are a form of this, but one that is more media focused, rather than bot focused (although it could be bot driven behavior). Then I started looking across the 50 areas of the API life cycle I'm monitoring, and voice stood out as another area of potential overlap. Amazon is allowing developers to inject API driven content, data, and other resources into the conversational stream occurring with Alexa users. I don't see this being much different than bots injecting responses into the messaging stream of Slack and Twitter users.
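
To make the voice example a little more concrete, here is a minimal sketch of the kind of JSON an Alexa skill hands back to Amazon, following the Alexa Skills Kit response format as I understand it--the speech text is a placeholder standing in for whatever your API returns:

    {
      "version": "1.0",
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "There are 47 clinical trials currently recruiting near you."
        },
        "shouldEndSession": true
      }
    }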

I'm just getting going with these thoughts. I'm thinking containers, and serverless approaches to API deployment, are going to impact this line of thought as well. I'm considering pulling together a research project around this overlap, something I will call API injection. Essentially, how are people injecting API driven data, content, and other resources into the streams of other applications, using APIs? I could see a whole new breed of API provider emerge, just to satisfy the appetite of bots and other API injection tooling, engaging with users via the streams they already exist in daily, whether messaging, voice, or any other online experience.

I do not think this is limited to consumer level injections. I could see B2B approaches to API injection opening up some pretty interesting API monetization opportunities. Hock'n your bot warez within other people's business streams. ;-) We'll see where this line of thought goes...regardless, it is fun to think about.

See The Full Blog Post


The API Transparency Discussion Is Not Exclusively About Being Public Or Private

When I talk about companies using APIs to be more transparent, one of the immediate responses I receive from folks is that "not everyone can be public by default". I agree, but I always counter with an introduction to the concept that transparency can be applied in strictly internal or partner situations as well--public is not the only type of transparency out there.

I am a big proponent of the public version of API driven transparency, but I also feel it can be applied within the firewall, as well as on the open web. Simple developer portals, with a quality selection of valuable APIs, up to date interactive documentation, and other resources, available at a known, yet secured location, can go a long way to stimulate integration--both human (team) and system. Self-service access to API design, definitions, deployment, management, testing, and other life cycle strategies, as well as the API resources can go a long way to establish a rich environment for collaboration, reuse, and consistency in API strategy across an organization.

The benefits of internal and partner layer approaches to transparency, as well as public transparency, help break down silos. Think about the separations between some of the other groups in your organization, and possibly with your leadership up the ladder. Would it help if the road map for your team was out in the open for anyone within your company to follow? Would conversations around outages and system stability be more productive if they included a wider group--maybe preventing some of the micro aggressions that occur behind closed doors?

There are many, many ways APIs can bring transparency to your organization, way before you ever consider doing it publicly. When I say API, this starts with the technical endpoints, but a modern API conversation ALWAYS involves documentation, code, communications, and feedback loops. The Amazon Web Services family of APIs isn't just about the compute and storage endpoints. It is also about the self-service documentation, videos, tutorials, case studies, the 24/7 community forums, and paid tiers of premium support, that make it all go round.

How could API driven transparency break down the silos in your organization, and help things operate a little more efficiently?

See The Full Blog Post


Some Of The Micro API Evangelist Tasks That I Get Asked To Help With Regularly

I have been working for a month or so on a list of the common tasks that developer advocates and evangelists would like to see occur around their API operations. These are small tasks that evangelists should be doing themselves, but if they can encourage their own API community, or the wider developer and tech community, to help out, they might achieve a greater reach.

Getting your community to help out with tasks on your platform is nothing new. This is a core premise of modern web API operations, something that can work well in some ecosystems, but has historically been abused in others. Some API providers enjoy vibrant communities where developers are more than happy to step up and build things for free, generate a buzz, and show up to events. The reality for most is that it can be very hard to stimulate developers to help in any way, and they resort to exploring other ways to stimulate this activity.

As I was exploring these concepts with fellow evangelists, soliciting ideas for what some of the common tasks evangelists might need, someone brought up how sleazy this sounds. Are you going to give away swag, gift cards, or money for these tasks? Are you going to do it for points? I agree, on the surface, when you describe it like that, it does sound cheesy, if not sleazy. Then I got to thinking about the micro transactions and tasks people ask of me on a regular basis--things like:

  • Writing a Blog Post
  • Retweeting a Link
  • Voting Up Something on Hacker News
  • Voting Up Something on Product Hunt
  • Writing a Code Sample
  • Building a Prototype Integration
  • Commenting On a Discussion
  • Creating a Visualization
  • Participating in a Webinar
  • Keynoting a Conference
  • Participating In a Discussion
  • Writing a White Paper
  • Providing a Quote
  • Talking To a Reporter
  • Talking To an Analyst

These are some of the common requests I get. Sometimes the requests are paid, sometimes they are not. There is no clear line to follow for any of this. It comes down to what I'm willing to do, and what I'm not. I know plenty of people who write blog posts for companies, and get paid per post--I choose not to do this. I do write white papers for money sometimes, but I would never tweet or upvote something for money. I would prioritize consideration for an upvote or retweet when someone is a paid partner--I tweet things for my paid partners all the time, but not if it isn't relevant.

I am just trying to explore where exactly this line is for one of my API research areas--evangelism--like I do in all of my other areas. I am working with my partner Cloud Elements to explore these concepts, so in a sense I am getting paid for this work, although indirectly. That is not my goal though. I want to help define a simple list of micro transactions that could occur within API ecosystems, and across the web, that benefit both API provider and consumer--equally. I think it can be done ethically, and transparently, avoiding the cheesy and sleazy, and I think many will win in these areas as well.

In the end, I think it is about making the tasks meaningful, and being as transparent as possible about what is going on. I don't see any problem giving developers in your ecosystem gift cards, API credits, or other forms of incentive, if they are willing and not being exploited. I know I would do more work for some of the API providers I use, if I got free API credits! (wink wink) I will keep exploring these micro API evangelism activities, and how to do them right. If there are tasks you think should be on the list, or you have opinions on where the ethical line is, and examples you'd like to share--feel free to ping me!

See The Full Blog Post


I Wish Everyone Broke Their API Pricing Page Down Like SendGrid Does

I was going through the SendGrid API, profiling their available plans and pricing using my new API plan tracking format, and I just have to stop and say--I wish everyone would present their pricing pages as simply as SendGrid does. I cannot speak to whether their pricing is good, fair, or otherwise, but the layout, and the way they explain it, is very simple, straightforward, and easy to make sense of.

In addition to having the pricing for their email API broken down into six coherent plan tiers, they also have a compare plan details table, which shows all six plans side by side, with each core feature, and other elements of API access, available to browse. As an API analyst, it makes the data entry for my API tracking much easier, but as an API consumer it makes it much easier for me to budget for the API driven resources I will need as part of my API operations.

The state of your pricing page says a lot about how baked your overall API strategy is. I will be the first to admit my own API plans and pricing are half-baked at best. I'm still trying to figure out what I want to be, the value of my resources, and what the market will bear--my pricing page reflects and amplifies this dilemma. This is something I am determined to get in order, as I continue to push forward my API plan research.

See The Full Blog Post


I Am Liking The Modular Services That Are Delivering In Specific Areas Of The API Life Cycle Like API-Docs.io

I like my API service providers like I like my APIs: doing one thing and doing it well. Sure, services can work together (using APIs), and companies can launch multiple services, but I prefer selecting the services that I use as part of my API operations as small, modular pieces of API infrastructure. In my opinion, API service providers are just another class of API provider--they just happen to be selling services to businesses that are operating APIs.

You can see this in action with API Docs, part of the StopLight.io platform. API Docs does one thing well--API documentation! For me, it touches on two important areas of the API life cycle, documentation and definitions, and it also illustrates the maturation of these stops along the API life cycle since Tony first launched Swagger UI. The StopLight team is taking API definitions to the next level, but they are also helping us better deliver the documentation portion of our API life cycle(s)--all driven by API definitions. (meta)

If you are looking for another example of this in action, take a look at API Transformer, from the APIMATIC team. APIMATIC alone fits my definition of a modular API service provider, doing SDKs, and doing them well, but then you factor in their API translation tooling. #winning Again, APIMATIC is focusing on modular services that are API definition driven--very similar to what StopLight is up to with their strategy.

In 2016 you will hear me hammer on three key concepts when it comes to delivering services in the API space, and selling to individual API providers: 1) do one thing and do it well, 2) speak in API definition formats, and 3) do it with APIs!! It isn't rocket surgery that API service providers should be practicing what they preach, but in 2016 I am finding that these three characteristics are proving to be a reliable way for me to differentiate the API service providers who will be here for the long haul from those that won't.

See The Full Blog Post


Generating OpenAPI Specs For The Mobile Apps You Depend On Just By Using Them

Stoplight.io is a very cool new API modeling and proxy tool. I just wrote a post about the overall features of the platform, but I wanted to zoom in on a specific benefit that Stoplight.io brings to the table--the auto generation of OpenAPI Specs for the mobile apps you depend on, as you use them. To understand the process in detail, I recommend watching the "how was this created" video on the Peach API documentation generated by Stoplight.io.

When you download the Stoplight API designer for Mac (available on the account dashboard), you get the Prism API Proxy, which allows you to easily route all the traffic from your MacBook, as well as iPhone and iPad, through the platform. Once you create a new API, and add the base URL for the API you are profiling, Stoplight does the rest, automatically generating an OpenAPI Spec, and attractive API documentation in the application, in real time as you watch--it is pretty cool to see once you have it turned on.
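
To give you an idea of the output, here is a hypothetical sketch of the kind of OpenAPI Spec (Swagger 2.0) fragment that could get generated from a single observed request--the host, path, and descriptions are made up for illustration, not an actual capture:

    {
      "swagger": "2.0",
      "info": {
        "title": "Observed Mobile API",
        "version": "1.0"
      },
      "host": "api.example.com",
      "paths": {
        "/v1/feed": {
          "get": {
            "description": "Captured as the mobile app requested the user's feed.",
            "responses": {
              "200": {
                "description": "A JSON array of feed items."
              }
            }
          }
        }
      }
    }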

Why do I want to do this? Well, first of all, I am someone who has spent five years traveling around the world wearing the same t-shirt, evangelizing APIs to everyone who will listen--obviously I have a number of problems. Beyond that, I'm fascinated by the API design practices behind the leading "dark APIs" I use every day, which are accessed not via a public API program, but via a mobile device. I'm fascinated by this layer of the API sector, one that is fueling all of the API growth we are seeing, but getting a much smaller portion of the conversation, which is dominated by leading public APIs.

To help me learn about what is possible with StopLight, I profiled the APIs that the LinkedIn and Instagram mobile applications use. It took me about 15 minutes to go through each application, and hit publish on the documentation you see published to API Docs. I feel that there are many lessons to be extracted from the API designs behind these leading applications, lessons that continue when you compare these API design strategies with the public API design strategies of the same companies.

I am not sure where all of this will lead. I know that profiling the APIs behind mobile applications is something many feel we shouldn't be doing, something I will write about separately, but I can't help but feel the opposite. In my opinion you should be proud of any APIs you make publicly available to drive web, mobile, or device applications, and we should be having serious conversations around security, privacy, monetization, and other critical things that are increasingly happening in this shadowy layer of our digital lives.

See The Full Blog Post


Concern Around Mapping And Discussing Shadow Mobile APIs Shows Signs Of An Imbalance

After talking about my mapping of public and mobile APIs with various folks over the last couple of months, I can usually put folks into one of three camps: 1) they do not understand what the hell I am talking about and could care less, wishing the geek dude would shut up, 2) they think it is a really interesting idea, worth exploring and discussing, or 3) they express concern, and liken it to scraping, and even hacking. I can understand folks not caring because they don't grasp it technically, but I do not yet fully grasp why people express concern, and think it is wrong that I am mapping out, and wanting to discuss, this layer of our digital world(s).

As I was working with the new features in Stoplight.io last night, profiling some of the applications on my mobile phone's desktop, I spent some more time thinking about this topic. I find it fascinating that companies do not consider the APIs they use to drive mobile applications as public infrastructure, when I can sit in my living room and engage with them over the public Internet. I find it even more fascinating that some folks consider what I do as contributing to security problems, when I map these APIs out, and share and discuss them on the open Internet.

I am not intercepting anyone's traffic, or hacking anyone's systems. I am simply using a publicly available mobile application I found in the app store, and I am interested in understanding how leading companies are designing their APIs. Next, I'm interested in discussing how these APIs compare with the same companies' public API infrastructure, or possibly why a company does not have a publicly available API program. After that, I am also looking for a better understanding of which of my life bits (and others') are being transported as part of my daily mobile app usage.

I've heard many arguments on why this public infrastructure definition should remain private, like the data that is being transported over it. They range from security concerns, all the way to the idea that users should just be thankful the service is available, that it is the company's right to do what they think is right, and the user's right to not use the service--markets and all that stuff. While I think this may apply in some situations, there are many scenarios where users are uninformed, being straight up lied to, or do not have the agency to make a decision to leave a platform, or application.

My focus in this discussion centers around consumer facing mobile, and device based apps. I'm not talking about business to business integrations, government services, and other digital infrastructure; I'm talking about the ubiquitous mobile apps that the average smart phone user is being pushed. This is the layer of the digital economy that is being built on the exhaust of the average mobile and Internet consumer. This is one reason why many folks do not want to disclose and discuss this shadowy layer of the application, IoT, and ultimately API economy--they hope they will get their piece of the action some day, and do not want to rub the money people wrong by rocking the boat.

While I think money motivates a lot of the thinking that occurs, I can't help but feel that a narrow focus is also part of it. In the mad rush to be the next big thing, a lot gets kicked to the side. Things like security and privacy get overlooked, or compromised, and a security through obscurity strategy is installed by default, which works just fine until...well, it doesn't. If the bad guys want to know your infrastructure, all they have to do is turn on Charles Proxy like I did, and route your computer, mobile, and tablet devices through it. Hiding the blueprint for your APIs, which you are using over the open Internet, and being afraid to discuss them publicly, is not good business, and doesn't address the security and privacy concerns everybody who operates on the open Internet will face at some point in the future.

I feel like having people in my world express concern to me about using Charles Proxy to map this world of APIs demonstrates there is an imbalance in the game. While I am looking to discuss the business that is going on in this layer, my priority is security and privacy around this very public infrastructure. I get that many of you have concerns about giving away your "not so" secret sauce, or that disclosing what goes on in this layer will hurt your bottom line, but I'm thinking that discussing security and privacy should be prioritized, and that not discussing them will eventually hurt your bottom line worse.

I'll end my rant there. I'm interested in gathering more of these concerns, as I continue to map out this layer of our digital world. Like the rest of my API research, I'm finding that I'm learning a lot about the API design strategies of leading mobile application providers, and gaining a better understanding of how APIs are fueling the mobile economy--something I am also looking to extend to the emerging device economy.

See The Full Blog Post


Automagically Defining Your API Infrastructure As You Work Using Stoplight.io

I stayed up way too late playing with some of the new features in Stoplight.io. If you aren't familiar with what the Stoplight team has been cooking up--they have been hard at work crafting a pretty slick set of API modeling tools. I feel the platform provides me with a new way to look at the API life cycle--a perspective that spans multiple dimensions, including design, definition, virtualization, documentation, testing, discovery, orchestration, and client.

Stoplight.io gives me a pretty powerful platform for managing, interfacing, sharing, collaborating, publishing, understanding, and evolving my API designs and definitions. You can begin modeling your APIs by importing an existing OpenAPI Spec, RAML, or Postman collection, or get to work modeling a new or existing API via the Stoplight client interface, which is just one tool in the Stoplight modeling toolbox. Additionally, I am able to create separate work spaces and import the OpenAPI Specs for all my own APIs, as well as the APIs that I depend on in the API space--this organization is very important to me.

Stoplight.io provides me with a new way to approach the organization of my API designs and definitions, but the biggest impact for me is when it comes to modeling the existing APIs out there. I've been working on automating the generation of OpenAPI Spec definitions of the APIs I use, as I use them--something that is two separate tasks at the moment. Stoplight.io does this, but does it waaaay better than what I have been doing, quietly crafting the OpenAPI Spec for any API I am using, in the background, when you have Stoplight API discovery mode turned on, and the Prism API Proxy running.

I am enjoying the view of the API space that Stoplight.io is giving me. I got completely lost last night improving on my API designs, profiling other APIs in the space that were already in my work queue, and mapping out the dark APIs behind the mobile apps that I depend on (something I will write about separately). I am curious to see what API designers and architects do with StopLight--I feel like it has the potential to shift the landscape pretty significantly, something I haven't seen an API service provider do in a while.

See The Full Blog Post


As An API Service Provider, Should I Craft My Own API Definition Format, Or Just Reuse What Is Already Available?

I have had multiple conversations lately with folks who are building services and tooling for the API sector, where I was asked whether they should only be using existing API definition formats, or create their own API definition format that better represents what they are delivering. The reasoning is usually that they feel their own format would offer a more comprehensive approach than any single, existing API definition could--yet they fully understand the potential for adoption when they use existing formats like OpenAPI Spec and API Blueprint.

My answer to them is: you deliver d) All Of The Above. I fully get that you will have your own unique view of the API space, and of what your tools and services will deliver, so you should be defining your own schema, but you also can't ignore what is happening with OpenAPI Spec and API Blueprint either. There is a groundswell of services, tooling, and savvy API architects and developers using these existing API definition formats, and you do not want to be an island in this very connected sea.

Some examples of this already in motion can be found with APIMATIC and Runscope. Both of these providers fluently speak multiple existing API definition formats, but they also have their own custom format for describing what their service(s) brings to the table. I'd say the one difference is that Runscope is more focused on their own format, emphasizing the import / export of the Runscope version, while APIMATIC is holding their version behind the scenes and emphasizing the use of existing API definition formats.

I think it is important that service providers flex both their own spec, as well as supporting existing formats like these providers do, even if it is just by using the API Transformer service. However, there is also an opportunity at the intersection of these two worlds, with the x- extensions for OpenAPI Spec. I honestly am unsure if API Blueprint has similar features, but will find out shortly. This is where I see API service providers merging their own custom schema with the existing OpenAPI Spec. You can do the same within the API discovery format APIs.json, when it comes to defining your API indexes and collections.
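
As a quick illustration of what I mean, here is a hypothetical Swagger 2.0 fragment, where a vendor extension carries a service provider's custom details alongside the standard definition--the x- property name and its fields are made up for the example:

    {
      "swagger": "2.0",
      "info": {
        "title": "Example API",
        "version": "1.0"
      },
      "paths": {},
      "x-example-provider": {
        "importFormats": ["swagger", "api-blueprint", "postman"],
        "testSchedule": "hourly"
      }
    }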

As I talk OpenAPI Spec, API Blueprint, and APIs.json with more providers, I am finding that the majority of them understand the importance of supporting the leading formats, but also see aspects of the space that don't get covered in these formats, and want to better contribute their own vision of the space. I support both of these paths, as long as we do not ever find ourselves cornered in our dogma silos, only believing in a single API definition format, or only supporting our own proprietary format that nobody else speaks. #Balance #Interoperability

See The Full Blog Post


The Essential Building Blocks For Integration, Automation, and Reciprocity

I'm spending some time going through the v2 docs for the Zapier API, following last week's release of multi-step workflows, and code steps for calculating, converting, and manipulating data and content. While IFTTT gets a significant amount of the attention among the API reciprocity platforms I track on, I feel like Zapier is the most successful, and reflects most of what I'd like to see in an API driven integration and automation platform--specifically, the fact they have an API.

Along with keeping track of what Zapier is up to, I'm spending more time thinking about the increasing number of API driven bot platforms I'm seeing emerge, and API enabled voice platforms like Alexa Voice Service. As I was reading Zapier's platform documentation, I couldn't help but see what I'd consider to be the essential building blocks for any integration, automation, and reciprocity platform emerge:

  • Authentication - Providing the mechanisms for the most common approaches to API authentication, including basic auth, digest auth, session based, and OAuth.
  • Triggers - Providing the framework to use verbs and nouns, with help text, and webhook infrastructure, to trigger anything users will desire.
  • Actions - Providing the framework to use verbs and nouns, with help text, and webhook infrastructure, to accomplish any action users will desire.
  • Searches - Allowing for simple questions to be asked, and providing a framework that allows APIs to be employed in answering any question asked.
  • Webhooks - Putting the automation in API integration, allowing for webhooks that can be triggered, and used as part of actions.
  • Notifications - Using notifications throughout the process to keep the platform, developers, and end-users informed about anything that is relevant.
  • Scripting - Allowing for code integration for calculating, converting, and manipulating data as part of any step of the process.
  • Multi-step - Going beyond just triggers and actions, allowing for multi-step workflows that put multiple APIs to use.
  • Activation - Allowing developers and end-users of the integration and automation to decide whether the process is invite only, private, or publicly available.

While the scripting, multi-step, and activation pieces are pretty localized to Zapier, and other implementing platforms, the authentication, triggers, actions, searches, webhooks, and notifications are something all API providers should be thinking about as touch points with their own infrastructure. You should be supporting common approaches to API authentication, using meaningful verbs and nouns in your API design, and have a robust webhooks workflow available for your platform.
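
To ground the webhooks building block a bit, here is a minimal sketch of what a provider-side dispatcher could look like in Python--a trigger fires, and every URL subscribed to that event gets a POST with the payload. Everything here, from the event names to the subscriptions store, is hypothetical:

    import json
    import urllib.request

    # Hypothetical store of subscriber callback URLs, keyed by event name.
    subscriptions = {
        "contact.created": ["http://example.com/hooks/contacts"],
    }

    def fire_webhook(event, payload):
        """POST the event payload to every URL subscribed to this event."""
        body = json.dumps({"event": event, "data": payload}).encode("utf-8")
        for url in subscriptions.get(event, []):
            request = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(request)  # a real dispatcher would queue and retry

    fire_webhook("contact.created", {"name": "Jane", "email": "jane@example.com"})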

As I do my research, I'm constantly looking for the common building blocks of any single area of my research--in this case, API reciprocity. I'm adding these to the common building blocks in this research, but as you can see, the webhooks portion also overlaps with my webhooks research. In addition to this overlap, I am also looking for how these building blocks overlap other existing research areas like bots, and real time, and even some new areas I'm considering adding, like serverless technology.

I am intrigued by these interesting overlaps in my core research right now, between reciprocity, bots, real time, voice, webhooks, and virtualization. I'm also very interested in understanding more about how these areas are being applied in some of the sectors I am researching as part of my API Stack work, like messaging, social, and other sectors where APIs are making an impact.

See The Full Blog Post


The Bite-Size Resources We Will Need For The Bot And Voice Evolution In The API Space

I just finished looking through the documentation for the Zapier API, and for the Alexa Voice Service, trying to understand the approach these platforms are taking to incorporate API driven resources into their services. How do you translate a single API call into a Zapier trigger or action? How do you build a rich index of resources for Alexa to search via voice commands? Learning how APIs are being consumed, or can be consumed, is an infinitely fascinating journey for me, and something I enjoy researching.

All of my research into reciprocity, bots, and voice enablement via APIs makes me think more about experience based API design trumping much of the resource based dogma that has dominated much of the conversation. How will my APIs enable meaningful interoperability via Zapier, voice searches via Alexa, and stimulate interesting bot interactions? I am not just focusing on how my resources are defined; I am now also forced to think about how they will be searched, consumed, and put to use in these new client experiences.

Just like mobile significantly shaped how we craft our APIs, automation, voice, bots, and increasingly Internet connected devices will continue to define our API strategies. Users and developers will increasingly want small, meaningful resources that they can use to orchestrate their personal and business lives, and that will respond to simple voice commands at home, in the car, and via our mobile devices. When we are designing our APIs, are we thinking about the bite-size resources that will be needed in this emerging bot and voice driven evolution in the API space?

See The Full Blog Post


All This Information Is Great, But Where Do I Start With My API Strategy?

I shared a list of just the essential building blocks from across only 21 areas of the API life cycle with a company I'm helping craft an API strategy for, and I got some very common feedback--all this information is great, but where do I start with my API strategy? This is excellent feedback, as that particular overview is void of any specific 101, or on-boarding elements, for each of the areas of the API life cycle that I cover.

This is on purpose for this particular document, providing me with a generic outline that I can use in many different scenarios, but the feedback rings loud in my ears, and is something I get constantly. I receive a constant barrage of emails, tweets, and voice mails asking me where to get started with an API strategy, spanning both the provider and consumer side of the API coin.

The problem for me in all of this is that what "API" means, where someone is at in their journey, and what their overall goals are, can be radically different depending on who you talk to. What API design means to you may not be what it means to the next person. API deployment might be about gateway solutions in your organization, while for others it means cloud based deployment, or possibly specifically a Java API framework like Restlet. My goal in my research is to map out each area, distill down the common building blocks, and then, yes, build as many introductions (101), and on-boarding doorways as I can, targeting both API providers and consumers, across a wide variety of business sectors, and the public sphere.

To help illustrate my point, I get regular requests from folks around how they can pull all their data from Twitter via the API, or how to publish a video to all the leading video platforms via APIs. In the same day I will get requests on where to start in defining a rating system that applies to all APIs, or how to start with APIs at a large higher educational institution where you have no contacts, and a very hostile IT department. The scope is wide, and varies dramatically, so creating a single document providing everyone with a getting started point is very, very difficult.

With that said, it is what I am working towards. 2015 was very much about mapping the scope of the entire space, and organizing it into specific areas (right or wrong). 2016 is very much about making sense of those areas, and each stop along the API life cycle within the areas that I have mapped. I'm working on 101 pages for all 50 of the life cycle areas you find on my home page at the moment. I am already working with my partner 3Scale to take all the common building blocks I have, organize them into different areas, and craft what I'm calling "playbooks" for different verticals.

If you are looking to deploy a content or data API, there will be a playbook. If you are looking to get started with APIs at your existing institution, government agency, or enterprise organization, there will be a playbook for you. I plan on taking the overall map of the API life cycle I've drawn, and the common building blocks I've identified within these areas, and beginning to remix and publish them as specific scenarios, providing "playbooks" that hope to be relevant to more specific situations, helping organizations and individuals better understand where they should be starting with their API strategy.

See The Full Blog Post


My Tooling And API For Gathering And Organizing The Details Of The Plans And Pricing For APIs

A couple of weeks ago I started playing with a machine readable way to describe the pricing and plans available for an API. I spent a couple of days looking through over 50 APIs, and how they handled their pricing and API access plans, and gathered the details in a master list, which I am using for my master definition. I picked up this work and moved it forward over the last two days, further solidifying the schema, as well as launching an API, and a set of admin tools for me to use.

My primary objective is to establish a machine readable definition that I can use to describe the plans of the APIs I provide, as well as the ones that I monitor as part of my regular work in the space, but I needed an easier way to track the details of each API's plan. So I got to work creating a simple, yet robust admin tool that allows me to add one or many plans, for each API that I track on.

To help me drive this administrative interface I needed an API (of course) that would allow me to add, edit, and delete the details for each plan, using my API plan schema as a guide. I got to work designing, developing, and launching the first beta version of my API plans API, to help me gather and organize the details for any API I want, whether it's mine, or one of the many public APIs I track on.

Now that I have an API, and an administrative interface, I'm going to get to work adding the data I gathered from my previous research. I have almost 60 APIs to enter, then I hope to be able to step back, and see the API plan data I've gathered in a new light. Once I get to this stage, I'm looking to craft a simple embeddable page for viewing an API's plan, and create some visualizations for looking across, and comparing, multiple APIs. I'm looking to apply this concept to verticals, like business data via APIs like Crunchbase, AngelList, OpenCorporates, and others.

While my API plan schema is far from a stable version, it at least provides me with a beginning definition that I can use in my API profiling process. Here is the current version I have for the Algolia API, to demonstrate a little bit of what I am talking about.

This current version allows me to track the pages, timeframes, metrics, geo, limits, individual resources, extensions, and other elements that go into defining API plans, and then actually organize them into plans that pretty closely match what I'm seeing from API providers. For each plan I define, I can add specific entries that describe pricing structures, and other hard elements of API pricing, but I can also track on other elements, giving me a looser way to track aspects that impact API plans, but may not be pricing related.
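
The actual embedded definition isn't reproduced here, but to illustrate, a hypothetical entry following the elements just described might look something like this--the field names are my shorthand, and the numbers are placeholders, not Algolia's actual pricing:

    {
      "api": "Algolia Search API",
      "plans": [
        {
          "name": "starter",
          "entries": [
            {"metric": "records", "limit": 100000, "timeframe": "monthly"},
            {"metric": "operations", "limit": 1000000, "timeframe": "monthly"},
            {"metric": "price", "value": 49, "currency": "USD", "timeframe": "monthly"}
          ],
          "elements": [
            {"type": "geo", "value": "us-east"}
          ]
        }
      ]
    }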

I am pretty happy with what I have so far. I hope that in a couple of years this could be used as a run-time engine for API operations, in a similar way that the OpenAPI Spec, and API Blueprint, are being used today, but rather than describing the technical surface area, this machine readable definition format will describe the business surface area of an API.

See The Full Blog Post


Considering The Obvious And Subtle Differences Between Similar API Providers

Evaluating exactly which API is the "right" one can be very difficult. This is what I do full time, and it's hard for me to understand the differences--I cannot imagine what it is like for people who have real jobs. I've used both the Crunchbase API and the AngelList API for some time now, to help me better profile the companies that I am paying attention to as part of my research. I use both the website and API for both of these business data platforms on a regular basis.

As I read about the recent investment in Crunchbase, and their changes in pricing, I'm reminded of my half finished work incorporating the OpenCorporates API, and taking another look at all of my business data sources. I won't be able to afford Crunchbase after the changes, which will suck, but honestly I'm not a big fan of their overall operations and approach, and am happy to see the data source leave the stack of APIs that I depend on.

This leaves me assessing what is next, as any major API plan change will do to consumers. In addition to needing the data behind each company I monitor in the API space, I need a public page to reference across my storytelling. It is unfortunate that I've used Crunchbase for this historically. In an effort to keep my readers informed, I've also been supporting the very closed (and increasingly so) platform. When I pick up my business API integration work, I will switch off Crunchbase, keep my AngelList feed, and turn on OpenCorporates, but I will also start using the public OpenCorporates page as the URL reference in my storytelling.

For me, it is important to find the resources I need to run my business, but it is also important to support services that have healthy practices. I have an overall objective to have relevant business data on the companies I monitor, something Crunchbase and AngelList have provided a portion of. However, I am also having to weigh what information OpenCorporates provides me, and where this overlaps with, or adds to, Crunchbase and AngelList. I will lose some valuable information when I turn off Crunchbase that I will not be able to replace with OpenCorporates. This is ok. The ethical trade off between supporting OpenCorporates and Crunchbase is worth it to me.

There are other sources of information, like Mattermark, but similar to the path Crunchbase is on, their services are priced out of range for small startups like me, who really aren't playing the VC game. OpenCorporates has a more tiered approach to pricing, making it more accessible to me, allowing me to get access to data, and scale sensibly as my needs grow. OpenCorporates also allows me to add data to their corporate database, which is something I will explore through an API lens in future stories. In this post, I just wanted to explore the obvious and subtle differences between these similar API providers who provide business information.

Each of these providers offers up different data points, coupled with different API plans, as well as licensing and terms of service. Understanding the differences takes some serious evaluation, lending weight to the fact that many companies, and application developers, will be needing API brokers in the future. Why I choose one provider over another may not be the same thing other companies will be considering, as I have different objectives, but having as many details of these API operations profiled as possible is important, to help other companies go through the same process that I am going through--something I will keep working on as the API Evangelist, through my API Stack work.

See The Full Blog Post


Looking For Partners At Every Turn When Planning Your API Evangelism

One reason for having a well thought out, comprehensive API strategy is that you are thinking about all the moving parts, and at every turn you can weave things together, and potentially amplify the forward motion of your API operations. With every new release you should be considering all other areas of your API strategy, how you can include your API consumers and your partners, and where all of the opportunities lie when it comes to evangelism and storytelling.

This approach was present in the latest release from Postman, with their new Run in Postman embeddable button. As part of the release of the new embeddable button for their API client tooling, Postman also made a call for partners. It was a savvy marketing technique to reach out to partners like this, something that will not just generate implementations in the wild, but will also establish new partnerships, and strengthen existing ones, while also generating new blog posts and press releases around API platform operations.

When it comes to API Evangelist operations, the approach by Postman is gold--when I consider it across many areas of my own strategy. Postman is showcased as part of my API client research, their product release exists within my embeddable API research, and it also lives within my API evangelism and API partners research. The moral of this story is that Postman thought across their strategy when crafting the release of their new feature, and I put similar thought across my own strategy while reading their blog post, giving us both an opportunity for the biggest bang possible around a single API platform release.

See The Full Blog Post


API Evangelist, Assistant, and Broker

I was in Philadelphia last week, hanging out with educational technology practitioners, and at one of the dinners I found myself talking to a young lady who was a digital learning assistant at a university. She spent her days helping administrators and faculty understand what digital resources and tooling were available, and assisting them in putting those tools to use in their daily work.

This conversation left me thinking about the role I play in the API space. I long ago elevated myself above evangelizing any single API, a position I enjoy because of my partners 3Scale, Cloud Elements, Restlet, and WSO2. I elevated myself into this role because I saw nobody else doing it across the space, and saw an opportunity to make a wider impact, beyond just peddling a single product or service. My conversation in Philly made me realize how much of a general API assistant I am in my daily operations.

The role of an API assistant, helping others understand that APIs exist, and how they can be applied in our everyday personal and business lives, is an increasingly critical role in our society. How do you teach people they can pull corporate data from OpenCorporates, or Tweets from a specific #hashtag, into a Google Spreadsheet? How do you help people understand that APIs make the everyday exhaust from your world "organize-able", accessible, and more "orchestrate-able"--without knowing how to code!

I've talked about my work as an API broker, where I help organize coherent stacks of APIs for potential use in specific business verticals, and in web, mobile, or even device based applications. I'm adding API assistant to this spectrum now, because an API broker addresses the professional business side of the API equation, while an API assistant addresses the individual side of the API equation. I hope this schism in my own existence as API evangelist, broker, and assistant is also addressed at organizations, and independently in the API space, because we are sure going to need more expertise at these levels if we are going to make all of this work.

See The Full Blog Post


Importance Of Thinking Externally When Writing The Description For Your API

I look at a lot of APIs, and one characteristic I judge them by is their ability to simply explain what the API does. The most important aspect of doing APIs, for any individual or company, is that the process can potentially bring you out of your bubble, and make you think a little more externally.

One of the most common things I see from API providers is that they think in terms of, and present their API in the context of, the language they program in. It is a ".NET Web API", or a "Python API". When in reality it is a web API, and if it employs enough HTTP, it should be able to be consumed in any language.
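
To illustrate, here is a hypothetical endpoint being consumed in a few lines of Python--any language that speaks HTTP could do the same:

    import json
    import urllib.request

    # Any web API that employs HTTP is consumable like this, whatever it was built in.
    with urllib.request.urlopen("http://api.example.com/v1/contacts") as response:
        contacts = json.loads(response.read())
    print(contacts)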

Another regular mistake made by API providers is that they describe their API in terms of their own platform, and never actually tell you what it does. Here is an example of this from Pushbullet:

Pushbullet's API enables developers to build on the Pushbullet infrastructure. Our goal is to provide a full API that enables anything to tap into the Pushbullet network.

While I appreciate the short description of what the API does, I'm still left not knowing what it does. You see, I do not know about Pushbullet like you do, and chances are neither will many of your other potential API consumers. Not knowing about your company shouldn't get in the way of me learning what your API does.

To me, this is classic walled silo-itis, where programmers are operating within their little company or tech silo, and don't look much further. That is ok--this is why we do APIs, to help break you free of these cycles.

See The Full Blog Post


It Is All About No Limitations With The Enterprise API Plans

I am continuing to push forward my API plans research, where I look closely at the common building blocks of the service composition, pricing, and plans available from some of the leading API providers out there. I have no less than ten separate stories derived from the pricing page of Algolia, the search API provider--I will be using Algolia as a reference for how to plan your API, along with elder API pioneers like Amazon, and Twilio, for some time to come.

One area of Algolia's approach I think is worth noting is the enterprise level of their operations. They provide the most detail regarding what you get as part of the enterprise tier, being very public about their operations in a way you just do not see with many API providers. When it comes down to it, the Algolia enterprise search plans are all about no limitations--I think their description says it well:

Your own dedicated infrastructure. Don't like limits? Meet our dedicated clusters. Optimal for high volumes of data, they scale to thousands of queries per second. Search performance and indexing times have never been so good.

The basic building blocks of how Algolia is monetizing their search API--records and operation API calls--melt away at the enterprise level. The lower four plans for Algolia API access meter the number of records, and operation API calls you make, and charge consumers using four separate pricing levels. If you are an enterprise customer, the need for this metering melts away, eliminating the default limitations applied to lower levels of API consumption.
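
Here is a hedged sketch of what this kind of metered service composition looks like in code--the tier names, limits, and prices are placeholders I made up, not Algolia's actual pricing:

    # Hypothetical metered plans: (name, record limit, operation limit, monthly price).
    PLANS = [
        ("free", 10000, 100000, 0),
        ("starter", 100000, 1000000, 49),
        ("growth", 1000000, 10000000, 449),
        ("pro", 5000000, 50000000, 1999),
    ]

    def cheapest_plan(records, operations):
        """Return the lowest tier whose limits cover the measured usage."""
        for name, max_records, max_operations, price in PLANS:
            if records <= max_records and operations <= max_operations:
                return name, price
        return "enterprise", None  # no limits, pricing is a conversation

    print(cheapest_plan(250000, 2000000))  # ('growth', 449)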

I support more transparency in enterprise API plans, as well as in other partner tiers of access. I do not think Algolia's approach to delivering enterprise services is unique, but their straightforward, simple, and transparent approach to presenting it is. In an API driven world, the enterprise level of access does not always have to be that age old mating dance, involving smoke and mirrors, and pricing pulled out of a magic hat--it can just be about reducing the limitations around retail levels of API access, and getting business done.

See The Full Blog Post