
Decoupling The Solution Provided From The Product In Your Storytelling

I come across a number of really useful stories about APIs in my regular monitoring of the space whose authors can't seem to separate the solution their product delivers from the product itself. I get that you want people to know that your product does the really useful thing that you are telling the story about, but I want to help you understand that you are most likely turning people off to the solution by tightly coupling the solution story with your product and company.

This type of storytelling is more sales than it is evangelism. In my opinion, it shows you don't really have a good product. If you can't talk endlessly about what your product accomplishes without mentioning the product name or the company behind it, you probably don't have much of a thing in the first place. However, I'm guessing in many cases you just do not have the storytelling experience, both reading and writing, to understand the difference, and that is why I want to help you reach more people.

I know your boss is telling you to sell, sell, sell, and that you need to make your "numbers". The reality, though, is that people are being sold, sold, sold to all the time. They need actual solutions for the problems they face, and they appreciate the companies who focus on solutions, not just yet another vendor solution to be bombarded with. If you are going to take the time to craft content for your blog, Medium, or other popular channels, then take the time to thoughtfully disconnect your solution from the product--if you do it well, people will know it is you, and will find the product or company behind it when they are ready to implement your solution.

The Why Behind The Github GraphQL API

I wrote a skeptical piece the other day about GraphQL, which I followed up with another post saying I would keep an open mind. I've added GraphQL to my regular monitoring of the space--I don't have a dedicated research area for it yet, but if the conversation keeps expanding I will. A recent expansion in the GraphQL conversation for me was Github releasing the GitHub GraphQL API.

In the release blog post, Github provides exactly what I'm looking for in the GraphQL conversation--the reasons why they chose to start supporting GraphQL. In the post, Github describes some of the challenges API consumers were having with the existing API, which led them down the GraphQL path:

  • sometimes required two or three separate calls to assemble a complete view of a resource
  • responses simultaneously sent too much data and didn’t include data that consumers needed

They also talk about some of what they wanted to accomplish:

  • wanted to identify the OAuth scopes required for each endpoint
  • wanted to be smarter about how our resources were paginated
  • wanted assurances of type-safety for user-supplied parameters
  • wanted to generate documentation from our code
  • wanted to generate clients

Github says they "studied a variety of API specifications built to make some of this easier, but we found that none of the standards totally matched our requirements" and felt that "GraphQL represents a massive leap forward for API development. Type safety, introspection, generated documentation and predictable responses benefit both the maintainers and consumers of our platform". Some interesting points to consider, as I work to understand the benefits GraphQL brings to the table.
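To make the over- and under-fetching point concrete, here is what a single call against the new API might look like (a rough sketch--the fields are from GitHub's public GraphQL schema, but verify against their documentation):

    query {
      repository(owner: "octocat", name: "Hello-World") {
        name
        description
        stargazers {
          totalCount
        }
      }
    }

One request, returning exactly the fields asked for--no assembling a view across two or three REST calls, and no payload full of data you never wanted.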

I'm still processing the entire story behind their decision to go GraphQL, and will share more thoughts in future blog posts. With this major release from Github, I am now keeping an eye out for other providers who are headed in this direction. Hopefully, they will be as transparent about their reasons why as Github has been--this kind of storytelling around API design and deployment is important for the rest of the API community to learn from.

Syndicating API Evangelist Posts To Medium Using Their API

Now that I have API Evangelist back up to regular levels of operation after a summer break, I'm working to expand where I publish my content, and next up on the list is Medium. As with many other popular destinations, I refuse to depend completely on Medium for my blogging presence, but I recognize the network effects, and I'm more than happy to syndicate my work there.

To help me manage the publishing of my stories to Medium, I wired the Medium API into my API monitoring and publishing platform. I use the Github API to publish blog posts to API Evangelist, Kin Lane, and API.Report, and it is pretty easy to add a layer that will publish select stories to Medium as well. All I have to do is tag posts in a certain way, and my "scheduler" and the Medium API do the rest--see the sketch below.
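Under the hood it is a single HTTP POST per story (a rough sketch--the user id, token, and field values are placeholders, and the exact fields are worth double-checking against Medium's API documentation). The canonicalUrl field is the important one for syndication, pointing readers back to the original post on my domain:

    POST https://api.medium.com/v1/users/{userId}/posts
    Authorization: Bearer {integration-token}
    Content-Type: application/json

    {
      "title": "Example Post Title",
      "contentFormat": "html",
      "content": "<p>The body of the post...</p>",
      "canonicalUrl": "http://apievangelist.com/example-post/",
      "tags": ["APIs"],
      "publishStatus": "public"
    }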

I will be evaluating which of my stories go up to Medium on an individual basis. I don't want everything to go there, but I would like to open up some of my work for discussion on the platform. While I already share my API Evangelist posts to LinkedIn and Facebook, I will also be syndicating select stories using LinkedIn Publishing and Facebook Instant Articles next. I will only be publishing my content to platforms that bring value, but more importantly, that have APIs, so I can retain as much control as possible over my work from a central location within my domain.

You can find everything published under @KinLane over at Medium, something I might expand upon with specific publications in the near future, but for now, I'll keep it all under my user account.

Providing Branding And Attribution Assets With Each API Response

I am tracking on the approaches of API providers who have their branding act together when it comes to platform operations. I'm always surprised at how few API providers actually have anything regarding branding in place, especially when loss of brand control, attribution, and other related concerns seem to be at the top of everyone's list.

I was hooking up the Medium API to my API monitoring and publishing system, syndicating select stories of mine to the platform, and found myself thinking about how important an API branding strategy is (or should be) to content platforms like them. Medium doesn't let you pull posts via the API (yet), but if it did, I would make sure branding and attribution were the default.

Few API providers have their API brand strategy together, let alone provide easy to understand and find assets to support the strategy. It seems to me that if you are concerned about brand control, or just want to extend your brand across all the websites and mobile applications where your API resources are put to use, you would want to bake branding and attribution into the API response itself, as well as into a robust branding area of the developer portal.

I'm going to explore concepts around branding and attribution as a default layer of API access--everything from hypermedia approaches to providing link relations, to maybe including link relations in the header like Github does with pagination, but using branding and attribution focused link relations. I would like to be able to provide light footprint options that do not require changing up the JSON response, or adding an entirely new media type--something like the sketch below.
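As a rough sketch of the light footprint option (the rel values here are hypothetical--they are not registered link relations), an API response could carry branding in its headers, much like Github's pagination links:

    HTTP/1.1 200 OK
    Content-Type: application/json
    Link: <https://example.com/branding/logo.png>; rel="branding",
          <https://example.com/attribution>; rel="attribution"

    { "title": "Example Post", "content": "..." }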

When Medium does open up a GET for posts on the platform, I'd be stoked if there were branding and attribution elements present, driven by settings in my account. I'm not under the delusion that every developer who makes a call to an API will respect branding guidelines, but if it is front and center with every API call, and easy to implement, the chances increase dramatically.

Anyways, some food for thought around branding. I will push this topic forward as I have time and maybe play with a prototype for the API Evangelist blog. I'd love for consumers and syndicators of my content to be able to extend the reach of my brand, or at least send some love my way with a little attribution.

Google Spreadsheets As An Engine For API Goodness

I was watching my partner in crime Audrey Watters (@audreywatters) build her weaponized edu Twitter bot using a Google Spreadsheet as an engine--something she learned from Zach Whalen, a professor at the University of Mary Washington. Audrey is not a programmer, but she has become extremely proficient at building these little bots and using the Twitter API--demonstrating the potential of Google Sheets as an engine for API-driven bot solutions, or in this case, bot mayhem.

Zach's approach is extremely well defined--you will have to copy the spreadsheet and go through it yourself to see. Everything you need to get the job done is there, from step by step instructions, to storing your API tokens, to planting the seeds for your bot intelligence. This is the kind of API stuff I'm always talking about when I say that APIs shouldn't just be for developers--all it takes is having no fear of APIs, and well laid out blueprints like the one Zach has provided.

It is an approach I'd like to explore more as I have time. I'm not a big fan of the spreadsheet, but I fully get its role amongst muggle society. Spreadsheets keep me fascinated because of the many dimensions of API potential they possess. Spreadsheets can provide APIs, consume APIs, and as Zach's approach to bot development demonstrates, they can be a pretty serious engine for driving API goodness.

API Branding Embeddables That Can Boost My API Rate Limits

I'm expanding on my API branding research, putting some thought into how we might be able to include branding and attribution in API responses. Next, I'd like to brainstorm ways to incentivize both API providers and API consumers to employ sensible branding practices. You'd think API providers would be all over this stuff, but for some reason, they seem to need as much encouragement and structure as API consumers do--this is why I want to explore how I can drive both sides.

First, why do I care about branding when it comes to APIs? Well, the more successful companies are with their APIs, the more their company's brand can be not just protected, but enhanced--and the more APIs are seen in a positive light, rather than as the threat to brand control that is often cast on them. And, the more APIs we have, the more access to valuable data and content for use in web and mobile applications.

While there are many nuances to API branding, it often centers around making text, link, and image assets available for developers to use wherever they put API driven data, content, and algorithms to work. API providers have many different approaches to branding requirements and enforcement, but few actually provide rich assets and tooling to support a coherent branding strategy. All it takes is a handful of logos, some JavaScript APIs, and guidance for developers, and an API can significantly extend the reach of any brand--not hurt it, as many perceive an API will do.

The benefits of branding to API providers are clear to me, but I'd like to explore what we can do to incentivize API consumers. What if, with all the tracking of where branding and attribution are deployed (aka API brand reach), we tracked each domain or subdomain, as well as each impression of text, logos, and other assets? What if network reach and brand exposure could buy me API credits, and raise my API rate limits as a consumer? I mean, as a developer I'm potentially extending the reach of your brand, providing you with valuable exposure, and potentially inbound links and traffic--if I am rewarded for doing this, the chances I execute healthy API branding practices will only increase.

Just some thoughts on incentivizing both the API provider and consumer sides of the coin when it comes to API branding. I am going to play around with a design for a simple set of logos and JavaScript APIs for supporting API branding assets. I'm also going to play around with baking links to these resources into API responses, either as JSON collections or present in headers (see the sketch below). Once in place, I'll have a better idea of the type of data I can collect, and how it can possibly be measured and applied to increasing API rate limits, or possibly credits for API access--all things I know developers will want.
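For the JSON collection variant, a minimal sketch of what a response might carry (all of the property names here are mine, purely illustrative):

    {
      "data": [ "...the actual API response..." ],
      "branding": {
        "logo": "https://example.com/branding/logo.png",
        "text": "Powered by Example API",
        "attribution": "https://example.com/"
      }
    }

Each impression of the logo, text, or attribution link could then be tracked as part of measuring brand reach.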

Stay tuned for more on API branding in coming weeks...

What I Mean When I Say API

People love to tell me the limitations of my usage of the acronym API. They like to point out that APIs were around before the web, that they are used in hardware, or that something is not an API unless it is REST. There are endless waves of dudes who like to tell me what I mean when I say API. To help counter-balance each wave, I like to regularly evolve and share what I mean when I say API--not what people might interpret me to mean.

When I say API, I am talking about exposing data, content, or algorithms as an interface for programmatic use in other applications via web technology. The application in "Application Programming Interface" means any "application" to me, not just a software application. Consider visualizations, image rendering, bots, devices, or any other way that web technology is being applied in 2016.

I do not mean REST when I say API. I do not mean exclusively dynamic APIs--it could simply be a JSON data store made available via a Github repo. If machine-readable data, content, and algorithms are being accessed using web technology, for use in any application, in a programmatic way--I'm calling it API. You may have your own interpretations, and be bringing your own API baggage along for the ride, but this is what I'm talking about when you hear me say API.

The PSA Peugeot Citroën’s APIs

A friend online turned me on to the API program out of Groupe PSA, the French multinational manufacturer of automobiles and motorcycles sold under the Peugeot, Citroën, and DS Automobiles brands, the other day. Rarely do I just generally showcase an API provider, but I think their approach is simple, clean, a nice start for a major automobile brand, and worthwhile to take note of.

Companies of all shapes are doing APIs, but very few have the awareness to make their API program public and accessible. I think the PSA Peugeot Citroën APIs are a pretty interesting set of resources to make available to car owners, and worth talking about:

  • Telemetry - Request data from the car: average speed, location, instantaneous consumption, engine speed, etc.
  • Maintenance Alerting - Request data from the various events or notifications that can be detected by the car: time before maintenance, fired alerts, etc.
  • Correlation - These APIs let your application compare your driving style with others.

Making vehicle data, events, and notifications available makes sense to me when it comes to vehicles and APIs, and seems like it should just be the default mode for all automobile manufacturers. The correlation API seems like a different category, elevated to more of an innovation class, beyond just the usual car activity. I'm not a huge car guy, but I know people who are, and being able to size up against the competition, or a specific community, could become a little addictive.

I've added the PSA Peugeot Citroën APIs to my monitoring of the space, and will keep an eye on what they are up to. I'm already following other US manufacturers like Ford and GM who have API efforts, as well as Japanese manufacturers like Honda. I may have to stop and take roll call in the world of automobiles, see who has official public API development efforts, and put some pressure on those who do not have their program together yet (like they'll listen).

If you know of any auto-related API efforts I do not already have in my auto API stack, please let me know--I depend on my readers to keep me tuned into which companies are the cool kids doing APIs.

Be Part Of Your Community, Do Not Just Sell To It

A recent story from Gordon Wintrob (@gwintrob) about how Twilio's distributed team solves developer evangelism has given me a variety of seeds for stories on API Evangelist this week. I love that in 2016, even after an IPO, I am still writing positive things about Twilio and showcasing them as an example for other API providers to emulate.

Twilio just gets APIs, and they deeply understand how to effectively build a community of passionate developers, as demonstrated by this statement from Gordon's story on developing credibility:

How do you have technical credibility? You have to really be part of your programming community. Each of us is a member of our community, not marketing or trying to sell to it.

It sounds so simple, yet it is something so many companies struggle with. An API community is often seen as something external, and oftentimes even the API is seen as something external--this is where most API efforts fail. I know, you are saying that not all companies can be API-first, where the API is the core product focus like it is at Twilio--it doesn't matter. Not being able to integrate with your developer community is more about your company culture than it is about APIs.

Another area where my audience will critique me is sales--you have to do sales to make money! Yes, and even Twilio has a sales team to come in at the right time. This is about building technical credibility with your developer community by truly being part of it--if you are always trying to sell to them, there will always be an us and them vibe, and you will never truly be part of your own community.

As an API provider, I always recommend that you get out there and use other APIs, to experience the pain of being an API consumer. Using Twilio, and participating in the Twilio community, should be the 101 edition of this--all API providers should spend a couple of months using the Twilio API and "actively" participating in their community before getting to work on their own API program.

A New API Programming Language SDK Icon Set

I was working on a forkable definition of my API portal, and I wanted to evolve the icons that I usually use as part of my API storytelling. I primarily use the Noun Project API to associate simple black and white icons with the stories I tell, companies I showcase, and topics I cover. One area where I find the Noun Project deficient is icons for specific technologies, so while working on my project I wanted to find a new source. I fired up the Googles and got to work.

I quickly came across Devicon, a set of icons representing programming languages and design and development tools, which you can use as a font or with SVG code. The Github repo for the project says they have 78 icons, with over 200 versions total. I used a set of the icons to display API SDKs on my API portal prototype, allowing anyone who forks it to turn on and off which programming languages they offer SDKs for--a sketch of how that works follows.
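A rough sketch of the toggle in my prototype (the element names are mine, and the icon classes follow Devicon's font class pattern--verify them against the Devicon docs). A list in the Jekyll _config.yml drives the display:

    # _config.yml -- flip SDK languages on or off
    sdks:
      - name: Python
        icon: devicon-python-plain
        enabled: true
      - name: Ruby
        icon: devicon-ruby-plain
        enabled: false

...and a Liquid loop in the portal template renders an icon for each enabled language:

    {% for sdk in site.sdks %}{% if sdk.enabled %}
      <i class="{{ sdk.icon }}"></i> {{ sdk.name }} SDK
    {% endif %}{% endfor %}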

Being pretty graphically challenged, as you can tell by my logo, I'm a big fan of projects like Devicon--especially when they make it so simple, and so good looking, all at the same time. If you need icons for your API portal, I recommend taking a look at what they are doing. They have images for all the technology that the cool kids are using these days, and they seem open to crafting more if you find something missing.

Why Would You Build A Business On APIs? They Are Unreliable!

People love to tell me how unreliable APIs are, while also echoing this sentiment across the tech blogosphere. I always find it challenging to reconcile how the entrepreneurs who spread these tales choose to put the blame on the technology, and not on the companies behind the technology, or more appropriately, the investment behind the companies. APIs are just a reflection of what is already going on within a company, and are neither good nor bad--they are just a tool that can be implemented well, or not so well.

I was taking some time this last week to work on my API monitoring system, which I call Laneworks. In addition to having my own API stack, I depend on a variety of other APIs to operate my business. As I was kicking the tires, poking around the code for some of my most valuable integrations, I found myself thinking about the stability and reliability of APIs, and how stable some APIs have been for me.

Since 2011 I have stored ALL the heavy objects (images, video, audio) used in my API monitoring and research on S3, and I have NEVER had to update the code. Since 2012 I have used Pinboard as the core of my API curation system, aggregating links I favorited on Twitter or added using my browser bookmarklet--again, I have NEVER updated the code that drives this. Since 2013 all of my public websites have run on Github using Github Pages, employing the Github API to publish blog posts, and all the other content and data used in my research.

The Amazon S3, Pinboard, and Github APIs make my business work. Three suppliers that have been working without a problem for five, four, and three years, respectively. The only thing I have had to do is pay my bills and keep my API keys rotated, and these reliable API vendors do the rest--storing my images, video, and audio, curating the news and other stories I share with you, and publishing the blog posts and web pages you use to browse my API research. So explain to me again, why would you want to build a business on APIs, when they are so unreliable?

Standards Evangelism

As the API Evangelist, I spend a lot of time thinking about evangelism (*your mind is blown*). From what I'm seeing, the world of technology evangelism has been expanding, with database, container, and other types of platforms borrowing the approaches proven by API pioneers like Amazon and Twilio. As I do work with Erik Wilde (@dret) around his Webconcepts.info project, and read an article about industrial automation standards, I'm left thinking about how important evangelism is going to be for standards and specifications.

Standards are super important, so I have to be frank--the community tends to suck at evangelizing itself in an accessible way that reflects the success established in the API world. I'm super thankful for folks like Erik Wilde, Mike Amundsen, and others who work tirelessly to evangelize API related web concepts, specifications, and standards. The importance of outreach and positive evangelism around standards reflects the reasons why I started API Evangelist--to make APIs more accessible to the masses.

This is why I have to get behind folks like Erik who step up to help evangelize standards. I do not have the dedication required to tune into the W3C, IANA, ISO, and other standards bodies, and I am super thankful for those who do. So if I can help any of you standards obsessed folks hone your approach to storytelling and evangelism, let me know. I'd love to see standards evangelism become commonplace--making standards more friendly, accessible, and known across the tech sector.

I Am Feeling The Same About YAML As I Did With JSON A Decade Ago

I have been slowly evolving the data core of each of my research projects from JSON to YAML. I'm still providing JSON, and even XML, Atom, CSV, and other machine-readable representations as part of my research, but the core of each project, which lives in the Jekyll _data folder, is all YAML moving forward.

When I first started using YAML I didn't much care for it. When the OpenAPI Specification introduced the YAML version, in addition to the JSON version, I wasn't all that impressed. It felt like the early days of JSON back in 2008 when I was making the switch from primarily XML to a more JSON-friendly environment. It took me a while to like JSON because I really liked my XML--now it is taking me a while to like YAML because I really like my JSON.

I do not anticipate that JSON will go the same way that XML did for me. I think it will remain a dominant machine-readable format in what I do, but YAML is proving to have more value as the core of my work--especially when it is managed with Jekyll and Github. I am enjoying having been in the industry long enough to see these cycles, and to be in a position where I can hopefully think more thoughtfully about each one as it occurs.

D3.js Visualizations Using YAML and Jekyll

I am increasingly using D3.js as part of my storytelling process. Since all my websites run using Jekyll, and are published entirely from Github repositories shared as Github Pages sites, it makes sense to standardize how I publish my visualizations.

Jekyll provides a wealth of data management tools, including the ability to manage YAML data stores in the _data folder. It is an approach I feel is not very well understood, and one that lacks real world examples of how to use it when managing open data--I am looking to change that.

I like my data visualizations beautiful, dynamic, and with the data right behind them--making D3.js the obvious choice. For this work, I took data intended for use in a bar and a pie chart and published it as YAML to this Github repository's _data folder. This approach to centrally storing machine-readable data, in the simple, more readable YAML format, makes the data behind visualizations much more accessible in my opinion.

The problem with using D3.js is that the visualizations need the data in JSON format. Thankfully, using Jekyll and Liquid, I can easily establish dynamic versions of my data in JSON, XML, or any other format I need. I place these JSON pages in a separate folder I am just calling /data--a sketch of the pattern follows.
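As a rough sketch of how the pieces fit together (file names and fields are mine, purely illustrative), the YAML in _data/bar_chart.yml might look like:

    - label: "Design"
      value: 25
    - label: "Deployment"
      value: 40

...and the JSON page in the /data folder is just empty front matter (which tells Jekyll to process the Liquid) plus a loop over that data:

    ---
    ---
    [
    {% for row in site.data.bar_chart %}
      { "label": "{{ row.label }}", "value": {{ row.value }} }{% unless forloop.last %},{% endunless %}
    {% endfor %}
    ]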

Now I have the JSON I need to power my D3.js visualizations. To share the actual visualizations, I created separate editions for my bar and pie charts, and have the HTML, CSS, and JavaScript for each chart in its own file.

There are two things being accomplished here: 1) I'm decoupling the data source in a way that makes it easier to swap different D3.js visualizations in and out, and 2) I'm centralizing the data management, making it easily managed by even a non-technical operator who just needs to grasp how Jekyll and YAML work--which dramatically lowers the barriers to entry for managing the data needed for visualizations.

There is definitely a learning curve involved. Jekyll, Github Pages, and YAML take some time to absorb, but the reverse engineerability of this approach lends itself to reuse and reworking by any data curious person who isn't afraid of Github. I'm hoping to keep publishing any new D3.js visualizations I create in this way, to provide small, forkable, data-driven visualizations that can be used as part of data storytelling efforts--everything here is available as a public repo.

As a 25-year data veteran, I find myself very intrigued with the potential of Jekyll as a data management solution, something that, when you combine it with the social coding benefits of Github and Github Pages, can unleash unlimited possibilities. I'm going to keep working to define small, modular examples of how to do this, and publish them as individual Github lessons for you to fork and learn from.

A Trusted Github Authentication Layer For API Management

I am reworking the management layer for my APIs. For the last couple of years, I had aspirations of running my APIs with a retail layer generating revenue for API Evangelist--something which required a more comprehensive API management layer. In 2016, I'm not really interested in generating revenue from the APIs I operate; I'm just looking to put them to work in my own business, and if others want access, I'm happy to open things up and broker some volume deals.

To accomplish this I really do not need heavy security or service composition for my APIs. I just need to limit who has access, so they aren't 100% public, and identify who is using them, and how much they are actually consuming. To facilitate this I am just going to use Github as a trusted layer for authentication. Using an OAuth proxy, I'll let my own applications authenticate using their respective Github users, and identify themselves using a Github OAuth token when making calls to each API.

Each application I have operating on top of my APIs has its own Github account. Once they do the OAuth dance with my proxy, my system will have a Github token identifying who they are. I won't validate that the token is still good with each call--that is something I'll verify each hour or day, and cache locally to improve API performance. Anytime an unidentified token comes through, I'll just make a call to Github, get the Github user associated with it, and check them against a trusted list of Github users who I have approved for accessing my APIs.
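That per-token lookup is a single call to the Github API (a sketch--the token is a placeholder), with the login field from the response being what gets checked against my trusted list:

    GET https://api.github.com/user
    Authorization: token {github-oauth-token}

    { "login": "trusted-app-account", ... }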

I'm not really interested in securing access to all the content, data, and algorithms I'm exposing using APIs. I'm only looking to identify which applications are putting them to work and evaluate their amount of usage each day and month. This way I can monitor my own API consumption, while still opening things up to partners or any other 3rd party developer that I trust--if they are using too much, I can drop them a message to have a conversation about next steps.

I'm still rolling this system out, but it got me thinking about API access in general, and the possibility that a trusted list of Github accounts could be used to expedite API registration, application setup, and obtaining keys. Imagine if, as a developer, I could just ping any API, do an OAuth dance with my Github credentials, and get back my application id and secret keys for making API calls--all in a single flow. As an API provider, I could just maintain a single trusted list of Github users, as well as consult other lists maintained by companies or individuals I trust, to reduce friction when onboarding, or automatically approve developers for higher levels of access and consumption.

Putting The Concept Of The Public API To Rest As A Dominant Narrative

APIs come in all different shapes and sizes. I focus on a specific type of API that leverages web technology to make data, content, and algorithms available over the Internet. While these APIs are available on the open Internet, who has the ability to discover them, and put them to use, will vary significantly. APIs have gained in popularity because of successful publicly available APIs like Twitter and Twilio, something that has contributed to these types of APIs being the dominant narrative of what APIs are.

A lack of awareness of what modern approaches to API management can do for securing web APIs, as well as the dominance of this narrative that APIs need to be open like Twitter and Twilio, tends to set the bar at unrealistic levels for API providers. Who has access to a web API is just one dimension of what APIs are, and sharing content, data, and algorithms securely via the web should be the focus. It's not about whether you should do public or private APIs--it is about how you will be sharing your resources in a digital economy.

While I encourage ALL companies, institutions, and government agencies to be as transparent as they possibly can regarding the presence of their APIs, their documentation, and other resources--who actually can access them is entirely up to the discretion of each provider. You should treat ALL your APIs like they use public infrastructure (aka the web), secure them appropriately, and get to work making sure all your digital resources are accessible in this way, without being bogged down by useless legacy discussions.

This is why I support putting the concept of the public API to rest as a dominant narrative around what an API is--you shouldn't hear me talking about public vs private anymore. If you do, slap me.

Providing YAML driven XML, JSON, and Atom using Jekyll And Github

The power of Jekyll on Github Pages as a data management solution is not a very widely held concept. I'm always amazed at how technologists and programmers don't understand Jekyll, let alone how it can be used as a data engine--maybe I can help a little by sharing my own usage. As I develop examples of this in action, I want to publish them as Github repositories that anyone can fork and reverse engineer for use in their own work.

While it was not love at first sight for me, I'm increasingly becoming a fan of using YAML for storing and managing a significant portion of the data I use across my business. Part of the reason I'm using YAML is its readability. The other reasons stem from the augmented benefits of using Jekyll and Github Pages to store and syndicate machine readable YAML for use across my storytelling--when you put YAML data into the _data folder of any Jekyll site, it opens up a new world of possibilities.

Any YAML data I put into the _data folder immediately becomes objects I can work with across any HTML page within a Jekyll site, using Liquid. Where this really starts to impact my world is when I begin dynamically generating other formats from the data stored as YAML.

First up is JSON. Here is the file I am using to generate a JSON representation of my central YAML file stored in the _data folder:
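A sketch of what that file looks like (assuming the YAML is a products list with name, url, and date fields--the empty front matter is what tells Jekyll to process the Liquid):

    ---
    ---
    [
    {% for product in site.data.products %}
      {
        "name": "{{ product.name }}",
        "url": "{{ product.url }}",
        "date": "{{ product.date }}"
      }{% unless forloop.last %},{% endunless %}
    {% endfor %}
    ]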

When I view it in my browser, via the Jekyll-driven, Github Pages-published website, I get a separate JSON representation of the data.

Next up is XML. Here is the file I am using to generate an XML representation of my central YAML file stored in the _data folder:
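The same pattern, sketched for XML (again, the product fields are illustrative):

    ---
    ---
    <?xml version="1.0" encoding="UTF-8"?>
    <products>
    {% for product in site.data.products %}
      <product>
        <name>{{ product.name }}</name>
        <url>{{ product.url }}</url>
        <date>{{ product.date }}</date>
      </product>
    {% endfor %}
    </products>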

When I view it in my browser, via the Jekyll-driven, Github Pages-published website, I get a separate XML representation of the data.

Next up is Atom. Maybe I want a feed of the latest products added to the catalog, so here is the file I am using to generate an Atom representation of my central YAML file stored in the _data folder:
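A sketch of the Atom version, leaning on Jekyll's date_to_xmlschema filter for the timestamps (the feed title and fields are illustrative):

    ---
    ---
    <?xml version="1.0" encoding="UTF-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Product Catalog</title>
      <updated>{{ site.time | date_to_xmlschema }}</updated>
    {% for product in site.data.products %}
      <entry>
        <title>{{ product.name }}</title>
        <link href="{{ product.url }}"/>
        <updated>{{ product.date | date_to_xmlschema }}</updated>
      </entry>
    {% endfor %}
    </feed>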

When I view it in my browser, via the Jekyll-driven, Github Pages-published website, I get a separate Atom XML representation of the data.

From a single YAML file, I just generated a JSON, XML, and Atom representation of the same list of products. It is all stored in a Github repository, and published as a Jekyll website hosted using Github Pages. This particular Github repo is meant to just be a demonstration of what is possible using Jekyll, YAML, and Github Pages. I will use this work as a base in a variety of other projects, where I use these various formats to drive web and mobile applications, as well as visualizations and analytics used across my API storytelling.

There are a wealth of reasons why I conduct this type of work. First, it is work I will use in my own research and storytelling, which all operates using Github and Jekyll. Second, I do my work out in the open, using open source tools and definitions, published as Github repositories, making my work forkable and reusable by others. There is a learning curve involved with unpacking what is happening here, but I feel pretty strongly that these are reusable modules that anyone can put to use--not just developers.

I will publish other examples of this in action as I develop them. When I need the Liquid scripts to generate JSON, XML, or Atom feeds in any of my projects, I will just visit this repo and copy / paste. When I develop new ones, I will generalize them and publish them here for everyone to use as well.

My Forkable Minimum API Portal Definition

I am updating my minimum API portal definition so I can apply it to my own API infrastructure, and since I operate 100% on Github using Github Pages and Jekyll, I have made it a forkable API portal definition that anyone can put to work as their own API developer portal. This edition of my API portal definition uses Bootstrap for its UI, and Jekyll for the CMS, making it pretty extensible and remixable once you fork it on Github.

My goal was to make a simple, forkable API portal that could act as a checklist for API providers looking to quickly set up a presence for their API. I know many companies, institutions, and government agencies do not have the resources to host one, let alone the time to pay attention to all the details--that is my job! To help API providers out, I have included what I feel is a complete API portal definition in the _config.yml for the Jekyll site.

All you have to do is scroll down the API portal definition, comment out what you don't want, and fill in the areas you do want, and the Jekyll site should do the rest (see the sketch after this list). I've included the most common areas I like to see from all API providers in my definition:

  • Portal
  • Simple Description
  • Getting Started
  • Authentication
  • Documentation
  • Discovery
  • Code
  • Communication
  • Plans
  • Self-Service Support
  • Direct Support
  • Road Map
  • Issues
  • Change Log
  • Legal
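To give a rough sense of how the _config.yml toggles work (a sketch, not the actual file--the element names and structure here are illustrative):

    # _config.yml -- comment out the areas you don't want
    portal:
      name: Example API
      description: A simple description of what the API does.
    getting-started:
      enabled: true
    documentation:
      enabled: true
    # road-map:
    #   enabled: true
    # issues:
    #   enabled: true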

This is just the first draft of my forkable API portal definition. I am going to apply it to my Kin Lane and API Evangelist API infrastructure, as well as a handful of independent APIs that I operate. Then I'm going to apply it to a couple of government APIs I want to simplify, like the USGS Water Services API I am working on, to harden it a little bit. Sometimes all it takes is better organizing the information for an API to help make it more accessible and intuitive, reducing the friction in getting up and running.

I would wait a couple of weeks to fork the API portal definition, until I stabilize it some more against the other APIs, unless you are feeling adventurous. If you aren't afraid of working with YAML, Jekyll, and Liquid driven HTML, the API portal is pretty fun to play with. If nothing else, you can use the _config.yml as a checklist to think through as you review or craft your own API portal.

My Dream API Sketchbook And Portfolio

I have a vision of an API notebook in my head that I desperately want to get out. First of all, I want to come up with another name for it, which is a journey that always starts with playing around with synonyms. Direct synonyms of notebook include diary, journal, log, workbook, pad, and binder--yes, all of that is relevant to what I would like to see. After that, a few other words resonated, including album, collection, portfolio, and registry.

This isn't just a folder to put my API definitions in. It will be the place where I go to find all the definitions of the 3rd party APIs which I depend on, as well as the APIs I'm designing, deploying, and operating as part of my own business. I want to be able to just record ideas, sketches, and thoughts I have as I'm thinking about APIs. I want to be able to annotate APIs that I find, and iterate, remix, and riff off of other API designers' and architects' work. Maybe an API sketchbook?

An artistic design book is just the beginning. I'd also like it to be my professional sketchbook, where my business partners and customers can discover the APIs I depend on, and share them with the world. I want it to be a directory of APIs that are relevant to my business. My sketchbook is where I'm creative, but it is also where I get business done. Maybe it is acting as an API portfolio? I want it to be fun, but also accommodate my professional existence as well.

I just want a place where I can design and evolve the API definitions that impact my world. I want to be able to share them, as well as subscribe to other people's API designs. I want my designs versioned, and to be able to play back their evolution, see a timeline, and possibly the attributions of where I found my API inspiration, or leveraged an existing definition or specification. My API sketchbook / portfolio should evolve with me, and help me make sense of all the API definitions I use, discover new ones, and always help me not reinvent the wheel.

Crafting my dream API sketchbook / portfolio is easier to write about than it will be to build. Doing it simply and beautifully will be hard. Doing it so it scales, and allows me to manage not just 10s or 100s of APIs, but to work from thousands of APIs, will be very difficult. I'm hoping it is something that I won't have to build on my own, and is an idea that many will contribute to, helping push forward the conversation around how we craft, store, organize, and collaborate around the APIs that are increasingly playing a central role in our business and personal worlds.

A Twilio Process To Emulate Within Your Own API Operations

Leading API providers do not always make me happy with the way they conduct themselves, but it always makes me smile when one of the top API providers of the last five years continues to consistently do things right, and set a good example that I can write about. I am not delusional enough to think that everything is perfect behind the Twilio curtain, but a story from Gordon Wintrob (@gwintrob) about how Twilio's distributed team solves developer evangelism leaves me hopeful (once again) about the potential of APIs.

There are several gems in this post, but the one that stood out for me, and that I think reflects the API potential more companies should be emulating, is about how Twilio designs, develops, and evolves new APIs. I think Gordon tells it best:

We also have a broader concept of our Developer Network, which handles a lot of the coding and writing for our public-facing documentation, blog posts, and our interactions with the community. Typically they’ll give feedback on the budding ideas for the new API. This feedback comes long before it goes out to the first beta customers.

The Developer Network brings a fresh set of eyes with less biased perspectives. They’ll say things like, “You know what? These parts of the API are awesome. This is what I would use it for.” or “These are the things that need work.” That way we know how the API would work for a developer at a hackathon or trying to finish the story points in a sprint. How do we make it as easy as possible for them?

Once the API or service comes together, we go to a closed beta process for a small group of customers. If we do a product announcement at all, then we’ll have a “request access” button. We’ll use that as a list of people that are really chomping at the bit to get coding. Then, after a period of time, when we have some API stability with people in our private beta process, we’ll switch to a public beta. Then it’s open to everyone who needs access.

We get more feedback before we go fully operational, but there should be no API instability after a public beta period. As an API company, we can’t go and change that underlying API once it’s in production. That would be a terrible experience. If we really need to change that endpoint API, it should be a new version.

Forgive me for copying and pasting this much from your story, Gordon, but I think it needs isolation as its own story. This is the approach to designing, developing, and operating APIs that companies need to hear more about. These are the technical product development benefits which being API-first can bring to the table. It's not just about making data, content, and algorithms available via the web, it is about opening up the conception, design, and iteration of these API resources in a structured, collaborative, and evolutionary lifecycle on the web.

This is what makes Twilio such a great API role model to showcase. I know Gordon is telling pretty stories originating from Twilio, but like the secret to Amazon's success--these stories can have a significant impact on how individuals, companies, institutions, and government agencies approach technology within their own operations. Thanks for such a good story, Gordon, and for providing me with some material to riff on.