{"API Evangelist"}

The Sharing Of Data Via APIs Will Be Key To The Viability Of Every Industry

As I was processing some guidelines around the importance of sharing data in the cybersecurity theater, a story came on NPR about the importance of data sharing when it comes to the emerging self-driving car market. I do API Evangelist not to encourage everyone to do APIs and be the next Twitter, but to help everyone understand the importance of sharing machine-readable information in a secure and accessible way, making all industries healthier, more secure, and more viable environments for digital transformation (man I hate that phrase).

APIs are about sharing information in machine-readable formats like YAML, JSON, and XML. Modern approaches to API deployment and management make this information available in a self-service, secure way, ensuring those who should have access do, in a 24/7, always-on environment. Sharing cybersecurity threat information, and sharing real-world and laboratory data around self-driving cars, both fall into this area, and APIs should be a default for anyone playing in these industries.

We are seeing API efforts emerge for expediting Zika virus research, by sharing machine-readable data that other researchers can put to work in their own efforts, without reinventing the wheel and duplicating work within many separate silos--we need more of this in other industries. If you need a primer on how APIs can be put to work in your industry, feel free to reach out to me. APIs are not the next vendor solution, and if someone is telling you this, you should run the other direction. APIs are about sensibly and securely sharing critical information by default across your organization, and industry, using low-cost web technology--not buying the next product or service.

See The Full Blog Post


Learning From The Sunlight Foundation Situation And Baking Transparency Into Projects

As I work through the APIs and Github repositories of the soon-to-be-gone Sunlight Foundation, I wanted to take some more time to help open data and API efforts realize the importance of real-time transparency and openness in their projects--specifically how Github can help contribute to this. I'm super stoked at the number of projects on the Sunlight Labs Github account, but after identifying the actual gaps between what is there and what is available in their APIs, I want to emphasize the importance of doing our work out in the open on Github when working on these types of projects.

In short, it is really difficult to package up any project once the hammer comes down, and a company or individuals are moving on. You'll never be as thorough in sharing the data, code, and the story behind it as you would have been in real time, while in the moment. Even if the lights aren't being shut off, it is extremely difficult to remember all the details after the fact--which is why I am an extreme advocate for being transparent by default throughout the life of any open data and API project in the service of government transparency.

Many technologists see sharing your work as it happens as extra work, but in my experience, it is actually the opposite, even before you come up against circumstances where you have to recreate work after the fact. Technologists also tend to view Github as purely for managing open source code, when in reality it can be used for much, much more. After reviewing the Sunlight Foundation's Github repos, here are a few areas that come to mind:

  • Data - Publishing raw JSON and YAML data as part of regular updates, including backups in native database formats (i.e., MySQL, etc.)
  • Server Code - Using Github to manage all server side code for APIs, with regular commits as the code evolves and changes.
  • Scraping Code - Making sure all libraries and code used as part of data and content scraping is shared on Github.
  • Client / Frontend Code - Currently the most common practice among providers, sharing client libraries on Github.
  • Visualizations - Publishing the data behind each visualization, along with the JavaScript code, like I'm doing with D3.js.
  • Storytelling - Jekyll on Github pages provides an easy, free way to publish a blog which can be used to tell the story in real-time.

Everything that runs API Evangelist exists on Github. This approach has fed my Knight Foundation-funded Adopta.Agency work. If you want to see an example of what I'm talking about, check out the Adopta.Agency Blueprint. It is much easier to make sure all of your work will live on beyond your organization or project's lifespan if you are publishing your work in real time on Github in this way. If you are using Github to manage your daily data, code, visualizations, and storytelling--you will be transparent by default, no extra work necessary.
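To make the data piece of this concrete, here is a minimal Python sketch of what publishing data to Github in real time can look like, using the Github contents API via the requests library. The repository, file path, and token are hypothetical placeholders--this is just one way to approach it, not a prescription.

```python
import base64
import json
import os

import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]   # a personal access token with repo scope
REPO = "example-org/open-data-project"      # hypothetical repository
PATH = "data/latest.json"                   # where the data lives in the repo


def publish_data(records):
    """Push the latest data dump into the repo so every update is versioned and public."""
    url = "https://api.github.com/repos/{}/contents/{}".format(REPO, PATH)
    headers = {"Authorization": "token " + GITHUB_TOKEN}

    # The contents API needs the current blob SHA when updating an existing file.
    existing = requests.get(url, headers=headers)
    sha = existing.json().get("sha") if existing.status_code == 200 else None

    payload = {
        "message": "Nightly data refresh",
        "content": base64.b64encode(json.dumps(records, indent=2).encode()).decode(),
    }
    if sha:
        payload["sha"] = sha

    response = requests.put(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()["content"]["html_url"]
```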

No project is immune to what has happened with the Sunlight Foundation. All open data and API projects will go away some day, and if we are going to keep building on each other's work it should happen out in the open, in real time, in a way that anyone can fork. I'm not even touching on the zero costs associated with publishing using Github--something I identified when working on open data efforts for the White House, where I couldn't stand up any hosting environment, but I damn sure could stand up a Github repository. I'm not even talking about the social benefits of doing this in real time and attracting other key actors and players to your work even before it is done. I'm just talking about doing this for the benefit of when the lights are dimmed or shut off completely, as they are with the Sunlight Foundation.

I know there has been a lot of rhetoric, both good and bad, about using Github in government, but the benefits are real, and this type of work around open data and APIs is too important to ignore. If you need help understanding what I'm talking about, need open data or API efforts rescued, or want specific Github templates developed like my Adopta.Agency blueprint, feel free to reach out. I'm hoping to carve off some cycles to apply my approach to the existing Sunlight Foundation projects, helping keep them alive, as well as teach this approach to other upcoming open data and API efforts, to help ensure the work lives on.

See The Full Blog Post


Identifying The Important Work From The @SunlightFoundation I Would Like To See Live On

I am saddened to hear the news of the Sunlight Foundation dimming the lights on their important work around government transparency. They have provided me with a constant spotlight on government activity, and a model to follow when it comes to opening up government data and providing APIs that can make a difference. Having helped run non-profit organizations working to make social change, I know how difficult it can be to keep this kind of thing above water.

I have already reached out to the Sunlight Foundation staff letting them know I'm here to help with any API related projects, and happy to fund their existence until I can find a suitable, caring adoption situation for them. First up, I wanted to go through their APIs and make sure there is a current OpenAPI Specification snapshot for each one, in case they get shuttered (a small sketch of what such a snapshot looks like follows the list):

  • Capitol Words API - An API allowing access to the word frequency count data powering the Capitol Words project.
  • Congress API v3 - A live JSON API for the people and work of Congress. Information on legislators, districts, committees, bills, and votes, as well as real-time notice of hearings, floor activity, and upcoming bills.
  • Open States API - Information on the legislators and activities of all 50 state legislatures, Washington, D.C. and Puerto Rico.
  • Political Party Time API - Provides access to the underlying, raw data that the Sunlight Foundation creates based on fundraising invitations collected in Party Time. As we enter information on new invitations, the database updates automatically.
  • Real-Time Federal Campaign Finance API - A JSON and CSV API that delivers up-to-the-minute campaign finance information on federal candidates, committees, PACs and other groups that file electronically with the Federal Election Commission. Summary information for Senate candidate committees, which file on paper, is available as soon as they have been digitized by the FEC.
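Here is the kind of minimal snapshot I have in mind, sketched out in Python and dumped to YAML with PyYAML--the host, base path, and endpoint shown are illustrative placeholders, not the actual Sunlight definitions.

```python
import yaml  # PyYAML

# A bare-bones OpenAPI (Swagger 2.0) skeleton of the kind of snapshot I mean,
# using the Open States API as the subject. The host, path, and parameters
# below are placeholders for illustration only.
snapshot = {
    "swagger": "2.0",
    "info": {
        "title": "Open States API",
        "description": "Legislators and activities of all 50 state legislatures.",
        "version": "1.0",
    },
    "host": "openstates.org",          # placeholder host
    "basePath": "/api/v1",             # placeholder base path
    "schemes": ["https"],
    "paths": {
        "/legislators/": {             # illustrative endpoint
            "get": {
                "summary": "Search legislators",
                "parameters": [
                    {"name": "state", "in": "query", "type": "string"}
                ],
                "responses": {"200": {"description": "A list of legislators"}},
            }
        }
    },
}

with open("openstates-openapi.yaml", "w") as handle:
    yaml.safe_dump(snapshot, handle, default_flow_style=False)
```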

Next, I wanted to go through their Github repositories and identify any repos that contained data, server, or client code that is API related. In doing this I also found some interesting visualizations built on top of their work that I think are worthy of showcasing and preserving:

  • openstates - source for Open States' scrapers
  • foia-data - Tracking FOIA data across government agencies and departments
  • billy - scraping, storing, and sharing legislative information
  • census - A Python wrapper for the US Census API
  • congress - The Sunlight Foundation's Congress API
  • hall-of-justice - Working with criminal justice data
  • openstates-boundaries-postgres - PostgreSQL Docker container setup for Open States' implementation of the Represent Boundaries API
  • body-cam-bills - Police body camera bills
  • go-sunlight - Go bindings to the Sunlight APIs
  • read_FEC - Turn raw electronic FEC filings into meaningful data
  • python-sunlight - Unified Python bindings for Sunlight APIs
  • fara - Foreign influence database
  • openitup - A repository for projects that scrape data from government agencies
  • politwoops-data - Storage for data associated with politwoops
  • Capitol-Words - Scraping, parsing and indexing the daily Congressional Record to support phrase search over time, and by legislator and date
  • emailcongress - Lightweight Django web app and API to courier email messages to phantom-of-the-capitol using Postmark.
  • fcc-net-neutrality-comments - scraping, parsing and making sense of the flood of comments submitted to the FCC on net neutrality
  • mapviz - Visualize data from states
  • disbursements - Data and scripts relating to the publishing of the House expenditure reports, and hopefully the Senate's in future
  • databuoy - A spreadsheet-backed data catalog
  • politwoops-tweet-collector - Python workers that collect tweets from the twitter streaming api and track
  • regulations-scraper - Scraper of public comments on regulations.gov
  • ruby-sunlight - Ruby wrapper for the Sunlight Labs API
  • mlscrape - mlscrape is a library for site-specific automated website scraping based on human-annotated examples
  • openstates-api - Documentation for the Open States API
  • datacommons - The core of sunlightlabs' Data Commons project. Includes the Transparency Data site and the APIs that power TransparencyData.com and InfluenceExplorer.com
  • staffers - Interactive and searchable House staffer directory, based on House disbursement data
  • realtime-docs - API documentation for the realtime FEC project
  • nodejs-sunlight - Unified NodeJS bindings for Sunlight APIs.
  • python-transparencydata - Wrapper for TransparencyData.com API
  • carnitas - Email based API key registration
  • gun-data-explorer - visualization of federal- and state-level contributions from gun control and gun rights groups, with links to data.influenceexplorer.com
  • navis-openstates - WordPress plugin to embed legislator and bill information from the OpenStates API.

I only went back to 2013 looking for projects in the Sunlight Foundation Github account, because I felt that if something hadn't been updated or touched in that time, I probably wouldn't have time to touch it myself. In addition to visualizations, going through their Github account showed me there is some interesting scraping work that should be targeted for keeping alive, alongside the open data, API, and visualization projects.

There is a lot of amazing work present in the Sunlight Foundation APIs and Github repositories. I am just as bummed as many of the commenters on their blog and across the Twittersphere (some friendly, and some not so friendly), but having worked for the White House, and lived in the world of APIs for the last six years, I know that nothing is permanent, and all good things will go away in time--usually due to budgetary and investment scenarios. None of it stops me from doing the hard work that needs to be done, and there is still a lot more to play out around the Sunlight Foundation.

See The Full Blog Post


How To Discover The No-Name, No-Description Twitter Accounts Of Folks In The Enterprise

I am always fascinated by the online fence-sitting persona that is the enterprise tech industry employee. I know many of them are there, but few ever retweet my work or respond to my posts via comments or other channels. Usually, I only know that many of them are there from the occasional like on one of my stories, but I'm slowly developing other ways to build lists of Twitter users from behind the enterprise fence.

One of the best ways to root them out is to write positive stories about their company, products, group, or their flagship clients. When you do this, many of the no-name, no-description Twitter accounts of the enterprise fence sitters come out of the woodwork. They can't help themselves, it seems--many are so starved for genuine praise that when someone writes positively about what they are doing, they will retweet the story, revealing themselves.

I do not believe that everyone should be as open and transparent as I am online, especially if you have a boss breathing down your neck. Though I do like to know that you are there and that I occasionally write things you like. I would also enjoy more signals from you about what is important to you out in the space, and which topics impact your success in the enterprise trenches. You are always welcome to drop me a line via email if you don't feel comfortable talking out loud, or DM me now that I have triangulated your Twitter account and followed you.

See The Full Blog Post


Decoupling The Solution Provided From The Product In Your Storytelling

In my regular monitoring of the space I come across a number of really useful stories about APIs that can't seem to separate the solution their product delivers from the product itself. I get that you want people to know that your product does the really useful thing that you are telling the story about, but I want to help you understand that you are most likely turning people off to the solution by tightly coupling the solution story with your product and company.

This type of storytelling is more sales than it is evangelism. It shows you don't really have a good product, in my opinion. If you can't talk endlessly about what your product accomplishes without mentioning the product name or the company behind it, you probably don't have much of a thing in the first place. However, I'm guessing in many cases you just do not have the storytelling experience, both reading and writing, to understand the difference, and that is why I want to help you reach more people.

I know your boss is telling you to sell, sell, sell, and that you need to make your "numbers". The reality, though, is that people are being sold, sold, sold to all the time. They really need actual solutions for the problems they face, and they appreciate the companies who focus on solutions, not yet another vendor product to be bombarded with. If you are going to take the time to craft content for your blog, Medium, or other popular channels, then take the time to thoughtfully disconnect your solution from the product--if you do it well, people will know it is you, and find the product or company behind it when they are ready to implement your solution.

See The Full Blog Post


The Why Behind The Github GraphQL API

I wrote a skeptical piece the other day about GraphQL, which I followed up with another post saying I would keep an open mind. I've added GraphQL to my regular monitoring of the space, and while I don't have a dedicated research area for it yet, if the conversation keeps expanding I will. A recent expansion in the GraphQL conversation for me was Github releasing the GitHub GraphQL API.

In the release blog post from Github, they provide exactly what I'm looking for in the GraphQL conversation--the reasons why they chose to start supporting GraphQL. In their post, Github describes some of the challenges API consumers were having with the existing API, which led them down the GraphQL path:

  • sometimes required two or three separate calls to assemble a complete view of a resource
  • responses simultaneously sent too much data and didn’t include data that consumers needed

They also talk about some of what they wanted to accomplish:

  • wanted to identify the OAuth scopes required for each endpoint
  • wanted to be smarter about how our resources were paginated
  • wanted assurances of type-safety for user-supplied parameters
  • wanted to generate documentation from our code
  • wanted to generate clients

Github says they "studied a variety of API specifications built to make some of this easier, but we found that none of the standards totally matched our requirements" and felt that "GraphQL represents a massive leap forward for API development. Type safety, introspection, generated documentation and predictable responses benefit both the maintainers and consumers of our platform". Some interesting points to consider, as I work to understand the benefits GraphQL brings to the table.
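To ground this a bit, here is a minimal sketch of calling the new Github GraphQL API from Python using requests--the repository being queried is just an example, and the token is assumed to live in an environment variable.

```python
import os

import requests

# A single GraphQL request that pulls a repository plus its recent open issues --
# the kind of view that could take two or three calls against a purely
# resource-based REST API.
query = """
{
  repository(owner: "octocat", name: "Hello-World") {
    nameWithOwner
    description
    issues(last: 3, states: [OPEN]) {
      nodes { title url }
    }
  }
}
"""

response = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},
    headers={"Authorization": "bearer " + os.environ["GITHUB_TOKEN"]},
)
response.raise_for_status()
print(response.json()["data"]["repository"])
```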

I'm still processing the entire story behind their decision to go GraphQL, and will share any more thoughts in future blog posts. With this major release from Github, I am now keeping an eye out for other providers who are headed in this direction. Hopefully, they will be as transparent about their reasons why as Github has been--this kind of storytelling around API design and deployment is important for the rest of the API community to learn from.

See The Full Blog Post


Syndicating API Evangelist Posts To Medium Using Their API

Now that I have API Evangelist back up to regular levels of operation after a summer break, I'm working to expand where I publish my content, and next up on the list is Medium. As with many other popular destinations, I refuse to completely depend on Medium for my blogging presence, but I recognize the network effects, and I'm more than happy to syndicate my work there.

To help me manage the publishing of my stories to Medium, I wired the Medium API into my API monitoring and publishing platform. I use the Github API to publish blog posts to API Evangelist, Kin Lane, and API.Report, and it is pretty easy to add a layer that will publish select stories to Medium as well. All I have to do is tag posts in a certain way, and my "scheduler" and the Medium API do the rest.
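For those wondering what the Medium side of this looks like, here is a minimal sketch using the Medium API from Python--the title, content, tags, and canonical URL are placeholders, and my actual scheduler has a few more moving parts.

```python
import os

import requests

MEDIUM_TOKEN = os.environ["MEDIUM_TOKEN"]  # a Medium integration token
headers = {"Authorization": "Bearer " + MEDIUM_TOKEN}

# Look up the authenticated user's id, then create a post under that account.
me = requests.get("https://api.medium.com/v1/me", headers=headers)
me.raise_for_status()
author_id = me.json()["data"]["id"]

post = {
    "title": "Example Syndicated Post",                          # placeholder story
    "contentFormat": "markdown",
    "content": "# Example Syndicated Post\n\nBody of the story goes here.",
    "canonicalUrl": "https://apievangelist.com/example-post/",   # point back home
    "tags": ["apis"],
    "publishStatus": "public",
}
response = requests.post(
    "https://api.medium.com/v1/users/{}/posts".format(author_id),
    headers=headers,
    json=post,
)
response.raise_for_status()
print(response.json()["data"]["url"])
```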

I will be evaluating which of my stories go up to Medium on an individual basis. I don't want everything to go there, but I would like to open up some of my work for discussion on the platform. While I already share my API Evangelist posts to LinkedIn and Facebook, I will also be syndicating select stories using LinkedIn Publishing and Facebook Instant Articles next. I will only be publishing my content to platforms that bring value, but more importantly have APIs, so I can retain as much control as possible over my work from a central location within my domain.

You can find everything published under @KinLane over at Medium, something I might expand upon with specific publications in the near future, but for now, I'll keep it all under my user account.

See The Full Blog Post


Providing Branding And Attribution Assets With Each API Response

I am tracking on the approaches of API providers who have their branding together when it comes to platform operations. I'm always surprised at how few API providers actually have anything regarding branding in place, especially when loss of brand control, attribution, and other concerns seem to be at the top of everyone's list.

I was hooking up the Medium API to my API monitoring and publishing system, syndicating select stories of mine to the platform, and found myself thinking about how important an API branding strategy is (or should be) to content platforms like Medium. Medium doesn't let you pull posts via the API (yet), but if it did, I would make sure branding and attribution were the default.

Few API providers have their API brand strategy together, let alone provide easy-to-understand and easy-to-find assets to support the strategy. It seems to me that if you are concerned about brand control, or just want to extend your brand across all the websites and mobile applications where your API resources are put to use, you would want to bake branding and attribution into the API response itself, as well as provide a robust branding area in the developer portal.

I'm going to explore concepts around branding and attribution as a default layer of API access--everything from hypermedia approaches to providing link relations, to maybe including link relations in the header like Github does with pagination, but using branding- and attribution-focused link relations. I would like to be able to provide light-footprint options that may not require changing up the JSON response or adding an entirely new media type.
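As a rough illustration of the header approach, here is a minimal Flask sketch that leaves the JSON response untouched and rides branding and attribution along in a Link header--the rel values used here are hypothetical, not registered link relations.

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/stories")
def stories():
    # The JSON body stays untouched -- branding and attribution ride along
    # in a Link header, the same way Github communicates pagination.
    response = jsonify({"stories": [{"title": "Example Story"}]})
    # These rel values are hypothetical, purely to show the shape of the idea.
    response.headers["Link"] = ", ".join([
        '<https://apievangelist.com/branding/logo.png>; rel="logo"',
        '<https://apievangelist.com/>; rel="attribution"',
        '<https://apievangelist.com/branding/>; rel="branding-guidelines"',
    ])
    return response
```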

When Medium does open up a GET for posts on the platform, I'd be stoked if there were branding and attribution elements present, driven by settings in my account. I'm not under the delusion that every developer who makes a call to an API will respect branding guidelines, but if it is front and center with every API call, and easy to implement, the chances increase dramatically.

Anyways, some food for thought around branding. I will push this topic forward as I have time and maybe play with a prototype for the API Evangelist blog. I'd love for consumers and syndicators of my content to be able to extend the reach of my brand, or at least send some love my way with a little attribution.

See The Full Blog Post


Google Spreadsheets As An Engine For API Goodness

I was watching my partner in crime Audrey Watters (@audreywatters) build the weaponized edu Twitter bot using a Google Spreadsheet as an engine--something she learned from Zach Whalen, a professor at the University of Mary Washington. Audrey is not a programmer, but she has become extremely proficient at building these little bots and using the Twitter API--demonstrating the potential of Google Sheets as an engine for API-driven bot solutions, or in this case bot mayhem.

Zach's approach is extremely well defined--you will have to copy and go through the spreadsheet yourself to see. Everything you need to get the job done is there, from step-by-step instructions, to storing your API tokens, to planting the seeds for your bot intelligence. This is the kind of API stuff I'm always talking about when I say that APIs shouldn't just be for developers--all it takes is having no fear of APIs, and well-laid-out blueprints like Zach has provided.

It is an approach I'd like to explore more as I have time. I'm not a big fan of the spreadsheet but I fully get its role amongst muggle society. Spreadsheets keep me fascinated because of the many dimensions of API they possess. Spreadsheets can provide APIs, consume APIs, and as Zach's approach to bot development demonstrates, they can be a pretty serious engine for driving API goodness.

See The Full Blog Post


API Branding Embeddables That Can Boost My API Rate Limits

I'm expanding on my API branding research, putting some thought into how we might include branding and attribution in API responses. Next, I'd like to brainstorm ways to incentivize both API providers and API consumers to employ sensible branding practices. You'd think API providers would be all over this stuff, but for some reason they seem to need as much encouragement and structure as API consumers do--this is why I want to explore how I can drive both sides.

First, why do I care about branding when it comes to APIs? Well, the more successful companies are with their APIs, the more their company's brand can be not just protected, but enhanced--and the more APIs are seen in a positive light, rather than as the threat to brand control they are so often cast as. And, the more APIs we have, the more access to valuable data and content for use in web and mobile applications.

While there are many nuances to API branding, it often centers around making text, link, and image assets available for developers to use wherever they put API-driven data, content, and algorithms to work. API providers take many different approaches to branding requirements and enforcement, but few actually provide rich assets and tooling to support a coherent branding strategy. All it takes is a handful of logos, some JavaScript APIs, and guidance for developers, and an API can significantly extend the reach of a brand--not hurt it, as many perceive an API will do.

The benefits of branding to the API provider are clear to me, but I'd like to explore what we can do to incentivize API consumers. What if, with all the tracking of where branding and attribution are deployed (aka API brand reach), we tracked each domain or subdomain, as well as each impression of text, logos, and other assets? What if network reach and brand exposure could buy me API credits, and raise my API rate limits as a consumer? I mean, as a developer I'm potentially extending the reach of your brand, providing you with valuable exposure, and potentially inbound links and traffic--if I am rewarded for doing this, the chances I execute healthy API branding practices will only increase.
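Here is a back-of-the-napkin sketch of the incentive math I'm describing--every number and weight here is made up, purely to show the shape of the idea.

```python
# A rough sketch of the incentive math: measured brand reach earns API credits,
# which raise the consumer's rate limit. The weights and caps are made up.
BASE_RATE_LIMIT = 1000          # calls per hour every consumer gets
CREDIT_PER_DOMAIN = 100         # each domain showing branding / attribution
CREDIT_PER_1K_IMPRESSIONS = 10  # each thousand logo or link impressions
MAX_RATE_LIMIT = 10000          # ceiling so credits cannot grow without bound


def effective_rate_limit(domains_with_branding, impressions):
    credits = (domains_with_branding * CREDIT_PER_DOMAIN
               + (impressions // 1000) * CREDIT_PER_1K_IMPRESSIONS)
    return min(BASE_RATE_LIMIT + credits, MAX_RATE_LIMIT)


# A consumer with branding on 3 domains and 50,000 tracked impressions:
print(effective_rate_limit(3, 50000))  # 1000 + 300 + 500 = 1800
```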

Just some thoughts on incentivizing both the API provider and consumer sides of the coin when it comes to API branding. I am going to play around with a design for a simple set of logos and JavaScript APIs for supporting API branding assets. I'm also going to play around with baking links to these resources into API responses, either as JSON collections or present in headers. Once in place, I'll have a better idea of the type of data I can collect, and how it can be measured and applied to increasing API rate limits, or possibly even credits for API access--all things I know developers will want.

Stay tuned for more on API branding in coming weeks...

See The Full Blog Post


What I Mean When I Say API

People love to tell me the limitations of my usage of the acronym API. They like to point out that APIs were around before the web, that they are used in hardware, or that it is not an API unless it is REST. There are endless waves of dudes who like to tell me what I mean when I say API. To help counter-balance each wave, I like to regularly evolve and share what I mean when I say API--not what people might interpret me to mean.

When I say API, I am talking about exposing data, content, or algorithms as an interface for programmatic use in other applications via web technology. Application in "Application Programming Interface" means any "application" to me, not just a software application. Consider visualizations, image rendering, bots, devices, or any other way that web technology is being applied in 2016.

I do not mean REST when I say API. I do not mean exclusively dynamic APIs--it could simply be a JSON data store made available via a Github repo. If machine-readable data, content, and algorithms are being accessed using web technology for use in any application, in a programmatic way--I'm calling it an API. You may have your own interpretations, and be bringing your own API baggage along for the ride, but this is what I'm talking about when you hear me say API.
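As a simple example of what I mean, here are a couple of lines of Python pulling a plain JSON file out of a (hypothetical) Github repository and putting it to work programmatically--no server-side code involved, but it still counts as API in my book.

```python
import requests

# A plain JSON file in a Github repo, retrieved over HTTP and put to work
# programmatically. The repository and file are hypothetical placeholders.
url = "https://raw.githubusercontent.com/example-user/example-data/master/companies.json"
companies = requests.get(url).json()

for company in companies:
    print(company.get("name"))
```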

See The Full Blog Post


The PSA Peugeot Citroën’s APIs

I was turned on to the API program out of Groupe PSA, the French multinational manufacturer of automobiles and motorcycles sold under the Peugeot, Citroën and DS Automobiles brands, by a friend online the other day. Rarely do I just generally showcase an API provider, but I think their approach is simple, clean, and a nice start for a major automobile brand, and worth taking note of.

Companies of all shapes are doing APIs, but very few have the awareness to make their API program public and accessible to the general public. I think the PSA Peugeot Citroën APIs are a pretty interesting set of resources to make available to car owners, and worth talking about:

  • Telemetry - Request data from the car: average speed, location, instantaneous consumption, engine speed, etc.
  • Maintenance Alerting - Request data from the various events or notifications that can be detected by the car: time before maintenance, fired alerts, etc.
  • Correlation - These APIs let your application compare your driving style with others.

Making vehicle data, events, and notifications available makes sense to me when it comes to vehicles and APIs, and seems like it should just be default mode for all automobile manufacturers. The correlation API is in a different category, elevated to more of an innovation class, beyond just the usual car activity. I'm not a huge car guy, but I know people who are, and being able to size up against the competition or a specific community could become a little addictive.

I've added the PSA Peugeot Citroën APIs to my monitoring of the space, and will keep an eye on what they are up to. I'm already following other US manufacturers like Ford and GM who have API efforts, as well as Japanese manufacturers like Honda. I may have to stop and take roll call in the world of automobiles, see who has official public API development efforts, and put some pressure on those who do not have their program together yet (like they'll listen).

If you know of any auto-related API efforts I do not already have in my auto API stack, please let me know--I depend on my readers to keep me tuned into which companies are the cool kids doing APIs.

See The Full Blog Post


Be Part Of Your Community, Do Not Just Sell To It

A recent story from Gordon Wintrob (@gwintrob) about how Twilio's distributed team solves developer evangelism has given me a variety of seeds for stories on API Evangelist this week. I love that in 2016, even after an IPO, I am still writing positive things about Twilio and showcasing them as an example for other API providers to emulate.

Twilio just gets APIs, and they deeply understand how to effectively build a community of passionate developers, demonstrated by this statement from Gordon's story on developing credibility:

How do you have technical credibility? You have to really be part of your programming community. Each of us is a member of our community, not marketing or trying to sell to it.

It sounds so simple, yet it is something so many companies struggle with. An API community is often seen as something external, and oftentimes even the API is seen as something external--this is where most API efforts fail. I know, you are saying that not all companies can be API-first, where the API is the core product focus like Twilio--it doesn't matter. Not being able to integrate with your developer community is more about your company culture than it is about APIs.

Another area where my audience will critique me is sales--you have to do sales to make money! Yes, and even Twilio has a sales team that comes in at the right time. This is about building technical credibility with your developer community by truly being part of it--if you are always trying to sell to them, there will always be an us-and-them vibe, and you will never truly be part of your own community.

As an API provider, I always recommend that you get out there and use other APIs, to experience the pain of being an API consumer. Using Twilio, and participating in the Twilio community, should be the 101 edition of this--every API provider spending a couple of months using the Twilio API, and "actively" participating in their community, before getting to work on their own API program.

See The Full Blog Post


A New API Programming Language SDK Icon Set

I was working on a forkable definition of my API portal and wanted to evolve the icons that I usually use as part of my API storytelling. I primarily use the Noun Project API to associate simple black-and-white icons with the stories I tell, companies I showcase, and topics I cover. One area where I find the Noun Project deficient is icons for specific technologies, so while working on my project I wanted to find a new source. I fired up the Googles and got to work.

I quickly came across Devicon, a set of icons representing programming languages and design & development tools, which you can use as a font or with SVG code. The Github repo for the project says they have 78 icons, with over 200 versions total. I used a set of the icons to display API SDKs on my API portal prototype, allowing anyone who forks it to turn on and off which programming languages they offer SDKs for.

Being pretty graphically challenged, as you can tell by my logo, I'm a big fan of projects like Devicon--especially when they make it so simple, and so good looking, all at the same time. If you are needing icons for your API portal I recommend taking a look at what they are doing. They have images for all the technology that cool kids are using these days, and they seem open to crafting more if you find something missing.

See The Full Blog Post


Why Would You Build A Business On APIs? They Are Unreliable!

People love to tell me how unreliable APIs are, while also echoing this sentiment across the tech blogosphere. I always find it challenging to reconcile how the entrepreneurs who spread these tales choose to put the blame on the technology, and not the companies behind the technology, or more appropriately the investment behind the companies. APIs are just a reflection of what is already going on within a company, and are neither good nor bad--they are just a tool that can be implemented well, or not so well.

I was taking some time this last week to work on my API monitoring system, which I call Laneworks. In addition to having my own API stack, I depend on a variety of other APIs to operate my business. As I was kicking the tires and poking around the code for some of my most valuable integrations, I found myself thinking about the stability and reliability of APIs, and how stable some APIs have been for me.

Since 2011 I have stored ALL heavy objects (images, video, audio) used in my API monitoring and research on S3. I have NEVER had to update the code. Since 2012 I have used Pinboard as the core of my API curation system, aggregating links I favorited on Twitter, and added using my browser bookmarklet--again I have NEVER updated the code that drives this. Since 2013 all of my public websites run on Github using Github Pages, employing the Github API to publish blog posts, and all other content and data used in my research.
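To give you a sense of how boring (in a good way) these integrations are, here is a minimal sketch of the S3 and Pinboard pieces using boto3 and requests--the bucket, file names, and tokens are placeholders, not my actual setup.

```python
import os

import boto3
import requests

# Store a heavy object (image, video, audio) on S3 -- unchanged in spirit since 2011.
s3 = boto3.client("s3")
s3.upload_file("screenshot.png", "example-bucket", "images/screenshot.png")

# Pull recently curated links out of Pinboard -- the same kind of call that has
# been quietly feeding my curation system since 2012.
pinboard = requests.get(
    "https://api.pinboard.in/v1/posts/recent",
    params={"auth_token": os.environ["PINBOARD_TOKEN"], "format": "json"},
)
for post in pinboard.json()["posts"]:
    print(post["href"], "-", post["description"])
```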

The Amazon S3, Pinboard, and Github APIs make my business work. Three suppliers who have been working without a problem for 5, 4, and 3 years. The only thing I have had to do is pay my bills and keep my API keys rotated, and the reliable API vendors do the rest: storing my images, video, and audio, curating the news and other stories I share with you, and publishing the blog posts and web pages you use to browse my API research. So explain to me, why would you want to build a business on APIs, when they are so unreliable?

See The Full Blog Post


Standards Evangelism

As the API Evangelist, I spend a lot of time thinking about evangelism (*your mind is blown*). From what I'm seeing, the world of technology evangelism has been expanding, with database, container, and other types of platforms borrowing the approaches proven by API pioneers like Amazon and Twilio. As I'm doing work with Erik Wilde (@dret) around his Webconcepts.info work, and reading an article about industrial automation standards, I'm left thinking about how important evangelism is going to be for standards and specifications.

Standards are super important, so I have to be frank--the community tends to suck at evangelizing itself in an accessible way that reflects the success established in the API world. I'm super thankful for folks like Erik Wilde, Mike Amundsen, and others who work tirelessly to evangelize API-related web concepts, specifications, and standards. The importance of outreach and positive evangelism around standards reflects the reasons why I started API Evangelist--to make APIs more accessible to the masses.

This is why I have to get behind folks like Erik who step up to help evangelize standards. I do not have the dedication required to tune into the W3C, IANA, ISO, and other standards bodies, and I am super thankful for those who do. So if I can help any of you standards-obsessed folks hone your approach to storytelling and evangelism, let me know. I'd love to see standards evangelism become commonplace--making standards more friendly, accessible, and known across the tech sector.

See The Full Blog Post


I Am Feeling The Same About YAML As I Did With JSON A Decade Ago

I have been slowly evolving the data core of each of my research projects from JSON to YAML. I'm still providing JSON, and even XML, Atom, CSV, and other machine-readable representations as part of my research, but the core of each project, which lives in the Jekyll _data folder, is all YAML moving forward.
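For anyone who hasn't made the jump, here is the same (hypothetical) record rendered both ways from Python, using the json module and PyYAML.

```python
import json

import yaml  # PyYAML

# One record from a hypothetical research project, rendered both ways.
api = {
    "name": "Example API",
    "baseUrl": "https://api.example.com/",
    "tags": ["data", "open"],
}

print(json.dumps(api, indent=2))                        # the JSON edition I still publish
print(yaml.safe_dump(api, default_flow_style=False))    # the YAML core that lives in _data
```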

When I first started using YAML I didn't much care for it. When the OpenAPI Specification introduced the YAML version, in addition to the JSON version, I wasn't all that impressed. It felt like the early days of JSON back in 2008 when I was making the switch from primarily XML to a more JSON-friendly environment. It took me a while to like JSON because I really liked my XML--now it is taking me a while to like YAML because I really like my JSON.

I do not anticipate that JSON will go the same way that XML did for me. I think it will remain a dominant machine-readable format in what I do, but YAML is proving to have more value as the core of my work--especially when it is managed with Jekyll and Github. I am enjoying having been in the industry long enough to see these cycles, and be in a position where I can hopefully think more thoughtfully about each one as it occurs.

See The Full Blog Post


D3.js Visualizations Using YAML and Jekyll

I am increasingly using D3.js as part of my storytelling process. Since all my websites run using Jekyll, and are published entirely using Github repositories which are shared as Github Pages sites, it makes sense to standardize how I publish my visualizations.

Jekyll provides a wealth of data management tools, including the ability to manage YAML data stores in the _data folder. It is an approach I feel is not very well understood, and one that lacks real-world examples of how to use it when managing open data--I am looking to change that.

I like my data visualizations beautiful, dynamic, and with the data right behind them--making D3.js the obvious choice. For this work, I took data intended for use as a bar and pie chart and published it as YAML to this Github repository's _data folder. This approach of centrally storing machine-readable data, in the simpler, more readable YAML format, makes the data behind visualizations much more accessible in my opinion.

The problem is that D3.js visualizations need the data in JSON format. Thankfully, using Jekyll and Liquid, I can easily establish dynamic versions of my data in JSON, XML, or any other format I need. I place these JSON pages in a separate folder I am just calling /data.
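If you want to see the shape of this outside of Jekyll, here is a small Python stand-in for what the Liquid templates do--reading the YAML data store from _data and writing a JSON copy into /data for D3.js to load. The file names are placeholders, and this is just an offline sketch, not the Liquid approach itself.

```python
import json

import yaml  # PyYAML

# Read the YAML data store that Jekyll keeps in _data, and write a JSON copy
# into /data for D3.js to load. A plain-Python stand-in for the Liquid
# templates that do this on Github Pages.
with open("_data/bar_chart.yaml") as source:
    records = yaml.safe_load(source)

with open("data/bar_chart.json", "w") as target:
    json.dump(records, target, indent=2)
```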

Now I have the JSON I need to power my D3.js visualizations. To share the actual visualizations, I created separate editions for my bar and pie charts, and have the HTML, CSS, and JavaScript for each chart in its own file.

There are two things being accomplished here: 1) I'm decoupling the data source in a way that makes it easier to swap different D3.js visualizations in and out, and 2) I'm centralizing the data management, making it easily managed by even a non-technical operator, who just needs to grasp how Jekyll and YAML work--which dramatically lowers the barriers to entry for managing the data needed for visualizations.

There is definitely a learning curve involved. Jekyll, Github Pages, and YAML take some time to absorb, but the reverse engineerability of this approach lends itself to reuse and reworking by any data-curious person who isn't afraid of Github. I'm hoping to keep publishing any new D3.js visualization I create in this way, to provide small, forkable, data-driven visualizations that can be used as part of data storytelling efforts--everything here is available as a public repo.

As a 25-year data veteran, I find myself very intrigued by the potential of Jekyll as a data management solution--something that, when you combine it with the social coding benefits of Github and Github Pages, can unleash unlimited possibilities. I'm going to keep working to define small, modular examples of how to do this, and publish them as individual Github lessons for you to fork and learn from.

See The Full Blog Post


A Trusted Github Authentication Layer For API Management

I am reworking the management layer for my APIs. For the last couple of years, I had aspirations of running my APIs with a retail layer generating revenue for API Evangelist--something which required a more comprehensive API management layer. In 2016, I'm not really interested in generating revenue from the APIs I operate, I'm just looking to put them to work in my own business, and if others want access I'm happy to open things up and broker some volume deals.

To accomplish this I really do not need heavy security or service composition for my APIs. I just need to limit who has access so they aren't 100% public, and identify who is using them, and how much they are actually consuming. To facilitate this I am just going to use Github as a trusted layer for authentication. Using an OAuth proxy, I'll let my own applications authenticate using their respective Github users, and identify themselves using a Github OAuth token when making calls to each API.

Each application I have operating on top of my APIs has its own Github account. Once it does the OAuth dance with my proxy, my system will have a Github token identifying who it is. I won't validate that the token is still good with each call--that is something I'll verify each hour or day, and cache locally to improve API performance. Anytime an unidentified token comes through, I'll just make a call to Github, get the Github user associated with it, and check it against a trusted list of Github users who I have approved for accessing my APIs.
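Here is a minimal sketch of that identification layer in Python--the trusted logins, cache window, and function names are all hypothetical, just to show the flow.

```python
import time

import requests

TRUSTED_USERS = {"kinlane", "example-partner-app"}  # hypothetical trusted Github logins
_cache = {}            # token -> (login, verified_at)
CACHE_TTL = 60 * 60    # re-verify against Github roughly once an hour


def identify_caller(github_token):
    """Resolve a Github OAuth token to a Github login, caching the lookup locally."""
    cached = _cache.get(github_token)
    if cached and time.time() - cached[1] < CACHE_TTL:
        return cached[0]

    # Ask Github who this token belongs to.
    response = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": "token " + github_token},
    )
    if response.status_code != 200:
        return None
    login = response.json()["login"]
    _cache[github_token] = (login, time.time())
    return login


def is_trusted(github_token):
    login = identify_caller(github_token)
    return login is not None and login in TRUSTED_USERS
```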

I'm not really interested in securing access to all the content, data, and algorithms I'm exposing using APIs. I'm only looking to identify which applications are putting them to work and evaluate their amount of usage each day and month. This way I can monitor my own API consumption, while still opening things up to partners or any other 3rd party developer that I trust--if they are using too much, I can drop them a message to have a conversation about next steps.

I'm still rolling this system out, but it got me thinking about API access in general, and the possibility that a trusted list of Github accounts could be used to expedite API registration, application setup, and the obtaining of keys. Imagine if, as a developer, I could just ping any API, do an OAuth dance with my Github credentials, and get back my application id and secret keys for making API calls--all in a single flow. As an API provider I could maintain a single trusted list of Github users, as well as consult other lists maintained by companies or individuals I trust, and reduce friction when onboarding, or automatically approve developers for higher levels of access and consumption.

See The Full Blog Post


Putting The Concept Of The Public API To Rest As A Dominant Narrative

APIs come in all different shapes and sizes. I focus on a specific type of API that leverages web technology for making data, content, and algorithms available over the Internet. While these APIs are available on the open Internet, who has the ability to discover and put them to use will vary significantly. APIs have gained in popularity because of successful publicly available APIs like Twitter and Twilio, something that has contributed to these types of APIs becoming the dominant narrative of what APIs are.

A lack of awareness of what modern approaches to API management can do for securing web APIs, combined with the dominance of this narrative that APIs need to be open like Twitter and Twilio, tends to set the bar at unrealistic levels for API providers. Who has access to a web API is just one dimension of what APIs are, and sharing content, data, and algorithms securely via the web should be the focus. It's not about whether we should do public or private APIs--it is about how you will be sharing your resources in a digital economy.

While I encourage ALL companies, institutions, and government agencies to be as transparent as they possibly can regarding the presence of their APIs, their documentation, and other resources, who actually can access them is entirely up to the discretion of each provider. You should treat ALL your APIs like they use public infrastructure (aka the web), secure them appropriately, and get to work making sure all your digital resources are accessible in this way, without getting bogged down in useless legacy discussions.

This is why I support putting the concept of the public API to rest as the dominant narrative around what an API is--you shouldn't hear me talking about public vs private anymore. If you do, slap me.

See The Full Blog Post