{"API Evangelist"}

We Added 3 New Speakers To @APIStrat Lineup - Have You Submitted A Talk?

We just added three new speakers to the lineup for @APIStrat Berlin, this April 24th and 25th. I get pretty excited about this part of the event planning lifecycle, which is all about reviewing the talks being submitted, and working with the rest of the APIStrat team, and now the API Days team, to develop the best lineup possible.

Here are the three that we added today:

Antti Silventoinen (@lamantiini) of Music Kickup
Jordi Romero (@jordiromero) of Redbooth
Matt Boyle (@mboylevt) of Shapeways


These three speakers represent APIs and the music industry, project collaboration, and 3D printing—which I think is a pretty nice representation of how APIs are making an impact in 2015.

If you haven’t submitted a talk, head over to the APIStrat call for talks page and make sure you are part of the lineup in Berlin this April—we will be closing the call for talks soon, so submit quickly!



Instant Access To APIs Via Github Profile

An open project for me this month is better understanding how API keys are provisioned, and how developers are given access to valuable resources. As the number of APIs grows, so does the number of APIs we depend on in any single application, forcing developers to manage many API keys, potentially from many different platforms.

It doesn’t take an API expert to see that the current practice of many API providers, requiring consumers to manually register for an account, will not scale. We need more automated ways of not just discovering APIs, but also on-boarding with them, allowing API consumers to begin using an API without the current overhead.

One idea I’m bouncing around in my own APIs is allowing for instant account creation via an API, letting you programmatically generate a new account, a new app, and API keys. Of course I do not want these new accounts to have full access to everything, so using my 3Scale API management layer I will create a specific service tier for these accounts, limiting what they can do.
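As a rough sketch of what this could look like from the consumer side, here is a hypothetical call to such an instant provisioning endpoint--the URL, payload, and response fields are all assumptions for illustration, not an actual 3Scale route:

// Hypothetical sketch: provision an account, app, and API keys in one call.
// The endpoint and payload are illustrative placeholders, not a real 3Scale route.
fetch('https://example.com/provision', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'dev@example.com' })
})
  .then(function (response) { return response.json(); })
  .then(function (account) {
    // The response would carry the new app id and keys, scoped to a
    // limited "instant access" service tier.
    console.log(account.application_id, account.api_key);
  });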

I want to take this another step further though. I do not want just any spammy Johnny-come-lately to be able to create new accounts without any sort of validation. To help filter, I’m developing a Github account ranking layer, requiring you to pass along your Github username when generating a new account, app, and keys. I will pull the profile for the Github user, along with some statistics on their overall presence, and make some assumptions about the developer's trustworthiness.
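To give an idea of what this filtering layer could look like, here is a minimal JavaScript sketch that pulls a public Github profile and applies some illustrative trust thresholds--the scoring rules are my own assumptions, and would need tuning:

// Minimal sketch of a Github-based trust check, assuming a browser with fetch().
// The thresholds are illustrative only, not a proven scoring model.
function scoreGithubUser(username) {
  return fetch('https://api.github.com/users/' + username)
    .then(function (response) {
      if (!response.ok) { throw new Error('Github user not found'); }
      return response.json();
    })
    .then(function (profile) {
      var msPerYear = 365 * 24 * 60 * 60 * 1000;
      var ageInYears = (Date.now() - new Date(profile.created_at)) / msPerYear;
      var score = 0;
      if (ageInYears >= 1) { score += 1; }           // account has some history
      if (profile.public_repos >= 5) { score += 1; } // shows actual activity
      if (profile.followers >= 5) { score += 1; }    // some community presence
      return score; // e.g. require a score of 2+ before provisioning keys
    });
}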

Immediately this approach will limit API access for a number of people, which in some scenarios may not be ideal, but for my APIs, I’m willing to allow instant account creation and API access only to people who have an active, verifiable Github presence. I’ve been a proponent of API providers offering Github login for developer accounts for some time now, and this seems like a logical next step.

We’ll see where I go with this. It is more an exercise than it is a real thing, but who knows, in addition to using Github to manage my API keys securely, maybe I can actually use it to instantly access new APIs, without the overhead I currently face with each new API on-boarding experience.



Storing API Keys In The Private Master Github Repository For Use In Github Pages

My public websites have been running on Github Pages for almost two years now, and slowly the private management tools for my platform are moving there as well. Alongside my public websites, I’m adding administrative functions for each project. Most of the content is API driven already, so it makes sense to put some of the management tools side by side with the content or data that I’m publishing.

These management tools are simple JavaScript that uses the Github API to manage the HTML and JSON files I have stored either publicly or privately within repositories. I use Github oAuth to work with the Github API, but to work with other APIs I need a multitude of other API keys, including the 3Scale generated keys I use to access my own API infrastructure.

My solution is to store a simple api-keys.json file in the root of my private master repository, and then, again using Github oAuth and the Github API, read the contents of that JSON file into a temporary array I can use within my management tools. If you do not have access to the Github repository, you won’t be able to read the contents of api-keys.json, rendering the management tools useless.
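For anyone curious about the mechanics, here is a simplified sketch of that read using the Github contents API--the repository path is a placeholder, and the token would come from the Github oAuth flow:

// Sketch: read api-keys.json from a private repo via the Github contents API.
// "username/master" is a placeholder for your own private master repository.
function loadApiKeys(token) {
  return fetch('https://api.github.com/repos/username/master/contents/api-keys.json', {
    headers: { 'Authorization': 'token ' + token }
  })
    .then(function (response) {
      if (!response.ok) { throw new Error('No access to api-keys.json'); }
      return response.json();
    })
    .then(function (file) {
      // The contents API returns the file body base64 encoded.
      return JSON.parse(atob(file.content.replace(/\n/g, '')));
    });
}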

Eventually I will develop a centralized solution for managing API keys across all my projects, allowing me to re-use keys for different projects, and easily update or remove outdated API keys. This approach of storing API keys in my private Github repository allows me to easily access keys in the client-side apps I run on Github Pages, as well as in server-side applications and APIs—something that I’m hoping will give me more flexibility in how I put multiple APIs to work across my infrastructure.



Using Containers To Bridge What Swagger Cannot Define On The Server-Side For My APIs

When I discuss what is possible when it comes to generating both server and client side code using machine readable API definitions like Swagger, I almost always get push-back from people making sure I understand the limitations of what can be auto-generated.

Machine readable API definitions like Swagger provide a great set of rules for describing the surface area of an API, but they often miss many of the other elements necessary to define what server side code should actually be doing. The closest I’ve gotten to fully generating server side APIs is with very CRUD-based APIs that possess very simple data models--beyond that it is difficult to make ends meet.

Personally, I do not think my Swagger specs should contain everything needed to define server implementations; this would make for a very bloated, unusable Swagger specification. For me, Swagger is my central truth, which I use to generate server side skeletons, client side samples and libraries, and to define testing, monitoring, and interactive documentation for developers.

Working to push this approach forward, I’m now also using Swagger as a central truth for my Docker containers, letting virtualization bridge the gap between what my Swagger definitions can and cannot define for each micro-service or API deployed within a specific container. This approach leaves Swagger as purely a definition of the micro-service surface area, and leaves my Docker image to deliver on the back-end vision of that surface area.

These Docker images use Swagger as a definition of their container's surface area, but also as a fingerprint for the value they deliver. As an example, I have a notes API definition in Swagger, and two separate Docker images that support this interface. Each Docker image knows it only serves this particular Swagger specification, using it as a kind of fingerprint or truth for its own existence, but ultimately each Docker image delivers its own back-end experience derived from this notes API spec.
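To make the fingerprint idea concrete, here is a minimal sketch of what the shared notes API definition might look like in Swagger 2.0--the paths and fields are illustrative, not my actual spec:

{
  "swagger": "2.0",
  "info": { "title": "Notes API", "version": "1.0.0" },
  "basePath": "/notes",
  "paths": {
    "/": {
      "get": {
        "summary": "List notes",
        "responses": { "200": { "description": "An array of notes" } }
      },
      "post": {
        "summary": "Add a note",
        "responses": { "201": { "description": "Note created" } }
      }
    }
  }
}

Both Docker images would serve exactly this surface area, while differing in how they store and process the notes behind it.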

I’m just getting going with this new approach of using Swagger as a central truth for each Docker container, but as I roll out each of my API templates this way, I will blog more about it here. Eventually I am hoping to standardize my thoughts on using machine readable API definition formats like Swagger to guide the API functionality I'm delivering in virtualized containers.



Using APIs.json To Organize My Swagger Defined APIs Running In Docker Containers

I am continuing to evolve my use of Swagger as a kind of central truth in my API design, deployment, and management lifecycle. This time I’m using it as a fingerprint for defining how the APIs or micro-services that run in my Docker containers function. Along the way I’m also using APIs.json as a framework for organizing these Swagger driven containers into coherent API stacks or collections that work together to accomplish a specific objective.

For the project I’m currently working on, I’m deploying multiple Swagger defined APIs, each as a separate Docker container, providing some of the essential components I need to operate API Evangelist, like blog, links, images, and notes. Each of these components has its own Swagger definition, a corresponding Docker image, and a specific instance of that Docker image deployed within a container.
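In practice, standing up one of these components is just a matter of running a container from its image--the image names and ports here are placeholders:

docker run -d --name notes-api -p 8086:80 username/notes-api
docker run -d --name blog-api -p 8087:80 username/blog-api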

I’m using APIs.json to daisy chain all of these APIs or micro-services together. I have about 15 APIs deployed; each is a standalone service with its own APIs.json and supporting Swagger definition, but the overall project has a centralized APIs.json which, using the include collection, provides linkage to each individual micro-service's APIs.json--loosely coupling all of these container driven micro-services under a single umbrella.
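Here is a stripped-down sketch of what that parent APIs.json might look like, with placeholder domains, and assuming the include collection from the APIs.json specification:

{
  "name": "API Evangelist Stack",
  "description": "Parent index that daisy chains each micro-service together",
  "url": "http://example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [],
  "include": [
    { "name": "Blog", "url": "http://blog.example.com/apis.json" },
    { "name": "Notes", "url": "http://notes.example.com/apis.json" }
  ]
}

Each micro-service's own APIs.json would then point to its Swagger definition using the properties collection, keeping the whole stack discoverable from a single entry point.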

At this point, I’m just using APIs.json as a harness to define and bring together the technical aspects of my new API stack. As I pull together other business elements like on-boarding, documentation, and code samples, I will add these as properties in my parent APIs.json. My goal is to demonstrate the value APIs.json brings to the table when it comes to API and micro-service discovery, specifically when you deploy distributed API stacks or collections using Docker container deployed micro-services.