28 Jan 2015
I’m fascinated by the rise of Twitter bots. Little automated bundles of social media joy, built to spew mostly nonsense, but every once in a while you find nuggets of wisdom in these Twitter API-fueled bots. Personally I have never built a Twitter bot, only because I don’t need another distraction, and I can see the addictive potential of bot design.
My partner in crime Audrey Watters (@audreywatters), who has hands-on experience building her own bots, said something interesting the other day while we were partaking in our daily IPAs: the Wordnik API is the base building block of any Twitter bot.
Sure enough, when you search “twitter bots wordnik” on the Googlez, you get a wide variety of Python, Node.js, PHP, and other recipes for building Twitter bots, with almost all of them using the Wordnik API as the core linguistics engine for the bot.
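To give you a feel for one of these recipes, here is a minimal sketch in Python, using only the standard library. The Wordnik randomWord endpoint is real, but the API key, tweet format, and function names are my own placeholders, and actually posting to Twitter requires OAuth credentials and a library like tweepy, which I've only hinted at in a comment:

```python
import json
import urllib.parse
import urllib.request

# Wordnik's randomWord endpoint acts as the bot's linguistics engine.
WORDNIK_RANDOM_WORD = "https://api.wordnik.com/v4/words.json/randomWord"


def fetch_random_word(api_key):
    # Ask Wordnik for a single random word and pull it out of the JSON response.
    url = WORDNIK_RANDOM_WORD + "?" + urllib.parse.urlencode({"api_key": api_key})
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))["word"]


def compose_tweet(word):
    # Keep the bot's output inside Twitter's 140-character limit.
    return "Word of the moment: {0}".format(word)[:140]


if __name__ == "__main__":
    word = fetch_random_word("YOUR_WORDNIK_API_KEY")  # hypothetical key
    print(compose_tweet(word))
    # Posting the result requires Twitter OAuth, e.g. with tweepy:
    #   tweepy.API(auth).update_status(compose_tweet(word))
```

Swap in a different Wordnik endpoint (definitions, related words, etc.) and a different template, and you have the skeleton most of these recipes share.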
This overlaps with two other stories I’ve written here: 1) I feel the Twitter API holds a lot of potential as a training ground for IoT connectivity, and 2) I think APIs, specifically the Wordnik API and the work to come out of Wordnik, like Swagger, have a significant role to play in the future of the Internet.
I know many of us technologists have grand visions of where the Internet is going, and would like things to happen faster, but I think the “smart” everything we are looking for is going to take some time, and is something you can see playing out slowly in things like Twitter bot design. I’m sure there are many negative incarnations, but many of the Twitter bots I’ve seen have been interesting little nuggets of API-driven (Twitter & Wordnik) chaos, driving us towards an unknown, but definitely Internet-connected, online world.
28 Jan 2015
I'm rebuilding my underlying architecture using microservices and Docker containers, and I'm using APIs.json for navigation and discovery within these new API stacks that I use to make my world go around. As I give each microservice an APIs.json file, taking inventory of the building blocks that make the service operate, I also begin bringing Docker into the equation, and I find myself using Swagger definitions as a sort of fingerprint for my Docker-powered microservices.
Each microservice lives as its own Github repository, within a specific organization. I give each one its own APIs.json, indexing all the API elements of that specific microservice. This can include anything I feel is important, like:
- X-Signup - Where to signup for the service.
- X-Twitter - The Twitter account associated with the service.
- X-Samples - Where you can find client side code samples.
As long as you put X- before the property, you can put anything you want. There are only a handful of sanctioned APIs.json property types, and one of them you will find in almost all my APIs.json files generated for my platform is:
- Swagger - A reference to a machine readable Swagger definition for each API.
Another one I’m starting to use, as I’m building out my microservice infrastructure, is:
- X-Docker-Image - A link to a Docker image that supports this API.
Each of my microservices has a supporting Docker image that provides the backend for its API. Sometimes I will have multiple Docker images as variant backends for the same API fingerprint. Using APIs.json I can go beyond just finding the API definition and other supporting building blocks; I can also find the backend needed to actually deliver on the API defined by a specific Swagger definition. I’m just in the early stages of this work, and this series of posts reflects me trying to work much of this out.
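To sketch what one of these microservice APIs.json files looks like, here is an illustrative example. The names and URLs are made up, but the property types are the ones discussed above, with the Swagger definition as the API's fingerprint and X-Docker-Image pointing at its backend:

```json
{
  "name": "Link Service",
  "description": "A dead simple link management microservice.",
  "url": "http://example.github.io/link-service/apis.json",
  "apis": [
    {
      "name": "Link API",
      "baseURL": "http://api.example.com/links",
      "properties": [
        { "type": "Swagger", "url": "http://api.example.com/links/swagger.json" },
        { "type": "X-Docker-Image", "url": "https://hub.docker.com/u/example/link-service" },
        { "type": "X-Signup", "url": "http://example.com/signup" }
      ]
    }
  ]
}
```

Anything that isn't a sanctioned property type just gets the X- prefix, as with X-Docker-Image and X-Signup here.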
You can browse my work on Github; most of it is public. The public elements all live in the gh-pages branch, while the private aspects live within the private master branch. It is all a living workbench for me, so expect broken things. If you have questions, or would like more access to better understand, let me know. I’m happy to consider adding you to the Github organization as a collaborator so that you can see more of it in action. I will also chronicle my work here on the blog, as I have time, and have semi-interesting things to share.
28 Jan 2015
I’m rebuilding my underlying architecture using microservices and Docker containers, and the glue I’m using to bind it all together is APIs.json. I’m not just using APIs.json to deliver on discoverability for all of my services, I am also using it to navigate around my stack. Right now I only have about 10 microservices running, but I have a plan to add almost 50 in total by the time I’m done with this latest sprint.
Each microservice lives as its own Github repository, within a specific organization. I give each one its own APIs.json, indexing all the API elements of that specific microservice. APIs.json has two main collections, "apis" and "include". For each microservice's APIs.json, I list all the properties for that API, but I use the include collection to document the URLs of the other microservices' APIs.json files in the collection.
All the Github repositories for this microservice stack live within a single Github organization, to which I add a "master" repo that acts as a single landing page for the entire stack. It has its own APIs.json file, but rather than having any API collections, it just uses includes, referencing the APIs.json for each microservice in the stack.
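As a sketch of what that master APIs.json looks like, here is an illustrative example with made-up URLs, where the "apis" collection stays empty and the "include" collection points at each microservice in the stack:

```json
{
  "name": "API Evangelist Microservices Stack",
  "description": "Master landing page for the full microservices stack.",
  "url": "http://example.github.io/master/apis.json",
  "apis": [],
  "include": [
    { "name": "Link Service", "url": "http://example.github.io/link-service/apis.json" },
    { "name": "Curated Service", "url": "http://example.github.io/curated-service/apis.json" },
    { "name": "Linkrot Service", "url": "http://example.github.io/linkrot-service/apis.json" }
  ]
}
```

Each of the included files can, in turn, include the others, which is what lets me navigate the stack in a circular way.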
APIs.json acts as an index for each microservice, but through the include collection it also provides links to other related microservices within its own stack, which I use to navigate, in a circular way, between all supporting services. All of that sounds very dizzying to write out, and I’m sure you are like WTF? You can browse my work on Github; some of it is public, but much of it you have to have OAuth access to see. The public elements all live in the gh-pages branch, while the private aspects live within the private master branch.
This is all a living workbench for me, so expect broken things. If you have questions, or would like more access to better understand, let me know. I’m happy to consider adding you to the Github organization as a collaborator so that you can see more of it in action. I will also chronicle my work here on the blog, as I have time, and have anything interesting to share.
28 Jan 2015
I spend time each day reviewing about 15 different press release distribution websites, looking for API-related news for API.Report, and potentially seeds for stories elsewhere on the API Evangelist network.
One thing I’m noticing a lot is companies who reference their API as a bullet point in a release, or sometimes as a footnote in the about section at the end of the release, but when you go to their website you can’t find any mention of an API.
I’m guessing many of them just haven’t thought about making the API something that is front and center, relegating it to just "feature status". There are also others who do not believe in the whole “public” API thing, and see their API as more secret sauce, something that should be kept close to the chest.
Whatever the reason, I’m looking to better understand how I can help companies be more public with their API presence, even if the API itself isn’t open to the public. My first instinct is to just email the company, and ask why they aren’t more public with their API, but I know from experience that isn’t the best approach. ;-)
With this in mind, I’ll be exploring new ways I can encourage companies to be more public with their API presence, and help them understand the benefits of this approach. Let me know if you have any good ideas.
28 Jan 2015
I have most of the core platform that I run API Evangelist on re-engineered as individual microservices, defined on Github, and running using Docker instances. I’m using APIs.json for discovery, navigation, and to connect Swagger API definitions to their Docker backends. So far I have 18 microservices, which I define as very simple APIs that do one thing, and focus on doing it well.
I’m very critical of any feature I add in, working hard to keep each service as micro as I can, reducing complexity and bloat in any single API. However, with each service I reach the point where I have to cut off feature creep, and establish an entirely new API definition--I am at this point with my link service.
I have a pretty bloated link management system in the legacy version of my platform, something which handles links for functions ranging from Pinboard integration to handling linkrot on my public website. My legacy system runs using an API too, but it is a sloppily designed API with a wide surface area, something I’m looking to prevent in the next version.
So far I’ve broken out a single API into three separate microservices:
- Link - A dead simple link management tool, allowing me to add a title and URL to a simple link data store, for future use.
- Curated - A service to manage links I curate from many sources, including additional bells & whistles, like pulling the text body, taking screenshots, and much more.
- Linkrot - A link storage system designed for a single purpose: to address link rot on my public websites. The API works with a simple JS library to store, monitor, and take screenshots of websites I link to across my network.
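To make the "do one thing well" idea concrete, here is a sketch in Python of the entire behavior behind the Link service: a title and a URL into a simple data store, nothing more. The class and method names are mine for illustration, not the actual implementation:

```python
class LinkStore:
    """A dead simple link data store: a title and a URL, for future use."""

    def __init__(self):
        # In-memory store standing in for whatever backend the service uses.
        self._links = []

    def add(self, title, url):
        # The whole surface area of the Link microservice: store a link.
        link = {"title": title, "url": url}
        self._links.append(link)
        return link

    def list(self):
        # Return all stored links. Curation, screenshots, and rot
        # monitoring belong to the other microservices, not this one.
        return list(self._links)
```

Anything beyond this, like pulling the text body or monitoring for dead links, gets pushed out to the Curated and Linkrot services instead of creeping into the Link API.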
I have about five other simple, link-related microservices I’m planning, but these three designs represent the main link resources I need to make API Evangelist work. All of the microservices I’m designing will be open sourced at some point, but the linkrot implementation is one I may also be offering as a service. I’m enjoying the feature across my sites, and it is something that I think has some viability as a basic service.
I’m sure my own definition of what a microservice is will evolve, ebb, and flow, but for now the first rule of microservice club for me is: keep each service doing one thing, and doing it very well, enforcing simplicity in every decision.