{"API Evangelist"}

A Well Thought Out API Platform

I was playing with one of the API deployment solutions that I track on, appropriately called API Platform. It is an open source PHP solution for defining, designing, and deploying your linked data APIs. I thought their list of features provided a pretty sophisticated look at what an API can be, and it was something I wanted to share.

There are a couple of key elements here. It is API definition-driven, with JSON-LD, Hydra, HAL, and OpenAPI Spec out of the box. Containerized. Schema.org FTW! JWT, and OAuth. OWASP's security checklist. Postman Ready! These features make for a pretty compelling approach to designing and deploying your APIs. While I see some of these features in other platforms, it is the first open source solution I've seen possessing such an impressive resume.
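To give a sense of what this looks like on the wire, here is a rough sketch of the kind of JSON-LD + Hydra response an API Platform style API returns--the Book resource is just the framework's canonical example, not anything pulled from their feature list:

    {
      "@context": "/contexts/Book",
      "@id": "/books",
      "@type": "hydra:Collection",
      "hydra:member": [
        {
          "@id": "/books/1",
          "@type": "Book",
          "title": "An Example Book"
        }
      ],
      "hydra:totalItems": 1
    }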

I'm going to take this list and add it to my list of API design and deployment building blocks in my research. These are features that other API deployment solutions should be considering as part of their offerings. This approach to API deployment may not be the right answer for every type of API, but I know many data and content focused APIs that would benefit significantly from a deployment solution like API Platform.


API Definitions Influencing API Design

I was having a conversation about whether I should be putting my API definition or my API design work first--which comes earlier in the lifecycle of an API? The conclusion was to put definition first because you need a common set of definitions to work with when designing your API(s). You need definitions like HTTP and HTTP/2. In 2017, you should be employing definitions like OpenAPI Spec and JSON Schema. These definitions help set the tone of your API design process.

In my opinion, one of the biggest benefits of designing, developing, and operating APIs on the web has been forcing developers to pick up their heads and pay attention to what everybody else is doing and wanting. I suffer from this tunnel vision myself. Doing web APIs, providing my own, and consuming 3rd party APIs forces me to pay attention to providers and consumers outside my bubble--this is good.

Common definitions help us elevate the design of our APIs by leveraging common concepts, standards, and schema. Every time you employ ISO 8601, you have to think about folks in another time zone. Every time you use ISO 4217, you have to think about people who buy and sell their products and services in a different currency than you. When you use Schema.org, your postal addresses consider the world beyond just US zip codes, acknowledging a world wide web of commerce.
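To make this concrete, here is a minimal JSON Schema fragment, in YAML, showing these standards at work--the property names are my own, just for illustration:

    type: object
    properties:
      orderDate:
        type: string
        format: date-time      # ISO 8601, e.g. 2017-03-01T12:00:00Z
      currency:
        type: string
        pattern: "^[A-Z]{3}$"  # ISO 4217 currency code, e.g. USD, EUR, JPY
      total:
        type: number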

I am placing definitions before design in my API research listing. In reality, I think this is just a cycle. Common definitions feed my design process, and the more experienced I get with design, the more robust my toolbox of API definitions gets. Ultimately this depends on what I'm calling a definition, but for the sake of this story I am considering the building blocks of the web as the first line of definitions, and the definitions crafted using OpenAPI Spec and JSON Schema as the second line. Definitions influence my design process, and the design process refines the definitions that I am putting to work.


Trying To Define API Awareness

I have a regular call with a really smart API person who is trying to move forward a really cool project for the API space. It is some thought provoking voodoo and I need to be able to write about it--this is how I flesh out my thoughts and move forward. He is not quite ready to talk about his project publicly, so I will just talk about and explore it in terms of my API Evangelist research and how it applies to the area(s) of the API space he is looking to make an impact on.

This topic spans several areas of my API research, but if I had to give it a single label I would call it API awareness. When you hear me talk about monitoring the API space, API awareness is the result. I wanted to try and communicate this from my vantage point, but also in a way I can share with other analysts, practitioners, and even the average individual online today. This is my attempt to distill my approach to monitoring the API space and establishing a sustained awareness of APIs at any level.

Individual ("Normals")
It may sound crazy to you, but everyone should be API aware. No, they shouldn't be paying attention to APIs like I do, or even at the level of the average individual working in the tech sector, but they should have a baseline awareness, and here is my attempt at quantifying that:

  • APIs Exist - Everyone should be aware that APIs exist, and hopefully have one or two examples of what they can do in their business or personal lives--even if it's just pulling tweets, photos, or getting news updates via an RSS feed.
  • API Integration - Everyone should be aware that they can move data and content between the online services they use and depend on. If you know APIs exist and are aware of services like Zapier or Datafire, you will be more successful in what you do online.
  • Data Portability - All online services should allow for the downloading of their data, allowing for the portability of all users' data and content.
  • API Discovery - With a low-level awareness of APIs and what is possible, the average individual should regularly be introduced to new APIs, and be given simple tooling and services for helping them in their discovery.
    • Applications - Everyone should get exposed to what other people are doing with APIs, and be informed about new and interesting ways to get things done for fun or business.
    • Individuals - Individuals within a company, institution, or online community who can help with APIs.
    • Organizations - Organizations that can help individuals with API needs.
    • Events - Meetups, conferences and other events to learn about APIs.

The more we expose the average person to APIs, the more they will be able to absorb and understand. I've turned hundreds of average, non-technical folks on to the concept of APIs, and have seen them become evangelists and even API practitioners. Some move into API focused roles, but many just become more successful in what they are already doing, from social media work to sending out their weekly newsletter.

I've always put API awareness for individuals into the same bucket as financial awareness. You don't need an awareness of the inner workings of the banking and credit industry, but you should have an awareness that you have accounts, who has access to them, and that you can move money around, and have different accounts for different purposes, with different providers--the same applies to the world of APIs.

API Practitioners ("Not Normals")
When I first started writing this post, I had this section broken up into three groups: 1) Providers, 2) Service Providers, and 3) Analysts. Much of it ended up being redundant, so I'm going to share the complete list of what contributes to my API awareness, and depending on where you exist on the API spectrum (gonna have to use that one more), what matters to you will vary.

This is a master dump of my research, and the approach I have used to track on the world of APIs since 2010--an analyst's 100K view. However, API providers, service providers, evangelists, and analysts should possess a similar level of awareness--maybe not at the scope I pay attention to, but employing some of the same tactics, applied to a smaller group of APIs, either internally or externally. Here is what I'd consider a comprehensive definition of my API awareness stack.

  • Exist - Everyone should be aware that APIs exist, and hopefully have one or two examples of what they can do in their business or personal lives--even if you are in a business unit, you should know about APIs.
  • Discovery - With a low-level awareness of APIs and what is possible, the average individual should regularly be introduced to new APIs, and be given simple tooling and services for helping them along in their discovery.
    • Applications - Find new applications built on top of APIs.
    • People - People who are doing interesting things with APIs.
    • Organization - Any company, group, or organization working with APIs.
    • Services - API-centric services that might help solve a problem.
    • Tools - Open source tools that put APIs to work solving a problem.
  • Versions - What versions are currently in use, what new versions are available but not yet used, and what future versions are planned and on the horizon.
  • Paths - What paths are available for all available APIs, and what are the changes or additions to this stack of resources.
  • Schema - What schema are available as part of the request and response structure for APIs, and available as part of the underlying data model(s) being used. What are the changes?
  • SDKs - What SDKs are available for the APIs I'm monitoring. What is new. What are the changes made regarding programming and platform development kits?
    • Repositories - What signals are available about an SDK regarding its Github repository (i.e. commits, issues, etc.)--see the sketch after this list.
    • Contributors - Who are the contributors.
    • Stargazers - The number of stars on an SDK.
    • Forks - The number of forks on an SDK.
  • Communication - What is the chatter going on around individual APIs, and across API communities. We need access to the latest messages from across a variety of channels.
    • Blog - The latest from each API blog.
    • Press - Any press releases about APIs.
    • Twitter - The latest from Twitter regarding API providers.
      • Tweets - The tweets from API providers.
      • Mentions - The mentions of API providers.
      • Followers - Who is following their account.
    • Facebook - The latest Facebook posts from providers.
    • LinkedIn - The latest LinkedIn posts from providers.
    • Reddit - Any Reddit posts related to API operations.
    • Stack Overflow - Any Stack Overflow posts related to API operations.
    • Hacker News - Any Hacker News posts related to API operations.
  • Support - What support channels are available for individual or groups of APIs, either from the provider or maybe a 3rd party individual or organization.
    • Forum / Group - What is the latest from groups dedicated to APIs.
    • Issues - What are the issues in aggregate across all relevant repositories.
  • Issues - What are the current issues with an API, either known by the provider or possibly also reported from within the community.
  • Change Log - What changes have occurred to an API that might impact service or operations.
  • Road Map - What planned changes are in the road map for a platform, providing a view of what is coming down the road.
  • Monitoring - What are the general monitoring statistics for an API, outlining its overall availability.
  • Testing - What are the more detailed statistics from testing APIs, providing a more nuanced view of API availability.
  • Performance - What are the performance statistics providing a look at how performant an API is, and overall quality of service.
  • Authentication - What are all of the authentication approaches available and in use. What updates are there regarding keys, scopes, and other maintenance elements of authentication.
  • Security - What are the security alerts, notifications, and other details that might impact the security of services, or the approach taken by a platform to make sure it is secure.
  • Terms of Service - What are the changes to the terms of service, or other events related to the legal operations of the platform.
  • Privacy - What are the privacy-related changes that would affect the privacy of end-users, developers, or anyone else impacted by operations.
  • Licensing - What licenses are in place for the API, its definitions, and any code and tooling put to use, and what are the changes to licensing.
  • Patents - Are there any patents in play that impact API operations, or possibly an entire industry or area of API operations.
  • Logging - What other logging data is available from API consumption, or other custom solutions providing other details of API operations.
  • Plans - What are the plans and pricing in existence, and what are the tiers of access--along with any changes to the plans and pricing in the near future.
  • Partners - Who are the existing platform partners, and who are the recent additions. Maybe some finer grain controls over types of partners and integrations.
  • Investments - What investments have been made in the past, and what are the latest investments and filings regarding the business and investment of APIs.
    • Crunchbase - The latest and historical data from Crunchbase.
    • AngelList - The latest and historical data from AngelList.
  • Acquisitions - What acquisitions have been made or are being planned--showing historical data, as well as the latest notifications.
    • Crunchbase - The latest and historical data from Crunchbase.
    • AngelList - The latest and historical data from AngelList.
  • Events - What meetups, conferences and other events are coming up where relevant APIs or topics will be present.
  • Analysis - What tools and services are available for further monitoring, understanding, and deriving intelligence from individual APIs, as well as across collections of APIs.
  • Embeddables - What embeddable tooling is available for either working with individual APIs, or across collections of APIs, providing solutions that can be embedded within any website or application.
  • Visualizations - What visualizations are available for making sense of any single API or collections of APIs, providing easy to understand, or perhaps more complex visualizations that bring meaning to the dashboard.
  • Integration - What integration platform as a service (iPaaS), continuous integration, and other orchestration solutions are available for helping to make sense of API operations within this world.
  • Deprecation - What deprecation notices are on the horizon for APIs, applications, SDKs, and other elements of API operations.
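To make a few of the SDK repository signals above more concrete, here is a rough JavaScript sketch that pulls them from the Github API--the repository name is a placeholder, and unauthenticated calls are rate limited:

    // Pull awareness signals for a (hypothetical) SDK repository from the Github API.
    var repo = 'example-org/example-api-sdk';

    fetch('https://api.github.com/repos/' + repo)
      .then(function (response) { return response.json(); })
      .then(function (data) {
        console.log('Stargazers:', data.stargazers_count);
        console.log('Forks:', data.forks_count);
        console.log('Open issues:', data.open_issues_count);
        console.log('Last push:', data.pushed_at);
      });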

API awareness spans many stops along the API lifecycle, and across a variety of the most common, and critical building blocks of what drives API ecosystems. Awareness doesn't come easy. It takes time, and access to the right information and signals, potentially across many different entities and domains--aggregating, filtering, and ranking are essential to developing and strengthening your awareness. In the end, even with the same signals and information available, there will be many definitions of what constitutes the necessary awareness.

I am glad I didn't break this into different buckets for different people. I think that is a dangerous thing for us to do. I think people should be curious and have agency in the decisions regarding which signals should feed their awareness. How many APIs. Which APIs. Which companies, news, events, and other areas. Investment, patents, and other legal aspects. I don't think that every individual should be bombarded with the more complex inner workings of the API industry I pay attention to, but they should be able to make the decision to move beyond a basic level of understanding, and become an evangelist, or analyst--quantifying and developing the awareness they desire or need to achieve.

I'll stop there. This is a good first draft of what I consider API awareness. At the individual API level, across a collection or industry of APIs, and even at the analyst level, potentially paying attention to many different APIs across a variety of different industries. Before I put this definition down, I am going to take it and apply it in two other ways: 1) Observability, "a measure for how well internal states of a system can be inferred by knowledge of its external outputs", and 2) Rating, "establish a rating system that articulates where an API or provider exists on the awareness and observability spectrum". We'll see where it goes. Thinking about this voodoo is helping me better organize some of the existing parts of my research, and hopefully help my friend out in his work as well.


API Lifecycle Service Providers Instead Of Walled Gardens

It is a common tactic of older software companies to offer open source, services, and tools in a way that all roads just lead into their walled garden. There are many ways to push vendor lock-in, and the big software vendors from 2000 through 2010 have mastered how to route you back to their walled gardens and make sure you stay there. Web APIs have set into motion a shift in how we architect our web, mobile, and device applications, as well as how we provide services to the life cycle that is behind the operation of these web APIs. While this change has the potential to be positive, it can often be very difficult to tell apart the newer breed of software companies from the legacy version, amidst all the hype around technology and startups.

I've been having conversations recently which are pushing me to think more about middleware, or what I'd refer to as API life cycle tooling. In my opinion, these are companies who are selling services and tools to the API life cycle, which in turn is fueling our web, mobile, device, and other applications. In my opinion, as a service provider, you should be selling to a company and API provider's life cycle needs, not your walled garden needs. I understand that you want all of a company's business, and you want to lock them into your platform, but that was how we did business 10 years ago.

The API service providers I'm shining a light on in 2017 are servicing one or many stops along the API life cycle, supporting API definitions, and providing value without getting in the way, or locking customers in. They do this on-premise, or in the cloud of your choice, and allow you to seamlessly overlap many different API service providers providing a variety of solutions across the API life cycle. You will notice this pattern in the companies I partner with, like APIMATIC, Restlet, Tyk, and DreamFactory. I find I have a lot more patience when it comes to the whole startup thing if your service is plug and play, and us API providers can choose where and when we want to put your tools and services to use.

I want my API service providers to behave just like they recommend to their customers--modular, flexible, agile, and providing a mix of valuable API resources I can use across my own API lifecycle. You'll find me doing more highlighting of what I consider to be API life cycle service providers who bring value to the API life cycle, with API definitions like the OpenAPI Spec as the center of their operations.


Box's Seamless Approach To API Documentation

The document platform Box updated their developer efforts recently, helping push forward the definition of what API documentation can be. I've long been advocating for moving APIs out from the shadow of the developer portal, making them more seamless with any UI, kind of like CloudFlare does with their DNS dashboard. There is no reason the API should have to be hidden from users--it should be right behind the UI for everyone to discover.

Box does this. You can interact with files just like it is the regular interface. When you push the get folder items, upload file, or other options available to you in the documentation--you get an example API request and response in the right-hand column. It is a blend of a regular UI and some of the attractive and interactive documentation we've seen emerge lately, like ReDoc. It makes it easy to see and understand what an API does, while speaking in the context of solving a relevant problem for a human.

API documentation doesn't have to be overly technical and boring. It can look like a regular user interface, and the API can be right behind the UI curtain, providing a snapshot of the requests and responses that are doing the heavy lifting behind it. I'm finally seeing the movement I have wanted to see with API documentation in 2017. I'm feeling like this is going to be a common theme with the world of APIs for all of us--we will never see things move as fast as we want, but eventually the world evolves, and we will see investment in the areas that make a difference on the ground at API operations, and for the consumers who are putting APIs to work in their regular world.


API Life(middleware)Cycle API

I have had a series of calls with an analyst group lately, discussing the overall API landscape in 2017. They have a lot of interesting questions about the space, and I enjoyed their level of curiosity and awareness around what is going on--it helps me think through this stuff, and (hopefully) better explain it to folks who aren't immersed in APIs like I am.

This particular group is coming at it from a middleware perspective and trying to understand what APIs have done to the middleware market, and what opportunities exist (if at all). This starting point for an API conversation got me thinking about the concept of middleware in contrast to, or in relationship to what I'm seeing emerge as the services and tooling for the API life cycle.

Honestly, when I jumped on this call I Googled the term middleware to provide me with a fresh definition. Middleware: software that acts as a bridge between an operating system or database and applications, especially on a network. What does that mean in the age of APIs? Did APIs replace this? There is middleware for deploying APIs from backend systems. There is middleware for brokering, proxying, and providing a gateway for APIs--making middleware as a term pretty irrelevant. I think middleware traditionally meant a bridge between the backend and the frontend, where web APIs make things omnidirectional--in the middle of many different directions and outcomes.

The answer to the question of what APIs have done to middleware is just "added dimensions to its surface area". Where is the opportunity? "All along the API lifecycle". Middleware (aka services & tooling) is popping up throughout the life cycle to help design, deploy, manage, test, monitor, integrate, and handle numerous other stops along the API life cycle. All the features of our grandfather's API gateway are now available as a microservices buffet, allowing us to weave middleware nuggets into any system to system integrations, as well as other web, mobile, and device applications.

Middleware as a concept has been distilled down into its smallest possible unit of value, made available via an API, deployed in a virtualized environment of your choosing, on the web, on-premise, or on-device. This new approach to delivering services and tooling is often still in the middle, but the word really doesn't do it justice anymore. I wanted to go through all the areas of my research and look for any signs of middleware or its ghost.

There are some common areas of my research that I think fit pretty nicely with earlier concepts of what middleware does or can do. I would say that Database, Deployment, Virtualization, Management, Documentation, Change Log, Testing, Performance, Authentication, Encryption, Security, Command Line Interface (CLI), Logging, Analysis, and Aggregation are some pretty clear targets. Of course, this is just my view of what middleware was to me, from say 1995 through 2007--after that, everything began to shift because of APIs.

As web APIs evolved, the reasons you'd buy or sell a tool or service to be in the middle of some meaningful action expanded, with APIs now being about Software Development Kits (SDK), Embeddables, Visualization, Webhooks, iPaaS, Orchestration, Real Time, Voice, Spreadsheets, Communication, Support, Containers, Serverless, and Bots. This is where things really began working in many different directions, making the term middle seem antiquated to me. You are now having to think beyond just any single application, and all your middleware is now very modular, API-driven, and can be plugged in anywhere along the life cycle--not just any application, but also any API--mind blown.

The schism in middleware for me began when companies started cracking open the SDK and using HTTP to give access to important resources, like compute and storage with AWS, and SMS with Twilio, offering a peek behind the curtain for developers. Then it expanded further to regular humans with embeddable tooling, iPaaS with services like Zapier, and other services and tools that anyone can implement, no coding necessary. All of this was fueled by mobile, and the need to break down the data, content, and algorithms for use in these mobile applications. Things have quickly gone from backend to frontend, to everywhere to everywhere. How do you get in the middle of that?

Anyways. I'm guessing this story might be a little incoherent. I'm just trying to find my words for future conversations on this topic. As the regular world is confronted with this API thing, they have a lot of questions, and they need help understanding how we got here. Honestly, I feel like I don't fully have a grasp on how we got here. So writing about it helps me to think through this stuff, and polish it a little bit for the next round of conversations.


A CKAN OpenAPI Spec

I was working on publishing an index of the General Services Administration (GSA) APIs I currently have in my API monitoring system, and I remembered that I had updated my Data.gov work, publishing a cache of the index on Github. As part of this work, I had left a note for myself about finding / creating an OpenAPI Spec for the Data.gov API, which, since it is a CKAN implementation, should be pretty easy--I hoped.

After Googling for a bit I found one created by the French government open data portal--thank you!! It looks pretty complete with 102 paths and 79 definitions, providing a pretty nice jumpstart for anyone looking to document their CKAN open data implementation.
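To give a sense of what one of those paths looks like, here is a small hand-rolled OpenAPI (Swagger 2.0) fragment for CKAN's package_search action--a sketch of my own, not an excerpt from the French definition:

    swagger: "2.0"
    info:
      title: CKAN API
      version: "3"
    basePath: /api/3/action
    paths:
      /package_search:
        get:
          summary: Search a portal's datasets
          parameters:
            - name: q
              in: query
              type: string
              description: The search query
            - name: rows
              in: query
              type: integer
              description: Maximum number of matching datasets to return
          responses:
            200:
              description: Matching datasets, wrapped in CKAN's standard response envelope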

This API definition can be used to generate API documentation using Swagger UI or ReDoc, generate SDKs using APIMATIC, and set up monitoring using Runscope or API Science. If you come across any other API definitions for CKAN, or any interesting documentation and other tools, please let me know--I want to keep aggregating CKAN related solutions.

Open source tools that have APIs, and have open API definitions like this, are the future. These are the tools that companies, institutions, organizations, and government agencies should be putting to work in their operations, because it helps reduce costs, and having an API that uses common API specifications means it will speak the same language as other important tools and services, increasing the size of the toolbox available for your API operations.


Using Github As An API Index And Data Store

I am spending a lot of time studying how companies are using Github as part of their software and API development life cycle, and how the social coding platform is used. More companies like Netflix are using Github as part of their continuous integration workflow, something that API service providers like APIMATIC are looking to take advantage of with a new wave of services and tooling. This usage of Github goes well beyond just managing code, and is making the platform more of an engine in any continuous integration and API life cycle workflow.

I run all my API research project sites on Github. I do this because it is secure and static, as well as introduces a very potent way to manage not just a single website, but over 200 individual open data and API projects. Each one of my API research areas leverages a Github Jekyll core, providing a machine readable index of the companies, news, tools, and other building blocks I'm aggregating throughout my research.

Recently, this approach has moved beyond the core areas of my API research, and is something I'm applying to my API discovery work, profiling the resources available with popular API platforms like Amazon Web Services, and across my government work, like with my GSA index. Each of these projects is managed using Github, providing a machine readable index of the disparate APIs in a single APIs.json index, which includes OpenAPI Specs for each of the APIs included. When complete, these indexes can provide a runtime discovery engine for APIs used as part of integrations, providing an index of single APIs, as well as potentially across many distributed APIs brought together into a single meaningful collection.
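Here is a stripped down sketch of what one of those APIs.json indexes looks like--the names and URLs are placeholders, but the structure follows the APIs.json format:

    {
      "name": "GSA APIs",
      "description": "A machine readable index of APIs from the General Services Administration.",
      "url": "https://example.github.io/gsa-apis/apis.json",
      "apis": [
        {
          "name": "Example GSA API",
          "humanURL": "https://example.github.io/gsa-apis/",
          "baseURL": "https://api.example.gov/v1",
          "properties": [
            {
              "type": "Swagger",
              "url": "https://example.github.io/gsa-apis/openapi.json"
            }
          ]
        }
      ]
    }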

I've started pushing this approach even further with my Knight Foundation funded Adopta.Agency work, making the Github repository not just a machine-readable index of many APIs--I'm also using the _data folder as a JSON or YAML data store, which can then also be indexed as part of the APIs.json and OpenAPI Spec for each project. I've been playing with different ways of storing and working with JSON and YAML in Jekyll on Github for a while now, but now I'm trying to develop projects that are a seamless open data store, as well as an API index, providing the best of both worlds.

This is not a model for delivering high performance and availability APIs. This is a model for publishing and sharing open data so that it is highly available, workable, and hosted on Github for FREE. Most of the data I work with is publicly available. It is part of what I believe in, and how I work on a regular basis. Making it available in a Github repo allows it to be forked, or even consumed directly, while offloading bandwidth and storage costs to Github. The GET layer for all my open data projects is all static, and dead simple to work with. Next, I'm working on a RESTfully augmented layer providing POST, PUT, and DELETE, as well as more advanced search solutions.

I am using the Github API for this augmented layer. I am just playing with different ways to proxy it and deliver the best search results possible. The POST, PUT, PATCH, and DELETE layer for each Github repository data store in the _data folder is pretty straightforward. My goal is to offload as much of the payload to Github as possible, but then augment what it can't do when it comes to more advanced usage. I'm looking for each API index and data store to act as a forkable engine for a variety of stops along the API life cycle, as well as throughout the delivery of the web, mobile, and device-based applications we are building on top of them.
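Here is a rough sketch of what that augmented write layer looks like--pushing a record into a _data folder using the Github repository contents API, with the repository, path, and token all placeholders:

    // Write a JSON record into a Jekyll _data folder via the Github contents API.
    // PUT /repos/:owner/:repo/contents/:path creates (or, with a sha, updates) a file.
    var record = { name: 'Example API', baseURL: 'https://api.example.gov/v1' };

    fetch('https://api.github.com/repos/example-org/example-data-store/contents/_data/apis.json', {
      method: 'PUT',
      headers: {
        'Authorization': 'token YOUR_GITHUB_TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        message: 'Add a record to the data store',
        content: btoa(JSON.stringify(record, null, 2)) // file contents must be base64 encoded
      })
    }).then(function (response) { return response.json(); })
      .then(function (data) { console.log('Stored at:', data.content.path); });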


The Reasons Why We Pull Back The Curtain On Technology


I was trying to explain to a business analyst this week the difference between SDK and API, terms which he said were often used interchangeably by the people he worked with. In my opinion, SDK and API can be the same thing, depending on how you see this layer of our web, mobile, and device connectivity. The Internet has been rapidly expanding this layer for some time now, and unless you are watching closely, you really don't see any difference between API and SDK--it is just where the software connects everything.

For me, an SDK is where the data, content, and algorithmic production behind the curtain is packaged up for you--giving you a pre-defined look at what is possible, prepared for you with a specific language or platform in mind. Most of the hard work of understanding what is going on has been translated and crafted, providing you with a set of instructions of what you can do with this resource in your application--your integration is pretty rigidly defined, with not much experimentation or hacking encouraged.

An API has many of the same characteristics as an SDK, but the curtain is pulled back on the production a little bit more. Not entirely, but you do get a little more of a look at how things work, what data and content are available, and which algorithmic resources are accessible. You still get the view which a provider intends you to have, but there are fewer assumptions about what you'll do with the resources put on the interface, leaving you to do more of the heavy lifting with how these resources will get put to use.
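A quick sketch of the difference, using Twilio's SMS resource as the example--the SDK call packages the production up for you, while the raw API call leaves the HTTP mechanics in your hands (the credentials and phone numbers are placeholders):

    // SDK: pre-defined, language-specific, the hard work done for you.
    // client.messages.create({ to: '+15551234567', from: '+15557654321', body: 'Hello' });

    // API: the curtain pulled back a little more--you craft the HTTP request yourself.
    fetch('https://api.twilio.com/2010-04-01/Accounts/YOUR_ACCOUNT_SID/Messages.json', {
      method: 'POST',
      headers: {
        'Authorization': 'Basic ' + btoa('YOUR_ACCOUNT_SID:YOUR_AUTH_TOKEN'),
        'Content-Type': 'application/x-www-form-urlencoded'
      },
      body: 'To=%2B15551234567&From=%2B15557654321&Body=Hello'
    });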

Most of the early motivations behind choosing an open approach to web APIs over more closed and proprietary SDKs, pulling back the curtain on how we develop software, weren't entirely intentional. Companies like Flickr and Twitter weren't trying to make their mark on the politics of how we integrate software, they were busy, and looking to encourage 3rd party developers to do the hard work of crafting the SDKs, and other platform integrations. The reasons for pulling back the curtain on how the sauce is made were purely about furthering their own needs, and not necessarily about moving the needle forward regarding how we talk about software integration--it was just business, enabled by tech (HTTP), and the politics came later, as a sort of side-effect.

Many traditional software developers and software-enabled hardware manufacturers have a hard time seeing this expansion in how we integrate with software, and are usually still very SDK oriented, even if there are many APIs right behind their SDK curtain. They do this for a variety of technical, business, as well as political reasons. It is my personal mission to help these folks understand a little more about this expansion in the software connectivity layer, and the benefits brought to the table by being more open. We need the client integration (SDK) and API to be loosely coupled from a technical, business, and political stance--to make things work in a web-enabled environment.

It isn't easy to help business folk see the importance of leaving this layer open. This is the damaging effect of the Oracle vs. Google Java copyright case: it gums up and slows this expansion, something we need to encourage and keep open, even if the bigtechcos don't fully get it. We are going to need this momentum to not just keep web, mobile, and device integration accessible and loosely coupled, but we will also need it to help make sure the growing number of algorithms that are impacting our worlds are more observable as well. Providers aren't going to be willing to pull back the curtain on the smoke and mirrors that are AI, machine learning, and other algorithmic varietals infecting our lives.

There are many reasons why we pull back (or don't) the curtain on technology, at the application, SDK, API, or algorithmic levels. I don't count on companies, institutions, and government agencies to ever do this for the right reasons. I'm counting on them doing it for all the wrong reasons. I am looking at incentivizing their competitors to do it, helping influence policy or law to direct the systems to behave in a certain way, and encouraging companies to be lazy, and keep the curtain pulled back because it's easier. Getting a peek behind the curtain, or convincing someone to pull it back, is never a straightforward conversation--you often have to use many of the same tactics and voodoo employed by tech providers to get what you want.


Where Are The Interesting API Bookmarklet Examples?

I have been kvetching about the quality of embeddable tooling out there, so I'm working on discovering anything interesting. I started with bookmarklets, which I think are one of the most underutilized, and simplest, examples of working with APIs on the web. Here are a couple of interesting bookmarklets for APIs out there:

  • Twitter - Probably the most iconic API and bookmarklet out there -- share to Twitter.
  • Pinboard - An API-driven bookmarklet for saving bookmarks that I use every day.
  • Hypothesis - A whole suite of API-driven bookmarklets for annotating the web.
  • Socrata - A pretty cool bookmarklet for quickly viewing documentation on datasets.
  • Tin Can API - A bookmarklet for recording self-directed learning experiences.

When you search for API bookmarklets you don't get much. Nothing stands out as being innovative. I will keep looking when I have time, and I'll keep curating and understanding any new approaches, examples, and tooling when possible.

Ultimately it just confounds me, because a simple JS bookmarklet triggering one or more API interactions is a no-brainer. We have examples of this in action, making an impact on login, sharing, annotation, and more, so why don't we have more examples? IDK. It is something I'll explore as I push forward my embeddable API research.
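To show how low the bar really is, here is a minimal bookmarklet sketch that sends the current page to a hypothetical bookmarking API--swap in whatever endpoint and key you want:

    javascript:(function () {
      // POST the current page's URL and title to a (hypothetical) bookmarking API.
      fetch('https://api.example.com/bookmarks?apikey=YOUR_KEY', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ url: location.href, title: document.title })
      }).then(function () { alert('Bookmarked!'); });
    })();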

Maybe I'm just missing something...


What Do You Get When You Search For The Schema.org Logo?

I spend a lot of time looking for logos of the companies that I write about. A lack of consistency around how companies manage (or don't) their logos, and make them available (or don't), regularly frustrates the hell out of me. While doing my regular work I found myself Googling for the Schema.org logo--what came up made me smile.

When you Google for the Schema.org logo you don't get the logo for Schema.org, you get the schema for a logo, which is the image property of a thing, used by brands, organizations, places, products, and services. I still had to do a separate search to find the Schema.org logo, but it did make me smile, and made me think even deeper about how we manage (or don't) our bits online.
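For anyone who hasn't seen it in the wild, this is what that search result was pointing at--the Schema.org logo property, expressed as JSON-LD for a made-up organization:

    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example Company",
      "url": "https://example.com",
      "logo": "https://example.com/images/logo.png"
    }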

Schema.org is so important. It keeps popping up on my radar, and I'm seeing more examples of it being used as part of JSON-LD API and web search implementations. As I work on my human services data specification (HSDS) project, I'm going to carve off time to weave JSON-LD and Schema.org into my storytelling. I can't just show people Schema.org and expect them to understand its importance; I'm going to have to show them with meaningful examples of it working out in the wild.


Having A Program For Researchers Baked Into Your API Operations

I wrote about the need for service level agreements dedicated to researchers who are depending on APIs a couple weeks ago, and while I was doing my work profiling AWS, I came across their approach to supporting research. Amazon has a dedicated program for research and technical computing on AWS, where they:

"helps researchers process complex workloads by providing the cost-effective, scalable and secure compute, storage and database capabilities needed to accelerate time-to-science. With AWS, scientists can quickly analyze massive data pipelines, store petabytes of data and share their results with collaborators around the world, focusing on science not servers."

Amazon has three distinct ways in which they are helping researchers, as well as the industries and people they impact:

  • AWS Research Cloud Program - The AWS Research Cloud Program helps you focus on science, not servers--all with minimal effort and confidence that your data and budget are safe in the AWS Cloud. Government and education-based researchers are eligible to receive program benefits. Apply to join the program, in order to access the AWS Research Cloud Handbook and other cloud resources built for researchers, by researchers.
  • AWS Research Initiative - The AWS Research Initiative (ARI) brings Amazon Web Services (AWS) and the National Science Foundation (NSF) together, with AWS providing AWS Cloud Services through provision of AWS Promotional Credits, awarded to NSF grant applicants to leverage Critical Techniques, Technologies and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA).
  • Open Data - Organizations around the globe are increasingly making their data open and available for the public to discover, access, and use. This is fueling entrepreneurship, accelerating scientific discovery, and creating efficiencies across many industries. Amazon Web Services provides a comprehensive tool kit for sharing and analyzing data at any scale. 

Amazon helps researchers by providing them with cloud resources for doing their research, assisting them with the budget of it all, while also opening them up to other grant opportunities, as well as providing a place to publish and share their open data, and put data from other researchers to work in their own research. It sounds like a pretty fine start to a more formal API researcher blueprint that other API providers can consider as part of their own operations.

Now that I have a base definition crafted (thanks, AWS!), I'll spend time looking for other implementations in the wild. Once I have a strong enough blueprint crafted, I want to start lobbying Twitter, Facebook, Instagram, and other leading social data platforms to consider doing the same. The data on these platforms provides an important look at the world around us, one that researchers should have access to, without costs, rate limits, or other obstacles in their way. Amazon's motivations in offering this type of package are clear--they want researchers to become customers. When it comes to other providers, I'm going to have to experiment with other incentive models--I am guessing they won't always be happy to get on board.

I will keep polishing the building blocks of my API research program as I find other examples of this. Once I have it refined a little more, I will publish it as one of my industry guides, providing a basic blueprint that any company can follow when setting up their own API program for research, but also making it something any individual looking to lobby for this kind of change can use as well. Amidst all the capitalist frenzy within the tech bubbles, it isn't always easy to convince folks of the importance of this type of access--especially when it costs them money.


API Management Is Getting More Modular And Composable

I've been keeping an eye on the API management space for about seven years now, and I have to say, even with all the acquisitions, IPOs, commoditization, etc., I am actually pretty happy with where the sector has evolved. API management always resembled its older cousin the API gateway for me, so when companies like 3Scale started offering a freemium model that I could deploy in the cloud with a couple lines of code--I jumped on the API management bandwagon. It was easy, and gave you all the service composition, onboarding, analytics, and metering tools you needed out of the box.

I have been pushing on providers to offer an open source API management solution for quite some time, and providers like WSO2 finally stepped up to bring an enterprise-grade solution to the table, and then solutions like API Umbrella also emerged for the government. Now in 2017, we have several open source solutions available to us, which makes me happy, but what I really like is how modular, versatile, and API-driven they are. I'm spending time learning more about my partners' solutions, and today I'm working my way through what is possible with Tyk.

Tyk is what API management should be. It has all the user and organization management, and assists you with the onboarding, authentication, service composition, rate limiting, and analytics that are core to any API management solution. However, what is really powerful for me is that you can deploy Tyk anywhere--in the cloud, on-premise, on-device--and all of its features are API-driven, and interconnected because of its APIs, webhooks, and other orchestration features. APIs aren't just about deploying a web API on a server, and making it available through a single base URL--they are everywhere, and our management tools need to reflect this.

We don't just have a single API stack anymore, where we use a handful of 3rd party APIs. We are weaving together many different internal and partner stacks, as well as a mess of 3rd party solutions. Tyk is your simple API management on a device, in your data center, or in the cloud. Tyk is also part of your API stack--the API management layer portion. I am constantly pushing on API providers to practice what they preach and make everything they do API first--Tyk is that. All the resources for the applications you build should be APIs, and all the infrastructure you use to design, deploy, manage, and orchestrate those APIs should be APIs--if you are a service provider in the space without an API, you will not be competitive.
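As a small example of that API first approach, here is a sketch of listing the API definitions a Tyk gateway is managing using its own REST API--assuming a gateway running locally on port 8080, with the secret from your tyk.conf:

    // List the API definitions loaded into a Tyk gateway (a sketch, not gospel).
    fetch('http://localhost:8080/tyk/apis', {
      headers: { 'x-tyk-authorization': 'YOUR_TYK_SECRET' }
    }).then(function (response) { return response.json(); })
      .then(function (apis) { console.log('Managing', apis.length, 'APIs'); });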

Anyways, full disclosure and all--Tyk is my partner. I'm writing about them because they give me money. However, I am not in the business of taking money to write about products that don't offer a real solution. All four of the logos you see on the left hand of my site are companies with products that I believe in, otherwise I wouldn't take money from them. Tyk's approach to API management represents where I think things should be. Open source. Modular. Composable. And, you pay for complexity and scale. API management being commoditized is a win for everyone in my opinion, but it's been a long road to get here, with a lot of casualties along the way. ;-)


API Evangelist Joins The Open API Initiative (OAI)

It was an interesting journey getting the API specification formerly known as Swagger into the Linux Foundation last year. After SmartBear donated the spec to the newly formed Open API Initiative, I was considering joining the governing body behind the spec, but with all I had going on last year, I didn't feel it was the right time. Participating in governance groups hasn't really ever been my thing, and there are a handful of large organizations involved, and who am I really? I'm just a single person ranting about APIs. However, in 2017 I am changing my tune and will be joining the Open API Initiative (OAI).

It is an important time for API definitions, and there is a lot riding on the success of OpenAPI, as well as API definitions in general. I feel like we need as many voices at the table as we possibly can. We need government agencies, enterprise, small businesses, startups, and individual analysts like myself. I feel pretty strongly that API definitions should be openly licensed, accessible, and following the most commonly used patterns available to us across the API sector. We don't need to all be speaking a different language when it comes to deploying compute resources or working with images, and API definitions are how we get there.

After a ramping up period, I will work with the OAI on their marketing strategy, helping tell stories, and learning about interesting implementations and use cases when possible--dovetailing nicely with what I am already doing. I'll be tuned into the conversation about the spec, but will definitely be standing off to the side as an observer for quite some time. I'm pretty stoked with what they've done with version 3.0 of the spec, and at this point, I have a lot to learn before I open up my mouth and chime in on anything. I'm just enjoying watching the personalities play out in the Slack room--this stuff should be a reality TV show, but that is another story.

It makes me happy to be part of the Open API Initiative (OAI). As I work on my regular API Evangelist research, and move forward projects like Open Referral, and their Human Services Data Specification (HSDS), the OpenAPI Specification is critical to everything I am doing. The OAI is extremely important right now, and I'm pleased to have a seat at the table of such an important organization, learning from the other smart folks moving the specification forward. 2017 will be an interesting year, not just for the OpenAPI Spec, but for all the services and open tooling being built on the spec, as well as the wide range of industries being touched by APIs, and putting common API specifications and schema to work.


The Unlimited Possibilities When You Become API Definition Fluent

I was on a regular check-in with one of my favorite API service providers this week, talking about some of the new features they are rolling out in coming weeks, and they demonstrated for me why API definitions are so important in 2017. APIMATIC got their start deploying SDKs for your API, but have quickly moved into providing API documentation, testing, continuous integration, and some additional stops that they have planned for release in coming months.

As I was sharing how happy I was with their movement into new areas of the API life cycle, and praising their agility when it came to rolling out new features, they responded with:

"The credit goes to machine-readable descriptions. Once you have them then there are endless possibilities. Thanks for motivating us to support all the description formats in first place :-)"

This is why being fluent in API definitions is so important to operating your API, or in APIMATIC's case, operating as an API service provider. There are an increasing number of API providers who support the importing and exporting of API definitions, but nobody has gone as fully API definition driven as APIMATIC has. They didn't just center their SDK, documentation, testing, and continuous integration solutions around API definitions, they exposed their API definition translation engine as a service called API Transformer, allowing anyone else to follow their lead.

When people first learn about API definitions, the 101 lessons usually center around API documentation, and maybe secondarily the generation of server or client-side SDKs. However, API definitions are rapidly being used for every other stop along the life cycle, from design to deprecation, as well as the unknown future stops along the way that only become possible because all your API resources are well defined, and machine-readable. It is this agility and flexibility that I want to incentivize, helping companies be as effective as APIMATIC has been when it comes to delivering new API-driven solutions.


Sharing Compute Costs For Open Data And API Consumers Using The Cloud

I recently wrote about how Algorithmia offloads the compute costs around machine learning using AWS, structuring their image style transfer modeling so that the consumer pays the cost of deploying an AWS GPU instance. It is an interesting way to shift the burden of paying for the hard costs around API operations.

Another interesting approach, which I extracted from a story I wrote yesterday, is from Amazon Web Services (AWS), with their approach to open data. Amazon Public Datasets are available as Amazon Elastic Block Store (Amazon EBS) snapshots and/or Amazon Simple Storage Service (Amazon S3) buckets. AWS hosts the master copy of the dataset, and when you want to use it, you fire it up in your AWS account and get to work.

I find some of these approaches to managing, and offloading, the heavy costs of working with algorithms, datasets, and other resources interesting. I especially find this approach interesting because it is in the service of public data, providing an option for how the private sector can help share the load in managing government, research, and other public data, offloading the heavy costs to those who are going to put the data to use.

I will figure out how to add this to my API monetization and plans research, providing a list of these types of on-premise, wholesale, and containerized or virtualized approaches to making data, content, and algorithms available. I will definitely keep looking for unique approaches to API deployment like this--after a couple of these grabbing my attention, I'm feeling like I am seeing another shift in both the technology and business of how we deploy and consume APIs.


API Embeddables In A Conversational Interface World

I would say that embeddable tooling is one of the saddest areas of the API space for me in recent years. When it comes to buttons, badges, widgets, and other embeddable goodies that put APIs to work, the innovation has been extremely underwhelming. Login, like, share, and a handful of other embeddable tools have taken hold, but there really isn't any sort of sophisticated approach to putting APIs to work using web, mobile, or browser embeddables.

The only innovation I can think of recently is from Zapier, with their Push by Zapier solution--allowing you to orchestrate with the zaps you've created, putting APIs to work using the variety of recipes they've cooked up. I'm thinking that I will have to step up my storytelling around what is possible with Push by Zapier, helping folks understand the possibilities. Push by Zapier is a Google Chrome extension, making it more browser than embeddable, but the approach transcends embeddable, browser, and even the conversational (bot, voice, etc.) for me.

It's all about getting users frictionless access to API driven actions. Whether you are building Zapier pushes, Alexa Skills, or a Slackbot, you need to trigger, and daisy chain together, API driven requests and responses ending in the desired location. I'm just looking for dead simple ways of doing all of this in my browser, embedded on web pages, and anywhere I desire. I'm just looking for a way to embed, link, and display the doorway to meaningful actions that anyone can implement, from wherever they want--where the action takes place is up to the API provider.

I want a vocabulary of simple and complex embeddables, either just HTML, some JS, and maybe a little CSS magic to round it off. When I explain to someone in a Tweet or email that they can publish a news article to a Google spreadsheet, I want my sentence to accomplish what I'm trying to articulate. I want to be able to speak in API--I want to be able to develop an IT operational outline and make it executable using the wealth of API resources available to me over at AWS. I want to be able to craft meaningful tasks, and easily share them with others, replicating them for others in their environment, using their credentials, without confusing the hell out of them.
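The kind of thing I have in mind is no more complicated than this--a sketch of an HTML + JS embeddable that triggers an API driven action from any web page, with the endpoint and key being hypothetical:

    <!-- A minimal embeddable: one button, one API call, drop it into any page. -->
    <button id="save-article">Save article to spreadsheet</button>
    <script>
      document.getElementById('save-article').addEventListener('click', function () {
        fetch('https://api.example.com/sheets/articles?apikey=YOUR_KEY', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ url: location.href, title: document.title })
        });
      });
    </script>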

Anyways, I am just ranting about this collision of worlds as I see it unfold. I suspect the lack of innovation around embeddables is more about proprietary stances on APIs, and a platformification of things--meaning folks are investing in speaking in these ways via their channel, on their platforms, not in an open web approach for everyone. I think Zapier has a strong lead in this area, with Slack and Alexa trailing behind. The problem is that all the players are just too focused on their own implementations, and not a wider web edition of their conversational interfaces. I'll rant in a future post about how the politics of API operations and closed views on API IP are stunting the growth of meaningful communication using API resources.


Maintaining On Premise Capacity As Well As Cloud Expertise

The "cloud" has done some very interesting things for individuals, companies, organizations, institutions, and government agencies, and is something that shouldn't be ignored. However, I watch organizations of all shapes and sizes make a similar mistake when it comes to outsourcing too much of their operations to vendors, and cloud services. Each organization's needs will be different, but technology leaders should be mindful of how they invest in talent, alongside how much they invest in external services.

I struggle with this in my own business on a daily basis, but I've also seen small businesses make the same mistake, as well as witnessed the damage of this all the way up to the Department of Veterans Affairs (VA) in the federal government. The VA has outsourced its technological soul to vendors, leaving the ability to make sound architectural choices to those who control the puppet strings outside of the agency. I'm seeing a lot of smaller organizations doing this same thing, but with cloud services (outsource 2.0) instead of the traditional software vendor (outsource 1.0)--outsourcing their technological soul to the cloud.

Outsourcing capacity is something I struggle with constantly. There is only so much I can do as a one-person shop, but there is also a limit on what I can spend on services in the cloud, creating a naturally occurring balancing effect. In addition to this natural effect, I am also constantly evaluating what I should be learning myself, investing in my own internal capacity, as opposed to outsourcing it. I have to be extra careful regarding who I depend on, because if a service changes, is shut down, or maybe prices me out of range--it could be pretty damaging to my operations. I am hyper aware of this vicious cycle when it comes to my dependence on the cloud, or any single service.

I was just having a conversation around the hard decisions that a school district was having to make in this area. They have to decide whether to make the big contract purchase, hire the talent, or invest in talent locally. These are tough decisions to make, but in the end I always lean towards investment in on-premise capacity, working to be as critical as possible when it comes to adopting cloud services, and being careful about investing only in external expertise. There is no perfect answer, and investing in internal capacity might be costly, but it will help get you through the hard times that depending on cloud services will not.


Helping Carry The Load When It Comes To Public Data And APIs

I am finally getting back to my Knight Foundation funded grant work on Adopta.Agency, and I'm investing some research cycles into finding some tools that civic, science, journalism, and other public data activists can put to use in their critical work. We've seen folks rise to the occasion when it came to climate data, helping migrate vital resources from federal government servers, something I'd like to see happen across other business sectors, as well as continue as an ongoing thing throughout this administration, and beyond.

I have long been a proponent of the private sector sharing the load when it comes to managing public data and APIs. After leaving DC during the 2013 federal government shutdown, I began evangelizing the importance of individuals and companies stepping up to help with the heavy lifting of making sure public data is available when we need it most--resulting in my Adopta.Agency work. I feel pretty strongly that the federal government has an important role to play in this conversation, but I also feel that the private sector needs to step up and help--additionally, I feel that it is important that individuals step up and be present in the discussion.

Github plays a central role in my Adopta.Agency work. Any government data I turn into an API lives on Github as JSON, CSV, or another machine-readable format--taking advantage of the Github platform for managing it, as well as helping me eliminate or significantly reduce the costs of managing public data. I spend money each month on hosting my public data work (it's my addiction), but I couldn't do it without a place to park it and offset storage and bandwidth costs. With this in mind, I'm spending time trying to find other services that public data and API folks can put to work for them, either eliminating or reducing the cost of managing open public data on the web.

My Adopta.Agency toolkit starts with Github, and extends to the open source blueprint for adopting government data (which lives on Github)--now I want to add some additional resources that folks can consider. First on my list is AWS Open Data, where organizations can publish their open data sets, and consumers can deploy datasets for use in their own AWS infrastructure--smart. Next up is Google Public Datasets, where you can browse existing datasets and add your own. That is my list so far. I'm spending some time looking for other services that users can use for free, or at extremely low costs, as part of their public data work, and I need your help.

If you know of any other services that users can use, let me know. I'm not just looking for storage solutions. Ideally, it would be a full stack of services that public data advocates could use to acquire, store, manage, and deploy public data as an API--additionally, any services that would help them throughout the API life cycle. I'm looking for a more formal and established approach, specifically something with a URL, logo, description, and landing page that I can add to my Adopta.Agency Toolbox. If you have any questions let me know, and you can leave any thoughts, issues, or suggestions on the Adopta.Agency Github issues.


Dedicated Space For Telling The Stories Of Your API Consumers

Telling the story of what your API accomplishes may seem like a pretty simple, straightforward thing, but you'd be surprised how many API providers DO NOT do this on a regular basis, or do not have a dedicated stories, showcase, or similar section on their website. This is why I beat this drum on a regular basis--if you do not tell the story of the cool things people are doing with your API or your API services, they will never know how your solution works, and will probably never think of your service again--even when they actually have the specific problem that you solve.

To help demonstrate this in a very meta way, I am going to showcase how my clients showcase their clients. Deep man. Next up is my partner DreamFactory, providing seven very compelling stories about how their API deployment and management platform is being put to use:

  • TECHeGO - DreamFactory is at the core of TECHeGO's powerful ERP platform.
  • Verizon Cloud - Discover how DreamFactory gave Verizon Cloud a turnkey developer portal.
  • Intel - Learn how Intel uses DreamFactory to expose legacy SQL data as a powerful REST API for AngularJS.
  • Maxwell Lucas - Read how Maxwell Lucas uses DreamFactory to deliver a world-class travel advisory application for their enterprise companies.
  • Senske Services - Find out how Senske Services uses DreamFactory to REST-enable Microsoft SQL Server for a mobile ticketing application.
  • The Binary Workshop - Discover why Binary Workshop uses DreamFactory as the REST API platform for their co-working SaaS applications.
  • Shortrunposters.com - Find out how DreamFactory enabled communication between the shop floor, a backend MySQL database, and a proprietary e-commerce portal.

Honestly, this type of blog post makes for easy content for me, and obviously, it makes my partner DreamFactory happy, but more importantly, it is yet ANOTHER reminder for you to tell the story of the people who are using your APIs and services. Telling stories is essential to obtaining new customers, but it is also essential for analysts like me to understand the real things being done with your APIs, allowing us to cut through the hype of the tech space.

Take another look at your API storytelling apparatus. Do you have an engineering blog, or a blog dedicated to your API? Do you have stories, testimonials, a customer showcase, or other dedicated areas for telling stories about your customers? Do you actively share these stories via your company's social channels like Twitter, Facebook, LinkedIn, and more? These are just a few of the questions you should be asking.