{"API Evangelist"}

Reminding Myself Of Why I Do API Evangelist

This is my regular public service reminder of why I do API Evangelist. I do not evangelize APIs because I think everybody should be doing them, that they are the solution to all of our problems, or because I have an API I want you to buy (I have other things for you to buy). I do API Evangelist because I want to better understand how platforms like Facebook, Twitter, Uber, and others are operating and impacting our personal and professional lives.

I do believe in APIs as an important tool in our professional toolboxes, but Silicon Valley, our government(s), and many other bad actors have shown me that APIs will more often be used for shady things, rather than the positive API vision I have in my head. I still encourage companies, organizations, institutions, and agencies to do APIs, but I spend an equal amount of time ensuring people are doing APIs in a more open and equitable way, while still encouraging folks to embark on their API journeys--taking more control over how we store, share, and put our bits and bytes to work each day.

APIs are playing a role in almost every news story we read today, from fake news and elections, to cybersecurity, healthcare with the FHIR API standard, banking with PSD2, and automobiles and transportation with Tesla and Uber. I can keep going all day long, talking about the ways APIs are influencing and delivering vital aspects of our personal and professional lives. In ALL of these situations it's not the API that is important, it is the access, availability, and observability of the technology that is impacting the lives of humans--APIs are just the digital connector where our mobile phones and other devices are being connected to the Internet.

I like to regularly remind myself why the fuck I'm doing API Evangelist, so I don't burn out like I have before and end up roaming the streets foaming at the mouth again. I also do it to remind people of why I do API Evangelist, so they don't just think I'm a cheerleader for technology (rah rah rah, gooo APIs). My mission isn't just about APIs, it's ensuring APIs are in place and allow us to better understand the inputs and outputs of the technology invading all aspects of our lives. Without the observability into the backend systems and algorithms that are driving our personal and professional lives, we are screwed--which is why I do API Evangelist, to help ensure there is observability into how technology is impacting our world.


Considering Standards In Our API Design Over Being A Special Snowflake

Most of the APIs I look at are special snowflakes. The definitions and designs employed are usually custom-crafted without considering other existing APIs, or standards that are already in place. There are several contributing factors to why this is, ranging from the types of developers who are designing APIs, to incentive models put in place because of investment and intellectual property constraints. So, whenever I find an API that is employing an existing standard, I feel compelled to showcase it and help plant the seed in others' minds that we should be speaking a common language instead of always being a special snowflake.

One of these APIs that I came across recently was the Google Spectrum Database API, which employs a standard defined by the IETF Protocol to Access White-Space (PAWS). I wouldn't say it is the best-designed API, but it does follow a known standard that is already in use by an industry, which in my experience can go further than having the best-designed API. The best product doesn't always win in this game; sometimes it is just about getting adoption with the widest possible audience. I am guessing that the Google Spectrum Database API is targeting a different type of engineering audience than their more modern machine learning and other APIs are, so following standards is probably more of a consideration.
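
To make this a bit more concrete, here is a rough sketch in Python of what a PAWS-style JSON-RPC init call might look like, based on my read of RFC 7545--the endpoint URL is a placeholder, and the exact method and field names should be double-checked against the RFC and Google's own documentation before relying on any of this.

    # Rough sketch of a PAWS-style JSON-RPC request (RFC 7545); illustrative only.
    import requests

    PAWS_ENDPOINT = "https://example.com/paws"  # placeholder, not the actual Google endpoint

    payload = {
        "jsonrpc": "2.0",
        "method": "spectrum.paws.init",
        "id": "1",
        "params": {
            "type": "INIT_REQ",
            "version": "1.0",
            "deviceDesc": {"serialNumber": "sn-0001", "fccId": "fcc-id-placeholder"},
            "location": {"point": {"center": {"latitude": 37.0, "longitude": -121.9}}},
        },
    }

    response = requests.post(PAWS_ENDPOINT, json=payload)
    print(response.json())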

I wish more APIs would share a little bit about the thoughts that went into the definition and design of their APIs, sharing their due diligence of existing APIs and standards, and other considerations that were included in the process of crafting an API. I'd like to see some leadership in this area, as well as some folks admitting that they didn't have the time, budget, expertise, or whatever other reason led to them being a special snowflake. It is a conversation we should be having, otherwise we may never fully understand why we aren't seeing the adoption we'd like to see with our APIs.


The OpenAPI Toolbox And My API Definition Research

I have the latest edition of my API definition research published, complete with a community-driven participation model, but before I move on to my design, deployment, and management guides, I wanted to take a moment and connect my OpenAPI Toolbox to this research.

My API definition research encompasses any specification, schema, or authentication and access scope used as part of API operations, providing a pretty wide umbrella. I am always on the hunt for specifications, schema, media types, generators, parsers, converters, as well as semantics and discovery solutions that are defining the layers of the API space. 

This is one reason I have my OpenAPI Toolbox, which helps focus my research into the fast-growing ecosystem developing around the OpenAPI specification. I'm always looking for people who are doing anything interesting with the OpenAPI specification. When I find one I get to work crafting a title, description, image, and project link, so that I can add it to the OpenAPI Toolbox YAML file driving the toolbox website. If you are developing any open tooling that uses the OpenAPI specification please let me know by submitting a Github issue for the toolbox, or if you are feeling brave...go ahead and add yourself and submit a pull request.
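
For anyone curious what an entry looks like under the hood, here is a minimal sketch of how a new tool might get appended to the toolbox YAML--the file name and field names are my own shorthand for the structure described above, so treat them as illustrative rather than the repository's actual schema.

    # Sketch of appending a new tool entry to the toolbox YAML; field names are illustrative.
    import yaml  # pip install pyyaml

    new_tool = {
        "title": "Example OpenAPI Widget",
        "description": "An open tool doing something interesting with the OpenAPI specification.",
        "image": "https://example.com/logo.png",
        "url": "https://github.com/example/openapi-widget",
    }

    with open("tools.yaml") as handle:
        tools = yaml.safe_load(handle) or []

    tools.append(new_tool)

    with open("tools.yaml", "w") as handle:
        yaml.safe_dump(tools, handle, default_flow_style=False)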

My OpenAPI Toolbox is joined at the hip with my API definition research. The toolbox is just a sub-project of my wider API definition research. If you are doing anything interesting with API definitions, schema, or scope please let me know, and I am happy to add it to my API definition research. The best of the best from my API definition research and the OpenAPI Toolbox will be included in my industry guide, receiving wider distribution than just my network of API Evangelist sites.


API Evangelist Industry Guide To API Definitions

I keep an eye on over 70 areas of the API sector, trying to better understand how API providers are getting things done, and what services and tooling they are using, while also keeping my perspective as an API consumer--observing everything from the outside-in. The most important area of my research is API definitions--where I pay attention to the specifications, schema, scopes, and other building blocks of the API universe. 

The way my research works is that I keep an eye on the world of APIs by monitoring the social media, blogs, Github, and other channels of companies, organizations, institutions, and agencies doing interesting things with APIs. I curate information as I discover and learn across the API sector, then I craft stories for my blog(s). Eventually, all of this ends up being published to the main API Evangelist blog, as well as the 70+ individual areas of my research, from definition to deprecation.

For a couple of years now I have also published a guide for the top areas of my research, including API definitions, design, deployment, and management. These guides have always been a complete snapshot of my research, but in 2017 I am rebooting them to be a more curated summary of each area of my research. My API definition guide is now ready for prime time after receiving some feedback from my community, so I wanted to share it with you.

I am versioning and managing all of my API industry guides using the Github repositories in which I publish each of my research areas, so you can comment on the API definition guide, and submit suggestions for future editions, using the Github issues for the project. My goal with this new edition of the API Evangelist API definition guide is to make it a community guide to the world of API definitions, schema, and scopes. So please, let me know your thoughts.

I am keeping my API definition guide available for free to the public. You can also purchase a copy via Gumroad if you'd like to support my work--I depend on your support to pay the bills. I will also be funding this work through paid one-page or two-page articles in future editions, as well as some sponsor blogs located on certain pages--let me know if you are interested in sponsoring. Thanks as always for your support, I hope my API definitions guide helps you better understand this important layer of the API universe, something that is touching almost every aspect of the API and software development life cycle.


Google Support Buttons

I talked about the gap between developer relations and support at Google, something that Sam Ramji (@sramji) has acknowledged is being worked on. Support for a single API can be a lot of work, and is something that gets exponentially harder with each API and developer you add to your operations, and after looking through 75 of the Google APIs this weekend, I can see evidence that Google is working on it.

While many of the Google APIs still have sub-standard support, when you look at Google Sheets you start seeing evidence of their evolved approach to support, with a consistent set of buttons that tackle many of the common areas of API support. For general questions, Google provides two buttons linked to Stack Overflow:

The search button just drops you into Stack Overflow with the tag "google sheets api", and the ask a new question button drops you into the Stack Overflow form for submitting a new question. For bug reporting, they provide a similar set of buttons:
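
The links behind these buttons follow the standard Stack Overflow URL patterns--roughly like the sketch below, though the exact tags and parameters Google uses are my guess from clicking around.

    # Approximate Stack Overflow URL patterns behind the question buttons; illustrative only.
    support_buttons = {
        "search_questions": "https://stackoverflow.com/questions/tagged/google-sheets-api",
        "ask_question": "https://stackoverflow.com/questions/ask?tags=google-sheets-api",
    }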

The search and report bug buttons drop you into the Google Code issues page for Google Sheets, leveraging the issues management for the Google Code repository--something that can just as easily be done with Github issues. Then lastly, they provide a third set of buttons for when you are looking to submit a feature request:

Even though there is a typo on the first button, they also leverage Google Code issue management to handle all feature requests. Google is obviously working to centralize bug and feature reporting, and support management, using Google Code--something I do across all my API projects using Github organizations, repositories, and their issue management. I'm guessing Google Support is tapping into Google Code to tackle support across projects at scale.

These support buttons may seem trivial, but they represent a move by the API giant to be more consistent in how they approach support across their API offerings--something that can go a long way in my experience. It gives your API consumers a familiar and intuitive way to ask questions, submit bugs, and suggest new features. Equally important, I'm hoping it is also giving Google a consistent way to tackle support for their APIs in a meaningful way that meets the needs of their API consumers.


A Community Strategy For My API Definition Guide

I have published the latest edition of my API definition guide. I've rebooted my industry guides to be a more polished summary version of my research, instead of the rougher, more comprehensive version I've been publishing for the last couple of years. I'm looking for my guides to better speak to the waves of new people entering the API space, and help them as they continue on their API journey.

In addition to being a little more polished, and having more curated content, my API guides are now going to also be more of a community thing. In the past I've kept pretty tight control over the content I publish to API Evangelist, only opening up the four logos to my partners. Using my API industry guides I want to invite folks from the community to help edit the content, and provide editorial feedback--even suggesting what should be in future editions. I'm also opening up the guides to include paid content that will help pay for the ongoing publication of the guides with the following opportunities available in the next edition:

  • One Page Articles - Sponsored suggested topics, where I will craft the story and publish it in the next edition of the guide--also published on the API Evangelist blog after the guide is published.
  • Two Page Articles - Sponsored suggested topics, where I will craft the story and publish it in the next edition of the guide--also published on the API Evangelist blog after the guide is published.
  • Sponsor Slot - On the service and tooling pages there are featured slots, some of which I will be giving to sponsors who have related products and services.
  • Private Distribution - Allow for private distribution of the industry guide, to partners, and behind lead generation forms, allowing you to use API Evangelist research to connect with customers.

Even though I will be accepting paid content within these industry guides, and posts via the blog now, it will all be labeled as sponsored, and I will also still be adding my voice to each and every piece--if you know me, or read the API Evangelist blog, you know what this means. I'm looking to keep the lights on, while also opening up the doors for companies in the space to join in the conversation, as well as the average reader--allowing anyone to provide feedback and suggestions via the Github issues for each area of research.

My API definition research is just the first to come off the assembly line. I will be applying this same model to my design, deployment, and management research in coming weeks, and eventually the rest of my research as it makes sense. If there is a specific research area you'd like to see get attention or would be willing to sponsor in one of the ways listed above, please let me know. Once I get the core set of my API industry research guides published in this way, I will be working on increasing the distribution beyond just my network of sites, and the API Evangelist digital presence--publishing them to Amazon, and other prominent ecosystems.

I also wanted to take a moment and thank everyone in the community who helped me last year, and everyone who is helping make my research, and the publishing of these industry guides, a reality. Your support is important to me, and it is also important to me that my research continues, and is as widely available as it possibly can be.


What Will It Take To Evolve OpenAPI Tooling to Version 3.0

I am spending some time adding more tools to my OpenAPI Toolbox, and I'm looking to start evaluating what it will take for tooling providers to evolve their solutions from version 2.0 of the OpenAPI Spec to version 3.0. I want to better understand what it will take to evolve the documentation, generators, servers, clients, editors, and other tools that I'm tracking on as part of my toolbox research.

I'm going to spend another couple of weeks populating the toolbox with OpenAPI solutions, getting them entered with all the relevant metadata. Once I feel the list is good enough, I will begin reaching out to each tool owner, asking what their OpenAPI 3.0 plans are. It will give me a good reason to reach out and see if anyone is even home. I'm assuming that a number of the projects are abandoned, and that even active owners may not have the resources necessary to go from 2.0 to 3.0. Regardless, this is something I want to track on as part of this OpenAPI Toolbox research.

The overall architecture of OpenAPI shifted pretty significantly from 2.0 to 3.0. Things are way more modular and reusable, something that will take some work to bring out in most of the tooling areas. Personally, I'm pretty excited about the opportunities when it comes to API documentation and API design editors with OpenAPI 3.0 as the core. I am also hoping that developers step up to make sure that generators, as well as server and client code, become available in a variety of programming languages--we will need this to make sure we keep the momentum that we've established with the specification so far.
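
To give a sense of the shift, here is a simplified sketch of the top-level structure of the same tiny API described in 2.0 and in 3.0, shown as Python dicts--simplified, and not complete or validated definitions.

    # Simplified comparison of OpenAPI 2.0 vs 3.0 top-level structure; not complete definitions.
    swagger_2 = {
        "swagger": "2.0",
        "info": {"title": "Pets", "version": "1.0.0"},
        "host": "api.example.com",
        "basePath": "/v1",
        "schemes": ["https"],
        "paths": {"/pets": {"get": {"responses": {"200": {"description": "OK"}}}}},
        "definitions": {"Pet": {"type": "object"}},
    }

    openapi_3 = {
        "openapi": "3.0.0",
        "info": {"title": "Pets", "version": "1.0.0"},
        "servers": [{"url": "https://api.example.com/v1"}],  # host, basePath, and schemes fold into servers
        "paths": {"/pets": {"get": {"responses": {"200": {"description": "OK"}}}}},
        "components": {"schemas": {"Pet": {"type": "object"}}},  # definitions move under components
    }

The reworked request bodies, plus the new callbacks and links, are where I expect much of the tooling work to land.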

If you are looking at developing any tooling using OpenAPI 3.0 I'd love to hear from you. I'd like to hear more about what it will take to either migrate your tool from version 2.0 to 3.0, or to get up and running on 3.0 from scratch. I'm going to get to work on crafting my first OpenAPI definition using version 3.0, then I'm going to begin playing around with some new approaches to API documentation, and possibly an API editor or notebook that takes advantage of the changes in the OpenAPI Specification.


The Ability To Deploy APIs In AWS, Google, or Microsoft Clouds

I spent a day last week at the Google Community Summit, learning more about the Google Cloud roadmap, and one thing I kept hearing them focus on was the notion of being able to operate on any cloud platform--not just Google. It's a nice notion, but how real of a concept is it to think we could run seamlessly on any of the top cloud platforms--Google, AWS, and Microsoft?

The concept is something I'll be exploring more with my Open Referral, Human Services Data Specification (HSDS) work. It's an attractive concept, to think I could run the same API infrastructure on any of the leading cloud platforms. I see two significant hurdles in accomplishing this: 1) getting the developer and IT staff (me) up to speed, and 2) ensuring your databases and code all run and scale seamlessly on whichever platform you operate in. I guess I'd have to add 3) ensuring your orchestration and continuous integration work seamlessly across all the platforms you operate on.

I am going to get to work deploying an HSDS compliant API on each of the platforms. My goal is to have just a simple yet complete API infrastructure running on Amazon, Google, and Microsoft. It is important to me that these solutions provide a complete stack helping me manage DNS, monitoring, and other important aspects. I'm also looking for there to be APIs for managing all aspects of my API operations--this is how I orchestrate and continuously integrate the APIs which I roll out.
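
As a starting point, here is the kind of bare-bones, portable API I have in mind--a single Python service exposing an HSDS-style resource that could be containerized and run on AWS, Google, or Microsoft. The paths and fields are illustrative, not a complete or validated HSDS implementation.

    # Bare-bones, cloud-portable sketch of an HSDS-style endpoint; illustrative only.
    from flask import Flask, jsonify

    app = Flask(__name__)

    ORGANIZATIONS = [
        {"id": "1", "name": "Example Community Services", "description": "Human services provider."},
    ]

    @app.route("/organizations")
    def list_organizations():
        # In a real deployment this would be backed by a database running on whichever cloud.
        return jsonify(ORGANIZATIONS)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)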

Along with each API that I publish, I will do a write-up on what it took to stand up each one, including the cloud services I used, and their API definitions. I am pretty content (for now) on the AWS platform, leveraging Github Pages as the public facade for my projects, with each repository acting as the platform gears of API code and definitions. Even though I'm content where I am at, I want to ensure the widest possible options are available to cities, and other organizations, who are looking to deploy and manage their human service APIs.


API Environment Variable Autocomplete And Tooltips In Postman

The Postman team has been hard at work lately, releasing their API data editor, as well as introducing variable highlighting and tooltips. The new autocomplete menu contains a list of all the variables in the current environment, followed by global variables, making your API environment setups more accessible from the Postman interface--a pretty significant time saver once you have your environments set up properly.
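
If you haven't used environments before, the concept is simple: any {{variable}} in a request gets swapped out with the value from the active environment. Here is a toy Python version of the substitution, just to illustrate the idea--this is not Postman's actual implementation.

    # Toy illustration of Postman-style {{variable}} substitution; not Postman's actual code.
    import re

    environment = {
        "baseUrl": "https://api.example.com",
        "apiKey": "my-secret-key",
    }

    def resolve(template, env):
        # Replace every {{name}} placeholder with its value from the environment.
        return re.sub(r"\{\{(\w+)\}\}", lambda match: env.get(match.group(1), ""), template)

    print(resolve("{{baseUrl}}/sheets?key={{apiKey}}", environment))
    # -> https://api.example.com/sheets?key=my-secret-key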

This is a pretty interesting feature, but what makes me most optimistic is when this approach becomes available for parameters, headers, and some of the data management features we are seeing emerge with the new Postman data editor. It all feels like the UI equivalent of what we've seen emerge in the latest OpenAPI 3.0 release, helping us better manage and reuse the schema, data, and other bits we put to use across all of our APIs.

Imagine when you can design and mock your API in Postman, crafting your API using a common vocabulary, reusing environment variables, API path resources, parameters, headers, and other common elements already in use across operations. Imagine when I get a tooltip suggesting that I use the Schema.org vocabulary, or possibly even RFCs for dates, currency, and other common definitions. Anyways, I'm liking the features coming out of Postman, and I'm also liking that they are regularly blogging about this stuff, so I can keep up to speed on what is going on, eventually cover it here on the blog, and include it in my research.


Tracking On Licensing For The Solutions In My OpenAPI Toolbox

I wanted to provide an easy way to publish and share some of the tools that I'm tracking on in the OpenAPI ecosystem, so I launched my OpenAPI Toolbox. In addition to tracking on the name, description, logo, and URL for OpenAPI tooling, I also wanted to categorize the tools, helping me better understand the different types that are emerging. As I do with all my research, I published the OpenAPI Toolbox as a Github repository, leveraging its YAML data core to store all the tools.

It will be a never-ending project for me to add, update, and archive abandoned projects, but before I got too far down the road I wanted to also begin tracking on the license for each of the tools. I'm still deciding whether or not I want the toolbox to exclusively contain openly licensed tools, or look to provide a more comprehensive directory of tooling that includes unknown and proprietary solutions. I think for now I will just flag any tool I cannot find a license for, and follow up with the owner--it gives me a good excuse to reach out and see if anyone is home.
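
Flagging the missing ones is simple enough to script--something like the sketch below, where again the file name and field names are my assumptions about my own YAML structure rather than the repository's exact schema.

    # Sketch of flagging toolbox entries without license details; field names are illustrative.
    import yaml  # pip install pyyaml

    with open("tools.yaml") as handle:
        tools = yaml.safe_load(handle) or []

    for tool in tools:
        if not tool.get("license"):
            # Flag the tool so I know to follow up with the owner.
            print("No license found for: " + tool.get("title", "unknown tool"))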

Eventually, I want to also provide a search for the toolbox that allows users to search for tools and filter by license. Most of the tools have been Apache 2.0 or MIT licensed, details that I will continue tracking and reporting on. If you know of any tooling that employs the OpenAPI Specification that should be included, feel free to submit a Github issue for the project, or submit a pull request on the repository and add it to the YAML data file that drives the OpenAPI Toolbox.


Thinking About Schema.org's Relationship To API Discovery

I was following the discussion around adding a WebAPI class to Schema.org's core vocabulary, and it got me to think more about the role Schema.org has to play, not just in our API definitions, but also in significantly influencing API discovery--meaning we should be using Schema.org as part of our OpenAPI definitions, providing us with a common vocabulary for communicating around our APIs, while also empowering the discovery of APIs. 

When I describe the relationship between Schema.org and API discovery, I'm talking about using the pending WebAPI class, but I'm also talking about using common Schema.org vocabulary within API definitions--something that will open the definitions up to discovery because they employ a common schema. I am also talking about how we can leverage this vocabulary in our HTML pages, helping search engines like Google understand there is an API service available.
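
Here is a hedged sketch of the kind of JSON-LD markup I am talking about, generated with a little Python so I can drop it into a page--the property names reflect my reading of the pending WebAPI proposal and should be checked against whatever Schema.org ultimately publishes.

    # Sketch of JSON-LD for the pending Schema.org WebAPI class; property names are my best guess.
    import json

    web_api = {
        "@context": "http://schema.org",
        "@type": "WebAPI",
        "name": "Example Human Services API",
        "description": "Read and search organizations, locations, and services.",
        "documentation": "https://example.com/developer",
        "provider": {"@type": "Organization", "name": "Example City"},
    }

    print('<script type="application/ld+json">')
    print(json.dumps(web_api, indent=2))
    print("</script>")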

I will also be exploring how I can better leverage Schema.org in my APIs.json format, better leveraging a common vocabulary for describing API operations, not just an individual API. I'm looking to expand the opportunities for discovery, not limit them. I would love all APIs to take a page from the hypermedia playbook, and have a machine-readable index for each API, with a set of links present with each response, but I also want folks to learn about APIs through Google, ensuring they are indexed in a way that search engines can comprehend.

When it comes to API discovery I am primarily invested in APIs.json (because it's my baby) for describing API operations, and OpenAPI for describing the surface area of an API, but I also want this to map to the very SEO-driven world we operate in right now. I will keep investing time in helping folks use Schema.org in their API definitions (APIs.json & OpenAPI), but I will also start investing in folks employing JSON-LD and Schema.org as part of their search engine strategies (like above), making our APIs more discoverable to humans as well as other systems.


The Relationship Between Dev Relations And Support

I saw an interesting chasm emerge while at the Google Community Summit this last week, as I heard their support team, as well as their developer relations team, discuss what they were up to. During the discussion, one of the companies present described how, while their overall experience with the developer relations team has been amazing, their experience with support has largely been pretty bad--revealing a potential gap between the two teams.

This is a pretty common gap I've seen with many other API platforms. The developer relations team is all about getting the word out and encouraging platform usage, while support teams are there to be the front line for support, acting as the buffer between integration and platform engineering teams. I've been the person in the role of evangelist when there is a bug in an API, at the mercy of an already overloaded engineering team, and QA staff, before anything gets resolved--this is a difficult position to be in.

How wide this chasm becomes ultimately depends on how much of a priority the API is for an engineering team, and how overloaded they are. I've worked on projects where this chasm is pretty wide, taking days, even weeks to get bugs fixed. I'm guessing this is something a more DevOps focused approach to the API life cycle might help with, where an API developer relations and support team have more access to making changes and fixing bugs--something that has to be pretty difficult to deal with at Google scale.

Anyways, I thought the potential chasm between developer relations and support was worthy enough to discuss and include in my research. It is something we all should be considering, no matter how big or small our operations are. There is no quicker way to kill the morale of your API developer relations and support teams than by allowing a canyon like this to persist. What challenges have you experienced when it comes to getting support from your API provider? Or inversely, what challenges have you faced supporting your APIs or executing on your developer outreach strategy? I'm curious if other folks are feeling this same pain.


Getting Our Schema In Order With Postman's New Data Editor

In 2017 I think that getting our act together when it comes to our data schema will prove to be just as important as getting it together when it comes to our API definitions and design. This is one reason I'm such a big fan of using OpenAPI to define our APIs because it allows us to better organize the schema of the data included as part of the API request and response structure. So I am happy to see Postman announce their new data editor, something I'm hoping will help us make sense of the schema we are using throughout our API operations.

The Postman data editor provides us with some pretty slick data management UI features including drag and drop, a wealth of useful keyboard shortcuts, bulk actions, and other timesaving features. Postman has gone a long way to inject awareness into how we are using APIs over the last couple of years, and the data editor will only continue developing this awareness when it comes to the data we are passing back and forth. Lord knows we need all the help we can get when it comes to getting our data backends in order.

The Postman data editor makes me happy, but I'm most optimistic about what it will enable, and what Postman has planned as part of their roadmap. They end their announcement with "we have a LOT of new feature releases planned to build on top of this editor, capabilities inspired by things you already do using spreadsheets". For me, this points to features that would directly map to the most ubiquitous data tool out there--the spreadsheet. With a significant portion of business in the world being done via spreadsheets, it makes integration into the API toolchain a pretty compelling concept.


Azure and Office APIs in Visual Studio

I was reviewing the latest changes with Visual Studio 2017 and came across the section introducing connected services, providing a glimpse of Microsoft APIs baked into the integrated development environment (IDE). I've been pushing for more API availability in IDEs for some time now, something that is not new, with Google and Salesforce having done it for a while, but something I haven't seen any significant movement on for a while now.

I have talked about delivering APIs in Atom using APIs.json, and have long hoped Microsoft would move forward with this in Visual Studio. All APIs should be discoverable from within any IDE--it just makes sense as a frontline for API discovery, especially when we are talking about developers. Microsoft's approach focuses on connecting developers of mobile applications, with "the first Connected Service we are providing for mobile developers enables you to connect your app to an Azure App Service backend, providing easy access to authentication, push notifications, and data storage with online/offline sync".

In the picture, you can see Office 365 APIs, but since I don't have Visual Studio I can't explore this any further. If you have any insight into these new connected services features in the IDE, please let me know your thoughts and experiences. If Microsoft was smart, all their APIs would be seamlessly integrated into Visual Studio, as well as allow developers to easily import any other API using OpenAPI, or Postman Collections. 

While I think that IDEs are still relevant to the API development life cycle, I feel like maybe there is a reason IDEs haven't caught up in this area. It feels like a need that API lifecycle tooling like Postman, Restlet Client, and Stoplight are stepping up to service. Regardless, I will keep an eye on it. It seems like a no-brainer for Microsoft to make their APIs available via their own IDE products, but maybe we are headed for a different future where a new breed of tools helps us more easily integrate APIs into our applications--no code necessary.


A Tighter API Contract With gRPC

I was learning more about gRPC from the Google team last week, while at the Google Community Summit, as well as the API Craft SF Meetup. I'm still learning about gRPC, and how it contributes to the API conversation, so I am trying to share what I learn as I go, keeping a record for others to learn from along the way. One thing I wanted to better understand was something I kept hearing regarding gRPC delivering a tighter API contract between API provider and consumer.

In contrast to more RESTful APIs, a gRPC client has to be generated by the provider. First, you define a service in a .proto file (aka Protocol Buffer), then you generate client code using the protocol buffer compiler. Where client SDKs are up for debate in the world of RESTful APIs, and client generation might even be frowned upon in some circles, when it comes to gRPC APIs, client generation is a requirement--dictating a much tighter coupling and contract between API provider and consumer.
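
To make the workflow a little more tangible, here is a minimal sketch of the Python side of things, using the stock gRPC hello world shape--the service, message, and module names are illustrative, and the .proto snippet in the comments is just to show where the contract lives.

    # The contract lives in a .proto file, something like:
    #
    #   syntax = "proto3";
    #   service Greeter {
    #     rpc SayHello (HelloRequest) returns (HelloReply) {}
    #   }
    #   message HelloRequest { string name = 1; }
    #   message HelloReply { string message = 1; }
    #
    # You then generate stubs with the protocol buffer compiler, for example:
    #   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. greeter.proto
    # which produces modules like greeter_pb2 and greeter_pb2_grpc.
    import grpc
    import greeter_pb2
    import greeter_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")
    stub = greeter_pb2_grpc.GreeterStub(channel)

    # The client can only call what the .proto contract defines--the tighter coupling in action.
    reply = stub.SayHello(greeter_pb2.HelloRequest(name="API Evangelist"))
    print(reply.message)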

I do not have first-hand experience with this process yet, I am just learning from my discussions last week, and trying to understand how gRPC is different from the way we've been doing APIs using a RESTful approach. So far it seems like you might want to consider gRPC if you are looking for significant levels of performance from your APIs, in situations where you have a tighter relationship with your consumers, such as internal or partner scenarios. gRPC requires a tighter API contract between provider and consumer, something that might not always be possible, depending on the situation. 

While I'm still getting up to speed, it seems to me that the .proto file, or the protocol buffer definition, acts as the language for this API contract--similar to how OpenAPI is quickly becoming a contract for more RESTful APIs, although it is oftentimes a much looser contract. I'll keep investing time into learning about gRPC, but I wanted to make sure and process what I've learned leading up to, and while at, Google this last week. I'm not convinced yet that gRPC is the future of APIs, but I am getting more convinced that it is another important tool in our API toolbox.


Lots Of Talk About Machine Learning Marketplaces

I spent last week in San Francisco listening to Google's very machine learning focused view of the future. In addition to their Google Next conference, I spent Tuesday at the Google Community Summit, getting an analyst look at what they are up to. Machine Learning (ML) was definitely playing a significant role in their strategy, and I heard a lot of talk about machine learning marketplaces.

Beyond their own ML offerings like the video intelligence and vision APIs, Google also provides you with an engine for publishing your own ML models. They also have a machine learning advanced solutions lab, are throwing a machine learning hackathon, and are pushing a machine learning certification program as part of their cloud and data offerings. As the Google machine learning roadmap was being discussed throughout the day, the question of where can I publish my ML models, and begin selling them, came up regularly--something I feel is going to be a common theme of the 2017 ML hype.

I'm guessing we will see a relationship between the Google ML Engine and Google Cloud Endpoints emerge, and eventually some sort of ML marketplace like we have with Algorithmia. We are already seeing this shift in the AWS landscape, between their Lambda, ML, API Gateway, and AWS Marketplace offerings. You see hints of the future in the AWS serverless API portal I wrote about previously. The technology, business, and politics of providing retail and wholesale access to algorithms and machine learning models in this way fascinates me, but as with every other successful area of the API economy, about 90% of this will be shit, and 10% will actually be doing something interesting with compute and APIs.

I'm doing all of my image and video texture transfer machine learning model training using AWS and Algorithmia. I then use Algorithmia to get access to the models I've trained, and if I ever want to open up partner level (wholesale), or public (retail) access to my ML models, I will use Algorithmia, or an API facade on top of their API, to open up access and make them available in the Algorithmia ML marketplace. I'm guessing at some point I will want to syndicate my models into other marketplace environments, with giants like Google and AWS, but also other more niche, specialty ML marketplaces, where I can reach exactly the audience I want.
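
For what it is worth, consuming one of these models from my own code looks roughly like the sketch below--the algorithm path and input are placeholders, not one of my actual texture transfer models.

    # Sketch of calling a model hosted on Algorithmia; the algorithm path is a placeholder.
    import Algorithmia  # pip install algorithmia

    client = Algorithmia.client("YOUR_API_KEY")
    algo = client.algo("username/texture-transfer/0.1.0")  # placeholder algorithm path

    result = algo.pipe({"image": "https://example.com/photo.jpg"}).result
    print(result)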


Greyballing Is Embedded In API's DNA

I've been simmering on thoughts around Uber's greyballing for some time now, where they target regulators and police in different cities, and craft a special Uber experience just for them. Targeting users like this is not new--all companies do it. It's just that Uber has a whole array of troubling behavior going on, and the fact that they were so aggressively pushing back on regulators is why this is such a news story.

I'm familiar with this concept because greyballing is embedded in the DNA of APIs--we just call it API management. Every web, mobile, and device application that uses an API has a unique fingerprint, identifying the application, as well as the user. Not all apps or users are created equal, and everyone gets a tailored experience (there is a simple sketch of these mechanics after the list below). I wanted to explore the spectrum of experiences I see on a regular basis, helping us all understand how this Broadway production works.

  • Greyballing - Uber's situation is focused on creating a special scenario for regulators, but many companies also do this for their competitors, and anyone they see as a threat. Smoke and mirrors for those who threaten you is the name of the game.
  • Sales Funnel - Where are you in my sales funnel? Based upon your application or user fingerprint, and IP address you will receive a different experience, support, and access to resources--the bigger opportunity you are, the better experience you will get.
  • Country - Due to laws in specific countries, platforms have to deliver a different experience based upon the country, and region an application and user are operating within. 
  • Virtualization - We regularly create sandboxes, staging, and alternate means of providing an environment for applications and users to operate in, delivering a more virtualized experience, based upon platform objectives.
  • Analytics - Dashboards, analytics, and other visualizations provide us with snapshots of our world. These metrics, KPIs, analytics, visualizations, and other reporting drive everything, even when companies like Facebook and Twitter misreport and inflate their numbers.
  • Rate Limiting - Access to data, content, media and algorithms online is always logged and metered. What you have access to, and how much you can use is always limited--something that usually occurs silently behind the apps we are using, protecting the interests of the platform.
  • Error Rates - In response to rate limiting, or possibly because you are a special regulator or competitor you may be receiving elevated error rates. On mobile devices, it is easy to blame this on the network, but your elevated error rates may be more about who you are than the cell service where you are located.
  • Access Tiers - The experience you are getting is the one you have paid for. Depending on what we can afford, we will get a different experience within an application--with different levels of access and experience available to me as the consumer.
  • Partner Tiers - You only gain access to this experience because you are a partner. Only our trusted, approved partners have access to the full experience. While also letting everyone else know what it takes to become a partner. 
  • Personalization - We are tailoring a unique experience for each user based on their interests, location, friends, activity, and a wealth of other data points. Each user of our platform gets their own experience, allowing the algorithm to define a personalized experience for each human.
  • Transparency - You are given a look into the kitchen, and are shown how the algorithm is working (or not). Helping be more transparent about the technology, business, and politics of the experience you are receiving.
  • Observability - In addition to having a window to look in, there are machine readable / defined inputs and outputs that allow for the experience to be measured and quantified, providing some accountability to the transparency.
  • Communication - What we hear is in alignment with the experience. When we are told to expect a certain experience we receive it. There are no surprises or mysterious behavior in the application experience for any user, and all expectations are in alignment with marketing and other communication.
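
To ground all of this a little, here is a simplified sketch of the mechanics that make these tailored experiences possible--every request carries a fingerprint (key, app, user), and the plan attached to that fingerprint decides what comes back. Purely illustrative, not any particular provider's implementation.

    # Simplified sketch of how API management tailors responses per consumer; illustrative only.
    PLANS = {
        "free-key": {"tier": "free", "rate_limit": 100, "fields": ["id", "name"]},
        "partner-key": {"tier": "partner", "rate_limit": 100000, "fields": ["id", "name", "internal_notes"]},
    }

    RECORD = {"id": "42", "name": "Example Resource", "internal_notes": "Only partners see this."}

    def handle_request(api_key):
        # Unknown fingerprints get the most limited experience.
        plan = PLANS.get(api_key, {"tier": "anonymous", "rate_limit": 10, "fields": ["id"]})
        # Same backend record, filtered down to what this consumer's tier allows.
        return {field: RECORD[field] for field in plan["fields"]}

    print(handle_request("free-key"))     # {'id': '42', 'name': 'Example Resource'}
    print(handle_request("partner-key"))  # includes internal_notes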

There are many acceptable forms of greyballing. The problem isn't the technology and experience delivered, it is the motivations behind each company doing it. These things don't always happen as the result of malicious intent, as we saw with Uber, either. In the majority of cases, it is just incompetence and greed that are the driving forces. Platform engineers are good at being hyper-focused on a single objective, while being totally oblivious to the negative and unforeseen consequences.

With Uber we know this was intentional; when it happens with Facebook and Twitter misreporting their numbers, things can be much cloudier. Did they do it intentionally? Or did they just get caught? There really is no holy grail for ensuring tech companies behave with virtualization, personalization, greyballing, or whatever you want to call it. We live in a world where nothing is real, and everything is meant to be fabricated and tailored just for us--it is what everyone seems to be asking for, wanting, or at least blindly accepting.

This will all come down to transparency, observability, and communication. If a company is doing shady things with their platform, there really is no foolproof way of knowing. We can only depend on them being transparent and communicating, or we can push for more access to the inputs and outputs of the platform, in hopes of gaining more observability. Beyond that, I guess whistleblowers are the last line of defense against this kind of behavior, which is pretty much how we are learning so much about Uber's motivations and internal culture.