{"API Evangelist"}

Requiring SSL For All API Calls

This is one of those regular public service announcements: if at all possible, you should require SSL for all your API calls. I recently got an email from the IBM Watson team telling me that they would be enforcing encryption on all calls to the Alchemy API in February.

SSL is something I've started enforcing on my own internal APIs. I do not have wide usage of my APIs by third-party providers, but I do have a variety of systems making calls to my APIs, transmitting some potentially sensitive information--luckily nothing too serious, as I'm just a simple API Evangelist.
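Enforcing this on a small internal API doesn't take much. Here is a minimal sketch of what that gate can look like as a WSGI middleware--the class name and error payload are my own, not from any particular framework:

```python
# Minimal sketch of requiring SSL on incoming API calls, written as a
# WSGI middleware. Plain HTTP requests get a 403 rather than a redirect,
# so clients never transmit credentials in the clear a second time.
class RequireSSL:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # wsgi.url_scheme is set by the server; honor X-Forwarded-Proto
        # when running behind a TLS-terminating proxy.
        scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                             environ.get("wsgi.url_scheme", "http"))
        if scheme != "https":
            start_response("403 Forbidden",
                           [("Content-Type", "application/json")])
            return [b'{"error": "SSL is required for all API calls"}']
        return self.app(environ, start_response)
```

Wrapping an existing WSGI application in `RequireSSL` is all it takes; everything already speaking HTTPS passes through untouched.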

Encryption is an area I research regularly, hoping to stay in tune (as much as I can) with where discussions are going when it comes to encryption and API operations. Much of it doesn't apply to the average API provider, but requiring encryption for API calls, and emailing your developers when and if you do begin enforcing it, is what I'd consider an essential building block for all API providers.

I'll keep crafting a unique blog post each time I receive one of these emails from the APIs I depend on. Hopefully some day soon, all APIs will be SSL by default.

IFTTT vs Zapier vs DataFire

Integration Platform as a Service (iPaaS) solutions are something I've been tracking on for a while, and an area I haven't seen too much innovation in, except by Zapier for most of that time. I'm a big fan of what IFTTT enables, but I'm not a big fan of companies who build services that depend on APIs but do not offer APIs in turn, so you don't find me highlighting them as an iPaaS solution.

Instead, you'll find me cheering for Zapier, who has an API, and even though I wish they had more APIs, I am grateful they are paying it forward a little bit. I wish we had better solutions, but the politics of API operations seems to slow the evolution of iPaaS, usually leaving me disappointed.

That was until recently when some of my favorite API hackers released DataFire:

DataFire is an open source integration framework - think Grunt for APIs, or Zapier for the command line. It is built on top of open standards such as RSS and Open API. Flows can be run locally, on AWS Lambda, Google Cloud, or Azure via the Serverless framework, or on DataFire.io.

"DataFire natively supports over 250 public APIs including: • Slack • GitHub • Twilio • Trello • Spotify • Instagram • Gmail • Google Analytics • YouTube, as well as MongoDB, RSS feeds, and custom integrations." They have a sample flows available as an individual Github repositories. Integrations can be added by the URL of an Open API (Swagger) specification or an RSS feed, you can also specify --raml, --io_docs, --wadl, or --api_blueprint.

DataFire is new, so it has a lot of maturing to do as an API framework, but it has EVERYTHING that iPaaS solutions should have at its core in my opinion. It's API definition-driven, it's open source, and there is a cloud version that any non-developer user can put to use. DataFire is encouraging everyone to share their flows as machine-readable templates, each as its own Github repository, so that anyone can fork, modify, and put them to work. #IMPORTANT

This is the future of iPaaS. There is lots of money to be made in the sector, empowering average business, professional, and individual users when it comes to managing their own bits on the web--if current providers get out of the way. The problem with current solutions is that they work too hard to capture the exhaust of these workflows and limit their execution to specific walled garden platforms. DataFire acknowledges that these flows will need to be executed across all the leading cloud providers, orchestrated in serverless environments, as well as offered as a more managed level of service in a single, premium cloud edition.

DataFire is the iPaaS solution I've been waiting to see emerge, and I will be investing time into learning more about how it works, developing integrations and flows, and telling stories about what others are doing with it. DataFire and DataFire.io need a lot of polishing before they are ready for use by the average, non-technical IFTTT or Zapier user, but I don't think it will take much to get them there pretty quickly. I'm stoked to finally have an iPaaS solution that I can get behind 100%.

A Missed Opportunity With The Medium API

In addition to using the news of Medium's downsizing as a moment to stop and think about who owns our bits, I wanted to point out what a missed opportunity the Medium API is. Having an API is no guarantee of success, and after $132M in 3 Rounds from 21 Investors, I'm not sure an API can even help out, but it is fun to speculate about what might be possible if Medium had a robust API in operation.

Medium has an API, but it is just a Github repository, with reference to a handful of paths allowing you to get details on yourself, the publications you are part of, and post entries to the site. There are no APIs allowing me to get the posts of other users or publications, let alone any of the analytics or traffic for them. I'm guessing there is no API vision or champion at Medium, which results in the simple, restrictive API we see today.

Many media and content companies see APIs as giving away all the value they are trying to monetize, and are unaware of the control that modern approaches to API management bring to the table. Many people see the pain that other API pioneers have suffered, like Twitter, and want to avoid the same problems, completely unaware that many of Twitter's ecosystem problems were Twitter-induced and not inflicted by 3rd party developers.

If Medium had a proper developer portal, a complete set of API paths, proper OAuth controls, and other API management tooling, they could open up innovation in content delivery, publishing, analytics, visualizations, voice, bots, and the numerous other areas where APIs are changing how we consume, create, and engage with information. I get that control over the user experience is key to the Medium model, but there are examples of how this can be done well and still have an API. The best part is that it only costs you the operation of the API itself.

I do not think more money will save Medium. I think they have to innovate. I think part of this innovation needs to come from the passionate base of users they have already developed. I've seen Medium carefully plot out a path forward when it comes to their trusted partners, and there is no reason this type of quality control can't be replicated when it comes to API consumers, as well as the applications and integrations that they develop. The Medium API just seems like a really huge missed opportunity to me, but of course, I'm a little biased.

Why I Still Believe In APIs--The 2017 Edition

As I approach my seventh year as the API Evangelist and find myself squarely in 2017, I wanted to take a moment to better understand, and articulate, why I still believe in APIs. To be the API Evangelist I have to believe in this, or I just can't do it. It is how my personality works--if I am not interested in, or do not believe in something, you will never find me doing it for a living, let alone as obsessively as I have delivered API Evangelist.

First, What Does API Mean To Me?
There are many, many interpretations, and incarnations of "API" out there. I have a pretty wide definition of what an API is, one that spans the technical, business, and politics of APIs. API does not equal REST, although it does employ the same Internet technologies used to drive the web. API is not the latest vendor solution or even standard. The web delivers HTML for humans to consume in the browser, and web APIs deliver machine-readable media types (XML, JSON, Atom, CSV, etc.) for use in other applications. When I say applications, I do not just mean web, mobile, and device applications--I mean other applications, as in "the action of putting something into operation". An API has to find harmony between the technical, business, and political aspects of API operations and strike a balance between platform, 3rd party developer, and end-user needs--with every stakeholder being well informed along the way.

I Still Believe In My Early API Vision
When I close my eyes I still believe in the API vision that captured my attention in 2005 using the Delicious API, again in 2006 with the Amazon S3 and EC2 APIs, and with the Twitter API in 2007. Although today, I better understand that a significant portion of this early vision was very naive, and too trusting that people (API providers and consumers) would do the right thing with APIs. APIs use Internet technology to make data, content, and algorithms more accessible and observable, securely using the web. APIs can keep private information safe, and ensure only those who should have access do. I believe that an API-first approach can make companies, organizations, institutions, and government agencies more agile, flexible, and ultimately more successful. APIs can bring change, and make valuable resources more accessible and usable. When done right.

APIs Are Not Going Away Or Being Replaced
Yes, many other technologies are coming along to evolve how we exchange data, content, and algorithms on the web, but doing this as a platform, in a more open, transparent, self-service, and machine-readable way is not going anywhere. I try not to argue in favor of any technological dogma like REST, Hypermedia, GraphQL, and beyond, but I do try to be aware of, and make recommendations about, the benefits and consequences along the way. Having a website or mobile application for your business, organization, institution, government agency, and oftentimes yourself as an individual isn't going to go away anytime soon. Neither will making the same data, content, and algorithms available for use in applications in a machine readable way, for use by 3rd party groups using Internet technology. The term API has gained significant mindshare in the tech space, enjoys use as an acronym in the titles of leading news publications, and has taken root in the vocabulary of average folks, as well as the entrepreneurs, developers, and technology companies who helped popularize the term.

APIs Are Not Good, Nor Bad, Nor Neutral--People And Companies Are
APIs get blamed for a lot of things. They can go away at any time. They are unstable. They are not secure. The list goes on and on, and many also like to think that I blindly evangelize that APIs will make the world better all by themselves--I do not. APIs are a reflection of the individuals, companies, organizations, institutions, and agencies that operate them. In the last seven years, I have seen some very badly behaved API providers, consumers, and companies who are selling services to the sector. Going forward, I will never be surprised again by how badly behaved folks can be when using APIs, or by the appetite for an API-driven surveillance economy that is fueled by greed and fear.

API Specifications, Schema, and Scopes Are The Most Critical Part
I track on over 75 stops along my definition of the API life cycle, and many of them are very important, but my API definition research, which encompasses the world of API specifications, schema, and scopes will play the most significant role in the future of the web, mobile, and connected device technology. This is why I will be dedicating the majority of my bandwidth to this area in 2017, and beyond. Other areas will get attention from me, but API specifications, schema, and scopes touch on every other aspect of the API sector, as well as almost EVERY business vertical driving our economy today. Industry level standards like PSD2 for the banking industry will become clearer in coming years, as a result of the hard work that is going on right now with API definitions.

Help Ensure The Algorithms That Are Increasingly Impacting Our Lives Are Observable
APIs provide access to data and content, but they also provide some observability into the black box algorithms that are increasingly governing our world. While APIs can't provide 100% visibility into what algorithms do, or do not do, they can provide access to the inputs and the outputs of the algorithms, making them more observable. When you combine this with modern approaches to API management, you can do it securely, and in a way that considers end-user, as well as developer and platform interests. I'm going to keep investing in fine tuning my argument for APIs to be used as part of artificial intelligence, machine learning, and the other classifications of algorithms that are being put to use across most business sectors. It's a relatively new area of my API research, but something I will invest more time and resources into to help push the conversation forward.

More API Storytelling Is Needed, And The Most Rewarding Part Of What I Do
I've done an OK job at instigating other API service providers, API operators, integrators, and individuals to tell their story. I feel that my biggest impact on the space has been producing the @APIStrat conference with my partner in crime Steve Willmott at 3Scale / Red Hat, which is focused on giving people across the API sector a diverse, and inclusive place to tell their story, on stage and in the hallways. Next, I'd say my regular storytelling on my blog, like my evergreen post on the secret to Amazon's success, has had the biggest impact, while also being the most rewarding and nourishing part for me personally. I've heard stories about my Amazon blog post being framed on the wall in a bank in Europe, baked into the home page of the IT portal for government agencies, and many other scenarios. Stories matter. We need more of them. I am optimistic regarding much of the new writing I'm seeing on Medium, but this storytelling being mostly isolated to Medium worries me.

Keep On, Keep'n On With API Evangelist The Best I Can
I still believe in APIs. Over the last seven years, my belief in what APIs can do has not diminished. My belief in what people are capable of doing with them has diminished tremendously, though. I enjoy staying in tune with what is going on, and trying to distill it down into something people can use in their own API operations. I think the biggest areas of concern for the API life cycle in coming years will be in the areas of API definitions, security, terms of service, privacy, discovery, intellectual property, and venture capital.

I just needed to refresh my argument of why I still believe in APIs. Honestly, 2015 and 2016 really stretched my faith when it came to APIs. Once I realized it was primarily a crisis of faith in humans and not in APIs, I was able to find a path forward again with API Evangelist. Really, I couldn't ask for a better career focus. It keeps me reading, learning, and thinking deeply about something that touches all of us, and possesses some really meaningful areas that could make an actual impact on people's lives. Things like Open Referral, my work on FAFSA, EPA, National Park, Census, Energy, API Commons, and APIs.json, keep me feeling that APIs can make a positive impact on how our world works.

In 2017 I don't think I need to spend much time convincing people to do APIs. They are already doing this. There is more going on than I can keep track of. I think I need to focus on convincing people to make their APIs as open and observable as they possibly can, and to push for a balanced approach to not just the technology, but the business and legal side of platform operations--one that protects the platform's operations, but is also looking out for 3rd party developer and end-user interests. I still believe in APIs, but only if they possess the right balance which I talk about regularly; otherwise they are just another approach to technology that we are letting run amok in our lives.

Using An OpenAPI Spec As Central Truth In Stakeholder Discussions

I am working with Open Referral to evolve the schema for the delivery of human services, as well as helping craft a first draft of the OpenAPI Spec for the API definition. The governing organization is looking to take this to the next level, but there are also a handful of the leading commercial providers at the table, as well as other groups closer to the municipalities who are implementing and managing Open211 human service implementations.

I was working with Open Referral on this before checking out last summer, and would like to help steward the process, and definition(s), forward further in 2017. This means that we need to speak a common language when hammering out this specification, and be using a common platform where we can record changes and produce a resulting living document. I will be borrowing from existing work I've done on API definitions, schema, and scopes across the API space, and putting together a formal process designed specifically for the very public work of defining and delivering human services at the municipal level.

I use OpenAPI Spec (openapis.org) as an open, machine readable format to drive this process. It is the leading standard for defining APIs in 2017, and now is officially part of the Linux Foundation. OpenAPI Spec provides all stakeholders in the process with a common language when describing the Open Referral in JSON Schema, as well as the surface area of the API that handles responses & requests made of the underlying schema.

I have an OpenAPI Spec from earlier work on this project, with the JSON version of the machine-readable definition, as well as a YAML edition--OpenAPI Spec allows for JSON or YAML editions, which helps the format speak to a wider, even less technical audience. These current definitions are not complete agreed upon definitions for the human services specification, and are just meant to jumpstart the conversation at this point.
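For anyone who hasn't seen one, here is the kind of skeleton these definitions start from. The paths and properties below are illustrative placeholders I'm using to show the shape of the contract, not the agreed-upon Open Referral specification:

```yaml
# Illustrative OpenAPI (Swagger 2.0) fragment -- placeholder paths and
# fields, not the actual Open Referral definition.
swagger: '2.0'
info:
  title: Open Referral Human Services API
  version: '0.1'
basePath: /v1
paths:
  /services:
    get:
      summary: List human services
      produces:
        - application/json
      responses:
        '200':
          description: A list of services
          schema:
            type: array
            items:
              $ref: '#/definitions/Service'
definitions:
  Service:
    type: object
    properties:
      id:
        type: string
      name:
        type: string
      description:
        type: string
```

Even a fragment this small gives stakeholders something concrete to argue about--each path, parameter, and property becomes a discussable line item rather than an abstraction.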

OpenAPI Spec provides us with a common language to use when communicating around the API definition and design process with all stakeholders, in a precise, and machine readable way. OpenAPI Spec can be used to define the master Open Referral definition, as well as the definition of each individual deployment or integration. This opens up the opportunity to conduct a "diff" between the definitions, showing what is compatible, and what is not, at any point in time.
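As a rough illustration of that "diff", assuming both definitions have been loaded as plain dictionaries (e.g., parsed from their JSON or YAML editions), comparing surface areas at the path level might look like this--the function names are mine, not part of any existing tooling:

```python
# Sketch of diffing two OpenAPI (Swagger 2.0) definitions at the
# operation level: which method + path pairs the master defines,
# which an implementation actually covers, and where they disagree.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch"}

def operations(definition):
    """Collect (METHOD, path) pairs from an OpenAPI definition dict,
    skipping non-method keys like a shared 'parameters' entry."""
    return {(method.upper(), path)
            for path, item in definition.get("paths", {}).items()
            for method in item
            if method in HTTP_METHODS}

def diff_definitions(master, implementation):
    """Compare an implementation's surface area against the master."""
    master_ops = operations(master)
    impl_ops = operations(implementation)
    return {
        "missing": sorted(master_ops - impl_ops),  # in the master, not implemented
        "extra": sorted(impl_ops - master_ops),    # implemented, not in the master
        "shared": sorted(master_ops & impl_ops),   # compatible surface area
    }
```

A real compatibility check would also need to compare parameters, responses, and the underlying JSON Schema, but even this path-level view makes it immediately visible where a municipal deployment has drifted from the master definition.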

The platform I will be using to facilitate this discussion is Github, which provides the version control, "diff", communication, and user interactions that will be required throughout the lifecycle of the Open Referral specification, allowing each path, parameter, response, request, and other elements to be discussed independently, with all history present. Github also provides an interesting opportunity for developing other tools, like the one I have for annotating the API definition as part of the process.

This approach to defining a common data schema and API definition requires that all stakeholders involved become fluent in OpenAPI Spec and JSON Schema, but it is something that I've done successfully with other technical, as well as non-technical teams. This process allows us all to be on the same page in all discussion around the Open Referral API definition and schema, with the ability to invite and include new participants in the conversation at any time using Github's existing services.

Once a formal draft API specification + underlying JSON schema for Open Referral is established, it will become the machine readable contract and act as a central source of truth regarding the API definition as well as the data model schema. It is a contract that humans can follow, as well as be used to drive almost every other stop along the API life cycle like deployment, mocking, management, testing, monitoring, SDKs, documentation, and more.

This process is not unique to Open Referral. I want to be as public with the process as possible, to help other people who are working to define data schema understand what is possible when you use APIs, OpenAPI Spec, JSON Schema, and Github. I am also looking to reach the people who do the hard work of delivering human services on the ground in cities and help them understand what is possible with Open Referral. Some day I hope to have a whole suite of server-side and client-side tooling developed around the standard, empowering cities, organizations, and even commercial groups to deliver human services more effectively.

The Google Baseline For A User Account Area

I have a minimum definition for what I consider to be a good portal for an API, and was spending some time thinking about a baseline definition for the API developer account portion of that portal, as well as potentially any other authenticated, and validated platform user. I want a baseline user account definition that I could use as a base, and the best one out there, off the top of my head, would be from Google.

To support my work I went through my Google account page and outlined the basic building blocks of the Google account:

  • Sign-in & Security - Manage your account access and security settings
    • Signing in to Google - Control your password and account access, along with backup options if you get locked out of your account.
      • Password & sign-in method - Your password protects your account. You can also add a second layer of protection with 2-Step Verification, which sends a single-use code to your phone for you to enter when you sign in. 
        • Password - Manage your password.
        • 2-Step Verification - Manage 2-Step verification.
        • App Passwords - Create and manage application passwords.
      • Account recovery options - If you forget your password or cannot access your account, we will use this information to help you get back in.
        • Account Recovery Email - The email to send recovery instructions.
        • Account Recovery Phone - The phone number to send recovery instructions to.
        • Security Question - A secret question to use as verification during recovery.
    • Device activity & notifications - Review which devices have accessed your account, and control how you want to receive alerts if Google thinks something suspicious might be happening.
      • Recent security events - Review security events from the past 28 days.
      • Recently used devices - Check when and where specific devices have accessed your account.
      • Security alerts settings - Decide how we should contact you to let you know of a change to your account’s security settings or if we notice something suspicious.
    • Connected apps & sites - Keep track of which apps and sites you have approved to connect to your account and remove ones you no longer use or trust.
      • Apps connected to your account - Make sure you still use these apps and want to keep them connected.
      • Saved passwords - Use Google Smart Lock to remember passwords for apps & sites when you use Chrome & Android.
  • Personal Info & Privacy - Decide which privacy settings are right for you
    • Your personal info - Manage this basic information — your name, email, and phone number.
    • Manage your Google activity - You have lots of tools for controlling and reviewing your Google activity. These help you decide how to make Google services work better for you.
      • Activity controls - Tell Google which types of data you’d like to save to improve your Google experience.
      • Review activity - Here’s where you can find and easily review your Google activity.
    • Ads Settings - You can control the information Google uses to show you ads.
    • Control your content - You are in control of the content in your Google Account, even if you stop using Google products or decide to delete your account altogether.
      • Copy or move your content - You can make a copy of the content in your account at any time, and use it for another service.
      • Download your data - Create an archive with a copy of your data from Google products.
      • Assign an account trustee - Approve a family member or friend to download some of your account content in the event it is left unattended for an amount of time you've specified.
      • Data Awareness - We want you to understand what data we collect and use.
  • Account Preferences - Manage options for language, storage, and accessibility. Set up your data storage and how you interact with Google.
    • Language & Input Tools - Set Google services on the web to work in the language of your choice.
    • Accessibility - Adjust Google on the web to match your assistive technology needs.
    • Your Google Drive storage - Your Google account comes with free Google Drive storage to share across Gmail, Google Photos, Google Docs and more. If this isn't enough, you can add more storage here.
    • Delete your account or services - If you are no longer interested in using specific Google services like Gmail or Google+, you can delete them here. You can even delete your entire Google Account.
      • Delete a Google service - A Google Account offers many services. Some of these services can be deleted from your account individually.
      • Delete your Google Account - You're trying to delete your Google Account, which provides access to various Google services. You'll no longer be able to use any of those services, and your account and data will be lost.

These are all building blocks I will add to my API management research, with an overlap with my API portal research. I'm not sure how many of them I will end up recommending as part of my formal guide, but it provides a nice set of things that seem like they SHOULD be present in all the online services we use. Google also has two other tools present here that overlap with my security and privacy research:

  • Security Checkup - Protect your account in just a few minutes by reviewing your security settings and activity.
  • Privacy Checkup - Take this quick checkup to review important privacy settings and adjust them to your preference.

I am going to be going through Google's privacy and security sections, grabbing any relevant building blocks that providers should be considering as part of their API operations as well. For now, I'm just going to add this to the list of things I think should be present in the accounts of third party platform users, whether they are developers, partners, or otherwise.

I would also like to consider what other providers offer account features I'd like to emulate, like Amazon, Dropbox, and other leading providers. I would also like to take another look at what API management providers like 3Scale offer in this area. Eventually, I want to have an industry guide that API providers can follow when thinking about what they should be offering as part of their user accounts.

Your State Issued ID Is Required To Signup For This Online Service

I am evaluating Shutterstock as a new destination for some of my photos and videos. I've been a Shutterstock user for their stock images, but I'm just getting going as a publisher. I thought it was worth noting that as part of their sign up process they require me to upload a copy of my state issued identification before I can sell photos or images as a Shutterstock publisher.

This is something I've encountered with other affiliate, partner, and verified solutions. I've also had domains expire, go into limbo, and had to fax in or upload my identification. It isn't something I've seen with many API providers yet, but I'm guessing it will be something we'll see more of as API providers further lock down their valuable resources.

I am not sure how I feel about it being a regular part of the partner and developer validation process--I will have to think about it more. I'm just adding it to the list of items I consider as part of the API management process. It makes sense to certify and establish trust with developers, but I'm not 100% sure this is the way to do it in the digital age. IDK, I will consider it more, and keep an eye out for other examples of this with other API providers.

Being a Shutterstock publisher isn't necessarily connected directly to the API, but once I'm approved I will be uploading and managing my account via their API, so it is a developer validation process for me. The topic of developer validation and trust keeps coming up in other discussions for me, and with the increasing number of APIs we are all developing with, it seems like we are going to need a more streamlined, multi-platform, and API-driven solution to tackle this.

For me, it would be nice if this solution was associated with my Github account, which plays a central role in all of my integrations. When possible, I create my developer accounts using my Github authentication. It would be nice if I had some sort of validation, and trust ranking based upon my Github profile, something that would increase the more APIs I use, and establish trust with.

I will file this away under my API management, authentication, and partner research, and will look for other examples of it in the wild--especially at the partner layer. Certifying that developers are truly human, or possibly truly a valid corporate entity seems like it is something that will only grow in the bot-infested Internet landscape we are building.

Intercom Providing Docker Images Of Their SDKs

I regularly talk about the evolving world of API SDKs, showcasing what API service providers like APIMATIC are up to when it comes to orchestration, integration, and other dimensions of providing client code for API integrations. A new example of this that I have found in the wild is from communication and support API platform Intercom, with their publishing of Docker images of their API SDKs. This overlaps my SDK research with the influence that containerization is having on the world of providing and integrating with APIs.

Intercom provides Docker images for their Ruby, Node, Go, and PHP API SDKs. It's a new approach to making API code available to API consumers that I haven't seen before, (potentially) making their integrations easier and quicker. I like their approach to providing the containers, and specifically the fact that they are looking for feedback on whether or not having SDK Docker containers offers any benefit to developers. I'm guessing this won't benefit all API integrators, but for those who have successfully adopted a containerized way of life, it might streamline the process and overall time to integration.

I just wanted to have a reference on my blog to their approach. I'll keep an eye on their operations, and see if their SDK Docker images become something that gets some traction when it comes to SDK delivery. Since they are sharing the story on their blog, maybe they'll also provide us with an update in a couple of months regarding whether developers found it useful or not. If nothing else, their story has reminded me to keep profiling Intercom, and other similar API providers, as part of my wider API communication and support research.

Evernote: Reaffirming Our Commitment to Your Privacy

A couple of weeks back, the online note-taking platform Evernote made the significant blunder of releasing a privacy policy update that revealed they would be reading our notes to improve their machine learning algorithms. It is something they have since rolled back with the following statement, "Reaffirming Our Commitment to Your Privacy":

Evernote recently announced a change to its privacy policy and received a lot of customer feedback expressing concerns. We’ve heard that feedback and we apologize for the poor communication. We have decided not to move forward with those changes that would have taken effect on January 23, 2017. Instead, in the coming months we will be revising our existing Privacy Policy. The main thing to know is this: your notes remain private, just as they’ve always been.

Evernote employees have not read, and do not read, your note content. We only access notes under strictly limited circumstances: where we have your permission, or to comply with our legal obligations. Your privacy, and your trust in Evernote are the most important things to us. They are at the heart of our company, and we will continue to focus on that now and in the future.

While I am thankful for their change of heart, I wanted to take a moment to point out the wider online environment that incentivizes this type of behavior. This isn't a single situation of Evernote reading our notes; this is the standard mindset for startups operating online, and via our mobile devices, in 2017. This is just one situation that was called out, resulting in a change of heart by the platform. Our digital bits are being harvested in the name of machine learning and artificial intelligence across the platforms we depend on daily for our business and personal lives.

In these startups' quest for profit, and ultimately their grand exit, they are targeting our individual and business bits. Use this free web application. Use this free mobile application. Plug this device in at home on your network. Let our machine learning evaluate your database for insights. Let me read your most personal and intimate thoughts in your diary, journal, and notebook. I will provide you with entertainment, convenience, and insight that you would never be able to achieve on your own, without the magic of artificial intelligence.

In my experience, the position a company takes on their API provides an early warning system for these situations. Evernote sent all the wrong signals to their developer community years ago, letting me know it was not a platform I could depend on, and trust for my business operations, let alone my personal, private thoughts. I was able to get off the platform successfully, but the trick in all of this is identifying other platforms who are primed for making similar decisions and helping the average users understand the motives behind all of these "solutions" we let into our lives, so they can make an educated decision on their own.

The Design Process Helping Me Think Through My Data And Content

I'm working on the next evolution in my API research, and I'm investing more time and energy into the design of the guides I produce as a result of each area of my research. I've long produced 20+ page PDF dumps of the leading areas of my research, like API design, definitions, deployment, and management, but with the next wave of industry guides, I want to polish my approach a little more.

The biggest critique I get from folks about the API industry guides I produce is that they provide too much information, aren't always polished enough, and sometimes contain some obvious errors. I'm getting better at editing, but this only goes so far, so I'm bringing in a second pair of eyes to review things before they go out. Another thing I'm introducing into the process is the use of professional design software (Adobe InDesign), rather than just relying on PDFs generated from my internal system with a little HTML spit shine.

While it is taking me longer to dust off my Adobe skills than I anticipated, I am finding the design process to be extremely valuable. I've often dismissed the idea that my API research needed to look good, arguing it was more important that it simply be available publicly on my websites. This is fine and is something that will continue, but I'm finding a more formal design process is helping me think through all of the material, better understand what is valuable and what is important, and hopefully better tell a story about why it is relevant to the API space. It is helping me take my messy backend data and content, and present it in a more coherent and useful way.

As I'm formalizing my approach using my API definition guide, I'm moving on to my API design guide, and I can't help but see the irony in learning the value of design while publishing a guide on API design, where I highlight the benefits of API design to make sense of our growing digital assets. Once I've applied this new approach to my definition, design, deployment, DNS, and management guides, I am going to go back to my API design for these resources, and take what I've learned and apply it to the way I share the raw data and content. The lesson that stands out most at the moment is that less is more, and simple speaks volumes.

Patent US9300759 B1: API Calls With Dependencies

I understand that companies file for patents to build their portfolios, assert their stance in their industry, and, when necessary, use patents as leverage in negotiations and in a court of law. There are a number of things that I feel patents logically apply to, but I have trouble understanding why folks insist on patenting the things that make the web, and this whole API thing, work.

One such filing is patent number US9300759 B1: API Calls With Dependencies, which is defined as:

Techniques are disclosed for a client-and-server architecture where the client makes asynchronous API calls to the server. Where the client makes multiple asynchronous API calls, and where these API calls have dependencies (i.e., a result of one call is used as a parameter in a second call), the client may send the server these multiple asynchronous API calls before execution of a call has completed. The server may then execute these multiple asynchronous API calls, using a result generated from one call as a parameter to another call.

Maybe I live in a different dimension than everyone else, but this doesn't sound unique or novel--it just feels like folks are mapping out all the things that are working on the web and filing for patents. I found this patent while going through the almost 1,300 API-related patents in the Amazon Web Services portfolio. Many of their patents make sense to me, but this one felt like it didn't belong.
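To illustrate why the claim doesn't feel novel: dispatching multiple asynchronous calls, where one call's result parameterizes the next, is everyday asynchronous client code. A minimal sketch in Python's asyncio, with hypothetical get_user/get_orders functions standing in for remote API requests:

```python
import asyncio

# Hypothetical stand-ins for two remote API calls, where the result
# of the first is a required parameter of the second.
async def get_user(username):
    await asyncio.sleep(0.01)  # simulate network latency
    return {"id": 42, "username": username}

async def get_orders(user_id):
    await asyncio.sleep(0.01)
    return [{"order": 1, "user_id": user_id}]

async def main():
    # The second call awaits the first call's result before it can
    # fire--the "dependency" the patent claims the server would
    # resolve on the client's behalf.
    user = await get_user("kinlane")
    orders = await get_orders(user["id"])
    return user, orders

user, orders = asyncio.run(main())
print(orders[0]["user_id"])  # 42
```

This is a sketch under stated assumptions, not the patent's claimed implementation--but it shows how ordinary the described pattern is in any async client library.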

When I read these patents I worry about the future of the web. Ultimately I can only monitor the courts for API related patent litigation, and keep an eye out for new cases, as this is where the whole API patent game is going to play out. I'll keep squawking every time I read a patent that just doesn't belong, and when I see any new legal cases I will help shine a light on what is going on.

Hoping Schema Becomes Just As Important As API Definitions in 2017

The importance of a machine readable API definition has grown significantly over the last couple of years, with a lot of attention being spent (rightfully so) on helping educate API providers of the value of having an OpenAPI Spec, API Blueprint, or another format. This is something I want to continue contributing to in 2017, but I also want to shine a light on the importance of having your data schema well defined.

When you look through the documentation of many API providers, some of them provide an example request which might give hints at the underlying data model, but you rarely ever see API providers openly share their schema in any usable format. You do come across a complete OpenAPI Spec or API Blueprint from time to time, but usually, when you find API definitions, the schema portion is incomplete.

Not having your schema well defined, shareable, and machine-readable is one of the contributing factors to a lack of common, shared schema in the API space. We have healthy examples of this in action with Schema.org, but for some reason, many of us don't bring schema front and center in our API operations. We are accepting them as input for our API requests, and returning them with API responses, but we don't always share examples of this in our documentation, complete these sections in our API definitions, or share JSON Schema as part of our developer resources--all contributing to a lack of consistency within a single API operation, as well as across the wider industry.

I am going to spend more time in 2017 talking to people about the schema they use in their API operations and shining a light on existing schema that has been published by API providers. I will also continue to support important schema like Open Referral that helps streamline how our world works. It is no secret that when we speak using common schema, things work smoother, and we make sense to a wider audience. I hope in 2017 we can invest more in defining and sharing the schema we put to use, and reuse some of the most common examples out there.
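To make the value of a published schema concrete, here is a minimal, hand-rolled sketch of what a machine-readable schema enables--validating a request payload against a simplified, JSON-Schema-style definition. The field names are hypothetical, and a real implementation would use a proper library like jsonschema rather than this stdlib-only toy:

```python
import json

# A simplified, JSON-Schema-style definition an API provider might
# publish alongside their API definition (hypothetical fields).
contact_schema = {
    "required": ["name", "email"],
    "properties": {
        "name": {"type": str},
        "email": {"type": str},
        "age": {"type": int},
    },
}

def validate(payload, schema):
    """Return a list of validation errors (empty when valid)."""
    errors = []
    for field in schema["required"]:
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, rules in schema["properties"].items():
        if field in payload and not isinstance(payload[field], rules["type"]):
            errors.append(f"wrong type for field: {field}")
    return errors

payload = json.loads('{"name": "Kin Lane", "age": "not a number"}')
errors = validate(payload, contact_schema)
print(errors)  # → ['missing required field: email', 'wrong type for field: age']
```

When the schema is shared in a machine-readable format, every consumer can run exactly this kind of check before a request ever hits the API--which is the consistency the paragraph above is asking for.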

The API Driven Marketplace That Is My Digital Self

I spend a lot of time studying and thinking about the "digital bits" that we move around the Internet. Personally, and professionally, I am dedicated to quantifying and understanding those bits that are the most important to us as individuals, professionals, and business owners. Like many other folks who work in the tech sector, I have always been good at paying attention to these digital bits; I am just not as good as others at monetizing them and adding to my own wealth.

When you talk about this world as much as I have, you see just a handful of responses. Most "normals" aren't very interested in things at this level--they just want to benefit from the Internet and aren't really interested in how it works. People who are associated with the tech sector and understand the value of these bits often do not see them as "your" bits; they see them as their bits--something they can extract value from and generate revenue with. Then there are always a handful of "normals" who are interested in understanding more because of security and privacy concerns, as well as a handful of tech sector folks who actually care about the humans enough to balance the desire to just make profits.

The Imbalance In All Of This Is What Fascinates Me
The majority of the "normals" don't care about the complexity of the bits they generate, or who has access to them. Folks in the tech sector love to tell me regularly that people don't care about this stuff, they just want the convenience, and for it all to work. However, they are also overwhelmingly interested in the bits you generate each day, because there is plenty of money to be made extracting insights from your bits and selling those insights, as well as the raw bits, to other companies so they can do the same. This is why EVERYTHING is being connected to the Internet--the convenience factor is just to incentivize you enough so that you plug it in.

There is a reason this latest wave of tech barons like Facebook, Twitter, and Google are getting wealthy--their wealth is generated from the exhaust of our personal and professional lives. Facebook doesn't make money from me just having a Facebook account, they make money off me regularly using Facebook, sharing details of my life via my home and work computers, and mobile devices. This "exhaust" from our daily lives is why Silicon Valley is looking to invest money in each wave of startups, and why corporations, law enforcement, and government agencies are looking to get their hands on it. If these bits are so important, valuable, and sought after, why shouldn't the average citizen be more engaged as part of this? I mean, they are our bits, right?

Where the imbalance really comes in for me is that technologists agree that these bits are valuable. Many even hold them up to be the source of intelligence for the next generation of algorithms that are increasingly governing our lives. Some even hold these bits up to almost a religious status, believing that someday humans will simply be uploaded to the Internet because these bits reflect our human soul. Yet the average person is told not to worry themselves with these bits, has no right to access them, let alone assert any sort of control or ownership over them. It is troubling that this is the norm for everything right now.

I operate somewhere in the pragmatic and realistic middle, I guess. I do not believe these bits and bytes represent our soul, being, or any other religious stance. I do believe they are a digital reflection of ourselves, though. I do not think the thoughts in my Evernote notebook should be seen purely as a commodity to enrich their algorithms, or the photos of my daughter on Instagram as open for use in advertising--they are my bits. I am fine with the platforms I use generating revenue from my activity; however, I do expect them to treat these bits as mine, and to treat being a steward of my bits as a shared partnership--something I feel has severely gotten out of balance in the current investment climate.

Who Do These Bits Belong To?
Only in recent years have I seen more tangible discussion around who these bits belong to. Startup founders I've discussed this with feel pretty strongly that these bits wouldn't exist if it wasn't for their platform--so they belong to them. Increasingly the ISP, cable, and mobile network providers feel they have some right to this information as well. Local and federal governments have also pushed pretty hard to clarify that they should have unfettered access to these bits. Sadly, these discussions rarely include end-users, the average citizen, and the "normals", but I guess this is how power works, right? Let's keep them in the dark while we work this shit out for them.

I guess that I am looking to help shift this balance by being a very vocal "average citizen" and an active participant in the tech sector, one who cares about privacy and security. This is why I do API Evangelist--to shine a light on this layer of our increasingly digital world, and be transparent about my own world, so that I can help educate other "normals", and people who care, about this version of our digital self that is being generated, cultivated, and often exploited online without any respect for us as individual human beings. To help quantify the digital version of myself, I wanted to walk through my footprint, and share it with others.

What Makes Up The Digital Version Of Kin Lane?
Alongside studying the world of APIs, and how the bits and bytes are being moved around the Interwebz, I regularly try to assess my own online presence, to define, and regularly redefine, who Kin Lane is on the Internet--this is how I make money and pay my rent, so it is very important to me. I am always eager to dive in and quantify this presence, because the more I am aware of this digital presence, the more I am able to make it work in the service of what I want, over what other people, companies, and the government want for me. Let's take a stroll through the core services that define my digital self in 2017.

Twitter (@kinlane & @apievangelist)
Starting with the most public version of myself, I'd say Twitter is one of the most important digital performances I participate in each day. I have 10 accounts I perform on, Tweeting about what I see, what I write, and many other elements of my personal and professional life. Here are some of the key bits and bytes:

  • Profile - My Twitter profile, and account details.
  • Tweets - My words, thoughts, and observations of the world around me.
  • Messages - My private messages with people I know.
  • Media - Photos and videos that I have often taken myself.
  • Lists / Collection - Curation of the world of Twitter as I see it.
  • Location - Where I am when I'm engaging with my network.
  • Friends - The people that I know and connect with on Twitter.
  • Links - Links to other important sites in my world.

I get that Twitter needs to generate revenue by providing insights about the relationships I have, the role I play in trending topics, and other interesting insights they can extract in aggregate. However, these are my words, thoughts, messages, photos, and friends. Even though these activities connect in the cloud at twitter.com, they are occurring in my home and my physical world via my laptop, tablet, and mobile phone.

Github, Github Pages, and Github Gists (@kinlane, @apievangelist)
Twitter is an important channel for me, but I would say that Github plays the most critical role in my professional world of all the services I depend on. I host the majority of my public presence using Github Pages, and I manage all the API driven code that runs my world on Github. Aside from Twitter, I'd say that Github is one of the most important social and communication channels in my world, used to keep track of what individuals and companies are up to--here are the bits I track via Github:

  • Profile - My Github profiles for Kin Lane and API Evangelist.
  • Organizations - For each of my major project areas I have separate organizations.
  • Users - The people that I work with, have relationships with, or just stalk on Github.
  • Projects - I'm beginning to organize my work into projects instead of organizations.
  • Repositories - All my public websites, backend APIs, and code that I write lives as repos.
  • Issues - I leverage Issues as intended by Github, but also as a tasking system for myself.
  • Gists - I use Github Gists as part of my storytelling, sharing code, data, and other bits of code.
  • Search - I conduct regular searches across Github, both manually and as part of automated work.

When it comes to what matters in my world as the API Evangelist, Github is the most critical. Many providers have dialed in how to game Twitter, LinkedIn, Facebook, and other popular channels, but it is much harder to fake a presence on Github. This is where I track on companies and individuals who are doing some of the most valuable work on the web--the most important part is that it's out in the open.

Facebook (@kinlane, @apievangelist, & @dronerecovery)
I would put Facebook more in the personal category than a professional one, as I have never really found a voice for API Evangelist on Facebook. I tend to use the platform for coordinating with my friends and family, but I am increasingly opening up to the wider public and including folks from my professional world. This doesn't change the conversation around the value of these bits and bytes in my daily life:

  • Profile - My Facebook profile, details, and pages.
  • Posts - My words, thoughts, and observations of the world around me.
  • Friends - My friends, and family, and their networks.
  • Images - The photos and images I post to Facebook.
  • Videos - The videos and animations I publish to Facebook.
  • Links - Links to other important sites in my world.
  • Pages - The versions of this detail that get posted to my Facebook pages.

I do not publish much else to Facebook. I don't check in, or really use it as part of my business operations, so what I do is pretty basic. I am spending more time crafting how-to API videos, and other content and media that is focused on a Facebook audience, so this will change over time, making my perspective on ownership and control over my bits and bytes even more important in coming years.

YouTube (@kinlane, @dronerecovery, @algorotoscope)
Historically, most of my videos on Youtube have been from speaking at conferences, universities, and other gatherings. I've been slowly shifting this as I hope to produce more how-to API content, and it has significantly shifted with the introduction of my Drone Recovery and Algorotoscope work. Youtube's role in my personal and professional worlds is only going to continue to expand:

  • Profile - My Youtube profiles, detail, and history.
  • Videos - The videos I publish on Youtube.
  • Channels - How I organize things on Youtube.
  • Playlists - The curation of content I publish, and discovery.
  • Comments - The comments on the videos I've published.
  • Search - My search history on Youtube.

I do not have a lot of experience working with Youtube in a professional capacity and do not spend a huge amount of time on Youtube watching videos, but its network effect is undeniable. While Twitter and Facebook are growing in significance when it comes to my video bits, Youtube is still number one.

Dropbox
While not a very public part of my online self, Dropbox is something I use a lot for managing the more human side of my online storage of files, videos, and other digital objects. I leverage Dropbox for sharing when planning my conference(s), working on video projects, and a variety of other projects, across many different teams. Here are some of the bits I manage with Dropbox:

  • Profile - My profile, details, and Dropbox account.
  • Files - Everything I'm storing on Dropbox.
  • Users - The users I've added to teams and collaborated with.
  • Sharing - The management of sharing files.

I'm increasingly using the Dropbox API as a way to automate my document, image, video, and other media processing workflows in a way that allows me to plug humans in at different stops along the way. It helps me maximize control over the media and other objects I need throughout my work on a daily basis.
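A workflow with human stops can be sketched as simple routing logic. The listing below is a hypothetical folder listing, shaped like the entries returned by Dropbox's /2/files/list_folder endpoint, with each file routed either to automated processing or to a human review stop--the folder names and extension rules are illustrative assumptions, not Dropbox's API:

```python
# A hypothetical folder listing, shaped like what a storage API such
# as Dropbox's /2/files/list_folder endpoint returns.
entries = [
    {"name": "talk-draft.mp4", "path_lower": "/inbox/talk-draft.mp4"},
    {"name": "cover.png", "path_lower": "/inbox/cover.png"},
    {"name": "notes.txt", "path_lower": "/inbox/notes.txt"},
]

AUTOMATED = {".png", ".jpg"}     # safe to process without review
HUMAN_REVIEW = {".mp4", ".mov"}  # route to a human stop in the workflow

def route(entry):
    """Decide the next stop for a file in the pipeline."""
    suffix = "." + entry["name"].rsplit(".", 1)[-1]
    if suffix in AUTOMATED:
        return "/processed"
    if suffix in HUMAN_REVIEW:
        return "/review"
    return "/hold"

destinations = {e["name"]: route(e) for e in entries}
print(destinations)
```

In a real pipeline the route function's return value would drive a files/move_v2 call against the API; the point here is just that the human stop is an explicit branch in the routing, not an afterthought.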

Google
Google has been a significant player in helping me define my digital self since 2005. In recent years I've made an effort to reduce my dependence on Google, but it is something I will never be able to do completely. I see Google as the poster child for why we should be mindful of our digital self and the bits we manage online, and why we should be skeptical of free software and tools for our businesses. Here are the bits I'm leveraging Google to manage:

  • Profile - My overall Google profile for personal and my business domains.
  • Email - Email for my personal and business profiles going back to 2005.
  • Documents - All my files for my personal and business profiles going back years.
  • Calendar - My secondary calendar, that is tied to my local calendar.
  • Search - My search history, and since Google is my primary search engine--significant.
  • Location - Where I am at, where I'm going, and where I've been.
  • Translation - Translating of near real-time conversations I have with people around the world.
  • Analytics - The analytics for all of my public domains.
  • Forums - Conversations from a handful of groups I'm part of and run for other projects.
  • News - I use Google News as my primary news aggregator, with personalized sections and searches.
  • Hangouts - I still use Google Hangouts with folks around the world.

I would point the finger at Google for shifting the landscape from business software to free, surveillance-style business models. I also point the finger at Google for shifting the overall focus of a free, open, and democratic Internet towards something that is about making money, generating leads and clicks, and fueling disinformation networks--"Index The Worst Parts of the World and Human Race".

Slack
I like Slack. I get it. However, I find it to be a little too noisy and busy, and I find that I can only handle about three active channels at any point in time. Even with all the noise, it is a pretty interesting approach to messaging, and one of my favorite API platforms, so I can't help but put the platform to use in my business and personal worlds, helping me manage the following bits:

  • Profile - My Slack profile that spans multiple channels.
  • Channels - The various Slack channels I'm part of or manage.
  • Files - Any file I share as part of my conversations on Slack.
  • Groups - The different groups I tune into and participate in.
  • Search - My search history across the channels I participate in. 
  • Bots - Any automated assistants or other tools I've added.

I'm not a big believer in bots and automated approaches to messaging, but I get why others might be into it. I'm a little more critical about where and how I automate things in my world, and a little biased because I tend to write the code that assists me in operating my world. When it comes to bots, I'm hoping that users are more critical about which bots they give access to, and bot developers provide more transparency into the data they collect, and their security practices.

Eventbrite
I have my own conference that I help produce, and have played a role in other conferences, so Eventbrite plays a pretty significant role in managing the bits for this part of my professional world. Eventbrite helps me manage my bits for use across numerous events, but also helps me manage my relationship with the bits of my conference attendees, which are very important to me.

  • Profile - My profile and account on Eventbrite.
  • Events - The events I've attended and produced on the platform.
  • Venues - Venues of the events I'm watching, attending, or operating in.
  • Users - The users who are attending events I'm involved with.
  • Orders - The orders made for those attending my events.
  • Reports - Reports on the details of each event I've managed.
  • Media - The media I've uploaded and managed to support events.

Eventbrite is an interesting convergence of the physical and digital worlds for me, tracking the details of the types of events I attend--primarily tech events, but occasionally entertainment, political, and other types of events. If you've ever run a lot of conferences as I have, you understand the importance, value, and scope of the information you are tracking about everyone involved. A significant portion of a conference's business model is based on sponsors having access to these bits.

Meetup
When it comes to smaller gatherings, I depend on Meetup to help me manage my involvement. I actively use Meetup to stay in tune with many of the API Meetup groups I frequent, as well as keep tabs on what other interesting types of gatherings are out there. Here are the bits I track as part of my Meetup usage:

  • Profile - My Meetup profile, account, and messages.
  • Groups - The Meetup groups that operate in various places.
  • Events - The Meetup gatherings I've attended or thrown.
  • Locations - The different cities where Meetups occur.
  • Venues - The venues where I've attended and thrown Meetups.
  • Members - The members of Meetup groups I'm involved in.
  • Comments - The conversations that occur within a group and as part of events.
  • Pictures - Any pictures that get uploaded in support of an event.

I am hyper aware of the information Meetup tracks from my own usage when attending Meetups, but also from my experience running Meetups. I also use the Meetup API to do research on locations and the different areas that are related to the API industry--so I know what data is available, or not, when it comes to profiling users.

CloudFlare
I use CloudFlare to manage the DNS for all my domains. They provide me with a professional, API driven set of tools for managing the addressing of my online presence. Most importantly, CloudFlare helps me manage the portion of my digital presence that I own the most, and allows me to extract the most value from the bits I generate each day.

  • Profile - My CloudFlare profile, and details of my domain registration.
  • Firewall - My firewall settings for my domains.
  • DNS - The addressing for all my public and private domains.
  • Certificates - The certificates I use to encrypt data and content in transit.
  • Analytics - The analytics of traffic, threats, and other details of this layer.

This is a very important service for me, my business, and my brand. Choosing the right service to manage the frontline for your online presence is increasingly important these days. CloudFlare plays an important role when it comes to securing my bits, making sure I can operate safely on the Internet, and staying functional as a viable independent business.
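As a sketch of what "API driven" DNS management looks like, here is how one might build a request against CloudFlare's v4 DNS records endpoint using only the standard library. The zone id, email, and key are placeholders, the auth-header scheme is CloudFlare's legacy X-Auth-Email/X-Auth-Key style, and the request is constructed but never sent:

```python
from urllib.request import Request

API_BASE = "https://api.cloudflare.com/client/v4"

def list_dns_records_request(zone_id, email, api_key):
    """Build (but do not send) a request for a zone's DNS records."""
    return Request(
        f"{API_BASE}/zones/{zone_id}/dns_records",
        headers={
            "X-Auth-Email": email,
            "X-Auth-Key": api_key,
            "Content-Type": "application/json",
        },
        method="GET",
    )

# Placeholder credentials and zone id--substitute your own.
req = list_dns_records_request("your-zone-id", "me@example.com", "placeholder-key")
print(req.full_url)
```

Sending the built request through urllib.request.urlopen would return a JSON envelope listing each record, which is what makes scripting DNS changes across many domains practical.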

Gumroad
I use Gumroad to help me sell my white papers, essays, guides, images, video, and other digital goods. They provide a simple way for me to create new products from the digital goods I produce, and give me an interface, as well as an API, for adding, managing, and selling things as part of my operations.

  • Profile - My GumRoad account, profile, and history.
  • Papers - All my white papers I sell via Gumroad.
  • Guides - All the guides I sell via Gumroad.
  • Videos - All the videos I sell via Gumroad.
  • Images - All the images and collections I sell via Gumroad.
  • Offer Codes - Offer codes I generate for different campaigns.
  • Sales - All the sales I've made across my sites, and Gumroad store.
  • Subscribers - Any subscribers I have to subscription based products.

I like Gumroad because it allows me to sell digital goods on my site, and they just get out of the way. They aren't looking to build a network effect, just to empower you to sell your digital goods by providing embeddable tools and easy to use ordering, checkout, and payment solutions--they make money when you sell things and are successful.

LinkedIn
A lot of what I do targets a business audience, making LinkedIn a pretty important social and content platform for me to operate on. I do not agree with LinkedIn's business strategy, and much of what they have done to restrict their developer community makes them of lower value to me, but I still use them to manage a lot of bits online.

  • Profile - My LinkedIn profile and resume.
  • People - People I know and have relationships with.
  • Companies - The companies and organizations I follow.
  • Jobs - Employment postings that I keep an eye on, and include in work.
  • Groups - The LinkedIn Groups I am part of and manage.
  • News - Any news I curate and share via the platform.
  • Essays - The essays I write and share via LinkedIn publishing.

Like Facebook, I'm looking to expand my business usage of LinkedIn, and minimize its role in my personal world. Their lack of APIs for some of the important bits I manage on the platform makes it hard for me to expand and automate this very much. As a result, it tends to be just a destination for publishing news and engaging with people via messaging and groups.

Amazon & Amazon Web Services
Amazon is another cornerstone of my operations. They provide me with the core elements of my business operations, and are what I'd consider to be a wholesale provider of some of the bits I put to work across my operations. I guess I also have to consider the consumer relationship I have with Amazon, and the role they play in making sure physical goods arrive on my doorstep, as well as the digital bits I manage on their platform.

  • Profile - My Amazon and Amazon Web Services Account.
  • Compute - All my compute resources operate in AWS.
  • Storage - Amazon is my primary storage account.
  • Database - I run several MySQL databases on Amazon.
  • Shopping - I do a lot of shopping for my personal and professional worlds here.
  • Books - I buy and publish books using Amazon publishing.
  • Guides - I am looking to expand my publishing of industry guides on Amazon.

Amazon is a very tough service for me to replace. I keep everything I run there as generic as possible, staying away from some of their newer, more proprietary services, but when it comes to what I can do at scale, and the costs associated with it--you can't beat Amazon. Aside from Amazon Web Services, I depend on Amazon for my camera equipment, printer, and other things that help me generate and evolve my bits both on, and offline.

Pinboard
This is a very specialized service I use, but it has provided a core part of my world for over three years now. I use Pinboard to bookmark the news and blog posts I read, images, videos, and any other digital bits of others that I keep track of. Pinboard is my memory and notebook for everything I read and curate across the Web.

  • Profile - My public Pinboard profile, and the private account.
  • Links - All the links I've curated on Pinboard.
  • Tags - How I've tagged everything I've curated--linked to my internal tagging.

I have a different relationship with almost every service provider listed on this page, and in some of the cases I have a love/hate relationship with the platform--I need their service, but I don't agree with all of their policies and practices. This IS NOT the case with Pinboard. I absolutely love Pinboard. It is a service that does one thing and does it well, and I'm happy to pay for the service they offer. They do not have any surprises when it comes to helping me manage my bits. They are extremely clear about our partner relationship.

SoundCloud
This is where I publish the audio from my world. I've published recorded webinars, audio versions of talks I've given, and Audrey and I publish our Tech Gypsies podcast here. Like Youtube, SoundCloud is a place where I'm looking to expand how I use the platform, managing the audio side of my digital self with these bits:

  • Profile - My SoundCloud profile and account.
  • Tracks - Any audio tracks I've published to SoundCloud.
  • Playlists - The playlists I've created for my work and others.
  • Comments - The comments I've made, or that have been made on my work.
  • User - Any users I've engaged with, liked, followed, or otherwise.

At Tech Gypsies (my parent company) we have been publishing our podcast to SoundCloud each week since April of 2016, and it is something we are going to keep doing in 2017. I'm looking to capture more sounds that I can publish here, to accompany some of my videos. I'm also looking at the music and sounds of other artists for inclusion in some of my work.

It took me a while to come around to the network effects of publishing on Medium, but I'm there. While I'm not nearly as active here as I am on my other domains, I do syndicate certain pieces here to Medium to tap into these benefits.

  • Profile - My Medium profile which is linked to my Twitter profile.
  • Users - Anyone that I've followed, or who has followed me.
  • Publications - The publications I've created or participated in.
  • Posts - Any posts I've published to Medium.
  • Images - The images I include with any of my posts.

After the recent announcement that they would be downsizing at Medium, the benefits of my approach to managing my bits were clear. Many folks have told me I should move my blogs to Medium, and while I'll keep investing time into the platform, it will never receive the same attention as my own properties, because of the revenue those properties bring to the table.

I have been a long-time Flickr user, and was a paying customer for many years. I still use the platform to manage certain collections of images, including those from my APIStrat conference, but its dominance as an image and photo management channel has been reduced due to concerns about the stability of its parent company, Yahoo.

  • Profile - My profile and account for Flickr.
  • Photos - The photos I've uploaded to Flickr.
  • Collections - The collections I've created of my photos.
  • People - The people I follow, and engage with around my photos.
  • Places - The places where photos have been taken. 
  • Tags - The tags I apply to photos I've published.

In addition to managing my own digital presence using Flickr, I also use it to discover photos and people, navigating tags and conducting searches via the site and API. This service contributes pretty significantly to my digital presence because I use these photos in my blog posts, and in other publishing I do online (licensing allowing).

I swore off Instagram when they temporarily changed their terms of service so that they could use your photos in advertising without your permission--it is something they have backed off from, but I still kind of lost faith in them. I still maintain a presence on the platform by publishing photos there, and I've recently set up an account for my photography, drone, and algorotoscope work, so I expect my usage will grow again.

  • Profile - My Instagram profile and account.
  • Users - The users I follow, and who follow me.
  • Images - The photos I have taken and published.
  • Video - The videos I have taken and published.
  • Comments - The comments I've made, and that have been made on my work.
  • Likes - Anything I've liked, and flagged as being interesting.
  • Tags - How I tag the images and video I publish.
  • Locations - The locations where I've taken photos and videos.

I really like Instagram as a solution. My only real problem with it is that they are part of the Facebook ecosystem. As an image management solution and a social network around photos, I think it's a great idea--I'm just not a big fan of their priorities when it comes to licensing and surveillance.

My primary payment platform for my business is still PayPal. If I were building an application, or scaling a business, I'd be leveraging Stripe, Dwolla, and other leading providers, but because there is still a human element in my payment scenarios, I heavily use PayPal.

  • Profile - My personal and business profiles on PayPal.
  • Payments - Any payments I've made and received.
  • Payors - People and companies who have paid me money.
  • Payees - People and companies who I've paid money.
  • Invoices - The invoices associated with payments.

My PayPal account is a view into the financial side of my operations. It helps me centralize my money in a way that lets me work with a variety of services, as well as the underlying bank account(s). I can integrate it into my site, and use the embeddable tooling they provide to integrate payments into my sites, and the applications or tooling I develop.

The majority of the automation that occurs on my platform is hand-coded, because I have these skills. However, there are many services from the integration platform as a service provider Zapier that I just can't ignore. Zapier makes integration between common API-driven services dead simple--something any non-developer can put to use, and even programmers like me can put to work to save time.

  • Services - All of the services I've integrated with Zapier to help automate things.
  • Authentication - The authentication tokens I've approved for integration to occur.
  • Connections - The formulas employed from connection to connection.

Zapier has an API, but it doesn't provide as much control over my account, services, and connections--or as they call them, zaps--as I'd like to see. It is a much better solution than IFTTT, which takes a proprietary stance on your services and connections, but in my opinion, we will need to evolve more solutions like DataFire.io to help more normals make sense of all of this.

The Core Services I Depend On
There are numerous other services I use, but these 20+ are the core services I depend on to make my personal and professional world work. These are the services I depend on to make a living. Some of these services I pay for, and there are sensible and fair terms of service in place. Other services I do not pay for, and put to use because of the network and other positive effects...even if there are often terms of service in place that extract value from my data and content--hopefully the positives outweigh the negatives for me in the end.

My Domain(s)
Now we come to the most important part of my digital self. The portion of it where I get to have 100% control over how things work, and benefit from all of the value that is generated--this is my domain, where I am a domain expert, across a handful of web domains that I own and operate.

  • API Evangelist - Where I write about the technology, business, and politics of APIs.
  • Kin Lane - My personal blog, where I talk about tech, politics, and anything else I want.
  • API.Report - Where I publish the press releases I process each week from the API space.
  • Stack.Network - My research into the API operations of leading API providers.
  • Drone Recovery - Photos and video from my drone work, in an urban and natural environment.
  • Algorithmic Rotoscope - Applying machine learning textures and filters to images and video.

In my world, all roads should begin here and end here. The majority of my work starts here and then gets syndicated to other channels. When this isn't possible, I make sure that I can get my content back out via API, scraping, or good old manual work. My objective in all of this work is to define my entire digital footprint and paint a picture of not just the services I depend on for my online self, but also the valuable little bits that get created, moved around, and ultimately monetized by someone (hopefully that is me). 

Identifying The Bits
When you look at my digital self, I would say that the two most defined bits of my existence are the blog post and the tweet. These are the public bits I produce most frequently, and probably what I'm best known for, which also results in me making money. I began focusing on the digital bits I generate and manage as part of my presence on a regular basis because I am the API Evangelist, and I saw companies using APIs to make money off of all of our bits. Amongst it all, I managed to make a living talking about all of this, and generating bits (blogs, guides, white papers, talks) that people are interested in--sometimes interested enough to pay me money.

I am not looking to identify all of my bits just so that I can make money off them, and I'm not looking to deprive companies who work hard to develop platforms and useful tools from sharing in the value generated by my traffic, data, content, photos, videos, and other bits I generate. I'm just looking to assert as much control over these digital bits as I possibly can because, well... they are mine. Here are some of the most common bits I'm producing, managing, and in some cases making a living from, across the services I use to define my digital self:

  • Analytics
  • Billing
  • Blog
  • Books
  • Bots
  • Calendar
  • Certificates
  • Channels
  • Collections
  • Comments
  • Companies
  • Compute
  • Database
  • DNS
  • Documents
  • Domains
  • Email
  • Essays
  • Events
  • Files
  • Firewall
  • Forums
  • Friends
  • Gists (Code)
  • Groups
  • Guides
  • Hangouts
  • Images
  • Invoices
  • Issues
  • Jobs
  • Likes
  • Links
  • Locations
  • Magazines
  • Members
  • Messages
  • News
  • Offer Codes
  • Orders
  • Organizations
  • Pages
  • Papers
  • Payees
  • Payments
  • Payors
  • People
  • Photos
  • Pictures
  • Places
  • Playlists
  • Posts
  • Profile
  • Projects
  • Publications
  • Relationships
  • Reports
  • Repositories
  • Sales
  • Search
  • Sharing
  • Shopping
  • Storage
  • Subscribers
  • Tags
  • Teams
  • Tracks
  • Translation
  • Tweets
  • Users
  • Venues
  • Videos

All of these digital bits exist for a variety of reasons. Some are more meta, some are casual, while others help me be more informed and organized, and ultimately become the research which drives the bits that make me money. I want to respect the digital bits of others that I put to use in my work (images, videos, quotes), and I would like folks to be respectful of my digital bits. I could use some help driving traffic to my sites, and I'm happy to help create unique content, media, and other activities to drive traffic to the services I use, but I want it to be fair, equitable, and to acknowledge that these are my digital bits, regardless of the platform, tool, or device where they were generated--it is quite likely that occurred on my laptop, tablet, or mobile phone, in my home.

The API Driven Marketplace That Is My Digital Self
I understand that I probably think about this stuff a little too much--more than the average person. It is my job, but I can't shake the feeling that the average person might want to be a little more aware of this layer of their increasingly digital life. Especially so if you are looking to make a living off your work as an independent professional (some day), and successfully develop your career in the digital jungle, where everyone is looking to extract as much value from your digital bits before you can ever benefit yourself. As a professional, I want to maximize my control over the traffic and other value generated by my labor, asserting control over my digital bits whenever, and wherever, I can.

I notice my digital bits flowing around online at this level because this is what I do as the API Evangelist, and I'm working to automate as much of it as I can, maximizing my digital footprint and presence, while retaining as much control over all of my bits as I possibly can. All of the platforms listed above have APIs. If possible, I only depend on services that have APIs. I don't do this just because of what I do for a living; I do it because it allows me to automate and syndicate my presence and work more efficiently and effectively. I have active integrations with Facebook, Twitter, Github, and all of the services listed above. I always have a queue of new integrations I need to tackle, and when I do not have time to code, I try to use Zapier or another integration-as-a-service solution.

All of my work is conducted within my domain, within my workshop. Each blog post I write, each image or video I create, and each guide, essay, and white paper I produce is crafted on my desktop, or in the workshops that are kinlane.com, apievangelist.com, and my other web domains, where I operate my open, publicly available digital workbench. Then I use the API for each of the services listed above to push my bits to each channel, syndicating to Twitter, LinkedIn, Facebook, Medium, and to my network of sites using Github. I know the APIs of these services intimately, including the bits that are exchanged with each transaction. I have written code to integrate with, and crafted OpenAPI specs for, each of the services listed above--I can close my eyes and see these bits being exchanged daily, and I think constantly about how they are exchanged, bought, sold, and leveraged across the platforms above. I know, I have a problem.

I'm sure that all of this sounds a little OCD to some. Most folks I talk to about this stuff do not care, and dismiss it as out of their realm. Most of the entrepreneurs who get things at this level are too busy making money from the value being generated to have much interest in everyone else understanding all of this. In my opinion, you will either want to understand things at this level, and assert control over your bits, or you will work on these other platforms, allowing others to benefit from what you do online, in your car, and in your home. I don't mind other people generating value from my digital bits, especially if they provide me with valuable services, the details of the relationship are disclosed, I am informed, and we are in agreement before entering into it.

I am just getting to know my digital self. Sadly, I do not understand the terms of service that guide all of these relationships as well as I know the technical details. Now that I have more of a handle on which core services I depend on, as well as the digital bits that are exchanged in the relationships I have entered into, I'm working on profiling the business models, pricing, and plans available for each of these services. After that, I'd like to take a better look at the terms of service, privacy policies, and other legal aspects of these relationships. Maybe then I'll understand more about how my digital bits are being bought and sold on the open market, and what dimensions of my digital self really exist.

Service Level Agreements for Researchers Who Depend On APIs

I came across a pretty interesting post on using APIs for research, and the benefits and challenges that researchers face when depending on APIs. It was another side of API stability and availability that I hadn't considered much lately. Social media platforms like Twitter and Facebook are rich with findings to be studied across almost any discipline. I regularly find social media API studies at universities, ranging from healthcare and the Zika virus, to algorithmic intellectual property protection, all the way up to US Navy surveillance programs that are studying Twitter.

APIs are being used for research, but there are rarely API platform plans crafted with research in mind--flexible rate limits, and custom terms of service that give researchers access to the data they need. I'm assuming that some companies have behind-the-scenes deals with some universities, larger enterprise research groups (IBM, etc.), as well as government agencies and police agencies. The problem with this is that 1) there is no virtual public front door to walk through to understand research levels of access, and 2) the details of partnerships are not public, equitable, and auditable by journalists and other groups.

The author of this essay provides a lot of details regarding what it is like to depend on APIs for your research. Some of the risks could put your career in jeopardy, if the terms of service and access levels change before you can finish your research or dissertation. I'm not sure what the responsibility of API providers should be when it comes to making their resources available for research, but it is something I will be exploring further. I will be reaching out to researchers about their API usage, but I will also be helping encourage API providers to share their side of things, and maybe eventually formalize how API providers make their valuable resources available for important research.

Evaluating A New Channel For Publishing My Bits

I have used Shutterstock for some time now when it comes to stock images, but I've only recently started playing around with their publishing program, hoping to make some money from some of my photos and videos. As with any other channel I am considering for inclusion in my lineup of tools and services, I am spending time going through their platform, evaluating the tech, business, and political considerations of adding any new service into my world.

First, a service should always have an API. This isn't just because of what I do for a living and my obsession with APIs. This is so that I can integrate seamlessly with my existing operations. Another side of this argument is that I will always be able to get my data and content out of a system, but I am working to be a little more proactive than that. I want my system, which operates within my domain, to take the lead, with any new channel I adopt only playing second fiddle. In this scenario, each photo or video that I publish to Shutterstock will live within my image and video systems, and then, using the Shutterstock API, I will publish to the Shutterstock domain as I deem worthy.
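The "my domain leads, the channel follows" pattern described above can be sketched generically. The channel client here is hypothetical--a real integration (Shutterstock or otherwise) would use that platform's actual API, but the shape of the flow is the same:

```python
# A generic sketch of syndicating from a system of record out to a
# channel. Names here are illustrative, not any real platform's API.

def syndicate(local_store, channel_client, worthy):
    """Push worthy items from the system of record out to a channel."""
    published = []
    for item in local_store:
        if worthy(item):
            channel_client.publish(item)  # the channel plays second fiddle
            published.append(item)
    return published

class FakeChannel:
    """Stand-in for a real channel API client, for illustration."""
    def __init__(self):
        self.received = []
    def publish(self, item):
        self.received.append(item)
```

The point of the pattern is that the local store remains authoritative--the channel only ever receives copies of what I decide is worthy, and can be swapped out or dropped without losing anything.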

The Shutterstock API (potentially) gives me more access and control over my digital bits, and allows me to do more with fewer resources. I do not have to depend on APIs, or a platform's data portability, to get my data and content out--I've always possessed this control and ownership from the beginning. This control and ownership is then exercised and strengthened in three areas: technology, business, and politics. I have technical control over my bits. I have business control over where they are sold, by whom, and how much of the action they get. I have political control, and when I want to change, evolve, or end the relationship, I can do what I think is best for me, and my API Evangelist business.

I still have a lot of reading and learning to do when it comes to understanding the legal terms of services, and the details of my Shutterstock partnership, but a company having an API is increasingly the first step of any new business relationship for me. If I can help it, I will not add any new business relationship into my world unless there is an API. Of course, there are deviations from this, but as a single operator small business, this is critical for me if I expect to be able to scale the technical side of my operations, while also meeting the business and legal requirements I have in place to help me achieve success in my business.

Algorithmia's Multi-Platform Data Storage Solution For Machine Learning Workflows

I've been working with Algorithmia to manage a large number of images as part of my algorithmic rotoscope side project, and they have a really nice omni-platform approach to allowing me to manage my images and the other files I am using in my machine learning workflows. Images, files, and the input and output of heavy objects are an essential part of almost any machine learning task, and Algorithmia makes this easy to do across the storage platforms we use the most (hopefully).

Algorithmia provides you with local data storage--pretty standard stuff--but they also allow you to connect your Amazon S3 account or your Dropbox account, connecting to specific folders and buckets, while helping you handle all of your permissions. Maybe I have my blinders on with this because I heavily use Amazon S3 as my default online storage, and Dropbox is my secondary store, but I think the concept is still worth sharing.

This allows me to seamlessly manage the objects, documents, files, and other images I store across my operation as part of my machine learning workflow. Algorithmia even provides an intuitive way of referencing files, allowing each Data URI to uniquely identify files and directories, with each composed of a protocol and a path, and each service having its own unique protocol:

  • data:// Algorithmia hosted data
  • dropbox:// Dropbox default connected account
  • s3:// Amazon S3 default connected account
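To make the addressing scheme concrete, here is a small sketch of how one of these Data URIs might be split into its protocol and path--my own illustration, not Algorithmia's client code:

```python
from urllib.parse import urlparse

def parse_data_uri(uri):
    """Split an Algorithmia-style Data URI into (protocol, path).

    e.g. "dropbox://images/drone.png" -> ("dropbox", "images/drone.png")
    """
    parsed = urlparse(uri)
    if not parsed.scheme:
        raise ValueError(f"missing protocol in Data URI: {uri}")
    # urlparse puts the first path segment in netloc for custom schemes,
    # so stitch netloc and path back together before stripping slashes
    path = (parsed.netloc + parsed.path).lstrip("/")
    return parsed.scheme, path

protocol, path = parse_data_uri("s3://my-bucket/frames/frame-0001.png")
```

The real work, of course, happens inside Algorithmia's clients, which resolve each protocol to the right connected account behind the scenes.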

This approach dramatically simplifies my operations when working with files, and allows me to leverage the API-driven storage services I am already putting to work, while also taking advantage of the growing number of algorithms available in Algorithmia's catalog. In my algorithmic rotoscope project I am breaking videos into individual images, producing 60 images per second of video, and uploading them to Amazon S3. Once the images are uploaded, I can then run Algorithmia's Deep Filter algorithm against all of them, sometimes thousands of images, using their models, or any of the 25+ I've trained myself.

This approach is not limited to just video and images; it is generic to any sort of API-driven machine learning orchestration. Just swap out video and images for mapping, content, or other resources, then find the relevant machine learning workflow you need to apply, and get to work. While I am having fun playing with my drone videos and texture filters, the approach can just as easily be applied to streamline any sort of machine learning workflow.

One additional benefit of storing data this way is that I've found Dropbox to be a really amazing layer for including humans in the workflow. I leverage Amazon S3 for my wholesale, compute-grade storage, but Dropbox is where I publish the images, videos, and documents that I need to put in front of humans, or include in the machine learning workflow. I find this gives them a role in the process, in a way that gives them control over the data, images, videos, and other objects, on a platform they are already comfortable with. I'd encourage Algorithmia, and other providers, to also consider including Google Drive as part of this--it would go a long way toward logically connecting the human portion of these workflows.

Anyways, I thought Algorithmia's approach to storage was interesting, worth highlighting, and something that other providers might consider implementing themselves.

What I Learned Crafting API Definitions For 66 Of The Amazon Web Services

I just finished crafting API definitions for 66 of the Amazon Web Services. You can find it all on Github, indexed with an APIs.json. While I wish all API providers would do this hard work on their own, I do enjoy the process, because it forces me to learn a lot about each API, and the details of what providers are up to. I learned quite a bit about Amazon Web Services going through the over 2,000 paths that are available across the 66 services.
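For anyone unfamiliar with APIs.json, a minimal index for a collection like this might look something like the following sketch--the URLs are placeholders, not my actual index:

```json
{
  "name": "Amazon Web Services Stack",
  "description": "API definitions for the Amazon Web Services APIs.",
  "url": "http://example.com/apis.json",
  "apis": [
    {
      "name": "Amazon EC2",
      "description": "Resizable compute capacity in the cloud.",
      "baseURL": "https://ec2.amazonaws.com",
      "properties": [
        {"type": "Swagger", "url": "http://example.com/aws/ec2/swagger.json"}
      ]
    }
  ]
}
```

An index like this is what lets tooling (and nosy analysts like me) discover and crawl a collection of API definitions programmatically.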

The Importance Of Consistency Across Teams
When you bounce from service to service within the AWS ecosystem, you can tell that consistency is a challenge for Amazon. Consistency is lacking in API design, documentation, and other critical areas--something that is actually getting worse with some of their newer projects. While the older AWS APIs aren't the best possible design, because they are "?Action=" based, at least they are consistent, and their documentation uses the same template. Some of the newer APIs are better designed, but their documentation is all over the place, and they are deviating from the consistency that seemed to exist with the older API efforts.
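To illustrate the two styles (endpoints shown for illustration only), the older query-style APIs funnel every operation through a single endpoint selected by an Action parameter, while newer services lean toward resource-oriented paths:

```python
# Older AWS query-style call: one endpoint, with the operation chosen
# by the "Action" query parameter (EC2 shown as an example).
query_style = (
    "https://ec2.amazonaws.com/"
    "?Action=DescribeInstances&Version=2016-11-15"
)

# Newer, more resource-oriented style: the operation is implied by the
# HTTP method and the path (API Gateway's REST APIs resource shown).
resource_style = "https://apigateway.us-east-1.amazonaws.com/restapis"
```

Neither style is wrong, but crafting OpenAPI definitions for the query style means modeling hundreds of "actions" hanging off a single path, which is part of why consistency matters so much here.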

Clear Picture Of Essential Building Blocks
There are a variety of building blocks employed in support of AWS APIs, but there is a pretty clear definition of what are considered to be the essential building blocks that exist across ALL AWS APIs:

  • Documentation - Overall, developer, and API documentation to support the services.
  • Getting Started - What you need to get up and going with any of the AWS solutions.
  • Frequently Asked Questions - A list of the questions frequently asked about each service.
  • Pricing - The pricing for using each service, with some providing a calculator to assist.

Amazon also provides a centralized blog, code, and support, which I'd also consider essential building blocks, and some of the individual services do a good job linking to these resources, but the four above are present across ALL of the AWS services, making them clearly essential.

Relationship Between CLI and API
I think the relationship between CLI and API isn't discussed enough in the API sector, but it is something that is clearly strong across the AWS ecosystem. I'm seeing more API providers offer a CLI alongside their API to support different developer tastes, and I think AWS does a good job investing equally in both approaches to putting AWS resources to work. In some cases, I'd say the CLI is better documented than the API, but this wasn't always the case--for the most part, both received equal investment.

API First And Console Second
Another area where I think Amazon provides an interesting case study is the relationship between their human interface and their API and CLI solutions. Many companies launch their human interface first and secondarily provide one for programmatic access, whereas Amazon delivered the API and CLI first, and their console came second. With current releases this seems to be in sync, but in the early days they were API first. I appreciated the AWS teams that provided me a link directly to the AWS console, dropping me right into the human interface for the API I'm working with. I have a ranking score of 1-3 for how coupled a company's API is with their human interface, and I'd put Amazon at a 2, with a moderate amount of coupling--a ranking of 1 means the two are well linked (tightly coupled).

Meet Other Folks Doing Interesting Things
One of the reasons I'm so transparent with all of my work is that it tends to alert folks to what I'm working on, and attracts like-minded individuals who are headed in a similar direction. Shortly after tweeting out my work, Mike Ralphson (@PermittedSoc) shared his Github repository of OpenAPI specs. This will save me a ton of work in verifying paths, and making sure headers, parameters, errors, and the underlying data models actually get completed. I will be setting up scripts to keep my definitions in sync with his collection, as well as the collections of other folks I'm keeping an eye on.

Change Will Come With New Products & Services
I knew that AWS had released a number of new products at their annual conference this year, but I hadn't had time to dive in. It was interesting to learn about their efforts in the areas of machine learning and the Internet of Things. I also got a good look at their authentication, encryption, identity, access management, and other security-related efforts. I feel like security will continue to be an important offering for all of the 1,000 lb gorilla tech giants--us mere mortals will not be able to muster the resources to do this at scale, and AWS-scale companies will need to offer a buffet of security solutions for API providers.

I'm going to continue refining my Amazon Web Services Stack, but I'm going to also get to work on a similar one for Google and Microsoft. Once I have these three tech giants profiled from this API standpoint, I will step back and see what I can do to compare, and better understand where things are headed. This is tedious work, but I find it worthwhile because it is something that continues to push my understanding of the space forward. As I've said before, crafting an API definition of an API is one of the best ways to get to know an API in an intimate way, second to actually writing some code and integrating with an API for realz. 

Explaining To Normals Why Every API Is Different

I enjoy having conversations with "normals" about APIs, especially when they approach me after doing a great deal of research, and are pretty knowledgeable about the landscape, even if they may lack deeper awareness around the technical details. These conversations are important to me because it is these folks that will make the biggest impact with APIs--it won't be the entrepreneurs, developers, architects, and us true believers.

While having one of these conversations yesterday, the topic of API design came up, and we were talking about the differences between seemingly similar APIs like Flickr and Instagram, or maybe Twitter and Facebook. I was asked, "Why are these APIs so different? I thought the whole thing with APIs is that they are interoperable, and make integration easier?" I love getting asked this because it helps me see the API space for what it is, not the delusion that many of us API believers are peddling.

So why are the designs of APIs so different, even between seemingly similar APIs?

  • Integration - APIs make integration with web, mobile, and device apps easier. They also make integration with other systems easier. However, very few API providers truly want their APIs to work seamlessly with the competition!
  • Silos - Many API providers operate in silos, and I have encountered teams who do almost no due diligence on existing API design patterns, standards, or even looking at the potential competition, and what already exists before crafting their API design strategy. 
  • Intellectual Property - Not many folks see the separation between their API design--the naming, ordering, and structure of the interface--and their backend API code, resulting in some distorted views of what is proprietary and what is not.
  • Venture Capital - The investors in many of the companies behind APIs are not interested in being open and interoperable with others in their industry, resulting in a pretty narrow, and selfish focus when it comes to API design patterns.

These are just a handful of the reasons why APIs are so different. It can be hard for people not immersed in the world of technology to cut through the hype, walled-garden belief systems, and delusions we peddle in the world of APIs. What makes it even worse is when you see APIs become a facade for a variety of industries, and begin masking the unique hype, belief systems, and delusions that exist in those verticals. #FUN

When you are only focused on the technology of APIs, this all seems simple and pretty straightforward. Once you begin layering in the business of APIs, things get more complicated, but are still doable. It is when you start considering the politics of APIs that you begin to see the darker motivations behind doing APIs, seeing more of the illnesses that plague the API sector and infect each vertical it touches. All of this contributes to many APIs never living up to the hype, or even pragmatic expectations, and will continue to hurt the bottom line of companies who are doing APIs.

API Calls as Opposed to API Traffic

I was doing some planning around a potential business model for commercial implementations of OpenReferral, which provides Open211 open data and API services for cities, allowing citizens to find local services, and I had separated out two types of metrics: 1) API calls, and 2) API traffic. My partner in crime on the project asked me what the difference was, looking for some clarification on how each might possibly contribute to the bottom line of municipalities looking to fund this important open data work.

So, what is the difference between an API call and API traffic in this context?

  • API Call - This is the measurement of each call made to the API by web, mobile, and device applications.
  • API Traffic - This is the measurement of each click made via URLs / URIs served up as part of any API response.

In this context, we are looking to support municipalities, non-profit organizations, and even commercial efforts that are delivering 211 services in cities around the world. I am not suggesting that every bit of revenue and profit be squeezed out of the operation of these important services, I am simply suggesting that there are ways to generate revenue that can become important in keeping services up and running, and impact the quality of those services--it takes money to do this stuff right.

Think of API traffic like an affiliate program, or in service of lead generation. This approach requires the use of some sort of URL shortener service so that you can broker, and measure, each click made on a link served up by an API. This opens up other security and privacy concerns we should think about, but it does provide a potential layer for generating valuable traffic to internal and partner web and mobile applications. This is just one of several approaches I am considering when thinking about the monetization of open data using APIs.
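To make the idea concrete, here is a minimal sketch of the brokering layer I am describing: links served up in API responses get swapped for short tracking URLs, and the redirect endpoint counts each click before sending the visitor on to the real destination. All the names here (the `go.example.org` shortener domain, the `ClickTracker` class, the sample 211 service URL) are hypothetical, and a real deployment would persist counts in a datastore and serve the redirect over HTTP.

```python
import hashlib


class ClickTracker:
    """Hypothetical sketch of an API traffic layer: broker every URL served
    in an API response through a short tracking link, and count each click."""

    def __init__(self, base_url="https://go.example.org"):  # assumed shortener domain
        self.base_url = base_url
        self.targets = {}  # short code -> original URL
        self.clicks = {}   # short code -> number of clicks measured

    def shorten(self, url):
        """Return a short tracking URL to serve in the API response
        in place of the original link."""
        code = hashlib.sha1(url.encode("utf-8")).hexdigest()[:8]
        self.targets[code] = url
        self.clicks.setdefault(code, 0)
        return f"{self.base_url}/{code}"

    def record_click(self, code):
        """Called by the redirect endpoint: count the click (API traffic),
        then return the target URL to redirect the visitor to."""
        self.clicks[code] += 1
        return self.targets[code]


tracker = ClickTracker()

# The API substitutes a tracking link into its response payload...
short = tracker.shorten("https://cityservices.example.org/food-banks")
code = short.rsplit("/", 1)[1]

# ...and when a citizen clicks it, the redirect endpoint measures the click.
target = tracker.record_click(code)
```

Each increment of `clicks` is one unit of the "API traffic" metric, measured separately from the API call that served the link, which is what makes it usable as an affiliate or lead-generation style revenue layer.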

A Glimpse At The Minimum Bar For Business API Operations In 2017

I look at a lot of API portals and developer areas, and experience a number of innovative approaches from startups, as well as a handful of leading API providers, but the Lufthansa Airlines API portal (which recently came across my radar) I feel represents the next wave of API providers, as the mainstream business world wakes up to the importance of doing business online in a machine-readable way. Their developer program isn't anything amazing, it's pretty run of the mill, but I think it represents the minimum bar for SMBs and SMEs out there in 2017.

The Lufthansa developer portal has all the basics, including documentation, getting started, an application showcase, and a blog, and they are using Github, Stack Overflow, and Twitter, and have a service status page. They provide APIs for their core business resources, including cities, airports, aircraft, and the schedule of their flights. Honestly, it is a pretty boring, mundane representation of an API, something you are unlikely to find written up in Techcrunch, but this is why I like it. In 2017, we are getting down to the boring business of doing business on the web (maybe security will come soon?).

I'm hoping this is what 2017 is all about when it comes to APIs--average small businesses and enterprises getting their API operations up and running. It is like 2002 for websites, and 2012 for mobile--APIs are what you do if you are doing business online in 2017. They aren't the latest tech trend or fad; it is about acknowledging there are many applications and systems that will need to integrate with your resources, and having simple, low-cost, machine-readable APIs is how you do this. Let's all get down to business in 2017, and leave behind the hype when it comes to the API life cycle.