{"API Evangelist"}

Meet The Platform Team Over At Mendeley API

I was playing catchup on my feeds over the weekend, and came across a nice meet-the-platform-team post from Mendeley, the academic research API. I'm a big fan of these types of efforts that help humanize API operations, and bring API providers closer to API consumers. From the outside, it is difficult to see behind the API curtain, and showcasing the team is one of the best ways to break down barriers.

Simple things like showcasing the team, letting each member tell their story, and providing Twitter accounts for each team member go a long way in helping me connect with what an API does. It is something that will continue beyond just this one post, because I followed each team member who is active on Twitter, and will be able to stay in tune asynchronously.

In the technical shuffle of API operations, it can be easy to forget about little touches like this, which is why I'm going to add team showcase as a building block on my API management list. When you are building out your API management strategy, it helps to have things like a team showcase on the checklist.

Are Your APIs Ready For The Coming Containerization Evolution Of The API Space?

If you were at Defrag in Colorado last November, or in Sydney, Australia for API Days last week, you've heard me talk about what containers are doing to APIs. A subtle but important shift in how APIs are being deployed is occurring right now, and as John Sheehan (@johnsheehan), the CEO of Runscope, says, containers are doing for APIs what APIs have been doing for businesses.

As I was processing news for API.Report this morning, I found more evidence of this with the release of a logging API container from Logentries. APIs have made resources like "logging" much more modular and portable, for use in multiple channels like mobile apps or websites. A containerized logging API makes the concept even more portable, by adding an entirely new dimension for deployment. You don't just have a logging API, you now have a logging API that can be deployed anywhere: in the cloud, on-premise, or on any physical object.
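To make this more concrete, here is a rough sketch of what shipping an API inside a container looks like--the base image, file names, and port are hypothetical, and not Logentries' actual container:

# A hypothetical Dockerfile for a simple containerized logging API
FROM node:0.10
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 8080
CMD ["node", "logging-api.js"]

Once built, the same image runs unchanged in the cloud, on-premise, or on a device: docker build -t acme/logging-api . followed by docker run -d -p 8080:8080 acme/logging-api.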

This opens up a whole new world of APIs, one that goes beyond just a programmable web, quickly evolving us towards a programmable world, for the better, and the worse. Last month I asked, when will my router have docker containers by default? I received an email from a major network equipment manufacturer this week, letting me know that they are working on it. This means little containers on Internet-enabled objects across our world, ushering in the ability to deploy localized, wholesale APIs, and giving us the ability to manifest exactly the virtual API stacks we need, for any objective.

I try not to hype up where things are going in the API space, so I will stick with calling containers a significant evolution in how APIs are done. This new way of deploying APIs will push the evolution of the business of APIs, changing how we generate revenue from valuable resources, while also putting even more stress on the politics of APIs, with the introduction of more privacy and security concerns--not to mention adding a whole new dimension of complexity.

I'm not 100% sure where all of this is going. As with the rest of the API space, I struggle with making sense of it all in real-time, and with allocating the mental bandwidth to see the big picture. All I can say at this moment is to make sure you work to better understand the various approaches to containerization, and adopt a microservices approach to your API design. Beyond this, all we can do is keep an eye on what companies like Logentries are doing when it comes to containerized API deployment, and try to learn as fast as we possibly can.

An API For The Interactive JumboTron Floor Display At The National Museum of Mathematics (MoMath) In New York

I just found one of the coolest API stories I've seen in a while over at CHANCE, the quarterly magazine designed for "anyone who has an interest in the analysis of data, informally highlighting sound statistical practice." CHANCE talked with Glen Whitney, the executive director of The National Museum of Mathematics (MoMath), about their new hands-on, API driven exhibit, through which the "museum has created a physical and virtual recreational math community to nurture this generation and the next in their mathematical pursuits."

As part of their plans to reach people outside New York City, and encourage them to join the conversation at the museum, they have installed a JumboTron on the floor, which Whitney describes:

"You can walk right onto it and it’s equipped with sensing technology, so it’ll know the location of everyone who’s standing on the floor. We have a variety of exploratory mathematical activities on that floor. We’ll have mazes that have special rules or maybe a lot of turnstiles that trigger changes as you walk through them. It shows the notion that math is about exploring the consequences of simple rules."

At the heart of the interactive mathematics exhibit:

"There will be an API (application programming interface)—a system by which groups can submit their own activity to be displayed on the Math Square floor. We will invite submissions from across the country. And we’ll have a curation process, of course. If one group’s exhibit is selected, we’ll give them the opportunity through live streaming video where the class can see another group in the museum interacting with their creation and get feedback about what these other students experienced as they explored whatever puzzle, problem, or illustration the originators created. We’re looking forward to that as a way of connecting people from around the country."

At a moment when I'm most concerned about the Internet of Things (IoT) API efforts I'm seeing emerge across the landscape, an API project like the MoMath API shows up to make me happy. :-) Can you imagine the possibilities here? Not just for interactive, API driven displays on the floor at the National Museum of Mathematics, but interactive mathematics anywhere you can install a visual display, or a network of API driven human interfaces.

I am very curious to see what mathematicians around the world do with the MoMath API project, and to better understand how we can use APIs to make math a much more fun, accessible, and interactive experience that can be woven into our daily lives.

P.S. I really, really hope this is good enough to make it into the Hack Education roundup! ;-)

Changes To LinkedIn Developer Program Are No Surprise

LinkedIn recently announced some changes to their developer program, which involve further tightening down the screws on who has access to the API, limiting public access to only a handful of very superficial APIs. If you want greater access to the business social network's API, you will need to be an officially approved partner.

As a result of LinkedIn's announcement, you will hear more discussion about the demise of public APIs, as this is a narrative many API providers would like to employ to support their own command and control positions around their clients, or their very own API driven resources. There is nothing wrong with having private APIs with supporting partner programs, but this has no bearing on the viability of publicly available APIs.

In reality, LinkedIn's API never really was open. Sure, it is a public API, but it has never been developer friendly, oftentimes taking a very adversarial stance with its community, as opposed to embracing, nurturing, and investing in its developer ecosystem. Honestly, this is ok. Not every company has the DNA, or the business model, to make public APIs work--this latest move by LinkedIn reflects their ability, not the potential of public APIs.

We can't expect all companies to be able to make public APIs work; it isn't easy. When it comes to making money from valuable content and data online, a closed ecosystem is seen as being better. Tighter control over your users' data exhaust allows you to decide who can do what, limiting access to just the partners who have business relationships with you. You just can't monetize user generated content to the extent LinkedIn would like, without taking away users' control of, and access to, this data.

Even with LinkedIn's stance, there are a number of lessons to be learned by studying their approach. As with Twitter and Facebook, there are plenty of positive moves to analyze, as well as numerous negative elements, that you can learn from when crafting the tone for your own API. As an API provider, do not dismiss what you can take away from LinkedIn's platform, and as a consumer, LinkedIn is a valuable lesson in what you should look for in an API platform.

Ultimately, the move by LinkedIn is no surprise to me, and the platform is purely a distribution channel for me, and has been for some time. Meaning I only syndicate content there, and you will never find me actually engaging very deeply on the platform, or building relationships there, because along with other platforms like Quora, I do not have any ownership over the exhaust I generate. As a professional this is unacceptable to me, as I have a valuable brand that I carefully maintain. As other professionals realize this, they too will mostly abandon the business social network, leaving it to be a spammy corner of the Internet where HR professionals prey upon the semi-professional, aspiring employee types.

Migrating My Own API Infrastructure Conversations To My Personal Blog And Keeping API Evangelist About Mainstream Stories

After seeing the conversation around my post In The Future There Will Be No Public vs. Private APIs, I'm reminded of my own mission. I write on API Evangelist first and foremost for my own education, and secondarily to help educate the normals about the importance of APIs. Not for page views. Not to educate the API echo chamber. Not to drive conversation over at DZone or Hacker News. Definitely not to insult anyone.

That story was me working through my own service composition, and looking at one possible future. The exchange you hear in that story, and all my stories, is the conversation between the voices in my head, and is never meant to insult anyone (think Michael Keaton in Birdman). All of this has reminded me that API Evangelist is not about cutting edge stories, like the unproven stuff I'm doing with my own API architecture, docker, and microservices. However, it is critical that I still flesh out my ideas, in my own way, so I'll move these stories to my personal blog, kinlane.com.

As I re-read that post, I'm faced with the link-baity title, which was not crafted with that intent, and with the error and brevity in one statement that was singled out and ultimately fueled the conversation. I'm always happy to see conversation stirred, but not in the way it was around David's post. It is ridiculous to suggest I would claim that privacy is gone from the conversation around APIs (really?), but ultimately I was pleased to see most people make the same argument that I was having in my own head.

I'm super thankful for APIEvangelist.com, and my readers. It makes it possible to bring in the little bit of money I do from 3Scale, Restlet, and WSO2, pay my rent, and fund my research and learning. I'm also thankful for moments like this that help me remember why it is that I do this, and help me stay true to my API Evangelist mission.

APIMATIC Code-Generation-as-a-Service Has Built-In Support For API Commons Manifest

Swagger is now Open API Definition Format (OADF) -- READ MORE

The API savvy folks over at Apimatic are at it again, pushing forward the conversation around the generation of software development kits using machine readable API formats, and this time the doorway to your SDK is the API Commons manifest.

I'm going to go ahead and use their own description, as it sums it up well, with no augmentation needed. Using the code generation API, you can generate SDKs for your API directly from your Github repository.

Step 1: Describe your API using some format. You may choose from the Swagger, RAML, API Blueprint, IODocs, and Google Discovery formats. Automatic code generation makes use of the information in your API description to generate method and class names. Please be as expressive as possible by not leaving out any optional fields, as applicable, e.g., not leaving out types and schemas for your parameters and fields.

Step 2: Define meta information about your API using the API Commons manifest format. You can generate your API Commons manifest using the API Commons Manifest generator. Be sure to enter all relevant information. Upload the generated manifest as a new file in the root directory of your Github repo by the name "api-commons-manifest.json". Be sure to have the correct name and location for this file.

Step 3: Open or create a markdown file (README.md is a good candidate). Add the following markdown syntax to render an image link:

[![apimatic][apimatic-{platform-name}-image]][apimatic-{platform-name}-url]

[apimatic-{platform-name}-url]: https://apimatic.io/api/github/{account-name}/{repo-name}/{branch-name}?platform={platform-name}

[apimatic-{platform-name}-image]: https://apimatic.io/img/github/{platform-name}.svg

Replace the {platform-name} token with one of the following values: windows, android, ios.
Replace the {account-name} token with your url-encoded Github account name.
Replace the {repo-name} token with your url-encoded Github repository name.
Replace the {branch-name} token with the url-encoded name of the Github branch where the API Commons manifest file is present.

To validate, open the following url after replacing the tokens. This url should open the raw manifest file: https://raw.githubusercontent.com/{account-name}/{repo-name}/{branch-name}/api-commons-manifest.json
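For example, with a hypothetical Github account "janedoe", repository "weather-api", and branch "master", targeting Android, the markdown becomes:

[![apimatic][apimatic-android-image]][apimatic-android-url]

[apimatic-android-url]: https://apimatic.io/api/github/janedoe/weather-api/master?platform=android

[apimatic-android-image]: https://apimatic.io/img/github/android.svg

...and the validation url would be https://raw.githubusercontent.com/janedoe/weather-api/master/api-commons-manifest.json.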

You can see an example here. Commit changes and navigate to your Markdown file in your browser. You will see apimatic widgets (image links), which you can click to generate SDKs for your API. To see an example, open this link to view the README.md file in raw text form.

The Apimatic team is owning the conversation when it comes to generating full-fledged SDKs for your APIs. I always hear folks talk about the limitations of auto-generating client side code, but the Apimatic team keeps pushing the conversation forward with their persistent approach.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

A Minimum Viable APIs.json File For Your APIs

I'm continuing my work to help people understand what APIs.json is, and the varying ways it can be put to use. My post the other day breaking down Fitbit's APIs.json file is a good example of where to get started, but I wanted to help further set the bar with a minimum viable APIs.json.

APIs.json starts with a basic set of descriptions of who you are, the API provider. The header of an APIs.json file gives you a handful of parameters for describing who you are:

  • name - the name of the individual or company managing the APIs.json file.
  • description - a description of your company and / or the API collection you are building.
  • image - an image, logo, or icon that represents you or your company.
  • tags - a handful of key words and phrases that describe your API collection.
  • created - the date on which the APIs.json file was first created.
  • modified - the date on which the APIs.json file was last modified.
  • url - the url where this APIs.json file lives, allowing your file to be portable.

Those seven parameters provide details on who you are, and what the API collection is all about. Remember, an API collection doesn't always have to live under a specific company domain; it could be a temporary or more permanent collection that is part of a specific project or application.

The next essential element of an APIs.json file is the APIs collection, providing you the ability to describe one or many APIs as part of the collection. Similar to the parameters provided for the header, each API entry is allowed a handful of parameters that describe the API:

  • name - the name of the API.
  • description - a description of the value an API delivers.
  • image - an image, logo, or icon that represents an API.
  • tags - a handful of key words and phrases that describe the API itself.
  • humanURL - the url any human should visit to learn more about an API.
  • baseURL - the base url any machine should follow to start using an API.

Each API should have at least this information. I could stop here with my minimum viable APIs.json definition, but I encourage you to take one more step, and put the properties collection to use for each of your APIs. Using the properties collection, you can provide any other URL you want for an API--I recommend starting with four basic properties:

  • X-documentation - where the documentation resides for the API.
  • X-signup - where a user can sign up to use an API.
  • X-pricing - where to find the pricing for using an API.
  • X-tos - where to find the legalese behind API operations.

Ultimately you can define any property you wish for an API, but I recommend starting with these essential building blocks that all API consumers will need. After that, each API has a contact collection, allowing you to provide some basic support for API operations:

  • FN - the name of the person to contact.
  • email - the email address to contact.
  • X-twitter - a twitter user for the API.

These contact properties follow the vCard format, and provide what API consumers will need to get support for an API. The same properties are available again for the overall APIs.json file, as a maintainers collection, which provides contact information for the overall APIs.json maintainer. This will often be a duplicate of the information for each API, but allows for ultimate flexibility in aggregating many disparate APIs into a single collection.

That is it. That is a minimum viable APIs.json definition. We now know who maintains the collection, and the essential details of API operations. This goes well beyond just the technical definition of an API, and provides the essential business and political elements of API operations that all API consumers will need to be informed of, and that developers will often overlook.
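To pull all of this together, here is a rough sketch of what a minimum viable APIs.json might look like, using only the parameters described above--every value is hypothetical, and the exact shape of the properties and contact collections should be checked against the APIs.json specification:

{
  "name": "Acme Example APIs",
  "description": "A hypothetical collection of APIs for example.com.",
  "image": "https://example.com/img/logo.png",
  "tags": ["example", "weather"],
  "created": "2015-02-01",
  "modified": "2015-02-27",
  "url": "https://example.com/apis.json",
  "apis": [
    {
      "name": "Example Weather API",
      "description": "Delivers current weather conditions by city.",
      "image": "https://example.com/img/weather.png",
      "tags": ["weather"],
      "humanURL": "https://example.com/weather",
      "baseURL": "https://api.example.com/weather",
      "properties": [
        {"type": "X-documentation", "url": "https://example.com/docs"},
        {"type": "X-signup", "url": "https://example.com/signup"},
        {"type": "X-pricing", "url": "https://example.com/pricing"},
        {"type": "X-tos", "url": "https://example.com/tos"}
      ],
      "contact": [
        {"FN": "Jane Doe", "email": "jane@example.com", "X-twitter": "janedoe"}
      ]
    }
  ],
  "maintainers": [
    {"FN": "Jane Doe", "email": "jane@example.com", "X-twitter": "janedoe"}
  ]
}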

With an APIs.json file, open source API search engines like APIs.io will be able to index your APIs, and any API directory like ProgrammableWeb can do the same. The definition of your API(s) is now machine readable and portable, allowing it to live on any website, or within any desktop, web, or mobile application.

When You Are Ready For Nuanced Discussion About Who Has Access To Your API I Am Here

David Berlind has a rebuttal post on ProgrammableWeb to my recent post In The Future There Will Be No Public vs. Private APIs, called Long Live The Private API. I'm a big fan of doing story responses using our blogs, versus the sometimes difficult conversations that occur on Twitter--so I am happy to craft a response as well. I'm less of a fan of playing into the page view games of fabricated kerfuffles and link-bait titles, and after re-reading it, my own blog title was admittedly a little link-baity. In reality, though, my story came from my workbench as I was reworking some of my service composition stack, and it reflects that process, not the conversation it seems to have created--but hey, I am all about embracing unintended consequences.

First, let me address my mistake. I said "If it has an http:// in front of the address, it is a public API--sorry", which is total bullshit. As John Sheehan reminded me after I posted, you can definitely use http:// on a totally private network; what I should have said is that things become public at the point of DNS, once you start making calls via http:// using a public DNS address.

Second, David really makes my case for me, so in reality we are in agreement; it is all about semantics--which really is the core of my argument. Private, like the term open, is thrown around a lot, usually by marketers who want to elicit some sort of emotional response from their intended audience, much like David is doing with his story. All I'm pointing out, in my own journey, is that I used to have two buckets: public, and private. Then I had three buckets: internal, public, and partner. Now I have multiple buckets that reflect products, projects, dev groups, and the organic organizational structure that has emerged around the resources I manage, and put to use.

Private just doesn't describe my organizational structure anymore. My organization spans the globe, overlaps with numerous other organizations and individuals, and my service composition needs to reflect this change. If my statement--"the concept of public and private doesn't exist. This is a reality that plays out in conversations between people who don't fully understand the world of API management--aka the tech blogosphere"--is an insult to you, maybe you can channel that anger into a blog post, and get some page views. ;-)

David is 100% right. It is your right to continue describing things as public vs. private, and I'm completely confident that many of you will continue doing so in the future, describing very public infrastructure as private, and describing very closed things as open. What I'm trying to do is move the conversation forward. When you are ready for a nuanced discussion about who has access to your API, API management, and service composition, I am here.

P.S. Look ma! I did it without a second page, to drive page views!

My Wish Has Been Granted: Swagger Driven API Visualizations From Ardoq

Swagger is now Open API Definition Format (OADF) -- READ MORE

I'm a big fan of putting my ideas for new tools, services, and other stuff out on the Internet, for public consumption. My mother taught me how to manifest things in my life, and this is my digital version of her teachings. By putting my ideas out there: a) I don't actually feel compelled to do them, b) someone else might think it is a good idea and build it, and c) when someone does build it and starts looking to publicize it, they will find you. Today, c) happened.

One idea that I put out there recently, and really wanted to manifest, was a visualization layer for APIs using Swagger. My wish has been granted: a startup called Ardoq has done just that, developing a visualization layer using Swagger. As they were taking their new product public, they came across my story, and pinged me this morning. I'm going to follow up with a link to the story, get a full tour of the product, talk with the team, and better understand what they are up to.

Ardoq is an important next step in the machine readable API definition conversation, adding another incentive for API providers to generate definitions for their APIs in API Blueprint, Swagger, and RAML. I see three main phases in the evolution of this conversation so far: 1) Wordnik introducing Swagger and Swagger UI, 2) Apiary.io moving the conversation further upstream to API design, and now 3) more meaningful visualizations built on top of APIs and microservices.

Machine readable API definitions have done many significant things for the API conversation, allowing us to deploy interactive documentation and mock interfaces, and ultimately giving us a common language that we can use to communicate around some very abstract concepts. I'm excited to see visualizations enter the discussion, which will allow us to build powerful visual tooling around APIs, like being able to map the scope of a microservice, or provide a map for a whole stack of microservices.

I think visualizations generated from machine readable API definitions are just the beginning of a whole new world of UI elements developed on top of APIs, and we will see more analysis, interoperability, integration, and discovery tools emerge as well. API Blueprint, RAML, and Swagger will continue to evolve as a central truth throughout the API lifecycle, providing a set of instructions for everything from design and deployment, to management and integration.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Defining Virtual API Stacks Using The Service Broker API Over At IBM Bluemix

Swagger is now Open API Definition Format (OADF) -- READ MORE

I've been talking about developing virtual API stacks for a while now, and as I continue to understand current shifts in cloud computing, I am doing my own reshuffling towards a more microservices and docker-centric way of life. When I say the phrase "virtual API stack", I'm talking about the ability to deploy the stack of APIs you need for a specific organization, project, app, or other configuration. In 2015, you should be able to quickly define exactly the stack of private and public API services you need to accomplish exactly what you need--nothing more.

As part of my research in this area, I'm tracking on similar patterns that I see occurring at platform providers--today I found an example of defining virtual API stacks using the service broker API over at IBM Bluemix. Using Bluemix, you can build virtual API stacks from existing services they have set up, including Twilio and Sendgrid, or register your own APIs using the service broker, via their cloud marketplace (you have to be a partner). The services deployed in this way need to implement a common API surface area dictated by the platform (I'd love to see a Swagger spec for this, IBM).
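Since Bluemix is built on Cloud Foundry, my assumption is that this common surface area follows the Cloud Foundry service broker conventions--roughly the following REST endpoints, which each service must expose (a sketch, not IBM's official spec):

GET /v2/catalog (list the services and plans you offer)
PUT /v2/service_instances/:instance_id (provision a new service instance)
PUT /v2/service_instances/:instance_id/service_bindings/:binding_id (bind an instance to an app)
DELETE /v2/service_instances/:instance_id (deprovision an instance)

If that assumption holds, describing this surface area with a Swagger definition would be a pretty trivial exercise.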

IBM is blending several areas I've been tracking on, starting with the defining of virtual stacks, but also the aggregation of 3rd party services, which opens up a wholesale layer of valuable API resources, and sets up a world where API brokers can emerge and prosper. I see that IBM has embraced machine readable API definitions like Swagger as part of their operations, and I'd love to see them adopt APIs.json for defining their virtual API stacks or collections. APIs.json would make the API collections that users define much more portable, shareable, indexable, and ultimately discoverable. If you need an example of APIs.json + Swagger working together, look through my API Stack inventory, where I map out the top APIs.

My vision of a future with API stacks and collections is a world where we can buy and sell these modular resources in IBM's, Amazon's, Google's, and anyone else's marketplace, but also run them on any infrastructure we choose. We would all possess our own stacks of resources, be able to choose from a wide array of public and private 3rd party resources, and deploy, remix, and re-use these resources in any way we choose, to drive websites, desktop solutions, system integrations, mobile apps, and devices.

It is good to see these patterns emerging over at IBM, via their Bluemix platform. This type of modular service definition, design, deployment, management, and reuse is something I'm talking with several other companies about as well--making for a very "containerized microservice" 2015, from what I can see.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

What Exactly Is API Commons?

As I travel around talking to folks about APIs, I spend as much time as I can educating folks about API Commons, and I'm constantly reminded how little people who have even heard of, and read about, API Commons really understand what it is. With this in mind, I will be regularly publishing examples of what API Commons is, to help onboard everyone to mine and Steve's (@njyx) vision for API Commons.

API Commons is a machine readable pointer to the license of your API. As I talk with folks, and watch videos like this one from APICon UK, I realize how many misconceptions there are about it. Many folks over-emphasize the Creative Commons license we've chosen, or the listing of APIs that have been added to our version of the commons by publishing an API Commons manifest that references a CC-BY or CC0 license.

If you feel your API should be covered under an open source license, or covered by your patent, or not covered by any sort of license at all, create an API Commons manifest that points to your API, references your license, and you've created your own commons. Now you can spider and aggregate any API providers who have used the same license, into your own definition of what the API Commons is.

API Commons is not an API directory. API Commons is not a push for copyright in APIs. API Commons is a machine readable format for taking a stance on the licensing (or not) of your API definitions and data models. Please join us today by letting us know your stance on the licensing of your API; we'd love to hear your voice, and better understand your position.
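To illustrate, a manifest is just a small JSON file along these lines--a loose sketch from memory, so generate the real thing with the API Commons Manifest generator rather than trusting my field names:

{
  "name": "Example Weather API",
  "description": "A hypothetical API definition, added to the commons.",
  "url": "https://example.com/api-commons-manifest.json",
  "apis": [
    {
      "name": "Example Weather API",
      "humanURL": "https://example.com/weather",
      "properties": [
        {"type": "Swagger", "url": "https://example.com/swagger.json"}
      ]
    }
  ],
  "licenses": ["https://creativecommons.org/licenses/by-sa/4.0/"]
}

The point is simply that the license pointer travels with the API definition, in a format any spider can read.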

What Do I Mean When I Say APIs Are Just The Next Step In The Evolution Of The Web?

Swagger is now Open API Definition Format (OADF) -- READ MORE

I remember the vision clearly from 2004, when I first changed the URL of my Delicious social bookmarking account to make it return my list of bookmarks as XML instead of HTML. It was a vision of the programmable web--where everything I explored on the Internet wasn't just consumable, but right below the surface of any website or application I was using there was also a machine readable version, allowing me to build whatever I desired.

People are often surprised when they realize I do not have anything to sell them, and that I evangelize that APIs are not some new product, but just the next step in the evolution of the web. This is easy to say, however it can be much harder to demonstrate to the "normals", leaving me always hunting for easy to understand API implementations I can use to help bring people closer to understanding.

Steve Ziegler (@stevezieglerva) introduced me to a great new example of this in the wild: an API I hadn't come across before, and I was pleasantly surprised to see it is a partnership between the Department of Commerce, Energy, Interior, the State Department, Transportation, the EPA, Health & Human Services, NASA, the National Science Foundation, the Smithsonian, USAID, and USDA.

According to their own description, the Global Change Information System is:

The US Global Change Research Program (USGCRP) has established the Global Change Information System (GCIS) to better coordinate and integrate the use of federal information products on changes in the global environment and the implications of those changes for society.

For me, the Global Change Information System is an example of how websites, linked data, and APIs should work in concert, even if it is something I understand very little about how to actually do myself. The GCIS platform organizes an amazing amount of information, all the people, organizations, and relationships involved, in a very elegant way. You can start seeing it in action by browsing the database, via the menu in the top left corner.

Immediately I noticed how structured everything is, then as I scrolled to the bottom I saw that everything is available in a machine readable format. What is even cooler is that it isn't just available as JSON, you get it in YAML, Turtle, RDF, and some formats I'm not familiar with. Then of course, you get a robust, yet simple, web API as well.
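The pattern underneath is simple content negotiation--the same resource, served in whatever representation you ask for. Hypothetically (I haven't verified these exact paths), pulling a resource in different formats looks something like:

curl https://data.globalchange.gov/report.json
curl -H "Accept: text/turtle" https://data.globalchange.gov/report

The human version and the machine readable versions all live at the same address, which is exactly what I mean by websites, linked data, and APIs working in concert.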

I'm impressed with the amount of detail available in the Global Change Information System, and the amount of thought put into the relationships between all the information, and the actors involved. It makes me optimistic about what can come out of government, and that something so forward thinking is being applied to an area as important as the environment.

I'm just getting going reviewing the Global Change Information System API, and will make more time to evaluate how it works under the hood, maybe generating a Swagger spec for the interface to help me better understand it. I'm also going to reach out to them to see if I can get more of the story behind it, and possibly what the roadmap looks like--making it likely you will see more stories about it in the near future.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

Recap Of APIs At Dept of Education, And The FAFSA API

My work on APIs for the Department of Education, and the FAFSA API, began while I was working in Washington DC as a Presidential Innovation Fellow. Shortly after leaving DC, I was informed that conversations around an API for the FAFSA had been put on the back-burner, and in response I developed a prototype FAFSA API to help jumpstart the conversation.

In December 2013, I went to one of the two data jams put on by the White House and the Dept. of Education, up in Palo Alto, CA at Stanford. Then, in January 2014, I heard there was talk at the secretary level about officially pursuing a FAFSA API. Yay! By February I got an email saying there was an opportunity for funding such an initiative, if there was a technical specification available. Three days later I finished a couple of options, which ultimately resulted in a draft technical proposal for a possible FAFSA API.

Since then, I've heard nothing about the outcome of these efforts. I provided some thoughts on APIs in general in June of 2014, in response to their RFI, but beyond that it has been radio silence at the Department of Education when it comes to a FAFSA API. Granted, I'm not actively pursuing this, but when it comes to the FAFSA API discussion, I own the SEO conversation, so if it came up, I'm sure someone would ping me.

I have no confidence that the Department of Education will pursue a FAFSA API. My motivations for contributing to the conversation stem from a desire to jumpstart investment from both the public and the private sector. While jumpstarting conversations in the federal government, I was hoping I could also jumpstart the development and deployment of a federated FAFSA API, which in turn would apply pressure on the federal government to participate. Doing this is not trivial, and it is a cause that needs a champion, or a full time evangelist--I'm not your man.

The FAFSA API is an excellent example of the technology, business, and politics of APIs, something that is more art than science. If it is going to become a thing, I think it has to happen inversely to the way the IRS ecosystem occurred: from the outside in. I just don't have confidence that the Department of Education can own this one. I think multiple leaders from the private sector have to make it happen, get private sector buy-in, and then convince the Department of Ed to play nicely in a federated FAFSA API ecosystem.

Which is no small task.

P.S. This post is meant to help several groups understand where I am at with the FAFSA API. I just posted a round of updates directly to my FAFSA API research, as well as my overall Dept. of Education research.

I See An Opportunity In Paying Attention To Other Types Of APIs

I've been pretty focused on web APIs in my API Evangelist world, steering clear of hardware, networking, and desktop software APIs, as well as the American Petroleum Institute. While you will never catch me paying attention to oil, I am slowly changing my tune on other types of legacy APIs.

As I read through the first 700 of the 16K API related patents I've harvested from USPTO XML files, I initially started dismissing the hardware related patents, and then some of the more network related ones as well. Then I started evaluating the impact these patents could have on the Internet of Things (IoT), and I began to shift my stance.

I may not fully profile the approaches of some of these API providers, but I will at least bookmark and consider each approach, and how it is being applied, while putting on my web API architect hat. As I read a press release today about Infolytica corporation releasing the next generation of their MotorSolve software, complete with mention of an API, I can't help but think of the implications if Infolytica embraced a web API strategy.

Imagine the potential if MotorSolve was broken up, migrated into the cloud, and containerized. Then add in the necessary business building blocks like docs, code, and pricing, and the requisite political building blocks like rate limits, terms of service, etc. This is the lens through which I am looking at the older patents I am reading: they may not 100% reflect modern web APIs, but because our virtual and physical worlds are increasingly merging with the growth of the Internet of Things (IoT) and Software Defined Networking (SDN), they might have a significant impact if looked at just a little differently.

Overall, I see a pretty interesting opportunity in trying to consider all types of APIs, no matter their origins, reconsidering them in light of shifts in compute like the cloud, mobile, IoT, and SDN, and seeing what we can learn. You'll see more news reports about these "low level" APIs on API.Report, and some of what I learn evolving my analysis here on API Evangelist.

APIs Used To Close, Rather Than Open The Internet

I get a lot of folks who come to my blog, see the title, read one or two posts, and assume that I'm a blind lover of API technology, and that I see APIs as a solution to everything. While some of this is true, I do love APIs, and think they are a great solution (in some cases), at the same time I'm also an outspoken critic of APIs, and work hard to be a voice of reason when I see people doing stupid shit with them.

With this theme in mind, I want to once again remind everyone that APIs are neither good, nor bad, nor neutral by themselves; they are merely one of the tools companies can wield, and completely reflect the motivations of their masters. One example of this in action, where I believe an API is being used for some pretty bad reasons, is the AT&T sponsored data API.

I've tried to support AT&T as much as I can, because I really want to help the enterprise make sense of web APIs, and teach them to wield them in positive ways, but I have to say the sponsored data API is not something I can get behind. Upon closer examination, this API is working to close down, control, and meter the Internet, rather than opening up the Internet and making it more accessible and usable by AT&T customers.

Allowing mobile users to get their music, video, and other content delivered in a way that doesn't impact their phone bill seems like a good idea, and allowing companies to step in and sponsor the delivery of data and content for users may smell like a good opportunity when you own the pipes, but this is leading us down a dark road. I'm sorry, there are much more interesting ways to optimize the delivery of content, and make money off the Internet pipes you have--AT&T, you lack imagination and creativity.

I'm sure you are well aware, but what you are doing with sponsored data delivery to mobile phones is another push for allowing the prioritization of Internet traffic, and for being able to pay for a better Internet experience for those who can afford it, rather than making the web accessible to everyone. Additionally, if you consider that many providers will actively work to slow the delivery of Internet content behind the scenes, just so they can generate revenue using approaches like sponsored data, things start to get really ugly. I'm not ok with you doing this via APIs, and teaching your customers to see APIs as something attached to their bill, and a way of paying to speed up the Internet.

This use of an API to close down the Internet, and such bad examples of API monetization, really bum me out. There is so much opportunity for monetization if everyone has open, free access to the Internet. We can get more creative than this when it comes to monetizing the pipes, and approaches like this from AT&T are just going to continue fucking up the Internet, similar to what we are seeing from Verizon, and the other leading telcos who don't get the Internet, let alone APIs.

PS: I wrote this 8 months ago, and just now found it in my Evernote. Figured I'd publish it in the shadow of the FCC announcement on net neutrality.

An Increase In Number Of Press Releases Involving API Integration

I spend a portion of my time each day reviewing press release sites, in addition to the 1000+ blogs I keep an eye on, for syndication to API.Report. During the course of my work this year, I'm noticing an uptick in the number of press releases about some new app, feature, or partnership that has an API at its core.

Telling the story of prominent integrations is something I am a big advocate for, and I think the growth in the number of official press releases about API integrations shows that the mainstream SMB and enterprise markets are putting APIs to work more, and looking to showcase that work. For me, this demonstrates that APIs are playing a more central role in not just the deployment of apps, but the course of regular business for an increasing number of companies in 2015.

I'm guessing that many more companies will showcase the API integrations they achieve long before they talk about the APIs they possess, let alone make them publicly available. Ideally, everyone would be both a public API provider and consumer, but I'm afraid that many companies just don't have the culture for such a thing--keeping their APIs close to their chest, while still beating the API integration drum. Because it is what you do in 2015.

The Logo Page Over At The MYOB API Is Very Helpful

I spend a significant portion of my day looking for company logos, for use in the API stories I tell. So when I come across a proper implementation of a logo page, one of the business building blocks I recommend employing, I have to showcase it.

I got an email from my friend and fellow API evangelist Keran McKenzie (@keranm) about updating the listing for the MYOB API on my API Stack. He sent me a better description, and a couple of new links that I didn’t have, including a link to their logo page.

One thing I think is particularly interesting about the MYOB logo approach is that they offer an assortment of designs:

  • Corporate Logo - The MYOB name and logo represent the MYOB business and is protected by copyright and trademark laws.
  • Developer Logos - The MYOB Premium Developer Logo is reserved for the exclusive use of MYOB Premium Developer Partners. 
  • Add-On Logos - The MYOB Certified Add-On logo is exclusively available to Add-Ons that have completed the MYOB Add-On Certification process.
  • Product Logos - A full collection of product specific logos.

First, a logo page is essential in helping the media, and other folks like me who will be talking about your services, be more successful. Second, their assortment of logos shows MYOB has a well thought out partner framework, one that encourages developers to work hard, obtain premium status, and build certified add-ons.

A logo page with a rich set of well-planned, embeddable logos and other images is an important API management building block, and it tends to be a signal of an API that has its operations in order.

Static HTML Documentation Generated From Machine Readable API Blueprint Definitions Using Aglio

Swagger is now Open API Definition Format (OADF) -- READ MORE

I'm on the hunt for new ways to deploy attractive, interactive API documentation from machine readable API definition formats like Swagger and API Blueprint. I am advocating for the development of new approaches to deploying UI documentation, and part of this is showcasing what I find along the way.

In the spotlight today is Aglio, which they describe as:

An API Blueprint renderer that supports multiple themes and outputs static HTML that can be served by any web host. API Blueprint is a Markdown-based document format that lets you write API descriptions and documentation in a simple and straightforward way.

Aglio is significant because it is driven from the machine readable API Blueprint format, and produces static HTML that can be served up anywhere, specifically on Github or Amazon S3, deployed using Jekyll. Additionally, it just makes your documentation simple, easy to follow, and damn sexy.
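If you want to try it yourself, the workflow is about as simple as it gets--assuming a standard npm install, and the flags I remember from the project's README:

npm install -g aglio
aglio -i api-blueprint.md -o index.html

Drop the resulting index.html onto Github Pages or Amazon S3, and your documentation is live.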

I'd love to see a Swagger driven version of Aglio, allowing you to deploy this attractive API documentation from either Swagger or API Blueprint. If more API documentation looked like Aglio's output, I would be a seriously happy camper.

Updated November 27, 2015: Links were updated as part of switch from Swagger to OADF (READ MORE)

In The Future There Will Be No Public vs. Private APIs

As I continue to evolve my service composition definition, using my 3Scale API infrastructure, across my microservices stack, the thought of public vs. private doesn't even enter the equation. I am doing my APIs using the Internet's pipes, so they are public by default--then, using my service composition, I define the layer that actually regulates what is openly accessible to the public, which resources have limited access, and specifically how much of any resource any single person can access.

When I'm working through my API Stack, the concept of public and private doesn't exist. This is a reality that plays out in conversations between people who don't fully understand the world of API management--aka the tech blogosphere. If it has an http:// in front of the address, it is a public API--sorry. You need to secure it like it is public, and you need to approach service composition in a sensible way that deals with identity and access management across all of your public infrastructure.

As we move towards a world where the Internet isn't just in our desktop, laptop, tablet, and mobile environments, but everywhere in our homes, businesses, cars, and public spaces, there will be no separation between public and private API resources. Everything will be public; it will just be a matter of properly defining who has access, exactly how much they can consume, and in what ways they can actually engage with the valuable API driven resources emerging all around us.
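As a sketch of what I mean, instead of a public / private flag, my service composition becomes a set of access tiers--the field names below are mine for illustration, not 3Scale's, and the values are hypothetical:

{
  "service": "example-image-api",
  "plans": [
    {"name": "public", "rate_limit": "100 calls/day", "methods": ["GET"]},
    {"name": "partner", "rate_limit": "10000 calls/day", "methods": ["GET", "POST"]},
    {"name": "internal", "rate_limit": "unlimited", "methods": ["GET", "POST", "PUT", "DELETE"]}
  ]
}

Everything is reachable over the same public pipes; which plan a consumer lands in is the only question that matters.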

Update: This conversation seems to have generated some buzz, and here are some of the responses:

APIs Are Establishing New And Useful Processes Faster Than Patents Can Keep Pace With

Swagger is now Open API Definition Format (OADF) -- READ MORE

I'm spending a lot of time reading API related patents lately. I downloaded all of the patent applications between 2005 and 2015, and filtered for the patents that mention "application programming interface" in the title, abstract, or description, resulting in a database of 16,485 patents. Currently I'm reading the 700+ that mention API in the title or abstract, and once done I'll take a look at the rest.

First, let me state that patents are not a concept I subscribe to. Personally, I do not believe in defining and locking up ideas, but I do not live in the world I'd like, and after beginning my API patent research, I see that patents are a concept that, like API copyright, will be increasingly wielded across the many industries where APIs are being put to use. For this reason, I engage in my research, and overall patent awareness journey, not because I'm a believer in the patent system, or because I'm looking to submit any patents--I want to better understand how patents are being used to define API processes, and see the evolving role APIs have played in the larger patent story over the last 20 years.

First, What Exactly Is An API Patent?
There are many incarnations of the API, and making sense of whether a patent directly describes a specific API, or is more API adjacent, can be very difficult to do. I'm operating under the assumption that if "application programming interface" is present in the title or abstract, it is likely that the patent describes an API fueled process, and my initial review of the 700+ patents supports this. This realization moved the conversation in my head from "will patents affect APIs" to "APIs are being patented", and we need to better understand what is going on, regardless of the precise definition of what an API is. Whether an API is hardware related, system level, or specifically a web API as we know it today becomes somewhat blurred on the current Internet landscape.

Precise vs. Broad Stroke Patents
One thing that really stands out as I read these API patents is the varying scope of the API definitions. Some are very precise, describing a specific API call that accomplishes a very specific, granular task. Others are very broad, describing the use of APIs for an industry-wide purpose. This distance represents the rapid expansion of the API space, and how widely the vision varies from technologist to technologist, depending on how aware they are of change in their specific domain. Some companies seem to just cast a wide net, claiming a whole region of cyberspace as operating under their patented idea, with little or no regard for the many iterations that have evolved within this very space in just the time since they submitted the patent.

Internet of Things Blurring The Landscape
As I read through patents, it is easy to dismiss some of the applications because they are very hardware focused, but when you consider the bigger Internet of Things (IoT) picture, where everyday, common devices are being connected to the Internet using APIs, things get very blurry. A hardware API in the 1990s was definitely something pretty novel and useful, but in 2015 these concepts are rapidly expanding and becoming commonplace, with iterations and variations of process occurring at a staggering rate. I'm considering pulling patents all the way back to 1995, to evaluate telecommunication era APIs for re-use in the IoT era, in hopes of bringing more focus to this blurry portion of the API patent landscape (or quite possibly confusing myself further). What was once a pretty specific hardware API in 1998 could have thousands of uses in an IoT crazy 2015.

The Pace of Change — New & Useful Processes At A Breakneck Speed
After reading the handful of patents that I have, as an API architect my brain immediately sees the variances in these patent ideas that, when actually applied as an API, would become the programmatic points where I could add variety, and iterate on the process being defined. If it's a new and useful way of analyzing network traffic using an API, I can immediately define hundreds of network scenarios with software defined networking, and then exponentially augment them with different user and device scenarios on either end of this patented process. At what point do all my variations fall prey to this patent, requiring me to cut a deal with the patent owners, and at what point do my variations begin breaking the established patent idea, as the world the definition applies to continues its rapid expansion? The automobile patent was new and novel until Henry Ford came along and the automobile became ubiquitous; Amazon Web Services' implementation of compute and storage was new and novel when it came out--at what point does it just become the way things are done? The Internet is escalating this process with each passing moment--something the patent process doesn't acknowledge.

How Does US Patent Office Maintain The Army Of Domain Experts To Determine What Is New & Useful?
As I read a patent describing a specific approach to API analytics, something I'd consider myself a domain expert in, I think: this isn't that novel or new, and it is something that has been going on for a while--and then I find several more variations on the same subject, with varying scopes, muting each patent along the way. As I read through some of the networking based API patents, where I am probably not a domain expert, I think WOW, that is a pretty new and novel idea, about almost every one, with very little awareness of the scope of each idea. I have to ask myself: how does the US patent office maintain the army of domain experts needed to keep pace with what is truly new and useful in this digital age? There is no way this can occur under a single government or institutional entity anymore; it has to be done openly, on the Internet, in real-time.

APIs Are All About Re-Use, Remix, And Defining New & Useful Processes
As an API architect, I can confidently say that APIs are all about establishing definitions of new and useful processes, definitions that, done properly, allow you to re-use, remix, and redefine a sometimes dizzying number of variants on that new and useful process. Each API or microservice is a single, potentially well defined process, and when you start daisy chaining, stacking, and remixing APIs, you can rapidly rethink legacy processes with an agility that is seldom seen in the physical world, or in earlier evolutions of software development. Ultimately I feel that the patent process is the antithesis of an API, but at the very least, I would make the argument that it is a very antiquated way to look at how we operate in a digital world.

1000LB Gorillas Are Filing Patents — Not The Doers, Defining Next Generation Processes
Another thing that stood out to me, when evaluating the 16K API patents I've targeted, is exactly who the characters are that apply for API patents. The two leading the charge are Microsoft and IBM, with a who's who of the enterprise dominating the list after that. The companies and individuals filing patents are not the doers at the forefront of each business sector, who are actually defining the next generation of new and useful processes using APIs that increasingly span both our virtual and physical worlds. While I don't have the research done to back this claim, at first glance the areas in which API patents are being defined do not reflect the API space whose expansion I've been mapping. Which tells me there are two distinct layers to this API expansion: those who are defining and moving forward almost every industry being touched by APIs, and those who are filing definitions with the patent office to define, make bets on, and ultimately lay claim to what might be.

Doing Patents As A Defensive Measure — Its How You Play The Game!
I am sure there are numerous motivations for filing a patent on an API, and the popular claim is to say you are doing it as a defensive measure. Clearly software patents are a game of hardball, where companies can make or break your cool new startup by burying you in legal woes--I've been there myself. Companies like Tesla and Google are making open patent pledges, stating that they only do patents as a defensive maneuver. I get this. I can't argue with companies defending what they've built against the more aggressive corporate personalities in the space. I guess this is why I am building and curating my database of API related patents, and the companies behind them, so that I can eventually connect it to actual litigation by these companies--only time will tell which open patent pledges actually hold up as true.

Patents Are Rich Person’s Game — We Don’t All Have The Resources To Play!
To play the patent game, you need money. You have to be able to afford to file your patents, and you need to be able to afford to defend them in court. This is not a doers game; it is a rich person's game. My everyday world would be an extremely fertile environment for defining patents, as I spend my days playing with thousands of API resources, and deeply thinking about how these APIs could be used in new and novel ways in our personal and business lives. However, without the resources ($$) to file the patents, and to defend these virtual, API driven spaces in a court of law, patents are a game I will never participate in. I can guarantee there are thousands of patentable ideas laying around my workbench, but because patents aren't a concept I subscribe to, and I don't have the resources to play the game, you will never see a patent with my name on it.

Still Not Convinced You Can Define And Lock Up Ideas, Let Alone API Driven Ones
Even after 60+ hours of API patent research, I'm still not convinced the patent process is something that is applicable in the API space. I'm barely sold on the concept when it involves the assembly of gears, conveyor belts, and physical elements, let alone when the new and useful process is algorithmic, allowing me to orchestrate an infinite number of processes using APIs. I'm preparing a keynote talk for Sydney, Australia next week, about the opportunities for orchestration with microservices and docker containers, using machine readable definitions like APIs.json and Swagger. As part of that work, I defined 18 specific processes that I depend on to operate as a business, and as soon as those were defined and deployed, I quickly defined 8 new iterations on top of the existing processes. I'm still defining my overall approach to API orchestration with virtualized containers, but once ready, I will be able to define a new node on my network in seconds, and remix it with other resources instantly, reworking existing processes and defining entirely new ones each day--why would I want to lock these ideas up? I need them to flow to be successful. Execution trumps definition.

If We Are Going To Apply Patents To APIs, We Need A Process That Will Keep Pace
OK, say patents are even something we should consider applying in the API space. At the very least we need a process that will keep pace with the world of APIs, and acknowledge not just the accelerated pace of change and iteration, but also the exponential variations that can occur, and the potential change of scope, in near real-time. In 2015, I do not have to manually apply copyright to each piece of my writing; thanks to Creative Commons I can apply a license across all of my digital exhaust—why are processes and workflows any different?

I strongly believe the concept of patents is counter to the essence of what an API is, but if we insist on defining our new and useful processes in this way, we need a real-time way of capturing the intellectual exhaust from the processes we execute, mapping out the new and useful processes that exist in a way that requires them to actually be useful and implemented, while also having a way to share and vet these definitions with the public audience, whether it be industry, government, or both. I’m just getting started with my API patent research, and as with my other research in the API space, I’m sure my awareness will continue to expand rapidly—something I doubt I will see from the patent process itself.

Ultimately I’m left thinking about what my friend, and fellow API evangelist, Mehdi Medjaoui (@medjawii) said in his very forward-thinking post: APIs are the new software patent.

A Machine Readable Version Of The President’s Fiscal Year 2016 Budget On Github

The release of the president's fiscal year 2016 budget in a machine readable format on Github was one of the most important things to come out of Washington D.C. in a while when it comes to open data and APIs. I was optimistic when the president mandated that all federal agencies go machine readable by default, but the release of the annual budget in this way is an important sign that the White House is following its own open data rhetoric, and something every agency should emulate.

There is still a lot of work to be done to make sense of the federal budget, but having it published in a machine readable format on Github saves a lot of time and energy in this process. As soon as I landed on the Github repository, clicked into the data folder, and saw the three CSV files, I got to work converting them to JSON format. Having the budget available in CSV is a huge step beyond the historic PDFs we’ve had to process in the past to get at the budget numbers, but having it in JSON by default would be even better.
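
If you want to do the same conversion yourself, here is a minimal sketch in PHP, assuming column names sit in the first row of each file; outlays.csv is just a stand-in name for whichever of the three files you grab:

    <?php
    // Convert a budget CSV file to JSON, using the first row as column headers.
    // The file name outlays.csv is an assumption -- swap in whichever of the
    // repo's CSV files you are working with.

    $rows = [];
    if (($handle = fopen('outlays.csv', 'r')) !== false) {
        $headers = fgetcsv($handle); // first row holds the column names
        while (($line = fgetcsv($handle)) !== false) {
            if (count($line) !== count($headers)) {
                continue; // skip blank or malformed rows
            }
            $rows[] = array_combine($headers, $line);
        }
        fclose($handle);
    }

    file_put_contents('outlays.json', json_encode($rows, JSON_PRETTY_PRINT));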

What now? Well, I would like to make more sense of the budget, and to be able to slice and dice it in different ways, I’m going to need an API. Using a Swagger definition, I generated a simple server framework using Slim & PHP, with an endpoint for each file: budauth, outlays, and receipts. Now I just need to add some searching, filtering, paging, and other essential functionality, and it will be ready for public consumption--then I can get to work slicing and dicing this budget, and previous years’ budgets, in different ways.
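
The scaffolding amounts to something like this minimal sketch, using the Slim 2 routing style of the day; the data/ file layout is my assumption for the example, not anything dictated by the Swagger definition:

    <?php
    require 'vendor/autoload.php';

    $app = new \Slim\Slim();

    // One simple GET endpoint per budget file: budauth, outlays, and receipts.
    foreach (array('budauth', 'outlays', 'receipts') as $file) {
        $app->get("/$file", function () use ($app, $file) {
            $app->response->headers->set('Content-Type', 'application/json');
            echo file_get_contents("data/$file.json");
        });
    }

    $app->run();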

I already have my eye on a couple of D3.js visualizations to help me make sense of the budget. First I want to be able to show the scope of the budget across different areas of government, to help make the argument against bloat in areas like the military. Second, I want to provide some sort of interactive tool that will help me express what my priorities are when it comes to the federal budget--something I've done in the past.

It makes me very happy to see the federal government budget expressed in a machine readable way on Github. Every city, county, state, and federal government agency should be publishing their budgets in this way. PDF is no longer acceptable; in 2015, the minimum bar for a government budget is a CSV on Github—let’s all get to work!

We Need Better API Documentation And UI Deployment Options

I was having a Twitter conversation with John Sheehan (@johnsheehan) this weekend, about the easiest way to generate interactive API documentation, without getting all tangled up in the weeds of Swagger UI. I love me some Swagger UI, something I think has transformed how we engage with APIs, but the JavaScript for it can be inaccessible, and difficult to customize--to say the least.

There are other UI solutions for API documentation, projects like Slate from TripIt, Readme.io, and some cool UI stuff over at OpenFDA, but really I haven’t seen much evolution beyond Swagger UI. Sure, Apiary.io has a great UI, but it isn’t the portable, customizable vision I have in my head (they are working on this, BTW). I envision a whole gallery of simple UI templates that you can choose from, driven by machine readable Swagger or API Blueprint definitions.

Looks like the gov hackers over at 18F feel the same way, and are working on something like this—"a suite of tools and templates that faciliate the generation of static, human-readable documentation to replace SwaggerUI”. Shawn Allen has created two repos:

  • swagger-template, just some HTML files that should theoretically be useful for generating static HTML documentation for a Swagger-compliant API.
  • swagger-enhance, a little Node utility for grabbing a Swagger API's JSON and "enhancing" it with JSON data from each of its own endpoints (confusingly, "apis", in Swagger parlance).
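
To give a sense of what this kind of static generation could look like, here is a minimal sketch in PHP that walks a Swagger 2.0 definition and writes out a bare-bones HTML page, no JavaScript required. The swagger.json and docs/index.html paths are just assumptions for the example:

    <?php
    // Read a Swagger 2.0 definition and emit a static HTML page listing
    // each path, verb, and operation summary -- no JavaScript required.
    // The file paths here are assumptions for the sake of the example.

    $swagger = json_decode(file_get_contents('swagger.json'), true);
    $title   = htmlspecialchars($swagger['info']['title']);

    $html = "<html><head><title>$title</title></head><body><h1>$title</h1>";

    foreach ($swagger['paths'] as $path => $methods) {
        foreach ($methods as $verb => $operation) {
            if (!in_array($verb, array('get', 'post', 'put', 'delete', 'patch'))) {
                continue; // skip non-operation keys like "parameters"
            }
            $summary = isset($operation['summary']) ? $operation['summary'] : '';
            $html .= '<h2>' . strtoupper($verb) . ' ' . htmlspecialchars($path) . '</h2>';
            $html .= '<p>' . htmlspecialchars($summary) . '</p>';
        }
    }

    $html .= '</body></html>';
    file_put_contents('docs/index.html', $html);

Run something like this on every commit, and you get static, human-readable documentation that can live on Github Pages right next to the definition itself.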

It makes me happy to see some brainstorming that pushes the conversation forward. I’m able to deploy Swagger UI quickly to support my APIs using Github Pages, but when it comes to extending and transforming that UI, I hit a wall. I've created some custom UI solutions, driven by Swagger, to help me manage my own infrastructure, but nothing that contributes to the larger conversation.

I’d love to see the API design, deployment, and integration tools I depend on, like Restlet Studio and Postman, provide more support for easy deployment of UI elements, driven by machine readable API formats like Swagger and API Blueprint. Making API documentation more interactive with Swagger UI was an important step forward, but we are ready for the next step in the evolution of API UI generation.

API Management Infrastructure And Service Composition Is Key To Orchestration With Microservices In A Containerized World

As I work to redefine my world using microservices, I have this sudden realization of how important my API management infrastructure is to all of this. Each one of my microservices is a little API that does one thing, and does it well, relying on my API management infrastructure to know who should be accessing it, and exactly how much of the resource they should have access to.

My note API shouldn’t have to know anything about my users; it is just trained to ask my API management infrastructure whether each user has the proper credentials to access the resource, and what the service composition will allow them to do with it (aka read, write, how much, etc.). My note API does what it does best, store notes, and relies on my API management layer to do what it does best--manage access to the microservice.

This approach to API management has allowed me to deploy any number of microservices, using my API management infrastructure to compose my various service packages—this is called service composition. I employ 3Scale infrastructure for all of my API / microservice management, which I use to define different service tiers like retail, wholesale, internal, and other service specific groupings. When users sign up for API access, I add them to one of the service tiers, and my API service composition layer handles the rest.
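
In code, the pattern looks something like the following sketch; the tier definitions and the lookupTier() helper are hypothetical stand-ins for what the API management layer resolves on each request, and the point is that the microservice itself never holds any of this logic:

    <?php
    // A sketch of service composition at the management layer. The $tiers
    // array and lookupTier() are hypothetical stand-ins for what an API
    // management service resolves on each request.

    $tiers = array(
        'retail'    => array('write' => false, 'rate_limit' => 1000),
        'wholesale' => array('write' => true,  'rate_limit' => 100000),
        'internal'  => array('write' => true,  'rate_limit' => PHP_INT_MAX),
    );

    function lookupTier($apiKey) {
        // In practice this is a call out to the management infrastructure,
        // which maps the key to the tier the consumer signed up for.
        return 'retail'; // hypothetical placeholder
    }

    $apiKey = isset($_GET['api_key']) ? $_GET['api_key'] : '';
    $tier   = $tiers[lookupTier($apiKey)];

    // Enforce the composition before the request ever reaches the service.
    if ($_SERVER['REQUEST_METHOD'] !== 'GET' && !$tier['write']) {
        http_response_code(403); // this tier is read-only
        exit;
    }
    // ...otherwise hand the request off to the note microservice itself.

Swap the placeholder lookup for a real call to your management provider, and every microservice behind it inherits the same service composition for free.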

Modern API management service composition is the magic hand-waving in my microservice orchestration, and without it, it would be much more work for me to compose with microservices in this containerized API world that is unfolding.

Disclosure: 3Scale is an API Evangelist partner.

API Evangelist Logomaker

In 2010, when I started API Evangelist, I sat down to create a logo, and after six hours of frustration in Photoshop, I eventually just typed out my logo as a basic JSON representation, kind of as a joke. I planned on eventually changing it, but it stuck.

With the latest batch of t-shirts I've ditched the "logo" portion, but will keep this legacy version floating around here and there for nostalgic purposes. Along the way, I've added what I consider the functional version of the logo, which I use for each of my research sites.

I'm starting so many new research areas lately that I decided to create a logomaker API, so I can make new logos, and anyone else can make their own as well. There are three API endpoints for the API Evangelist logomaker API:

The API is completely open, but I will be cleaning out the images each day, so if you truly want to keep one, save it locally. Have fun playing. I will actually be using it to generate headers for my research sites. If you want to use any of my logos, make sure you follow my branding guidelines. ;-)

Why Would You Ever Give Students API Access To The Student Information System (SIS), And Let Them Build Un-Sanctioned Apps That We Will End Up Having To Support?

I went up to California State University Channel Islands the other day to talk APIs with their tech team, and I was happy to find at least one strong API skeptic on the team. API skeptics give me material for stories, so I thoroughly enjoy coming across them, and telling these stories is how I keep polishing my argument for the next API skeptic I encounter in campus IT, at the higher educational institutions that I visit.

During the discussion I was posed several interesting questions, and one of them was: why would you ever give students API access to the Student Information System (SIS), and let them build un-sanctioned apps that we will end up having to support?

Family Educational Rights and Privacy Act (FERPA)
FERPA gives students the right to review, control disclosure of, and request amendment of their education record. Increasingly this goes beyond just a web interface, PDF, or printed copies. President Barack Obama mandated that all federal agencies begin providing information in machine readable formats, and many cities and states are putting it into law as well. A student should always have access to their data, and they should be able to choose to do this via campus applications, or be able to obtain a portable copy of their record for storage in a location of their choosing, or for possible use within a 3rd party system of their choice—it's their data. Period.

Un-Sanctioned App Concern Is Just A Red Herring
Modern API management infrastructure like 3Scale, and WSO2, provides an unprecedented level of control over managing API access: secure on-boarding of new developers, the establishment of service composition definitions, and rich real-time analytics on how APIs are used, and by whom—all while seamlessly integrating with existing identity and access management solutions. The university gets to choose who has access to which services, and can revoke access when it is abused, while also better understanding how resources are really being accessed and put to use. Ideally this applies campus-wide, as well as with external 3rd parties—modern approaches to API-centric operations include managing internal, partner, and public resources in this way.

A More Balanced Governance Across Campus Resources
Modern API management was born out of traditional IT governance, but is more focused on giving access and control to the end-users who are the actual owners of the valuable data, content, and other digital resources being made available. Legacy campus IT models provide a governance model that involves IT, administrative, and faculty stakeholders, but rarely includes the students. APIs give students secure access to their data, and standards like OAuth open up the ability for them to have a vote in who has access to it, with OAuth scopes being defined by existing institutional governance efforts. When APIs enter the conversation, governance expands to be more self-service, real-time, and within the control of students, as well as administrators, faculty, and campus IT.

Possibility Of Good Things Happening Closer To The Student
In the current educational environment, where students are often more tech savvy than faculty and administrators, why would we want to eliminate serendipity, and the possibility that new things might happen? Students face problems every day that campus administrators may never think of, because administrators see technology through a very different lens. The days when IT knows best, regarding which devices, browsers, apps, and websites are optimal for getting things done, are in the past. Shadow IT demonstrates this, with students, and even faculty, using un-sanctioned solutions to get their work done. Campus IT should be empowering students, encouraging the more digitally literate individuals who will soon be entering the workforce, not suffocating them.

Easy For Campus IT To Miss The Big Picture
I am a recovering IT administrator, so I understand the challenges a skeptical campus IT administrator faces, but restricting access to campus resources ultimately just makes your job harder, making you the bottleneck that everyone so commonly complains about when it comes to IT. APIs don’t create more work for you; they make you much more agile and nimble in how you integrate systems, build new web and mobile applications, and provide 3rd party vendors access to campus resources—as well as potentially opening up self-service access for students.

With an API-centric approach you will know exactly who is accessing resources, and how they are using them--in real-time. I’m betting that you don’t have this visibility across all of your IT resources right now. When I put on my IT director hat, I prefer the API model, because then all resources are self-service, available to only those who SHOULD have access, all without them having to talk to me. I’m left alone to do what I do best, and can also monitor new signups, real-time usage, and manage support tickets in accordance with wider IT support strategies.

I understand you are a skeptic about APIs being a thing students should have access to, and in reality most students will not care, but there will be a long tail of student users who will do things you never imagined, and potentially change how you look at scheduling, document management, or other staples of campus operations—something that will never happen if you don’t make students a priority when it comes to your digital resource management.

Disclosure: 3Scale and WSO2 are both API Evangelist partners.