{"API Evangelist"}

Breaking Down The Layers of API Security And Considering Link Integrity

One of the reasons I set up individual research projects is to provide me with a structure for better defining each aspect of the API world, something I am working hard to jump-start within my API security research. You will notice the project does not have any building blocks defined yet--when you compare it with one of my oldest research areas, you start to see what I mean.

The blog posts, and other links I curate as part of my API security research, will help me find companies and tools that are providing value to the space. As I break down each company, and what they offer, I often have to read between the lines, in trying to understand how an API, service, or tool can be used by API providers, as well as potentially API consumers. I am looking for APIs that offer security, but also APIs that offer security to APIs--make sense?

As part of this research, I am playing with Metacert, which bills itself as a security API for mobile application developers, helping them block malicious ads, phishing links, and unwanted pornography inside apps, but I think it is so much more. I could see Metacert being pretty valuable to API providers, as well as API consumers building web and mobile apps. Security isn't always about brute force attacks--a threat could just as easily be a simple link, added along with some content, via your API.

I am adding Metacert to my API security research, with a focus on its potential for API providers. I could see API providers seamlessly integrating the Metacert API into their own stack, processing all links that are submitted through regular operations. I will also be adding link screening like this as a building block in my API security research.
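
To make this more concrete, here is a rough sketch of what link screening at the API provider level might look like. The endpoint, parameters, and response shape here are my own illustrative assumptions, not Metacert's documented API--consult their docs for the real thing.

    # Hypothetical sketch of screening user-submitted links before an API
    # accepts them. The endpoint, parameters, and response shape are
    # illustrative assumptions, not Metacert's documented API.
    import requests

    API_KEY = "your-link-screening-api-key"

    def is_link_safe(url):
        # Ask the link screening service to classify the URL before we
        # store or redistribute it through our own API.
        response = requests.get(
            "https://api.example-link-screening.com/v1/check",
            params={"url": url},
            headers={"Authorization": "Bearer " + API_KEY},
        )
        response.raise_for_status()
        category = response.json().get("category")
        return category not in ("malware", "phishing", "pornography")

    def accept_submission(payload):
        # Screen every link that comes in through regular API operations.
        for url in payload.get("links", []):
            if not is_link_safe(url):
                raise ValueError("Rejected unsafe link: " + url)
        # ...store the submission as usual...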

If you are looking for a wise investment in the API security space, you should be talking with Metacert. APIs like Metacert provide us with a model for thinking about how we deliver API driven security services for web, mobile, and IoT applications, while also providing a potential wholesale API layer that other APIs can use to better secure their own APIs. I consider it a strong blueprint, because it is API driven, they have all the essential building blocks, including a monetization strategy, and they do one thing, and they do it well.

See The Full Blog Post


API Monitoring Is Often About The Little Details

As I make my way across my research projects, I'm learning more about how companies like Metacert can deliver valuable security services to API providers. I'm also getting a better idea of the nuance that goes into monitoring APIs, from API Fortress.

API Fortress has a very interesting API monitoring story, derived from the Etsy API. This isn't the API monitoring story you'd expect--it isn't about the overall stability of the Etsy API, and whether it's up or down. It is about the details of API payloads, and the inconsistencies they found scanning the API.

Through a payload test of the Etsy API, the API Fortress team found an occurrence of NULL values in 3,600 out of 50,100 items scanned--which negatively affected which results were actually accessible via the API vs the Etsy interface. Something that can result in loss of exposure for products, translating into reduced sales.

While full API outages are definitely front and center when you think about monitoring, things like this could potentially drain revenue over a long period of time, in a way that makes it very difficult to discover. I can only imagine, when you are doing things at the scale Etsy is, this kind of monitoring could plug some pretty big holes in your sales bucket.
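
To illustrate the concept, here is a rough sketch of this kind of payload-level check--paging through a listings API and counting records with NULL values in a field that should always be populated. The endpoint and field names are placeholders I made up for illustration, not the actual Etsy API or the API Fortress tooling.

    # Rough sketch of payload-level monitoring: page through a listings
    # API and count records carrying NULL values in a field that should
    # always be populated. Endpoint and field names are placeholders.
    import requests

    def scan_for_nulls(base_url, field="title", pages=10, per_page=100):
        null_count, total = 0, 0
        for page in range(1, pages + 1):
            items = requests.get(
                base_url, params={"page": page, "limit": per_page}
            ).json().get("results", [])
            total += len(items)
            null_count += sum(1 for item in items if item.get(field) is None)
        # Anything above zero is worth an alert--these records may be
        # invisible to consumers even though they exist in the system.
        return null_count, total

    nulls, total = scan_for_nulls("https://api.example.com/v2/listings")
    print(nulls, "of", total, "items scanned had NULL 'title' values")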

API Fortress is already part of my API monitoring research, but just like with API security, I am going to add content analysis, and pattern matching like this, as building blocks for API monitoring. After I make my way through all of the companies I currently have listed in my API monitoring research, I will publish the master list of common building blocks I've collected.

See The Full Blog Post


Please Do Not Hide Your API Definitions From Consumers

I am always pleased to see API providers publishing Swagger definitions, and using them to drive their interactive documentation. Projects like the Global Change Information System API are getting on the API definition bandwagon, and this is a good thing. I have been pushing API definition formats like Swagger and API Blueprint since 2012, and in 2015, while I want to keep on-boarding folks to the concept of using API definitions for interactive documentation, I also want them to understand that their API definition will be used in other areas of API operations as well.

Most people think Swagger is the documentation, and have not been able to separate the specification from the interactive documentation. I think Apiary has done a better job of this, keeping API Blueprint separate from Apiary itself. As an API provider, you may not have evolved to a full API-first level of operation, which is ok, but I encourage you to make your Swagger or API Blueprint definition as accessible as possible, so your API consumers can put it to use in other ways--even if you don't have the time.

As soon as I saw that the Global Change Information System API employed Swagger for their documentation, the next thing I wanted to do was use it in my Postman client. While the Swagger UI provides me with a hands-on way of getting to know the GCIS API, I have to come back to the site to play with it more, and it doesn't give me as much detail about how the API works as my Postman client does. All the GCIS API team has to do is publish a text or image link to their Swagger definition in a prominent location, so it is obvious to us consumers.

I can easily reverse engineer the Swagger UI using my Google developer tools, but it is a couple of extra steps that stand between me and making calls to the GCIS API in my local Postman client. Interactive API documentation via Swagger and Apiary has significantly moved the API definition conversation forward, but just like we should be thinking beyond language specific clients, we also need to be enabling our API consumers to speak the formats popular HTTP clients like Postman speak.
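
For a sense of what is possible once a definition is published at a well-known URL, here is a quick sketch of pulling a Swagger definition and enumerating the surface area of an API--the same URL could be imported directly into Postman. The definition URL is an assumption on my part, for illustration.

    # Sketch: fetch a published Swagger definition and enumerate the API's
    # surface area. The definition URL is an assumption for illustration--
    # the same URL could be imported straight into an HTTP client like Postman.
    import requests

    swagger = requests.get("https://data.globalchange.gov/swagger.json").json()

    for path, operations in swagger.get("paths", {}).items():
        for verb in operations:
            if verb in ("get", "post", "put", "patch", "delete"):
                print(verb.upper(), path)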

See The Full Blog Post


Are API Keys and Secrets Actually Very Secure?

When it comes to API security, there are a number of things to consider, something I will be working to define, and share as part of my ongoing research. However, there are three building blocks that are front and center in most security conversations--SSL, API keys, and OAuth. SSL is a must-have, and OAuth is fast becoming a must-have when there is personal data involved, but I still encounter numerous misconceptions around the role API keys actually play in security.

An API key, and its accompanying secret, are a common way to secure API access. You require developers to register for an account, create a new application, and they are then given an application key, plus a secret, that is passed along with each API call. You cannot call the API without passing in your API key and secret. This act alone is what people view as the security role API keys are playing, and I get a number of questions about whether this is truly secure.

No, it is not. Looking for two values being passed with each API call really doesn't do that much. The actual security of your platform requires a much higher level, IT-wide view of security (which I won't go into here), with API keys being just one tool in your security toolbox. Where keys do play a huge role is in the awareness they introduce of who is accessing what, and in managing how much they can access (aka mitigating how much damage is done, when there is a breach).

This short-sighted view of API keys is why many companies feel they can just roll their own API management solution, and why many API developers and architects think the keys themselves possess some magical security pixie dust. The actual power comes from your awareness of the resources you are exposing via APIs, organizing them into coherent service tiers, applying meaningful rate limits to these resources, and evolving your detailed awareness of who is accessing your resources, how much they are using, and when.
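
To show what I mean by awareness, here is a minimal sketch of the kind of per-key usage tracking and rate limiting a modern API management layer handles for you--the limits and logic here are invented for illustration.

    # Minimal sketch of the awareness layer behind API keys: track who is
    # calling what, and enforce a per-key rate limit. A real API management
    # layer does this (and much more) for you.
    import time
    from collections import defaultdict

    RATE_LIMIT = 1000  # calls per hour, per key
    usage = defaultdict(list)  # api_key -> recent (timestamp, endpoint) calls

    def record_call(api_key, endpoint):
        now = time.time()
        # Keep only the last hour of history for this key.
        usage[api_key] = [(t, e) for (t, e) in usage[api_key] if now - t < 3600]
        usage[api_key].append((now, endpoint))
        if len(usage[api_key]) > RATE_LIMIT:
            # The key itself didn't stop anything--but the awareness lets
            # us throttle, alert, and limit the damage in real time.
            raise PermissionError("Rate limit exceeded for key " + api_key)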

An API key and its companion secret offer very little security on their own, but the awareness they bring when you have a modern API management layer in place can bring huge security benefits to the table. Something that will not prevent every security breach, but with the right mechanisms in place you can be alerted of breaches in real-time, and dramatically limit the extent of the damage, when they do occur.

See The Full Blog Post


A Quick Example Of An API Provider Putting Content Type Negotiation To Work

While there are numerous examples of APIs that successfully offer more than one option when it comes to the content types their API returns, the concept is missing from a large portion of the APIs I review. When I see good examples of this in the wild, I try to make time to showcase and share them, to help with the HTTP literacy of my readership.

One API which is rocking several fronts when it comes to API design, for me, is the Global Change Information System (GCIS), which interestingly is an alliance between government agencies, not something born out of Silicon Valley. When requesting resources from the GCIS API, the following HTTP Accept headers are honored:

  • application/json
  • application/x-turtle
  • text/turtle
  • application/n-triples
  • text/n3
  • text/rdf-n3
  • application/ld+json
  • application/rdf+xml
  • application/rdf+json

When integrating with the GCIS API, it all begins with some basic JSON, but then look at all the other linked data goodness in there--JSON-LD, RDF, Turtle! APIs that support robust content type negotiation like this can be hard to find, and most APIs don't talk about it, so even when they do support it, there is often no way to know. Thankfully the GCIS team shared it via email, and also describes it in the GCIS Swagger definition, which is how I learned about it.
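
If you want to see this in action, requesting a different representation is just a matter of changing the Accept header on each request. A quick sketch--the resource path is an example, so consult the GCIS Swagger definition for actual endpoints:

    # Requesting alternate representations of the same GCIS resource is
    # just a matter of changing the HTTP Accept header. The resource path
    # is an example--check the GCIS Swagger definition for real endpoints.
    import requests

    resource = "https://data.globalchange.gov/report/nca3"

    for content_type in ("application/json", "text/turtle", "application/ld+json"):
        response = requests.get(resource, headers={"Accept": content_type})
        print(content_type, "->", response.headers.get("Content-Type"))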

Content type negotiation is one of those HTTP literacy areas that needs a lot of discussion, helping on-board API newbies to the power and possibilities that are present here--you can go beyond just plain ol' JSON, and receive much richer API responses.

See The Full Blog Post


The Benefits and Risks of an Open API Standard

I'm immersing myself in the Data Sharing and Open Data for Banks report, published by the HM Treasury and Cabinet Office in the United Kingdom, "to explore how competition and consumer outcomes in UK banking could be affected by banks giving customers the ability to share their transaction data with third parties using external Application Programming Interfaces (APIs)."

While I do not think an open API standard will ride into town like a white knight, saving England from all its banking problems, I do think there is a lot of good that will come from this effort. The banks have a lot of technical debt, corruption, and many other issues to deal with before APIs will even work, but I think there is enough pressure on them right now from consumers, and startups, that they will have to change at some point--or die.

As I looked for evidence of movement around an open API standard online, I came across the response to the government proposal from back in March. I thought there was a lot to learn from these responses, not just for banks, but for any company that is struggling with a coherent API strategy. You can read the document yourself--here are the specific responses.

Benefits and risks of an open API standard 

2.1 The first set of questions in the call for evidence invited views on the benefits and risks of an open API standard. 

2.2 The vast majority of the respondents supported the development of an open API standard, and respondents agreed that there are many potential benefits of an open API standard in UK banking. The most commonly cited benefits were increased competition and innovation in banking, a greater degree of consumer choice in financial services, and the development of new and better services tailored to customers that can enhance the overall efficiency of banking. The government also repeatedly heard that an open API standard would result in customers feeling more empowered to engage with their banking or financial services. 

2.3 However, many respondents also recognised that there are risks that could arise from an open API standard. One concern that was raised was the risk that an individual (to whom the data belongs) may not have meaningful control over the exchange of the data between third parties and financial institutions. Respondents from the technology industry and the banking industry cited risks around privacy of customers’ data and the potential for fraudulent use of data. In line with this, the need for appropriate security and vetting systems for third parties was regularly mentioned in the submissions received. In addition, responses received from the payments and banking industries explained that an open API standard should avoid ‘locking in’ a set of standards which cannot be enhanced or restrict banks from their own innovations, and that it should be compatible with European legislation. 

Development of an open API standard 

2.4 The next set of questions in the call for evidence considered the process for the development of an open API standard. 

2.5 The government asked what it could do to facilitate the development and adoption of an open API standard. The majority of respondents said that the government has a key role to play in coordinating between the banking industry and the fintech community, and driving forward the design of the open API standard, particularly in relation to the development of the standards around data security and confidentiality. Respondents predominantly from the fintech community also explained that government has a role in helping to educate customers to understand the privacy and security issues surrounding an open API standard, and the steps that can be taken to mitigate those risks.

2.6 The government questioned who should play a role in the development of an open API standard, who should be able to make use of it and how it should be used. The government received responses suggesting various different institutions could be involved in the development of an open API standard. These include banks and other financial institutions; the British Standards Institute and International Standards Organisation; the Financial Conduct Authority; the Bank of England; the Information Commissioner’s Office; the British Bankers’ Association; the Payments Council; the European Banking Authority; data security experts; software developers; fintech businesses and the government. Submissions from the fintech and technology business said that the open API standard should be available to be used by everyone, subject to the appropriate vetting procedures, and not restricted to those who banks have existing relationships with or can afford to pay for access. The banks agreed that appropriate security vetting procedures should be developed. 

2.7 In response to the question on the cost of developing an open API standard, the government received a variety of estimates ranging from negligible costs to tens of millions of pounds. Several respondents from all industries explained that they were unsure of the cost of delivering an open API standard, or that it was difficult to predict how much it would cost, because much of the detail of the design of the open API standard remains to be agreed. Many respondents also highlighted that the costs were likely to vary greatly between institutions. Software and app developers, however, explained that costs may be lower than anticipated because some banks have already developed their own external APIs and so are not starting from a blank sheet of paper. 

2.8 The government asked questions on how long it would take to deliver an open API standard. While there was a degree of variation in the responses, broadly respondents were in agreement that 1 to 2 years was a reasonable timescale to develop and deliver an open API standard in UK banking, although more in depth responses explained that the timescale for delivery would be clearer once the design specification of the open API standard is more defined. 

2.9 The government also asked what issues would need to be considered in relation to data protection and security. Respondents highlighted that customers must at all times be in control of which third parties they grant access to their bank data and for how long, how that data can be used and the level of granularity of data that can be assessed. The banking industry and many technology firms said that third parties should be appropriately vetted before being able to access customer bank data, and a permissions list of authorisations and authentications should set out all approved third parties. One technology company added that the degree of access to customers’ bank data could be tiered according to the level of security standards met by the third party. 

2.10 The government invited views on the technical requirements an open API standard should meet and adhere to. Respondents put forward a number of recommendations, the most popular options involved such requirements as HTTP, JSON, XML, OAuth, REST and Cloud based architecture.

I also found the government responses to be equally valuable:

3.1 It is clear from the vast majority of responses received during the call for evidence that the benefits of an open API standard are numerous and widely recognised. The key benefits that were identified by respondents were an increase in consumer choice, more competition in banking and an enhanced process, experience and outcome for consumers. Respondents strongly emphasised their support for the delivery of an open API standard in UK banking. 

3.2 While much of the detail of an open API standard is still to be agreed, it is evident that an open API standard can be designed in a way which meets requirements around data protection and security, and at reasonable cost. The majority of respondents also said that developing an open API standard to a timescale of 1 to 2 years would not be unreasonable. 

3.3 Respondents from the financial industry and the technology industry were aware of the risks that an open API standard could bring to consumers such as data privacy and the possibility of fraudulent use. It was made clear, however, that these risks are largely addressed through existing data protection laws, and can be mitigated through detailed planning and meticulous scoping of the open API standard design. 

3.4 Submissions to the call for evidence also made clear that the publication of more open data in banking can have benefits to customers and financial institutions, and help to boost competition. The government, however, notes that steps will need to be taken to ensure the appropriate degree of data security and protection to customers is maintained. 

3.5 The government is therefore committing to deliver an open API standard in UK banking, and will set out a detailed framework for the design of the open API standard by the end of 2015. The government will work closely with banks and financial technology firms to take the design work forward and, as part of those discussions, will also take forward ideas to introduce more open data in banking for the benefit of customers. 

3.6 Delivering an open API standard in UK banking will help to drive more competition in banking for the benefit of customers, and enable fintech firms to make use of bank data on behalf of customers in a variety of effective and creative ways. It will ensure that the UK remains a global hub and a world-leader for financial technology and innovation. 

I think 3.5 is the most important: The government is therefore committing to deliver an open API standard in UK banking, and will set out a detailed framework for the design of the open API standard by the end of 2015.

Let's get to work. I'm setting up my monitoring to track on things as they roll out, and hopefully provide a blueprint that can be employed here in the U.S. I'd love to see similar work come out of our Department of Commerce, but I guess, as with other aspects of the open data and API movement here in the U.S., we will be following the UK's lead.

See The Full Blog Post


Keeping an Eye on the Open Banking API Movement in the UK

Late in 2014, the HM Treasury and Cabinet Office in the United Kingdom published Data Sharing and Open Data for Banks, "to explore how competition and consumer outcomes in UK banking could be affected by banks giving customers the ability to share their transaction data with third parties using external Application Programming Interfaces (APIs)."

Very similar to the Obama open data mandate in May of 2012, I saw a lot of discussion shortly after the release of the document from the HM Treasury, but not many details on how things are going so far. There was a response to the document published back in March, which holds some interesting perspective, but not much else from the government, or banks, that I can find.

It is only August--I'm sure we'll see more activity this fall and into 2016, when it comes to standardizing banking APIs in the UK. I've tracked on efforts like the Open Banking Project for a while now, but I think what is potentially happening in the UK is worth taking things up a notch or two. To help me, I will set up a banking research project, which will push banking and related APIs up in my daily and weekly monitoring, something I can review regularly, and maybe push back on some key players to get more information on where things are. Establishing an official research project really helps me find the key players, both individuals and companies, which in turn increases the amount of information I gather as part of my research.

If you work for a bank that does business in the UK, or possibly for the government, I'd love to hear your thoughts, even if it is off the record. I'm hopeful for what can occur in the UK, but I'm also eyeballing the effort as a potential blueprint that the rest of the world could follow. I know the banking industry suffers from some crippling legacy technical debt, and is slowed by government regulations, but I'm pumped about the potential after reading through the Data Sharing and Open Data for Banks again--we will see a significant shift in the next five years, when it comes to how we manage our money.

See The Full Blog Post


Influencing Important Work Like the UK Open Banking API Standard Is Why I Do This

I enjoy what I do, but when I embarked on my API Evangelist journey in 2010, I set myself on a mission to educate people about the business of APIs, and highlight that it isn't just the technology that makes APIs a thing. I have worked hard to distill down what it takes to execute an effective API management strategy into usable advice that anyone can run with when crafting an API strategy.

As I conduct my monitoring of the API space, it makes me feel accomplished when I find my work cited, influencing important API related projects. This just happened as I was taking a fresh look at the Data Sharing and Open Data for Banks report, published by the HM Treasury and Cabinet Office in the United Kingdom. As I was reviewing the document, I happened to search for apievangelist (I know, I'm vain)--I was pleased to find a single citation of my work.

Reference 152 - Thirdly, making the most out of external APIs involves providing on-going support to the developers and third party organisations that make use of them. Good API management can involve providing samples of code, a curated support forum, a sandpit with dummy data, an up to date blog, FAQs and more.

It isn't much, but it is a nod to my work that helps validate what I've been studying for the last five years--the business of APIs. For me, this isn't about recognition, it's about knowing I'm making an impact, influencing policy, and helping companies, institutions, and government agencies understand API best practices. Little mission accomplished--I'm contributing a small piece to potentially improving the UK banking system, which is something that will undoubtedly spread beyond just England.

Ok, celebration is over. Nice job Kin. Back to work. There is too much to be done.

See The Full Blog Post


An API Monetization Framework To Help Me Standardize Pricing For The APIs I Bring Online

I'm almost to the point with my API stack where I can start plugging in the new APIs I have planned. Up until now, the APIs I have deployed are of little use to a wider commercial audience. However, some of the APIs I have planned for the next year I am looking to monetize, and operate as part of a larger commercially viable API stack. (practice what I preach baby!)

To run this stack, I need a plug and play way to define what an API is costing me, and potentially how much revenue I am generating from each API. With this in mind, here is my draft look at an API monetization framework that I am employing across my API Stack.

Acquisition (One Time or Recurring)

  • Discover - What did I spend to find this? I may have had to buy someone dinner or a beer to find it, as well as spend time searching on the Internet.
  • Negotiate - What time do I have invested in actually getting access to something? Sometimes it's time, and sometimes it costs me money.
  • Licensing - There is a chance I would license a database from a company or institution, so I want to have this option in there. Even if it is open source, I want the license referenced.
  • Purchase - There is also the chance I may buy a database from someone outright, or pay them to put the database together, resulting in a one-time fee.

 Development (One Time or Recurring) 

  • Normalization - What does it take me to clean up and normalize a dataset, or across content? This is usually the busy janitorial work necessary.
  • Design - What does it take me to generate a Swagger and API Blueprint definition, something that isn't just auto-generated, but also has a hand polish to it?
  • Database - How much work am I putting into setting up the database? A lot of this I can automate, but there is always a setup cost.
  • Server - Defining the amount of work I put into setting up and configuring the server to run a new API, including where it goes in my operations plan.
  • Coding - How much work do I put into actually coding an API? I use the Slim PHP framework, and using automation scripts I can usually generate 75% of it.
  • DNS - What was the overhead in defining and configuring the DNS for an API, setting up the endpoint domain, as well as potentially a portal URL?

Operation (Recurring)

  • Compute - What percentage of my AWS compute is dedicated to an API? A flat percentage of the server it's on, until usage history exists.
  • Storage - How much on-disk storage am I using to operate an API? This could fluctuate from month to month, and exponentially increase for some.
  • Bandwidth - How much bandwidth in / out is an API using to get the job done?
  • Management - What percentage of API management resources is dedicated to the API? A flat percentage of API management overhead until usage history exists.
  • Evangelism - How much energy do I put into evangelizing any single API? Did I write a blog post, or am I buying Twitter or Google Ads? How is word getting out?
  • Monitoring - What percentage of the API monitoring, testing, and performance service budget is dedicated to this API? How large is the surface area for monitoring?

Pricing (Recurring)

  • Tier(s) - Which of the 7 service tiers is an API available in, and which endpoint paths + verbs are accessible in the tier (api-pricing definition).
  • Credit(s) - How many credits does an API use when any single endpoint is engaged, specified as entire endpoint or individual paths + verbs (api-credit definition).

Revenue (Recurring)

  • Monthly - How much revenue is being brought in on a monthly basis for an API and all of its endpoints.
  • Users - How much revenue is being brought in on a monthly basis for a specific user, for an API and all of its endpoints.
  • Applications - How much revenue is being brought in on a monthly basis for a specific application, for an API and all of its endpoints.

I am looking for this framework to help me set pricing and rate limits for any API I publish. My goal is to rapidly make available some valuable databases and more functional APIs using common open source software, available for free, but also generate enough revenue from high volume users to run the whole thing. To do this, I need to understand exactly what an API is costing, allowing me to set a price, with the intent of breaking even, and then generating some revenue where it makes sense.
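
To make the math concrete, here is a rough sketch of how I see the framework rolling up into a break-even price per API call. Every number here is an invented placeholder--the point is the arithmetic, not the amounts:

    # Rough sketch of rolling the framework up into a break-even price per
    # API call. All numbers are invented placeholders.
    acquisition = {"discover": 50.00, "negotiate": 100.00,
                   "licensing": 0.00, "purchase": 0.00}
    development = {"normalization": 200.00, "design": 80.00, "database": 40.00,
                   "server": 40.00, "coding": 120.00, "dns": 10.00}
    operation_monthly = {"compute": 15.00, "storage": 5.00, "bandwidth": 10.00,
                         "management": 20.00, "evangelism": 25.00,
                         "monitoring": 10.00}

    months_to_recover = 12              # spread one-time costs over a year
    projected_calls_per_month = 250000

    one_time = sum(acquisition.values()) + sum(development.values())
    monthly_cost = sum(operation_monthly.values()) + one_time / months_to_recover
    break_even_per_call = monthly_cost / projected_calls_per_month

    print("break even at $%.6f per call" % break_even_per_call)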

As part of this work I will be generating an APIs.json property type I am calling api-pricing, which I am hoping will help me balance out consumption across my API stack. Using my 3Scale API infrastructure I am able to easily add and subtract credits for API usage across users and apps, then handle the billing based upon the pricing I have set for all of my API usage.

My pricing will not just be about retail usage. Some of my APIs will be deployed in other people's infrastructure, letting them control the pricing, credits, and service composition. This is the wholesale layer of my strategy, allowing me to go beyond my own internal usage, to B2D usage, and open up new opportunities for API deployment and consumption.

As with most of my work, I'm going to be very transparent about my pricing, making sure it is indexed within each APIs.json file, and available alongside each API's Swagger, API Blueprint, Postman Collection, and API Science monitor--encouraging wider consumption, and processing.

See The Full Blog Post


We Need an Open Abstraction Layer to Help Us Better Define and Design Our APIs

I walked around San Francisco with Jakub Nesetril (@jakubnesetril), the CEO of Apiary, Wednesday evening, talking about the API space. Eventually we sat down in Union Square, continuing our conversation, which is something I wanted to further process, and blend with some existing thoughts I'm working through. Much of our conversation centered around the need for an open abstraction layer for API design, which would reduce the focus on Swagger vs. API Blueprint vs. WADL vs. RAML vs. any other API definition, and make it just about defining and designing our APIs.

Jakub is right. I'm sure he'd love everyone to use API Blueprint (which thousands are), but it is more important to him that people just use API definitions, and commit to a healthy API design strategy. This line of thinking is in alignment with other thoughts I'm having around there being a common open source API design editor, which I'd like to use as a vehicle to get us closer to my vision of a perfect API design editor.

I see an abstraction layer consisting of the following elements:

  • Import - We need to be able to import ANY API definition format we desire.
  • Export - We need to be able to export ANY API definition format we desire.
  • Viewer - We need a code view for ANY API definition format we want to look at while working.
  • Editor - We need a visual, GUI editor that is all about visual API design--a WYSIWYG editor for APIs.

The import, export, and viewer should work as an API, and the API editor should be a simple, well designed JavaScript tool that can be embedded anywhere. The back-end API stack should be available in PHP, Python, Ruby, Node.js, Go, C#, and Java flavors, with a Docker image anyone can deploy within their own infrastructure. This should be all about abstracting away the complexities of each individual format, and focusing on delivering a simple, yet robust API design editing experience.

API definition format owners should be able to maintain the importing, exporting, and viewing layers via some sort of plug-in architecture--meaning the platform is about API definition, while the WSDL, WADL, Swagger, API Blueprint, RADL, RAML, Postman Collection, and other formats are each maintained by their respective owners. The Open API Abstraction project could provide a single architecture for everyone to plug into, with an emphasis on building the best possible API design editing experience.
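
Here is a quick sketch of how I imagine such a plug-in architecture could work--each format owner registers an importer and exporter against one shared internal model, so the editor never has to care which format it is speaking. The function names and model shape are purely illustrative:

    # Sketch of the plug-in idea: each API definition format registers an
    # importer and exporter against one shared internal model, so the
    # editor never has to care about Swagger vs. API Blueprint vs. RAML.
    FORMATS = {}

    def register_format(name, importer, exporter):
        # Format owners each maintain their own plug-in; the platform just
        # routes everything through this registry.
        FORMATS[name] = {"import": importer, "export": exporter}

    def convert(definition, source_format, target_format):
        # Import into the shared abstract model, then export to the target.
        model = FORMATS[source_format]["import"](definition)
        return FORMATS[target_format]["export"](model)

    # Toy plug-ins standing in for real parsers and serializers:
    register_format("swagger",
                    importer=lambda text: {"raw": text},
                    exporter=lambda model: model["raw"])
    register_format("apiblueprint",
                    importer=lambda text: {"raw": text},
                    exporter=lambda model: model["raw"])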

The Open API Abstraction tooling could be baked into any API service or tooling in the space, allowing the next generation of API service providers to focus on what they do best, rather than having to build out their own API design editor, while also baking in compatibility for all API definition formats by default. Such a layer would allow API architects and designers to craft APIs in a consistent way, no matter what API definition format designers and developers might be speaking, opening up a wider world for communicating and collaborating through the API design process.

See The Full Blog Post


Thinking Beyond Just Language Specific Clients and Also Speaking the Formats Popular HTTP Clients Are Using

I was given an introduction to the Microsoft Graph, a concept being applied to Office 365 APIs, other Microsoft APIs, and potentially beyond, to map out segments of users and everyday objects. As I learn more about this unifying graph API effort, I will write more, but this particular story is about how we communicate around the first steps taken by developers when integrating with any API. As an API provider, how you talk about integration, and craft your on-boarding resources, can significantly impact how developers view your resources, something that I think will always need some work across the space.

After being introduced to the Microsoft Graph APIs, we were given a list of code resources that we could use to hack against the API. The API integration overview had all the modern elements of API integration, with C#, Java, PHP, Node.js, Ruby, and other "coming soon" libraries. The resource toolkit even had a sandbox account we could use, helping us on-board with less friction. While this approach is very progressive for the Microsoft world I've known, evolving us beyond the endless sea of C#-focused WSDLs we have all seen historically, I would like to point out what I think should be the next step in our evolution.

It makes me happy that we now speak in multiple programming languages, and provide sandbox or simulation environments. +1 What I'd like to see next is that we also speak more HTTP than just language-specific clients. I'd like to see these types of API on-boarding toolkits start providing a Postman Collection for the API, or even better, a Swagger or API Blueprint definition that allows me to on-board using the HTTP client of my choice, like Postman, PAW, or Insomnia REST. I agree that we should be speaking the native language of the developers we are courting, but I like to nudge things forward, and encourage speaking the more generic language of HTTP, for those of us who program in many different languages.

Just like being multi-lingual with APIs has moved us out of our web service silos, I'm hopeful that if more developers speak HTTP, it will help move us into the future, where API developers are more HTTP literate, and are really leveraging the strengths of HTTP, or even better, HTTP/2, in their everyday worlds. I have started including Postman collections, along with my Swagger definitions, for my APIs. I'm also working to include API Blueprint, and other API definition formats, something that will allow potential API consumers to on-board using my language-specific libraries, or the HTTP client of their choice.
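
To illustrate what "speaking HTTP" looks like, here is the same kind of call a language-specific SDK would wrap, expressed with nothing but a stock HTTP client--the host, path, and token are placeholders:

    # The same call a language-specific SDK would wrap, expressed as plain
    # HTTP--something any client (Postman, PAW, Insomnia, or a few lines
    # of stdlib code) can speak. Host, path, and token are placeholders.
    import http.client
    import json

    conn = http.client.HTTPSConnection("api.example.com")
    conn.request("GET", "/v1/me", headers={
        "Accept": "application/json",
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
    })
    response = conn.getresponse()
    print(response.status, json.loads(response.read()))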

See The Full Blog Post


The API Design Guide Is Just The Beginning Of The Journey - Better Get Started!

I'm processing all of my thoughts from some travel to the big city of San Francisco, where I was providing feedback on Microsoft's API design guide as part of the OneAPI Technical Advisory Group. I have been thinking about the journey Microsoft is on, the role of the API design guide, and how many other companies, like Paypal and Cisco, are on the same journey.

In parallel to this, I am on my own journey with my own API stack, and I've been looking at everything from a slightly different perspective than many other analysts and providers in the space. When I started in 2010, it was all about API management; then after folks kept asking me about options for deploying APIs, I expanded my monitoring to API deployment. Then Jakub, the CEO of Apiary, moved the dial further back on the life-cycle, and got me paying attention to the concept of API design.

Fast-forward to 2015, and I am paying attention to almost 20 separate areas as part of my core API research. I tune into a number of other areas, but these research projects make up the heart of my API storytelling. On the trip back from San Francisco today I had a few thoughts, and needed to organize them in the context of my core research.

Define (A)

  • A1 - Tech
    • Do you use Swagger to define APIs?
    • Do you use API Blueprint to define APIs?
    • Are there other API definition formats you use?
    • Do you employ NO API definitions?
  • A2 - Business
    • Do you support multiple API definition formats?
    • Do you use API definition formats to engage with other services or tools?
    • Are your service tiers represented in your API definitions?
  • A3 - Politics
    • Do you share your API definitions publicly?
    • How do you license your API definitions? Are they open?

Design (B)

  • B1 - Tech
    • Do you use Restlet Studio's API design tools?
    • Do you use APIMATIC API design tools?
    • Do you use Apigee API Studio API design tools?
    • Do you use Gelato API design tools?
  • B2 - Business
    • Do you use a service provider for API design?
    • Do you use a consultant(s) for API design?
    • Do partners and other stakeholders participate in the API design process?
  • B3 - Politics
    • Do you JSON:API?
    • Do you hypermedia?
    • Do you JSON-LD?

Deploy (C)

  • C1 - Tech
    • What gateway tech do you use?
    • What API frameworks do you use? 
    • What cloud solutions do you use?
  • C2 - Business
    • Are you monetizing APIs?
    • Are you covering costs of deployment?
    • Do you handle API deployment internally?
    • Do you depend on outside resources to deploy APIs?
  • C3 - Politics
    • Are your API deployment solutions open source?
    • Do Your API Deployment resources scale?

Manage (D)

  • D1 - Tech
    • Are you rolling your own API management solution?
    • Which API management solution are you using?
  • D2 - Business
    • Do you require registration to use an API?
    • Do you have service tiers?
    • Do you have partner levels of access?
    • Do all areas of your company's operations apply the same approach?
  • D3 - Politics
    • Is your API management solution open source? 
    • Do you provide support for your API developers?

Secure (E)

  • E1 - Tech
    • Do you require SSL?
    • Do you require API keys? 
    • Are you employing OAuth?
  • E2 - Business
    • Are you investing in security?
    • What would a security breach cost you?
    • Do all areas of your company's operations apply a consistent template?
  • E3 - Politics
    • Are you more secure than your competition?
    • Do you share your security practices publicly?

Monetize (F)

  • F1 - Tech
    • Which management solution do you use to handle monetization?
    • Is there an API for your API monetization layer?
    • Is there a developer dashboard for pricing, billing, and revenue sharing?
  • F2 - Business
    • Do you apply consistent monetization strategy across all APIs?
    • Is your billing real-time?
    • Is API revenue, your only revenue stream?
    • Do you have a credit based system, beyond just API call based?
  • F3 - Politics
    • Do you share your pricing publicly? 
    • Is your partner program transparent?
    • Do developers have dashboard for managing billing?

Monitoring (G)

  • G1 - Tech
    • What services do you use for monitoring APIs? 
    • What open source tooling do you use for your monitoring?
    • Does your monitoring include testing, performance, and security?
  • G2 - Business
    • What do you spend on API monitoring each month?
    • Is there a dedicated person(s) to monitoring APIs?
    • What has outages cost you in the past?
  • G3 - Politics
    • Do you publish your monitoring reports publicly?
    • Do you keep your ecosystem in tune with monitoring via messaging system(s)?

Testing (H)

  • H1 - Tech
    • What services do you use for testing your APIs? 
    • What open source tooling do you use for your testing?
    • What are your benchmarks?
  • H2 - Business
    • Do you provide testing tools to your developers?
    • Can developers request specific types of testing for APIs?
  • H3 - Politics
    • Do you publish your monitoring reports publicly?

Performance (I)

  • I1 - Tech
    • What services do you use for testing your APIs? 
    • What open source tooling do you use for your testing?
    • What are your benchmarks?
  • I2 - Business
    • Do you have SLAs for any tiers of operation?
    • Do you generate any revenue from SLAs in place?
  • I3 - Politics
    • Do you publish your monitoring reports publicly?
    • Do you consistently meet your SLAs?

Virtualization (J)

  • J1 - Tech
    • What services do you use for virtualizing your APIs? 
    • What open source tooling do you use for your API virtualization?
  • J2 - Business
    • Is virtualization part of your QA process?
    • Do you provide virtualized instances of your API as an added service?
    • Do you provide a sandbox or simulators for developers by default?
  • J3 - Politics
    • Do you provide virtualization for developers?
    • Are your virtualization images openly sourced?

Orchestration (K)

  • K1 - Tech
    • What services do you use for virtualizing your APIs? 
    • What open source tooling do you use for your API virtualization?
  • K2 - Business
    • Do you have dedicated people to managing your API architecture?
    • Do you have dedicated services for managing your API architecture?
  • K3 - Politics
    • Can you migrate between infrastructure providers? (ie. AWS to Google)
    • Is your server side API code open source?
    • Are your virtualization images openly licensed and available? (ie. Docker Images)

Embeddability (L)

  • L1 - Tech
    • Do you use oEmbed?
    • Do you have bookmarklets?
    • Do you have a JavaScript API?
  • L2 - Business
    • Do you have a standardized strategy for allowing users to embed API driven resources?
    • Is your embeddable strategy integrated with your overall marketing and branding efforts?
    • Do you offer an embeddable tool builder?
  • L3 - Politics
    • Are your embeddable tools non-invasive? Protect privacy?

Evangelize (M)

  • M1 - Tech
    • Do you have robot evangelists? Just sounded cool, and couldn't think of anything to put here.
  • M2 - Business
    • Do you have dedicated evangelist resources?
    • Do you contract with 3rd parties for any evangelist resources?
    • Is your evangelism coupled with your marketing?
  • M3 - Politics
    • Are there opportunities for developers to get involved with evangelism?
    • Do you have healthy feedback loop present with your API operations?

Discovery (N)

  • N1 - Tech
    • Do you have an APIs.json for your API operations?
    • Do you employ API definitions?
    • Are you using hypermedia?
  • N2 - Business
    • Are your APIs plug and play with other platforms?
    • Are your APIs public or private?
  • N3 - Politics
    • Does discovery feed every other layer of API life-cycle?
    • Does discovery play into your security strategy?

Sorry, I don't mean to be a downer. But...we are just getting started with stabilizing how we do APIs. I feel like we are beginning to formalize how we manage our APIs (thanks Mashery, 3Scale, and Apigee), we are getting a handle on API deployment (thanks Restlet, Amazon), and we are deep into understanding how we define (thanks Swagger and API Blueprint), and ultimately design (thanks Apiary, Restlet, and Apigee) our APIs.

We are moving fast into testing, monitoring, performance, and virtualization (thanks Runscope, APITools, API Science, and SmartBear), but our security sucks, embeddability has stagnated, and evangelism and discovery really aren't improving. I'm working on APIs.json, and bringing together the Swagger and API Blueprint communities, but there is so much work left when it comes to discovery, and automating each of the areas listed above.

There really is no point to this post. It is my mental vomit, after a trip to San Francisco, and working on Microsoft's API design guide. Stay tuned for how any of this applies to anything.

See The Full Blog Post


Crafting and Publishing API Design Guide Shows That You Are Further Along In Your API Journey

I spent all day Wednesday at the Microsoft offices in San Francisco, providing feedback on the Microsoft API design guide, as part of the OneAPI Technical Advisory Group. The OneAPI team had already done most of the hard work in hammering out the API design guide, by working with the API leadership from groups across Microsoft--we were just brought in to provide outside perspectives.

A group of about 20 of us spent the entire day walking through the high levels of the Microsoft API design philosophy. The Microsoft OneAPI design guide is a draft, so they aren't ready for us to share it publicly--something we'll see in the near future. However, document ready or not, the process showed me that Microsoft is working hard to get their API design strategy house in order. They had worked to iterate through all the common areas of API design with their various teams--even some of the more controversial areas, like versioning.

I did not feel like I had a lot to contribute to the process. They have some really good API designers, and there was plenty of high quality API design talent in the room to provide the feedback they wanted. When Microsoft is ready for more of the management, evangelism, and other areas in the business and politics of APIs, I will have much more to bring to the table. I did, however, provide some insight that I think could help the overall process, and will continue to provide feedback--which is why I'm gathering my thoughts in this series of posts.

Prior to participating in the OneAPI Technical Advisory Group, I had just published the API design guides for Cisco and Paypal as part of my API design research, bringing the number of API design guides I have on file to nine--a good sign the space is getting more serious about standardizing how we do API design. For me, there are two important things going on here:

  • Company Journey - The company's own API journey, allowing them to collaborate and craft a single API design philosophy, with the intent of making it a company-wide initiative.
  • Publicly Sharing - Declaring to the public: this is how we design APIs. Something that allows others to learn from it, and merge it with their own API design philosophy--benefiting the entire space, while also establishing leaders and followers.

Crafting a document that defines how a company builds software is nothing new. However, APIs aren't just about building software--they are about defining your company's resources, and when you work to standardize how you define your resources as APIs, you are focusing on the core of your business. Next comes the value when building websites, web and mobile applications, and potentially devices as well. Getting your API design house in order is all about standardizing how you speak API across your company, something that touches every product and service, and is the foundation for your corporate digital strategy.

My participation in the OneAPI Technical Advisory Group gave me a unique glimpse at the API strategy unfolding at Microsoft. While there is still a lot of work to be done, the fact that they are working on a central API design strategy demonstrates to me they are further along in their API journey than I thought they were. Where are you in your own API journey? Does your organization have an API design guide? How do you share this knowledge across your organization, and with the public?

As I monitor the API space, I am always looking for the signals of a healthy API strategy, and I think, when I see a formal API design guide present for any company that I track on, I am going to tack on a couple extra points. A polished, publicly shared API design guide shows an organization is further along in its overall API journey than other companies in the space, who are still trying to figure things out.

See The Full Blog Post


Making Sure The APIs Being Served Up Via Your Enterprise Service Bus (ESB) Are Discoverable and Consumable Using APIs.json and Swagger

Making the APIs that are available via the enterprise service bus, affectionately known as an ESB, more discoverable, accessible, and consumable via the open Internet is one of the many challenges organizations will face along their API journey. Striking a balance between internal APIs and public APIs, even if they aren't open to the wider public, only partners, is proving to be a big challenge for many enterprise groups I am engaged in conversations with.

When Steve Willmott (@njyx) and I developed APIs.json, an open API discovery format, we were focused on bringing solutions to the table for API discovery on the open Internet. We knew that the format could also assist in more controlled environments, like within the enterprise, but wanted to focus on the wider discussion first. Our primary focus is indexing the current landscape of publicly available APIs using APIs.json, so that we can make them available via our open source search engine APIs.io.

We have been working with other API service providers like WSO2 to integrate APIs.json into their enterprise offerings, but have pretty much left the enterprise landscape to craft its own APIs.json driven solutions. So it pleases me to see that Warewolf ESB has integrated not just Swagger into their open source ESB solution, but also APIs.json. Any service you expose through the Warewolf ESB security layer will automatically be published in an APIs.json file--additionally, these services will also have a Swagger file generated, providing you a machine readable definition of the surface area of each exposed API.

Warewolf ESB's usage of APIs.json is exactly what we had envisioned when it comes to providing API discovery solutions for APIs that originate within the enterprise. In this scenario, APIs.json is acting as a portable, machine readable, JSON definition of what APIs can be found via a company or organization's ESB. The availability of Swagger makes these services consumable as soon as they are discovered via the APIs.json index--showing what is possible when you combine APIs.json with Swagger.
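
For those unfamiliar with the format, here is roughly what such an APIs.json index looks like, with each exposed service pointing at its generated Swagger definition. This is a hand-built illustration, not Warewolf's actual output:

    {
      "name": "Example Organization",
      "description": "APIs exposed via our ESB security layer",
      "url": "https://api.example.com/apis.json",
      "specificationVersion": "0.14",
      "apis": [
        {
          "name": "Orders API",
          "description": "Order services exposed through the ESB",
          "baseURL": "https://api.example.com/orders",
          "properties": [
            {
              "type": "Swagger",
              "url": "https://api.example.com/orders/swagger.json"
            }
          ]
        }
      ]
    }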

I'm not familiar with exactly how Warewolf ESB manages the security layer for APIs on the bus, but I'd like to learn more, so I can help organizations craft not just single APIs.json indexes, but develop meaningful collections, and begin to broker the resources they are making available via any ESB on their network in more intelligent ways. The Warewolf release has jump-started my brainstorming around the possibilities for API discovery within the enterprise, using APIs.json--thanks Warewolf, very nice work!

See The Full Blog Post


Further Defining the AngelList API as Part of My API Stack

I am slowly making my way through defining the APIs available in the API Stack, beginning with the APIs that I depend on to operate API Evangelist. The best way to understand any API, in my opinion, is to create a Swagger definition for it, as well as an APIs.json file, indexing the overall API operations. Since my mission is all about understanding APIs, this is something I try to do on a regular basis.

Creating an APIs.json file allows me to index each API's operations, from registration and documentation, to pricing and terms of service. Then I work to break down every possible unit of value represented by an API's endpoints, reducing each down to the minimum viable element possible--something that Swagger allows me to do nicely.

Here is what I ended up with for AngelList so far:

AngelList

  • AngelList Accreditation API
  • AngelList Comments API
  • AngelList Follows API
  • AngelList Jobs API
  • AngelList Like API
  • AngelList Me API
  • AngelList Messages API
  • AngelList Paths API
  • AngelList Press API
  • AngelList Reservations API
  • AngelList Reviews API
  • AngelList Search API
  • AngelList Startups API
  • AngelList Status Updates API
  • AngelList Tags API
  • AngelList Users API

I depend on the AngelList API to provide a vital lens for monitoring the companies I track on, who are doing interesting things with APIs. AngelList also provides me with an important discovery tool that helps me find new people and companies that are pushing the API conversation forward, as well as monitor the existing players.

I have had an APIs.json file for the AngelList API for some time, but I only had a single, pretty useless Swagger definition. Now I have 16 separate resources, with each endpoint, verb, parameter, and underlying data and security model defined. The process helped me better understand how I already use the search and startups endpoints, but also opened my eyes to the jobs, press, user, and tag APIs--which I am now working into my regular API monitoring workflow.
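
For a sense of what one of these definitions looks like, here is a stripped down sketch of a Swagger definition for the search endpoint--abbreviated and hand-rolled for illustration, so don't treat it as the official AngelList definition:

    {
      "swagger": "2.0",
      "info": { "title": "AngelList Search API", "version": "1.0" },
      "host": "api.angel.co",
      "basePath": "/1",
      "paths": {
        "/search": {
          "get": {
            "summary": "Search for users, startups, and tags",
            "parameters": [
              { "name": "query", "in": "query", "required": true, "type": "string" },
              { "name": "type", "in": "query", "required": false, "type": "string" }
            ],
            "responses": { "200": { "description": "Matching results" } }
          }
        }
      }
    }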

This process isn't just about creating complete profiles I can include in the API Stack, and APIs.io. It is about me better understanding the resources I already depend on, pushing me to better take advantage of them, ultimately improving upon my own API stack, as well as my operational efficiency using APIs.

See The Full Blog Post


Setting a Precedent When Charging for High Volume Access to Government APIs

I'm neck deep in discussions around API monetization lately, from building a business model in the fast growing podcast space with AudioSear.ch, to funding scientific research through API driven revenue, and the latest being a continuing conversation around how to monetize high volume usage of the Recreation Information Database (RIDB).

I have been pulled into the conversation around the API for our National Park system information several times now. In October of 2014 I asked for Help To Make Sure The Dept. of Agriculture Leads With APIs In Their Parks and Recreation RFP, and this January I saw some Next Steps For The Recreation Information Database (RIDB) API. This time I was pulled in to comment on a change in language, which allows the vendor who is operating the API to charge for some levels of API access.

I received this National Forest Service Briefing, regarding the pricing change last week:

U.S. Forest Service
National Forest System Briefing Paper
Date: August 17, 2015

Topic:  Addendum to Recreation One Stop Support Services Contract RFP for a Recreation Information Database API download cost recovery mechanism for high frequency, high-volume requests

Issue:  Questions and comments from prospective contractors for the R1S support services contract included significant concern about the costs associated with supporting a completely open API.  There is an incremental cost for each instance that a third party ‘calls’ the API.  In private industry, the volume of calls is often managed by provisioning access to the API by requiring registration and agreeing to the volume of calls in advance.  For third parties wishing to create an interface that will call the API frequently, private industry typically implements a tiered pricing approach where costs rise as volume increases.

In response to these concerns and to provide a mechanism for cost recovery for high frequency, high-volume requests, the R1S Program Management Team offered this solution by posting this statement to questions on FedBizOpps (FBO.gov).

Additionally, automated access to recreation data shall be free of charge for users making nominal data requests. The contractor may propose a fee structure applicable only to high volume data consumers. Such a fee structure shall be enforced through an agreement directly between the Contractor and the data consumer and shall be consistent with industry best practices and established market pricing. Should the contractor opt to propose such a fee structure, their proposal shall clearly state the applicable rates and details of the proposed fee structure.

A member of the open-data community quickly reacted to this provision indicating that it no longer meets the intent of the President’s Open Government executive order.  It is possible that media coverage will daylight dissatisfaction over this provision.

It is important to note that it shall be the R1S contractor’s responsibility to manage and control access to the API so that excessive calls from outside entities do not put unreasonable stress on the system that may cause performance issues or be malicious in nature.  To accomplish this, the R1S contractor will need to provide sufficient server capability and staff to manage and support the API and the consumers using it.  The costs for the basic service are contained in the fee-per-transaction model, which will support free access to the API for all users, with a cost-recovery mechanism in place for high-use consumers.

To clarify the intent of the government, the RFP will be amended to state:

The Government recognizes that high frequency, high-volume data requests may have a detrimental effect on the performance and security of R1S Reservation Services system and that the management and mitigation of such negative consequences drives costs to the contractor.  Accordingly, automated access to recreation data shall be free of charge for users making nominal data requests, however, the contractor may propose a fee structure, or establish access limitations, applicable only to higher volume data consumers.  Any proposed fee structure shall comply with OMB Circular A-130; Section 8 – Policy, which states, “Agencies will … Set user charges for information dissemination products at a level sufficient to recover the cost of dissemination but no higher.”

Summary/Key Points:

  • The RIDB API is now open and available to anyone to download free of charge.
  • Federal recreation data is and shall continue to be available in machine-readable formats and shall safeguard privacy, confidentiality, and security in compliance with the Open Data Executive Order.
  • The follow-on contract for R1S requires that in addition to more static recreation and inventory data, real-time availability data shall also be made available through an API.
  • The audiences we anticipate using the API are widely varied and include those who may want to incorporate federal recreation data into tourism portals and travel planning applications.  Others however include those who wish to produce new interfaces to the real-time availability data that could generate a very high volume of calls to the API.
  • We will continue to offer completely free access to the RIDB API for routine and reasonable requests in support of the President’s Open Government Executive Order.
  • R1S is allowing offerors for the follow-on contract to propose a cost-recovery fee structure for high-volume data customers that exceed reasonable access in accordance with OMB Circular A-130; Section 8.  These proposals will be considered as a provision within the new contract expected to be awarded in 2017.
  • The Recreation.gov API(s) will be funded entirely by recreation fee revenue generated through reservation transactions made by the general public.  By following private industry standards, R1S will be able to continue to provide free and open access to nominal users of the API without passing on higher costs associated with high volume use to the general reservation making public.  

Background: Charging fees for access to government APIs is a relatively new concept; however, open-data evangelists and private industry all agree that there is a time and place for creating a reasonable tiered pricing structure which supports free open data and provides a framework for managing the increased costs associated with higher end use.

Here are a few articles weighing both sides of this debate:

That concludes the briefing paper, but after I shared my thoughts with them, I received an update on what the language has evolved into, resulting in the following:

The Government seeks to encourage usage of the Recreation.gov API, especially for third parties that could use the API to initiate additional reservations. At the same time, the Government recognizes that it is difficult to predict the likely query volume on Recreation.gov’s APIs, and that very high-frequency API requests from third parties that do not result in reservations on the system could have a detrimental effect on the performance or cost of the system, without providing associated benefits to the contractor or the Government.

Accordingly, the contractor may propose an API management plan that protects against extremely high-frequency usage of the API from third-parties that are not driving reservations to the system, while also encouraging widespread usage from third parties that are making a reasonable number and frequency of requests, and provides a mechanism for supporting and encouraging heavy API usage from third parties who demonstrate value and success in driving reservations on the R1S reservation system. Such plans may include establishing guidelines for third party interaction with the API (i.e., recommended best practices for caching API responses, implementing conditional requests, and defining “abusive” API usage that may be restricted), requiring users to register to receive a token or key to access the API and using techniques such as rate-limiting the number of API requests allowed from a given third party over a given period of time (i.e., XXXX requests per hour), or introducing “tiers” of access that limit high-frequency, high-volume API usage to those third parties who are successfully driving reservations on the system or are willing to pay a nominal fee that covers the incremental costs of serving non-reservation-generating high-frequency requests. 
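
The mention of caching API responses and conditional requests in that language is worth making concrete. Here is a minimal sketch, in Python, of what a well-behaved high-frequency consumer might do--the endpoint URL is illustrative, and I am assuming the API returns an ETag header:

    import requests

    # Illustrative endpoint; I am assuming the API returns an ETag header.
    API_URL = "https://ridb.recreation.gov/api/v1/facilities"

    # First request: keep the ETag so later requests can be conditional.
    first = requests.get(API_URL)
    etag = first.headers.get("ETag")
    data = first.json()

    # Later requests: send If-None-Match so the server can answer with a
    # 304 Not Modified instead of re-sending the full payload.
    headers = {"If-None-Match": etag} if etag else {}
    later = requests.get(API_URL, headers=headers)
    if later.status_code == 304:
        print("Cached copy is still good; nothing new transferred.")
    else:
        data = later.json()

A 304 response costs the contractor almost nothing to serve, which is exactly the kind of consumer behavior this language is trying to encourage.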

This is the first precedent I have seen for a modern, API driven monetization strategy in the federal government. There are many examples of private companies charging for access to federal government data, but this is the first example of applying modern API business models on top of government APIs and open data.

To me, this conversation also goes well beyond just charging for high volume access to government APIs, to cover the cost of delivering API driven resources reliably. It also introduces the concept of service composition into government APIs. We've had government APIs keyed up with API Umbrella for some time now, an open source approach that is modeled after modern, commercial API management offerings. What the RIDB API approach does is open up the ability to introduce different access tiers, rate limits, and charges for commercial levels of usage around vital government resources.

When government follows the business model applied across the API sector, it will allow for free, lower levels of access, while also charging for higher levels of access that will keep critical APIs operating at scale, in a dependable way. I'm also hoping it opens up other approaches to service composition, like allowing developers to write data back, contributing to the evolution of government data. I'm just hoping the possibility of covering the cost of API operations is enough of an incentive for government agencies, and the vendors that serve them, to explore other approaches to API service composition.
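
To make the idea of service composition a little more concrete, here is a rough sketch of what tier-based access might look like--the tier names, limits, and prices are entirely hypothetical, just to illustrate the mechanics:

    import time
    from collections import defaultdict

    # Hypothetical service tiers, loosely modeled on the RFP language:
    # free access for nominal use, cost-recovery pricing above that.
    TIERS = {
        "public":     {"requests_per_hour": 1000,   "price_per_call": 0.0},
        "commercial": {"requests_per_hour": 100000, "price_per_call": 0.001},
    }

    call_log = defaultdict(list)  # api_key -> list of request timestamps

    def allow_request(api_key, tier):
        """Return True if this consumer is still within their tier's hourly limit."""
        now = time.time()
        window = [t for t in call_log[api_key] if now - t < 3600]
        call_log[api_key] = window
        if len(window) >= TIERS[tier]["requests_per_hour"]:
            return False  # over the limit; throttle, or bill at the next tier
        call_log[api_key].append(now)
        return True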

The trick in all of this will be teaching agencies and vendors about the transparency required to make it all work. Agencies, and their vendors, will have to share the algorithms they use to establish service tiers, rate limits, and pricing levels. They will also have to be transparent about which API consumers and partners exist in which tiers, to eliminate speculation around favoritism. This transparency will be critical to all of this working smoothly; otherwise the whole approach will suffer from illnesses similar to those that existing government procurement practices suffer from. APIs != Good by Default

The RIDB API approach, which allows vendors to add API service levels, rate limits, and a pricing layer, sets a precedent for generating much needed revenue that can cover the costs of API operations. While this may seem like a footnote on a single government RFP, as I mentioned in earlier posts on this subject, it represents how we will manage commercial usage of our virtual government resources in the future, in the same way we've done for our physical government resources for many years now.

See The Full Blog Post


Easy Way to Inspect HTTP(S) API Traffic in a Multi-platform, Multi-device Environment

This is a deep dive from one of my loyal readers, who doesn't just listen to what I write--he pushes my own research in new directions, and reports back to me. You have read his work before in API Police Report: Raw Thoughts From On-Boarding With Your API, and this time Bob Salita is building on my own proxy work with Charles Proxy. Guest posts aren't something I do on API Evangelist, but when you are pushing the conversation forward like Bob does, I can't help but share.

I'm a multi-platform, multi-device developer. I wanted an easy way to inspect HTTP(S) API traffic (requests and responses) from any of my many development devices. Inspection can be achieved by using intercepting proxy software such as Charles, Fiddler, squid, or mitmproxy. The usual method is to make proxy changes on a device so traffic is forced to a system running the proxy. This process is inconvenient and error prone in multi-device environments: for every device, one has to discover how to make a proxy change, manually make the change, and then manually reverse the change when inspection is done. There had to be a better way.

It occurred to me that the ideal setup would be a router whose WAN gateway was a system running a transparent proxy. Then, simply by connecting a device to the router, the transparent proxy software would capture its HTTP and HTTPS traffic. After weeks of research and testing many configurations and software packages, I am pleased to document a working configuration.

The first step is to install mitmproxy (Linux, OS X) or mitmdump (Linux, OS X, Windows) on a system. I've tested the mitm software on OS X and Windows only; the same software is known to run on Linux, but I personally haven't tested it there yet. OS X proved to be a better platform than Windows because it can run the console UI program called mitmproxy, whereas Windows users have to settle for the simplistic scrolling-text program called mitmdump. There is actually another option, called mitmweb, but it's beta. mitmweb is a fascinating variation on the proxy UI, and I highly recommend giving it a try--it may become a killer proxy UI.

The second step is to configure a dedicated router, call it "PROXY", that gateways traffic to the transparent proxy created in step one. The PROXY router eliminates the need for the annoying proxy changes on every device. You simply connect the device to PROXY and its traffic is instantly inspected. I used a super cheap router (TP-Link WR841N, $20) loaded with dd-wrt, but I think any router will probably work. Configure the router's WAN gateway to point to the transparent proxy system (e.g. 192.168.1.27). You may need to explicitly configure the WAN's DNS servers too; if so, I used Google's DNS servers at 8.8.8.8 and 8.8.4.4. Configure the LAN to be some private IP range--I used 192.168.3.1. When you connect a device, the router will assign it a private IP address (e.g. 192.168.3.50).

You're now done. Connect any of your devices to the PROXY router and the HTTP(S) traffic will appear in the mitm software on the transparent proxy (gateway) system. When finished inspecting, reconnect to your everyday router. No device fiddling ever needed.

Although the above steps are simple, it took considerable time to investigate all the alternative configurations. In the end, I found no other working software option--only mitm transparent proxying worked. Charles and Fiddler both lack a working transparent proxy feature. squid's transparent proxy likely works on Linux, but I didn't test it. It's unfortunate that Charles and Fiddler didn't work as transparent proxies--I much prefer Fiddler over mitmproxy because of Fiddler's superior UI and my personal preference for C# and .Net.
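
Once traffic is flowing through a setup like Bob describes, you can also script against it. Here is a minimal mitmproxy addon sketch of my own--the script name and filtering logic are hypothetical, and the transparent-mode flags have changed across mitmproxy versions, so check the docs for your release:

    # api_logger.py -- run with something like: mitmdump -s api_logger.py
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Only log traffic that looks like an API call, so application
        # chatter doesn't drown out the requests we care about.
        content_type = flow.response.headers.get("content-type", "")
        if "api" in flow.request.pretty_host or "json" in content_type:
            print(flow.request.method, flow.request.pretty_url,
                  flow.response.status_code)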

I am sharing Bob's instructions so that you all can play along as well. I have an API set up to process any Charles Proxy file and generate Swagger definitions from it, something I'd like to see happen with this implementation as well. Understanding how the web and mobile applications we depend on are using APIs is becoming increasingly important. Thanks for your work Bob!
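
If you want to experiment with that idea yourself, here is a rough sketch of the first step--reading a HAR export (Charles can save sessions as HAR) and collecting the unique method and path pairs that would seed a Swagger definition; the file name is just an example:

    import json
    from urllib.parse import urlparse

    def paths_from_har(har_file):
        # Collect the unique host/method/path combinations from a HAR export,
        # the raw material for stubbing out a Swagger definition.
        with open(har_file) as f:
            har = json.load(f)
        paths = set()
        for entry in har["log"]["entries"]:
            request = entry["request"]
            url = urlparse(request["url"])
            paths.add((url.netloc, request["method"], url.path))
        return sorted(paths)

    for host, method, path in paths_from_har("session.har"):
        print(host, method, path)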

See The Full Blog Post


What We Can Do To Make A Difference In The Wake Of Oracle v Google API Copyright Case

While we wait for the next steps of the long, drawn out Oracle v Google Java API copyright battle, I wanted to take some time and talk about what we can all be doing to actually make a difference. If you aren't familiar with the case, it is a legal dispute over Oracle's copyright and patent claims against Google's Android operating system. The court case started in California courts in 2012, with the most recent verdict coming in May 2014, when the Federal Circuit partially reversed the district court ruling, deciding in Oracle's favor on the copyrightability issue, and remanding the issue of fair use to the district court.

While we wait for appeals, endure the continued discussion, and read the trickle of FUD that comes out of the tech press, what can the tech community actually do to make a difference? First, we can reduce our anxiety about this being hellfire and brimstone for the current web API movement. The Oracle v Google legal battle is focused on Java APIs, which are a different beast than web APIs, and it would take another legal case to actually set a precedent that copyright applies to web APIs--when that happens, I will be showing up with my Internet of Things enabled pitchfork and torch.

When it comes to actually making a difference, you can openly license and share your existing API designs. If you think copyright applies to APIs, publish them CC-BY, CC0, or under another Creative Commons license--something you can link to using API Commons. If you do not think copyright applies to APIs, apply whichever licensing stance you feel is relevant. The important part is that we share how our APIs are licensed, accompanied by machine readable API definitions that can act as a representation of what is covered by a license.
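
Whatever your stance, the declaration itself can live right inside your machine readable definition. Here is a minimal sketch using Swagger 2.0's license object--the API title, version, and license choice are just examples:

    import json

    # The title, version, and license choice here are just examples.
    swagger = {
        "swagger": "2.0",
        "info": {
            "title": "Example API",
            "version": "1.0.0",
            "license": {
                "name": "CC-BY-4.0",
                "url": "https://creativecommons.org/licenses/by/4.0/",
            },
        },
        "paths": {},
    }

    print(json.dumps(swagger, indent=2))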

Your web API definitions are the recipes for your digital resources, and machine readable API definition formats like Swagger and API Blueprint are how we describe those recipes. This isn't your secret sauce--how you execute on the recipe is your secret sauce. The naming and ordering of your recipe is how you communicate on the menu that is your developer portal, where you will be cook'n up an assortment of API driven dishes. How you operate your business and deliver on your recipes is what you should be protecting, not the way you define and communicate what you do as an API driven business.

Openly sharing the definitions of the APIs we operate on the Internet is a meaningful action we can take in response to the Oracle v Google copyright case, but it is also an action that goes much further in making the API space better. Sharing machine readable definitions of the recipes that are our APIs, using formats like Swagger and API Blueprint, embracing media types like Collection+JSON and Siren, using common data formats like JSON API, and wrapping it all up as an APIs.json collection, will help us evolve beyond the bespoke API space we currently have.

Copyright is lower down on the list of obstacles we face in the API economy. There are numerous interoperability, reusability, and scalability issues that are much bigger threats than a far off Java API copyright battle. We can take all of these challenges head on by using existing API definition formats, media types, and data models when designing, deploying, and managing our APIs. If we do this, any future API copyright battle will never even take root, in a world where open formats, open source, and open API patterns rule.

Please work with me to contribute to this world, rather than giving fuel to one possible dystopian world suggested by Oracle.

See The Full Blog Post


Building Everything You Need For A Global Nervous System Using The Twitter API

This is one of several stories I am evolving as part of a series I'm calling API fairy tales. With these tales I want to educate business leaders, technologists, and government about the importance of Application Programming Interfaces (APIs), and how they are being applied in almost every aspect of business occurring online today--providing simple examples that mainstream users can learn from, as well as retell in their own circles.

Just two months after launching their new messaging startup in 2006, Twitter introduced the Twitter API to the world. The release was a response to the growing number of people scraping the site or creating rogue APIs, so Twitter exposed an official API, returning machine readable JSON and XML that developers could put to use, building things beyond what Twitter could build itself.

In just four short years, Twitter’s API had become the center of countless desktop clients, mobile applications, web apps, and businesses--even for Twitter itself, in its own mobile apps and public website. The majority of what you know as Twitter today came from the API ecosystem. If Twitter is the nervous system of the world, the Twitter API is the nervous system of Twitter.

In my opinion, Twitter is one of the most influential APIs operating today, showing what is possible when a dead simple platform does one thing well, then opens up access via an API and lets an open ecosystem of almost a million third-party developers build out the rest of the platform--establishing an external R&D environment for the fast growing communication platform.

In addition to its influence, Twitter is also one of the most complex API platforms in operation today, showing us the difficulty of running large developer ecosystems, where you have a million developers building whatever they want around your company's valuable assets. Sometimes this is a blessing, and sometimes this is a curse, and Twitter does pretty well at striking a balance between the two--acknowledging that APIs are more than just tech; they are also about business, and have a huge social and political aspect as well.

Not everyone loves Twitter, but you can’t deny that it has made a huge impact on how we conduct global business, and even how government and our larger society operate. When I look at the success Twitter has achieved, I can’t help but see API driven success, allowing a simple startup to spread around the globe, using APIs to connect people, places, and conversations--becoming the nervous system for the coming API economy, which will always have that human, social layer that Twitter reflects so well.

See The Full Blog Post


Can We Keep Important Scientific Research Projects Alive Through Revenue Generated From API Access?

I am spending an increasing amount of time thinking about how you monetize data, content, and other digital resources via APIs. A couple of very compelling layers to all of this work are pushing forward my thoughts on how and when government should charge for access to public data, as well as how and when private sector companies should charge for access to public data--lots to think about here.

Another layer to this conversation, introduced this week, centers around how we can keep data and content generated via publicly funded research, at publicly funded institutions, available, accessible, and moving forward, by applying the technology and business of the modern web API. I was contacted this week by a group at Caltech that is proposing a research-to-database hyperlinking project, which would provide one-click access from published biomedical research papers to authoritative biological databases such as WormBase (WB; wormbase.org), FlyBase (FB; flybase.org), and the Saccharomyces Genome Database (SGD; yeastgenome.org).

The primary objective of the project is to make information around scientific research, and the resources you need to accomplish the research, more accessible at the time of research--not after the fact. The secondary objective is to identify monetization opportunities within the industry for the data, and apply modern API methodologies to take advantage of them. In short, the goal is to build a viable business model on top of this highly valuable scientific data, to help keep these databases accessible, evolving, and alive.

This is where the API monetization discussions I've had in the past around generating revenue from public data come into play, but this time it is with publicly funded scientific research. It takes money to keep these databases up and running. It takes money to design, deploy, and manage APIs, and to operate the API ecosystem that will arise around these databases. Where is the ethical line between providing access to this important data, and generating enough revenue to keep the important work alive?

I am most definitely biased, but this is where APIs really begin to shine for me. It isn't just that we have the technology to open up access to these databases in a simple way that can be used in websites, web applications, browser add-ons, and mobile apps. We also have the ability to meter access, giving free access to those who need it, while charging those who can afford it, in order to fund operations. The third aspect of a modern approach to APIs that makes this possible is transparency. We can strike this operational balance around monetizing public data in a transparent way, so ethical concerns can be minimized.

This represents the technology, business, and politics of APIs that you hear me ranting about on a regular basis. I am not saying this formula will work for all scientific research, but I think if we can strike the right balance, we can uncover another revenue stream to keep important scientific research moving forward. I am just beginning these conversations, and in my own style, I'm working through my thoughts here on the blog. I would love to hear your thoughts, if you have any opinions in these areas.

Stay tuned for more as this project evolves...

See The Full Blog Post