{"API Evangelist"}

Keep Publishing Your API Definitions To Github So We Can Find Them

I was just getting started evolving my API definition discovery tools before I stepped away this summer, and it is something I am just picking up again now that I am back at it. Historically, there are three ways in which I find API definitions like OpenAPI Spec and API Blueprint:

  1. Behind API Documentation - When I come across API documentation deployed using Swagger UI or Apiary, I know that behind it there is an API definition -- sadly these are usually obfuscated rather than proudly shared with an icon + link.
  2. Website Harvesting - When I find a company that is doing things with APIs, either because they have a public API portal or have issued a press release, I add their URL to my crawler, suck down their entire site, and sift through the results for API definitions.
  3. Github - Using the Github API I regularly search the social coding platform using a variety of search terms which have proven to produce results for OpenAPI Specs and API Blueprints used in API related operations (a sketch of this kind of search is below). Many are just prototypes or people playing around, but they often yield some pretty interesting results about API operations I couldn't find any other way.
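
Here is a minimal sketch of the kind of Github search I am talking about -- the token is a placeholder and the queries are just a couple of examples, not my full arsenal, so treat it as an illustration rather than my actual harvesting code:

```python
# Search Github code for likely API definition files (illustrative queries only).
import requests

GITHUB_TOKEN = "YOUR_TOKEN"  # the code search endpoint requires authentication

queries = [
    'swagger filename:swagger.json',  # OpenAPI Spec (Swagger 2.0) definitions
    'swagger filename:swagger.yaml',
    '"FORMAT: 1A" extension:apib',    # API Blueprint files declare this format header
]

for q in queries:
    response = requests.get(
        "https://api.github.com/search/code",
        params={"q": q, "per_page": 50},
        headers={
            "Authorization": "token " + GITHUB_TOKEN,
            "Accept": "application/vnd.github.v3+json",
        },
    )
    response.raise_for_status()
    for item in response.json().get("items", []):
        # Each result points back to the repository and file path where the
        # definition lives, which I can then pull down and evaluate.
        print(item["repository"]["full_name"], item["path"])
```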

Out of all three of these approaches, I would say that Github holds the most promise for actually improving the world of API discovery, allowing me to find APIs in the wild. Ideally, APIs would employ hypermedia, or use solutions like JSON Home and APIs.json, but in the meantime, if y'all could use Github to host your API definitions, that would be awesome.

Remember, Github provides other benefits when hosting your JSON, YAML, and Markdown API definitions, like hosting your documentation, helping you manage versions, and providing other ways to display them using Jekyll and Liquid. It also helps developers and API analysts like me find your APIs, adding to your existing marketing and evangelism efforts.


GraphQL Seems Like We Do Not Want To Do The Hard Work Of API Design

We were talking about GraphQL in the API Evangelist Slack channel the other day, and the consensus seemed to be that GraphQL is a way to avoid the hard work involved with properly getting to know your API resources, and that it just opens up a technical window to the often messy backend of our database-driven worlds.

As an old database guy (1980s) I love me some SQL, but I am also a believer in what the API design, deployment, and management life cycle can bring to the table. APIs are about taking often very technically defined resources and making them accessible, and more intuitive (not always), to the people who are consuming and putting those resources to work.

Technologists love their new shiny toys that reflect their tech ideology, and this is what GraphQL seems like to me on the surface. I get the reasons behind doing it, and why developers like it, but I think it's missing the important aspects of why we are doing APIs. Sure, you won't know every possible scenario a developer will want to query using your API, but this is why we have feedback loops associated with API operations.

It seems like you would attract 5 very technical API developers who would love it, but then exclude 5 non-technical users. Why not just make a simple and limited API, have a conversation with all 10 users about what they need, and come up with an agile way to design, deploy, and manage new paths, and additional parameters to your existing paths? GraphQL seems like the numerous TEXTAREAS I've deployed behind the firewall to allow trusted users to write SQL to get at the database--which is just poking a hole, not very evolutionary at all.
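
To make the contrast concrete, here is a rough sketch using entirely hypothetical endpoints -- a single GraphQL query that asks the consumer to learn the schema and query language, next to a couple of simple REST paths that anyone can copy / paste:

```python
# Hypothetical example contrasting a GraphQL query with plain REST paths.
import requests

# GraphQL: one endpoint, and the consumer shapes the response -- powerful, but
# you have to understand the schema and the query language to get anything done.
graphql_query = """
{
  user(id: "123") {
    name
    orders(last: 5) { id total }
  }
}
"""
requests.post("https://api.example.com/graphql", json={"query": graphql_query})

# REST: two intuitive paths, evolved through feedback loops with consumers
# rather than trying to anticipate every possible query up front.
requests.get("https://api.example.com/users/123")
requests.get("https://api.example.com/users/123/orders", params={"limit": 5})
```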

GraphQL feels like the DBAs coming out from behind the firewall and, rather than adjusting to the world, implementing their way of doing things on everyone--which is classic IT / backend behavior. No average user will ever use a GraphQL interface, but they will copy / paste a URL and put it to work for them--if it is intuitive. Like the TEXTAREA solution, GraphQL feels like a quick fix to me and not one that looks to the future--I think my friend Mehdi Medjaoui (@medjawii) says it well with:

"When the wise man shows the moon, the idiot looks at the finger” s/moon/REST/ | s/finger/GraphQL/"

We developers and the IT class are always looking for the quick fix that fits with our ideology, and are resistant to change and to solutions that look outwards and to the future. I am not being critical of GraphQL just to be mean, I'm just asking if it is the forward-looking solution, with the simplicity that we need to do this at scale. I feel like GraphQL is just another tech solution that will have to be executed by tech people in the know, and not a solution that everyone will be able to implement. If you disagree with me, maybe you could help me learn more about the reasons behind GraphQL, and understand how this benefits non-technical folks and end-users.


The API Lightbulb Went On For Me When Amazon EC2 Launched A Decade Ago

It is the 10th anniversary of the launch of Amazon EC2 this month, and I think it is a good time to revisit what this has meant to the API space. If you have heard any of my keynote talks where I visit the history of APIs and share my story of how I became the API Evangelist, you have heard this before, but it is something I feel is worth repeating so that my new readers can play catch up.

In March of 2006 Amazon launched their new Amazon S3 service, and in August of 2006 they followed up with their launch of Amazon EC2. The Amazon S3 release interested me and I signed up right away, but the Amazon EC2 release is when the lightbulb went on for me when it came to the potential of web APIs--which eventually led to me launching API Evangelist in July of 2010. 

Prior to 2006 APIs were being used for what were considered very non-business activities, like publishing and sharing photos, videos, and web links. S3 opened up storage in a new way, but the potential for deploying server infrastructure around the globe using web APIs was a serious game changer (I do not use this phrase often). With the release of EC2, web APIs weren't just for fun and games anymore, or just a "hobby toy" as my SAP IT directors in Germany liked to tell me--you could now do real world business things with them.

It would take a couple more years for me to realize this potential while I was running events for SAP and for Google, as the VP of Technology at WebEvents Global, but by 2010 I had been touched by the holy API spirit. I then quit my nice six-figure job, fired up a blog, and hit the road spreading the gospel of how web APIs could make digital, and increasingly physical, resources more accessible to not just business, but also individuals, government agencies, and beyond.


Github Is Quickly Becoming My Most Important Discovery Source For The API Space

I have monitored the Github accounts and organizations of individuals and companies doing interesting things with APIs for some time now. However, recently this channel has increasingly become the way that I discover truly interesting companies, individuals, specifications, tools, and even services. The most interesting people and companies doing things with APIs usually understand the importance of being transparent and aren't afraid of publishing their work on Github.

Developers are often very poor at blogging, tweeting, and sharing their work, but because Github allows me to follow their work, and provides additional ways to surface things using Github trending, I'm often able to find things before they'll show up on other common channels like Twitter, LinkedIn, etc--if they do at all. You can subscribe to the changes for a Github user or organization using RSS, or you can do like I do, and use the API to dial in what you are following, and identify some pretty interesting relationships and patterns.
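
Here is a quick sketch of the API side of this, using the public organization events endpoint -- the organizations listed are just examples, not my full watchlist:

```python
# Poll the public Github events for a handful of organizations I am watching.
import requests

watchlist = ["apiaryio", "OAI"]  # example organizations only

for org in watchlist:
    events = requests.get(
        "https://api.github.com/orgs/{0}/events".format(org),
        headers={"Accept": "application/vnd.github.v3+json"},
    ).json()
    for event in events:
        # New repositories, releases, and pushes often surface interesting
        # specifications, tooling, and services before they show up anywhere else.
        print(org, event["type"], event["repo"]["name"], event["created_at"])
```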

The interesting things I'm discovering aren't always directly code related either. With the increased usage of Github for publishing API portals, documentation, and other resources, I am increasingly finding valuable security guides, white papers, presentations, and much more. All of this makes Github an important place to discover what is going on, while also helping ensure what you are working on around your API is being discovered. I'm thinking it is time for a refresh of my Github guide for API management, which I published a couple of years back, providing a fresh look at how successful API providers are using Github.


Beyond Just API Discovery: The Technical, Business & Political Decisions Needed At Runtime

I was included in a conversation the other day on Twitter about runtime API discovery, which reminded me of some thoughts I was processing before I walked away from work this summer, and before I dive back into the technical work, I wanted to refresh these thoughts and bring them to the surface. Blogging on API Evangelist, and the other channels I publish my work on, is how I work through these ideas out in the open, something that saves me expensive time and research bandwidth while I'm down in the trenches doing the coding and API definition work.

The Wider Considerations Of What Is API Discovery
Like APIs themselves, the concept of API discovery means a lot of different things to different people. I find that broadly it means actually finding an API (ie. searching on Google or ProgrammableWeb), but once you talk to a more technical API crowd, it often means the programmatic discovery of APIs. Ideally, this is something that is done using hypermedia supported discovery, but it can also be done by applying a standard like JSON Home or APIs.json. There are also many folks who are thinking about programmatic API discovery using OpenAPI Spec, API Blueprint, and other common API specification formats.

Some Thoughts On API Discovery At Runtime Today
The conversation I was pulled into was between some of the leading minds in the area of not just defining what APIs are, but also how we truly can scale, and conduct API discovery, consumption, and evolution of our resources in a logical way. This discussion is pushing forward how our web, mobile, and other systems can discover, put to work, and roll with the changes that occur around critical API resources. How a human finds a single API for their use is one thing, but how a system and application finds a single API and puts it to work at runtime is a whole other conversation.

The Hard Work To Define Runtime Discovery of APIs
Separating out the human and programmatic discussions around what is involved with the runtime discovery of APIs is just the first line of challenges we face. The second layer of challenges is often about cutting through dogma and ideology around specific approaches to defining an API. The third layer, I'd say, is just the hard work of separating out the numerous differences between APIs, each often possessing their own nuances and differing approaches to authentication. As with every other aspect of APIs, the challenges are both technical and human-centered, which slows progress and tempers expectations, but I trust the community will ultimately execute on this properly.

The Even Harder Work To Define Runtime Discovery Of Many APIs
While I'm actively participating in the current discussions around runtime API discovery using both hypermedia, as well as other approaches, I can't help but keep an eye out for the future of how we are going to do the same thing across many APIs--this is what I do as the API Evangelist. We have a lot of work ahead of us to make sure each individual API is discoverable at runtime, but we also have a significant amount of work to harmonize this at web scale across ALL APIs--which is why so many hypermedia evangelists are so passionate about their work.

The Technical Considerations Of API Discovery At Runtime
98% of the discussions around API discovery at runtime focus on the technical--as they should at this phase. Hypermedia design constraints, leading API definition specifications like OpenAPI Spec and API Blueprint, and API discovery formats like JSON Home and APIs.json are providing us with vehicles for moving this technical discussion forward. Ideally, our APIs should reflect the web, and when you land on the "home page" of an API, you should be presented with a wealth of links reflecting what the possibilities are (does your API have navigation?). Secondarily, if hypermedia is not desired or feasible, JSON Home and APIs.json should be considered, providing a machine readable index of what APIs are available within any domain, as well as additional details on what is possible using OpenAPI Spec and API Blueprint.
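
As a simple illustration of what that machine readable index gives you at runtime, here is a minimal sketch that walks an APIs.json file published at the root of a hypothetical domain -- the field names follow the APIs.json format as I understand it, but treat this as a sketch rather than a definitive implementation:

```python
# Walk an APIs.json index to discover the APIs available within a domain.
import requests

index = requests.get("https://example.com/apis.json").json()

print(index.get("name"), "-", index.get("description"))
for api in index.get("apis", []):
    print("API:", api.get("name"), "at", api.get("baseURL"))
    # Properties point off to machine readable definitions like OpenAPI Spec
    # or API Blueprint, which describe what is actually possible at runtime.
    for prop in api.get("properties", []):
        print("  ", prop.get("type"), "->", prop.get("url"))
```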

The Business Considerations of API Discovery At Runtime
As technologists, we often fail when it comes to considering the business implications of our solutions, ranging from making sure we make money to keep them operational, all the way to industry-wide influences we should be aware of. I see many discussions amongst API specialists fall short in this area, which is why I started API Evangelist in the first place, and why I'm pushing these thoughts forward and sharing them with the public, even before they are fully baked.

At runtime, the technical considerations of where an API is, how to authenticate, what parameters to use, and other details need to be clear. However, when you elevate this process to operate across many APIs, business criteria also become important--things like what plans are available, what API resources cost, and whether there are volume options available. The example I like to use in this scenario is from the world of SMS, and making runtime business decisions across nine separate SMS APIs.

At runtime, I may have different business concerns with each execution, even after I know where the APIs exist. For some SMS blasts I may want to use the cheapest provider, while in other campaigns I may choose to use a higher priced, more trusted provider. These considerations are difficult enough for a human to make in 2016, let alone to make in a programmatic way at runtime--something I've spent some cycles on, developing schemas and tools to help me sort through the mess. I have been able to establish patterns across some of the more mature API areas like SMS, email, search, and compute, but we are going to have to wait for other areas to evolve before this is even feasible.
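
To show how simple the business half of the decision could be once the metadata exists, here is a purely hypothetical sketch -- the providers, prices, and trust scores are made up, and this machine readable plan data is exactly what most platforms do not yet share:

```python
# Hypothetical runtime business decision across several SMS providers.
providers = [
    {"name": "provider-a", "price_per_message": 0.0075, "trust": 0.99},
    {"name": "provider-b", "price_per_message": 0.0060, "trust": 0.92},
    {"name": "provider-c", "price_per_message": 0.0100, "trust": 0.995},
]

def pick_provider(campaign_type):
    """Pick the cheapest provider for bulk blasts, the most trusted otherwise."""
    if campaign_type == "bulk-blast":
        return min(providers, key=lambda p: p["price_per_message"])
    return max(providers, key=lambda p: p["trust"])

print(pick_provider("bulk-blast")["name"])     # cheapest wins
print(pick_provider("transactional")["name"])  # most trusted wins
```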

There is a reason why I call my research in this area API plans and not simply API pricing. I feel this label reflects the future of business decisions we will have to make at runtime, which won't always be simply about pricing, and will hopefully reflect our overall business plans--which are executed in real time at runtime in milliseconds. Sadly, old ways of doing business by the enterprise continue to cast a shadow on this area, with companies hiding their pricing pages behind firewalls, and not sharing the algorithms behind pricing decisions, let alone looking outward and following common industry patterns--beliefs around intellectual property and what is secret sauce will continue to hinder this all moving forward.

The Political Considerations of API Discovery At Runtime
Another area I have found myself paying attention to as the API Evangelist, beyond just the technology and business of APIs, is what I call the politics of APIs. Alongside the technical and business considerations, these often politically charged areas will have to be considered at runtime. Which API has the terms of service and privacy policies that reflect my company's strategy? Which API is the most reliable and stable? Can I get support if something fails? Is the long-term strategy of an API in alignment with our long-term strategy, or will they be gone within months due to funding and investment decisions (or the lack of them)? There are many political considerations that will have to be made at the programmatic level and included in runtime discovery and decision making around API integration(s).

Similar to the business considerations, I have also invested some cycles into understanding the variability some providers are applying when it comes to the politics of APIs, like variability in terms of service and pricing, and how pricing, plan availability, stability, and other ranking criteria can be made more machine readable and applied at runtime. As with the business concerns around API integration, there are many obstacles present when we are trying to make sense of the political impact at runtime. As more API providers emerge which are not resistant to sharing their API plans, I am able to document the variables at play in these algorithms and share them with the wider industry, but alas, many companies are holding these elements too close to their chest for the conversation to move forward in a healthy manner.

It is easy to think about the political runtime decisions that need to be made around APIs as purely being about terms of service, but there are much grander considerations emerging, like which country and region we deploy into, and regulatory considerations that will have to be followed when putting API resources to work, or possibly injected at runtime like we are seeing within the drone space. Just as terms of service guide almost everything we do online today, the politics of APIs will govern the runtime decisions that are made in the future.

Beyond Discovery And Considering The Technical, Business And Political Decisions Needed At Runtime
This is just a glimpse at the long road we have ahead of us when it comes to truly reaching the API economy we all like to talk about in the sector. Unfortunately, there are also many obstacles in the way of us getting to this possible future. We have to increase our investment in hypermedia and web-centric API solutions, and not just vendor-driven API solutions if we are going to move down this road. We have to be more transparent about our API plans, pricing, and the variables that go into the human and algorithmic business decisions that are driving our API platforms. We also have to start having honest discussions about the terms of service, privacy policies, service level agreements, and regulation that are increasingly defining the API space. 

I am optimistic that we can move forward on all of this, but current beliefs around what is intellectual property, something that is fueled by venture capital, and further set back by legal struggles like the Oracle v Google API copyright case are seriously hurting us. The definition of your API is not IP or secret sauce. Your pricing and plan variables are not your secret sauce, and should not be hidden behind the firewall in the Internet age--regardless of your enterprise sales belief. The only way that we are going to continue meaningful automation of the growing number of resources being made available via APIs using Internet technology, is to share vital metadata out in the open, so we can make sure we are all making proper, consistent decisions at runtime--not just technically, but also the right business and political decisions that will make the API economy go round.


APIs Are Not Just Meant For Killer Apps, They Can Also Be A Lifeline For Users

In the Silicon Valley rat race, users often become collateral damage amidst the entrepreneurial quest to get rich building the next killer startup. I've heard many startups like Snapchat and Pinterest state that the reason they don't want to do APIs is that they don't want developers building unwanted applications on their services, something that stems from a mix of not understanding modern approaches to API management, and not really thinking about their end-users' needs (both of these companies now have APIs, but for different reasons).

I am sure that these platforms are often more concerned with locking in their userbase than allowing them to migrate their data, content, and other media off the platform for their own interests and protection. As companies race forward towards their exits, or in many cases their implosions, users often lose everything that they have published on a platform, many times even if they've been paying for the service.

An API is not always just meant for developers to build the next killer website or mobile application integration that benefits themselves and the platform. Sometimes these applications are focused on providing data portability, syncing, and important backup solutions for users--allowing them to minimize the damage in their personal and professional worlds when things go wrong with startups. While data portability and data dumps can alleviate some of this, oftentimes what they produce is unusable, and an API often allows for more usable real world possibilities.

As an API provider, you do not have to approve every developer and application that requests access. If an application is in direct competition with, or does not benefit, your platform and its users--you can say no. I encourage ALL platforms to have a public presence for their APIs (you know you have them) and incentivize developers to build data portability, syncing, and backup solutions for users. APIs are not just for encouraging developers to build the next killer startup, sometimes they will just help protect your users when things go wrong with your startup vision--make sure to think beyond just your desires and remember that there are people who depend on your service.


Add A Prominent Icon Link To Your API Definition On Your Documentation Page

In an effort to help folks understand the many layers of just exactly what an API is and how people are using them, I'm going to emphasize (again) the importance of sharing your API definition publicly. I'm not going to talk about why you should have an API definition for your API; if you need a reason, go look at the growing number of ways that API definitions are driving a modern API life cycle--this post is about making sure you are sharing it properly once you have one crafted.

I'm increasingly stumbling across OpenAPI Spec-driven Swagger UI documentation for APIs, where I then have to fire up my Chrome developer tools to reverse engineer the path to the OpenAPI Spec--this is dumb. If you have an API definition available for your API, make sure it is available in a prominent location within your API portal, preferably using an easy-to-find icon and supporting link.

Your API definition isn't just driving your API documentation. It is being used by API discovery search engines like APIs.io, to get up and running in API clients like Postman, and to help me monitor, test, and troubleshoot my API integrations using Runscope. Please stop hiding them! I know many of you think this is some secret sauce, but it isn't. You should be proudly sharing your definitions, and making them available to your consumers with one click, so they can more quickly integrate, as well as successfully manage their ongoing integration.


Using Anchors In Your FAQ And Other API Support Pages

I was going through some of the Twitter feeds of the APIs that I track and noticed Spotify's team providing support to some of their API users with quick links / anchors to the answers in their API user guide available at developer.spotify.com. This might sound trivial, but having an arsenal of these links, so you can tweet them out like Spotify does, can be a real time saver.

This is pretty easy to do with a well-planned API portal and developer resources, but it is also something you can rapidly add to or change using a frequently asked questions page for your API. The trick is to make sure you have anchors to the specific areas you are looking to reference when providing support for your community.

Another benefit of doing this beyond just developer support is in the name of marketing and evangelism. I'm often looking for specific concepts and topics to link to in my stories, and if an API doesn't have a dedicated page or an anchor for it, I won't link it--I do not want my readers to have to dig for anything. The trick here is you need to think like your consumers, and not just wear your provider's hat all the time.

When crafting your API portal and supporting resources, make sure you provide anchors for the most requested resources and other information related to API operations, and keep the links handy so you can use them across all your support and marketing channels.


Is Your Sales Deal Size Just Too Big To Be Reading API Evangelist?

I am blessed to have people in the space who have supported what I do for the last six years. Companies like 3Scale, Restlet, WSO2, Cloud Elements, and others have consistently helped me make ends meet. Numerous individuals stepped up in May to help me make it through the summer--expecting nothing in return, except that I continue being the API Evangelist.

I do API Evangelist because I enjoy staying in tune with the fast-growing landscape of industries being touched by APIs. I believe in what is possible when individuals, companies, organizations, institutions, and government agencies embark on their API journey (aka digital transformation). I do not operate as the API Evangelist to sell you a product or service, or to get rich. Don't get me wrong, I do ok, but I definitely am not getting rich--all I got is the domain, the Twitter account, and my stories.

The prioritization of sales and profits over what is really important in the space always blows my mind, but rarely ever surprises me. I find myself regularly worrying about the companies and individuals who focus on sales over actual transformation, but I have to admit my friend Holger Reinhardt's post about the motivations behind their Wicked (cool) open source API management made me chuckle. Their API management work was in response to a sales lead that "felt that our focus on ‘just enough API management’ was too narrow and not addressing the larger needs (and bigger deal) of the ‘Digital Transformation’ of the Haufe Group." << I LOVE IT!!!

I've been through hundreds of enterprise sales pitches, sitting on both sides of the table, and experiencing this bullshit song and dance over and over was one of the catalysts for leaving my work with SAP in 2010 and starting API Evangelist. I just wanted to tell honest, real stories about the impact technology could make--not scam someone into signing a two or three-year contract, or be duped by a vendor into doing the same. Granted, not all sales people are scammers, but if you are in the business, you know what I'm talking about.

All I can say is I am very glad I do not have to live in a sales deal-driven world and I refuse to go back. To brag a little, I know that a significant portion of my readers are enterprise. People who work at IBM, SAP, Oracle, SalesForce, Microsoft, Capital One, and on, and on read my blog, and I want you all to know: That NONE of your deal sizes are too big, or too small to be reading my blog--I give a shit about all of you. However, maybe you could let me know what your expected budget might be? ;-)


I Am Digging Stripe's New Interactive API Documentation Walkthrough

I am digging Stripe's new documentation release, and specifically their interactive API documentation walkthrough. The new "try now" section of their documentation provides an evolved look at what is possible when it comes to providing your API consumers the documentation they need to get up and running.

The new documentation provides not just a code example of processing a credit card charge--it walks you through accepting a credit card, creating a new customer, charging the card, establishing a recurring plan, and creating a recurring customer subscription.
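
For reference, here is a rough sketch of the first couple of steps in that flow using plain HTTP against the Stripe API -- the key and customer details are placeholders, and the walkthrough itself does all of this with copy / paste curl commands:

```python
# Create a customer and charge their card via the Stripe API (placeholder key).
import requests

STRIPE_KEY = "sk_test_YOUR_KEY"  # placeholder test secret key
auth = (STRIPE_KEY, "")          # Stripe uses the secret key as the basic auth username

# Create a customer attached to a tokenized test card.
customer = requests.post(
    "https://api.stripe.com/v1/customers",
    auth=auth,
    data={"email": "jane@example.com", "source": "tok_visa"},
).json()

# Charge the customer's card for $20.00.
requests.post(
    "https://api.stripe.com/v1/charges",
    auth=auth,
    data={"amount": 2000, "currency": "usd", "customer": customer["id"]},
)

# The walkthrough continues from here into recurring plans and subscriptions.
```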

The walkthrough is simple, informative, and helpful. It helps you understand the concepts at play when integrating with the Stripe API, in a language agnostic way. I was super impressed with the ability to copy, paste, and run the curl commands at the command line, and when I came back to the browser--it had moved to the next step in the walkthrough. 

The new Stripe API documentation walkthrough is the most sophisticated movement forward in API documentation I've seen since Swagger UI. It isn't just documentation presented in an interactive way--it walks you through each step, bordering on what I'd consider API curriculum. All without needing an actual live token--I wasn't even logged in. Additionally, Stripe made sure they tweeted out the changes and included a slick GIF to demonstrate the new interactive capabilities of their documentation.


Going The Distance To Help API Consumers Find Their API Keys And Tokens

I am always amazed at how difficult it can be to obtain API keys, or fire up an initial set of OAuth tokens, when kicking the tires on a new API. I would also say that I am regularly impressed by the distance API providers will go to help API consumers obtain the keys they need to make a successful API call.

One example of this is present in the new Stripe API documentation. Their new code samples give you a slick little alert every time you mouse over a demo key. The alert gives you a quick link to log in and obtain the keys you need to make an actual call.

While I like this approach, I also like the way Twitter does this, giving me a dropdown listing all of my applications and allowing me to choose from any of the apps I currently have registered--maybe the two approaches could be merged?

Both are great examples of API providers going the extra distance to make sure you understand how to authenticate with an API, and get your API keys and OAuth tokens. If you know of other good examples of how API providers are working to make sure authentication is as frictionless as possible, making API keys and OAuth tokens more accessible directly within API docs--let me know.

This is an area I think interactive documentation has made significantly easier, but things seem to have stagnated. It is definitely an area I'd like to see move forward, eventually providing cross-API provider solutions that developers can put to use.


Watching Out For Your API Keys & Tokens On The Open Internet

I was just learning about Auth0's new password breach detection service, adding to the numerous reasons why you'd use their authentication service instead of going at it on your own. It's an important concept I wanted to write about so that it gets added to my research, and stays present in my thinking around API authentication and security going forward.

Keeping an eye out for important identity and authentication related information used as part of my API consumption is a lot of work--it is something that I'd love to see more platforms assist me with. I've written about AWS communicating with me around my API keys, and I could see an API key and token management solution being built on top of their AWS Key Management Service. I've also received emails from Github about an OAuth token of mine showing up in a public repo (happened once ;-( ).

Many application developers do not have the discipline to always manage API keys & tokens in a safe and secure way (guilty). It seems like something that could become a default for API providers--if you issue keys and tokens, then maybe you should be helping consumers keep an eye out for them on the open Internet << Which smells like an opportunity for some API-focused security startup.

Have you seen any other API providers offer key and token monitoring services? Is there anything that you do as an API consumer to keep an eye out for your own keys and tokens? Search for them on Github via the API? Manually search on Google? I am curious to learn more about what people are doing to manage their API keys and tokens.
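
For what it is worth, here is a quick sketch of how the Github option could work -- the token and key prefix are placeholders, so adjust for whatever pattern your provider's keys follow:

```python
# Search Github code for a key prefix that might belong to you (placeholders only).
import requests

GITHUB_TOKEN = "YOUR_TOKEN"
KEY_PREFIX = "sk_live_"  # placeholder -- whatever prefix your provider uses

results = requests.get(
    "https://api.github.com/search/code",
    params={"q": '"{0}"'.format(KEY_PREFIX)},
    headers={
        "Authorization": "token " + GITHUB_TOKEN,
        "Accept": "application/vnd.github.v3+json",
    },
).json()

for item in results.get("items", []):
    # Anything that comes back deserves a manual look, and possibly a key rotation.
    print(item["repository"]["full_name"], item["path"], item["html_url"])
```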


Providing A Dedicated Mobile SDK Page For Your API

Every API provider will have slightly different needs, but there are definitely some common patterns which providers should be considering as they are kicking off their API presence, or looking to expand an existing platform. While there are some dissenting opinions on this subject, many API providers offer a range of language-specific, mobile, and platform SDKs for their developers to put to use when integrating with their platforms.

A common approach I see from API providers when it comes to managing their SDKs is to break out their mobile SDKs into their own section, which the communications API platform Bandwidth has a good example of. Bandwidth provides iOS and Android SDKs, as well as a mobile SDK quick start guide, to help developers get up and going. This approach provides their mobile developers a dedicated page to get at available SDKs, as well as other mobile-focused resources that will make integration as frictionless as possible.

Unless you're anti-SDK, you should at least have a dedicated page for all of your available SDKs. I would also consider putting all of them on Github, where you will gain the network effect brought by managing your SDKs on the social coding platform. Then, when it makes sense, also consider breaking out a dedicated mobile SDK page like Bandwidth--I will also work on a roundup of other providers who have similar pages, to help understand a wider variety of approaches when it comes to mobile SDK management.


More Considerations When Providing An Anonymous App For Your API Service

I wrote a post the other day about Postman.io having a limited, anonymous version of their API modeling tool. I stumbled across it while I was trying to upgrade my Stoplight.io account. Shortly after I tweeted out the blog post, John Sheehan (@johnsheehan) from Runscope chimed in with some wisdom on the subject.

Definitely something to consider. In the current online environment, it might become quite a pain in the ass to maintain an anonymous app, as John points out. This is one reason I work to publish my API tooling as standalone JavaScript applications, which run 100% on Github. First off, they run on Github infrastructure and use Github's bandwidth. Second, this type of app is forkable, and people can choose to run them wherever they desire--on Github, or any other site they wish.

I'll keep an eye out for other anonymous apps built on top of API service providers, or individual APIs--maybe there are other successful models out there, or maybe there are also some other cautionary tales we should hear.


Managing The Apps Across All My API Accounts

I am going through all of my online accounts changing passwords, and one of the things I do along the way is check which applications have access to my digital self. Increasingly my accounts have two dimensions of applications: 1) apps I have created to allow me to make API calls for my system(s), and 2) apps I have given access to my account using OAuth. This is a process that can take quite a bit of time, something that is only going to grow in coming years.

The quickest example of this in the wild is Twitter. I have authorized 3rd party applications to access my account, and I have also developed my own applications, which have various types of access to my profile--this is how I automate my Tweets, profiling of the space, etc. I'm regularly deleting apps from both of these dimensions, which I tend to add as I test new services, and build prototypes. 

I really wish the platforms I depend on would allow me to manage my internal and 3rd party applications via an API. If I could aggregate applications across all the accounts I depend on, manage the details of these applications (including keys & tokens), and add and remove them as I need--that would be awesome! If nothing else, maybe this will put the bug in your ear to consider this for your own world, and you can help put the pressure on existing API providers to open up OAuth and app management APIs for us to help automate our operations.
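
To show what I mean, here is a purely hypothetical sketch of the kind of app management API I wish existed across providers -- none of these endpoints or fields are real, they just illustrate the aggregation I'd like to be able to do:

```python
# Imaginary cross-provider audit of the applications with access to my accounts.
import requests

accounts = [
    {"provider": "twitter", "base": "https://api.twitter.example/account", "token": "YOUR_TOKEN"},
    {"provider": "github", "base": "https://api.github.example/account", "token": "YOUR_TOKEN"},
]

for account in accounts:
    # Imaginary endpoint listing both the apps I created and the apps I authorized.
    apps = requests.get(
        account["base"] + "/apps",
        headers={"Authorization": "Bearer " + account["token"]},
    ).json()
    for app in apps:
        print(account["provider"], app["name"], app["last_used"], app["scopes"])
        # From here I'd want to be able to rotate keys, or revoke the app entirely.
```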


Adding An Atom Feed For The API Evangelist Blog

The API Evangelist platform is far from perfect; there are always portions of it that just aren't finished yet (always a work in progress). I am always thankful that people put up with my API Evangelist workbench, always changing and evolving. Even with this unfinished status, there are some unfinished or broken elements that are just unacceptable--one of these is the lack of an Atom feed for my blog.

Thankfully I have other folks in the space who are kind enough to remind me of what's broken when it comes to specifications, and ultimately what is broken on my website.

Thanks Erik for gently pushing back. In response I went ahead and added an Atom feed for the API Evangelist blog, to add to the existing RSS feed. I made sure the Atom feed validated and added a link relation to the header of the blog. I am going to do the same to all my individual research areas with the next push of their website template.

Syndication of my writing is important, so my blog is now available via RSS, Atom, and JSON. Thanks Erik for helping make sure the web is not entirely broken. ;-)


You Can Make Money While Also Doing Important Work For The API Space

I see a lot of companies doing things with APIs, and I often find myself struggling to find companies who are doing important things that benefit the community, have a coherent business model, and provide clear value via their services. In the drive to obtain VC funding, or after several rounds of funding, many companies seem to forget who they are, slowly stop doing anything important (ie. research, open source, etc.) with their platform, and focus on just making money.

One phrase I hear a lot from folks in the space is, "it's just business", and that I should stop expecting altruistic behavior around APIs and within the business sectors they are impacting--APIs are about making money and building businesses, hippie! Oftentimes I begin to fall for the gaslighting I experience from some in the API space, and then I engage with services like CloudFlare.

I use CloudFlare for all my DNS, but I also stay in tune with their operations because of what they do to lead the DNS space, and because of their DNS API. I was going to craft this post after reading their blog post on the Cuban CDN, then I read their post on an evenly distributed future, and I'm renewed with hope that the web just might be ok--things might not be as dark as they feel sometimes.

I follow what CloudFlare is doing because their work represents the frontline of the API sector--DNS. This makes it not just about DNS; it also becomes about security, and potentially one of the most frightening layers of security--the distributed denial of service attack (DDoS). CloudFlare clearly gets DNS, and cares so much that they have become super passionate about understanding the web as it exists (as messy as it is), and pushing the conversation forward when it comes to DNS, performance, and security.

CloudFlare makes DNS accessible for me, and for other less-technical professionals like my partner in crime Audrey Watters (@audreywatters), who also uses CloudFlare to manage her DNS, with no assistance from me. I operated my own DNS servers from 1998 until 2013, and it is something that I will never do again, as long as CloudFlare exists. CloudFlare knows their stuff and they help me keep the frontline of my domains healthy and secure.

There are a number of companies I look up to in the space, and CloudFlare is one of them. For me, they prove that you can build a real business, do important work that moves the web forward, be passionate about what you do, while also being transparent along the way. Knowing this is possible keeps me going forward with my own research, and optimistic that this experiment we call the web might actually survive.


If You Use API Definitions There Is No Excuse For Not Having An API Sandbox

I have long been a proponent of using API definitions, not just because you can deploy interactive API documentation, but because they open up almost every other stop along the API life cycle. Meaning, if you have an OpenAPI Spec definition for your API you can also generate SDKs using APIMATIC, and API monitors using Runscope. 

One of the examples I reference often is the API sandbox solution appropriately named Sandbox. I use Sandbox in this way because API mocking using API definitions is a pretty easy concept for developers to wrap their heads around, and because their home page is pretty clear in articulating the opportunities opened up for your API when you have machine-readable definitions available.

Their opening text says it well, helping you understand that because you have API definitions you can "accelerate application development", and provide "quick and easy mock RESTful API and SOAP webservices". The presence of common API definition icons, including API Blueprint, OpenAPI Spec, RAML, and WSDL, then provides a visual reinforcement of the concept.

Sandbox opens up mocking and sandbox capabilities, which I lump together under one umbrella I call API virtualization. You can easily create, manage, and destroy sandboxes for your APIs using their API and your API definitions. I envision API providers following Cisco's lead and having any number of different types of sandboxes running for developers to put to work, using server virtualization (virtualization on virtualization).
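
To make the concept a little more concrete, here is a minimal sketch (not Sandbox's actual service, just a generic illustration using Flask and a local Swagger 2.0 style definition) of serving the documented example responses straight out of an API definition:

```python
# Serve mock responses for simple GET paths using the examples in a definition file.
from flask import Flask, jsonify
import yaml

app = Flask(__name__)

with open("swagger.yaml") as handle:  # assumes a local API definition file
    definition = yaml.safe_load(handle)

for path, methods in definition.get("paths", {}).items():
    responses = methods.get("get", {}).get("responses", {})
    ok = responses.get(200) or responses.get("200") or {}
    example = ok.get("examples", {}).get("application/json")
    if example is not None:
        # Only handles simple, un-parameterized paths -- enough to illustrate the idea.
        app.add_url_rule(path, endpoint=path, view_func=(lambda e=example: jsonify(e)))

if __name__ == "__main__":
    app.run(port=8080)
```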

With the evolution of API definition-driven solutions like Sandbox for providing virtualized instances of your APIs, there really isn't any excuse for not having a sandbox for your API. For device focused APIs, a sandbox is essential, but even for web and mobile-focused APIs you should be providing places for your API consumers to play, and not requiring them to code against production environments by default.


CRX Extractor Wins For The Best Customer Quote Ever

Having quotes from your customers on your company website is a no-brainer. Finding the best examples of brands and companies putting your valuable service or tool to work demonstrates that it has value, and that people are using it.

While playing around with a new Chrome add-on reverse engineering tool called CRX Extractor, I noticed the customer quote at the bottom of their page.

They win in my book for having a funny, but also pretty realistic, endorsement for why you should be using a product. I'm using the tool to better understand how browser add-ons are putting APIs to work, and to evolve my own creations as well, but I can see how reverse engineering them to make sure they are secure is a pretty important aspect of operating your company securely online.

When it comes to marketing your API, having quotes from smart people, as well as brands that people know, makes sense, but I would also add that making them funny, and allowing ourselves to laugh along the way, can make a significant impact with the right people as well.


An OpenAPI Spec For A Building Permits API

One of the reasons for crafting API definitions like OpenAPI Spec for our APIs, and openly sharing them on the web, is so that the patterns will get used and reused by other API providers. That might sound scary to some companies, but really that is what you want--your API design used across an industry. Your API definition is not your IP--your IP is the magic behind your API, and the way you approach all the supporting elements around your API operations.

There are numerous industries where I'd like to see a common API definition emerge and get reused, and one of the more obvious ones is in the area of building permits. Open Permit has shared their API definition, publishing the OpenAPI Spec that drives their Swagger UI documentation. This is a great example of an API definition that should be emulated across the industry, because the money to be made is not in the API design, but in the portion of our economy that the API will fuel when it is in operation.

Can you imagine if all cities, contractors, and vendors who service the construction industry could put APIs to use, and even better, put common patterns to use? If you have ever tried to build something residential or commercial and had to pull a permit, you understand. This is one industry where APIs need to be unleashed, and we need to make sure we share all possible API definitions so that they can get used, and we aren't ever re-inventing the wheel.