The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


What APIs Excite Me And Fuel My Research And Writing

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

The number one API that gets me out of bed each day, with an opportunity to apply what I've learned in the API sector, is the Human Services Data API (HSDA), an open API standard I am the technical lead for, which helps municipalities and human service organizations better share information that helps people find services in their communities. This begins with the basics like food, housing, and healthcare, but in recent months I'm seeing the standard get applied in disaster scenarios like Hurricane Irma to help organize shelter information. This is why I do APIs. The project is always struggling for funding, and is something I do mostly for free, with small paychecks whenever we receive grants, or find projects where we can deliver an actual API on the ground.

Next, I'd say it is government APIs at the municipal and state levels, but mostly at the federal level. I was a Presidential Innovation Fellow in the Obama administration, helping federal agencies publish their open data assets and take inventory of their web services. I don't work for the government anymore, but that doesn't mean the work has stopped. I'm regularly working on projects to ensure RFPs and RFIs have appropriate API language in them, and talking with agencies about their API strategy, helping educate them about what is going on in the private sector, and often even across other government agencies. APIs like the new FOIA API, Recreational Information Database API, Regulations.gov, IRS APIs, and others will have the biggest impact on our economy and our lives in my opinion, so I make sure to invest a considerable amount of time here whenever I can.

After that, working with API education and awareness at higher educational institutions is one of my passions and interests. My partner in crime Audrey Watters has a site called Hack Education, where she covers how technology is applied in education, so I find my work often overlapping with her efforts. A portion of these conversations involve APIs at the institutional level, and working with campus IT, but mostly it is about how the Facebook, Twitter, WordPress, Dropbox, Google, and other public APIs can be used in the classroom. My partner and I are dedicated to understanding the privacy implications of technology, and how APIs can be leveraged to give students and faculty more control over their data and content. We work regularly to tell stories, give talks, and conduct workshops that help folks understand what is possible at the intersection of APIs and education.

After that, I'd say the mainstream API sector keeps me interested. I'm not that interested in the whole startup game, but I do find a significant amount of inspiration from studying the API pioneers like SalesForce and Amazon, social platforms like Twitter and Facebook, as well as the cool kids like Twilio, Stripe, and Slack. I enjoy learning from these API leaders, studying their approaches, but where I find the most value is sharing these stories with folks in SMB, SME, and the enterprise. These are the real-world stories I thrive on, and enjoy retelling as part of my work on API Evangelist. I'm a technologist, so the technology of doing APIs can be compelling, and the business of doing this has some interesting aspects, but it's mostly the politics of doing APIs that intrigues me. This primarily involves the politics of the humans involved within a company or industry, which I always find to be the biggest challenge of doing APIs.

In all of these areas, what actually gets me up each day is being able to tell stories. I've written about 3,000 blog posts on API Evangelist in seven years. I work to publish 3-5 posts each weekday, with some gaps in there due to life getting in the way. I enjoy writing about what I'm learning each day, showcasing the healthy practices I find in my research, and calling out the unhealthy practices I regularly come across. This is one of the reasons I find it so hard to take a regular job in the space, as most companies are looking to impose restrictions, or editorial control over my storytelling. This is something that would lead to me not really wanting to get up each day, and is the number one reason I don't work in government, resulting in me pushing to make change from the outside-in. Storytelling is the most important tool in my toolbox, and it should be in every API provider's toolbox as well.


Who Are The Most Influential People And Companies To Keep An Eye On In The API Space

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

When it comes to the most influential people and companies in the API space that I am keeping an eye on, it always starts with the API pioneers. This begins with SalesForce, eBay, and Amazon. Then it moves into the social realm with Twitter and Facebook. All of these providers are still moving and shaking the space when it comes to APIs, and operating viable API platforms that dominate in their sector. While I do not always agree with the direction these platforms are taking, they continue to provide a wealth of healthy and unhealthy practices we should all be considering as part of our own API operations, even if we aren't doing it at a similar scale.

Secondarily, I always recommend studying the cloud giants. Amazon is definitely the leader in this space, with their pioneering, first mover status, but Google is a close second, and enjoys some API pioneering credentials with Google Maps and other services in their stack. Even though Microsoft waited so long to jump into the game, I wouldn't discount them from being an API mover and shaker, with their Azure platform making all the right moves in the last couple of years as they played catch up. These three API providers are dictating much of what we know as being APIs in 2017, and will continue to do so in coming years. They will be leading the conversation, as well as sucking the oxygen out of other conversations they do not think are worthy. If you aren't paying attention to the cloud APIs, you won't remain competitive, no matter how well you do APIs.

Next, I always recommend you study the cool kids of APIs, learning about how Twilio, Stripe, SendGrid, Keen, and the other API-first movers and shakers are doing what they do. These platforms are the gold standard when it comes to how you handle the technology, business, and politics of API operations. You can spend weeks in their platforms learning from how they craft their APIs, and operate their communities. These companies are all offering viable resources using web APIs, that developers need. They are offering these resources up in a way that is useful, inviting, and supportive of their consumers. They are actively investing in their API community, keeping in sync with what they need to be successful. It doesn't matter which industry you are operating in, you should be paying attention to these companies, and learning from them on a regular basis.


What Is The Biggest Challenge For Big Companies Doing APIs?

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

The biggest challenge for big companies doing APIs is always about people and culture. Change is hard. Decoupling things at large companies is difficult. While APIs can operate at scale, they excel when they do one thing well, and aren't burdened with the scope and complexity of many of the software systems we see already operating within large companies. These large systems take large teams of people to operate, and shifting this culture, and displacing these people, isn't going to happen easily. People are naturally skeptical of new approaches, and get very defensive when it comes to their data, content, and other digital assets, as opening up and sharing these resources outside their sphere of influence can be seen as a threat to their livelihood.

The culture that has been established at large companies won't be easily undone. It is a culture that historically has had a pretty large gulf between business groups and the IT groups who delivered the last generation of APIs (web services), which weren't meant to be accessible and understandable to business users. Web APIs have become simpler, more intuitive, and have the potential to be owned, consumed, and even in some cases deployed by business users. Even with this potential, many of the legacy rifts still exist, and business users feel this isn't their domain, while IT and developer groups often feel APIs are something that should stay in their domain–perpetuating and compounding the challenges already in place.

While there may be small API successes within large companies, they often experience significant roadblocks when they try to scale, or spread to other groups. A huge investment is needed in API training amongst not just business users, but also developer and IT groups who may not have the experience with the web that is needed to make an API program successful. This can be the most costly and time-consuming aspect of doing APIs, and with many APIs being born out of technical groups, and often being under-funded, experimental efforts, investment in the basics of web literacy and API training is often anemic. Setting the stage for what is happening when you begin unraveling legacy systems and processes is essential to minimize friction across API implementations. Without it, humans and culture will be your biggest obstacles to API success.

Web literacy and API training really aren't much different than other areas where corporate training is being applied, but for some reason many companies just expect the technology folks to know what they know already, or problem solve and learn on the job. This might have been fine when things got done in purely technical circles, but web APIs aren't purely about tech. They are about leveraging the web to solve problems people face within a company, getting access to resources, and working with external partners to help move business forward. IT and developer staff aren't always ready for these types of external-facing roles, and if business users aren't up to speed on what is needed, API implementations will stumble, sputter, and ultimately fail. Think of the partnerships it has taken to make the web work at your company–everyone is using the web, so why should it be any different with APIs? If APIs are done right, and people are properly educated, there is no reason an entire group can't work in concert.

Every API effort I've seen fail had one common roadblock–people. There were IT groups that sabotaged, sales teams that felt threatened, executive leadership who didn't understand what was happening, or partners who weren't in proper alignment with API efforts. Sure, sometimes the challenges are purely technical. Lack of proper API design. Insufficient security or capacity. These are simply API training and education issues as well. You can't throw the need for integration of resources between internal groups, external partners, or 3rd party developers using the web at any technical group and expect them to understand what is needed. Similarly, you can't mandate APIs across business groups, and just expect them to get on board without any friction. Invest in the web literacy skills, API training and awareness, and communication skills that will be required to do APIs right, and the chances your API efforts will succeed will greatly increase.


What Has Been The Biggest Change In The Industry Since I Started API Evangelist

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

The biggest change in the industry since I started doing API Evangelist in 2010 is who is doing APIs. In 2010 it was 95% startups doing APIs, with a handful of enterprises and small businesses doing them. I'd say over the last couple of years the biggest change is that this has spread beyond the startup community and is something we see across companies, organizations, institutions, and government agencies of all shapes and sizes. Granted, there is a lot of variety when it comes to the level at which they are doing them, and the quality, but APIs are something that has been moving mainstream over the last seven years, and becoming more commonplace in many different industries.

In 2010 it was all about Twitter, Facebook, Amazon, and many of the API pioneers. This has been rapidly shifting with each wave of startups like Twilio, Stripe, Slack, and others. However, now in 2017 I am seeing insurance companies, airlines, car companies, universities, cities, and federal agencies with API programs. I mean, c'mon, Capital One has an API program (wink, wink). While I still hold influence with each wave of API service providers looking to sell to the space, and many of the API startup providers, my main audience is folks on the frontline of the enterprise, and government agencies at all levels. I also have a growing number of people at higher educational institutions tuning into what I'm writing as they look to evolve their approach to technology. APIs were mainly a startup thing in 2010, and in 2017 they are about getting business done in a digital age.

The technology of APIs is still expanding and we are seeing things push beyond just REST and web APIs, but by far the biggest change has been more about the business of doing APIs, and more importantly sometimes, the politics of doing APIs. These are areas of the industry that are rapidly expanding and evolving as new people onboard with the concept of an API, and the opportunity for doing APIs. As we add new companies, organizations, institutions, agencies, and industries to the API conversation, the technology of APIs hasn't shifted too much, but the business and political landscape is flexing, shifting, and evolving at a pretty rapid pace, and that isn't always a good thing. Along with it comes privacy, security, financial, and other challenges that will only get worse if there isn't more discussion and collective investment.

The shift I've seen between 2010 and 2017 feels a lot like the change I witnessed from 1995 to 2002 with the web, but this time it's about more than just websites, it is also about mobile applications, devices, conversational interfaces, automation, and much more. Honestly, it is simply just the next evolution of the web, where there are significantly more channels to operate on than just a browser, and there is a growing amount of digital assets being distributed via the web beyond just text and images. Video has picked up speed, voice and audio are finally maturing, and algorithms, machine learning, and artificial intelligence are seeing a significant uptick. While all of these areas will have their impact, the biggest changes will come from leading industries like healthcare, education, banking, transportation, and others going beyond just dipping their toes in the API space, and baking it into everything they do.


Why Did We Need The API Evangelist?

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

You needed the API Evangelist because there was nobody paying attention to the big picture of the API space. Sure, there are many vendors who pay attention to the big picture, and there are analysts who are paid to pay attention to the bigger picture to help validate the vendors, but there was nobody independent. At least there wasn't in 2010 when I started. Now, there are a number of leading API experts and visionaries who work at different companies, and are able to maintain an independent view of the space, but in 2010 this did not exist. I'd like to think I helped make such a thing possible, but honestly it probably would have happened without me.

Developer advocates, and evangelists tend to pay attention to a specific API, set of APIs, or API services and tooling. I pay attention to everything. I keep an eye on as many APIs as I possibly can, and work to evaluate all the services, tools, and technology that emerges on the landscape. I try to remain objective about what is working, and what is not, and share stories about both. I still have my biases, and tend to hold grudges against a few companies for their bad behavior, but for the most part I'm just trying to share an honest view of what is going on at the 100K view–something that differs from the analysts, because I don't have a vendor-driven agenda, I'm just looking to understand.

Another area where I benefit the space is educating the normals about what is going on. I'm priming your customers, and the decision makers who will be buying your products and services, and putting your tooling to work. Not every company is willing to invest heavily in the area of API education beyond their own products and services, and it is something that needs significant investment. I've had API service providers thank me for providing articles, white papers, research, and guides they can use to help validate what they are saying. I'm used by newspapers, tech blogs, and occasionally analysts to validate their own findings and stories about what is going on in the API space. I help API providers and service providers do better, and this is why some companies support me financially by sponsoring my work.

Even with all my work over the last seven years, we need hundreds more API Evangelists. We need an API Evangelist in every industry, and in every country and region. It's not a model that will scale, and don't think about going to get some funding to make it happen. We just need other people who care about their sectors, have the capacity to make sense of the technology, while also still being able to explain what is going on to normals, and holding their own with developers and IT folks. You need the API Evangelist because most people are just looking to sell you something, even if they are really nice folks. You need the API Evangelist because I'm going to look at things with an honest and critical eye that isn't blinded by what I'm trying to sell you, or what my boss's or investors' agenda is. Even if that agenda is mostly positive, they will always miss a significant portion of what is going on. I'm where you come to ask questions, and read stories about what is happening, without things being skewed by money. I'm not doing this to get rich, build a startup, exit, or do anything beyond just making a decent living, and paying my bills.


What Is The Role Of An Influencer In The API Industry?

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

The idea of an influencer in the API space will mean many things to many different people. I have pretty strong opinions about what an influencer should do, and it is always something that should be as free of product pitches as it possibly can be. Influencing someone in the API space should mean that you are not just influencing their decision to buy your product or service. That is sales, which has its place, but we are talking about influencing. I would also add that influencing SHOULD NOT be steeped in convincing folks regarding what they should invest in, from the technology purchasing level all the way up to the venture capital level. The role of an influencer in the API industry should always be about education, awareness, and helping influence how average folks get everyday problems solved.

Being an influencer always begins with listening and learning. We are not broadcasting or pitching, we want to influence, so we need to have an idea about who we are influencing, and what will resonate and help them solve the problems they face. I do a significant portion of this by reading blogs, tuning into Twitter, and spending time on Github understanding what folks are building. Next, I engage in conversations with folks who are doing APIs, looking to understand APIs, and listening to what their challenges are, and what matters to them. At this stage I am not influencing anyone. I am being influenced. I’m absorbing what is going on, educating myself about what the problem set looks like, and better understanding my potential audience, when and if I get around to doing some of that influencing.

With a better understanding of an industry, a specific audience, and potentially the problems and challenges faced with doing APIs, I will usually step back from APIs entirely. I want to better understand the industry outside of just doing APIs. I want to understand the companies, organizations, institutions, and potentially government influence on what is happening. Everything that is already going on often weighs on doing APIs way more than the technology ever will by itself. I'm looking to understand the business and politics of operating in any sector before I will ever begin doing any sort of influencing within an industry, and to any specific audience. In technology circles, I find that many of us operate within silos, with our blinders on, and don't always understand the scope of the problem we are looking to provide API solutions for. Stepping back is always healthy.

Once I've done my research, and engaged in conversations with folks in an area I'm looking to influence, I'll begin to write stories on the blog. This is all just exercising and training for the white papers, guides, workshops, and talks I will be giving in any area I'm trying to influence. I will do this for months, repeating, reworking my ideas, and developing my understanding. The process usually brings more people out of the woodwork, opening up even more conversations, influencing my understanding of the industry, but also potentially adding to the number of folks I will be influencing. Slowly I will build the knowledge and awareness needed to truly be able to influence people in any industry, ensuring I have the platform of knowledge I will need, and grasp the scope of the challenges and problems we will be looking to deliver API solutions for.

The role of an API influencer is always a two-way street. You should be influenced just as much, or more, than you are influencing. You should be working with influencers to understand your challenges. Tell us your stories, even if they are confidential. Help us understand your industries, and the unique problems and challenges that exist there. Invest in us listening to your stories, and us telling your story on our blogs, and other longer form content. This is how we'll help work through what is going on, and find the right path for your API journey. We can bring a lot of value to your API operations, and help you work through the challenges you face. This isn't about content creation, or simply workshops, training, white papers, and public speaking. This is about influencing, and making an impact. You can't do this without truly knowing what is going on, and being able to intelligently speak to what is going on. This takes time, practice, investment, and actually giving a shit. It is something not everyone can pull off.


Why Was My Week of API Rants So Well Received?

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

A couple of weeks back I spent the entire week ranting on API Evangelist, instead of my usual lineup of API stories. Normally these types of stories end up on KinLane.com, or my rants edition, and usually don't get tweeted out. I'm just venting. However, on this particular week, I had enough people "piss in my cheerios" that I felt the space needed to hear my rants, instead of the usual "nice guy" tone I tend to take on here. Granted, I can be pretty outspoken and blunt in my storytelling, but I usually work very hard to keep a professional tone, and be as nice as I possibly can. There are plenty of assholes in the API space, and I really don't want to be one of them–even though it comes pretty naturally for me. ;-)

I actually got tired of the tone by mid-week, but I had so many people Tweet, email, and post on LinkedIn, Facebook, and Slack that they were enjoying the rants, I kept it going until that Friday. I was moving to New York, and really didn't have much time to do the normal amount of research it takes to write stories, and the rants were an easy way to get content up that took me about 10 minutes to write. So why did people like the posts? First, I have to say that not everyone did. I heard from a number of other people that thought I was being a diva, and found the tone offensive. I also heard from a number of folks who were concerned for my mental state, and made sure they checked in on me. So, there were a number of emotions shared, but overwhelmingly people did find it worthwhile.

I feel that people enjoyed it because I was speaking truth in an environment where not many folks do. You see, most people have jobs, bosses, and investors. I do not. Few people want to get in trouble with their boss and lose their job. If you run a startup you don't want to piss off your investors, and lose your funding, or not be able to get funding when you are seeking it, because you said the wrong thing. I'd say these are the top two reasons. Sure, there are other reasons, like you might not get invited to the right events, or be able to hang out with the cool kids, but money is the number one reason people don't speak the truth on a day to day basis. This is how the world keeps people in line–with the purse strings. Honestly, most of it isn't direct, it is self-filtering, where people just perceive there will be repercussions, and they keep things quiet, and do not rock the boat.

People look to me to not bullshit them in the API space, sell them unneeded products and services, or tell them make-believe marketing stories. They are used to me speaking my mind, and being honest about what I'm seeing. I'm not going to push a product, service, or company I don't agree with just because I was given money. I'm going to be honest about each wave of technology that comes along. I'm also going to call out the everyday bullshit and games we all encounter, but are forced to keep playing to keep our jobs, and get that next round of investment. I'm always amazed at how people change over time when they get to know me, begin to lower their barriers, and realize I'm not going to screw them over, take their ideas, or sell them something they don't need, simply because they are in my sales funnel, or because I need to make my numbers. I don't think people realize how puckered up they are on a daily basis from all the stuff we are bombarded with.

Even when you do find rants or seemingly truthful speakers online, most people know they have an agenda. I was just ranting because my bullshit levels had overflowed, and enough people had pissed me off. I really don't think I'm going to change much with my rants, I was just blowing off steam. I think people enjoy an agenda-less ranty story every once in a while. In a world where everything is awesome, and each wave of technology is revolutionary, some folks just want the real deal, no bullshit coating. Much of what I'm saying is what all y'all are thinking anyways. You just don't get to say it, so you enjoy hearing me say it. My ability to write this way comes from many years of having a job, mortgage, boss, investors, and a wife, where I had to keep my mouth shut. Now I have none of those things, and I have a hot girlfriend who is rantier than I am, but in a way smarter way than I could ever hope to be. So, why not. Speak truth. I mean, there is a lot of fake out there these days, it feels good to just tell it like it is.


What Was It About Web APIs That First Captured My Attention?

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they've provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers was on the tip of my tongue–here is one of those questions, along with my thoughts.

In the spring of 2010 I was ready for a career shift. I was running North American events for SAP, and had also taken up running events for Google, which included Google I/O and Developer Days. I was the VP of Technology, and made all the decisions around usage of tech, from email blasts, to registration, session scanning, and follow-up reporting. When I took over the role I was dealing with a colocation facility that literally held our server infrastructure hostage, and massive hardware expenditures on servers that I didn't need most of the year. Then in 2007 I began using the Amazon Cloud, and got to work re-engineering systems to be more API-centric, leveraging AWS APIs to orchestrate my operations.

By 2007 I had been playing around with web APIs for some time. I had incorporated payment and shipping APIs into commerce systems, and integrated Flickr, Delicious, Twitter, Facebook and other APIs into applications. I had plenty of SOAP web service experience when it came to enterprise infrastructure, but this was the first time I was deploying global infrastructure at scale using web APIs. I realized that web APIs weren't just hobby toys, as my SAP IT director in Germany called them, they were an actual tool I could use to operate a business at scale. My success resulted in more work, taking on more events, and scaling operations, which didn't always pencil out to me actually being happier, even though the events scaled more efficiently, and out-performed what had come before.

The two Google I/O events where I managed the technology were the first ones where Google gave away their new Android mobile phones. I saw first hand what was happening in the mobile market, with the growth of the iPhone, and everyone scrambling to deploy APIs to support the new applications they were developing. Now, I was also beginning to develop new APIs to support what was possible via Android devices. It was clear that web APIs were going to be the preferred way to deliver the resources needed on mobile phones, and by 2010 there was no doubt that this mobile thing was going to be around for a while. Both SAP and Google were pushing us to deliver resources that could be used on mobile platforms across all the events we were managing, and I saw that web APIs were how we would do this at scale.

I was using web APIs to deliver compute, storage, and other essential infrastructure to support global events. I was also using web APIs to deliver resources to iPhone applications, and now Android applications. I wanted to better understand how this was being done, so in 2010 I began studying the world of APIs, looking at the common approaches to delivering APIs. I quickly saw there were plenty of pundits discussing the technical details of doing APIs, so I decided that I would focus on the business of doing APIs, and specifically how I could help business leaders understand the potential. By the summer of 2010 I had settled on the name of my research blog, and by October I was beginning to publish my research on the blog. Seven years and 3,000 blog posts later, I'm still doing it, and enjoy the focus on this important layer of not just the web, cloud, and mobile, but how APIs are being used in devices, on the network, and for bots, voice, and other conversational applications.


Regulations Creeping In On AI, ML, Cognitive, And Other Fronts

I wrote a piece earlier today about not fearing AI, but possessing a significant amount of concern when it comes to the people behind it. I figured I'd continue with the trend on this Friday afternoon, and talk about the coming regulations when it comes to artificial intelligence (AI), machine learning (ML), and everything cognitive, intelligent, and algorithmic. I am not fully a believer in regulations being the only solution, but I know they are the solutions that bigcos tend to pay attention to. Which is why they spend so much money to distort and bend them to what they want to see in their industries.

We are entering a phase of the Internet where there are going to be an increased number of calls for regulations. Whether it's privacy, security, breaches, or specific technologies like drones, artificial intelligence, bots, and machine learning, expect more government involvement in the future. This isn't because government is inherently bad, and is looking to suffocate business, it is primarily because these areas of technology are being defined by the worst among us. When you bundle this with the not-so-bad folks, and even many of the good folks, refusing to rein in their industry partners and fellow technologists, you end up with more regulations imposed to stabilize things. If the tech space was more willing to step up and take the lead regarding acceptable practices, this wouldn't be necessary.

Algorithms are making more decisions in our lives. After seeing what Facebook and Twitter have done during the last US election, and seeing AI and ML continue being applied to important aspects of our lives, there will be more inquiries by the government, and calls for the government to step in. I know that platforms don't want to be regulated, and with very libertarian stances in much of Silicon Valley, there is a significant undertow of anti-regulatory and anti-government sentiment. However, if you believe in the wisdom of the crowds, you have to acknowledge your role in determining how the crowds behave. You are developing the platforms that we can automate with bots. You are providing the platforms for deploying the next generation of AI and ML. In many cases, these existing tech players are also investing in the next wave of startups. If you don't set the tone for what acceptable practices are, the federal government eventually will.

When this happens I'm going to be here to point at APIs as one possible regulatory solution. I'm going to have years of blog posts on the subject, with plenty of evidence about how APIs, and API management, can provide observability into how platforms and algorithms operate. I'm not doing this because this is the future I want. I'm doing this because this is the future you all have set the stage for. I get that you don't want to rein in your startup co-founders, trusted partners, or investors. The system is designed to punish you if you do. However, if you let the worst of the worst lead the conversation when it comes to AI, ML, algorithms, and other tech trends, regulations will be the only answer. So, put on your seatbelt, and get ready for the government stepping up to dictate the rules of the road, because it seems to be the only thing many of you will actually respond to, whether you like it or not.


Sensibly Thinking About Where Technology Ends And The Human Part Begins With APIs

Our team at a hackathon I'm participating in this week is working on a data aggregation tool to help merge multiple hurricane shelter data sources from Irma in Florida. While the need for the data is winding down, the use case for the tool could be something that lives on, and could help communities in the future. This project aggregates multiple data sources for shelters from FEMA, municipal agencies, and sources pulled together by volunteers. Our team is focused on aggregating, and doing as much heavy lifting to automatically merge and cleanse the data as they can, but then, at the right moment, rendering it for humans to step in and finish the work.

I was impressed with the balance struck by the team, knowing where to apply technology, and when to rely on humans. The problem of merging open data from multiple sources is a big and complex one. It is one that I've seen many technologists think they can step up and solve simply with their tech toolbox, no humans necessary. Our team quickly saw the scope of the problem, discussed at length what they could accomplish, and what they couldn't accomplish, then got to work in the code to deliver the functionality, but then also developed a web interface to allow humans to step in at just the right point. Striking a balance between the human and technological aspects of doing this is what the Human Services Data Specification (HSDS) is all about.

I know there is a significant amount of information out there about User Experience (UX), and also increasingly Developer Experience (DX). However, I think the skills to know where to apply technology, when to step back from using it, and how to focus on augmenting, empowering, and putting the humans in charge are seriously deficient in our sector. I regularly encounter developers who think that technology is the solution, and humans are the problem. This contempt always degrades the amount of investment in the user interface portion of the equation, and will also shift the developer experience portion, ensuring the API speaks to the technological needs, and not the human needs. This isn't how all of this should work. It isn't about the tech. It is about what the technology does for humans, not the other way around.

The balance of API backend to human front end at this week's human services hackathon was refreshing to see. Early on I saw the team leaning towards trying to merge, clean up, and solve all the data problems in the code, and I was a little concerned. However, by the end of the 2nd night they showed me their API definition and design, as well as the web interface meant for humans. I felt they struck a perfect balance between the tech and human aspects of delivering human services. This balance is a topic you will hear more about here on the blog as I talk about APIs, and how they are being wielded for artificial intelligence, machine learning, voice, bots, and every other digital layer of our world that often seems to be being consumed by technology. I'm always looking for the emphasis of the human over the technology, and I am pleased with the outcome of this hackathon. I'll showcase the work once we are done. It is something I'm thinking will be useful in supporting future natural disasters, something I'm feeling is going to become a more common occurrence in our world.


I Do Not Fear AI, I Fear The People Doing AI

There is a lot of FUD out there when it comes to artificial intelligence (AI) and machine learning (ML). The tech press enjoy yanking people's chain when it comes to the dangers of artificial intelligence. AI is coming for your jobs. AI is racist, sexist, and biased. AI will lead to World War III. AI will secure and protect us from the bad out there. AI will be the source of all of our worries, and the solution to all of our worries. I'm interested in the storytelling around all of this, and I'm fascinated by the distracting quality of technology when it comes to absolving the humans behind it of doing bad things.

We have the technology to make these black boxes more observable and accountable. The algorithms feeding us news, judging us in courtrooms, and deciding if we are insurable or a risk, can all be wrapped with APIs, and made more accountable. However, there are many human reasons why we don't do this. Every AI out there can be held accountable, it isn't rocket science. The technology exists to keep AI from hurting us, judging us, and impacting our lives in negative ways. However, it is the people behind it who do not want that, otherwise their stories won't work. Their stories won't have the desired effect and control over our lives.

APIs are the layer being wielded for good and for bad on the Internet. Facebook, Twitter, and Reddit all leverage APIs to be available on our mobile phones. APIs are how people automate, advertise, and fund their activities on these platforms. APIs are how AI and ML are being exposed, wielded, and leveraged. The technology is already there to make them more accountable, we just don't have the human will to use the technology we have. There is more money to be made in telling wild stories about what is possible. Telling stories that make folks afraid, and in awe of what is possible with technology. APIs are used to tell you the stories, while also making the fire shoot from the stage, and the smoke and mirrors operate, instead of helping us see, understand, and verify what is going on behind the scenes.

We rarely discuss the fact that AI isn't coming for our jobs. It is the people behind the AI, at the companies developing, deploying, and operating AI, that are coming for our jobs. AI, like APIs, is neither good, nor bad, nor neutral–it is a tool. It is technology, and anything it does is because of us humans. I don't fear AI. I only fear the people doing AI. The people who tell the stories. The people who are believers. I don't fear technology because I know we have the tools to do what is right, and hold the people who are using technology in bad ways accountable. I'm afraid because we don't seem to have the will to look behind the curtain. We hold up many of the people telling stories about AI as visionaries, leaders, and truth tellers. I don't fear AI, I only fear its followers.


I Will See You At APIStrat In Portland This November

We are putting the finishing touches on the schedule for APIStrat in Portland, OR, October 31st through November 2nd. We have all the workshops, sessions, and keynotes dialed in (not all keynotes announced, wink, wink), and it is all just about making sure y’all show up and participate in the conversation. This is my 2nd favorite part of the event, the build-up for the big day(s). This is the 8th APIStrat we’ve done, and it is the first one we’ve done as part of the Linux Foundation, and with the OpenAPI Initiative. I’m excited.

Make sure to take a look at the session schedule. We received over 165 submissions, and had a program committee of almost 30 people vote to decide which 60 would be accepted. I am the program chair and helped make some difficult decisions, but ultimately I'm pretty proud of the lineup we've pulled together. It's much of the same popular topics as you've seen at previous events, with new faces and brands, but there are also some of the leading edge conversations around serverless, gRPC, and GraphQL. Of course, there is also going to be a lot of talk about OpenAPI, in workshops, sessions, and on the main stage. So check out the schedule if you haven't, it's a pretty sweet lineup.

I want to personally thank Microsoft, Stoplight, SmartBear, Postman, CapitalOne DevExchange, APIMATIC, Red Hat, Google, and Cloud Elements for sponsoring and making sure APIStrat happens. Of course, thank you to The Linux Foundation, and the OpenAPI Initiative (OAI) for taking the lead on APIStrat as it continues to grow and mature with the API community. I want to also thank my partner in crime, Steve Willmott, and the 3Scale / Red Hat team–without them APIStrat wouldn’t be a thing.

Next up for me, now that the schedule is dialed in, is to just tell stories about what will be happening. I'm going to go through each of the speakers and companies who will be present, and look to see what they are up to with APIs. It is something I always try to do in the final months of build up to the conference. APIStrat has been an important part of how I learn about what is going on with APIs, and who the interesting companies and people are. Hopefully it is the same for you, and we can both be there in November, and learn what is going on together. It will be my first event where I'm not giving a talk. ;-) I'll still be MC'ing, and harassing y'all in the hallways, but I'm looking forward to being able to tune into more of the talks as they occur.


Using 3rd Party APIs To Break You Out Of Your Enterprise Bubble

I'm participating in a hackathon in Princeton, New Jersey as part of my work on the Human Services Data API (HSDA). We are at a large enterprise financial group's office, as part of a three day social good hackathon / code sprint. Everybody participating is taking time off from their normal day jobs as back-end or front-end programmers, and business analysts, to build something for the greater good. Since it is an enterprise developer group, the concept of a hackathon is somewhat new to them, and this is the first time they've worked on external projects, instead of an internally focused hackathon event.

I'm enjoying watching the two teams working on human services projects be forced out of their bubble. One of the projects has three separate 3rd party APIs to work with: 1) a simple spreadsheet-deployed web API, 2) a web API published by a government agency, and 3) an HSDA API operated by a municipal organization. I am sitting here watching them get exposed to the variety of implementations, quality of data and interfaces, and wrestle with establishing their project requirements. After being pulled from their bubble trying to understand the APIs, they are also finding themselves pulled out of their local development world, having to potentially use 3rd party tools and services, and even reverse engineer a library or codebase in a language they are not familiar with.

This is all very, very healthy. No matter what gets built at this hackathon, the fact that they are being pulled out of their bubbles will benefit their world. They are thinking outside their governance bubble. They are forced to learn about the API best or worst practices of other organizations. They are having to use services, tools, and programming languages they aren't familiar with. All with the motivation of potentially building something for good. They are exercising their skills and knowledge in ways that they won't encounter in the routine, and highly structured worlds they exist in. Another layer of all of this is that a portion of the team members are from an external group, and have never even met in person–I just watched two of them introduce themselves, and make the connection that they've worked together on many projects, but never met in person. #win

This isn't just startup style thinking for a hackathon. The objective of this event is to build on top of existing tooling, improve existing processes, and add value to existing non-profit organizations. Even with these objectives, the most value is the exhaust from the conversations, the planning, and what folks are learning along the way–as well as getting these folks out of their bubble to tackle meaningful problems, pulling them away from their routine, and letting them feel like they are making a change. The hackathon format is part of this, but the API(s) are really a catalyst for change, and a vehicle for helping pull folks out of their carefully crafted environments. The APIs are helping these enterprise developers, project managers, and business analysts think differently, and consider other approaches to getting things done. Hopefully something that will stick with them in the future.


Lost In API Transit

I got on the New York Subway today heading for Penn Station to catch a train (New Jersey Transit) out to Princeton for a hackathon. As I was navigating my way through the Metropolitan Transportation Authority (MTA) and New Jersey Transit systems, I was thinking about my usage of API transit instead of API lifecycle. The number one response I had to this concept from readers was in regards to the cognitive load experienced when you first look at a subway map that represents API infrastructure, and whether anyone would even know what I was talking about.

It's true, when you first look at any of the API subway maps I've created so far, you scratch your head trying to figure out what they mean. I haven't spent a lot of time making them coherent, but I am also just getting going with the work. Truthfully, they'll get more complicated before they get simpler. However, each time I use the subway in NYC there is also a pretty significant cognitive load. I've ridden the subway many times, but each time I still have to study the map, learn the portion I need to get what I need done, and accept that much of it I won't actually ever understand. I usually only learn what applies to me, and the more time I spend riding a transit system, the more it comes into focus–something that applies to any transit system in the world I've used.

Think about when you start a new job, or adopt an existing legacy project as an API product manager. You do not immediately understand all the moving parts, or absorb every diagram or piece of documentation the first time you look at them. It takes time experiencing a system before you get acquainted, and become a local, like someone riding the MTA or NJT transit systems. Now that I live in NYC I'm going to spend time learning the transit system so I can get around, but I'm also going to invest energy learning it from an operator's perspective, and understand the challenges they are facing in maintaining, evolving, and keeping the system usable for users. I'm sure there will be a wealth of analogies in there for me when it comes to IT and API infrastructure.

Currently, I'm pretty lost on the MTA and NJT transit systems, but it's slowly coming into focus. A significant piece of this is the maps that are available to me, as well as the physical display systems in the stations, online, and on my mobile phone. I'm pushing forward the next generation of my API Subway Map tooling at the same time. I'm creating a simple Siren-defined, Jekyll-driven, Github hosted map that helps me walk through a variety of stops along the API lifecycle, along a handful of "lines" from design, to deployment, testing, and security. We'll see how well I do bridging these concepts, but I'm hopeful that eventually it will come into focus, and I'll stop being so lost, and develop a better understanding of what is going on.


A Sample OpenAPI 3.0 File To Get Started

I am investing more time into my Schema.org work, alongside my learning about OpenAPI 3.0. I'm pretty excited about the components object, and I want to push forward some of my Schema.org dictionary ideas, to help folks get better at reusing common schema throughout their work. Schema.org is the most robust vocabulary out there, and we shouldn't be reinventing the wheel in this area. I know the most important reason that folks aren't using it is that they either don't know about it, or they are just lazy. I figure if I create some ready to go schema in an OpenAPI 3.0 components object, maybe people will be more inclined to put common schema to use.

To share my components I need a basic OpenAPI 3.0 shell to hold all my reusable schema. I really don't care about the paths, and other elements being there. So I headed over to the OpenAPI 3.0 Github repo and borrowed the sample Petstore OpenAPI 3.0 definition my friend Darrel Miller created:
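(What follows is a trimmed down sketch of that shell rather than the exact file–a single path, and a components object holding a few reusable schema.)

```yaml
openapi: "3.0.0"
info:
  title: Swagger Petstore
  version: 1.0.0
  license:
    name: MIT
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      responses:
        '200':
          description: A paged array of pets
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pets'
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      items:
        $ref: '#/components/schemas/Pet'
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string
```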

I will change all the information in this sample to reflect my work, but I figured before I did I would share this example document with my readers. At first glance it doesn't look much different than version 2.0 of OpenAPI, but once you start studying it you see the differences. You see the responses have JSON-specific content types inserted in between them and their schema references. There is also a components object, with a couple of schema present–this is all I need. There are a bunch of other things you can store in your components object, but I think this provides a nice first look at what is going on.

If you are looking for some other working examples of OpenAPI 3.0 in action, head over to Mike Ralphson's repository, he has some additional ones you can play with. I don't know about you, but I learn from others. I need to reverse engineer API definitions from other people before I become fluent myself. I'm going to spend some time hand-crafting some OpenAPI 3.0 definitions, so that I become more fluent. It is tedious work when you are just getting going, but once you get it down, it becomes like any other language you use. I'm hoping to cut my teeth on this Schema.org work. I'm going to replicate the OpenAPI 2.0 work I did when I created over 1,000 OpenAPIs for each of the Schema.org objects. I'm going to be using them to deploy APIs for clients, and in my API training and storytelling. I want all my examples to be reusable patterns that already exist, not anything custom that I pull out of my magic arse.


Kubernetes JSON Schema Extracted From OpenAPI

I've been doing my regular trolling of Github lately, looking for anything interesting. I came across a repository this week that contained JSON Schema for Kubernetes. Something that is interesting by itself, but I also thought the fact that they had autogenerated the individual JSON Schema files from the Kubernetes OpenAPI was worth a story. It demonstrates for me the growing importance of schema in all of this, and shows that having them readily available on Github is becoming more important for API providers and consumers.

Creating schema is an important aspect of crafting an OpenAPI, but I find that many API providers, or the consumers who are creating OpenAPIs and publishing them to Github, are not always investing the time into making sure the definitions, or schema portion of them, are complete. Another aspect, as Gareth Rushgrove, the author of the Github repo where I found these Kubernetes schema, points out, is that the JSON Schema support in OpenAPI often leaves much to be desired. Until version 3.0 it hasn't supported everything you need, and many of the ways you are going to use these schema aren't going to work from within an OpenAPI–you will need them as individual schema files like Gareth has done.
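To give a sense of what those standalone files buy you, here is a small sketch of validating a Kubernetes manifest against one of them using Python and the jsonschema library–the file names and paths are placeholders, not the actual layout of Gareth's repository:

```python
import json

import yaml  # pip install pyyaml
from jsonschema import validate, ValidationError  # pip install jsonschema

# Assumed path to one of the extracted JSON Schema files -- adjust to
# wherever you keep your copy of the Kubernetes schema.
with open("schemas/deployment.json") as schema_file:
    deployment_schema = json.load(schema_file)

# A Kubernetes manifest you want to check before applying it.
with open("manifests/my-deployment.yaml") as manifest_file:
    manifest = yaml.safe_load(manifest_file)

try:
    validate(instance=manifest, schema=deployment_schema)
    print("Manifest validates against the deployment schema.")
except ValidationError as error:
    print(f"Manifest failed validation: {error.message}")
```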

I just published the latest version of the OpenAPI for my Human Services Data API (HSDA) work, and one of the things I've done is extracted the JSON Schema into separate files so I can use them in schema validation, and other services and tooling I will be using throughout the API lifecycle. I've set up an API that automatically extracts and generates them from the OpenAPI, but I'm also creating a Github repo that does this automatically for any OpenAPI I publish into the data folder of the repository. This way all I have to do is publish an OpenAPI, and there is automatically a page that tells me how complete or incomplete my schema are, as well as individual representations that I can use independent of the OpenAPI.
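For anyone wanting to do something similar, here is a rough sketch of the kind of extraction I am describing, assuming an OpenAPI definition in YAML with either a components.schemas (3.0) or definitions (2.0) section–the file and folder names are just placeholders:

```python
import json
import os

import yaml  # pip install pyyaml

OPENAPI_FILE = "openapi.yaml"   # placeholder path to your OpenAPI definition
OUTPUT_FOLDER = "schema"        # where the individual JSON Schema files land

with open(OPENAPI_FILE) as handle:
    openapi = yaml.safe_load(handle)

# OpenAPI 3.0 keeps reusable schema under components.schemas,
# while 2.0 (Swagger) keeps them under definitions.
schemas = openapi.get("components", {}).get("schemas") or openapi.get("definitions", {})

os.makedirs(OUTPUT_FOLDER, exist_ok=True)

for name, schema in schemas.items():
    path = os.path.join(OUTPUT_FOLDER, f"{name}.json")
    with open(path, "w") as out:
        json.dump(schema, out, indent=2)
    print(f"wrote {path}")
```

One caveat: any $ref pointers inside the extracted schema will still point back into the parent document, so depending on how you plan to use the files you may need to rewrite those references as well.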

I am hoping this is the beginning of folks investing more into getting their schema act together. I'm also hoping this is something that OpenAPI 3.0 will help us focus on more as well, pushing API designers, architects, and developers to get their schema house in order, and publish them not just as OpenAPI, but as individual JSON Schema, so they can be used independently. I'm investing more cycles into helping folks learn about JSON Schema as I'm pushing my own awareness forward, and will be creating more tooling, training material, and stories that help out on this front. I'm a big fan of OpenAPI, and defining our APIs, but as an old database guy I'm hoping to help stimulate the schema side of the equation, which I think is often just as important.


VersionEye SDK Security Notifications

I’ve written about VersionEye a couple of times. They help you monitor the 3rd party code you use, keeping an eye on dependencies, license violations, and security issues. I’ve written about the license portion of this equation, but they came up again while doing my API security research, and I wanted to make sure I revisited what they were up to in this aspect of the API lifecycle, floating them up on my radar.

VersionEye keeps an eye on multiple security databases and helps you monitor the SDKs you are using in your applications. Inversely, if you are an API provider generating SDKs for your API consumers to put to use, it seems like you should be proactively leveraging VersionEye to keep an eye on the security aspects of your SDK management. They even help developers within their existing CI/CD workflows, which is something you should be considering as you plan, craft, and support your APIs. Make it as easy as possible to leverage your APIs' SDKs in your own workflow, and do the same for your consumers, while paying attention to security at each step, and breaking your CI/CD process when security is breached.

I also wrote about how VersionEye has open sourced their APIs a while back, highlighting how you can also deploy them into any environment you desire. I’m fascinated by the model VersionEye provides for the API space. They are offering valuable services that help us manage our crazy worlds, with a viable commercial and open source offering, that integrates with your existing CI/CD workflow. Next, I’m going to study the dependency portion of what VersionEye offers, then take some time to better understand their business model and pricing. VersionEye is pretty close to what I like to see in a service provider. They don’t have all the shine of a brand new startup, but they have all the important elements that really matter.


Webhook Delivery Headers From Github API

I am continuing my learning about webhooks, and Github keeps my notebook full of interesting building blocks we can use when crafting our own webhook strategies. I’m not using everything I’m learning from Github in my current strategy, but I like adding each of these building blocks to my webhook research, so that I can use them in future guides that I publish. Today’s post overlaps two areas of my research: webhooks, and how headers are being used by a variety of API providers.

Github is using HTTP headers as part of each webhook delivery, providing the recipients of webhooks with more information about what is happening with each outgoing request. They are providing three custom headers along with each payload:

  • X-GitHub-Event - Name of the event that triggered this delivery.
  • X-Hub-Signature - HMAC hex digest of the payload, using the hook’s secret as the key (if configured).
  • X-GitHub-Delivery - Unique ID for this delivery.

In addition to these three custom headers, the User-Agent for the requests will have the prefix GitHub-Hookshot, so that your systems can identify these incoming requests more specifically. I like getting the name of the event, and definitely like the example of using the signature to make sure the payload hasn’t been tampered with, or sent from an untrustworthy source. Additionally you get a unique identifier for the delivery, allowing you to record and pull up individual webhook receipts.
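
The X-Hub-Signature header is the one I think deserves a code example, since it is easy to get wrong. Here is a rough sketch of how a receiving system might verify it, written in Python, assuming you configured a secret on the hook and have access to the raw request body bytes.

```python
# A minimal sketch of verifying Github's X-Hub-Signature header, which carries an
# HMAC SHA1 hex digest of the payload, prefixed with "sha1=", keyed with the hook's
# secret. The framework wiring around this is up to you.
import hashlib
import hmac


def verify_github_signature(secret, payload_body, signature_header):
    """Return True if the X-Hub-Signature header matches the raw payload bytes."""
    expected = "sha1=" + hmac.new(secret.encode(), payload_body, hashlib.sha1).hexdigest()
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(expected, signature_header)


# Example usage inside whatever handles your webhook endpoint:
# if not verify_github_signature(MY_SECRET, raw_request_body, headers["X-Hub-Signature"]):
#     reject the delivery with a 403
```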

I’m adding these all as building blocks to my webhook research. I still have a notebook full of other leading approaches to webhooks from Github, Stripe, Twilio, and others. Once I get through this round I’m going to apply what I’ve learned to the project I’m working on, and then see about pushing out the first draft of my webhooks guide–something I’ve never done before. If nothing else, I’m learning a lot. I’m learning from all the leaders in the space, who are several versions into their webhook designs. I’m finding the biggest challenge right now is how I hold back and don’t do everything, keeping my webhook designs simple, intuitive, but as powerful as possible.


Machine Readable Definitions For All Things API, Including Your Bots

Every aspect of my business runs as either YAML or JSON. This blog post is YAML stored on Github, viewed as HTML using Jekyll. All the companies, services, tooling, building blocks, patents, and other components of my research all live as YAML on Github. Any API I design is born, and lives as an OpenAPI YAML document on Github. Sure, much of this will be imported and exported with a variety of other tools, but the YAML and JSON definition is key to every stop along the life cycle of my business, and the work that I do.

It isn’t just me. I’m seeing a big shift in how many platforms, services, and tooling operate, often with YAML, and in many situations JSON, XML, and CSV, at their core. Everything you do should have some sort of schema definition, providing you with a template that you can reuse, share, collaborate, and communicate around. Platforms should allow for the creation of these template schema, and enable the exporting and importing of them, opening up interoperability, and cross-platform functionality–much like APIs do in real-time using HTTP. This is what OpenAPI has done for the API lifecycle, and there should be many complementary, or even competing formats that accomplish the same, but for specific industries, and use cases.

You can see this in action over at AWS, with the ability to export your Lex bot schema for use in your Alexa skill. Sure, this is interoperability on the same platform, but it does provide one example of how YAML and JSON definitions can help us share, reuse, and develop common templates for not just APIs, but also the clients, tooling, and other platforms we are engaging with. You’ll see this expand to every aspect of tech as continuous integration and deployment takes root, and Github continues its expansion beyond startups, into the enterprise, government, and other institutions. Along the way there will be a lot of no name schema finding success, but we will also need a lot more standardization and maturing as we’ve seen with OpenAPI, for all of this to work.

I hear a lot of grumbling from folks when it comes to YAML. I get it, I had the same feeling. It also reminds me of how I felt about JSON when it first emerged. However, I find YAML to be very liberating, free of brackets, slashes, and other delimiters, but I also find it is just one format, and I should always be supporting JSON, XML, and CSV when it comes to one dimensional schema. I don’t find it a challenge to convert between the formats, or keep some things one-dimensional to bridge to my spreadsheet oriented users. I actually feel it helps me think outside of my bubble. I enjoy rifling through the YAML and JSON templates I find on Github from a variety of operations, defining their bots, conversational interfaces, visualizations, CI/CD, configuration, clients, and other aspects of operations. Even if I’m never using them, I find it interesting to learn how others define what they are up to.
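
As a quick illustration of what I mean by keeping one dimensional schema portable, here is a small Python sketch that takes the same flat records from YAML and writes them back out as JSON and CSV. The file names are placeholders for whatever you are actually storing.

```python
# A minimal sketch, assuming PyYAML is installed, of round-tripping flat records
# between YAML, JSON, and CSV. "organizations.yaml" is a placeholder file containing
# a simple list of flat objects.
import csv
import json

import yaml  # pip install pyyaml

with open("organizations.yaml") as f:
    records = yaml.safe_load(f)

# Same data as JSON for the developers
with open("organizations.json", "w") as f:
    json.dump(records, f, indent=2)

# Same data as CSV for the spreadsheet oriented users
with open("organizations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
```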


OpenAPI 3.0 Tooling Discovery On Github And Social Media

I’ve been setting aside time to browse through and explore tagged projects on Github each week, learning about what is new and trending out there on the Githubz. It is a great way to explore what is being built, and what is getting traction with users. You have to wade through a lot of useless stuff, but when I come across the gems it is always worth it. I’ve been providing guidance to all my customers that they should be publishing their projects to Github, as well as tagging them coherently, so that they come up as part of tagged searches via the Github website, and the API (I do a lot of discovery via the API).

When I am browsing API projects on Github I usually have a couple of orgs and users I tend to peek in on, and my friend Mike Ralphson (@PermittedSoc) is always one. Except, I usually don’t have to remember to peek in on Mike’s work, because he is really good at tagging his work, and building interesting projects, so his stuff is usually coming up as I’m browsing tags. His is the first repository I’ve come across that is organizing OpenAPI 3.0 tooling, and on his project he has some great advice for project owners: “Why not make your project discoverable by using the topic openapi3 on GitHub and using the hashtag #openapi3 on social media?” « Great advice Mike!!

As I said, I regularly monitor Github tags, and I also monitor a variety of hashtags on Twitter for API chatter. If you aren’t tagging your projects, and Tweeting them out with appropriate hashtags, the likelihood they are going to be found decreases pretty significantly. This is how Mike will find your OpenAPI 3.0 tooling for inclusion in his catalog, and it is how I will find your project for inclusion in stories via API Evangelist. It’s a pretty basic thing, but it is one that I know many of you are overlooking because you are down in the weeds working on your project, and even when you come up for air, you probably aren’t always thinking about self-promotion (you’re not a narcissist like me, or are you?)

Twitter #hashtags have long been a discovery mechanism on social media, but tagging on Github is quickly picking up steam when it comes to coding project discovery. Also, with the myriad of ways in which Github repos are being used beyond code, Github tagging makes it a discovery tool in general. When you consider how API providers are publishing their API portals, documentation, SDKs, definitions, schema, guides, and much more, it makes Github one of the most important API discovery tools out there, moving well beyond what ProgrammableWeb or Google brings to the table. I’ll continue to turn up the volume on what is possible with Github, as it is no secret that I’m a fan. Everything I do runs on Github, from my website, to my APIs, and supporting tooling–making it a pretty critical part of what I do in the API sector.


My Favorite Part Of OpenAPI 3.0 Is The Components Object

There were a number of changes made to the structure of OpenAPI in the move to version 3.0 that I am a fan of, but if I had to point at a single seismic shift that I think will move the conversation forward it is the components object. According to the specification the components object “holds a set of reusable objects for different aspects of the OAS. All objects defined within the components object will have no effect on the API unless they are explicitly referenced from properties outside the components object.” It is the store for all the common and reusable aspects of defining, and designing your APIs–which will have huge benefits on how we are doing all of this.

Here is the laundry list of what you can put into your OpenAPI 3.0 components object, and reference throughout your API definitions:

  • schemas - An object to hold reusable data schema used across your definitions.
  • responses - An object to hold reusable responses, status codes, and their references.
  • parameters - An object to hold reusable parameters you are using throughout your API requests.
  • examples - An object to hold reusable examples of the requests and responses used in your design.
  • requestBodies - An object to hold reusable request bodies that will be sent with your API requests.
  • headers - An object to hold reusable headers that define the HTTP structure of your requests.
  • securitySchemes - An object to hold reusable security definitions that protect your API resources.
  • links - An object to hold reusable links that get applied to API requests, moving it towards hypermedia.
  • callbacks - An object to hold reusable callbacks that can be applied.

I’ve written about how many API developers see this stuff as duplicate work across our APIs, where I see them as common, reusable patterns that we should be getting organized–the OpenAPI 3.0 components object is the beginning of us getting this house in order. The components object is how API architects and designers can ensure that API developers are being consistent in their work, and not just reusing common elements, but reusing well thought out, fully baked elements that adhere to standards and common definitions used throughout the industry.
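
To make this concrete, here is a small sketch of the kind of reuse I am talking about, with one schema and one parameter defined once in the components object and referenced from an operation. I am building it in Python and dumping it as YAML, and the Organization example is my own, not pulled from any particular specification.

```python
# A minimal sketch, assuming PyYAML is installed, of an OpenAPI 3.0 fragment where a
# schema and a parameter live in components and get referenced with $ref.
import yaml  # pip install pyyaml

openapi_fragment = {
    "components": {
        "schemas": {
            "Organization": {
                "type": "object",
                "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
            }
        },
        "parameters": {
            "page": {"name": "page", "in": "query", "schema": {"type": "integer"}}
        },
    },
    "paths": {
        "/organizations": {
            "get": {
                "parameters": [{"$ref": "#/components/parameters/page"}],
                "responses": {
                    "200": {
                        "description": "A list of organizations",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Organization"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

print(yaml.safe_dump(openapi_fragment, sort_keys=False))
```

Every other path that returns an organization can point at that same schema, which is exactly the kind of consistency I want architects and designers enforcing.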

The OpenAPI 3.0 components object is where we are going to start injecting API literacy training into the development process. It is where we will teach developers about headers, and common ways of securing our APIs. It is where we will start reusing common dictionaries like Schema.org so we STOP re-inventing the wheel when it comes to defining our schema, fields, and other mundane aspects of crafting an API. The components object isn’t just where we are reusing components within a single OpenAPI, it is where we will start reusing across all the OpenAPIs we are crafting, and learning, sharing, collaborating, and reusing across OpenAPIs that are made publicly available.

The OpenAPI 3.0 components object is where we are going to start delivering the hypermedia literacy that was required to get the adoption hypermedia advocates envisioned, but which was stonewalled because people just didn’t get it. I’m pretty excited about this aspect of OpenAPI 3.0, and I got myself so fired up about it last night I started building some of my API dictionary tooling I’ve had in my head for a while, but didn’t have just the right vehicle in mind for delivering at scale. I haven’t had much time for playing with OpenAPI 3.0, or the tooling that has emerged, but I got the bug now. I’m going to prioritize some work in this area, if nothing else for generating some relevant stories here on the blog, and keeping me in tune with what folks are doing. Oh, that reminds me, have you seen what my friend Mike Ralphson (@PermittedSoc) is up to? He is leading the charge when it comes to OpenAPI 3.0 tooling « I recommend keeping an eye on what he is up to on Github.


The US Postal Service Wakes Up To The API Management Opportunity In New Audit

The Office Of Inspector General for the US Postal Service published an audit report on the federal agency's API strategy, which has opened their eyes to the potential of API management, and the direct value it can bring to their customers, and their business. The USPS has some extremely high value APIs that are baked into ecommerce solutions around the country, and has even launched an API management solution recently, but until now has not been actively analyzing and using API usage data to guide any of its business planning decisions.

According to the report, “The Postal Service captures customer API usage data and distributes it to stakeholders outside of the Web Tools team via spreadsheets every month. However, management is not using that data to plan for future API needs. This occurred because management did not agree on which group was responsible for reviewing and making decisions about captured usage data.” I’m sure this is common in other agencies, as APIs are often evolved within IT groups, that can have significant canyons between them and any business units. Data isn’t shared, unless a project specifically designates it to be shared, or leadership directs it, leaving real-time API management data out of reach of those business groups making decisions.

It is good to see another federal agency wake up to the potential of API management, and the awareness it can bring to business groups. It’s not just some technical implementation with logfiles, it is actual business intelligence that can be used to guide the agency forward, and help an agency better serve constituents (customers). The awareness introduced by doing APIs, and then properly managing APIs, analyzing usage, and building an understanding of what is happening, is a journey. It’s a journey that not all federal agencies have even begun (sadly). It is important that other agencies follow the USPS lead, because it is likely you are already gathering valuable data, and just passing it on to external partners like USPS has been doing, without capturing any of the value for yourself. That compounds the budget, and other business challenges you are already facing, when you could be using this data to make better informed decisions, or even more important, establishing new revenue streams from your valuable public sector resources.

While it may seem far fetched at the moment, this API management layer reflects the future of government revenue and tax base. This is how companies in the private sector are generating revenue, and if commercial partners are building solutions on top of public sector data and other digital resources, these government agencies should be able to generate new revenue streams from these partnerships. This is how government works with physical public resources, and there should be no difference when it comes to digital public resources. We just haven’t reached the realization that this is the future of how we make sure government is funded, and has the resources it needs to not just compete in the digital world, but actually innovate as many of us hope it will. It will take many years for federal agencies to get to this point. This is why they need to get started on their API journey, and begin managing their data assets in an organized way as the USPS is beginning to do.

API management has been around for a decade. It isn’t some new concept, and there are plenty of open source solutions available for federal agencies to put to use. All the major cloud platforms have it baked into their operations, making it a commodity, alongside compute, storage, DNS, and the other building blocks of our digital worlds. I’ll be looking for other ways to influence government leadership to light the API fire within federal agencies like the Office of the Inspector General has done at the U.S. Postal Service. It is important that agencies be developing awareness, and making business decisions from the APIs they offer, just like they are doing from their web properties. Something that will set the stage for how the government serves its constituents and customers, and generates the revenue it needs to keep operating, and even possibly leading in the digital evolution of the public sector.


Always Being Prepared For An API Future That May Not Come

I’m just coming out of a sprint for my Human Services Data API (HSDA) work. Throughout the process of gathering feedback across emails, Slack Channels, and Github Issues, and trying to decide where I should be steering this ship, I’m regularly reminded that I’m often preparing for a future that may never come. I’m working real hard to make my API design as future proof as possible, but I find that in many cases I risk leaving folks behind with some of my API design decisions. When it comes to the audience for this API, municipalities and nonprofit organizations, this concern was present with every decision I have been making.

As part of this latest evolution, I took hypermedia and GraphQL off the table, as both areas seem to confuse and muddy the conversation, not help. I was hoping that GraphQL might help some of the requests around query-ability of APIs, and the tendency to load up individual paths with numerous parameters, and enums. I tried to facilitate discussions around unique identifiers, moving things beyond just incremental integers, taking a cue from Twitter and other large providers, but many just deemed these conversations overkill, unnecessary, and bothersome. While I have learned a lot over the last seven years as the API Evangelist, I am regularly reminded that not everyone has been along for the ride, and I need to always bring things back to ground level, even if it means making some cringe-worthy API design choices.

I find as a technologist, I suffer from hopeless futurism. Even though I know better, I still tend to prefer looking at what is next, preparing as much as I can for the future, even at the expense of where we’ve been, and possibly leaving people behind. I can argue until I’m blue in the face regarding the benefits of hypermedia when it comes to supporting clients, and the benefits of GraphQL when it comes to giving API consumers a stronger voice when it comes to querying and getting access to EXACTLY the data they need, but without the proper groundwork, and education, my audience is rarely going to care. APIs are a journey, and I feel like APIs have to take their course, and folks have to be along for the ride. There is just no hurrying this process, no matter how much knowledge about the future I may possess, or how passionate and aggressive I am about why a particular API design decision will matter down the road.

I am working very hard to tame my tech bro futurism fetish, and better understand what the humans here on the ground in the present will need. I’m also working a lot harder to try and figure out how I can incorporate API lessons into my API design and definition work. How can I teach folks about headers, with specific design decisions I’ve made? How can I teach folks about how the client will break with a certain API design approach? As a technologist it is very hard to allow myself the space to make sub-standard API design decisions for the sake of helping an audience learn along the way. I want everything to be right, dammit! However, I’d much rather that my APIs actually get used, and the folks I’m targeting with my designs aren’t turned off by what I’ve delivered because they don’t see the value, or relevance. I’m working hard to not always be a tech boy scout, and always being prepared for a future that may not come.

P.S. I am guessing that all the folks who keep saying I’m so anti-GraphQL will not recognize how much I’m incorporating it into my work, and storytelling, and that this is a POSITIVE story about how GraphQL might have helped. It also still highlights my argument(s) around investment in API education, and not being aggressive when you are pushing GraphQL, as I’ve learned first hand and keep trying to share in my (aggressive) GraphQL posts.


API Education Is Needed But Rarely Prioritized In The Current Environment

I wrote about this in a mean way during my rant week, but I wanted to bring up the topic of education and training when it comes to APIs in a more constructive way this week. Amidst the regular requests I get for API architects, developers, product managers, and evangelists I am reminding many companies that they will often need to hire for these roles internally, training and grooming existing employees, as finding seasoned veterans in any of these areas will prove to be difficult. I wish I had my own API school, where I was helping train waves of qualified employees, but sadly most of the folks with existing skills are employed.

The challenge of investing in API training and education doesn’t stop with your immediate team, this is something that needs to occur in most cases company-wide. I’ve talked with several groups about developing internal workshops, and training, but I find most of them aren’t truly interested in making the investment needed, and are often looking for some free content, or someone they can get to come and speak for free or very low pay. It shows me that many companies aren’t quite ready to make the investment it will take to ensure their staff are ready for the work that lies ahead, and don’t value making sure their workers have the skills they’ll need to be successful in the API-driven world we are finding ourselves in.

This isn’t something I’ve just encountered at SMB, SME, and the enterprise. Government agencies are always cash strapped, under-resourced, and lacking in the skills needed for the next wave. This is also a problem I’m seeing across startups. I’ve had discussions with startup groups selling tools and services to the API space, who are hitting significant challenges once they start selling their solutions outside the mainstream tech ecosystem. Many folks at large companies, small businesses, and government agencies just don’t have some of the basics when it comes to the web, and how modern approaches to APIs work. Before these folks can become active customers, the startups are going to need to make some investment in getting them up to speed, something I find the investors behind startups are rarely keen on spending money on.

I am working on a workshop series for a health care group in October, and I’m working real hard to develop some structure to help make sure I cover the fundamentals of why APIs are important, beginning with the web and HTTP. I’m trying to show the space from not just the API provider perspective, but also from the API consumer view, because everyone should be both. I’m also working on more stories to help educate why companies, investors, institutions, and government agencies should be investing more into the area of API education. Not just understanding how to provide and consume APIs, but how to secure them, understand that they are driving everything mobile, and can help securely open up their operations to allow for assistance from 3rd party providers. Humans will be the number one challenge you face when it comes to doing APIs in your organization, so make sure you are investing wisely. I’d love to hear more about the challenges you are facing, or how you are finding success when it comes to educating your staff or customers about everything API.


Making Sure Definitions In OpenAPI Are Robust For Use In Schema Validation

I’m working on v1.2 of my Human Services Data API (HSDA), and with this wave of work I’m making sure there is a functional API for validating all JSON that gets posted as the body in requests, as well as when it gets returned as part of API responses. To drive my validator I’m using JSON Schema, which I already have defined as part of the OpenAPI definition for the project. I want to reuse, and build on top of this work, but I found the definitions for my OpenAPI to be pretty deficient in many of the details I need to validate the request and response bodies of my HSDA APIs.

The process has shown me the importance of making sure the definitions portion of my OpenAPIs is as robust as I can make it, possessing required fields, defaults, regex patterns, and other details I’m going to need to make sure my schema validator is as robust as possible. I’m entering the phase of this project where vendors and implementors are looking for guidance on whether or not their schema are HSDS/A compliant, and whether they are supporting the fields necessary to get a stamp of approval. The schema validator is essential to this, but the new validation API I’ve created is only as good as the JSON Schema that I’m using as part of its engine.
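
Here is a bare bones sketch of the validation step I am describing, using Python and the jsonschema library against a schema pulled out of the OpenAPI. The organization example, the file name, and the library choice are all just illustrations of the approach, not the actual HSDA validator.

```python
# A minimal sketch, assuming the jsonschema package is installed, of validating a
# request body against a schema extracted from the OpenAPI. "organization.json" and
# the submitted body are placeholders.
import json

from jsonschema import Draft4Validator  # pip install jsonschema

with open("organization.json") as f:
    schema = json.load(f)

submitted_body = {"name": "Example Community Center"}  # what arrived in the request body

validator = Draft4Validator(schema)
for error in validator.iter_errors(submitted_body):
    # Without required fields, patterns, and other details in the schema, this loop
    # stays quiet and the validator is not doing much work, which is the point above.
    path = "/".join(str(p) for p in error.path) or "(root)"
    print(path + " - " + error.message)
```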

I come across a number of OpenAPIs in the wild which do not possess schema definitions, and references for each API. These API providers are only describing enough of the surface area of their API to be able to generate API documentation using Swagger UI. This is something I’ve also been guilty of in the past, where I would only define the surface area of the API, just to get what I needed for my API discovery needs. Over the last year, I’ve spent more time making sure the definitions portion of the OpenAPI is also present, but it isn’t until now that I’ve been making sure the fine details of the schema are present. I need this to be in place for validation, which will be used across monitoring, testing, and other stops along the API life cycle I will be delivering as part of this work.

Honestly, my JSON Schema chops were not up to snuff for this work, something I’ve struggled with making time for this summer. However, I feel like I’m finally getting there. I’m beyond the basics of JSON Schema validation, and have my simple API validator in place. I just need to make sure I’m always investing the time required to develop robust JSON Schema for all my APIs, so that the validator provides rich responses with each schema validation. I’m thinking I will build a tool for helping identify what is lacking in the definitions of my OpenAPIs, pointing out the common things I’m missing, and run it before I ever consider actually validating the schema used in the API request body, or the API response body, for the projects I’m working on. Always making sure the schema and API definitions are harmonized, and speaking the same language, is essential for all of this human services API effort to work properly.


Version 1.2 Draft Of The Human Services Data API

I have been working on the next version of the Human Services Data API (HSDA) OpenAPI lately, taking all the comments from the Github repository, and pushing forward the specification as far as I can with the minor v1.2 release. I have the Github issues organized by v1.2, and have invested time moving forward the OpenAPI for the project, as well as my demo site for the effort.

With this release I am focusing on six main areas, based upon feedback from the group, and what makes sense to move forward without any breaking changes:

  • /complete - Add a /complete path to each core resource, allowing access to all sub-resources.
  • query - Shifting query parameter to be array, allowing for multiple fields to be queried.
  • content negotiation - Allow for JSON, XML, and CSV responses.
  • sorting - Adding sorting.
  • pagination - Adding pagination.
  • status codes - Add more status codes.

These were the main concerns regarding what was missing from the last release, and were the top items that made sense to push forward this round. I’ve made some other major shifts to the project, but before I go through those, I wanted to provide some more insight into these v1.2 changes to the core HSDA specification, shedding some light on why I did what I did, while looking to make the API interface as usable as possible for HSDA implementations, vendors servicing the space, as well as developers looking to build web, mobile, voice, and other applications on top of any APIs that support the specification.

Complete Getting access to the entire surface area of the core resources (organizations, locations, and services), as well as all the sub-resources (phones, physical address, mailing address, etc.) was the most voiced request from v1.1. I had laid out several options for accessing the entire surface area of HSDA resources, but folks seemed focused on a single set of API paths to accomplish what they needed from a vendor and implementation perspective. It is my job to keep the API serving all types of integrations and use cases, but I definitely couldn’t ignore providing a single set of paths for GET, POST, and PUT of organizations, locations, and services.

It was important to me to keep the core resources accessible in a flat, one dimensional, and machine readable way, so API consumers could quickly import them into spreadsheets as CSV, make lists of addresses, or do lookups and updates on a single phone number. I didn’t want to abandon these use cases, or introduce breaking changes, so I introduced a /complete path for all three of the core resources (organizations, locations, and services). These paths allow for GET, POST, and PUT requests of multi-dimensional JSON objects, accessing the core resources, as well as the sub-resources for any data stored within the API. These paths should accommodate the heavy system to system vendor and implementation usage that was voiced as part of the feedback process, while still preserving the other individual use cases.
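
To make the distinction a little more concrete, here is a sketch of the difference between the flat records and what the new /complete paths return. The field names are illustrative, not the official HSDS/HSDA schema.

```python
# A sketch of the two representations side by side -- field names are placeholders.
flat_organization = {  # what /organizations returns, easy to dump to CSV
    "id": 1,
    "name": "Example Community Services",
    "email": "info@example.org",
}

complete_organization = {  # what /organizations/complete returns, sub-resources included
    "id": 1,
    "name": "Example Community Services",
    "email": "info@example.org",
    "phones": [{"number": "555-555-1234", "type": "voice"}],
    "physical_address": {"address_1": "123 Main St", "city": "Portland", "state_province": "OR"},
    "services": [{"id": 10, "name": "Food Pantry"}],
}
```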

Query With v1.1 the query parameter for making API requests on organizations, locations, and services was simply a string. At the request of the community we’ve made this an array, allowing you to specify multiple fields and values as part of your query. To ensure I didn’t introduce a breaking change, I did not alter the existing query parameter, instead I added a new parameter called queries, which allows you to get more detailed with your queries. Now there is a simple search, and a more robust multi-field search, giving more control to API consumers. In the future we will explore more query power, but this might reside in the search API portion of this conversation which we’ll address later.
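
To make the query versus queries distinction a little more concrete, here is a rough sketch of how the two parameters might sit side by side, written as a Python structure in OpenAPI 2.0 parameter style. The field:value example strings are just illustrations, not the official serialization.

```python
# A sketch of the two search parameters -- the original query string stays untouched,
# and the new queries parameter accepts an array of field level queries.
parameters = [
    {
        "name": "query",  # the original simple string search, unchanged
        "in": "query",
        "type": "string",
    },
    {
        "name": "queries",  # new in v1.2, allowing multiple field level queries
        "in": "query",
        "type": "array",
        "items": {"type": "string"},  # e.g. ["name:food pantry", "city:portland"]
        "collectionFormat": "multi",
    },
]
```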

Content Negotiation The Human Services Data Specification (HSDS) is a CSV format. While HSDA has focused on using JSON as part of all API requests and responses, I made sure that v1.1 stayed true to the original HSDS format, making the entire surface area accessible via simple API paths, and returning one dimensional responses that can still be returned as CSV. So with v1.2 I allowed for the negotiation of either CSV, XML, or JSON content types for all the primary HSDA paths. The /complete paths do not support CSV, as they provide access to resources, and sub-resources that cannot be returned as CSV, but the rest of the surface allows API consumers to negotiate the format they desire.

Keeping the surface area of the entire HSDS format accessible via simple API paths, without authentication, and providing the option of returning data in CSV, opens up the API to be used to export to Excel and Google docs in a single step. This will open up the ability to extract core resources like organizations, locations, services, as well as sub-resources such as phone and address lists into CSV format, which can then easily be used by a much wider audience than just developers, and other common API consumers.
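
To show what this looks like from the server side, here is a minimal sketch of the negotiation logic, using Python and Flask purely as an illustration. The reference HSDA implementations are PHP/MySQL, so none of this is the actual code, the example records are made up, and I am leaving XML out for brevity.

```python
# A minimal sketch of content negotiation on a flat HSDA style resource, returning
# CSV or JSON depending on the Accept header.
import csv
import io
import json

from flask import Flask, Response, request  # pip install flask

app = Flask(__name__)

ORGANIZATIONS = [
    {"id": 1, "name": "Example Food Bank"},
    {"id": 2, "name": "Example Shelter"},
]


@app.route("/organizations")
def organizations():
    best = request.accept_mimetypes.best_match(["application/json", "text/csv"])
    if best == "text/csv":
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=["id", "name"])
        writer.writeheader()
        writer.writerows(ORGANIZATIONS)
        return Response(buffer.getvalue(), mimetype="text/csv")
    return Response(json.dumps(ORGANIZATIONS), mimetype="application/json")
```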

Sorting There was no ability to sort any data within HSDA responses in previous versions. With version 1.2 you can now provide a sort_by parameter which determines the field to sort by, and order, which determines whether to sort in ascending (asc), or descending (desc) order. I had planned to add the ability to sort by what has changed recently, but I ran out of time, and will make sure it gets into future releases. It was important that we at least get basic sorting features in there for this release, and we can add more aspects to this dimension in the near future.

Pagination In previous versions of HSDA you could pass in a page parameter to specify which page to return, and per_page to determine how many results to return per page. However, there was no data returned telling you which page you were on, the count per page, or anything else about what is next or previous as part of each result. Several solutions to this were presented as part of the feedback process for v1.0 and v1.1, but not much feedback was given on the subject. Again, I was looking to introduce this feature without any breaking changes. With the flat, one dimensional array structure of the HSDA response it would be difficult to add in any envelope, or collection for returning pagination data, so I set out to find examples of how it can be done without disrupting the current response structure.

After looking at Github and a handful of other approaches I opted to add a custom header called x-pagination which provides a JSON object containing total_pages, first_page, last_page, previous_page, and next_page with each GET response, allowing consumers to easily navigate the pagination for large API responses. This approach does not introduce any breaking changes, while still providing all the data needed by API consumers to navigate the surface area of any HSDA implementation, across organizations, locations, and services. I do have some concerns about developers being HTTP header aware, and knowing how to access headers, but it is something that, with a little bit of education, can open a whole new world to them–something any API developer should have in their toolbox.
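
Here is a quick sketch of what this looks like from the consumer side, which is really the education piece I am worried about. The base URL and page numbers are placeholders, and I am using Python just to show how little it takes to read the header.

```python
# A minimal sketch of reading the x-pagination header from an HSDA style response.
# The URL is a placeholder for an actual implementation.
import json

import requests  # pip install requests

response = requests.get(
    "https://api.example.com/locations", params={"page": 3, "per_page": 25}
)
records = response.json()

# The pagination details ride along in the x-pagination header as a JSON object
pagination = json.loads(response.headers["x-pagination"])
if pagination.get("next_page"):
    print("next page to fetch:", pagination["next_page"])
```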

Status Codes One area where HSDA v1.0 and v1.1 were deficient was HTTP status code guidance. I had this slated for v1.3, but I needed to know when I hit an error in some of the validation, documentation, and other tooling I had been working on. So I took this opportunity to add 403 and 500 HTTP status codes to all API responses. All the GET paths are publicly available, but with this edition I’ve introduced an API management service, allowing me to secure all POST, PUT, and DELETE paths, opening them up to multiple users in a secure way. I didn’t want all other users to simply get a 404, so I added 403 guidance. I will be adding more specific HTTP status code guidance, and error response schema in future versions.

Additional Services That was the majority of the features involved with the v1.2 release. However there were other aspects of HSDA that were left out of v1.1, like meta, search, and taxonomy. Also, as part of the v1.1 feedback process there were other features put forward that will be needed as part of future releases. All of this had the potential to add unnecessary complexity to the core set of resources, making the specification bloated, and things even more complex than they already are. To help alleviate these challenges I’ve started breaking future APIs into separate projects, or services. Here are the additional seven services I’ve added:

  • HSDA Search - A service dedicated to search across HSDA implementations.
  • HSDA Bulk - A service dedicated to managing bulk operations across HSDA implementations.
  • HSDA Taxonomy - A service dedicated to working with taxonomy across HSDA implementations.
  • HSDA Orchestration - A service dedicated to handling orchestration, evented infrastructure, and webhooks across HSDA implementations.
  • HSDA Meta - A service dedicated to handling meta data and logging across HSDA implementations.
  • HSDA Management - A service dedicated to introducing an API management layer across HSDA implementations.
  • HSDA Utility - A service dedicated to housing all utility APIs across HSDA implementations.

All of these services are meant to augment and complement the core set of HSDA resources, and sub-resources, without adding unnecessary complexity to them. These projects are meant to act as a buffet of services that human service providers can choose from, or they can opt to just stick with the basics. I’ve reset the version for each of these projects to v1.0, and will be moving them forward at their own pace, independent of what the core HSDA specification is doing. I’ve introduced separate OpenAPI definitions for each project, and I am pushing forward independent code repositories for delivering PHP/MySQL implementations for each service area. Like the other features for HSDA v1.2 above, I wanted to take a moment and explain the logic behind each of these services.

HSDA Search Early on in the release of v1.1 the question of search quickly began to muddy the conversation. I saw this was going to be a challenge. I saw the community's desire to deliver features via query parameters, and knew that search was going to introduce a number of parameters beyond what was needed to just manage the core resources (organizations, locations, and services). We also started getting some great feedback from vendors in the Github issues for search features that went well beyond what was needed for individual resources, and often spanned all the core resources. Search had been a separate path in the core HSDA specification, and with this version I decided to break it off the core specification and put it into its own project, where it can become a first class citizen.

The current HSDA Search v1.0 specification got all of the query, queries, sorting, and pagination features that the core HSDA resources received in the v1.2 release, but once they were added it was broken off into its own service. Technically this is a breaking change, but I think it is one the community will support, as it will add some valuable search features. Immediately, I added a set of collections that spanned all core resources, including organizations, locations, and services, and I took the /complete features I had just added to HSDA and introduced them to HSDA Search. This makes all the search results as rich and complete as possible, providing access to the entire surface area. I feel like ultimately this is where we will be experimenting with allowing consumers to restrict, or expand the results they are looking to get back, while keeping core resources cacheable, available at known paths.

HSDA Bulk One of the conversations that led to the introduction of /complete paths for all core resources in v1.2 was the needs of vendors, and the heavy lifting requirements of individual implementations. There was the need to load large volumes of data into systems, as well as between disparate systems. There was talk about the load this puts on systems that drive critical websites and other infrastructure, resulting in it being something done during off-peak hours. Some of this work can be done via the primary HSDA /complete paths, but it was clear that a separate set of bulk APIs would be needed to help handle the load, and meet the unique needs of system integrators, which are different than what web, mobile, voice, and other application developers would be building.

HSDA Bulk reflects the HSDA /complete paths for organizations, locations, and services, but these paths accept the posting of arrays of objects, complete with all their sub-objects. Instead of directly loading these into the main database upon POSTing, they are entered individually into a jobs database, where they can be queried, and run independently on a schedule, or based upon specific events. HSDA Bulk will work in conjunction with HSDA, HSDA Meta, HSDA Management, and HSDA Orchestration, or it can be run independently, based upon custom criteria. The goal is to provide a way to handle the bulk needs of HSDA implementations which can be deployed, scaled, and operated independently of any core HSDA implementation, limiting the impact on core websites, mobile applications, and other applications.

HSDA Taxonomy The taxonomy portion of HSDA got separated as part of the v1.1 release. I quickly saw it needed more thinking regarding the handling of multiple taxonomies, as well as allowing for the accessing of services beyond core HSDA resource management, or even search. HSDA Taxonomy is now its own project, and can be deployed independently of any HSDA implementation, providing another doorway for querying, browsing, and getting at services based upon any supported taxonomy. The v1.0 version of HSDA Taxonomy will support AIRS and Open Eligibility, but will be designed to support any other taxonomy, and allow for customization by individual implementations, while maintaining a common API for usage across multiple HSDA implementations.

HSDA Orchestration Throughout the HSDA v1.1 discussion I kept hearing about the need for notifying users of changes to HSDA data, and the need to push and ping external systems with information. As part of the build up to v1.2 I conducted a significant amount of research into the event and webhook implementations of leading API providers like Box, Twilio, Stripe, and others. I’ve taken this research and created a v1.0 draft for an HSDA Orchestration solution, working to alleviate a wide variety of needs for handling events that occur across HSDA implementations, engaging with external implementations, and making HSDA a two-way street.

HSDA Orchestration will potentially work with HSDA Meta, HSDA Bulk, HSDA Management, and of course, HSDA core to bring implementations alive. A number of events will be defined around common HSDA tasks that occur, like the POSTing of new organizations or locations, the updating of individual records, or possibly the submission of bulk updates that need running. Every API call within an HSDA implementation can now be tracked using HSDA Meta, and HSDA Orchestration will monitor this, and allow API consumers to subscribe to these events via webhooks, and receive a ping when an event occurs, or receive a fat ping, which pushes the data associated with an event to an external URL. HSDA Orchestration will handle all the monitoring, tracking, notification, and syncing needed between HSDA implementations via a separate, independent service that works with the HSDA stack.

HSDA Meta HSDA Meta is another feature that got set aside with the v1.1 release. With v1.2 I’ve set it up as its own project. Now each API call made to any core HSDA path will be added to the HSDA Meta system, recording the service, path, verb, parameters, and body of each request. HSDA Meta is designed to provide a logging solution, and eventually a transactional layer that can be rolled forward or backwards, and is intended, as stated before, to work with HSDA Bulk and HSDA Orchestration, while leveraging HSDA Management for access, and auditability of all activity.

HSDA Management HSDA is in need of an API management layer. Many of the paths available allow for reading, writing, and deleting of data. The original HSDA v1.0 and v1.1 only allowed for a single administrative key for accessing all POST, PUT, and DELETE API paths. With the v1.2 release I’ve begun this separate project for allowing the adding, authenticating, and managing of API users who are looking to get at HSDA data. The current implementation allows for many users, with each user having access to one, or many of the services, and supporting API paths. It is up to each implementation to decide which users get access to which services. In future releases we will add the notion of access plans, allowing for trusted groups to be established, including partners, and internal consumers. The goal with this is to identify a common interface for HSDA implementations, which behind the scenes could be any number of existing API management implementations.

HSDA Utility Last, I needed a place to put any utility APIs I needed to help manage HSDA implementations. Right now there are two core sets of APIs. One for managing which services are available across an HSDA implementation, and another for validating HSDA schema, and eventually the APIs themselves. I will be putting any other utility API within this service area. It will become the catch-all for any API that doesn’t fit into its own service area.

HSDA Specification That is it. That is the bulk of the work I’ve done for the v1.2 release of HSDA. I’m pretty happy with how things have worked out. I feel there is a lot more coherency across the specification now, and the service mindset will allow for much more constructive conversations across the projects. I have updated the HSDA specification site with all eight of the OpenAPIs, publishing a separate documentation page for each one. Each page provides an HTML view for each service, as well as links to the YAML version of the OpenAPI, the demo website, and the Github issues for each project. The next step is to drive the feedback and comments via the Github issues, include anything that is missing, push v1.2 out the door, and begin working on v1.3, as well as v1.1 for the seven other projects that were added with this release.

HSDA Implementation I do not ever feel an OpenAPI is ready for prime time until I have a working version of it. I have created working versions of all eight HSDA implementations. The core HSDA is the most complete and robust, with HSDA Search, HSDA Bulk, HSDA Meta, HSDA Management, HSDA Utility, HSDA Taxonomy, and HSDA Orchestration following up in that order. They are v1.0 draft implementations, and for the most part are working, but have not been hardened yet. I would feel comfortable putting HSDA, HSDA Management, and HSDA Meta into a real world implementation in the coming weeks, something I will actually be doing with two separate implementations–using real world projects to harden them.

I have updated the HSDA demo portal to contain all eight projects, and I have leveraged Github authentication as the HSDA Management layer, allowing anyone to sign up and use their Github account to access each API. Each API call is logged, and I can easily revoke access to any account, or push reset on the demo as needed. Now that I have a working copy, I will be publishing a development version of the portal, so that I do not break the demo in the future, and can move forward with releases a little more gracefully than I did with this one. I will be maturing all eight implementations, and offering them up as official Adopta.Agency products for deployment on AWS in the near future.

Looking To The Future This release of HSDA and the supporting code is all about looking towards the future. I’ve separated things out into independent services to handle what is next, and I’ve re-engineered my PHP/MySQL implementations to prepare for the future. Each of the eight solutions is 100% OpenAPI driven. The database and server side code is all OpenAPI driven. The portal, documentation, and schema validation are all OpenAPI driven. Next, I’m setting up monitoring, and testing, that will all be OpenAPI driven.

I also have two other services I did not include in this story because they are meant for the future. One is HSDA Custom, which allows for the addition of any field, or collection to the core HSDA implementation, accommodating the needs of individual providers. This is only possible because of OpenAPI, and each custom field will be added as x-[field], keeping things validating. The second one I’m calling HSDA Aggregation, which will be my first attempt to sync, aggregate, and migrate data across many HSDA implementations. Now that I have the base, I’m going to setup five separate demo implementations, and begin to work on robust sets of test data, which I can use to push forward an aggregate and federated version of HSDA.

The OpenAPI core for my HSDA work has allowed me to do some interesting things with how the APIs are delivered, as well as with much of the supporting tooling. This approach to delivering HSDA implementations can be applied to any API. I will be taking my list of several hundred Schema.org OpenAPIs, and building a catalog of API definitions that can be easily deployed on AWS. I’m not going to do this as an automated software service, but I will be hand deploying solutions for clients using this approach, providing streamlined, well-defined, yet hand-crafted API implementations for any possible situation. This was born out of hearing from HSDA providers about how they begin storing all types of data in the organizations, locations, and services data stores–things that really should be in a separate system. Eventually I’ll be suggesting other HSDA projects that assist providers with events, messaging, and other common solutions beyond just organizations, locations, and services.

Anyways, that concludes this sprint. I will be doing more work throughout the week, and we have a three day hackathon this week, so I’m looking forward to moving things forward more, but for right now I’m pretty happy with what I’ve achieved.


When I Look At The Landscape Of API Services & Tooling I See The Future Of Technical Debt

There are a number of API service and tooling providers that I still get excited about in the space. 3Scale, Restlet, Runscope, and Tyk - to begin with my sponsors! ;-) ;-) ;-) However, there are others like Postman, APIMATIC, Materia, OAuth.io, Stoplight, Apicurio, API Platform, API Umbrella, Github, API Science, and others that keep me thinking good thoughts about the things that API service providers are doing. However, I also see a lot of services and tooling that are simply playing the startup game, and have more to do with investment than they do with APIs.

It is these services and tools I see as the next generation of technical debt. When you bundle in vendors who are chasing trends as part of their investment and exit strategy, and really don’t care about truly helping you solve your technical and business challenges, you are just multiplying your existing problems. These types of vendors only want you as an active customer, preferably locked into a contract, with their services and tools baked into your operations. You know what all of this leads to? Technical debt. When you buy into the vendor stories, and jump on trends, without thinking through the consequences of your actions, and the long term effects on your road map, you end up with a significant amount of technical debt down the road.

I have taken a number of IT and developer leadership positions in my career, where I had to come in and clean up the mess from the previous guy (always guys). Nobody was questioning the decisions being made, and someone was allowed to make purchasing and technology decisions that ended up just taking things in a bad direction. That vendor we bought into was acquired, and now that tool we depend on is part of a larger enterprise suite we really don’t need, but because we can’t unwind it from our systems, we are forced to keep paying the subscription. We went for that trendy way of doing things, decoupling, automating, assembling a framework, offshoring, outsourcing, and whatever came along with the current technological season, and investment cycle. We didn’t invest in internal capacity, or leveraging the web and standards, and now we are locked into this proprietary way of getting things done.

Ok, I get it. It is hard to see what is a trend, and which vendors are full of shit. I mean, they took us out to lunch, and were real nice guys. Right? They spoke all the buzz words, and seemed to really get the problems we faced keeping things up and running. I’ve made many bad decisions when it came to leading the IT or developer charge (I used to program in ColdFusion), but most of the time these were decisions that were handed to me, not ones that I made on my own (1/3 were my bad decisions ;-). These experiences have made me very skeptical about which technology I invest in, and the world of APIs has taken this to new levels for me–I trust nobody. This is the default stance I take now. I won’t adopt something new, or change the way I do things, if there is no way to easily recover from, evolve, or mitigate the decision. While the majority of these lessons have come from unreliable APIs, I still see many folks doubling down on API services and tooling that is going to burn them down the road. I just don’t think some of us are being honest with ourselves about how all this technical debt occurs in the first place, acting like it is all somehow just inevitable.


I Wish I Had Time To Tell That API Story

If you have followed my work in the API space you know that I consider myself an API storyteller before I would ever call myself an API evangelist, architect, or any of the other skills I bring to the table. Telling stories about what folks are up to in the space is the most important thing to me, and I feel it is the most common thing people stumble across, and end up associating with my brand. You hear me talk regularly about how important stories are, and how this whole API thing is only a thing because of stories. Really, telling stories is the most important thing you should be doing if you are an API provider or API service provider, and something you need to be prioritizing.

I was talking with a friend, and client, the other day about their API operations, and after they told me a great story about the impact their APIs were making I said, “you should tell that story”! To which they responded, “I wish I had time to tell that story, but I don’t. My boss doesn’t prioritize me spending time on telling stories about what we are doing.” ;-( It just broke my heart. I get really, really busy during the week with phone calls, social media, and other project related activity. However, I will always stop what I’m doing and write 3-5 blog posts for API Evangelist about what I’m doing, and what I’m seeing. I know many of the stories are mundane and probably pretty boring, but they are exercise for me, of my ideas, my words, and how I communicate with other people.

The way that enterprise groups and startups operate is something I’m very familiar with. I’ve been scolded by many bosses, and told not to read or write on my blog. This is one of the reasons I don’t work in government anymore, or in the enterprise, as it would KILL ME to not be able to tell stories. I need storytelling to do what I do. To work through ideas. It is how I learn from others. Why would I want to do something that I can’t tell others about? Why would I not prioritize the cool things my clients are doing with my APIs? Sure, there are some classified, and sensitive situations where you definitely would not, but most of the reasons I hear for not telling stories publicly about the cool things you are doing are complete bullshit. I’m sorry, but they are. Even if you have to package it as a white paper or case study, you should be putting this down for others to learn from.

When you find yourself telling your creative side (or me) that you wish you had time to tell that story, you should consider that a canary in the coal mine. A sign that there are other illnesses going on. Sure, once or twice is fine, but if this becomes a sustained thing, or worse–you stop wanting to tell stories at all, then you should be looking for a new gig. You just had your mojo killed. Nobody deserves that. No employer should kill their employees' storytelling mojo. Even if you are all business, telling stories is essential to making things work. Press releases are stories. Ok, they are usually a very sad, pathetic story, but they are a story. Your company blog should be active. Your personal blog should be active. Go check out your personal blog, when was the last time you wrote something you were passionate about? If it was more than a year ago, your employer has put you in a box, and is looking to keep you there.

Photo Credit: The Boyhood of Raleigh by Sir John Everett Millais, oil on canvas, 1870. A seafarer tells the young Sir Walter Raleigh and his brother the story of what happened out at sea, from the Wikipedia entry for storytelling.


Responding To A Webhook

There are many details of doing APIs you don’t think about until you either a) gain the experience from doing APIs, or b) learn from the API providers already in the space. When you are just getting going with your API efforts you pretty much have to rely on b), unless you have the resources to hire a team with existing API experience. Which many of my readers will not have the luxury to do, so they need as much help learning from the pioneers who came first, wherever they can find it.

One of the API pioneers you should be learning from is the payment API provider Stripe. I’ve been studying their approach to webhooks lately, and I’ve managed to extract a number of interesting nuggets I will be sharing in separate blog posts. Today’s topic is responding to a webhook, for which Stripe provides the following guidance:

To acknowledge receipt of a webhook, your endpoint should return a 2xx HTTP status code. Any other information returned in the request headers or request body is ignored. All response codes outside this range, including 3xx codes, will indicate to Stripe that you did not receive the webhook. This does mean that a URL redirection or a “Not Modified” response will be treated as a failure.

To be honest, I had never thought I should be responding to the webhooks I’ve set up. I treated them like a UDP request and once they went out the door and I processed them, I didn’t need to respond at all. How rude! I hadn’t seen any of my existing API providers offer up guidance in this area, or more likely I never noticed it. This is one of the reasons I like going through API providers’ documentation when I’m not integrating with them, because I tend to have a different eye for what is going on.
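
To make this concrete, acknowledging a webhook on the receiving end does not take much. Here is a hypothetical PHP sketch (not Stripe code, just an illustration) of an endpoint that reads the payload, stashes it for asynchronous processing, and returns a 2xx so the provider knows it arrived:

```php
<?php
// webhook.php - a hypothetical endpoint for receiving webhooks.

// Read the raw JSON payload from the request body.
$payload = file_get_contents('php://input');
$event   = json_decode($payload, true);

// If the payload is not valid JSON, let the provider know we could not use it.
if ($event === null) {
    http_response_code(400);
    exit;
}

// Do as little work as possible here--store the event for asynchronous
// processing so the response goes back quickly.
file_put_contents(__DIR__ . '/queue/' . uniqid('event_', true) . '.json', $payload);

// Acknowledge receipt with a 2xx status code, which is all the provider
// is looking for--any headers or body returned are ignored.
http_response_code(200);
echo json_encode(['received' => true]);
```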

Anyways, I’m adding webhook responses to my list of building blocks for my webhook research, and will be including it in future guidance. It seems like a pretty significant thing to help API consumers deliver, completing the webhook loop, and it injects more HTTP status code awareness and literacy into the conversation, which I think is always a good thing for everyone. Thinking about how we are signaling back and forth in the API game is always an important part of the equation.


Cloud Marketplace Becoming The New Wholesale API Discovery Platform

I’m keeping an eye on the AWS Marketplace, as well as what Azure and Google are up to, looking for growing signs of anything API. I’d have to say that, while Azure is a close second, AWS is growing faster when it comes to the availability of APIs in their marketplace. What I find interesting about this growth is it isn’t just about the cloud, it is about wholesale APIs, and as it grows it quickly becomes about API discovery as well.

The API conversation on AWS Marketplace has for a while been dominated by API service providers, and specifically the API management providers who have pioneered the space:

After management, we see some of the familiar faces from the API space doing API aggregation, database to API deployment, security, integration platform as a service (iPaaS), real time, logging, authentication, and monitoring with Runscope.

All of this rounds off the API lifecycle, providing a growing number of tools that API providers can deploy into their existing AWS infrastructure to help manage API operations. This is how API providers should be operating, offering retail SaaS versions of their APIs, but also cloud deployable, wholesale versions of their offerings that run in any cloud, not just AWS.

The portion of this aspect of API operations that is capturing my attention is that individual API providers are moving to offer their APIs up via the AWS Marketplace, moving things beyond just API service providers selling their tools to the space. Most notable are the API rockstars from the space:

After these well known API providers there are a handful of other companies offering up wholesale editions of their APIs, so that potential customers can bake them into their existing infrastructure, alongside their own APIs, or possibly other 3rd party APIs.

These APIs offer a variety of services, but at a quick glance I noticed location, machine learning, video editing, PDFs, healthcare, payments, SMS, and other API driven solutions. It is a pretty impressive start to what I see as the future of API discovery and deployment, as well as every other stop along the lifecycle, with all the API service providers offering their warez in the marketplace.

I’m going to set up a monitoring script to alert me of any new API focused additions to the AWS marketplace, using, of course, the AWS Marketplace API. I’ve seen enough growth here to warrant the extra work, and added monitoring channel. I’m feeling like this will grow beyond my earlier thoughts about wholesale API deployment, potentially pushing forward the API discovery conversation, and changing how we will be finding the APIs we use across our infrastructure. I will also keep an eye on Azure and Google in this area, as well as startup players like Algorithmia who are specializing in areas like machine learning and artificial intelligence.


Automatically Generating OpenAPI From A YAML Dataset Using Jekyll

I was brainstorming with Shelby Switzer (@switzerly) yesterday around potential projects for upcoming events we are attending, looking for interesting ideas we can push forward, and one of the ideas we settled on was automatically generating OpenAPIs from any open data set. We aren’t just looking for some code to do this, we are looking for a forkable, reusable way of doing this that anyone could potentially put to work making open data more accessible. It’s an interesting idea that I think could have legs, and complement some of the existing projects I’m tackling, and would help folks make their open data more usable.

To develop a proof of concept I took one of my existing projects for publishing an API integration page within the developer portal of API providers, and replaced the hand crafted OpenAPI with a dynamic one. The project is driven from a single YAML data file, which I manage and publish using Google Sheets, and already had a static API and OpenAPI documentation, making it a perfect proof of concept. As I said, the OpenAPI is currently static YAML, so I got to work making it dynamically driven from the YAML data store. The integrations.yaml data store has eight fields, which I had published as four separate API paths, depending on which category each entry is in. I was able to assemble the OpenAPI using a handful of variables already in the config.yaml for the project, but the rest I was able to generate by mounting the integrations.yaml, dynamically identifying the fields and the field types, and then generating the API paths, and schema definitions needed in the OpenAPI.

It’s totally hacky at the moment, and just a proof of concept, but it works. I’m using the dynamically generated OpenAPI to drive the Swagger UI documentation on the project. I’m not sure why I hadn’t thought of this before, but this is why I spend time hanging with smart folks like Shelby, who ask good questions, and are curious about pushing forward concepts like this. Liquid, the templating language used by Jekyll to deliver HTML in Github driven projects like this, is very limiting, providing some serious constraints when it comes to delivering tools like this. As I get stronger in my knowledge of it, and push the boundaries of what it can do, I’m able to do some pretty interesting things on top of YAML and JSON data stored on Github, within Jekyll sites like this. It can be pretty hacky, and would make many programmers cringe, but I like it.

While the idea needs a lot more work, it provides an interesting seed for how OpenAPI can be generated from a single (or multiple) open data file in CSV, JSON, or YAML–which Jekyll speaks natively. The possibility of committing open data files into a Github repo and having OpenAPI, schema, documentation, and even UI elements automatically generated is pretty huge. This approach to making open data accessible holds a significant amount of potential when it comes to making the open data more discoverable, accessible, forkable, and reusable–which all open data should be by default. I will keep pushing the idea forward, and see where Shelby takes it, and report back here when I have anything more to share.


Why I Like A Service Mindset Over A Resource Focus When It Comes To APIs

I am currently crafting a set of services as part of my Human Services Data API (HSDA) work. The core set of services for organizations, locations, and services are grouped together as a single service, as this is what I was handed, but all the additional APIs I introduce will be bundled as a separate set of individual services. Over the last couple of weeks I’ve introduced seven new services, with a handful more coming in the near future. I’m enjoying this way of focusing on services, over the legacy way that is very resource focused, as I feel like it lets me step back and look at the big picture.

When I was defining the core API for this work I was very centered on the resources I was making available (organization, locations, and services), but once I took on a service mindset I began to see a number of things I was missing. With each service I find myself thinking about the full life cycle, not just the APIs that deliver the service. I’m thinking about the easy ones like design, deployment, and management, but I’m also thinking about monitoring, testing, and security. Then I’m delivering documentation, support, communications, and thinking about my monetization strategy, and access plans. I’m not just doing this once, I am thinking about it in the context of each individual service, as well as across all of them, taking care of the business of the services I’m delivering, not just the technical.

While some folks I talk to look at some of this as repeat work across my projects, I just see them as common patterns that I should be reusing, refining, and delivering in consistent ways. I’m thinking about delivering the technology in a consistent way, and the operational side as well, but I’m also beginning to think about education, training, and how I can help folks on the provider and consumer side of things learn how things are working. I’m not just doing the technical heavy lifting to deliver APIs and then walking away, I’m bundling each service with what is needed to be valuable and successful as an actual service, that is API driven from start to finish. The service is accessible via an API, but it is also delivered, managed, and supported using APIs–everything has an API.

The Human Services Data APIs (HSDA) I am delivering aren’t just a single API, or set of services. They are an open source set of services that I’m putting out there for others to adopt and deliver as part of their own operations. I don’t want these to just be plug and play APIs, I want them to be plug and play services that deliver the information people need to find vital services in their community. Thinking of my APIs as services, and breaking them up into independent microservices helps me address the technical, business, and politics of delivering the technical components cities and organizations are needing. I’ve been pushing the business and politics of APIs since I’ve started, and trying to do things in as small pieces as I can since the beginning, but the microservices conversations I’ve been tuning into have helped me think beyond the tech, the size, and actually consider how I’m doing this to deliver services to humans–it is just an interesting twist that my primary project is all about delivering human service microservices. ;-)


All Federal Government Public API Projects Should Begin With A Github Repo

I’m gearing up for a conversation about the next edition of the FOIA API, and in preparation I’ve created an OpenAPI definition to help guide the conversation, which I drafted based upon the specifications published to Github by the FOIA API team at 18F. This was after spending some time reading through the FOIA recommendations for the project, which is also published to Github. Having the project information available on Github makes it easy for analysts like me to quickly get up to speed on what is going on, and provide valuable feedback to the team.

In my opinion, EVERY government API should start with a Github repo fleshing out the needs and requirements for the project, exactly like 18F is doing as part of their FOIA work. All the details of the project are there for not just the project team, but for external participants like myself. When it comes to engaging with folks like me, the API project team doesn’t have to do anything, except send me a link to the Github repository, and maybe point out some specifics, but if the README is complete, only the repo link is necessary. This opens up conversation around the project using Github Issues, which leaves a history of the discussions that are occurring throughout the project’s life cycle. Any newcomers can invest the time into digesting the documentation, discussion, and then begin to constructively add value to what is already happening.

I know this type of transparent, observable project performance is hard for many folks in government. Hell, it is hard for 18F, and people like myself who do it regularly, by default. It takes a certain fortitude to do things out in the open like this, but this is precisely why you should be doing it. The process injects sunlight into ALL government projects by default. You know your work will be scrutinized from day one, all the way to delivery, so you tend to have your act together. It forces you to open up to other folks’ ideas and feedback, which isn’t always pleasant, but when done right, can make or break an API project. I mean, your API is going to be public, why not kick it off in the same way? Doing public APIs is all about learning, growing, and establishing a sort of R&D lab around a specific set of resources and services. If this is baked into the DNA of your API project, the chances the API itself will find success are much greater.

I spend a lot of time interfacing with government agencies around APIs. I spend even more unpaid time on the phone with folks, and with the right groups I am more than happy to do this. However, I regularly encounter groups who are looking to do APIs, but don’t have any existing public APIs, or any Github presence. These are the individuals I encounter who have the worst skills at working well with others, coherently sharing documentation, and many of these projects never get off the ground due to politics. Doing public APIs helps us learn how to be more transparent, observable, and accountable for the projects we are delivering. It isn’t always easy work. It is a journey, and something we get better at over time. More government agencies should be working with, and learning from 18F when it comes to delivering projects using Github. Your agency will be better off for it, and the public will benefit from a more observable, and accountable government.


An OpenAPI Contract For The Freedom Of Information

Today’s stories are all based around my preparation for providing some feedback on the next edition of the FOIA.gov API. I have a call with the project team, and want to provide ongoing feedback, so I am loading the project up in my brain, and doing some writing on the topic. The first thing that I do when getting to know any API project, no matter where it is at in its lifecycle, is craft an OpenAPI, which will act as a central contract for discussions. Plus, there is no better way, short of integration, to get to know an API than crafting a complete (enough) OpenAPI definition.

After looking through the FOIA recommendations for the project, I took the draft FOIA API specification and crafted this OpenAPI definition:

The specification is just for a single path, that allows you to POST a FOIA request. I made sure I thought through the supporting schema that gets posted, fleshing it out using the definitions (JSON schema) portion of the OpenAPI. This helps me see all the moving parts, and connect the dots between the API request and response, complete with definitions for three HTTP status codes (200, 404, 500)–just the basics. Now I can see the technical details of a FOIA request in my head, preparing me for my discussion with the project owners.

After loading the technical details in my head, I always like to step back and think about the business, political, and ultimately human aspects of this. This is a Freedom of Information Act (FOIA) API, being used by U.S. citizens to request that information within the federal government be freed. That is pretty significant, and represents why I do API Evangelist. I enjoy helping ensure APIs like this exist, are usable, and become a reality. It is interesting to think of the importance of this OpenAPI contract, and the potential it will have to make information in government more accessible. Providing a potential blueprint that can be used by all federal agencies, establishing a common interface for how the public can engage with government when it comes to holding it more accountable.


When To Build Or Depend On An API Service Provider

I am at that all too familiar place with a project where I am having to decide whether I want to build what I need, or depend on an API service provider. As an engineer it is always easy to think you can just build what you need, but the more experience you have, the more you realize this isn’t always the smartest move. I’m at that point with API monitoring. I have a growing number of endpoints that I need to make sure are alive and active, but I also see an endless road map of detailed requests when it comes to granularity of what “alive and active” actually means.

At first I was just going to use my default cron job service to hit the base url and API paths defined in my OpenAPI for each project, checking for the expected HTTP status code. Then I thought I better start checking for a valid schema. Then I thought I better start checking for valid data. My API project is an open source solution, and I thought about each of my clients and implementations looking to me for the testing and monitoring of their implementations. Then I thought, no way!! I’m just going to use Runscope, and build in documentation and processes so that each of my clients and implementations can also use Runscope to dial in monitoring and testing of their API on their own terms.
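
For reference, the quick and dirty version I almost built is not much more than this kind of loop. This is a hypothetical PHP sketch of a cron-driven check that walks the paths in an OpenAPI definition and looks for the expected status code (the file name, base URL, and expected codes are all placeholders):

```php
<?php
// monitor.php - run from cron, checks that each GET path in an OpenAPI
// definition responds with the expected HTTP status code.

$openapi  = json_decode(file_get_contents(__DIR__ . '/openapi.json'), true);
$baseUrl  = 'https://api.example.com'; // placeholder base URL
$failures = [];

foreach ($openapi['paths'] as $path => $operations) {
    // Keeping it simple--only checking GET paths without URL parameters.
    if (!isset($operations['get']) || strpos($path, '{') !== false) {
        continue;
    }

    $ch = curl_init($baseUrl . $path);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    // Expecting a 200 on every public GET path--anything else is a failure.
    if ($status !== 200) {
        $failures[] = $path . ' returned ' . $status;
    }
}

if (!empty($failures)) {
    // Send the alert however you like--email, Slack, or just a log entry.
    error_log("API monitor failures:\n" . implode("\n", $failures));
}
```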

Since all of my API projects are OpenAPI driven, and Runscope is an OpenAPI driven API service provider (as ALL should be), I can use this as the seed for setting up testing and monitoring. Not all of my API implementations will be using 100% of the microservices I’m defining, or 100% of the API paths available for each of the microservices I’m defining. Each microservice has its core set of paths that deliver the service, but then I’m also bundling in database, server, DNS, logging and other microservice operational level APIs that not all my implementations will care about monitoring (sadly). So it is important for my clients and implementations to be able to easily select which APIs they care about monitoring, which is where OpenAPI will do much of the work. When it comes to exactly what API monitoring and testing means to them, I’ll rely on Runscope to do the heavy lifting.

If Runscope didn’t have the ability to import an OpenAPI to plant the seeds for API testing and monitoring I might have opted to just build out a basic solution myself. The manual process of setting up my API monitoring and testing for each client would quickly become more work than just building a solution–even if it was nowhere near as good as Runscope. However, we are increasingly living in an OpenAPI driven API lifecycle where service providers of all shapes and sizes allow for the importing and exporting of common API definition formats like OpenAPI. Helping API providers and architects like myself stick to what we do best, and not reinvent the wheel for each stop along the API lifecycle.

Disclosure: Runscope is an API Evangelist partner.


Github OAuth Applications As A Blueprint

I was creating a very light-weight API management solution for one of my projects the other day, and I wanted to give my API consumers a quick and dirty way to begin making calls against the API. Most of the API paths are publicly available, but there were a handful of POST, PUT, and DELETE paths I didn’t want to just have open to the public. I didn’t feel like this situation warranted a full blown API management solution like Tyk or 3Scale, but if I could just let people authenticate with their existing Github account, it would suffice.

This project has its own Github organization, with each of the APIs living as open source API repositories, so I just leveraged Github, and the ability to create Github OAuth applications to do what I needed. You can find OAuth applications under your Github organizational settings, and when you are creating it, all you really need is to give the application a name, description, and a home page and callback URL, then you are given a client id and secret you can use to authenticate individual users with their Github accounts. I didn’t even have to do the complete OAuth dance to get access to resources, or refresh tokens (maybe I will soon), I was just able to implement a single page PHP script to accomplish what I needed for this version:
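
The script itself is essentially just the standard Github OAuth web flow, something like this rough sketch (the client id, secret, and session handling are placeholders, and a real version should also validate the OAuth state parameter):

```php
<?php
// login.php - a rough sketch of a single page Github OAuth flow.
session_start();

$clientId     = 'YOUR_CLIENT_ID';     // placeholder
$clientSecret = 'YOUR_CLIENT_SECRET'; // placeholder

if (!isset($_GET['code'])) {
    // Step 1: send the user to Github to authenticate and authorize the app.
    header('Location: https://github.com/login/oauth/authorize?client_id=' . $clientId);
    exit;
}

// Step 2: Github redirects back with a code, which gets exchanged for a token.
$ch = curl_init('https://github.com/login/oauth/access_token');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'client_id'     => $clientId,
        'client_secret' => $clientSecret,
        'code'          => $_GET['code'],
    ]),
    CURLOPT_HTTPHEADER     => ['Accept: application/json'],
    CURLOPT_RETURNTRANSFER => true,
]);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);
$token = $result['access_token'];

// Step 3: use the token to pull the basic profile for the authenticated user.
$ch = curl_init('https://api.github.com/user');
curl_setopt_array($ch, [
    CURLOPT_HTTPHEADER     => [
        'Authorization: token ' . $token,
        'User-Agent: my-developer-portal', // Github requires a User-Agent header
    ],
    CURLOPT_RETURNTRANSFER => true,
]);
$user = json_decode(curl_exec($ch), true);
curl_close($ch);

// Hang on to the login and token for the developer portal session.
$_SESSION['github_login'] = $user['login'];
$_SESSION['github_token'] = $token;
```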

I am wiring this script up to a Github login icon on my developer portal, and each API consumer will be routed to Github to authenticate, and then the page will handle the callback where I capture the valid Github OAuth token, and the login, name, email, and other basic Github information about the user. Right now the API is open to anyone who authenticates, but eventually I will be evaluating the maturity of the Github account, and limiting access based upon a variety of criteria (number of repos, account creation date, etc.). For now, I’m just looking for a quick and dirty way to allow my API consumers to get access to resources without creating yet another account. Normally I would be using OAuth.io for this, but I’m trying to minimize dependencies on 3rd party services for this project, and Github OAuth applications plus this script worked well.

Once a user is authenticated they can use their Github user name as the appid, and the valid Github OAuth token as the appkey, which are both passed through as headers, leveraging encryption in transport. I’m not overly worried about security of my APIs, this is more about a first line of defense and identifying consumers, however I will be validating the token with particular API calls. I’m also considering publishing API consumption data to a Github repository created within each user’s account as part of API activity, publishing it as YAML, with a simple dashboard for viewing (authenticated with Github of course). I’ve had this model in my head for some time, and have written about it before, but I’m just now getting around to having a project to implement it in. I’m calling it my poor man’s API management, and something that can be done on a budget (FREE), but if my needs grow any further I will be using a more professional grade solution like 3Scale or Tyk.
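
The validation side of this is equally lightweight. Here is a hypothetical sketch of what checking the appid and appkey headers against the Github API might look like (the header names are just the ones I am using for this project, and you would want to cache the result rather than hit Github on every call):

```php
<?php
// A sketch of the first line of defense--validating the appid (Github login)
// and appkey (Github OAuth token) headers on each incoming API call.

function validateConsumer(string $appId, string $appKey): bool
{
    $ch = curl_init('https://api.github.com/user');
    curl_setopt_array($ch, [
        CURLOPT_HTTPHEADER     => [
            'Authorization: token ' . $appKey,
            'User-Agent: my-api', // Github requires a User-Agent header
        ],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = curl_exec($ch);
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status !== 200) {
        return false; // the token is invalid, expired, or revoked
    }

    // The token also has to belong to the login being passed as the appid.
    $user = json_decode($response, true);
    return isset($user['login']) && strcasecmp($user['login'], $appId) === 0;
}

// Pull the headers from the incoming request and check them.
$appId  = $_SERVER['HTTP_APPID']  ?? '';
$appKey = $_SERVER['HTTP_APPKEY'] ?? '';

if (!validateConsumer($appId, $appKey)) {
    http_response_code(401);
    exit;
}
```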


Azure Matching AWS When It Comes To Serverless Storytelling

I consume a huge amount of blog and Twitter feeds each week. I evaluate the stories published by major tech blogs, cloud providers, and individual API providers. In my work there is a significant amount of duplication in stories, mostly because of press release regurgitation, but one area I watch closely is the volume of stories coming out of major cloud computing providers around specific topics that are relevant to APIs. One of these topics I’m watching closely is the new area of serverless, and what type of stories each provider is putting out there.

Amazon has long held the front runner position because AWS was the first major cloud provider to do serverless with Lambda, coining the term, and dominating the conversation with their brand of API evangelism. However, in the last couple months I have to say that Microsoft is matching AWS when it comes to the storytelling coming out of Azure in the area of serverless and function as a service (FaaS). Amazon definitely has an organic lead in the conversation, but when it comes to the sheer volume, and regular drumbeat of serverless stories Microsoft is keeping pace. After watching several months of sustained storytelling, it looks like they could even pass up Amazon in the near future.

When you are down in the weeds you tend to not see how narratives spread across the space, and the power of this type of storytelling, but from my vantage point, it is how all the stories we tell at the ground level get seeded, and become reality. It isn’t something you can do overnight, and very few organizations have the resources, and staying power to make this type of storytelling a sustainable thing. I know that many startups and enterprise groups simply see this as content creation and syndication, but that is the quickest way to make your operations unsustainable. Nobody enjoys operating a content farm, and if nobody cares about the content when it is being made, then nobody will care about the content when it is syndicated and consumed–this is why I tell stories, and you should too.

Stories are how all of this works. It is stories that developers tell within their circles that influence what tools they will adopt. It is stories at the VC level that determine which industries, trends, and startups they’ll invest in. Think about the now infamous Jeff Bezos mandate, which has been elevated to mythical status, and contributed to much of the cloud adoption we have seen to date. It is this kind of storytelling that will determine each winner of the current and future battles between cloud giants. Whether it is serverless, devops, microservices, machine learning, artificial intelligence, internet of things, and any other scifi, API-driven topic we can come up with in the coming years. I have to admit, it is interesting to see Microsoft do so well in the area of storytelling after many years of sucking at it.


Keeping Things One Dimensional To Go From API To Spreadsheet In One Step

I have been working on the next version of my human services work, which provides a way for cities to make information about organizations, locations, and services available on the web. Part of the feedback from the community around what was missing from the last version was the number of API calls you needed to make to get a complete representation of a resource, and its sub-resources, as each API response was one dimensional. An example would be that you could get a list of locations, but to get at the list of services you had to make a separate API call. This wasn’t a lapse in API design, it was a result of the schema being born out of a CSV format, and me working to stay true to the original design, and usage of the schema.

In the latest version, I did release a handful of paths that provide a complete representation of each resource and its sub-resources. However, I have maintained the original one dimensional representation of each resource and sub-resources, allowing me to offer an XML, JSON, as well as CSV representation for each API call. This allows API consumers to pull CSV lists of organizations, locations, services, and their sub-resources like address and phone lists. While not something that would be useful in all API implementations, I feel like the audience for municipal level human services data will benefit significantly from being able to go from API to spreadsheet in a single step. All the GET paths for organizations, locations, and services are publicly available by default, not requiring authentication, making CSV data available via a single URL–something anyone can make happen.
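
Keeping each response one dimensional is what makes this possible, because a flat list of records can be written out as CSV just as easily as JSON, without any flattening logic. Here is a rough sketch of the idea (the fields and records are just placeholders, not the actual HSDA schema):

```php
<?php
// locations.php - serving the same one dimensional list as JSON or CSV,
// depending on a simple format parameter.

// In the real implementation this comes from the data store--these rows
// are just example records.
$locations = [
    ['id' => 1, 'name' => 'Downtown Shelter', 'city' => 'Portland'],
    ['id' => 2, 'name' => 'Eastside Food Bank', 'city' => 'Portland'],
];

$format = $_GET['format'] ?? 'json';

if ($format === 'csv') {
    // Because every record is flat, the CSV is just a header row plus one
    // line per record--no nested structures to unwind.
    header('Content-Type: text/csv');
    $out = fopen('php://output', 'w');
    fputcsv($out, array_keys($locations[0]));
    foreach ($locations as $row) {
        fputcsv($out, $row);
    }
    fclose($out);
} else {
    header('Content-Type: application/json');
    echo json_encode($locations);
}
```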

While weighing API design decisions as part of my Human Services Data API (HSDA) work I am having to consider not just the technical side of how I should be doing this. I am also deeply considering how the API will be put to use, and who will be doing that. While I am thinking about the heavy system to system integration needs of human service providers, as well as the web, mobile, and other applications, I am also thinking about the individual user who might just need a list of the names of organizations, or the addresses of services in a simple CSV format, so that they can work with the data in their most familiar format–the spreadsheet. I am just focusing on the API side of things at the moment, but once I’m done with the latest version I am going to think about some simple linking, and embeddable tooling that allows users to put CSV data from the API to work in a single click using Google Sheets, and Microsoft Excel.


Just Waiting The GraphQL Assault Out

I was reading a story on GraphQL this weekend which I won’t be linking to or citing because that is what they want, and they do not deserve the attention, that was just (yet) another hating on REST post. As I’ve mentioned before, GraphQL’s primary strength seems to be that it has endless waves of bros who love to write blog posts hating on REST, and web APIs. This particular post shows its absurdity by stating that HTTP is just a bad idea, wait…uh what? Yeah, you know that thing we use for the entire web, apparently it’s just not a good idea when it comes to exchanging data. Ok, buddy.

When it comes to GraphQL, I’m still watching, learning, and will continue evaluating it as a tool in my API toolbox, but when it comes to the argument of GraphQL vs. Web APIs I will just be waiting out the current assault as I did with all the other haters. The linked data haters ran out of steam. The hypermedia haters ran out of steam. The GraphQL haters will also run out of steam. All of these technologies are viable tools in our API toolbox, but NONE of them are THE solution. These assaults on “what came before” are just a very tired tactic in the toolbox of startups–you hire young men, give them some cash (which doesn’t last for long), get them all wound up, and let them loose talking trash on the space, selling your warez.

GraphQL has many uses. It is not a replacement for web APIs. It is just one tool in our toolbox. If you are following the advice of any of these web API haters you will wake up in a couple of years with a significant amount of technical debt, and probably also be very busy chasing the next wave of technology being pushed by vendors. My advice is that all API providers learn about the web, gain several years of experience developing web APIs, learn about linked data, hypermedia, GraphQL, and even gRPC if you have some high performance, high volume needs. Don’t spend much time listening to the haters, as they really don’t deserve your attention. Eventually they will go away, find another job, and technological kool-aid to drink.

In my opinion, there is (almost) always a grain of usefulness with each wave of technology that comes along. The trick is cutting through the bullshit, tuning out the haters, and understanding what is real and what is not real when it comes to the vendor noise. You should not be adopting every trend that comes along, but you should be tuning into the conversation and learning. After you do this long enough you will begin to see the patterns and tricks used by folks trying to push their warez. Hating on whatever came before is just one of these tricks. This is why startups hire young, energetic, and usually male voices to lead this charge, as they have no sense of history, and truly believe what they are pushing. Your job as a technologist is to develop the experience necessary to know what is real, and what is not, and keep a cool head as the volume gets turned up on each technological assault.


A New Minimum Viable Documentation (MVD) Jekyll Template For APIs

I am a big fan of Jekyll, the static content management system (CMS). All of API Evangelist runs as hundreds of little Jekyll driven Github repositories, in a sort of microservices concert, allowing me to orchestrate my research, data, and the stories I tell across all of my projects. I recommend that API providers launch their API portals using Jekyll, whether you choose to run on Github, or anywhere else using the light-weight portable solution. I have several Jekyll templates I use to fork and turn into new API portals, providing me with a robust toolbox for making APIs more usable.

My friend and collaborator James Higginbotham (@launchany) has launched a new minimum viable documentation (MVD) template for APIs, providing API providers with everything they need out of the gate when it comes to a presence for their API. The MVD solution provides you with a place for your getting started, workflows, code samples, and reference material, with OpenAPI as the heartbeat–providing you with everything you need when it comes to API documentation. It is all an open source package available on Github, allowing any API provider to fork and quickly change the content and look and feel to match their needs. Which in my opinion, is the way ALL API documentation solutions should be. None of us should be re-inventing the wheel when it comes to our API portals, there are too many good examples out there to follow.

I know that Jekyll is intimidating for many folks. I’m currently dealing with this on several fronts, but trust me when I say that Jekyll will become one of the most important tools in your API toolbox. It takes a bit to learn the structure of Jekyll, and get over some of the quirks of learning to program using Liquid, but once you do, it will open up a whole new world for you. It is much more than just a static content management system (CMS). For me, its most significant strength has become its role as a data management system (DMS), with OpenAPI at the heart. I use Jekyll (and Github) for managing all my OpenAPI definitions, JSON and YAML files, and increasingly publishing my data sets in this way instead of relying on server-side technology. If you are looking for a new solution when it comes to your API portal, I recommend taking a look at what James is up to.


API Evangelist Is A Performance

I think I freaked a couple of folks out last week, so I wanted to take a moment and remind folks that API Evangelist is a performance. Sure, it is rooted in my personality, and I keep it as true to my view of the world of APIs as I can, but it is just a performance I do daily. When I sit down at the keyboard and research the world of APIs I am truly (mostly) interested in the technology, but when I craft the words you read here on the blog I am performing a dance that is meant to be interesting to the technology community in a way that draws them in, but then also gives them a swift kick in the pants when it comes to ethics of the technology, business, and politics of doing all of this.

Sure, my personality shines through all of this, and I’m being genuine when I talk about my own battles with mental illness, and other things, but please remember API Evangelist is a performance. It is a performance that is meant to counteract the regular stream of fake news that comes out of the Silicon Valley funded technology machine. API Evangelist is a Contrabulist production, pushing back on the often exploitative nature of APIs. Not that APIs are exploitative, it is the people who are doing APIs that are exploitative. Back in 2010, I saw that APIs were providing a peek behind the increasingly black box nature of web technology that was invading our lives through our mobile devices, and jumped at the opportunity to jam my foot in the door, even as the VC power brokers continue to look for ways to close this door.

In 2011, I found my voice as the API Evangelist explaining APIs to the normals, making these often abstract things more accessible. Along the way, I also developed the tone of this voice pushing back on the politics of doing APIs, calling out the illnesses I see in the space. These are the two areas I hear the most praise from my readers, something that has significantly shaped my performance over the last seven years. I have a pantheon of API characters in my head when I tell stories on API Evangelist, speaking to specific groups, while showcasing as many of the best practices from the space as I possibly can. I’m looking to shine a light on the good, first and foremost, but I’m also never going to shy away from showcasing the illnesses in the space as I have nothing to lose. I’m never looking to get VC funding, or do any technology startup, so throwing myself against the machine doesn’t ever worry me–I will keep doing it until I grow weary of this production.

I just wanted to take the time to help folks understand that all of this is a show. Sure, my rant last week was rooted in my own dark personal thoughts, but it was meant to be a reflection of the space (your darkness). I’m touched at the folks who reached out to me with concern, but I’m fine. If I am ranting on the Internet you can always be sure I’m fine. It is when I go silent for any sustained amount of time that you should have concern. When I’m in my dark place I have NO interest in performing as API Evangelist, and increasingly I have little interest in Internet technology when I feel this way. If you are reading this, thank you for tuning into my little production. I enjoy doing it because it keeps me learning each day. It keeps me writing and telling stories each day. Hopefully along the way some of you also get some value from the stories I tell, whether they are positive, or a little dark like they were last week.


Acknowledging The Good In The API Space

With such a dark week of blog posts last week I wanted to make sure and start this week off with a brighter post, talking about the good I see in the API space. It can be harder to find than some of the darker things I talked about, but after seven years doing this I see enough good things going on in the API community that I keep doing this performance I call API Evangelist. It can be easy to rant and rave about the bad, but I find it takes a lot of work to identify the good things going on in the cracks, as they rarely get the attention of the mainstream tech community propaganda engine.

First, there are some really smart folks who truly care about human beings and are dedicated to the world of APIs. I do not know of any other layer of technology that sustains a community of people that is not just about startups and the mindless moving forward of technology in every industry. I can use all of my fingers counting the folks who truly care about doing APIs, and making a meaningful impact with them. I have had the pleasure of working with these folks, and bringing many of them together as part of my APIStrat conference, and regularly enjoy learning from them, reading their stories, and engaging with them on a regular basis as part of this API journey.

Second, not all APIs are startup focused. I work with many API providers who are doing very interesting, non-startup, non-VC investment, and most importantly, non-exploitative API things. I regularly work with passionate folks doing APIs at all levels of government, making an impact on the environment, pushing for transparency in our legal system, helping provide human services, and truly making change in a meaningful way using web APIs. APIs are neither good, nor bad, nor are they neutral, they are simply a reflection of their creators and operators. It keeps me going, to learn about the many ways in which APIs are being used for good, and moving beyond their startup origins, doing meaningful things that make the lives of human beings better.

Third, APIs are just the next evolution of the web. All the bad things I talked about last week can just as easily be applied to the web. APIs do not have to be the next vendor solution, or something you have to buy. APIs are just about moving our bits and bytes around online, using low cost web technology. They often become the scapegoat for exploitation, unreliability, security, and privacy concerns, but as I said before they are just a reflection of their creator and operators. This is one of the main reasons I’m an evangelist for APIs, not because they are always a good idea, but because when they are done right, they can bring some important observability into some existing technological situations, helping us understand exactly what is going on behind the digital and algorithmic curtain.

It was a learning experience to spend a week ranting openly about the space last week. The way people responded, or didn’t respond, was very telling about the API community. It feels like something I will be doing regularly (maybe not at that scale), because I felt like it pushed back much of the illness in the space that can become very suffocating to an independent operator like me. The venture backed technology machine doesn’t always realize (or maybe it does) what an invasive and assaulting force it is. They think they are just asking for some free time, or for a guest post opportunity, and don’t often see how damaging they are, because everyone is doing it. I feel like I was able to carve out a defensive zone around what I do, even if it was just for a bit. Thanks again for all your support folks.


The Why (And End) Of The Unhinged (Decoupled) API Evangelist Rant Week

I know many of you are thinking Kin Lane has lost his marbles (again). In reality, I lost them last week for a couple days because someone really pissed me off, then after a couple more folks pissing in my Cheerios, I checked out last week (this happens from time to time). This week I am actually feeling quite fine after moving to NYC from LA, but the posts for the last couple of days are from my notebook entries made while in a dark place last week. Normally, these posts would never see the light of day, but I’m feeling like they probably should this week. It’s no secret, I’m fairly sure I’d be classified in the bi-polar realm (never been diagnosed), something I’ve thoroughly enjoyed since I was a teen, but for the last 20 years is something I’ve had 96% control of. I get angry, fly off the handle sometimes, and have bouts of depression, and life feels like a roller coaster, but for the most part I know the signals, know when to check out, and I am actually able to leverage it to my favor–crafting the person that you all know as API Evangelist. It is the fuel for my research, and how I write these words.

Shocking? Run you off? Ok. I’ll accept that. I just wanted to show my readers the contrast between the night and the day, and showcase how hard I work to be really, really nice, and highlight the best of the API space on a regular basis. I’m hoping the honesty helps you see what is really going on, with the contrast showing you how much I work to sift through the world of APIs and find useful nuggets of information you might find valuable in your API journey. I really do enjoy what I do as the API Evangelist (most of the time), and I take pleasure in helping people understand the good and the bad of it, in as nice a way as I can. What grinds my gears is the folks who feel they need to jump on me, question my motivations, assume there is a hidden agenda, or just inflict their messed up version of the world on this magical world I’ve managed to carve out for myself (for all of us). I may seem pushy and intense this week, but I’m guessing y’all are in denial about how pushy and intense you are about the things you are passionate about in your world.

I also want to take a moment to highlight the mental illness that exists in the tech sector. It is everywhere if you know what to look for. How do you think I’m able to wrap my head around everything going on with APIs, and why I am an autodidact, and have an affinity for computers from an early age? Most of the white men programmers y’all are putting on a pedestal are mentally ill, they are either just really good at hiding it, or are so privileged that nobody has diagnosed or called them out for it. It is why they are so good at the computerz and Internetz. It is why many are taking pharmaceuticals and microdosing. Trust me, I’ve been there. Done that. Give them 10, 20 years, a divorce, more startup failures, and health problems, you’ll see more of them lose their shit. It is just a matter of time. The real danger there is that most of them don’t know they are ill, or are in denial. I got hints when I was 16, and saw the full spectrum from 20-25, then by the time my daughter was born at 28 I had already figured out most of the telltale signs I needed to keep myself grounded–most of the time. There are still exceptions, and moments when things sneak up on me.

As you read my posts this week, I’m sure you were like damn. WTF is going on? He’s paranoid, wacky, or unhinged. Read them all again, I’m only speaking truth. Is the racism and sexism that is ubiquitous in the tech industry any more crazy than me? Is the endless quest for money at all costs in the startup world any whackier than what I’ve written? Following every trend. Telling wild tales of what computers and technology are capable of. Worshipping the tech gods like Elon Musk, Peter Thiel, Marc Andreessen, Mark Zuckerberg, Bill Gates, Jeff Bezos and others any more sane than what I do as the API Evangelist? Is the exploitation of people’s privacy and security any more sane than what I’m putting forth about the API space? Is ignoring what advertising is doing to the web all about logical straightforward thinking? What makes tech CEOs, and entrepreneurs so much more valuable than teachers, nurses, and other folks? Y’all seem way crazier than I do. I’m sorry. It is just the crazy you know, and is being sensationalized–normalized.

I’m just living, doing what I love, studying the world of APIs, and trying to share my knowledge through my writing. I’m not exploiting and taking advantage of you to get rich. I’m just trying to make a living, and make sense of this Internet age we find ourselves in. Which I have to say, your crazy seems to be making the world a pretty crazy place lately, with Trump and all. Just saying. Anyways, I’m going to take things back down a notch. I’m going to stay off the phone with some of you crazy folks, and stay out of your chaotic companies and organizations, and settle back in with my nice NPR like API Evangelist tone. So please don’t come pissing me off again, make sure and pay your invoices, and don’t pick fights with me, and hopefully we won’t have to go here again. I’ll keep things way more sane, less ranty, with just the occasional amping up of things to make some points get across properly along the way.

I also want to thank all of you who reached out privately to make sure I was ok. This means the world to me. You guys are my heroes, and I encourage more of you to do this with other folks in the space. Together, maybe we can all take the crazy down a notch or two and begin to get things back to normal. I have a pretty good handle on my crazy, but I know there are many other folks out there that need your help right now. We need more discussion, education, and support when it comes to mental illness in the space. I personally have talked two people down off the ledge privately in my time as the API Evangelist, and I’m sure there are plenty of others I haven’t had the chance to help. So please talk to each other, and be understanding. You never know when someone might be slipping into the dark.

As I told Tony Tam (my hero) earlier this week–thanks for putting up with me, I really appreciate it.


The Fact That You Do Not Know Who I Am Shows You Live In A Silo

Don’t you know who I am? I am the API Evangelist. Ok, I know this post is dripping with ego. However, it is the last post in my week of API rants, and I’m just pumped from writing all of these. These types of posts are so easy to write because I don’t have to do any research, or real work, I just write, putting my mad skills at whitesplaining and mansplaining to work–tapping into my privilege. So I’m going to end the week with a bang, and fully channel the ego that has developed along with the persona that is API Evangelist.

However, there is a touch of truth to this. If you are operating an API today, and you do not know who I am, I’m just going to put it out there–you live in a silo. I have published around 3,000 blog posts since 2010 on APIs. I’m publishing 3-5 posts a day, and have consistently done so for seven years. There are definitely some major gaps in that, but my SEO placement is pretty damn good. You type API or APIs, and I’m in the top 30 usually, with the occasional popping up on the home page. The number one thing I get from folks who message me is that they can’t search for anything API without coming across one of my posts, so they want to talk to me. So why is it that you do not know who I am? I have some ideas on that.

It is because you do not read much outside your silo. When you do, you don’t give any credit to authorship. So when you have read any of my posts you didn’t associate them with a person named Kin Lane. You operate within your silo 98% of the time, and the 2% you get out, you really don’t read much, or learn from others. I on the other hand spend 98% of my time studying what others are doing, and 2% hiding away. My goal is to share this with you. I’d say 75% of my work is just referencing and building on the work of others, only 25% is of my own creation. I’m putting all of this out there for you, and you don’t even know it exists. What does that tell you about your information diet? It tells me that you probably aren’t getting enough exercise and nutrients as part of your regular daily intake, which will make your API operations less healthy and strong–reading is good for healthy bones girls and boys!

Kin Lane doesn’t have this much ego, but API Evangelist does. The fact that you don’t know who I am shows you aren’t spending enough time studying the API space before launching an API. I’m hoping that in your API journey you learn more about the importance of coming out of your bubble, and learning from your community, and the wider API community. It is why we do APIs, and why APIs work (when they do). I wrote this title to be provocative and part of my week of rants, but honestly it is true. If you haven’t come across one of my API posts, and stumbled on my blog at some point you should probably think about why this is. The most successful API providers and evangelists I know are tuned into their communities, industries, and the wider API space, and are familiar with my work–even if they don’t all like me. ;-)

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will back to normal at API Evangelist soon.


You Think You Are So Smart You Did Not Conduct Any Due Diligence Before Launching Your API

You know your API stuff. You know it so well, you don’t even need to look at other APIs. There is no reason to Google and look for other APIs because your stuff is that good. Your idea came to you in a flash, and you worked for an entire weekend to bring it to life. You’re a genius. Everyone has told you so. This stuff just comes to you, and as long as you are left alone, the magic just happens. If people just stay out of your way, do not burden you with outside influences, and unnecessary concerns, you will keep rolling out amazing APIs that everyone will love and need.

You consume books, and digest endless blog posts and white papers recommended by your trusted network of friends. You don’t ever notice authorship. They don’t matter. It is all about feeding your mind, and you will decide whether it is worthy or not. You don’t save bookmarks for citations or attributions, once inside your brain ALL ideas become yours. If someone’s idea is dumb, you make sure and let them know, making sure they are aware of how they are substandard and beneath you. If your friends let you know your ideas are amazing, you let them know they are great too, and will be rewarded by being in your presence, and part of your team.

That one chick that was hired last year made the mistake of blurting out in a meeting, “isn’t that the same thing as that startup that launched last month?”. She isn’t on the core team anymore. You did look at what she was talking about, and their API design is inferior, and the look of their site just turned you off–no need to continue. This is why you don’t conduct due diligence for your API projects. Why spend time looking at so many bad ideas? It takes away from your time to make the magic happen. Why do people keep wasting your time with this stuff? It is clear your ideas are superior, just look at your numbers. Clearly your APIs are well received, and all the feedback from partners has been great. I mean 60% of top startups in the sector are using your service, who cares what else is out there.

All of this is true in your bubble, but it won’t always pencil out outside. What you do not know or see will always eventually begin to diminish your work. The aspects of the markets you don’t see will never see the value of your work. You will never truly grow and evolve if you do not acknowledge the ideas that have influenced you. You weren’t born with all this knowledge, you are learning, borrowing, stealing, and building on the ideas of others. You should always learn from what is already out there, even if it is the worst of the worst. Through studying your competition your ideas will become hardened and truly competitive, and they will have strength when operating outside your bubble, beyond your control. You never know, you might actually come across a gem in the pile of bad ideas, something or someone you never imagined. Something that might shift the paradigm for you.

In your youth, this type of isolation, and bubble creation might work, but eventually you will miss certain signals and trends. There will be market forces that you and your network will miss, and once you begin to fall behind, it will be difficult to catch up. You will be building genius APIs that nobody wants or understands because your work has lost its relevance and context. You may have your finger on the pulse now, but you are not being honest with yourself about how you found that pulse, and are too confident that you will always have it. A significant part of your success has been your privilege, and that you are playing in this game while things are still new. With each wave of growth, and entry of new players, you will become irrelevant, and eventually old and in the way. The new geniuses will make the same mistakes you did, and shut you out, just like you have done to so many. It is how all of this works as time moves on.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will back to normal at API Evangelist soon.


You Have No API Imagination, Creativity, Or Sensibility

I know you are used to people telling you that you are creative, and your ideas are great, but I’m here to tell you they aren’t. You lack any imagination, creativity, or sensibility when it comes to your APIs. Some of it is because you are personally lacking in these areas, part of it is because you have no diversity on your team, but it is mostly because you all are just doing this to make money. As creative as you think doing a startup is, they are really just about making money for your bosses, and investors–not a lot of imagination, creativity, or sensibility is required.

You could invest the time to come up with good ideas for applications and stories on your blog, but you really don’t want to do the work, or even stand out in the group. It is much easier to just phone it in, follow the group, and let your bosses and the existing industry trends dictate what you do each day. If the business sector you operate within is doing it, you are doing it. If you see something funny online or at a conference you will do it. You have a handful of blogs you read each weekend, that you will rewrite the best posts from and publish on your own blog. Your Twitter account is just retweeting what you find, and you don’t even push out your own stories, because you have already tweeted out the story you copied in the first place.

Don’t beat yourself up about this, you come by it honestly. Your privilege affords you never really getting out of your comfort zone, and the people around you make you feel good enough. Everyone on your team is the same, and your bosses really don’t care, as long as you are just creating content, and sending out all the required signals. Just make it look like you are always busy, and keep all the channels active. You don’t actually have to support your API consumers, just make sure you are having conversations with them on forums, and the channels your boss can keep an eye on. Doing too much will make you a target. Keep an eye on your coworkers and never do any more than they are, establishing a kind of solidarity of mediocrity. This isn’t rocket surgery, it’s API theater.

You know deep down that you have some creativity in there, but it is something that has never been encouraged. This is the damaging effect of your privileged world. Your parents, teachers, bosses, and friends never push you, and neither do you. You don’t have to. If this story pisses you off, you really have nobody to blame. You’ve never had to work hard. You have never pushed yourself to do any of the hard work required to fail, on the path to becoming creative, developing your sensibility, and honing your imagination. How are you ever going to know what you are capable of if you do not put yourself out there? Creativity isn’t created in a silo. People aren’t just born with sensibility. And APIs aren’t lacking in imagination by default. There are many APIs operating out there that possess all these characteristics, and are leading the conversation–why aren’t you one of them?

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will back to normal at API Evangelist soon.