20 Oct 2016
This is a topic that has come up in several discussions lately, and one I struggle with on a regular basis. What is more important: helping new users, both developers and non-developers, become more aware of APIs, or helping push forward the concept of APIs amongst those who are already tuned in? While I don't think there is a perfect answer, I think it is an important question to explore and discuss.
I started API Evangelist on the premise of helping educate new users about the importance of APIs, and it is something I regularly strive to do, but if you read my blog I definitely do not always keep things simple and accessible to a wide audience. I keep coming back to this founding premise of API Evangelist and work hard to write about 101 and 201 level concepts, despite definitely being a card-carrying member of the core API community.
There are a number of leading-edge topics in the space right now, from hypermedia to GraphQL, as well as leading-edge implementations like bot and voice enablement, but shouldn't we be careful not to focus too heavily on these leading issues, and make sure we all invest in new user education as well? I'm feeling like we should keep a core academic group focusing on the big issues, but also enlist armies of folks to help translate down the food chain.
I do not feel there is a right answer to this question. If you ask me on separate days, I will answer differently. Sometimes the topics that push forward the API conversation seem most important, but then sometimes educating the developer masses, and the average business users, about APIs seems like what will truly push forward the conversation. Even with my flip-flopping, which is why I will never run for president, I think this is something we should ask ourselves regularly, and try to understand where we stand as time marches forward.
20 Oct 2016
While it can be easy to bash on API providers for being tight with their API resources, it can be very difficult to be an API provider operating in today's online environment. Some developers are just badly behaved and hold some pretty unrealistic expectations when it comes to opening up access to valuable content, data, and algorithms.
This is why I work to be supportive of providers locking down their API resources, as long as they do it in a constructive way. One positive approach I came across recently was from the Oxford Dictionaries API, who ask a lot of questions as part of their API signup form, but I felt it was a positive experience, and it provided no barriers to me gaining access to their valuable API resources.
They asked me for details about what type of application I'm developing, as well as the platform and language I am employing--allowing me to provide a summary of what it was I would be doing. They provided me with a summary of the terms of service, the ability to opt into a platform email list, and they also let me know that they will not be selling my personal details to third parties.
I have never been a fan of lengthy API signup forms, as I tend to feel like I'm just a lead in their sales funnel, but this left me feeling the opposite. The Oxford Dictionaries API signup form felt more like it was about making sure everyone was informed, in the service of protecting valuable API assets. I'm a supporter of API providers requesting more details about what API developers are up to, as long as they do it in the right way. If you don't make me feel like I'm a criminal, or just one of many users in your sales funnel from which you are looking to extract value, I'm more than happy to share more information about myself as I consider a deeper business relationship with you.
20 Oct 2016
I enjoy learning from the OpenAPI Specs of the API providers I track on. Just having an OpenAPI Spec present tells me a lot about an API provider in my book, but the level of detail some providers put into their API definitions adds another level to this for me. While reviewing the OpenAPI Spec for the Oxford Dictionaries API, I noticed their robust usage of the OpenAPI Spec parameters definitions collection, which provides an interesting overview of the surface area of the API, augmenting the benefits brought to the table by the definitions collection of an API's underlying data schema.
When you are defining each path for an API, you can either define the parameters within each path's own parameters collection, or you can add them to the overall parameters definitions object, allowing them to be reused across all paths. This object provided me with a centralized place to learn about the parameters used when making calls to the Oxford Dictionaries API, and I'm assuming it helped them be more organized in how they defined the surface area of their APIs.
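As an illustration of the pattern, here is a minimal sketch of a Swagger/OpenAPI 2.0 fragment that declares a parameter once in the top-level parameters object and references it from a path. The API title, paths, and parameter names here are made up for illustration, not taken from the actual Oxford Dictionaries definition:

```yaml
swagger: "2.0"
info:
  title: Example Dictionary API   # hypothetical, for illustration only
  version: "1.0"
# Reusable parameter definitions, declared once at the top level.
parameters:
  source_lang:
    name: source_lang
    in: path
    required: true
    type: string
    description: Language code for the source language.
paths:
  /entries/{source_lang}/{word_id}:
    get:
      parameters:
        # Reused from the shared parameters object above.
        - $ref: '#/parameters/source_lang'
        # Defined inline, local to this path.
        - name: word_id
          in: path
          required: true
          type: string
      responses:
        200:
          description: A list of matching dictionary entries.
```

Any path that needs source_lang can reference the same definition, so a change to the parameter only has to be made in one place.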
I can see how the process of defining each path's parameters, and centrally organizing them for reuse, can be a healthy thing. The more you lift yourself out of the individual definition of each path and consider the parameter patterns that have been used for other paths, the better your view of the landscape will be. I am optimistic about this OpenAPI Spec object, and curious about how it can be evolved as part of other conversations around GraphQL--something I'll work to understand better in the future.
20 Oct 2016
I keep an eye on things that are trending daily and weekly on Github because it is a great way to discover new companies and individuals doing interesting things with APIs. While looking at this earlier this week I came across the open guide to Amazon Web Services, a pretty robust, and well organized getting started guide to everything AWS.
Here is their description of this guide to the leading cloud computing platform:
A lot of information on AWS is already written. Most people learn AWS by reading a blog or a “getting started guide” and referring to the standard AWS references. Nonetheless, trustworthy and practical information and recommendations aren’t easy to come by. AWS’s own documentation is a great but sprawling resource few have time to read fully, and it doesn’t include anything but official facts, so omits experiences of engineers. The information in blogs or Stack Overflow is also not consistently up to date.
This guide is by and for engineers who use AWS. It aims to be a useful, living reference that consolidates links, tips, gotchas, and best practices. It arose from discussion and editing over beers by several engineers who have used AWS extensively.
I find it interesting when API providers invest in authoritative solutions like this, acknowledging the scattered and often out-of-date nature of blogs, QA sites, forums, and the wealth of other self-service resources available for APIs. Amazon is getting seriously organized with their approach to providing resources for developers--they have been doing this a while, and know where the pain points are.
Amazon's organized approach, the breaking down by service, and the usage of Github are all interesting things I think are worth noting as part of my research. AWS is a tough API pioneer to showcase because they have way more resources than the average API provider, but as one of the early leaders in the API space they possess some serious wisdom and practices that are worth emulating. I'll keep going through their open guide, and see what other patterns I can extract and showcase for my readers to consider.
20 Oct 2016
I am going through the Oxford Dictionaries API, learning about this valuable resource. Their onboarding process for registration, and learning about what the API does using interactive documentation, is very smooth. One of the things that really cuts the rough edges off learning about each API is the enum values that are available for each path.
The parameters required for making calls to many of the paths, like language and country, have their enum values populated as part of the API definition. I look at numerous OpenAPI Specs in the course of my work, and they rarely have values present for enum--values that provide critical defaults for developers to use, eliminating some often serious frustration.
Not having the right values available when making even the simplest of API calls can be a significant point of friction when trying to get up and running using an API. While it may seem like a small thing, the work the Oxford Dictionaries API team has put into this level of detail for their API definitions will go a long way towards making their API resources more accessible and usable.
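To make the idea concrete, here is a hedged sketch of what a parameter with populated enum values looks like in a Swagger/OpenAPI 2.0 definition. The parameter name and the values are illustrative, not copied from the Oxford Dictionaries spec:

```yaml
parameters:
  - name: source_lang
    in: path
    required: true
    type: string
    # Known-good values, so consumers don't have to guess.
    enum:
      - en
      - es
      - nl
```

Interactive documentation generated from a definition like this can render the enum as a dropdown, so a developer's very first API call succeeds instead of failing on an invalid value.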
19 Oct 2016
I have a number of folks at companies, organizations, institutions, and government agencies come to me saying that they want to do APIs, and they need some help. In many of these discussions, the first task centers around addressing the motivations behind this declaration and helping folks realize they are already doing APIs, they are just not doing it in any sort of coherent and organized fashion.
Any business, organization, institution, or government agency is moving machine-readable data between internal systems, and with external systems, in 2016, for use in a variety of applications. This is just done in a variety of often proprietary, ad-hoc, data dump, and other approaches that are often dictated without any coherent vision. API does not always mean 100% REST; it is about establishing interfaces between systems for programmatic usage across a variety of applications--the trick is to do all of this in a simplified, standardized, consistent, and low-cost way modeled on what has been working for the web.
You are already doing APIs. You need to identify all the different ways you are already exporting, importing, migrating, syncing, and putting data to work across operations, and begin to establish a coherent roadmap and approach to communicating about all of this. Then get to work identifying the opportunities for standardizing how everything is defined, how access is granted, and a baseline for communication around these critical integrations on an ongoing basis. It's not about whether we should do APIs or not--you are already there--it is about how you start doing it in a more organized and coherent way.
19 Oct 2016
I have been playing around with different ways of using Google Spreadsheets to drive YAML and JSON data for Jekyll data projects hosted as Github repositories. It is an approach I started playing around with in Washington DC, while I was helping data stewards publish government services as JSON-LD. It is something I've been playing around with lately, using it to drive D3.js visualizations and even a comic book.
There are a couple of things going on here. First, you are managing machine-readable data using Google Spreadsheets, and publishing this data in two separate machine-readable formats: JSON and YAML. When these formats are combined with the data capabilities of a Jekyll website hosted on Github Pages, it opens up some pretty interesting possibilities for using data to fuel some pretty fun things. Plus...no backend needed.
To push this approach forward, I wanted to apply it to managing OpenAPI Specs that can be used across the API life cycle. I pulled together a spreadsheet template for managing the details I need for an OpenAPI Spec. Then I created a Github repository, forked my previous spreadsheet-to-YAML project, and modified it to pull data from a couple of worksheets in the Google Sheet and publish it as both JSON and YAML OpenAPI Specs.
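The general shape of the approach can be sketched in a few lines of Python. Everything here is an assumption on my part--the worksheet columns (path, method, summary), the function name, and the Google CSV export URL pattern mentioned in the comment are illustrative, not the actual project's layout:

```python
import csv
import io
import json

def rows_to_openapi(csv_text):
    """Build a minimal OpenAPI (Swagger 2.0) dict from worksheet rows.

    Expects columns: path, method, summary -- a made-up layout,
    not the one used in the actual project.
    """
    spec = {
        "swagger": "2.0",
        "info": {"title": "Sheet-Driven API", "version": "1.0"},
        "paths": {},
    }
    for row in csv.DictReader(io.StringIO(csv_text)):
        path = spec["paths"].setdefault(row["path"], {})
        path[row["method"].lower()] = {
            "summary": row["summary"],
            "responses": {"200": {"description": "OK"}},
        }
    return spec

# Example worksheet contents, as they might arrive from a published
# sheet's CSV export (something like
# https://docs.google.com/spreadsheets/d/<ID>/export?format=csv):
sheet = "path,method,summary\n/words,GET,List words\n/words,POST,Add a word"
print(json.dumps(rows_to_openapi(sheet), indent=2))
```

The same dict could be dumped as YAML with a library like PyYAML, or committed to the Jekyll repository's _data folder so the site can render it--no backend needed.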
My OpenAPI Spec Google Sheet to YAML, for use in a Jekyll project hosted on Github, is just a prototype. The results don't always validate, and I'm playing with different ways to represent and manage the data in the Google Sheet. It is a fun start though! I am going to keep working on it, and will probably start a similar project for managing an APIs.json index using Google Sheets. When done right, it might provide another way for non-developers to participate in the API design process, and apply OpenAPI Specs to other stops along the API life cycle, like API documentation, SDK generation, or testing and monitoring.
19 Oct 2016
It is tough to keep a sustained fire burning in the world of technology, at the individual, organizational, and community level. I have been doing API Evangelist full time for six years, and it is no secret that I have had several moments where I've experienced a crisis of faith, and I do not doubt that there will be many more of these in my future--there is no perfect solution. It takes hard work, creativity, and a myriad of other considerations to keep going, stay energized, and keep other folks doing the same.
I have spent a great deal of time this fall thinking about all of the factors that influence me, and contribute to the fire burning, or acting as a flame retardant to me and the API space. When exploring these contributing factors, it is only logical we start with the external forces right? Because this all sucks because of everything external, aka all of you! Couldn't possibly be me?
So what are some of the external forces out there that contribute to the fire burning brightly, or possibly being dampened, across the API space?
- People Aren't Always Nice - For some reason, the Internet has brought the worst out in many of us. I personally feel this is the nature of technology -- it isn't human, and the more time we spend with it, the less human we are, and the less empathy we have for other people.
- Everyone Takes A Little - Until you've spent a great deal of time in the spotlight writing, speaking, or otherwise you don't fully grasp this one. Every person you talk to, every hand you shake, takes a little bit from you -- making it super critical for people to give back -- it all takes a toll, whether we acknowledge it or not.
- Few Ever Give Back - The general tone set by startup culture and VC investment is to take, take, take, and rarely give back. The number of people who want to pick your brain, take your open work, and not ever give anything in return is far greater than the people openly giving back, or acknowledging they should pay you for your time.
- Use & Subscribe To Open Channels - Follow me on Twitter, and Medium. Subscribe to the Atom Feed and email newsletter. Support those people in the space who provide open channels of information by tuning in and engaging.
- Fork, Contribute & Build Upon - When you fork someone's work, plan how you will contribute back, build upon, and cite their work. Don't just use open source; contribute back to it, and augment it with the value you bring to the table.
- Events Are Exhausting - Producing, organizing, and pulling off valuable events that focus on ideas over products is fucking hard, and you should support the open tech events you think contribute most to the tech space. Invest time, money, and your energy wherever you can afford it. I know your company demands you get the most out of your sponsorships, but step back and consider how you can give the most as part of your sponsorship as well.
- Where We Invest - 95% of the investment in the API space goes into proprietary products, services, and technology. The other 5% is divided between ideas, open concepts, specifications, definitions, and software. If companies do invest in "open" it is in name only, and not a true investment. Everyone suffers from this behavior, making the space pretty fucking business as usual--which is a fire that nobody wants to tend for very long.
- Intellectual Property - Unhealthy views on IP lock up ideas. I'm not saying copyright, patents, and licensing shouldn't exist. I am saying that aggressive, greedy views will continue to act as a wet blanket when it comes to APIs making a meaningful impact, and make the game pretty untenable for individuals.
Next up, what are some of the internal forces (aka my responsibility) that can contribute to the fire burning more brightly in the API space?
- Going To Burning Man - Yes, Burning Man will reinvigorate you each year, but we have to find a way to establish a regular altar that we can burn at, finding renewal on a regular basis without so much fucking dust involved.
- Eat Well - If we've learned anything over the last six years it is that man cannot live on pizza alone. Words of caution for you youngsters--eventually you will start falling apart, and this shit will break if you do not eat well.
- Alcohol - Drinking is fun and an important social lubricant, but it can also lead to some very bad behavior in real-time, as well as contributing to a multitude of other issues in life ranging from health to relationships.
- Exercise - We just can't sit on our ass all the time, as much as we'd like to think we can. Again, this isn't a problem for the youngsters, but as we get on in life, it will catch up with you, and we are better off finding a way to work it in regularly.
- Family Time - Spending time with family is important. Too much travel, screen time and work all hurt this. Regularly check in on what is important when it comes to family. Even if it is just working on API integrations with your kids (did I tell you I'm doing a project with my daughter) -- woohoo!
- Creativity - Take time to invest in and move forward creative projects. All work and no play makes us all very fucking boring, and is not conducive to any flame burning. The less I invest in my creative side, the less productive I am in my regular work. As an individual and a business, make sure there is investment in creative projects and space.
- Money - Money is not everything, but making some is required. I've had several waves of $$ making good in my career, and it rarely ever brought me more happiness, and always brought me more concern. There is a lot of focus on VC investment, and showcasing the successful founders in the space. To keep a sustained fire burning we have to realize there is a lot more to all of this than just making money.
These are just some of the key external and internal forces contributing to the fire burning within me individually when it comes to APIs, which I also feel contribute to the fire burning across the community (which I am part of). Startup and VC investment success do not equal community and individual success. Rarely does a startup winning contribute to any single individual's success, or to the wider community being more vibrant, creative, and rich. You have the rare rock star founder, and always the wealth of corporate and brand success, but these do not make a fire burn brighter in the community. It might attract a few moths to the flame along the way, but it doesn't truly enrich everyone, or provide fuel for a sustained burn--it is about burning bright, fast, and hard, which isn't good for most of us humans.
I keep going as the API Evangelist because I'm interested in what I'm doing. I'm fulfilled by learning, writing, sharing, and building. I will keep going for another 10, 20 years, hopefully until the end of my life, because a real fire is truly burning--not because I met my sales goals, sold my startup, or reached API economy nirvana (that is, API singularity). Most of the time I'm learning, I am being creative, and I've made more money than was required to pay my rent, pay my bills, and eat well. More meetings. More projects. More handshakes. More money does not always nurture me and keep the fire alive, personally or within the wider API community.
18 Oct 2016
API branding is an area that I find contradictory in the space: loss of brand control is among the top concerns for companies when doing APIs, while branding is simultaneously one of the most deficient areas of API operations, with most API providers having no branding guidance in their developer portals whatsoever. I think it is just one of those telling signs of how dysfunctional many companies are, and how their concerns are out of alignment with reality and with where they are investing their resources.
Every API should have some sort of branding page or area as part of their API operations--I even have a branding page. ;-) If you are looking for a healthy example to consider as a baseline for your branding page, take a look at Twitter's branding page, which provides the essential building blocks you should be considering:
- Simple URL - Something easy to find, easily indexed by search engines, even a subdomain like Twitter does.
- Logos & Assets - Provide us with at least a logo for your company, if not a wealth of assets to put to use.
- Branding Guidelines - Offer up some structure, helping guide us, and show you've put some thought into branding.
- Link to Embeddables - If you have any buttons, badges, and widgets, point us to your embeddable resources.
- Link to Terms of Service - Provide us with a quick link to the terms of service as it applies to branding.
- Contact Information - Give me an email, or another channel for asking a question if I get confused at any time.
I do not agree with all of Twitter's branding enforcement, but I respect that they have set a bar, and provide the ecosystem with guidance. At the very least, it makes sure all of us are using the latest logo, and when done right it can help encourage consistency across every integration that is putting API resources to work. I have a hard time respecting branding concerns from companies who don't have a dedicated branding resource area for me to consult.
When done right, APIs extend the reach of any brand. Think about Twitter and Facebook sharing, and other embeddables. Most API consumers are more than happy to help extend your brand, but we are going to be lazy, and ALWAYS need guidance and things to copy / paste. At the very least, make sure you have the minimum viable branding assets and guidance like Twitter does, it really is the minimum bar for any API operating out there, and should not be ignored.
18 Oct 2016
I am working on a project with a 16-year-old young lady to extract and tell a story using the YouTube API. I'm pretty excited about the project because the young lady happens to be my daughter Kaia Lane. If you've ever seen my API Blah Blah Blah t-shirt, you've seen her work. Historically she couldn't care less about APIs, until recently, when pulling data about one of her favorite YouTube stars came up--suddenly she is interested in learning more about APIs.
During regular chatting with my daughter, I shared a story on the entire history of Kickstarter projects broken down by city. She is a little geeky and likes Kickstarter, so I figured I'd share the approach to telling stories with data, and said that if she ever wanted help telling a story like this using YouTube, Instagram, or another platform, she should let me know. She came back to me a couple of days later asking to learn more about how she could tell a story like this using data pulled from one of the YouTube stars she follows.
Ok. I have to stop there for a moment. My 16-year-old daughter just asked me to learn more about APIs. :-) As an old goofy dad who happens to be the API Evangelist, I am beside myself.
I'm not 100% sure where this project will go. Right now I'm just seeing what data I can pull on Dan & Phil's video game YouTube channel, and from there we'll talk more about what type of story we want to tell about their followers, and get to work pulling and organizing the data we need. I couldn't think of a tougher audience to try to get interested in APIs. She isn't going to care about APIs, want to learn about them, let alone become proficient with them, unless they are relevant and interesting to her world.
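For anyone curious what "seeing what data I can pull" might look like in practice, here is a hedged Python sketch against the YouTube Data API v3. You need your own API key from the Google Developers Console, the channel ID is left out, and the helper names are my own, not part of any official SDK:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- get one from the Google Developers Console

def channel_stats_url(channel_id, key=API_KEY):
    """Build a YouTube Data API v3 request for a channel's statistics."""
    params = urllib.parse.urlencode(
        {"part": "statistics", "id": channel_id, "key": key})
    return "https://www.googleapis.com/youtube/v3/channels?" + params

def extract_stats(payload):
    """Pull the subscriber and view counts out of an API response dict."""
    stats = payload["items"][0]["statistics"]
    return {"subscribers": int(stats["subscriberCount"]),
            "views": int(stats["viewCount"])}

# An actual call would look something like:
#   payload = json.load(urllib.request.urlopen(channel_stats_url("UC...")))
#   print(extract_stats(payload))
```

From numbers like these, the next step would be pulling the channel's videos and comments, and shaping them into the data files that drive the story.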
I do not think this lesson is exclusive to teaching 16-year-olds about APIs. I think this applies to ANYONE potentially learning about APIs. I am a big fan of EVERYONE learning about APIs, because they are the plumbing that moves our bits and bytes around in our personal and professional worlds. The more we are aware of APIs, and the more we know we can put them to work, the more successful we are going to be in our lives. I want EVERYONE to open up and learn about APIs for this reason, but I REALLY REALLY want my daughter to find success in this way.
Just something to consider as we are trying to help key internal, essential partner, and other public stakeholders understand the API potential. How can we present APIs in a way that is relevant and interesting? Otherwise, most people probably aren't going to care, and it will all just be API Blah Blah Blah!
18 Oct 2016
I was reading a virtual panel on document and description formats for web APIs, and thought the conversation was very productive when it comes to helping bring the world of API documentation and definitions into better focus. I encounter daily reminders that folks do not see the many dimensions of API definitions, and the role they play in almost every stop along the life cycle. This virtual panel helps move this discussion forward for me, providing some clarification when it comes to the separation between API definitions and API documentation.
One of the questions asked of the panelists was "Do you see API Documentation and Description formats as a single thing? Or multiple things?" I found Zdenek Nemec's (@zdne) answer to be a great introduction for folks when it comes to understanding the importance of this separation:
There are definitely two different things. But truth be told, the initial incentive for the use API description formats was definitely the vision of API documentation without much work. However, the tide is turning as more and more people are discovering the benefits of the upfront design, API contracts, and quick prototyping
Many people still see machine readable definitions as purely something that drives API documentation. OpenAPI Specs are just for deploying Swagger UI, and API Blueprint is just for using Apiary. When in reality, the why and how you are doing API definitions is much, much deeper. As Z from Apiary points out, it is key to the API design and prototyping process, and critical to establishing the API contract.
Realizing that crafting machine-readable API definitions is not just about API documentation--that it is essential to establishing a meaningful technical, business, and legal contract internally, with partners, and maybe the public, early on in the API life cycle--is empowering. I would say that I didn't fully appreciate API design, and understand the depth of it, until I had the OpenAPI Spec providing me with a scaffolding to hang things on.
Anyways, it is a great conversation from some very smart folks over at InfoQ, and I recommend heading over and spending time absorbing it. I'm leaving it open for a week and rereading until it all sinks in.
18 Oct 2016
I look at a lot of websites for companies who are providing APIs and selling services to the API space. When I find a new company, I can spend upwards of 10 minutes looking for all the relevant information I need to connect--elements like where their Twitter and Github accounts are. These are all the key channels I am looking for so that I can better understand what a company does and stay in tune with any activity, but they are also the same channels that developers will be looking for so that they can stay in tune with a platform as well.
I spend a great deal of time looking for these channels, so I'm always happy when I find companies who provide a near complete set of icons for all the channels that matter. Restlet, the API design, deployment, management, and testing platform, has a nice example of this in action, providing the following channels:
- Stack Overflow
All of these channels are available as orderly icons in the footer of their site, making my job easier, and I'm sure making it easier for other would-be API developers. They also provide an email newsletter signup along with the set of icons. While this provides me with a nice set of channels to tune into--more than I usually find--I would still like to have blog and Atom feed icons, as well as maybe AngelList or Crunchbase, so that I can peek behind the business curtain a little.
I know. I know. I am demanding, and never happy. I am just trying to provide an easy checklist of the common channels that companies looking to do interesting things with APIs should consider offering. You should only offer up channels that you can keep active, but I recommend that you think about offering up as many of these as you can possibly manage. No matter which ones you choose, make sure you organize them all together, in the header and footer of your website, so nobody has to go looking for them.
18 Oct 2016
One of the reasons I write so much on API Evangelist is to refine how I tell stories about APIs, and hopefully make a bigger impact by being more precise in what I'm saying. I feel like one of the reasons hypermedia API concepts have taken longer than we anticipated to spread is that many (not all) of the hypermedia elite suck at telling stories. I am sorry, but you have done a shit job selling the importance of hypermedia, and oftentimes were just confrontational, turning many folks off to the subject.
I am playing around with telling different stories about hypermedia, hoping to soften some of the sharp edges of the hypermedia stories we tell. One of the core elements of hypermedia APIs is that they provide us with links as part of each API response, emulating much of what works on the web, in the system-to-system and application layers. One of the benefits of these links is that they help facilitate the evolution and change that is inevitable in our API infrastructure.
The default argument from hypermedia folks is often about versioning and change-resistant API clients. I'm looking to help make these concepts more human: employing hypermedia as part of our API contracts is more about providing an honest and flexible view of the business relationship we are entering into. As API provider and consumer, we are acknowledging that there will be change and evolution in the resources being delivered, and being more honest that this exists as we craft this API contract.
As an API provider, I am not just going to dump this JSON product response on you and expect you to know what to do, refusing to have a discussion with my consumers about how this product will change. It might be out of inventory, it might be replaced by another product, and any number of evolutionary changes may occur in the normal course of business. Hypermedia gives me a framework to acknowledge that this will indeed evolve upon entering our agreed-upon API contract, and provides a channel for all integrated API clients to receive future instructions about what is possible when things do change--our API contract is flexible.
As an API provider, when I share this machine-readable representation of my product, I'm not just assuming you know you can add it to the cart, or to a wish list; I am providing you with the links to do this, and the link relations that define this relationship. As things change and evolve, I have a way to share other opportunities, and changes to this business contract. When a product is recalled, there is a link routing you to what you need. When a product is replaced, you are routed to what you need. As an API provider, I'm committed to providing you with what you need to be successful when putting API resources to work.
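A rough sketch of what this looks like in practice. The link relations and URLs here are invented for illustration, and the HAL-style "_links" shape is just one of several hypermedia conventions:

```python
# A product representation with hypermedia links. The relations
# (add-cart, wishlist) and URLs are hypothetical examples.
product = {
    "id": "prod-123",
    "name": "Example Widget",
    "status": "available",
    "_links": {
        "self":     {"href": "/products/prod-123"},
        "add-cart": {"href": "/cart/items"},
        "wishlist": {"href": "/wishlists/default/items"},
    },
}

def link_for(resource, rel):
    """Follow a link relation if the provider offers it, instead of
    hardcoding URLs into the client."""
    link = resource.get("_links", {}).get(rel)
    return link["href"] if link else None

# The client asks "can I add this to the cart?" rather than assuming:
print(link_for(product, "add-cart"))
# When a relation isn't offered (say the product was recalled and the
# provider removed it), a well-behaved client adapts instead of breaking:
print(link_for(product, "buy-now"))
```

The client's behavior is driven by the links the provider includes in each response, which is what lets the contract flex as the product's situation changes.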
Hypermedia goes beyond just providing links, managing versioning, and ensuring more resilient API clients--a very technical story. Hypermedia gives us a more honest and flexible way to define the API business contracts we are entering into when we make API resources available internally, with partners, and even with the public at large. Change is inevitable, and I feel like we need to be more honest about this change, and be less rigid in how we are doing things--and I keep coming back to hypermedia as a meaningful way we can make this happen.
17 Oct 2016
I am spending a lot of time thinking about conversational interfaces, and how APIs are driving the voice and bot layers of the space. While I am probably not as excited about Siri, Alexa and the waves of Slack bots being developed as everyone else, I am interested in the potential when it comes to some of the technology and business approaches behind them.
When it comes to these "conversational interfaces", I think voice can be interesting, but not always practical for actually interacting with everyday systems--I just can't be talking to devices to get what I need done each day, but maybe that is just me. I'm also not very excited about the busy, chatty bots in my Slack channels, as I'm having trouble even dealing with all the humans in there, but then again maybe this is just me.
I am interested in the interaction between these conversational interfaces and the growing number of API resources I keep track of, and how voice and bot applications that are done thoughtfully might be able to do some interesting things and enable some healthy interactions. I am also interested in how webhooks, iPaaS, and push approaches like we are seeing out of Zapier can influence the conversation around conversational interfaces.
Conceptually I can be optimistic about voice enablement, but I work in the living room across from my girlfriend, and I'm just not going to be talking a lot to Siri, Alexa, or anyone else...sorry. Even if I move back to our home office, I'm really not going to be having a conversation with Siri or Alexa to get my work done, but then again maybe it's just me. I'm also really aware of the damaging effects of having too many messaging, chat, and push notification channels open, so the bot thing just doesn't really work for me, but then again maybe it's me.
I am more of a fan of asynchronous conversations than I am of the synchronous variety, which I guess could be more about saved conversations, crafted phrases or statements that run as events triggered by different signals, or even by me when I need them, via my browser--like Push by Zapier does. I see these as conversations that enable a single API-enabled event, or a series of them, to occur. This feels more like orchestration, or scripted theater, which accomplishes more of what I'm looking to do than synchronous conversations would.
Anyways, just some exercising of my brain when it comes to conversational interfaces. I know that I'm not the model user that voice and bot enablement will be targeting with their services, but I can't be all that out in left field (maybe I am). Do we really want to have conversations with our devices, or the imaginary elves that live on the Internet in our Slacks and Facebook chats? Maybe for some things? What I'd really like to see is a number of different theaters where I can script and orchestrate one-time and recurring conversations with the systems and services I depend on daily, with the occasional synchronous engagement with myself or other humans, when it is required.
17 Oct 2016
With a lot of my storytelling, I feel like captain obvious, but I also recognize the importance of simple, and sometimes repetitive, storytelling to help reach my audience of time- and resource-strapped API providers. Sometimes API providers are just too busy to remember the small things, and this is where I come in to help you remember some of the most obvious aspects of providing APIs that can be essential to success.
This morning's reminder is that nobody will know the cool things your API does if you do not showcase what is being done with it. This is why I added my API showcase research, to highlight the approach of the successful API providers, and give me a regular reminder to write about the topic. I understand you are busy, but so are your API consumers, and there is a good chance they need a little help understanding what can be done with your super valuable API resource(s).
Showcasing your successful API integrations is where the rubber meets the road with API operations. Tell us all about the cool things people are doing with your APIs. It doesn't have to be a full blown case study (although that would be nice too), it can be 250 words, plus some bullets, and another 250 words, helping us understand the possibilities in 2 minutes or less. From my experience in the space, people eat up these simple, easy to read examples of APIs solving problems, and providing solutions.
Ideally, you are showcasing the cool things being done with your API on a regular basis via your blog, but if for some reason, you are too busy, make sure and at least share your thoughts with me via email. You never know, if it is worthy, I might take the time to share here on API Evangelist.
17 Oct 2016
I tune into a number of different channels looking for signs of individuals, companies, organizations, institutions, and government agencies doing APIs. I find APIs using Google Alerts, monitoring Twitter and Github, using press releases and via patent filings. Another way I am learning to discover APIs is via alerts and notifications about security events.
An example of this can be found via the Industrial Control Systems Cyber Emergency Response Team out of the U.S. Department of Homeland Security (@icscert), with the recently issued advisory ICSA-16-287-01 OSIsoft PI Web API 2015 R2 Service Acct Permissions Vuln on the ICS-CERT website, leading me to the OSIsoft website. They aren't very forthcoming with their API operations, but this is something I am used to, and in my experience, companies who aren't very public with their operations tend to also cultivate an environment where security issues go unnoticed.
I am looking to aggregate API related security events and vulnerabilities like the feed coming out of Homeland Security. This information needs to be shared more often, opening up further discussion around API security issues, and even possibly providing an API for sharing real-time updates and news. I wish more companies, organizations, institutions, and government agencies would be more public with their API operations and be more honest about the dangers of providing access to data, content, and algorithms via HTTP, but until this is the norm, I'll continue using API related security alerts and notifications to find new APIs operating online.
17 Oct 2016
I am working on a project using the YouTube API, and came across their inline OAuth 2.0 scopes, allowing you to explore what the API does as you are browsing the API docs. I am a huge fan of what interactive documentation like Swagger UI and Apiary brought to the table, but I'm an even bigger fan of the creative ways people are evolving upon the concept, making learning about APIs a hands-on, interactive experience wherever possible.
To kick off my education on the YouTube API, I started playing with the search endpoint for the YouTube Data API. As I was playing with it, I noticed they had an API explorer allowing me to call the search method and see live data.
Once I clicked on the "Authorize requests using OAuth 2.0" slider, I got a popup that gave me options for selecting OAuth 2.0 scopes that would be applied by the API explorer when I make API calls.
The inline OAuth is simple, intuitive, and exactly what I needed to define my API consumption, right there within the YouTube API documentation. I didn't have to write any code or jump through a bunch of classic OAuth hoops. It gives me what I need for OAuth, right in the documentation--OAuth this simple is something you don't see very often.
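For reference, the request the explorer ends up making looks something like the sketch below. The endpoint and parameters follow the public YouTube Data API v3 search method, while the token value is a placeholder and nothing is actually sent here:

```python
from urllib.parse import urlencode

# Build the search request the API explorer makes (not sent over the wire).
base = "https://www.googleapis.com/youtube/v3/search"
params = {"part": "snippet", "q": "api evangelist", "maxResults": 5}
url = base + "?" + urlencode(params)

# The OAuth 2.0 token granted by the inline authorization flow rides along
# in the Authorization header; this token value is a placeholder.
headers = {"Authorization": "Bearer ya29.EXAMPLE_TOKEN"}

print(url)
```

The scopes you pick in the popup determine what that bearer token is allowed to do when the explorer fires off requests like this one.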
I'm a supporter of more API documentation being an attractive static HTML layout like this, with little interactive modules embedded throughout the API docs. I'm also interested in seeing more web literacy being thrown in at this layer as well, pulling common web concepts and specification details, and providing popups, tooltips, and other inline API design learning opportunities.
I'm adding YouTube's approach to OAuth to my list of modular approaches to delivering interactive API documentation, for use in future storytelling.
17 Oct 2016
Someone turned me on to an OpenAPI Spec to Slate / Shins compatible markdown converter on Github this last week. I have been an advocate for making sure we are still using machine readable API definitions for our API documentation, even if we are deploying the more attractive Slate. I've been encouraging folks to develop an attractive option for API documentation driven by OpenAPI Spec for some time, so I am happy to add this converter to my API documentation research and toolbox.
I am increasingly publishing YAML editions of my OpenAPI Specs, which drive API documentation that operates on Jekyll, using Liquid. So I am all about having many different ways to skin the API documentation beast, allowing it to be easily deployed as part of any CI flow, and enabling the publishing of API docs for many different APIs, in many different developer portals, or embedded on any device as part of IoT deployments. I think a diverse range of approaches is optimal, as long as we do not lose our machine readable core.
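To show what "machine readable core" means in practice, here is a minimal sketch of the kind of loop a Liquid template (or any converter) runs over an API definition to render documentation pages. The spec fragment is hypothetical, and I use JSON here to keep the sketch stdlib-only; the YAML editions parse to the same structure with a library like PyYAML:

```python
import json

# A tiny, hypothetical OpenAPI 3.0 fragment driving the documentation.
spec = json.loads("""
{
  "openapi": "3.0.0",
  "info": {"title": "Example API", "version": "1.0.0"},
  "paths": {
    "/products": {"get": {"summary": "List products"}},
    "/products/{id}": {"get": {"summary": "Retrieve a product"}}
  }
}
""")

# The kind of loop a docs template runs to render one entry per operation.
for path, methods in spec["paths"].items():
    for verb, details in methods.items():
        print(f"{verb.upper()} {path} - {details['summary']}")
```

Because the docs are generated from the definition rather than hand-written, the same spec can feed Slate, Jekyll, or any future format without rework.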
14 Oct 2016
I was learning this week about Geofeedia providing law enforcement access to social media data from Twitter, Facebook, and Instagram via their API(s). Geofeedia was making money by selling surveillance services to law enforcement built on top of these social APIs. I understand Facebook and Instagram have now cut off that access, but they could still have Twitter access through a reseller (Gnip?).
This isn't something that will just go away. If law enforcement wants access to user's data on Facebook, Twitter, and Instagram, they are going to get it. I am guessing that the rules regarding what law enforcement can or can't do aren't clear (I will have to learn more), and something that is just left up to platforms to enforce via their terms of service. It is a problem that modern approaches to API authentication, management, and analytics are well designed to help make sense of--we just have to come up with a new layer defined specifically for law enforcement.
Law enforcement should be able to fire up any standard or customized solution they desire to search social media data via APIs. However, they should be required to register an application, obtaining the keys and OAuth tokens that any other developer would. Rather than law enforcement being the customer of companies like Geofeedia, each agency should get its own app id and keys, providing an identifiable application that represents a specific law enforcement agency. They can still buy the software from providers; they just need the unique identifier when it comes to API consumption.
Along with this access, we also need to begin to define an auditable or regulatory layer, where other government agencies or 3rd party auditors can get access to the access logs for all applications registered to law enforcement agencies. A kind of real time FOIA access to the API management layer, allowing for a window into how law enforcement agencies are searching and putting social media data to use.
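The audit layer described above could look something like this sketch. Everything here is hypothetical--the app ids, log fields, and the idea of an auditor-facing query are my own illustration of what an API management layer already captures today:

```python
from datetime import datetime, timezone

# Hypothetical API management access log; every entry is keyed to a
# registered application id, including law enforcement applications.
access_log = [
    {"app_id": "le-agency-0042", "endpoint": "/search",
     "query": "geofence:downtown",
     "at": datetime(2016, 10, 14, 9, 30, tzinfo=timezone.utc)},
    {"app_id": "dev-app-7781", "endpoint": "/search",
     "query": "cats",
     "at": datetime(2016, 10, 14, 9, 31, tzinfo=timezone.utc)},
]

def agency_activity(log, app_id):
    """What an auditor or FOIA-style API might return for one agency app."""
    return [entry for entry in log if entry["app_id"] == app_id]

print(len(agency_activity(access_log, "le-agency-0042")))  # 1
```

The point is that the plumbing already exists: if each agency consumes APIs under its own app id, producing this window for auditors is just another API on top of the logs.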
Of course, there will be special considerations regarding the OAuth interactions at play. At what point are end-users notified that their data is being accessed by law enforcement, and at what point do other government agencies and 3rd party auditors get access to the log file APIs for the law enforcement applications that are consuming Facebook, Instagram, Twitter, and other APIs?
There is a lot of work ahead to define how law enforcement can put social data to work via APIs, but the tools are there. Modern API infrastructure excels at this when done right. We can give law enforcement access to the data they need, while also enabling transparency in the process, making the platform operators like Twitter and Facebook feel better, while also respecting the privacy of US citizens. We need to just hammer out the OAuth scopes for these relationships similar to how we do it for energy, healthcare, and other vital data being served up via APIs.
This is a problem that will keep popping up. We can't just rely on groups like ACLU to find the companies who are acting as brokers and waiting for the platforms to play whack a mole when these companies are singled out. We need a formal definition to guide how law enforcement is obtaining access to increasingly vital social media and network data via APIs. We need some transparency and consistency in the process, something that APIs do well when executed properly.
While it makes me cringe to think about, I predict that many companies will be required to provide API access in the future--specifically for this purpose. My hope is that there is also some transparency and consistency baked into this approach, leveraging what web APIs do best: allowing law enforcement to get what they legally need, while allowing other government agencies, watchdog groups, and journalists to get self-service, predefined access to understanding what law enforcement is up to when it comes to surveillance using online services.
I will spend more time on this subject, mapping out how it might work across the top platforms like Twitter, Facebook, and Instagram. I'm hoping that we can make some movement in this area before too many other episodes occur.
14 Oct 2016
While spending some time going through my API monitoring research, I found myself creating an OpenAPI spec and APIs.json index for the DataDog API, and had the realization that this is the beginning of what I was talking about when I described a DevOps aggregation API platform. DataDog is just the monitoring layer of this vision, but it has many of the other elements I'm looking for.
DataDog has all the monitoring elements present in their API platform, and they have all the platform integrations I'm envisioning in a DevOps aggregate API. We just need the same thing for design, deployment, virtualization, serverless, DNS, SDK, documentation, and the other critical stops along a modern API life cycle.
I'll keep profiling the APIs for the service providers in my life cycle research until I get more of this DevOps aggregate API definition mapped out. Hopefully, I will stumble across other providers like DataDog who are doing such an interesting job with the choreography, and orchestration that will be needed to work across so many platforms. I appreciate API aggregation service providers who 1) have an API, and 2) share so much of the definition behind their work.
The next thing I will work on is profiling the metrics DataDog has defined across the platforms they integrate with. Take a look at the metrics they have defined for each integration; there are some valuable patterns available in their work. I'd love to see a common set of API monitoring metrics emerge across providers--something that, if we standardize and share it in a machine readable way, others will emulate, making interoperability much smoother when it comes to monitoring.
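To make the metrics pattern concrete, here is a sketch of the payload shape DataDog's v1 series endpoint accepts for submitting a metric. The metric name and tags are made up, and nothing is sent over the wire--the point is that a common, machine readable shape like this is exactly what could be standardized across monitoring providers:

```python
import json
import time

# Sketch of a DataDog-style v1 series payload; metric name and tags
# are hypothetical, and this is only constructed, never submitted.
payload = {
    "series": [
        {
            "metric": "api.monitoring.response_time",
            "points": [[int(time.time()), 142.0]],
            "type": "gauge",
            "tags": ["provider:example-api", "region:us-east"],
        }
    ]
}

print(json.dumps(payload)[:60])
```

If several providers agreed on the metric names and tag vocabulary in a payload like this, swapping or aggregating monitoring services would stop being a bespoke integration project.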
I just wanted to keep beating my drum about the fact that APIs aren't just about building applications, they are also critical to the API life cycle, and making sure there are stable, scalable APIs to build applications on top of in the first place.