Elevating My Awareness of Artificial Intelligence

I am not as excited about recent evolutions in artificial intelligence (AI) as many others are, but as with APIs, I am looking to find and follow the story. This is something that takes regular studying and thinking, so I am spending my weekend thinking through all of this, trying to define where I stand, so I can speak somewhat intelligently about where things are going. As with most things tech, I am coming at this through the API lens, but since OpenAI, and most of the AI / ML conversation occurring right now, is API driven and defined, I think my lens will prove relevant. I have to be honest, I don't know a whole lot about the inner workings of AI / ML. It is all a little black box and a little out of reach, but I have been paying attention and used a lot of solutions, and I think my vantage point likely reflects where many others find themselves, whether they are honest about it or not.

I am looking to come at this with as honest an opinion as possible, so I am staying out of the realm of fiction and not making claims about AI actually having any sentience or intelligence. I know better. I respect humanity and the realm of the living more than that, but I am also willing to acknowledge that GPT-3 is pretty impressive when it comes to working with some very specific slices of our world. With the little I have played with, I am pretty impressed with the results, but I am playing more in the realm of known knowns with data and existing algorithmic processes, not art, literature, and the other more magical realms of existence. This post is my first deep dive into thinking about artificial intelligence, understanding where things are headed, staking my claim in this new digital land rush, and hopefully equipping myself with some tools along the way.

Vocabulary

I am a storyteller, so words matter. I am regularly left confused by the vocabulary used in the realm of artificial intelligence. I like to develop vocabularies that shape any realm I am studying, as it helps me ground how I see things, while also understanding how others are being influenced through storytelling. Remember, I am doing all of this to understand the stories, not AI, ML, or even APIs. The words used are very intentional, and they are often broad and vague on purpose, as it helps light up the imagination when we talk about what is possible, and often hides the fact that some things aren't possible. Here is the vocabulary I am using to help shape how I see this world, and I will add others as I find them.

  • Artificial Intelligence - Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to intelligence of humans and other animals. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.
  • Machine Learning - Machine learning (ML) is a field of inquiry devoted to understanding and building methods that “learn” – that is, methods that leverage data to improve performance on some set of tasks. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.
  • Neural Networks - A neural network is a network or circuit of biological neurons, or, in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus, a neural network is either a biological neural network, made up of biological neurons, or an artificial neural network, used for solving artificial intelligence (AI) problems.
  • Large Language Model (LLM) - Though the term large language model has no formal definition, it generally refers to deep learning models having a parameter count on the order of billions or more. LLMs are general purpose models which excel at a wide range of tasks, as opposed to being trained for one specific task (such as sentiment analysis, named entity recognition, or mathematical reasoning).  

I am really not looking to go too deep down this rabbit hole. I know it is something that could seriously dominate my time. There have been a few separate instances where I have gone deep into learning about TensorFlow, ML APIs, and other dimensions, but I know what a time suck it can be, so I avoid spending too many cycles. However, as with other aspects of my work, I always have to balance how deep I go on a subject, striking the right balance between knowing what I am talking about and not having to put in the time to be an expert. This vocabulary will provide me with a grounded frame of reference that I will use throughout my storytelling, keeping things mapped to the words others are using, but rooted in some research I have done.

Models

For me, models are how we make sense of all this. They are how we break things down into usable, evolvable, and auditable units. Models depend on the scope and quality of what they are trained upon. Models can be small or large, good or bad, and their quality, usefulness, and accuracy will depend on a number of factors. I have created and trained models with TensorFlow, but haven't ever gone beyond that, only recently playing around with GPT-3, and I am on the waiting list for GPT-4. I am fascinated by how companies are positioning their models, and I have included one example from Bloomberg to help plant seeds around how industry or domain models are being used to differentiate stories.

  • TensorFlow - TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
  • GPT-3 & GPT-4 - Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model released in 2020 that uses deep learning to produce human-like text. Given an initial text as prompt, it will produce text that continues the prompt.
  • BloombergGPT - A Large Language Model for Finance - Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data.
  • Existing Models - It takes a lot of work to build your own models, so utilizing existing models is the preferred way of approaching business use cases for AI, but you are always beholden to the model you are using, and handing over your data to someone else.
  • New Models - Creating, training, and maintaining new models, or at least being able to refine existing models, is ideal for specialization, but you must have the expertise, compute, time, and other resources it takes to actually get your model doing something meaningful.

I am going to do more research into the different types of models in use out there, and how they are being positioned. A big factor for me, which I will cover more later, is the observability that exists around these models: whether it is intentionally a black box of smoke and mirrors, or there is actually provenance and transparency behind the model. For me, this is where the real story is. This game will be all about the stories spun around models and what they are capable of, and less about the real world results they deliver. Models provide us with a tangible unit of meaning when it comes to talking about the different moving parts of artificial intelligence, and I have seen some pretty interesting approaches to iterating on models that give me hope that we can do this right.
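
Since I mention creating and training models with TensorFlow above, here is a minimal sketch of what that looks like at the smallest possible scale. The data is just random noise and the layer sizes are arbitrary, so treat this as an illustration of the moving parts, not a recipe.

```python
# A toy example of the kind of small model training I have done with TensorFlow.
# The data here is random noise just to show the moving parts, not a real dataset.
import numpy as np
import tensorflow as tf

# Fake training data: 1,000 samples with 10 features, binary labels.
features = np.random.rand(1000, 10).astype("float32")
labels = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training is where the compute bill starts to add up on real data.
model.fit(features, labels, epochs=5, batch_size=32)
```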

Knowledge

The models used to power AI are only as good as the data they are trained on. The quality and usefulness of the models will be defined by the data fed into them. This is where we really crack open the API portion of this discussion: ChatGPT is trained using its API, and a ChatGPT plugin is provided access to knowledge by sharing an OpenAPI for any API it will be using to answer questions. It is easy to assume that OpenAI and other people in the business of developing models have some magic access to the world's knowledge, but this is the intersection that will matter the most to the success of AI / ML, and how it is applied in our personal and professional lives.

  • Web - Since the web is what most of us see on a daily basis, and represents the most tangible part of what we consider to be “knowledge”, we assume it is easy to consume and train models on, when in reality it can be very difficult to do, but there are resources like Common Crawl that help make it something anyone can train a model on.
  • Content - Creating and maintaining content beyond the open web and training AI models on it is a great way to develop the knowledge being made available. If you are training your own models you are capturing most of the value, whereas if you are using other people's models you are transferring the value available in your content to theirs.
  • Databases - There are endless numbers of public and private databases full of useful and useless data about our world, and the need to get these databases into machine learning models will be another boost to the world of APIs, pushing companies to invest in APIs so that they can be applied as part of artificial intelligence applications.
  • APIs - If you already have APIs as a company and are used to exposing your digital resources and capabilities as APIs, you are ahead of the game when it comes to potentially tapping into the AI evolution, but companies who haven't will be racing to invest heavily in APIs so that they can be used to train machine learning models.

Knowledge is distributed on the web. Knowledge is made available via APIs in structured ways. APIs are how ChatGPT and other AI players will gain access to knowledge when it comes to training their models, or convincing others to train their models for them. This is something that will continue to increase the popularity and importance of APIs, but in my experience it won't necessarily mean better APIs, and there will continue to be a lot of noise, garbage, and low-level knowledge littering the digital landscape. Access to knowledge will continue to shape artificial intelligence in the same way it has shaped real world human intelligence, both for good and bad.
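
To make the plugin connection above a little more concrete, here is roughly what the manifest that points ChatGPT at an OpenAPI looks like, expressed as a small Python script. The field names are based on my reading of the OpenAI plugin docs at the time of writing, and the example.com URLs and the events API itself are entirely made up.

```python
# A rough sketch of the plugin manifest that tells ChatGPT where the OpenAPI lives.
# Field names follow my reading of the OpenAI plugin docs; URLs and the API are hypothetical.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Events Directory",
    "name_for_model": "events_directory",
    "description_for_human": "Search a directory of public events.",
    "description_for_model": "Use this to look up public events by city and date.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # the OpenAPI is the contract
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "api@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```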

Medium

I find myself swirling in the possibilities with AI, and breaking down the mediums where it is being applied helps me compartmentalize my thinking. It was images that first captured my imagination when it came to machine learning, but I'd say that the artifact and code mediums are what have my attention in this particular moment. Honestly, I find many of the image AI applications floating around to be boring and exhausting, but the video and audio applications offer some potential time savings for me. Breaking down the mediums helps me focus my energies and separate out how and what I think about when it comes to artificial intelligence, otherwise I find I can easily get lost in the spectacle of it and can't make sense of what is happening.

  • Text - Generating text, creating summaries and takeaways, writing descriptions for APIs, producing social media snippets from interviews, and otherwise using AI to produce text.
  • Images - Applying machine learning models to images, and training models using images, generating and evolving images using a variety of approaches.
  • Video - Taking video and performing some magic on it, creating transcriptions, editing, enhancing quality, and doing the things I can't do or can't afford to pay someone to do.
  • Audio - Producing and enhancing audio as part of podcasts, but I’d also like to explore other avenues for producing audio that helps augment my existing work.
  • Voice - Taking things I say and then turning them into text or even turning them into AI commands that I can use to trigger other actions that matter to me.
  • Artifacts - I have used ChatGPT to produce OpenAPI, JSON Schema, and other machine readable artifacts using a variety of prompts.
  • Code - Writing scripts and other snippets of code that produce or consume APIs, test, and help me automate, driven by the artifacts I am producing elsewhere.

Compartmentalizing AI by medium helps me focus. There are times I really enjoy playing with images, but for business purposes right now I am very interested in helping abstract away complexity with the artifacts and code associated with producing and consuming APIs. This is also directly related to how we all will be making knowledge available for training AI models, whether we are using ChatGPT or another set of models. Using medium to organize things helps ensure that I can maximize different approaches to using AI, leveraging different types of models for different purposes, and even connect the dots between them, like I will be doing with voice to text, and voice to artifacts and code.

Compute

I have dabbled enough in the realm of AI / ML to know that the compute bills can be eye watering. I've spent a thousand dollars in a weekend training AI models for my Algorotoscope videos and images. I also know that where and how we apply compute will shape how AI impacts our personal and professional worlds. I think the cloud is essential to this moment we find ourselves in with ChatGPT and beyond, but I think the real opportunity is augmenting our worlds in the browser and on our mobile devices.

  • Cloud - Using Amazon, Azure, Google, or another cloud platform to train and deliver models.
  • On-Premise - Utilizing physical compute power on-premise in a data center or hosting environment.
  • Local - Applying compute locally to train and deliver machine learning models in a mix of ways.
  • Browser - Baking machine learning into our browser like Microsoft is doing with their Edge browser.
  • Mobile - Weaving machine learning into our mobile applications, leveraging on-device compute.  

Compute is one of the essential building blocks of all of this. It is both a technical concern and a business one. It costs a lot to play in this game. Maximizing how you wield compute is going to be a critical aspect of how this works or doesn't work. I will have to investigate whether or not there is some algorithm or law governing how fast models are being trained and how fast they are being applied over time. Compute is a commodity, but it is needed at a scale that isn't cheap to maintain, so this will be something that cuts a lot of people out of the conversation.

APIs

Next up in my stack are the APIs. APIs are how knowledge flows in and out of OpenAI, the platform behind ChatGPT. You can use and refine models via the OpenAI API. You can plug knowledge into ChatGPT models via their plugin infrastructure. OpenAPI is part of the manifest for plugging knowledge into ChatGPT models. APIs are the pipes for AI, just like they have been for web, mobile, and device applications, so I want to make sure that I am thinking about what will matter to delivering useful models.

  • Authentication - Managing authentication and authorization will make or break a lot of this.
  • Read - Being able to easily read data and the metadata around it will be critical to models.
  • Write - Right now things seem very read-only, but write capabilities are going to be important.

OpenAI encourages you to keep your OpenAPI files small, doing one thing well. This reflects my belief in APIs: keep them small and meaningful, reducing complexity. I am concerned about how the authentication for all of this will work and be managed. I am also worried about the metadata that will be needed to properly query APIs, make knowledge properly available for interactions via AI implementations, and train models with the right context. I know I am biased when it comes to APIs, but they are definitely the key to all of this.
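
Since I keep saying you can use models via the OpenAI API, here is a minimal sketch of what that read-only usage looks like with the openai Python package as it existed when I wrote this. The model name and prompt are just examples, and the API key coming from an environment variable is the authentication piece I worry about managing well.

```python
# A minimal sketch of reading from the OpenAI API with the openai Python package
# (the 0.x interface current at the time of writing). Model name and prompt are examples.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # authentication handled outside the code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize what an OpenAPI document is in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```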

Specifications

The bridge between the knowledge available in APIs and the artificial intelligence universe is specifications. OpenAI has adopted the OpenAPI specification as the contract between any API and OpenAI via plugins. This uses OpenAPI and JSON Schema to connect to any API, and OpenAPI is just the start; I am confident that other specifications will be needed to help make the future of artificial intelligence possible. We are going to have to scale and automate all of this API connectivity, and API specifications are key to doing this at scale, mapping out the knowledge landscape in a machine readable way.

  • OpenAPI - Defining access to HTTP APIs, making resources available for use in machine learning models.
  • JSON Schema - The schema of all the digital bits of knowledge being used to power artificial intelligence.
  • Collections - Adding a layer of data and context to any query being made to APIs for knowledge.
  • Environments - Providing an essential layer for managing authentication and other secrets needed.
  • AsyncAPI - Connecting models to other event-driven APIs, making all of this much more interactive.
  • Spectral - Governing APIs being used to ensure they possess different characteristics that are needed.  

These specifications are needed as the glue between the knowledge and how it is used in artificial intelligence, but interestingly, you can also use AI to create, convert, and manage these specifications. You can ask ChatGPT to create OpenAPI, JSON Schema, Collections, and Spectral rules. ChatGPT is great at working with these machine readable formats: generating them from nothing, comparing them, converting them, and helping automate how they are used. This is the area I am most interested in when it comes to making sense of the artificial intelligence landscape, because it is what matters the most at scale.
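
To make the JSON Schema layer a little more tangible, here is a small sketch that validates a payload returned by an API against the schema that describes it, using the jsonschema package. The event schema and payload are invented for illustration.

```python
# A sketch of the JSON Schema layer at work: validating an API payload against
# the schema that describes it. The event schema and payload are made up.
from jsonschema import validate, ValidationError

event_schema = {
    "type": "object",
    "required": ["name", "startDate"],
    "properties": {
        "name": {"type": "string"},
        "startDate": {"type": "string"},
        "location": {"type": "string"},
    },
}

payload = {"name": "API Meetup", "startDate": "2023-06-01", "location": "Portland"}

try:
    validate(instance=payload, schema=event_schema)
    print("Payload matches the schema.")
except ValidationError as error:
    print(f"Payload does not match the schema: {error.message}")
```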

Creating Specifications

It is interesting to explore the role that OpenAPI is playing in all of this connectivity, but also what is possible with ChatGPT when it comes to creating OpenAPI. I have to admit that I was pretty damn impressed with what was possible when it comes to creating OpenAPI out of thin air. I found pretty good success with just a handful of interesting areas I have been trying to solve for some time now.

  • Create an OpenAPI 3.0, JSON Schema, and Postman Collection from this documentation URL
  • Create an OpenAPI 3.0, JSON Schema, and Postman Collection for a schema.org events object
  • Create an OpenAPI 3.0, JSON Schema, and Postman Collection for [any company] API
  • Create an OpenAPI 3.0, JSON Schema, and Postman Collection for [any word] API

Give me some more time and I feel like I could come up with some other interesting ways to create an OpenAPI. I was just working with very simple, yet powerful examples. Next, I am going to get more precise in how I ask for the OpenAPI, and the elements of the API. I spent about an hour or two playing around, and I have to admit I am pretty impressed with what it could create, and I am looking forward to exploring more.
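
One small habit I picked up while doing this: after prompting for an OpenAPI, check that what came back actually parses and has the top-level keys an OpenAPI 3.0 needs before doing anything else with it. The generated_text below is just a stand-in for whatever ChatGPT returned.

```python
# Sanity-check a generated OpenAPI: does it parse as YAML, and does it have the
# required top-level keys? generated_text stands in for a ChatGPT response.
import yaml

generated_text = """
openapi: 3.0.3
info:
  title: Events API
  version: 1.0.0
paths:
  /events:
    get:
      summary: List events
      responses:
        '200':
          description: A list of events
"""

document = yaml.safe_load(generated_text)

missing = [key for key in ("openapi", "info", "paths") if key not in document]
if missing:
    print(f"Generated document is missing: {', '.join(missing)}")
else:
    print(f"Looks like OpenAPI {document['openapi']} with {len(document['paths'])} path(s).")
```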

Managing Specifications

After playing around with creating OpenAPIs, I started playing around with managing them. I wanted to see how good ChatGPT was at injecting specific elements into an OpenAPI. It turns out it was pretty good at injecting some of the common things into an OpenAPI, offering a sneak peek at how AI can reduce friction for API architects, designers, and developers. I am confident that I can find an endless number of ways to make our lives easier when it comes to applying ChatGPT to OpenAPI documents, but here are a couple of quick ways I found out of the gate.

  • Adding a description to an OpenAPI 3.0
  • Adding a license to an OpenAPI 3.0
  • Adding contact information to an OpenAPI 3.0
  • Adding a summary to an OpenAPI 3.0 operation
  • Adding a description to an OpenAPI 3.0 operation
  • Adding an operation ID to an OpenAPI 3.0 operation
  • Adding descriptions to parameters
  • Adding descriptions to schema properties
  • Adding enumerators to schema properties
  • Adding summaries to Postman Collections

ChatGPT was pretty damn good at helping me with some of these repetitive things I find myself doing on a regular basis. I am genuinely excited about the potential here. API design takes on a whole other tone when you can just bark commands at your OpenAPI. The challenge with all of this, though, is that you have to know what needs to be done. You have to know OpenAPI to know to ask for these things. Regardless, it provides a compelling look at what is possible when it comes to managing our API specifications.
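
The same housekeeping I was asking ChatGPT to do can be sketched as a simple patch over an OpenAPI document. The contact and license defaults here are just examples of the boilerplate I find myself adding over and over, not anything the specification requires.

```python
# A sketch of injecting common info elements into an OpenAPI document in place.
# The defaults are illustrative; swap in whatever your team actually uses.
def add_common_info(openapi: dict) -> dict:
    info = openapi.setdefault("info", {})
    info.setdefault("description", "TODO: describe what this API does and who it is for.")
    info.setdefault("contact", {"name": "API Team", "email": "api@example.com"})
    info.setdefault("license", {"name": "Apache 2.0",
                                "url": "https://www.apache.org/licenses/LICENSE-2.0"})
    return openapi

document = {"openapi": "3.0.3", "info": {"title": "Events API", "version": "1.0.0"}, "paths": {}}
print(add_common_info(document)["info"])
```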

Testing Everything

The next area I was playing around with was using ChatGPT to generate different types of tests. I wanted to understand how we could further automate quality when it comes to our APIs in general, but also when it comes to supporting ChatGPT integrations. I find it pretty profound that you can generate tests using the same AI that you are powering with the APIs that you are testing. Here are just a few of the ways I was using ChatGPT to test different aspects of an API.

  • Generate Postman test script to check the status code of the [operation name]
  • Generate a test script to validate the schema for the [operation name] response
  • Generate a test script to check the response time for each individual API path
  • Generate a regular expression to test for a specific pattern in an API response value
  • Generate a Spectral rule to ensure that every OpenAPI possesses the proper media type
  • Generate a Spectral rule to ensure that every OpenAPI possesses the proper HTTP status codes
  • Generate a Spectral rule to ensure that every OpenAPI possesses the proper headers  

These are just a few of the ways in which I pushed ChatGPT to help me with testing, not only an instance of an API, but also the interface and implementation of an API. ChatGPT proved pretty adept at producing tests for Postman collections, with or without referencing OpenAPI as the contract. Like the contracts themselves, I was impressed with ChatGPT's ability to produce the tests needed to create high quality contracts and ensure the contracts were implemented properly, showing potential for reducing drift.
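
The tests ChatGPT generated for me were Postman scripts, but the same basic checks can be sketched in plain Python: status code, response time, and schema. The URL and the events schema below are hypothetical, so this is the shape of the checks rather than a working test against a real API.

```python
# Status code, response time, and schema checks against a hypothetical events API,
# mirroring the kinds of tests I was asking ChatGPT to generate as Postman scripts.
import requests
from jsonschema import validate

EVENTS_URL = "https://api.example.com/events"  # hypothetical endpoint
events_schema = {"type": "array", "items": {"type": "object", "required": ["name"]}}

response = requests.get(EVENTS_URL, timeout=10)

assert response.status_code == 200, f"Expected 200, got {response.status_code}"
assert response.elapsed.total_seconds() < 1.0, "Response took longer than one second"
validate(instance=response.json(), schema=events_schema)
print("Status, response time, and schema checks all passed.")
```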

Context

Throughout my recent AI journey I am perpetually reminded of how much context is required for all of this to work. We are really good at undervaluing how much information we humans process in any given moment, and how much of the web works because it is us humans navigating it. I see this regularly throughout the world of APIs, where many developers love to overlook the role that humans play and assume that all of this is easy to automate. I found ChatGPT to work well when I gave it more context and understood what is possible and what is not, something I think we'll have to think deeply about if we are to be successful.

  • Questions - How you ask questions is such an art, and will completely determine your success using ChatGPT.
  • Bounded - I was flapping in the wind until I started developing a bounded context for my questions and the answers.
  • Domains - It was clear to me that general intelligence will only get us so far and the need for domains is key.
  • Scope - Scoping quickly emerged as how I was going to find success, keeping things precise rather than large.
  • Right-Sized - Small only gets you so far, so you have to explore and right-size your questions applied to chat.
  • Accuracy - You really have to evaluate your answers and be scientific in assessing the accuracy of your responses.  

This is where I feel like we are going to spin out the most when it comes to using AI. Without context it will get lost, and AI will lead us astray. AI will be all about developing walled gardens of knowledge in the form of proprietary models. These models will be black boxes, often not fully understood by those who wield them, and kept even darker out of real and perceived proprietary concerns. The benefits of AI will be shaped by domains and meaningful bounded contexts articulated by experts, and I feel that artificial general intelligence will only get us so far.
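
Here is how bounding context looked in practice for me: a system message that scopes the domain and tells the model what to refuse. This is a sketch of the approach, not a recipe; the wording and model name are just examples of how I have been framing things.

```python
# Bounding context with a system message that scopes the domain of the conversation.
# The wording and model name are examples of the approach, not a prescription.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

bounded_context = (
    "You are helping with OpenAPI 3.0 documents for an events API. "
    "Only answer questions about this API's paths, schemas, and examples. "
    "If a question is outside that scope, say so instead of guessing."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": bounded_context},
        {"role": "user", "content": "Add a description to the /events GET operation."},
    ],
)
print(response["choices"][0]["message"]["content"])
```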

Awareness

One of the things I found myself thinking a lot about as I read through various articles on AI, ChatGPT, and the applications in different industries was the lack of awareness that exists, and the opportunity for exploitation that exists because of it. I admittedly only have a certain level of awareness when it comes to AI and ML, but I have a heightened level of awareness of APIs, specifications, as well as the business and politics that exist across the tech sector. This, I feel, will be one of the most damaging aspects of AI when it comes to what it does to markets and our personal lives, resulting from a lack of awareness at the levels that matter the most to humans.

  • Understand Artificial Intelligence - AI / ML is tough to see and complex to understand and will further obfuscate what is happening.
  • Understand APIs - I have spent over a decade trying to shine a light into the inner workings of the digital economy, which AI will make harder.
  • Understand Specifications - The semantics and metadata of how all the knowledge and “intelligence” behind this works will be critical.
  • Understand Business - Understanding the motivations of investors and large enterprises is a key aspect of where you sit in regards to awareness.
  • Understand Politics - There are a lot of games being played right now, from lower levels of labor up to highest levels of government regulation.
  • Above or Below the Line - As with APIs, it will all depend on whether your awareness is above or below the API, or now the AI, line.

The whole purpose of this post is to evolve my awareness in the realm of AI and ML, but also the intersection with APIs. I refuse to go too far down the rabbit hole with actual AI, but I will keep dabbling and playing with it so that I get the fundamentals. Similarly, I won't go down the rabbit hole of business too far, but I do stay aware of what is happening with investors and business leadership, so that I can speak to it all. Really, my expertise centers around APIs, specifications, and the politics that exist at this intersection, which I am looking to translate into the world of AI.

Correctness

One area where I find myself very frustrated is correctness. When I encounter a bad response I want to be able to correct it. I really, really, really hate the black box nature of machine learning models. I have isolated, trained, and maintained a single suite of TensorFlow models to power my Algorotoscope work, and will likely go this far with some text and audio models, just so that I understand things at a deeper level. I will begin by establishing a framework for improving on ChatGPT models, but there is only so far I am going to go when it comes to enriching their models over developing my own.

  • Questions - Documenting and fine tuning how you ask questions is essential to what you receive as an output from ChatGPT.
  • Responses - Scrutinizing responses is the default, and you can never just take a response at face value, assuming what it put out was right.
  • Fine-Tuning - Next on my list is to evaluate OpenAI’s fine tuning API and how it works with the questions you ask.

Building with ChatGPT requires you to have a scientific approach. I may even set up an API for tracking my questions and responses, and come up with a strategy for fine-tuning. However, I am guessing that fine-tuning and correctness will be among the first things to go for those who don't have the time and resources, which will create problems. I feel like we suck at talking about the correctness of real world human intelligence, and that is something that will only get worse when it comes to artificial intelligence. I am looking to be as thorough as I can in this area, but the jury is still out on what I am actually capable of with the resources I have.
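
The simplest version of tracking my questions and responses is just an append-only log with a flag for whether I judged the answer correct. The file name and fields below are my own convention, not anything OpenAI requires, but it gives me raw material to review later when thinking about fine-tuning.

```python
# Append each prompt/response exchange to a JSONL log, with my own correctness flag.
# File name and field names are my convention, not an OpenAI requirement.
import json
from datetime import datetime, timezone

def log_exchange(prompt: str, response: str, correct: bool,
                 path: str = "chatgpt-log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "correct": correct,
    }
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")

log_exchange("Create an OpenAPI 3.0 for a simple events API",
             "openapi: 3.0.3 ...", correct=True)
```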


Concerns

I have a lot of concerns about AI. Like many others, I just don't have much faith that those leading the way will be doing this with much concern for humanity. However, I am also trying not to be too emotional about this, and instead strengthen the awareness I have across all key areas and establish a logical bounded context for the concerns I have. Honestly, my anxiety levels are very high right now from technology in general, and I am concerned that AI will elevate them even more, so I am not sure how much appetite I will have for working in this area, but it is my job to make sense of this stuff, so I am going to dive in and track these concerns.

  • Privacy - The potential for privacy invasion is massive here, and I see almost no limitations preventing machine learning models from being trained on our personally identifiable information that exists publicly or privately on the web, and once a model is trained it is unlikely that much can be corrected.
  • Security - It is a Wild West when it comes to training machine learning models and I see no boundaries between how models are applying public or proprietary information. There is very little provenance available behind any of the models I am using, and keeping our information and livelihoods secure is a concern.
  • Provenance - The existence of provenance or not is a hallmark of good and bad machine learning in my opinion, and I see a lot of people just consuming as much knowledge as possible without actually standardizing or being transparent about provenance, which seems to be something only those in academia are doing.
  • Transparency - AI and ML are black boxes, and there are often similarly low levels of transparency surrounding the technology and business of AI, which allows for the creative storytelling we are hearing when it comes to what is possible with AI and ML, and continues to be how it is used to exploit and extract value from people and businesses.
  • Obfuscation - Like APIs, I suspect there will be a lot of continued obfuscation of what is really happening using artificial intelligence. It is easy to obfuscate shady business practices with AI usage, or simply through telling the story of what your AI is capable of, basically hiding stupidity behind artificial intelligence.
  • Exploitation - I am less worried about AI being sentient or truly intelligent than I am about the random, run of the mill exploitation that will occur using AI, reducing human beings to robots by forcing them to work with AI, feed and train the models, and reduce human intelligence to levels where it makes artificial intelligence look smart.
  • Disinformation - We have seen the negative impact of disinformation with regular old web technology, which is something that is only going to get worse when it comes to AI writing stories and generating images and videos, leaving most of us questioning everything we see online, and others believing everything they see.
  • Control - I am very concerned with the lack of control most of us will have over AI. Similar to APIs, a lot of this technology is out of our reach, yet it will have an outsized role in how it controls our daily lives, governing everything from our calendars to our insurance, and for some, whether or not they end up incarcerated.
  • Gatekeepers - History shows that knowledge will always have gatekeepers, which is something I’ve seen with databases and APIs, and it will continue to be a very contentious aspect of the knowledge used to train machine learning models, and help shape the frontline of how companies are wielding their models.
  • Quality - Garbage in and garbage out will be one of the governing principles of artificial intelligence, and the quality of data, content, metadata, and the other aspects of how knowledge is managed, will determine the success of different models, but ultimately the story of a model could transcend actual quality.
  • Bias - If your data and content have bias, your machine learning models will have bias. This is something companies will continue to struggle with, and if models aren't observable and transparent, that bias will go unseen and unaddressed.

It is easy to just be concerned and freaked out about AI. I am working to be level headed about this and track my concerns in a logical way. I am going to gather references and citations for each of my concerns, finding real world examples to back them. I am going to develop a structured way to file my concerns and tell stories about them, while investing in self-care along the way. My anxiety levels are already really high due to the fuckery that goes on in the world of startups and APIs, and I don't need my blood pressure going any higher. I need to be methodical about this and leave emotion to the side.

Future of Work

So what does AI mean for the future of work? Like most of technology, I don't hold out hope that AI will be balanced across both employers and employees. Employers are definitely going to have an outsized amount of control over the machine learning models in use across all industries. For me, the future of work depends less on AI and more on capitalism, but AI provides a nice place to obfuscate, exploit, and control, touching every one of my AI concerns. I don't believe the extremes of AI will allow us to establish more leisure time, or that AI will leave us all without jobs. I am convinced it will be a very dystopian version somewhere in the middle of these extremes.

  • Sensible - There is a lot of room for exploitation if we all get emotional about this, so let's try to stay sensible and level headed about how AI is applied.
  • Labor - Now is a good time for us to get back to strengthening our unions and evolving next generation approaches to building consensus and working together.
  • Revenue - There will be entirely new ways to generate revenue for businesses by using and developing AI, which will create new ways of keeping us employed.
  • Cost - Only those who can afford the compute and the skills needed to develop ML models will be able to play, leaving the cost of doing business a huge factor in all of this.
  • Value - AI will be all about finding the value, and I am guessing there will be way more AI cycles spent on things that do not create value than on things that do.
  • Bullshit Jobs - AI is going to perpetually force us to ask whether or not this job or task should be done in the first place, but sadly we won’t always be listening.  

I do not think that AI will put us all out of work. I do think it will radically transform what work is. I also think we are in such a dysfunctional state of late stage capitalism that the future of work is going to be very dystopian with or without AI. I am just guessing AI is going to inject a maddening level of velocity and confusion, and only the best and brightest will be able to make sense of what the hell is going on. I honestly don't think the technology of AI will be the thing that shakes everything up; I think it will be the business and politics of AI that shape things.

The Performance

For me, this is all about the performance. I mean Hollywood and Broadway musical type performance, not speed, velocity, or value. This is why I am developing my awareness, so I can compete. I am not looking to compete with ML models. I am looking to compete with the stories that are being told. I am looking to capture just as much attention as, or more than, what ChatGPT and others have commanded. This is all theater, and whoever produces the best stories will win. It will rarely be about the actual power of your AI, leaving whoever spins the greatest, most believable, and most spectacular tales as the winner.

  • P.T. Barnumification - AI will be a big tent like we've never seen, with infinite rings at the center, and endless sideshows and hustles at the edge.
  • Spectacle - To get the normals' attention you will have to continually outdo previous waves with the greatest spectacle possible.
  • Entertainment - The number one job of AI will be to entertain, keeping the masses distracted while they are being made to work or spend money.
  • Purpose - For AI to stick in any meaningful way it will have to have a purpose, but this doesn't mean many hours won't be spent without purpose.
  • Confusion - There will be a lot more confusion on the ground floor of companies who are putting AI to work, leaving workers unsure of their part in the play.
  • Madness - For some this will just be madness. I think we've severely underestimated folks' grip on reality, and there will be many who just lose it.
  • Opportunity - There is a massive opportunity in the performance of AI, so make sure you know your part, and understand if it is below or above the line.

I love the storytelling opportunity present here. However, I am determined to ground my awareness and performance in reality so I don't get lost in the confusion and madness that is unfolding. I am serious about the P.T. Barnumification of all of this, and I don't just mean the circus. P.T. Barnum wasn't just a performer; he was a racist, a perpetrator of hoaxes, a businessman, and a politician. He was a hustler. When you look out across the artificial intelligence landscape right now, consider one thing: how quiet the Web3 hustlers are. This is because they've joined in on the P.T. Barnumification of AI.


Closing Thoughts

I feel like I achieved what I set out to do writing this piece. I got my bearings when it comes to the role APIs will play in all of this. I also have laid the foundation for a storytelling framework that should keep me sane and grounded. I will need to do a lot more research and carefully read the stories that are emerging when it comes to AI. There is so much more to learn. I have a huge amount of experimentation ahead of me. I have a number of projects in motion at work, and I feel like I’ve spent enough time at the 500K level that I can be moderately successful at the 1000K level. I am pleased to see the role that OpenAPI is playing at the center of all of this, and it reassures me that my expertise with APIs will continue to grow in this new API fueled madness. However, like the API chaos I operate in each day, I am confident that the AI universe will be anything but straightforward and logical—it will be a virtual circus.  

To play in this space I need a handful of magic tricks that set me apart from other acts in the circus. I also need to develop a couple of main stage acts as part of my Postman performance. Postman enjoys a leadership role in the world of APIs, something that will only grow exponentially in the world of AI. I am determined to be a producer and show runner on numerous main stage acts in this circus. I am also looking to sharpen my storytelling and hustling abilities so that I can hold my own as part of any side show act in any side alley. I am also really curious how the circus does as it makes its way through Main Street America, Europe, Asia, and out of the technology bubble. Some of the circus acts that perform well in Silicon Valley may capture the attention of the muggles, but won't always keep it. The artificial intelligence circus isn't going anywhere, and I might as well polish my awareness of what acts others are performing, as well as what the different audiences are looking for when it comes to being entertained, keeping my finger on the pulse of the spectacle everyone seems enamored by in this moment.