I am using Postman public workspaces to manage all of my projects right now, and as part of my Postman collection workspace I have a variety of collections where I am bending the concept of how Postman was intended to be used. I needed many little APIs for my API specification projects to organize a mix of data I will be using to automate and orchestrate information gathering, publishing, and any other task I can automate in my world. After considering AWS API Gateway + DynamoDB for this, I decided it would be easier, and more cost effective, to just use Postman. The platform doesn’t have an API deployment capability, and while we do have integrations like the one with AWS API Gateway, these simple little APIs felt too small to justify the increase in my AWS bill, so I was determined to find a way to do it on Postman.
I knew I could accomplish what I wanted on Postman, I just needed to think out of the box a little. After some brainstorming I decided to define each individual data store as a single collection, use the examples as the actual data store, then publish a mock server, treating the result as more of a “static” API than a mock for testing or other purposes. I ended up with six separate static APIs that are hosted 100% on Postman.
- Articles (Docs) (Data) - Interesting articles that have been written about Postman Collections.
- Competition (Docs) (Data) - A list of examples of how Postman competitors are using Postman Collections.
- Government (Docs) (Data) - Showing the different government agencies who are using Postman Collections.
- Pages (Docs) (Data) - The interesting pages that API Providers have published showcasing their collections.
- Partners (Docs) (Data) - Demonstrating how Postman partners are importing and exporting Postman Collections.
- Tooling (Docs) (Data) - All of the tooling I track on that is built around the Postman API schema specification.
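Consuming one of these static APIs is just an HTTP GET against the collection's mock server URL, which returns whatever example is saved on the matching request. Here is a minimal sketch of the read side; the mock URL, path, and response shape below are hypothetical assumptions, since every mock server gets its own generated subdomain and the payload is whatever the saved example contains.

```python
import json
from urllib.request import urlopen

# Hypothetical mock server URL -- each Postman mock server gets its own
# generated subdomain, so substitute the one for your own fork.
MOCK_URL = "https://example-mock-id.mock.pstmn.io/tooling"

def parse_tooling(payload: str) -> list:
    """Extract the list of tooling entries from a static API response.

    Assumes the saved example returns a JSON object with a top-level
    "tooling" array -- adjust to match the actual example data.
    """
    return json.loads(payload).get("tooling", [])

def fetch_tooling() -> list:
    # A plain GET against the mock returns the example saved in the
    # collection, just like a real read-only API endpoint.
    with urlopen(MOCK_URL) as response:
        return parse_tooling(response.read().decode("utf-8"))

# Local demonstration of the assumed response shape, without hitting
# the network:
sample = '{"tooling": [{"name": "newman", "url": "https://github.com/postmanlabs/newman"}]}'
print(parse_tooling(sample)[0]["name"])  # newman
```

Because the mock simply replays saved examples, any HTTP client works here; there is nothing Postman-specific about the read path.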
Each of these data stores has its own URL and can be iterated upon independently. They are all published in a public workspace so anyone can discover and use the static API collections, fork a collection, and mock it using their own account, a sort of Bring Your Own Mock (BYOM) approach, so that the rate limits on the API calls do not affect my account. I am using these simple APIs to manage data for each of these resources so that I can use them as part of different types of orchestrations, from tweeting out something about a tool or article, to parsing the domain behind each of the pages and pulling a screenshot. I have a whole mix of different API storytelling outcomes I am looking to set in motion, and these data stores will play a central role in helping me automate things.
One thing I really like about this approach is that I get documentation for each data store automatically as part of each collection, as long as I provide good descriptions. You may have noticed I included a link to the docs and the data for each one. If I am referencing access to the data I’ll send over the docs, and if I am speaking specifically to the data I will send over a link directly to the example, or examples, since I may have the data sharded in different ways, as I have begun doing with the tooling data store. I already use Postman environments as a run-time data store, but now I am using collections in a similar way, as something that is executable, mockable, publishable, and shareable by default. Honestly, when it comes to managing the hundreds of datasets I depend on for my world this works better than a proper database, and it is a hell of a lot cheaper for me to host all of these datasets within public and private Postman workspaces.
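Writing to one of these data stores means updating the examples saved in the collection, which can be done programmatically through the Postman API (GET and PUT on `/collections/{uid}` with an `X-Api-Key` header are documented endpoints). The sketch below shows the idea against a stripped-down collection in the Collection Format v2.1 layout; `add_example` is a hypothetical helper of my own, not part of any SDK, and the collection structure is simplified.

```python
import json

def add_example(collection: dict, request_name: str, example: dict) -> dict:
    """Append a saved example (a "response") to a named request in a
    Postman collection export. Follows the Collection Format v2.1
    item/response layout; this is a simplified sketch.
    """
    for item in collection["item"]:
        if item["name"] == request_name:
            item.setdefault("response", []).append(example)
            return collection
    raise KeyError("no request named " + repr(request_name))

# A stripped-down collection with one request and no examples yet.
collection = {
    "info": {
        "name": "Tooling",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    "item": [{"name": "Get Tooling", "request": {"method": "GET"}, "response": []}],
}

new_example = {
    "name": "Tooling Data",
    "code": 200,
    "body": json.dumps({"tooling": [{"name": "newman"}]}),
}

add_example(collection, "Get Tooling", new_example)
print(len(collection["item"][0]["response"]))  # 1

# To persist the change, PUT the updated collection back to the
# Postman API, wrapped as {"collection": ...}:
#   PUT https://api.getpostman.com/collections/{uid}
#   headers: X-Api-Key: <your key>
```

Once the PUT lands, the mock server starts serving the new example, so a deploy is just a collection update.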