While chatbots are super popular these days, there are a lot of other ways to use large language models like GPT. Most applications can leverage LLMs in a variety of ways, including:
- Structured Extraction
- Content Generation
When using GPT for these tasks, you often want to use GPT like a structured API rather than as a text completion model. By structured API, I mean an API with defined inputs and outputs.
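To make that concrete, here's a minimal sketch in Python of what a structured extraction call looks like. The helper functions, field names, and the example completion are all hypothetical, just to illustrate the defined-inputs/defined-outputs idea:

```python
import json

def build_extraction_prompt(text, fields):
    """Build a prompt asking the model to extract the given fields as JSON."""
    field_list = ", ".join(fields)
    return (
        f"Extract the following fields from the text as a JSON object: {field_list}.\n"
        f"Text: {text}\n"
        "JSON:"
    )

def parse_extraction(raw_completion):
    """Parse the model's text completion into a Python dict."""
    return json.loads(raw_completion)

prompt = build_extraction_prompt(
    "Acme T-Shirt, size L, $19.99", ["product", "size", "price"]
)
# A model might complete with something like:
result = parse_extraction('{"product": "Acme T-Shirt", "size": "L", "price": 19.99}')
```

The defined input is the text plus the field list; the defined output is the parsed dict.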
This is why we created Prompt Wrangler. The OpenAI API accepts a text param and then outputs text. Usually you want to send in JSON and get JSON back. Prompt Wrangler makes this easy.
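The idea can be sketched as a thin wrapper that turns a text-in/text-out model into a JSON-in/JSON-out call. This is an illustrative sketch, not Prompt Wrangler's actual interface; `fake_complete` stands in for a real LLM API call:

```python
import json

def structured_call(payload, complete, template):
    """Render a JSON payload into a text prompt, call the text-completion
    model, and parse the text response back into structured data."""
    prompt = template.format(**payload)  # JSON in -> text prompt
    raw = complete(prompt)               # text in, text out
    return json.loads(raw)               # text response -> JSON out

# Stub standing in for a real LLM API call.
def fake_complete(prompt):
    return '{"sentiment": "positive"}'

result = structured_call(
    {"review": "Great product, would buy again!"},
    fake_complete,
    "Classify the sentiment of this review as JSON: {review}",
)
```

The caller only ever sees structured data on both sides; the text prompt and raw completion are an internal detail.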
By treating each prompt as a structured API, you get a lot of extra benefits, including:
- Prompt Versioning - Prompt Wrangler automatically versions your prompt using semantic versioning so that you can roll out changes just like API changes. Over time we will add automated testing to the new versions.
- Logging - Each time a prompt is called, we log the request and response. This allows you to easily debug your prompts.
- Analytics - Understand the average response time of your prompts and the costs associated with them.
- Collaborate with Teammates - Invite teammates to your workspace and collaborate on prompts together.
- Cost Analysis - Understand the cost of your prompts in real time and how that's changing over time.
- Zapier Integration - Easily use a prompt as a Zapier action. Simply connect to Prompt Wrangler via OAuth and you're ready to get started.