Random Prompt Injection

May 25, 2023

Most large language models have a built-in temperature setting that adjusts how “random” the output is. If you set it to 0, the LLM will be effectively deterministic. As you raise the temperature closer to 1, the output gets more and more random and less predictable. This blog post does a good job of explaining how temperature impacts the generated content.
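
As a quick illustration, here’s how you’d set the temperature on a chat completion call. This is a sketch assuming the OpenAI Python SDK; the model name and prompt are placeholders:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Write a tweet about coffee"}],
    temperature=0,  # 0 = effectively deterministic; closer to 1 = more random
)
print(response.choices[0].message.content)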

Now let’s say you want to generate content that is predictably random in some way. As a simple example, what if you want to generate a tweet that has anywhere between 5 and 10 words? Your plan is to run this prompt a bunch of times, with the goal of getting a list of tweets evenly distributed between 5 and 10 words.

Your first thought might be to set the temperature to 1.0 and use a prompt like:

Write a tweet that has between 5 and 10 words

It turns out that this approach doesn’t work as well as you might expect. Even with the temperature set to 1.0, you will not get anywhere close to a uniform distribution. LLMs at their core are probabilistic models, and just telling one to be random is not enough. If you run this prompt 50 times, you get a distribution that looks like the image below.

As expected, with the temperature set to 0 the output is completely deterministic (always 10 words), but even with the temperature set to 1.0 you still see very few tweets with 5 or 6 words and a clump around 8, 9, and 10.
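
If you want to reproduce this experiment, here’s a rough sketch (again assuming the OpenAI Python SDK, and counting whitespace-separated tokens as words):

from collections import Counter
from openai import OpenAI

client = OpenAI()
counts = Counter()

for _ in range(50):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Write a tweet that has between 5 and 10 words"}],
        temperature=1.0,
    )
    tweet = response.choices[0].message.content
    counts[len(tweet.split())] += 1  # tally tweets by word count

for words, n in sorted(counts.items()):
    print(f"{words} words: {n} tweets")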

Random Prompt Injection (RPI)

To solve this problem, you can turn to random prompt injection. RPI is a technique where you randomize part of the prompt to force more variety in the output. Instead of telling the LLM to be random, you randomize a variable and then inject it into the prompt. Let’s update our prompt to look like this:

Write a tweet that has {{ RANDOM_NUMBER_OF_WORDS }} words
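
Here’s a minimal sketch of that injection in ordinary Python (no special tooling required):

import random

# Compute the random value outside the model, then inject it into the prompt.
random_number_of_words = random.randint(5, 10)  # inclusive on both ends
prompt = f"Write a tweet that has {random_number_of_words} words"
# ...send `prompt` to the LLM exactly as before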

In this case, you compute the random number before running the prompt and inject it into the prompt, as shown in the sketch above. This works much better regardless of the temperature setting. Here is the distribution across 50 samples at the same temperatures.

You can see the distribution is much more even. LLMs are overall pretty bad at hitting exact word counts, so you still see some anomalies with the temperature set to 0, but the results are far more evenly distributed. And the nice thing is that this isn’t limited to numbers: you can inject a random sentiment, a random feeling, or anything else you want to vary across runs.
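
For instance, injecting a random sentiment is the same pattern (the sentiment list here is just illustrative):

import random

sentiment = random.choice(["upbeat", "sarcastic", "nostalgic", "matter-of-fact"])
prompt = f"Write a {sentiment} tweet about your morning commute"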

RPI in Prompt Wrangler

This is a common pattern to reach for whenever you want “random” responses to the “same” prompt. In Prompt Wrangler we’ve made it trivial to implement, with helpers designed specifically for this use case.

The random helper picks a random integer in a given range. For example, to pick a random number between 1 and 10, you could do the following:

{{{ random 1 10 }}}
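
Plugged into the running example, the tweet prompt becomes:

Write a tweet that has {{{ random 5 10 }}} words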

The pickRandom helper picks a random item from a list of items. For example, to pick a random item from a list of strings, you could do the following:

// Static list
{{{ pickRandom "short" "medium" "long" "very long" }}}

// Array passed as an argument
{{{ pickRandom args.string_array }}}

You can learn more about these helpers, and the other helpers that are available, here.

Prompt Wrangler makes it easy to turn GPT prompts into structured APIs. This makes it easier to iterate on prompts without having to make code changes. Best of all, it's 100% free!
Try Out Prompt Wrangler