Chain-of-Thought Prompting
Image Source: Wei et al. (2022)
Introduced in Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding.
Prompt:
Output:
Wow! We can see a perfect result when we provide the reasoning step. In fact, we can solve this task with even fewer examples; a single example seems to be enough:
Prompt:
Output:
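To make the pattern concrete, here is a minimal sketch of how few-shot CoT prompts like the ones above could be assembled and sent to a chat model with the OpenAI Python client. The arithmetic exemplars, the `build_cot_prompt` helper, and the model name are illustrative assumptions rather than the exact prompts used in this guide; passing a single exemplar reproduces the one-shot setup.

```python
# Minimal few-shot CoT sketch (exemplars, helper, and model name are
# illustrative assumptions, not the exact prompts used in this guide).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_cot_prompt(exemplars, question):
    """Join worked exemplars (question + reasoning + answer) with the new question."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

# Each exemplar shows the intermediate reasoning, not just the final answer.
exemplars = [
    (
        "The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.",
        "Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.",
    ),
    (
        "The odd numbers in this group add up to an even number: 17, 10, 19, 4, 8, 12, 24.",
        "Adding all the odd numbers (17, 19) gives 36. The answer is True.",
    ),
]

question = "The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1."

# For the one-shot variant, pass exemplars[:1] instead of the full list.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": build_cot_prompt(exemplars, question)}],
)
print(response.choices[0].message.content)
```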
Keep in mind that the authors claim that this is an emergent ability that arises with sufficiently large language models.
Image Source: Kojima et al. (2022)
A more recent idea is zero-shot CoT (Kojima et al. 2022), which essentially involves adding "Let's think step by step" to the original prompt. Let's try a simple problem and see how the model performs:
Prompt:
Output:
The answer is incorrect! Now let's try with the special prompt.
Prompt:
Output:
It's impressive that this simple prompt is effective at this task. This is particularly useful when you don't have many examples to use in the prompt.
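As a rough sketch, zero-shot CoT only requires appending the trigger phrase to the question itself; no worked exemplars are needed. The word problem and model name below are illustrative assumptions, using the same hypothetical OpenAI client setup as in the sketch above.

```python
# Zero-shot CoT sketch: append the trigger phrase, no worked exemplars needed.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "I went to the market and bought 10 apples. I gave 2 apples to the neighbor "
    "and 2 to the repairman. I then bought 5 more apples and ate 1. "
    "How many apples do I have left?"
)

# The only change from a plain prompt is the appended "Let's think step by step."
zero_shot_cot_prompt = f"{question}\n\nLet's think step by step."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": zero_shot_cot_prompt}],
)
print(response.choices[0].message.content)
```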