Chain-of-Thought Prompting
Last updated
Introduced in Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding.

Image Source: Wei et al. (2022)

Prompt:

Output:

Wow! We can see a perfect result when we provide the reasoning step. In fact, we can solve this task with even fewer examples: a single example seems to be enough.

Prompt:

Output:

Keep in mind that the authors claim that this is an emergent ability that arises with sufficiently large language models.

Zero-Shot CoT Prompting

Image Source: Kojima et al. (2022)

A more recent idea is zero-shot CoT (Kojima et al. 2022), which essentially involves adding "Let's think step by step" to the original prompt. Let's try a simple problem and see how the model performs:

Prompt:

Output:

The answer is incorrect! Now let's try with the special prompt.

Prompt:

Output:

It's impressive that this simple prompt is effective at this task. This is particularly useful when you don't have many examples to use in the prompt.
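Both variants come down to how the prompt string is assembled before it is sent to the model. Below is a minimal, illustrative sketch: the helper names are our own (not from any library), and the worked example is the tennis-ball problem from Wei et al. (2022).

```python
# Illustrative helpers for building CoT prompts. These are sketches, not a
# library API; pass the returned string to whichever LLM you are using.

# One worked example from Wei et al. (2022): the answer spells out the
# intermediate reasoning steps instead of only the final number.
FEW_SHOT_COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Few-shot CoT: prepend worked example(s) whose answers show reasoning."""
    return f"{FEW_SHOT_COT_EXAMPLE}\nQ: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT (Kojima et al. 2022): append the trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."
```

Note that zero-shot CoT needs no demonstrations at all, which is exactly why it helps when you have few or no examples to put in the prompt.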