Chain-of-Thought (CoT)
LLM & Language Models
A prompting technique where you ask the AI to show its reasoning step by step — significantly improving accuracy on complex problems.
Chain-of-thought prompting is the simple but powerful technique of adding 'think step by step' or 'show your reasoning' to your prompt. Instead of jumping to an answer, the model works through the problem sequentially, catching errors it would otherwise make.
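In code, the technique amounts to appending a reasoning trigger to the user's question before sending it to the model. A minimal sketch (the helper name `make_cot_prompt` and the default trigger phrase are illustrative choices, not a library API):

```python
def make_cot_prompt(question: str, trigger: str = "Let's think step by step.") -> str:
    """Wrap a question in a zero-shot chain-of-thought prompt.

    The trigger phrase nudges the model to emit intermediate
    reasoning steps before stating its final answer.
    """
    return f"{question}\n\n{trigger}"


# Example: a word problem that direct prompting often gets wrong.
question = (
    "A cafeteria had 23 apples. They used 20 to make lunch "
    "and bought 6 more. How many apples do they have?"
)
print(make_cot_prompt(question))
```

The resulting string would then be passed to whatever LLM API you use; the only change from a plain prompt is the trailing trigger phrase.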
Research showed that chain-of-thought prompting dramatically improves LLM performance on math, logic, and reasoning tasks — sometimes doubling accuracy. The technique works because it forces the model to generate intermediate reasoning steps: each step conditions on the steps before it, so errors surface early instead of being buried in a single final answer.
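The original research elicited this behavior with few-shot prompts: worked examples whose answers spell out the reasoning, which the model then imitates on a new question. A sketch of that variant (the exemplar and helper name are illustrative):

```python
# One worked exemplar whose answer shows its reasoning (few-shot CoT).
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot(question: str) -> str:
    """Prepend a worked exemplar so the model imitates its
    step-by-step reasoning format on the new question."""
    return f"{EXEMPLAR}\nQ: {question}\nA:"


print(few_shot_cot("A farmer has 15 sheep and sells 8. How many remain?"))
```

Because the exemplar ends with an explicit "The answer is …" line, the model's completion tends to follow the same step-then-answer structure, which also makes the final answer easy to parse.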
Chain-of-thought is so effective that it inspired dedicated reasoning models (OpenAI's o1/o3, DeepSeek-R1) that automatically engage in extended internal reasoning. These models essentially do chain-of-thought by default, producing better results on complex tasks.
Real-World Example
Adding 'Let's think through this step by step' to a math problem prompt can dramatically improve the AI's accuracy — that's chain-of-thought prompting in action.
Related Terms
Reasoning Model · Prompt Engineering · Prompt
FAQ
What is Chain-of-Thought (CoT)?
A prompting technique where you ask the AI to show its reasoning step by step — significantly improving accuracy on complex problems.
How is Chain-of-Thought (CoT) used in practice?
Adding 'Let's think through this step by step' to a math problem prompt can dramatically improve the AI's accuracy — that's chain-of-thought prompting in action.
What concepts are related to Chain-of-Thought (CoT)?
Key related concepts include Reasoning Model, Prompt Engineering, and Prompt. Understanding these together gives a more complete picture of how Chain-of-Thought (CoT) fits into the AI landscape.