Technical Fundamentals

Chain of Thought

A prompting technique that improves LLM reasoning by instructing the model to show its intermediate reasoning steps before arriving at an answer.

Chain of Thought (CoT) prompting substantially improves the reasoning capabilities of large language models by having them generate explicit intermediate reasoning steps before committing to a final answer.

First demonstrated in 2022 research from Google Brain (Wei et al.), CoT prompting showed that LLMs make substantially fewer errors on complex reasoning tasks (math, logic, multi-step problems) when they "think out loud" rather than jumping directly to an answer.

In agentic systems, chain-of-thought reasoning is the foundation of the planning step in the agentic loop. Before taking any action, the agent reasons through:
- What is the user's underlying goal?
- What information do I currently have?
- What information is missing?
- What is the optimal next action?
- What are the potential failure modes?
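The planning checklist above can be made concrete as a system prompt that forces a reasoning pass before any action. The template and helper below are an illustrative sketch, not the API of any particular agent framework:

```python
# Illustrative sketch: a planning prompt encoding the five questions above.
# The prompt wording and message format are assumptions, not a standard.

PLANNING_PROMPT = """Before taking any action, reason through the following:
1. What is the user's underlying goal?
2. What information do I currently have?
3. What information is missing?
4. What is the optimal next action?
5. What are the potential failure modes?

Write out your answers, then state the single next action to take."""


def build_planning_messages(user_request: str, context: str) -> list[dict]:
    """Assemble a chat payload that requires a planning pass before action."""
    return [
        {"role": "system", "content": PLANNING_PROMPT},
        {"role": "user", "content": f"Request: {user_request}\n\nKnown context:\n{context}"},
    ]
```

The messages list follows the common system/user chat format; the system prompt makes the reasoning explicit in the model's output, where it can be logged and inspected.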

Modern LLMs often perform CoT automatically (as in "extended thinking" modes), but explicit CoT prompting, where the system prompt instructs the model to reason step by step, remains a reliable technique for improving performance on complex agentic tasks.
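A minimal sketch of explicit CoT prompting: the system prompt asks for step-by-step reasoning and a delimited answer, and a small parser separates the reasoning from the result. The "Final answer:" delimiter is an illustrative convention, not a standard:

```python
# Illustrative sketch of explicit CoT prompting. The delimiter convention
# ("Final answer:") is an assumption chosen for easy parsing.

COT_SYSTEM = (
    "Think step by step. Show your reasoning, then give the result "
    "on a new line starting with 'Final answer:'."
)


def extract_final_answer(completion: str) -> str:
    """Pull the answer out of a CoT completion, discarding the reasoning."""
    for line in completion.splitlines():
        if line.startswith("Final answer:"):
            return line[len("Final answer:"):].strip()
    return completion.strip()  # fall back to the raw text if no delimiter
```

Keeping the reasoning in the completion but out of the parsed answer gives the accuracy benefit of CoT without leaking intermediate steps into downstream consumers.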

Advanced CoT variants used in production agentic systems include:
- Tree of Thoughts: Exploring multiple reasoning paths in parallel and selecting the best
- ReAct: Interleaving reasoning and action steps in a structured pattern
- Reflexion: Using self-critique loops to improve reasoning quality over iterations
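Of these variants, ReAct has the simplest control flow: the model alternates "Thought"/"Action" steps with tool calls, and each tool result is appended to the transcript as an "Observation". The sketch below assumes hypothetical stub interfaces for the model and tools; real frameworks differ in their parsing and tool-call formats:

```python
# Illustrative ReAct loop. `ask_model` and the tool functions are hypothetical
# stubs; the "Action:"/"Answer:" line format is an assumed convention.

from typing import Callable


def react_loop(
    ask_model: Callable[[str], str],  # returns "Action: tool(arg)" or "Answer: ..."
    tools: dict[str, Callable[[str], str]],
    question: str,
    max_steps: int = 5,
) -> str:
    """Interleave reasoning and tool use until the model emits an answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = ask_model(transcript)
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step[len("Answer:"):].strip()
        if step.startswith("Action:"):
            # Parse "Action: tool(arg)" and feed the result back as an observation.
            name, _, arg = step[len("Action:"):].strip().partition("(")
            observation = tools[name.strip()](arg.rstrip(")"))
            transcript += f"Observation: {observation}\n"
    return "No answer within step budget."
```

The loop's step budget bounds runaway reasoning, and the growing transcript is what lets each new thought condition on prior observations.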