Added visible chain-of-thought reasoning that users can inspect, bridging the gap between fast responses and deep analysis.
Product announcement: introduced a toggleable extended thinking mode in which Claude can reason more deeply about complex problems. Developers can set a "thinking budget" to control reasoning depth. This was the first hybrid reasoning model from Anthropic.
A mode where Claude produces an explicit reasoning trace before answering, solving problems through visible intermediate steps. Unlike hidden internal reasoning, extended thinking exposes the deliberation process to the user, improving transparency, debuggability, and trust while providing more surface area for self-correction.
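The visible reasoning trace described above arrives as distinct content blocks alongside the final answer. A minimal sketch of separating the two, assuming content blocks shaped like Anthropic's documented `thinking` and `text` block types (the sample response here is mocked, not a real API result):

```python
# Sketch: splitting a response's visible reasoning trace from its final
# answer. Block shapes follow Anthropic's Messages API conventions
# ("thinking" and "text" types); the response below is a mock.

def split_response(content_blocks):
    """Return (reasoning_trace, final_answer) from a list of content blocks."""
    thinking = [b["thinking"] for b in content_blocks if b["type"] == "thinking"]
    text = [b["text"] for b in content_blocks if b["type"] == "text"]
    return "\n".join(thinking), "\n".join(text)

# Mock response illustrating the structure
response_content = [
    {"type": "thinking", "thinking": "The user wants a proof. First check the base case..."},
    {"type": "text", "text": "The answer is 42."},
]

trace, answer = split_response(response_content)
print(answer)  # -> The answer is 42.
```

Keeping the trace separate lets an application log or display the deliberation without mixing it into the user-facing answer.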
Developer-controlled parameters that determine how many tokens Claude can allocate to internal reasoning before producing a final answer. This allows fine-grained control over the cost-quality tradeoff: set a high budget for hard math problems and a low budget for simple tasks. No competitor offered this granularity at launch.
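The cost-quality tradeoff above can be sketched as a request builder that picks a budget per task. The `thinking={"type": "enabled", "budget_tokens": N}` shape follows Anthropic's documented Messages API; the difficulty tiers and model name are illustrative assumptions, not official guidance:

```python
# Sketch: choosing a thinking budget by task difficulty. The tiers are
# illustrative assumptions; the "thinking" parameter shape follows the
# documented Anthropic Messages API.

BUDGETS = {"simple": 1024, "moderate": 8000, "hard": 32000}  # assumed tiers

def build_request(prompt: str, difficulty: str) -> dict:
    """Build a request payload with a thinking budget matched to difficulty."""
    budget = BUDGETS[difficulty]
    return {
        "model": "claude-3-7-sonnet-latest",  # assumed model identifier
        "max_tokens": budget + 4000,  # max_tokens must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": budget},
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Prove that sqrt(2) is irrational.", "hard")
print(req["thinking"]["budget_tokens"])  # -> 32000
```

Routing requests through a tier table like this keeps spend predictable: cheap tasks never pay for deep deliberation, and hard tasks never get starved of it.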
The architectural combination of explicit reasoning (extended thinking) with action execution (tool use, code execution). Rather than reasoning and then acting in separate phases, Claude can interleave the two: planning actions, testing them, and adapting based on results in a single coherent loop.
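The interleaved loop above can be sketched as think, act, observe, repeat. The model call and tool are mocked here so the loop runs standalone; a real implementation would call the Messages API with tools enabled instead of `mock_model`:

```python
# Minimal sketch of an interleaved reason-act loop. Both the model and the
# tool are hypothetical stand-ins, not real API calls.

def mock_model(observations):
    """Stand-in for the model: plan the next step from what has been observed."""
    if not observations:
        return {"thought": "Need the file size first.", "action": ("stat", "data.csv")}
    return {"thought": "Size known; done.", "action": None}

def mock_tool(name, arg):
    """Stand-in for tool execution (e.g. a filesystem call)."""
    return f"{name}({arg}) -> 2048 bytes"

observations = []
while True:
    step = mock_model(observations)
    print("thinking:", step["thought"])  # reasoning stays visible at each step
    if step["action"] is None:
        break
    observations.append(mock_tool(*step["action"]))
```

Because each iteration exposes both a thought and the observation it produced, a failed run can be replayed step by step to see exactly where the plan diverged from reality.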
Making the model's reasoning process visible to users and developers rather than hidden in internal states. This enables better debugging (seeing where the model went wrong), better trust (understanding the model's reasoning), and better iteration (refining reasoning prompts and budgets based on what actually happened).