
Claude

Anthropic's LLM family · three-tier Opus/Sonnet/Haiku naming with a focus on safety, coding, and long context.


Level 1

Claude is Anthropic's family of large language models, named after Claude Shannon. Tier structure: Opus (largest, highest capability), Sonnet (balanced · the workhorse), Haiku (fastest, cheapest). Versioned numerically: Claude 3 (2024), Claude 3.5, Claude 4, Claude 4.5, Claude Mythos Preview (experimental 2026). Claude leads on coding benchmarks and excels at long-context comprehension and agentic tasks.

Level 2

Claude models are decoder-only transformers trained with Constitutional AI · an alignment technique developed by Anthropic that uses AI-assisted self-critique during RLHF. Claude Opus 4.5 and Claude Sonnet 4 support Extended Thinking mode · hidden CoT tokens analogous to OpenAI's o-series reasoning models. Claude models emphasize long context (200K tokens by default, 1M on Enterprise), tool use, agentic workflows, and coding performance (leading SWE-bench Verified scores). Anthropic serves Claude via the direct API, AWS Bedrock, and Google Vertex AI · a notably broad multi-cloud distribution.
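
The direct-API path above can be sketched as a plain request body for the Messages endpoint (REST: POST /v1/messages). The model ID and token limit here are placeholders, not recommended values; check Anthropic's documentation for current model IDs.

```python
# Sketch of a Messages API request body. The model ID below is a
# placeholder for illustration, not a real production identifier.

def build_messages_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build the JSON body for Anthropic's Messages API."""
    return {
        "model": model,
        "max_tokens": max_tokens,  # required: cap on output tokens
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

body = build_messages_request("claude-sonnet-example", "Summarize this diff.")
print(body["model"])
```

The same body shape works across the direct API, Bedrock, and Vertex AI; only the endpoint, authentication, and model identifier change per provider.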

Level 3

Model sizes: undisclosed by Anthropic. Claude's post-training uses Constitutional AI (2022) for self-critique alongside RLHF (later RLAIF). Claude 3.5 Sonnet set state-of-the-art SWE-bench Verified scores with the right agent scaffold. Extended Thinking is implemented as a budget parameter · models allocate up to 100K internal reasoning tokens per response, exposed transparently unlike OpenAI's hidden CoT. Context handling relies on high-quality long-context training (not just context extension) · effective context preservation tops 90% at 200K, better than most peers.
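
The budget parameter described above can be sketched as an extension of a Messages API request body. The `thinking` field with `budget_tokens` mirrors Anthropic's documented extended-thinking option; the model ID and numbers here are illustrative assumptions.

```python
# Sketch: enabling an extended-thinking budget on a Messages API body.
# Field names follow Anthropic's documented "thinking" option; the model
# ID and budget values are illustrative, not recommendations.

def with_thinking(body: dict, budget_tokens: int) -> dict:
    """Return a copy of a Messages API body with extended thinking enabled."""
    if budget_tokens >= body["max_tokens"]:
        # The reasoning budget must fit inside the overall output cap.
        raise ValueError("budget_tokens must be less than max_tokens")
    out = dict(body)
    out["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    return out

base = {
    "model": "claude-opus-example",
    "max_tokens": 16000,
    "messages": [{"role": "user", "content": "Prove the lemma step by step."}],
}
req = with_thinking(base, budget_tokens=8000)
print(req["thinking"])
```

Raising the budget trades latency and cost for deeper reasoning, so it is worth tuning per task rather than fixing globally.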

Why this matters now

Claude Sonnet 4 and Claude Mythos Preview dominate SWE-bench Verified and coding-agent deployments. Anthropic's enterprise ARR surpassed $5B in 2026.

The takeaway for you
If you are a
Researcher
  • Constitutional AI + RLHF/RLAIF for alignment
  • Extended Thinking exposes reasoning tokens · unlike OpenAI's hidden approach
  • Three-tier Opus/Sonnet/Haiku naming consistent across generations
If you are a
Builder
  • Claude Sonnet for most coding + agent workloads · best SWE-bench per dollar
  • Claude Opus for the hardest reasoning / highest quality
  • Haiku for cheap, fast, high-volume chat
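
The tier guidance above can be sketched as a simple router. The workload labels and the mapping are this sketch's assumptions, not an official policy.

```python
# Illustrative workload-to-tier router following the rule of thumb above.
# Workload categories are invented for this sketch.

def pick_tier(workload: str) -> str:
    """Map a workload type to a Claude tier."""
    routing = {
        "coding": "sonnet",        # most coding + agent workloads
        "agent": "sonnet",
        "hard_reasoning": "opus",  # hardest reasoning / highest quality
        "high_volume_chat": "haiku",
    }
    return routing.get(workload, "sonnet")  # default to the workhorse

print(pick_tier("high_volume_chat"))
```

In practice, routing like this is usually combined with fallbacks (escalate to Opus when Sonnet's answer fails validation) rather than a static table.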
If you are a
Investor
  • Anthropic's coding-agent leadership drives enterprise deals
  • Multi-cloud (direct API + AWS + Vertex) is a strategic hedge against channel risk
  • ~$60B valuation as of 2026 · growing faster than OpenAI on enterprise metrics
If you are a
Curious · Normie
  • Anthropic's answer to ChatGPT
  • Three models · small, medium, large
  • Known as the best AI for coding and long documents
Gecko's take

Claude is the enterprise/coding champion of the 2026 frontier. If your workload is code or agents, start with Sonnet.

Named after Claude Shannon, the father of information theory. Not an acronym.