A side-by-side comparison of specifications, benchmark scores, and pricing for Claude Haiku 4.5 and Gemini 3.1 Pro.
| Spec | Claude Haiku 4.5 (Anthropic) | Gemini 3.1 Pro (Google) |
|---|---|---|
| Tier | mid | frontier |
| Release Date | 2025-10-01 | 2026-02-19 |
| Context Window | 200K tokens | ~1M tokens |
| Max Output | 64K | 66K |
| Input Price ($/1M tokens) | $1.00 | $2.00 |
| Output Price ($/1M tokens) | $5.00 | $12.00 |
| Arena Elo | 1,404 | 1,500 |
| MMLU | 82% | 92.6% |
| GPQA | 55% | 94.3% |
| MATH | — | 96.8% |
| HumanEval | 90% | 94.6% |
| SWE-bench | — | 80.6% |
| AIME | 28% | 91.2% |
| SimpleQA | 19% | 79.6% |
| Capabilities | vision, tool-use, code, extended-thinking | vision, tool-use, code, reasoning, agentic, audio |
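To make the per-million-token pricing concrete, here is a minimal sketch of how a request cost works out from the table above. The function name and the example token counts are illustrative, not part of any vendor API; prices are the table's input/output rates.

```python
# Per-1M-token prices from the table: (input $/1M, output $/1M).
PRICES = {
    "Claude Haiku 4.5": (1.00, 5.00),
    "Gemini 3.1 Pro": (2.00, 12.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES[model]
    # Cost scales linearly: tokens / 1,000,000 * price-per-million.
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example: a request with 10K input tokens and 1K output tokens.
print(f"{estimate_cost('Claude Haiku 4.5', 10_000, 1_000):.4f}")  # 0.0150
print(f"{estimate_cost('Gemini 3.1 Pro', 10_000, 1_000):.4f}")    # 0.0320
```

Note that output tokens dominate cost for both models (5x and 6x the input rate, respectively), so long completions are the main cost driver.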