DeepSeek V3.2 vs Gemini 2.5 Pro
Side by side. Every metric. Every benchmark.
| Metric | DeepSeek V3.2 | Gemini 2.5 Pro |
|---|---|---|
| Provider | DeepSeek | Google |
| Average score | 53.0 | 56.2 |
| Input price | $0.26 | $1.25 |
| Output price | $0.38 | $10.00 |
| Context window | 164K tokens (~82 books) | 1.0M tokens (~524 books) |
| Released | 2025-12-01 | 2025-06-17 |
| License | Open source | Proprietary |
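The pricing gap compounds on output-heavy workloads. A quick sketch of per-request cost at the listed rates, assuming prices are USD per million tokens (the usual convention; the table does not state the unit):

```python
# Per-request cost estimate, assuming the listed prices are USD per 1M tokens.
PRICES = {
    "DeepSeek V3.2":  {"input": 0.26, "output": 0.38},
    "Gemini 2.5 Pro": {"input": 1.25, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
# DeepSeek V3.2: $0.0034, Gemini 2.5 Pro: $0.0325 — roughly a 10x difference.
```

The output-price gap ($0.38 vs $10.00) dominates, so the ratio grows as completions get longer.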
Benchmark scores
21 benchmarks · wins: DeepSeek V3.2 11, Gemini 2.5 Pro 9 (1 tie)
| Benchmark | Category | DeepSeek V3.2 | Gemini 2.5 Pro |
|---|---|---|---|
| Aider polyglot | coding | 74.2 | 83.1 |
| ARC-AGI | reasoning | 57.0 | 41.0 |
| ARC-AGI-2 | reasoning | 4.0 | 4.9 |
| Artificial Analysis — Agentic Index | speed | 52.9 | 32.7 |
| Artificial Analysis — Coding Index | speed | 36.7 | 31.9 |
| Artificial Analysis — Quality Index | speed | 41.7 | 34.6 |
| Chatbot Arena Elo — Coding | arena | 1326.9 | 1202.0 |
| Chatbot Arena Elo — Overall | arena | 1424.4 | 1448.2 |
| Chess Puzzles | knowledge | 14.0 | 20.0 |
| FrontierMath-2025-02-28-Private | math | 22.1 | 14.1 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 2.1 | 4.2 |
| GPQA diamond | knowledge | 77.9 | 80.4 |
| OpenCompass — AIME2025 | math | 93.0 | 88.7 |
| OpenCompass — GPQA-Diamond | knowledge | 84.6 | 84.7 |
| OpenCompass — HLE | knowledge | 23.2 | 21.1 |
| OpenCompass — IFEval | language | 89.7 | 90.0 |
| OpenCompass — LiveCodeBenchV6 | coding | 75.4 | 71.3 |
| OpenCompass — MMLU-Pro | knowledge | 85.8 | 85.8 |
| OTIS Mock AIME 2024-2025 | math | 87.8 | 84.7 |
| SimpleQA Verified | knowledge | 27.5 | 56.0 |
| Terminal Bench | coding | 39.6 | 32.6 |
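The headline win counts can be reproduced directly from the table above; a minimal tally in Python:

```python
# Tally per-benchmark wins from the comparison table.
SCORES = [
    # (benchmark, DeepSeek V3.2, Gemini 2.5 Pro)
    ("Aider polyglot", 74.2, 83.1),
    ("ARC-AGI", 57.0, 41.0),
    ("ARC-AGI-2", 4.0, 4.9),
    ("Artificial Analysis — Agentic Index", 52.9, 32.7),
    ("Artificial Analysis — Coding Index", 36.7, 31.9),
    ("Artificial Analysis — Quality Index", 41.7, 34.6),
    ("Chatbot Arena Elo — Coding", 1326.9, 1202.0),
    ("Chatbot Arena Elo — Overall", 1424.4, 1448.2),
    ("Chess Puzzles", 14.0, 20.0),
    ("FrontierMath-2025-02-28-Private", 22.1, 14.1),
    ("FrontierMath-Tier-4-2025-07-01-Private", 2.1, 4.2),
    ("GPQA diamond", 77.9, 80.4),
    ("OpenCompass — AIME2025", 93.0, 88.7),
    ("OpenCompass — GPQA-Diamond", 84.6, 84.7),
    ("OpenCompass — HLE", 23.2, 21.1),
    ("OpenCompass — IFEval", 89.7, 90.0),
    ("OpenCompass — LiveCodeBenchV6", 75.4, 71.3),
    ("OpenCompass — MMLU-Pro", 85.8, 85.8),
    ("OTIS Mock AIME 2024-2025", 87.8, 84.7),
    ("SimpleQA Verified", 27.5, 56.0),
    ("Terminal Bench", 39.6, 32.6),
]

deepseek_wins = sum(a > b for _, a, b in SCORES)
gemini_wins = sum(b > a for _, a, b in SCORES)
ties = sum(a == b for _, a, b in SCORES)
print(deepseek_wins, gemini_wins, ties)  # → 11 9 1
```

The one tie (OpenCompass — MMLU-Pro, 85.8 vs 85.8) is why the per-model counts sum to 20, not 21.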