Live · 47 models · 44 open source · avg score 29.5

Meta

🇺🇸 United States · Website
Top Model: Llama 3.3 70B Instruct · avg score 75.9
Total Models: 47 tracked on BenchGecko
Open Source: 44 (94% of models)
Cheapest Model: $0.02 per 1M input tokens
Avg Benchmark: 29.5 across 20 scored models

Model Categories

LLM: 42 · Multimodal: 5

Pricing Range ($/1M input tokens)

$0.02 · $0.03 · $0.03 · $0.05 · $0.08 · $0.10 · $0.15 · $0.18 · $0.24 · $0.40 · $0.48 · $0.51

Low: $0.02 · Median: $0.15 · High: $0.51

Open Source Ratio

94%
44 open source · 3 proprietary
All 19 models below are open-weight Meta releases from the United States. The per-benchmark score columns (aider, ARC-AGI, GPQA, MMLU, SWE-bench, and roughly 170 others) are omitted; Avg is the unweighted mean across each model's tested benchmarks.

| # | Model | Avg | $/1M in | Context | Released |
|--:|---|--:|--:|--:|---|
| 1 | Llama 3.3 70B Instruct | 75.9 | $0.10 | 131K | Dec 2024 |
| 2 | Meta Llama 3 8B | 65.9 | N/A | N/A | Apr 2024 |
| 3 | Llama 3.1 70B Instruct | 53.8 | $0.40 | 131K | Jul 2024 |
| 4 | Llama 3.1 405B | 44.1 | N/A | N/A | Jul 2024 |
| 5 | Llama 3 8B Instruct | 41.7 | $0.03 | 8K | Apr 2024 |
| 6 | Llama 2-13B | 40.7 | N/A | N/A | Jan 2024 |
| 7 | Llama 3.2 90B | 37.8 | N/A | N/A | Jan 2024 |
| 8 | Llama 3.2 3B Instruct | 35.9 | $0.05 | 80K | Sep 2024 |
| 9 | Llama 3.1 8B Instruct | 34.3 | $0.02 | 16K | Jul 2024 |
| 10 | Meta Llama 3 8B Instruct | 31.7 | N/A | N/A | Apr 2024 |
| 11 | LLaMA-13B | 30.4 | N/A | N/A | Jan 2024 |
| 12 | Llama 3 70B Instruct | 29.9 | $0.51 | 8K | Apr 2024 |
| 13 | Llama 3.3 70B Instruct (free) | 29.6 | Free | 66K | Dec 2024 |
| 14 | Llama 2 7b Chat Hf | 26.5 | N/A | N/A | Jul 2023 |
| 15 | Llama 2 7b Hf | 22.2 | N/A | N/A | Jul 2023 |
| 16 | Llama 4 Maverick | 22.0 | $0.15 | 1.0M | Apr 2025 |
| 17 | Llama 3.2 1B Instruct | 19.9 | $0.03 | 60K | Sep 2024 |
| 18 | Llama 4 Scout | 15.3 | $0.08 | 328K | Apr 2025 |
| 19 | Llama 3.2 3B Instruct (free) | 14.7 | Free | 131K | Sep 2024 |
Scores in % unless noted. Avg = unweighted mean across tested benchmarks.
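The "unweighted mean across tested benchmarks" can be sketched as follows. This is an illustrative helper, not BenchGecko's actual pipeline; benchmarks a model was never run on (shown as "-" in the table) are treated as missing and skipped, and the sample scores are hypothetical:

```python
def unweighted_avg(scores):
    """Average only the benchmarks a model was actually tested on.

    `scores` maps benchmark name -> percentage, with None for untested.
    Returns None if the model has no tested benchmarks.
    """
    tested = [v for v in scores.values() if v is not None]
    if not tested:
        return None
    return round(sum(tested) / len(tested), 1)

# Hypothetical partial results: tested on 3 of 5 tracked benchmarks.
sample = {"MMLU": 86.0, "GPQA": 48.3, "IFEval": 90.0, "HLE": None, "ARC-AGI": None}
print(unweighted_avg(sample))  # → 74.8
```

Because the mean is unweighted and missing entries are skipped, models tested on very different benchmark subsets can land on similar averages.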

Quick answers · sourced from our data

How many models does Meta have?

BenchGecko tracks 47 models from Meta, of which 44 (94%) are open source. Every entry is updated daily from live provider feeds.

What is the best model from Meta?

Llama 3.3 70B Instruct is currently the highest-scoring Meta model we track, with an average benchmark score of 75.9. Scores are computed across every public benchmark we have data for.

What is the cheapest Meta model?

The cheapest paid Meta model on BenchGecko starts at $0.02 per 1M input tokens (free-tier endpoints excluded). Pricing is pulled from OpenRouter and cross-checked against official provider rate cards.
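At a $-per-1M-input-tokens rate, per-request cost is simple arithmetic: tokens ÷ 1,000,000 × rate. A minimal sketch using the $0.02 rate quoted above (the 50,000-token prompt size is made up for illustration):

```python
def input_cost_usd(input_tokens: int, rate_per_million: float) -> float:
    """Dollar cost of a prompt billed at a $-per-1M-input-tokens rate."""
    return input_tokens / 1_000_000 * rate_per_million

# A 50,000-token prompt at $0.02 per 1M input tokens:
print(f"${input_cost_usd(50_000, 0.02):.4f}")  # → $0.0010
```

Output-token pricing is billed separately at its own rate, so the total for a request is the sum of both sides.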

How does Meta compare on benchmarks?

Meta models average 29.5 across the benchmarks we track. See the All Providers page for the full ranking by model count, open-source ratio, and average score.

Where is Meta based?

Meta is headquartered in the United States. BenchGecko groups providers by region to make it easy to compare US, EU, China, and Rest of World markets.

Is Meta open source?

44 of 47 Meta models are open source (94%). The remaining 3 are proprietary, with closed weights served via API.
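The 94% figure is the open-weight share rounded to a whole percent, using the counts from the answer above:

```python
# Model counts from the answer above: 44 open-weight out of 47 tracked.
open_models, total = 44, 47
share = open_models / total * 100
print(f"{share:.1f}% of Meta models are open source (rounds to {round(share)}%)")
```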