# stepfun
## All stepfun Models (2 total)
Benchmark columns with no reported scores are omitted below.

| #▲ | Model | Avg | aa agentic | aa coding | aa quality | arena elo | oc aime202 | oc gpqa di | oc hle | oc ifeval | oc livecod | oc mmlu pr | $/1M in | Context | Released |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | - | 89.5 | 52.0 | 31.6 | 37.8 | 1391.4 | 95.7 | 83.7 | 21.6 | 93.2 | 83.9 | 83.5 | $0.10 | 262K | Jan 26 |
## About stepfun
Quick answers · sourced from our data
How many models does stepfun have?
BenchGecko tracks 2 models from stepfun, of which 2 (100%) are open source. Every entry is updated daily from live provider feeds.
What is the best model from stepfun?
Step 3.5 Flash is currently the highest scoring stepfun model we track, with an average benchmark score of 76.9. Scores are computed across every public benchmark we have data for.
What is the cheapest stepfun model?
The cheapest stepfun model on BenchGecko starts at $0.10 per 1M input tokens. Pricing is pulled from OpenRouter and cross-checked against official provider rate cards.
How does stepfun compare on benchmarks?
stepfun models average 76.9 across the benchmarks we track; see the All Providers page for the full ranking by model count, open source ratio, and average score.
Where is stepfun based?
stepfun is headquartered in China. BenchGecko groups providers by region to make it easy to compare US, EU, China, and Rest of World markets.
Is stepfun open source?
Every stepfun model we track is open source (2 of 2).