About CodeSOTA
Independent ML benchmark tracking. One source of truth for what state-of-the-art looks like across every task that matters.
CodeSOTA tracks state-of-the-art results in machine learning across 17 research areas — from computer vision and NLP to medical AI and robotics. Every result is linked to its paper and code. Every claim has a source URL.
We verify results independently where possible, rather than just aggregating what papers report. All data is available as open JSON.
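As a sketch of what consuming that open JSON might look like, here is a minimal example. The field names, values, and URLs below are hypothetical illustrations of a benchmark entry (paper link, code link, source URL, verification flag), not CodeSOTA's actual schema:

```python
import json

# Hypothetical CodeSOTA-style benchmark entry; the real schema and
# field names may differ. URLs use the reserved example.org domain.
entry_json = """
{
  "benchmark": "OCR / handwritten-text",
  "results": [
    {
      "model": "ExampleNet",
      "metric": "accuracy",
      "value": 0.973,
      "paper_url": "https://example.org/paper",
      "code_url": "https://example.org/code",
      "source_url": "https://example.org/claim",
      "verified": true
    },
    {
      "model": "OtherNet",
      "metric": "accuracy",
      "value": 0.981,
      "paper_url": "https://example.org/paper2",
      "code_url": null,
      "source_url": "https://example.org/claim2",
      "verified": false
    }
  ]
}
"""

entry = json.loads(entry_json)

# Keep only independently verified results, then take the best score.
verified = [r for r in entry["results"] if r["verified"]]
best = max(verified, key=lambda r: r["value"])
print(best["model"], best["value"])  # ExampleNet 0.973
```

Filtering on a verification flag before ranking is the point of the exercise: a paper-reported 0.981 is ignored in favor of the best verified number.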
Timeline
- Papers with Code launches the first centralized ML benchmark tracking
- It promises to keep the data open and free
- Then it shuts down: 9,327 benchmarks and 79,817 papers, gone overnight
- CodeSOTA launches with OCR benchmarks, 50+ models, methodology docs, and comparison pages
- Coverage expands to vision, NLP, reasoning, code, speech, medical, robotics, and more
- Today: readers in 125 countries, 30K+ page views, growing 26% month-over-month
Who uses CodeSOTA
Researchers
Track SOTA for your papers. Compare your model to baselines. Find prior work and implementations.
Engineers
Pick the right model for production. Compare accuracy, speed, cost. Find open-source alternatives.
Decision makers
Understand the AI landscape. Make informed build vs buy decisions. Cut through marketing claims.
How it's different
Papers with Code (was)
- Aggregated paper-reported scores
- Wiki model — anyone could edit
- Corporate-owned (Meta)
- Shut down without notice
CodeSOTA
- Verified results — we run benchmarks ourselves where possible
- Curated — editorial quality, not wiki noise
- Independent — no corporate owner
- Open data — all JSON, freely available
Built by
CodeSOTA is built by Kacper Wikiel. If you want to discuss ML benchmarking, consulting, or partnerships — let's talk.