About CodeSOTA

Independent ML benchmark tracking. One source of truth for what state-of-the-art looks like across every task that matters.

CodeSOTA tracks state-of-the-art results in machine learning across 17 research areas — from computer vision and NLP to medical AI and robotics. Every result is linked to its paper and code. Every claim has a source URL.

We verify results independently where possible, rather than just aggregating what papers report. All data is available as open JSON.
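Because everything is plain JSON, the data can be consumed with nothing but a standard library. A minimal sketch in Python (the record fields below are hypothetical examples, not CodeSOTA's actual schema — check the site for the real data files):

```python
import json

# Hypothetical record shape -- the real schema may differ. This only
# illustrates that the open data is ordinary JSON you can parse anywhere.
sample = """
[
  {"benchmark": "OCR-lines", "model": "example-model", "score": 97.1,
   "paper": "https://example.org/paper", "code": "https://example.org/code"}
]
"""

results = json.loads(sample)

# Pick the top-scoring entry for a benchmark.
best = max(results, key=lambda r: r["score"])
```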

  • 286+ benchmarks
  • 17 research areas
  • 143 models tracked
  • 12,600+ visitors in year one
  • 125 countries

Timeline

Jul 2018: Papers with Code launches

First centralized ML benchmark tracking

Dec 2019: Meta acquires PWC

Promises to keep it open and free

Jul 2025: Meta shuts down PWC

9,327 benchmarks, 79,817 papers — gone overnight

Dec 10, 2025: CodeSOTA launches

OCR benchmarks, 50+ models, methodology docs, comparison pages

Jan 2026: Expands to 17 areas

Vision, NLP, reasoning, code, speech, medical, robotics, and more

Mar 2026: 12,600+ visitors

125 countries, 30K+ page views, growing 26% month-over-month

Who uses CodeSOTA

Researchers

Track SOTA for your papers. Compare your model to baselines. Find prior work and implementations.

Engineers

Pick the right model for production. Compare accuracy, speed, cost. Find open-source alternatives.

Decision makers

Understand the AI landscape. Make informed build vs buy decisions. Cut through marketing claims.

How it's different

Papers with Code (now defunct)

  • Aggregated paper-reported scores
  • Wiki model — anyone could edit
  • Corporate-owned (Meta)
  • Shut down without notice

CodeSOTA

  • Verified results — we run benchmarks ourselves
  • Curated — editorial quality, not wiki noise
  • Independent — no corporate owner
  • Open data — all JSON, freely available

Built by

CodeSOTA is built by Kacper Wikiel. If you want to discuss ML benchmarking, consulting, or partnerships — let's talk.