CheXpert


224,316 chest radiographs from 65,240 patients, labeled for 14 pathologies. Includes uncertainty labels in the training set and expert radiologist annotations for the validation set. A widely used benchmark for chest X-ray classification.
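The uncertainty labels mentioned above are encoded in the CheXpert CSVs as `-1.0` (with `1.0` positive, `0.0` negative, and blank for unmentioned findings). A common baseline is the "U-Ones" policy, which maps uncertain labels to positive. A minimal sketch with pandas, using a toy frame in place of the real `train.csv` (the illustrative column names here are two of the 14 actual pathology columns):

```python
import numpy as np
import pandas as pd

def apply_u_ones(df: pd.DataFrame, label_cols: list) -> pd.DataFrame:
    """U-Ones policy: uncertain (-1.0) -> positive (1.0); blanks -> negative (0.0)."""
    out = df.copy()
    out[label_cols] = out[label_cols].replace(-1.0, 1.0).fillna(0.0)
    return out

# Toy stand-in for the CheXpert train.csv label columns.
labels = pd.DataFrame({
    "Atelectasis":  [1.0, -1.0, np.nan],
    "Cardiomegaly": [0.0,  1.0, -1.0],
})
clean = apply_u_ones(labels, ["Atelectasis", "Cardiomegaly"])
```

The alternative "U-Zeros" policy maps `-1.0` to `0.0` instead; which policy works better varies by pathology.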

Benchmark Stats

Models: 7 · Papers: 7 · Metrics: 1


AUROC

Higher is better
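The leaderboard metric is the mean AUROC across pathologies (for the CheXpert competition, the mean over its 5 competition tasks). A minimal sketch of how such a score is computed, using scikit-learn and toy labels for two pathologies:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Rows = studies, columns = pathologies. Toy ground-truth labels and
# predicted probabilities; the real benchmark uses the validation-set
# radiologist annotations as ground truth.
y_true = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
y_prob = np.array([[0.9, 0.6], [0.3, 0.8], [0.8, 0.5], [0.1, 0.4]])

# Per-pathology AUROC, then the unweighted mean across pathologies.
per_class = [roc_auc_score(y_true[:, k], y_prob[:, k])
             for k in range(y_true.shape[1])]
mean_auroc = float(np.mean(per_class))
```

Scores in the table below follow this convention, reported on a 0-100 scale.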

| Rank | Model | Notes | Code | Score | Paper / Source |
|---|---|---|---|---|---|
| 1 | chexpert-auc-maximizer | Mean AUC across 5 competition pathologies. Competition-winning ensemble. | - | 93 | stanford-leaderboard |
| 2 | biovil | Microsoft's biomedical vision-language model. | | 89.1 | microsoft-research |
| 3 | chexzero | Zero-shot performance without task-specific training. Expert-level on multiple pathologies. | | 88.6 | research-paper |
| 4 | gloria | Global-Local Representations. Zero-shot evaluation. | | 88.2 | research-paper |
| 5 | medclip | Decoupled contrastive learning. Zero-shot transfer. | | 87.8 | research-paper |
| 6 | torchxrayvision | Pre-trained on multiple datasets. Strong transfer-learning baseline. | | 87.4 | GitHub |
| 7 | densenet-121-cxr | Baseline DenseNet-121, trained on the CheXpert training set. | - | 86.5 | research-paper |