CheXpert
224,316 chest radiographs of 65,240 patients, labeled for 14 pathology observations. The training set includes uncertainty labels, and the validation set carries expert radiologist annotations. A widely used benchmark for chest X-ray classification.
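Before training, the uncertain (-1) labels are usually mapped to a concrete value. Below is a minimal sketch of the U-Ones/U-Zeros policies described in the CheXpert paper, assuming the standard `train.csv` layout; the file path and the five-pathology subset are illustrative:

```python
import pandas as pd

# Illustrative path; CheXpert ships a train.csv with one column per observation,
# where 1.0 = positive, 0.0 = negative, -1.0 = uncertain, blank = unmentioned.
df = pd.read_csv("CheXpert-v1.0/train.csv")

PATHOLOGIES = [
    "Atelectasis", "Cardiomegaly", "Consolidation", "Edema", "Pleural Effusion",
]

def resolve_uncertain(df, policy="ones"):
    """Map uncertain (-1) labels: 'ones' treats uncertain as positive,
    'zeros' as negative (the U-Ones / U-Zeros baselines from the paper)."""
    fill = 1.0 if policy == "ones" else 0.0
    out = df.copy()
    # Blank (unmentioned) labels are treated as negative here; that is one
    # common convention, not the only option.
    out[PATHOLOGIES] = out[PATHOLOGIES].replace(-1.0, fill).fillna(0.0)
    return out

train_df = resolve_uncertain(df, policy="ones")
```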
Benchmark Stats
Models: 7 · Papers: 7 · Metrics: 1
Metric: AUROC (higher is better)
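For concreteness, here is a minimal sketch of how a mean-AUROC score like those in the table is typically computed: per-pathology AUROC, macro-averaged over the five CheXpert competition tasks. The prediction arrays below are random placeholders:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

PATHOLOGIES = [
    "Atelectasis", "Cardiomegaly", "Consolidation", "Edema", "Pleural Effusion",
]

# Placeholder data: y_true holds binary ground truth, y_score the model's
# predicted probabilities, one column per pathology.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, 5))
y_score = rng.random(size=(500, 5))

# AUROC is computed independently per pathology, then macro-averaged.
per_label = [roc_auc_score(y_true[:, i], y_score[:, i])
             for i in range(len(PATHOLOGIES))]
mean_auroc = float(np.mean(per_label))

print({p: round(a, 3) for p, a in zip(PATHOLOGIES, per_label)})
print("mean AUROC:", round(mean_auroc, 3))
```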
| Rank | Model | Code | Score | Paper / Source |
|---|---|---|---|---|
| 1 | chexpert-auc-maximizer. Competition-winning ensemble; mean AUROC across the 5 competition pathologies. | - | 93.0 | stanford-leaderboard |
| 2 | biovil. Microsoft's biomedical vision-language model. | - | 89.1 | microsoft-research |
| 3 | chexzero. Zero-shot performance without task-specific training; expert-level on multiple pathologies. | - | 88.6 | research-paper |
| 4 | gloria. Global-Local Representations; zero-shot evaluation. | - | 88.2 | research-paper |
| 5 | medclip. Decoupled contrastive learning; zero-shot transfer. | - | 87.8 | research-paper |
| 6 | torchxrayvision. Pre-trained on multiple datasets; strong transfer-learning baseline. | - | 87.4 | GitHub |
| 7 | densenet-121-cxr. Baseline DenseNet-121 trained on the CheXpert training set. | - | 86.5 | research-paper |
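To reproduce a baseline along the lines of the torchxrayvision entry, here is a minimal inference sketch using that library's documented pretrained-weights API; the image filename is illustrative, and exact weight tags may differ across versions:

```python
import skimage.io
import torch
import torchvision
import torchxrayvision as xrv

# Load a DenseNet-121 pretrained on CheXpert ("chex"); other weight tags,
# e.g. "densenet121-res224-all", bundle multiple training datasets.
model = xrv.models.DenseNet(weights="densenet121-res224-chex")
model.eval()

# Illustrative image path; the library expects single-channel 224x224 inputs
# normalized to the range [-1024, 1024].
img = skimage.io.imread("view1_frontal.jpg")
img = xrv.datasets.normalize(img, 255)   # scale [0, 255] -> [-1024, 1024]
if img.ndim == 3:
    img = img.mean(2)                    # collapse RGB to grayscale
img = img[None, ...]                     # add channel dimension

transform = torchvision.transforms.Compose([
    xrv.datasets.XRayCenterCrop(),
    xrv.datasets.XRayResizer(224),
])
img = transform(img)

with torch.no_grad():
    preds = model(torch.from_numpy(img).float()[None, ...])  # add batch dim

# Map each output to its pathology name.
print(dict(zip(model.pathologies, preds[0].numpy().round(3))))
```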