ImageNet Linear Probe


Linear classification on ImageNet-1K using frozen backbone features: a linear head is trained on top of a pretrained encoder whose weights are never updated. Used to evaluate the representation quality of self-supervised and contrastive models without fine-tuning the backbone.
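The protocol can be sketched with scikit-learn. This is a toy illustration, assuming features have already been extracted by a frozen encoder; random vectors stand in for real backbone features, and the dimensions are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Linear probing: train only a linear classifier on frozen features.
# Random vectors stand in for features from a frozen backbone
# (in practice: outputs of e.g. a DINOv2, CLIP, or MAE encoder).
rng = np.random.default_rng(0)
num_classes, dim = 10, 64
train_feats = rng.normal(size=(500, dim))
train_labels = rng.integers(0, num_classes, size=500)

probe = LogisticRegression(max_iter=1000)  # the linear "probe"
probe.fit(train_feats, train_labels)       # backbone stays untouched

val_feats = rng.normal(size=(100, dim))
val_labels = rng.integers(0, num_classes, size=100)
top1 = probe.score(val_feats, val_labels)  # top-1 accuracy on held-out data
print(f"top-1: {top1:.3f}")
```

The reported benchmark numbers follow the same idea at ImageNet scale, typically with a logistic-regression or single-linear-layer head trained with SGD.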

Benchmark Stats

- Models: 5
- Papers: 5
- Metrics: 1


Metric: top-1 accuracy (higher is better)

Rank  Model            Source     Score  Year  Paper
1     DINOv2 ViT-g/14  Community  86.5   2026  Source
2     DINOv2 ViT-L/14  Community  86.3   2026  Source
3     CLIP ViT-L/14    Community  85.3   2026  Source
4     MAE ViT-H/14     Community  77.2   2026  Source
5     MAE ViT-L/16     Community  76.0   2026  Source

Notes:

1. DINOv2 ViT-g/14: self-supervised via distillation; linear probe on frozen features. Source: facebookresearch/dinov2 README, pretrained models table. Paper: Oquab et al. 2023, arXiv:2304.07193.
2. DINOv2 ViT-L/14: self-supervised via distillation; linear probe on frozen features. Source: facebookresearch/dinov2 README, pretrained models table. Paper: Oquab et al. 2023, arXiv:2304.07193.
3. CLIP ViT-L/14: OpenAI CLIP, contrastive pre-training on 400M image-text pairs; linear probe on frozen features. 85.3% reported in the original CLIP paper (Table 10, Appendix). Paper: Radford et al. 2021, arXiv:2103.00020.
4. MAE ViT-H/14: masked autoencoder; linear probe on frozen features (PyTorch reimplementation). Source: facebookresearch/mae FINETUNE.md, linear probing table. Paper: He et al. 2022, arXiv:2111.06377. Note: MAE is optimized for fine-tuning rather than linear probing; its fine-tuned accuracy is 87.8%.
5. MAE ViT-L/16: masked autoencoder; linear probe on frozen features (PyTorch reimplementation). Source: facebookresearch/mae FINETUNE.md. Paper: He et al. 2022, arXiv:2111.06377.
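All entries above share the frozen-feature protocol: gradients never flow into the backbone, only into the linear head. A minimal PyTorch sketch of that training step, assuming a tiny MLP as a stand-in for the real ViT backbone (actual evaluations load the published checkpoints, e.g. via the facebookresearch/dinov2 or facebookresearch/mae repositories):

```python
import torch
import torch.nn as nn

# Stand-in backbone (assumption: a tiny MLP replaces the real ViT encoder).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32), nn.GELU())
for p in backbone.parameters():
    p.requires_grad = False          # freeze: the backbone is never updated
backbone.eval()

probe = nn.Linear(32, 10)            # linear head over frozen features
opt = torch.optim.SGD(probe.parameters(), lr=0.1)

images = torch.randn(16, 3, 8, 8)    # dummy batch in place of ImageNet data
labels = torch.randint(0, 10, (16,))
with torch.no_grad():                # no graph built through the backbone
    feats = backbone(images)
loss = nn.functional.cross_entropy(probe(feats), labels)
loss.backward()                      # gradients reach only the probe
opt.step()
print(f"probe loss: {loss.item():.3f}")
```

Because the backbone is frozen, features can also be extracted once and cached, which is how large-scale linear probes are usually run in practice.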
