CIFAR-10
60K 32×32 color images in 10 classes. A classic small-scale image-classification benchmark with 50K training and 10K test images.
SOTA History
Metric: accuracy (higher is better)
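For reference, the scores below are top-1 accuracy on the 10K-image test split, expressed as a percentage. A minimal sketch of that metric (function name and sample values are illustrative, not from any leaderboard codebase):

```python
# Top-1 accuracy: fraction of examples where the predicted class
# matches the true label, reported as a percentage (higher is better).

def top1_accuracy(predictions, labels):
    """Return percent of predictions that equal the corresponding label."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy example: 9 of 10 CIFAR-10 class predictions correct.
print(top1_accuracy([3, 8, 0, 6, 1, 1, 4, 9, 5, 7],
                    [3, 8, 0, 6, 1, 2, 4, 9, 5, 7]))  # -> 90.0
```

On the full benchmark the same computation runs over all 10,000 test images, so a score of 98.13 means 9,813 correct predictions.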
| Rank | Model | Notes | Source | Score (accuracy, %) | Year |
|---|---|---|---|---|---|
| 1 | deit-b-distilled | Near-SOTA on CIFAR-10 with transfer learning. | Editorial | 99.1 | 2025 |
| 2 | ViT-L/16 (IN-21K) | Vision Transformer ViT-L/16, pretrained on ImageNet-21K and finetuned on CIFAR-10; 99.0% reported in the ViT paper (Dosovitskiy et al. 2021, arXiv:2010.11929). | Community | 99.0 | 2021 |
| 3 | EfficientNet-B8 (NoisyStudent) | EfficientNet-B8 trained with Noisy Student self-training and noise; 98.7% on CIFAR-10 (Xie et al. 2020, arXiv:1911.04252). | Community | 98.7 | 2020 |
| 4 | convnext-v2-base | Strong CNN performance on this small-scale benchmark. | Editorial | 98.7 | 2025 |
| 5 | ViT-B/16 (IN-21K) | Vision Transformer ViT-B/16, pretrained on ImageNet-21K and finetuned on CIFAR-10; 98.13% reported in the ViT paper (Dosovitskiy et al. 2021, arXiv:2010.11929). | Community | 98.13 | 2021 |
| 6 | Swin-B | Swin Transformer Base, pretrained on ImageNet-21K and finetuned on CIFAR-10 (Liu et al. 2021, arXiv:2103.14030). | Community | 98.0 | 2021 |
| 7 | resnet-50 | With Cutout augmentation. | Editorial | 96.01 | 2025 |