Model card
AIMv2-3B.
Apple · open-source · 2.7B params · Vision Transformer (autoregressive pre-trained)
Multimodal autoregressive pre-training of a large vision encoder: 2.7B params, patch size 14, 448 px input resolution. Trained with a joint image+text autoregressive objective on proprietary data. Released Nov 2024. Paper: arXiv:2411.14402.
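As a quick sanity check on the patch geometry implied by the card (a sketch from the stated patch size and resolution, not from the paper), the encoder sees a 32×32 grid of patches, i.e. 1024 tokens per image:

```python
# Patch grid for a ViT with patch size 14 at 448 px input (values from the card).
image_size = 448
patch_size = 14

patches_per_side = image_size // patch_size  # 448 / 14 = 32
num_patches = patches_per_side ** 2          # 32 * 32 = 1024 tokens per image

print(patches_per_side, num_patches)  # → 32 1024
```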
§ 01 · Benchmarks
Every benchmark for which AIMv2-3B has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | ImageNet-1K | Computer Vision · Image Classification | top-1-accuracy | 89.5% | #4 | — | source ↗ |
The Rank column shows this model’s position relative to all other models scored on the same benchmark and metric (the competitor count appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 05 · Sources & freshness
Where these numbers come from.
arxiv-paper · 1 result · 0 of 1 rows marked verified.