Codesota · Models · swin_large.ms_in22k_ft_in1k · Microsoft · 1 result · 1 benchmark
Model card

swin_large.ms_in22k_ft_in1k

Microsoft · open-source · Swin-L, IN22K pre-train, IN1K fine-tune
§ 01 · Benchmarks

Every benchmark for which swin_large.ms_in22k_ft_in1k has a recorded score.

#  | Benchmark | Area · Task                            | Metric         | Value | Rank  | Date       | Source
01 | ImageNet  | Computer Vision · Image Classification | top-1 accuracy | 86.3% | #9/10 | 2021-03-01 | source ↗
The Rank column shows this model's position versus all other models scored on the same benchmark and metric (the total number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
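The rank logic described above can be sketched in a few lines. This is a hypothetical illustration, not Codesota's actual code; the function name `rank_of` and the leaderboard values are invented for the example (only the 86.3% entry comes from the table).

```python
# Hypothetical sketch of the rank computation described above:
# a model's rank on a benchmark+metric is its 1-based position when all
# scored models are sorted by score (higher is better for top-1 accuracy).
def rank_of(model, scores):
    """scores: dict mapping model name -> metric value (higher is better)."""
    ordered = sorted(scores, key=lambda m: scores[m], reverse=True)
    return ordered.index(model) + 1, len(ordered)

leaderboard = {  # illustrative values, not real leaderboard data
    "model_a": 88.5,
    "swin_large.ms_in22k_ft_in1k": 86.3,  # value from the table above
    "model_b": 85.0,
}
rank, total = rank_of("swin_large.ms_in22k_ft_in1k", leaderboard)
print(f"#{rank}/{total}")  # → #2/3
```

Ties and the "newest result first" secondary sort are omitted here for brevity.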
§ 02 · Strengths by area

How swin_large.ms_in22k_ft_in1k actually performs, by area.

Computer Vision · 1 benchmark · avg rank #9.0
§ 03 · Papers

1 paper with results for swin_large.ms_in22k_ft_in1k.

  1. 2021-03-25 · Computer Vision · 1 result

    Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

§ 04 · Related models

Other Microsoft models scored on Codesota.

RAD-DINO · 2 results · 1 SOTA
NaturalSpeech 3 · ~500M params · 1 result · 1 SOTA
Swin Transformer V2 Large · 197M params · 1 result · 1 SOTA
WavLM Large (SV) · 316M params · 1 result · 1 SOTA
ResNet-50 · 25M params · 3 results
Florence-2-Large · 2 results
KOSMOS-2.5 · 2 results
ResNet-152 · 60M params · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

codesota-editorial · 1 result · 1 of 1 rows marked verified.