Model card

MAERec-S.

Research · open-source · Unknown params · Masked AutoEncoder for Scene Text Recognition (ViT-Small)

MAERec: a ViT-based scene text recognizer with MAE pre-training tailored for irregular text. Small variant (ViT-Small backbone). Introduced and evaluated on the Union14M benchmark in the original Union14M paper, "Revisiting Scene Text Recognition: A Data Perspective". ICCV 2023. arXiv 2307.08723.
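As a rough sketch of the MAE-style pre-training idea behind this model (standard masked-autoencoder patch masking; the function and shapes below are illustrative assumptions, not from the MAERec codebase):

```python
import numpy as np

def mask_patches(patches, mask_ratio=0.75, seed=0):
    """MAE-style masking: keep a random subset of patches. The encoder
    sees only the kept patches; the decoder reconstructs the masked
    ones. mask_ratio=0.75 is the common MAE default."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # patches fed to the encoder
    mask_idx = np.sort(perm[n_keep:])   # patches to reconstruct
    return patches[keep_idx], keep_idx, mask_idx

# E.g. a 32x128 text-line image split into 8x8 RGB patches
# -> 64 patches of 8*8*3 = 192 dims each (hypothetical sizes).
patches = np.zeros((64, 8 * 8 * 3))
visible, keep_idx, mask_idx = mask_patches(patches)
print(visible.shape)  # (16, 192) — encoder sees 25% of the patches
```

After pre-training, the decoder is discarded and the encoder is fine-tuned with a recognition head on labeled text images.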

§ 01 · Benchmarks

Every benchmark MAERec-S has a recorded score for.

#    Benchmark   Area · Task                                Metric     Value   Rank   Date   Source
01   Union14M    Computer Vision · Scene Text Recognition   accuracy   62.4%   #6/8          source ↗
The Rank column shows this model's position among all models scored on the same benchmark + metric (the number after the slash is the total count of scored models). #1 in red means current SOTA. Sorted by rank, then newest result.
§ 02 · Strengths by area

Where MAERec-S performs, grouped by benchmark area.

Computer Vision: 1 benchmark · avg rank #6.0
§ 04 · Related models

Other Research models scored on Codesota.

DenseNet-121 (Chest X-ray) · 8M params · 4 results · 2 SOTA
SimpleNet · 2 results · 2 SOTA
DGN · 1 result · 1 SOTA
DeepASD · 1 result · 1 SOTA
DefectDet (ResNet) · 1 result · 1 SOTA
PROXI · 1 result · 1 SOTA
ASD-SWNet · 2 results
ASDFormer · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

arxiv-paper: 1 result. 0 of 1 rows marked verified.