Model card
LayoutLMv3-Large.
Microsoft Research · open-source · Unknown params · Multimodal Transformer (text + layout + image unified pre-training)
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking (ACM MM 2022, arXiv 2204.08387). Large variant; a strong baseline for document-understanding tasks, including layout analysis.
§ 01 · Benchmarks
Every benchmark LayoutLMv3-Large has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | DocLayNet | Computer Vision · Document Understanding | mAP | 79.5% | #3 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark + metric pair. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
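The ranking rule described above can be sketched as follows. The data shape and field names here are illustrative assumptions, not Codesota's actual schema; the only fixed idea is that, for a given benchmark + metric, models are ranked by score (higher is better for mAP) and rank #1 is the current SOTA.

```python
from datetime import date

# Hypothetical leaderboard rows for one benchmark + metric pair.
# Field names and the two competitor models are made up for illustration;
# only the LayoutLMv3-Large DocLayNet mAP of 79.5 comes from the card.
results = [
    {"model": "Model A", "value": 81.0, "date": date(2023, 1, 10)},
    {"model": "LayoutLMv3-Large", "value": 79.5, "date": date(2022, 4, 18)},
    {"model": "Model B", "value": 80.2, "date": date(2022, 11, 5)},
]

# Rank by metric value, highest first (higher mAP is better).
ranked = sorted(results, key=lambda r: r["value"], reverse=True)
ranks = {r["model"]: i + 1 for i, r in enumerate(ranked)}

print(ranks["LayoutLMv3-Large"])        # 3, matching the #3 in the table
print(ranked[0]["model"])               # the rank-#1 model is the current SOTA
```

With more than one result per model, the same idea extends by keeping only each model's best (or newest) score before sorting.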
§ 02 · Strengths by area
Where LayoutLMv3-Large performs well, by area.
§ 04 · Related models
Other Microsoft Research models scored on Codesota.
- Faster R-CNN · Unknown params · 7 results
- Swin-L (Cascade R-CNN) · 1 result
- DiT-L (Cascade R-CNN) · Unknown params · 0 results
- Faster R-CNN (VGG-16) · ~137M params · 0 results
- NaturalSpeech · N/A params · 0 results
- NaturalSpeech 3 · Unknown params · 0 results
- SwinV2-G · 0 results
- ViT-Adapter-L (BEiT-3) · Unknown params · 0 results
§ 05 · Sources & freshness
Where these numbers come from.
arxiv-paper · 1 result · 0 of 1 rows marked verified.