Model card
UniTable Large
Georgia Tech (Peng et al.) · open-source · unknown params · ViT encoder + autoregressive decoder; self-supervised pretraining on unannotated tabular images
A unified framework for table structure, cell content, and bounding boxes via a single language modeling objective. Achieves SOTA on PubTabNet, FinTabNet, and SynthTabNet. Published March 2024.
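The language-modeling framing can be made concrete with a toy sketch: the decoder emits HTML structure tokens one at a time until an end token, conditioned (in the real model) on ViT image features. Everything below is illustrative, not the released UniTable code; `score_next` is a hypothetical stand-in for the decoder forward pass and simply replays a fixed 1x1 table.

```python
# Illustrative sketch of autoregressive structure decoding.
# A real decoder scores the next token over a structure vocabulary given
# (image features, token prefix); this stub hard-codes a tiny table.

VOCAB = ["<s>", "</s>", "<thead>", "</thead>", "<tbody>", "</tbody>",
         "<tr>", "</tr>", "<td>", "</td>"]

def score_next(prefix):
    # Hypothetical stand-in: returns the argmax token a trained model
    # would produce. Here it replays a scripted 1x1 table body.
    script = ["<tbody>", "<tr>", "<td>", "</td>", "</tr>", "</tbody>", "</s>"]
    return script[len(prefix) - 1]

def greedy_decode(max_len=16):
    # Standard greedy loop: append the highest-scoring token until EOS.
    tokens = ["<s>"]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        tokens.append(score_next(tokens))
    return tokens[1:-1]  # strip BOS/EOS

structure = greedy_decode()
html = "<table>" + "".join(structure) + "</table>"
```

Cell content and bounding boxes are handled the same way in the paper's framing: each task is just a different target token sequence under the same decoder.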
§ 01 · Benchmarks
Every benchmark UniTable Large has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | pubtabnet | Computer Vision · Table Recognition | teds-struct | 97.9% | #2 | 2024-03-07 | source ↗ |
| 02 | pubtabnet | Computer Vision · Table Recognition | teds-all-samples | 96.5% | #8 | 2024-03-07 | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
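The teds-* metrics above are Tree-Edit-Distance-based Similarity scores: TEDS(T_pred, T_true) = 1 - d(T_pred, T_true) / max(|T_pred|, |T_true|), where d is the tree edit distance between the predicted and ground-truth HTML trees and |T| counts nodes (teds-struct compares structure only, ignoring cell text). A minimal sketch of the normalization, assuming d is precomputed elsewhere (real evaluations use a proper tree edit distance such as APTED); the node counter here is illustrative.

```python
from html.parser import HTMLParser

class NodeCounter(HTMLParser):
    # Counts element nodes in an HTML string by counting start tags.
    def __init__(self):
        super().__init__()
        self.count = 0
    def handle_starttag(self, tag, attrs):
        self.count += 1

def tree_size(html):
    counter = NodeCounter()
    counter.feed(html)
    return counter.count

def teds(edit_distance, pred_html, true_html):
    # TEDS = 1 - d / max(|T_pred|, |T_true|); edit_distance is assumed to
    # come from a tree edit distance implementation (e.g. APTED).
    return 1.0 - edit_distance / max(tree_size(pred_html), tree_size(true_html))

truth = "<table><tr><td></td></tr></table>"          # 3 nodes
pred  = "<table><tr><td></td><td></td></tr></table>" # 4 nodes, one extra <td>
score = teds(1, pred, truth)                          # one insertion -> 0.75
```

A score of 97.9% teds-struct thus means the predicted structure trees are, on average, within a few edits of the ground truth relative to table size.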
§ 02 · Strengths by area
Where UniTable Large performs best.
§ 03 · Papers
1 paper with results for UniTable Large.
- 2024-03-07 · Computer Vision · 2 results
  UniTable: Towards a Unified Framework for Table Recognition via Self-Supervised Pretraining
§ 05 · Sources & freshness
Where these numbers come from.
arxiv · 2 results
2 of 2 rows marked verified.