Model card
DAT-SEG
Wan et al. (Baidu) · open-source · unknown params · interactive attention transformer with segmentation head for multi-granularity text detection
Segmentation head of DAT (Dual-granularity Attention Transformer). SOTA on Total-Text as of May 2024. ICML 2024. arXiv:2405.19765.
§ 01 · Benchmarks
Every benchmark DAT-SEG has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Total-Text | Computer Vision · Scene Text Detection | f-measure | 92.0% | #1 | 2024-05-30 | arXiv:2405.19765 |
| 02 | Total-Text | Computer Vision · Scene Text Detection | precision | 95.0% | #1 | 2024-05-30 | arXiv:2405.19765 |
| 03 | Total-Text | Computer Vision · Scene Text Detection | recall | 89.2% | #1 | 2024-05-30 | arXiv:2405.19765 |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 denotes the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Papers
1 paper with results for DAT-SEG.
- 2024-05-30 · Computer Vision · 3 results
  Towards Unified Multi-granularity Text Detection with Interactive Attention (arXiv:2405.19765)
§ 03 · Related models
Other Wan et al. (Baidu) models scored on Codesota.
§ 04 · Sources & freshness
Where these numbers come from.
arXiv · 3 results
3 of 3 rows marked verified.