Codesota · Models · DAT-SEG
Wan et al. (Baidu) · 3 results · 1 benchmark
Model card

DAT-SEG.

Wan et al. (Baidu) · open-source · Unknown params
Interactive attention transformer with segmentation head for multi-granularity text detection.

The segmentation head of DAT (Dual-granularity Attention Transformer). SOTA on Total-Text as of May 2024. Published at ICML 2024. arxiv:2405.19765

§ 01 · Benchmarks

Every benchmark DAT-SEG has a recorded score for.

 # | Benchmark  | Area · Task                            | Metric    | Value | Rank  | Date       | Source
01 | Total-Text | Computer Vision · Scene Text Detection | f-measure | 92.0% | #1/33 | 2024-05-30 | source ↗
02 | Total-Text | Computer Vision · Scene Text Detection | precision | 95.0% | #1/30 | 2024-05-30 | source ↗
03 | Total-Text | Computer Vision · Scene Text Detection | recall    | 89.2% | #1/30 | 2024-05-30 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (number of competitors after the slash). A #1 rank marks the current SOTA. Rows are sorted by rank, then by newest result.
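The three Total-Text numbers above are internally consistent: f-measure (F1) is the harmonic mean of precision and recall. A quick sanity check, using the values from the table:

```python
# Precision and recall for DAT-SEG on Total-Text, from the table above.
precision = 0.950
recall = 0.892

# F-measure is the harmonic mean of precision and recall.
f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.1%}")  # 92.0%, matching the reported f-measure
```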
§ 02 · Strengths by area

Where DAT-SEG actually performs.

Computer Vision · 1 benchmark · avg rank #1.0
§ 03 · Papers

1 paper with results for DAT-SEG.

  1. Towards Unified Multi-granularity Text Detection with Interactive Attention
     2024-05-30 · Computer Vision · 3 results

§ 04 · Related models

Other Wan et al. (Baidu) models scored on Codesota.

DAT-DET · Unknown params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arxiv · 3 results · 3 of 3 rows marked verified.