Codesota · Models · DPAN
6 results · 6 benchmarks
Model card

DPAN.

Unknown params

Imported from Papers With Code

§ 01 · Benchmarks

Every benchmark DPAN has a recorded score for.

#  | Benchmark | Area · Task                                      | Metric   | Value | Rank   | Date       | Source
01 | icdar2013 | Computer Vision · Optical Character Recognition  | accuracy | 97.7% | #11/36 | 2021-08-01 | source ↗
02 | cute80    | Computer Vision · Scene Text Recognition         | accuracy | 91.9% | #16/20 | 2021-08-01 | source ↗
03 | icdar2015 | Computer Vision · Optical Character Recognition  | accuracy | 85.5% | #16/29 | 2021-08-01 | source ↗
04 | svtp      | Computer Vision · Scene Text Recognition         | accuracy | 89.0% | #17/19 | 2021-08-01 | source ↗
05 | iiit5k    | Computer Vision · Scene Text Recognition         | accuracy | 96.2% | #18/21 | 2021-08-01 | source ↗
06 | svt       | Computer Vision · Scene Text Recognition         | accuracy | 93.9% | #19/40 | 2021-08-01 | source ↗
The Rank column shows this model’s position against all other models scored on the same benchmark and metric (the total number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.

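The sort rule above (ascending rank, ties broken by newest result first) can be sketched as a compound sort key; the row tuples here are hypothetical, with rank and date values taken from the benchmark table:

```python
from datetime import date

# Hypothetical row tuples: (benchmark, rank, result_date).
rows = [
    ("svt", 19, date(2021, 8, 1)),
    ("icdar2013", 11, date(2021, 8, 1)),
    ("cute80", 16, date(2021, 8, 1)),
]

# Sort by rank ascending; for equal ranks, newer dates come first
# (negating the date's ordinal flips its sort direction).
rows.sort(key=lambda r: (r[1], -r[2].toordinal()))
print([r[0] for r in rows])  # → ['icdar2013', 'cute80', 'svt']
```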
§ 02 · Strengths by area

Where DPAN actually performs.

Computer Vision · 6 benchmarks · avg rank #16.2
§ 03 · Papers

1 paper with results for DPAN.

  1. 2021-08-01 · Computer Vision · 6 results
     Look Back Again: Dual Parallel Attention Network for Accurate and Robust Scene Text Recognition

§ 04 · Related models

Other Unknown models scored on Codesota.

fglihai · Unknown params · 6 results · 1 SOTA
CLIP4STR-L · Unknown params · 1 result · 1 SOTA
USYD NLP_CS29-2 · Unknown params · 6 results
Corner-based Region Proposals · Unknown params · 3 results
EAST + VGG16 · Unknown params · 3 results
SSTD · Unknown params · 3 results
TextBoxes++_MS · Unknown params · 3 results
WordSup (VGG16-synth-coco) · Unknown params · 3 results
§ 05 · Sources & freshness

Where these numbers come from.

papers-with-code · 6 results · 6 of 6 rows marked verified.