Model card
CLIP4STR-H (DFN-5B).
Unknown params
Imported from Papers With Code
§ 01 · Benchmarks
All benchmarks with a recorded score for CLIP4STR-H (DFN-5B).
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | svt | Computer Vision · Scene Text Recognition | accuracy | 99.1% | #1 | 2023-05-23 | source ↗ |
| 02 | wost | Computer Vision · Scene Text Recognition | 1-1-accuracy | 90.9% | #1 | 2023-05-23 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current state of the art. Rows are sorted by rank, then by newest result.
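The ranking rule above can be sketched in a few lines. This is an illustrative reconstruction, not Codesota's actual implementation; the `Result` fields and the CLIP4STR-L score are hypothetical, while the CLIP4STR-H svt score comes from the table above.

```python
from dataclasses import dataclass

@dataclass
class Result:
    model: str
    benchmark: str
    metric: str
    value: float

def rank_of(results, model, benchmark, metric):
    """Position of `model` among all results sharing the same
    benchmark + metric, assuming higher metric values are better."""
    pool = sorted(
        (r for r in results if r.benchmark == benchmark and r.metric == metric),
        key=lambda r: r.value,
        reverse=True,
    )
    for i, r in enumerate(pool, start=1):
        if r.model == model:
            return i, len(pool)
    raise ValueError(f"{model} has no {metric} result on {benchmark}")

results = [
    Result("CLIP4STR-H (DFN-5B)", "svt", "accuracy", 99.1),
    Result("CLIP4STR-L", "svt", "accuracy", 98.6),  # hypothetical value
]
print(rank_of(results, "CLIP4STR-H (DFN-5B)", "svt", "accuracy"))  # (1, 2)
```

A rank of `(1, 2)` reads as "#1 of 2 models scored on svt accuracy".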
§ 02 · Strengths by area
Where CLIP4STR-H (DFN-5B) performs best.
§ 03 · Papers
1 paper with results for CLIP4STR-H (DFN-5B).
- 2023-05-23 · Computer Vision · 2 results
  CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
§ 04 · Related models
Other models scored on Codesota.
fglihai
Unknown params · 6 results · 1 SOTA
CLIP4STR-L
Unknown params · 1 result · 1 SOTA
USYD NLP_CS29-2
Unknown params · 6 results
Corner-based Region Proposals
Unknown params · 3 results
EAST + VGG16
Unknown params · 3 results
SSTD
Unknown params · 3 results
TextBoxes++_MS
Unknown params · 3 results
WordSup (VGG16-synth-coco)
Unknown params · 3 results
§ 05 · Sources & freshness
Where these numbers come from.
papers-with-code · 2 results · 2 of 2 rows marked verified.