Codesota · Models · CLIP4STR-L (RBU 6.5M)
Zhao et al. · 1 result · 1 benchmark
Model card

CLIP4STR-L (RBU 6.5M).

Zhao et al. · open-source · unknown params · CLIP ViT-L/14 visual branch + cross-modal branch, trained on RBU 6.5M real data

CLIP4STR-L trained on the RBU 6.5M real dataset. Published in IEEE TIP, Dec 2024. arXiv:2305.14014.

§ 01 · Benchmarks

Every benchmark CLIP4STR-L (RBU 6.5M) has a recorded score for.

#  | Benchmark  | Area · Task                              | Metric   | Value | Rank  | Date       | Source
01 | icdar-2013 | Computer Vision · Scene Text Recognition | accuracy | 99.0% | #2/15 | 2023-05-23 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric pair (total after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
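A minimal sketch of how a rank like "#2/15" can be derived from recorded scores: sort every score on one benchmark + metric pair (higher accuracy is better) and take the model's 1-based position. The competitor names and scores below are made-up placeholders; only CLIP4STR-L's 99.0% comes from the table above.

```python
def rank(scores: dict[str, float], model: str) -> str:
    """Return a '#position/total' rank string, higher score = better."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return f"#{ordered.index(model) + 1}/{len(ordered)}"

# Placeholder leaderboard for icdar-2013 accuracy (competitors invented).
icdar2013_accuracy = {
    "CLIP4STR-L (RBU 6.5M)": 99.0,
    "model-a": 99.2,  # hypothetical SOTA holder
    "model-b": 98.7,  # hypothetical competitor
}

print(rank(icdar2013_accuracy, "CLIP4STR-L (RBU 6.5M)"))  # → #2/3
```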
§ 02 · Strengths by area

Where CLIP4STR-L (RBU 6.5M) actually performs.

Computer Vision — 1 benchmark · avg rank #2.0
§ 03 · Papers

1 paper with results for CLIP4STR-L (RBU 6.5M).

  1. 2023-05-23 · Computer Vision · 1 result

    CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model

§ 04 · Related models

Other Zhao et al. models scored on Codesota.

CLIP4STR-H (DFN-5B) — unknown params · 0 results
TextMamba — unknown params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arXiv — 1 result · 1 of 1 rows marked verified.