Codesota · Models · LLaVA-1.5 · UW-Madison / Microsoft · 3 results · 3 benchmarks
Model card

LLaVA-1.5.

UW-Madison / Microsoft · open-source · Unknown params · CLIP ViT-L + MLP projector + Vicuna-13B

Improved LLaVA with an MLP vision-language connector and academic-task VQA data. 13B params. A strong open-source VLM baseline in 2023–2024. Source: arxiv:2310.03744.

§ 01 · Benchmarks

Every benchmark LLaVA-1.5 has a recorded score for.

#  | Benchmark | Area · Task                            | Metric   | Value | Rank | Date       | Source
01 | VQA v2.0  | Multimodal · Visual Question Answering | accuracy | 80.0% | #5/7 | 2023-10-05 | source ↗
02 | MMBench   | Multimodal · Visual Question Answering | accuracy | 67.7% | #8/8 | 2023-10-05 | source ↗
03 | TextVQA   | Multimodal · Visual Question Answering | accuracy | 61.3% | #8/9 | 2023-10-05 | source ↗
Rank column shows this model’s position vs all other models scored on the same benchmark + metric (competitors after the slash). #1 in red means current SOTA. Sorted by rank, then newest result.
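The rank rule above can be sketched in a few lines. This is a hypothetical reconstruction, not Codesota's actual code: the model names and scores below are invented, and it assumes higher metric values rank better and that the number after the slash is the total count of models scored on that benchmark + metric.

```python
# Hypothetical leaderboard rows for one benchmark + metric pair
# (accuracy, higher is better). All values here are invented.
scores = {
    "model-a": 82.5,
    "LLaVA-1.5": 80.0,
    "model-b": 81.0,
}

def rank_of(model: str, scores: dict[str, float]) -> str:
    """Rank = 1 + number of models with a strictly higher score;
    the denominator is the total number of scored models (assumed)."""
    better = sum(1 for v in scores.values() if v > scores[model])
    return f"#{better + 1}/{len(scores)}"

print(rank_of("LLaVA-1.5", scores))  # → "#3/3"
```

Ties share a rank under this rule, since only strictly higher scores push a model down.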
§ 02 · Strengths by area

Where LLaVA-1.5 actually performs.

Multimodal · 3 benchmarks · avg rank #7.0
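The avg rank figure is presumably the arithmetic mean of the per-benchmark ranks; a minimal check using the three ranks from the benchmarks table (#5, #8, #8):

```python
# Per-benchmark ranks from the table: VQA v2.0 #5, MMBench #8, TextVQA #8
ranks = [5, 8, 8]
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # → "avg rank #7.0"
```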
§ 03 · Papers

1 paper with results for LLaVA-1.5.

  1. 2023-10-05 · Multimodal · 3 results

    Improved Baselines with Visual Instruction Tuning (LLaVA-1.5)

§ 04 · Sources & freshness

Where these numbers come from.

arxiv · 3 results · 3 of 3 rows marked verified.