Codesota · Models · PEGASUS-Large
Google · 3 results · 1 benchmark
Model card

PEGASUS-Large.

Google · open-source · Unknown params · Transformer encoder-decoder (gap-sentence generation pre-training)

PEGASUS large model, presented at ICML 2020.

§ 01 · Benchmarks

Every benchmark PEGASUS-Large has a recorded score for.

#   Benchmark      Area · Task                                        Metric   Value   Rank   Date         Source
01  CNN/DailyMail  Natural Language Processing · Text Summarization   ROUGE-2  21.5%   #3/3   2019-12-18   source ↗
02  CNN/DailyMail  Natural Language Processing · Text Summarization   ROUGE-1  44.2%   #6/6   2019-12-18   source ↗
03  CNN/DailyMail  Natural Language Processing · Text Summarization   ROUGE-L  41.1%   #6/6   2019-12-18   source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
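The ranking rule described above can be sketched in a few lines. This is an illustrative sketch only, not Codesota's actual pipeline; the function name and the competitor scores other than PEGAS­US-Large's 21.5% ROUGE-2 are hypothetical.

```python
def rank_on_benchmark(scores, model):
    """Return (rank, total) for `model` among `scores` (name -> metric value),
    where a higher metric value is better."""
    ordered = sorted(scores, key=lambda name: scores[name], reverse=True)
    return ordered.index(model) + 1, len(ordered)

# Hypothetical ROUGE-2 scores on one benchmark + metric pair:
scores = {"Model A": 22.1, "PEGASUS-Large": 21.5, "Model B": 21.9}
rank, total = rank_on_benchmark(scores, "PEGASUS-Large")
print(f"#{rank}/{total}")  # -> #3/3
```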
§ 02 · Strengths by area

Where PEGASUS-Large actually performs.

Natural Language Processing · 1 benchmark · avg rank #5.0
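The avg rank figure follows from the three rank positions in the benchmark table (#3, #6, #6), assuming it is a simple arithmetic mean:

```python
# Mean of the three rank positions from the benchmark table above.
ranks = [3, 6, 6]
avg_rank = sum(ranks) / len(ranks)
print(avg_rank)  # -> 5.0
```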
§ 03 · Papers

1 paper with results for PEGASUS-Large.

  1. 2019-12-18 · Natural Language Processing · 3 results

    PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

§ 04 · Related models

Other Google models scored on Codesota.

Gemini 2.5 Pro · 16 results · 3 SOTA
Gemini 3 Pro · Undisclosed params · 13 results · 2 SOTA
Gemini 1.5 Pro · 12 results · 1 SOTA
Gemini 3.1 Pro · 3 results · 1 SOTA
ViT-H/14 · 632M params · 2 results · 1 SOTA
CoCa (finetuned) · 2.1B params · 1 result · 1 SOTA
Gemini 2.0 Flash · 1 result · 1 SOTA
Gemini 3.1 Pro Preview · 1 result · 1 SOTA
§ 05 · Sources & freshness

Where these numbers come from.

arXiv · 3 results
3 of 3 rows marked verified.