Model card
BRIO
Yale NLP · open-source · Unknown params · BART-large with contrastive learning objective
BRIO: Bringing Order to Abstractive Summarization. Liu et al. ACL 2022. Trains a BART-large model using a contrastive loss that assigns probability mass proportional to candidate summary quality, achieving SOTA on CNN/DM and XSum.
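The contrastive objective described above can be sketched as a pairwise margin ranking loss: candidate summaries are sorted by quality (e.g. ROUGE against the reference), and each lower-ranked candidate is pushed to score at least a rank-proportional margin below each higher-ranked one. The function below is a minimal illustrative sketch in plain Python, not the paper's implementation; the name `brio_ranking_loss` and the default margin value are assumptions, and in practice the scores would be length-normalized log-probabilities from the model.

```python
def brio_ranking_loss(scores, margin=0.001):
    """Pairwise margin ranking loss over candidate summaries.

    `scores` are model scores for candidates sorted by descending
    quality (e.g. ROUGE vs. the reference). A candidate ranked j
    should score at least margin * (j - i) below a candidate ranked i.
    This is an illustrative sketch of the loss form, not BRIO's code.
    """
    loss = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            # Hinge term: nonzero only when the worse candidate
            # out-scores the better one by more than the margin allows.
            loss += max(0.0, scores[j] - scores[i] + margin * (j - i))
    return loss
```

With correctly ordered, well-separated scores the loss is zero; a misordered pair contributes its violation plus the margin.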
§ 01 · Benchmarks
Every benchmark BRIO has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-1 | 47.8% | #1 | 2022-03-31 | source ↗ |
| 02 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-2 | 23.6% | #1 | 2022-03-31 | source ↗ |
| 03 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-l | 44.6% | #1 | 2022-03-31 | source ↗ |
| 04 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-1 | 47.8% | #2 | 2022-03-31 | source ↗ |
| 05 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-2 | 23.8% | #2 | 2022-03-31 | source ↗ |
| 06 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-l | 44.5% | #2 | 2022-03-31 | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area
Where BRIO performs best.
§ 03 · Papers
1 paper with results for BRIO.
- 2022-03-31 · Natural Language Processing · 6 results
BRIO: Bringing Order to Abstractive Summarization
§ 04 · Sources & freshness
Where these numbers come from.
arXiv · 6 results
6 of 6 rows marked verified.