Model card
HuBERT Large (LS-960)
Meta AI · open-source · 317M params · CNN + Transformer (BERT-style)
HuBERT Large, fine-tuned on LibriSpeech 960h. Pre-trained by self-supervised learning via masked prediction of hidden units.
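The masked-prediction objective can be sketched roughly as follows. This is a toy NumPy illustration, not the fairseq implementation: the frame count, cluster-vocabulary size, and masked span are all assumed for the example; in HuBERT the targets come from offline k-means clustering of audio features.

```python
import numpy as np

def masked_prediction_loss(logits, targets, mask):
    """Cross-entropy over masked frames only, HuBERT-style
    masked prediction of discrete hidden units (toy sketch).

    logits:  (T, K) predicted scores over K cluster units per frame
    targets: (T,)   pseudo-labels, e.g. from offline k-means
    mask:    (T,)   True where the input frame was masked
    """
    # log-softmax over the K cluster units
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # negative log-likelihood of the target unit, masked frames only
    nll = -log_probs[np.arange(len(targets)), targets]
    return nll[mask].mean()

rng = np.random.default_rng(0)
T, K = 50, 100                      # frames, cluster-vocabulary size (assumed)
logits = rng.normal(size=(T, K))    # stand-in for Transformer outputs
targets = rng.integers(0, K, size=T)
mask = np.zeros(T, dtype=bool)
mask[10:14] = True                  # one masked span of 4 frames (illustrative)
print(masked_prediction_loss(logits, targets, mask))
```

The loss is computed only on masked positions, which forces the model to infer the hidden units of masked audio from surrounding context.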
§ 01 · Benchmarks
Every benchmark HuBERT Large (LS-960) has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | LibriSpeech | Speech · Speech Recognition | wer-test-clean | 1.9% | #3 | 2021-06-14 | source ↗ |
| 02 | LibriSpeech | Speech · Speech Recognition | wer-test-other | 3.6% | #5 | 2021-06-14 | source ↗ |
The Rank column shows this model's position among all other models scored on the same benchmark + metric. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
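The metric in both rows is word error rate (WER): word-level edit distance divided by the number of reference words. A minimal sketch of the computation, with hypothetical example strings:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance
    (substitutions + insertions + deletions) over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# ~0.167: one deleted word over six reference words
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A WER of 1.9% on test-clean therefore means roughly 19 word errors per 1,000 reference words.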
§ 03 · Papers
1 paper with results for HuBERT Large (LS-960).
- 2021-06-14 · Speech · 2 results
  HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
§ 04 · Related models
Other Meta AI models scored on Codesota.
- GENRE · 1 result · 1 SOTA
- SeamlessM4T v2 Large · 2.3B params · 1 result · 1 SOTA
- DINOv2 (ViT-g) + Linear · unknown params · 1 result
- Fairseq S2T (MuST-C) · ~150M params · 1 result
- Mask2Former (Swin-L) · unknown params · 1 result
- MusicGen Large · 3.3B params · 1 result
- Voicebox · 330M params · 1 result
- convnext_base.fb_in22k_ft_in1k · 1 result
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 2 results
2 of 2 rows marked verified.