Model card
PolyglotCodeBERT
UC Davis · code-lm · 125M params · Transformer encoder
Multilingual CodeBERT pre-trained on 6+ programming languages. From arXiv:2112.02043.
§ 01 · Benchmarks
Every benchmark PolyglotCodeBERT has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | codesearchnet-java | Code · Code Summarization | smoothed-BLEU-4 | 20.1% | #4 | — | source ↗ |
The Rank column shows this model’s position among all other models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
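The table’s metric, smoothed BLEU-4, is the sentence-level BLEU variant typically reported for code summarization. A minimal sketch of how such a score can be computed, assuming the common +1 smoothing on higher-order n-gram precisions (Lin & Och style); the exact smoothing used by the leaderboard is not stated in this card:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu4(candidate, reference):
    """Sentence-level BLEU-4 with +1 smoothing on n-gram precisions
    for n >= 2 (an assumption; one common smoothing scheme)."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if n == 1:
            p = overlap / total
        else:
            p = (overlap + 1) / (total + 1)    # +1 smoothing avoids zero precision
        precisions.append(p)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / 4
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(log_avg)

cand = "returns the sum of two numbers".split()
ref = "returns the sum of two integers".split()
score = smoothed_bleu4(cand, ref)  # a value in [0, 1]; ×100 for the table's % scale
```

An identical candidate and reference score 1.0; the table’s 20.1% would correspond to a score of about 0.201 on this scale.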
§ 02 · Strengths by area
Where PolyglotCodeBERT actually performs.
§ 03 · Sources & freshness
Where these numbers come from.
codexglue-leaderboard — 1 result · 1 of 1 rows marked verified.