Model card
CodeBERT
Microsoft · open-source · 125M params · BERT architecture
Pre-trained model for programming and natural languages. 125M parameters.
§ 01 · Benchmarks
Every benchmark with a recorded score for CodeBERT.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Bugs2Fix | Computer Code · Bug Detection | accuracy | 62.5% | #6 | 2020-02-19 | source ↗ |
| 02 | CodeSearchNet | Computer Code · Code Documentation Generation | bleu-4 | 17.6% | #7 | 2020-02-19 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
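BLEU-4, the metric for the CodeSearchNet row, scores generated text against a reference by n-gram overlap. A minimal sketch of sentence-level BLEU-4 (uniform weights, single reference) — note the leaderboard number is corpus-level, so this is illustrative only:

```python
import math
from collections import Counter

def bleu4(candidate, reference):
    """Sentence-level BLEU-4: geometric mean of modified 1- to 4-gram
    precisions, multiplied by a brevity penalty. Uniform weights."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        total = max(sum(cand.values()), 1)
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(clipped / total)

    if min(precisions) == 0:
        return 0.0  # geometric mean collapses if any precision is zero
    log_avg = sum(math.log(p) for p in precisions) / 4
    # Brevity penalty punishes candidates shorter than the reference.
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

Usage: `bleu4("returns the sum of two ints".split(), "returns the sum of two ints".split())` yields 1.0 for an exact match.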
§ 02 · Strengths by area
Where CodeBERT performs best.
§ 03 · Papers
1 paper with results for CodeBERT.
- 2020-02-19 · Computer Code · 2 results — CodeBERT: A Pre-Trained Model for Programming and Natural Languages
§ 04 · Related models
Other Microsoft models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 2 results
2 of 2 rows marked verified.