Codesota · Models · GPT-2-Medium (fine-tuning) · OpenAI · 5 results · 1 benchmark
Model card

GPT-2-Medium (fine-tuning)

OpenAI · release date unknown · 355M params · Transformer

GPT-2 Medium fine-tuned on the E2E NLG Challenge dataset. Results reported in the HTLM paper (arXiv:2107.06955).
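The E2E NLG Challenge pairs a structured meaning representation (MR) such as `name[The Vaults], eatType[pub]` with a natural-language reference, so fine-tuning a causal LM like GPT-2 typically starts by serializing each MR into a flat prompt. A minimal sketch of that preprocessing step, assuming the standard E2E `field[value]` format (the helper names and prompt layout here are illustrative, not from the paper):

```python
import re

def parse_mr(mr: str) -> dict:
    """Parse an E2E-style meaning representation, e.g.
    'name[The Vaults], eatType[pub]' -> {'name': 'The Vaults', 'eatType': 'pub'}."""
    return {m.group(1).strip(): m.group(2)
            for m in re.finditer(r"([\w ]+)\[([^\]]*)\]", mr)}

def mr_to_prompt(mr: str) -> str:
    """Serialize the MR fields as a flat prompt for a causal LM."""
    fields = parse_mr(mr)
    body = " | ".join(f"{k}: {v}" for k, v in fields.items())
    return f"{body}\nDescription:"

example = "name[The Vaults], eatType[pub], priceRange[more than £30]"
print(mr_to_prompt(example))
```

During fine-tuning, the reference sentence would be appended after the `Description:` marker and the model trained with the usual language-modeling loss.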

§ 01 · Benchmarks

Every benchmark GPT-2-Medium (fine-tuning) has a recorded score for.

#    Benchmark   Area · Task                                            Metric    Value   Rank   Date         Source
01   e2e         Natural Language Processing · Data-to-Text Generation  cider     2.5     #2/9   2021-07-14   source ↗
02   e2e         Natural Language Processing · Data-to-Text Generation  meteor    46.2%   #2/9   2021-07-14   source ↗
03   e2e         Natural Language Processing · Data-to-Text Generation  rouge-l   71.0%   #4/9   2021-07-14   source ↗
04   e2e         Natural Language Processing · Data-to-Text Generation  bleu      68.2%   #6/9   2021-07-14   source ↗
05   e2e         Natural Language Processing · Data-to-Text Generation  nist      8.6     #6/9   2021-07-14   source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (the number after the slash is the total). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
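BLEU, one of the metrics above, scores n-gram overlap between a generated sentence and a reference, scaled by a brevity penalty. The reported numbers come from standard evaluation scripts, not a re-implementation, but a self-contained sketch of corpus-level BLEU-4 (geometric mean of modified 1–4-gram precisions times the brevity penalty, single reference per hypothesis) shows the mechanics:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypotheses, references, max_n=4):
    """Corpus BLEU with one reference per hypothesis (lists of token lists)."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            clipped[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            totals[n - 1] += sum(h.values())
    if min(clipped) == 0:
        return 0.0  # some n-gram order has no matches (or corpus too short)
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

A hypothesis identical to its reference scores 1.0; real leaderboard numbers additionally depend on tokenization and multi-reference handling, which standard scripts fix.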
§ 02 · Strengths by area

Where GPT-2-Medium (fine-tuning) actually performs.

Natural Language Processing
1 benchmark · avg rank #4.0
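The average rank above is just the mean of the five per-metric ranks in the § 01 table; a quick arithmetic check:

```python
# Rank column from § 01, in row order: cider, meteor, rouge-l, bleu, nist
ranks = [2, 2, 4, 6, 6]
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # avg rank #4.0
```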
§ 03 · Papers

1 paper with results for GPT-2-Medium (fine-tuning).

  1. 2021-07-14 · Natural Language Processing · 5 results

    HTLM: Hyper-Text Pre-Training and Prompting of Language Models

§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o · Undisclosed params · 35 results · 9 SOTA
o3 · 16 results · 5 SOTA
o4-mini · 13 results · 3 SOTA
o3 (high) · 2 results · 1 SOTA
o4-mini (high) · 1 result · 1 SOTA
o1 · 11 results
GPT-5 · 8 results
o1-preview · Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

papers-with-code · 5 results
5 of 5 rows marked verified.