Model card

nnFormer

Zhou et al. · open-source · 150M params · Interleaved Transformer for Volumetric Segmentation

Interleaved local and global self-attention for volumetric medical image segmentation.

§ 01 · Benchmarks

Every benchmark for which nnFormer has a recorded score.

#  | Benchmark              | Area · Task                          | Metric   | Value | Rank  | Date       | Source
01 | ACDC                   | Medical · Medical Image Segmentation | mean-dsc | 90.2% | #5/6  | 2021-09-07 | source ↗
02 | Synapse Multi-Organ CT | Medical · Medical Image Segmentation | mean-dsc | 80.9% | #8/11 | 2021-09-07 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
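The mean-dsc metric in the table is the mean Dice similarity coefficient over foreground classes, a standard overlap score for segmentation. A minimal sketch of how such a score can be computed (function names and the per-class averaging convention are assumptions, not Codesota’s or nnFormer’s actual evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    # 2|A∩B| / (|A| + |B|); eps guards against empty masks
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mean_dsc(preds, targets, num_classes):
    """mean-dsc: average per-class Dice over foreground classes 1..num_classes-1."""
    scores = [dice_score(preds == c, targets == c) for c in range(1, num_classes)]
    return float(np.mean(scores))
```

For example, `mean_dsc` over label maps `[0, 1, 1, 2]` and `[0, 1, 2, 2]` averages a per-class Dice of 2/3 for each foreground class.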
§ 02 · Strengths by area

Where nnFormer actually performs.

Medical · 2 benchmarks · avg rank #6.5
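The avg rank figure is simply the mean of the Rank column over that area’s benchmarks: (5 + 8) / 2 = 6.5 here. A quick sketch under that assumption (the row dicts are hypothetical, mirroring the § 01 table):

```python
from collections import defaultdict

# Hypothetical rows copied from the § 01 benchmark table.
rows = [
    {"area": "Medical", "benchmark": "ACDC", "rank": 5},
    {"area": "Medical", "benchmark": "Synapse Multi-Organ CT", "rank": 8},
]

ranks_by_area = defaultdict(list)
for row in rows:
    ranks_by_area[row["area"]].append(row["rank"])

# (benchmark count, average rank) per area
summary = {a: (len(r), sum(r) / len(r)) for a, r in ranks_by_area.items()}
# summary["Medical"] == (2, 6.5)
```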
§ 03 · Papers

1 paper with results for nnFormer.

  1. 2021-09-07 · Medical · 2 results

     nnFormer: Interleaved Transformer for Volumetric Segmentation

§ 04 · Sources & freshness

Where these numbers come from.

arXiv · 2 results · 2 of 2 rows marked verified.