Model card
Swin UNETR

- Developer: NVIDIA (MONAI)
- License: open-source
- Parameters: 62M
- Architecture: Swin Transformer encoder + U-Net decoder
- Pre-training: self-supervised on 5,050 CT scans (task-agnostic)
- CVPR 2022 workshop winner
§ 01 · Benchmarks
Every benchmark Swin UNETR has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | BTCV | Medical · Medical Image Segmentation | mean-dsc | 79.1% | #4 | 2022-01-04 | arXiv |
| 02 | Synapse Multi-Organ CT | Medical · Medical Image Segmentation | mean-dsc | 79.1% | #9 | 2022-01-04 | arXiv |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 indicates current SOTA. Rows are sorted by rank, then by newest result.
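The mean-dsc metric in the table is the Dice similarity coefficient (2·|A∩B| / (|A|+|B|)) averaged over the organ classes in each benchmark. A minimal sketch of that computation on binary masks (helper names are illustrative, not from the benchmark code):

```python
def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences)."""
    inter = sum(p and g for p, g in zip(pred, gt))  # |A ∩ B|
    total = sum(pred) + sum(gt)                     # |A| + |B|
    return 1.0 if total == 0 else 2.0 * inter / total

def mean_dice(preds, gts):
    """Mean DSC across organ classes, as reported for BTCV / Synapse."""
    return sum(dice(p, g) for p, g in zip(preds, gts)) / len(preds)
```

For multi-organ CT segmentation, each class (spleen, kidney, liver, ...) yields one per-class Dice score, and the benchmark reports their unweighted mean.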
§ 03 · Papers
1 paper with results for Swin UNETR.
- 2022-01-04 · Medical · 2 results — Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis
§ 05 · Sources & freshness
Where these numbers come from.
- arXiv: 2 results (2 of 2 rows marked verified)