---
tags:
- medical
license: other
license_name: research-only-rail-m
model-index:
- name: Curia
  results:
  - task:
      type: classification
    dataset:
      type: CuriaBench
      name: CuriaBench Anatomy Recognition
    metrics:
    - name: Accuracy
      type: accuracy
      value: 98.1
datasets:
- raidium/CuriaBench
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/JaS4YslW9wFR8dZ7LMawz.png" width="40%" alt="Raidium" />
</div>
<hr>
<p align="center">
<a href="https://github.com/raidium-med/curia"><b>🌟 Github</b></a> |
<a href="https://arxiv.org/abs/2509.06830"><b>📄 Paper Link</b></a> |
<a href="https://raidium.eu/blog.html#post-curia-foundation-model"><b>🌐 Blog post</b></a>
</p>
<h1 align="center">Curia: A Multi-Modal Foundation Model for Radiology</h1>
We introduce Curia, a foundation model trained on the entire cross-sectional imaging output
of a major hospital over several years—which to our knowledge is the largest such corpus of
real-world data—encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark,
Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions,
and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent
foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes.
Read the full paper here: https://arxiv.org/abs/2509.06830
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/BzxEbRLYX2pbRV_Oev-Ze.png" width="60%" alt="Results" />
</div>
## Loading the model
To load the model, use the `AutoModel` class from the Hugging Face `transformers` library:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("raidium/curia")
```
You can also load the image pre-processor:
```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("raidium/curia", trust_remote_code=True)
```
Then, to run a forward pass on an image:
```python
import numpy as np

# Dummy single axial slice in PL orientation, with values in the Hounsfield-unit range.
img = np.random.uniform(-1024, 1024, size=(256, 256))
model_input = processor(img)
features = model(**model_input)
```
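The exact structure of the returned `features` object depends on the model implementation. As a minimal sketch, assuming the backbone follows the standard `transformers` convention of exposing a `last_hidden_state` tensor (an assumption based on the DINOv2-style architecture, not something this card states), a global embedding could be extracted like this:
```python
# Sketch only: `last_hidden_state` and the CLS-token convention are assumptions;
# verify against the actual output object returned by the model.
cls_embedding = features.last_hidden_state[:, 0]  # (batch, hidden_dim)
print(cls_embedding.shape)
```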
The image must be provided in the following format:
```
input: numpy array of shape (H, W)
Images need to be in the following orientation:
- PL for axial slices
- IL for coronal slices
- IP for sagittal slices
For CT: no windowing; raw Hounsfield units or a normalized image.
For MRI: likewise, no windowing; raw values or a normalized image.
```
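As a concrete illustration, here is a minimal sketch of preparing an axial CT slice from a NIfTI volume. The use of `nibabel`, the file path, and the assumption that the volume is already stored in PL orientation are illustrative, not prescribed by this card:
```python
import nibabel as nib
import numpy as np

# Hypothetical path; the volume is assumed to already be in axial (PL) orientation.
volume = nib.load("ct_scan.nii.gz")
data = volume.get_fdata()  # raw Hounsfield units, no windowing applied

# Take the middle axial slice as an (H, W) array.
axial_slice = data[:, :, data.shape[2] // 2].astype(np.float32)

model_input = processor(axial_slice)
features = model(**model_input)
```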
## Loading the model with classification heads
The following heads are available:
```
abdominal-trauma
anatomy-ct
anatomy-mri
atlas-stroke
covidx-ct
deep-lesion-site
emidec-classification-mask
ich
ixi
kits
kneeMRI
luna16-3D
neural_foraminal_narrowing
oasis
spinal_canal_stenosis
subarticular_stenosis
```
To load a head, pass its name as the `subfolder` argument when loading the model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
processor = AutoImageProcessor.from_pretrained("raidium/curia", trust_remote_code=True)
model = AutoModelForImageClassification.from_pretrained(
"raidium/curia", subfolder="anatomy-ct", trust_remote_code=True
)
```
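Once a head is loaded, inference follows the usual `transformers` classification pattern. Below is a minimal sketch, assuming the head returns standard classification logits and that the config's `id2label` mapping is populated (both are assumptions, not stated in this card):
```python
import numpy as np
import torch

# Dummy axial CT slice in PL orientation (Hounsfield-unit range).
img = np.random.uniform(-1024, 1024, size=(256, 256))
model_input = processor(img)

with torch.no_grad():
    outputs = model(**model_input)

predicted = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted])  # assumes id2label is populated
```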
## License
The model is released under the RESEARCH-ONLY RAIL-M license.
https://huggingface.co/raidium/curia/blob/main/LICENSE
## Cite our paper
```
@article{dancette2025curia,
title={Curia: A Multi-Modal Foundation Model for Radiology},
author={Dancette, Corentin and Khlaut, Julien and Saporta, Antoine and Philippe, Helene and Ferreres, Elodie and Callard, Baptiste and Danielou, Th{\'e}o and Alberge, L{\'e}o and Machado, L{\'e}o and Tordjman, Daniel and others},
journal={arXiv preprint arXiv:2509.06830},
year={2025}
}
```