Brain MRI Synthesis with DDPM (64x64)
This is a diffusion-based model for unconditional generation of 64x64-pixel brain MRI FLAIR slices.
The model follows the DDPM architecture, with attention blocks in the middle of the U-Net.
It was trained from scratch on a dataset of brain MRI slices and is intended for generating synthetic brain images.
Training Details
- Architecture: DDPM (Denoising Diffusion Probabilistic Model)
- Resolution: 64x64 pixels
- Dataset: Lesion2D VH splitted (FLAIR MRI slices), 70% of which was used for training
- Channels: 1 (grayscale, FLAIR modality)
- Epochs: 50
- Batch size: 32
- Optimizer: AdamW with a learning rate of 1.0e-4 (see the training sketch after this list)
- Scheduler: Cosine with 500 warm-up steps
- Gradient Accumulation: 1 step
- Mixed Precision: No
- Hardware: trained on a single NVIDIA GeForce GTX 1080 Ti GPU (12 GB)
- Memory Consumption: around 7 GB of GPU memory during training
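For reference, here is a minimal training sketch consistent with the hyperparameters above, using PyTorch and the `diffusers` API. It is an illustration, not the exact training script: the dataloader is a random-tensor placeholder standing in for the Lesion2D VH FLAIR slices, the 1,000 diffusion timesteps are the `DDPMScheduler` default rather than a value stated on this card, and the U-Net is built with default block settings here (the actual block configuration is listed in the U-Net Architecture section below).

```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler
from diffusers.optimization import get_cosine_schedule_with_warmup

# Placeholder data: stand-in for a DataLoader over 1-channel 64x64 FLAIR
# slices normalized to [-1, 1]; replace with the real dataset.
train_dataloader = [torch.randn(32, 1, 64, 64) for _ in range(4)]

# Default block layout here; the card's configuration is given below.
model = UNet2DModel(sample_size=64, in_channels=1, out_channels=1)

# 1,000 timesteps is the DDPMScheduler default (not stated on this card).
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)

optimizer = torch.optim.AdamW(model.parameters(), lr=1.0e-4)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=len(train_dataloader) * 50,  # 50 epochs
)

for epoch in range(50):
    for clean_images in train_dataloader:
        # Sample noise and a random timestep for each image in the batch
        noise = torch.randn_like(clean_images)
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],)
        )

        # Forward diffusion: add noise according to the sampled timesteps
        noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

        # Predict the added noise and regress it with an MSE loss
        noise_pred = model(noisy_images, timesteps).sample
        loss = F.mse_loss(noise_pred, noise)

        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```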
U-Net Architecture
- Down Blocks: [DownBlock2D, DownBlock2D, AttnDownBlock2D, DownBlock2D]
- Up Blocks: [UpBlock2D, AttnUpBlock2D, UpBlock2D]
- Layers per Block: 2
- Block Channels: [128, 128, 256, 512] (see the configuration sketch below)
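The block layout above maps directly onto the `diffusers` `UNet2DModel` constructor. The sketch below is an assumed instantiation, not the exported config: in particular, a fourth `UpBlock2D` is added because `diffusers` requires the up path to have the same number of blocks as the down path, even though only three up blocks are listed above.

```python
from diffusers import UNet2DModel

# Assumed instantiation matching the block layout listed above.
model = UNet2DModel(
    sample_size=64,                 # 64x64 input slices
    in_channels=1,                  # single-channel FLAIR
    out_channels=1,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 512),
    down_block_types=(
        "DownBlock2D", "DownBlock2D", "AttnDownBlock2D", "DownBlock2D"
    ),
    up_block_types=(
        # A fourth UpBlock2D is assumed so the up path mirrors the down path.
        "UpBlock2D", "AttnUpBlock2D", "UpBlock2D", "UpBlock2D"
    ),
)

print(sum(p.numel() for p in model.parameters()))  # rough parameter count
```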
Usage
You can use the model directly with the `diffusers` library:
```python
from diffusers import DDPMPipeline
import torch

# Load the pretrained pipeline from the Hugging Face Hub
pipeline = DDPMPipeline.from_pretrained("benetraco/brain_ddpm_64")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Generate an image
image = pipeline(batch_size=1).images[0]

# Display the image
image.show()
```
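The pipeline call also accepts a `generator` for reproducible sampling and a larger `batch_size`. A short sketch (the output file names are only illustrative):

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("benetraco/brain_ddpm_64")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Seed the sampler so the same slices are produced on every run
generator = torch.Generator(device=pipeline.device).manual_seed(42)

# Generate a small batch and save each synthetic FLAIR slice to disk
images = pipeline(batch_size=4, generator=generator).images
for i, image in enumerate(images):
    image.save(f"flair_sample_{i}.png")  # illustrative file name
```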