---
license: apache-2.0
tags:
  - medical-imaging
  - image-registration
  - torchscript
  - impact
  - pretrained
  - segmentation
---

# 🧠 TorchScript Models for the IMPACT Semantic Similarity Metric

This repository provides a collection of TorchScript-exported pretrained models designed for use with the IMPACT similarity metric, enabling semantic medical image registration through feature-level comparison.

The IMPACT metric is introduced in the following preprint, currently under review:

IMPACT: A Generic Semantic Loss for Multimodal Medical Image Registration
V. Boussot, C. Hémon, J.-C. Nunes, J. Dowling, S. Rouzé, C. Lafond, A. Barateau, J.-L. Dillenseger
arXiv:2503.24121 [cs.CV]

🔧 The full implementation of IMPACT, along with its integration into the Elastix framework, is available in the repository:
➡️ github.com/vboussot/ImpactLoss

This repository also includes example parameter maps, TorchScript model handling utilities, and a ready-to-use Docker environment for quick experimentation and reproducibility.


## 📚 Pretrained Models

The TorchScript models provided in this repository were exported from publicly available pretrained networks. These include:

- TotalSegmentator (TS) — U-Net models trained for full-body anatomical segmentation
- Segment Anything 2.1 (SAM2.1) — Foundation model for segmentation of natural images
- DINOv2 — Self-supervised vision transformer trained on diverse datasets
- Anatomix — Transformer-based model with anatomical priors for medical images
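As a minimal sketch of the export mechanics behind these files, the snippet below scripts a tiny multi-layer feature extractor with TorchScript. The `TinyEncoder` module, its layer sizes, and the input shape are illustrative stand-ins, not the architecture of any model in this repository; the actual models wrap the pretrained networks listed above.

```python
import torch
import torch.nn as nn

# Hypothetical toy encoder: returns one tensor per feature-extraction layer,
# mirroring how a multi-layer extractor exposes intermediate features.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 4, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(4, 8, kernel_size=3, padding=1)

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))   # shallow feature map
        f2 = torch.relu(self.conv2(f1))  # deeper feature map
        return f1, f2                    # one tensor per feature level

# Compile to TorchScript; the resulting module could be saved with
# scripted.save("model.pt") and later reloaded via torch.jit.load.
scripted = torch.jit.script(TinyEncoder())
feats = scripted(torch.randn(1, 1, 8, 8, 8))
print([tuple(f.shape) for f in feats])
```

With `padding=1` and 3×3×3 kernels the spatial dimensions are preserved, so the two feature maps differ only in channel count.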

Each model exposes multiple feature-extraction layers, which can be selected independently through the LayerMask parameter in the IMPACT configuration.
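For illustration, layer selection in an Elastix parameter map might look like the sketch below. Only `LayerMask` is named in this README; the other keys are hypothetical placeholders, so consult the example parameter maps in the ImpactLoss repository for the exact syntax.

```
// Hypothetical excerpt of an Elastix parameter map using IMPACT.
// "Metric" and "ModelPath" are placeholder keys; only LayerMask is
// documented in this README.
(Metric "IMPACT")
(ModelPath "models/M730.pt")
(LayerMask "0 1 1 0")  // enable only the 2nd and 3rd feature layers
```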

The repository also includes:

- MIND — A handcrafted Modality Independent Neighborhood Descriptor, wrapped in TorchScript

| Model | Specialization | Paper / Reference | Field of View | License |
|---|---|---|---|---|
| MIND | Handcrafted descriptor | Heinrich et al., 2012 | 2r + 1 | Research only |
| SAM2.1 | General segmentation (natural images) | Ravi et al., 2024 | 29 | MIT |
| TS Models | Multi-resolution CT/MRI segmentation | Wasserthal et al., 2022 | 2^l + 3 | Apache 2.0 |
| Anatomix | Anatomy-aware transformer encoder | Dey et al., 2024 | Hierarchical | MIT |
| DINOv2 | Self-supervised vision transformer | Oquab et al., 2023 | Global / ViT-Base | MIT |

πŸ” TS Model Variants

TS Models refer to the following TotalSegmentator-derived TorchScript models:
M258, M291, M293, M294, M295, M297, M298, M730, M731, M732, M733, M850, M851

Each model is specialized for a specific anatomical structure or resolution (e.g., 3 mm / 6 mm); all variants share the same encoder-decoder architecture.