NucGen3D: A Synthetic Dataset for Large-Scale 3D Nuclear Segmentation with Open-Source Training Data and Models

NucGen3D is an open dataset of realistically simulated, annotated 3D microscopy-like images of cell nuclei. It was generated using a procedural simulation framework designed to reproduce the structural and visual complexity of real fluorescence microscopy data.

The dataset provides paired 3D images and ground truth masks, enabling training and benchmarking of 3D segmentation models. It addresses the scarcity of large, well-annotated 3D datasets in bioimage analysis by offering controllable, unbiased, and reproducible training data.

📂 Dataset structure

This repository hosts:

  • The NucGen3D dataset (40 000 3D images, ≈ 10 M nuclei) together with the corresponding ground-truth label masks (see the download sketch after this list)
  • The noise templates used for augmentation and realistic image generation (Perlin and anisotropic noise)
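
To fetch everything locally, a minimal sketch using huggingface_hub is shown below. The repo id is a placeholder and should be replaced with this repository's actual name on the Hub; the target folder is likewise just an example.

from huggingface_hub import snapshot_download

# Download the dataset files to a local folder.
# NOTE: "<org>/NucGen3D" is a placeholder repo id - replace it with the id of this repository.
local_path = snapshot_download(
    repo_id="<org>/NucGen3D",
    repo_type="dataset",
    local_dir="data/nucgen3d",   # target folder for images, masks and noise templates
)
print(local_path)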

🚀 What you can do

  • Use the dataset for training or benchmarking 3D nuclear segmentation models
  • Generate new synthetic 3D images using the NucGen3D simulator (code on GitHub)
  • Augment existing data with the provided noise templates (Perlin / anisotropic) for more realistic training (a minimal sketch follows this list)
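
As a rough illustration of the last point, the sketch below blends a simulated volume with a noise template using plain NumPy. The file names are hypothetical and the additive blend is only an approximation for illustration, not the exact recipe used by SimImageNoiseDataset.

import numpy as np
import tifffile

# Load a simulated volume and a noise template (file names are hypothetical).
clean = tifffile.imread("data/simulated/images/example.tif").astype(np.float32)
noise = tifffile.imread("data/noise_aniso1/template_000.tif").astype(np.float32)
assert clean.shape == noise.shape, "crop or tile the template to match the image shape"

# Normalise both volumes to [0, 1] before blending.
clean = (clean - clean.min()) / (np.ptp(clean) + 1e-8)
noise = (noise - noise.min()) / (np.ptp(noise) + 1e-8)

# Simple additive blend with a random weight (illustrative only).
alpha = np.random.uniform(0.1, 0.5)
noisy = np.clip(clean + alpha * noise, 0.0, 1.0)
tifffile.imwrite("noisy_example.tif", noisy)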

💻 Example: loading a NucGen3D image and applying random noise

Code available on GitHub.

from torch.utils.data import DataLoader
from nucgen3d.dataset.loader import SimImageNoiseDataset

# Example use
ds = SimImageNoiseDataset(
    img_dir="data/simulated/images",                                 # directory containing simulated images
    noise_dirs=["data/noise2", "data/noise3", "data/noise_aniso1"],  # str or list[str] - directories of noise templates (.tif) - Perlin/anisotropic
    crop_size=256,                                                   # example: 256, None = full image
    z_slices=8,                                                      # example: 8 -> number of z slices
    quant_prob=0.3,                                                  # probability of applying quantization (Noisator parameter)
    background_prob=0.7,                                             # probability of adding a background component
    background_coeff_max=0.6,                                        # maximum background intensity coefficient
    readout_max=0.03,                                                # maximum readout-noise level
    random_shot=0.5,                                                 # shot-noise parameter
)

dl = DataLoader(ds, batch_size=4, shuffle=True, num_workers=0)
noisy, clean, names = next(iter(dl))
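
For completeness, here is a minimal sketch of how these batches could feed a denoising-style training step. The tiny 3D network and MSE loss are placeholders for illustration, not the released NucGen3D model, and the shape handling assumes the loader returns float tensors with or without a channel axis.

import torch
import torch.nn as nn

# Placeholder network (NOT the released NucGen3D model) - only to show the data flow.
net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for noisy, clean, names in dl:
    noisy, clean = noisy.float(), clean.float()
    # Add a channel axis if the volumes come out as (batch, z, y, x).
    if noisy.ndim == 4:
        noisy, clean = noisy.unsqueeze(1), clean.unsqueeze(1)
    pred = net(noisy)
    loss = criterion(pred, clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    break  # one step is enough for the sketch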

πŸ™ Acknowledgements

This work was developed within the RESTORE (INSERM, Université de Toulouse) and IRIT (Université de Toulouse) laboratories, as part of the ANITI program and the CALM research chair.

📄 Citation

If you use NucGen3D as part of your workflow in a scientific publication, please consider citing the paper:

@article {Grandgirard2025.10.08.681092,
  author = {Grandgirard, Emma and Dmitrasinovic, Theotime and Barreau, Corinne and Sengenes, Coralie and Serrurier, Mathieu},
  title = {NUCGEN3D: A synthetic framework for large-scale 3D nuclear segmentation with open-source training data and models},
  elocation-id = {2025.10.08.681092},
  year = {2025},
  doi = {10.1101/2025.10.08.681092},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Robust nuclear segmentation in 3D microscopy images is a critical yet unresolved challenge in quantitative cell biology, hindered by the scarcity and variability of annotated volumetric datasets. Because such data are difficult to obtain, most state-of-the-art approaches, including Cellpose, segment individual 2D slices and then heuristically reconstruct 3D volumes, thereby losing critical spatial context. Our analysis of human expert annotator performance confirms that ignoring 3D context introduces substantial variability in nuclear detection and annotation. While a few 3D models have been trained on small or toy datasets, no large-scale, openly available resource currently exists to enable robust training of high-capacity 3D segmentation networks. To address this, we present NucGen3D, a customizable simulation framework that generates large-scale, annotated 3D microscopy datasets from limited 2D input, specifically the 2018 Data Science Bowl dataset. NucGen3D produces realistic 3D volumes across diverse biological and imaging scenarios, including variations in nuclear morphology, spatial arrangement, acquisition artifacts, and imaging noise. Using this synthetic data, we trained two models from scratch: a 2D convolutional neural network under Cellpose-like conditions, and a fully 3D convolutional model that extends the 2D settings. We evaluated both on a challenging, independent real-world dataset with complex nuclear architectures. Both models, especially the 3D model, consistently outperformed state-of-the-art methods, including those trained on larger annotated datasets or based on more complex architectures. These results demonstrate that synthetic data can effectively substitute for real 3D annotations in training performing models at scale. To promote reproducibility and further research, we release both the NucGen3D framework and the fully trained 3D segmentation model as open source, making this the first end-to-end open resource for large-scale 3D nuclear segmentation.},
  URL = {https://www.biorxiv.org/content/early/2025/10/08/2025.10.08.681092},
  eprint = {https://www.biorxiv.org/content/early/2025/10/08/2025.10.08.681092.full.pdf},
  journal = {bioRxiv}
}