---
tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  - medical
license: apache-2.0
language:
  - en
metrics:
  - Dice
  - Jaccard
  - 95HD
  - ASD
pipeline_tag: image-segmentation
library_name: pytorch
---

This model has been pushed to the Hub using the PytorchModelHubMixin integration.
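
As a minimal sketch of what the mixin enables, the checkpoint can be loaded directly from the Hub with `from_pretrained()`. The import path and class name below are placeholders for illustration; use the network class actually defined in this repository's code.

```python
# Minimal sketch: load the checkpoint via PyTorchModelHubMixin.
# NOTE: the import path and class name are placeholders; substitute the
# network class defined in this repository's code.
from networks.railnet import RAILNet  # hypothetical import

model = RAILNet.from_pretrained(
    "Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image"
)
model.eval()  # switch to inference mode
```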

To use the model in this repository, follow these steps:

  1. Clone this repository and change into its directory:
    git clone https://huggingface.co/Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image
    cd railNet-tooth-segmentation-in-CBCT-image
    
  2. Create a conda environment, install the dependencies, and launch the Gradio app:
    conda create -n railnet python=3.10
    conda activate railnet
    pip install -r requirements.txt
    python gradio_app.py
    
  3. In the current working directory, open the example_input_file folder.
    Select any .h5 file from this folder and drag it into the Gradio interface to run model inference (to peek inside an .h5 volume first, see the first sketch after this list).
  4. After roughly 1 to 2.5 minutes, inference completes and the interface shows the segmentation result together with a 3D rendering.
    Both the original image and the segmentation result are saved in .nii.gz format in the output folder of the same directory.
  5. Because Gradio downsamples the 3D segmentation visualization by a factor of 2, the rendering shown in the interface is coarser than the actual segmentation.
    To view the segmentation at full resolution, drag the .nii.gz files from the output folder into ITK-SNAP (or load them programmatically, as shown in the second sketch below).
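
For step 3, here is a minimal sketch of peeking inside one of the example .h5 volumes before dragging it into the interface. The filename below is a placeholder, and the dataset keys are not guaranteed; the script simply prints whatever keys the file actually contains.

```python
# Sketch: inspect an example .h5 volume with h5py.
# "example_input_file/example.h5" is a placeholder filename; pick any real
# .h5 file from the example_input_file folder. Assumes each top-level key
# is a dataset rather than a group.
import h5py

with h5py.File("example_input_file/example.h5", "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
```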
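
For steps 4 and 5, this sketch loads the saved .nii.gz files programmatically, for example to check the volume shape, voxel spacing, or mask size at full resolution. The filenames are placeholders; use the names actually written to the output folder.

```python
# Sketch: load the full-resolution .nii.gz outputs with nibabel.
# Filenames are placeholders; check the output folder for the actual names.
import nibabel as nib
import numpy as np

image = nib.load("output/image.nii.gz")        # original CBCT volume
seg = nib.load("output/segmentation.nii.gz")   # predicted tooth mask

print("volume shape:", image.shape)
print("voxel spacing:", image.header.get_zooms())
print("segmented voxels:", int(np.count_nonzero(seg.get_fdata())))
```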