Purpose
Objectives
The primary objective of this project is to use the deep learning model nnU-Net to segment up to 140 anatomical structures from CT images.
Inputs and Outputs
- Input: CT images in NIfTI format (`.nii.gz`).
- Output: Segmentation masks, also in NIfTI format (`.nii.gz`), corresponding to the segmented anatomical structures.
Set up Dukeseg on your device
To build and run the containerized version of our software, follow these simple steps:
Step 0: Prepare your dataset and model weights
Get the absolute path to the directory containing all the CT images. (It does not have to be named `CT_images`; just make sure you have the absolute path.)
```
...
├── CT_images/
│   ├── Patient_1.nii.gz
│   ├── Patient_2.nii.gz
│   └── Patient_3.nii.gz
...
```
You will also need to download the model weights (contact lavsen.dahal@duke.edu) and get the absolute path to them (the `nnUNet_results` directory).
Additionally, you need to create a `.txt` file with the names of the cases you want to segment (get its absolute path as well). It is recommended to save it in the same folder as the CT images.
Caution: omit the file extensions in this list of cases.
Eg: `test_cases.txt`

```
Patient_1
Patient_2
Patient_3
...
```
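If you have many cases, the case list can also be generated from the image folder instead of typed by hand. Here is a minimal sketch (the function name and paths are illustrative placeholders, not part of this repository):

```python
from pathlib import Path

def write_case_list(ct_dir: str, out_file: str) -> list[str]:
    """Write one case name per line, stripping the .nii.gz extension,
    since the case list must not contain file extensions."""
    names = sorted(p.name[:-len(".nii.gz")]
                   for p in Path(ct_dir).iterdir()
                   if p.name.endswith(".nii.gz"))
    Path(out_file).write_text("\n".join(names) + "\n")
    return names

# Example:
# write_case_list("/absolute/path/to/CT_images",
#                 "/absolute/path/to/CT_images/test_cases.txt")
```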
Step 1: Clone this repo
```
git clone https://gitlab.oit.duke.edu/cvit-public/dukeseg_public
```
Step 2: Configure config.py
Open the `config.py` file in this repository and make the following changes.
Make sure you use absolute paths for everything.
1. Specify the file names, CT images path, and save path for segmentations

Set `FNAMES_PATH` to the location of the text file listing the CT images you want to segment.

```python
CT_BASE_PATH = '/absolute/path/to/CT_images/'  # The CT_images folder from step 0
FNAMES_PATH = '/absolute/path/to/CT_images/test_cases.txt'  # The txt file you created in step 0; recommended to keep it inside the folder with the CT images
SAVE_BASE_PATH = '/absolute/path/to/segmentations/'  # The folder where you want to save your results; recommended inside the folder with the CT images as well
```
2. Set the Model Weights Path
Update the `RESULTS_FOLDER` environment variable on line 4 to point to the directory containing the nnUNet model weights:

```python
os.environ["RESULTS_FOLDER"] = "/absolute/path/to/nnUNet_results/"  # Path to the model weights you downloaded from the developers
```
3. Choose the segmentation model

Configure `TASK_NAME` to select the segmentation model.

```python
TASK_NAME = 'model3'  # Choose among skeleton, model2, model3, bodycomposition, or dukeseg
```

Options include `skeleton`, `model2`, `model3`, and `bodycomposition`, each tailored to segment different anatomical structures. Choosing `dukeseg` runs `skeleton`, `model2`, and `model3` combined. For details on how each model maps to specific anatomical structures, refer to the `class_map.py` file.
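Since every path in `config.py` must be absolute and (apart from the save directory) must already exist, a small pre-flight check can save a failed container run. The following is an illustrative sketch, not part of the repository; the argument names simply mirror the variables in `config.py`:

```python
import os

def check_config(ct_base_path: str, fnames_path: str,
                 save_base_path: str, results_folder: str) -> list[str]:
    """Return a list of problems found with the configured paths
    (an empty list means the configuration looks usable)."""
    problems = []
    for label, path in [("CT_BASE_PATH", ct_base_path),
                        ("FNAMES_PATH", fnames_path),
                        ("RESULTS_FOLDER", results_folder)]:
        if not os.path.isabs(path):
            problems.append(f"{label} is not an absolute path: {path}")
        elif not os.path.exists(path):
            problems.append(f"{label} does not exist: {path}")
    # The save directory may not exist yet, but its path must still be absolute.
    if not os.path.isabs(save_base_path):
        problems.append(f"SAVE_BASE_PATH is not an absolute path: {save_base_path}")
    return problems
```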
Step 3: Pull and run the Docker image
Install Docker from https://docs.docker.com/engine/install/ if you don't have it installed.
First, pull the pre-built Docker image from Docker Hub by running the following command:

```
docker pull lavsendahal/dukesegv1:alpha
```
Once the Docker image has been pulled, you can run the container with GPU support and mount the necessary directories for data and code. In the command below, `/absolute/path/to/data` must be the folder containing the CT images, the text file, and the save location; similarly, `/absolute/path/to/code` is the absolute path to this repository.
Caution: If you do not correctly mount the data and code folders, the following command cannot run properly. If you are unsure about what you are doing, please refer to the Docker tutorials or documentation.
```
docker run --rm -it --gpus 'device=0' --ipc=host -v /absolute/path/to/data:/absolute/path/to/data -v /absolute/path/to/code:/absolute/path/to/code lavsendahal/dukesegv1:alpha python /absolute/path/to/code/inference.py
```
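The long `docker run` command above is easy to mistype, so one option is to assemble it programmatically and print it before running it. This is a sketch under the same assumptions as the command itself (placeholder paths, `inference.py` as the entry point shipped in this repository):

```python
import shlex

def build_docker_cmd(data_dir: str, code_dir: str,
                     image: str = "lavsendahal/dukesegv1:alpha",
                     gpu: str = "device=0") -> list[str]:
    """Assemble the docker run invocation used for inference, mounting
    the data and code directories at identical paths inside the container."""
    return [
        "docker", "run", "--rm", "-it",
        "--gpus", gpu, "--ipc=host",
        "-v", f"{data_dir}:{data_dir}",
        "-v", f"{code_dir}:{code_dir}",
        image,
        "python", f"{code_dir}/inference.py",
    ]

cmd = build_docker_cmd("/absolute/path/to/data", "/absolute/path/to/code")
print(shlex.join(cmd))  # inspect before running, e.g. via subprocess.run(cmd)
```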
Intended Audience/Users
This tool is designed for:
- Medical Professionals: Radiologists and other healthcare providers who require precise anatomical segmentation for diagnostic and therapeutic purposes.
- Researchers: Individuals conducting medical research in fields that involve anatomical studies, image processing, and computational anatomy.
OS Platform, Language, Dependencies
Platform Compatibility
The software is containerized using Docker, ensuring compatibility across any platform that can run Docker. This includes:
- Windows
- macOS
- Linux
Programming Language
The software is implemented in Python.
Dependencies
The following libraries and packages are required and included within the Docker container:
- Python Standard Libraries
- scikit-learn
- numpy
- pandas
- matplotlib
- seaborn
- scipy
- torch
- torchvision
- torchaudio
- nnunetv2
Docker Container
A Docker container with all the required libraries and packages pre-installed is made publicly available, ensuring that users can run the software with minimal setup and maintain a consistent environment across different systems.
Hardware Requirements
To ensure optimal performance and efficiency, the following hardware specifications are recommended:
- GPU Requirement: A GPU with at least 24GB of memory is recommended.
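One way to check whether a machine meets this requirement is to parse the output of `nvidia-smi`. The query flags shown in the docstring are the standard ones, but the sketch below works on a canned sample string, since actual output depends on your driver setup:

```python
def gpu_memory_mib(smi_output: str) -> list[int]:
    """Parse the output of
    `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`
    into a list of per-GPU memory sizes in MiB."""
    return [int(tok) for tok in smi_output.split() if tok]

def meets_requirement(smi_output: str, min_gib: int = 24) -> bool:
    """True only if at least one GPU is reported and every GPU
    has at least min_gib GiB of memory."""
    mems = gpu_memory_mib(smi_output)
    return bool(mems) and all(m >= min_gib * 1024 for m in mems)

# Example on a canned sample (24576 MiB = 24 GiB); on a real machine you
# would capture the string with subprocess.check_output instead.
print(meets_requirement("24576\n"))  # True
```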
License Details
The software is open-source and freely available for both personal and commercial use.
Acknowledgements
This project heavily borrows from TotalSegmentator and nnUNet. We extend our gratitude to the creators and maintainers of these projects, which have significantly influenced the development of our tool. For more details on their implementations and licenses, please visit the TotalSegmentator GitHub repository:
- TotalSegmentator: https://github.com/wasserth/TotalSegmentator
We encourage users to also consider these sources when utilizing our software, especially for academic or commercial purposes.
Contributions
Contributions to this project are welcome. We aim to foster a collaborative environment where community input is valued and integrated into ongoing development.
If you use this software or the derived data in your research or any other published work, please cite our paper to acknowledge the source and the authors' contributions. Here is the citation information:
Recommended Citation
BibTeX Entry
For those utilizing LaTeX, here is a BibTeX entry you can use to cite this paper in your documents:
```bibtex
@article{dahal2024xcat,
  title={XCAT-3.0: A Comprehensive Library of Personalized Digital Twins Derived from CT Scans},
  author={Dahal, Lavsen and Ghojoghnejad, Mobina and Vancoillie, Liesbeth and Ghosh, Dhrubajyoti and Bhandari, Yubraj and Kim, David and Ho, Fong Chi and Tushar, Fakrul Islam and Abadi, Ehsan and Samei, Ehsan and Lo, Joseph and others},
  journal={arXiv preprint arXiv:2405.11133},
  year={2024}
}
```