Segmentation in radiology is crucial for many applications, such as volumetry, visualization, 3D printing, and radiotherapy. However, it is often seen as tedious and uninteresting. Recent advances in AI have made automated and interactive segmentation practical, and we believe this technology is ready for widespread clinical use. MedSeg itself is not intended for direct clinical use, but we hope that our open-access segmentation database will accelerate the development of clinically validated models. We believe that segmentation should be simple, fun, and accessible to everyone. If you share our vision, please consider helping us annotate more data or collaborating with us.

MedSeg is based on the following openly available tools and resources:
DICOM reader: https://github.com/rii-mango/Daikon
NIfTI reader: https://github.com/rii-mango/NIFTI-Reader-JS
Deep learning (DL) in browser: https://www.tensorflow.org/js (a minimal in-browser inference sketch follows this list)
Development of DL models: https://keras.io/ + https://www.tensorflow.org/
DeepGrow module: https://arxiv.org/abs/1903.08205
Ideas and experience from RILContour, a Python-based segmentation tool with AI capabilities: https://link.springer.com/article/10.1007/s10278-019-00232-0
3D Slicer, another great application with many advanced post-processing tools: https://www.slicer.org/
Overview of web-based DICOM viewers: https://medevel.com/14-best-browser-web-based-dicom-viewers-projects/
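To make the in-browser pipeline referenced above more concrete, the sketch below loads a NIfTI volume with NIFTI-Reader-JS and segments a single slice with a TensorFlow.js model. This is a minimal illustration under stated assumptions, not MedSeg's actual code: the model URL, the function name, the single-channel 2D input shape, and the int16 voxel type are all assumptions for the example, and the library calls follow the public documentation of NIFTI-Reader-JS and TensorFlow.js. Daikon would play the analogous role for DICOM input.

```ts
// Minimal sketch: load a NIfTI volume and segment its middle slice in the browser.
// Assumes a 2D segmentation model exported as a TensorFlow.js graph model at
// MODEL_URL (hypothetical path) expecting a single-channel input of shape [1, H, W, 1].

import * as tf from '@tensorflow/tfjs';
import * as nifti from 'nifti-reader-js';

const MODEL_URL = '/models/example-seg/model.json'; // hypothetical model location

async function segmentMiddleSlice(buffer: ArrayBuffer): Promise<Uint8Array> {
  // Decompress .nii.gz if needed, then parse the header and voxel data.
  let data: ArrayBuffer = buffer;
  if (nifti.isCompressed(data)) {
    data = nifti.decompress(data);
  }
  if (!nifti.isNIFTI(data)) {
    throw new Error('Not a NIfTI file');
  }
  const header = nifti.readHeader(data);
  const imageBuffer = nifti.readImage(header, data);

  // dims[1..3] hold the spatial dimensions; int16 voxels assumed for brevity.
  const [width, height, depth] = [header.dims[1], header.dims[2], header.dims[3]];
  const voxels = new Int16Array(imageBuffer);

  // Extract the middle axial slice and normalize it to [0, 1].
  const sliceSize = width * height;
  const mid = Math.floor(depth / 2);
  const slice = Float32Array.from(voxels.subarray(sliceSize * mid, sliceSize * (mid + 1)));
  const max = slice.reduce((a, b) => Math.max(a, b), 1);
  const input = tf.tensor(slice, [1, height, width, 1]).div(max);

  // Run the model and take the arg-max over the class dimension to get a label mask.
  const model = await tf.loadGraphModel(MODEL_URL);
  const logits = model.predict(input) as tf.Tensor;
  const mask = logits.argMax(-1);
  const labels = Uint8Array.from(await mask.data());

  tf.dispose([input, logits, mask]);
  return labels; // one class label per pixel of the slice
}
```

Running inference client-side like this keeps the image data in the user's browser rather than sending it to a server, which is one reason a TensorFlow.js-based approach is attractive for medical images.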