Value Proposition
· Higher accuracy: Uses a novel organ-attention mechanism to reduce false positives and improve organ boundaries
· Integrated view predictions: Utilizes axial, sagittal, and coronal images to produce robust and consistent organ labels (see the fusion sketch after this list)
· Rapid approach: Runs on accessible GPU hardware rather than more powerful but slower alternatives
· Highly interpretable: Provides visual cues that help clinicians understand the algorithm's choices
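The integrated multi-view prediction can be illustrated with a minimal sketch. The per-voxel majority vote below is a simplifying assumption for illustration only; the disclosed technology combines the axial, sagittal, and coronal predictions with a statistical fusion step, and the function name and inputs here are hypothetical.

    import numpy as np

    def fuse_views(axial_labels, sagittal_labels, coronal_labels, num_organs):
        """Fuse per-view organ label volumes by per-voxel majority vote.

        Each input is an integer volume of shape (D, H, W) holding one organ
        index per voxel, already resampled into a common reference frame.
        A plain majority vote is an illustrative stand-in for the disclosed
        method's statistical fusion.
        """
        views = np.stack([axial_labels, sagittal_labels, coronal_labels])  # (3, D, H, W)
        votes = np.zeros((num_organs,) + axial_labels.shape, dtype=np.int32)
        for organ in range(num_organs):
            # Count how many of the three views assign this organ to each voxel.
            votes[organ] = (views == organ).sum(axis=0)
        # The fused label at each voxel is the organ receiving the most votes.
        return votes.argmax(axis=0)

In practice, each view's 2D predictions would first be reassembled into a 3D volume in a common coordinate frame before fusion.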
Unmet Need
· Detailed abdominal organ segmentation of CT images is critical for performing computer-aided diagnosis and surgery
· The current gold standard relies on manual annotation, and existing automatic segmentation methods (based on atlas fusion) cannot perform at the standard required
· Challenges for current algorithms include morphological complexity, high variation across patients, and low contrast between tissues
· Therefore, a more accurate automatic segmentation method is needed to make computer-aided diagnosis and surgery a reality
Technology Description
· Novel AI method for performing automatic organ segmentation of CT images
· Addresses challenges including the complexity of organs, the large variation within and between subjects, and the low image contrast
· Uses a two-stage deep convolutional network in which the first-stage results are combined with the original image to further refine organ structures (a minimal sketch follows this list)
· Available data demonstrates stronger performance than similar deep convolutional networks.
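A minimal sketch of the two-stage idea, assuming a PyTorch-style implementation, is shown below; the class name, placeholder backbones, and layer sizes are illustrative assumptions rather than the disclosed architecture. The first stage produces coarse per-organ probabilities, which are concatenated with the original image so the second stage can refine organ structures.

    import torch
    import torch.nn as nn

    class TwoStageOrganSegmenter(nn.Module):
        """Illustrative two-stage segmenter: stage 1 produces coarse organ
        probabilities, and stage 2 refines them using the original image.
        The tiny convolutional backbones here are placeholders only."""

        def __init__(self, num_organs: int):
            super().__init__()
            # Stage 1: coarse per-pixel organ probabilities from a CT slice.
            self.stage1 = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_organs, 1),
            )
            # Stage 2: takes the original slice plus the stage-1 output.
            self.stage2 = nn.Sequential(
                nn.Conv2d(1 + num_organs, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_organs, 1),
            )

        def forward(self, ct_slice: torch.Tensor) -> torch.Tensor:
            coarse = self.stage1(ct_slice).softmax(dim=1)  # (B, K, H, W)
            # Combining the image with the first-stage result lets the second
            # stage sharpen organ boundaries and suppress false positives.
            return self.stage2(torch.cat([ct_slice, coarse], dim=1))

    # Example: one single-channel 256x256 slice, 16 organ classes.
    logits = TwoStageOrganSegmenter(num_organs=16)(torch.zeros(1, 1, 256, 256))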
Stage of Development
· As of 11/10/2025, the disclosed technology has demonstrated proof of concept through a peer-reviewed paper published in Medical Image Analysis.
Data Availability
· Data available upon request
Publications
· Y. Wang, Y. Zhou, W. Shen, S. Park, E. K. Fishman, A. L. Yuille, "Abdominal multi-organ segmentation with organ-attention networks and statistical fusion," arXiv:1804.08414, 2018.
· Y. Wang, Y. Zhou, W. Shen, S. Park, E. K. Fishman, A. L. Yuille, "Abdominal multi-organ segmentation with organ-attention networks and statistical fusion," Medical Image Analysis, vol. 55, pp. 88-102, 2019. https://doi.org/10.1016/j.media.2019.04.005
· D. Dreizin, Y. Zhou, Y. Zhang, N. Tirada, A. L. Yuille, "Performance of a Deep Learning Algorithm for Automated Segmentation and Quantification of Traumatic Pelvic Hematomas on CT," Journal of Digital Imaging, vol. 33, no. 1, pp. 243-251, 2020. https://doi.org/10.1007/s10278-019-00207-1