Deep learning semantic segmentation for high-resolution medical volumes.
2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2020.
Automated semantic segmentation in the domain of medical imaging can enable a faster, more reliable, and more affordable clinical workflow. Fully convolutional networks (FCNs) have been used heavily in this area due to the level of success they have achieved. In this work, we first leverage recent architectural innovations to produce an initial segmentation: (i) a spatial and channel-wise squeeze-and-excitation mechanism and (ii) a 3D U-Net++ network with deep supervision. Second, we refine the initial segmentation with classical methods: (i) spatial normalization and (ii) a local 3D refinement network applied to patches. Finally, we combine these methods into a novel segmentation pipeline. We train and evaluate our models and pipelines on a dataset of 120 abdominal magnetic resonance imaging (MRI) volumes. The goal is to segment five organs of interest (ORI): liver, kidneys, stomach, duodenum, and large bowel. Our experiments show that we can generate high-resolution segmentations of quality comparable to state-of-the-art low-resolution methods without adding significant computational cost.
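For readers unfamiliar with the spatial and channel-wise squeeze-and-excitation (scSE) mechanism mentioned in the abstract, the sketch below shows a minimal 3D variant in PyTorch. It is an illustrative assumption based on the general scSE idea, not the authors' released code; the module names (ChannelSE3D, SpatialSE3D, SCSE3D) and the reduction ratio are hypothetical choices.

```python
# Minimal sketch of a 3D spatial + channel squeeze-and-excitation block.
# Illustrative only; not the implementation from the cited paper.
import torch
import torch.nn as nn


class ChannelSE3D(nn.Module):
    """Channel SE: pool over the whole volume, then re-weight each channel."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w


class SpatialSE3D(nn.Module):
    """Spatial SE: 1x1x1 convolution over channels, then re-weight each voxel."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.conv(x))


class SCSE3D(nn.Module):
    """Concurrent spatial and channel SE: combine both recalibrations."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.cse = ChannelSE3D(channels, reduction)
        self.sse = SpatialSE3D(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.max(self.cse(x), self.sse(x))


if __name__ == "__main__":
    # Toy feature map with shape (batch, channels, depth, height, width).
    feats = torch.randn(1, 16, 8, 32, 32)
    print(SCSE3D(16)(feats).shape)  # torch.Size([1, 16, 8, 32, 32])
```

In a 3D U-Net++-style encoder-decoder, a block like this would typically be applied after each convolutional stage so that feature maps are recalibrated both per channel and per voxel before being passed along skip connections.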