Helmholtz Gemeinschaft


Intraretinal layer segmentation using cascaded compressed U-Nets


Item Type: Article
Title: Intraretinal layer segmentation using cascaded compressed U-Nets
Creators Name: Yadav, S.K. and Kafieh, R. and Zimmermann, H.G. and Kauer-Bonin, J. and Nouri-Mahdavi, K. and Mohammadzadeh, V. and Shi, L. and Kadas, E.M. and Paul, F. and Motamedi, S. and Brandt, A.U.
Abstract: Reliable biomarkers quantifying neurodegeneration and neuroinflammation in central nervous system disorders such as Multiple Sclerosis, Alzheimer's dementia or Parkinson's disease are an unmet clinical need. Intraretinal layer thicknesses on macular optical coherence tomography (OCT) images are promising noninvasive biomarkers querying neuroretinal structures with near cellular resolution. However, changes are typically subtle, while tissue gradients can be weak, making intraretinal segmentation a challenging task. A robust and efficient method that requires no or minimal manual correction is an unmet need to foster reliable and reproducible research as well as clinical application. Here, we propose and validate a cascaded two-stage network for intraretinal layer segmentation, with both networks being compressed versions of U-Net (CCU-INSEG). The first network is responsible for retinal tissue segmentation from OCT B-scans. The second network segments eight intraretinal layers with high fidelity. At the post-processing stage, we introduce Laplacian-based outlier detection with layer surface hole filling by adaptive non-linear interpolation. Additionally, we propose a weighted version of focal loss to minimize the foreground-background pixel imbalance in the training data. We train our method using 17,458 B-scans from patients with autoimmune optic neuropathies, i.e., multiple sclerosis, and healthy controls. Voxel-wise comparison against manual segmentation produces a mean absolute error of 2.3 μm, outperforming current state-of-the-art methods on the same data set. Voxel-wise comparison against external glaucoma data leads to a mean absolute error of 2.6 μm when using the same gold standard segmentation approach, and 3.7 μm mean absolute error in an externally segmented data set. In scans from patients with severe optic atrophy, 3.5% of B-scan segmentation results were rejected by an experienced grader, whereas this was the case in 41.4% of B-scans segmented with a graph-based reference method. The validation results suggest that the proposed method can robustly segment macular scans from eyes with even severe neuroretinal changes.
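The abstract mentions a weighted focal loss used to counter foreground-background pixel imbalance during training. The paper's exact formulation is not given on this page; the sketch below is a generic class-weighted focal loss in NumPy, where the function name, per-class weight scheme, and the default `gamma=2.0` are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def weighted_focal_loss(probs, targets, class_weights, gamma=2.0, eps=1e-7):
    """Generic class-weighted focal loss for multi-class pixel classification.

    probs:         (N, C) predicted class probabilities, one row per pixel
    targets:       (N,)   integer ground-truth class label per pixel
    class_weights: (C,)   per-class weights; larger values up-weight rare
                          (foreground) classes against the background class
    """
    # Probability assigned to the true class of each pixel.
    p_t = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    # Per-pixel weight taken from the true class.
    w_t = class_weights[targets]
    # Focal term (1 - p_t)^gamma down-weights already well-classified pixels.
    loss = -w_t * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()
```

As in the standard focal loss, confidently correct pixels contribute almost nothing, so the gradient signal concentrates on hard pixels near weak tissue gradients; the class weights additionally rebalance thin retinal layers against the dominant background.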
Keywords: Optical Coherence Tomography (OCT), Intraretinal Layer Segmentation, Retina, U-Net, Deep Learning
Source: Journal of Imaging
Page Range: 139
Date: 17 May 2022
Official Publication: https://doi.org/10.3390/jimaging8050139




Open Access