AUTHOR=Lopes Marinho André, Kazimi Bashir, Ćwieka Hanna, Marek Romy, Beckmann Felix, Willumeit-Römer Regine, Moosmann Julian, Zeller-Plumhoff Berit
TITLE=A comparison of deep learning segmentation models for synchrotron radiation based tomograms of biodegradable bone implants
JOURNAL=Frontiers in Physics
VOLUME=12
YEAR=2024
URL=https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2024.1257512
DOI=10.3389/fphy.2024.1257512
ISSN=2296-424X
ABSTRACT=

Introduction: Synchrotron radiation micro-computed tomography (SRμCT) has been used as a non-invasive technique to examine the microstructure and tissue integration of biodegradable bone implants. To quantitatively characterize the degradation and osseointegration of such materials, the three-dimensional (3D) image data provided by SRμCT must be processed by means of semantic segmentation. However, accurate image segmentation is challenging with traditional automated techniques. This study investigates the effectiveness of deep learning approaches for the semantic segmentation of SRμCT volumes of Mg-based implants in sheep bone ex vivo.

Methodology: For this purpose, different convolutional neural networks (CNNs) were trained and validated: U-Net, HR-Net, and U²-Net from the TomoSeg framework, the Scaled U-Net framework, and the 2D and 3D U-Net from the nnU-Net framework. The image data used in this work were part of a previous study in which biodegradable screws were surgically implanted into sheep tibiae and imaged using SRμCT after different healing periods. The comparative analysis considers the models' performance in semantic segmentation and in the subsequent calculation of degradation and osseointegration parameters. Model performance is evaluated using the intersection over union (IoU) metric, and generalization ability is tested on unseen datasets.
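For context, the IoU used here is the standard per-class overlap measure: the number of voxels where prediction and ground truth agree on a class, divided by the number of voxels assigned to that class by either. The following minimal Python sketch (illustrative only; the function name iou_per_class and the use of NumPy are assumptions, not code from the study) shows one way to compute it on labeled segmentation volumes.

import numpy as np

def iou_per_class(prediction, ground_truth, class_id):
    # Illustrative sketch, not the authors' implementation.
    # Binary masks for the class of interest in predicted and reference label maps.
    pred_mask = prediction == class_id
    gt_mask = ground_truth == class_id
    # IoU = |intersection| / |union| of the two masks.
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else float("nan")

A mean over the relevant labels (e.g., residual metal, degradation layer, bone) would then summarize a model's segmentation quality on a given volume.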

Results and discussion: This work shows that the 2D nnU-Net achieves the best generalization performance among the compared models, with the degradation layer being the most challenging label to segment for all models.