AUTHOR=Gottlich Harrison C., Korfiatis Panagiotis, Gregory Adriana V., Kline Timothy L. TITLE=AI in the Loop: functionalizing fold performance disagreement to monitor automated medical image segmentation workflows JOURNAL=Frontiers in Radiology VOLUME=3 YEAR=2023 URL=https://www.frontiersin.org/journals/radiology/articles/10.3389/fradi.2023.1223294 DOI=10.3389/fradi.2023.1223294 ISSN=2673-8740 ABSTRACT=Introduction

Methods that automatically flag poorly performing predictions are urgently needed both to safely implement machine learning workflows in clinical practice and to identify difficult cases during model training.

Methods

Disagreement among the fivefold cross-validation sub-models was quantified using Dice scores computed between folds and summarized as a surrogate for model confidence. The summarized Interfold Dice scores were compared with thresholds informed by human interobserver values to determine whether the final ensemble model's predictions should be manually reviewed.
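
A minimal sketch of this idea, assuming the sub-model outputs are available as binary segmentation masks, is shown below. It is not the authors' released code: the function names, the use of the median as the summary statistic, and the example interobserver threshold of 0.85 are illustrative assumptions.

from itertools import combinations
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def interfold_dice_summary(fold_masks: list) -> float:
    """Summarize agreement as the median of pairwise Dice scores
    across the k cross-validation sub-model predictions."""
    pairwise = [dice(m1, m2) for m1, m2 in combinations(fold_masks, 2)]
    return float(np.median(pairwise))

def needs_review(fold_masks: list, interobserver_threshold: float = 0.85) -> bool:
    """Flag the case for manual review when sub-model agreement falls
    below a threshold informed by human interobserver variability
    (threshold value here is an assumed placeholder)."""
    return interfold_dice_summary(fold_masks) < interobserver_threshold

In use, each incoming image would be segmented by all five sub-models, the Interfold Dice summary computed, and only cases returning needs_review == True routed to a human reader; the ensemble prediction is accepted automatically otherwise.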

Results

On all tasks, the method efficiently flagged poorly segmented images without consulting a reference standard. Using the median Interfold Dice for comparison, substantial Dice score improvements after excluding flagged images were noted for the in-domain CT task (0.85 ± 0.20 to 0.91 ± 0.08, 8/50 images flagged) and the in-domain MR task (0.76 ± 0.27 to 0.85 ± 0.09, 8/50 images flagged). The most dramatic improvement was observed in the simulated out-of-distribution task, in which a model trained on a radical nephrectomy dataset spanning different contrast phases predicted on a partial nephrectomy dataset acquired entirely in the cortico-medullary phase (0.67 ± 0.36 to 0.89 ± 0.10, 122/300 images flagged).

Discussion

Comparing interfold sub-model disagreement against human interobserver values is an effective and efficient way to assess automated predictions when a reference standard is not available. This functionality provides a safeguard for patient care that is necessary for safely implementing automated medical image segmentation workflows.