ORIGINAL RESEARCH article

Front. Radiol.
Sec. Artificial Intelligence in Radiology
Volume 4 - 2024 | doi: 10.3389/fradi.2024.1420545
This article is part of the Research Topic Artificial Intelligence in Radiology and Radiation Oncology.

DreamOn: a data augmentation strategy to narrow the robustness gap between expert radiologists and deep learning classifiers

Provisionally accepted
Luc Lerch Luc Lerch 1,2*Lukas S Huber Lukas S Huber 3,4Amith Kamath Amith Kamath 2Alexander Pöllinger Alexander Pöllinger 5Aurélie Pahud De Mortanges Aurélie Pahud De Mortanges 5Verena Obmann Verena Obmann 5Florian Dammann Florian Dammann 5Walter Senn Walter Senn 1Mauricio Reyes Mauricio Reyes 2
  • 1 Other, Bern, Switzerland
  • 2 ARTORG Center for Biomedical Engineering Research, Faculty of Medicine, University of Bern, Bern, Switzerland
  • 3 Institute of Psychology, Faculty of Humanities and Philosophy, University of Bern, Bern, Switzerland
  • 4 Department of Computer Science, Faculty of Mathematics and Natural Sciences, University of Tübingen, Tübingen, Germany
  • 5 University Hospital of Bern, Bern, Switzerland

The final, formatted version of the article will be published soon.

    Purpose. The performance of deep learning models for medical image analysis depends heavily on the quality of the images being analysed. Differences in imaging equipment and calibration, together with patient-specific factors such as motion and biological variability (e.g., tissue density), produce large variability in the quality of acquired medical images. Robustness to noise is therefore a crucial requirement for deploying deep learning models in clinical contexts.

    Materials and Methods. We evaluate the effect of various data augmentation strategies on the robustness of a ResNet-18 trained to classify breast ultrasound images, and benchmark its performance against trained human radiologists. Additionally, we introduce DreamOn, a novel, biologically inspired data augmentation strategy for medical image analysis. DreamOn uses a conditional generative adversarial network (GAN) to generate REM-dream-inspired interpolations of training images.

    Results. While existing data augmentation approaches substantially improve robustness compared to models trained without any augmentation, radiologists still outperform the models on noisy images. With DreamOn augmentation, we obtain a substantial further improvement in robustness in the high-noise regime.

    Conclusions. REM-dream-inspired, conditional-GAN-based data augmentation is a promising approach to improving the robustness of deep learning models against noise perturbations in medical imaging. We also highlight a robustness gap between deep learning models and human experts, emphasizing the imperative for ongoing developments in AI to match human diagnostic expertise.
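    The abstract gives no implementation details, so the following minimal PyTorch sketch only illustrates the general idea of conditional-GAN interpolation augmentation described above: interpolating latent codes between pairs of training samples to produce "dream" hybrid images with correspondingly mixed labels. All names (DreamGenerator, dream_augment), the toy generator architecture, the class-embedding conditioning, the soft-label mixing, and the way latent codes are obtained are hypothetical assumptions, not the authors' actual DreamOn method.

    # Illustrative sketch only; not the published DreamOn implementation.
    import torch
    import torch.nn as nn

    class DreamGenerator(nn.Module):
        """Stand-in conditional generator G(z, y) -> image (hypothetical)."""
        def __init__(self, latent_dim=128, n_classes=2, img_size=64):
            super().__init__()
            self.embed = nn.Embedding(n_classes, latent_dim)
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, img_size * img_size), nn.Tanh(),
            )
            self.img_size = img_size

        def forward(self, z, y):
            h = z * self.embed(y)  # simple conditioning via class embedding
            img = self.net(h)
            return img.view(-1, 1, self.img_size, self.img_size)

    def dream_augment(generator, z_a, z_b, y_a, y_b, alpha=None):
        """Generate a dream-style interpolation between two training samples.

        z_a, z_b are latent codes associated with two real images (e.g. via an
        encoder or GAN inversion; how the paper obtains them is not specified).
        Returns the interpolated image and a correspondingly mixed soft label.
        """
        if alpha is None:
            alpha = torch.rand(z_a.size(0), 1, device=z_a.device)
        z_mix = alpha * z_a + (1 - alpha) * z_b  # latent interpolation
        # Condition on the dominant class; the soft label keeps the mixture.
        y_cond = torch.where(alpha.squeeze(1) > 0.5, y_a, y_b)
        img = generator(z_mix, y_cond)
        soft_label = (alpha * nn.functional.one_hot(y_a, 2).float()
                      + (1 - alpha) * nn.functional.one_hot(y_b, 2).float())
        return img, soft_label

    # Usage: augment a batch during classifier training.
    G = DreamGenerator()
    z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
    y_a, y_b = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
    dream_imgs, dream_labels = dream_augment(G, z_a, z_b, y_a, y_b)

    The augmented pairs (dream_imgs, dream_labels) would then be mixed into the classifier's training batches alongside real images, analogous to mixup but performed in the generator's latent space rather than in pixel space.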

    Keywords: deep learning, robustness, ultrasound, breast cancer, generative adversarial network, convolutional neural network

    Received: 20 Apr 2024; Accepted: 22 Nov 2024.

    Copyright: © 2024 Lerch, Huber, Kamath, Pöllinger, Pahud De Mortanges, Obmann, Dammann, Senn and Reyes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Luc Lerch, Other, Bern, Switzerland

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.