AUTHOR=Lopez Kevin, Fodeh Samah J., Allam Ahmed, Brandt Cynthia A., Krauthammer Michael
TITLE=Reducing Annotation Burden Through Multimodal Learning
JOURNAL=Frontiers in Big Data
VOLUME=3
YEAR=2020
URL=https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2020.00019
DOI=10.3389/fdata.2020.00019
ISSN=2624-909X
ABSTRACT=
Choosing an optimal data fusion technique is essential when performing machine learning with multimodal data. In this study, we examined deep learning-based multimodal fusion techniques for the combined classification of radiological images and associated text reports. In our analysis, we (1) compared the classification performance of three prototypical multimodal fusion techniques: Early, Late, and Model fusion; (2) assessed the performance of multimodal relative to unimodal learning; and (3) investigated the amount of labeled data needed by multimodal vs. unimodal models to yield comparable classification performance. Our experiments demonstrate the potential of multimodal fusion methods to achieve competitive results with less labeled training data than their unimodal counterparts. This effect was most pronounced with Early fusion and less so with the Model and Late fusion approaches. With increasing amounts of training data, unimodal models achieved results comparable to those of multimodal models. Overall, our results suggest the potential of multimodal learning to decrease the need for labeled training data, thereby lowering the annotation burden for domain experts.
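To illustrate the three fusion strategies named in the abstract, the following is a minimal PyTorch-style sketch. It is not the authors' implementation; the module names, feature dimensions, hidden sizes, and the averaging rule for Late fusion are assumptions made purely for illustration, and the image/text feature extractors are assumed to exist upstream.

```python
# Hypothetical sketch of Early, Model, and Late fusion for two-modality
# classification (image features + text-report features). Not the paper's code.
import torch
import torch.nn as nn


class EarlyFusion(nn.Module):
    """Concatenate the input-level features of both modalities, then classify jointly."""
    def __init__(self, img_dim, txt_dim, n_classes, hidden=128):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, img_feat, txt_feat):
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))


class ModelFusion(nn.Module):
    """Encode each modality separately, then fuse the intermediate representations."""
    def __init__(self, img_dim, txt_dim, n_classes, hidden=64):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_enc(img_feat), self.txt_enc(txt_feat)], dim=1)
        return self.classifier(fused)


class LateFusion(nn.Module):
    """Run separate unimodal classifiers and combine their predictions at the end."""
    def __init__(self, img_dim, txt_dim, n_classes):
        super().__init__()
        self.img_clf = nn.Linear(img_dim, n_classes)
        self.txt_clf = nn.Linear(txt_dim, n_classes)

    def forward(self, img_feat, txt_feat):
        # Averaging per-modality logits is one common choice; weighted or learned
        # combinations are equally valid.
        return 0.5 * (self.img_clf(img_feat) + self.txt_clf(txt_feat))
```

The sketch only captures where the modalities are joined: at the input (Early), at an intermediate learned representation (Model), or at the prediction level (Late), which is the distinction the abstract's comparison rests on.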