AUTHOR=Ou Chubin, Zhou Sitong, Yang Ronghua, Jiang Weili, He Haoyang, Gan Wenjun, Chen Wentao, Qin Xinchi, Luo Wei, Pi Xiaobing, Li Jiehua TITLE=A deep learning based multimodal fusion model for skin lesion diagnosis using smartphone collected clinical images and metadata JOURNAL=Frontiers in Surgery VOLUME=9 YEAR=2022 URL=https://www.frontiersin.org/journals/surgery/articles/10.3389/fsurg.2022.1029991 DOI=10.3389/fsurg.2022.1029991 ISSN=2296-875X ABSTRACT=Introduction

Skin cancer is one of the most common types of cancer. A tool accessible to the public can help screen for malignant lesions. We aimed to develop a deep learning model that classifies skin lesions using clinical images and meta information collected from smartphones.

Methods

A deep neural network was developed with two encoders for extracting information from image data and metadata. A multimodal fusion module with intra-modality self-attention and inter-modality cross-attention was proposed to effectively combine image features and meta features. The model was trained and tested on a public dataset and compared with other state-of-the-art methods using five-fold cross-validation.
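The fusion scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; the feature dimensions, pooling step, and concatenation are illustrative assumptions, and only single-head, unparameterized attention is shown.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention(q, k, v):
    # Scaled dot-product attention (single head, no learned projections).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v


rng = np.random.default_rng(0)
# Hypothetical encoder outputs: 49 image-patch features and
# 8 embedded metadata fields, each 64-dimensional.
img_feats = rng.standard_normal((49, 64))
meta_feats = rng.standard_normal((8, 64))

# Intra-modality self-attention: each modality attends to itself.
img_self = attention(img_feats, img_feats, img_feats)
meta_self = attention(meta_feats, meta_feats, meta_feats)

# Inter-modality cross-attention: image queries attend to metadata
# features, and vice versa.
img_cross = attention(img_self, meta_self, meta_self)
meta_cross = attention(meta_self, img_self, img_self)

# Fuse by mean-pooling each modality and concatenating (an assumption;
# the paper's fusion head may differ).
fused = np.concatenate([img_cross.mean(axis=0), meta_cross.mean(axis=0)])
print(fused.shape)
```

In practice each attention step would use learned query/key/value projections and multiple heads, but the data flow, self-attention within each modality followed by cross-attention between them, is the same.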

Results

Including metadata significantly improved the model's performance. Our model outperformed other metadata fusion methods in terms of accuracy, balanced accuracy, and area under the receiver-operating characteristic curve, with average values of 0.768±0.022, 0.775±0.022, and 0.947±0.007, respectively.

Conclusion

A deep learning model for skin lesion diagnosis using smartphone-collected images and metadata was successfully developed. The proposed model showed promising performance and could be a potential tool for skin cancer screening.