AUTHOR=Zhang Guangwen, Chen Lei, Liu Aie, Pan Xianpan, Shu Jun, Han Ye, Huan Yi, Zhang Jinsong TITLE=Comparable Performance of Deep Learning–Based to Manual-Based Tumor Segmentation in KRAS/NRAS/BRAF Mutation Prediction With MR-Based Radiomics in Rectal Cancer JOURNAL=Frontiers in Oncology VOLUME=11 YEAR=2021 URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2021.696706 DOI=10.3389/fonc.2021.696706 ISSN=2234-943X ABSTRACT=

Radiomic features extracted from segmented tumor regions have shown great power in gene mutation prediction, while deep learning–based (DL-based) segmentation helps to address the inherent limitations of manual segmentation. We therefore investigated whether DL-based segmentation is feasible for predicting KRAS/NRAS/BRAF mutations of rectal cancer using MR-based radiomics. In this study, we proposed DL-based segmentation models with a 3D V-Net architecture. Images (T2WI and DWI) from 108 patients were collected for training, and images from another 94 patients were collected for validation. We evaluated the DL-based segmentation approach against manual segmentation by comparing the gene prediction performance of six radiomics-based models on the test set. The performance of DL-based segmentation was evaluated by Dice coefficients, which were 0.878 ± 0.214 for T2WI and 0.955 ± 0.055 for DWI. The gene prediction performance of the radiomics-based models built on DL-segmented VOIs was evaluated by AUCs (0.714 for T2WI, 0.816 for DWI, and 0.887 for T2WI+DWI), which were comparable to those of the corresponding manually segmented VOIs (0.637 for T2WI, P=0.188; 0.872 for DWI, P=0.181; and 0.906 for T2WI+DWI, P=0.676). The results showed that the 3D V-Net architecture could provide reliable rectal cancer segmentation on T2WI and DWI images. The all-relevant radiomics-based models showed similar performance in KRAS/NRAS/BRAF prediction between the two segmentation approaches.
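
To make the two evaluation metrics reported in the abstract concrete, the sketch below computes a Dice coefficient between a DL-predicted and a manually drawn binary tumor mask, and an AUC for mutation prediction scores. This is a minimal illustration, not the authors' pipeline: the arrays are synthetic placeholders, and the use of scikit-learn's roc_auc_score is an assumption about tooling; the study itself used 3D V-Net masks on T2WI/DWI and radiomics model outputs.

```python
# Minimal sketch of the two metrics reported in the abstract:
# Dice coefficient (segmentation overlap) and AUC (mutation prediction).
# All data below are synthetic placeholders for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score


def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for binary volumes."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom


rng = np.random.default_rng(0)

# Hypothetical DL-predicted mask vs. a manual reference mask.
manual_mask = rng.random((32, 64, 64)) > 0.7
dl_mask = manual_mask.copy()
dl_mask[0] = ~dl_mask[0]  # perturb one slice to mimic imperfect overlap
print(f"Dice: {dice_coefficient(dl_mask, manual_mask):.3f}")

# Hypothetical AUC of a radiomics model's mutation scores against
# true KRAS/NRAS/BRAF status (1 = mutant, 0 = wild type) on 94 cases.
y_true = rng.integers(0, 2, size=94)           # placeholder test-set labels
y_score = y_true * 0.4 + rng.random(94) * 0.6  # placeholder model outputs
print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")
```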