AUTHOR=Ling Yating, Ying Shihong, Xu Lei, Peng Zhiyi, Mao Xiongwei, Chen Zhang, Ni Jing, Liu Qian, Gong Shaolin, Kong Dexing
TITLE=Automatic volumetric diagnosis of hepatocellular carcinoma based on four-phase CT scans with minimum extra information
JOURNAL=Frontiers in Oncology
VOLUME=12
YEAR=2022
URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.960178
DOI=10.3389/fonc.2022.960178
ISSN=2234-943X
ABSTRACT=

Summary

We built a deep-learning-based model for the diagnosis of hepatocellular carcinoma (HCC) from typical four-phase CT images and minimum extra information (MEI), demonstrating high accuracy and excellent efficiency.

Objectives

The aim of this study was to develop a deep-learning-based model for the diagnosis of hepatocellular carcinoma.

Materials and methods

This retrospective clinical study used CT scans of liver tumors acquired over four phases (non-enhanced, arterial, portal venous, and delayed). Tumors were diagnosed as hepatocellular carcinoma (HCC) or non-HCC, the latter comprising cysts, hemangiomas (HAs), and intrahepatic cholangiocarcinomas (ICCs). A total of 601 liver lesions from 479 patients (56 years ± 11 [standard deviation]; 350 men) examined between 2014 and 2017 were evaluated: 315 HCCs and 286 non-HCCs, including 64 cysts, 178 HAs, and 44 ICCs. Of these, 481 lesions were randomly assigned to the training set and the remaining 120 constituted the validation set. A deep learning model combining a 3D convolutional neural network (CNN) and a multilayer perceptron was trained on the CT scans together with MEI, consisting of text input of patient age and gender as well as lesion location and size extracted automatically from the image data. Fivefold cross-validation was performed on randomly split datasets. The diagnostic accuracy and efficiency of the trained model were compared with those of two radiologists on a validation set on which the model matched its fivefold-average performance. Student's t-tests of accuracy between the model and each of the two radiologists were performed.
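To make the described architecture concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: a 3D CNN branch over the stacked four-phase CT volume is fused with an MLP branch over the MEI vector for binary HCC/non-HCC classification. The layer sizes, the 64x64x64 lesion crop, and the six-value MEI encoding (age, gender, lesion location, and size) are hypothetical assumptions.

# Illustrative sketch only; all dimensions are assumptions, not the paper's specification.
import torch
import torch.nn as nn

class HCCClassifier(nn.Module):
    def __init__(self, mei_dim: int = 6, num_classes: int = 2):
        super().__init__()
        # Image branch: the four CT phases are stacked as input channels of a 3D CNN.
        self.cnn = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # -> (B, 64, 1, 1, 1)
        )
        # MEI branch: age, gender, and lesion location/size as a small numeric vector.
        self.mlp = nn.Sequential(
            nn.Linear(mei_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head over concatenated image and MEI features.
        self.head = nn.Linear(64 + 32, num_classes)

    def forward(self, volume: torch.Tensor, mei: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(volume).flatten(1)   # (B, 64)
        mei_feat = self.mlp(mei)                 # (B, 32)
        return self.head(torch.cat([img_feat, mei_feat], dim=1))

# Example forward pass with a hypothetical 64^3 lesion crop over four phases.
model = HCCClassifier()
logits = model(torch.randn(2, 4, 64, 64, 64), torch.randn(2, 6))
print(logits.shape)  # torch.Size([2, 2])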

Results

The accuracy of the proposed model for diagnosing HCC was 94.17% (113 of 120), significantly higher than that of the two radiologists, who achieved 90.83% (109 of 120; p-value = 0.018) and 83.33% (100 of 120; p-value = 0.002), respectively. The average time to analyze each lesion with the proposed model on one graphics processing unit (GPU) was 0.13 s, about 250 times faster than the two radiologists, who needed on average 30 s and 37.5 s per lesion.
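As a quick check of the reported figures, the short Python snippet below (a hypothetical helper, not taken from the paper) reproduces the accuracy percentages from the correct/total counts and the approximate speedup from the reported per-lesion times.

# Hypothetical arithmetic check of the reported accuracies and speedup.
def accuracy(correct: int, total: int) -> float:
    return 100.0 * correct / total

print(f"model:         {accuracy(113, 120):.2f}%")  # 94.17%
print(f"radiologist 1: {accuracy(109, 120):.2f}%")  # 90.83%
print(f"radiologist 2: {accuracy(100, 120):.2f}%")  # 83.33%

mean_reader_time = (30.0 + 37.5) / 2                # average radiologist time per lesion (s)
print(f"speedup: ~{mean_reader_time / 0.13:.0f}x")  # ~260x, consistent with the reported ~250x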

Conclusion

The proposed model, trained on a few hundred samples with MEI, achieved a diagnostic accuracy significantly higher than that of the two radiologists, with a classification runtime about 250 times shorter. It could therefore be easily incorporated into the clinical workflow and dramatically reduce the workload of radiologists.