ORIGINAL RESEARCH article

Front. Oncol.
Sec. Radiation Oncology
Volume 14 - 2024 | doi: 10.3389/fonc.2024.1440944

A joint learning framework for multisite CBCT-to-CT translation using a hybrid CNN-transformer synthesizer and a registration network

Provisionally accepted
Ying Hu 1, Mengjie Cheng 2, Hui Wei 3, Zhiwen Liang 4*
  • 1 School of Mathematics and Economics, Hubei University of Education, Wuhan, Hubei 430205, China
  • 2 Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
  • 3 Department of Radiotherapy, Affiliated Hospital of Hebei Engineering University, Handan 056002, China
  • 4 Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

The final, formatted version of the article will be published soon.

    Background: Cone-beam computed tomography (CBCT) is a convenient imaging modality for adaptive radiation therapy (ART), but its application is often hindered by poor image quality. We aimed to develop a unified deep learning model that consistently enhances CBCT image quality across anatomical sites by generating synthetic CT (sCT) images.

    Methods: A dataset of paired CBCT and planning CT images from 135 cancer patients with head and neck, chest, and abdominal tumors was collected; its anatomical diversity and range of scanning parameters were chosen to ensure comprehensive model training. Because registration is imperfect, local structural misalignment within paired data can lead to suboptimal model performance. To address this limitation, we propose SynREG, a supervised learning framework that couples a hybrid CNN-transformer architecture for generating high-fidelity sCT images with a registration network that dynamically corrects local structural misalignment during training. An independent test set of 23 additional patients was used to evaluate image quality, and the results were compared with those of several benchmark models (pix2pix, CycleGAN and SwinIR). The performance of an autosegmentation application was also assessed.

    Results: The proposed model disentangles sCT generation from anatomical correction, yielding a more rational optimization process. As a result, it effectively suppressed noise and artifacts in multisite applications and significantly enhanced CBCT image quality: the mean absolute error (MAE) of SynREG decreased to 16.81±8.42 HU and the structural similarity index (SSIM) increased to 94.34±2.85%, compared with an MAE of 26.74±10.11 HU and an SSIM of 89.73±3.46% for the raw CBCT data. The improved image quality was particularly beneficial for organs with low contrast resolution, substantially increasing automatic segmentation accuracy in these regions. Notably, for the brainstem, the mean Dice similarity coefficient (DSC) increased from 0.61 to 0.89, and the mean distance to agreement (MDA) decreased from 3.72 mm to 0.98 mm.

    Conclusions: SynREG effectively alleviates residual anatomical differences between paired datasets and enhances the quality of CBCT images.
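
    The abstract gives no implementation details, but the joint synthesis-plus-registration idea can be illustrated with a minimal PyTorch sketch. Everything below is an assumption rather than the authors' SynREG implementation: the `Synthesizer` and `RegNet` stand-ins, the displacement-field warping, the L1 fidelity loss on the warped planning CT, and the flow penalty weight are all illustrative placeholders for the paper's hybrid CNN-transformer generator and registration network.

```python
# Toy sketch of joint synthesis + registration training (hypothetical,
# not the paper's architecture or losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Synthesizer(nn.Module):
    """Stand-in for the hybrid CNN-transformer sCT generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, cbct):
        return self.net(cbct)

class RegNet(nn.Module):
    """Stand-in registration network predicting a dense 2D displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2 channels: (dx, dy) in pixels
        )
    def forward(self, sct, ct):
        return self.net(torch.cat([sct, ct], dim=1))

def warp(image, flow):
    """Warp `image` with a pixel-space displacement field via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert pixel displacements to grid_sample's [-1, 1] convention.
    norm = flow.permute(0, 2, 3, 1) / torch.tensor([w / 2, h / 2])
    return F.grid_sample(image, base + norm, align_corners=True)

synth, reg = Synthesizer(), RegNet()
opt = torch.optim.Adam(list(synth.parameters()) + list(reg.parameters()), 1e-4)

cbct = torch.randn(2, 1, 64, 64)   # random batch standing in for CBCT slices
ct   = torch.randn(2, 1, 64, 64)   # imperfectly aligned planning CT slices

sct  = synth(cbct)                 # synthesize a CT-like image from CBCT
flow = reg(sct.detach(), ct)       # estimate residual local misalignment
ct_warped = warp(ct, flow)         # deform the CT toward the sCT geometry
# Fidelity against the warped target plus a small-flow regularizer.
loss = F.l1_loss(sct, ct_warped) + 0.1 * flow.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
```

    The key design point, per the abstract, is that the supervision target is deformed toward the synthesized image, so the generator is not penalized for residual anatomical differences in the paired data.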
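
    The reported metrics (MAE in HU, SSIM, DSC) can likewise be sketched; the body mask, SSIM data range, and random toy volumes below are assumptions, not the paper's evaluation protocol.

```python
# Hypothetical sketch of the reported evaluation metrics.
import numpy as np
from skimage.metrics import structural_similarity

def mae_hu(sct, ct, mask):
    """Mean absolute error in HU over a (here trivial) body mask."""
    return np.abs(sct[mask] - ct[mask]).mean()

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum())

# Random arrays standing in for a CT slice and its synthetic counterpart.
rng = np.random.default_rng(0)
ct   = rng.normal(0, 100, (64, 64)).astype(np.float32)
sct  = ct + rng.normal(0, 10, (64, 64)).astype(np.float32)
mask = np.ones_like(ct, dtype=bool)

print("MAE (HU):", mae_hu(sct, ct, mask))
print("SSIM    :", structural_similarity(ct, sct, data_range=ct.max() - ct.min()))
print("DSC     :", dice(ct > 0, sct > 0))
```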

    Keywords: CBCT, synthetic CT, deep learning, hybrid transformer, adaptive radiotherapy

    Received: 30 May 2024; Accepted: 19 Jul 2024.

    Copyright: © 2024 Hu, Cheng, Wei and Liang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Zhiwen Liang, Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.