ORIGINAL RESEARCH article

Front. Radiol.
Sec. Artificial Intelligence in Radiology
Volume 4 - 2024 | doi: 10.3389/fradi.2024.1466498
This article is part of the Research Topic: Image Synthesis in Medical Imaging

Modular GAN: Positron emission tomography image reconstruction using two generative adversarial networks

Provisionally accepted
Rajat Vashistha 1, Viktor Vegh 1*, Hamed Moradi 1, Amanda Hammond 2, Kieran O'Brien 2, David Reutens 1
  • 1 The University of Queensland, Brisbane, Australia
  • 2 Siemens Healthcare Pty Ltd, Melbourne, Australia

The final, formatted version of the article will be published soon.

    The reconstruction of PET images involves converting sinograms, which record the counts of radioactive emissions measured by detector rings encircling the patient, into meaningful images. This study investigates the properties of training images that contribute to GAN performance when non-clinical images are used for training. Additionally, we describe a method to correct common PET imaging artefacts without relying on patient-specific anatomical images.

    The modular GAN framework comprises two GANs. The first module, resembling the Pix2pix architecture, is trained on non-clinical sinogram-image pairs; the training data are optimised with respect to image properties defined by metrics. The second module uses adaptive instance normalisation and style embedding to enhance the quality of images produced by the first module. Additional perceptual and patch-based loss functions are employed in training both modules. The performance of the new framework was compared with that of existing methods (filtered backprojection (FBP), and ordered subset expectation maximisation without (OSEM) and with point spread function modelling (OSEM-PSF)) with respect to correction for attenuation, patient motion and noise in simulated, NEMA phantom and human imaging data. Evaluation metrics included structural similarity (SSIM), peak signal-to-noise ratio (PSNR) and relative root mean squared error (rRMSE) for simulated data, and contrast-to-noise ratio (CNR) for NEMA phantom and human data.

    For simulated test data, the performance of the proposed framework was both qualitatively and quantitatively superior to that of FBP and OSEM. In the presence of noise, the first module generated images with an SSIM of 0.48 or higher. These images exhibited coarse structures that were subsequently refined by the second module, yielding images with an SSIM above 0.71 (at least 22% higher than OSEM). The proposed method was robust against noise and motion. For NEMA phantoms, it achieved higher CNR values than OSEM. For human images, the CNR in brain regions was significantly higher than that of FBP and OSEM (p < 0.05, paired t-test), and similar to that of images reconstructed with OSEM-PSF.

    The proposed image reconstruction method can produce PET images with artefact correction.
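
    As a minimal sketch of the second module's core mechanism, the snippet below implements adaptive instance normalisation (AdaIN) driven by a learned style embedding in PyTorch. The tensor shapes and the single affine projection from the style code are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch of AdaIN with a style embedding; layer sizes
        # are illustrative assumptions, not taken from the paper.
        import torch
        import torch.nn as nn

        class AdaIN(nn.Module):
            """Normalise content features per channel, then re-scale and
            re-shift them with parameters predicted from a style embedding."""
            def __init__(self, num_channels: int, style_dim: int):
                super().__init__()
                # Predict per-channel scale (gamma) and shift (beta) from the style code.
                self.affine = nn.Linear(style_dim, 2 * num_channels)

            def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
                # content: (B, C, H, W) feature maps; style: (B, style_dim) embedding.
                gamma, beta = self.affine(style).chunk(2, dim=1)
                gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
                beta = beta.unsqueeze(-1).unsqueeze(-1)
                # Instance normalisation: zero mean, unit variance per sample and channel.
                mean = content.mean(dim=(2, 3), keepdim=True)
                std = content.std(dim=(2, 3), keepdim=True) + 1e-5
                normalised = (content - mean) / std
                return (1 + gamma) * normalised + beta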
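
    The perceptual and patch-based loss functions named in the abstract could be realised as follows. The VGG16 feature layer and the PatchGAN-style grid of discriminator logits are common defaults assumed here for illustration; the paper's exact loss configuration may differ.

        # Illustrative auxiliary losses: feature-space (perceptual) L1 and a
        # patch-based adversarial term. Choices below are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F
        from torchvision.models import vgg16

        class PerceptualLoss(nn.Module):
            def __init__(self):
                super().__init__()
                # Frozen VGG16 feature extractor (first 16 layers, ~relu3_3).
                self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
                for p in self.features.parameters():
                    p.requires_grad = False

            def forward(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
                # Inputs are (B, 3, H, W); single-channel PET slices would be
                # repeated across three channels before this call.
                return F.l1_loss(self.features(fake), self.features(real))

        def patch_adversarial_loss(disc: nn.Module, fake: torch.Tensor) -> torch.Tensor:
            # A PatchGAN-style discriminator returns a grid of real/fake logits,
            # one per image patch; the generator pushes every patch towards "real".
            patch_logits = disc(fake)
            return F.binary_cross_entropy_with_logits(
                patch_logits, torch.ones_like(patch_logits))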
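
    The reported quantitative metrics can be computed as below. PSNR and rRMSE follow their standard definitions; the CNR formula is one common convention (ROI contrast over background noise) and is an assumption, since definitions vary between studies. SSIM is available off the shelf, e.g. skimage.metrics.structural_similarity.

        # Minimal NumPy versions of the evaluation metrics; CNR definition assumed.
        import numpy as np

        def psnr(ref: np.ndarray, img: np.ndarray) -> float:
            # Peak signal-to-noise ratio in dB, relative to the reference maximum.
            mse = np.mean((ref - img) ** 2)
            return 10 * np.log10(ref.max() ** 2 / mse)

        def rrmse(ref: np.ndarray, img: np.ndarray) -> float:
            # Root mean squared error relative to the reference image's energy.
            return np.sqrt(np.mean((ref - img) ** 2)) / np.sqrt(np.mean(ref ** 2))

        def cnr(img: np.ndarray, roi: np.ndarray, background: np.ndarray) -> float:
            # roi and background are boolean masks over img.
            return (img[roi].mean() - img[background].mean()) / img[background].std()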

    Keywords: PET image reconstruction, deep learning, generative adversarial network, noise and motion correction, non-clinical training data

    Received: 18 Jul 2024; Accepted: 08 Aug 2024.

Copyright: © 2024 Vashistha, Vegh, Moradi, Hammond, O'Brien and Reutens. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Viktor Vegh, The University of Queensland, Brisbane, Australia

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.