
ORIGINAL RESEARCH article

Front. Nucl. Med.
Sec. PET and SPECT
Volume 4 - 2024 | doi: 10.3389/fnume.2024.1469490
This article is part of the Research Topic: Rising Stars in PET and SPECT: 2024

SMART-PET: A Self-SiMilARiTy-Aware Generative Adversarial Framework for Reconstructing Low-count [18F]-FDG-PET Brain Imaging

Provisionally accepted
  • 1 Multimodal Imaging of Neurodegenerative Diseases (MiND) Lab, Lawson Health Research Institute, London, Canada
  • 2 Department of Electrical and Computer Engineering, University of British Columbia, British Columbia, Canada
  • 3 Siemens Medical Solutions USA, Inc., Knoxville, United States
  • 4 Department of Medical Imaging, Ghent University, Ghent, Belgium
  • 5 Clinical Neurological Sciences, Western University, London, Canada
  • 6 Department of Physics, Federal University of Technology, Minna, Nigeria
  • 7 Department of Medical Biophysics, Western University, London, Canada
  • 8 Department of Physics and Astronomy, Western University, London, Canada
  • 9 Department of Pediatrics, Western University, London, Canada
  • 10 Department of Medical Imaging, Western University, London, Canada
  • 11 Montreal Neurological Institute, McGill University, Montreal, Canada

The final, formatted version of the article will be published soon.

    In positron emission tomography (PET) imaging, the radioactive tracers used raise concerns about cumulative radiation exposure, particularly in longitudinal evaluations and in radiosensitive populations such as pediatric patients. Reducing the injected PET activity, however, leads to an unfavorable trade-off between radiation exposure and image quality, lowering the signal-to-noise ratio and degrading the images. Deep learning-based denoising approaches can be employed to recover signal in low-count PET images; nonetheless, most of these methods rely on structural or anatomical guidance from magnetic resonance imaging (MRI) and fail to effectively preserve global spatial features in denoised PET images without compromising the signal-to-noise ratio.

    In this study, we developed a novel PET-only deep learning framework, the Self-SiMilARiTy-Aware Generative Adversarial Framework (SMART), which leverages generative adversarial networks (GANs) and a self-similarity-aware attention mechanism to denoise [18F]-fluorodeoxyglucose ([18F]-FDG) PET images. The study design combined prospective and retrospective datasets. In total, 114 subjects were included: 34 patients who underwent [18F]-FDG PET imaging for drug-resistant epilepsy, 10 patients imaged for frontotemporal dementia indications, and 70 healthy volunteers. To denoise PET images effectively without anatomical detail from MRI, a self-similarity attention block (SSAB) was devised that learns distinctive structural and pathological features. The SSAB-enhanced features were then passed to the SMART GAN, which was trained to denoise the low-count PET images using each participant's standard-dose PET image as the reference. The trained network was evaluated using image quality measures including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), Fréchet inception distance (FID), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR).

    In comparison to the standard-dose images, SMART-PET achieved, on average, an SSIM of 0.984 ± 0.007, a PSNR of 38.126 ± 2.631 dB, an NRMSE of 0.091 ± 0.028, an FID of 0.455 ± 0.065, an SNR of 0.002 ± 0.001, and a CNR of 0.011 ± 0.011. Region-of-interest measurements obtained from datasets decimated to 10% of the original counts deviated by less than 1.4% from the ground-truth values.
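    As an illustrative sketch only: the abstract does not specify the SSAB architecture, so the block below follows the common non-local self-attention pattern, in which each spatial position of a feature map is re-weighted by its similarity to every other position. All class names, layer choices, and hyperparameters here are assumptions for illustration, not the authors' implementation.

        # Hypothetical sketch of a self-similarity attention block in PyTorch.
        # Each position's feature is replaced by a similarity-weighted sum over
        # all positions, added back to the input through a learnable residual.
        import torch
        import torch.nn as nn

        class SelfSimilarityAttentionBlock(nn.Module):
            def __init__(self, channels: int, reduction: int = 8):
                super().__init__()
                inner = max(channels // reduction, 1)
                self.query = nn.Conv2d(channels, inner, kernel_size=1)
                self.key = nn.Conv2d(channels, inner, kernel_size=1)
                self.value = nn.Conv2d(channels, channels, kernel_size=1)
                self.gamma = nn.Parameter(torch.zeros(1))  # residual weight

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                b, c, h, w = x.shape
                q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
                k = self.key(x).flatten(2)                    # (B, C', HW)
                attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
                v = self.value(x).flatten(2)                  # (B, C, HW)
                out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
                return self.gamma * out + x

    In a GAN-based denoising setup of this kind, such a block would typically sit inside the generator, with the low-count image as input and the standard-dose image as the training target.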
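    The scalar image-quality metrics reported above can be illustrated with scikit-image, as in the sketch below. The arrays are synthetic placeholders standing in for a matched standard-dose reference and a denoised slice; FID, which requires features from a pretrained Inception network, is omitted.

        # Illustrative computation of SSIM, PSNR, and NRMSE with scikit-image.
        import numpy as np
        from skimage.metrics import (
            structural_similarity,
            peak_signal_noise_ratio,
            normalized_root_mse,
        )

        rng = np.random.default_rng(0)
        standard_dose = rng.random((128, 128)).astype(np.float32)  # reference
        denoised = standard_dose + 0.01 * rng.standard_normal((128, 128)).astype(np.float32)

        data_range = float(standard_dose.max() - standard_dose.min())
        ssim = structural_similarity(standard_dose, denoised, data_range=data_range)
        psnr = peak_signal_noise_ratio(standard_dose, denoised, data_range=data_range)
        nrmse = normalized_root_mse(standard_dose, denoised)
        print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  NRMSE={nrmse:.3f}")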

    Keywords: SMART-PET, positron emission tomography (PET), frontotemporal dementia (FTD), drug-resistant epilepsy (DRE), generative adversarial networks (GANs), denoising, low-dose, deep learning

    Received: 23 Jul 2024; Accepted: 28 Oct 2024.

    Copyright: © 2024 Raymond, Zhang, Cabello, Liu, Moyaert, Burneo, Dada, Hicks, Finger, Soddu, Andrade, Jurkiewicz and Anazodo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Confidence Raymond, Multimodal Imaging of Neurodegenerative Diseases (MiND) Lab, Lawson Health Research Institute, London, Canada
    Udunna C. Anazodo, Multimodal Imaging of Neurodegenerative Diseases (MiND) Lab, Lawson Health Research Institute, London, Canada

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.