MINI REVIEW article

Front. Appl. Math. Stat., 12 May 2023
Sec. Mathematics of Computation and Data Science
This article is part of the Research Topic Machine Learning for Mathematical Modeling and Computation.

Data augmentation using generative adversarial networks for images and biomarkers in medicine and neuroscience

  • 1Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
  • 2Advanced Machine Intelligence Research Group, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
  • 3Evolutionary Computing Laboratory, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia

The fields of medicine and neuroscience often face challenges in obtaining a sufficient amount of diverse data for training machine learning models. Data augmentation can alleviate this issue by artificially synthesizing new data from existing data. Generative adversarial networks (GANs) provide a promising approach for data augmentation in the context of images and biomarkers. GANs can synthesize high-quality, diverse, and realistic data that can supplement real data in the training process. This study provides an overview of the use of GANs for data augmentation in medicine and neuroscience. The strengths and weaknesses of various GAN models, including deep convolutional GANs (DCGANs) and Wasserstein GANs (WGANs), are discussed. This study also explores the challenges of using GANs for data augmentation in medicine and neuroscience and ways to address them. Future work on this topic is also discussed.

1. Introduction

Generative adversarial networks (GANs) are a type of generative model built on deep learning techniques such as convolutional neural networks. According to Gui et al. [1], generative models can be divided into two categories: explicit density models and implicit density models. Explicit density models assume the existence of a distribution and train a model to represent this distribution or fit its parameters using true data, which allows the model to generate new examples based on the learned, clearly defined distribution. Implicit density models, on the other hand, do not estimate or fit the data distribution directly; instead, they generate instances from the distribution without a clear hypothesis and use these examples to modify the model. GANs belong to this implicit category.

Data augmentation refers to the generation of additional training data from existing data by transforming the original data to increase the volume, and possibly the diversity, of the training data and help the model learn more robustly. This is particularly useful when the training data are limited and overfitting may occur. Regularization, on the other hand, refers to the reduction of overfitting through the addition of constraints or penalties to the model. These techniques include L1 and L2 regularization, dropout, early stopping, and batch normalization, all aiming to prevent the model from memorizing the training data and instead encourage it to learn underlying patterns that will potentially generalize better to previously unseen data. While data augmentation and regularization both aim to improve the performance of a model, they operate at different levels: data augmentation operates at the level of the input data, while regularization operates at the level of the model parameters. Moreover, data augmentation increases the amount of data available for training, whereas regularization modifies the model to better handle the existing data.
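
To make this distinction concrete, the following minimal PyTorch sketch contrasts the two levels; the model, transforms, and hyperparameters are illustrative stand-ins rather than a setup from any reviewed study.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Input-level: data augmentation transforms each training image on the fly.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
])

# Parameter-level: regularization constrains the model itself,
# here via a dropout layer and an L2 penalty (weight_decay).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),          # dropout regularization
    nn.Linear(128, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2

image = torch.rand(1, 64, 64)            # stand-in for one training image
augmented = augment(image)               # a new, transformed view of that image
logits = model(augmented.unsqueeze(0))   # forward pass on the augmented sample
```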

Generative adversarial networks (GANs) have been gaining interest in the past several years due to their robust applications, which have made them a notably popular augmentation technique for both image and biosignal data. Therefore, in this study, the application of data augmentation using generative adversarial networks to images and biosignals in the medicine and neuroscience fields is reviewed to summarize the existing applications and state-of-the-art models, discuss the conducted experiments, and review the reported results. The structure of the article is as follows: Section 2 reviews the state-of-the-art, Section 3 presents the discussion, and Section 4 concludes the article.

2. State-of-the-art

One of the key strengths of GAN models is their ability to generate high-quality synthetic data that can be used for various purposes. In recent years, there have been many new developments in GAN models, including the introduction of new architectures, such as conditional GANs [2] and CycleGANs [3]. One of the most important advances in GAN models has been the development of improved training techniques, such as Wasserstein GANs [4] and self-attention GANs [5]. These techniques have led to more stable and efficient training as well as improved generation quality. In addition, the use of GAN models for semi-supervised learning has also been explored in recent years [6], providing a new approach for using limited labeled data to train deep learning models. In this section, the GAN models utilized in the articles are reviewed.

2.1. Conditional GAN

The conditional generative adversarial network (cGAN), introduced by Mirza and Osindero [2] in 2014, is a variation of the GAN that enables the creation of data conditioned on certain external information, such as class labels or data from another source. In the cGAN, both the generator and the discriminator receive extra input, such as a class label or an image, which is used to influence the generated data. This allows the model to generate samples with specific characteristics, such as specific object classes in an image. The generator is trained to create samples that can deceive the discriminator while also meeting the given conditions. The discriminator is trained to differentiate between real data and generated data, taking the given conditions into consideration. The generator and the discriminator keep updating their parameters until the generator can produce samples that are indistinguishable from real data for a given condition [2].
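
The conditioning mechanism can be illustrated with a minimal PyTorch sketch; the multilayer-perceptron architecture, embedding-based label injection via concatenation, and all dimensions below are illustrative assumptions, not the exact design of Mirza and Osindero [2].

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Maps (noise, class label) to a flat 'image'; the label embedding is
    concatenated to the noise vector, so samples are conditioned on it."""
    def __init__(self, noise_dim=100, n_classes=10, img_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )
    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class CondDiscriminator(nn.Module):
    """Scores (image, class label) pairs: real-and-matching vs. generated."""
    def __init__(self, n_classes=10, img_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(img_dim + n_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )
    def forward(self, x, labels):
        return self.net(torch.cat([x, self.embed(labels)], dim=1))

G, D = CondGenerator(), CondDiscriminator()
z = torch.randn(8, 100)
labels = torch.randint(0, 10, (8,))
fake = G(z, labels)       # 8 samples, each conditioned on its label
score = D(fake, labels)   # discriminator judges sample-label pairs
```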

A surge of articles using cGAN has emerged ever since it was first introduced. Respiratory signal augmentation using cGAN achieved a high accuracy of 98.87% in detecting lung disorders [7]. cGAN also proved to help increase the size of the dataset in classifying chest X-rays into six different categories (COVID-Mild, COVID-Medium, COVID-Severe, Normal, Pneumonia, and Tuberculosis) in a study by Mehta and Mehendale [8] and achieved an accuracy of 93.67%. In 2022, a study by Lee and Nishikawa [32] developed a cGAN to simulate mammograms (X-ray images of the breast) to detect mammographically occult (MO) cancer in women, achieving an area under the curve (AUC) of 0.77. The study observed that cGAN-simulated mammograms can help to detect MO cancer.

In a more recent publication, Jung et al. [9] synthesized high-quality 3D MR images at different stages of Alzheimer's disease; the quality of the generated images was then measured using the Fréchet Inception Distance (FID) and the Kernel Inception Distance (KID). An improved cGAN model proposed to generate labeled samples of facial images for facial emotion recognition obtained a structural similarity (SSIM) score of 0.929, demonstrating the effectiveness of the approach [10].

2.2. CycleGAN

The cycle generative adversarial network (CycleGAN) is used for image-to-image translation without a paired dataset. Unlike paired image-to-image translation models, CycleGANs do not require the same sample to be present in both domains for the translation to occur. Instead, they use a cycle consistency loss to guarantee that the generated images can be converted back to their original domain. A CycleGAN has two generators and two discriminators, one pair for each image domain. The generators are trained to translate images from one domain to the other, while the discriminators are trained to distinguish real from generated images. The cycle consistency loss makes sure that the generated image, after being converted back to its original domain, is similar to the original image [3].
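
The cycle consistency idea reduces to a reconstruction penalty on round-trip translations, as in the PyTorch sketch below; the linear "generators" are placeholders, and the L1 loss and weight lambda_cyc = 10 follow common practice rather than any specific reviewed study.

```python
import torch
import torch.nn as nn

# Stand-in generators for the two translation directions (domain A <-> B).
G_AB = nn.Sequential(nn.Linear(64, 64), nn.Tanh())  # translates A -> B
G_BA = nn.Sequential(nn.Linear(64, 64), nn.Tanh())  # translates B -> A
l1 = nn.L1Loss()

real_A = torch.rand(4, 64)
real_B = torch.rand(4, 64)

# Cycle consistency: translating to the other domain and back should
# reconstruct the original sample, so no paired data are needed.
cycle_loss = l1(G_BA(G_AB(real_A)), real_A) + l1(G_AB(G_BA(real_B)), real_B)

# The full generator objective adds this (weighted) to the two
# adversarial terms from the discriminators:
lambda_cyc = 10.0
# g_loss = adv_loss_AB + adv_loss_BA + lambda_cyc * cycle_loss
```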

A study in 2019 used CycleGAN to generate synthetic MRI images of brain tumors; across three datasets, the highest sensitivity achieved was 80.93 and the highest specificity was 80.43. The study showed that the image classification task gained better results when the data were augmented using CycleGAN [11]. CycleGAN was used to generate synthetic MRI images of subjects with glioblastoma to classify high-risk and low-risk groups and to predict whether the subjects would survive for more than 3 years; the prediction model reached up to 94% accuracy [12]. The model was also applied in a semi-supervised manner for opacity classification of diffuse lung diseases, where computerized tomography (CT) images were labeled with classes and the transformed images were evaluated using the F-measure; the proposed method obtained an average score of 0.736 across the source and target domains [13].

Another study demonstrated that the use of CycleGAN optimized the detection and localization of retinal pathologies on color fundus photographs, which in turn improved the detection efficiency of retinal diseases [14]. The study obtained high test accuracy, F1 score, and AUC values of 97.3%, 0.946, and 0.992, respectively. CycleGAN was also employed to tackle the problem of data scarcity in speech emotion recognition, where synthetic data were generated and added to the original dataset. The quality of the generated data was tested using an emotion classification task and showed an accuracy of 83.33%, significantly higher than the baseline method [15].

2.3. Wasserstein GAN

The Wasserstein generative adversarial network (WGAN), introduced in 2017 by Arjovsky et al. [4], is a type of GAN that solves some of the difficulties in training traditional GANs, such as instability and mode collapse. WGANs employ the Earth Mover's Distance, also known as the Wasserstein distance, as the loss function for both the generator and the discriminator. The Wasserstein distance gauges the effort needed to convert one probability distribution into another, making it a more stable loss function than the alternatives used in traditional GANs. In a WGAN, the discriminator's objective is to estimate the Wasserstein distance between the real data distribution and the generated data distribution. The generator, on the other hand, is trained to create samples that mislead the discriminator, driving the Wasserstein distance between the two distributions toward convergence [16].
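
A minimal sketch of one WGAN critic update, assuming the original weight-clipping formulation of Arjovsky et al. [4]; the network sizes, optimizer settings, and data are placeholders.

```python
import torch
import torch.nn as nn

# A tiny critic: unlike a standard GAN discriminator, it outputs an
# unbounded score (no sigmoid) used to estimate the Wasserstein distance.
critic = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
generator = nn.Sequential(nn.Linear(16, 64), nn.Tanh())
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.rand(32, 64)
fake = generator(torch.randn(32, 16)).detach()  # freeze G for the critic step

# Critic maximizes E[f(real)] - E[f(fake)], so we minimize the negative.
critic_loss = -(critic(real).mean() - critic(fake).mean())
opt_c.zero_grad()
critic_loss.backward()
opt_c.step()

# Weight clipping crudely enforces the Lipschitz constraint [4];
# WGAN-GP [16] replaces this with a gradient penalty.
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)

# The generator step then minimizes -E[f(G(z))] (not shown).
```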

A study in 2019 employed WGAN to increase sample diversity in an effort to improve automatic epileptic electroencephalogram (EEG) detection. The WGAN model was used to augment multi-channel time-series EEG recordings, and the enlarged dataset was used in the classification task, which yielded an accuracy of 84% [17]. Another study addressed the challenge of limited labeled training samples in predicting drivers' cognitive states by using WGAN to generate EEG data; the proposed solution achieved an average AUC score of 66.49%, an improvement over the baseline [18]. WGAN was implemented to generate synthetic electrocardiogram (ECG) signals by Munia et al. [19] and evaluated using FID and F-measure metrics; binary classification of anterior myocardial infarction was then performed using the ECG signals to verify performance, with a best FID score of 88.27 and an F-measure of 85.12%. Artificial EEG data were also created using a WGAN trained on a dataset containing EEG recordings of 20 normal and autistic subjects; the highest accuracy for the classification task with data augmentation was 87.57%, the training accuracy using the KNN classifier [20].

2.4. Deep convolutional GAN

A deep convolutional generative adversarial network (DCGAN) is made up of two components: a generator network and a discriminator network. The generator network takes a random noise vector as input and creates an image through a series of transposed convolutional and normalization layers. The discriminator network, on the other hand, takes an image as input and predicts whether it is real or fake using a series of convolutional and normalization layers, followed by a dense layer. The generator and discriminator are trained to work against each other, with the generator trying to produce images that the discriminator cannot tell are not real, while the discriminator attempts to identify the generated images as fake [21].
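
A compact sketch of this architecture for 32 x 32 single-channel images, loosely following the DCGAN recipe [21]; the exact layer widths and image size are illustrative choices.

```python
import torch
import torch.nn as nn

# Generator: noise vector -> 32x32 image via transposed convolutions
# with batch normalization, in the spirit of the DCGAN recipe [21].
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),    # 8x8 -> 16x16
    nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),     # 16x16 -> 32x32
    nn.Tanh(),
)

# Discriminator: mirror image with strided convolutions, then a dense
# layer producing a real/fake probability.
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),                      # 32x32 -> 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2), # -> 8x8
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),  # -> 4x4
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, 1), nn.Sigmoid(),
)

z = torch.randn(8, 100, 1, 1)       # noise reshaped as a 1x1 spatial map
img = generator(z)                  # -> (8, 1, 32, 32)
prob_real = discriminator(img)      # -> (8, 1) real/fake probabilities
```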

In 2020, a study by Desai et al. [33] utilized DCGAN to generate new mammography images for the early detection of breast cancer and obtained an accuracy of 87% when using the augmented data, an improvement of 8.77% compared to the classification task without the DCGAN-generated images. Another study created synthetic chest X-ray images using DCGAN to diagnose pneumonia, and when the augmented data were used in the classification task, it achieved a high accuracy of 94.5% [22]. DCGAN was also used to generate dermoscopic images to aid the detection of melanoma, and with only 200 generated labeled images, the proposed method managed to classify malignant and benign samples with an accuracy of 75.25% [23].

2.5. Least squares GAN

The least squares generative adversarial network (LSGAN) is a variant of the GAN that employs a different loss function than the traditional GAN. Instead of using the binary cross-entropy loss, which can cause issues such as mode collapse and vanishing gradients, the LSGAN utilizes a least squares loss. The discriminator is trained to regress its outputs toward target values for real and generated data instead of just classifying them as real or fake. The generator, in turn, is trained to minimize the mean squared error between the discriminator's outputs on generated data and the target value for real data, providing smoother gradients and reducing the risk of mode collapse compared to the traditional GAN loss function [24].
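
The loss substitution is small enough to show directly; in the sketch below the discriminator outputs are random stand-ins, and the 0/1 targets are the common label choice, with the exact formulation given in Mao et al. [24].

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
disc_out_real = torch.rand(16, 1)   # stand-in discriminator outputs on real data
disc_out_fake = torch.rand(16, 1)   # stand-in discriminator outputs on fakes

# LSGAN discriminator: regress outputs on real data toward 1 and on
# generated data toward 0 with a least squares loss, replacing
# binary cross-entropy.
d_loss = 0.5 * (mse(disc_out_real, torch.ones_like(disc_out_real))
                + mse(disc_out_fake, torch.zeros_like(disc_out_fake)))

# LSGAN generator: push the discriminator's outputs on fakes toward 1
# (in practice, recomputed after the discriminator update).
g_loss = 0.5 * mse(disc_out_fake, torch.ones_like(disc_out_fake))
```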

A study proposed the use of LSGAN to generate biopotential signals, namely electromyography (EMG), skin conductance level (SCL), and ECG signals, to detect pain intensity levels. The generated data were then classified into four pain levels using a support vector machine and obtained an accuracy of 82.8% across all levels of pain, an increase of 44.2% compared to the original data [25].

2.6. Hybrid GANs

The advantage of using GANs for data augmentation is their versatility: while individual models have their limitations, combinations can be made to overcome them, and many researchers have proposed combining different GAN models to create hybrid models. For example, a study proposed combining WGAN with LSGAN in a capsule network-based model named CapGAN. The baseline framework of CapGAN is a deep convolutional network architecture utilized in both the generator and the discriminator, modeled after a publicly available DCGAN network. To adapt it for medical image synthesis, the standard convolutions in the DCGAN's discriminator are substituted with capsule layers. CapGAN training employs a least squares loss for both the generator and the discriminator to improve stability and generate high-quality images. The method demonstrated exceptional performance in comparison with DCGAN and LSGAN alone, where the synthetic MR images achieved a classification accuracy of 89.20% for prostate cancer [26].

In 2021, Zhang et al. [34] conducted a study that introduced the multiple generator conditional Wasserstein generative adversarial network (MG-CWGAN). This study used the MG-CWGAN model for data augmentation of EEG data. By including label-based constraints in the model, the generators are able to learn various features and patterns of the real data from different viewpoints. To reduce computational complexity and maintain underlying information, most of the generator parameters are shared. The convergence of the model was improved by changing the gradient penalty term to a zero-centered gradient penalty term. As the models learn more patterns from the real data, they are expected to generate artificial data that are less noisy and more similar in distribution to the real data, while also preserving diversity among the same type of data. The results of this study showed that the proposed model was efficient, achieving an accuracy of 84.00% with synthetic data.
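
As an illustration of the zero-centered idea, the sketch below computes an R1-style penalty on the gradient norm at real samples, pushing it toward zero rather than toward one as in WGAN-GP; this is an assumption of one common zero-centered variant, not necessarily the exact term used in MG-CWGAN [34].

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(32, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
real = torch.rand(16, 32, requires_grad=True)  # e.g., flattened EEG segments

# Zero-centered gradient penalty: penalize the squared norm of the
# critic's gradient at real samples, driving it toward 0 (rather than
# toward 1, the target in the original WGAN-GP penalty).
scores = critic(real)
grads = torch.autograd.grad(outputs=scores.sum(), inputs=real,
                            create_graph=True)[0]
zero_centered_gp = (grads.norm(2, dim=1) ** 2).mean()

gamma = 10.0  # illustrative penalty weight
# critic_loss = wasserstein_term + (gamma / 2) * zero_centered_gp
```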

A study by Mukherjee et al. [27] highlighted the issue that standalone GANs may only capture limited features in an image's latent representation. To address this, the authors developed AGGrGAN, an aggregation of three base GANs: two variants of DCGAN and a WGAN. The model is used to generate synthetic MRI scans of brain tumors, and the style transfer technique is applied to enhance image similarity. AGGrGAN addresses the challenge of limited data availability and effectively captures the information variance in multiple representations of the raw images, as evidenced by its application on two datasets, which achieved SSIM scores of 0.57 and 0.83, respectively.

3. Discussion

One of the strengths of GANs in the fields of medicine and neuroscience is their ability to overcome the limitations of data availability. In many cases, obtaining real medical images is time-consuming and expensive, making it difficult to train machine learning models. By using GANs to generate synthetic images, the amount of real data required for training can be reduced, making it possible to perform data analysis and model training with limited real data.

Another strength of GANs is their ability to capture complex patterns and structures in medical and neuroscientific data. GANs are capable of learning both the local and global features of real data, making them well-suited for generating high-quality synthetic images that closely resemble real medical images. Moreover, by combining multiple GAN models, it is possible to capture a broader range of patterns in real data and generate more diverse synthetic images.

Despite the many strengths of GANs, there are also several weaknesses to consider. One of the major limitations of GANs is their instability during the training process, which can result in mode collapse or other training failures. Mode collapse occurs when the generator generates only a limited set of outputs, causing the model to fail to capture the diversity of the real data. Additionally, GANs are computationally demanding, requiring large amounts of computing resources and time to train.

Another consideration is that the reliability of generated data is a challenge in healthcare. The use of GANs to generate medical images may not be trusted by clinicians because the mechanism behind the generator and the discriminator, which are deep neural networks, is not fully understood. In medical images, intensities have specific meanings: in CT data, for example, every intensity can be mapped to the Hounsfield scale and represents certain tissues. However, GANs currently lack this mapping and association, leading to a lack of trust by healthcare professionals in images synthesized with GANs. In contrast, in computer vision, where overall appearance is the primary concern, GANs are considered more suitable.

In addition, as mentioned in the previous section, individual GAN models have their own weaknesses. Even cGAN, one of the most popular GAN models, is difficult to train and can be prone to instability and mode collapse [28]. To address this challenge, various training techniques and architectures have been developed, such as gradient penalty cGANs [29] and progressive growing cGANs [30]. Another challenge is the limited interpretability of cGAN models, which can make it difficult to understand how the models generate new data [31]. Like cGAN, CycleGAN is also susceptible to mode collapse and instability. CycleGANs are limited in their applications and may not work well for all image-to-image translation tasks; they are best suited for tasks where there is a clear one-to-one mapping between the input and output domains [3].

As is the case with most GAN models, WGAN is prone to mode collapse. To address this challenge, techniques such as mini-batch discrimination, weight clipping, and one-sided label smoothing can be used. For example, Arjovsky et al. [4] proposed the use of weight clipping to enforce the Lipschitz constraint in the WGAN. The solution of using hybrid GAN models is presented in this study, but the included studies are not without weaknesses either; as Mukherjee et al. [27] stated in their study, the proposed AGGrGAN is limited to combining only two images. Table 1 summarizes all the articles selected in this review and is intended to provide the reader with a snapshot of the main experimental elements of the articles reviewed.

Table 1. Articles of data augmentation using GANs for images and biosignals in medicine and neuroscience.

4. Conclusion

GAN is a robust method of augmenting data and consists of many variations, each suited to a different purpose. It was found that the most commonly used GAN models are cGAN and CycleGAN; nevertheless, numerous studies also employ WGAN and DCGAN models. However, these variants share the limitations inherent to GAN models, which are prone to mode collapse and training instability. Thus, recent research has shown a trend toward creating hybrid models, and these models evidently show potential in addressing the data scarcity issue in the medicine and neuroscience fields.

Author contributions

MM: writing—original draft and investigation. JT: writing—review and editing and supervision. Both authors contributed to the article and approved the submitted version.

Funding

The APC was funded by Universiti Malaysia Sabah.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Gui J, Sun Z, Wen Y, Tao D, Ye J. A review on generative adversarial networks: algorithms, theory, and applications. In: IEEE Transactions on Knowledge and Data Engineering. Piscataway, NJ: IEEE (2022). p. 1.

2. Mirza M, Osindero S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. (2014).

3. Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE (2017). p. 2223–32. doi: 10.1109/ICCV.2017.244

4. Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. New York, NY: ACM (2017). p. 214–23.

5. Zhang H, Goodfellow I, Metaxas D, Odena A. Self-attention generative adversarial networks. In: International Conference on Machine Learning. PMLR (2019). p. 7354–63.

6. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. Adv Neural Inform Process Syst. (2016) 29.

7. Jayalakshmy S, Sudha GF. Respiratory signal classification by cGAN augmented EMD-scalograms. In: 2021 IEEE 2nd International Conference on Applied Electromagnetics, Signal Processing, and Communication (AESPC). Piscataway, NJ: IEEE (2021). doi: 10.1109/AESPC52704.2021.9708484

8. Mehta T, Mehendale N. Classification of X-ray images into COVID-19, pneumonia, and TB using cGAN and fine-tuned deep transfer learning models. Res Biomed Eng. (2021) 37:803–13. doi: 10.1007/s42600-021-00174-z

9. Jung E, Luna M, Park SH. Conditional GAN with 3D discriminator for MRI generation of Alzheimer's disease progression. Pattern Recogn. (2023) 133:109061. doi: 10.1016/j.patcog.2022.109061

10. Sun Z, Zhang H, Bai J, Liu M, Hu Z. A discriminatively deep fusion approach with improved conditional GAN (IM-cGAN) for facial expression recognition. Pattern Recogn. (2023) 135:109157. doi: 10.1016/j.patcog.2022.109157

11. Xu Z, Qi C, Xu G. Semi-supervised attention-guided CycleGAN for data augmentation on medical images. In: 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Piscataway, NJ: IEEE (2019). doi: 10.1109/BIBM47256.2019.8982932

12. Fu X, Chen C, Li D. Survival prediction of patients suffering from glioblastoma based on two-branch DenseNet using multi-channel features. Int J Comput Assist Radiol Surg. (2021) 16:207–17. doi: 10.1007/s11548-021-02313-4

13. Mabu S, Miyake M, Kuremoto T, Kido S. Semi-supervised CycleGAN for domain transformation of chest CT images and its application to opacity classification of diffuse lung diseases. Int J Comput Assist Radiol Surg. (2021) 16:1925–35. doi: 10.1007/s11548-021-02490-2

14. Zhang Z, Ji Z, Chen Q, Yuan S, Fan W. Joint optimization of CycleGAN and CNN classifier for detection and localization of retinal pathologies on color fundus photographs. IEEE J Biomed Health Inform. (2022) 26:115–26. doi: 10.1109/JBHI.2021.3092339

15. Shilandari A, Marvi H, Khosravi H, Wang W. Speech emotion recognition using data augmentation method by cycle-generative adversarial networks. Signal Image Video Process. (2022) 16:1955–62. doi: 10.1007/s11760-022-02156-9

16. Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville A. Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press (2017). p. 5767–77.

17. Wei Z, Zou J, Zhang J, Xu J. Automatic epileptic EEG detection using convolutional neural network with improvements in time-domain. Biomed Signal Process Control. (2019) 53:101551. doi: 10.1016/j.bspc.2019.04.028

18. Panwar S, Rad P, Quarles J, Golob E, Huang Y. A semi-supervised Wasserstein generative adversarial network for classifying driving fatigue from EEG signals. In: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). Piscataway, NJ: IEEE (2019). doi: 10.1109/SMC.2019.8914286

19. Munia MS, Nourani M, Houari S. Biosignal oversampling using Wasserstein generative adversarial network. In: 2020 IEEE International Conference on Healthcare Informatics (ICHI). Piscataway, NJ: IEEE (2020). doi: 10.1109/ICHI48887.2020.9374315

20. Bouallegue G, Djemal R. EEG data augmentation using Wasserstein GAN. In: 2020 20th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). Piscataway, NJ: IEEE (2020). doi: 10.1109/STA50679.2020.9329330

21. Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. (2015).

22. Srivastav D, Bajpai A, Srivastava P. Improved classification for pneumonia detection using transfer learning with GAN based synthetic image augmentation. In: 2021 11th International Conference on Cloud Computing, Data Science and Engineering (Confluence). Piscataway, NJ: IEEE (2021). doi: 10.1109/Confluence51648.2021.9377062

23. Agarwal N, Singh V, Singh P. Semi-supervised learning with GANs for melanoma detection. In: 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS). Piscataway, NJ: IEEE (2022). doi: 10.1109/ICICCS53718.2022.9787990

24. Mao X, Li Q, Xie H, Lau RYK. Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE (2017). p. 2794–802. doi: 10.1109/ICCV.2017.304

25. Al-Qerem A. An efficient machine-learning model based on data augmentation for pain intensity recognition. Egyp Inform J. (2020) 21:241–57. doi: 10.1016/j.eij.2020.02.006

26. Yu H, Zhang X. Synthesis of prostate MR images for classification using capsule network-based GAN model. Sensors. (2020) 20:5736. doi: 10.3390/s20205736

27. Mukherjee D, Saha P, Kaplun D, Sinitca A, Sarkar R. Brain tumor image generation using an aggregation of GAN models with style transfer. Sci Rep. (2022) 12:9141. doi: 10.1038/s41598-022-12646-y

28. Yang D, Hong S, Jang Y, Zhao T, Lee H. Diversity-sensitive conditional generative adversarial networks. arXiv preprint arXiv:1901.09024. (2019).

29. McKeever S, Walia MS. Synthesising Tabular Datasets Using Wasserstein Conditional GANs with Gradient Penalty (WCGAN-GP). Technological University Dublin (2020).

30. Han C, Murao K, Noguchi T, Kawata Y, Uchiyama F, Rundo L, et al. Learning more with less: conditional PGGAN-based data augmentation for brain metastases detection using highly-rough annotation on MR images. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. ACM (2019). p. 119–27.

31. Zhang Z, Schomaker L. Optimized latent-code selection for explainable conditional text-to-image GANs. In: 2022 International Joint Conference on Neural Networks (IJCNN). IEEE (2022). p. 1–9.

32. Lee J, Nishikawa RM. Identifying women with mammographically-occult breast cancer leveraging GAN-simulated mammograms. IEEE Trans Med Imaging. (2022) 41:225–36. doi: 10.1109/TMI.2021.3108949

33. Desai SD, Giraddi S, Verma N, Gupta P, Ramya S. Breast cancer detection using GAN for limited labeled dataset. In: 2020 12th International Conference on Computational Intelligence and Communication Networks (CICN). Piscataway, NJ: IEEE (2020). doi: 10.1109/CICN49253.2020.9242551

34. Zhang A, Su L, Zhang Y, Fu Y, Wu L, Liang S. EEG data augmentation for emotion recognition with a multiple generator conditional Wasserstein GAN. Complex Intell Syst. (2021) 8:3059–71. doi: 10.1007/s40747-021-00336-7

Keywords: data augmentation, generative adversarial networks, medical images, biosignals, disorder classification, disease prediction

Citation: Meor Yahaya MS and Teo J (2023) Data augmentation using generative adversarial networks for images and biomarkers in medicine and neuroscience. Front. Appl. Math. Stat. 9:1162760. doi: 10.3389/fams.2023.1162760

Received: 10 February 2023; Accepted: 17 April 2023;
Published: 12 May 2023.

Edited by:

Zhennan Zhou, Peking University, China

Reviewed by:

Alex Jung, Aalto University, Finland

Copyright © 2023 Meor Yahaya and Teo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jason Teo, jtwteo@ums.edu.my
