Corrigendum: Artificial Intelligence for Monte Carlo Simulation in Medical Physics
- 1 University of Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, France
- 2 University of Lyon, Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, Villeurbanne, France
Monte Carlo simulation of particle tracking in matter is the reference simulation method in the field of medical physics. It is heavily used in various applications such as 1) patient dose distribution estimation in different therapy modalities (radiotherapy, protontherapy or ion therapy) or for radio-protection investigations of ionizing radiation-based imaging systems (CT, nuclear imaging), and 2) the development of numerous imaging detectors, in X-ray imaging (conventional CT, dual-energy, multi-spectral, phase contrast … ), nuclear imaging (PET, SPECT, Compton camera) or even advanced specific imaging methods such as proton/ion imaging, or prompt-gamma emission distribution estimation in hadrontherapy monitoring. Monte Carlo simulation is a key tool both in academic research labs and in industrial research and development services. Because of the very nature of the Monte Carlo method, which involves iterative and stochastic estimation of numerous probability density functions, the computation time is high. Despite continuous and significant progress in computer hardware and the (relative) ease of exploiting code parallelism, computation time remains an issue for highly demanding and complex simulations. Hence, for decades, variance reduction techniques have been proposed to accelerate the computation for specific configurations. In this article, we review the recent use of Artificial Intelligence methods for Monte Carlo simulation in medical physics and their main associated challenges. In the first section, the main principles of some neural network architectures such as Convolutional Neural Networks or Generative Adversarial Networks are briefly described, together with a literature review of their applications in the domain of medical physics Monte Carlo simulations. In particular, we focus on dose estimation with convolutional neural networks, dose denoising from low statistics Monte Carlo simulations, detector modelling and event selection with neural networks, and generative networks for source and phase space modelling. The expected benefits of these approaches are discussed. In the second section, we focus on the current challenges that still arise in this promising field.
1 Introduction
Techniques based on deep learning have attracted huge interest for several years, showing in particular significant progress in computer vision. Many medical applications have adopted them (see Shen et al. [132] for a recent review) and a lot of research is currently underway. These recent developments around machine learning in medical physics have found applications in the field of Monte Carlo simulations. In this work, we review and discuss the use of artificial intelligence, or more specifically machine learning, for Monte Carlo simulation of particle transport, especially in the context of medical physics. Links to other fields such as particle physics, nuclear physics or solid state physics also exist, but they are beyond the scope of this work.
The article is structured in three parts: Sections 1.1 and 1.2 give a brief introduction to the principles of Monte Carlo simulation and deep learning, Section 2 presents a literature review in the context of medical physics, and Section 3 discusses current challenges.
1.1 Monte Carlo Modelling in Medical Physics
Monte Carlo codes in medical physics are similar to those used in the high energy physics (HEP) community. Specifically, the simulation engine simulates the transport of particles, mainly photons and light charged particles, across a set of geometrical objects made of well-defined materials, modeling the physical interactions between particles and matter. The transport is performed particle by particle in a step-by-step fashion. For every particle, at each step, stochastic models describing physical interactions (such as Compton scattering, the photoelectric effect, ionisation, etc.) are repeatedly evaluated based on databases of cross-sections. Thanks to this approach, the quantities determined via simulation, for example the absorbed dose distribution or the number of detected particles, are very accurately estimated even in complex geometries. Depending on the complexity of the simulated configuration, millions or even billions of particles must be tracked to reach an acceptable statistical convergence, making the whole process usually very long.
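As a purely illustrative sketch of this step-by-step scheme (not code from any of the toolkits discussed below), the following Python snippet tracks single photons with a toy two-process model; the attenuation coefficients and the Compton energy-loss rule are placeholder values, not tabulated cross-sections.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder attenuation coefficients (1/cm), NOT tabulated cross-section data.
MU_COMPTON = 0.16
MU_PHOTOELECTRIC = 0.01
MU_TOTAL = MU_COMPTON + MU_PHOTOELECTRIC

def transport_photon(energy_kev, max_steps=1000):
    """Track one photon step by step: sample a free path length from the total
    attenuation, then sample which interaction occurs at the end of the step."""
    depth = 0.0
    for _ in range(max_steps):
        depth += rng.exponential(1.0 / MU_TOTAL)     # free path length (exponential law)
        if rng.uniform() < MU_PHOTOELECTRIC / MU_TOTAL:
            return depth, "photoelectric (absorbed)"
        energy_kev *= rng.uniform(0.5, 1.0)          # toy Compton energy loss
        if energy_kev < 10.0:                        # crude tracking cut
            return depth, "locally absorbed (cut)"
    return depth, "still alive"

depths = [transport_photon(100.0)[0] for _ in range(100_000)]
print(f"mean depth of last interaction: {np.mean(depths):.2f} cm")
```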
The use of Monte Carlo techniques for medical physics started to become increasingly popular in the late 1970s, in particular for the modelling of imaging systems in nuclear medicine, for the characterization of particle beam accelerators in radiotherapy, and for calculating the absorbed dose in patients for planning treatment [5, 14, 115, 130]. Since then, Monte Carlo simulations have become a widespread tool in research and development (R&D) for the design of nuclear imaging systems and dose calculation engines in treatment planning systems (TPS) [19, 41, 129, 139, 148].
An example of system development where Monte Carlo simulations are involved in the design, the control and test of the devices, and the tuning of reconstruction algorithms is the new generation of whole-body PET scanners. Prototypes currently under development include the EXPLORER [10] at UC Davis (United States), the PennPET [64] in Philadelphia (United States), the PET20.0 [146] in Ghent (Belgium) and J-PET [72] in Krakow (Poland). With regard to TPS, Monte Carlo simulations are often necessary to characterize the beam lines and the resulting particle phase spaces (photons or charged particles), to determine the dose point-kernels in analytical dose engines, or to directly calculate the absorbed dose in patients [129, 148]. The great accuracy of Monte Carlo calculations is particularly crucial for new radiotherapy protocols, such as hypo-fractionation [95], “flash” radiotherapy [74], and hadrontherapy (proton or ions [13, 50]) which involve very high dose rates and a high spatial conformation.
R&D activities in the field of Monte Carlo simulations have resulted in the development of generic computer codes, i.e. which allow the user to simulate a wide range of particles, energies, geometrical elements and physical interactions (EGSnrc [65], MCNPX [55], Penelope [120], Fluka [18], Geant4 [1, 4], Gate [57, 123, 124], etc.). The accuracy of the underlying physical models and cross-section databases has continuously been improved, also thanks to new experimental data. To counter the low efficiency of Monte Carlo simulation techniques, variance reduction techniques (VRT) [129, 139] have been developed and continue to be proposed to speed up computation times at a given precision.
The development of increasingly sophisticated acquisition systems and finer representations of patient data requires complex modelling that is costly in computer resources. Monte Carlo codes dedicated to a specific application also exist and usually offer better computational performance than generic codes, but they are in general restricted to the targeted applications. Most research teams rely on the latter for their work.
Monte Carlo simulation is inherently parallel because particle histories are treated as independent from each other. This is a major advantage for accelerating computations [148]. Powerful computing infrastructures (clusters) can thus be used by researchers to obtain Monte Carlo simulation results in an acceptable time. The recent enthusiasm for scientific computing on graphics processing units (GPUs) has also reached several Monte Carlo developments [16, 45, 89, 118], but the codes ported to GPUs tend to be difficult to maintain and partly lose generality by limiting themselves to well-defined applications.
1.2 Deep Learning Principles
Deep learning [17, 47, 76, 128] is a machine learning method performing supervised, unsupervised or semi-supervised learning tasks, in which the learning takes place across many different stages, as for example defined in [128]. It is most commonly accomplished using neural networks. A neural network is composed of connected neurons, typically (but not necessarily) organized in layers. Connections between neurons have associated weights, and each neuron is associated with an activation function which generates the neuron’s output, e.g. a non-linear function mapping from an open into a closed real domain (e.g., values bounded between zero and one). The input to a neuron’s activation function is the weighted sum over the outputs of all the connected neurons (belonging to the previous layer in a fully connected feedforward net), generating a complex mapping between the network’s inputs and outputs. For a fully connected layer, this can be written as x(i) = f(W(i) x(i−1) + b(i)) (Eq. 1), where x(i), W(i) and b(i) represent respectively the output, the weight matrix and the bias of layer i, whereas f stands for the activation function, which is applied element-wise. The number of neurons, the way they are connected (layers), the choice of activation functions and other parameters are referred to as the “network architecture”. The weight values of the connections are parameters that are determined during the training phase.
Indeed, training the model means optimizing the value of every weight in order to adapt the network to a given task. This learning process uses a training dataset as input which, for supervised learning, groups pairs of input-output samples. Optimizing the network is typically performed by stochastic gradient descent, where weights are updated using backpropagation (in a feedforward network), which computes the gradient of a loss function with respect to the weights of the network. The loss function is chosen depending on the problem at hand, for example to quantify how well the current model prediction matches the training dataset or, indirectly, to measure a distance between the current and the expected distribution.
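As a generic illustration of this training loop (assuming PyTorch, and not reproducing any specific work cited here), the sketch below fits a small fully connected network to a synthetic regression task with stochastic gradient descent and backpropagation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic supervised dataset: pairs of inputs and target outputs.
x = torch.rand(1024, 4)                        # e.g. four arbitrary input features
y = (x ** 2).sum(dim=1, keepdim=True)          # synthetic "ground truth"

model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()                         # loss chosen for a regression problem
loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

for epoch in range(200):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)          # how well predictions match the data
        loss.backward()                        # backpropagation: d(loss)/d(weights)
        optimizer.step()                       # stochastic gradient descent update
```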
Convolutional neural networks (CNNs) [73, 77, 156] are a well-known approach to deal with high-dimensional input data such as images. They are regularized versions of (fully connected) networks based on convolution kernels that slide along input features and provide an activation when some specific type of feature is detected at some spatial position in the input image. Hence, shared weights and local connections reduce the number of parameters and can thereby simplify the training process, improve generalisation, and reduce overfitting. A CNN architecture is composed of several building blocks (convolution layers, pooling layers, fully connected layers, activation functions, loss function, etc.) that must be selected and assembled into a network for each task.
Generative Adversarial Networks (GANs) are special deep neural network architectures recently reported [48] that, once trained, can be used to generate data with similar statistics as the training set. A GAN consists of two models that are trained simultaneously: a generative model G that aims to reproduce a targeted data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from the generative model. The discriminator D is trained to maximize the probability of correctly identifying samples from the training data as real and those generated by G as fake. The generator G is trained to produce data samples distributed similarly to the training data distribution. Once trained, the resulting G model is able to produce sets of samples that are supposed to belong to the underlying probability distribution of the targeted data represented by the training dataset. A review can, for example, be found in [31]. This type of architecture is frequently used in multiple applications, in particular in the synthesis of photorealistic images or, for example in the medical physics field, to generate synthetic CT from MR images [80]. In the field of Monte Carlo particle tracking simulations in medical physics, several works have been proposed and will be discussed in the next sections.
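A minimal GAN training loop could look like the following sketch; the two-dimensional toy “data” distribution, the latent dimension and the network sizes are arbitrary illustrative choices (PyTorch assumed), not a published configuration.

```python
import torch
import torch.nn as nn

def sample_real(n):
    """Toy target distribution standing in for real training data (2 features)."""
    return torch.cat([0.1 + 0.9 * torch.rand(n, 1),      # e.g. a bounded "energy"
                      0.05 * torch.randn(n, 1)], dim=1)  # e.g. a narrow angular spread

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, fake = sample_real(128), G(torch.randn(128, 8))
    # D learns to label real samples as 1 and generated samples as 0.
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # G learns to make D classify its samples as real.
    loss_g = bce(D(G(torch.randn(128, 8))), torch.ones(128, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```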
2 Literature Review
Within the High Energy Physics (HEP) community, a lot of effort has already been made to improve and accelerate Monte Carlo simulation with the help of machine learning (including deep learning) for various applications, in particular around the Geant4 code [44]. Among the various examples: simulation of particle showers [108], modelling the response of detectors [144], simulation of jet pairs at the LHC [39], nuclear interaction modelling [30], condensed matter physics [25, 133], etc. Interested readers may, for example, refer to several reviews [2, 3, 22, 51, 114] or to https://iml-wg.github.io/HEPML-LivingReview.
To our knowledge, no such review exists for the medical physics field. In the following sections, we thus review works which combine machine learning with Monte Carlo simulations in medical physics. Of course, particle transport simulation via Monte Carlo in HEP and medical physics share many similarities, and exchanges among researchers working in these different fields would be desirable in order to share new knowledge and discoveries. Some of the works reviewed in the following deal with input data that are not images, but rather sets of particle properties. Table 1 summarizes the type of input that is considered for each application. The motivation behind many of the presented works is to speed up computations, e.g. dose calculations or image reconstructions, to the order of minutes rather than hours or days. Another motivation is to improve detector quality by better event selection or reconstruction.
TABLE 1. AI-based applications related to Monte Carlo simulations and their corresponding input data type. The word “particles” as input type refers to a vector of particle properties such as energy, position, direction, weight, etc. CNN stands for convolutional neural networks and MLP stands for multi-layer perceptron.
2.1 AI-Based Dose Computation
Different studies have used Monte Carlo simulations and CNNs to estimate the dose distribution in imaging and radiotherapy. The general idea is to develop a fast neural network as an alternative to computationally intensive simulations. Typically, dose distributions computed with Monte Carlo simulations are used to generate large training and validation sets from CT images and treatment plans. For example, Lee et al. [79] proposed deep learning-based methods to estimate the absorbed dose distribution for internal radiation therapy treatments, i.e., where the radiation source is a radionuclide injected into the patient. A CNN was trained from PET and CT image patches associated with their corresponding dose distributions computed by GATE simulation [124] and considered as ground truth. The training database was composed of 10 patients with eight PET/CT timepoints after intravenous injection of 68Ga-NOTA-RGD, from 1 to 62 min post-injection. The network architecture was based on a U-net structure [116]. The first part of the network performed image downsampling operations (contracting path) and the second part, image upsampling (expansive path). The U-net takes both PET and CT as input data to predict the dose. It operated on image patches rather than on full images because, in the studied scenario, the dose is mainly deposited locally (millimeters) around the source voxels, allowing gains in memory and computation time. Note that local dose deposition would not be a valid assumption for radiation with a longer range (high energy photons, for example). The voxel dose rate errors between CNN-estimated and Monte Carlo-estimated dose were found to be less than 3%, and the result was obtained within a few minutes compared to hours with Monte Carlo. Similarly, Götz [49] presented a hybrid method combining a U-net with an empirical mode decomposition technique. The method takes as input CT images and corresponding absorbed dose maps estimated with the MIRD protocol (organ S-value [21]) from SPECT images for 177Lu internal radiation therapy treatment. Again, results seem very good, better than the fast Dose-Volume-Kernel (DVK) method [20] and faster than Monte Carlo.
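As a rough sketch of such patch-based dose prediction (the cited studies used U-Net-like architectures and clinical data; the tiny network, patch size and random tensors below are placeholders for illustration only):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDoseNet(nn.Module):
    """Maps a two-channel 3D patch (e.g. PET activity + CT density) to a dose patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),            # voxel-wise dose prediction
        )

    def forward(self, x):
        return self.net(x)

model = TinyDoseNet()
pet_ct_patch = torch.rand(8, 2, 32, 32, 32)    # batch of 32^3-voxel input patches
mc_dose_patch = torch.rand(8, 1, 32, 32, 32)   # Monte Carlo dose used as ground truth
loss = F.mse_loss(model(pet_ct_patch), mc_dose_patch)
loss.backward()
```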
Principles relatively similar to those developed in the previous examples related to internal radiation therapy have also been applied to external radiation therapy, i.e., where the radiation source is an external particle beam generated by an accelerator. Kalantzis et al. [63] performed a feasibility study of a multi-layer perceptron (MLP) to convert a 2D fluence map obtained from an electronic portal imaging device (EPID) into a dose map for IMRT, replicating the conventional convolution kernels used in TPS. Nguyen et al. [104] proposed to perform 3D radiotherapy dose prediction for head and neck cancer patients with a hierarchical densely connected U-net deep learning architecture, with a prediction error lower than 10%. Liu et al. [85] developed a deep learning method for prediction of the 3D dose distribution of helical tomotherapy for nasopharyngeal cancer, leading to less than 5% prediction error.
Other developments for imaging dose and brachytherapy have been proposed. For example, Roser et al. [117] used a U-Net fed with first-order fluence maps computed by fast ray-casting in order to estimate the total dose exposure, including scattered radiation, during image-guided x-ray procedures. The CNN was trained using smoothed results of MC simulations as output and ray-casting simulations of identical imaging settings and patient models as input. As a result, the proposed CNN estimated the skin dose with an error below 10% for the majority of test cases. The authors conclude that the combination of CNN and MC simulation has the potential to decrease the computational complexity of accurate skin dose estimation. As an example in brachytherapy, Mao et al. [90] investigated CNN-based dose prediction models, using structure contours, prescription and delivered doses as training data, for prostate and cervical cancer patients. Predictions were found to be very close to those from MC, with differences of less than a few percent for various dosimetry indexes (CTV).
At the current stage, it is unlikely that dose distributions predicted via DL will be used as the main dose computation method in clinical practice, because the dose is expected to be estimated from physically plausible effects and models rather than from a learning process. Nevertheless, such predictions may be useful for plan consistency checking, fast plan comparison or to guide plan optimization.
2.2 Deep Learning Based Monte Carlo Denoising
Instead of mapping from some kind of image data (e.g., patient CT, SPECT image) to a dose distribution, deep learning methods have also been developed as a post-processing step to Monte Carlo dose computations, to reduce the noise in dose maps due to inherent statistical fluctuations in the deposited dose per voxel. Indeed, Monte Carlo denoising methods have been studied for a long time and have been shown to reduce computation time by smoothing statistical fluctuations [101]. The noise of the Monte Carlo computed dose is related to the variance of the deposited energy and decreases as the number of simulated particles, N, increases, specifically at a rate proportional to 1/√N.
The principle of CNN-based denoising is to feed a network with pairs of high-noise/low-noise dose distributions obtained from low and high statistics Monte Carlo simulations, with the goal of generating low-noise dose maps from noisy ones. In many cases, the CNN architecture is derived from the U-Net, but other architectures such as Dense-Net [54] or the Conveying-Path Convolutional Encoder-decoder (CPCE [131]) were studied as well. CNN-based denoising has been applied to photon [43, 71, 103, 111] and proton dose [59]1 for various indications including brain, head and neck, liver, lung and prostate, and to dose delocalization due to charged particles within an MRI field (in magnetic resonance-guided radiation therapy [29]). Evaluations were performed using the peak signal-to-noise ratio (PSNR), Dose Volume Histograms (DVH [53]) or the gamma index ([81, 86]) as comparison metrics. Results were generally very encouraging: the CNNs produced noise-equivalent dose maps with approximately 10–100 times fewer particles than originally needed [11, 153]. Some difficulties remain: results depend on the size and complexity of the training datasets, and it remains to be seen how the method generalises to other datasets, e.g., how well a network mainly trained on head and neck patients performs on prostate patients. Furthermore, denoised dose maps must preserve dose gradients and it is not yet fully clear how to guarantee this.
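For reference, the PSNR between a denoised dose map and a high-statistics reference can be computed as in the short sketch below; the toy arrays merely illustrate the formula and are not representative dose distributions.

```python
import numpy as np

def psnr(dose_eval, dose_ref):
    """Peak signal-to-noise ratio (in dB) with the reference maximum as peak value."""
    mse = np.mean((dose_eval - dose_ref) ** 2)
    return 10.0 * np.log10(dose_ref.max() ** 2 / mse)

ref = np.ones((64, 64, 64))                                        # toy "low-noise" map
noisy = ref + 0.05 * np.random.default_rng(0).standard_normal(ref.shape)
print(f"PSNR = {psnr(noisy, ref):.1f} dB")
```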
In SPECT and PET imaging, the image noise is (partly) related to the scan duration. Reducing the scan time directly improves the clinical workflow and decreases involuntary motion during scanning, but on the other hand increases image noise. Different denoising approaches based on DL have been proposed, such as [32, 82, 121]. In particular, Ryden et al. [119] proposed an approach based on sparse projection data sampling, where intermediate projections were interpolated using a deep CNN to avoid image degradation. DL-based denoising methods were also investigated for low dose CT imaging [111, 131, 151, 153]. More generally, deep learning image denoising methods may be a source of inspiration in this field [142].
2.3 AI for Modelling Scatter
DL-based methods have also been applied to cone-beam CT (CBCT) imaging. The main issues to be addressed in this modality are the poor image quality and the artefacts due to scatter. These arise because the imager panel, which is two-dimensional and has no anti-scatter grid, not only captures the attenuated primary photons from the x-ray source, but also those originating from coherent and incoherent scatter within the patient. For accurate image reconstruction, the scatter contribution would need to be known and subtracted from the raw projection images. In practice, this is impossible because the imager panel only provides a non-discriminative integrated intensity signal. A Monte Carlo simulation, on the other hand, can specifically tag scattered photons so that perfect scatter-free projections can be obtained via simulation. In fact, some earlier works on CBCT scatter correction rely on Monte Carlo simulation to estimate the scatter contribution in the raw projections [58]. However, the direct Monte Carlo simulation of kV photons is too slow to be integrated into a clinical image reconstruction software, although heavy use of variance reduction techniques might improve this [88].
Recent works propose to use deep convolutional networks which learn from CBCT projections simulated via Monte Carlo. They generate estimated scatter images (projections) as output based on raw projections as input [75, 78, 87, 145]. Once trained, the network can replace the Monte Carlo simulation and be used as a scatter estimator within the image reconstruction workflow. The technical details of the networks vary, but all report promising results with significantly higher CNR (contrast-to-noise ratio) compared to previous heuristic methods. It is worth mentioning that these methods rely on Monte Carlo simulations for training, where primary photons can be distinguished from scattered ones; they could not easily be trained on experimentally acquired projections, which cannot directly provide explicit scatter images to learn from.
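The projection-domain workflow can be summarized by the sketch below; the network architecture, image size and the simple subtraction step are illustrative assumptions rather than the exact pipelines of the cited works.

```python
import torch
import torch.nn as nn

# A small 2D CNN mapping a raw projection to an estimated scatter image.
scatter_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

raw_projection = torch.rand(1, 1, 256, 256)              # toy raw CBCT projection
with torch.no_grad():
    estimated_scatter = scatter_net(raw_projection)
primary_estimate = raw_projection - estimated_scatter    # fed to the reconstruction
# Training targets (true scatter images) come from Monte Carlo, where scattered
# photons can be tagged and separated from primaries.
```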
Other authors have reported CBCT scatter correction methods based on deep learning which operate in the image domain [27, 60, 84, 140, 152, 155]. More specifically, they take a CBCT image as input and generate a synthetic CT image as output, i.e. they estimate what a CT image of the patient anatomy described by the CBCT image would have looked like. These synthetic CT images appear to contain far fewer artefacts than the original CBCT images.
Attenuation and scatter correction in the image domain for PET imaging has also been proposed using deep convolutional neural networks [6, 97]. The datasets used to train the networks consisted of experimentally acquired images, but in principle these image-based scatter correction studies would also work on Monte Carlo generated data, which may help to create large databases.
2.4 AI for Modelling Imaging Detector Response
The works presented so far rely on the output of Monte Carlo simulations, but they do not alter the simulation itself. The works in this section, on the other hand, replace part of a Monte Carlo simulation in an attempt to accelerate it. More specifically, the proposed works model the particle transport through part of the geometrical components implemented in the simulation. In contrast to the previous methods, the model’s input and output are not necessarily images, but may be sets of particle properties (energy, position, etc).
To our knowledge, few works have been published on this topic in the medical physics field. One example was recently proposed to speed up simulation by modelling the response of a detector: instead of explicitly simulating the particle transport in the detector, the response is emulated by the network. For example, one idea is to speed up simulations of SPECT imaging by modelling the collimator-detector response function (CDRF), which combines the cumulative effects of all interactions in the imaging head and may be approximated with Angular Response Functions (ARF) [38, 119, 126, 135]. In [126], the tabulated model of the CDRF was replaced by a deep neural network trained to learn the ARF of a collimator-detector system. The method was shown to be efficient and to provide variance reduction that speeds up the simulation. The speed-up compared to pure Monte Carlo was between 10 and 3,000: ARF methods are more efficient for low count areas (speed-up of 1,000–3,000) than for high count areas (speed-up of 20–300), and more efficient for high energy radionuclides (such as 131I) that show large collimator penetration.
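A simplified sketch of such a learned detector response is given below: a small MLP maps incident photon properties to per-energy-window detection probabilities. The input parametrization, network size and number of windows are assumptions for illustration, not the exact configuration of [126].

```python
import torch
import torch.nn as nn

N_WINDOWS = 4   # number of acquisition energy windows (illustrative)

# Input: (energy, theta, phi) of the photon reaching the collimator plane.
# Output: probabilities of being counted in each window, plus "not detected".
arf_net = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_WINDOWS + 1),
)

photons = torch.rand(16, 3)                           # toy batch of incident photons
probs = torch.softmax(arf_net(photons), dim=1)
# During the simulation, explicit tracking inside the collimator/detector is skipped:
# each photon simply deposits its predicted probabilities into the projection image.
```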
2.5 AI for Monte Carlo Source Modelling
Recent works in the medical physics field have explored the use of generative networks, GANs in particular, to model particle source distributions and potentially speed up Monte Carlo simulations [125, 127]. In the proposed methods, the training dataset is a phase space file generated by an analog MC simulation which contains the properties (energy, position, direction) of all particles reaching a specific surface. Once the GAN is trained, the resulting network G acts as a compact and fast phase space generator for the MC simulations, replacing a large file of several gigabytes by a neural network (G) of a few megabytes. G has the ability to quickly generate a large number of particles, which allows the user to speed up the simulations significantly (up to a few orders of magnitude depending on the simulation configuration). In the first case [127], the GAN method was used to learn the distribution of particles exiting the nozzle of a therapeutic linear electron accelerator (linac), and to model a brachytherapy treatment where the network learned the source distribution generated by seeds in the prostate region. Simulations performed with the GAN as a phase space generator showed very good dosimetric accuracy compared to the real phase space. In the second case [125], the authors applied this approach to a more complex particle distribution, namely that of particles exiting a patient in a SPECT acquisition. Results showed that images of complex sources could be obtained with low error compared to the reference image reconstructed from real phase space data.
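Once trained, using the generator as a particle source amounts to sampling latent noise and mapping it to particle property vectors, as in the sketch below. Here G is an untrained placeholder standing in for a trained GAN generator, and the seven-property layout is an assumption.

```python
import torch
import torch.nn as nn

LATENT_DIM, N_PROPERTIES = 8, 7     # e.g. energy, x, y, z, dx, dy, dz

# Placeholder for a generator trained as described above (untrained here).
G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, N_PROPERTIES))

with torch.no_grad():
    z = torch.randn(100_000, LATENT_DIM)
    particles = G(z)                # (N, 7) tensor of generated particle properties
# These vectors can be handed to the Monte Carlo engine as primary particles: a trained
# G of a few megabytes plays the role of a multi-gigabyte phase space file.
```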
It should be mentioned, although beyond the scope of this review, that several works in the HEP community have also shown how generative models may be very useful to model high-dimensional distributions. Among others, Paganini et al. [108] proposed a GAN model to simulate computationally intensive electromagnetic showers in a multi-layer calorimeter, and de Oliveira et al. [34] also exploited a GAN to produce jet images (2D representations of energy depositions from particles interacting with a calorimeter). Both methods report large computational speedups compared to conventional Monte Carlo simulations.
2.6 Deep Learning in Nuclear Imaging
DL has also been explored in the context of nuclear imaging (PET, SPECT, Compton camera, etc.), a field where Monte Carlo simulation plays a vital role in designing and validating imaging systems and reconstruction algorithms. Many of the proposed DL methods focus on post-processing steps of the raw data acquired by the imaging system, which impact image quality.
In PET, for example, NNs have been investigated to identify random data points arising from annihilation events which lead to image noise [107], and for the correct sequence identification of PET events with multiple interactions of an annihilation photon in several detector elements, in which the first interaction position must be identified in order to recover the actual line of response [93, 102]. NNs have also been used to estimate the two-dimensional interaction position in monolithic scintillator crystals of PET imagers, or the three-dimensional position when the depth of interaction (DoI) is estimated as well. The investigated NNs have yielded results with better spatial resolution [37, 122], higher uniformity across the crystal volume [98] or faster implementation [149] compared to other existing methods (e.g. maximum likelihood [112] or nearest neighbours [141], among many others).
In Compton imaging devices, ML has been investigated for sequence ordering of multiple-interaction events [157]2 and for signal and background discrimination of Compton camera data in the context of prompt gamma imaging [100]. It is also worth mentioning that DL-based methods have been studied for event selection in data measured by radiation detectors, in particular in HEP, as shown for example in [51]. Applications include detectors at the LHC [12], neutrino-dedicated detectors [8, 40] or measurement of gamma-rays in astrophysics [46]. We refer the reader to [2, 24, 51] for an overview in HEP field.
3 Current Challenges
Monte Carlo based particle transport codes are a central tool for many research questions and applications. This is certainly true for medical physics, the area we concentrate on here, as well as for other fields. We have shown in the previous sections that deep learning methods can be useful for various tasks during simulations, in particular to reduce the computation time (denoising, scatter modelling), but also to model complex systems (detector, source modelling) or perform advanced event selection. However, before those methods can replace conventional methods, especially in industrial or clinical settings, several challenges must be addressed.
Conventional methods in the context of Monte Carlo particle transport simulations are usually informed by knowledge about the underlying physics processes. This results in specific mathematical or statistical models, usually containing parameters to be adjusted, e.g. based on a calibration or reference measurements. Neural networks, on the other hand, are effectively physics or model agnostic and simply learn properties from a given training dataset. Therefore, there is no a priori guarantee that a trained NN provides a plausible representation of the physics underlying the learned processes. At the same time, there are usually quantitative requirements associated with MC simulation tasks, e.g. accurate estimates of deposited energy or dose, of particle properties, or of phase space distributions. All of the following challenges are linked to the requirement that DL methods in MC simulations be reproducibly accurate and that this accuracy can be evaluated.
Challenge 1: Quality of Data
One conventional challenge in DL is related to the limited database size, or to its limited variability and adequacy for the learning process. There are several pitfalls, such as non-homogeneous data, difficult data curation, insufficient representativity, etc. However, when learning from Monte Carlo data, the size of the training dataset is, in principle, only limited by the computation time. As the latter can quickly become prohibitive, data augmentation may still be used. When learning from MC data, the quality of the learnt process becomes strictly tied to the quality of the simulation itself. If the modelled system contains errors or biases, they will be present in the training dataset and learnt by the neural network. Simulation results must therefore be properly validated to avoid bias (see next section), and comparison with experimental data, if feasible, is required.
Challenge 2: Performance Metric and Uncertainty
Evaluation of a trained neural network is conventionally performed by splitting the dataset into three separate parts for training, validation, and testing. The model is first trained on the training dataset in order to optimize the weight values according to the loss function. The validation dataset is then used during training to provide an unbiased evaluation of the model, in order to tune hyperparameters (e.g., number of layers, number of epochs) and prevent overfitting (when the loss function still decreases on the training dataset but starts to increase on the validation set). The test dataset provides the final unbiased evaluation. The validation process in the context of Monte Carlo simulations may differ from traditional computer vision applications (photos, cinema, games, etc.) where visual perception is assessed; here, quantitative validation of physical quantities is needed. The figures of merit usually depend on the kind of application for which the network is developed. For example, for dose computation, standard criteria such as the “gamma index” [81, 86] or Dose Volume Histograms [53] could be used. It remains to be explored how (clinically) relevant metrics and tolerance levels might be incorporated into the training and validation process. Ideally, the final validation of a network should not only be performed against simulated data, but also against experimental data. Furthermore, collaborative open datasets [26] or challenges (such as [113]) dedicated to medical physics applications may be useful.
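As an illustration of such quantitative criteria, the sketch below computes a simplified global gamma index on a 1D dose profile by brute force (aligned grids, 3%/3 mm criteria); clinical implementations are three-dimensional and considerably more elaborate.

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Simplified global 1D gamma: for each reference point, search the evaluated
    profile for the best combined dose-difference / distance-to-agreement score."""
    x = np.arange(len(dose_ref)) * spacing_mm
    dose_norm = dose_tol * dose_ref.max()
    gamma = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / dose_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + diff2))
    return gamma

ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)   # toy Gaussian dose profile
evaluated = 1.02 * ref                                     # e.g. a 2% global offset
g = gamma_1d(evaluated, ref, spacing_mm=1.0)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1):.1%}")
```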
One of the advantages of Monte Carlo simulations is that a statistical uncertainty can easily be associated with the simulated data. The MC statistical uncertainty (e.g. [28]) could hence be used to provide a target tolerance. In the context of medical physics, the uncertainty of data produced by generative networks needs to be carefully studied and understood, especially if those networks are to (partly) replace conventional MC simulations. In more practical terms, what are the noise properties of DL-generated images or dose distributions compared to their MC simulated counterparts? Two forms of uncertainty have been proposed, referred to as epistemic and aleatoric [69, 91], where epistemic is the reducible part (related to the lack of training data in certain areas of the input domain) and aleatoric the irreducible part (dealing with the potential intrinsic randomness of the real data generating process). As an example, approaches based on Bayesian neural networks [70, 106], through their ability to provide an estimation of the uncertainty, may be an interesting lead.
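For comparison with DL outputs, the per-voxel MC statistical uncertainty can be estimated, for instance, with a simple batch method as sketched below (the Poisson-distributed batch doses are random placeholders); history-by-history estimators, as discussed in [28], are also commonly used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_batches, shape = 10, (32, 32, 32)
# Placeholder: dose maps from independent simulation batches (here random numbers).
batch_doses = rng.poisson(lam=50.0, size=(n_batches, *shape)).astype(float)

mean_dose = batch_doses.mean(axis=0)
std_error = batch_doses.std(axis=0, ddof=1) / np.sqrt(n_batches)  # standard error of mean
rel_unc = np.divide(std_error, mean_dose,
                    out=np.zeros_like(mean_dose), where=mean_dose > 0)
print(f"median relative statistical uncertainty: {np.median(rel_unc):.2%}")
```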
Challenge 3: Neural Network Architecture and Hyperparameters
A challenge when working with deep neural networks is to select the appropriate network structure and capacity, i.e., number of neurons, number of layers, type of activation, etc., for a given problem, and to adjust the training process with appropriate hyperparameter values. Underfitting may occur if the model is too simple (not enough capacity) or too heavily regularized, so that it tends to have poor predictive performance. Overfitting occurs, e.g., when a too flexible network (too many parameters) learns structure in the data which merely derives from noise or other artefacts rather than from true information. Several regularization methods, such as adding a penalty term on the weights (e.g., L1, L2) to the loss, or using dropout regularization [138], which randomly ignores some layer outputs, may help prevent overfitting and improve model generalization (the capacity to perform well on inputs not previously seen by the network).
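The sketch below shows the two regularization mechanisms mentioned above, dropout layers and an L2 weight penalty (weight decay), in a generic PyTorch model; all sizes and rates are arbitrary examples.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 128), nn.ReLU(),
    nn.Dropout(p=0.3),          # randomly zeroes 30% of activations during training
    nn.Linear(128, 128), nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(128, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             weight_decay=1e-4)   # L2 penalty on the weights

model.train()   # dropout active while optimizing
model.eval()    # dropout disabled at inference time
```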
When images are the input, convolution operations can be added between network layers. Moreover, several architectures such as the well-known U-Net [116] or pix2pix conditional adversarial networks [56] (among others) have been proposed. When a network is used to bypass the Monte Carlo simulation and takes an image as input, e.g. a patient CT or a PET image, conventional convolution filters can be applied. However, when particle properties are the input, the nature of each entry in the property vector may be different (position, energy, time, etc.). It remains to be studied how meaningful convolution operations can be defined on such an inhomogeneous input space, or whether they may be applied only partly, e.g. along a single dimension such as energy. Furthermore, some particle properties such as charge or atomic weight are bound to be integers rather than real values and may require specific processing, such as the one-hot encoding used for example in [126]. Finally, conservation laws or other physical principles might pose constraints which need to be built into the network optimisation, either by a specific architecture or an adapted loss function.
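A simple way to handle such a mixed input vector is to one-hot encode the discrete particle type and concatenate it with the continuous kinematic properties, as in the following sketch (the property layout and the list of particle types are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

PARTICLE_TYPES = {"gamma": 0, "e-": 1, "e+": 2, "proton": 3}   # illustrative list

def encode_particle(energy, x, y, z, dx, dy, dz, ptype):
    """Concatenate continuous kinematics with a one-hot encoded particle type."""
    continuous = torch.tensor([energy, x, y, z, dx, dy, dz], dtype=torch.float32)
    one_hot = F.one_hot(torch.tensor(PARTICLE_TYPES[ptype]),
                        num_classes=len(PARTICLE_TYPES)).float()
    return torch.cat([continuous, one_hot])        # shape: (7 + 4,)

sample = encode_particle(0.511, 0.0, 1.2, -3.4, 0.1, 0.0, 0.99, "gamma")
```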
Challenge 4: Generative Models, Generative Adversarial Network
To simulate particle transport through a medium, a Monte Carlo code must generate particles according to some probability distribution. This can be the initial phase space distribution of the particle source, but also an intermediate step which creates new particles as a result of interactions with the target, e.g. inelastic nuclear scattering. In conventional Monte Carlo methods, this is done by sampling from a cumulative probability distribution and the accuracy with which the distribution is modelled and parametrized directly impacts the accuracy of the simulation results. Source particles can also be sampled explicitly from tabulated phase space files. In this context, generative models represent a new way to replace conventional particle generation methods in a Monte Carlo simulation. Understanding and mastering the technical aspects of such methods represents an important challenge.
Our review concentrated on GANs (Section 2.5), which have been explored for Monte Carlo simulation in medical physics. Many variants of GANs have been proposed to improve performance or to adapt to various applications (auxiliary GAN, bidirectional GAN, conditional GAN, CycleGAN, InfoGAN, etc.). Despite a large number of successful results, GANs are notoriously difficult to train, suffering from several pitfalls: mode collapse, vanishing gradients, instability. To tackle these issues, various formulations based on different metrics, such as the Wasserstein distance [7], and regularization methods [52, 61, 62, 94] have been proposed. An in-depth study of the most suitable variants for Monte Carlo simulations remains to be undertaken. For example, is it possible to obtain a precise and reliable modelling of all spatial characteristics of dose distributions [67]? Can a GAN model a linac gamma source precisely enough to include the 511 keV peak [127]? Alternative generative learning processes, such as VAEs (Variational AutoEncoders, see for example [68]) or, more recently, score-based diffusion generative models [105, 136, 137, 143], may also have a role to play in distribution modelling within Monte Carlo simulations. In particular, VAE networks are designed to compress the input information into a constrained multivariate latent distribution (encoding) in order to reconstruct it as accurately as possible (decoding). Although VAEs generally seem less efficient than GANs in the field of photo-realistic image synthesis, they could be an interesting alternative to GANs in the medical physics field. Additionally, transfer learning may also be of interest, where a model already trained on a given dataset may be adapted through training on another dataset.
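As an example of the regularization strategies mentioned above, the sketch below implements a WGAN-GP style gradient penalty [52], which pushes the critic's gradient norm towards one on points interpolated between real and generated samples; the toy critic and random batches are placeholders.

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's gradient norm from 1 on interpolated points."""
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

critic = nn.Linear(2, 1)                                   # toy critic on 2D samples
penalty = gradient_penalty(critic, torch.randn(64, 2), torch.randn(64, 2))
penalty.backward()
```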
The problem of reproducing a probability distribution with generative networks such as GANs extends far beyond simple source modelling. In Monte Carlo simulations, certain interactions between particles, in particular nuclear processes, are based on very elaborate statistical distributions which require a lot of computing time, and generative networks could have a role to play here. For instance, Bayesian neural networks have been proposed to improve mass predictions of nuclear models [106] and to quantify the prediction uncertainty, which becomes larger when the network extrapolates away from the training region.
Finally, it is interesting to observe some subtle differences between GANs in computer vision and GANs for tasks such as particle generation in a Monte Carlo simulation. In a computer vision application where a GAN generates images, it is mainly of interest that each image be as realistic as possible. In a Monte Carlo simulation, any generated particle with reasonable physical properties is, judged by itself, realistic; what really counts is whether the distribution of many generated particles is correct. The corresponding question in the computer vision application would be, e.g., whether the GAN generates the correct proportion of long-haired brown dogs compared to short-haired black ones, even though all of them individually might be realistic. In more technical terms, an image has a much higher dimension, i.e., number of pixels, than the vector of physical properties describing a particle. Out of the space of all images (including images with random noise), only a very small and sparse subspace contains realistic images, i.e., images whose pixels depict a desired kind of object. A particle distribution instead densely fills a relatively large portion of the full phase space. These differences likely impact the way GANs and other generative models perform in Monte Carlo simulations as opposed to computer vision tasks and deserve more detailed attention.
Challenge 5: Explicability and Interpretability
Deep neural networks are sometimes criticized as being black boxes, or in other words for not providing direct insight into the way they link input and output. As an example: when modelling the response properties of a detector explicitly via a physics-motivated analytical model, the mathematical form of the model together with its parameters inform the user directly which kind of events will be detected in which fashion. In contrast, a deep neural network trained on Monte Carlo simulated data does not offer this transparency. The underlying reason is that a neural network is a highly flexible non-linear function whose parameters are the neuron weights optimized to best represent the training data. As the weights have no a priori meaning attached to them, they are difficult to interpret.
Monte Carlo simulation, on the other hand, is based on physics models with meaningful parameters and thereby on an explicitly described quantitative relationship between input and output. Clearly, the randomized and iterative evaluation of a multitude of physics models makes the final simulation output complex in certain cases, but the underlying mechanism remains explicitly defined. A challenge when using deep neural networks in the context of Monte Carlo simulations is therefore to gain insight into and control over the workings of the network. This leads to the concepts of interpretation and explanation.
Definitions of these terms can be found in [96]: an interpretation is the mapping of an abstract concept into a domain that a human can make sense of, while an explanation is the collection of features of the interpretable domain that have contributed, for a given example, to produce an output. It is important to note that both terms apply to trained networks. Picking up the example of the detector response (Section 2.4), an interpretation links a specific detector response, e.g. the detection window in a SPECT imager, to the particle properties, i.e. its energy, direction, etc. In this same example, the explanation is the collection of properties which have led a specific particle to be associated with a certain detector response. In this sense, explanation and interpretation are expected to aid the validation of deep neural networks in terms of physical plausibility.
The difficulty of visualizing and studying the explanation and interpretation of a network grows with the dimension of the input data. When the input is merely a vector with a particle’s kinematic properties, i.e. with six or seven entries, the relevance of each of them for a given network decision can still be interpreted “manually”. For high dimensional input such as CT images, other methods must be employed, e.g. activation maximization with an expert [15, 134]. For interpretation, gradient-based methods such as deep Taylor decomposition and backward propagation techniques such as layer-wise relevance propagation should be mentioned here [9].
A rich literature on machine learning interpretation methods exists [83], with a large part of the methods exploiting the gradient information flowing through the layers of the network in order to highlight the impact of each input. Investigating and developing interpretation and explanation techniques in the context of Monte Carlo simulations, in order to make DNNs sufficiently “transparent”, will be one of the challenges to address.
4 Concluding Remark
There may be a methodological change associated with the use of deep learning methods in medical physics simulation: to some extent, instead of mathematically mastering the phenomenon under investigation, the modelling relies on a large amount of data to learn from heuristically. However, the Monte Carlo simulation which generates the training data needs to be skillfully set up and evaluated in the first place. For the moment, even if it is envisioned that deep learning can improve simulations, it does not seem certain that it can always replace Monte Carlo. As the use of deep learning methods evolves, physics-driven dataset modelling, i.e., a mix between modelling based on large datasets and understanding of the underlying physics, will become increasingly important.
Data Availability Statement
Publicly available datasets were analyzed in this study. This data can be found here: https://github.com/OpenGATE/Gate.
Author Contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
This work was performed within the framework of the SIRIC LYriCAN Grant INCa-INSERM-DGOS-12563, the LABEX PRIMES (ANR-11-LABX-0063) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007), the MOCAMED project (ANR-20-CE45-0025) and the POPEYE ERA PerMed 2019 project (ANR-19-PERM-0007–04).
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1http://hdl.handle.net/11603/19255
2http://hdl.handle.net/11603/19255
References
1. Agostinelli S, Allison J, Amako K, Apostolakis J, Araujo H, Arce P, et al. Geant4 - a Simulation Toolkit. Nucl Instr Methods Phys Res Section A: Acc Spectrometers, Detectors Associated Equipment (2003) 506:250–303. doi:10.1016/S0168-9002(03)01368-8
2. Albertsson K, Altoe P, Anderson D, Anderson J, Andrews M, Espinosa JPA, et al. Machine Learning in High Energy Physics Community White Paper. J Phys Conf Ser (2019) 1085:022008.
3. Albrecht J, Alves AA, Amadio G, Andronico G, Anh-Ky N, Aphecetche L, et al. A Roadmap for HEP Software and Computing R&D for the 2020s. Comput Softw Big Sci (2019) 3:7. doi:10.1007/s41781-018-0018-8
4. Allison J, Amako K, Apostolakis J, Arce P, Asai M, Aso T, et al. Recent Developments in GEANT4. Nucl Instr Methods Phys Res Section A: Acc Spectrometers, Detectors Associated Equipment (2016) 835:186–225. doi:10.1016/j.nima.2016.06.125
5. Andreo P. Monte Carlo Techniques in Medical Radiation Physics. Phys Med Biol (1991) 36:861–920. doi:10.1088/0031-9155/36/7/001
6. Arabi H, Bortolin K, Ginovart N, Garibotto V, Zaidi H. Deep Learning‐guided Joint Attenuation and Scatter Correction in Multitracer Neuroimaging Studies. Hum Brain Mapp (2020) 41:3667–79. doi:10.1002/hbm.25039
7. Arjovsky M, Chintala S, Bottou L. Wasserstein Generative Adversarial Networks. Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17; Sydney, NSW (2017). p. 214–223.
8. Aurisano A, Radovic A, Rocco D, Himmel A, Messier MD, Niner E, et al. A Convolutional Neural Network Neutrino Event Classifier. J Inst (2016) 11:P09001. doi:10.1088/1748-0221/11/09/p09001
9. Bach S, Binder A, Montavon G, Klauschen F, Müller K-R, Samek W. On Pixel-Wise Explanations for Non-linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE (2015) 10:e0130140. doi:10.1371/journal.pone.0130140
10. Badawi RD, Shi H, Hu P, Chen S, Xu T, Price PM, et al. First Human Imaging Studies with the EXPLORER Total-Body PET Scanner*. J Nucl Med (2019) 60:299–303. doi:10.2967/jnumed.119.226498
11. Bai T, Wang B, Nguyen D, Jiang S. Deep Dose Plugin: Towards Real-Time Monte Carlo Dose Calculation through a Deep Learning-Based Denoising Algorithm. Mach Learn Sci Technol (2021) 2:025033. doi:10.1088/2632-2153/abdbfe
12. Baldi P, Bauer K, Eng C, Sadowski P, Whiteson D. Jet Substructure Classification in High-Energy Physics with Deep Neural Networks. Phys Rev D (2016) 93:094034. doi:10.1103/physrevd.93.094034
13. Battistoni G, Pinsky L, Santana M, Lechner A, Lari L, Smirnov G, et al. FLUKA Monte Carlo Calculations for Hadrontherapy Application (2012). Available at: https://cds.cern.ch/record/1537386.
14. Berger MJ, Seltzer SM. Calculation of Energy and Charge Deposition and of the Electron Flux in a Water Medium Bombarded with 20-MeV Electrons. Ann NY Acad Sci (1969) 161:8–23. doi:10.1111/j.1749-6632.1969.tb34035.x
15. Berkes P, Wiskott L. On the Analysis and Interpretation of Inhomogeneous Quadratic Forms as Receptive Fields. Neural Comput (2006) 18:1868–95. doi:10.1162/neco.2006.18.8.1868
16. Bert J, Perez-Ponce H, Bitar ZE, Jan S, Boursier Y, Vintache D, et al. Geant4-based Monte Carlo Simulations on GPU for Medical Applications. Phys Med Biol (2013) 58:5593–611. doi:10.1088/0031-9155/58/16/5593
18. Böhlen TT, Cerutti F, Chin MPW, Fassò A, Ferrari A, Ortega PG, et al. The FLUKA Code: Developments and Challenges for High Energy and Medical Applications. Nucl Data Sheets (2014) 120:211–4. doi:10.1016/j.nds.2014.07.049
19. Bolch WE. The Monte Carlo Method in Nuclear Medicine: Current Uses and Future Potential. J Nucl Med (2010) 51:337–9. doi:10.2967/jnumed.109.067835
20. Bolch WE, Bouchet LG, Robertson JS, Wessels BW, Siegel JA, Howell RW, et al. MIRD Pamphlet No. 17: the Dosimetry of Nonuniform Activity Distributions-Rradionuclide S Values at the Voxel Level. Medical Internal Radiation Dose Committee. J Nucl Med (1999) 40:11S–36S.
21. Bolch WE, Eckerman KF, Sgouros G, Thomas SR. MIRD Pamphlet No. 21: A Generalized Schema for Radiopharmaceutical Dosimetry-Standardization of Nomenclature. J Nucl Med (2009) 50:477–84. doi:10.2967/jnumed.108.056036
22. Bourilkov D. Machine and Deep Learning Applications in Particle Physics. Int J Mod Phys A (2019) 34:1930019. doi:10.1142/s0217751x19300199
23. Bruyndonckx P, Leonard S, Tavernier S, Lemaitre C, Devroede O, Yibao Wu Y, et al. Neural Network-Based Position Estimators for PET Detectors Using Monolithic LSO Blocks. IEEE Trans Nucl Sci (2004) 51:2520–5. doi:10.1109/TNS.2004.835782
24. Carleo G, Cirac I, Cranmer K, Daudet L, Schuld M, Tishby N, et al. Machine Learning and the Physical Sciences. Rev Mod Phys (2019) 91:045002. doi:10.1103/RevModPhys.91.045002
25. Carrasquilla J, Melko RG. Machine Learning Phases of Matter. Nat Phys (2017) 13:431–4. doi:10.1038/nphys4035
26. [Dataset] CERN. CERN Open Data Portal (2021). Available at: https://opendata.cern.ch/.
27. Chen L, Liang X, Shen C, Jiang S, Wang J. Synthetic CT Generation from CBCT Images via Deep Learning. Med Phys (2020) 47:1115–25. doi:10.1002/mp.13978
28. Chetty IJ, Rosu M, Kessler ML, Fraass BA, Ten Haken RK, Kong F-M, et al. Reporting and Analyzing Statistical Uncertainties in Monte Carlo-Based Treatment Planning. Int J Radiat Oncology*Biology*Physics (2006) 65:1249–59. doi:10.1016/j.ijrobp.2006.03.039
29. Chin S, Eccles CL, McWilliam A, Chuter R, Walker E, Whitehurst P, et al. Magnetic Resonance‐guided Radiation Therapy: A Review. J Med Imaging Radiat Oncol (2020) 64:163–77. doi:10.1111/1754-9485.12968
30. Ciardiello A, Asai M, Caccia B, Cirrone Ga. P, Colonna M, Dotti A, et al. Preliminary Results in Using Deep Learning to Emulate BLOB, a Nuclear Interaction Model. Physica Med (2020) 73:65–72. doi:10.1016/j.ejmp.2020.04.005
31. Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative Adversarial Networks: An Overview. IEEE Signal Process Mag (2018) 35:53–65. doi:10.1109/MSP.2017.2765202
32. Cui J, Gong K, Guo N, Wu C, Meng X, Kim K, et al. PET Image Denoising Using Unsupervised Deep Learning. Eur J Nucl Med Mol Imaging (2019) 46:2780–9. doi:10.1007/s00259-019-04468-4
33. Acilu PGd., Sarasola I, Canadas M, Cuerdo R, Mendes PR, Romero L, et al. Study and Optimization of Positioning Algorithms for Monolithic PET Detectors Blocks. J Inst (2012) 7:C06010. doi:10.1088/1748-0221/7/06/c06010
34. de Oliveira L, Paganini M, Nachman B. Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis. Comput Softw Big Sci (2017) 1:4. doi:10.1007/s41781-017-0004-6
35. Deasy JO. Denoising of Electron Beam Monte Carlo Dose Distributions Using Digital Filtering Techniques. Phys Med Biol (2000) 45:1765–79. doi:10.1088/0031-9155/45/7/305
36. Deasy JO, Wickerhauser MV, Picard M. Accelerating Monte Carlo Simulations of Radiation Therapy Dose Distributions Using Wavelet Threshold De-noising. Med Phys (2002) 29:2366–73. doi:10.1118/1.1508112
37. Decuyper M, Stockhoff M, Vandenberghe S, Van Holen R. Artificial Neural Networks for Positioning of Gamma Interactions in Monolithic PET Detectors. Phys Med Biol (2021) 66:075001. doi:10.1088/1361-6560/abebfc
38. Descourt P, Carlier T, Du Y, Song X, Buvat I, Frey EC, et al. Implementation of Angular Response Function Modeling in SPECT Simulations with GATE. Phys Med Biol (2010) 55:N253–N266. doi:10.1088/0031-9155/55/9/n04
39. Di Sipio R, Giannelli MF, Haghighat SK, Palazzo S. DijetGAN: A Generative-Adversarial Network Approach for the Simulation of QCD Dijet Events at the LHC. J High Energ Phys (2019) 2019:110. doi:10.1007/JHEP08(2019)110
40. DUNE Collaboration, Abi B, Acciarri R, Acero MA, Adamov G, Adams D, et al. Neutrino Interaction Classification with a Convolutional Neural Network in the DUNE Far Detector. Phys Rev D (2020) 102:092003. doi:10.1103/PhysRevD.102.092003
41. Fahey FH, Grogg K, El Fakhri G. Use of Monte Carlo Techniques in Nuclear Medicine. J Am Coll Radiol (2018) 15:446–8. doi:10.1016/j.jacr.2017.09.045
42. Fippel M, Nüsslin F. Smoothing Monte Carlo Calculated Dose Distributions by Iterative Reduction of Noise. Phys Med Biol (2003) 48:1289–304. doi:10.1088/0031-9155/48/10/304
43. Fornander H. Denoising Monte Carlo Dose Calculations Using a Deep Neural Network. Ph.D. thesis. KTH Royal Institute Of Technology School of Electrical Engineering and Computer Science (2019).
44. HEP Software Foundation, Apostolakis J, Asai M, Banerjee S, Bianchi R, Canal P, et al. HEP Software Foundation Community White Paper Working Group - Detector Simulation (2018). arXiv:1803.04165.
45. Garcia M-P, Bert J, Benoit D, Bardiès M, Visvikis D. Accelerated GPU Based SPECT Monte Carlo Simulations. Phys Med Biol (2016) 61:4001–18. doi:10.1088/0031-9155/61/11/4001
46. Garnett RL, Hanu AR, Byun SH, Hunter SD. Event Selection and Background Rejection in Time Projection chambers Using Convolutional Neural Networks and a Specific Application to the AdEPT Gamma-ray Polarimeter mission. Nucl Instr Methods Phys Res Section A: Acc Spectrometers, Detectors Associated Equipment (2021) 987:164860. doi:10.1016/j.nima.2020.164860
48. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Nets. In: Advances in Neural Information Processing Systems, 2 (2014). p. 2672–80.
49. Götz TI (2019). Technical Report: Time-Activity-Curve Integration in Lu-177 Therapies in Nuclear Medicine. arXiv:1907.06617 [physics].
50. Grevillot L, Boersma DJ, Fuchs H, Aitkenhead A, Elia A, Bolsa M, et al. Technical Note: GATE-RTion: a GATE/Geant4 Release for Clinical Applications in Scanned Ion Beam Therapy. Med Phys (2020) 47:3675–81. doi:10.1002/mp.14242
51. Guest D, Cranmer K, Whiteson D. Deep Learning and its Application to LHC Physics. Annu Rev Nucl Part Sci (2018) 68:161–81. doi:10.1146/annurev-nucl-101917-021019
52. Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville A. Improved Training of Wasserstein GANs. In: Advances in Neural Information Processing Systems, 30. Curran Associates, Inc. (2017).
53. Speer TW, Knowlton CA, Mackay MK, Ma C, Wang L, Daugherty LC, et al. Dose Volume Histogram (DVH). In: LW Brady, and TE Yaeger, editors. Encyclopedia of Radiation Oncology. Berlin, Heidelberg: Springer (2013). p. 166. doi:10.1007/978-3-540-85516-3_659
54. Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely Connected Convolutional Networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017). p. 4700–8. doi:10.1109/cvpr.2017.243
55. Hughes HG, Chadwick MB, Corzine RK, Egdorf HW, Gallmeier FX, Little RC, et al. Status of the MCNPX Transport Code. In: A Kling, FJC Baräo, M Nakagawa, L Távora, and P Vaz, editors. Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications. Berlin, Heidelberg: Springer (2001). p. 961–6. doi:10.1007/978-3-642-18211-2_154
56. Isola P, Zhu J-Y, Zhou T, Efros AA. Image-to-Image Translation with Conditional Adversarial Networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). p. 5967–76. doi:10.1109/cvpr.2017.632
57. Jan S, Benoit D, Becheva E, Carlier T, Cassol F, Descourt P, et al. GATE V6: A Major Enhancement of the GATE Simulation Platform Enabling Modelling of CT and Radiotherapy. Phys Med Biol (2011) 56:881–901. doi:10.1088/0031-9155/56/4/001
58. Jarry G, Graham SA, Moseley DJ, Jaffray DJ, Siewerdsen JH, Verhaegen F. Characterization of Scattered Radiation in kV CBCT Images Using Monte Carlo Simulations. Med Phys (2006) 33:4320–9. doi:10.1118/1.2358324
59. Javaid U, Souris K, Dasnoy D, Huang S, Lee JA. Mitigating Inherent Noise in Monte Carlo Dose Distributions Using Dilated U‐Net. Med Phys (2019) 46:5790–8. doi:10.1002/mp.13856
60. Jiang Y, Yang C, Yang P, Hu X, Luo C, Xue Y, et al. Scatter Correction of Cone-Beam CT Using a Deep Residual Convolution Neural Network (DRCNN). Phys Med Biol (2019) 64:145003. doi:10.1088/1361-6560/ab23a6
61. Jolicoeur-Martineau A, Mitliagkas I (2019). Connections between Support Vector Machines, Wasserstein Distance and Gradient-Penalty GANs. arXiv:1910.06922 [cs, stat]. October, 2019.
62. Jolicoeur-Martineau A, Mitliagkas I (2020). Gradient Penalty from a Maximum Margin Perspective. arXiv:1910.06922 [cs, stat]. November 2020.
63. Kalantzis G, Vasquez-Quino LA, Zalman T, Pratx G, Lei Y. Toward IMRT 2D Dose Modeling Using Artificial Neural Networks: A Feasibility Study. Med Phys (2011) 38:5807–17. doi:10.1118/1.3639998
64. Karp JS, Michael Geagan J, Muehllehner G, Matthew Werner E, McDermott T, Jeffrey Schmall P, et al. The PennPET Explorer Scanner for Total Body Applications. In: 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference. NSS/MIC (2017). p. 1–4. doi:10.1109/nssmic.2017.8533068
65. Kawrakow I. The EGSnrc Code System, Monte Carlo Simulation of Electron and Photon Transport. NRCC Report PIRS-701 (2001).
66. Kawrakow I. On the De-noising of Monte Carlo Calculated Dose Distributions. Phys Med Biol (2002) 47:3087–103. doi:10.1088/0031-9155/47/17/304
67. Kearney V, Chan JW, Wang T, Perry A, Descovich M, Morin O, et al. DoseGAN: A Generative Adversarial Network for Synthetic Dose Prediction Using Attention-Gated Discrimination and Generation. Sci Rep (2020) 10:11073. doi:10.1038/s41598-020-68062-7
68. Kingma DP, Welling M. An Introduction to Variational Autoencoders. FNT Machine Learn (2019) 12:307–92. doi:10.1561/2200000056
69. Kiureghian AD, Ditlevsen O. Aleatory or Epistemic? Does it Matter? Struct Saf (2009) 31:105–12. doi:10.1016/j.strusafe.2008.06.020
71. Kontaxis C, Bol GH, Lagendijk JJW, Raaymakers BW. DeepDose: Towards a Fast Dose Calculation Engine for Radiation Therapy Using Deep Learning. Phys Med Biol (2020) 65:075013. doi:10.1088/1361-6560/ab7630
72. Kowalski P, Wiślicki W, Shopa RY, Raczyński L, Klimaszewski K, Curcenau C, et al. Estimating the NEMA Characteristics of the J-PET Tomograph Using the GATE Package. Phys Med Biol (2018) 63:165008. doi:10.1088/1361-6560/aad29b
73. Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. In: Advances in Neural Information Processing Systems, 25. Curran Associates, Inc. (2012). p. 1–9.
74. Lai Y, Jia X, Chi Y. Modeling the Effect of Oxygen on the Chemical Stage of Water Radiolysis Using GPU-Based Microscopic Monte Carlo Simulations, with an Application in FLASH Radiotherapy. Phys Med Biol (2021) 66:025004. doi:10.1088/1361-6560/abc93b
75. Lalonde A, Winey B, Verburg J, Paganetti H, Sharp GC. Evaluation of CBCT Scatter Correction Using Deep Convolutional Neural Networks for Head and Neck Adaptive Proton Therapy. Phys Med Biol (2020) 65:245022. doi:10.1088/1361-6560/ab9fcb
77. LeCun Y, Bengio Y. Convolutional Networks for Images, Speech, and Time-Series. In: The Handbook of Brain Theory and Neural Networks. MIT Press (1998). p. 255–8.
78. Lee H, Lee J. A Deep Learning-Based Scatter Correction of Simulated X-ray Images. Electronics (2019) 8:944. doi:10.3390/electronics8090944
79. Lee MS, Hwang D, Kim JH, Lee JS. Deep-dose: A Voxel Dose Estimation Method Using Deep Convolutional Neural Network for Personalized Internal Dosimetry. Sci Rep (2019) 9:10308. doi:10.1038/s41598-019-46620-y
80. Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, et al. MRI‐only Based Synthetic CT Generation Using Dense Cycle Consistent Generative Adversarial Networks. Med Phys (2019) 46:3565–81. doi:10.1002/mp.13617
81. Li H, Dong L, Zhang L, Yang JN, Gillin MT, Zhu XR. Toward a Better Understanding of the Gamma Index: Investigation of Parameters with a Surface-Based Distance Method. Med Phys (2011) 38:6730–41. doi:10.1118/1.3659707
82. Lin C, Chang Y-C, Chiu H-Y, Cheng C-H, Huang H-M. Reducing Scan Time of Paediatric 99mTc-DMSA SPECT via Deep Learning. Clin Radiol (2021) 76:315–e13. doi:10.1016/j.crad.2020.11.114
83. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy (2020) 23:18. doi:10.3390/e23010018
84. Liu Y, Lei Y, Wang T, Fu Y, Tang X, Curran WJ, et al. CBCT‐based Synthetic CT Generation Using Deep‐attention cycleGAN for Pancreatic Adaptive Radiotherapy. Med Phys (2020) 47:2472–83. doi:10.1002/mp.14121
85. Liu Z, Fan J, Li M, Yan H, Hu Z, Huang P, et al. A Deep Learning Method for Prediction of Three‐dimensional Dose Distribution of Helical Tomotherapy. Med Phys (2019) 46:1972–83. doi:10.1002/mp.13490
86. Low DA, Harms WB, Mutic S, Purdy JA. A Technique for the Quantitative Evaluation of Dose Distributions. Med Phys (1998) 25:656–61. doi:10.1118/1.598248
87. Maier J, Eulig E, Vöth T, Knaup M, Kuntz J, Sawall S, et al. Real-time Scatter Estimation for Medical CT Using the Deep Scatter Estimation: Method and Robustness Analysis with Respect to Different Anatomies, Dose Levels, Tube Voltages, and Data Truncation. Med Phys (2019) 46:238–49. doi:10.1002/mp.13274
88. Mainegra-Hing E, Kawrakow I. Fast Monte Carlo Calculation of Scatter Corrections for CBCT Images. J Phys Conf Ser (2008) 102:012017. doi:10.1088/1742-6596/102/1/012017
89. Maneval D, Ozell B, Després P. pGPUMCD: An Efficient GPU-Based Monte Carlo Code for Accurate Proton Dose Calculations. Phys Med Biol (2019) 64:085018. doi:10.1088/1361-6560/ab0db5
90. Mao X, Pineau J, Keyes R, Enger SA. RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy via Deep Learning. Int J Radiat Oncology*Biology*Physics (2020) 108:802–12. doi:10.1016/j.ijrobp.2020.04.045
91. Matthies HG. Quantifying Uncertainty: Modern Computational Representation of Probability and Applications. In: A Ibrahimbegovic, and I Kozar, editors. Extreme Man-Made and Natural Hazards in Dynamics of Structures. Dordrecht: Springer Netherlands, NATO Security through Science Series (2007). p. 105–35. doi:10.1007/978-1-4020-5656-7_4
92. Miao B, Jeraj R, Bao S, Mackie TR. Adaptive Anisotropic Diffusion Filtering of Monte Carlo Dose Distributions. Phys Med Biol (2003) 48:2767–81. doi:10.1088/0031-9155/48/17/303
93. Michaud J-B, Tétrault M-A, Beaudoin J-F, Cadorette J, Leroux J-D, Brunet C-A, et al. Sensitivity Increase through a Neural Network Method for LOR Recovery of ICS Triple Coincidences in High-Resolution Pixelated- Detectors PET Scanners. IEEE Trans Nucl Sci (2015) 62:82–94. doi:10.1109/tns.2014.2372788
94. Miyato T, Kataoka T, Koyama M, Yoshida Y. Spectral Normalization for Generative Adversarial Networks. In: International Conference on Learning Representations (2018). p. 1–26.
95. Moiseenko V, Liu M, Bergman AM, Gill B, Kristensen S, Teke T, et al. Monte Carlo Calculation of Dose Distribution in Early Stage NSCLC Patients Planned for Accelerated Hypofractionated Radiation Therapy in the NCIC-BR25 Protocol. Phys Med Biol (2010) 55:723–33. doi:10.1088/0031-9155/55/3/012
96. Montavon G, Samek W, Müller K-R. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Process. (2018) 73:1–15. doi:10.1016/j.dsp.2017.10.011
97. Mostafapour S, Gholamiankhah F, Dadgar H, Arabi H, Zaidi H. Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med (2021) 46(8):609–615. doi:10.1097/RLU.0000000000003585
98. Müller F, Schug D, Hallen P, Grahe J, Schulz V. Gradient Tree Boosting-Based Positioning Method for Monolithic Scintillator Crystals in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci (2018) 2:411–21. doi:10.1109/trpms.2018.2837738
99. Müller F, Schug D, Hallen P, Grahe J, Schulz V. A Novel DOI Positioning Algorithm for Monolithic Scintillator Crystals in PET Based on Gradient Tree Boosting. IEEE Trans Radiat Plasma Med Sci (2019) 3:465–74. doi:10.1109/trpms.2018.2884320
100. Muñoz E, Ros A, Borja-Lloret M, Barrio J, Dendooven P, Oliver JF, et al. Proton Range Verification with MACACO II Compton Camera Enhanced by a Neural Network for Event Selection. Sci Rep (2021) 11:9325. doi:10.1038/s41598-021-88812-5
101. Naqa IE, Kawrakow I, Fippel M, Siebers JV, Lindsay PE, Wickerhauser MV, et al. A Comparison of Monte Carlo Dose Calculation Denoising Techniques. Phys Med Biol (2005) 50:909–22. doi:10.1088/0031-9155/50/5/014
102. Nasiri N, Abbaszadeh S. A Deep Learning Approach to Correctly Identify the Sequence of Coincidences in Cross-Strip CZT Detectors. In: Medical Imaging 2021: Physics of Medical Imaging, 11595. International Society for Optics and Photonics (2021). p. 115953W. doi:10.1117/12.2582063
103. Neph R, Huang Y, Yang Y, Sheng K. DeepMCDose: A Deep Learning Method for Efficient Monte Carlo Beamlet Dose Calculation by Predictive Denoising in MR-Guided Radiotherapy. In: D Nguyen, L Xing, and S Jiang, editors. Artificial Intelligence in Radiation Therapy, 11850. Cham: Springer International Publishing (2019). p. 137–45. doi:10.1007/978-3-030-32486-5_17
104. Nguyen D, Jia X, Sher D, Lin M-H, Iqbal Z, Liu H, et al. 3D Radiotherapy Dose Prediction on Head and Neck Cancer Patients with a Hierarchically Densely Connected U-Net Deep Learning Architecture. Phys Med Biol (2019) 64:065020. doi:10.1088/1361-6560/ab039b
105. Nichol A, Dhariwal P. Improved Denoising Diffusion Probabilistic Models. In: Proceedings of the 38th International Conference on Machine Learning (2021). p. 8162–71.
106. Niu ZM, Liang HZ. Nuclear Mass Predictions Based on Bayesian Neural Network Approach with Pairing and Shell Effects. Phys Lett B (2018) 778:48–53. doi:10.1016/j.physletb.2018.01.002
107. Oliver JF, Fuster-Garcia E, Cabello J, Tortajada S, Rafecas M. Application of Artificial Neural Network for Reducing Random Coincidences in PET. IEEE Trans Nucl Sci (2013) 60:3399–409. doi:10.1109/tns.2013.2274702
108. Paganini M, de Oliveira L, Nachman B. CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks. Phys Rev D (2018) 97:014021. doi:10.1103/physrevd.97.014021
109. Pedemonte S, Pierce L, Van Leemput K. A Machine Learning Method for Fast and Accurate Characterization of Depth-Of-Interaction Gamma Cameras. Phys Med Biol (2017) 62:8376–401. doi:10.1088/1361-6560/aa6ee5
110. Peng P, Judenhofer MS, Jones AQ, Cherry SR. Compton PET: A Simulation Study for a PET Module with Novel Geometry and Machine Learning for Position Decoding. Biomed Phys Eng Express (2018) 5:015018. doi:10.1088/2057-1976/aaef03
111. Peng Z, Shan H, Liu T, Pei X, Zhou J, Wang G, et al. (2019). Deep Learning for Accelerating Monte Carlo Radiation Transport Simulation in Intensity-Modulated Radiation Therapy. arXiv:1910.07735 [physics], October 2019.
112. Pierce LA, Pedemonte S, DeWitt D, MacDonald L, Hunter WCJ, Van Leemput K, et al. Characterization of Highly Multiplexed Monolithic PET/Gamma Camera Detector Modules. Phys Med Biol (2018) 63:075017. doi:10.1088/1361-6560/aab380
113. Prevedello LM, Halabi SS, Shih G, Wu CC, Kohli MD, Chokshi FH, et al. Challenges Related to Artificial Intelligence Research in Medical Imaging and the Importance of Image Analysis Competitions. Radiol Artif Intelligence (2019) 1:e180031. doi:10.1148/ryai.2019180031
114. Radovic A, Williams M, Rousseau D, Kagan M, Bonacorsi D, Himmel A, et al. Machine Learning at the Energy and Intensity Frontiers of Particle Physics. Nature (2018) 560:41–8. doi:10.1038/s41586-018-0361-2
115. Rogers DWO. Low Energy Electron Transport with EGS. Nucl Instr Methods Phys Res Section A: Acc Spectrometers, Detectors Associated Equipment (1984) 227:535–48. doi:10.1016/0168-9002(84)90213-4
116. Ronneberger O, Fischer P, Brox T. (2015). U-net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs]. doi:10.1007/978-3-319-24574-4_28
117. Roser P, Zhong X, Birkhold A, Strobel N, Kowarschik M, Fahrig R, et al. Physics‐driven Learning of X‐ray Skin Dose Distribution in Interventional Procedures. Med Phys (2019) 46:4654–65. doi:10.1002/mp.13758
118. Rydén T, Heydorn Lagerlöf J, Hemmingsson J, Marin I, Svensson J, Båth M, et al. Fast GPU-Based Monte Carlo Code for SPECT/CT Reconstructions Generates Improved 177Lu Images. EJNMMI Phys (2018) 5:1. doi:10.1186/s40658-017-0201-8
119. Ryden T, van Essen M, Marin I, Svensson J, Bernhardt P. Deep Learning Generation of Synthetic Intermediate Projections Improves 177Lu SPECT Images Reconstructed with Sparsely Acquired Projections. J Nucl Med (2020) 62(4):528–35. doi:10.2967/jnumed.120.245548
120. Salvat F, Fernández-Varea J, Sempau J. Penelope. A Code System for Monte Carlo Simulation of Electron and Photon Transport. Barcelona: NEA Data Bank, Workshop Proceeding (2007). p. 4–7.
121. Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep Learning-Assisted Ultra-fast/low-dose Whole-Body PET/CT Imaging. Eur J Nucl Med Mol Imaging (2021). doi:10.1007/s00259-020-05167-1
122. Sanaat A, Zaidi H. Depth of Interaction Estimation in a Preclinical PET Scanner Equipped with Monolithic Crystals Coupled to SiPMs Using a Deep Neural Network. Appl Sci (2020) 10:4753. doi:10.3390/app10144753
123. Sarrut D, Bała M, Bardiès M, Bert J, Chauvin M, Chatzipapas K, et al. Advanced Monte Carlo Simulations of Emission Tomography Imaging Systems with GATE. Phys Med Biol (2021) 66:10TR03. doi:10.1088/1361-6560/abf276
124. Sarrut D, Bardiès M, Boussion N, Freud N, Jan S, Létang J-M, et al. A Review of the Use and Potential of the GATE Monte Carlo Simulation Code for Radiation Therapy and Dosimetry Applications. Med Phys (2014) 41:064301. doi:10.1118/1.4871617
125. Sarrut D, Etxebeste A, Krah N, Létang J. Modeling Complex Particles Phase Space with GAN for Monte Carlo SPECT Simulations: A Proof of Concept. Phys Med Biol (2021) 66:055014. doi:10.1088/1361-6560/abde9a
126. Sarrut D, Krah N, Badel JN, Létang JM. Learning SPECT Detector Angular Response Function with Neural Network for Accelerating Monte-Carlo Simulations. Phys Med Biol (2018) 63:205013. doi:10.1088/1361-6560/aae331
127. Sarrut D, Krah N, Létang JM. Generative Adversarial Networks (GAN) for Compact Beam Source Modelling in Monte Carlo Simulations. Phys Med Biol (2019) 64:215004. doi:10.1088/1361-6560/ab3fc1
128. Schmidhuber J. Deep Learning in Neural Networks: An Overview. Neural Networks (2015) 61:85–117. doi:10.1016/j.neunet.2014.09.003
129. J Seco and F Verhaegen, editors. Monte Carlo Techniques in Radiation Therapy. Boca Raton: CRC Press (2013). doi:10.1201/b13961
130. Seltzer SM. Electron-photon Monte Carlo Calculations: The ETRAN Code. Int J Radiat Appl Instrumentation. A. Appl Radiat Isotopes (1991) 42:917–41. doi:10.1016/0883-2889(91)90050-b
131. Shan H, Zhang Y, Yang Q, Kruger U, Kalra MK, Sun L, et al. 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning from a 2-D Trained Network. IEEE Trans Med Imaging (2018) 37:1522–34. doi:10.1109/tmi.2018.2832217
132. Shen C, Nguyen D, Zhou Z, Jiang SB, Dong B, Jia X. An Introduction to Deep Learning in Medical Physics: Advantages, Potential, and Challenges. Phys Med Biol (2020) 65:05TR01. doi:10.1088/1361-6560/ab6f51
133. Shen H, Liu J, Fu L. Self-learning Monte Carlo with Deep Neural Networks. Phys Rev B (2018) 97:205140. doi:10.1103/PhysRevB.97.205140
134. Simonyan K, Vedaldi A, Zisserman A. Deep inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In: Workshop at International Conference on Learning Representations (2014). p. 1–8.
135. Song X, Segars WP, Du Y, Tsui BMW, Frey EC. Fast Modelling of the Collimator-Detector Response in Monte Carlo Simulation of SPECT Imaging Using the Angular Response Function. Phys Med Biol (2005) 50:1791–804. doi:10.1088/0031-9155/50/8/011
136. Song Y, Ermon S. Generative Modeling by Estimating Gradients of the Data Distribution. Adv Neural Inf Process Syst (2019) 32.
137. Song Y, Ermon S. Improved Techniques for Training Score-Based Generative Models. In: NIPS Workshop (Neural Information Processing Systems) (2020).
138. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J Machine Learn Res (2014) 15(1):1929–58.
139. Staelens S, Buvat I. Monte Carlo Simulations in Nuclear Medicine Imaging. In: P Verdonck, editor. Advances in Biomedical Engineering. Amsterdam: Elsevier (2009). p. 177–209. doi:10.1016/B978-0-444-53075-2.00005-8
140. Stockhoff M, Van Holen R, Vandenberghe S. Optical Simulation Study on the Spatial Resolution of a Thick Monolithic PET Detector. Phys Med Biol (2019) 64:195003. doi:10.1088/1361-6560/ab3b83
141. Taasti VT, Klages P, Parodi K, Muren LP. Developments in Deep Learning Based Corrections of Cone Beam Computed Tomography to Enable Dose Calculations for Adaptive Radiotherapy. Phys Imaging Radiat Oncol (2020) 15:77–9. doi:10.1016/j.phro.2020.07.012
142. Tian C, Fei L, Zheng W, Xu Y, Zuo W, Lin C-W. Deep Learning on Image Denoising: An Overview. Neural Networks (2020) 131:251–75. doi:10.1016/j.neunet.2020.07.025
144. Vallecorsa S. Generative Models for Fast Simulation. J Phys Conf Ser (2018) 1085:022005. doi:10.1088/1742-6596/1085/2/022005
145. van der Heyden B, Uray M, Fonseca GP, Huber P, Us D, Messner I, et al. A Monte Carlo Based Scatter Removal Method for Non-isocentric Cone-Beam CT Acquisitions Using a Deep Convolutional Autoencoder. Phys Med Biol (2020) 65:145002. doi:10.1088/1361-6560/ab8954
146. Vandenberghe S, Mikhaylova E, Brans B, Defrise M, Lahoutte T, Muylle K, et al. PET20.0: a Cost Efficient, 2mm Spatial Resolution Total Body PET with point Sensitivity up to 22% and Adaptive Axial FOV of Maximum 2.00m. In: Annual Congress of the European Association of Nuclear Medicine, Vol. 44 (2017). p. 305.
147. Vasudevan V, Huang C, Simiele E, Yu L, Xing L, Schuler E. Combining Monte Carlo with Deep Learning: Predicting High-Resolution, Low-Noise Dose Distributions Using a Generative Adversarial Network for Fast and Precise Monte Carlo Simulations. Int J Radiat Oncology*Biology*Physics (2020) 108:S44–S45. doi:10.1016/j.ijrobp.2020.07.2157
148. Verhaegen F, Seuntjens J. Monte Carlo Modelling of External Radiotherapy Photon Beams. Phys Med Biol (2003) 48:R107–R164. doi:10.1088/0031-9155/48/21/r01
149. Wang Y, Wang L, Li D, Cheng X. A New Method of Depth-Of-Interaction Determination for Continuous crystal PET Detectors. In: 2014 IEEE Nuclear Science Symposium and Medical Imaging Conference. NSS/MIC (2014). p. 1–2. doi:10.1109/NSSMIC.2014.7430765
150. Wang Y, Zhu W, Cheng X, Li D. 3D Position Estimation Using an Artificial Neural Network for a Continuous Scintillator PET Detector. Phys Med Biol (2013) 58:1375–90. doi:10.1088/0031-9155/58/5/1375
151. Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT. IEEE Trans Med Imaging (2017) 36:2536–45. doi:10.1109/tmi.2017.2708987
152. Xie S, Yang C, Zhang Z, Li H. Scatter Artifacts Removal Using Learning-Based Method for CBCT in IGRT System. IEEE Access (2018) 6:78031–7. doi:10.1109/ACCESS.2018.2884704
153. Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, et al. Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss. IEEE Trans Med Imaging (2018) 37:1348–57. doi:10.1109/tmi.2018.2827462
154. Zatcepin A, Pizzichemi M, Polesel A, Paganoni M, Auffray E, Ziegler SI, et al. Improving Depth-Of-Interaction Resolution in Pixellated PET Detectors Using Neural Networks. Phys Med Biol (2020) 65:175017. doi:10.1088/1361-6560/ab9efc
155. Zhang T, Chen Z, Zhou H, Bennett NR, Wang AS, Gao H. An Analysis of Scatter Characteristics in X-ray CT Spectral Correction. Phys Med Biol (2021) 66:075003. doi:10.1088/1361-6560/abebab
156. Zhou D-X. Theory of Deep Convolutional Neural Networks: Downsampling. Neural Networks (2020) 124:319–27. doi:10.1016/j.neunet.2020.01.018
Keywords: AI, Monte Carlo simulation, medical physics, GAN, deep learning
Citation: Sarrut D, Etxebeste A, Muñoz E, Krah N and Létang JM (2021) Artificial Intelligence for Monte Carlo Simulation in Medical Physics. Front. Phys. 9:738112. doi: 10.3389/fphy.2021.738112
Received: 08 July 2021; Accepted: 27 September 2021;
Published: 28 October 2021.
Edited by: Susanna Guatelli, University of Wollongong, Australia
Reviewed by: Carlo Mancini, Sapienza University of Rome, Italy; Olaf Nackenhorst, Technical University Dortmund, Germany
Copyright © 2021 Sarrut, Etxebeste, Muñoz, Krah and Létang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: David Sarrut, david.sarrut@creatis.insa-lyon.fr