- 1Department of Radiology, Mayo Clinic, Rochester, MN, United States
- 2Department of Radiology, University of Iowa, Iowa City, IA, United States
- 3Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, United States
- 4University of Illinois at Urbana-Champaign, Champaign, IL, United States
- 5Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, United States
Background: MR fingerprinting (MRF) is a novel method for quantitative assessment of in vivo MR relaxometry that has shown high precision and accuracy. However, the method requires customized, complex acquisition strategies and dedicated post-processing methods, thereby limiting its widespread application.
Objective: To develop a deep learning (DL) network for synthesizing MRF signals from conventional magnitude-only MR imaging data and to compare the results to the actual MRF signal acquired.
Methods: A U-Net DL network was developed to synthesize MRF signals from magnitude-only 3D T1-weighted brain MRI data acquired from 37 volunteers aged 21 to 62 years. Network performance was evaluated by comparison of the relaxometry data (T1, T2) generated from dictionary matching of the deep learning synthesized and actual MRF data from 47 segmented anatomic regions. Clustered bootstrapping involving 10,000 bootstrap replicates followed by calculation of the concordance correlation coefficient was performed for both T1 and T2 MRF data pairs. 95% confidence limits and the mean difference between true and DL relaxometry values were also calculated.
Results: The concordance correlation coefficients (and 95% confidence limits) for T1 and T2 MRF data pairs over the 47 anatomic segments were 0.8793 (0.8136–0.9383) and 0.9078 (0.8981–0.9145) respectively. The mean differences (and 95% confidence limits) were 48.23 (23.0–77.3) ms and 2.02 (−1.4 to 4.8) ms.
Conclusion: It is possible to synthesize MRF signals from MRI data using a DL network, thereby creating the potential for performing quantitative relaxometry assessment without the need for a dedicated MRF pulse sequence.
Introduction
The power of MRI as a noninvasive diagnostic test is due not only to the range of soft tissue contrasts and functional information generated but more significantly to their correlation with anatomic and physiologic changes across a range of conditions and disease states (1). In the clinical setting, this versatility is utilized by executing several MR pulse sequences that provide multiple visualizations and quantitative data of the abnormality or disease in question. With the potential for each acquisition to last multiple minutes, MR examination times can range from tens of minutes to one or even two hours depending on the imaging indication and the number of anatomic regions covered. Thus, an MR exam represents a trade-off between allowing sufficient imaging time to acquire the requisite MR data needed for diagnosis and the need to limit overall MR exam duration to ensure patient compliance, provide access to, and ensure efficient and cost-effective use of an expensive and restricted imaging resource. Within this context, the opportunity to obtain additional imaging data, particularly data derived from multiple sequences, is limited.
Because MR image contrasts are a surrogate of the underlying and intrinsic relaxometry values of the tissue being imaged, it seems intuitive that quantitative assessment of these values would provide a more accurate and rapid diagnostic tool when compared to acquiring multiple MR data sets. Despite this, quantitative relaxometry methods have found limited clinical application due in part to their long acquisition times, the lack of multiparametric quantitation, and susceptibility to machine and environmental effects (2). Bobman et al. (3) described early approaches to multiparametric quantification and “synthetic” MR image generation in which a set of source images acquired with differing imaging parameters were used to generate quantitative relaxometry data. These data were then used as inputs into the Bloch equations for a given pulse sequence type. Although the approach demonstrated high precision and accuracy (4), long computing times impeded clinical introduction and widespread adoption.
Within the past decade there has been renewed interest in acquiring quantitative multiparametric imaging data, particularly from a single acquisition. One such approach, referred to as multi-dynamic multi-echo (MDME) or magnetic resonance imaging compilation (MAGIC) (5–7), involves acquiring multiple echo saturation recovery spin echo data from which relaxometry data are generated and then used as input to generate multiple synthetic MR images (contrasts). Another approach, first described by Ma et al. (8) and referred to as MR fingerprinting (MRF), involves the continuous repetition of a given MR imaging sequence, generating multiple 2D or 3D datasets of the object under interrogation. Unlike conventional steady-state MR imaging approaches in which parameters such as the repetition time (TR), echo time (TE), and the radio frequency (RF) excitation pulse flip angle (α) are held constant throughout the acquisition, MRF acquisition strategies rely upon varying multiple MR acquisition parameters according to a pre-determined history throughout the acquisition process. The result of this approach is to destroy the steady state signal, thereby making the acquired images non-diagnostic. However, given that the signal from a given pulse sequence can be described mathematically and that the scan parameter values are known, the signal evolution of a given voxel can be generated for a chosen set of relaxometry parameters. If this process is repeated over a range of diagnostic relaxometry values, a so-called dictionary or library of signal evolutions can be generated (8–12). Comparison of the acquired signal evolution of a given voxel with the generated dictionary allows estimation of the actual relaxometry of the voxel in question by identifying the best match between the acquired and dictionary signals. Repetition of this process on a voxel-by-voxel basis then allows for the quantitative and spatial resolution of these parameters. Additionally, once the spatial topography of the various MR relaxometry parameters is quantified, multiple image contrasts can be synthesized by using these maps as inputs into the Bloch equation (13) describing the MR signal for the pulse sequence type and scan parameters of interest. Therefore, MRF provides a method for quantitative assessment of tissue relaxometry values, together with the ability to synthesize multiple MR image contrasts from a single acquisition.
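For illustration, the matching step can be expressed as a maximization of the normalized inner product between the acquired signal evolution and each dictionary entry. The following minimal sketch (Python/NumPy) is not the reconstruction code used in this or the cited studies; the dictionary entries, relaxometry grid, and voxel signal are toy values chosen only to show the mechanics.

```python
import numpy as np

def match_voxel(signal, dictionary, t1_t2_grid):
    """Match one voxel's signal evolution to the dictionary entry with the
    largest normalized inner product (the standard MRF template match).

    signal     : (T,) acquired signal evolution of the voxel
    dictionary : (N, T) simulated signal evolutions, one row per (T1, T2) pair
    t1_t2_grid : (N, 2) relaxometry values associated with each dictionary row
    """
    sig = signal / np.linalg.norm(signal)                       # unit-normalize voxel signal
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = np.abs(d.conj() @ sig)                             # match score per entry
    best = int(np.argmax(scores))                               # best-matching entry
    return t1_t2_grid[best], scores[best]

# Toy usage: a three-entry dictionary of length-5 "signal evolutions"
dictionary = np.array([[1.0, 0.8, 0.6, 0.5, 0.4],
                       [1.0, 0.5, 0.3, 0.2, 0.1],
                       [1.0, 0.9, 0.8, 0.7, 0.6]])
t1_t2_grid = np.array([[900.0, 80.0], [600.0, 50.0], [1400.0, 110.0]])   # (T1, T2) in ms
voxel_signal = np.array([1.0, 0.82, 0.59, 0.48, 0.41])

(t1, t2), score = match_voxel(voxel_signal, dictionary, t1_t2_grid)
print(f"Best match: T1 = {t1} ms, T2 = {t2} ms (score = {score:.3f})")
```

Repeating this lookup for every voxel yields the T1 and T2 maps; in practice the dictionary is generated from the known scan parameter history rather than hand specified.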
Given the potential for MRF to address the limitations of conventional relaxometry approaches and its ability to synthesize multiple MR contrasts, MRF is an active area of research and development. However, this technology is limited by the fact that it can only be used prospectively, thereby constraining its application to those subjects imaged since its inception (circa 2013) (8), and by its requirement for a dedicated pulse sequence and reconstruction pathway, both of which are available only within research settings. Generation of a synthetic MRF signal without the need for a dedicated MRF sequence has the potential to significantly expand this technology, not only in terms of prospective application but also by enabling its use retrospectively, thereby accessing the wealth of MR data acquired during the three decades prior to the inception of the MRF technique.
Interest in the use of deep learning (DL) methods to solve a variety of challenges in MRI has increased significantly over the past several years given the promise of improved precision, accuracy and reduced computation times afforded by advanced DL models and graphical processing units (GPUs). Given the described limitations of current MRF acquisition techniques and reconstruction methods, the development of a DL MRF method has the potential to address these and, in doing so, expand development and use of this promising technology. The purpose of this study is to report on the development and evaluation of a DL-based algorithm for synthesizing MRF data from a limited set of MR image information, specifically a rapidly acquired, magnitude-only volumetric T1-weighted dataset. This work is thereby distinguished from existing DL approaches in MRF, which are directed toward improvements in reconstruction accuracy and processing speed (14, 15).
Materials and methods
Subjects
This prospective study was approved by the lead author's Institutional Review Board (IRB), was HIPAA compliant and involved obtaining written informed consent from all subjects. Imaging data from 37 normal subjects were used in the study. Subjects ranged in age from 21 to 62 years and included 10 males (minimum, maximum age = 24, 43 years) and 27 females (minimum, maximum age = 21, 62 years). There were no inclusion or exclusion criteria for subjects other than being able to successfully complete the MR imaging examination, with recruitment being in response to internal, IRB-approved research protocol advertising.
Imaging protocol
All imaging was performed on a single 3T MR scanner (Signa Premier, GE Healthcare, Waukesha, WI). Each subject was imaged using the protocol listed in Table 1 that included a 3D MRF sequence described previously (16, 17) and multiple conventional MR imaging sequences (series). Each normal subject was scanned using a single 48 channel receive-only RF coil for signal reception.
MR image data were reconstructed by the host computer system of the MR scanner and stored in the DICOM imaging standard format while MRF reconstruction was performed offline using a proprietary Matlab (Mathworks, Natick, MA) software package provided by the University of Pisa (Pisa, Italy) on a Linux workstation equipped with two 8-core Intel Xeon Gold 6244 central processing units and an NVIDIA Tesla V100 graphics processing unit (NVIDIA Corporation, Santa Clara, CA). Gomez et al. (9) have described the MRF processing pipeline that was used to process the raw MRF data into the first 15 singular value decomposition (SVD) coefficients of the temporal MRF signal (18). This compressed MRF signal was used to reconstruct quantitative parametric maps of T1 and T2, with relaxometry maps being generated in units of milliseconds (ms).
DL network
DL involves training artificial neural networks to model complex patterns in data. These models, particularly convolutional neural networks (CNNs), have revolutionized various fields, including medical imaging, by enabling automated feature extraction and image classification. CNNs are particularly adept at recognizing spatial hierarchies in images through layers of convolutions, pooling, and nonlinear activations. In this study, we employed a U-Net architecture, a type of CNN originally designed for biomedical image segmentation (19).
MRF data consist of time-series signals which are projected onto the SVD space to reduce the dimensionality of the problem to a manageable level, as discussed by McGivney et al. (18). In this implementation, only the first four singular values (SVs) are used, since higher order ones were found to contain very low signal and therefore only contribute to the noise component of the MRF SVD space. Phase renormalization is first applied by setting the imaginary component of the first SV (SV1) to zero. As a result, the real part of SV1 approximately resembles the spatial distribution of the proton density signal. Each complex SV is represented mathematically by a real and an imaginary matrix associated with two distinct channels. Since SV1 is only real, the model establishes a multi-valued link between single-channel MRI and seven-channel MRF information for the four SVs.
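A minimal sketch of this compression and channel layout is given below (Python/NumPy). The array sizes are arbitrary, the SVD basis is computed from the data itself rather than from the dictionary as in (18), and the exact phase-renormalization convention of the study may differ; the sketch is intended only to show how four complex SVs become seven real-valued channels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_sv = 4096, 1000, 4

# Stand-in for the temporal MRF signal: (n_voxels, n_timepoints), complex.
mrf_series = (rng.standard_normal((n_voxels, n_timepoints))
              + 1j * rng.standard_normal((n_voxels, n_timepoints)))

# Temporal SVD compression: project the time dimension onto the first
# n_sv right-singular vectors (cf. McGivney et al.).
_, _, vh = np.linalg.svd(mrf_series, full_matrices=False)
compressed = mrf_series @ vh[:n_sv].conj().T            # (n_voxels, 4), complex

# Phase renormalization: remove the phase of SV1 so that its imaginary part
# can be set to zero; its real part then resembles the proton density signal.
phase = np.exp(-1j * np.angle(compressed[:, 0]))
compressed *= phase[:, None]
compressed[:, 0] = compressed[:, 0].real

# Channel layout: SV1 contributes one (real) channel, SV2-SV4 contribute a
# real and an imaginary channel each, giving seven channels in total.
channels = [compressed[:, 0].real]
for k in range(1, n_sv):
    channels += [compressed[:, k].real, compressed[:, k].imag]
channels = np.stack(channels, axis=-1)                   # (n_voxels, 7)
print(channels.shape)
```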
Figure 1 describes in block form the overall structure of the network, which consists of seven identical DL networks. The network was designed to replicate the multichannel characteristics of the SVD MRF data with each network being considered an individually trainable channel. Each complex SV was considered a complex eigenvalue with each eigenvector being associated with two distinct channels, establishing a multiple-valued link between single-channel MRI and seven-channel MRF information. Each of the seven networks was trained simultaneously in the computational pipeline, and the final composite network contains a total of 1,940,902 trainable parameters. The seven networks, denoted U1 to U7 in Figure 1, are identical U-Net DL networks [fully convolutional DL networks (20)] adapted for regression tasks (21). The overall architecture of each network is shown in Figure 2 and includes an enhancement achieved with batch normalization (22) and dropout layers to improve generalization and prevent overfitting (23). While these modifications are not novel in isolation, their inclusion in this adaptation of U-Net is intended to improve the robustness of the model in generalizing production of synthetic MR data from limited input MR images. Leveraging these established techniques serves to enhance the model's performance in predicting relaxometry values and synthesizing a wide range of image contrasts from a single acquisition.
Figure 1. Schematic representation of seven parallel U-Net architectures utilized for regressing MRI input data. Each U-Net (U1 … U7) predicts a specific real or imaginary component of the singular value decomposition (SVD) from the multi-parametric MRF data. The predicted singular values are then compared (dashed box) with the ground truth continuous MRF singular values, and the mean squared error of each loss (L1 to L7) is employed to iteratively train each neural network, by updating the weights, for accurate reconstruction. Each singular value is complex and comprised of a real (R) and imaginary (I) component that is processed through separate U-Net convolutional neural networks (CNNs), except for the first singular value, which was treated as being only real (I1 = 0) and therefore required only one CNN, whereas the higher order values required two networks, one for the real and one for the imaginary component.
Figure 2. U-Net architecture for MRI image regression. The network begins with an input layer, proceeds through a contracting path on the left, characterized by convolutional and max pooling layers that increase in feature channels while incorporating dropout for regularization. The bottleneck at the center serves as a critical transition between the contracting and expansive paths, where up-sampling and concatenation with corresponding feature maps from the contracting path occur, followed by convolutions for detailed feature construction. The output layer, through a 1 × 1 convolution, translates the feature information into a continuous MRF space.
By assigning distinct U-Net architectures to each SV, each individual network can be optimized to accurately reconstruct the specific component without interference or conflation. In a single network processing all components, the shared layers may unintentionally learn features that overlap or blend information from different SVs. This can dilute the specificity of the learned representations for each SV component, and errors in one component can propagate and degrade the overall quality of the reconstruction, affecting convergence. Instead, distinct networks for each component distribute the reconstruction tasks among independent networks, thereby favoring high fidelity reconstruction of synthetic MRF SVs. Multitask learning processes often struggle with interference between tasks when shared parameters optimize for diverse and potentially conflicting objectives. This is known as negative transfer, and work in this area suggests that separating tasks can improve performance for problems with distinct characteristics. In our case these can be the different imaging modalities (e.g., proton density, T1, and T2), or simply the specific nature of the real and imaginary components of the SVs (24).
During the network training process, convergence is monitored by repeatedly generating synthetic MRF results from the MRI data sets reserved for verification. A running comparison is made between the synthetic MRF and the corresponding original MRF datasets, which provide the ground truth and are never utilized in the network training process. Tracking the progression of this verification error alongside the training iterations ensures that both processes converge to acceptable error levels and that the downward trends show no significant overfitting behavior.
The U-Net is characterized by its U-shaped structure, which includes an encoder (contracting) path to capture context and a decoder (expansive) path that enables precise localization. In between, the two paths are connected via a set of convolutional layers (bottleneck). Structurally, the network therefore includes three consecutive pathways: a contracting path, followed by a bridging path and finally an expansive path. 3D MRI data are input into each network, which is trained to generate each of the seven synthetic SVD outputs. While 3D data are used as input, each 3D data set is treated as a series of contiguous 2D inputs that are fed into the network consecutively. Each slice follows the contracting path of the U-Net architecture through the application of two consecutive convolution steps with 16 different 3 × 3 filters. Each convolution is followed by a rectified linear unit (ReLU) and a batch normalization step, generating a (256 × 256 × 16) feature block. Next, a 2 × 2 max pooling operation follows for down-sampling in each of the separate 16 channels, resulting in a (128 × 128 × 16) block. The previous convolution scheme is repeated with double the number of 3 × 3 filters, now yielding a (128 × 128 × 32) block. This overall down-sampling step is then repeated until a (16 × 16 × 256) feature block is obtained. The bridging or bottleneck step repeats the previous two consecutive convolution steps with the addition, at the end, of an up-sampling layer based on bilinear interpolation, which results in a (32 × 32 × 256) feature block.
After the bottleneck, the expansive (decoder) path begins with a concatenation step that combines the up-sampled block with the corresponding block obtained at the same level in the preceding down-sampling path. Thus, the concatenation increases the number of channels by 50%. Then, two consecutive convolution steps follow as applied in the down-sampling path, but without the batch normalization step. At each up-sampling level, the same number of filters used in the corresponding down-sampling level is used, ensuring that, after convolution, the same number of channels is reached. Finally, before climbing to the next level, another up-sampling step is applied. The process is repeated until the top layer is reached, at which point the output contains a single channel. Each axial 2D slice of the 3D data is processed sequentially through the network until the entire 3D volume has been processed.
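One plausible implementation of a single per-component network (U1 to U7) following the layer description above is sketched below. The framework (PyTorch), the dropout rate and placement, and minor details such as the arrangement of the deepest convolution block are assumptions made for illustration; the bounded tanh output follows the activation described later in the Results and Discussion, and the parameter count will not exactly match the reported total.

```python
import torch
import torch.nn as nn


class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by ReLU and, optionally, batch
    normalization, with an optional dropout layer closing the block."""

    def __init__(self, in_ch, out_ch, batch_norm=True, dropout=0.0):
        super().__init__()
        layers = []
        for i in range(2):
            layers.append(nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                                    kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            if batch_norm:
                layers.append(nn.BatchNorm2d(out_ch))
        if dropout > 0:
            layers.append(nn.Dropout2d(dropout))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)


class UNetSV(nn.Module):
    """Per-component regression U-Net: one 256x256 MPRAGE slice in, one real
    or imaginary singular-value map out."""

    def __init__(self, dropout=0.1):
        super().__init__()
        feats = [16, 32, 64, 128]                  # skip-connection levels
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

        # Contracting path: 1 -> 16 -> 32 -> 64 -> 128 channels
        self.enc = nn.ModuleList()
        in_ch = 1
        for f in feats:
            self.enc.append(DoubleConv(in_ch, f, batch_norm=True, dropout=dropout))
            in_ch = f

        # Deepest block (16 x 16 x 256) and bottleneck, followed by up-sampling
        self.deep = DoubleConv(feats[-1], 256, batch_norm=True, dropout=dropout)
        self.bottleneck = DoubleConv(256, 256, batch_norm=True, dropout=dropout)

        # Expansive path: convolutions without batch normalization
        self.dec = nn.ModuleList()
        up_in = 256
        for f in reversed(feats):                  # 128, 64, 32, 16
            self.dec.append(DoubleConv(up_in + f, f, batch_norm=False))
            up_in = f

        self.head = nn.Conv2d(feats[0], 1, kernel_size=1)   # 1x1 output convolution

    def forward(self, x):
        skips = []
        for block in self.enc:                     # contracting path
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.up(self.bottleneck(self.deep(x))) # bridge and first up-sampling
        for block, skip in zip(self.dec, reversed(skips)):
            x = block(torch.cat([x, skip], dim=1)) # concatenate skip features
            if skip is not skips[0]:               # up-sample until the top level
                x = self.up(x)
        return torch.tanh(self.head(x))            # bounded output in [-1, 1]


if __name__ == "__main__":
    net = UNetSV()
    out = net(torch.randn(2, 1, 256, 256))         # a batch of two MPRAGE slices
    print(out.shape)                               # torch.Size([2, 1, 256, 256])
```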
Data preparation
Before being used for network training, both MRI and MRF data underwent several preprocessing procedures. The first involved selection of the appropriate MRI data. While multiple MRI contrasts were available as described in Table 1, only magnitude 3D magnetization-prepared rapid gradient echo (MPRAGE) data were used as the input MRI information due to their superior gray/white parenchymal differentiation (25) and to limit the complexity of the network by only accepting a single volumetric dataset as input. The second involved converting both data sets to the NIFTI format to ensure a single consistent format and image coordinate system. Thirdly, registration and resolution matching between the MRF SVD and MPRAGE data were performed to ensure equivalent spatial and geometric concordance of voxel pairs. Interpolation was performed on the MPRAGE data, producing equivalent 256 × 256 × 256 matrices with isotropic voxels of 1 mm3. MPRAGE data underwent additional indexing to generate a 3D stack of axial slices. Registration was performed using the SimpleITK open-source package (26) and involved performing 3D rigid body registration and optimization based on a mutual information metric (26). Finally, normalization and skull stripping of both MRF and MRI data were performed. For MRI data, normalization involved rescaling the dynamic range of the voxel intensities to a minimum and maximum of 0 and 1 by dividing by the maximum voxel intensity of the volume. For MRF data, each real and imaginary SVD volume was similarly normalized between 0 and 1. Skull stripping was performed using binary masks of the whole brain generated as part of the tissue segmentation process described below. Both the segmentation and voxel-wise normalization steps were found to be necessary to ensure optimal model performance.
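A minimal sketch of the registration and resolution-matching step is shown below. SimpleITK is the package reported above, but the specific optimizer settings, helper function, and file names are illustrative assumptions rather than the study's actual configuration.

```python
import SimpleITK as sitk


def register_and_resample(mprage_path, mrf_reference_path):
    """Rigidly register an MPRAGE volume to an MRF-space reference image and
    resample it onto the reference's 256x256x256, 1 mm isotropic grid."""
    moving = sitk.ReadImage(mprage_path, sitk.sitkFloat32)        # MPRAGE (moving)
    fixed = sitk.ReadImage(mrf_reference_path, sitk.sitkFloat32)  # MRF-space reference (fixed)

    # Mutual-information-driven 3D rigid (Euler) registration
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(fixed, moving)

    # Resample the MPRAGE onto the MRF grid; the interpolation doubles as the
    # resolution-matching step described in the text.
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

    # Normalize voxel intensities to the range [0, 1]
    arr = sitk.GetArrayFromImage(resampled)
    return arr / arr.max()


# Hypothetical file names, for illustration only:
# mprage_on_mrf_grid = register_and_resample("sub01_mprage.nii.gz", "sub01_mrf_sv1.nii.gz")
```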
While interpolation is not necessarily considered “augmentation”, it serves a similar purpose by artificially increasing the variability of training data or modifying data to fit specific needs. In this application, interpolation was applied to resample MRI images acquired with varying voxel sizes to achieve a consistent resolution. Throughout this work the MRF voxelization has served as the reference because it is generated on a regular grid as indicated above. Rigid affine transformations align the volumes but do not ensure voxel-to-voxel alignment of MRI to MRF data due to differences in voxel dimensions across different subjects. Registration and augmentation by resolution matching (resampling) between the MRI and MRF data were therefore critical to ensure resolution consistency, effectively increasing the variety and quality of the training data.
Network training and testing
To train the network, 16 slices were selected from the training data for each training iteration. Slices were randomly selected across subjects and locations within the 3D imaging volume of each. Given that there were a total of 7,680 slices (30 subjects × 256 slices/subject), 480 iterations of the training phase were performed. The use of randomly sampled paired slices (MPRAGE, MRF) in this manner was necessary to address the need for substantial input data to attain the requisite precision and accuracy for quantitative analysis, amplifying the input volume by a factor of 256. While this strategy significantly augmented the dataset size, potential spatial correlations arising from contiguous slice acquisition were not accounted for in the model training.
To ensure convergence of the network, each synthetically generated SVD was rescaled using the ranges listed in Table 2. These values were generated from the maximum and minimum values of the ground truth MRF SVD values of the entire training set. Because the network is designed as a 2D reconstruction network, renormalization was applied to the SVD of each slice rather than to the SVD of the entire 3D synthetic MRF volume.
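A minimal sketch of this paired-slice sampling and per-slice renormalization is given below (Python/NumPy), assuming the registered MPRAGE volumes and the seven-channel ground-truth SV volumes are held as arrays; the array layouts and names are illustrative, while the batch size of 16 follows the text.

```python
import numpy as np

rng = np.random.default_rng(42)


def sample_training_batch(mprage_vols, sv_vols, batch_size=16):
    """Randomly draw paired (MPRAGE slice, 7-channel SV slice) examples across
    subjects and slice positions, rescaling each SV slice to [0, 1].

    mprage_vols : (n_subjects, 256, 256, 256) magnitude MPRAGE volumes
    sv_vols     : (n_subjects, 256, 256, 256, 7) ground-truth SV channels
    """
    n_subjects, n_slices = mprage_vols.shape[0], mprage_vols.shape[1]
    subj = rng.integers(0, n_subjects, size=batch_size)    # random subject per example
    slc = rng.integers(0, n_slices, size=batch_size)       # random axial slice per example

    x = mprage_vols[subj, slc][:, None, :, :]               # (16, 1, 256, 256)
    y = np.moveaxis(sv_vols[subj, slc], -1, 1)               # (16, 7, 256, 256)

    # Per-slice renormalization of the SV targets to [0, 1]
    lo = y.min(axis=(2, 3), keepdims=True)
    hi = y.max(axis=(2, 3), keepdims=True)
    y = (y - lo) / np.maximum(hi - lo, 1e-8)
    return x.astype(np.float32), y.astype(np.float32)
```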
Table 2. Minimum and maximum values of each component of the compressed ground truth MRF (i.e., acquired) data used for rescaling of the synthetic MRF singular values (SVs).
Quantification of network performance was achieved by comparison of the synthetic SVs and their ground truth counterparts. For each SV pair the batch mean squared error (MSE) in relation to the ground truth MRF data was calculated. Network weights were adjusted iteratively using an ADAM optimization algorithm (27), selected for its adaptive learning capabilities, and configured with a learning rate of 10−4. This process was repeated over numerous iterations, constituting an epoch (the number of times the network processes all data in the training set), with a total of 1,000 epochs executed for each network. Model performance was monitored, and the optimal weights corresponding to the lowest MSE throughout the training process were preserved. MSE is the standard choice of loss function for regression problems. Because the differences between the ground truth and predicted values are squared, MSE gives more weight to, and is thus more sensitive to, large errors.
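A sketch of the corresponding training loop for one per-component network is shown below; the MSE loss, Adam optimizer, 10−4 learning rate, and retention of the lowest-loss weights follow the text, while the framework (PyTorch) and the checkpointing details are assumptions.

```python
import copy
import torch
import torch.nn as nn


def train_channel_network(net, batches, epochs=1000, lr=1e-4):
    """Train one per-component regression network with a batch MSE loss and the
    Adam optimizer, preserving the weights that achieve the lowest loss.

    net     : network mapping (B, 1, 256, 256) inputs to (B, 1, 256, 256) outputs
    batches : re-iterable collection (e.g., a DataLoader) of (input, target) tensors
    """
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    best_loss = float("inf")
    best_state = copy.deepcopy(net.state_dict())

    for _ in range(epochs):
        epoch_loss, n_batches = 0.0, 0
        for x, y in batches:
            optimizer.zero_grad()
            loss = loss_fn(net(x), y)          # batch mean squared error
            loss.backward()                    # back-propagate the error
            optimizer.step()                   # Adam weight update
            epoch_loss += loss.item()
            n_batches += 1
        epoch_loss /= max(n_batches, 1)
        if epoch_loss < best_loss:             # keep the best-performing weights
            best_loss = epoch_loss
            best_state = copy.deepcopy(net.state_dict())

    net.load_state_dict(best_state)
    return net, best_loss
```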
Estimation of relaxometry values
After training of the network, MPRAGE data from the five test subjects were input into the network to generate the four SVs for each. While the MPRAGE data were 3D, each dataset was treated as a stack of contiguous 2D slices. As a result, 256 2D slices were input, generating 256 × 4 complex SVs. The 256 2D SVs were combined to create a single 3D SV for each of the four complex values and rescaled to the global SV maximum and minimum values listed in Table 2. The 3D true and synthetic SVs were then input into the MRF dictionary matching algorithm (9, 28) using the software and hardware described above, thereby generating 3D T1 and T2 data for both.
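A minimal sketch of this assembly and rescaling step is given below (Python/NumPy); the helper function and array layouts are illustrative assumptions, with the actual global ranges given in Table 2.

```python
import numpy as np


def assemble_synthetic_svs(predicted_slices, sv_min, sv_max):
    """Stack per-slice network outputs into 3D volumes and map them from [0, 1]
    back to the global SV ranges, recombining real/imaginary channels.

    predicted_slices : (256, 7, 256, 256) per-slice, per-channel outputs in [0, 1]
    sv_min, sv_max   : (7,) global minimum/maximum of each SV channel (Table 2)
    """
    vol = np.asarray(predicted_slices)                       # (256, 7, 256, 256)
    vol = vol * (sv_max - sv_min)[None, :, None, None] + sv_min[None, :, None, None]

    # Channel 0 -> SV1 (real only); channels (1,2), (3,4), (5,6) -> SV2..SV4
    sv1 = vol[:, 0].astype(np.complex64)
    sv2 = vol[:, 1] + 1j * vol[:, 2]
    sv3 = vol[:, 3] + 1j * vol[:, 4]
    sv4 = vol[:, 5] + 1j * vol[:, 6]
    return np.stack([sv1, sv2, sv3, sv4], axis=0)            # (4, 256, 256, 256), complex
```

The four complex volumes can then be passed, in place of the acquired SVs, to the same dictionary matching step used for the true MRF data.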
Segmentation
MPRAGE-derived anatomical regions of interest were generated using Statistical Parametric Mapping version 12 (SPM12) (SPM12: https://www.fil.ion.ucl.ac.uk/spm/) (29) with templates, settings and priors from the Mayo Clinic Adult Lifespan Template (MCALT: https://www.nitrc.org/projects/mcalt/). Subject-specific segmentation maps were then applied to the MRF-derived relaxometry maps, providing gray matter (GM), white matter (WM), cerebrospinal fluid (CSF) and whole brain (used for skull stripping described previously) segmentation maps. Additional segmentation was performed resulting in regional brain parcellations using the MCALT_ADIR122 atlas (https://www.nitrc.org/projects/mcalt/) (30) with Advanced Normalization Tools (31). In total, 47 individual regions were identified. For each region, the average relaxometry value (T1, T2) was calculated and used as input for statistical processing.
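For illustration, the regional averaging can be expressed as in the following sketch (Python/NumPy), where the label volume is assumed to be aligned to the relaxometry maps; the function and variable names are illustrative and are not part of SPM12 or ANTs.

```python
import numpy as np


def regional_stats(relax_map, label_map, region_ids):
    """Mean and standard deviation of a relaxometry map (T1 or T2, in ms)
    within each region of an aligned integer-valued parcellation atlas."""
    stats = {}
    for rid in region_ids:
        values = relax_map[label_map == rid]     # voxels belonging to this region
        stats[rid] = (float(values.mean()), float(values.std()))
    return stats
```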
Statistics
Region-specific mean and standard deviation values were averaged over the five normal subjects for both true (i.e., acquired) MRF and DL MRF relaxometry values. To assess the degree of agreement between the relaxometry data pairs, the concordance correlation coefficient (CCC) was calculated. Prior to calculation of the CCC, clustered bootstrapping of data pairs (T1 true vs. T1 DL, T2 true vs. T2 DL) was performed using 10,000 bootstrap replicates to account for the multi-level nature of the data (multiple subjects and multiple correlated regions provided by the segmentation process). CCC values and 95% confidence intervals were calculated in addition to the mean difference between true and DL relaxometry values. All calculations were performed using the RStudio software package (Posit team, 2023; RStudio: Integrated Development Environment for R, version 2023.12.0.369; Posit Software, PBC, Boston, MA; http://www.posit.co/).
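The analysis itself was carried out in R; the sketch below illustrates an equivalent clustered (subject-level) bootstrap of Lin's concordance correlation coefficient and of the mean true-minus-DL difference in Python. The function names and the percentile confidence-interval convention are assumptions for illustration.

```python
import numpy as np


def ccc(x, y):
    """Lin's concordance correlation coefficient between two paired samples."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)


def clustered_bootstrap(true_vals, dl_vals, subject_ids, n_boot=10_000, seed=1):
    """Subject-level (clustered) bootstrap of the CCC and the mean difference.

    true_vals, dl_vals : (n,) paired regional means (235 = 5 subjects x 47 regions)
    subject_ids        : (n,) subject label for each pair (the bootstrap cluster)
    """
    rng = np.random.default_rng(seed)
    true_vals = np.asarray(true_vals, dtype=float)
    dl_vals = np.asarray(dl_vals, dtype=float)
    subject_ids = np.asarray(subject_ids)
    subjects = np.unique(subject_ids)

    ccc_stats, diff_stats = [], []
    for _ in range(n_boot):
        resampled = rng.choice(subjects, size=subjects.size, replace=True)
        idx = np.concatenate([np.flatnonzero(subject_ids == s) for s in resampled])
        ccc_stats.append(ccc(true_vals[idx], dl_vals[idx]))
        diff_stats.append((true_vals[idx] - dl_vals[idx]).mean())

    ccc_stats, diff_stats = np.asarray(ccc_stats), np.asarray(diff_stats)
    return {
        "CCC": (ccc_stats.mean(), *np.percentile(ccc_stats, [2.5, 97.5])),
        "mean_diff": (diff_stats.mean(), *np.percentile(diff_stats, [2.5, 97.5])),
    }
```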
Results
Table 3 lists mean and SD values averaged over the five normal subjects for both the true and DL-derived T1 and T2 estimates for the 47 anatomical regions of interest. Overall, the DL estimates showed lower variance, as measured by the ratio of true to DL SD values across all subjects and regions (235 = 5 subjects × 47 regions), with average ratios of 1.14 (minimum = 0.30, maximum = 2.19) and 1.79 (minimum = 0.35, maximum = 6.67). This is expected given the inherent smoothing nature of the DL process: the final layer of the neural network uses a tanh activation function, which effectively compresses the data into the interval [−1, 1] and acts as a smooth, low-pass-like function, as needed to maintain the stability of the training process.
Table 3. Mean and standard deviation values for the actual and deep learning (DL) generated relaxometry values for the five normal subjects for 47 anatomical regions of interest.
Table 4 lists the bootstrap CCC values and 95% confidence intervals for both T1 and T2 true vs. DL data pairs. T2 values showed a slightly higher degree of correlation between the true and DL values compared to T1 (0.9078 vs. 0.8793). This is also reflected in the mean difference values, with the mean differences being 48.23 and 2.02 ms for T1 and T2 respectively. The positive differences indicate an underestimation of DL relaxometry estimates compared to the acquired, i.e., true values. However, T2 estimates showed closer agreement due in part to their smaller absolute values and the fact that the 95% confidence interval included zero difference. Figures 3, 4 show scatter plots of data pairs for T1 and T2 estimates for each region and subject (235 data pairs = 47 regions × 5 subjects) respectively and illustrate the bias, that is, the underestimation of relaxometry values by the DL network.
Table 4. Concordance correlation coefficient (CCC) and 95% confidence intervals based on bootstrapping involving 10,000 bootstrap replicates.
Figure 3. Scatter plot of true vs. deep learning (DL) T1 relaxometry values. A total of 235 points are shown with each point representing a given region (47 total) and subject (5 total).
Figure 4. Scatter plot of true vs. deep learning (DL) T2 relaxometry values. A total of 235 points are shown with each point representing a given region (47 total) and subject (5 total).
Figures 5, 6 show representative mid-brain axial slices of both true and DL reconstructed T1 and T2 relaxometry maps for the five normal subjects used for network testing. All data sets were preprocessed using an automated skull stripping algorithm and zeroing of non-brain (background) pixels and reconstructed to provide isotropic voxel dimensions (1 × 1 × 1 mm3). The same window and level settings were used for all T1 and T2 data (T1: window/level = 2/1 s, T2: window/level = 0.12/0.06 s). Both T1 and T2 DL maps were “smoother” in appearance, which can be attributed to the inherent low-pass effect of the network noted previously, while the true MRF data qualitatively exhibited lower signal-to-noise ratio (SNR) due to the apparent increase in image noise. The observed lower SNR of the true MRF relaxometry data is due to the relatively short acquisition time (∼4 min) of the high-resolution, volumetric acquisition.
Figure 5. Mid-brain axial T1 relaxometry maps for the deep learning (DL) and true MRF reconstructions.
Figure 6. Mid-brain axial T2 relaxometry maps for the deep learning (DL) and true MRF reconstructions.
Discussion
In this work we have developed a DL network for the purpose of generating a synthetic MRF signal from standard magnitude-only MR imaging data, in this case a T1-weighted (i.e., MPRAGE) 3D dataset of the brain of normal subjects. The potential of such a technique is that it provides the opportunity to generate quantitative relaxometry information from an MR examination that does not include MRF as part of the original acquisition.
Given the complex acquisition strategies, computational requirements, and unique reconstruction methods of MRF, significant efforts are underway to integrate various DL based approaches to address these challenges. In general, these efforts can be categorized into those attempting to improve the precision and accuracy of quantitative relaxometry results, improve the computational efficiency of the data reconstruction process, or enhance specific MRF-derived imaging applications. Efforts to improve the precision and accuracy of MRF-derived relaxometry values include using DL to increase the spatial resolution of MRF relaxometry data (32, 33), improve the accuracy and precision of quantitative relaxation values (34–39), decrease MRF acquisition times (40), or replace the dictionary matching process (41). Similarly, DL has been used for MRF-related applications such as MRF chemical exchange saturation transfer (CEST) imaging (42–49), arterial spin labelling (50), improved anatomical mapping of disease processes (51–53), and MR spectroscopy (54). By contrast, the network described in this work has been designed with the specific intent of synthesizing an MRF signal from a previously acquired magnitude-only MR image data set. The implications of this approach are severalfold. First, since the MRF technique was first reported in 2013 (8), there exist approximately 30 years of diagnostic MR image information that could benefit from this technology, given that the first clinical MR imaging systems were developed in the early 1980s (55). Second, access to quantitative relaxometry information derived through a synthetic DL MRF could provide new insights into the development and progression of multiple disease processes by providing quantitative relaxometry information over time spans exceeding 50 years. Third, the described network demonstrates as a proof of concept the ability to derive quantitative information from an inherently qualitative (i.e., MRI) signal, thereby opening new areas of investigation as well as DL methodologies for extracting quantitative metrics from inherently qualitative data.
Overall, as measured by the CCC, there was agreement between the DL-derived and actual MRF relaxometry values, with CCC values (and 95% confidence intervals) of 0.8793 (0.8136–0.9383) for T1 and 0.9078 (0.8981–0.9145) for T2. The authors have not attempted to assign a degree of agreement to these values given that there is disagreement regarding the interpretation of the degree of agreement based on the absolute CCC value. For example, Akoglu (56) has described how CCC values can be interpreted as being similar to other correlation coefficients, with values of <0.2 being assigned as poor and >0.8 as excellent. In contrast, Akoglu (56) also noted that other authors have indicated that poor agreement exists for values of <0.9 and that substantial agreement lies within the range of 0.95–0.99. However, the data do indicate, as illustrated in Figures 3, 4, that strong agreement exists between both MRF approaches but that the degree of agreement is related to the absolute relaxometry value in question. This is particularly true for comparisons of T1 estimates across the range of values (700–1,500 ms), while DL-derived T2 estimates were lower than the actual MRF estimates for T2 values greater than 80 ms.
The discrepancies between relaxometry estimates are further quantified by the bootstrapped average (and 95% confidence intervals) of the difference between the true and DL-derived T1 and T2 estimates, which were 48.23 ms (23.0–77.3 ms) and 2.02 ms (−1.4 to 4.8 ms) respectively. While the T2 confidence interval indicates that the mean difference includes zero, the T1 estimate did not, indicating that, on average, the DL estimate of T1 was systematically less than the actual value. Previous comparison between calculated (i.e., MRF) estimates and National Institute of Standards and Technology (NIST)/International Society of Magnetic Resonance in Medicine (ISMRM) quantitative phantom (https://www.nist.gov/programs-projects/quantitative-mri) relaxometry values using the same MRF acquisition sequence has shown agreement for both T1 and T2 over clinically encountered relaxometry values (17). However, the linear regression fits showed intercepts of 22.4 ms and 1.7 ms for the T1 and T2 values respectively, with a positive intercept indicating an overestimation of the MRF-derived relaxometry value. Similar results were also reported by Buonincontri et al. (57), who performed a multicenter reproducibility study using a similar MRF sequence run on the same scanner manufacturer as used in this study, in which both T1 and T2 derived relaxometry values were overestimated compared to the NIST stated values. Taken together, the underestimation of absolute T1 and T2 values by the DL network is compensated by the overestimation of these values by the actual MRF sequence, therefore indicating overall accuracy and precision of the DL-derived values.
Comparison of the true vs. DL relaxometry maps, as seen in Figures 5, 6, identifies the overall smoothing of relaxometry maps generated from the DL MRF data when compared to the actual MRF relaxometry maps. This is to be expected given that DL in general, and U-Net networks in particular, are designed to find a local minimum of smoothly varying optimization functions and are therefore less susceptible to noise both in terms of signal value and spatial distribution. This is further quantified by comparison of the coefficient of variation (CoV) of the bootstrapped individual regional T1 and T2 estimates for both MRF approaches. For T1, the mean and range (minimum, maximum) of the CoV were 0.22 (0.066–0.399) and 0.203 (0.054–0.364) for the true and DL estimates across all 47 anatomic segments. Similarly, the T2 mean and range were 0.512 (0.136–1.291) and 0.348 (0.065–1.048) for the true and DL estimates respectively. Given the inherently noisy nature of the MRF data acquired in this work, the smoothing introduced by the DL network was seen as an advantage, thereby improving the precision of these estimates.
Our DL implementation differs from the standard U-Net architecture (20) in several ways. Batch normalization and dropout layers have been added to avoid overfitting, which results in better prediction on untrained data. Seven distinct implementations of U-Net were trained independently, and they were concatenated only at the final inference stage after convergence of the network weights. In the future we plan to investigate alternative implementations in which the networks are coupled during training, for example through weight sharing or weight pre-training strategies. For regression problems, as in this case, a linear activation function is commonly used for the output layer (21). However, this work adopted a tanh function, which introduces nonlinearity in the prediction process for MRF data, yielding better convergence. Of note, the U-Net was originally designed for segmentation applications with a sigmoid activation function. Preliminary network configurations demonstrated poor convergence, suggesting an ineffective activation function and prompting the adoption of the tanh function. Future work will include assessment of other nonlinear activation functions.
A unique feature of the current network configuration was the limited number of singular values and associated networks employed in generating the DL MRF. While the MRF compression algorithm generated a total of 14 complex SVs, initial investigation of all 14 indicated that most of the signal of the compressed MRF data was contained within the first four, with the remaining values contributing only noise to the system. Thus, the network was only trained on the initial four complex values, with the first SV treated as being only real (i.e., zero imaginary component). Zeroing of the imaginary component of the first SV was necessitated by the fact that its values were low, resulting in artifacts and a relatively large error function suggesting that the network was not optimized. Simply setting the imaginary component of the first SV to zero eliminated this problem and resulted in rapid optimization, achieved in part by the optimization of seven vs. eight U-Net networks.
The results of this study suggest that it is possible to generate accurate T1 and T2 relaxometry maps from a single rapidly acquired T1-weighted dataset. T1 and T2 relaxometry data, in combination with proton density information, make synthetic MRI possible, allowing for the creation of multiple additional contrasts typically used in clinical MRI. The MRF DL technique described herein has the potential to unlock hidden contrasts not typically seen by the eye on routine T1 images, and may enable calculation of the T1-, T2-, and fluid-attenuated inversion recovery (FLAIR)-weighted images typically acquired in a traditional clinical MRI examination from a single acquisition. Furthermore, the approach provides the potential for adding multiparametric quantitative data without the need for additional imaging series, addressing concerns regarding increased MR examination times.
Recently Monga et al. (15) described developing trends in MRF and identified several emerging clinical applications including quantitative assessment of the heart, musculoskeletal system, abdomen, brain and malignancies, specifically quantifying their response to radiation therapy. Collectively, these highlight the utility of MRF as a method for deriving quantitative MR biomarkers for multiple diseases and their related processes. However, a likely short-term application of MRF involves the assessment of intracranial diseases and tumors. This is due in part to the fact that in vivo feasibility was first demonstrated within the brain (8) but also because the brain is a relatively easy organ to image due to its overall spherical geometry, relative insensitivity to physiologic motion, and overall homogeneous tissue properties, making correction of magnetic field inhomogeneities, including both B0 and B1+, straightforward. Unsurprisingly, multiple authors have demonstrated the efficacy of MRF for diagnosing a range of disease processes and masses including assessment of mesial temporal lobe lesions associated with epilepsy (58), meningiomas (17), and multiple sclerosis (16). MRF is also providing increased clinical specificity, particularly regarding further characterization and classification of both benign and malignant tumors. For example, Badve et al. (59) demonstrated that MRF derived relaxometry values can differentiate solid tumor regions of lower grade gliomas from metastases and peritumoral regions of glioblastomas from lower grade gliomas. When MRF is combined with additional information, for example 18F PET-MR, it can be used to identify tumor grade and predict mutational status in gliomas (60), which is of inherent therapeutic significance. These data thus support the viability of the approach described in this work, particularly when applied to MRF of the brain.
The diversification of MRF applications to multiple organs and diseases highlights the clinical significance of quantitative relaxometry data in diagnostic MR imaging. The ability to synthesize an MRF signal from magnitude-only MR imaging data addresses a major limitation of this approach by allowing generation of this data without the associated MRF infrastructure (pulse sequence and reconstruction pipeline) and addressing clinical imaging constraints by not increasing overall examination times through the addition of extra pulse sequence acquisitions. Retrospective processing of existing MR data further points to the potential of this approach by creating additional opportunities for longitudinal studies that precede the arrival of MR fingerprinting.
The significance of this work is multifaceted. The proof-of-concept results presented highlight the transformative potential of DL to address current challenges in medical imaging that extend beyond the specific application of MRF. Also, since MRF is still not widely available nor integrated in routine clinical workflows, the development of this and other DL-based methods and tools, once appropriately trained and tested, has the potential to provide synthetic MRF information rapidly and inexpensively, thus contributing to efficiency and resource optimization. Finally, by demonstrating the feasibility and accuracy of this approach in normal subjects, the groundwork for extending these techniques to patients with various pathologies in the future has been established. As a first-of-its-kind method, this study has focused on establishing the feasibility and accuracy of generating synthetic MRF data from MRI. Unfortunately, benchmarking with other existing methods is not possible, due to the lack of prior approaches addressing this specific problem.
Limitations
There are several limitations associated with this study. First, the DL network has been trained based on a single MRF pulse sequence and acquisition strategy on a single MR scanner and field strength. To address this, and thereby increase the generalizability of the approach, ongoing work is being performed to train and evaluate additional networks to create a synthetic MRF that can be described by a generalized scan parameter history that would accommodate various acquisition strategies, scan times and pulse sequences. This includes using MRF sequences from other MR scanner manufacturers and obtaining data from scanners at differing field strengths. Second, the network has only been trained on contiguous 2D data from the brains of normal subjects based on T1-weighted MR data, thereby potentially reducing the sensitivity of the network to the T2 component of the synthetic MRF signal. It is important to note that, while heavily T1-weighted, the MPRAGE signal does include a T2 signal component (61), thereby influencing the learning phase of the network. Also, the incorporation of additional imaging data from a given subject, such as T2-weighted or other contrast data sets, greatly increases both the complexity of the network as well as the computation time, requiring additional computational resources. This also imposes the practical challenge of providing spatially registered data of equal resolution acquired at the same timepoint as input to the network, which may not be available in a prospective clinical setting. Ongoing work is currently underway to train additional networks based on multiple MRI data inputs, including T2-weighted data from patients referred to our clinical imaging practice, and to input a single 3D volume into the network. We are therefore transitioning network development from a single stand-alone university server to national supercomputing resources, in particular the Delta GPU cluster at the National Center for Supercomputing Applications (delta.ncsa.illinois.edu). Finally, a small number of subjects were used in all phases of the network development and training. However, we were still able to obtain excellent convergence by applying data augmentation. While the MRF data consist of a regular 3D grid of (256)3 voxels with 1 mm3 resolution, the available MRI scans were obtained with larger spacing between slices, and the corresponding 2D grids, although regular, usually have lower resolution (larger grid spacing). The MRI data were carefully augmented by interpolation during the process of alignment with the MRF volume, which increased the effective distribution of MRI samples on a regular grid used in training. Such augmentation techniques are widely used in machine learning to improve resolution and reduce model overfitting. Despite these limitations, the results indicate that a synthetic MRF signal can be generated from a single contrast MRI data set. We predict that additional network development and training will further increase the precision, accuracy and general applicability of the DL network.
Conclusion
The results of this study support the hypothesis that MRF signals can be synthesized from conventional MR imaging data using a DL network. Overall agreement between the acquired and synthetic MRF signals was acceptable for both T1 and T2 derived relaxometry estimates for normal brain tissue at 3T. The work also demonstrates the potential to retrospectively analyze MR imaging information in the absence of an MRF signal, thereby enabling quantitative relaxometry to be performed on data acquired prior to the development of the MRF technique. Future work includes expanding the DL network capabilities to synthesize MRF data from multiple MR scanner manufacturers and to train additional networks on multiple T1-weighted and non-T1-weighted image contrasts.
Data availability statement
The datasets presented in this article are not readily available because original MR data is maintained and owned by Mayo Clinic. Requests to access the datasets should be directed to mcgee.kiaran@mayo.edu.
Ethics statement
The studies involving humans were approved by Mayo Clinic Internal Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
KM: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. YS: Data curation, Methodology, Resources, Software, Validation, Writing – review & editing. RW: Conceptualization, Investigation, Project administration, Writing – review & editing. AP: Data curation, Investigation, Writing – review & editing. NC: Investigation, Methodology, Visualization, Writing – review & editing. TM: Data curation, Formal Analysis, Methodology, Writing – original draft. NS: Methodology, Resources, Software, Writing – review & editing. UR: Resources, Supervision, Validation, Writing – review & editing. SZ: Conceptualization, Data curation, Writing – review & editing. KF: Conceptualization, Data curation, Writing – review & editing. NL: Data curation, Validation, Writing – review & editing. CS: Resources, Software, Validation, Writing – review & editing. JG: Resources, Software, Writing – review & editing.
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. Funding was provided by the Mayo Clinic Center for Individualized Medicine (CIM).
Acknowledgments
The authors would like to acknowledge the IMAG07 Foundation (Pisa, Italy) and GE Healthcare (Waukesha, WI, USA) for providing the MRF pulse sequence and analysis software.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. van Beek EJR, Kuhl C, Anzai Y, Desmond P, Ehman RL, Gong Q, et al. Value of Mri in medicine: more than just another test? J Magn Reson Imaging. (2019) 49(7):e14–25. doi: 10.1002/jmri.26211
2. Tippareddy C, Zhao W, Sunshine JL, Griswold M, Ma D, Badve C. Magnetic resonance fingerprinting: an overview. Eur J Nucl Med Mol Imaging. (2021) 48(13):4189–200. doi: 10.1007/s00259-021-05384-2
3. Bobman SA, Riederer SJ, Lee JN, Suddarth SA, Wang HZ, MacFall JR. Synthesized Mr images: comparison with acquired images. Radiology. (1985) 155(3):731–8. doi: 10.1148/radiology.155.3.4001377
4. Bobman SA, Riederer SJ, Lee JN, Tasciyan T, Farzaneh F, Wang HZ. Pulse sequence extrapolation with Mr image synthesis. Radiology. (1986) 159(1):253–8. doi: 10.1148/radiology.159.1.3952314
5. Konar AS, Paudyal R, Shah AD, Fung M, Banerjee S, Dave A, et al. Qualitative and quantitative performance of magnetic resonance image compilation (magic) method: an exploratory analysis for head and neck imaging. Cancers (Basel). (2022) 14(15):3624. doi: 10.3390/cancers14153624
6. Tanenbaum LN, Tsiouris AJ, Johnson AN, Naidich TP, DeLano MC, Melhem ER, et al. Synthetic mri for clinical neuroimaging: results of the magnetic resonance image compilation (magic) prospective, multicenter, multireader trial. AJNR Am J Neuroradiol. (2017) 38(6):1103–10. doi: 10.3174/ajnr.A5227
7. Wang Q, Wang G, Sun Q, Sun DH. Application of magnetic resonance imaging compilation in acute ischemic stroke. World J Clin Cases. (2021) 9(35):10828–37. doi: 10.12998/wjcc.v9.i35.10828
8. Ma D, Gulani V, Seiberlich N, Liu K, Sunshine JL, Duerk JL, et al. Magnetic resonance fingerprinting. Nature. (2013) 495(7440):187–92. doi: 10.1038/nature11971
9. Gomez PA, Cencini M, Golbabaee M, Schulte RF, Pirkl C, Horvath I, et al. Rapid three-dimensional multiparametric mri with quantitative transient-state imaging. Sci Rep. (2020) 10(1):13769. doi: 10.1038/s41598-020-70789-2
10. Jiang Y, Ma D, Seiberlich N, Gulani V, Griswold MA. Mr fingerprinting using fast imaging with steady state precession (Fisp) with spiral readout. Magn Reson Med. (2015) 74(6):1621–31. doi: 10.1002/mrm.25559
11. Ma D, Jiang Y, Chen Y, McGivney D, Mehta B, Gulani V, et al. Fast 3d magnetic resonance fingerprinting for a whole-brain coverage. Magn Reson Med. (2018) 79(4):2190–7. doi: 10.1002/mrm.26886
12. Ma D, Jones SE, Deshmane A, Sakaie K, Pierre EY, Larvie M, et al. Development of high-resolution 3d Mr fingerprinting for detection and characterization of epileptic lesions. J Magn Reson Imaging. (2019) 49(5):1333–46. doi: 10.1002/jmri.26319
13. Bernstein MA, King KF, Zhou XJ. Handbook of Mri Pulse Sequences. Burlington, MA: Elsevier Academic Press (2004).
14. Li P, Hu Y. Deep magnetic resonance fingerprinting based on local and global vision transformer. Med Image Anal. (2024) 95:103198. doi: 10.1016/j.media.2024.103198
15. Monga A, Singh D, de Moura HL, Zhang X, Zibetti MVW, Regatte RR. Emerging trends in magnetic resonance fingerprinting for quantitative biomedical imaging applications: a review. Bioengineering. (2024) 11(3):236. doi: 10.3390/bioengineering11030236
16. Mostardeiro TR, Panda A, Campeau NG, Witte RJ, Larson NB, Sui Y, et al. Whole brain 3d Mr fingerprinting in multiple sclerosis: a pilot study. BMC Med Imaging. (2021) 21(1):88. doi: 10.1186/s12880-021-00620-5
17. Mostardeiro TR, Panda A, Witte RJ, Campeau NG, McGee KP, Sui Y, et al. Whole-Brain 3d Mr fingerprinting brain imaging: clinical validation and feasibility to patients with meningioma. Magma. (2021) 34(5):697–706. doi: 10.1007/s10334-021-00924-1
18. McGivney DF, Boyacıoğlu R, Jiang Y, Poorman ME, Seiberlich N, Gulani V, et al. Magnetic resonance fingerprinting review part 2: technique and directions. J Magn Reson Imaging. (2020) 51(4):993–1007. doi: 10.1002/jmri.26877
19. McGivney DF, Pierre E, Ma D, Jiang Y, Saybasili H, Gulani V, et al. Svd compression for magnetic resonance fingerprinting in the time domain. IEEE Trans Med Imaging. (2014) 33(12):2311–22. doi: 10.1109/tmi.2014.2337321
20. Ronneberger O, Fischer P, Brox T, editors. U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—mICCAI 2015 (2015). Cham: Springer International Publishing.
21. Yao W, Zeng Z, Lian C, Tang H. Pixel-wise regression using U-net and its application on pansharpening. Neurocomputing. (2018) 312:364–71. doi: 10.1016/j.neucom.2018.05.103
22. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Francis B, David B, editors. Proceedings of the 32nd International Conference on Machine Learning; Proceedings of Machine Learning Research: PMLR. (2015). p. 448–56.
23. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. (2014) 15:1929–58.
24. Ruder S. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:170605098 (2017).
25. Mugler JP III, Brookeman JR. Three-dimensional magnetization-prepared rapid gradient-Echo imaging (3d mp rage). Magn Reson Med. (1990) 15(1):152–7. doi: 10.1002/mrm.1910150117
26. Lowekamp BC, Chen DT, Ibáñez L, Blezek D. The design of simpleitk. Front Neuroinform. (2013) 7:45. doi: 10.3389/fninf.2013.00045
27. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv preprint arXiv:14126980 (2014).
28. Gómez PA, Molina-Romero M, Buonincontri G, Menzel MI, Menze BH. Designing contrasts for rapid, simultaneous parameter quantification and flow visualization with quantitative transient-state imaging. Sci Rep. (2019) 9(1):8468. doi: 10.1038/s41598-019-44832-w
29. Ashburner J, Friston KJ. Unified segmentation. Neuroimage. (2005) 26(3):839–51. doi: 10.1016/j.neuroimage.2005.02.018
30. Schwarz CG, Gunter JL, Wiste HJ, Przybelski SA, Weigand SD, Ward CP, et al. A large-scale comparison of cortical thickness and volume methods for measuring Alzheimer’s disease severity. Neuroimage Clin. (2016) 11:802–12. doi: 10.1016/j.nicl.2016.05.017
31. Avants BB, Epstein CL, Grossman M, Gee JC. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal. (2008) 12(1):26–41. doi: 10.1016/j.media.2007.06.004
32. Fang Z, Chen Y, Hung SC, Zhang X, Lin W, Shen D. Submillimeter Mr fingerprinting using deep learning-based tissue quantification. Magn Reson Med. (2020) 84(2):579–91. doi: 10.1002/mrm.28136
33. Gu Y, Pan Y, Fang Z, Ma L, Zhu Y, Androjna C, et al. Deep learning-assisted preclinical Mr fingerprinting for sub-millimeter T(1) and T(2) mapping of entire macaque brain. Magn Reson Med. (2024) 91(3):1149–64. doi: 10.1002/mrm.29905
34. Balsiger F, Jungo A, Scheidegger O, Carlier PG, Reyes M, Marty B. Spatially regularized parametric map reconstruction for fast magnetic resonance fingerprinting. Med Image Anal. (2020) 64:101741. doi: 10.1016/j.media.2020.101741
35. Barbieri M, Brizi L, Giampieri E, Solera F, Manners DN, Castellani G, et al. A deep learning approach for magnetic resonance fingerprinting: scaling capabilities and good training practices investigated by simulations. Phys Med. (2021) 89:80–92. doi: 10.1016/j.ejmp.2021.07.013
36. Cabini RF, Barzaghi L, Cicolari D, Arosio P, Carrazza S, Figini S, et al. Fast deep learning reconstruction techniques for preclinical magnetic resonance fingerprinting. NMR Biomed. (2024) 37(1):e5028. doi: 10.1002/nbm.5028
37. Fang Z, Chen Y, Lin W, Shen D. Quantification of relaxation times in MR fingerprinting using deep learning. Proc Int Soc Magn Reson Med Sci Meet Exhib, Vol. 25 (2017).
38. Hoppe E, Körzdörfer G, Würfl T, Wetzl J, Lugauer F, Pfeuffer J, et al. Deep learning for magnetic resonance fingerprinting: a new approach for predicting quantitative parameter values from time series. Stud Health Technol Inform. (2017) 243:202–6. doi: 10.3233/978-1-61499-808-2-202
39. Khajehim M, Christen T, Tam F, Graham SJ. Streamlined magnetic resonance fingerprinting: fast whole-brain coverage with deep-learning based parameter estimation. Neuroimage. (2021) 238:118237. doi: 10.1016/j.neuroimage.2021.118237
40. Chen Y, Fang Z, Hung SC, Chang WT, Shen D, Lin W. High-resolution 3D MR fingerprinting using parallel imaging and deep learning. Neuroimage. (2020) 206:116329. doi: 10.1016/j.neuroimage.2019.116329
41. Cohen O, Zhu B, Rosen MS. MR fingerprinting deep reconstruction network (DRONE). Magn Reson Med. (2018) 80(3):885–94. doi: 10.1002/mrm.27198
42. Cohen O, Yu VY, Tringale KR, Young RJ, Perlman O, Farrar CT, et al. CEST MR fingerprinting (CEST-MRF) for brain tumor quantification using EPI readout and deep learning reconstruction. Magn Reson Med. (2023) 89(1):233–49. doi: 10.1002/mrm.29448
43. Kang B, Kim B, Park H, Heo HY. Learning-based optimization of acquisition schedule for magnetization transfer contrast MR fingerprinting. NMR Biomed. (2022) 35(5):e4662. doi: 10.1002/nbm.4662
44. Kang B, Kim B, Schär M, Park H, Heo HY. Unsupervised learning for magnetization transfer contrast MR fingerprinting: application to CEST and nuclear Overhauser enhancement imaging. Magn Reson Med. (2021) 85(4):2040–54. doi: 10.1002/mrm.28573
45. Kang B, Singh M, Park H, Heo HY. Only-train-once MR fingerprinting for B0 and B1 inhomogeneity correction in quantitative magnetization-transfer contrast. Magn Reson Med. (2023) 90(1):90–102. doi: 10.1002/mrm.29629
46. Kim B, Schär M, Park H, Heo HY. A deep learning approach for magnetization transfer contrast MR fingerprinting and chemical exchange saturation transfer imaging. Neuroimage. (2020) 221:117165. doi: 10.1016/j.neuroimage.2020.117165
47. Perlman O, Farrar CT, Heo HY. MR fingerprinting for semisolid magnetization transfer and chemical exchange saturation transfer quantification. NMR Biomed. (2023) 36(6):e4710. doi: 10.1002/nbm.4710
48. Perlman O, Zhu B, Zaiss M, Rosen MS, Farrar CT. An end-to-end AI-based framework for automated discovery of rapid CEST/MT MRI acquisition protocols and molecular parameter quantification (AutoCEST). Magn Reson Med. (2022) 87(6):2792–810. doi: 10.1002/mrm.29173
49. Singh M, Jiang S, Li Y, van Zijl P, Zhou J, Heo HY. Bloch simulator-driven deep recurrent neural network for magnetization transfer contrast MR fingerprinting and CEST imaging. Magn Reson Med. (2023) 90(4):1518–36. doi: 10.1002/mrm.29748
50. Fan H, Su P, Huang J, Liu P, Lu H. Multi-band MR fingerprinting (MRF) ASL imaging using artificial-neural-network trained with high-fidelity experimental data. Magn Reson Med. (2021) 85(4):1974–85. doi: 10.1002/mrm.28560
51. Hermann I, Golla AK, Martínez-Heras E, Schmidt R, Solana E, Llufriu S, et al. Lesion probability mapping in MS patients using a regression network on MR fingerprinting. BMC Med Imaging. (2021) 21(1):107. doi: 10.1186/s12880-021-00636-x
52. Shiradkar R, Panda A, Leo P, Janowczyk A, Farre X, Janaki N, et al. T1 and T2 MR fingerprinting measurements of prostate cancer and prostatitis correlate with deep learning-derived estimates of epithelium, lumen, and stromal composition on corresponding whole mount histopathology. Eur Radiol. (2021) 31(3):1336–46. doi: 10.1007/s00330-020-07214-9
53. Sun H, Luo G, Lui S, Huang X, Sweeney J, Gong Q. Morphological fingerprinting: identifying patients with first-episode schizophrenia using auto-encoded morphological patterns. Hum Brain Mapp. (2023) 44(2):779–89. doi: 10.1002/hbm.26098
54. van Zijl P, Knutsson L. In vivo magnetic resonance imaging and spectroscopy. Technological advances and opportunities for applications continue to abound. J Magn Reson. (2019) 306:55–65. doi: 10.1016/j.jmr.2019.07.034
55. Kabasawa H. MR imaging in the 21st century: technical innovation over the first two decades. Magn Reson Med Sci. (2022) 21(1):71–82. doi: 10.2463/mrms.rev.2021-0011
56. Akoglu H. User’s guide to correlation coefficients. Turk J Emerg Med. (2018) 18(3):91–3. doi: 10.1016/j.tjem.2018.08.001
57. Buonincontri G, Biagi L, Retico A, Cecchi P, Cosottini M, Gallagher FA, et al. Multi-site repeatability and reproducibility of MR fingerprinting of the healthy brain at 1.5 and 3.0 T. Neuroimage. (2019) 195:362–72. doi: 10.1016/j.neuroimage.2019.03.047
58. Liao C, Wang K, Cao X, Li Y, Wu D, Ye H, et al. Detection of lesions in mesial temporal lobe epilepsy by using MR fingerprinting. Radiology. (2018) 288(3):804–12. doi: 10.1148/radiol.2018172131
59. Badve C, Yu A, Dastmalchian S, Rogers M, Ma D, Jiang Y, et al. MR fingerprinting of adult brain tumors: initial experience. AJNR Am J Neuroradiol. (2017) 38(3):492–9. doi: 10.3174/ajnr.A5035
60. Haubold J, Demircioglu A, Gratz M, Glas M, Wrede K, Sure U, et al. Non-invasive tumor decoding and phenotyping of cerebral gliomas utilizing multiparametric 18F-FET PET-MRI and MR fingerprinting. Eur J Nucl Med Mol Imaging. (2020) 47(6):1435–45. doi: 10.1007/s00259-019-04602-2
Keywords: U-Net, convolutional neural network, magnetic resonance fingerprinting, MPRAGE, relaxometry
Citation: McGee KP, Sui Y, Witte RJ, Panda A, Campeau NG, Mostardeiro TR, Sobh N, Ravaioli U, Zhang S, Falahkheirkhah K, Larson NB, Schwarz CG and Gunter JL (2024) Synthesis of MR fingerprinting information from magnitude-only MR imaging data using a parallelized, multi network U-Net convolutional neural network. Front. Radiol. 4:1498411. doi: 10.3389/fradi.2024.1498411
Received: 18 September 2024; Accepted: 27 November 2024;
Published: 16 December 2024.
Edited by:
Elisa Scalco, National Research Council (CNR), Italy
Reviewed by:
Ricardo A. Gonzales, Harvard Medical School, United States
Aldo Rodrigo Mejía Rodríguez, Autonomous University of San Luis Potosí, Mexico
Copyright: © 2024 McGee, Sui, Witte, Panda, Campeau, Mostardeiro, Sobh, Ravaioli, Zhang, Falahkheirkhah, Larson, Schwarz and Gunter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Kiaran P. McGee, mcgee.kiaran@mayo.edu