- 1Department of Geriatric Medicine, Radboudumc Alzheimer Center, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- 2Department of Neurology, University Medical Center Groningen, Groningen, Netherlands
- 3Department of Cardiovascular Sciences, NIHR Leicester Biomedical Research Centre, Glenfield Hospital, Leicester, United Kingdom
- 4Department of Intensive Care, University of Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- 5Department of Neurology, Faculty of Medicine, Hospital das Clinicas University of São Paulo, São Paulo, Brazil
- 6Department of Applied Mathematics and Computer Science, Faculty of Natural Sciences and Mathematics, Universidad del Rosario, Bogotá, Colombia
- 7Department of Engineering Informatics, Institute of Biomedical Engineering, University of Santiago, Santiago, Chile
- 8Department of Clinical Neurophysiology, Maastricht University Medical Centre, Maastricht, Netherlands
- 9Department of Electronic Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Katholieke Universiteit Leuven, Leuven, Belgium
- 10Interuniversity Microelectronics Centre, Leuven, Belgium
- 11Department of Electrical, Computer and Software Engineering, McGill University, Montreal, QC, Canada
- 12Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- 13Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
- 14Department of Bioengineering, McGill University, Montreal, QC, Canada
- 15Department of Neurology, Luzerner Kantonsspital, Luzern, Switzerland
- 16Faculty of Engineering and the Environment, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- 17Departamento de Emergencia, Hospital de Clínicas, Universidad de la República, Montevideo, Uruguay
- 18Institute for Exercise and Environmental Medicine, Presbyterian Hospital of Dallas, University of Texas Southwestern Medical Center, Dallas, TX, United States
Parameters describing dynamic cerebral autoregulation (DCA) have limited reproducibility. In an international, multi-center study, we evaluated the influence of multiple analytical methods on the reproducibility of DCA. Fourteen participating centers analyzed repeated measurements from 75 healthy subjects, consisting of 5 min of spontaneous fluctuations in blood pressure and cerebral blood flow velocity signals, based on their usual methods of analysis. DCA methods were grouped into three broad categories, depending on output types: (1) transfer function analysis (TFA); (2) autoregulation index (ARI); and (3) correlation coefficient. Only TFA gain in the low frequency (LF) band showed good reproducibility, defined as an intraclass correlation coefficient (ICC) > 0.6, and only in approximately half of the gain estimates. None of the other DCA metrics had good reproducibility. For TFA-like and ARI-like methods, ICCs were lower than values obtained with surrogate data (p < 0.05). For TFA-like methods, ICCs were lower for the very LF band (gain 0.38 ± 0.057, phase 0.17 ± 0.13) than for the LF band (gain 0.59 ± 0.078, phase 0.39 ± 0.11, p ≤ 0.001 for both gain and phase). For ARI-like methods, the mean ICC was 0.30 ± 0.12, and for the correlation methods 0.24 ± 0.23. Based on comparisons with ICC estimates obtained from surrogate data, we conclude that physiological variability or non-stationarity is likely to be the main reason for the poor reproducibility of DCA parameters.
Introduction
The importance of cerebral autoregulation (CA) as a mechanism that protects the brain against alterations in blood pressure (BP), by keeping cerebral blood flow (CBF) relatively constant, has been clearly established (van Beek et al., 2008). Dynamic CA (DCA) is the transient cerebrovascular response to rapid changes in BP (Aaslid et al., 1989). Compared to the more classical modality of “static” autoregulation, which often requires the use of pharmacological agents to induce steady-state changes in BP (Tiecks et al., 1995), DCA has benefitted from recent developments in non-invasive techniques to record CBF and BP, and it is now the preferred approach for assessment of CA in physiological and clinical studies.
Despite its many advantages, protocols to reliably assess DCA remain the object of considerable debate (Simpson and Claassen, 2018a,b; Tzeng and Panerai, 2018a,b). On the one hand, maneuvers that induce relatively large and rapid changes in BP, such as the sudden release of compressed thigh cuffs (Aaslid et al., 1989), lead to recordings with better signal-to-noise ratio and the possibility of visualizing and quantifying the DCA response with measurements as short as 30 s. On the other hand, using the spontaneous fluctuations in BP and CBF, which can be observed in most individuals, allows estimation of DCA parameters at rest, without the need for a physiological disturbance or challenge. This can lead to better acceptance and feasibility in most clinical conditions.
Which road to take? The answer to this fundamental question is not straightforward as it is unlikely that a single protocol will be suitable for all different scenarios of patient care and physiological intervention (Simpson and Claassen, 2018a,b; Tzeng and Panerai, 2018a,b).
An optimal protocol could be defined as one which, combined with robust modeling techniques (Panerai, 2008), leads to the best sensitivity and specificity for detection of CA disturbances, as well as predictive accuracy for patient prognosis.
Before reaching this stage though, it is essential that measurement reproducibility is demonstrated as a key property of any method of assessment. This target is at the forefront of the collaborative initiatives promulgated by the International Cerebral Autoregulation Network (CARNet) as part of the effort to identify potential sources of methodological disparity (Meel-van den Abeelen et al., 2014) and encourage technical standardization (Claassen et al., 2016). The most recent stage of this pathway is described in this article and involves an international, multi-center assessment of the reproducibility of the main parameters that are currently available to assess DCA based on spontaneous fluctuations of BP and CBF.
Examining the reproducibility of DCA parameters, obtained from spontaneous fluctuations at rest, is important due to the widespread use of this approach for both physiological and clinical studies. Early assessments of the reproducibility of the spontaneous fluctuations approach were not encouraging (Brodie et al., 2009; Gommer et al., 2010; Smirl et al., 2015), but were not regarded as the definitive answer, only as indicative of a single method, handled by a single center. This limitation was addressed in the current multi-center study. An initial report (Sanders et al., 2018) described the influence of different methods of analysis on the reproducibility of synthetic data, where surrogate time-series of CBF velocity (CBFv) were generated based on real measurements of BP, coupled with a realistic signal-to-noise ratio. These generated CBFv data were based on a linear model. Thus, compared to real CBFv data, these generated data are free of any physiological influences on the BP–CBFv relationship. Such physiological influences could include non-stationary behavior of autoregulatory function (i.e., variations in function over time), and factors known to influence CBFv (e.g., PaCO2, cognitive activity, autonomic nervous activity, temperature, breathing pattern).
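To make this surrogate-data approach concrete, the sketch below (in Python) illustrates one way a surrogate CBFv series can be generated from a measured BP recording: the BP signal is passed through a fixed linear autoregulation model and noise is added at a chosen signal-to-noise ratio. The model shown (Tiecks et al., 1995, with parameters roughly corresponding to ARI = 5), the 20 dB SNR, and the simulated BP input are illustrative assumptions only; the actual generation procedure of Sanders et al. (2018) may differ.

```python
import numpy as np

FS = 10.0  # Hz, the re-sampling rate used in this study

def tiecks_cbfv(bp, fs=FS, T=1.9, D=0.75, K=0.9, v_base=60.0, crcp=12.0):
    """Surrogate CBFv from BP via the linear second-order model of Tiecks et al.
    (1995); T, D, K roughly correspond to ARI = 5. v_base (cm/s) and the critical
    closing pressure crcp (mmHg) are illustrative values."""
    bp = np.asarray(bp, dtype=float)
    bp_base = bp.mean()
    dp = (bp - bp_base) / (bp_base - crcp)       # normalized pressure deviation
    x1 = x2 = 0.0
    v = np.empty_like(bp)
    for n in range(bp.size):                     # discrete approximation of the model
        x1_new = x1 + (dp[n] - x2) / (fs * T)
        x2_new = x2 + (x1 - 2.0 * D * x2) / (fs * T)
        x1, x2 = x1_new, x2_new
        v[n] = v_base * (1.0 + dp[n] - K * x2)
    return v

def add_noise(x, snr_db=20.0, seed=0):
    """Add white Gaussian noise at a chosen (hypothetical) signal-to-noise ratio."""
    rng = np.random.default_rng(seed)
    noise_sd = np.sqrt(np.var(x) / 10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, noise_sd, size=x.shape)

# Example with a simulated (placeholder) 5-min mean BP trace at 10 Hz:
t = np.arange(int(5 * 60 * FS)) / FS
bp = 90.0 + 5.0 * np.sin(2 * np.pi * 0.05 * t) \
     + 2.0 * np.random.default_rng(1).standard_normal(t.size)
cbfv_surrogate = add_noise(tiecks_cbfv(bp))
```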
The present communication therefore aimed to provide a much broader description of the reproducibility of “real” estimates of DCA from 14 leading international centers, using a diversity of analytical methods. In particular, this study addressed two main objectives: (1) to compare the reproducibility of DCA parameters from these real physiological measurements to that of surrogate data and (2) to establish the influence of different analytical methods used by a variety of research centers worldwide on the reproducibility of DCA metrics.
Materials and Methods
Subjects
A database was created from available datasets of cerebral hemodynamic measurements from participating centers (Supplementary Table S1). Included were healthy adults >18 years of age. Exclusion criteria were uncontrolled hypertension, smoking, cardiovascular disease, diabetes, irregular heart rhythm, TIA/stroke, or significant pulmonary disease. The study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Written informed consent was obtained from all subjects.
Description of Datasets
Six of a total of 14 centers (Supplementary Table S1) provided datasets that consisted of two measurements from 10 to 15 healthy volunteers in each center, resulting in a total of 75 healthy subjects. The time between the two measurements varied between centers, from minutes to a maximum of 4 months. Datasets consisted of 5 min of beat-to-beat artifact-free mean CBFv (transcranial Doppler ultrasound, TCD), mean BP (digital artery volume clamping), and end-tidal CO2 (EtCO2, capnography) measurements at rest. Beat-to-beat parameters were re-sampled at 10 Hz. In 22 subjects, the TCD data were unilateral. The resulting dataset comprised N = 55 left-side and N = 71 right-side signals.
DCA Analysis
Data analyses were performed by 14 participating centers. The following DCA analysis methods were used: transfer function analysis (TFA) (Panerai et al., 1998a; Zhang et al., 1998; Mitsis et al., 2002; Muller et al., 2003; Reinhard et al., 2003; Liu et al., 2005; Gommer et al., 2010; van Beek et al., 2010; Meel-van den Abeelen et al., 2014; Muller and Osterreich, 2014; Panerai, 2014), Laguerre expansion of first-order Volterra kernels or finite impulse response models (Marmarelis, 2004; Mitsis et al., 2004, 2009; Marmarelis et al., 2013, 2014a,b), wavelet analysis (Torrence and Webster, 1999; Grinsted et al., 2004; Peng et al., 2010), parametric finite-impulse response filter-based methods (Panerai et al., 2000; Simpson et al., 2001), autoregulation index (ARI) analysis (Panerai et al., 1998b), autoregressive moving average (ARMA)-based ARI methods and variant ARI methods (Panerai et al., 2003), autoregressive with exogenous input (ARX) methods (Liu and Allen, 2002; Liu et al., 2003; Panerai et al., 2003), and correlation coefficient-like indices (Heskamp et al., 2013; Caicedo et al., 2016). A summary of the methods and corresponding references are given in Table 1.
Reproducibility of DCA Metrics
For the reproducibility and variability analysis of the DCA parameters, DCA methods were grouped into three broad categories: (1) TFA-like outputs; (2) ARI-like outputs; and (3) correlation coefficient-like outputs. These categories were created from the perspective of similar output parameters, not because of similarity on mathematical grounds. In general, all centers were free to use their own settings to cover the standard frequency range between 0 and 0.5 Hz. In the majority of cases, though, for the TFA-like output methods, the settings for TFA were similar to what was later proposed in the CARNet White Paper (Claassen et al., 2016). In summary, this involved spectral estimates using the Welch method with multiple segments of data of at least 100 s, 50% overlap, and cosine windowing to reduce spectral leakage. Individual method settings are listed in Supplementary Table S4. Estimates of gain and phase were averaged over the very low frequency (VLF) and low frequency (LF) bands (Supplementary Table S4; Claassen et al., 2016).
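As an illustration of this pipeline, the sketch below computes TFA gain and phase from 5-min, 10-Hz BP and CBFv series using Welch cross-spectral estimates with 100-s segments, 50% overlap, and a Hann (cosine) window, and then averages them over the VLF and LF bands. The band limits used (0.02–0.07 and 0.07–0.20 Hz) follow the white paper recommendation and are assumptions with respect to the individual centers' settings listed in Supplementary Table S4; gain is in cm/s/mmHg when BP is in mmHg and CBFv in cm/s.

```python
import numpy as np
from scipy import signal

FS = 10.0                              # Hz, sampling rate of the beat-to-beat series
NPERSEG = int(100 * FS)                # ~100 s segments
NOVERLAP = NPERSEG // 2                # 50% overlap
VLF, LF = (0.02, 0.07), (0.07, 0.20)   # Hz, band limits from the CARNet white paper

def tfa(bp, cbfv, fs=FS):
    """Welch-based transfer function analysis: BP (input) -> CBFv (output)."""
    win = signal.windows.hann(NPERSEG)  # cosine-type window to reduce spectral leakage
    f, pxx = signal.welch(bp, fs=fs, window=win, noverlap=NOVERLAP)
    _, pxy = signal.csd(bp, cbfv, fs=fs, window=win, noverlap=NOVERLAP)
    _, coh = signal.coherence(bp, cbfv, fs=fs, window=win, noverlap=NOVERLAP)
    h = pxy / pxx                       # complex frequency response
    return f, np.abs(h), np.degrees(np.angle(h)), coh

def band_mean(f, values, band):
    """Average an estimate over a frequency band (inclusive limits)."""
    sel = (f >= band[0]) & (f <= band[1])
    return float(np.mean(values[sel]))

# Usage with 5-min, 10-Hz arrays bp (mmHg) and cbfv (cm/s):
# f, gain, phase, coh = tfa(bp, cbfv)
# gain_lf   = band_mean(f, gain, LF)    # cm/s/mmHg
# phase_vlf = band_mean(f, phase, VLF)  # degrees
```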
The ARI-like output methods consisted of time domain estimates of the impulse or step response, using the inverse Fourier transform of gain and phase, or ARMA models (Panerai et al., 1998b, 2003; Liu and Allen, 2002; Liu et al., 2003).
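A minimal sketch of the transfer-function route to an ARI-like output is shown below: the Welch-based frequency response is inverted to an impulse response and integrated to a step response. Fitting this step response to the Tiecks et al. (1995) templates to obtain the 0–9 ARI value, and the ARMA-model variants, are not shown; the segment length and the 20-s response duration are illustrative choices.

```python
import numpy as np
from scipy import signal

FS = 10.0
NPERSEG = int(100 * FS)   # illustrative segment length (~100 s)

def cbfv_step_response(bp, cbfv, fs=FS, n_sec=20.0):
    """CBFv response to a step change in BP, obtained by inverse Fourier transform
    of the Welch-based transfer function (one ARI-like route; ARMA-based variants
    and the template fit that yields the 0-9 ARI value are not shown)."""
    win = signal.windows.hann(NPERSEG)
    _, pxx = signal.welch(bp, fs=fs, window=win, noverlap=NPERSEG // 2)
    _, pxy = signal.csd(bp, cbfv, fs=fs, window=win, noverlap=NPERSEG // 2)
    h_freq = pxy / pxx                           # frequency response, BP -> CBFv
    h_time = np.fft.irfft(h_freq, n=NPERSEG)     # impulse response (arbitrary scale)
    return np.cumsum(h_time)[: int(n_sec * fs)]  # integrate to a step response

# The ARI is then typically assigned by matching this step response to the
# Tiecks et al. (1995) model templates for ARI = 0-9 (fit not shown here).
```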
Finally, the correlation coefficient-like outputs consisted of a single parameter, obtained by linear regression or similar methods (Heskamp et al., 2013; Caicedo et al., 2016).
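For illustration of this category only, the sketch below computes a deliberately generic correlation-type index, the Pearson correlation between low-pass filtered BP and CBFv over the recording; it is not the specific method of Heskamp et al. (2013) or Caicedo et al. (2016), and the 0.1 Hz cut-off is an arbitrary choice.

```python
import numpy as np
from scipy import signal, stats

FS = 10.0

def correlation_index(bp, cbfv, fs=FS, cutoff_hz=0.1):
    """Generic correlation-coefficient-type index: Pearson correlation between the
    slow (< cutoff_hz) fluctuations of BP and CBFv over the whole recording. Shown
    only as a category illustration; it is not the method of any specific center."""
    b, a = signal.butter(2, cutoff_hz / (fs / 2.0), btype="low")
    bp_slow = signal.filtfilt(b, a, np.asarray(bp, float) - np.mean(bp))
    cbfv_slow = signal.filtfilt(b, a, np.asarray(cbfv, float) - np.mean(cbfv))
    r, _ = stats.pearsonr(bp_slow, cbfv_slow)
    return r   # values near zero suggest effective buffering of slow BP changes
```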
Statistical Analysis
We assessed reproducibility as follows. To quantify the level of agreement between the first and second measurement, we applied the Bland–Altman method to obtain the mean difference (or bias) and to determine limits of agreement (LOA). This was done for the methods in the TFA-like, ARI-like, and correlation-like categories. A non-parametric Wilcoxon signed rank test was used to check whether there were significant differences between left- and right-side results. Left and right output results were averaged for further analyses. To correct for non-normal data distributions, Box–Cox transformations, a family of power transformations with different exponents (Box and Cox, 1964), were applied. Within one analysis method, the same transformation was applied to both the first and second measurement, but different transformations may be used for different methods and different variables.
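The two main steps of this agreement analysis, Bland–Altman bias with limits of agreement and a Box–Cox transformation applied identically to both repeats, can be sketched as follows; estimating the Box–Cox exponent from the pooled first and second measurements is an assumption made here for illustration.

```python
import numpy as np
from scipy import stats

def bland_altman(t1, t2):
    """Bias (mean difference T1 - T2) and 95% limits of agreement."""
    diff = np.asarray(t1, float) - np.asarray(t2, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

def boxcox_pair(t1, t2):
    """Apply one Box-Cox power transformation to both repeats of a DCA metric.
    Here the exponent (lambda) is estimated from the pooled data so that the same
    transformation is used for the first and second measurement (an assumption)."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    pooled = np.concatenate([t1, t2])
    shift = 1e-6 - pooled.min() if pooled.min() <= 0 else 0.0  # Box-Cox needs x > 0
    _, lam = stats.boxcox(pooled + shift)
    return stats.boxcox(t1 + shift, lmbda=lam), stats.boxcox(t2 + shift, lmbda=lam)
```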
Agreement between the repeated measurements was further quantified for all DCA analysis methods using one-way intraclass correlation coefficient (ICC) analysis. ICC results of TFA-like methods combined for the parameters gain and phase were compared for VLF and LF. Furthermore, the differences between the ICC results of previously obtained surrogate data (Sanders et al., 2018) and physiological data were analyzed for the methods combined in the parameters gain VLF, gain LF, phase VLF, phase LF, ARI, and correlation. These differences between ICC parameter values were tested with the paired Wilcoxon signed rank test, considering that most parameters, such as TFA estimates, are not normally distributed. SPSS 22 was used for all analyses; a value of p < 0.05 was adopted to indicate statistical significance.
Interpretation of the ICC values was based on frequently quoted guidelines: poor (ICC < 0.40), fair (0.40–0.59), good (0.60–0.74), and excellent (0.75–1.00) (Cicchetti, 1994).
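For reference, the sketch below computes the one-way random-effects ICC for two repeated measurements directly from the one-way ANOVA mean squares and applies the Cicchetti (1994) interpretation bands; it is an illustrative implementation, not the SPSS routine used in the study.

```python
import numpy as np

def icc_oneway(t1, t2):
    """One-way random-effects ICC, ICC(1,1), for two repeated measurements,
    computed from the one-way ANOVA mean squares."""
    data = np.column_stack([t1, t2]).astype(float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    ms_between = k * np.sum((subj_means - data.mean()) ** 2) / (n - 1)
    ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cicchetti_label(icc):
    """Qualitative interpretation bands from Cicchetti (1994)."""
    if icc < 0.40:
        return "poor"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"
```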
Results
Subject characteristics are listed in Table 2. No significant differences were found for MAP, CBFv, and EtCO2 for the two measurements (T1 and T2).
The scatterplots in Figure 1A show examples of TFA-like estimates of LF gain, and those in Figure 1B show ARI-like results, for the repeated measurements with both physiological and surrogate data. The figures show a difference in the distribution of the data between Figures 1A,B, with a higher correlation between the repeated measurements for lower gain values only in the TFA-like results. Despite the lower number of cases in the surrogate results, it is clear that there is less variability in the surrogate data (bottom) compared to the physiological data (top) for all TFA-like methods (Figure 1A) and for the ARI and IR-filter methods (Figure 1B). Physiological data are presented in Supplementary Tables S2a–k.
Figure 1. (A) Gain LF results of TFA-like methods for repeated measurements. Top row: physiological data, bottom row: surrogate data. For each method group (TFA, Laguerre, Wavelet, IR-filter, and ARX) the results of similar methods are combined (Table 1). TFA: black dots are 10 methods (cm/s/mmHg), gray dots are 3 methods (%/% or %/mmHg); Laguerre: 4 methods (cm/s/mmHg); Wavelet: 1 method (cm/s/mmHg); IR-filter: 2 methods (%/%); ARX: 2 methods (cm/s/mmHg). See Supplementary Figures S1–S3 for Phase VLF/LF and Gain VLF. (B) ARI-like results of different methods for repeated measurements. Top row: physiological data, bottom row: surrogate data. For each method group (ARI/ARMA, ARX, IR-filter, and correlation) the results of similar methods are combined (Table 1). ARI: black dots are three methods (ARI 0–9 arbitrary units); gray dots are two methods (ARMA-ARI 0–9 arbitrary units); ARX: one method (ARX coefficient); IR-filter: one method (arbitrary units); correlation: two methods.
Comparing different autoregulation metrics with Bland–Altman analysis, we see a difference between gain variables and all the other variables (Figure 2). Both gain VLF and LF show a strong increase in the difference between two measurements on the y-axis for higher values of mean gain on the x-axis. For the smallest values of gain, where the DCA is considered most effective, the agreement is the strongest. Results for T1, T2, bias (T1-T2), and the LOA of the different method categories per method group are listed in Table 3. Each method group corresponds to results of several methods combined (Table 1 and Supplementary Tables S3a–c).
Figure 2. Bland–Altman plot of TFA-like parameters: gain VLF (top left), gain LF (top right), phase VLF (middle left), and phase LF (middle right); ARI-like parameters (bottom left); correlation-like parameters (bottom right). Units are similar to Figures 1A,B.
Left and right ICC results were not different. ICC analysis of physiological data is shown in Figure 3. Despite minor differences in ICC values between methods, 12 methods qualified as having good reproducibility (ICC > 0.6), and only for gain in the LF band. ICC values were significantly higher for surrogate data than for physiological data for the TFA-like and ARI-like methods, combined for centers using the same methods, for gain VLF (p < 0.001), gain LF (p < 0.001), phase VLF (p < 0.001), phase LF (p < 0.001), and ARI (p = 0.018) (Sanders et al., 2018). Only the correlation-like methods did not score higher ICC values for surrogate data compared to physiological data (p = 0.18). ICC results of the surrogate data are presented in Supplementary Tables S5a,b.
Figure 3. ICC values for methods using TFA or similar approaches with gain VLF and LF (top), phase VLF or LF (middle), and ARI or correlation-like methods (bottom). Results are shown per method (Table 1). ICC values <0.40: poor, between 0.40 and 0.59: fair, between 0.60 and 0.74: good, and between 0.75 and 1.00: excellent (Cicchetti, 1994).
For the TFA-like methods, ICC gain VLF [mean (SD)] was lower than ICC gain LF, respectively, 0.38 (0.057) and 0.59 (0.078), p < 0.001. Also for phase, the corresponding ICC values were lower for VLF than for LF, 0.17 (0.13) and 0.39 (0.11), respectively, p = 0.001. For ARI-like methods the mean (SD) ICC results were 0.30 (0.12) and for the correlation-like 0.24 (0.21).
Discussion
With this multi-center, multi-method study, we aimed to provide an internationally representative and broader evaluation of the reproducibility of many DCA assessment methods. By comparing real physiological measurements with those where physiological variability was reduced by the use of surrogate data, we have been able to assess the contribution of physiological non-stationarity to the reproducibility of DCA parameters. For surrogate data, with realistic CBFv signals generated from measured BP data, we had previously demonstrated good to excellent reproducibility for most DCA methods. We now hypothesized that, in real recordings of BP and CBF, non-stationarity in the BP–CBF relationship would reduce reproducibility for these DCA methods.
We asked researchers from various centers with expertise in DCA to apply their DCA method(s) to a common dataset with repeated physiological measurements of BP and CBFv. Participating centers, and respective analytical methods, are representative of the literature on DCA assessment (Panerai et al., 1998a,b, 2000, 2003; Zhang et al., 1998; Torrence and Webster, 1999; Simpson et al., 2001; Liu and Allen, 2002; Mitsis et al., 2002, 2004, 2009; Liu et al., 2003, 2005; Muller et al., 2003; Reinhard et al., 2003; Grinsted et al., 2004; Marmarelis, 2004; Gommer et al., 2010; Peng et al., 2010; van Beek et al., 2010; Heskamp et al., 2013; Marmarelis et al., 2013, 2014a,b; Meel-van den Abeelen et al., 2014; Muller and Osterreich, 2014; Panerai, 2014; Caicedo et al., 2016).
Main Findings
Two main findings emerged from this study: (i) the reproducibility of most DCA metrics, independently of the analytical approach adopted, should be regarded as “poor,” given the prevailing values of ICC < 0.4 (Cicchetti, 1994) and (ii) physiological variability is likely to be the main reason for the degradation in reproducibility, when compared to results obtained from surrogate data (Sanders et al., 2018).
Strictly speaking, these results indicate that, at this moment, most DCA metrics do not meet criteria for individual and clinical use for diagnostic and/or monitoring purposes. Despite the high variability across DCA parameters, only TFA and ARX scored ICC results that could be categorized as “good” (ICC > 0.6, Figure 3) for approximately half of the gain metrics in the LF band (Cicchetti, 1994). As discussed in more detail below though, these findings need to be placed into perspective, taking into account methodological issues and current knowledge of the wider application of DCA assessment metrics.
Methodological Considerations
Although indicative of the deterioration of DCA metrics, from what was obtained with surrogate data, to the case of “real” physiological measurements, the ICC can be misleading when estimated using only healthy subjects. Unlike the intra-subject standard error, the ICC takes into account both intra- and inter-subject variability. Given that healthy subjects would be expected to cluster around values indicative of well-functioning DCA, this would reduce inter-subject variability in comparison with intra-subject variance, thus biasing the ICC toward lower values. However, as can be observed in Figure 1, there was wide inter-subject variability, indicating that this alone cannot explain the low ICC results. Nonetheless, despite the indication that most DCA metrics have limited reproducibility, it would be premature to use our findings to halt their use in physiological and clinical studies before further research is conducted, ideally assessing the ICC for much larger cohorts of both patients and healthy individuals.
The analysis of physiological data showed large within- and between-subject variability, similar to what has been reported before in patient data (Gommer et al., 2010; van Beek et al., 2010; Elting et al., 2014; Smirl et al., 2015). Non-Gaussian distributions were corrected by the Box–Cox transformations (Box and Cox, 1964). The ICC values were much lower than what was found when these same methods were applied to analyze surrogate data (Sanders et al., 2018). In that study, physiological variability was reduced to only the BP signal, because the CBF signal was software-generated using the repeated BP signals as input. Even though realistic levels of noise were added to the generated CBF signal, all DCA methods demonstrated good to excellent reproducibility (ICC 0.6–1.00) on those surrogate data, whereas the majority of these same methods had poor reproducibility (ICC < 0.4) for the current dataset, where both BP and CBF signals represented physiological data. One interpretation of these results is that the poor reproducibility of DCA is not solely explained by poor accuracy or poor precision of the methods: with surrogate data, all methods showed sufficient accuracy and precision, leading to good reproducibility.
Consistent with the results of Smirl et al. (2015), the highest ICC results were obtained with gain LF parameters, although Figure 2 shows that reproducibility differs for different gain values, with the highest reproducibility for lower gain values. This is a proportional increase in variability, recognizable by the arrowhead shape in Figure 2. ICCs for gain and phase were lower in the VLF band than in the LF band, which may be explained by the lower coherence between BP and CBFv for VLF oscillations, resulting in wider confidence limits and lower ICC values for VLF. Comparing gain ICC results with phase, one can see decreased reproducibility in the phase results over both frequency bands. This does not immediately favor gain parameters as more suitable DCA metrics, since lower ICC values for phase can be expected purely on the basis of the definition of, and dependence between, the two parameters (Bendat and Piersol, 1986), which implies that confidence limits will automatically be wider for phase than for gain. We recommend routinely plotting confidence limits when reporting TFA results.
To improve reproducibility, it may be beneficial to use measurement conditions where the DCA regulatory system is maximally activated, for example in sit-to-stand measurements (Simpson and Claassen, 2018a,b) or squat-stand measurements (Smirl et al., 2015). This may result in minimal gain values in the LF band and improve reproducibility. However, it remains an ongoing debate whether TFA gain is the most suitable parameter to reflect the state of DCA, or whether phase may be more physiologically relevant.
Clinical Implications
Given the limited reproducibility shown by most indices of DCA, to what extent should we trust their use in clinical studies? This is a crucial question given the stage of research on DCA, with many centers advocating the use of DCA metrics in clinical decision-making and patient management. In this context, the results of this study might be a watershed. Until recently, the prevailing view has been that, among a plethora of DCA metrics, there could be one that could become a “gold standard” based on its reproducibility, as well as its sensitivity and specificity, to detect changes in DCA, either due to disease or physiological status. What this study shows, though, is that none of the methods in use could fulfill this role, at least not as far as reproducibility is concerned. Furthermore, the comparison between physiological and surrogate data also suggests that it is unlikely that other current or future methods will have outstanding reproducibility either. The reason for this somber perspective lies in the growing awareness that regulation of CBF, not only in response to BP changes, but also due to changes in CO2 or neural stimulation, is a highly non-stationary phenomenon, thus requiring an entirely different conceptual paradigm to ascertain the clinical usefulness of DCA metrics (Panerai, 2014). On the other hand, it is not all gloom and doom. Looking back into a vast literature, too extensive to be enumerated here, reporting on clinical applications of most of the DCA metrics included in this study, there is plenty of evidence to suggest their sensitivity to detect worsening DCA in a range of cerebrovascular and, increasingly, also systemic conditions. Studying reproducibility in the presence of disease is a major challenge though, as patient conditions are either worsening or improving on a daily basis. Nevertheless, several follow-up studies have been able to use diverse indices of DCA to describe the natural history of conditions like severe head injury (Czosnyka et al., 1997), ischemic stroke (Salinet et al., 2014), or intracerebral hemorrhage (Ma et al., 2016), which is also reassuring. Certainly much more research is needed, mainly to understand the nature of DCA non-stationarity and how this is affected by, and manifested in, clinical conditions, to improve the reliability and usefulness of DCA assessment for patient care.
Limitations and Future Directions
Only methods that could be applied to short data segments (5 min) were evaluated; therefore, the correlation-like methods were underrepresented. The correlation-like methods clearly showed reduced reproducibility compared to the other categories (Figure 3) under these conditions.
It is difficult to select a suitable method to assess the reproducibility of DCA analysis parameters. We selected the ICC, although this method is sensitive to outliers. This has probably affected the phase VLF results most strongly, since high variability and outliers were most prevalent in phase VLF.
The time interval differences between repeated measurements were not considered in the analysis. A dataset consisting of rest measurements was used, with limited BP fluctuations, resulting in low power of BP and CBFv oscillations. At rest, cerebral perfusion is usually well maintained and DCA may not be activated, whereas measurements during a physical challenge, when sufficient DCA functioning is crucial, will give more meaningful results (Simpson and Claassen, 2018a,b; Tzeng and Panerai, 2018a,b). Moreover, it will be relevant to add clinical data to the healthy controls to obtain a greater spread of inter-subject variability.
The precise reason for the low reproducibility of DCA assessment in physiological data could not yet be established. It is necessary to study physiological variation in DCA function within individuals in repeated measurements. From a theoretical perspective, the variability in DCA results can be reduced in two ways: increasing the coherence or increasing the number of averages (Bendat and Piersol, 1986; Halliday et al., 1995). To increase the coherence, oscillations could be induced and included in the measurement protocol. Increased coherence could also be achieved by selecting the data used for DCA analysis based on the power of BP oscillations. This line of investigation will be pursued as part of this wider project. To increase the number of averages, more or longer measurement protocols should be used, although the duration of recordings is usually limited in most clinical settings.
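Both routes follow from the approximate random-error expressions for spectral estimates (Bendat and Piersol, 1986): for large numbers of segments, the normalized random error of the gain estimate, and approximately also the standard deviation of the phase estimate (in radians), behave as

$$\varepsilon\left[|\hat{H}(f)|\right] \;\approx\; \mathrm{s.d.}\left[\hat{\varphi}(f)\right] \;\approx\; \frac{\sqrt{1-\gamma_{xy}^{2}(f)}}{|\gamma_{xy}(f)|\,\sqrt{2\,n_{d}}},$$

where $\gamma_{xy}^{2}(f)$ is the squared coherence between BP and CBFv and $n_{d}$ is the number of averaged segments, so that either higher coherence or more averaged segments reduces the random error of the estimates.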
Selecting the most promising DCA parameter is complex, since the most reproducible parameter is not necessarily the best parameter to reflect DCA status. Although no single method, linear or non-linear, outperformed the others, there are inter-method differences that are worth investigating. In particular, future studies could examine the influence of measurement length, of induced oscillations in the measurement protocol, or of data selection (Simpson and Claassen, 2018a,b).
Furthermore, the question remains to what extent reproducibility depends on autoregulation status. Are DCA parameters less reproducible in the case of impaired DCA function? One interesting and relatively easy next step could be to perform repeated measurements during hypercapnia (Katsogridakis et al., 2013), as a model of impaired DCA, and compare these with repeated measurements during normocapnia to assess differences in reproducibility.
Conclusion
The physiological nature of these measurements strongly reduced the reproducibility of DCA when assessed from short data recordings in healthy subjects. This conclusion is not affected by the choice of analytical method used to derive different DCA metrics, or by local procedures in the multiple international centers that participated in this study. Further investigation is needed to improve our understanding of how physiological variability affects DCA reproducibility in health and disease.
Data Availability
The datasets generated for this study are available on request to the corresponding author.
Ethics Statement
All subjects gave written informed consent in accordance with the Declaration of Helsinki. The six data providing centers and the ethical approval details: (1) JC: Radboudumc, Netherlands (ethical approval was given by the Local Medical Ethics Committee Arnhem–Nijmegen, Netherlands). (2) JE: University Medical Center Groningen, Netherlands (ethical approval was given by the Local Ethics Committee of the University Medical Centre Groningen). (3) EG: University Hospital Maastricht, Netherlands (ethical approval was given by the Medical Ethical Review Board of the Maastricht University Hospital/Maastricht University with METC reference no. 07-2-003). (4) RBP: Glenfield Hospital, Leicester, United Kingdom and University of Southampton, United Kingdom [ethical approval was given by the Research Ethics Committee of the National Research Ethics Service Southampton and Southwest Hampshire (10/H0502/1)]. (5) DMS: University of Southampton, United Kingdom [ethical approval was given by the National Research Ethics Service and Southampton University Hospitals NHS Trust (RHM HOS0199)]. (6) RZ: the Institute for Exercise and Environmental Medicine (IEEM), University of Texas Southwestern Medical Center, United States (ethical approval was given by the Institutional Review Boards of University of Texas Southwestern Medical Center and Texas Health Presbyterian Hospital of Dallas, TX, United States).
Author Contributions
MS, JE, RBP, and JC developed the idea for the study and drafted the manuscript. All authors performed the data analyses, participated in revising the manuscript, and approved the final version of this paper prior to submission.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We would like to thank all the subjects who contributed with data for this study.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphys.2019.00865/full#supplementary-material
References
Aaslid, R., Lindegaard, K. F., Sorteberg, W., and Nornes, H. (1989). Cerebral autoregulation dynamics in humans. Stroke 20, 45–52. doi: 10.1161/01.str.20.1.45
Bendat, J. S., and Piersol, A. G. (1986). Random Data: Analysis and Measurement Procedures. New York, NY: Wiley.
Box, G. E. P., and Cox, D. R. (1964). An analysis of transformations. J. R. Stat. Soc. Ser. B-Stat. Methodol. 26, 211–252.
Brodie, F. G., Atkins, E. R., Robinson, T. G., and Panerai, R. B. (2009). Reliability of dynamic cerebral autoregulation measurement using spontaneous fluctuations in blood pressure. Clin. Sci. 116, 513–520. doi: 10.1042/cs20080236
Caicedo, A., Varon, C., Hunyadi, B., Papademetriou, M., Tachtsidis, I., and Van Huffel, S. (2016). Decomposition of near-infrared spectroscopy signals using oblique subspace projections: applications in brain hemodynamic monitoring. Front. Physiol. 7:515.
Cicchetti, D. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol. Assess. 6, 284–290. doi: 10.1037//1040-3590.6.4.284
Claassen, J. A., Meel-van den Abeelen, A. S., Simpson, D. M., Panerai, R. B., and international Cerebral Autoregulation Research Network. (2016). Transfer function analysis of dynamic cerebral autoregulation: a white paper from the international cerebral autoregulation research network. J. Cereb. Blood Flow Metab. 36, 665–680. doi: 10.1177/0271678X15626425
Czosnyka, M., Smielewski, P., Kirkpatrick, P., Laing, R. J., Menon, D., and Pickard, J. D. (1997). Continuous assessment of the cerebral vasomotor reactivity in head injury. Neurosurgery 41, 11–17.
Elting, J. W., Aries, M. J., van der Hoeven, J. H., Vroomen, P. C., and Maurits, N. M. (2014). Reproducibility and variability of dynamic cerebral autoregulation during passive cyclic leg raising. Med. Eng. Phys. 36, 585–591. doi: 10.1016/j.medengphy.2013.09.012
Gommer, E. D., Shijaku, E., Mess, W. H., and Reulen, J. P. (2010). Dynamic cerebral autoregulation: different signal processing methods without influence on results and reproducibility. Med. Biol. Eng. Comput. 48, 1243–1250. doi: 10.1007/s11517-010-0706-y
Grinsted, A., Moore, J. C., and Jevrejeva, S. (2004). Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlin. Process. Geophys. 11, 561–566. doi: 10.5194/npg-11-561-2004
Halliday, D. M., Rosenberg, J. R., Amjad, A. M., Breeze, P., Conway, B. A., and Farmer, S. F. (1995). A framework for the analysis of mixed time series/point process data - theory and application to the study of physiological tremor, single motor unit discharges and electromyograms. Progr. Biophys. Mol. Biol. 64, 237–278. doi: 10.1016/s0079-6107(96)00009-0
Heskamp, L., Meel-van den Abeelen, A., Katsogridakis, E., Panerai, R., Simpson, D., Lagro, J., et al. (2013). Convergent cross mapping: a promising technique for future cerebral autoregulation estimation. Cerebrovasc. Dis. 35, 15–16.
Katsogridakis, E., Bush, G., Fan, L., Birch, A. A., Simpson, D. M., Allen, R., et al. (2013). Detection of impaired cerebral autoregulation improves by increasing arterial blood pressure variability. J. Cereb. Blood Flow Metab. 33, 519–523. doi: 10.1038/jcbfm.2012.191
Kostoglou, K., Debert, C. T., Poulin, M. J., and Mitsis, G. D. (2014). Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia. Med. Eng. Phys. 36, 592–600. doi: 10.1016/j.medengphy.2013.10.011
Liu, J., Simpson, D. M., and Allen, R. (2005). High spontaneous fluctuation in arterial blood pressure improves the assessment of cerebral autoregulation. Physiol. Meas. 26, 725–741. doi: 10.1088/0967-3334/26/5/012
Liu, Y., and Allen, R. (2002). Analysis of dynamic cerebral autoregulation using an ARX model based on arterial blood pressure and middle cerebral artery velocity simulation. Med. Biol. Eng. Comput. 40, 600–605. doi: 10.1007/bf02345461
Liu, Y., Birch, A. A., and Allen, R. (2003). Dynamic cerebral autoregulation assessment using an ARX model: comparative study using step response and phase shift analysis. Med. Eng. Phys. 25, 647–653. doi: 10.1016/s1350-4533(03)00015-8
Ma, H., Guo, Z. N., Liu, J., Xing, Y., Zhao, R., and Yang, Y. (2016). Temporal course of dynamic cerebral autoregulation in patients with intracerebral hemorrhage. Stroke 47, 674–681. doi: 10.1161/STROKEAHA.115.011453
Marmarelis, V. Z. (2004). Nonlinear Dynamic Modeling of Physiological Systems. New Jersey, NJ: Wiley-Interscience.
Marmarelis, V. Z., Shin, D. C., Orme, M., and Rong, Z. (2014a). Time-varying modeling of cerebral hemodynamics. IEEE Trans. Biomed. Eng. 61, 694–704. doi: 10.1109/TBME.2013.2287120
Marmarelis, V. Z., Shin, D. C., Orme, M. E., and Zhang, R. (2014b). Model-based physiomarkers of cerebral hemodynamics in patients with mild cognitive impairment. Med. Eng. Phys. 36, 628–637. doi: 10.1016/j.medengphy.2014.02.025
Marmarelis, V. Z., Shin, D. C., Orme, M. E., and Zhang, R. (2013). Model-based quantification of cerebral hemodynamics as a physiomarker for Alzheimer’s disease? Ann. Biomed. Eng. 41, 2296–2317. doi: 10.1007/s10439-013-0837-z
Meel-van den Abeelen, A. S., Simpson, D. M., Wang, L. J., Slump, C. H., Zhang, R., Tarumi, T., et al. (2014). Between-centre variability in transfer function analysis, a widely used method for linear quantification of the dynamic pressure-flow relation: the CARNet study. Med. Eng. Phys. 36, 620–627. doi: 10.1016/j.medengphy.2014.02.002
Mitsis, G. D., Poulin, M. J., Robbins, P. A., and Marmarelis, V. Z. (2004). Nonlinear modeling of the dynamic effects of arterial pressure and CO2 variations on cerebral blood flow in healthy humans. IEEE Trans. Biomed. Eng. 51, 1932–1943. doi: 10.1109/tbme.2004.834272
Mitsis, G. D., Zhang, R., Levine, B. D., and Marmarelis, V. Z. (2002). Modeling of nonlinear physiological systems with fast and slow dynamics. II. Application to cerebral autoregulation. Ann. Biomed. Eng. 30, 555–565. doi: 10.1114/1.1477448
Mitsis, G. D., Zhang, R., Levine, B. D., Tzanalaridou, E., Katritsis, D. G., and Marmarelis, V. Z. (2009). Autonomic neural control of cerebral hemodynamics. IEEE Eng. Med. Biol. Mag. 28, 54–62. doi: 10.1109/MEMB.2009.934908
Muller, M., Bianchi, O., Erulku, S., Stock, C., Schwerdtfeger, K., Homburg, G., et al. (2003). Changes in linear dynamics of cerebrovascular system after severe traumatic brain injury. Stroke 34, 1197–1202. doi: 10.1161/01.str.0000068409.81859.c5
Muller, M. W., and Osterreich, M. (2014). A comparison of dynamic cerebral autoregulation across changes in cerebral blood flow velocity for 200 s. Front. Physiol. 5:327. doi: 10.3389/fphys.2014.00327
Panerai, R. B. (2008). Cerebral autoregulation: from models to clinical applications. Cardiovasc. Eng. 8, 42–59. doi: 10.1007/s10558-007-9044-6
Panerai, R. B. (2014). Nonstationarity of dynamic cerebral autoregulation. Med. Eng. Phys. 36, 576–584. doi: 10.1016/j.medengphy.2013.09.004
Panerai, R. B., Eames, P. J., and Potter, J. F. (2003). Variability of time-domain indices of dynamic cerebral autoregulation. Physiol. Meas. 24, 367–381. doi: 10.1088/0967-3334/24/2/312
Panerai, R. B., Rennie, J. M., Kelsall, A. W., and Evans, D. H. (1998a). Frequency-domain analysis of cerebral autoregulation from spontaneous fluctuations in arterial blood pressure. Med. Biol. Eng. Comput. 36, 315–322. doi: 10.1007/bf02522477
Panerai, R. B., White, R. P., Markus, H. S., and Evans, D. H. (1998b). Grading of cerebral dynamic autoregulation from spontaneous fluctuations in arterial blood pressure. Stroke 29, 2341–2346. doi: 10.1161/01.str.29.11.2341
Panerai, R. B., Simpson, D. M., Deverson, S. T., Mahony, P., Hayes, P., and Evans, D. H. (2000). Multivariate dynamic analysis of cerebral blood flow regulation in humans. IEEE Trans. Biomed. Eng. 47, 419–423. doi: 10.1109/10.827312
Peng, T., Rowley, A. B., Ainslie, P. N., Poulin, M. J., and Payne, S. J. (2010). Wavelet phase synchronization analysis of cerebral blood flow autoregulation. IEEE Trans. Biomed. Eng. 57, 960–968. doi: 10.1109/TBME.2009.2024265
Reinhard, M., Muller, T., Guschlbauer, B., Timmer, J., and Hetzel, A. (2003). Transfer function analysis for clinical evaluation of dynamic cerebral autoregulation–a comparison between spontaneous and respiratory-induced oscillations. Physiol. Meas. 24, 27–43. doi: 10.1088/0967-3334/24/1/303
Salinet, A. S., Panerai, R. B., and Robinson, T. G. (2014). The longitudinal evolution of cerebral blood flow regulation after acute ischaemic stroke. Cerebrovasc. Dis. Extra. 4, 186–197. doi: 10.1159/000366017
Sanders, M. L., Claassen, J. A. H. R., Aries, M., Bor-Seng-Shu, E., Caicedo, A., Chacon, M., et al. (2018). Reproducibility of dynamic cerebral autoregulation parameters: a multi-centre, multi-method study. Physiol. Meas. 39:125002. doi: 10.1088/1361-6579/aae9fd
Simpson, D., and Claassen, J. (2018a). CrossTalk opposing view: dynamic cerebral autoregulation should be quantified using induced (rather than spontaneous) blood pressure fluctuations. J. Physiol. 596, 7–9. doi: 10.1113/jp273900
Simpson, D., and Claassen, J. (2018b). Rebuttal from David Simpson and Jurgen Claassen. J. Physiol. 596:13. doi: 10.1113/jp275041
Simpson, D. M., Panerai, R. B., Evans, D. H., and Naylor, A. R. (2001). A parametric approach to measuring cerebral blood flow autoregulation from spontaneous variations in blood pressure. Ann. Biomed. Eng. 29, 18–25. doi: 10.1114/1.1335537
Smirl, J. D., Hoffman, K., Tzeng, Y. C., Hansen, A., and Ainslie, P. N. (2015). Methodological comparison of active- and passive-driven oscillations in blood pressure; implications for the assessment of cerebral pressure-flow relationships. J. Appl. Physiol. 119, 487–501. doi: 10.1152/japplphysiol.00264.2015
Tiecks, F. P., Lam, A. M., Aaslid, R., and Newell, D. W. (1995). Comparison of static and dynamic cerebral autoregulation measurements. Stroke 26, 1014–1019. doi: 10.1161/01.str.26.6.1014
Torrence, C., and Webster, P. J. (1999). Interdecadal changes in the ENSO-monsoon system. J. Clim. 12, 2679–2690. doi: 10.1175/1520-0442(1999)012<2679:icitem>2.0.co;2
Tzeng, Y. C., and Panerai, R. B. (2018a). CrossTalk proposal: dynamic cerebral autoregulation should be quantified using spontaneous blood pressure fluctuations. J. Physiol. 596, 3–5. doi: 10.1113/jp273899
Tzeng, Y. C., and Panerai, R. B. (2018b). Rebuttal from Y. C. Tzeng and R. B. Panerai. J. Physiol. 596, 11–12. doi: 10.1113/jp275040
van Beek, A. H., Claassen, J. A., Rikkert, M. G., and Jansen, R. W. (2008). Cerebral autoregulation: an overview of current concepts and methodology with special focus on the elderly. J. Cereb. Blood Flow Metab. 28, 1071–1085. doi: 10.1038/jcbfm.2008.13
van Beek, A. H., Lagro, J., Olde-Rikkert, M. G., Zhang, R., and Claassen, J. A. (2012). Oscillations in cerebral blood flow and cortical oxygenation in Alzheimer’s disease. Neurobiol. Aging 33, e421–e431. doi: 10.1016/j.neurobiolaging.2010.11.016
van Beek, A. H., Olde Rikkert, M. G., Pasman, J. W., Hopman, M. T., and Claassen, J. A. (2010). Dynamic cerebral autoregulation in the old using a repeated sit-stand maneuver. Ultrasound Med. Biol. 36, 192–201. doi: 10.1016/j.ultrasmedbio.2009.10.011
Keywords: ARI index, cerebral blood flow, cerebral hemodynamics, transcranial Doppler, transfer function analysis
Citation: Sanders ML, Elting JWJ, Panerai RB, Aries M, Bor-Seng-Shu E, Caicedo A, Chacon M, Gommer ED, Van Huffel S, Jara JL, Kostoglou K, Mahdi A, Marmarelis VZ, Mitsis GD, Müller M, Nikolic D, Nogueira RC, Payne SJ, Puppo C, Shin DC, Simpson DM, Tarumi T, Yelicich B, Zhang R and Claassen JAHR (2019) Dynamic Cerebral Autoregulation Reproducibility Is Affected by Physiological Variability. Front. Physiol. 10:865. doi: 10.3389/fphys.2019.00865
Received: 22 March 2019; Accepted: 20 June 2019;
Published: 09 July 2019.
Edited by:
Yih-Kuen Jan, University of Illinois at Urbana–Champaign, United States
Reviewed by:
Jonathan David Smirl, The University of British Columbia, Canada
Xiuyun Liu, University of California, San Francisco, United States
Copyright © 2019 Sanders, Elting, Panerai, Aries, Bor-Seng-Shu, Caicedo, Chacon, Gommer, Van Huffel, Jara, Kostoglou, Mahdi, Marmarelis, Mitsis, Müller, Nikolic, Nogueira, Payne, Puppo, Shin, Simpson, Tarumi, Yelicich, Zhang and Claassen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jurgen A. H. R. Claassen, Jurgen.Claassen@radboudumc.nl