- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
The study of the brain criticality hypothesis has been going on for about two decades; various models and methods have been developed for probing this field, together with a large body of controversial experimental findings. However, no standardized protocol of analysis has been established so far. Therefore, hoping to contribute to the standardization of such analysis, we review in this paper several available tools used for estimating the criticality of the brain.
1. Introduction
There are quite a few excellent reviews on the subject of brain criticality by now (see e.g., Wilting et al., 2018; Wilting and Priesemann, 2019a; Plenz et al., 2021; Zeraati et al., 2021). But the field is full of controversies and clearly lacks standardization. To make results more unified, standardizing the analysis tools should be a priority. Here, as a small part of such an attempt, we review several existing tools for estimating the brain's criticality.
The subject of brain criticality is closely related to neuronal avalanches. However, the definition of experimental avalanches varies across studies due to different recording techniques; a detailed illustration can be found in Girardi-Schappo (2021). Nevertheless, when dealing with discrete time series, a suitable bin size (see Levina and Priesemann, 2017) may be used to extract avalanches as long as there exists a clear separation of time scales, namely that the quiescent time between avalanches is much longer than the duration of any single avalanche. Theoretical models often bear an intrinsic separation of time scales, so defining their avalanches is relatively easy and should be standardized. Real experimental data are, however, much more complicated. For example, when the number of recorded units is very large, a clear separation of time scales is often unavailable, since in each time bin the probability that at least one recorded neuron is active can be extremely high, yielding a single avalanche that may never terminate. In practice, most studies define avalanches as sequences of consecutive time bins each containing at least one active site. Thus, much more work should be devoted to making the definition of avalanches standardized and applicable to different cases; until then, drawing any conclusion may never be fully convincing.
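The bin-and-threshold procedure just described is easy to state in code. The sketch below is a minimal illustration (function name is ours), assuming the recording has already been discretized into per-bin event counts; an avalanche is a maximal run of consecutive non-empty bins:

```python
import numpy as np

def extract_avalanches(counts):
    """Split a binned activity series into avalanches.

    An avalanche is a maximal run of consecutive time bins that each
    contain at least one event; runs are separated by empty bins.
    Returns (sizes, durations): total event count and bin count per run.
    """
    counts = np.asarray(counts)
    active = counts > 0
    # locate run boundaries: +1 where a run starts, -1 just after it ends
    edges = np.diff(np.concatenate(([0], active.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    sizes = np.array([counts[a:b].sum() for a, b in zip(starts, ends)])
    durations = ends - starts
    return sizes, durations

# toy series: two avalanches separated by empty bins
sizes, durations = extract_avalanches([0, 2, 1, 0, 0, 3, 0])
# sizes -> [3, 3], durations -> [2, 1]
```

Note that the result depends on the chosen bin size, which is exactly why the separation of time scales discussed above matters.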
Besides the definition of avalanches, the analysis of avalanches is of equal importance, so efforts should be taken to make this process as rigorous and standardized as possible. So far, many methods have been developed for such analysis, and we do not intend to exhaust them all. In the following sections, we review several important tools commonly used to test the criticality hypothesis in the brain, including power-law fitting, the crackling noise scaling relation, shape collapse, and the branching ratio, hoping to contribute to the standardization of avalanche analysis. Of course, a suitable definition of avalanches is presumed.
2. Power-Law Fitting
Power-law fitting is essential in most works searching for evidence of brain criticality, yet many did not apply rigorous tests to the fitting results. The common approach is maximum likelihood estimation (MLE), which has many desirable properties (see e.g., Casella and Berger, 2002). But simply fitting power-law distributions by plain MLE has several drawbacks (see e.g., Goldstein et al., 2004; Bauke, 2007; White et al., 2008; Clauset et al., 2009). Clauset et al. (2009) developed a more rigorous method for fitting untruncated power-law distributions by combining MLE with the Kolmogorov–Smirnov (KS) goodness-of-fit test, allowing statistical evaluation of the fitting results. Roughly, they derived a distribution of KS values from synthetic power-law surrogates and compared it to the KS value of the empirical data: the proportion of surrogates with KS values larger than that of the empirical data gives the p-value of the goodness-of-fit test, and a small p-value rejects the power-law hypothesis. Later, Deluca and Corral extended this work, making the fitting available for doubly truncated (continuous) power-law distributions (Deluca and Corral, 2013; see Corral and González, 2019 for an illustration).
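The surrogate-based goodness-of-fit test can be sketched as follows for a continuous, untruncated power law. This is a simplified illustration of the Clauset et al. (2009) recipe, with function names of our own; the full method additionally scans over the minimum cutoff, which is omitted here:

```python
import numpy as np

def fit_power_law(x, xmin):
    """Continuous MLE for P(x) ~ x^(-alpha), x >= xmin (Clauset et al., 2009)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    return alpha, x

def ks_distance(x, alpha, xmin):
    """KS distance between the data and the fitted power-law CDF."""
    x = np.sort(x)
    cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_data = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(cdf_data - cdf_model))

def ks_p_value(x, xmin, n_surrogates=200, rng=None):
    """Goodness-of-fit p-value from synthetic power-law surrogates.

    p is the fraction of surrogates whose KS distance (to their own refitted
    model) is at least that of the data; a small p rejects the power law.
    """
    rng = np.random.default_rng(rng)
    alpha, x = fit_power_law(x, xmin)
    d_data = ks_distance(x, alpha, xmin)
    worse = 0
    for _ in range(n_surrogates):
        u = rng.random(len(x))
        synth = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))  # inverse CDF
        a_s, _ = fit_power_law(synth, xmin)
        if ks_distance(synth, a_s, xmin) >= d_data:
            worse += 1
    return alpha, worse / n_surrogates
```

For doubly truncated or discrete data, the adaptations of Deluca and Corral (2013) and the NCC toolbox discussed below should be used instead of this bare-bones version.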
Marshall et al. (2016) developed the Matlab Neural Complexity and Criticality (NCC) Toolbox for neuronal avalanche analysis, which includes an automated MLE fitting routine based on the method of Deluca and Corral, where the fitting range (i.e., from a minimum cutoff to a maximum cutoff) is found automatically instead of being manually determined; detailed illustrations can be found in their paper. They extended previously used MLE techniques to doubly truncated discrete power-law distributions, which are more appropriate for empirical data whose maximum cutoffs are typically caused by finite-size effects.
But as pointed out by Corral and González (2019), this method has two important drawbacks: it lacks an estimate of the uncertainty of the minimum and maximum cutoffs, and it underestimates the uncertainty in the fitted power-law exponent. They provided a solution to these problems: take bootstrap resamples of the original data (Good, 2006) and repeat the fitting procedure on them to obtain distributions of the cutoffs and the power-law exponent, from which the uncertainties may be estimated. In the meantime, studying the dependence of the fitted power-law exponent on both cutoffs should be a good complement (Baró and Vives, 2012). Thanks to Serra-Peralta et al. (2022), a Python version of this fitting method is available, and the source code can be found on GitHub.
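The bootstrap complement suggested by Corral and González (2019) is simple to sketch. The version below (our own minimal illustration) only bootstraps the exponent at a fixed minimum cutoff; their full procedure also resamples the cutoffs themselves:

```python
import numpy as np

def bootstrap_alpha(x, xmin, n_boot=1000, rng=None):
    """Bootstrap the MLE power-law exponent to estimate its uncertainty:
    refit on resamples drawn with replacement from the original data."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    alphas = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(x, size=len(x), replace=True)
        alphas[i] = 1.0 + len(sample) / np.sum(np.log(sample / xmin))
    return alphas.mean(), alphas.std()
```

The spread of the bootstrap distribution gives a more honest error bar on the exponent than the plain MLE standard error when the fitting range itself is uncertain.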
Another available tool is provided by Destexhe and Touboul (2021a), who used a similar MLE-based method for fitting power laws truncated at a minimum cutoff. The fit was validated by the Akaike information criterion (AIC, see Akaike, 1974), which uses the maximized likelihood value L of a model with k free parameters,

AIC = 2k − 2 ln L,

and the model with the smaller AIC value is a relatively better fit for a given data set. However, when using this method for doubly truncated data, a maximum cutoff needs to be specified manually. Nevertheless, utilizing the AIC principle may be a good complement to the methods discussed above.
Very similarly to the AIC principle, Alstott et al. (2014) tested whether the power-law distribution is the best descriptor of the data compared to alternative heavy-tailed distributions, e.g., the exponential or log-normal distribution, by calculating the log-likelihood ratio (LLR). The underlying reason for doing LLR tests is that real-world systems contain all kinds of noise, so few empirical phenomena can be expected to follow a power law with the perfection of a theoretical distribution, especially in the large-sample regime, where even small deviations from a perfect power law would lead to rejection of the power-law hypothesis by the fitting methods discussed above. There is an existing toolbox for doing LLR tests, the powerlaw Python package, whose description and sources can be found in Alstott et al. (2014).
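A minimal, self-contained version of such a likelihood-ratio comparison (power law vs. lognormal, with Vuong's normal approximation for the p-value, cf. Clauset et al., 2009) might look as follows. This is our own simplified sketch: the lognormal fit is unconstrained rather than truncated at the cutoff, and in practice the powerlaw package's ready-made routines should be preferred:

```python
import numpy as np
from scipy import stats

def llr_test(x, xmin):
    """Log-likelihood ratio test of power law vs. lognormal for x >= xmin.

    Positive R favors the power law; the p-value (Vuong's normal
    approximation) indicates whether the sign of R is statistically reliable.
    """
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = len(x)
    # MLE power law: p(x) = (alpha - 1)/xmin * (x/xmin)^(-alpha)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    ll_pl = np.log(alpha - 1.0) - np.log(xmin) - alpha * np.log(x / xmin)
    # MLE lognormal (unconstrained; a truncated fit would be more rigorous)
    mu, sigma = np.log(x).mean(), np.log(x).std()
    ll_ln = stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu))
    diff = ll_pl - ll_ln
    R = diff.sum()
    p = 2.0 * stats.norm.sf(abs(R) / (np.sqrt(n) * diff.std()))
    return R, p
```

With the powerlaw package itself, the equivalent comparison is a one-liner over fitted distributions, and it also handles truncation and discrete data.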
Additionally, before any fitting, a sufficiently large number of avalanches should be collected to make the results more accurate, since small samples may cause strong biases. Meanwhile, previewing the data before fitting helps in choosing a proper fitting procedure. For example, while the NCC toolbox provides a method to automatically find the exponent and the fitting range, it may take an unexpectedly long time to find the range, and when it finally finishes, it may turn out that only a very small part of the data was fitted. Such circumstances may be avoided if the data are plotted first to see how they depart from a pure power-law distribution. If the results produced by these tools are not satisfying, manually setting the fitting range, though arguably subjective, is always an option; this is the approach used in Carvalho et al. (2021) and many other studies.
We highly recommend that future research make the best use of these freely available advanced tools for power-law fitting of both empirical data and simulation results, in the hope of much more statistically rigorous and unified results in the future.
3. Crackling Noise Scaling Relation
Recent studies (see e.g., Ponce-Alvarez et al., 2018; Fontenele et al., 2019; Carvalho et al., 2021; Fosque et al., 2021) have applied the crackling noise scaling relation (Muñoz et al., 1999; Sethna et al., 2001) to estimate the criticality of neural systems, though criticisms of this approach exist (see Destexhe and Touboul, 2021b). Nevertheless, the scaling relations observed in the spiking activity of mammalian primary visual cortex in both anesthetized (rats and monkeys) and freely moving animals (mice) showed striking consistency in a specific activity regime (Fontenele et al., 2019), and the work of Ponce-Alvarez et al. (2018) showed consistent crackling noise scaling relations in whole-brain neuronal activity of zebrafish larvae, making the scaling relation itself interesting enough for further investigation. When using the crackling noise scaling relation, many studies refer to Muñoz et al. (1999) and Sethna et al. (2001), where methods for its derivation were proposed but the relation itself was not explicitly written out or derived rigorously. A simple derivation is proposed in Girardi-Schappo (2021), but the author appears to presume a relation between the size and duration of avalanches without explanation. Here, we briefly illustrate the derivation of the crackling noise scaling relation based on the work of Sethna et al. (2005). The probability function of having an avalanche of size S, duration T, long axis L, and short axis W, at disorder R and external field H, has the following scaling form (see Equation 1 in Sethna et al., 2005; for reference to scaling functions see e.g., Henkel et al., 2008):
P(S, T, L, W | R, H) = D_s S^(−τ) F(S^σ r, h/r^(βδ), T/S^(σνz), L/S^(σν), W/L),     (1)

where τ, σ, ν, β, δ, and z are critical exponents, r and h are the reduced disorder and reduced external field, F is a scaling function, and D_s is a normalization factor. Throwing out non-universal factors and setting the external field to its critical value, we can write (1) in a more simplified asymptotic scaling form (close to the critical point):

P(S, T) ∼ S^(−(τ+σνz)) G(T/S^(σνz)).     (2)

From (2), we can get the scaling forms for the avalanche sizes,

P(S) ∼ S^(−τ) G_S(S/S_c),     (3)

and durations as well,

P(T) ∼ T^(−α) G_T(T/T_c),     (4)

where G, G_S, and G_T are scaling functions, S_c and T_c are cutoff scales, and α is the avalanche duration exponent. Meanwhile, given an avalanche duration T, we have the following average avalanche size:

⟨S⟩(T) ∼ T^(1/(σνz)) D′(T/T_c),     (5)

where again D′ is some scaling function. Combining (3)−(5), we get the crackling noise scaling relation:

(α − 1)/(τ − 1) = 1/(σνz).     (6)
Whether the above equation holds may be tested using the two-sample t-test (see e.g., Ponce-Alvarez et al., 2018; Destexhe and Touboul, 2021b), a statistical method for testing whether the unknown population means of two groups are equal. Obviously, such a test relies on the estimated power-law exponents, so care must be taken when drawing conclusions from its results.
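As a toy illustration of such a consistency check, the sketch below compares the exponent predicted by the scaling relation with an independently fitted ⟨S⟩(T) exponent via a t statistic on bootstrap estimates. This is a hypothetical interface of our own, not taken from any published toolbox:

```python
import numpy as np
from scipy import stats

def crackling_check(tau, tau_sd, alpha, alpha_sd, gamma, gamma_sd, n):
    """Compare the exponent predicted by the crackling noise relation,
    (alpha - 1)/(tau - 1), with gamma fitted directly from <S>(T) ~ T^gamma.

    `n` is the number of (e.g., bootstrap) estimates behind each standard
    deviation; a Welch-style t statistic gives the two-sided p-value.
    """
    pred = (alpha - 1.0) / (tau - 1.0)
    # first-order uncertainty propagation for the ratio
    pred_sd = pred * np.sqrt((alpha_sd / (alpha - 1.0)) ** 2
                             + (tau_sd / (tau - 1.0)) ** 2)
    t = (pred - gamma) / np.sqrt(pred_sd ** 2 / n + gamma_sd ** 2 / n)
    p = 2.0 * stats.t.sf(abs(t), df=2 * n - 2)
    return pred, p

# example: exponents consistent with the relation should give a large p
pred, p = crackling_check(tau=1.6, tau_sd=0.05, alpha=1.8, alpha_sd=0.05,
                          gamma=1.33, gamma_sd=0.05, n=100)
```

A large p-value here only means the relation is not rejected at the given precision; as stressed above, it depends entirely on the quality of the underlying exponent fits.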
Note also that the exponents can be obtained using different methods, and they should be consistent across all reliable methods. Taking the exponent 1/(σνz) from relation (5) as an example, Ponce-Alvarez et al. (2018) showed that the power spectral density (PSD) of the time courses of neuronal avalanches also decays with this exponent; thus one can estimate it from relation (5), from the decay of the avalanche PSD, and through other possible routes such as the scaling shape collapse discussed in the next section. The estimated values should be consistent across these different approaches, and the same applies to the other exponents.
Another point is that the scaling relation reveals certain spatiotemporal statistics of the data; for the avalanche analysis to uncover the true spatiotemporal statistics of the system, results should be compared to surrogate datasets to avoid false conclusions. For instance, one can use randomized surrogates that destroy the correlations, or time-shuffled surrogates that destroy the temporal organization of the data while preserving the spatial correlations (see e.g., Ponce-Alvarez et al., 2018).
4. Scaling Shape Collapse
Apart from power-law distributions and scaling relations, the existence of universal scaling functions which capture the dynamics of systems at different scales can be used as a further signature of criticality.
Universal scaling functions are one of the most important predictions of universality (see Sethna et al., 2001). The time history of the average size over all avalanches of fixed duration T, denoted ⟨V⟩(T, t), is one example of a universal scaling function and has the following scaling form:

⟨V⟩(T, t) = T^b f(t/T),
where b is some scaling exponent. If we plot T^(−b)⟨V⟩(T, t) vs. the scaled time t/T, we should observe what is called a scaling shape collapse, a simple way to check the scaling hypothesis and measure the universal scaling function. However, quantifying the quality of shape collapses is difficult, and judging it by eye from plotted results can be very subjective. In their NCC toolbox, Marshall et al. (2016) included a shape collapse algorithm that does not try to determine whether a particular data set exhibits shape collapse, but instead automatically finds the scaling parameter that produces the best possible collapse. Specifically, it linearly interpolates each avalanche at a number of points along the scaled duration t/T and then calculates the variance among all the avalanche profiles at the interpolated points. The fitted scaling exponent is the one minimizing this collapse error over a range of candidate exponent values, so the whole analysis can be quantified; for details see Marshall et al. (2016). Compared to earlier shape collapse analyses (see e.g., Friedman et al., 2012), this method takes many more avalanches into account and is automated and quantitative, though it cannot determine whether a given data set exhibits shape collapse.
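The core of such an automated collapse can be sketched in a few lines. This is a simplified illustration in the spirit of the NCC toolbox algorithm, not its actual implementation; function names and the error definition are ours:

```python
import numpy as np

def collapse_error(avalanches, b, n_points=50):
    """Mean variance across rescaled avalanche profiles sampled on a common
    scaled-time grid: the collapse error to be minimized over b."""
    grid = np.linspace(0.0, 1.0, n_points)
    profiles = []
    for v in avalanches:
        T = len(v)
        t = np.linspace(0.0, 1.0, T)            # scaled time t/T
        profiles.append(np.interp(grid, t, v) / T ** b)
    return np.array(profiles).var(axis=0).mean()

def best_collapse(avalanches, b_values):
    """Scan candidate exponents and return the one with the smallest error."""
    errors = [collapse_error(avalanches, b) for b in b_values]
    return b_values[int(np.argmin(errors))]

# synthetic check: parabolic profiles V(T, t) = T^0.7 * 4 s (1 - s), s = t/T
avs = []
for T in (20, 40, 80, 160):
    s = np.linspace(0.0, 1.0, T)
    avs.append(T ** 0.7 * 4.0 * s * (1.0 - s))
b_hat = best_collapse(avs, np.linspace(0.3, 1.1, 81))
# b_hat should be close to the true exponent 0.7
```

On real data, the recovered b should then be checked for consistency with the exponent 1/(σνz) − 1 expected from the average-size scaling, as discussed in the previous section.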
Attempts to directly quantify the quality of shape collapses were made by Shaukat and Thivierge (2016), who proposed a sophisticated method based on functional data analysis. Their method first smooths avalanches with a Fourier basis, then rescales them using a time-warping function, and finally employs an F-test combined with a bootstrap permutation to determine whether avalanches collapse together in a statistically reliable way. Although seemingly more statistically rigorous, this method does not appear able to measure the scaling parameter or the universal scaling function. The analysis of scaling shape collapse therefore clearly requires further research; for now, these existing well-established methods should be used when analyzing shape collapse to avoid subjectivity.
5. Branching Ratio
Apart from studying distribution functions of avalanches, analyzing the branching ratio of events, which describes activity propagation, is a complementary approach to testing whether a system is critical, where the branching ratio is defined as the average number of descendants per ancestor (de Carvalho and Prado, 2000; for reference to branching processes see Harris, 1963). A branching ratio smaller than, equal to, or larger than 1 implies a subcritical, critical, or supercritical system, respectively. The foundational work of Beggs and Plenz (2003) showed that the branching ratio of their experimental data, defined as the ratio of the average number of electrodes activated in two consecutive time bins, was close to 1, indicating a critical branching process as the mechanism behind the power-law distributions in cortical networks. Such a fascinating scenario inspired a great amount of work in the field, and many studies have applied the branching ratio as one way of defining the criticality of neural systems (see e.g., Haldeman and Beggs, 2005; Beggs, 2008; Williams-Garcia et al., 2014; Timme et al., 2016).
However, the conventional estimator of the branching ratio based on linear regression can be strongly biased under subsampling (Wilting and Priesemann, 2018), which unfortunately is the typical situation in real experiments. Fortunately, Wilting and Priesemann (2018) derived a novel estimator (the MR estimator) based on multistep regression, which was analytically proved to be consistent under subsampling. They showed that the branching ratio was close to but slightly smaller than 1 and proposed that the cortex may operate in a reverberating regime. Shortly afterwards, Wilting et al. (2018) proposed that operating in a reverberating regime may enable the cortex to rapidly tune itself to various task requirements. This opinion is very interesting, bears its own reasoning, and makes the estimation of the branching ratio worth further consideration in future research.
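The idea behind the multistep-regression estimator can be sketched as follows. This is a simplified illustration of our own, not the actual MR estimator: we use a log-linear fit to the lag-k regression slopes, whereas the published method uses a nonlinear exponential fit and handles several further complications:

```python
import numpy as np

def mr_estimate(activity, k_max=20):
    """Multistep-regression branching-ratio sketch.

    The lag-k linear-regression slope of A_{t+k} on A_t decays as
    r_k = c * m^k; under subsampling only the prefactor c is biased,
    so m can be recovered from the decay of the slopes across lags.
    """
    a = np.asarray(activity, dtype=float)
    ks = np.arange(1, k_max + 1)
    slopes = np.empty(k_max)
    for i, k in enumerate(ks):
        x, y = a[:-k], a[k:]
        slopes[i] = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    # fit log r_k = log c + k log m (valid while the slopes stay positive)
    coef = np.polyfit(ks, np.log(slopes), 1)
    return np.exp(coef[0])

# sketch check on a driven branching process: A_{t+1} ~ Poisson(m A_t + h)
rng = np.random.default_rng(1)
m_true, h = 0.9, 10.0
a, series = 100, []
for _ in range(20000):
    a = rng.poisson(m_true * a + h)
    series.append(a)
m_hat = mr_estimate(series)
# m_hat should be close to m_true = 0.9
```

Under full sampling the lag-k slopes equal m^k; under subsampling they are scaled by a constant factor, which the exponential fit absorbs, and this is precisely why the estimator remains consistent. The toolbox of Spitzner et al. (2021) provides a complete implementation with confidence intervals.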
A Python toolbox implementing this estimator has been developed by Spitzner et al. (2021), and future research estimating the branching ratio should utilize such novel tools as much as possible.
6. Discussion
Current opinions on the subject of brain criticality vary among researchers; it remains unclear whether the brain possesses self-organized criticality, quasicriticality, supercriticality, or subcriticality, whether it operates in a reverberating regime, whether it has many critical states, or whether it is so complicated that its dynamics cannot be well explained by any existing theory (for reviews see e.g., Priesemann, 2014; Williams-Garcia et al., 2014; Costa et al., 2017; Wilting et al., 2018; Wilting and Priesemann, 2019b; Fosque et al., 2021; Gross, 2021; Plenz et al., 2021; Zeraati et al., 2021). Nevertheless, probing the underlying mechanisms of complex brain dynamics is intriguing and, needless to say, important. Several tools serving such an attempt have been reviewed in this article. But as pointed out by Destexhe and Touboul (2021b), there is currently no clear evidence that the crackling noise scaling relation is a sufficient or necessary condition for criticality of the brain, and the same holds for the other methods used to determine the brain's criticality. Future research should try to utilize the available statistically reliable tools for analyzing both experimental and theoretical results so as to make them less controversial and more rigorous. Meanwhile, though full of challenges, developing new, more advanced, reliable tools is essential to eventually reaching unified and convincing conclusions.
Author Contributions
The author confirms being the sole contributor to this work and has approved it for publication.
Funding
CY was supported by the National Natural Science Foundation of China (11671354).
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
I thank Prof. Jian Zhai for bringing the subject to attention and providing suggestions.
References
Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Autom. Control 19, 716–723. doi: 10.1109/TAC.1974.1100705
Alstott, J., Bullmore, E., and Plenz, D. (2014). Powerlaw: a python package for analysis of heavy-tailed distributions. PLoS ONE 9:e85777. doi: 10.1371/journal.pone.0085777
Baró, J., and Vives, E. (2012). Analysis of power-law exponents by maximum-likelihood maps. Phys. Rev. E 85:066121. doi: 10.1103/PhysRevE.85.066121
Bauke, H. (2007). Parameter estimation for power-law distributions by maximum likelihood methods. Eur. Phys. J. B 58, 167–173. doi: 10.1140/epjb/e2007-00219-y
Beggs, J. (2008). The criticality hypothesis: how local cortical networks might optimize information processing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 366, 329–343. doi: 10.1098/rsta.2007.2092
Beggs, J., and Plenz, D. (2003). Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177. doi: 10.1523/JNEUROSCI.23-35-11167.2003
Carvalho, T. T. A., Fontenele, A. J., Girardi-Schappo, M., Feliciano, T., Aguiar, L. A. A., de Silva, T. P. L., et al. (2021). Subsampled directed-percolation models explain scaling relations experimentally observed in the brain. Front. Neural Circuits 14:576727. doi: 10.3389/fncir.2020.576727
Casella, G., and Berger, R. L. (2002). “Statistical inference,” in Thomson Learning, 2nd Edn (Pacific Grove, CA: Duxbury Press).
Clauset, A., Shalizi, C. R., and Newman, M. E. J. (2009). Power-law distributions in empirical data. SIAM Rev. 51, 661–703. doi: 10.1137/070710111
Corral, Á., and González, Á. (2019). Power law size distributions in geoscience revisited. Earth Space Sci. 6, 673–697. doi: 10.1029/2018EA000479
Costa, A., Brochini, L., and Kinouchi, O. (2017). Self-organized supercriticality and oscillations in networks of stochastic spiking neurons. Entropy 19:399. doi: 10.3390/e19080399
de Carvalho, J. X., and Prado, C. P. C. (2000). Self-organized criticality in the Olami-Feder-Christensen model. Phys. Rev. Lett. 84, 4006–4009. doi: 10.1103/PhysRevLett.84.4006
Deluca, A., and Corral, Á. (2013). Fitting and goodness-of-fit test of non-truncated and truncated power-law distributions. Acta Geophys. 61, 1351–1394. doi: 10.2478/s11600-013-0154-9
Destexhe, A., and Touboul, J. D. (2021b). Is there sufficient evidence for criticality in cortical systems? eneuro 8. doi: 10.1523/ENEURO.0551-20.2021
Destexhe, A., and Touboul, J. (2021a). Matlab Code to Simulate Neuronal Avalanches in Networks of Neurons Away From Criticality. Zenodo.
Fontenele, A. J., de Vasconcelos, N. A. P., Feliciano, T., Aguiar, L. A. A., Soares-Cunha, C., Coimbra, B., et al. (2019). Criticality between cortical states. Phys. Rev. Lett. 122:208101. doi: 10.1103/PhysRevLett.122.208101
Fosque, L. J., Williams-García, R. V., Beggs, J. M., and Ortiz, G. (2021). Evidence for quasicritical brain dynamics. Phys. Rev. Lett. 126:098101. doi: 10.1103/PhysRevLett.126.098101
Friedman, N., Ito, S., Brinkman, B. A. W., Shimono, M., DeVille, R. E. L., Dahmen, K. A., et al. (2012). Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 108:208102. doi: 10.1103/PhysRevLett.108.208102
Girardi-Schappo, M. (2021). Brain criticality beyond avalanches: open problems and how to approach them. J. Phys. Complex. 2:031003. doi: 10.1088/2632-072X/ac2071
Goldstein, M. L., Morris, S. A., and Yen, G. G. (2004). Problems with fitting to the power-law distribution. Eur. Phys. J. B 41, 255–258. doi: 10.1140/epjb/e2004-00316-5
Good, P. (2006). Resampling Methods: A Practical Guide to Data Analysis, 3rd Edn. Boston, MA: Birkhäuser Boston Springer e-books.
Gross, T. (2021). Not one, but many critical states: a dynamical systems perspective. Front. Neural Circuits 15:614268. doi: 10.3389/fncir.2021.614268
Haldeman, C., and Beggs, J. M. (2005). Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys. Rev. Lett. 94:058101. doi: 10.1103/PhysRevLett.94.058101
Harris, T. E. (1963). The Theory of Branching Processes, Vol. 6. Berlin: Springer. doi: 10.1007/978-3-642-51866-9
Henkel, M., Hinrichsen, H., Lübeck, S., and Pleimling, M. (2008). “Non-equilibrium phase transitions,” in Theoretical and Mathematical Physics (Berlin: Springer) Vol. 1.
Levina, A., and Priesemann, V. (2017). Subsampling scaling. Nat. Commun. 8:15140. doi: 10.1038/ncomms15140
Marshall, N., Timme, N. M., Bennett, N., Ripp, M., Lautzenhiser, E., and Beggs, J. M. (2016). Analysis of power laws, shape collapses, and neural complexity: new techniques and MATLAB support via the NCC toolbox. Front. Physiol. 7:250. doi: 10.3389/fphys.2016.00250
Muñoz, M. A., Dickman, R., Vespignani, A., and Zapperi, S. (1999). Avalanche and spreading exponents in systems with absorbing states. Phys. Rev. E 59, 6175–6179. doi: 10.1103/PhysRevE.59.6175
Plenz, D., Ribeiro, T., Miller, S., Kells, P., Vakili, A., and Capek, E. (2021). Self-organized criticality in the brain. Front. Phys. 9:389. doi: 10.3389/fphy.2021.639389
Ponce-Alvarez, A., Jouary, A., Privat, M., Deco, G., and Sumbre, G. (2018). Whole-brain neuronal activity displays crackling noise dynamics. Neuron 100, 1446–1459.e6. doi: 10.1016/j.neuron.2018.10.045
Priesemann, V. (2014). Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front. Syst. Neurosci. 8:108. doi: 10.3389/fnsys.2014.00108
Serra-Peralta, M., Serrá, J., and Corral, Á. (2022). Lognormals, power laws and double power laws in the distribution of frequencies of harmonic codewords from classical music. Sci. Rep. 12:2615. doi: 10.1038/s41598-022-06137-3
Sethna, J. P., Dahmen, K. A., and Myers, C. R. (2001). Crackling noise. Nature 410, 242–250. doi: 10.1038/35065675
Sethna, J. P., Dahmen, K. A., and Perkovic, O. (2005). Random-field Ising models of hysteresis. arXiv:cond-mat/0406320. doi: 10.1016/B978-012480874-4/50013-0
Shaukat, A., and Thivierge, J. (2016). Statistical evaluation of waveform collapse reveals scale-free properties of neuronal avalanches. Front. Comput. Neurosci. 10:29. doi: 10.3389/fncom.2016.00029
Spitzner, F. P., Dehning, J., Wilting, J., Hagemann, A., P. Neto, J., Zierenberg, J., et al. (2021). MR. Estimator, a toolbox to determine intrinsic timescales from subsampled spiking activity. PLoS ONE 16:e0249447. doi: 10.1371/journal.pone.0249447
Timme, N. M., Marshall, N. J., Bennett, N., Ripp, M., Lautzenhiser, E., and Beggs, J. M. (2016). Criticality maximizes complexity in neural tissue. Front. Physiol. 7:425. doi: 10.3389/fphys.2016.00425
White, E. P., Enquist, B. J., and Green, J. L. (2008). On estimating the exponent of power-law frequency distributions. Ecology 89, 905–912. doi: 10.1890/07-1288.1
Williams-Garcia, R., Moore, M., Beggs, J., and Ortiz, G. (2014). Quasicritical brain dynamics on a nonequilibrium Widom line. Phys. Rev. E 90:062714. doi: 10.1103/PhysRevE.90.062714
Wilting, J., Dehning, J., Pinheiro Neto, J., Rudelt, L., Wibral, M., Zierenberg, J., et al. (2018). Operating in a reverberating regime enables rapid tuning of network states to task requirements. Front. Syst. Neurosci. 12:55. doi: 10.3389/fnsys.2018.00055
Wilting, J., and Priesemann, V. (2018). Inferring collective dynamical states from widely unobserved systems. Nat. Commun. 9:2325. doi: 10.1038/s41467-018-04725-4
Wilting, J., and Priesemann, V. (2019a). 25 years of criticality in neuroscience - established results, open controversies, novel concepts. Curr. Opin. Neurobiol. 58, 105–111. doi: 10.1016/j.conb.2019.08.002
Wilting, J., and Priesemann, V. (2019b). Between perfectly critical and fully irregular: a reverberating model captures and predicts cortical spike propagation. Cereb. Cortex 29, 2759–2770. doi: 10.1093/cercor/bhz049
Keywords: brain criticality, neuronal avalanche, power-law, scaling relation, branching ratio, statistical tests, standardization
Citation: Yu C (2022) Toward a Unified Analysis of the Brain Criticality Hypothesis: Reviewing Several Available Tools. Front. Neural Circuits 16:911245. doi: 10.3389/fncir.2022.911245
Received: 02 April 2022; Accepted: 19 April 2022;
Published: 20 May 2022.
Edited by: Germán Sumbre, École Normale Supérieure, France
Reviewed by: Adrián Ponce-Alvarez, Pompeu Fabra University, Spain
Copyright © 2022 Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Chaojun Yu, 11835017@zju.edu.cn