Historically, neuroscience research has focused almost exclusively on mean-based analyses to understand a wide variety of experimental conditions and groups. The mean is a measure of central tendency, taken to be representative of an individual’s performance on a given task or of the pattern of brain activation across a group. In contrast, measures of dispersion capture the spread of the data. Although measures of dispersion are not typically examined, doing so (e.g., computing the standard deviation) could lead to a better conceptualization of cognition and a deeper understanding of our data sets.
Indeed, the analysis of dispersion is gaining popularity, and a growing number of neuroscience studies have focused on understanding, rather than ignoring, variability. These studies have shown how variability in behavioural performance and in neuroimaging signals can illuminate differences between healthy and diseased groups, as well as brain function more generally. For instance, relationships have been found between variability and measures such as age and cognitive performance. Group differences in resting-state BOLD variability have also been detected between individuals with Alzheimer’s disease and healthy controls, using an approach with potential for clinical application. Clearly, there are pros to examining variability.
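To make the mean-versus-dispersion distinction concrete, the minimal sketch below (a Python/NumPy illustration, not tied to any particular study cited here) contrasts a conventional mean-based voxelwise summary with a dispersion-based one, using the per-voxel standard deviation of a hypothetical preprocessed resting-state BOLD time series as one common operationalization of BOLD variability.

```python
import numpy as np

# Minimal sketch: `bold` stands in for a preprocessed 4D resting-state
# fMRI array with shape (x, y, z, time). Random data is used here only
# so the example runs end to end.
rng = np.random.default_rng(0)
bold = rng.normal(size=(4, 4, 4, 200))

mean_map = bold.mean(axis=-1)           # mean-based summary per voxel (central tendency)
sd_map = bold.std(axis=-1, ddof=1)      # dispersion-based summary per voxel (BOLD variability)

print(mean_map.shape, sd_map.shape)     # both (4, 4, 4): one value per voxel
```

Group comparisons (e.g., patients versus controls) would then be run on `sd_map` in the same way they are conventionally run on `mean_map`.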
Conversely, many conclusions in neuroscience have been based on mean effects across large numbers of trials or participants. For example, analyses of electroencephalographic (EEG) data and event-related potentials (ERPs) rely largely on the calculation of grand means, with little to no attention paid to the variability of the signal within or between groups. Closer attention to dispersion could lead to a better understanding of EEG and ERP data and of the pros and cons associated with variability.
This Research Topic will focus on both the pros and cons of variability, calling for broad perspectives and contributions on how examining variability can lead to a better understanding of neuroscientific findings.
We invite submissions related to the pros and cons of the analysis of variability. We welcome original research articles and encourage authors to re-examine their datasets through the lens of variability rather than mean-based analyses. Subtopics of interest include, but are not limited to:
- fMRI BOLD variability
- EEG variability
- fNIRS variability
- Variability in cognitive data and the relationship with brain structure