The human brain is one of our longest-standing mysteries, perplexing people in all corners of the world. Yet the clues to solving this international enigma are themselves internationally dispersed. New efforts in collaborative neuroscience aim to pool not just these clues, but also the researchers finding them. Large-scale consortia such as Enhancing Neuro Imaging Genetics through Meta Analysis (ENIGMA), Cohorts for Heart and Aging Research in Genetic Epidemiology (CHARGE), and others pool inferences from neuroimaging and genetic data collected at sites worldwide, providing greater statistical power to answer challenging questions. Key contributors to these global efforts are already multisite efforts themselves, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Autism Brain Imaging Data Exchange (ABIDE), and the 1000 Functional Connectomes Project, among many others; these initiatives are generating vast datasets for rich data-driven discovery, empowering global efforts and providing critical means for reproducible science.
These national and international efforts have also built an expanding network of scientists and medical professionals who are successfully collaborating to piece together how the brain is altered in disease, how it changes throughout life, and which genetic and environmental factors promote its well-being or pose a risk to brain health. In some cases (e.g., ENIGMA), individual investigators can even contribute to large-scale studies of global trends and effects without ever parting with their data.
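To make this concrete: in the summary-statistics approach used by such consortia, each site fits the same pre-specified model locally and shares only effect sizes and standard errors, which a coordinating team then pools. Below is a minimal sketch of one standard pooling step, an inverse-variance-weighted fixed-effects meta-analysis; the site names and values are purely hypothetical.

```python
# Minimal sketch: pooling site-level results without any subject-level data.
# Each site reports only (effect size, standard error); values are hypothetical.
import numpy as np

site_results = {
    "site_A": (0.42, 0.11),
    "site_B": (0.35, 0.09),
    "site_C": (0.51, 0.15),
}

betas = np.array([b for b, _ in site_results.values()])
ses = np.array([se for _, se in site_results.values()])

weights = 1.0 / ses**2                     # inverse-variance weights
pooled_beta = (weights * betas).sum() / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())
z = pooled_beta / pooled_se

print(f"pooled effect = {pooled_beta:.3f} +/- {pooled_se:.3f} (z = {z:.2f})")
```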
While these efforts can be extremely fruitful, new challenges arise when combining data collected across multiple sites, both in prospective coordinated studies and in retrospective collaborative efforts.
Effectively combining data across sites requires harmonized protocols, processing streams, and workflows comprising stringent data specifications, image processing steps, rigorous quality control, clinical trait calibration, and standardized statistical tests. Developing these protocols is not trivial: they must be tested, and retested, to ensure they are applicable and reproducible across diverse, uniquely collected datasets and can help answer a variety of questions about the living brain. As imaging and image processing technology advances and new modalities are introduced, an enormous number of features can be extracted from a single brain imaging session; this parallels the increasing breadth of clinical, genomic, and diagnostic data collection, and drives big-data questions and answers that require a solid infrastructural foundation.
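As one illustration of what harmonization can involve, the sketch below estimates additive site offsets in a simulated cortical thickness measure with a linear model and removes them before pooled analysis. This is a deliberately simplified stand-in for field-standard tools such as ComBat, which also model site differences in variance; all names and data here are hypothetical.

```python
# Toy harmonization sketch (not any consortium's actual pipeline): model an
# additive site/scanner offset alongside the covariate of interest, then
# subtract the estimated offsets. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "site": rng.choice(["A", "B", "C"], size=n),
    "age": rng.uniform(20, 80, size=n),
})
# simulate a thickness measure with a true age effect plus site offsets
offsets = df["site"].map({"A": 0.00, "B": 0.08, "C": -0.05})
df["thickness"] = 2.5 - 0.004 * df["age"] + offsets + rng.normal(0, 0.05, n)

# estimate site offsets while controlling for the covariate of interest
fit = smf.ols("thickness ~ age + C(site)", data=df).fit()
site_terms = fit.params.filter(like="C(site)")   # offsets relative to site A

# remove the estimated site offsets, leaving the age effect intact
df["thickness_harmonized"] = df["thickness"] - df["site"].map(
    lambda s: site_terms.get(f"C(site)[T.{s}]", 0.0)
)
```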
For this Topic, we invite original research papers and reviews related to large-scale collaborative neuroscience. Potential areas of focus include:
* reliability, reproducibility, and data assurance in common brain measures across different types of scans or software (e.g., research- vs. clinical-quality scans, children vs. adults, variability in regional reliability across software packages)
* large-scale informatics approaches to big-data neuroscience and/or genomics
* exploring the variability and diversity in neuroimaging traits and applications
* reliability in brain networks, including structural or functional connections and patterns of genomic correlations across brain regions
* statistical approaches for big-data imaging and/or genomics
* statistical approaches for pooling data (e.g., meta-analytic comparisons, meta- vs. mega-analyses; see the sketch after this list)
* distributed or multisite machine learning across diverse datasets
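The distinction raised in the pooling item above can be made concrete: a mega-analysis fits one model to the stacked subject-level data, while a meta-analysis fits the model separately at each site and pools only the resulting coefficients. The following simulated comparison is illustrative only.

```python
# Sketch contrasting mega- vs. meta-analysis on simulated multisite data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
sites = []
for offset in (0.0, 0.3, -0.2):   # hypothetical per-site intercept shifts
    x = rng.normal(size=100)                                # e.g., a symptom score
    y = 0.5 * x + offset + rng.normal(scale=1.0, size=100)  # e.g., an imaging trait
    sites.append((x, y))

# mega-analysis: one model over the stacked subject-level data
# (a real mega-analysis would also include site as a covariate)
X = sm.add_constant(np.concatenate([x for x, _ in sites]))
Y = np.concatenate([y for _, y in sites])
mega_beta = sm.OLS(Y, X).fit().params[1]

# meta-analysis: per-site fits, then inverse-variance-weighted pooling
betas, ses = [], []
for x, y in sites:
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    betas.append(fit.params[1])
    ses.append(fit.bse[1])
w = 1.0 / np.array(ses) ** 2
meta_beta = (w * np.array(betas)).sum() / w.sum()

print(f"mega: {mega_beta:.3f}   meta: {meta_beta:.3f}")   # both near the true 0.5
```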
While the scope of this Topic is broad, this issue will include only original works that analyze two or more diverse datasets and data sources.