Reproducibility is critical to scientific inquiry, which relies on the independent verification of results. Progress in science also requires that we determine whether conclusions were obtained through a rigorous process and whether results are robust to small changes in conditions. Computational approaches present unique challenges for these requirements. As models and data analysis routines become more complex, verification that is completely independent of the original implementation may not be practical, since re-implementation often requires significant time and resources. Model complexity also makes it more difficult to share all details of a model, hindering transparency.
Although true reproducibility should be the goal when possible, resources and tools that promote the replication of computational results using the original code are extremely valuable to the community. These platforms, such as open-source code-sharing sites and model databases, increase the impact of models and other computational approaches through re-use and allow for further development and improvement. Simulator-independent model descriptions provide a further step toward reproducibility and transparency. Despite this progress, best practices for the verification of computational neuroscience research have not yet been established.
Increasing the impact of modeling across neuroscience also requires better descriptions of model assumptions, constraints, and validation. For data-driven models, better reporting is needed on which data were used to constrain model development, the details of the data-fitting process, and the quantitative evaluation of how well the emergent properties of the model dynamics match empirical data. When model development is driven by theoretical or conceptual constraints, modelers must carefully describe their assumptions and the process of model development and validation in order to improve transparency and rigour.
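As an illustration of the kind of quantitative evaluation described above, the following minimal Python sketch compares one emergent property of simulated dynamics (here, a population mean firing rate) against an empirical target. The model output, the summary statistic, and the empirical values are hypothetical placeholders chosen for illustration, not a prescribed standard.

    import numpy as np

    def mean_firing_rate(spike_times, duration_s, n_neurons):
        """Summary statistic: population mean firing rate in Hz."""
        return len(spike_times) / (duration_s * n_neurons)

    # Hypothetical empirical target (mean +/- SD across recordings).
    empirical_rate, empirical_sd = 5.2, 0.8  # Hz

    # Hypothetical simulation output: spike times from a 10 s run of 100 neurons.
    rng = np.random.default_rng(seed=1)
    sim_spike_times = rng.uniform(0.0, 10.0, size=5_000)

    sim_rate = mean_firing_rate(sim_spike_times, duration_s=10.0, n_neurons=100)

    # Report the discrepancy in units of empirical variability (a z-score),
    # so the match between model and data is stated quantitatively.
    z = (sim_rate - empirical_rate) / empirical_sd
    print(f"simulated rate: {sim_rate:.2f} Hz, z-score vs. data: {z:+.2f}")

Reporting discrepancies in units of empirical variability, rather than as a bare pass/fail judgment, makes the validation criterion explicit and comparable across studies.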
With this Research Topic, we aim to describe the challenges of reproducibility and rigour, and the efforts to address them across areas of quantitative neuroscience. We include descriptions of resources and platforms that promote the replication and re-use of computational models, as well as resources that aid in validating models against empirical studies. We also include examples that successfully address the verification of computational neuroscience research and that may lead to progress in establishing best practices.