About this Research Topic
Open access is now widely accepted as desirable and is slowly becoming a reality. However, evaluation, the second essential element of an open publication process, has received less attention. Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system. Yet it is unclear exactly how such a system should be designed.
The evaluation system steers the attention of the scientific community and, thus, the very course of science. For better or worse, the most visible papers determine the direction of each field and guide funding and public policy decisions. Evaluation, therefore, is at the heart of the entire endeavor of science. As the number of scientific publications explodes, evaluation and selection will only gain importance. A grand challenge of our time is to design the future system by which we evaluate papers and decide which ones deserve broad attention.
So far, scientists have left the design of the evaluation process to journals and publishing companies. However, the steering mechanism of science should be designed by scientists. The cognitive, computational, and brain sciences are best prepared to take on this task, which will involve social and psychological considerations, software design, and modeling of the network of scientific papers and their interrelationships.
This Research Topic in Frontiers in Computational Neuroscience collects visions for a future system of open evaluation. Because critical arguments about the current system abound, these papers will focus on constructive ideas and comprehensive designs for open evaluation systems. Design decisions include:

- Should the reviews and ratings be entirely transparent, or should some aspects be kept secret?
- Should other information, such as paper downloads, be included in the evaluation?
- How can scientific objectivity be strengthened and political motivations weakened in the future system?
- Should the system include signed and authenticated reviews and ratings?
- Should the evaluation be an ongoing process, such that promising papers are more deeply evaluated?
- How can we bring science and statistics to the evaluation process (e.g., should rating averages come with error bars)?
- How should the evaluative information about each paper (e.g., peer ratings) be combined to prioritize the literature?
- Should different individuals and organizations be able to define their own evaluation formulae (e.g., weighting ratings according to different criteria)? A minimal sketch of one such formula appears after this list.
- How can we efficiently transition toward the future system?
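To make the statistical questions above concrete, here is a minimal sketch of one possible evaluation formula. It is purely illustrative, not a design endorsed by this Research Topic: the function names, the 1-10 rating scale, and the reviewer weights are all hypothetical choices of our own. The sketch combines peer ratings with per-reviewer weights and reports a rating average together with a standard error, so that the average "comes with error bars."

```python
# Illustrative sketch only: one possible evaluation formula.
# All names, weights, and the rating scale are hypothetical.
from math import sqrt
from statistics import mean, stdev

def weighted_mean_rating(ratings, weights):
    """Combine peer ratings using per-reviewer weights
    (e.g., counting signed reviews more heavily than anonymous ones)."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

def rating_with_error_bar(ratings):
    """Return the mean rating and its standard error, so that the
    rating average comes with an error bar."""
    m = mean(ratings)
    se = stdev(ratings) / sqrt(len(ratings)) if len(ratings) > 1 else float("inf")
    return m, se

# Example: five peer ratings on a 1-10 scale; signed reviews
# (weight 2.0) count twice as much as anonymous ones (weight 1.0).
ratings = [7, 9, 6, 8, 7]
weights = [2.0, 1.0, 2.0, 1.0, 2.0]
print(weighted_mean_rating(ratings, weights))  # weighted average: 7.125
print(rating_with_error_bar(ratings))          # (mean, standard error)
```

The choice of weights, of the error measure, and of the rating scale are themselves open design decisions, exactly the kind the papers in this Topic are invited to address.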
Ideally, the future system will derive its authority from a scientific literature on community-based open evaluation. We hope that these papers will provide a starting point.
This Frontiers Research Topic was completed in 2012 with a full cohort of articles and is now closed for submission.
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.