About this Research Topic
It is widely acknowledged that in the current academic landscape, publishing is the primary measure of a researcher's value. Individuals' academic performance is typically judged by metrics such as the number of publications or citations (quantity) rather than by the content of their work (quality). Concerns have been raised that this approach to research evaluation distorts both how science is done and how scientists' performance is perceived. Bibliometric indicators such as the h-index or the journal impact factor (JIF) are now widespread in scientific assessment. Their use raises multiple issues, because these measures usually overlook the age or career stage of a researcher, the size of the field, the publication and citation cultures of different disciplines, patterns of co-authorship, and so on. Although publication and citation counts, the h-index, the JIF, and similar measures may be relevant as indicators of visibility and popularity, by themselves they are certainly not indications of intellectual value or scientific quality.
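To make the career-stage blind spot concrete, a minimal sketch of the standard h-index computation follows (in Python, with hypothetical citation counts; not tied to any particular bibliographic database):

    def h_index(citations: list[int]) -> int:
        """Largest h such that the author has h papers with at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Two hypothetical, illustrative publication records:
    early_career = [12, 9, 7, 5, 4]              # 5 papers, 37 citations total
    senior = [40, 30, 4, 4, 3, 2, 1, 1, 0]       # 9 papers, 85 citations total
    print(h_index(early_career), h_index(senior))  # both print 4

Both hypothetical records score h = 4, even though their totals and career lengths differ sharply; this is precisely the kind of context that the raw index discards.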
To address the tension between quantity and quality in research evaluation, some researchers have sought more rigorous, complementary indicators of a scientist's performance by critically analyzing large volumes of scientometric data. Others have argued that the scientific performance of an individual or group must be evaluated through peer review, based on their impact in their field or on the originality, strength, reproducibility, and relevance of their publications. Nevertheless, reviews of scientific projects, grant funding decisions, and university career advancement often rest on decisive input from non-experts, for whom bibliometric indices are readily usable. Newer, more robust tools and methods that normalize bibliometric indicators by field and other influential parameters should therefore be shared with, and embraced by, the research community, universities, and funding agencies. It is also vital to examine newly developed indicators and proposed quantitative methods for quality analysis, and to determine whether high quantity in fact implies high quality, significance, or reputation. Equally essential, when weighing the merits of metrics in research assessment, is the role of peer review and in-depth studies in highlighting quality through the originality, strength, reproducibility, and relevance of publications.
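As one concrete instance of the field normalization mentioned above, the mean normalized citation score (MNCS) used in scientometrics divides each paper's citation count by the average citations of papers from the same field and publication year, then averages the ratios. A minimal sketch, with hypothetical baseline values standing in for figures a bibliographic database would supply:

    from statistics import mean

    def field_normalized_score(papers: list[dict], field_baselines: dict) -> float:
        """MNCS-style score: each paper's citations divided by the average
        citations of papers in the same field and year, averaged overall."""
        ratios = [p["citations"] / field_baselines[(p["field"], p["year"])]
                  for p in papers]
        return mean(ratios)

    # Hypothetical baselines and papers, for illustration only:
    baselines = {("mathematics", 2020): 3.1, ("cell biology", 2020): 14.8}
    papers = [
        {"field": "mathematics", "year": 2020, "citations": 6},   # ~1.9x field average
        {"field": "cell biology", "year": 2020, "citations": 6},  # ~0.4x field average
    ]
    print(round(field_normalized_score(papers, baselines), 2))  # 1.17

The same raw count of six citations maps to roughly 1.9 times the field average in one discipline and 0.4 times in the other, which is exactly the distortion field normalization is meant to correct.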
In this Research Topic, we invite contributions from both academic and industry researchers across disciplines that examine the causes of the current system of evaluating research performance, which favors the number of publications, citations, the h-index, and the JIF. We also welcome contributions on potential avenues for assessing the quality of research, as well as scientists' performance, in various scientific fields. Contributions can address, but are not limited to, the following topics:
• innovative research indicators
• research quality assessment
• responsible research metrics
• normalization of scientometric indices
• funding and financing
• peer review and data mining for evaluating research quality
• research ethics related to assessing research
• responsible use of metrics policy
Keywords: conduct of research, metrics, bibliometrics, research assessment ethics, research evaluation, research quality, responsible research metrics, responsible use of metrics policy
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.