The replicability crisis in science, which spans all disciplines, requires comprehensive examination from multiple perspectives. This Research Topic focuses on defining and analyzing the replicability crisis, with a particular emphasis on Psychology. We also address a critical factor contributing to the lack of replicability: data analysis conducted after data collection.
We start from three premises:
1. Technological advancements have made data analysis more accessible. However, researchers often overlook the suitability of advanced methods for their data, resorting to complex analyses to compensate for planning and execution deficiencies.
2. The increasing reliance on online and electronic data collection makes it harder to control the participant sample and to understand the characteristics of those who do not participate. Because experimental control becomes difficult, more statistical control is needed, including subgroup analysis, propensity scores, and sensitivity analysis, yet these are frequently absent.
3. Data analysis approaches are evaluated primarily by methods researchers through Monte Carlo simulation experiments, but applied researchers often lack the knowledge to draw accurate conclusions from them. Methods researchers should take responsibility for providing clear guidelines that ensure analysis approaches are appropriate for the conditions investigated.
This Research Topic aims to improve research integrity and increase replicability, emphasizing the need for appropriate data use, comprehensive and responsible data analysis, and clear guidelines for researchers. It therefore addresses the following themes:
• Concerns arise regarding data analysis methods, especially when used with small sample sizes. How frequently do researchers use inadequate sample sizes and present impressive yet misleading results? What errors commonly occur in such research? What is essential for publishable studies? Are applied researchers correctly utilizing Machine Learning and Deep Learning data analysis methods? Are reviewers adequately trained to assess their usage?
• Experimental control becomes challenging in quasi-experimental or non-experimental research conducted under natural conditions. Even when full experimental control is not attainable, bias control remains vital for ensuring validity. Tutorials on propensity score analysis and sensitivity analysis are needed, along with empirical investigations that contrast results obtained with and without these techniques; a minimal, hypothetical sketch of propensity-score weighting follows this list.
• Effective ways of presenting Monte Carlo simulation results need to be devised to assist applied researchers in selecting suitable techniques and understanding associated risks.
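As an illustration of the kind of material such tutorials might cover, the following Python sketch estimates propensity scores with logistic regression and applies inverse-probability weighting to simulated observational data. Everything in it (the variable names, the data-generating model, the simulated effect of 0.5) is a hypothetical assumption for illustration, not a prescription.

```python
# Hypothetical sketch: propensity scores plus inverse-probability weighting (IPW)
# to adjust a treatment-effect estimate in non-experimental data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Simulated observational data: "treatment" depends on covariates,
# so a naive group comparison is confounded.
age = rng.normal(40, 10, n)
motivation = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-2 + 0.04 * age + 0.8 * motivation)))
treated = rng.binomial(1, p_treat)
outcome = 0.5 * treated + 0.03 * age + 0.6 * motivation + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "motivation": motivation,
                   "treated": treated, "outcome": outcome})

# Step 1: model the probability of treatment from observed covariates.
ps_model = LogisticRegression().fit(df[["age", "motivation"]], df["treated"])
df["pscore"] = ps_model.predict_proba(df[["age", "motivation"]])[:, 1]

# Step 2: inverse-probability weights (treated: 1/p, control: 1/(1-p)).
df["weight"] = np.where(df["treated"] == 1,
                        1 / df["pscore"], 1 / (1 - df["pscore"]))

# Naive vs. weighted difference in means; the weighted estimate should sit
# closer to the simulated effect of 0.5.
naive = (df.loc[df.treated == 1, "outcome"].mean()
         - df.loc[df.treated == 0, "outcome"].mean())
weighted = (np.average(df.loc[df.treated == 1, "outcome"],
                       weights=df.loc[df.treated == 1, "weight"])
            - np.average(df.loc[df.treated == 0, "outcome"],
                         weights=df.loc[df.treated == 0, "weight"]))
print(f"naive difference: {naive:.2f}, IPW-adjusted difference: {weighted:.2f}")
```

The contrast printed at the end makes the point raised above: in non-experimental data the naive group difference mixes the treatment effect with confounding, whereas the propensity-weighted estimate reduces that bias.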
Regarding Monte Carlo simulation studies:
• Powerful software and hardware make it possible to run simulations under a very large number of conditions, but reporting every result in an article is impractical. A common recommendation is to average results and present them in tabular or graphical form (a minimal sketch of this workflow appears after this list). Research providing explicit decision rules for when and how to average simulation results would be valuable.
• When graphing Monte Carlo simulation results, how can we indicate the information gained or lost accurately?
• Literature reviews should focus on identifying knowledge gaps in frequently used analysis methods, highlighting which aspects are overstudied, which are understudied, and which statistical techniques are high-risk.
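As a concrete illustration of the averaging problem raised in the first bullet above, the following Python sketch runs a small simulation study (Student vs. Welch t-test under variance heterogeneity) and collapses thousands of replications per condition into one compact table of empirical Type I error rates. The design, condition values, and replication count are assumptions chosen only for illustration.

```python
# Minimal sketch of a Monte Carlo simulation study whose raw output
# (thousands of p-values) is averaged into a single summary table.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n_reps = 2000
rows = []

# Conditions: group sizes and the ratio of group standard deviations.
for n1, n2 in [(20, 20), (20, 60)]:
    for sd_ratio in [1.0, 3.0]:
        for _ in range(n_reps):
            x = rng.normal(0, 1, n1)
            y = rng.normal(0, sd_ratio, n2)   # true mean difference is zero
            p_student = stats.ttest_ind(x, y, equal_var=True).pvalue
            p_welch = stats.ttest_ind(x, y, equal_var=False).pvalue
            rows.append({"n1": n1, "n2": n2, "sd_ratio": sd_ratio,
                         "student": p_student < .05, "welch": p_welch < .05})

results = pd.DataFrame(rows)
# Average over replications: empirical Type I error rate per condition,
# reported as one small table instead of thousands of raw p-values.
summary = (results.groupby(["n1", "n2", "sd_ratio"])[["student", "welch"]]
                  .mean().round(3))
print(summary)
```

A decision rule of the kind the bullet calls for would specify, for example, when it is acceptable to average such rates over further conditions (say, over sample-size combinations) and when doing so hides condition-specific risks.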
Meta-analyses are widely regarded as providing the most precise measure of scientific evidence. Achieving the objectives of this Research Topic will help meta-analyses deliver more accurate results on the effectiveness of the interventions studied.
Keywords:
Replicability Crisis, Research Integrity, Sensitivity Analysis, Propensity Score, Sample Size, Monte Carlo Simulation Studies, Experimental Control, Meta-analysis, Machine Learning and Deep Learning techniques, Planning, Online Data, Validation
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.