About this Research Topic
In many cases, fundamental details of an intervention are never learned, such as how it was framed, how it was implemented, which of its aspects are responsible for the effects, and how effective it is relative to alternatives. Such omissions hinder the replication of interventions, the identification of program aspects that could be improved, and the integration of knowledge from a single intervention with other findings. All of this impedes the growth of cumulative knowledge, the ability of research to inform policy, and even the advancement of science.
According to previous research, much of this methodological weakness can be attributed to two factors: disagreement about how to conceptualize and measure methodological quality in evaluation, and the context dependency of existing instruments that claim to measure such quality.
The concept of quality is complex and multidimensional. It has been defined from different theoretical perspectives that variously emphasize individual concepts, or sets of concepts, dealing with, for example, internal, external, and construct validity.
This theoretical diversity leads to different approaches to measuring research quality. The main approaches described in the literature are: (a) scales, tools for which evidence of at least content, construct, and criterion validity has been tested; these are usually structured into different dimensions, which are then either summed to obtain a global index or kept separate as various sub-indexes; (b) checklists, which differ from scales mainly in that they have not undergone an extensive validation process; and (c) general recommendations, which take the form of advice on general aspects to consider when assessing quality.
There is no clear approach to measuring the methodological quality of the different experimental and quasi-experimental designs traditionally used in evaluation. The situation is even worse for other assessment techniques: the instruments and procedures available to measure their methodological quality are even more controversial and less developed than the methods for assessing the quality of experiments and quasi-experiments.
The second methodological weakness stems from the context dependency of the instruments used, which reduces the generalizability of the information they generate. Indeed, many tools are used on just one occasion, so dependable knowledge about their psychometric properties, including reliability and validity, is rarely available.
In this Research Topic we aim, first, to describe the state of the art in assessing the methodological quality of interventions. Second, we aim to identify the instruments that are least context dependent, especially those suited to measuring the methodological quality of the different kinds of designs used to evaluate the effectiveness of a wide variety of social interventions. Finally, we will examine ongoing original work in areas where methodological quality has been assessed more rigorously, in order to estimate the extent to which methodological quality acts as a moderator variable influencing the size of the effect obtained.
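To make the moderator analysis concrete, one common formalization (a minimal sketch in standard mixed-effects meta-regression notation; the symbols below are illustrative and not terms fixed by this Research Topic) treats each study's quality score as a study-level covariate predicting its effect size:

$$\hat{\theta}_i = \beta_0 + \beta_1 Q_i + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \quad \varepsilon_i \sim N(0, v_i)$$

Here $\hat{\theta}_i$ is the observed effect size of study $i$, $Q_i$ its methodological quality score, $v_i$ its known sampling variance, and $\tau^2$ the residual between-study heterogeneity; a credibly non-zero $\beta_1$ would indicate that methodological quality moderates the size of the effects obtained.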
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.