ORIGINAL RESEARCH article

Front. Health Serv., 07 March 2023
Sec. Implementation Science
This article is part of the Research Topic "Going Beyond the Traditional Tools of Implementation Science".

Repeated measures of implementation variables

Dean L. Fixsen*, Melissa K. Van Dyke and Karen A. Blase
  • Active Implementation Research Network, Inc., Chapel Hill, NC, United States

It is commonly acknowledged that implementation work is long-term and contextual in nature and often takes years to accomplish. Repeated measures are needed to study the trajectory of implementation variables over time. To be useful in typical practice settings, measures that are relevant, sensitive, consequential, and practical are needed to inform planning and action. If implementation independent variables and implementation dependent variables are to contribute to a science of implementation, then measures that meet these criteria must be established. This exploratory review was undertaken to “see what is being done” to evaluate implementation variables and processes repeatedly in situations where achieving outcomes was the goal (i.e., more likely to be consequential). No judgement was made in the review about the adequacy of the measures (e.g., psychometric properties). The search process resulted in 32 articles that met the criteria for a repeated measure of an implementation variable; 23 different implementation variables were the subject of repeated measures across these articles. The broad spectrum of implementation variables identified in the review included innovation fidelity, sustainability, organization change, and scaling along with training, implementation teams, and implementation fidelity. Given the long-term complexities involved in providing implementation supports to achieve the full and effective use of innovations, repeated measurements of relevant variables are needed to promote a more complete understanding of implementation processes and outcomes. Longitudinal studies employing repeated measures that are relevant, sensitive, consequential, and practical should become common if the complexities involved in implementation are to be understood.

Introduction

Measurement of implementation variables in practice has been a challenge because of the complexities in human service environments, the novelties encountered in different domains (e.g., education, child welfare, global public health, pharmacy), and the ongoing development of implementation as a profession and as a science.

Greenhalgh et al. (1) conducted an extensive review of the diffusion and dissemination literature. They reflected a commonly held view when they concluded: "Context and 'confounders' lie at the very heart of the diffusion, dissemination, and implementation of complex innovations. They are not extraneous to the object of study; they are an integral part of it. The multiple (and often unpredictable) interactions that arise in particular contexts and settings are precisely what determine the success or failure of a dissemination initiative." For a science of implementation to develop, measures of implementation-specific independent and dependent variables must be established and used in multiple studies. Those variables and measures must be usable across the "multiple (and often unpredictable)" situations Greenhalgh et al. described.

Implementation is viewed by many as a process that takes time and planned activities at multiple levels so that innovations can be used fully and effectively and at scale (2). Yet, studies labeled as “implementation science” predominantly use unique measures and one-time assessments of something of interest to the investigator (3, 4). Currently, avid readers of the “implementation science” literature are hard-pressed to find a measure of an implementation-specific independent or dependent variable. Even when one is found, one data point at one point in time is a poor fit with the complexity of implementation as described in the literature. For example, Allen et al. (4) reviewed the literature related to the “inner setting” of organizations as defined by the Consolidated Framework for Implementation Research (CFIR). Consistent with previous findings from a review and synthesis of the implementation evaluation literature (3), Allen et al. found only one measure that was used in more than one study and noted that the definitions of constructs with the same name varied widely across the measures.

Repeated measures of multiple variables are needed to match the complexity of the practice, organization, and system environments in which changes must occur to support the full and effective uses of innovations in practice. Researchers have documented the years typically required to accomplish implementation goals (5, 6) even when skilled implementation teams are available (7). To advance a science of implementation, repeated measures and methods must be well suited to cope with research in applied settings where there are too few cases, too many variables, and too little control over multi-level variables that may impact outcomes (8, 9).

Implementation specialists and researchers who are doing the work of implementation and studying the results over time continually deal with complexity and confounders to accomplish their implementation practice and science aims (10). In their description of implementation practice and science, Fixsen et al. (10, chapter 16) proposed criteria for “action evaluation” measures that can be used to inform action planning and monitor progress toward full and effective use of innovations. Action evaluation measures are: (1) relevant, including items that are indicators of key leverage points for improving practices, organization routines, and system functioning; (2) sensitive to changes in capacity to perform, with scores that increase as capacity is developed and decrease when setbacks occur; (3) consequential, in that the items are important to the respondents and users and scores inform prompt action planning, with repeated assessments each year monitoring progress of action planning as capacity develops and outcomes are produced; and (4) practical, with modest time required to learn how to administer assessments with fidelity to the protocol and modest time required of staff to rate the items or prepare for an observation visit.

While the lack of assessment of psychometric properties has been cited as a deficiency (11–13), what is missing from nearly all of the existing implementation-related measures is a test of consequential validity (14). That is, evidence that the variable under study, and the information generated by the measure of the variable, is highly related to using an innovation with fidelity and to producing intended outcomes to benefit a population of recipients. Given that implementation practice and science are mission-driven (15), consequential validity is an essential test of any measure, an approach that favors external validity over internal validity (16, 17).

Galea (18), working in a health context, stated the problem and the solution clearly:

A consequentialist approach is centrally concerned with maximizing desired outcomes, and a consequentialist epidemiology would be centrally concerned with improving health outcomes. We would be much more concerned with maximizing the good that can be achieved by our studies and by our approaches than we are by our approaches themselves. A consequentialist epidemiology inducts new trainees not around canonical learning but rather around our goals. Our purpose would be defined around health optimization and disease reduction, with our methods as tools, convenient only insofar as they help us get there. Therefore, our papers would emphasize our outcomes with the intention of identifying how we may improve them.

By thinking of “our methods as tools, convenient only insofar as they help us get there,” psychometric properties may be the last concern, not the first (and too often, only) question to be answered. The consequential validity question is “so what?” Once there is a measure of a variable, it is incumbent on the researcher (the measure developer) to provide data that demonstrate how knowing that information “helps us get there.” Once a measure of a variable has demonstrated consequential validity, then it is worth investing in establishing its psychometric properties to fine-tune the measure. It is worth it because the variable matters and the measure detects its presence and strength.

This exploratory review was undertaken to “see what is being done” to measure implementation variables and processes in situations where achieving outcomes was the goal (i.e., more likely to be consequential). The interest is in measures that are relevant, sensitive, consequential, and practical. In particular, given the long-term and contextual nature of implementation work that often takes years to accomplish, the search is for studies that have used repeated measures to study the trajectory of implementation variables over time. For this review, a measure that has been used more than once in a study is an indication that the measure is relevant to the variable under study, sensitive to change in the variable from one data point to the next, consequential with respect to informing planning for change, and practical by virtue of being able to be used two or more times to study a variable.

Materials and methods

The review was conducted within the Active Implementation Research Network (AIRN) EndNotes database. The AIRN EndNotes database contained 3,950 references (as of March 20, 2021) that pertain to implementation, with a bias toward implementation evaluation and quantitative data articles. The oldest reference relates to detecting and evaluating the core components of independent variables (19). The most recent article describes over 10 years of work to scale up innovations in a large state system (20).

In 2003, the AIRN EndNotes database was initiated by entering citations from the boxes of paper copies of articles accumulated by the authors since 1971, the year of the first implementation failure experienced by the first author (21). Beginning in 2003, AIRN EndNotes was expanded with references produced from literature searches conducted through university library services (3). Since 2006, articles routinely have been added through Google Scholar searches. Weekly lists of articles identified with the implementation-related search terms are scanned, and relevant citations, abstracts (when available), and PDFs (when available) are downloaded into AIRN EndNotes. For inclusion in the database, articles reporting quantitative data are favored over qualitative studies or opinion pieces. Reflecting the universal relevance of implementation factors, the database includes a wide variety of articles from multiple fields and many points of view. About two-thirds of the articles in AIRN EndNotes were published in 2000–2021.

The majority of articles in AIRN EndNotes published since 2000 include the abstract and/or a PDF, and the full text of about half of all the articles has been reviewed by the authors and their colleagues. The reviewer of an article enters information in the “Notes” section of EndNotes regarding concepts and terms that relate to the evidence-based Active Implementation Frameworks as they are defined, operationalized, evaluated, and revised (3, 7, 15, 22–27). Given the lack of clarity in the implementation field, the Notes provide common concepts and common language that facilitate searches of the AIRN EndNotes database.

For this study, the AIRN EndNotes database was searched for articles that used repeated measures of one or more implementation variables. Using the search function in EndNotes, “Any Field” (i.e., title, abstract, keywords, notes) in the database was searched using the word “measure” and the term “repeated,” or “longitudinal,” or “pattern,” or “stepped wedge.” The search returned 260 references. Because searches of the literature were less systematic in the pre-internet days, references published prior to the year 2000 were eliminated, leaving 213 references. The title and abstract of each of the 213 articles were reviewed and those with apparent repeated measures of an implementation variable were retained (n = 58).
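
For readers who want to adapt this procedure to their own reference libraries, the keyword filter amounts to a simple boolean condition over the searched fields. The sketch below (Python) is illustrative only; it assumes records have been exported from the database as dictionaries with title, abstract, keyword, notes, and year fields, and it is not the authors' EndNotes tooling.

```python
# Illustrative sketch (not the authors' EndNotes tooling): the "Any Field" keyword
# search expressed as a filter over exported reference records. Field names and the
# export format are assumptions made for this example.
SEARCH_FIELDS = ("title", "abstract", "keywords", "notes")
REQUIRED_TERM = "measure"
ANY_OF_TERMS = ("repeated", "longitudinal", "pattern", "stepped wedge")

def matches_search(record: dict) -> bool:
    """True if the searched fields contain 'measure' plus at least one of the other terms."""
    text = " ".join(str(record.get(field, "")) for field in SEARCH_FIELDS).lower()
    return REQUIRED_TERM in text and any(term in text for term in ANY_OF_TERMS)

def screen_by_keyword_and_year(records: list[dict], min_year: int = 2000) -> list[dict]:
    """Apply the keyword search, then drop references published before min_year."""
    return [r for r in records if matches_search(r) and int(r.get("year", 0)) >= min_year]
```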

The full text of the remaining 58 references was reviewed. For the full text review, “repeated” was defined as two or more uses and “measure” was defined as the same method (observation, record review, survey, systematic interview) with the same content used at Time 1, Time 2, etc. No judgement was made about the adequacy of the measure or time frames. Thus, psychometric properties of a measure were not considered in the review. “Implementation” was defined as any specific support (e.g., training, coaching, leadership, organization changes) for the full and effective use of an identified innovation.
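
Stated as a rule, an article was retained when at least one method-and-content pair was administered at two or more time points. A minimal sketch of that rule follows; the field names are assumptions introduced for illustration, not part of the review protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    method: str      # "observation", "record review", "survey", or "systematic interview"
    content: str     # the instrument or item set administered
    time_point: int  # Time 1, Time 2, ...

def is_repeated_measure(measurements: list[Measurement]) -> bool:
    """True when the same method with the same content appears at two or more time points."""
    time_points: dict[tuple[str, str], set[int]] = {}
    for m in measurements:
        time_points.setdefault((m.method, m.content), set()).add(m.time_point)
    return any(len(points) >= 2 for points in time_points.values())
```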

The full text review eliminated 26 articles. The reasons for elimination are provided in Table 1. For example, 13 articles were eliminated because the repeated measure concerned an intervention and not an implementation variable and 7 were eliminated because the same measure was not repeated from one time period to the next.


Table 1. Articles eliminated after full text review.

Results

The search process resulted in 32 articles that met the criteria for a “repeated” “measure” of “implementation” variables: in 17 articles two or more variables were measured and in 15 articles one variable was measured. Fourteen of the articles were published in 2000–2010 and 18 were published in 2011–2019.

As noted in Table 2, 23 different implementation variables were the subject of repeated measures. In Table 2, the Implementation Variable names are grouped using the Active Implementation Frameworks as a guide (10). The broad spectrum of implementation variables included innovation fidelity (assessed in 17 articles), sustainability (assessed in 8 articles), organization change (assessed in 6 articles), and scaling (assessed in 5 articles). Training, implementation teams, and implementation fidelity were the subject of two articles each.


Table 2. Implementation variables measured two or more times in the 32 articles.

Table 3 details the individual articles, the assessments they reported, and the implementation variables that were studied.


Table 3. The information in the table was sorted alphabetically based on the Implementation Variable column (information regarding the implementation variables can be found at www.activeimplementation.org).

Discussion

It is heartening to see the breadth of implementation-specific variables that have been measured two or more times in one or more studies. Given the long-term complexities involved in providing implementation supports to achieve the full and effective use of innovations, repeated measurements of relevant variables are needed to promote a more complete understanding of implementation processes and outcomes. Yet, this exploratory review found few examples in the literature.

It is discouraging to see so few articles reporting repeated measures. The review found only 32 articles among the 3,950 mostly implementation evaluation articles in the database, an indicator of what must be done to advance the field. Implementation practice and science would be well served by investing in using and improving the measures identified in this review. The measures already have been developed and used in practice and appear to be relevant (they assess the presence and strength of an implementation-specific variable), sensitive (results showed change from one administration to the next), and practical (able to be administered repeatedly in practice). The field would benefit from using these measures in a variety of studies to establish more fully their consequential validity (does the variable being assessed impact the use and effectiveness of innovations?). Meeting the criteria for action evaluations is a good start for the development of any measure.

As found in this study, there are good examples of repeated measures of implementation-specific variables in complex settings. Szulanski and Jensen (43) studied innovation fidelity and outcomes for over 3,500 franchises, each with 16 data points spanning 20 years for a total of 56,000 fidelity assessments that showed detrimental outcomes associated with lower fidelity in the early years and improved outcomes associated with lower fidelity after year 17. McIntosh et al. (35) studied innovation fidelity in 5,331 schools for 5 years, a total of 26,655 fidelity assessments that allowed the researchers to detect patterns in achieving and sustaining fidelity of the use of an evidence-based program. For 10 years Fixsen and Blase (41) studied innovation fidelity every six months for practitioners in 41 residential treatment units, a total of 820 fidelity assessments that detected positive trends among new hires as the implementation supports in the organization matured. Datta et al. (32) collected data for two years with 41 data points to track the effectiveness of continual attempts to produce improved outcomes for neonates admitted to the neonatal intensive care unit.
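
The assessment totals in these examples follow directly from the counts reported above; a quick arithmetic check (using the figures given in the text, with the per-study designs as the cited authors describe them) is:

```python
# Totals cited in the text, reproduced from the reported counts.
szulanski_jensen = 3_500 * 16    # franchises x data points          = 56,000 fidelity assessments
mcintosh_et_al   = 5_331 * 5     # schools x annual assessments      = 26,655 fidelity assessments
fixsen_blase     = 41 * 2 * 10   # units x 2 per year x 10 years     =    820 fidelity assessments
print(szulanski_jensen, mcintosh_et_al, fixsen_blase)  # 56000 26655 820
```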

Innovation fidelity also has been assessed at an organization level. McGovern et al. (45) developed the Dual Diagnosis Capability in Addiction Treatment (DDCAT) index to assess the dual diagnosis (substance abuse and mental health) capability of addiction treatment services. DDCAT items assess: (1) Program Structure; (2) Program Milieu; (3) Clinical Process: Assessment; (4) Clinical Process: Treatment; (5) Continuity of Care; (6) Staffing; and (7) Training. Organization dual diagnosis treatment capacity was measured at baseline and at 9-month follow-up in a cohort of 16 addiction treatment programs, yielding 32 data points; the study found that assessment, feedback, training, and implementation support were most effective in changing organization capacity. The DDCAT has been used in other studies by different authors to assess capacity (33, 47).

In these and other examples cited in this article, the measures of implementation variables are meaningful (relevant) and are repeated (practical) so that trends (sensitive) can be detected and corrected (if needed). Consequential validity was reported in these examples and in other articles (e.g., 43, 48, 49) and requires further study.

Innovation fidelity (n = 17) was the most frequent repeated measure. Innovation fidelity is always specific to the innovation under consideration. Implementation fidelity, on the other hand, refers to implementation-specific variables being used as intended. A science of implementation and assessments of implementation fidelity are intended to be universal. For example, the drivers best practices assessment (DBPA; 59, 60) measures the presence and strength of the implementation drivers (10, 15, 26, 27) that relate to (a) competency (selection, training, coaching, fidelity), (b) organization (facilitative administration, decision support data system, system intervention), and (c) leadership (technical, adaptive). As shown in Table 2, over half of the measures (n = 30) reported in the articles assessed one or more variables related to the implementation drivers. The DBPA has been used to assess implementation fidelity in a variety of settings and organizations, demonstrating a strong association with intended uses of evidence-based programs (61–64). As action measures are used in practice, the statistical (psychometric) properties can be assessed (61, 65).
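
To make the grouping of drivers concrete, the domains and drivers named above can be represented as a small mapping. The sketch below is illustrative only: the averaging of per-driver scores into a domain summary is a hypothetical scoring scheme introduced for the example, not the published DBPA scoring rules.

```python
# The implementation drivers named in the text, grouped by domain. The averaging
# below is a hypothetical illustration of a domain summary, not the DBPA protocol.
IMPLEMENTATION_DRIVERS = {
    "competency": ["selection", "training", "coaching", "fidelity"],
    "organization": ["facilitative administration", "decision support data system", "system intervention"],
    "leadership": ["technical", "adaptive"],
}

def domain_summary(driver_scores: dict[str, float]) -> dict[str, float]:
    """Average the per-driver scores within each domain (drivers without scores are skipped)."""
    summary = {}
    for domain, drivers in IMPLEMENTATION_DRIVERS.items():
        scores = [driver_scores[d] for d in drivers if d in driver_scores]
        summary[domain] = sum(scores) / len(scores) if scores else float("nan")
    return summary
```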

These longitudinal studies are not typical, but they should be. After-only, before-and-after, one-time, or short-term assessments may be interesting but add little to the science of implementation. To do something once or even a few times is interesting. To be able to do something repeatedly with useful outcomes and documented improvements over decades will produce socially significant benefits for whole populations (66). Data on the processes of implementation over time are badly needed.

There is much to be done to establish a science of implementation that has useful and reliable measures of implementation-specific independent (if…) and dependent (then…) variables. Implementation theory (6769) can become the source of predictions (if…then) that can be tested in practice. In this way, like any science, a science of implementation can be cumulative and “crowdsourced” globally as theory-based predictions are tested and theory itself is improved or discarded.

Limitations

In the current study, the AIRN EndNotes database provided a convenience sample for the search that was conducted. Thus, the results of the search offer an indication regarding repeated measures of implementation variables. An exhaustive search of all available sources may produce a different view of the field.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

Ethics statement

Ethical review and approval was not required for this study in accordance with the local legislation and institutional requirements.

Author contributions

All authors contributed to the article and approved the submitted version.

Acknowledgment

The authors are grateful for the diligent reviews and article summaries entered in AIRN EndNotes by Leah Hambright Bartley, Hana Haidar, Lama Haidar, Sandra Naoom, and Frances Wallace Bailey.

Conflict of interest

All the authors were employed by Active Implementation Research Network, Inc.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. (2004) 82(4):581–629. doi: 10.1111/j.0887-378X.2004.00325.x

2. Seers K, Rycroft-Malone J, Cox K, Crichton N, Edwards RT, Eldh AC, et al. Facilitating implementation of research evidence (FIRE): an international cluster randomised controlled trial to evaluate two models of facilitation informed by the promoting action on research implementation in health services (PARIHS) framework. Implement Sci. (2018) 13(1):137. doi: 10.1186/s13012-018-0831-9

3. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: a synthesis of the literature. Tampa, FL: National Implementation Research Network, University of South Florida (2005). iii–119. www.activeimplementation.org

4. Allen JD, Towne SD, Maxwell AE, DiMartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. (2017) 17(1):591. doi: 10.1186/s12913-017-2459-x

5. Saldana L, Chamberlain P, Wang W, Brown HC. Predicting program start-up using the stages of implementation measure. Adm Policy Ment Health. (2012) 39(6):419–25. doi: 10.1007/s10488-011-0363-y

6. Tommeraas T, Ogden T. Is there a scale-up penalty? Testing behavioral change in the scaling up of parent management training in Norway. Adm Policy Ment Health Ment Health Serv Res. (2016) 44:203–16. doi: 10.1007/s10488-015-0712-3

7. Fixsen DL, Blase KA, Timbers GD, Wolf MM. In search of program implementation: 792 replications of the teaching-family model. Behav Anal Today. (2007) 8(1):96–110. doi: 10.1037/h0100104

8. Goggin ML. The “too few cases/too many variables” problem in implementation research. West Polit Q. (1986) 39(2):328–47. doi: 10.1177/106591298603900210

9. Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ: Br Med J. (2014) 348. doi: 10.1136/bmj.g3725

10. Fixsen DL, Blase KA, Van Dyke MK. Implementation practice and science. 1st ed Chapel Hill, NC: Active Implementation Research Network (2019). 378. https://www.amazon.com/dp/1072365529

11. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement Sci. (2014) 9(118). doi: 10.1186/s13012-014-0118-8

12. Lewis C, Fischer S, Weiner B, Stanick C, Kim M, Martinez R. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. (2015) 10(1):155. doi: 10.1186/s13012-015-0342-x

13. Weiner B, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. (2017) 12(1):108. doi: 10.1186/s13012-017-0635-3

14. Shepard LA. Evaluating test validity: reprise and progress. Assess Educ. (2016) 23(2):268–80. doi: 10.1080/0969594X.2016.1141168

15. Fixsen DL, Blase KA, Naoom SF, Wallace F. Core implementation components. Res Social Work Pract. (2009) 19(5):531–40. doi: 10.1177/1049731509335549

16. Green LW. Making research relevant: if it is an evidence-based practice, where’s the practice-based evidence? Fam Pract. (2008) 25:20–4. doi: 10.1093/fampra/cmn055

17. Kessler RC, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. (2011) 40(6):637–44. doi: 10.1016/j.amepre.2011.02.023

18. Galea S. An argument for a consequentialist epidemiology. Am J Epidemiol. (2013) 178(8):1185–91. doi: 10.1093/aje/kwt172

19. Bernard C. An introduction to the study of experimental medicine. Paris: Macmillan & Co., Ltd. (1927). First English translation by Henry Copley Greene. Dover edition 1957; 1865.

20. Margolies PJ, Covell NH, Patel SR. Applying implementation drivers to scale-up evidence-based practices in New York state. Glob Implement Res Appl. (2021) 1. doi: 10.1007/s43477-020-00002-z

21. Fixsen DL, Blase KA. The teaching-family model: the first 50 years. Perspect Behav Sci. (2018) 42(2):189–211. doi: 10.1007/s40614-018-0168-3

22. Fixsen DL, Blase KA, Timbers GD, Wolf MM. In search of program implementation: 792 replications of the teaching-family model. In: Bernfeld GA, Farrington DP, Leschied AW, editors. Offender rehabilitation in practice: implementing and evaluating effective programs. London: Wiley (2001). p. 149–66.

23. Blase KA, Fixsen DL. Core intervention components: identifying and operationalizing what makes programs work. Washington, DC: Office of the Assistant Secretary for Planning and Evaluation, Office of Human Services Policy, U.S. Department of Health and Human Services (2013).

24. Fixsen DL, Blase KA, Metz A, Van Dyke MK. Implementation science. In: Wright JD, editors. International encyclopedia of the social and behavioral sciences. 11. 2nd ed. Oxford: Elsevier, Ltd (2015). p. 695–702.

25. Fixsen DL, Van Dyke MK, Blase KA. Science and implementation. Chapel Hill, NC: Active Implementation Research Network (2019).

26. Blase KA, Fixsen DL, Phillips EL. Residential treatment for troubled children: developing service delivery systems. In: Paine SC, Bellamy GT, Wilcox B, editors. Human services that work: from innovation to standard practice. Baltimore, MD: Paul H. Brookes Publishing (1984). p. 149–65.

27. Blase KA, Van Dyke MK, Fixsen DL, Bailey FW. Implementation science: key concepts, themes, and evidence for practitioners in educational psychology. In: Kelly B, Perkins D, editors. Handbook of implementation science for psychology in education. London: Cambridge University Press (2012). p. 13–34.

28. Strand V, Popescu M, Way I, Jones AS. Building field agencies’ capacity to prepare staff and social work students for evidence-based trauma treatments. Fam Soc. (2017) 98(1):5–15. doi: 10.1606/1044-3894.2017.8

29. Panzano PC, Seffrin B, Chaney-Jones S, Roth D, Crane-Ross D, Massatti R, et al. The innovation diffusion and adoption research project (IDARP). In: Roth D, Lutz W, editors. New research in mental health. 16. Columbus, OH: Ohio Department of Mental Health Office of Program Evaluation and Research (2004). p. 78–89.

30. Vernez G, Karam R, Mariano LT, DeMartini C. Evaluating comprehensive school reform models at scale: focus on implementation. Santa monica, CA: RAND Corporation (2006).

31. Fixsen DL, Ward C, Ryan Jackson K, Blase K, Green J, Sims B, et al. Implementation and scaling evaluation report: 2013-2017. State Implementation and Scaling up of Evidence Based Practices Center, Chapel Hill, NC: University of North Carolina at Chapel Hill (2018).

32. Datta V, Saili A, Goel S, Sooden A, Singh M, Vaid S, et al. Reducing hypothermia in newborns admitted to a neonatal care unit in a large academic hospital in New Delhi, India. BMJ Open Quality. (2017) 6(2):e000183. doi: 10.1136/bmjoq-2017-000183

33. Lee N, Cameron J. Differences in self and independent ratings on an organisational dual diagnosis capacity measured. Drug Alcohol Rev. (2009) 28:682–4. doi: 10.1111/j.1465-3362.2009.00116.x

34. Hardeman W, Michie S, Fanshawe T, Prevost AT, McLoughlin K, Kinmonth AL. Fidelity of delivery of a physical activity intervention: predictors and consequences. Psychol Health. (2008) 23(1):11–24. doi: 10.1080/08870440701615948

35. McIntosh K, Mercer SH, Nese RNT, Ghemraoui A. Identifying and predicting distinct patterns of implementation in a school-wide behavior support framework. Prev Sci. (2016) 17(8):992–1001. doi: 10.1007/s11121-016-0700-1

36. Tiruneh GT, Karim AM, Avan BI, Zemichael NF, Wereta TG, Wickremasinghe D, et al. The effect of implementation strength of basic emergency obstetric and newborn care (BEmONC) on facility deliveries and the met need for BEmONC at the primary health care level in Ethiopia. BMC Pregnancy Childbirth. (2018) 18(1):123. doi: 10.1186/s12884-018-1751-z

37. Shapiro VB, Kim BKE, Robitaille JL, LeBuffe PA, Ziemer KL. Efficient implementation monitoring in routine prevention practice: a grand challenge for schools. J Soc Social Work Res. (2018) 9(3):377–94. doi: 10.1086/699153

38. Hoekstra F, van Offenbeek MAG, Dekker R, Hettinga FJ, Hoekstra T, van der Woude LHV, et al. Implementation fidelity trajectories of a health promotion program in multidisciplinary settings: managing tensions in rehabilitation care. Implement Sci. (2017) 12(1):143. doi: 10.1186/s13012-017-0667-8

39. Chinman M, Ebener P, Malone PS, Cannon J, D’Amico EJ, Acosta J. Testing implementation support for evidence-based programs in community settings: a replication cluster-randomized trial of getting to outcomes®. Implement Sci. (2018) 13(1):131. doi: 10.1186/s13012-018-0825-7

40. Rahman M, Ashraf S, Unicomb L, Mainuddin AKM, Parvez SM, Begum F, et al. WASH Benefits Bangladesh trial: system for monitoring coverage and quality in an efficacy trial. Trials. (2018) 19(1):360. doi: 10.1186/s13063-018-2708-2

41. Fixsen DL, Blase K. Implementation quotient. Chapel Hill, NC: Active Implementation Research Network (2009).

42. Jensen RJ. Replication and adaptation: the effect of adaptation degree and timing on the performance of replicated routines. Provo, UT: Brigham Young University (2007).

43. Szulanski G, Jensen RJ. Presumptive adaptation and the effectiveness of knowledge transfer. Strateg Manag J. (2006) 27(10):937–57. doi: 10.1002/smj.551

44. Althabe F, Buekens P, Bergel E, Belizán JM, Campbell MK, Moss N, et al. A behavioral intervention to improve obstetrical care. N Engl J Med. (2008) 358(18):1929–40. doi: 10.1056/NEJMsa071456

45. McGovern MP, Matzkin AL, Giard J. Assessing the dual diagnosis capability of addiction treatment services: the dual diagnosis capability in addiction treatment (DDCAT) Index. J Dual Diagn. (2007) 3(2):111–23. doi: 10.1300/J374v03n02_13

46. Masud Parvez S, Azad R, Rahman MM, Unicomb L, Ram P, Naser AM, et al. Achieving optimal technology and behavioral uptake of single and combined interventions of water, sanitation hygiene and nutrition, in an efficacy trial (WASH benefits) in rural Bangladesh. Trials. (2018) 19(358). doi: 10.1186/s13063-018-2710-8

47. Chaple M, Sacks S. The impact of technical assistance and implementation support on program capacity to deliver integrated services. J Behav Health Serv Res. (2016) 43(1):3–17. doi: 10.1007/s11414-014-9419-6

48. Aladjem DK, Borman KM. Examining comprehensive school reform. Washington, DC: Urban Institute Press (2006).

49. Forgatch MS, DeGarmo DS. Sustaining fidelity following the nationwide PMTO implementation in Norway. Prev Sci. (2011) 12(3). doi: 10.1007/s11121-011-0225-6

50. Kim SS, Nguyen PH, Tran LM, Sanghvi T, Mahmud Z, Haque MR, et al. Large-scale social and behavior change communication interventions have sustained impacts on infant and young child feeding knowledge and practices: results of a 2-year follow-up study in Bangladesh. J Nutr. (2018) 148(10):1605–14. doi: 10.1093/jn/nxy147

51. Litaker D, Ruhe M, Weyer S, Stange K. Association of intervention outcomes with practice capacity for change: subgroup analysis from a group randomized trial. Implement Sci. (2008) 3(1):25. doi: 10.1186/1748-5908-3-25

52. Parker SK. Longitudinal effects of lean production on employee outcomes and the mediating role of work characteristics. J Appl Psychol. (2003) 88(4):620–34. doi: 10.1037/0021-9010.88.4.620

53. Jetten J, O'Brien A, Trindall N. Changing identity: predicting adjustment to organizational restructure as a function of subgroup and superordinate identification. Br J Soc Psychol. (2002) 41(Pt. 2):281–97. doi: 10.1348/014466602760060147

54. Das M, Chaudhary C, Mohapatra S, Srivastava V, Khalique N, Kaushal S, et al. Improvements in essential newborn care and newborn resuscitation services following a capacity building and quality improvement program in three districts of Uttar Pradesh, India. Indian J Community Med. (2018) 43(2):90–6. doi: 10.4103/ijcm.IJCM_132_17

55. Ryan Jackson K, Fixsen D, Ward C, Waldroup A, Sullivan V, Poquette H, et al. Accomplishing effective and durable change to support improved student outcomes. Chapel Hill, NC: State Implementation and Scaling up of Evidence Based Practices Center, University of North Carolina at Chapel Hill (2018).

56. Smith SN, Almirall D, Prenovost K, Goodrich DE, Abraham KM, Liebrecht C, et al. Organizational culture and climate as moderators of enhanced outreach for persons with serious mental illness: results from a cluster-randomized trial of adaptive implementation strategies. Implement Sci. (2018) 13(1):93. doi: 10.1186/s13012-018-0787-9

57. Massatti R, Sweeney H, Panzano P, Roth D. The de-adoption of innovative mental health practices (IMHP): why organizations choose not to sustain an IMHP. Adm Policy Ment Health Ment Health Serv Res. (2008) 35(1-2):50–65. doi: 10.1007/s10488-007-0141-z

58. Chilenski SM, Welsh J, Olson J, Hoffman L, Perkins DF, Feinberg ME. Examining the highs and lows of the collaborative relationship between technical assistance providers and prevention implementers. Prev Sci. (2018) 19(2):250–9. doi: 10.1007/s11121-017-0812-2

59. Fixsen DL, Blase K, Naoom S, Wallace F. Measures of core implementation components. National Implementation Research Network, Tampa, FL: Florida Mental Health Institute, University of South Florida (2006).

60. Fixsen DL, Ward C, Blase K, Naoom S, Metz A, Louison L. Assessing drivers best practices. Chapel Hill, NC: Active Implementation Research Network (2018).

61. Ogden T, Bjørnebekk G, Kjøbli J, Patras J, Christiansen T, Taraldsen K, et al. Measurement of implementation components ten years after a nationwide introduction of empirically supported programs – a pilot study. Implement Sci. (2012) 7:49. doi: 10.1186/1748-5908-7-49

62. Metz A, Bartley L, Ball H, Wilson D, Naoom S, Redmond P. Active implementation frameworks (AIF) for successful service delivery: catawba county child wellbeing project. Res Soc Work Pract. (2014) 25(4):415–22. doi: 10.1177/1049731514543667

63. Sigmarsdóttir M, Forgatch M, Vikar Guðmundsdóttir E, Thorlacius Ö, Thorn Svendsen G, Tjaden J, et al. Implementing an evidence-based intervention for children in Europe: evaluating the full-transfer approach. J Clin Child Adolesc Psychol. (2018) 48:1–14. doi: 10.1080/15374416.2018.1466305

64. Skogøy BE, Sørgaard K, Maybery D, Ruud T, Stavnes K, Kufås E, et al. Hospitals implementing changes in law to protect children of ill parents: a cross-sectional study. BMC Health Serv Res. (2018) 18. doi: 10.1186/s12913-018-3393-2

65. Ward CS, Harms AL, St. Martin K, Cusumano D, Russell C, Horner RH. Development and technical adequacy of the district capacity assessment. J Posit Behav Interv. (2021). doi: 10.1177/1098300721990911

66. Fixsen DL, Blase KA, Fixsen AAM. Scaling effective innovations. Criminol Public Policy. (2017) 16(2):487–99. doi: 10.1111/1745-9133.12288

67. Improved Clinical Effectiveness through Behavioural Research Group. Designing theoretically-informed implementation interventions. Implement Sci. (2006) 1(4). doi: 10.1186/1748-5908-1-4

68. Carpiano RM, Daley DM. A guide and glossary on postpositivist theory building for population health. J Epidemiol Community Health. (2006) 60:564–70. doi: 10.1136/jech.2004.031534

69. Nilsen P, Birken SA. Handbook on implementation science. Cheltenham, UK: Edward Elgar Publishing (2020).

70. Fixsen DL, Ward CS, Duda MA, Horner R, Blase KA. State capacity assessment (SCA) for scaling up evidence-based practices (v. 25.2). State Implementation and Scaling up of Evidence Based Practices Center (2015). Available online: https://www.activeimplementation.org/resources/state-capacity-assessment/

71. St. Martin K, Ward C, Harms A, Russell C, Fixsen DL. Regional capacity assessment (RCA) for scaling up implementation capacity. Retrieved from State Implementation and Scaling up of Evidence-based Practices Center University of North Carolina at Chapel Hill (2015).

72. Ward C, St. Martin K, Horner R, Duda M, Ingram-West K, Tedesco M, et al. District Capacity Assessment. Retrieved from State Implementation and Scaling up of Evidence-based Practices Center University of North Carolina at Chapel Hill (2015).

73. Fixsen DL, Ward C, Blase K, Naoom S, Metz A, Louison L. Assessing drivers best practices. Chapel Hill, NC: Active Implementation Research Network (2018). Available online: https://www.activeimplementation.org/resources/assessing-drivers-best-practices/

Keywords: implementation, scaling, measurement, validity, replication

Citation: Fixsen DL, Van Dyke MK and Blase KA (2023) Repeated measures of implementation variables. Front. Health Serv. 3:1085859. doi: 10.3389/frhs.2023.1085859

Received: 31 October 2022; Accepted: 16 February 2023;
Published: 7 March 2023.

Edited by:

Per Nilsen, Linköping University, Sweden

Reviewed by:

David Sommerfeld, University of California, San Diego, United States
Jane Sandall, King's College London, United Kingdom

© 2023 Fixsen, Van Dyke and Blase. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dean L. Fixsen dean.fixsen@activeimplementation.org

These authors have contributed equally to this work and share first authorship

Specialty Section: This article was submitted to Implementation Science, a section of the journal Frontiers in Health Services
