
RESEARCH Open Access

The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects

Justin D. Smith1,2*, Dennis H. Li3 and Miriam R. Rafferty4

Abstract

Background: Numerous models, frameworks, and theories exist for specific aspects of implementation research, including for determinants, strategies, and outcomes. However, implementation research projects often fail to provide a coherent rationale or justification for how these aspects are selected and tested in relation to one another. Despite this need to better specify the conceptual linkages between the core elements involved in projects, few tools or methods have been developed to aid in this task. The Implementation Research Logic Model (IRLM) was created for this purpose and to enhance the rigor and transparency of describing the often-complex processes of improving the adoption of evidence-based interventions in healthcare delivery systems.

Methods: The IRLM structure and guiding principles were developed through a series of preliminary activities with multiple investigators representing diverse implementation research projects in terms of contexts, research designs, and implementation strategies being evaluated. The utility of the IRLM was evaluated in the course of a 2-day training to over 130 implementation researchers and healthcare delivery system partners.

Results: Preliminary work with the IRLM produced a core structure and multiple variations for common implementation research designs and situations, as well as guiding principles and suggestions for use. Results of the survey indicated a high utility of the IRLM for multiple purposes, such as improving rigor and reproducibility of projects; serving as a “roadmap” for how the project is to be carried out; clearly reporting and specifying how the project is to be conducted; and understanding the connections between determinants, strategies, mechanisms, and outcomes for their project.




Smith et al. Implementation Science (2020) 15:84. https://doi.org/10.1186/s13012-020-01041-8


Conclusions: The IRLM is a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects. The IRLM can also aid implementation researchers and implementation partners in the planning and execution of practice change initiatives. Adaptation and refinement of the IRLM are ongoing, as is the development of resources for use and applications to diverse projects, to address the challenges of this complex scientific field.

Keywords: Logic models, Program theory, Integration, Study specification

Background

In response to a call for addressing noted problems with transparency, rigor, openness, and reproducibility in biomedical research [1], the National Institutes of Health issued guidance in 2014 pertaining to the research it funds (https://www.nih.gov/research-training/rigor-reproducibility). The field of implementation science has similarly recognized a need for better specification with similar intent [2]. However, integrating the necessary conceptual elements of implementation research, which often involves multiple models, frameworks, and theories, is an ongoing challenge. A conceptually grounded organizational tool could improve rigor and reproducibility of implementation research while offering additional utility for the field.

This article describes the development and application of the Implementation Research Logic Model (IRLM). The IRLM can be used with various types of implementation studies and at various stages of research, from planning and executing to reporting and synthesizing implementation studies. Example IRLMs are provided for various common study designs and scenarios, including hybrid designs and studies involving multiple service delivery systems [3, 4]. Last, we describe the preliminary use of the IRLM and provide results from a post-training evaluation. An earlier version of this work was presented at the 2018 AcademyHealth/NIH Conference on the Science of Dissemination and Implementation in Health, and the abstract appeared in Implementation Science [5].

Specification challenges in implementation research

Having an imprecise understanding of what was done and why during the implementation of a new innovation obfuscates identifying the factors responsible for successful implementation and prevents learning from what contributed to failed implementation. Thus, improving the specification of phenomena in implementation research is necessary to inform our understanding of how implementation strategies work, for whom, under what determinant conditions, and on what implementation and clinical outcomes. One challenge is that implementation science uses numerous models and frameworks (hereafter, “frameworks”) to describe, organize, and aid in understanding the complexity of changing practice patterns and integrating evidence-based health interventions across systems [6]. These frameworks typically address implementation determinants, implementation process, or implementation evaluation [7]. Although many frameworks incorporate two or more of these broad purposes, researchers often find it necessary to use more than one to describe the various aspects of an implementation research study. The conceptual connections and relationships between multiple frameworks are often difficult to describe and to link to theory [8].

Similarly, reporting guidelines exist for some of these implementation research components, such as strategies [9] and outcomes [10], as well as for entire studies (i.e., Standards for Reporting Implementation Studies [11]); however, they generally help describe the individual components and not their interactions. To facilitate causal modeling [12], which can be used to elucidate mechanisms of change and the processes involved in both successful and unsuccessful implementation research projects, investigators must clearly define the relations among variables in ways that are testable with research studies [13]. Only then can we open the “black box” of how specific implementation strategies operate to predict outcomes.

Contributions to the literature

• Drawing from and integrating existing frameworks, models, and theories, the IRLM advances the traditional logic model for the requirements of implementation research and practice.

• The IRLM provides a means of describing the complex relationships between critical elements of implementation research and practice in a way that can be used to improve the rigor and reproducibility of research and implementation practice, and the testing of theory.

• The IRLM offers researchers and partners a useful tool for the purposes of planning, executing, reporting, and synthesizing processes and findings across the stages of implementation projects.


Logic models

Logic models, graphic depictions that present the shared relationships among various elements of a program or study, have been used for decades in program development and evaluation [14] and are often required by funding agencies when proposing studies involving implementation [15]. Used to develop agreement among diverse stakeholders of the “what” and the “how” of proposed and ongoing projects, logic models have been shown to improve planning by highlighting theoretical and practical gaps, support the development of meaningful process indicators for tracking, and aid in both reproducing successful studies and identifying failures of unsuccessful studies [16]. They are also useful at other stages of research and for program implementation, such as organizing a project/grant application/study protocol, presenting findings from a completed project, and synthesizing the findings of multiple projects [17].

Logic models can also be used in the context of program theory, an explicit statement of how a project/strategy/intervention/program/policy is understood to contribute to a chain of intermediate results that eventually produce the intended/observed impacts [18]. Program theory specifies both a Theory of Change (i.e., the central processes or drivers by which change comes about following a formal theory or tacit understanding) and a Theory of Action (i.e., how program components are constructed to activate the Theory of Change) [16]. Inherent within program theory is causal chain modeling. In implementation research, Fernandez et al. [19] applied mapping methods to implementation strategies to postulate the ways in which changes to the system affect downstream implementation and clinical outcomes. Their work presents an implementation mapping logic model based on Proctor et al. [20, 21], which is focused primarily on the selection of implementation strategy(s) rather than a complete depiction of the conceptual model linking all implementation research elements (i.e., determinants, strategies, mechanisms of action, implementation outcomes, clinical outcomes) in the detailed manner we describe in this article.

Development of the IRLM

The IRLM began out of a recognition that implementation research presents some unique challenges due to the field’s distinct and still codifying terminology [22] and its use of implementation-specific and nonspecific (borrowed from other fields) theories, models, and frameworks [7]. The development of the IRLM occurred through a series of case applications. This began with a collaboration between investigators at Northwestern University and the Shirley Ryan AbilityLab in which the IRLM was used to study the implementation of a new model of patient care in a new hospital and in other related projects [23]. Next, the IRLM was used with three already-funded implementation research projects to plan for and describe the prospective aspects of the trials, as well as with an ongoing randomized roll-out implementation trial of the Collaborative Care Model for depression management [Smith JD, Fu E, Carroll AJ, Rado J, Rosenthal LJ, Atlas JA, Burnett-Zeigler I, Carlo A, Jordan N, Brown CH, Csernansky J: Collaborative care for depression management in primary care: a randomized rollout trial using a type 2 hybrid effectiveness-implementation design, submitted for publication]. It was also applied in the later stages of a nearly completed implementation research project of a family-based obesity management intervention in pediatric primary care to describe what had occurred over the course of the 3-year trial [24]. Last, the IRLM was used as a training tool in a 2-day training with 63 grantees of NIH-funded planning project grants funded as part of the Ending the HIV Epidemic initiative [25]. Results from a survey of the participants in the training are reported in the “Results” section. From these preliminary activities, we identified a number of ways that the IRLM could be used, described in the section “Using the IRLM for different purposes and stages of research.”

Methods

The Implementation Research Logic Model

Structure

In developing the IRLM, we began with the common “pipeline” logic model format used by AHRQ, CDC, NIH, PCORI, and others [16]. This structure was chosen due to its familiarity with funders, investigators, readers, and reviewers. Although a number of characteristics of the pipeline logic model can be applied to implementation research studies, there is an overall misfit due to implementation research’s focusing on the systems that support adoption and delivery of health practices; involving multiple levels within one or more systems; and having its own unique terminology and frameworks [3, 22, 26]. We adapted the typical evaluation logic model to integrate existing implementation science frameworks as its core elements while keeping to the same aim of facilitating causal modeling.

The most common IRLM format is depicted in Fig. 1. Additional File A1 is a fillable PDF version of Fig. 1. In certain situations, it might be preferable to include the evidence-based intervention (EBI; defined as a clinical, preventive, or educational protocol or a policy, principle, or practice whose effects are supported by research [27]) (Fig. 2) to demonstrate alignment of contextual factors (determinants) and strategies with the components and characteristics of the clinical intervention/policy/program and to disentangle it from the implementation strategies. Foremost in these indications are “home-grown” interventions, whose components and theory of change may not have been previously described, and novel interventions that are early in the translational pipeline, which may require greater detail for the reader/reviewer. Variant formats are provided as Additional Files A2 to A4 for use with situations and study designs commonly encountered in implementation research, including comparative implementation studies (A2), studies involving multiple service contexts (A3), and implementation optimization designs (A4). Further, three illustrative IRLMs are provided, with brief descriptions of the projects and the utility of the IRLM (A5, A6, and A7).

Fig. 1 Implementation Research Logic Model (IRLM) Standard Form. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011.

Fig. 2 Implementation Research Logic Model (IRLM) Standard Form with Intervention. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011.


Core elements and theory

The IRLM specifies the relationships between determinants of implementation, implementation strategies, the mechanisms of action resulting from the strategies, and the implementation and clinical outcomes affected. These core elements are germane to every implementation research project in some way. Accordingly, the generalized theory of the IRLM posits that (1) implementation strategies selected for a given EBI are related to implementation determinants (context-specific barriers and facilitators), (2) strategies work through specific mechanisms of action to change the context or the behaviors of those within the context, and (3) implementation outcomes are the proximal impacts of the strategy and its mechanisms, which then relate to the clinical outcomes of the EBI. Articulated in part by others [9, 12, 21, 28, 29], this causal pathway theory is largely explanatory and details the Theory of Change and the Theory of Action of the implementation strategies in a single model. The EBI Theory of Action can also be displayed within a modified IRLM (see Additional File A4). We now briefly describe the core elements and discuss conceptual challenges in how they relate to one another and to the overall goals of implementation research.
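To make this generalized theory concrete, the following minimal Python sketch (not part of the original article; every element name is hypothetical) shows one way the IRLM’s core elements and their hypothesized linkages could be represented as data, for example when tracking pathways across a project:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """One IRLM cell: a determinant, strategy, mechanism, or outcome."""
    name: str
    kind: str  # "determinant" | "strategy" | "mechanism" | "impl_outcome" | "clinical_outcome"

# Hypothetical elements tracing the IRLM's generalized causal pathway
barrier = Element("Low provider knowledge of the EBI", "determinant")
strategy = Element("Conduct educational meetings", "strategy")
mechanism = Element("Increased provider self-efficacy", "mechanism")
adoption = Element("Adoption", "impl_outcome")
symptoms = Element("Patient symptom reduction", "clinical_outcome")

# Links follow the theory: determinant -> strategy -> mechanism ->
# implementation outcome -> clinical outcome
links = [(barrier, strategy), (strategy, mechanism),
         (mechanism, adoption), (adoption, symptoms)]

for src, dst in links:
    print(f"{src.kind}: {src.name}  ->  {dst.kind}: {dst.name}")
```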

Determinants

Determinants of implementation are factors that might prevent or enable implementation (i.e., barriers and facilitators). Determinants may act as moderators, “effect modifiers,” or mediators, thus indicating that they are links in a chain of causal mechanisms [12]. Common determinant frameworks are the Consolidated Framework for Implementation Research (CFIR) [30] and the Theoretical Domains Framework [31].

Implementation strategies

Implementation strategies are supports, changes to, and interventions on the system to increase adoption of EBIs into usual care [32]. Consideration of determinants is commonly used when selecting and tailoring implementation strategies [28, 29, 33]. Providing the theoretical or conceptual reasoning for strategy selection is recommended [9]. The IRLM can be used to specify the proposed relationships between strategies and the other elements (determinants, mechanisms, and outcomes) and assists with considering, planning, and reporting all strategies in place during an implementation research project that could contribute to the outcomes and resulting changes.

Because implementation research occurs within dynamic delivery systems with multiple factors that determine success or failure, the field has experienced challenges identifying consistent links between individual barriers and specific strategies to overcome them. For example, the Expert Recommendations for Implementing Change (ERIC) compilation of strategies [32] was used to determine which strategies would best address contextual barriers identified by CFIR [29]. An online CFIR–ERIC matching process completed by implementation researchers and practitioners resulted in a large degree of heterogeneity and few consistent relationships between barrier and strategy, meaning the relationship is rarely one-to-one (e.g., a single strategy is often linked to multiple barriers, and more than one strategy is often needed to address a single barrier). Moreover, when implementation outcomes are considered, researchers often find that to improve one outcome, more than one contextual barrier needs to be addressed, which might in turn require one or more strategies.

Frequently, the reporting of implementation research studies focuses on the strategy or strategies that were introduced for the research study, without due attention to other strategies already used in the system or additional supporting strategies that might be needed to implement the target strategy. The IRLM allows for the comprehensive specification of all introduced and present strategies, as well as their changes (adaptations, additions, discontinuations) during the project.

Mechanisms of action

Mechanisms of action are processes or events through which an implementation strategy operates to affect desired implementation outcomes [12]. The mechanism can be a change in a determinant, a proximal implementation outcome, an aspect of the implementation strategy itself, or a combination of these in a multiple-intervening-effect model. An example of a causal process might be using training and fidelity monitoring strategies to improve delivery agents’ knowledge and self-efficacy about the EBI in response to knowledge-related barriers in the service delivery system. This could raise delivery agents’ acceptability of the EBI, increase the likelihood of adoption, improve the fidelity of delivery, and lead to sustainment. Relatively few implementation studies formally test mechanisms of action, but this area of investigation has received significant attention more recently as the necessity to understand how strategies operate grows in the field [33–35].

Outcomes

Implementation outcomes are the effects of deliberate and purposive actions to implement new treatments, practices, and services [21]. They can be indicators of implementation processes, key intermediate outcomes in relation to service, or target clinical outcomes. Glasgow et al. [36–38] describe the interrelated nature of implementation outcomes as occurring in a logical, but not necessarily linear, sequence of adoption by a delivery agent, delivery of the innovation with fidelity, reach of the innovation to the intended population, and sustainment of the innovation over time. The combined impact of these nested outcomes, coupled with the size of the effect of the EBI, determines the population or public health impact of implementation [36]. Outcomes earlier in the sequence can be conceptualized as mediators and mechanisms of strategies on later implementation outcomes. Specifying which strategies are theoretically intended to affect which outcomes, through which mechanisms of action, is crucial for improving the rigor and reproducibility of implementation research and to testing theory.

Using the Implementation Research Logic Model

Guiding principles

One of the critical insights from our preliminary work was that the use of the IRLM should be guided by a set of principles rather than governed by rules. These principles are intended to be flexible both to allow for adaptation to the various types of implementation studies and evolution of the IRLM over time and to address concerns in the field of implementation science regarding specification, rigor, reproducibility, and transparency of design and process [5]. Given this flexibility of use, the IRLM will invariably require accompanying text and other supporting documents. These are described in the section “Use of supporting text and documents.”

Principle 1: Strive for comprehensiveness

Comprehensiveness increases transparency, can improve rigor, and allows for a better understanding of alternative explanations to the conclusions drawn, particularly in the presence of null findings for an experimental design. Thus, all relevant determinants, implementation strategies, and outcomes should be included in the IRLM.

Determinants

Concerning determinants, the valence should be noted as being either a barrier, a facilitator, neutral, or variable by study unit. This can be achieved by simply adding plus (+) or minus (–) signs for facilitators and barriers, respectively, or by using coding systems such as that developed by Damschroder et al. [39], which indicates the relative strength of the determinant on a scale: –2 (strong negative impact), –1 (weak negative impact), 0 (neutral or mixed influence), 1 (weak positive impact), and 2 (strong positive impact). The use of such a coding system could yield better specification compared to using study-specific adjectives or changing the name of the determinant (e.g., greater relative priority, addresses patient needs, good climate for implementation). It is critical to include all relevant determinants and not simply limit reporting to those that are hypothesized to be related to the strategies and outcomes, as there are complex interrelationships between determinants.
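As a hedged illustration of this valence coding, the sketch below applies the –2 to +2 scale described above; only the scale itself comes from Damschroder et al. [39], and the determinant names and ratings are invented:

```python
# The -2..+2 valence scale from Damschroder et al. [39]; determinant
# names and ratings below are illustrative only.
VALENCE = {
    -2: "strong negative impact",
    -1: "weak negative impact",
     0: "neutral or mixed influence",
     1: "weak positive impact",
     2: "strong positive impact",
}

determinants = {
    "Available resources": -2,
    "Patient needs and resources": 0,
    "Relative priority": 1,
    "Leadership engagement": 2,
}

# List determinants from strongest barrier to strongest facilitator
for name, score in sorted(determinants.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:+d} ({VALENCE[score]})")
```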

Implementation strategies

Implementation strategies should be reported in their entirety. When using the IRLM for planning a study, it is important to list all strategies in the system, including those already in use and those to be initiated for the purposes of the study, often in the experimental condition of the design. Second, strategies should be labeled to indicate whether they were (a) in place in the system prior to the study, (b) initiated prospectively for the purposes of the study (particularly for experimental study designs), (c) removed as a result of being ineffective or onerous, or (d) introduced during the study to address an emergent barrier or supplement other strategies because of low initial impact. This is relevant when using the IRLM for planning, as an ongoing tracking system, for retrospective application to a completed study, and in the final reporting of a study. There have been a number of processes proposed for tracking the use of and adaptations to implementation strategies over time [40, 41]. Each of these is more detailed than would be necessary for the IRLM, but the processes described provide a method for accurately tracking the temporal aspects of strategy use that fulfill the comprehensiveness principle.

Outcomes

Although most studies will indicate a primary implementation outcome, other outcomes are almost assuredly to be measured. Thus, they ought to be included in the IRLM. This guidance is given in large part due to the interdependence of implementation outcomes, such that adoption relates to delivery with fidelity, reach of the intervention, and potential for sustainment [36]. Similarly, the overall public health impact (defined as reach multiplied by the effect size of the intervention [38]) is inextricably tied to adoption, fidelity, acceptability, cost, etc. Although the study might justifiably focus on only one or two implementation outcomes, the others are nonetheless relevant and should be specified and reported. For example, it is important to capture potential unintended consequences and indicators of adverse effects that could result from the implementation of an EBI.
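To make the reach-by-effect-size definition cited above [38] concrete with hypothetical numbers: Impact = Reach × Effect size, so an EBI with a standardized effect size of 0.40 that reaches 25% of the eligible population yields an impact index of 0.25 × 0.40 = 0.10, half that of the same EBI delivered to 50% of the population (0.50 × 0.40 = 0.20).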

Principle 2: Indicate key conceptual relationships

Although the IRLM has a generalized theory (described earlier), there is a need to indicate the relationships between elements in a manner aligning with the specific theory of change for the study. Researchers ought to provide some form of notation to indicate these conceptual relationships using color-coding, superscripts, arrows, or a combination of the three. Such notations in the IRLM facilitate reference in the text to the study hypotheses, tests of effects, causal chain modeling, and other forms of elaboration (see “Use of supporting text and documents”). We prefer the use of superscripts to color or arrows in grant proposals and articles for practical purposes, as colors can be difficult to distinguish, and arrows can obscure text and contribute to visual convolution. When presenting the IRLM using presentation programs (e.g., PowerPoint, Keynote), colors and arrows can be helpful, and animations can make these connections dynamic and sequential without adding to visual complexity. This principle could also prove useful in synthesizing across similar studies to build the science of tailored implementation, where strategies are selected based on the presence of specific combinations of determinants. As previously indicated [29], there is much work to be done in this area.
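As a sketch of how such superscript-keyed pathways might be captured for later reference or synthesis (the paths below are hypothetical; the IRLM itself prescribes no particular data format):

```python
# Hypothetical superscript-keyed pathways; each chain mirrors a hypothesis
# that the accompanying text would reference (e.g., "hypothesis a").
pathways = {
    "a": ["Low relative priority (determinant)",
          "Engage leadership (strategy)",
          "Increased relative priority (mechanism)",
          "Adoption (implementation outcome)"],
    "b": ["Limited provider knowledge (determinant)",
          "Ongoing training (strategy)",
          "Improved self-efficacy (mechanism)",
          "Fidelity (implementation outcome)"],
}

for label, chain in pathways.items():
    print(f"Hypothesis {label}: " + " -> ".join(chain))
```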

Principle 3: Specify critical study design elements

These critical elements will vary by study design (e.g., a hybrid effectiveness-implementation trial, an observational study, which subsystems are assigned to the strategies). This principle concerns not only researchers but also service systems and communities, whose consent is necessary to carry out any implementation design [3, 42, 43].

Primary outcome(s)

Indicate the primary outcome(s) at each level of the study design (i.e., clinician, clinic, organization, county, state, nation). The levels should align with the specific aims of a grant application or the stated objective of a research report. In the case of a process evaluation or an observational study including the RE-AIM evaluation components [38] or the Proctor et al. [21] taxonomy of implementation outcomes, the primary outcome may be the product of the conceptual or theoretical model used when a priori outcomes are not clearly indicated. We also suggest including downstream health services and clinical outcomes even if they are not measured, as these are important for understanding the logic of the study and the ultimate health-related targets.

For quasi/experimental designs

When quasi/experimental designs [3, 4] are used, the independent variable(s) (i.e., the strategies that are introduced or manipulated or that otherwise differentiate study conditions) should be clearly labeled. This is important for internal validity and for differentiating conditions in multi-arm studies.

For comparative implementation trials

In the context of comparative implementation trials [3, 4], two or more competing implementation strategies are introduced for the purposes of the study (i.e., the comparison is not implementation-as-usual), and there is a need to indicate the determinants, strategies, mechanisms, and potentially outcomes that differentiate the arms (see Additional File A2). As comparative implementation can involve multiple service delivery systems, the determinants, mechanisms, and outcomes might also differ, though there must be at least one comparable implementation outcome. In our preliminary work applying the IRLM to a large-scale comparative implementation trial, we found that we needed to use an IRLM for each arm of the trial; a single IRLM was not possible because the strategies being tested occurred across two delivery systems and were, by design, very different. This is an example of the flexible use of the IRLM.

For implementation optimization designs

A number of designs are now available that aim to test processes of optimizing implementation. These include factorial, Sequential Multiple Assignment Randomized Trial (SMART) [44], adaptive [45], and roll-out implementation optimization designs [46]. These designs allow for (a) building time-varying adaptive implementation strategies based on the order in which components are presented [44], (b) evaluating the additive and combined effects of multiple strategies [44, 47], and (c) incorporating data-driven iterative changes to improve implementation in successive units [45, 46]. The IRLM in Additional File A4 can be used for such designs.
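For instance, a full factorial optimization design crosses each candidate strategy as an on/off factor, so the additive and combined effects can be estimated. A minimal sketch follows (the three strategy names are hypothetical, not drawn from the article):

```python
from itertools import product

# Hypothetical 2x2x2 factorial implementation-optimization design: each factor
# is an implementation strategy that is either off or on in a given condition.
factors = {
    "facilitation": (False, True),
    "audit_and_feedback": (False, True),
    "learning_collaborative": (False, True),
}

conditions = list(product(*factors.values()))
print(f"{len(conditions)} experimental conditions")  # 2^3 = 8
for cond in conditions:
    active = [name for name, on in zip(factors, cond) if on] or ["none"]
    print(", ".join(active))
```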

Additional specification options

Users of the IRLM are allowed to specify any number of additional elements that may be important to their study. For example, one could notate those elements of the IRLM that have been or will be measured versus those that were based on the researcher’s prior studies or inferred from findings reported in the literature. Users can also indicate when implementation strategies differ by level or unit within the study. In large multisite studies, strategies might not be uniform across all units, particularly those strategies that already exist within the system. Similarly, there might be a need to increase the dose of certain strategies to address the relative strengths of different determinants within units.

Using the IRLM for different purposes and stages of research

Commensurate with logic models more generally, the IRLM can be used for planning and organizing a project, carrying out a project (as a roadmap), reporting and presenting the findings of a completed project, and synthesizing the findings of multiple projects or of a specific area of implementation research, such as what is known about how learning collaboratives are effective within clinical care settings.


Planning

When the IRLM is used for planning, the process of populating each of the elements often begins with the known parameter(s) of the study. For example, if the problem is improving the adoption and reach of a specific EBI within a particular clinical setting, the implementation outcomes and context, as well as the EBI, are clearly known. The downstream clinical outcomes of the EBI are likely also known. Working from the two “bookends” of the IRLM, the researchers and community partners and/or organization stakeholders can begin to fill in the implementation strategies that are likely to be feasible and effective and then posit conceptually derived mechanisms of action. In another example, only the EBI and primary clinical outcomes were known. The IRLM was useful in considering different scenarios for what strategies might be needed and appropriate to test the implementation of the EBI in different service delivery contexts. The IRLM was a tool for the researchers and stakeholders to work through these multiple options.

Executing

When we used the IRLM to plan for the execution of funded implementation studies, the majority of the parameters were already proposed in the grant application. However, through completing the IRLM prior to the start of the study, we found that a number of important contextual factors had not been considered, additional implementation strategies were needed to complement the primary ones proposed in the grant, and mechanisms needed to be added and measured. At the time of award, mechanisms were not an expected component of implementation research projects as they will likely become in the future.

Reporting

For another project, the IRLM was applied retrospectively to report on the findings and overall logic of the study. Because nearly all elements of the IRLM were known, we approached completion of the model as a means of showing what happened during the study and to accurately report the hypothesized relationships that we observed. These relationships could be formally tested using causal pathway modeling [12] or other path analysis approaches with one or more intervening variables [48].
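As a hedged illustration of such a test, the sketch below runs a simple regression-based single-mediator analysis (product of coefficients) on simulated data. It is not the authors’ analysis; real applications would use the cited causal pathway and intervening-variable methods [12, 48] with appropriate inference (e.g., bootstrapped confidence intervals):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data for a single-mediator pathway:
# strategy exposure (X) -> mechanism (M) -> implementation outcome (Y).
rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)     # strategy introduced (1) or not (0)
m = 0.5 * x + rng.normal(size=n)            # mechanism of action
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # implementation outcome

# Path a: strategy -> mechanism
fit_a = sm.OLS(m, sm.add_constant(x)).fit()
# Paths b and c': mechanism and strategy -> outcome
fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

a = fit_a.params[1]  # effect of strategy on mechanism
b = fit_b.params[2]  # effect of mechanism on outcome, adjusting for strategy
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {fit_b.params[1]:.3f}")
```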

Synthesizing

In our preliminary work with the IRLM, we used it in each of the first three ways; the fourth (synthesizing) is ongoing within the National Cancer Institute’s Improving the Management of symPtoms during And Following Cancer Treatment (IMPACT) research consortium. The purpose is to draw conclusions for the implementation of an EBI in a particular context (or across contexts) that are shared and generalizable to provide a guide for future research and implementation.

Use of supporting text and documents

While the IRLM provides a good deal of information about a project in a single visual, researchers will need to convey additional details about an implementation research study through the use of supporting text, tables, and figures in grant applications, reports, and articles. Some elements that require elaboration are (a) preliminary data on the assessment and valence of implementation determinants; (b) operationalization/detailing of the implementation strategies being used or observed, using established reporting guidelines [9] and labeling conventions [32] from the literature; (c) hypothesized or tested causal pathways [12]; (d) process, service, and clinical outcome measures, including the psychometric properties, method, and timing of administration, respondents, etc.; (e) study procedures, including subject selection, assignment to (or observation of natural) study conditions, and assessment throughout the conduct of the study [4]; and (f) the implementation plan or process for following established implementation frameworks [49–51]. By utilizing superscripts, subscripts, and other notations within the IRLM, as previously suggested, it is easy to refer to (a) hypothesized causal paths in theoretical overviews and analytic plan sections, (b) planned measures for determinants and outcomes, and (c) specific implementation strategies in text, tables, and figures.

Results

Evidence of IRLM utility and acceptability

The IRLM was used as the foundation for a training in implementation research methods to a group of 65 planning projects awarded under the national Ending the HIV Epidemic initiative. One investigator (project director or co-investigator) and one implementation partner (i.e., a collaborator from a community service delivery system) from each project were invited to attend a 2-day in-person summit in Chicago, IL, in October 2019. One hundred thirty-two participants attended, representing 63 of the 65 projects. A survey, which included demographics and questions pertaining to the Ending the HIV Epidemic, was sent to potential attendees prior to the summit; 129 individuals responded, including all 65 project directors, 13 co-investigators, and 51 implementation partners (62% female). Those who indicated an investigator role (n = 78) received additional questions about prior implementation research training (e.g., formal coursework, workshop, self-taught) and related experiences (e.g., involvement in a funded implementation project, program implementation, program evaluation, quality improvement) and the stage of their project (i.e., exploration, preparation, implementation, sustainment [50]).

Approximately 6 weeks after the summit, 89 attendees (69%) completed a post-training survey comprising more than 40 questions about their overall experience. Though the invitation to complete the survey made no mention of the IRLM, it included 10 items related to the IRLM and one more generally about the logic of implementation research, each rated on a 4-point scale (1 = not at all, 2 = a little, 3 = moderately, 4 = very much; see Table 1). Forty-two investigators (65% of projects) and 24 implementation partners indicated attending the training and began and completed the survey (68.2% female). Of the 66 respondents who attended the training, 100% completed all 11 IRLM items, suggesting little potential response bias.

Table 1 provides the means, standard deviations, and percent of respondents endorsing either the “moderately” or “very” response options. Results were promising for the utility of the IRLM on the majority of the dimensions assessed. More than 50% of respondents indicated that the IRLM was “moderately” or “very” helpful on all questions. Overall, 77.6% (M = 3.18, SD = .827) of respondents indicated that their knowledge of the logic of implementation research had increased either moderately or very much after the 2-day training. At the time of the survey, when respondents were about 2.5 months into their 1-year planning projects, 44.6% indicated that they had already been able to complete a full draft of the IRLM.

Additional analyses using a one-way analysis of variance indicated no statistically significant differences in responses to the IRLM questions between investigators and implementation partners. However, three items approached significance: planning the project (F = 2.460, p = .055), clearly reporting and specifying how the project is to be conducted (F = 2.327, p = .066), and knowledge of the logic of implementation research (F = 2.107, p = .091). In each case, scores were higher for the investigators compared to the implementation partners, suggesting that perhaps the knowledge gap in implementation research lay more in the academic realm than among community partners, who may not have a focus on research but whose day-to-day roles include the implementation of evidence-based practices in the real world. Lastly, analyses using ordinal logistic regression did not yield any significant relationship between responses to the IRLM survey items and prior training (n = 42 investigators who attended the training and completed the post-training survey), prior related research experience (n = 42), or project stage of implementation (n = 66). This suggests that the IRLM is a useful tool for both investigators and implementers with varying levels of prior exposure to implementation research concepts and across all stages of implementation research. As a result of this training, the IRLM is now a required element in the FY2020 Ending the HIV Epidemic Centers for AIDS Research/AIDS Research Centers Supplement Announcement released March 2020 [15].
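For readers unfamiliar with the group comparison reported above, the sketch below shows the form of a one-way ANOVA between two respondent groups; the ratings are fabricated for illustration and the actual survey data are not reproduced here:

```python
from scipy import stats

# Hypothetical item ratings (1-4 scale) split by respondent role, mirroring
# the investigator vs. implementation partner comparison described above.
investigators = [4, 3, 3, 4, 2, 3, 4, 3]
partners = [3, 2, 3, 2, 3, 2, 4, 2]

f_stat, p_value = stats.f_oneway(investigators, partners)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```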

Table 1 Survey results

To what extent was the Implementation Research Logic Model (IRLM) helpful in…                Mean   SD     % “Moderately” or “Very”
  improving the rigor and reproducibility                                                    3.05   .885   77.7%
  serving as a “roadmap” for how the project is to be carried out over time                  3.08   .950   74.0%
  clearly reporting and specifying how the project is to be conducted                        2.94   .909   67.8%
  understanding the connections between determinants, strategies, mechanisms, and outcomes  2.92   .957   66.3%
  identifying gaps in the implementation research logic of their project                     2.86   1.021  64.2%
  deepening their knowledge of implementation science methods                                2.83   .959   62.9%
  planning the project                                                                       2.82   1.088  61.3%
  developing consensus and understanding of the project among diverse stakeholders involved  2.75   1.090  58.8%
  identifying gaps in new research questions or analyses                                     2.54   1.032  51.3%
To what extent…
  …were the worksheets provided during the summit helpful in completing the IRLM             3.02   .886   74.1%
  …has your knowledge on the logic of implementation research increased after the two-day training   3.18   .827   77.6%

Resources for using the IRLM

As the use of the IRLM for different study designs and purposes continues to expand and evolve, we envision supporting researchers and other program implementers in applying the IRLM to their own contexts. Our team at Northwestern University hosts web resources on the IRLM that include completed examples and tools to assist users in completing their model: templates in various formats (Figs. 1 and 2, Additional Files A1 to A4, and others), a Quick Reference Guide (Additional File A8), and a series of worksheets that provide guidance on populating the IRLM (Additional File A9). These will be available at https://cepim.northwestern.edu/implementationresearchlogicmodel/.

Discussion

The IRLM provides a compact visual depiction of an implementation project and is a useful tool for academic–practice collaboration and partnership development. Used in conjunction with supporting text, tables, and figures to detail each of the primary elements, the IRLM has the potential to improve a number of aspects of implementation research as identified in the results of the post-training survey. The usability of the IRLM is high for seasoned and novice implementation researchers alike, as evidenced by our survey results and preliminary work. Its use in the planning, executing, reporting, and synthesizing of implementation research could increase the rigor and transparency of complex studies and ultimately improve reproducibility, a challenge in the field, by offering a common structure to increase consistency and a method for more clearly specifying links and pathways to test theories.

Implementation occurs across the gamut of contexts and settings. The IRLM can be used when large organizational change is being considered, such as a new strategic plan with multifaceted strategies and outcomes. Within the narrower scope of a single EBI in a specific setting, the larger organizational context still ought to be included as inner setting determinants (i.e., the impact of the organizational initiative on the specific EBI implementation project) and as implementation strategies (i.e., the specific actions being done to make the organizational change a reality that could be leveraged to implement the EBI or could affect the success of implementation). The IRLM has been used by our team to plan for large systemic changes and to initiate capacity-building strategies to address readiness to change (structures, processes, individuals) through strategic planning and leadership engagement at multiple levels in the organization. This aspect of the IRLM continues to evolve.

Among the drawbacks of the IRLM is that it might be viewed as a somewhat simplified format. This represents the challenges of balancing depth and detail with parsimony, ease of comprehension, and ease of use. The structure of the IRLM may inhibit creative thinking if applied too rigidly, which is among the reasons we provide numerous examples of different ways to tailor the model to the specific needs of different project designs and parameters. Relatedly, we encourage users to iterate on the design of the IRLM to increase its utility.

Conclusions

The promise of implementation science lies in the ability to conduct rigorous and reproducible research, to clearly understand the findings, and to synthesize findings from which generalizable conclusions can be drawn and actionable recommendations for practice change emerge. As scientists and implementers have worked to better define the core methods of the field, the need for theory-driven, testable integration of the foundational elements involved in impactful implementation research has become more apparent. The IRLM is a tool that can aid the field in addressing this need and moving toward the ultimate promise of implementation research to improve the provision and quality of healthcare services for all people.

Supplementary information

Supplementary information accompanies this paper at https://doi.org/10.1186/s13012-020-01041-8.

Additional file 1. IRLM Fillable PDF form

Additional file 2. IRLM for Comparative Implementation

Additional file 3. IRLM for Implementation of an Intervention Across or Linking Two Contexts

Additional file 4. IRLM for an Implementation Optimization Study

Additional file 5. IRLM example 1: Faith in Action: Clergy and Community Health Center Communication Strategies for Ending the Epidemic in Mississippi and Arkansas

Additional file 6. IRLM example 2: Hybrid Type II Effectiveness–Implementation Evaluation of a City-Wide HIV System Navigation Intervention in Chicago, IL

Additional file 7. IRLM example 3: Implementation, Spread, and Sustainment of Physical Therapy for Mild Parkinson’s Disease through a Regional System of Care

Additional file 8. IRLM Quick Reference Guide

Additional file 9. IRLM Worksheets

Abbreviations

CFIR: Consolidated Framework for Implementation Research; EBI: Evidence-based intervention; ERIC: Expert Recommendations for Implementing Change; IRLM: Implementation Research Logic Model

Acknowledgements

The authors wish to thank our colleagues who provided input at different stages of developing this article and the Implementation Research Logic Model, and for providing the examples included in this article: Hendricks Brown, Brian Mustanski, Kathryn Macapagal, Nanette Benbow, Lisa Hirschhorn, Richard Lieber, Piper Hansen, Leslie O’Donnell, Allen Heinemann, Enola Proctor, Courtney Wolk-Benjamin, Sandra Naoom, Emily Fu, Jeffrey Rado, Lisa Rosenthal, Patrick Sullivan, Aaron Siegler, Cady Berkel, Carrie Dooyema, Lauren Fiechtner, Jeanne Lindros, Vinny Biggs, Gerri Cannon-Smith, Jeremiah Salmon, Sujata Ghosh, Alison Baker, Jillian MacDonald, Hector Torres and the Center on Halsted in Chicago, Michelle Smith, Thomas Dobbs, and the pastors who work tirelessly to serve their communities in Mississippi and Arkansas.

Authors’ contributions

JDS conceived of the Implementation Research Logic Model. JDS, MR, and DL collaborated in developing the Implementation Research Logic Model as presented and in the writing of the manuscript. All authors approved of the final version.

Funding

This study was supported by grant P30 DA027828 from the National Institute on Drug Abuse, awarded to C. Hendricks Brown; grant U18 DP006255 to Justin Smith and Cady Berkel; grant R56 HL148192 to Justin Smith; grant UL1 TR001422 from the National Center for Advancing Translational Sciences to Donald Lloyd-Jones; grant R01 MH118213 to Brian Mustanski; grant P30 AI117943 from the National Institute of Allergy and Infectious Diseases to Richard D’Aquila; grant UM1 CA233035 from the National Cancer Institute to David Cella; a grant from the Woman’s Board of Northwestern Memorial Hospital to John Csernansky; grant F32 HS025077 from the Agency for Healthcare Research and Quality; grant NIFTI 2016-20178 from the Foundation for Physical Therapy; the Shirley Ryan AbilityLab; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through grant R25 MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research & Development Service, and Quality Enhancement Research Initiative (QUERI) to Enola Proctor. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Institutes of Health, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, the Department of Veterans Affairs, or any other part of the US Department of Health and Human Services.

Availability of data and materials

Not applicable.

Ethics approval and consent to participate

Not applicable. This study did not involve human subjects.

Consent for publication

Not applicable.

Competing interests

None declared.

Author details

1Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA. 2Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Department of Preventive Medicine, Department of Medical Social Sciences, and Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA. 3Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine; Institute for Sexual and Gender Minority Health and Wellbeing, Northwestern University, Chicago, Illinois, USA. 4Shirley Ryan AbilityLab and Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences and Department of Physical Medicine and Rehabilitation, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA.

Received: 3 April 2020 Accepted: 3 September 2020

References

1. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, et al. Promoting an open research culture. Science. 2015;348:1422–5.
2. Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci. 2015;10:1–12.
3. Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.
4. Hwang S, Birken SA, Melvin CL, Rohweder CL, Smith JD. Designs and methods for implementation research: advancing the mission of the CTSA program. J Clin Transl Sci. 2020: available online.
5. Smith JD. An Implementation Research Logic Model: a step toward improving scientific rigor, transparency, reproducibility, and specification. Implement Sci. 2018;14:S39.
6. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.
7. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
8. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2019.
9. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8.
10. Kessler RS, Purcell EP, Glasgow RE, Klesges LM, Benkeser RM, Peek CJ. What does it mean to “employ” the RE-AIM model? Eval Health Prof. 2013;36:44–66.
11. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7:e013318.
12. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6.
13. Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010;31:399–418.
14. WK Kellogg Foundation. Logic model development guide. Battle Creek, Michigan: WK Kellogg Foundation; 2004.
15. CFAR/ARC Ending the HIV Epidemic Supplement Awards. https://www.niaid.nih.gov/research/cfar-arc-ending-hiv-epidemic-supplement-awards.
16. Funnell SC, Rogers PJ. Purposeful program theory: effective use of theories of change and logic models. San Francisco, CA: John Wiley & Sons; 2011.
17. Petersen D, Taylor EF, Peikes D. The logic model: the foundation to implement, study, and refine patient-centered medical home models (issue brief). Mathematica Policy Research Reports; 2013.
18. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24:228–38.
19. Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, Ruiter RAC, Markham CM, Kok G. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7.
20. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36.
21. Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38.
22. Rabin BA, Brownson RC. Terminology for dissemination and implementation research. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2017. p. 19–45.
23. Smith JD, Rafferty MR, Heinemann AW, Meachum MK, Villamar JA, Lieber RL, Brown CH. Evaluation of the factor structure of implementation research measures adapted for a novel context and multiple professional roles. BMC Health Serv Res. 2020.
24. Smith JD, Berkel C, Jordan N, Atkins DC, Narayanan SS, Gallo C, Grimm KJ, Dishion TJ, Mauricio AM, Rudo-Stern J, et al. An individually tailored family-centered intervention for pediatric obesity in primary care: study protocol of a randomized type II hybrid implementation-effectiveness trial (Raising Healthy Children study). Implement Sci. 2018;13:1–15.
25. Fauci AS, Redfield RR, Sigounas G, Weahkee MD, Giroir BP. Ending the HIV epidemic: a plan for the United States: editorial. JAMA. 2019;321:844–5.
26. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.
27. Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.
28. Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, Jaeger C, Steinhaeuser J, Godycki-Cwirko M, Kowalczyk A, et al. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. 2014;9:102.
29. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42.
30. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4.
31. Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, Foy R, Duncan EM, Colquhoun H, Grimshaw JM, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.
32. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10.
33. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, McHugh SM, Weiner BJ. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7.
34. PAR-19-274: Dissemination and implementation research in health (R01 Clinical Trial Optional). https://grants.nih.gov/grants/guide/pa-files/PAR-19-274.html.
35. Edmondson D, Falzon L, Sundquist KJ, Julian J, Meli L, Sumner JA, Kronish IM. A systematic review of the inclusion of mechanisms of action in NIH-funded intervention trials to improve medication adherence. Behav Res Ther. 2018;101:12–9.
36. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103:e38–46.
37. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, Ory MG, Estabrooks PA. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7.
38. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.
39. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2016;7:233–41.
40. Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15:15.
41. Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49:525–37.
42. Brown CH, Kellam S, Kaupert S, Muthén B, Wang W, Muthén L, Chamberlain P, PoVey C, Cady R, Valente T, et al. Partnerships for the design, conduct, and analysis of effectiveness, and implementation research: experiences of the Prevention Science and Methodology Group. Adm Policy Ment Health Ment Health Serv Res. 2012;39:301–16.
43. McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, Gallo C, Wilensky U, Hjorth A, Mustanski B, et al. Implementation research methodologies for achieving scientific equity and health equity. Ethn Dis. 2019;29:83–92.
44. Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32:S112–8.
45. Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, Gibbons RD. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.
46. Smith JD. The roll-out implementation optimization design: integrating aims of quality improvement and implementation sciences. Submitted for publication; 2020.
47. Dziak JJ, Nahum-Shani I, Collins LM. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations. Psychol Methods. 2012;17:153–75.
48. MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychol Methods. 2002;7:83–104.
49. Graham ID, Tetroe J. Planned action theories. In: Straus S, Tetroe J, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. Hoboken, NJ: Wiley-Blackwell; 2009.
50. Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14:1.
51. Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19:297–304.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


