The Participant-Reported Implementation Update and Score (PRIUS): A Novel Method for Capturing Implementation-Related Data Over Time

Published: February 19, 2021
doi: 10.3791/61738

Summary

This protocol describes a novel method for collecting and analyzing data related to ongoing implementation called the Participant-Reported Implementation Update and Score (PRIUS). The PRIUS method allows for the efficient and systematic capture of data over time and from multiple viewpoints in healthcare settings.

Abstract

“Implementation” of new initiatives in healthcare settings typically encompasses two distinct components: a “clinical intervention” plus accompanying “implementation strategies” that support putting the clinical intervention into day-to-day practice. A novel clinical intervention, for example, might consist of a new medication, a new protocol, a new device, or a new program. As clinical interventions are not self-implementing, however, they nearly always require effective implementation strategies in order to succeed. Implementation strategies set out to engage healthcare providers, staff and patients in ways that increase the likelihood of the new initiative being successfully adopted, a process that often involves behavior change and new ways of thinking by participants. One of the challenges in studying implementation is that it can be difficult to collect data about the status and progress of implementation, including participants’ own perspectives and experiences concerning implementation to date. This protocol describes a novel method for collecting and analyzing data related to ongoing implementation called the Participant-Reported Implementation Update and Score, or PRIUS. The PRIUS method allows for the efficient and systematic capture of qualitative and quantitative data that can provide a detailed and nuanced account of implementation over time and from multiple viewpoints. This longitudinal method can enable researchers, as well as implementation leaders and organizational stakeholders, to monitor implementation progress more closely, conduct formative evaluation, identify improvement opportunities, and gauge the effect of any implementation changes on a rolling basis.

Introduction

"Implementation" of new initiatives in healthcare settings typically encompasses two distinct components: a "clinical intervention" plus accompanying "implementation strategies" that support putting the clinical intervention into day-to-day practice. A simple analogy to explain the difference is that a clinical intervention represents "the new thing" to be put into practice, whereas an implementation strategy encourages people to "do the new thing1." A novel clinical intervention, for example, might consist of a new medication, a new protocol, a new device, or a new program. As clinical interventions are not self-implementing, however, they nearly always require effective implementation strategies in order to succeed. Implementation strategies set out to engage healthcare providers, staff and patients in ways that increase the likelihood of participants adopting the new initiative, a process that often involves behavior change and new ways of thinking2. One of the long-standing challenges in studying implementation is that it can be extremely difficult to collect data about the current status of implementation related to either of these two components, including participants' own perspectives on implementation progress. This protocol describes a novel method for collecting and analyzing data related to ongoing implementation called the Participant-Reported Implementation Update and Score, or PRIUS. The PRIUS method allows healthcare researchers to capture qualitative and quantitative data efficiently and systematically related to how implementation unfolds over time and how it appears from the vantage points of multiple participants.

The PRIUS occupies a unique niche within the larger family of methods used to study healthcare implementation. Other approaches that predate the PRIUS include Rapid Evaluation and Assessment Methods (REAM), which provides a way for researchers to collect and analyze data on an accelerated timetable yet maintain rigor3,4,5, as well as matrix displays, where researchers integrate large amounts of data into rows and columns that they can then sort and sift to support ongoing analyses6,7. A limitation of these methods, though, is that they can place substantial time and resource demands on researchers, and hence have been primarily used retrospectively to analyze prior implementation.

The use of prospective and longitudinal approaches for collecting implementation-related information can help retain accuracy in data collection, minimizing the possibility of hindsight bias (i.e., when the ultimate success or failure of an initiative influences reporting of earlier events) and recall bias (i.e., when participants do not remember prior events accurately)8. Other prospective and longitudinal approaches exist in implementation research but with different aims than the PRIUS. For example, a systematic method has been developed to describe and track local adaptations to implementation as they occur over time using a 10-item spreadsheet9. Another approach, known as "Periodic Reflections," provides a structured way for implementation core members to conduct monthly or bimonthly 30-60 minute telephone discussions with implementation team members to document ongoing implementation phenomena10.

The PRIUS, by contrast, is distinct from these other prospective methods in that it has been designed explicitly to address the need for (a) a brief and systematic method to collect data on the status and progress of implementation interventions as it happens (b) from the perspectives of front-line participants themselves. In the span of a 5-10 minute verbal check-in, the PRIUS captures both implementation developments deemed noteworthy by individual participants and their subjective input (in the form of scores assigned on a +3 to -3 scale) about the perceived implications of those developments for ongoing implementation. The PRIUS approach, moreover, integrates qualitative and numerical information from participants for each implementation-related update, generating linked data that can be easily categorized, sorted and sifted to identify major themes as well as trends and patterns over time. The PRIUS method is succinct and organized around three simple verbal prompts: (1) "What are some things that happened over the past two weeks (or since the last time we spoke) that seem relevant from your perspective to the implementation of this project?"; (2) "What impact do you think each of these developments has had on implementation progress?"; and (3) "Why?"

The conceptual framework for the PRIUS is the Consolidated Framework for Implementation Research (CFIR)11, which provides an overall typology for understanding implementation in health services settings. The CFIR framework is both theory-based and evidence-based, and represents the accumulated result of over 50 years of research on implementation and diffusion. The CFIR framework encompasses five interrelated domains: intervention characteristics, outer setting, inner setting, individuals involved, and implementation process. The PRIUS method focuses especially on the intersection of the last three domains, asking individuals about their own perspectives related to the implementation process actively underway in the inner setting.

The PRIUS method is in the public domain and freely available for anyone to use. The protocol presented here further develops the PRIUS approach presented in an earlier publication12 and focuses exclusively on how to conduct the PRIUS with a single participant.

Protocol

This study was approved by the Indiana University Institutional Review Board (Protocol #1602800879). As part of this approval, the reviewing body granted a waiver of informed consent, as participation was entirely voluntary, there was minimal risk of harm, and the study involved no procedures for which written consent is normally required outside of the research context.

1. Participant-Reported Implementation Update and Score (PRIUS)

  1. Ask participant PRIUS prompt #1.
    1. At a mutually agreed-upon time, meet the participant over a videoconferencing application like Zoom.
    2. Pose the first verbal prompt to the participant: "What are some things that happened over the past two weeks (or since the last time we spoke) that seem relevant from your perspective to the implementation of this project?"
  2. Capture qualitative responses to prompt #1.
    1. Using a spreadsheet template like the one pictured below in Table 1, capture the first implementation-related development that the participant reports in the first row under the first column labeled "Update." It is not necessary to capture each development verbatim; a bullet-style summary is sufficient.
    2. Repeat as needed for each additional development reported by the participant during the same session.

Table 1. A blank 4-column PRIUS template.

  3. Ask participant PRIUS prompt #2.
    1. For each reported development, pose the second verbal prompt: "From your perspective, what impact would you say that development has had on the implementation of the project? Do you think it has had a strong, moderate or weak impact, and is the direction of that impact positive or negative?"
  4. Capture scored responses to prompt #2.
    1. Based on the participant's response, score each development in the template on a 7-point scale ranging from +3 to -3, with zero as the middle value. Next to each development entered into the spreadsheet in Step 1.2, enter the score in the column labeled "Score." Positive scores indicate a positive influence on the implementation process; negative scores indicate a negative influence; and zero indicates no discernible influence one way or the other. In terms of magnitude, 3 indicates a strong influence, 2 a moderate influence, and 1 a weak influence. For example, a PRIUS update with a "-2" score would indicate that the development was perceived to have a moderate negative impact on implementation of the project.
  5. Ask participant PRIUS prompt #3.
    1. For each development reported and scored, ask the participant to provide a brief rationale for each score with the third verbal prompt: "Why do you think it has had that impact?" (i.e., strong/moderate/weak as well as positive/negative). For example, ask a participant to explain why they thought a particular development had a "weak negative" impact (i.e., a "-1") on implementation progress.
  6. Capture qualitative responses to prompt #3.
    1. Capture each rationale that the participant reports, entering it in the third column of the spreadsheet, labeled "Rationale," alongside each score captured in Step 1.4. As before, a short text summary is sufficient.
  7. Note any additional information.
    1. Capture any additional information in the optional fourth column of the spreadsheet, labeled "Comments," as desired. This might include observed nonverbal cues such as facial expressions, body language, and tone of voice, and/or other relevant details.
  8. Check in with the participant on a recurring basis.
    1. Meet again with the participant every two weeks and repeat Steps 1.1 through 1.7.
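The four-column entry structure and 7-point scoring scale described in the protocol above can be sketched as a small data record. The following Python sketch is purely illustrative and not part of the published method; the class name, field names (mirroring the template's column labels), and the validation logic are assumptions added for the example.

```python
from dataclasses import dataclass

# 7-point PRIUS scale: -3 (strong negative) through 0 (no discernible
# influence) to +3 (strong positive).
VALID_SCORES = range(-3, 4)

@dataclass
class PriusEntry:
    """One row of the 4-column PRIUS template (illustrative sketch)."""
    update: str         # bullet-style summary of the development (prompt #1)
    score: int          # perceived impact on implementation (prompt #2)
    rationale: str      # participant's reason for the score (prompt #3)
    comments: str = ""  # optional nonverbal cues or other context

    def __post_init__(self):
        # Reject scores outside the +3 to -3 scale.
        if self.score not in VALID_SCORES:
            raise ValueError(f"PRIUS score must be between -3 and +3, got {self.score}")

# Example entry: a development perceived as having a moderate negative impact.
entry = PriusEntry(
    update="Slow patient enrollment this period",
    score=-2,
    rationale="Clinic staff unsure how to refer patients into the program",
)
```

Encoding each check-in this way keeps the qualitative summary and its numerical score linked in a single record, which is what later makes the dataset easy to sort and sift.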

Representative Results

As reported in an earlier publication12, twelve staff members at a single VA medical center in the midwestern United States participated in PRIUS sessions related to a quality improvement (QI) project over a 6-month period in 2016. This project yielded a total of 190 PRIUS items. The typical PRIUS session yielded three or four items (or "rows" in the PRIUS template). New PRIUS updates were discussed during regular implementation support team meetings, with PRIUS-based findings and insights shared with the leaders of the QI project.

PRIUS Updates (n=190)

| Score | Frequency (%) | Major Descriptive Themes |
|-------|---------------|--------------------------|
| +3 | 49 (26) | Positive experiences during general professional development and training events at beginning of TeleSleep project; positive interactions with vendor (ResMed); implementation of new electronic tools (e.g., TeleSleep template and tracking spreadsheet) very helpful |
| +2 | 28 (15) | Positive interactions between individuals from different clinical areas |
| +1 | 15 (8) | Small, incremental changes (e.g., enrolling an additional patient) |
| 0 | 26 (14) | Potential opportunities representing a change from status quo (e.g., Telehealth not reviewing TeleSleep data during Telehealth meetings; possibility for additional funds to become available in future to TeleSleep) |
| -1 | 16 (8) | Perceived lack of interest by frontline clinical staff in starting TeleSleep program |
| -2 | 33 (17) | Negative interactions between individuals from different clinical areas; slow patient enrollment in TeleSleep between February-April 2016; perceived need for additional training to meet PAP needs of patients |
| -3 | 23 (12) | TeleSleep workload heavier than originally anticipated; distrust and hostility among individuals from different clinical areas |

Table 2. Summary of PRIUS entries from a QI project by score and major descriptive themes (adapted from Miech et al., 2019)12.

Table 2 above shows a summary of 190 PRIUS entries from the QI project by score and major descriptive themes. The most frequent score applied by participants to PRIUS entries in this project was +3, the highest possible score on the 7-point scale of +3 to -3. Entries scored with a +3 included initial professional development and training events, interactions with the outside vendor and the implementation of new electronic tools. The next most frequent score was -2, the second-lowest possible score. Entries scored with a -2 included interactions between individuals from different clinical areas, slow initial patient enrollment, and the need for follow-up training for staff in order to meet the PAP needs of patients.

Longitudinal analysis of this same PRIUS dataset demonstrated major changes in program-related components over time: for example, PRIUS items related to professional development changed from a strong positive score (i.e., +3) in February to a moderate negative score (-2) by April.

Analysis of PRIUS data led directly to changes in the implementation intervention. During a formally scheduled meeting to review the PRIUS data, the implementation core support team shared with the clinical intervention leader that participants from different clinical areas had disparate and conflicting perspectives on project implementation. Subsequent discussion about this finding led to the realization that key frontline staff from two areas implementing the QI project had not been able to attend project meetings because of conflicting clinical duties, while managers and service chiefs in those same two clinical areas had been regular attendees. Key frontline staff across these two areas, furthermore, neither knew each other nor understood what happened when patients enrolled in the QI program transferred from one service to the other.

The following month, patient enrollment in the QI project nearly came to a halt, with only 5 patients enrolled when the original target had been 30. As a direct result of the earlier PRIUS discussion, the QI intervention leader and the implementation core support team decided to provide an "appreciation" lunch for all interested frontline staff in the two clinical areas. As part of this voluntary event, the clinical intervention leader thanked front-line staff personally for their participation in the program and shared data demonstrating the efficacy of the QI project to date in terms of improving patient outcomes. The appreciation lunch was well-attended and positively received, with frontline staff from the two services engaging in personal conversations for the first time.

Program recruitment improved dramatically after this lunch; within a month enrollment had to be temporarily suspended so staff could catch up with the influx of new patients. PRIUS updates collected after the event clearly singled out this lunch as an inflection point for implementation. Independent of one another, several individuals reported in their PRIUS updates that it was only after this event that they understood the overall project, knew about the positive impact they were having on patients through the program, and appreciated the perspectives of other frontline staff from different clinical areas13. As intended, the PRIUS made the leader of the clinical intervention aware of an implementation issue that otherwise would have remained undetected, helped inform the development of a course correction, and provided a way to evaluate the effect of that change on ongoing implementation.

Discussion

No special software is required to use the PRIUS method. PRIUS sessions can be administered by researchers, research assistants, and/or other research team members who have been appropriately trained.

Alternative settings for conducting PRIUS sessions include in-person check-ins or over the telephone. The recommended timeframe for checking in with participants is every two weeks; this frequency can be modified if necessary to more closely align with needs of specific projects.

The first PRIUS prompt ("What are some things that happened over the past two weeks (or since the last time we spoke) that seem relevant from your perspective to the implementation of this project?") bounds participant responses in three specific ways: it provides a specific reporting timeframe; it engages others by explicitly valuing their individual perspectives on implementation; and it focuses attention on only the most notable (i.e., relevant) implementation developments. In practice, participants typically report two or three developments during a check-in session.

The rationale behind the second prompt (i.e., "From your perspective, what impact would you say that development has had on the implementation of the project?") is to draw on the experience and perspectives of individual participants as they report each development, sorting those developments into discrete categories of perceived impact. The resulting numerical scores add a variable with seven possible values that links directly to the qualitative data reported in the developments, giving the implementation support team another way to sort, analyze, and report on implementation progress during data analysis.

The third prompt (i.e., "Why do you think it has had that impact?") explicitly invites participants to explain the reason behind their score. In many cases, this may be the first time that individual participants with extremely busy schedules have had an opportunity to step back and reflect on the meaning and influence of specific implementation-related developments.

The fourth column in the PRIUS template (i.e., "Comments") affords researchers the option to capture any important contextual details related to the reporting of specific implementation developments.

As with any research method, project teams planning to use the PRIUS method in a formal research setting should obtain Institutional Review Board approval or exemption as required by the responsible institution.

It is highly recommended that teams planning to use the PRIUS method schedule a dedicated training session for team members. Suggested length is 1-2 hours. During the training, describe and explain each step of the PRIUS method and watch the video accompanying this article. Pair up team members and have them practice administering PRIUS check-ins with one another in a simulated session, where each member of the pair takes turns being the person administering the PRIUS.

As individual PRIUS sessions are completed, project teams should integrate the PRIUS data on a rolling basis into a unified database in a secure online location that team members can access. A six-column spreadsheet is sufficient for this purpose, where the two additional columns capture the date and respondent for each entry. It is also a recommended practice to make a backup copy of the master PRIUS spreadsheet at regular intervals in case the original is damaged or corrupted.
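The six-column unified database described above (the four template columns plus date and respondent) can be maintained with nothing more than a CSV file. The sketch below is an illustrative assumption, not part of the published method; the function name, file layout, and respondent labels are invented for the example.

```python
import csv
import os

# The four PRIUS template columns plus the two added columns (date, respondent).
COLUMNS = ["date", "respondent", "update", "score", "rationale", "comments"]

def append_session(path, respondent, session_date, rows):
    """Append one check-in session to the unified PRIUS database.

    `rows` is a list of (update, score, rationale, comments) tuples
    captured during a single PRIUS session.
    """
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)  # write the header only once
        for update, score, rationale, comments in rows:
            writer.writerow([session_date.isoformat(), respondent,
                             update, score, rationale, comments])
```

Appending each session as it is completed keeps the master file current on a rolling basis; the same file can then be copied periodically as the recommended backup.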

The project team should conduct reviews of the ever-expanding PRIUS dataset on a periodic basis (e.g., once a month) to examine how recent entries compare with earlier entries, assess implementation-related trends and patterns, identify perceived strengths and weaknesses of implementation to date, and develop specific recommendations for implementation leaders about emergent opportunities to support and sustain implementation progress.
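One simple quantitative summary for such a periodic review is the average PRIUS score per month, which gives a rough trend indicator across the growing dataset. This is an illustrative sketch under the assumption that entries are available as (ISO date string, score) pairs; it is not a prescribed analysis from the method itself.

```python
from collections import defaultdict

def mean_score_by_month(entries):
    """Average PRIUS score per calendar month (rough trend indicator)."""
    totals = defaultdict(lambda: [0, 0])  # month -> [score sum, entry count]
    for day, score in entries:
        month = day[:7]  # "YYYY-MM" from "YYYY-MM-DD"
        totals[month][0] += score
        totals[month][1] += 1
    return {m: s / n for m, (s, n) in sorted(totals.items())}

# Invented example data echoing the shift reported in the Representative Results.
history = [("2016-02-10", 3), ("2016-02-24", 2), ("2016-04-05", -2), ("2016-04-19", -1)]
print(mean_score_by_month(history))  # {'2016-02': 2.5, '2016-04': -1.5}
```

A falling monthly average, as in this toy example, is exactly the kind of signal that would prompt the team to drill back into the linked qualitative updates and rationales.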

When preparing to conduct the PRIUS, the research team should determine the timeframe over which the PRIUS check-in sessions will occur, including start and end dates. A sample timeframe, for example, might involve administering the PRIUS method for a 3-month period, with individual PRIUS check-in sessions taking place roughly once every two weeks; in this example, the average participant would complete a total of five or six PRIUS check-in sessions.

It is also recommended that the research team send a general announcement to potential participants in advance to notify them that they may receive an invitation to participate in brief 5-minute check-ins to learn more about their perspectives on how implementation is faring. This message ideally would be sent through regular and established communication channels, such as email messages and/or announcements at regular staff meetings. The research team may wish to highlight the following points in the message:

1. participation is voluntary;

2. no preparation is necessary in advance of a PRIUS check-in;

3. the 5-minute check-ins will occur in a spoken conversation (e.g., over the phone, in person or over a videoconferencing application like Zoom) at a time convenient for the participant;

4. comments can be anonymous if desired;

5. the purpose of the check-ins is to develop an overall picture of how implementation is going throughout the organization – including the identification of problems and challenges – in order to improve the implementation process.

An important decision for the research team to consider at this juncture is whom to invite to participate in the PRIUS check-in sessions. This group of participants should represent a diverse sample of individuals affected by implementation; it is neither necessary nor advisable to invite all participants. The research team may wish to invite both individuals who are thought to be supportive of the program as well as those who are thought to be critical or skeptical. The overall number of participants to invite should be dictated by the number of implementation core support team members available to administer the PRIUS check-ins, with each implementation core member assigned 1-3 individual participants. Keep in mind that anyone not extended an initial invitation can be added at a later point in time if desired.

After a group of participants has been identified, the implementation core team should pair specific team members with specific participants. It is suggested that the same implementation core member conduct the PRIUS check-in sessions with the same participant over the full timeframe, allowing rapport and trust to develop within each dyad.

Before the first PRIUS session, implementation core members may find it helpful to conduct individual outreach to the participants with whom they have been paired for the PRIUS check-in sessions. Ideally, this outreach would take place informally and allow for a two-way conversation about the purpose and structure of the PRIUS check-ins. This individual outreach could emphasize in particular that the PRIUS is voluntary, that no preparation is necessary in advance of a session, and that the 5-minute check-ins occur at a time convenient for the participant. If the participant agrees, and if the IRB requirements (if any) for administering the PRIUS method have been satisfied, the first PRIUS check-in session should be conducted with that individual as soon as feasible.

It is highly recommended that any implementation core team using the PRIUS method also develop communication channels that link the implementation core with the intervention team responsible for implementing the new clinical program, especially if (as is often the case) these constitute two separate groups. These communication channels could consist of recurring meetings, shared online folders and/or email distribution lists. Regular exchange of information can facilitate a close working relationship across the evaluation and intervention teams, essential in order for PRIUS-related analysis and insights to inform mid-course adjustments to ongoing implementation.

The format of the PRIUS is easily sortable to facilitate data analysis. Researchers can review, sift and search the growing body of PRIUS entries at any time on an ongoing, iterative basis, and can conduct longitudinal analyses in at least two ways: comparing PRIUS updates scored with similar values at two different timepoints (e.g., comparing all updates scored with "-2" or "-3" in March 2020 with "-2" or "-3" entries in June 2020); and comparing how scores change over time for the same kind of entry (e.g., perspectives on the quality and adequacy of the professional development provided for the program).
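The first of the two longitudinal comparisons described above (pulling all similarly scored updates from two different timepoints) can be sketched in a few lines. The entry format and data values below are invented for illustration and mirror the example months and scores given in the text.

```python
# Each entry: (ISO date, respondent, update summary, score). Invented data.
entries = [
    ("2020-03-02", "RN-1", "Training session went well", 3),
    ("2020-03-16", "RN-2", "Referral workflow unclear", -2),
    ("2020-06-08", "RN-1", "Training materials now outdated", -2),
    ("2020-06-22", "MD-1", "Enrollment backlog cleared", 2),
]

def scored_in(entries, month, scores):
    """All updates from a given month ("YYYY-MM") whose score is in `scores`."""
    return [e for e in entries if e[0].startswith(month) and e[3] in scores]

# Compare strongly/moderately negative updates in March vs. June 2020.
march_negative = scored_in(entries, "2020-03", {-2, -3})
june_negative = scored_in(entries, "2020-06", {-2, -3})
```

The second comparison (tracking how scores change over time for the same kind of entry) follows the same pattern, filtering on the update text or a topic code instead of the month.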

When discrepancies in scores are observed across respondents for similar items, implementation core members can flag those implementation-related developments for further scrutiny and discussion in order to assess if the underlying source of the differences is due to a relatively minor semantic issue or if it reflects a deeper polarization of perspectives. If the latter, the research team may opt to bring these discrepancies to the attention of the team responsible for implementing the new clinical intervention for further discussion and possible corrective action.

In terms of limitations, a business case analysis has not yet been conducted on the PRIUS, as cost and time data have not yet been formally collected. The PRIUS method may also be better suited for small- and modest-sized interventions, where capturing prospective data from participants on an ongoing basis is more feasible.

Overall, the PRIUS method addresses an outstanding need in healthcare research for an efficient and structured method to capture data on the status and progress of implementation interventions. If the successful implementation of a novel clinical initiative (i.e., "the new thing") hinges on an effective implementation intervention (i.e., the set of parallel activities encouraging staff to "do the new thing"), then the PRIUS method offers a novel and straightforward way to capture valuable implementation-related data that might otherwise prove fleeting.

Disclosures

The authors have nothing to disclose.

Acknowledgements

This project was sponsored through internal funding provided by the PRIS-M QUERI at the Richard L. Roudebush VA Medical Center in Indianapolis, Indiana Veterans Health Administration (QUE 15-280). The funding body had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. The authors retain sole responsibility for the content of this study. The PRIUS method is in the public domain and freely available for anyone to use.

References

  1. Curran, G. M. Implementation science made too simple: a teaching tool. Implementation Science Communications. 1 (27), 1-3 (2020).
  2. Powell, B. J., et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science. 10 (21), 1-14 (2015).
  3. Beebe, J. Rapid Assessment Process: An Introduction. (2001).
  4. McNall, M., Foster-Fishman, P. G. Methods of rapid evaluation, assessment, and appraisal. American Journal of Evaluation. 28 (2), 151-168 (2007).
  5. Green, C. A., et al. Approaches to mixed methods dissemination and implementation research: methods, strengths, caveats, and opportunities. Administration and Policy in Mental Health. 42 (5), 508-523 (2015).
  6. Averill, J. B. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qualitative Health Research. 12 (6), 855-866 (2002).
  7. Nadin, S., Cassell, C. Using data matrices. In: Cassell, C., Symon, G. (eds.), Essential Guide to Qualitative Research Methods in Organizational Research. 271-287 (2004).
  8. Kahneman, D. Thinking, Fast and Slow. (2013).
  9. Rabin, B. A., et al. Systematic, multimethod assessment of adaptations across four diverse health systems interventions. Frontiers in Public Health. 6 (102), 1-11 (2018).
  10. Finley, E. P., et al. Periodic reflections: a method of guided discussions for documenting implementation phenomena. BMC Medical Research Methodology. 18 (1), 153 (2018).
  11. Damschroder, L. J., et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science. 4 (1) (2009).
  12. Miech, E. J., et al. The prospectively-reported implementation update and score (PRIUS): a new method for capturing implementation-related developments over time. BMC Health Services Research. 19 (1), 124 (2019).
  13. Rattray, N. A., et al. Evaluating the feasibility of implementing a Telesleep pilot program using two-tiered external facilitation. BMC Health Services Research. 20 (1), 357 (2020).

Cite This Article
Miech, E. J., Rattray, N. A., Bravata, D. M., Myers, J., Damush, T. M. The Participant-Reported Implementation Update and Score (PRIUS): A Novel Method for Capturing Implementation-Related Data Over Time. J. Vis. Exp. (168), e61738, doi:10.3791/61738 (2021).
