Stage 5: Evaluate, monitor, maintain and improve

At its simplest, an evaluation is a systematic and structured process that tells us what works and what does not [Friedman et al, 2022]. But evaluations can do much more than this. They can:

  • Measure outcomes
  • Assess whether the programme is being delivered as you intended
  • Determine whether the intended activities are being conducted appropriately and effectively
  • Measure what value has been gained relative to costs
  • Provide feedback and information to improve the design and implementation of the project.

Taken together, evaluations should answer the questions: What happened? To whom? When? Why? Under what conditions? [Friedman et al, 2022; Fox et al., 2017; Pawson & Tilley, 1997].

Build iteratively on your initial evaluation plan

In Stage 1 of this guide (Section 1.6), you were encouraged to create the initial version of your evaluation plan using the four-stage evaluation framework for the Right Decision Service (Figure 3). This is copied below for ease of reference. It specifies short-term, intermediate and long-term outcomes for implementation of the Right Decision Service.

Figure 7: The four-stage evaluation framework for the RDS

 

Planning your evaluation at the beginning of the design phase, as recommended in this guidance, ensures that evaluation goals and objectives are clearly defined and integrated into the project's overall planning and implementation process.

Starting the planning process early also allows you sufficient time to develop an evaluation plan, identify appropriate evaluation methods and tools, and secure the necessary resources. Additionally, early planning enables you to identify potential challenges and adjust the project design and implementation as needed to improve the quality and effectiveness of the evaluation.

It is important to note that evaluation planning is an iterative process that should continue throughout the project's lifecycle, with ongoing monitoring and feedback to inform and adjust evaluation plans as necessary.

 

Theory of change

The outcome chain shown in Figure 7 above is based on a theory of change defined for the Right Decision Service by the research group responsible for the NASSS Framework (see Section 4.3.2) and by the Matter of Focus evaluation group, which specialises in public sector evaluation in complex environments.

Using this theory of change will help you to:

  1. Clarify the project’s goals and objectives. It helps you and others to understand the overall purpose of the project, the specific outcomes that are being targeted, and the strategies that are being used to achieve those outcomes.
  2. Identify appropriate evaluation questions that need to be answered to assess whether the project is achieving its intended outcomes.
  3. Select appropriate evaluation methods to answer the evaluation questions.
  4. Define strategies to achieve the outcomes at each step in the outcomes chain.
  5. Uncover the assumptions, contextual barriers and facilitators that influence success (or lack of success).
  6. Evaluate the project’s effectiveness. The theory of change helps you to assess the effectiveness of the project by comparing actual outcomes to the intended outcomes described in the theory of change. If the project is not achieving its intended outcomes—or if there are unintended outcomes—the theory of change can help you to identify which assumptions or contextual factors might be contributing to this.

 

Defining a logic model or driver diagram

You can expand on this theory of change to develop your evaluation plan in the form of:

  • A logic model, which delineates how your resources and activities will contribute to your overall outcomes (Kellogg, 2004). Health Scotland (now part of Public Health Scotland) and Evaluation Support Scotland have produced a useful guide to logic modelling.

  • A driver diagram. Driver diagrams are structured charts of three or more levels. They translate a high-level improvement goal or aim into a logical set of high-level factors (primary drivers) that you need to influence in order to achieve your outcomes. They also show the specific projects or activities that would act on these high-level factors. NHS England provides a useful guide to driver diagrams.

You should involve a range of stakeholders when designing your evaluation to ensure that it is relevant, credible and useful. Engage a diverse set of stakeholders early and throughout the evaluation process so that their perspectives and insights are incorporated into the design and implementation of the evaluation [Giordano & Bell, 2000]. In general, you should identify the following stakeholders when planning your evaluation:

  1. Project staff: They have intimate knowledge of the project and can provide insights into the evaluation's design and implementation.
  2. Beneficiaries: The clinicians, service users and others who are intended to benefit from the project, who can provide feedback on the evaluation’s relevance and effectiveness.
  3. Experts in the field: Those who have clinical expertise as well as experts in decision making can provide guidance on the most appropriate evaluation methods and tools.
  4. External evaluators: They may be hired to conduct the evaluation and can provide an independent and objective assessment of the project's outcomes and impact.
  5. Other relevant stakeholders: These may include advocacy groups, or other organisations or individuals you identify who have a stake in the project.

 

Defining indicators to measure success

Table 9 below, copied from Section 1, gives some examples of indicators or measures of success for each stage in the evaluation framework.  You can use this as a starting point to define the measures you will take to reflect achievement of each outcome. 

It is important that your measurement plan is realistic and achievable, so it should focus as far as possible on data that you are already gathering, or on new measures which you can put in place easily.

 

Table 9: Outcomes and examples of indicators of success (copy of Table 2 in Section 1, copied for ease of reference)

Stage 1: Stakeholder engagement and valuing of the service (“Reach and reaction”)

  • Usage statistics for the decision support tool.
  • User feedback – can be gathered informally, or more formally through questionnaires and interviews.

Stage 2: Development of new knowledge, skills and attitudes

  • Numbers undertaking training on the new decision support tool.
  • User feedback on how knowledge and skills in a particular area of practice have improved as a result of using decision support.
  • Creation of new decision support roles for knowledge managers and practitioners.
  • Stakeholder feedback on improved understanding of the benefits of decision support.

Stage 3: Change in practice, policies and behaviours

  • Data on changes in prescribing, test ordering, referral and triage, with supporting evidence that decision support has contributed to these changes.
  • Case studies of change in service model as a result of implementing decision support – e.g. arrangements for referral for shared decision-making discussions or medicines review following decision support recommendations.
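
If it helps to keep your measurement plan in a structured, shareable form, the minimal sketch below (in Python) records each outcome stage alongside an example indicator, its data source and how often it will be collected, then prints a simple checklist. The specific indicators, data sources and frequencies shown are illustrative assumptions based on Table 9, not a prescribed set.

```python
# Minimal sketch of a measurement plan: each outcome stage mapped to an example
# indicator, the data source and collection frequency. Entries are illustrative
# assumptions based on Table 9, not a prescribed set.
measurement_plan = [
    {
        "stage": "Stage 1: Stakeholder engagement and valuing of the service",
        "indicator": "Usage statistics for the decision support tool",
        "data_source": "Analytics reports for the RDS toolkit",
        "frequency": "monthly",
    },
    {
        "stage": "Stage 2: Development of new knowledge, skills and attitudes",
        "indicator": "Numbers undertaking training on the new decision support tool",
        "data_source": "Training attendance records",
        "frequency": "quarterly",
    },
    {
        "stage": "Stage 3: Change in practice, policies and behaviours",
        "indicator": "Change in prescribing, test ordering, referral and triage",
        "data_source": "Existing prescribing and referral datasets",
        "frequency": "quarterly",
    },
]

# Print a simple checklist grouped by outcome stage.
for entry in measurement_plan:
    print(f"{entry['stage']}\n  - {entry['indicator']} "
          f"({entry['data_source']}, collected {entry['frequency']})")
```

A spreadsheet would serve the same purpose; the point is simply to make explicit, for each outcome, what will be measured, from which source, and how often.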

Design and execute your post-market surveillance plan

Post-market surveillance means capturing data from the system and its users after the DSS is in routine use, then analysing these data to reveal any signs of problems or opportunities to improve the DSS.

Methods to capture post-market data include:

  • Analysis of usage log files – for example, using the Google Analytics reports associated with RDS toolkits (a simple analysis sketch follows this list)
  • Keeping a register of issues, complaints or other departures from expectations
  • Observation, interviews and meetings with users
  • Surveys of user satisfaction.
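
As an illustration of the first of these methods, the short sketch below summarises monthly page views per toolkit from a usage report exported as a CSV file. The file name and column names (date, toolkit, pageviews) are assumptions for the purpose of illustration; adjust them to match the actual export format of your analytics reports.

```python
# Minimal sketch: summarise monthly page views per toolkit from an exported
# usage report. Assumes columns "date" (YYYY-MM-DD), "toolkit" and "pageviews";
# adjust these to match the actual export from your analytics reports.
import csv
from collections import defaultdict

def monthly_usage(path: str) -> dict:
    """Return {(toolkit, 'YYYY-MM'): total page views} from a usage export."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["date"][:7]  # keep the YYYY-MM part of the date
            totals[(row["toolkit"], month)] += int(row["pageviews"])
    return dict(totals)

if __name__ == "__main__":
    for (toolkit, month), views in sorted(monthly_usage("usage_export.csv").items()):
        print(f"{toolkit}\t{month}\t{views}")
```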

Implementers should explore any issues, complaints or concerns to understand whether they represent, for example:

  • Misunderstanding of how to use the DSS
  • Issues with design, bugs or performance of the DSS
  • Incomplete patient data or ineffective mapping of codes to the DSS, in the case of DSS integrated with electronic health record systems

Once the causes of problems are understood through these insights, appropriate actions can be taken.
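
One way to make this triage systematic is to hold the register of issues in a simple structured form and tag each entry with a suspected cause category. The sketch below is illustrative only; the field names and category labels are assumptions based on the list above.

```python
# Minimal sketch of an issue register with suspected cause categories,
# based on the categories listed above. Field names are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

CAUSE_CATEGORIES = {
    "user_misunderstanding",   # misunderstanding of how to use the DSS
    "design_bug_performance",  # design, bugs or performance of the DSS
    "data_or_code_mapping",    # incomplete patient data or code mapping (integrated DSS)
    "unclassified",            # cause not yet investigated
}

@dataclass
class Issue:
    reported: date
    description: str
    reporter_role: str                     # e.g. "GP", "pharmacist"
    suspected_cause: str = "unclassified"
    resolved: bool = False

@dataclass
class IssueRegister:
    issues: list = field(default_factory=list)

    def log(self, issue: Issue) -> None:
        if issue.suspected_cause not in CAUSE_CATEGORIES:
            raise ValueError(f"Unknown cause category: {issue.suspected_cause}")
        self.issues.append(issue)

    def open_issues_by_cause(self) -> Counter:
        """Count unresolved issues per suspected cause, to guide corrective action."""
        return Counter(i.suspected_cause for i in self.issues if not i.resolved)

# Example usage (illustrative data):
register = IssueRegister()
register.log(Issue(date(2024, 5, 1), "Calculator page slow to load on mobile",
                   "GP", suspected_cause="design_bug_performance"))
print(register.open_issues_by_cause())
```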

Plan for maintenance, review and updating

The implementation oversight group should regularly review evaluation data to identify areas for:

  • Improving the DSS
  • Improving the approach taken to implementation and training
  • Enhancing the content and value of the DSS as knowledge, guidelines and local practices evolve

Continue stakeholder engagement

Once the DSS has been launched, continue stakeholder engagement as roll-out progresses, alongside an ongoing programme of communicating success and impact and of identifying and acting on knowledge-sharing and peer-learning opportunities. To enhance the opportunities for spread, identify and target new groups while continuing to respond to established users.

Delivering a Learning Health and Care System

The evaluation data from use of the decision support tool contributes directly to continuous learning and improvement within a learning health and care system. The insights from evaluation guide adaptation of the decision support tool’s content and functionality, and refinement of implementation methods.