By Teresa Finlay and Lidewij Vat

On Nov 15 and 16, we attended the conference ‘International Perspectives on Evaluation of Patient & Public Involvement in Research’ in Newcastle. The conference attracted researchers, professionals and patient contributors who together discussed whether and how the impact of patient and public involvement* should be evaluated. Four themes struck us from the debate on evaluation. 

  • No matter where we live, we face common challenges in health research.

The conference started with a ‘world tour’. Keynote speakers from the UK, Canada and the US shared their views, with a more European focus on the second day featuring speakers from Denmark, Spain and Ireland. Each approached the topic from a unique perspective, but with the same mission: driving change in research.

While the UK has a decade of experience in patient and public involvement, the debate on evaluation continues. Early measurement and evaluation in the UK focus on:

  • ‘reach’ (the extent to which people and communities are engaged)
  • ‘relevance’ (the extent to which public priorities for research are reflected in funding and activities) and
  • ‘refinement and improvement’ (how public involvement is adding value to research).

Núria Radó-Trilla from the Agency for Health Quality and Assessment of Catalonia (AQuAS) told participants that their initial mission was to assess engagement in Catalonia. However, the indicators they wanted to use were considered relevant but not feasible to evaluate. They concluded that it might be too early to assess change and decided to focus more on promoting engagement.

The Patient-Centered Outcomes Research Institute (PCORI) in the US has probably done the most work in the field of evaluation. Their findings suggest that “engagement can influence the value and relevance and utility of research. Engagement can help balance the inherent trade-offs affecting research conduct while also responding to end-user needs” (Kristin Carman).

Antoine Boivin from the Centre of Excellence on Partnership with Patients and the Public (CEPPP) in Canada highlighted three important questions: “What are the evaluation goals?”, “How much evidence is needed?” and “Do we have the capacity for evaluation?”

It was clear from the discussion that all stakeholders have different evaluation goals, which are important to take into account when thinking about evaluating patient and public involvement.

  • There is no consensus on the need for measurement, particularly when it comes to more quantitative measures.

The plenary debate centred on two questions: “How do we define what success looks like?” and “Can and should we measure?” Delegates were divided on this issue. There was a tension between patients’ desire to know whether their involvement made a difference and the accountability demands of funding bodies. Some argued we need evidence to secure public and private funding and advance the movement. Others believed the word ‘evaluation’ creates fear, as it signifies judgement and reduces human interaction to a ‘product’. It was suggested that one outcome might be trials that are less burdensome and more accessible for patients, while improved recruitment rates would benefit the study. This was seen as a good outcome predominantly by the researchers and professionals in the room.

The current focus of evaluation feels quite narrow to some people. They argue that the focus of evaluation should be on ‘culture change’, to generate more relevant research and change the way research gets done. This may suggest measuring a change in attitudes, knowledge and skills of the academic and industry community.

In sum, there is no common view on what impact means, which makes it hard to evaluate. Furthermore, people used terms like ‘outcomes’, ‘impact’ and ‘value’ without specifying (or perhaps knowing themselves) what they mean by them.

  • Evaluation should not (just) be about reporting, but about giving feedback.

One solution might be to talk more about reflective learning. People tend to agree on the need for evaluation when the focus is on understanding the quality of patient and public involvement and stimulating learning. Kristina Staley of TwoCan Associates said: “It’s about interactions between people – a two-way street in which we can’t always predict what the outcomes will be”.

Feedback that helps build relationships can be seen as a key indicator of mature, embedded patient and public involvement in research (Mathie et al., 2018). Furthermore, we also have to learn what we can put in place to create an environment that makes things happen. “There is a lot of actionable learning behind looking at the individuals” (Antoine Boivin).

  • We need to build the science with patients. A common map might be a helpful start.

In conclusion, some people currently feel that evaluation of patient and public involvement is a researcher-led activity, which contradicts the original rationale for patient and public involvement. Furthermore, the focus of evaluation tends to be on the impact on research rather than wider impacts.

We feel that more attention should be given to the benefits and costs for the individuals and organizations involved. Creating a common map (framework) as a pathway to impact for all involved might be a helpful start. There is no one-size-fits-all approach; patient and public involvement can be evaluated in different ways. People should pick what fits their purpose and context.

The PARADIGM consortium aims to develop a framework to demonstrate ‘return on engagement’ for all stakeholders involved in medicines development.

* We have mainly used the term ‘patient and public involvement’ in this blog. This is equivalent to the term ‘patient engagement’, often used at European level and in North America.  

Lidewij (Eva) Vat – Researcher and lecturer in meaningful and sustainable patient engagement, Athena Institute, Vrije Universiteit Amsterdam

Teresa Finlay – Postdoctoral Researcher, Nuffield Department of Primary Care Health Sciences, University of Oxford