Authors: Tjerk Jan Schuitmaker, Assistant Professor of System Innovations in Health Care at the Athena Institute (VU), and Paul Robinson, European Lead, Patient Innovation at MSD.

Metrics are good, but what makes them both meaningful and feasible when it comes to Patient Engagement (PE)? A session at the PE Open Forum explored this question.

Within PARADIGM, academia, industry, patients and regulators are collaborating to develop a coherent monitoring and evaluation framework to answer the question: “Do we add value with patient engagement, and if so, how?”

On day 2 of the PE Open Forum 2019, representatives from all partners presented the framework, with its menu of metrics, and three practical examples of how the value of patient engagement was determined. Over 100 participants at the workshop discussed which metrics are feasible and meaningful and how best to monitor and evaluate patient engagement activities.

The session began with an introduction to the framework. Its job, it was explained, is to define the pathway to impact: the relationship between objectives, inputs, activities, learnings and changes, and impact. Connecting the dots from objectives to impact can help to identify a coherent set of metrics that fits a particular activity, setting and organisation; a personalised ‘menu of metrics’.

Lukas Eichmann (Novo Nordisk), Nathalie Bere (EMA) and Rob Camp (EURORDIS) presented how they applied the principles of the framework and developed metrics and methods to evaluate their organisations’ PE activities. Workshop participants learned about challenges and solutions for successful evaluation in different contexts.

Novo Nordisk used a team workshop to determine feasible and meaningful metrics to monitor and evaluate a patient advisory board; the PARADIGM framework proved effective in triggering fruitful discussions. As has been seen before, what is easy to measure is usually not the most meaningful measure of value. Impact metrics were seen as the most meaningful, but difficult to measure because they are often detached from the activity in time and influenced by many other factors. For example, a direct connection between a patient engagement activity and the recruitment rate in a clinical trial is hard to establish because many other factors are involved.

Participants heard that the European Medicines Agency (EMA) has been using a questionnaire to monitor and evaluate its engagement with patients. A review of its assessment methodologies generated discussions about the kinds of questions asked, and the questionnaire was refined to add granularity. For instance, broad questions like ‘Did the patient make an impact?/Did it change the outcome?’ were replaced with ‘Did the patient agree with the proposed responses?/Did the patient’s comments result in further reflection?/Did the patient’s input result in a modification of the final advice?/What was the added value of the patient’s input?’. The wording of questions is vital, attendees agreed.

EURORDIS has been organising Community Advisory Boards (CABs) for several months now and recently began measuring their progress using surveys. Surveys have limitations, however, and cannot capture other values, such as feelings of trust and openness. A further issue is capacity: how will the team analyse all the responses received? Even so, the survey has already shown that CAB meetings are very valuable for sponsors. One measure EURORDIS finds useful, for example, is ‘does the pharma company come back again?’; a return visit implies the experience had perceived value.

Following these presentations, the room was invited to discuss the meaningfulness and feasibility of the presented menu of metrics. The discussion underscored the discrepancy between what is feasible and what is meaningful to measure. For example, counting the number of recommendations received was considered highly feasible but not very meaningful. Conversely, measuring the industry’s improved understanding of unmet need was considered very meaningful but less feasible in practice.

The room concluded that impact metrics are rarely measured, and that measuring them remains inherently difficult due to the diversity and complexity of organisations and their patient engagement initiatives. Participants agreed that the monitoring and evaluation framework might be a good way to deal with these complexities. As one audience member put it: “whether you can have feasible and meaningful metrics depends on the maturity of the organisation and its internal support for patient engagement.” Others stressed that “to keep it meaningful: we don’t know what we don’t know, so we need to ask open questions, and qualitative assessments are vital”. It was suggested that quantitative and qualitative assessments should happen in parallel: quantitative information is needed for the ‘reportability’ of results, while qualitative information provides better insight into the processes and possible improvements.

Ultimately, an ideal set of metrics would be easy to measure as well as coherent and context-specific. The framework should help to find the best possible balance for each initiative within its context. The take-home message was that the framework and its menu of metrics must be applied to more patient engagement activities in order to make them as granular and practical as possible.