Lidewij Vat and Tjerk Jan Schuitmaker of Vrije Universiteit Amsterdam, along with Paul Robinson of MSD, discuss how they worked to co-create a monitoring and evaluation framework that could meet the needs of all stakeholders involved in patient engagement
The goals of any patient engagement (PE) activity are not the same for everyone involved. Different stakeholders wish to achieve different things when it comes to PE. Assessing the true value and demonstrating the impact of PE activities can therefore be difficult when there is discordance over which metrics are the most important.
The monitoring and evaluation framework devised by the PARADIGM working group aims to address this issue by providing a step-by-step process to demonstrate impact and enhance learning for all involved.
According to Vat, embedding PE in organisational decision-making remains challenging, partly due to a lack of agreement on its value and the means to evaluate it. “Monitoring and evaluation of patient engagement should answer questions of interest to all involved,” she says.
The tool benefits all stakeholders because it enables them to select the metrics they value, and to consider what is important to measure for the other stakeholders involved.
“Critically, and I believe quite unusually, the tool encourages all those involved in an activity to assess/articulate its value together, not just ‘what was in it for me’,” notes Robinson.
Schuitmaker explains that the new tool builds and improves on previous frameworks. “Existing evaluation approaches measure a variety of aspects within medicine development, or within the process of doing PE, but these are disconnected. This tool helps all stakeholders to connect the dots, from the input, via activities and learnings to impacts. And it helps to monitor whether you are on track to reach the intended impacts.”
Naturally, co-creation and ensuring that everyone’s perspective was included were critical when drawing up the framework – not simply for operational metrics like the number of patients in trials, but also for softer metrics such as sense of participation and improvement in trust. The tool was created by academics with methodological expertise in the area, with input from industry, patients, regulators and HTA bodies as well.
“We know from research that co-designed evaluation frameworks are most likely to be locally relevant and used in practice,” Vat says.
The process was iterative: early versions of the framework and metrics were created and tested in practice, and learnings were then shared with other stakeholders to refine the tool together. “As academic facilitators of this process, we had to ensure that everyone had the opportunity to refine the tool and that all metrics were taken into account, even if a certain metric was not seen as meaningful from another stakeholder’s perspective,” Vat adds. This participatory, relational work can be seen as an intervention and change process in itself, as partners learned from each other which metrics they value and helped to co-create consensus-based sets of metrics for other users.
The multi-stakeholder working group consisted of representatives of four European patient organisations, 15 biopharmaceutical companies and two academic institutions. This broad mix of stakeholders was critical for the tool’s success, Robinson believes. “Our academic colleagues brought the methodological rigour, but few had ever worked for a pharma company, been a patient advocate or sat as a regulator. So together, we could create a tool which took account of the context and perspectives of each other’s reality – a highly regulated area, the need for commercial confidentiality, passionate advocacy for good, the need for efficiency and so on. And we were able to test the tool on various existing activities, such as Community Advisory Boards and company Patient Input Activities.”
The group’s work included reviewing the literature on M&E of patient engagement, developing and testing an M&E framework, and identifying and selecting appropriate metrics for M&E. “Their different ideas on ‘what’ to measure and the variety of experiences with patient engagement resulted in a tool that is inclusive in its scope and can be used in a wide variety of settings,” says Vat.
One challenge was balancing the tension between standardisation and flexibility against the interests and needs of different stakeholders. Some stakeholders initially preferred to develop purely practical guidance on how to conduct an evaluation, including assessment grids and a ‘fixed’ set of metrics that could be used for benchmarking. To support stakeholders in the relational work that evaluation requires, while providing guidance on the reflexive strategy, the group sought to develop an adaptive framework with metrics that can be tailored to different needs.
“Stakeholders preferred a consensus-based framework, but consensus on a standard, one-size-fits-all set of metrics seems inherently impossible, because patients, regulators and industry ultimately value different things,” adds Schuitmaker. He says the co-construction process allowed members of the work package to gain an in-depth insight into each other’s perspectives and look for “win-wins”.
Vat’s hope is that the tool ultimately stimulates a cultural change: for example, a broader perspective on what ‘counts’ as evidence of impact and value, and a feedback culture enabling all stakeholders involved to conduct meaningful patient engagement. “Relationships grow over time, and as in any relationship there will be ups and downs, and it may take a while before success becomes visible. Regular feedback and sharing best practices enables all involved to enhance their patient engagement practice in order to maximise impact,” she says.
Robinson echoes this, saying “A tool to measure value requires two culture changes: one, for all to embrace the idea of patient engagement in the development of medicines, and two, for all to embrace the value assessment and continuous improvement philosophy.”
The tool is one element of a unique suite of PE tools and is ultimately complementary to several of them: it provides a common language for monitoring and evaluation, and includes metrics relating to the preparation phase, the conduct of patient engagement, and the evaluation and reporting phase. It also has the potential for widespread use, now and into the future.
“The tool caters to the needs of all stakeholders within the medicine life cycle and allows for a common understanding of ‘value’, thereby improving communication and collaboration between patients, industry and regulators,” says Schuitmaker. “Furthermore, the possibility to flexibly tailor the framework makes the tool feasible in many different contexts, while maintaining the underlying idea of including metrics that are meaningful for all.”