Kathryn Oliver opened and set the scene, highlighting that mapping work has shown that, of 1,922 research-policy engagement initiatives worldwide, only 3% had a publicly available evaluation. She noted that while engagement is often seen as inherently positive, it can also be costly and counterproductive without strategic planning. She emphasized the need for a common language and better coordination to avoid wasted effort and unintended harm.
Ben Hepworth presented the Ministry of Justice (MoJ) Evidence and Partnerships Hub as a case study. The Hub developed a structured approach to academic engagement, including the publication of the MoJ Areas of Research Interest (ARI), organisation of academic seminars, establishment of academic networks, and facilitation of fellowships and secondments. These initiatives have fostered a knowledge exchange (KE) culture but come with costs, and a cost-benefit analysis is underway to assess their long-term value.
Bridget Sealey reflected on her two decades of experience delivering and evaluating KE activities. She stressed the importance of intentional design and embedded approaches, and noted the many benefits of evaluating KE, including articulating civic and public value, meeting evolving REF criteria, and continuously improving delivery. She also underlined the need to understand the motivations of both individuals and institutions when evaluating KE.
There followed three breakout table exercises for the 60+ workshop attendees. The first asked participants to identify KE activities they had seen. The second involved discussing these activities from the perspective of different stakeholders (researchers, knowledge brokers, Government officials, funders) and the outcomes and impacts each might hope to see. The outcomes and impacts were then grouped thematically, and tables reported back to the room, where suggestions were collated, including:
- Increased Government evidence skills/capability
- Government has improved access to evidence
- More and better-quality relationships
- More diverse expert networks
- More efficient use of research funding
- Increased evidence-use in policy
- Public benefit and value-for-money
- Increased researcher knowledge about policy and the role of evidence.
In the third and final exercise, tables took these themes and worked backwards, discussing the different KE activities that could generate them. Again, tables reported back to the room and suggestions were collated. Participants highlighted how the same activity can generate multiple outcomes and impacts, as well as incur different costs and benefits that need to be assessed. The workshop ended with each table offering a reflection on what they had learnt and would take away.
By undertaking the exercise both forwards and backwards, the workshop highlighted just how complex the system is in terms of activities, outcomes, and impacts, and how much 'cross-wiring' there is to untangle. The workshop generated a wealth of data from participants, which the organising team plans to write up into a report summarising the views of the KE community and offering recommendations for how to proceed. This will be vital if we are to systematically embed good evaluation practice for KE initiatives.