Published by José Rojas Alvarado

31 March 2026, 10:00 UTC

Five lessons on evaluating policy engagement

How do you know whether your policy engagement efforts are worth it? What do you mean by “worth it”? And worth it for whom? These are not straightforward questions.

Many of us are involved in policy engagement in one way or another, yet far fewer have formally evaluated this part of our work. Uncertainty about whether it is worth it is more common than we might think.

At a recent masterclass for new cohorts of OPEN Peer Mentors and OPEN Visiting Fellows, Professor Kathryn Oliver noted that, despite an expansion of academic–policy engagement initiatives, fewer than 15% of organisations running them have made their evaluations publicly available. There is considerable activity, but much less visible learning about what difference that work makes.

Here are five things we took away from that masterclass about evaluating and learning from policy engagement.

1. Knowledge exchange is not just dissemination

Knowledge exchange is not simply about pushing research out into the world. It involves relationships, trust, timing, incentives and institutional culture as much as it involves evidence.

It can be approached in several ways:

  • Linear – where research is pushed or pulled
  • Relational – where the focus is on shared understanding
  • Systems – where attention turns to deeper structural barriers.

How we understand engagement shapes how we evaluate it.

If we think about it in linear terms, we tend to measure outputs.

If we think about it relationally or systemically, we begin to ask whether understanding shifted, whether trust deepened, and whether cultures or behaviours changed.

Those questions are more complex, but they are also more closely aligned with what we are ultimately trying to achieve.

2. Engagement has costs — even when it feels positive

Policy engagement is usually framed in terms of opportunity and benefit. It can open doors, build relationships and create influence. But it also requires time, energy and coordination.

There are also opportunity costs in terms of what those involved are not doing while they engage, as well as relational risks if expectations are not managed, and reputational risks if boundaries blur.

Without strategic planning and coordination, engagement can reduce goodwill, duplicate effort and even narrow diversity rather than expand it. Evaluation, then, is not just about proving success. It is about understanding trade-offs. Are we using people’s time wisely? Are we designing interventions that justify the investment they need?

3. Defining and measuring success

Too often, we assess what we did rather than what changed.

We count events, attendees or citations.

Fellowships, mentoring schemes, secondments or webinars can all sit under the broad label of “engagement.” But they are not interchangeable. They serve different purposes.

And activity is different from impact.

The more useful question is what we were trying to achieve.

Was the aim to diversify networks, influence how a policy problem is framed, deliver specific advice, strengthen an evidence base, or help researchers better understand government?

4. “Worth it” depends on perspective

It is helpful to distinguish between outcomes, which are nearer-term changes, and impacts, which are broader and longer-term shifts.

But what counts as a “good” outcome, or a meaningful impact, depends on who you are.

For an individual researcher, it might be increased confidence, new relationships or a clearer understanding of how policy works. For a university, it might be reputation, evidence for reporting or justification for continued investment. For government colleagues, it might be timely access to expertise or stronger internal capability.

When we ask whether something was worth it, we need to be clear about whose perspective we are centring.

Otherwise, we risk designing evaluations that satisfy reporting systems but perhaps miss what matters to our stakeholders.

5. Evaluation is a learning tool — if we let it be

Evaluation is often treated as a final, separate exercise. It does not have to be.

Evaluation becomes meaningful when it is embedded in design. Identifying the problem, articulating the goal, selecting mechanisms and assessing outcomes are not administrative steps but strategic ones. Approaches such as logical frameworks or Theories of Change can make underlying assumptions visible and testable.

If you think back to the questions at the start, they were not just a way into this piece. They are the questions that need to sit alongside our practice. Not once, but repeatedly. If those questions continue to inform how we design and reflect on our engagement, then evaluation becomes less of a separate exercise and more of an ongoing discipline.
