Published by Sam Illingworth

30 September 2025, 12:36 UTC

The Case for Slow AI in Academic and Policy Engagement

In this blog, Sam Illingworth of Edinburgh Napier University proposes using AI as a mirror to help us to notice what matters.

In higher education, as in policymaking, speed is often celebrated. New technologies, including AI, are framed as tools that deliver faster drafts, instant summaries, and accelerated evidence reviews. In a sector under pressure to do more with less, this is tempting. Yet speed on its own does not build trust. Acceleration without reflection can lead to shallow summaries, missed nuance, and outputs that quietly repeat errors or bias. The more useful shift is to treat AI as a mirror that helps us pause at key moments, creating space to check clarity, audience needs, and hidden assumptions. These small pauses can strengthen credibility in ways that rushing cannot.

Academic and policy work is often conducted to tight timelines. Requests for briefings and rapid syntheses arrive with little notice, pushing writers to prioritise the deadline over the context. When AI is used purely as a fast producer, there is a risk that judgement is outsourced to a system that cannot hold responsibility.

Used carefully, AI can help people slow down just enough to notice what matters. Before submitting a briefing, a team might consider how a nonspecialist will read it, where the language might mislead, and which perspectives are absent. That reflective pause does not take long, but the effect on quality and trust can be significant.

Privacy and practical safeguards

Practical safeguards are essential. Sensitive material should never be copied directly into public systems. Where AI is used for early thinking, it is safer to work from already public summaries or from an abstract stripped of names and unpublished data. Some universities now provide access to privacy proxies that redact identifiers and prevent retention of prompts. Even without these, we must all make careful choices: removing metadata from files before sharing text, turning off chat histories, and keeping brief logs of what was shared, when, and why.
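
For teams that prepare sanitised text regularly, some of these habits can be partly automated. The sketch below is illustrative only: it assumes Python, uses very simple pattern matching, and the file names and redaction patterns are examples rather than a complete safeguard. It scrubs obvious identifiers such as email addresses from an abstract and appends a one-line log of what was shared, when, and why. Anything it produces still needs checking by a person before use.

```python
import re
from datetime import datetime, timezone
from pathlib import Path

def sanitise(text: str) -> str:
    """Redact obvious identifiers before text is pasted into a public AI tool.
    The patterns here are illustrative; a human check is still essential."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email removed]", text)      # email addresses
    text = re.sub(r"\+?\d[\d\s()-]{7,}\d", "[phone removed]", text)          # phone-like numbers
    text = re.sub(r"\b(Dr|Prof|Professor|Mr|Ms|Mrs)\.?\s+[A-Z][a-z]+\b",
                  "[name removed]", text)                                    # titled names
    return text

def log_share(purpose: str, logfile: Path = Path("ai_share_log.txt")) -> None:
    """Keep a brief record of what was shared, when, and why."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="minutes")
    with logfile.open("a", encoding="utf-8") as f:
        f.write(f"{stamp}\tshared sanitised abstract\t{purpose}\n")

if __name__ == "__main__":
    # "abstract.txt" is a placeholder for an already public or stripped-down abstract.
    abstract = Path("abstract.txt").read_text(encoding="utf-8")
    print(sanitise(abstract))
    log_share("early framing of briefing for council officers")
```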

Equally, it is worth checking the governance commitments of AI providers before use. What do they state about retaining data, about using prompts to train models, and about the jurisdictions where information is stored? Do they publish summaries of how their systems work, acknowledge known risks, and offer any form of redress? These checks, though basic, help make sure that engagement remains ethical as well as effective.

Three guided steps with partial prompts

The following guided process shows how AI might be used to strengthen rather than weaken policy engagement. These three prompts can be run in sequence, using only public or sanitised text, and completed in less than ten minutes:

  1. Please rewrite this for a [policymaker in health, MP caseworker, council officer]. Keep to 120 words. List three terms that might confuse a nonspecialist and suggest plain alternatives.
  2. Identify two assumptions in this summary. For each, suggest one stakeholder who might read it differently and explain why in one sentence.
  3. Offer three brief framings of this evidence, each for a different priority: one for equity, one for near-term cost, one for long-term outcomes. For each framing, give one advantage and one trade-off in plain English.

These prompts demonstrate how AI can be used in ways that promote critical thinking rather than displace it. Instead of generating answers to be taken at face value, the tools can serve as prompts for questioning and reflection. Asking an AI system to present an argument in simpler terms can highlight where our own explanations are vague. Requesting alternative perspectives can reveal assumptions that might otherwise go unchallenged. Comparing different framings of the same evidence encourages us to weigh trade-offs rather than settle for a single narrative. In this sense, AI is less a producer of finished content and more a catalyst for deeper thinking, helping academics and policymakers pause long enough to interrogate their own reasoning before sharing it with others.

If AI is treated only as a tool for speed, the familiar problems remain: information overload, thin summaries, and reduced trust. Slowing down does not mean unnecessary delay. It means taking a short pause to check audience, assumptions, and trade-offs. That small act of reflection is often the difference between adding to the noise and offering something genuinely useful.

The case for Slow AI is therefore a case for balance. AI can indeed save time where it is safe to do so, but its greatest contribution may be in creating those moments of reflection that protect the credibility and trust on which long-term relationships between universities and policymakers depend.

If you are interested in exploring these practices further, I write a newsletter called Slow AI that shares prompts, reflections, and examples of how AI can be used to support careful, ethical engagement. We are building a community of practice for academics and policymakers who want to experiment with more reflective uses of these tools, and I would warmly welcome others to join the conversation.
