The discussion around AI is polarised. It’s either going to revolutionise the NHS or it’s hot air, a hype bubble waiting to burst. This takes up so much oxygen that we risk missing something else hiding in plain sight.
AI may be energising for some and concerning for others, but we shouldn’t forget the lessons of digital transformation. In some ways, AI is like any other technology: it will create both benefits and unintended consequences. If we ignore that, we won’t be ready to mitigate the downsides.
The Invisible Workload
While we debate whether AI will save healthcare, it is already reshaping workload pressures, and not always helpfully. Here are three examples, drawn from anecdotal conversations because there is currently very little research in this area.
Freedom of Information requests are getting longer, more detailed and more technically demanding. Why? Because anyone with a laptop and a large language model can now submit a request with the precision and persistence of a specialist lawyer. The volume is rising, and so is the complexity. The implication is simple: more staff time is needed to process and respond to them.
HR departments are also seeing more applications turbocharged by AI. Applications are more polished, more detailed and full of competency-based responses produced at scale. Shortlisting is harder. Sifting takes longer. It becomes more difficult to distinguish a genuinely strong application from a weak one wrapped in better language. The implication? More time spent shortlisting, and more time spent interviewing.
Then there are patient feedback and complaints. More of them. More detailed. More structured. More likely to demand specific responses. Patients are using AI to help articulate their experiences and expectations better than ever before. That should, in one sense, strengthen the patient voice in the system. But it also means a significant increase in the staff time needed to respond meaningfully.
None of this is necessarily malicious. It is mostly rational. People want information, want jobs, and want help expressing themselves more clearly. But taken together, it represents a real and largely unacknowledged draw on staff capacity.

The Wrong Instinct
There are a couple of predictable responses.
The first is that the NHS will reach for obvious ways to deter forms of AI-enabled demand that add workload. Make FOI requests harder to submit. Add more barriers to job applications. Require patients to jump through more hoops to provide feedback. In other words, introduce protective friction to reduce the pressure on staff time.
That would be a mistake — ethically, practically and strategically.
The second obvious response is: if AI is helping to create the problem, use AI to automate the response.
AI writes the FOI request. AI responds to the FOI request.
AI submits the job application. AI screens the job application.
AI generates the complaint. AI drafts the reply.
AI talking to AI. A perfect, frictionless, entirely pointless loop, with the added risk of bias creeping into areas like recruitment.
Both of these responses are understandable in a healthcare system already under strain. Neither really addresses the underlying issue.
Ask Why Before Leaping to How
Before deciding how to handle this AI-amplified deluge, we should ask a more fundamental question: what are these processes actually for?
Take patient feedback. The purpose is not to maintain a register of complaints and close tickets. The purpose is to understand lived experience, make sure people are heard, and redesign services accordingly.
Once you step back, it becomes clear that simply scaling up response capacity to match incoming volume does not meet the real intent of the process. What if, instead, we used AI differently in order to build better patient personas, create structured insight repositories, and understand more clearly who we are not hearing from?
That could mean feedback directly informing service design, procurement and pathway redesign.
The same applies to Freedom of Information requests. Again, the purpose is not the process. The purpose is to give the public access to recorded information so they can understand, scrutinise and hold the NHS to account.
If that is the goal, why are we waiting for someone to submit a request before releasing a heavily redacted document? What would it look like to publish data proactively in accessible, usable formats, so that meaningful transparency already exists? That work should begin with understanding what public accountability means today and then designing to achieve this with AI where it makes sense.

Avoiding AI Ping-Pong
The AI debate is too often framed as a choice between two extremes: either this technology is going to revolutionise the NHS, or it is overhyped slop. That framing misses something important: AI is already creating unexpected new forms of work. And if we fail to recognise that, we risk reaching for a reactive answer in which AI is simply deployed to respond to AI-generated activity.
Do we really want an endless game of AI ping-pong, a waste of energy and money? Or would it be better to use these unexpected pressures as a prompt to rethink why and how we provide accountability, recruit staff, learn from the patient voice, and improve the wider functioning of the NHS?
I hope you enjoyed this post. If so, please share it with others and subscribe to receive posts directly via email.
Get in touch via Bluesky or LinkedIn.
Transparency on AI use: GenAI tools have been used to help draft and edit this publication and create the images. But all content, including validation, has been reviewed by the author.