Reconsidering human control in AI-enabled healthcare

Healthcare systems worldwide, not just the NHS and social care systems in England, are simultaneously grappling with workforce shortages, rising demand, escalating costs, and ageing populations with increasingly complex care needs. These converging pressures are making healthcare systems unsustainable. Against this backdrop, artificial intelligence (AI) is being actively pursued and promoted as a vital technological capability, both to sustain healthcare under growing pressure and to transform care for the future.

Governments and healthcare providers expect AI to reduce administrative burdens, accelerate diagnoses, and make services more efficient and scalable, all to save money and time, and bridge the gap between capacity and demand. Alongside this optimism is a prevailing assumption: people will remain “in control” of AI, and AI will “augment” clinicians. This is reflected in terms like “clinical oversight,” “augmented intelligence,” and “human-in-the-loop.” A reassuring narrative implies that as AI systems become more prevalent, people will stay at the wheel. However, this narrative, though well-intentioned, oversimplifies the complex trade-offs inherent in the pressures and constraints NHS and social care organisations face.

What exactly does it mean for humans to be “in control”? Is that always desirable, or even feasible?

From human control to human oversight

Understandably, patients and staff want people to remain engaged and involved in AI-enabled healthcare services. The reasons are varied: professional judgment, accountability, trust, and empathy. The notion of “a person checking the machine” is emotionally and psychologically reassuring; after all, that is what we have come to expect. And yet, we must question our assumptions: does human control always add value? How does human oversight affect our ability to use AI to bridge the workforce gap and meet unmet care needs?

Requiring a human to validate every AI decision can undermine the efficiencies AI is meant to create. If clinicians must review all outputs from an AI tool or analyse every medical image already reviewed by AI, the technology adds value but does not truly transform care. In such cases, the speed of many processes and services will remain limited by staff availability.

However, this does not mean we need to remove people from the process or that AI should operate unchecked. The issue is not binary. Instead, we need a more nuanced understanding of what control and oversight actually entail across different patient groups, services, and AI tools.

From binary thinking to nuance and options

The “human-in-the-loop vs. autonomous AI” framing presents a false binary that can hinder the safe and effective advancement of AI. Instead, we need to recognise that human-AI interactions exist on a spectrum, with various approaches suited to different contexts. For example:

  • Human-in-the-loop: AI suggests; humans decide. Appropriate in situations where human control is expected or safest.
  • Human-on-the-loop: AI acts independently, with human supervision and the possibility of intervention.
  • Human-out-of-the-loop: AI operates autonomously in tightly scoped domains, with monitoring and auditing to confirm it continues to perform as expected and to allow intervention if performance drifts.
  • Augmentation and collaboration: AI and humans interact continuously, learning from each other and evolving together.

The best approach depends on many variables: the type of AI, its performance for a given task or patient group, the risk of harm, and the expectations of patients and professionals. Not all AIs are equal: some are rules-based and explainable; others are opaque neural networks. Each demands a different oversight model, and some, but not all, require direct human control.

The key lies in matching oversight to the specific AI capability and its context of use.

This shift requires moving from trusting a tool to trusting the system: the processes, the user, and the technology combined. Even a highly accurate AI model can be misused, misunderstood, or poorly integrated into clinical settings. Effective AI is shaped not just by what the technology can do, but by who deploys it, how transparently it is used, and the surrounding system and processes. This means staff need education and training on the system of assurance, and patients need to understand that system and its processes; in combination, these build trust and help people embrace the change.

Context-specific oversight

It is essential that current discourse moves beyond whether a person is “in the loop,” toward a process of understanding the tool, service, and context, alongside public expectations, and then establishing the appropriate control environment. Responsibility is shared across developers, staff, regulators, and providers, and the system will likely need to include elements such as:

  • Assessment: Determining the appropriate level of control required.
  • Rigorous safeguards: Including evidence, regulation, and clear channels for escalation.
  • Post-deployment monitoring: Ongoing evaluation of outputs, errors, and outcomes over time.
  • Responsibility clarity: Defined accountabilities for developers, users, and oversight bodies.
  • Patient and clinician feedback loops: Real-world use must inform ongoing development and governance.

Appropriate AI oversight and newly emerging professions

The integration of AI into healthcare is not just a technical shift; it is also cultural and structural. For AI to become the technology that creates a more equitable and sustainable healthcare system, we must reconsider our assumptions about what it means to be “in control” of AI. This means moving beyond slogans about keeping humans “in the loop,” or insisting that staff (or patients) manually check the outputs of AI. Instead, we need a broader perspective, one in which the expectations of safety, responsibility, and reassurance can be met in a variety of ways. This moves us beyond the theatre of control and creates a better workplace for staff.

The shift from direct human control to a spectrum of approaches for ensuring safety and reassurance means that new workforce skills and even new professions will be needed. Clinical safety, AI monitoring, and adaptive system processes will become increasingly important competencies within healthcare. While AI may allow more care with fewer staff, it will also create new responsibilities and roles to ensure AI operates as intended and expected.

So this is the conversation we must have with staff, patients, and the public before outdated assumptions lead to inefficient use of AI tools. For data, Understanding Patient Data helps shine a light on data use and expectations. Where is the AI Literacy Institute that bridges across communities and professions, and do we need a new professional group of Clinical AI Officers to help shape the roles of the future?

I hope you enjoyed this post; if so, please share it with others and subscribe to receive posts directly via email.

Get in touch via Bluesky or LinkedIn.

Transparency on AI use: GenAI tools have been used to help draft and edit this publication and create the images, but all content, including its validation, is by the author.