
Artificial Intelligence in Healthcare – Assistance to Healthcare Professionals, but Not Without Significant Risks and Challenges

The integration of artificial intelligence into the healthcare system is accelerating across the globe, and Ireland is no exception. Research shows that AI has the potential to revolutionise diagnosis, treatment planning and healthcare delivery.

On Tuesday, 21st October, the Medical Council of Ireland published a position paper on the use of AI in the medical profession, in which the Medical Council acknowledged that, while AI has the potential to revolutionise healthcare, doctors’ experience and own clinical judgment must continue to be at the forefront of patient care. Grace Toher, Partner, and Jordan Muir, Trainee Solicitor, provide an overview of the Council’s guidance.

AI as a Support, Not a Substitute – a Human-Centred Approach

The Council’s paper makes it clear that AI should complement, not replace, critical clinical judgment. Doctors remain ultimately responsible for decisions regarding patient treatment, and AI tools should be treated like any other diagnostic aid. To guarantee accountability and openness in the provision of care, it is imperative that doctors keep accurate records and apply critical thinking. The Council warns that over-reliance on AI could create risks if clinicians fail to critically evaluate the outputs of AI systems.

A core principle outlined in the paper is the “Human Centred Approach” – “Doctors must ensure they are critically assessing any outputs of AI for hallucinations and other inaccuracies, utilising their own judgement as the ultimate decision maker.”

Accountability

The paper outlines that although AI technologies are increasingly used to support clinical decisions, healthcare professionals remain legally and professionally accountable for the outcomes of patient care. Where AI contributes to an error or adverse event, the standards governing clinical responsibility apply just as they would with any other medical tool or intervention.

Transparency and Patient Trust

One of the central messages of the paper is the need for transparency in the clinical use of AI. It is highlighted that patients must be informed when AI tools are used in their medical care. This is vital for maintaining both patient and public trust, as well as ensuring informed consent. The Council stresses that healthcare professionals must have confidence in the standard of the tools they are using, and must be equipped with the necessary training to ensure AI tools meet high clinical and ethical standards.

The Path Forward

The paper provides practical information and guidance for doctors on the use of AI in healthcare. The Council’s position is optimistic but grounded. It supports the responsible and ethical use of AI in clinical practice, and calls on regulators, developers, and healthcare professionals to work together to ensure that AI is safe, fair and accountable. As AI continues to evolve, so too must the legal frameworks and professional standards that govern its use. As AI becomes more embedded in medicine, it is critical that its use by healthcare professionals be guided by professional judgement, patient transparency and ethical oversight.

For more information on the Medical Council’s paper, please contact our Healthcare Litigation Team.