Artificial Intelligence: A Helper or Hindrance to Healthcare Provision?
AI is a promising tool for improving medical decision-making, but it requires careful regulation and ethical consideration to complement, not replace, human professionals.

When we think of medical professionals, a few key players come to mind: surgeons, nurses and general practitioners, to name a few. Patients rely on these individuals to carry them through everything from routine check-ups to essential treatments. But what if another player were emerging in this industry? Not a registered professional, but a technology that aims to contribute to healthcare through intricate algorithms and data-driven decisions.
Artificial intelligence (AI) refers to computer systems capable of performing tasks that typically require human intelligence. Recent advances in AI's capabilities have gained traction, with businesses across varied industries vying to harness the technology. In the past few years, to call out just a few examples, we have witnessed AI-assisted predictive protein folding, bioprocessing and biomanufacturing optimisation, psychotherapy, and supply chain problem-solving.
This discussion piece aims to explore the benefits and drawbacks of implementing AI into medical decision-making and healthcare provision. It will also consider the changes needed in how this technology is implemented so that it can serve as a complement to existing healthcare systems around the world.
Benefits of implementing AI into medical decision-making
AI holds excellent potential when integrated into healthcare, with multiple scenarios in which it could work: from clinical testing in the laboratory to decision-making tasks such as diagnosis and treatment selection. The significance of incorporating AI into these areas is highlighted by the current healthcare workforce crisis. A data analysis published by the British Medical Association in 2024 revealed that England had a substantially low proportion of doctors relative to its population, and that this contributed to poor wellbeing and burnout among staff, harming workforce retention and perpetuating the cycle of understaffing. This is why more and more researchers see potential for AI to help mitigate these effects: AI support tools could unlock additional time for doctors and nurses to focus on the nurturing side of care, benefitting patients, while also reducing stress and work overload for staff themselves.
Success stories
As discussed in the last section, AI has the potential to work alongside healthcare professionals to improve the industry, forming what can be referred to as a human-AI hybrid team. Evidence of the increased accuracy AI can provide when supporting healthcare professionals came from a 2022 study of AI-assisted colonoscopy. Endoscopists were asked to diagnose the same set of lesions in two separate sessions: one independently, and one assisted by AI. The results showed that when an endoscopist's confidence in a diagnosis was low and the AI's reported confidence was high, they could defer to the AI's assessment, and vice versa. This highlighted how, while fully automated decision-making is unfavoured, hybridising human and AI opinion can produce more optimised outcomes.
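To make that deferral rule concrete, here is a minimal illustrative sketch in Python. The function names, thresholds and confidence scores are hypothetical assumptions for the sake of illustration; they do not reproduce the study's actual protocol.

```python
# Illustrative sketch of confidence-based human-AI hybrid decision-making.
# All names, thresholds and scores here are hypothetical assumptions,
# not the 2022 study's actual protocol.

from dataclasses import dataclass

@dataclass
class Assessment:
    diagnosis: str      # e.g. "neoplastic" or "non-neoplastic"
    confidence: float   # 0.0 (no confidence) to 1.0 (full confidence)

def hybrid_decision(clinician: Assessment, ai: Assessment,
                    low: float = 0.5, high: float = 0.8) -> Assessment:
    """Defer to whichever party is confident when the other is not.

    If the clinician is unsure and the AI is confident, take the AI's
    diagnosis; if the AI is unsure and the clinician is confident, keep
    the clinician's. Otherwise the clinician's judgement stands.
    """
    if clinician.confidence < low and ai.confidence >= high:
        return ai          # low human confidence, high AI confidence
    if ai.confidence < low and clinician.confidence >= high:
        return clinician   # low AI confidence, high human confidence
    return clinician       # default: the human remains the decision-maker

# Example: an unsure endoscopist deferring to a confident AI reading.
final = hybrid_decision(Assessment("non-neoplastic", 0.4),
                        Assessment("neoplastic", 0.9))
print(final.diagnosis)  # -> "neoplastic"
```

Note that the human remains the default decision-maker in this sketch, which reflects the point above: the hybrid approach augments, rather than automates, the clinical judgement.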
Ethical and legal drawbacks
There are some issues to consider when it comes to incorporating AI into medical decision-making, and these fall into two categories: ethical and legal. In a paper published in the International Journal of Medical Informatics, the following concerns were highlighted:
Accountability and responsibility
Undisclosed AI usage harming the patient-physician relationship, undermining autonomy and trust
Compromise of informed consent
Lack of appropriate regulation, liability and accountability for patient harm.
...and potential ways around them
When considering ways of combating the ethical and legal issues surrounding AI implementation in healthcare, we can look to the European Union's AI Act. This legal framework gives AI developers and deployers regulations on AI usage, following a risk-based approach. 'Risk' here is divided into four levels: minimal, limited, high and unacceptable. Healthcare falls into the high-risk category, given that it is an essential public service. This means that healthcare AI systems are subject to strict obligations before implementation, including adequate risk assessment and mitigation systems, logging of activity to ensure traceability of results, and a high level of robustness, security and accuracy. Furthermore, numerous research papers have explored the ethical issues of AI implementation, including the idea that professionals must disclose when they are using AI tools (combating the issue of patient-physician distrust) and increased education for the professionals who will operate them.
Conclusion
In light of the presented arguments, my view is that AI has an important role to play in medical decision-making, but that it should be used as a complement to human doctors rather than a replacement. Furthermore, I believe there are important ethical and legal implications to consider, which means that the use of AI in medical decision-making is not universally applicable at present.
Implementing AI in healthcare could bring enormous benefits to both patient and physician wellbeing, so it is essential that the correct testing, continued education and tight regulations are put in place to allow AI to work as a helper, not a hindrance, to the healthcare system.