The impact of technological encroachment in medicine
The encroachment of technocracy on medicine is a double-edged sword, wielding both the power to transform and the threat to dehumanise. While AI holds the potential to revolutionise diagnostics, treatment, and patient care, we must fight to keep the heart and soul of medicine alive – empathy, compassion, and ethical decision-making. We must demand that we maintain our humanity in the face of technological change.
Medicine is an art as well as a science. It cannot be practised without human interaction. Patients need encouragement from a doctor who listens carefully to help them articulate what is wrong. The body will always be part of the physical realm, and the touch required to examine a patient cannot readily be replaced by technology. Healing rarely goes well without compassion, and treatments get nowhere without trust. Medicine encompasses much of what it means to be human, and yet the technocrats want a piece of it.
The last three years have forced us to evaluate the merits and drawbacks of centralised control in healthcare. They have exposed the ways in which our medical systems fail. This is not all bad: having a light shone on the problems in a dramatic way is better than the boiling-frog alternative of not realising what has changed until it is too late.
What did we learn from the response to covid?
Firstly, one of the most significant problems we saw with covid was the removal of the patient’s own doctor from diagnostic decision making. With centralised testing, and no input from professionals who understand the patient’s unique situation, overdiagnosis becomes rampant. Overdiagnosis can only lead to harm.
Secondly, we also saw rigid treatment protocols. Doctors were told that only medicines proved effective in rigorous clinical trials could be used. Not only is that process too slow, but it also leaves no room for innovation and learning. It means, too, that drugs with a profit to be made, and therefore with trials that can be funded, will always be prioritised over well-established, cheap generic drugs with a known safety record.
Thirdly, centralised control produces poor feedback and slow responses to errors. We saw that with covid vaccine harms. We also saw it with the catastrophic harm caused by over-ventilation that came from following protocols. If we had allowed different approaches from the start, we could quickly have learnt what worked and what did not. Instead, the WHO and national authorities pushed a protocol that killed people.
Centralised medicine does not work well.
Above all, what has happened in the last few years has destroyed trust. Vulnerable patients deserve a trusting relationship with their doctor, but that trust must be earned. It might not be all bad. Perhaps this breach of faith will inspire individuals to take greater responsibility for their health decisions and approach medical solutions with a healthy dose of scepticism. On balance, reduced trust in politicians, journalists and doctors could be a healthy result.
Other problems with healthcare were apparent before covid, even in the analogue age.
Our general practitioners, whose role is to navigate the system for their patients, are overwhelmed. This results in increased referrals and decreased patient satisfaction.
Our population is overly medicated, with a pill for every ill and even more pills to counter the side effects. Drugs are started and never stopped.
This approach is not only unhealthy but also burdens our healthcare system with drug and staffing costs. As our society has aged and grown wealthier, we have invested more in healthcare, but we must ask whether there are better ways to allocate our finite resources.
The current framework for experiments on humans is flawed. We allow trials to be carried out entirely under the control of those with a vested interest in the result. Regulatory bodies have struggled to ensure that these trials are open, transparent and unbiased; on occasion they have even been fraudulent. Inevitably, trials with any bias lead to over-prescribing, with detrimental outcomes for patients and costs for all of us.
What improvements could come from a more digitised medical system with artificial intelligence?
Firstly, AI has the capacity to transform medicine. AI is creative and sees patterns that humans have missed or simply cannot see. For example, it can tell whether a photograph of the back of the eye came from a man or a woman. Artificial intelligence may well have its largest and most immediate impact in specialties where images can be analysed, especially radiology and pathology, and particularly in cancer diagnostics. Pathologists examine the tissue removed from a cancer, whether a biopsy or a resection, under the microscope. Doing so gives clues as to how bad the prognosis is and sometimes as to which treatment might work. Using AI, we can greatly scale up what can be deduced from that tissue, which will help to predict outcomes and guide patients towards the most effective treatments. These advances could save lives, spare patients needless psychological distress and save precious resources, creating a more efficient healthcare system.
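To make the idea concrete, here is a minimal, illustrative sketch, assuming a recent version of the PyTorch and torchvision libraries, of the kind of model this work builds on: a stock convolutional network repurposed as a two-class image classifier, of the sort that might be trained on labelled pathology slides or retinal photographs. The data, the labels and the training itself are all omitted; the weights below are untrained and the output is meaningless, which is rather the point.

import torch
import torchvision

# Illustrative only: a stock ResNet-18 backbone with its final layer replaced
# by a two-class head (for example "good prognosis" vs "poor prognosis", or
# male vs female for the retinal-photograph example). The weights here are
# untrained; a real system would be fine-tuned on large sets of expert-labelled images.
model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

# A placeholder "image": one 3-channel tensor at the 224x224 size the network expects.
dummy_slide = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(dummy_slide)
    probabilities = torch.softmax(logits, dim=1)

print(probabilities)  # e.g. tensor([[0.52, 0.48]]) - meaningless until the model is trained

The shape of the approach matters more than the code: the model sees only pixels and labels, so whatever it learns is only as good, and only as biased, as the images and labels it was trained on.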
Secondly, AI linked to big data will be able to point out hidden drug interactions, unveiling benefits or risks that were previously unknown. A drug might, for example, increase the risk of another condition; or it might reduce that risk, and so could be repurposed to treat something it is not currently used for. We could also gain a much better understanding of how one drug compares with a competitor. This knowledge will empower doctors to make better-informed decisions about medications and their potential side effects.
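As a toy illustration of the signal-detection this involves, the short sketch below computes a simple reporting odds ratio from invented counts: how much more often an adverse event turns up in patients taking a drug than in those not taking it. Real pharmacovigilance work adjusts for confounding and draws on far richer data; the basic arithmetic, though, looks like this.

# Hypothetical counts from a prescribing/adverse-event dataset (invented numbers).
event_on_drug = 40         # patients on the drug who had the adverse event
no_event_on_drug = 9960    # patients on the drug who did not
event_off_drug = 100       # patients not on the drug who had the event
no_event_off_drug = 89900  # patients not on the drug who did not

# Reporting odds ratio: odds of the event on the drug divided by the odds off it.
odds_on = event_on_drug / no_event_on_drug
odds_off = event_off_drug / no_event_off_drug
reporting_odds_ratio = odds_on / odds_off

print(f"Reporting odds ratio: {reporting_odds_ratio:.2f}")
# A ratio well above 1 flags a possible interaction or side effect worth
# investigating; a ratio below 1 might hint at an unexpected protective effect.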
Thirdly, AI offers the potential to reduce waste in medicine. The way we currently practise medicine involves the overuse of tests and treatments. Doctors are often reluctant to reduce or stop medications, and AI could help us to know when to do so. Through AI and data analysis, doctors can make informed decisions and allocate resources more effectively.
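A modest, rule-based version of this, well short of anything deserving the name intelligence, is simply flagging prescriptions that have drifted past a review date. The sketch below is hypothetical: the records, the field layout and the 180-day threshold are assumptions chosen for illustration, not any real system’s rules.

from datetime import date

# Invented example records: (patient id, drug, date started, date last reviewed).
prescriptions = [
    ("patient_01", "omeprazole", date(2021, 3, 1), date(2021, 3, 1)),
    ("patient_02", "atorvastatin", date(2022, 11, 15), date(2023, 2, 1)),
]

REVIEW_INTERVAL_DAYS = 180  # arbitrary threshold, chosen purely for illustration

def overdue_for_review(last_reviewed: date, today: date) -> bool:
    """Flag a prescription whose last review is older than the chosen interval."""
    return (today - last_reviewed).days > REVIEW_INTERVAL_DAYS

today = date.today()
for patient, drug, started, last_reviewed in prescriptions:
    if overdue_for_review(last_reviewed, today):
        print(f"{patient}: {drug} last reviewed {last_reviewed}, flag for review")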
Having said all that, we cannot ignore the darker side of AI in medicine.
What are the direct risks from AI?
Firstly, the way it is incentivised. The core driver is problematic. We need a system in which AI is taught with the aim of improving patient outcomes. Instead, everywhere, the focus is on making money. Resources are not being invested in learning new ways to improve diagnosis so that patients have fewer invasive tests and receive only the right toxic therapies; we have a system driven by financial incentives instead. The quickest financial win is to replace staff. Most startups are focused on that, but it brings no benefit to patients, and perhaps the opposite.
Secondly, the use of big data. Any improvements from AI depend entirely on centralised health data. There are advantages to a doctor being able to access a patient’s record whatever part of the country, or specialty, they are in. However, centralised data brings with it the question of who is given access, how, and who makes the rules the gatekeepers apply. With covid we have seen the politicisation of medicine, which leads to biased gatekeeping of these resources.
Moreover, the threat of data privacy and security breaches looms large. As AI relies on massive amounts of sensitive patient data, the misuse or loss of this information could have devastating consequences for patients.
Thirdly, there is a strange belief that artificial intelligence must always be right. It will inevitably have biases of its own, based on what and how it was taught. If doctors are not sceptical enough, AI may be treated as omniscient, and that could result in catastrophic mistakes. It is essential that the hard graft of checking decisions, including with costly clinical trials, continues.
Beyond the direct risks, there is also the risk that we lose human interaction and start outsourcing our ethics.
As AI encroaches on medicine, we must fiercely defend the sanctity of the doctor-patient relationship. The essence of healing lies in empathy, compassion and communication, which no machine can ever replicate. Even with the most advanced AI, doctors must continue to help patients articulate their concerns. Only doctors can conduct physical examinations, preserving the time-honoured tradition of the “laying on of hands”. The last three years have shown us that, even with a real doctor, there are real limitations to Zoom consultations. There are advantages too in some circumstances: a video call can save a patient a lot of wasted time in a waiting room, for example. But it is not a good way to care for people, it is not a good way to really understand a patient’s problem, and it is a terrible way to examine one.
Finally, AI cannot, and should not, be entrusted with ethical decision-making in medicine. Complex moral choices, such as allocating resources for expensive treatments, prioritising organ donations or deciding who receives fertility treatment, must remain in the hands of human beings who own their moral code and accept responsibility for their choices.
In conclusion, the future of medicine hangs in the balance between the promise of AI and the preservation of human interaction. It is our duty to ensure that AI serves the interests of humanity rather than catering to the whims of profit-driven corporations. By directing AI towards patient-centred outcomes, we can ensure that its benefits outweigh its drawbacks, creating a healthcare system that is both innovative and compassionate. By embracing the transformative potential of AI while fiercely defending empathy, compassion and ethical decision-making, we can shape a future in which healthcare is not only technologically advanced but remains anchored in the core values that define our humanity. In this fight, we must not yield to the seductive allure of automation but stand strong in our conviction that human touch and human interaction are, and always will be, an irreplaceable part of medicine.