What can humans do to guarantee an ethical AI in healthcare? (Blog: Part I)

Doctor talking to a patient with data visualisations

A two part feature by Dr Raquel Iniesta exploring the current status of AI in healthcare, what we need to consider to ensure its application is ethical and current approaches that are helping this happen. Part I focusses on the reasons why we need an ethical framework to enable AI to work for healthcare. Part II will focus on how we can establish ethical AI and will be published next week. 

PART 1: WHY DO WE NEED ETHICAL AI?  

When we consider a machine suggesting what to add to our shopping cart or what to watch on TV, we're likely to think of Amazon or Netflix. And along with this the letters AI will probably soon come to mind.  

We are in the Artificial Intelligence era, meaning that AI is almost everywhere and, if it isn’t present in an area, there is surely someone already trying to implement it there. Despite AI’s ubiquity, it still seems shocking to accept that a machine might also be able to recommend a diagnosis or a suitable treatment; that sounds far more serious. There was never a respected professional whose job was to recommend which shoes to buy, or whether to watch an episode of Downton Abbey. On the contrary, there has always been a respected professional whom we trust behind the job of making diagnoses and prescribing drugs: a medical doctor.  

In reality, machines have been helping doctors to perform their duties for many years. Clinicians use a broad range of tools to take measurements: consider a stethoscope or thermometer, or something more sophisticated, such as the scanners used for imaging. With these tools, practitioners established what is called a master-servant relationship: physicians knew how the machine worked, and decided what the machine would do and when. The machines were by no means recommending clinical decisions; they were just producing outputs that needed further translation and interpretation by competent human medical professionals.  

Child sits in front of a lighthouse

Caption: Lighthouses were left unmanned years ago and became totally automatic. Whilst in the past keepers and their families lived in the lighthouse, today no human beings operate them. The buildings are empty and, in many cases, rapidly deteriorating. Medicine cannot become an unmanned lighthouse. For it to shine, human assistance and collaboration with technology is necessary. ©Raquel Iniesta 2023

AI in healthcare: a new model 

The potential implementation of AI in healthcare is leading to a new paradigm. Now human doctors and computers are becoming team-mates, striving to solve common goals: whether it’s diagnosing a patient, anticipating their disease progression or treating them with the most effective drug. The big difference is that a computer equipped with an AI system is capable of suggesting an answer for each of these medical tasks, even without being operated by a human.  

AI systems, and the Machine Learning algorithms that underpin them, can learn from existing data: for example, they can learn which profiles of patients responded well to a certain drug. Then, given a new patient, the system performs a comparison to see if that person is “similar” to any previously analysed profile, and selects the best treatment based on the responses to the drug recorded for “similar” people.  

However, deciding whether two people are similar is an enormous challenge that raises philosophical questions. Data scientists try to address it by collecting a tremendous amount of patient data. This includes demographic data such as age and gender; clinical data such as the severity of the disease; the treatments the patient is or was under; their medical history; biological data such as genetics; and monitoring data collected through wearable devices. The hypothesis is that if two people share a lot of similar data, then these two people are similar and so can be treated or diagnosed in a similar way. However, we are also complex beings who are not made of just codifiable data; let’s tackle this question later on in Part II. 
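To make the idea of “similarity” concrete, the reasoning above can be sketched as a nearest-neighbour comparison. This is a deliberately simplified toy, not a clinical tool: all patient profiles, feature choices and outcomes are invented for illustration, and a real system would need many more features, careful feature scaling and rigorous validation.

```python
from math import dist

# Toy patient profiles: (age, symptom severity score, years since diagnosis),
# paired with whether each patient responded well to a given drug.
# All values are invented for illustration.
previous_patients = [
    ((34, 2.0, 1.5), "responded"),
    ((61, 4.5, 8.0), "did not respond"),
    ((29, 1.5, 0.5), "responded"),
    ((58, 4.0, 6.0), "did not respond"),
]

def predict_response(new_profile, patients, k=3):
    """Predict by majority vote among the k most 'similar' past patients,
    where 'similar' here simply means small Euclidean distance."""
    nearest = sorted(patients, key=lambda p: dist(p[0], new_profile))[:k]
    outcomes = [outcome for _, outcome in nearest]
    return max(set(outcomes), key=outcomes.count)

# A new patient whose profile resembles the younger, milder cases:
print(predict_response((30, 1.8, 1.0), previous_patients))  # → responded
```

Notice that the entire notion of “two people being similar” is compressed into one line: the distance function. Choosing which data to compare, and how to weight it, is exactly where the philosophical and ethical questions discussed in this piece enter.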

AI in healthcare

Caption: The different types of data used in developing machine learning algorithms and models

AI still needs humans 

AI systems rely on human beings who envisage, design, develop and implement them.  

No AI system would exist without human knowledge. However, as algorithmic power grows, there are reasonable concerns that AI could surpass human intelligence. Furthermore, there are serious risks that AI tools could become the basis for replacing humans (medical professionals, in this context), which could lead to dehumanisation in medicine. In an extreme situation, machines fed with huge amounts of patient data could be making diagnoses or recommending treatments without the supervision of a competent human, or human patients could end up dealing with chatbots and care robots instead of human medical professionals.  

Dehumanisation in medicine would be a consequence of the lack of human presence and of deindividuating practices. This would risk a reduction in empathy, as only humans are conscious of the patient’s situation, can develop empathy with other human beings, and fully understand the environment and context. Indeed, these are essential factors in meeting the global moral imperative of the medical profession, which states that “each patient must be treated as a person” to preserve human dignity.  

Moreover, if machines replace medical professionals, this would also risk the disempowerment of both patients and clinicians. Patients would not be listened to about their suffering, which would represent a significant risk to their recovery. As mentioned earlier, patients are not just made of data, and their experience deserves to be heard by another human being who can understand and empathise with them. 

Subjectivity is also essential  

It is well known that including patients’ subjective experience in medical decisions is essential to achieving a good response to treatment. This idea sets the basis of patient-centred and evidence-based medicine, as well as the biopsychosocial model of disease. These three approaches have been highlighted by national and international bodies (the US Institute of Medicine, the World Health Organisation) as crucial for medical institutions and professionals to ensure quality care and patients’ welfare.

Essential elements of these approaches suggest that:  

  • The patient should be placed at the center of the medical stage.  
  • Patients should become active participants in their care and decisions about their health should be shared with the clinician.  
  • Patients should have time in their doctor’s appointment to express their concerns, as this is crucial for their health improvement. 

As explained in Sackett’s training manual on evidence-based medicine, “the unique preferences, concerns and expectations each patient brings to a clinical encounter must be integrated into clinical decisions if they are to serve the patient”. Patients are not just made up of interconnected systems, a vision that would risk objectification and dehumanisation. The interaction with social and psychological dimensions should also be considered, not only the biological dimension represented by data measurements.  

But would an AI system ever be able to accomplish this? AI medical tools can hardly listen in the way humans can, or incorporate the subjective experiences of patients into their automatic decisions, even though significant effort is being invested in the field. Even if chatbots such as ChatGPT can show apparently conscious behaviour in conversation, this is not spontaneous or intelligent behaviour, but a task learnt from existing patterns and performed unconsciously. It will be hard, if not impossible, for AI algorithms to capture the holistic human essence. 

Remembering the value of non-artificial intelligence 

In order to achieve high-quality care it is essential to recognise human presence. As such, Artificial Intelligence should complement the non-Artificial Intelligence of medical professionals.  

Doctors have the potential to know the facts, the science behind the facts, the patient’s context, and their own clinical skillset better than AI does. Doctors capture information through their five senses, and that will never be recorded in full in Electronic Health Records. Clinicians have the potential to listen to their patients and establish a human relationship with them, so that real person-centred care can be delivered and decisions shared with empowered patients. Clinicians apply explicit knowledge, the knowledge of textbooks, but also tacit knowledge, which is related to medical intuition and deepens with experience.  

AI systems cannot benefit from that knowledge, as it cannot be codified. Nevertheless, AI has already demonstrated superiority in a number of medical tasks, including impressive performance in imaging. Although the predictive capabilities of Machine Learning have yet to be proven in some scenarios, AI has easily outperformed humans in producing predictions where many data points were available for each patient. Even so, these predictions still benefit when a human steps in to check that they make sense and to apply their own understanding.

Clearly, AI already has a place within our healthcare systems, and naturally there are concerns about how it will be used and what that will mean for patients. What is becoming clear is that for AI in healthcare to provide value to patients, clinicians and healthcare systems, it must be applied ethically, and this involves humans.  

In Part II of the blog on Ethical AI in healthcare, Raquel Iniesta will explore how we can achieve an ethical approach and the necessity of fostering respectful relationships between all those involved.   

Raquel is a Senior Lecturer in Statistical Learning for Precision Medicine and leads the Fair Modelling Lab at the Institute of Psychiatry, Psychology & Neuroscience. Her work is supported by the NIHR Maudsley BRC and she is part of our Prediction Modelling Group.


Tags: Informatics - South London and Maudsley NHS Foundation Trust

By NIHR Maudsley BRC at 24 Oct 2023, 10:05 AM
