I, Doctor Robot

I have a confession to make.

In my last few years of working as an NHS hospital doctor, I became convinced robots would make better doctors than human beings. In part, this followed from guilt at the very human failings that doctors are frequently criticised for: feeling tired and grumpy, feeling threatened by well-informed patients, feeling annoyed by obstructive colleagues. Computers, I thought, don't get tired, angry, or hungry.

And that’s without even considering the incredible ability of artificial intelligence (AI) to interrogate large data sets far faster than any human being, identifying hitherto unseen patterns that could revolutionise healthcare. Already, algorithms outperform human doctors at diagnosing skin cancer, lung cancers, and pneumonia, at predicting heart attacks, and can even predict whether you’re likely to be alive or dead in the next year. Deep learning has cut the time taken to screen thousands of molecules for potential drugs from a few months to a day, and has already identified candidates for the treatment of Ebola and multiple sclerosis.

Perhaps success at diagnosis is unsurprising. It’s been estimated that it would take at least 160 hours of reading a week just to keep up with the publication of new medical knowledge, so computers were always going to win an information race. But algorithms are now also taking their first steps in treatment. In Lausanne, a smart harness performs a role that would otherwise require several physiotherapists, guiding patients recovering from strokes and spinal injuries as they learn to walk again.

When practising, I often wished I had much more time to discuss investigation results and their implications with patients. An algorithmic companion, by taking over routine administrative tasks, might have freed me to do exactly that. Perhaps more controversially, the use of software may also be cheaper than employing more doctors, nurses and therapists – although this might be less controversial when you consider the long-term shortage of practitioners in most healthcare systems worldwide.

So when are we going to hear “Mrs. Bloggs, Dr. Robot will see you now” ringing through GP waiting rooms all over the country? Well, it’s complicated.

AI ‘learns’ by being trained on large amounts of data. In healthcare, this means sensitive medical information, which immediately elicits an emotive response. There’s a qualitative difference between the data used to train a diagnostic algorithm and that used to train, say, an autonomous vehicle. The care.data debacle reminds us of the perils of failing to bring the public fully onside where their data is concerned.

In my experience of recruiting patients to clinical studies, I was frequently humbled by the sense of pride that came with ‘helping other people’. ‘Donating data’ should be the same as donating time or tissue to research. Clinical researchers ensure that trial participants know exactly what taking part entails; the same focus on truly informed consent should apply to patient data use. DeepMind’s statement in response to the Information Commissioner’s ruling on their data sharing agreement with the Royal Free is encouraging in this respect.

Technology is biased. Just ask the black defendants in the United States who were more than twice as likely as white defendants to be mislabelled as likely to reoffend. Bias is a consequence of flawed, unrepresentative training data; the fact that the vast majority of AI innovation is being carried out by white men operating out of Silicon Valley doesn’t help. The ‘black box’ problem – namely, that neural networks have become so complex that it is difficult to understand the reasons for their outputs – could hamper our ability to question and challenge algorithmic decisions. Progress is being made – proposed solutions include the provision of counterfactuals, or the development of ‘explanation systems’ in parallel with the AI algorithm – but for patients and healthcare practitioners to truly trust these algorithms, a much better understanding of how they work and of their limitations is urgently needed.
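To see how unrepresentative training data produces bias, consider a deliberately artificial sketch (in Python with scikit-learn; the ‘risk marker’, group labels and group shift are all invented for illustration, not drawn from any real dataset). A single model trained on data dominated by one group learns that group’s decision threshold, and quietly fails the minority group:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # One (invented) risk marker; the outcome threshold differs by group.
        x = rng.normal(0.0, 1.0, size=(n, 1))
        y = (x[:, 0] + shift + rng.normal(0.0, 0.5, size=n) > 0).astype(int)
        return x, y

    # Training data: 95% group A, 5% group B - unrepresentative by construction.
    xa, ya = make_group(950, shift=0.0)
    xb, yb = make_group(50, shift=1.5)
    model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    # Test on equal-sized fresh samples: the pooled model fits group A's
    # threshold, so it systematically misclassifies part of group B.
    for name, shift in [("A (well represented)", 0.0), ("B (underrepresented)", 1.5)]:
        x_test, y_test = make_group(2000, shift)
        print(f"group {name}: accuracy {model.score(x_test, y_test):.2f}")

Note that the model’s headline accuracy looks respectable, because the well-represented group dominates any pooled evaluation – which is exactly why per-group testing of clinical algorithms matters.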

These issues acquire a whole new level of importance when you consider healthcare. An erroneous treatment recommendation as a result of an algorithm failing to take a patient’s ethnicity into account (perhaps because the training dataset was enriched with data from one particular ethnic group only) may lead to serious harm. Likewise, the ‘right to know’ – that is, the right to understand how and why an algorithm provided a particular output – is crucial in health. Patient autonomy is absolutely key and should not be compromised in any way by these technologies.

Other principles of medical ethics are being challenged by the brave new world AI is ushering in. Will these technologies be accessible to everyone, across all of society, as equitably as possible, or will they predominantly cater to young, affluent, ‘less complex’ patients, leaving the NHS to deal with frail, elderly patients with chronic conditions? Is using algorithms to determine prognosis a triumph of technology, or an abdication of responsibility by doctors? And what about the ethics of not using massive healthcare datasets as effectively as possible, to improve patient care?

I’m heartened by the fact that these issues are being discussed more openly, and in wider groups than just technology and policy circles. But the technology is progressing at incredible speed, and Jeremy Hunt, Secretary of State for Health, has repeatedly stated his aim to “[root] our healthcare services firmly in the digital age”, including making big changes to how healthcare data is used and shared in the NHS by next year, the 70th anniversary of the NHS. With things moving so fast, much more needs to be done to identify, understand and mitigate the ethical problems of deploying AI in healthcare, not least by doctors and nurses themselves. It is also essential to include the views of those who will be affected most by these innovations – patients and their relatives. It is only with a truly inclusive approach that AI can be deployed ‘for good’ in healthcare. Anything less risks public rejection and, ultimately, failure.


Matt Fenech