AI in health: icing or sponge?

It’s become a truism to say that the field of artificial intelligence (AI) is moving fast, but the pace of change in thinking around AI in health has been breathtaking. Since my last blog on this in November, we’ve had a stream of policy reports, including Future Advocacy’s collaboration with the Wellcome Trust on ‘Ethical, Social, and Political Challenges of AI in Health’, a briefing note by the Nuffield Council on Bioethics, an analysis by Reform of the opportunities and risks for the UK National Health Service (NHS), and many others. The report by the House of Lords Select Committee on Artificial Intelligence – a thoughtful and comprehensive review of the policy challenges around AI – has a whole chapter dedicated to the specific implications for healthcare. In parallel, more health chatbots are coming to market, and the US Food and Drug Administration (FDA) has permitted the marketing of the first AI-based medical device – IDx-DR, an algorithm that detects diabetic retinopathy in adults with diabetes. Health Education England has asked Professor Eric Topol to lead a review into the impact of AI on the NHS workforce, and UK Prime Minister Theresa May has made the use of AI in health a central plank of her Government’s Industrial Strategy, announcing millions of pounds of funding to develop AI that improves the diagnosis of cancer and chronic disease. AI in health is here.

How should we view these rapid advances? As always with an emotive topic like healthcare, opinion is divided. Responses to Theresa May’s announcement on Monday were decidedly mixed. Some applauded this forward-thinking approach by the UK Government. Others – many of them healthcare practitioners – decried the lack of evidence for these technologies, and expressed concern that focusing too much on ‘shiny new things’ will divert attention from the pressing staffing and funding problems faced by the NHS. In one baking-inspired exchange on Twitter, a clinician suggested that we need to get the “sponge” of healthcare right before focusing on the “icing and decorations”. So which is it? Is AI icing, or part of the sponge? And more broadly, how should healthcare professionals respond to this rapid pace of change?

There are encouraging signs. At a recent meeting organised by the Royal College of Radiologists, I was struck by how many clinicians with no links to the AI world were present. The conversation kept returning to how these technologies could be applied to real-world clinical problems, to tackle the real issues patients are facing. We emphasise this in our recent report: development must be needs-driven – technologists and AI developers need to fall in love with the problem, rather than falling in love with the solution (AI or some other technology) and trying to apply it everywhere. And the only way technologists can truly understand the problem is by immersing themselves in the clinical world, and by involving patients and healthcare practitioners from the very beginning of the development of these tools. Initiatives such as the partnership between University College London Hospital and the Alan Turing Institute are highly encouraging.

Even better, speakers frequently referred to the need for clinical efficacy to be proven before the algorithms are released ‘into the wild’. To me, this is so obvious it seems almost pedantic to say, but conversations with a range of interested parties have made me realise that there’s a culture clash lurking in the background here – between the disruptive, ‘move fast and break things’ culture that perhaps characterised the early days of Silicon Valley, and the slow, safety-first, highly regulated world of drug and medical device development. Whether or not algorithms are the new drugs, surely an adaptation of the systems that healthcare practitioners and patients trust to bring safe drugs to market should be applied to the use of AI. Requiring high regulatory standards for these tools – whether evidence from clinical trials or from action research, for example – should allay perfectly justified concerns about the lack of evidence for their use.

Moreover, there’s another angle to this. If we want all our medical practice to be evidence-based, then we need to better understand the insights that healthcare data can provide, and it’s undeniable that we’re not yet using this fantastic resource to maximum effect. AI can help with this, by providing data-analysis tools of far greater scale and speed than we’ve seen before. But in order to do so, we need to overcome the ‘data siloing’ problem that exists within many healthcare systems. The theoretical cradle-to-grave dataset held by the NHS will remain just that – a wonderful theory – unless we can get data and computer systems in various parts of the NHS to talk to each other seamlessly. There are great examples of initiatives along these lines, but we need a much more joined-up approach if we’re going to make real inroads here.

There’s a real opportunity for professional bodies in healthcare to take a lead here. Whether it’s in setting standards around the ethical use of healthcare data, or figuring out the best way to talk to healthcare professionals about these technologies while avoiding both hype and doom-mongering, bodies like the medical Royal Colleges are well placed to develop and disseminate guidance. Our report also identified a significant gap in our knowledge of what patients and the public want from these technologies. Healthcare leaders need to build on the excellent work started by bodies like Understanding Patient Data and the Royal Society, and leverage the close, trusting relationship they have with patients to truly empower them.

But I’d urge healthcare organisations to be even bolder. We have agency in how we use these technologies – AI is neither inevitable nor predestined. We need to start the conversation now on what the ‘healthcare professional of the future’ looks like, painting a positive picture of all the benefits these technologies can bring while being aware of the steps we need to take to mitigate their risks. Having a positive future to aim for will allow us to better prepare medical and nursing students for the future of health, and will guide decisions on postgraduate training. More importantly, it will pave the way for greater public participation in the development of these technologies, and ultimately – if we get it right – greater public acceptance.

So which is it? Is AI icing or sponge? As another contributor to the lively discussion on Twitter put it, neither: AI needs to become the ‘eggs and flour for the sponge’ – an integral part of everything a data-driven healthcare service does, improving decision-making by professionals and freeing them up to spend more time with patients and their relatives. But we’re not there yet – we need serious conversations about how to tackle the various challenges these technologies present. Future Advocacy is delighted to be a part of this conversation. Especially since we get to talk about cake.

 

Matt Fenech