The robot doctor will see you now
Can’t imagine telling your symptoms to a robot doctor and then following its advice?
Britain’s National Health Service is trialling a chatbot that gives advice on how urgent your condition might be. If the chatbot decides you’re sick enough, it might put you through to a virtual doctor via video on your smartphone. Wait time: only six minutes.
Sometimes, the advice amounts to telling the patient to have a good lie down.
So far 26,000 north London patients have joined the NHS trial and another 20,000 are on a waiting list. The NHS hopes the chatbot is a cheaper, better alternative to the expensive telephone triage hotline that stands accused of giving dodgy advice and hanging up on callers.
Would you trust a robot doctor?
The trial is part of a service run by a big-talking company called Babylon Health – whose ambition is to deliver cheap healthcare to the world by cutting back on the use of human doctors.
Babylon Health – which employs 250 doctors who work virtually, mainly from home – claims to have more than 2 million private subscribers paying £5 ($9) a month.
The NHS is paying Babylon about $US80 ($109) per patient in the trials. If the chatbot can eventually give reliable diagnoses, write prescriptions and make specialist referrals – as Babylon is proposing for the future – then those public costs should fall substantially.
The implications here are immense. What’s happening in Britain – which is investing millions of pounds into AI to shore up the NHS’s lagging performance – is a test case for public health systems worldwide. Governments are under pressure to find ways to cut healthcare costs and at the same time lift productivity to meet the demands of growing populations. AI is the big hope.
The closest thing Australia has to the Babylon chatbot app is Ada, a free European diagnostic app that, in the fine print, doesn’t claim to make diagnoses. It started out as a tool for doctors – who have reported mixed results – and was then marketed to patients of a DIY bent. It boasts 4 million users globally.
When it launched in Australia in 2016, the Australian Medical Association and the Royal Australian College of GPs voiced concerns about the system’s accuracy and its potential to either alarm or falsely reassure people about their health.
So how is Babylon performing? The company says the trial has led to a downturn in patients at Accident and Emergency departments (routinely crowded with people wanting to see a doctor for free). However, the British Medical Journal has reported a tripling of hospital activity – and a spike in costs – in one area because of chatbot referrals.
‘‘This AI Just Beat Human Doctors On A Clinical Exam’’ was a typical headline (from Forbes) following a glitzy Babylon showcase in June at London’s Royal College of Physicians. The gathering was told that the average pass mark for the national GP exam, which tests diagnostic abilities of trainees, has been 72 per cent for five years. The chatbot scored 82 per cent.
Doctors argued that Babylon had ‘‘cherry-picked’’ questions from the exam and that much of a diagnosis comes from a patient’s history, not just symptoms.
A month later, following claims by British specialists that the chatbot was failing to spot serious, even life-threatening conditions, The Financial Times repeatedly tested the chatbot with classic symptoms of a heart attack. In nine out of 10 attempts, the chatbot suggested a panic attack.
Babylon founder Ali Parsa, a former Goldman Sachs banker, was scornful: the doctors who had complained to the newspaper were ruled by ‘‘self-interest’’. Even so, some of the wording about the chatbot has since been amended on the company website.
Meanwhile, the public-funded trial is to be expanded.
This amounts to a line in the sand: rather than investing public money in more doctors at the coalface, the cash is going to a tech company whose stated aim is to get rid of as many doctors as possible and whose technology is apparently not yet up to speed – and is not subject to regulatory oversight.
The rise of AI in diagnostic medicine.
Welcome to the new world. AI in healthcare is moving so quickly that many research breakthroughs and clinical uses of the technology fail to attract media attention.
In the past two years, there have been a series of spectacular claims about AI diagnostic algorithms matching or outperforming specialists in spotting cancers, difficult fine-line fractures, impending heart failure and eye disease. Australian researchers have been responsible for some of these breakthroughs but they tend to keep their achievements in sober clinical perspective.
Brisbane start-up Maxwell MRI has an AI tool capable of detecting prostate cancers in medical imaging. Maxwell MRI chief technology officer and co-founder Elliot Smith, currently in the US, confirmed to Fairfax Media via email that the company is taking its AI-powered detection of breast cancer, lung disease and neurodegenerative diseases – based on deep-learning technology – into the clinical space by the end of the year.
‘‘We will be doing a limited rollout to key sites this year while we do our regulatory approval … with a full-scale rollout in 2019.’’
A number of Australian doctors and researchers told Fairfax Media that AI will make their lives easier by taking over grunt work – counting dots on pathology slides, for example. While they welcome AI algorithms that can augment and provide back-up in the diagnostic process, they resist the idea that machines will routinely make a diagnosis without human oversight.
But already, some of these technologies are going into developing countries to stand in for doctors who simply aren’t there. For example, Google is trialling a device that diagnoses diabetic retinopathy, a condition that causes millions of people in India to go blind.
In April, the FDA approved a similar device for marketing in the US – the first time a machine has been cleared to make a serious medical decision without a human involved.
Dr Brent Richards is the medical director of innovation and director of critical care research at Gold Coast Hospital and Health Service. He ‘‘writes a bit of code’’ and has a number of AI projects on the boil. He says that the Third World can’t wait for a long debate about AI’s role in medicine, be it a chatbot or a diagnostic tool.
‘‘AI tools need to be put out in a measured way. Too early and yes there will be issues, but too late and you harm a lot of people. The [Babylon] chatbot needs to be assessed against what would have happened otherwise: in a Third World country, you would have had no phone call at all. People in the Third World say we’d rather have something that has a five per cent error rate than not getting any advice at all.’’
The point being: ‘‘In some ways AI is the new penicillin in the way it’s changing medicine … and the sector has got such a head of steam that it’s got to the point where some people are having to beat away venture capitalists with a stick.’’
No doubt Western healthcare systems will be watching the Third World trials with interest. The Lancet recently published an article bemoaning a worldwide shortage of pathologists, amid industry speculation that the shortfall could be met by AI. That, however, would require a major investment in digitalising pathology slides to create the vast data sets from which AI algorithms are taught to recognise diseases.
But digitalising pathology slides adds another step to an already human-intensive process.
Says Monash University pathologist Professor Ruth Salom: ‘‘There was a big hope that pathology departments would digitalise their images, but pathology companies won’t do that because it’s faster for pathologists to look down the microscope than for me to spend … 15 minutes to digitalise each slide. So it’s more time, more costs, extra steps and you save nothing. That’s where we are at the moment with digitalisation.’’
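For readers curious what ‘‘teaching’’ an algorithm from digitalised slides actually involves, the sketch below shows one common approach: fine-tuning a pretrained image-recognition network on labelled slide patches. It is a minimal illustration only – not the code used by Babylon, Maxwell MRI or any of the researchers quoted here – and the folder layout and tumour-versus-normal labels are assumptions made for the example.

# A minimal sketch (assumed paths and labels, not any company's actual system):
# fine-tune a pretrained image-recognition network to label digitalised
# pathology-slide patches as tumour or normal.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing: resize each patch and normalise with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Folder-per-class dataset of labelled patches, e.g. slide_patches/train/tumour
# and slide_patches/train/normal (hypothetical directory layout).
train_set = datasets.ImageFolder("slide_patches/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos and replace its final
# layer with a two-class head: tumour vs. normal.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One pass over the labelled patches; real systems train for many epochs and
# are validated against held-out slides read by pathologists.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In practice, the expensive part is exactly what Salom describes: producing the labelled, digitalised slides that a training loop like this takes for granted.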
Will a future generation of AIs start replacing pathologists and radiologists?
Salom has shown that a metastatic breast tumour in a lymph node can be detected in microscopy images using artificial intelligence. The detection rate ‘‘compares favourably to a human’’ – but she says AI diagnostics have a long way to go before they become a threat to jobs.
She makes the point that radiology, which relies on simpler X-ray images, will probably see a quicker uptake of AI diagnostics.
Brent Richards referred Fairfax Media to an article from the Radiological Society of North America that bemoans a rapidly increasing workload and burnout among American radiologists, who are among the highest-paid medical practitioners in the country. There, the AI alternative looks not only attractive but inevitable.
Dr Luke Oakden-Rayner is a radiologist and medical researcher at the University of Adelaide. He developed an AI algorithm that can predict with good accuracy how long a patient will live – although he cautiously describes the experiment as a proof of concept.
He has a paper under review about an AI project that diagnoses fine hip fractures of a kind that radiologists miss 10 per cent of the time at first inspection. He’s an AI enthusiast. But he warns that the roll-out of various AI platforms into the mainstream is being done without sufficient clinical trialling.
Human oversight may be critical to alleviate safety concerns.
He says the Babylon chatbot is the first AI health technology to put lives at risk – simply because there was no human oversight to catch mistakes, and because it hadn’t been put through any regulatory process. ‘‘The NHS put this into use without public clinical testing and there are doctors saying that it doesn’t seem to be safe.’’
Dr Oakden-Rayner writes an AI-focused blog (the current story is titled Medical AI Safety: We have a Problem) that has attracted a good following. Every month, he gets a dozen or so emails from students worried about going into radiology or pathology (areas particularly prone to machine take-over) or even medicine at all, because AI will take their jobs.
He feels they are jumping the gun.
Students see these reports about cancers and other conditions being diagnosed quickly and efficiently by machines, he says, and think, ‘oh well, there’s no point me going into that field’.
‘‘I try and tamp that down a bit. Because the headlines are not really a reflection of where the field is at the moment.’’
He believes that a significant portion of medical work could be amenable to AI technology. ‘‘Fifty to 90 per cent of our tasks could be performed automatically,’’ he says.
‘‘People make this distinction between whether we’re being supported by it or replaced by it. It’s probably not a reasonable distinction to make: if it’s doing significant tasks, you have more time to do other things, which really means you need less doctors to do the same amount of work. Being supported by AI is the same thing as being replaced in my mind.
‘‘If that happened overnight, medicine would collapse, no question. There’d be no doctors. The barrier in the way of that is these things take time.’’
What will medicine in the future look like?
Much depends on how quickly AI diagnostic technology is proven to be reliable – and the short-term job losses are more likely to be in AI ventures that have bet too heavily, too quickly.
‘‘These technologies follow exponential curves,’’ he says. ‘‘They speed up over time in how much they change things. At the moment, we’re at the shallow end of the curve … in 10 years it will be unrecognisable.
‘‘Over the next couple of years we’ll probably see a couple of major start-ups collapse because they made promises they won’t be able to keep and what they’re doing isn’t valuable enough.’’