AI & Robotics in healthcare
AI’s potential must be tempered by freedom from bias and protection of privacy. Roibeard Ó hÉineacháin reports
Artificial intelligence and robotics have the potential to completely transform healthcare, but there are a number of caveats that require consideration as the new technologies are increasingly introduced into medical practice and into patients’ lives, according to Marco Lorenzi PhD, Epione Research Group, Inria Sophia Antipolis, University of Côte d’Azur.
“The speed at which AI products have gained FDA approval and entered the market has doubled over the past two years. In fact, the AI market is projected to be worth hundreds of billions of dollars by 2026,” Dr Lorenzi told the 14th European Glaucoma Society Congress.
The COVID-19 pandemic is further accelerating the AI revolution because of its potential contribution to telemedicine and drug discovery. The use of robots to deliver care in hospitals is also under development as a means of alleviating the pressure on hospital personnel and reducing the risk of infections. AI is also being used for case finding through smartphone contact-tracing apps.
The new applications of AI most relevant to glaucoma management primarily involve radiology and image analysis, the areas where deep learning is most applicable. Dr Lorenzi noted that he and his associates are developing software that harnesses AI to analyse and integrate highly complex data in order to predict visual function scores from optical coherence tomography (OCT) imaging. AI can also be used in combination with special attachments for smartphones as a form of portable fundoscopy.
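In schematic terms, predicting a visual function score from OCT measurements is a regression problem. The sketch below is a hypothetical, minimal illustration of that idea, not Dr Lorenzi’s actual method: it fits a ridge regression mapping synthetic sector retinal nerve fibre layer (RNFL) thicknesses to a synthetic visual-field mean deviation (MD). All data, feature choices, and coefficients are invented for illustration.

```python
import numpy as np

# Synthetic cohort: 200 eyes, 6 RNFL sector thicknesses (µm) per eye.
rng = np.random.default_rng(42)
n_eyes, n_sectors = 200, 6
rnfl = rng.normal(loc=90, scale=15, size=(n_eyes, n_sectors))

# Invented ground-truth relation: MD (dB) depends on the sectors plus noise.
true_coef = np.linspace(0.02, 0.08, n_sectors)
md = rnfl @ true_coef - 6 + rng.normal(scale=1.0, size=n_eyes)

# Ridge regression in closed form: w = (X^T X + lam*I)^(-1) X^T y,
# with an intercept column appended to the feature matrix.
X = np.column_stack([rnfl, np.ones(n_eyes)])
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ md)

# How well do the predicted scores track the "measured" ones?
pred = X @ w
r = np.corrcoef(pred, md)[0, 1]
print(f"correlation between predicted and measured MD: {r:.2f}")
```

Real systems of this kind replace the linear model with deep networks trained on raw OCT volumes, but the structure of the task, imaging features in, functional score out, is the same.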
This new reality brings several challenges that have yet to be solved. For example, most potential applications of AI and robotics in healthcare have not been adequately validated for adoption into general use, Dr Lorenzi noted. He cited a study showing that robotic laparoscopic hernia repair achieved the same outcomes as standard surgery, but with longer operating room time and higher cost.
BIAS IN TRAINING DATA
Another major issue facing AI is bias in the training data. A highly publicised example was Amazon.com Inc.’s finding that its automatic recruitment algorithm was preferentially selecting men’s curricula vitae (CVs) because it had been trained on a data set composed mostly of men’s CVs and therefore identified being male as a positive feature.
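The mechanism behind such bias can be shown with a toy experiment. In the invented data below, historical hiring decisions depend partly on gender, so a naive model fitted to those labels assigns gender a clearly positive weight, even though only skill should matter. Everything here is synthetic and only loosely analogous to the Amazon case.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
skill = rng.normal(size=n)            # the attribute that should matter
is_male = rng.integers(0, 2, size=n)  # the attribute that should not

# Biased historical labels: hiring favoured men, independent of skill.
hired = (skill + 0.8 * is_male + rng.normal(scale=0.5, size=n)) > 0.5

# Naive least-squares fit of hired ~ skill + is_male (linear probability model).
X = np.column_stack([skill, is_male])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)
print(np.round(w, 2))  # the gender coefficient comes out clearly positive
```

The model has faithfully learned the pattern in its training data; the problem is that the pattern itself encodes past discrimination, which is exactly why curation and auditing of training sets matter in healthcare.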
Finally, there are many privacy and security issues involved in the use and sharing of big data, as well as ethical questions around providing or selling patients’ data to third-party AI companies. Dr Lorenzi noted that he and his associates are working on a new AI paradigm, called federated learning, that allows the sharing of knowledge while preserving privacy.