Ethics and artificial intelligence
The need for an ethical approach to AI applications highlighted
With artificial intelligence (AI) applications affecting practically every facet of daily life – including healthcare and ophthalmology – the need for robust ethical guidelines to govern their use has never been greater, according to Barry O’Sullivan PhD.
“As Steven Croft, the Bishop of Oxford, has said: ‘This is a critical moment. Humankind has the power now to shape AI as it develops. To do that we need a strong ethical base: a sense of what is right and what is harmful in AI’,” he told delegates attending the 37th Congress of the ESCRS in Paris, France.
Prof O’Sullivan, Professor of Constraint Programming at University College Cork, Ireland, said that while AI had immense potential for making a positive contribution to human productivity and well-being, it also carried significant risks of which everyone should be aware.
“Because AI is so powerful, there is a massive responsibility that comes with it. It is a technology we should not anthropomorphise, as in Mary Shelley’s Frankenstein. Essentially, AI can do no more than what we train or teach it to do. So we need to understand its key limitations, especially in a field such as ophthalmology, where we are dealing with people’s health and possibly even their lives,” he said.
Prof O’Sullivan emphasised that AI is not really about emulating human intelligence but should be considered more as a set of techniques that perform specific tasks in a narrowly defined area.
“Based on a dataset, an algorithm can be built that performs as well as a human expert, but the disadvantage is that the AI application will know nothing at all about the domain in which it is working. So while it might be able to screen for retinal disease, it knows nothing about the concept of an eye or a human being,” he said.
Clinicians also need to be aware of the potential for bias when developing an AI application.
“One can influence the machine by not selecting appropriate training examples or by omitting certain details, and this can have a devastating impact when the machine is deployed,” he said.
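The kind of bias Prof O’Sullivan describes can be illustrated with a small, purely hypothetical simulation (the “retinal score” feature, the group labels and all numbers below are invented for illustration, not taken from any real screening system): a decision threshold tuned on one patient group can perform far worse on an under-represented group whose baseline measurements differ.

```python
import random

random.seed(0)

def sample(baseline, diseased, n):
    # Synthetic "retinal score": disease adds +2.0 to a group's baseline value.
    return [random.gauss(baseline + (2.0 if diseased else 0.0), 0.5) for _ in range(n)]

# Group A (well represented): baseline 0.0.
# Group B (omitted from training): baseline shifted to 1.5.
a_healthy, a_sick = sample(0.0, False, 500), sample(0.0, True, 500)
b_healthy, b_sick = sample(1.5, False, 500), sample(1.5, True, 500)

# "Train" only on group A: place the threshold midway between its class means.
threshold = (sum(a_healthy) / len(a_healthy) + sum(a_sick) / len(a_sick)) / 2

def accuracy(healthy, sick):
    # Scores below the threshold are called healthy; at or above, diseased.
    correct = sum(x < threshold for x in healthy) + sum(x >= threshold for x in sick)
    return correct / (len(healthy) + len(sick))

print(f"accuracy on group A: {accuracy(a_healthy, a_sick):.2f}")  # high
print(f"accuracy on group B: {accuracy(b_healthy, b_sick):.2f}")  # markedly lower
```

Because group B’s healthy baseline sits near the threshold learned from group A, many of its healthy patients are flagged as diseased: the model is not malicious, it has simply never seen that group.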
He encouraged delegates to look at the work of the European Commission’s High-Level Expert Group on Artificial Intelligence, which has produced an ethics code for AI in Europe. The code emphasises the need for a human-centric approach that will give rise to “trustworthy” AI, which incorporates key criteria such as accountability, data governance, human oversight, non-discrimination, respect for human autonomy, privacy, robustness, safety and transparency.
Prof O’Sullivan said that interested parties could also contribute directly to the AI debate by participating in the European AI Alliance, a forum to discuss all aspects of AI development and its impacts and to provide input that informs future policy.