AI meets ethics: should we be worried?

It’s a very exciting time for artificial intelligence, but the ethics of transparency and interpretability pose a difficult challenge to its progress. When will AI overtake the human race, and can we trust it to navigate unexpected challenges reliably?

Dr Adrian Weller, Programme Director for Artificial Intelligence at The Alan Turing Institute, gave a compelling talk this afternoon at Hay on the intricacies of AI: the ways in which we may come across it online (via facial recognition and computer vision, for example), its advances and its limitations.

“AI presents great hope for society…but there are also important concerns,” he said.

When we put our trust in an unknown source, we are vulnerable. Essentially, AI raises important questions about our privacy and the data we generate. “We need to be careful because sometimes we see the world through a digitalised lens, controlled by companies,” said Weller.

And at the core of knowledge engineering are two key ethical issues: transparency and interpretability. How we interpret AI is personal to each of us, yet transparency means something different to a developer than it does to a consumer. There is a risk that companies could use explanations as a method of manipulation, Weller told his audience. “There’s something about human nature that if we have an explanation, we’re more likely to go with it,” he said.

Weller came to the worrying conclusion that, although we are still far from general AI, there’s been a rapid increase in algorithmic systems directly impacting our lives.

“By thinking about how to improve AI systems, we can also sometimes see how to improve ourselves,” he said.

If you’re interested in artificial intelligence, please also see Event 346 at 5.30pm on Saturday 1 June. If you like watching Hay Festival events digitally, please sign up to the Hay Player for more from the world’s greatest thinkers.

Picture by Morgan Williams