Ethics in AI Research Seminar on 25th May at 1 PM (BST)

Taking Responsibility for AI

Presenter: Max Kiener

Abstract: Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, since the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some AI-caused outcomes, which would then result in a 'responsibility gap'. In this presentation, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by communicating the intention to do so, just as people can give permission for something (via consent) simply by communicating the intention to do so. So understood, taking responsibility would be a genuine normative power.

I first discuss and reject the proposal of Champagne and Tonkens, who advocate taking prospective liability. On this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking retrospective answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also after harm has already occurred.

Registration details available here.