
Ethics in AI Lunchtime Seminar - Wednesday 18 January at 12:30pm (GMT)
Title: Explanation and Power: Governing Well with Algorithms
Speaker: Dr Zeynep Pamuk (LSE)
Attendance is by registration only; you can register here.
Abstract: A widely accepted requirement of democratic government is that decisions must be explained to those who are bound by them. What counts as a good explanation depends on the context and the needs of those receiving it. However, the use of complex machine learning algorithms in governmental decision-making raises a challenge for the relational and contextual aspects of explanation. What kind of explanations should we demand when government decisions rely on algorithmic predictions? Which explanations are necessary, which sufficient, and which simply bad? The computer science literature has so far focused on the technical challenges of designing explainable AI, treating the kind of explanation as a matter of feasibility and seeking the optimal solution to the trade-off between performance and explainability. The legal literature, meanwhile, has focused on how to make AI explainable in a way that meets existing legal requirements of due process and anti-discrimination. Neither has explored how different kinds of explanations fare with respect to the broader range of social and political values we might seek to realize in a democratic society.