Overview of the Ethics in AI Annual Lecture with Alondra Nelson

On the 19th of October, the Institute for Ethics in AI held its 2nd Annual Lecture, given by Professor Alondra Nelson (Harold F. Linder Professor at the Institute for Advanced Study) at the Cheng Kar Shun Digital Hub at Jesus College, Oxford. Besides being a respected academic, Nelson served as acting director of the White House Office of Science and Technology Policy, where she led the writing of the landmark ‘Blueprint for an AI Bill of Rights’ – a document ‘intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems’. As John Tasioulas (Director of the Institute) noted in his introduction, Nelson is “one of the wisest voices in the hyperbolic discourse around AI”, not least because of her attempts to adapt rights – an expression of our most profound moral values – to the challenges of AI.

Alondra Nelson and John Tasioulas

Nelson began her lecture on an inspiring note, observing the different kinds of hope that AI technologies offer the world, a tone not often struck by those thinking critically about AI. She suggested three categories of ‘AI for Good’: AI for Science, AI for Accessibility and AI for Efficiency. AI technologies can be used to model protein folding with high accuracy and so better understand diseases like Alzheimer’s (AI for Science), to help blind people navigate a city (AI for Accessibility) and to produce more abundant and healthy crops (AI for Efficiency). Nelson points out that ethicists and policymakers think about risks and challenges not just because those must be mitigated but because “we recognize how important it is to succeed with these things”.

Nevertheless, the current harms and potential risks posed by these technologies form the context for a society-wide effort to think about how we can align AI systems with human values and interests. Figuring out how to avoid these harms and how to secure our values is what has come to be known as the alignment problem. Nelson advances an important critique of this approach: as it is pursued within the growing field of AI Safety, it is “much too narrow a response to the challenges that we face with AI”. The key insight of her talk is that our approach to alignment is too thin, and that we should instead be reflecting on the idea of thick alignment.

What is thick alignment? Nelson draws on the notion of ‘thick description’ developed by the anthropologist Clifford Geertz, who himself took the term from Gilbert Ryle’s work (the Oxford audience certainly appreciated the reference). In Geertz’s view, thin descriptions of a culture merely record the observable behaviour of its members, whereas thick descriptions go beyond the surface and attempt to capture the deeper layers of meaning embedded in that culture. For example, Geertz thought that in order to understand a remote village’s culture, it is not enough to study the village; we must study in the village. In other words, we must shed our familiar theoretical preconceptions, be sensitive to the complexity of human behaviours, and try to unravel the deeper ideas and meanings implicit in them.

Translating this to her topic, what we might call ‘thin alignment’ focuses on technically encoding values into AI, which is a challenging and important endeavour. But it is not the only way to think about alignment. If we want to understand the problem of alignment, we must pay closer attention to the social contexts in which we deploy AI technologies and reject the idea that there are value-neutral uses of them. This is what leads her to the idea of thick alignment, which she characterises, in her words, as:

“Sociotechnical approaches to the analysis of AI systems, models, and tools that employ description & contextual analysis (i.e. Use cases, incentives, power, meaning, etc.) & in so doing both reflect and constitute human values”

As she put it in her talk, the difficult part is the ‘value’ component of the value alignment problem. As a society, we are still negotiating the role these technologies ought to play, identifying the goods they can provide us and the harms we want to protect ourselves from. According to Nelson, part of this negotiation also involves genuine deliberation and participation by those who will be affected by these technologies and the power they exercise over us. This is something that Nelson and her team, who drafted the Blueprint for an AI Bill of Rights, did well: they ran a year-long process consulting the views of many different stakeholders affected by AI technologies. The document that emerged can be viewed as the culmination of a thicker, value-sensitive and power-sensitive approach to regulating and aligning these technologies with the value of humanity in mind.

There is a deep philosophical point that ran through Nelson’s talk and the Blueprint too. The exercise of power over others, especially when it is mediated by powerful technologies like AI, needs to be justified to the people it will affect, in terms we can all reasonably accept and in ways respectful of our wide plurality of views about how to live well. Nelson’s talk was emblematic of this influential approach to liberal political philosophy, and it wisely adapted that approach to our more-than-ever technologically mediated society. The talk ended with a lively Q&A and resounding applause from the audience. This Annual Lecture surely set the tone for the profound and important conversations the Institute will have about the ethics and politics of artificial intelligence.

Written by Kyle Van Oosterum
DPhil Student in Philosophy sponsored by the Institute for Ethics in AI

Image credit: Ian Wallman