PAIS Colloquium Programme


 

Overview:

Normative philosophy of computing draws on the methods of analytical philosophy to engage with normative questions raised by computing and its role in society. It aspires to be technically and empirically grounded, and to move philosophy forward while remaining legible and useful to other fields. This workshop aims to foster excellence in this growing field by bringing PhD scholars together with one another and with faculty mentors. Our aim is not only to learn from and strengthen the research presented, but also to build community. The workshop is an initiative of the PAIS network, and in particular of the MINT Lab at ANU and the Institute for Ethics in AI at Oxford.

 

Day 1 Programme:

08.30-09.00 - Coffee/Welcome

09.00-10.30 - Paper 1

10.30-11.00 - Break

11.00-12.30 - Paper 2

12.30-13.30 - Lunch

13.30-15.00 - Paper 3

15.00-15.30 - Break

15.30-17.00 - Roundtable conversation about normative philosophy of computing

18.00-20.00 - Dinner
 

 

Details

 

08.30-09.00 - Coffee/Welcome

 

09.00-10.30 - Paper 1: The Cheap Considerateness of Social Media

Abstract: Spontaneously thinking of others—remembering their birthdays, thinking to check in on them—used to matter for our relationships. Philosophers have explained the significance of such "considerateness" for a variety of ethical frameworks, but I highlight the corrosive effect of recent technologies. The significance of considerateness has now been cheapened, in particular, by social media with its automatic reminders. One might think that the solution is to increase our voluntary efforts—say, by recording much more elaborate birthday messages for our friends—but I caution that this cannot replace the lost significance of spontaneous attention.

Student - Austen McDougal (Stanford)

Faculty Respondent - Dr Liam Bright (LSE)

 

10.30-11.00 - Break

 

11.00-12.30 - Paper 2: Are there principled and practicable moral constraints on machine inference?

Abstract: Can a distribution of goods that arises from a just initial distribution and evolves through legitimate steps ever be unjust? Nozick (1974) said, "No." This paper asks a similar question about inferences made by machine learning algorithms: If an initial set of data, D, is acquired justly (by, e.g., a corporation), are there any inferences from D that are illegitimate? Widely shared moral intuitions suggest the answer must be "Yes." But it's surprisingly hard to defend this claim with arguments that are both morally non-arbitrary and practically tenable. I present five such attempts and argue that they fail.

Student - Cameron McCulloch (Michigan) 

Faculty Respondent - Dr Kate Vredenburgh (LSE)

 

12.30-13.30 - Lunch

 

13.30-15.00 - Paper 3: My Imaginary Friend: The Robot

Abstract: This paper argues that we can best understand the relationship between humans and robots as an ‘imaginary friendship’, which occurs when a human imagines that they are engaged in a genuine friendship with a robot (involving mutual love), though they are not (their love for the robot is in fact unidirectional). What differentiates the robot from many other kinds of imaginary friends is how realistic its friendly performances are; it imitates manifestations of love in a way that inspires the perceiver to imagine that the robot really can reciprocate feelings. As such, it requires substantially less imagination to befriend a robot than it does to befriend a stuffed toy or, say, Casper the Friendly Ghost.

Student - Ruby Hornsby (Leeds)

Faculty Respondent - Dr Milo Phillips-Brown

 

15.00-15.30 - Break

 

15.30-17.00 - Roundtable conversation about normative philosophy of computing

 

18.00-20.00 - Dinner

 

Day 2 Programme:

08.30-09.00 - Coffee/Welcome

09.00-10.30 - Paper 4

10.30-11.00 - Break

11.00-12.30 - Paper 5

12.30-13.30 - Lunch

13.30-15.00 - Paper 6

15.00-15.30 - Break

15.30-17.00 - Faculty mentoring session

 

Details

 

08.30-09.00 - Coffee/Welcome

 

09.00-10.30 - Paper 4: Agentic and Algorithmic Context Collapse

Abstract: Context collapse occurs when discursive spaces are crowded such that speech acts performed in one leak into another. Previous theorists have distinguished two kinds of collapse in terms of a speaker’s preferences (authorial collapse is intended, while adversarial collapse is not); in this paper, I articulate a second dimension of context collapse grounded in the phenomenon’s kinematics: agentic collapse is caused by a social agent, whereas algorithmic collapse occurs beyond the control of any individual. After analyzing the resultant fourfold taxonomy, I develop the notion of metacontextual (or ecological) context collapse and explain its role in perspectival conflicts.

Student - A G Holdier (Arkansas)

Faculty Respondent - Dr Max Khan Hayward (Sheffield)

 

10.30-11.00 - Break

 

11.00-12.30 - Paper 5: The Emotionless Machine and Its Rational Core: What Can AI Tell Us About the Role of Emotions in Human Moral Cognition?

Abstract: In this presentation, I argue that AI can show that human moral cognition involves emotions, but that one cannot further conclude that moral principles are grounded in emotions, nor that moral beliefs are acquired via emotions. Furthermore, I argue that although the lack of emotions means AI is not a good moral agent, and we still need to pinpoint where its moral responsibility lies, AI has the potential to help us ameliorate the biases and noise that emotions introduce into moral cognition.

Student - Yuhan Fu (Sheffield)

Faculty Respondent - Dr John Zerilli (Edinburgh)

 

12.30-13.30 - Lunch

 

13.30-15.00 - Paper 6: Types of Artificial Moral Agency

Abstract: In discussions of artificial moral agency, it’s not always clear what’s meant by the term “moral agent.” This paper explores two ways to interpret the concept: (1) simple moral agents are sources of moral action, and (2) complex moral agents are morally responsible. These senses of moral agency come apart, and the distinction has implications for the possibility of artificial moral agency. Moreover, in the context of AI applications, we must consider which type of moral agency, if any, is required—it’s not enough to say that AI shouldn’t fulfill a particular role because it’s not a moral agent.

Student - Jen Semler (Oxford) 

Faculty Respondent - Professor Seth Lazar (ANU)

 

15.00-15.30 - Break

 

15.30-17.00 - Faculty mentoring session
