Brief Overview of the Ethics in AI Colloquium: Reflections on the Nature of (Artificial) Intelligence and Creativity

On Thursday 1st June, we brought together a fantastic panel of experts for our final Ethics in AI Colloquium this academic year, entitled 'Reflections on the Nature of (Artificial) Intelligence and Creativity'. Our main speaker was Professor Yejin Choi, whose research focuses on AI broadly and common sense in particular. Choi combines her professorial duties at the University of Washington with her work as the Senior Director of Common Sense Intelligence at AI2, a non-profit research institute working for AI for the common good. Our panel was chaired by Dr Paul Holdengräber, a highly accomplished curator and interviewer and the founding director of Onassis Los Angeles (OLA). Our commentators included Nigel Warburton, widely known as the host of the popular Philosophy Bites podcast and the author of The Art Question (2002). He was joined by Estella Tse, a Visual and Augmented Reality Creative Director and Artist and a Visiting Fellow at the Institute for Ethics in AI.


 

In recent times, the issue of creativity in connection with AI has received increased attention. While we have long considered creative work to be particularly resistant to automation, the capacities of generative AI models, including OpenAI's DALL-E and ChatGPT, invite us to revisit our assumptions. There is a growing concern across the creative sector that these technologies can push artists out of employment while drawing on their work without consent or acknowledgement. The use of AI for artistic work also raises important questions about the nature of creativity and the value of art. These questions are connected to the complex issues raised by the very nature of AI. Paul Holdengräber encouraged the panel to focus on these issues by asking all speakers to engage with a quote from Kate Crawford at the outset of our event. In her book, Atlas of AI, she argued that "AI is neither artificial nor intelligent". Rather, it is "embodied and material" as a product of our resources and labour. Far from being autonomous or rational, it can only discern information from the large data sets on which it is trained. Thus, "it depends entirely on a much wider set of political and social structures", meaning that we can think of AI as a "registry of power", "designed to serve existing dominant interests".[1]

Image by Oxford Atelier of Prof Yejin Choi and Prof Nigel Warburton

Professor Choi's talk and the subsequent discussion offered great insights into all the aforementioned issues. Choi emphasised that we should not think of creativity just in terms of originality. Programmes like DALL-E are capable of some originality because they may synthesise pre-existing ideas in new ways. However, Choi argued that true artistry requires not just originality, but exemplar originality, which often involves exercising outstanding aesthetic taste and ground-breaking ingenuity. Programmes like DALL-E do not currently display such capacities. It takes a human to prompt the algorithm effectively and identify which of its outputs are worth any attention. Hence, AI's current capacity to replace human artists is limited. Moreover, Choi and Warburton agreed that artistry is often a matter of the creative process itself rather than the qualities of its outputs. Works by DALL-E and similar programmes may be beautiful and useful, but they lack the expressive quality of being a genuine product of human beings in response to their context and external circumstances. Since that is what many of us look for in art, we are unlikely to lose interest in human creativity.

Image by Oxford Atelier of Estella Tse and Paul Holdengraeber

Nonetheless, 'creative' AI programmes may adequately serve many of our purposes, and there might be economic incentives to rely on these technologies. As a result, Choi observed that we might increasingly find ourselves in a society where creative skills are less valued, and artists may increasingly find themselves in an industry which no longer supports their craft. We explored these risks during our panel discussion. The panel wondered whether we should treat generative AI as a threat pushing artists out of work or as a welcome tool giving artists the opportunity to work more effectively. The panel also asked what can be done to prevent generative AI from damaging the creative industry, both at the level of state regulation and individual action, with possible solutions ranging from special taxes on companies using generative AI in their work to boycotts of films scripted by AI. Estella Tse highlighted the real-world impact that generative AI already exerts on artists, with the recent strikes of the Writers Guild of America being partially motivated by the refusal of production studios to rule out using generative AI to write scripts.

This was our final Ethics in AI Colloquium this academic year. We hope that you have an enjoyable and restful summer, and that you join us as we return after the break with another varied programme of events.

 

Written by Konrad Ksiazek
DPhil Student in Law and an Affiliated Student at the Institute for Ethics in AI

Image credits: Maciek, MT Studio/Oxford Atelier

 

--------------------------------------------------------------------------------------------------------------------------

[1] Crawford, K. (2022) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. (cited and discussed throughout the paragraph)