AI and the Future of Work


By Dr Ekaterina Hertog, Daniel Susskind, Charlotte Unruh, and Lily Rodel.

In December 2023 the Institute for Ethics in AI at the University of Oxford hosted a two-day workshop on AI and the future of work. The workshop discussed many facets of paid work: the white-collar workplace, the AI supply chain, meaningful versus menial work, the creative industries, and the care sector, amongst others. These contexts involve different actors, dynamics, policies, and challenges when they intersect with AI implementation. During the workshop, work and its role and meaning in social life were placed under the microscope, with participants sharing diverse understandings: to some, work is something you would gladly pay another to do; to others, it is a source of meaning, identity, and collective action. For many, it is both. The workshop shed light on how an understanding of the role of AI in work can open conversations about what work is and, in a technologically enabled future, what work could and should be like. These discussions highlighted the role of academics, policymakers, and business leaders, in the face of rapid technological advancement, in helping societies understand the choices they face and illuminating the possible futures ahead.

We summarize our key discussion points and insights below under four key themes.

Meaningful work: The emergence and rapid adoption of generative AI in the workplace has called into question the role of work and workers. Though many have warned of automation and its impact on the labour market, several participants felt that workplace augmentation may instead become more widespread.

Augmentation may replace drudgery and repetition at work and create more time for workers to spend on tasks they find ‘meaningful’, but it may also contribute to increased work intensification and shifting expectations of what workers can accomplish. Further, the implementation of AI augmentation in the workplace reveals a ‘jagged frontier’, where different configurations of workers, tasks, and AI placement generate distinct results. Looking ahead, participants urged business leaders to prioritize long-term growth over short-term automation gains when implementing AI in the workplace. Others recommended further empirical research on the daily realities and expectations of workers in AI-enabled workplaces.

Justice: AI development and implementation often follow business interests that do not necessarily align with ethical considerations of justice, and they obscure the vital role of workers, both in white-collar workplaces in advanced economies and in the global AI supply chain.

A spotlight on justice adds nuance to the conversation on meaningful work: meaningful work for whom? For some AI workers, such as those working in so-called ‘AI sweatshops’, fair pay, a safe working environment, and a formalized working contract are more pressing demands than meaningful work. This led some workshop participants to advocate for the democratization of workplaces in general, and of AI development and production processes more specifically. This approach, it was argued, would enable the creation of more ethical AI.

Governance: Governance should focus on the base standards that regulations can introduce, such as the fundamental rights of privacy and non-discrimination. AI governance processes should be made transparent to workers. However, the introduction of AI to the workplace has generated complicated questions around where the responsibility for regulation should lie.

Due to the rapid rate of change, organisations and institutions that build and procure these technologies are under increasing pressure to mitigate potential risks. Participants identified two levels of governance: a local layer, referring to those who deploy AI, and a global layer, referring to macro issues such as the workforce, democracy, and existential questions. Though participants agreed that regulation should adhere to already enshrined rights, the question of who should have the right to design how AI is used has become a central concern, alongside the question of who should assume responsibility for enforcing regulation: the state, the firm, or the workers themselves?

When AI is not ‘just work’: This section focused on cases where AI is adopted in industries that rely heavily on human connection (like health care, education, or social care). Whilst some participants warned of the risks of increased datafication in care sectors, others predicted that AI could expand the capability of the sector and lead to greater human connection. Ultimately, technology is not a panacea – the way AI is positioned in these sectors will determine its impact and should always be based on consultations with the people providing and receiving care, i.e. both workers and users.

The discussions around this topic largely focused on the paid care sector, including healthcare and social care, but the speakers highlighted the parallels and connections with unpaid care provided by families. With an ageing population, care needs are increasing, but within the current system in the UK, as well as in many other Global North countries, care work is understaffed, undervalued, and underfunded. Adoption of AI that focuses on cost-cutting in this sector continues the current disregard for care work and could lead to an ‘accountability gap’. Rather than replace humans, AI in this sector should focus on helping carers to do ‘what they do best’. Automating some parts of care could create more capacity within the system for greater levels of human connection, but this will not happen automatically. Ultimately, the impact of AI on the care sector depends on the aims and understandings of decision-makers. Academics and policymakers must therefore focus on gathering data and empirical evidence to support greater human connection in the care sector.

For more details on the event and the discussions see here.