Jeremias Adams-Prassl[1], Isabelle Ferreras[2], Sharon Block[3] and Michelle Miller[4]
_______________________________________________________________________________________________________
Recent political developments on both sides of the Atlantic underscore the urgency of exploring how workers can gain a voice in governing AI development and deployment. In the United States, the incoming administration's anticipated rollback of labour rights highlights the need for innovative approaches to protect worker interests in the face of increasing AI integration. Simultaneously, ongoing EU discussions around algorithmic management, and recent parliamentary hearings in which the relevant commissioner-designate faced pressure to prioritize a legal instrument governing algorithmic management, signal a growing recognition of this issue within European policy circles. As AI becomes increasingly integrated into workplaces, the question of who controls these technologies takes on profound significance.
Biased algorithms rejecting female applicants for engineering jobs. Employers using sophisticated tools to target union organizers. Workers fired by algorithms that management neither controls nor understands.[5]
AI is revolutionising the world of work. But it is not just about job quantity: mass technological unemployment remains as elusive as ever. Job quality, on the other hand, is a different matter altogether: AI is not (yet?) coming for workers’ jobs[6] - but things are different for their bosses. The rise of algorithmic management has seen the automation of traditional managerial tasks, from hiring workers through to firing them. What started in the gig economy has arrived in workplaces across the socio-economic spectrum, from factories and warehouses to professional service firms and universities. Even where AI is not yet advanced enough to replace bosses, it has intensified managerial control by creating hybrid management models that are making life opaque and unpredictable for workers.
The emerging evidence is grim: algorithmic management tools surveil workers, collecting intimate data at an unprecedented scale, and allowing a hitherto unimaginable degree of control. Taylorism has finally become technically feasible.
And yet, the story is not exclusively one of doom and gloom. As the introduction of new technologies radically reconfigures inequalities of bargaining power[7], the need to rethink regulatory regimes is becoming ever more pressing. As governments, scholars, and unions begin to grapple with the implications of algorithms at work, renewed calls for voices at work emerge across jurisdictions.
Seen thus, the rise of algorithmic management is not just a threat - but equally an opportunity. An opportunity to reopen questions as to the distribution of power in the workplace, an opportunity to create new norms for participation and co-governance and an opportunity to promote an ethical form of AI.
The Algorithmic Boss
Algorithmic management became widespread via the gig economy: the rating mechanisms deployed by companies from Uber to Doordash quickly turned out to be much more than a simple record of consumer satisfaction. As Tom Slee put it, reputation systems were deployed as ‘a substitute for a company management structure, and a bad one at that. A reputation system is the boss from hell: an erratic, bad-tempered and unaccountable manager that may fire you at any time, on a whim, with no appeal.’[8] From assigning tasks to disciplinary measures for non-compliance with company policies, every aspect of gig work is constantly and tightly controlled.[9]
Gig work, however, was merely the harbinger of things to come. Recruitment was amongst the first industries to embrace the latest innovations in algorithmic decision-making: whether at supermarkets or banks, candidates face a series of automated hurdles - from CV screening and skills tests to fully automated interviewing. The Covid-19 pandemic further fuelled the deployment of algorithmic management tools:[10] with workers at home and dispersed over the globe, the temptation to outsource monitoring and control was hard for employers to resist.[11]
Technologies developed in one context - such as giving real-time instructions to call center workers (‘time to lighten the mood a bit!’ ... ‘slow down’ … ‘let the customer talk more’) - have quickly become integrated in day-to-day business software, from SAP and Oracle to MS Teams: ‘Your Speaker Coach’ is now available to monitor each and every meeting and call, checking everything from pace and use of filler words to inclusiveness, intonation, monologue and repetitive language. The ensuing report ‘is only visible to you’ - for now, at least.[12]
Reports of algorithmic systems deployed to target suspected trade union activists are but the tip of the iceberg: AI can suffer from chronic problems with bias and discrimination,[13] automatically learning to reject female applicants for engineering jobs; and the ever-watchful monitors on workers' wrists[14] and cleaners' trolleys ratchet up social and physical pressure to perform.
The integration of automated management tools into the world of work over the past two decades has created entirely new needs for rights, privacy, and protection. From algorithmic management systems to datafied employee evaluations to an emerging frontier of testing artificial intelligence tools, workers in nearly every sector are increasingly governed by various kinds of software meant to increase productivity and reduce labor costs. While these technologies differ significantly in their functions and logics, they raise a shared set of questions about democratic governance and worker voice in how we structure our working lives[15].
Intensifying Power Imbalances
We find ourselves at an historic crossroads: a small group of investors and corporate leaders make choices that literally govern our lives[16] - whether as citizens, workers, or consumers. Technological choices shape our daily interactions, and the design of future societies. Ultimately, AI promises radically to reshape the power imbalances, whether between workers and employers, or firms and the state.
The link between a right to govern and the supply of capital has long been inherent in the corporation. Shareholders govern the firm, set its strategies, and decide upon priorities, while workers have no concomitant say – except in the rare case of worker-governed and -owned firms.[17] Rapid technological advancement casts this reality in the sharpest relief yet: algorithmic management to date has provided little, if any, avenues for worker voice.
This is puzzling, not least given that even highly automated workplaces continue to require two classes of investment. Despite long-running scares of technological unemployment, workers continue to be essential 'labor investors'. Their investment is inescapably personal - time, physical well-being, one's future. Never was this clearer than during the pandemic, when scores of low-paid workers found themselves classified as 'essential'. Workers cannot stay at home and diversify their portfolio to spread life's risks: more often than not, they need to turn up in order for services to be provided and goods to be sold. And yet, few if any decisional rights are attached to workers' investment. The opposite is true for management in times of the algorithm: automated systems can be scaled with ease, from deploying workers in cities around the globe to effecting mass layoffs without human input.[18]
The resulting systemic implications could be stark. If the deployment of algorithmic management proceeds within prevailing political, legal, and economic frameworks, we will end up with an AI future determined exclusively by tech firms and their investors. That is a future that we should fear and reject. It is a future in which the worst attributes of our current capitalist system are intensified: extreme concentration in the AI sector will exacerbate the degradation of work and income inequalities driven by past concentrations of corporate power.
A Renewed Case for Worker Voice
How can we claim to be optimistic in the face of these developments? In a functional democratic culture, people expect to shape and consent to the decisions that they must comply with. Active choice and consent are the gateway to democracy,[19] a political act through which we subordinate ourselves to the rules and decisions of the systems we participate in, however ill-informed: from the ballot box to the cookie banner.
Work is different. Most of us have no possibility to shape the rules and decisions that govern the majority of our day. Employment and labor law set out a few basic protections - whilst also enshrining the entrepreneur-coordinator's managerial prerogatives. Nowhere is this clearer than in jurisdictions that have embraced the concept of "employment at will": workers can be fired for a good reason, a bad reason or no reason, with only minimal safeguards to protect against unlawful reasons, such as discrimination on grounds including race or sex. Even that level of protection is still seen as too much by some. Gig economy operators' attempts to cast their workforce as 'micro-entrepreneurs' or 'independent contractors' are but the latest instantiation in a long litany of attempts to avoid employer responsibilities - much to the chagrin of senior courts around the world, which have lamented the notion that major corporations should choose which rules apply to their business: a task, as the UK Supreme Court reminded Uber's executives in 2021,[20] more appropriately left to elected representatives in Parliament.
The power to govern the firm, on the other hand, is granted to capital investors in corporate law: a body of rules kept entirely separate and non-reciprocal in most jurisdictions. Whilst employment law gives capital investors broad powers over labor investors, corporate law gives labor investors no say over capital investors.
As AI exacerbates financial investor power at work, the time has come to address this imbalance: voice is key to meaningful consent. Worker participation is key to the democratization of our economies. Active governance choices should be shared amongst all those collaborating in the corporate adventure, whatever the form of their investment.
Making Voices Heard
The advent of algorithmic management has galvanized worker voice in countries around the world - and across the AI supply chain. One of the most concerning features of algorithmic management is the fact that we can all quickly become part of training, refining, and deploying automated systems: avenues for voice are therefore required right across the AI life-cycle.
Let's start at the very beginning: training data. What ultimately powers many of the most sophisticated models available today are vast quantities of unstructured data that require human input to be classified, labeled, and sorted. This work is often hidden, provided through online platforms and large outsourced providers (another well-worn attempt at evading even the lax strictures of worker-protective laws), driving an industry once self-styled as offering 'artificial artificial intelligence'.[21]
Resistance is growing. In Kenya, for example, Meta relies on a third-party vendor called Sama for content moderation. Its workers have been organizing for years, fueled by the precarious conditions they and others in the AI supply chain labor under - being paid extremely low wages while having to process an overwhelming number of deeply disturbing images a day. When 184 workers were fired for attempting to unionize in 2023, the Kenyan Court of Appeal rejected Meta's attempts to place the blame for the firings on Sama.[22] Meta, it ruled, was the party responsible for setting basic standards for wages and health and safety. In May 2024, in response to President Ruto's trip to the White House to negotiate trade deals between the US and Kenya, nearly one hundred data labelers, content moderators and AI workers coordinated an open letter[23] through Foxglove, a UK-based tech worker organization, urging the President to hold Meta accountable for its abuses of African workers.
Worker voice is just as important at the upper echelons of the AI supply chain.[24] Companies like OpenAI have faced mounting criticism and high-profile departures from employees with concerns about safety and transparency.[25] Senior researchers and engineers at Silicon Valley companies occupy a hybrid investor role in their companies - while they are compensated with wages for their labor, they are also given equity shares in the companies, which muddies the traditionally straightforward distinction between labor and capital investment. According to former employees, OpenAI required departing employees to sign non-disclosure agreements that threatened to claw back equity from anyone who spoke out.[26] After this practice was exposed, CEO Sam Altman claimed that he hadn't been aware of the requirement and reversed it. But the chilling effect of such restrictive agreements has made the risk of whistleblowing for senior employees far more costly than just lost wages. In response, one former OpenAI researcher, Daniel Kokotajlo (whose vested equity was worth roughly $1.7m), refused to sign his agreement and went on to organize current and former employees who share concerns about what they call OpenAI's reckless approach to safety.
In addition to going public with media stories, groups of employees have also filed complaints with the SEC about the overly restrictive agreements precluding disclosures of securities violations to the SEC. These employees have found creative ways to press their concerns about the fast pace of development in the company through employment agreements and whistleblower complaints, but they do not currently have an avenue to address concerns that are "not yet regulated." It is difficult to find a clearer statement of the need for multilateral interventions in support of worker voice beyond momentary acts of protest. In order for these employees to participate fully in responsible technology development, they must have sufficient voice in internal decision-making, as well as external recourse when they are rebuffed. As Kokotajlo told the New York Times, "There needs to be some sort of democratically accountable, transparent governance structure in charge of this process…Instead of just a couple of different private companies racing with each other, and keeping it all secret."
OpenAI’s employees have explicitly voiced concerns about the incentives resulting from the imbalance of power which characterizes their work: “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”[27] Indeed, “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
Whistleblower mechanisms, suggested as the solution by an internal inquiry[28] triggered by the governance turmoil of late 2023 involving the temporary sacking of OpenAI's CEO Sam Altman, were considered by employees "insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."[29] It is difficult to find more poignant public interventions[30] in support of meaningful worker voice, at all levels of the socio-economic spectrum.
Structures for Worker Voice
The deployment of algorithmic management systems is not only a threat, then: it also provides a series of opportunities for worker voice. The vast majority of corporations do not develop software in-house, but acquire it from a small number of suppliers, requiring further fine-tuning at the deployment stage. A facial recognition system can match workers at a given level of accuracy - the threshold at which a door actually opens, however, is a choice for the individual employer.
Legislators in Europe have sought to increase opportunities for worker voice in the face of algorithmic management. Platform workers in Spain, and soon across the European Union’s twenty-seven Member States, will enjoy rights to be informed about the deployment of algorithmic management systems, their operation and key parameters, and employers’ assessment of new technologies’ impacts on working conditions. New channels for voice have been mandated. At the individual level, this includes a right to an explanation of fully automated decisions, as well as avenues to contest their veracity and, if successful, see them reversed. At the collective level, workers and their representatives will be included in monitoring and fine-tuning the operation of algorithmic systems.
These rights are set out in the so-called Platform Work Directive, approved on April 24, 2024 by the European Parliament.[31] The goal is to re-establish workers' agency,[32] not least through innovative approaches to worker voice - up to and including rights for workers independent of their legal status or trade union membership.
In the United States, unions have begun to form internal Artificial Intelligence and Tech task forces to address algorithmic management and the use of AI software through collective bargaining agreements. The highest-profile example is the victory by members of the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) in negotiating transparency, consent and control over the use of artificial intelligence systems in creative production. The Writers Guild agreement does not ban AI outright but instead centers writers' core creative skills in the process. AI-generated material cannot serve as source material for writers to edit or improve; companies must also disclose whether any material provided to a writer has been generated by AI. If a writer finds use for a tool like ChatGPT, however, they may put it to work. The agreement also prohibits the use of a writer's material to train AI.
SAG-AFTRA members' primary focus was the use of digital replicas that could replace working actors. Much like the WGA agreement, the SAG-AFTRA agreement does not outright refuse the use of digital replicas, but ensures that actors retain maximum control of their image and likeness. Actors must be paid for the time spent being scanned to create the digital replicas, as well as for the time the replica is used in production. Producers cannot freely use digital replicas created from existing material (i.e., material they already own of an actor's previous work) and must pay residuals for the ongoing use of the actor's image. Background actors are likewise paid for the time spent having their image scanned, and production companies agreed that replicas cannot be used to avoid hiring background actors. Guidelines require notification periods (48 hours' notice) and "clear and conscious consent" around all of these requirements.[33]
What these agreements signal is that unions see automated software as a potentially reasonable tool to aid production, while retaining a commitment to the unique artistic contributions of human workers. Production companies and the creative guilds have established a democratic approach to the integration of AI that rests upon transparency, consent and discussion. While guardrails are in place to protect against abuse, the agreement is a flexible model that gives workers a genuine role in shaping how technology impacts the creative process.
Collective co-governance is not limited to Hollywood: unions like the Communications Workers of America have negotiated protections for call center workers facing the ongoing disruption of untested transcription and task-guidance software being introduced into their interactions with customers. The agreements are designed to ensure that workers' experiences are centered in the adoption of software, so that productivity is enhanced in a way that benefits, rather than harms, workers.
The AFL-CIO has taken the lead in shaping the development of algorithmic and AI software at the workforce level through a groundbreaking partnership with Microsoft.[34] In addition to securing Microsoft's agreement to neutrality in future unionization efforts by employees, the AFL-CIO's Tech Institute works closely with Microsoft to ensure that workers' concerns about potential harms are integrated not just into the deployment of tech but into its development as well. Additionally, through a partnership with Carnegie Mellon, the AFL-CIO's Tech Institute has developed a worker-centered approach to federally funded research and development of tech products, so that labor is not simply reacting to software after it is deployed but playing a role in ensuring that researchers have workers in mind from the very beginning of the development process.
The Futures of Work
There is no such thing as the future of work. Our choices today shape how technologies are deployed and governed: as the 2024 Nobel laureates in economics Daron Acemoglu and Simon Johnson have recently argued,[35] the nature and impact of technology depend on the institutions shaping it. Strong institutions will be required to ensure that benefits are maximized for workers and society at large, and AI's harms limited. Clear legal rules play an important part in this: some practices, from targeting trade unionists to questionable uses of personal data, need to be banned outright. But legislative processes can be slow, and the resulting norms are often kept at a high level of generality: more is required to supplement the institutional frameworks thus designed.
In the workplace, the rise of algorithmic management creates a renewed case for worker voice. A worker-supportive "direction of technology" cannot simply be legislated for. We have to inject a deliberative or pragmatist approach into debates about technology at work: as co-investors, workers have the right to be involved in co-governing their working conditions. Whilst the technology might be new, the institutional response does not have to be: from collective bargaining to works councils and co-determination, labor law provides a set of well-established mechanisms for information, consultation, and co-governance that should be extended and deepened to cover all aspects of technological change. The benefits are clear. Workers find themselves protected against systemic rights violations resulting from inappropriate AI deployment; employers can rely on software that is fine-tuned to their specific organizational and business context.
In adopting AI at work, we must not fall into the trap of doom and technological determinism. As technology reshapes enterprise structures, the time is ripe for recognising rights for all investors in the firm - a unique opportunity to adopt and develop avenues for worker voice that could form the backbone society needs to promote ethical forms of AI.
[1] Professor of Law and Associate Dean, Faculty of Law; Senior research associate of the Institute for Ethics in AI, Oxford University
[2] FNRS Professor of sociology, University of Louvain, Senior research associate, Center for Labor and a Just Economy at Harvard Law School, Distinguished research fellow, Institute for Ethics in AI, Oxford University
[3] Professor of Practice and Executive Director, Center for Labor and a Just Economy, Harvard Law School, Harvard University
[4] Director of Innovation, Center for Labor and a Just Economy at Harvard Law School. The authors thank Lee Biber (University of Louvain) for research assistance on this piece.
[5] "AI's Impact on the Workplace: A Survey of American Managers": "One of the biggest concerns among managers is fear of the unknown. Other worries stem from job security, employee adoption, resistance to AI, and pay cuts resulting from AI tools taking over a portion of work and responsibilities." All data in this report is derived from a survey by Beautiful.ai conducted online via the survey platform Pollfish from February 14-16, 2024; in total, 3,000 adult Americans in management positions were surveyed. Source: https://www.beautiful.ai/blog/2024-ai-workplace-impact-report
[6] AI automation is not only transforming jobs that may evolve or disappear; it is also becoming a key hiring criterion, with employers now expecting workers to be proficient in using AI tools. Seventy-one per cent of leaders say they would rather hire a less experienced candidate with AI skills than a more experienced candidate without them, according to the 2024 Work Trend Index Annual Report from Microsoft and LinkedIn.
Still, "there are the forecasts that AI will wipe out millions of jobs. The McKinsey Global Institute estimates that by 2030, tasks that account for up to 30% of the hours now worked across the US could be automated, and that AI will push 12 million American workers out of their jobs." Source: https://www.theguardian.com/commentisfree/2024/feb/29/ai-workers-layoffs-surveillance
[7] Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S. G., Noble, S. U., & Shestakofsky, B. (2021). Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius, 7. https://doi.org/10.1177/2378023121999581
[8] Tom Slee, What’s Yours is Mine: Against the Sharing Economy (O/R Books 2015).
[9] Özbilgin, M. F., Gundogdu, N., & Akalin, J. (2024). Artificial Intelligence, the Gig Economy, and Precarity. In E. Meliou, J. Vassilopoulou, & M. F. Ozbilgin (Éds.), Diversity and Precarious Work During Socio-Economic Upheaval : Exploring the Missing Link (p. 284‑305). Cambridge University Press. https://doi.org/10.1017/9781108933070.014
[10] But that is still happening: "How Walmart, Delta, Chevron and Starbucks are using AI to monitor employee messages" (source: https://www.cnbc.com/2024/02/09/ai-might-be-reading-your-slack-teams-messages-using-tech-from-aware.html)
[11]Aleem M, Sufyan M, Ameer I, Mustak M. Remote work and the COVID-19 pandemic: An artificial intelligence-based topic modeling and a future agenda. J Bus Res. 2023 Jan;154:113303. doi: 10.1016/j.jbusres.2022.113303. Epub 2022 Sep 21. PMID: 36156905; PMCID: PMC9489997.
[12] See for instance this news report: "Microsoft Slams Bosses Who Track Employees Using Teams After Viral Clip" (source: https://www.newsweek.com/microsoft-slams-bosses-tracking-employees-teams-1808466)
[13] 2022 report from the European Union Agency for Fundamental Rights on AI bias and discrimination, source: https://fra.europa.eu/en/publication/2022/bias-algorithm; see also "AI hiring tools may be filtering out the best job applicants", source: https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
[14] Chinese officials force street cleaners to wear GPS-tracking bracelets while on duty to make sure they are not slacking off. Source: https://www.dailymail.co.uk/news/article-6898227/Street-cleaners-China-forced-wear-GPS-tracking-bracelets-make-sure-arent-slacking-off.html
[15] See the issue discussed in the World Economic Forum Annual Report: "A majority of workers want AI training from their companies. We must empower them" (source: https://www.weforum.org/agenda/2024/01/ai-training-workforce/)
[16] "Make no mistake—AI is owned by Big Tech" (source: https://www.technologyreview.com/2023/12/05/1084393/make-no-mistake-ai-is-owned-by-big-tech/)
[17] Ferreras I., Firms as Political Entities. Cambridge University Press, 2017.
[18] See for instance: "Managed by the algorithm: how AI is changing the way we work"
Source: https://algorithmwatch.org/en/ai-in-workplace-explained/
[19] Isabelle Ferreras, 2022, “From the Politically Impossible to the Politically Inevitable: Taking Action”, in Ferreras I., J. Battilana, D. Méda, Democratize Work. The Case for Reorganizing the Economy, The University of Chicago Press, pp. 23-46
[20] https://www.supremecourt.uk/cases/docs/uksc-2019-0029-judgment.pdf
[21] Altenried, M. (2020). The platform as factory: Crowdwork and the hidden labour behind artificial intelligence. Capital & Class, 44(2), 145-158. https://doi.org/10.1177/0309816819899410
[22] Content moderators sue Meta over alleged 'union-busting' in Kenya
[23] https://www.foxglove.org.uk/2024/05/22/kenyan-tech-workers-president-rutos-us-visit/
[25] "ChatGPT can talk, but OpenAI employees sure can’t" https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release
[26] Sam Altman addresses 'potential equity cancellation' in OpenAI exit agreements after 2 high-profile departures: https://www.businessinsider.com/sam-altman-openai-nda-clause-vested-equity-ilya-sutskever-2024-5
Leaked OpenAI documents reveal aggressive tactics toward former employees:
https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees
[28] https://openai.com/index/review-completed-altman-brockman-to-continue-to-lead-openai/
[30] https://www.theguardian.com/technology/article/2024/jun/04/openai-google-ai-risks-letter
[31] https://www.europarl.europa.eu/news/en/press-room/20240419IPR20584/parliament-adopts-platform-work-directive
[32] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4373355. Adams-Prassl et al. review how this can be effectively implemented across the deployment of AI in the workplace.
[33] "The Hollywood writers’ strike is over — and they won big", source: https://www.vox.com/culture/2023/9/24/23888673/wga-strike-end-sag-aftra-contract; https://www.wga.org/contracts/know-your-rights/artificial-intelligence; "Generative AI in Movies and TV: How the 2023 SAG-AFTRA and WGA Contracts Address Generative AI". Source:
[34] AFL-CIO and Microsoft Announce New Tech-Labor Partnership on AI and the Future of the Workforce. Source:
[35] Daron Acemoglu and Simon Johnson, 2023, Power and Progress. Our 1000-Year Struggle Over Technology and Prosperity. NYC: PublicAffairs.
Suggested citation: Adams-Prassl, J., Ferreras, I., Block, S., Miller, M., ‘Current AI Challenges to the Future(s) of Work’, AI Ethics At Oxford Blog (15th November 2024) (available at: https://www.oxford-aiethics.ox.ac.uk/blog/current-ai-challenges-futures-work).