The Work of AI


Mapping Human Labour
in the AI Pipeline

A CSCW 2024 workshop

This one-day workshop invites researchers and practitioners to map the human labour – in its sites, categories, and characteristics – in the infrastructuring of AI and algorithmic systems. Our concern is with broadening the terms with which we articulate and account for human labour in the AI pipeline. We aim to attend not only to the privileged bodies in the production of AI or the harmed bodies in the witnessing and moderation of its harms, but also to the constellation of mundane labours that make AI systems work the way they work. We consider this labour across sites – from labelling datasets, to red teaming, to the (inevitable) public perception management – and across categories of work – from knowledge work to emotion work, from bodily work to organisational work. Approaching the AI pipeline as a site of both empirical study and critical scrutiny, this workshop promotes a CSCW perspective on the inquiry into the humans, work practices, and politics of the AI pipeline. Taking a reflective turn, the workshop also invites us to consider what we as CSCW researchers – one group of humans active at various parts of the AI pipeline – can do, and be challenged by, through engaging with AI production. (See the Workshop Themes below.)

Submitting

The submission deadline is TBC (midnight, anywhere on earth).


We invite scholars, practitioners, and anyone else interested in participating in the workshop to submit a two- to four-page position paper (or equivalent material) that addresses the workshop themes. We encourage potential participants to discuss their interest in the workshop themes, and welcome reports of (preliminary) empirical results, theoretically oriented pieces, and methodological reflections, especially concerning the role of CSCW scholars in understanding and shaping the work of AI.

To promote broader participation, in particular from industry and civic organisations, we offer the option of submitting alternative material of rough equivalence (e.g., a design portfolio, white paper, or similar). Submissions will be reviewed by the organisers and accepted based on the relevance and development of their chosen topic, as well as participants’ potential to contribute to the workshop.

Selection will follow an inclusive model: as organisers, we especially welcome work that represents a diverse community of scholarship and practice.

Notifications of acceptance will be sent by TBC.

Organisers

airi lampinen
Airi Lampinen studies algorithmic systems, with a particular focus on interpersonal and economic encounters. Her current research focuses on shared uses of intimate technology and the layered trust relationships prevalent in engagement with intimate digital health technologies. Lampinen is an Associate Professor in Human–Computer Interaction at Stockholm University, Sweden, and a Docent in Social Psychology at the University of Helsinki, Finland.
Rob Comber
Rob Comber studies the social and environmental sustainability of civic technologies with local, national, and international civic organisations. Taking a critical perspective on what is asked of people in the work to make civic systems work, he has investigated questions on the political, emotional, legal, and recently (de)colonial dimensions of design. He is an Associate Professor in Communication at KTH Royal Institute of Technology, in Stockholm, Sweden.
Srravya Chandhiramowuli
Srravya Chandhiramowuli examines the role of human values in data annotation and AI development. Her current research focuses on the work of data annotation for AI, paying particular attention to systemic challenges and frictions, in order to envision and inform just, equitable futures of AI. She is a PhD candidate in the Institute for Design Informatics at the University of Edinburgh.
Naja Holten Møller
Naja Holten Møller is an expert on Computer-Supported Cooperative Work (CSCW) in complex, professional work domains. Her research unfolds through a deep engagement with issues of responsibility and the enactment of ethics through the participation of stakeholders in technology use and design. Holten Møller is an Associate Professor at the University of Copenhagen, Denmark. She is particularly noted for her research on “data work” and the challenges and opportunities opened up by the use of large datasets and algorithms to optimise work in the future workplace.
Alex Taylor
Alex Taylor has been contributing to Science & Technology Studies and Human-Computer Interaction (HCI) for over twenty years. His interests are in how digital technologies are co-constitutive of forms of knowing and doing, and, as a consequence, provide a basis for fundamental transformations in society. He is a Reader at Design Informatics, University of Edinburgh.

Planning

Date and venue:

The workshop will be hosted in person in Costa Rica on November 9 or 10, 2024 (exact date and venue TBC).

Activities:

The workshop is structured as a full-day, in-person event, consisting of activities centred on mapping (Marres, 2015) and capturing multiple and diverse standpoints (e.g., Rolin, 2009). These activities will be geared towards community building and expanding the space in which to work critically on/with AI labours.

Goals:

The key objective of this workshop is to bring together researchers within (and, where possible, beyond) the CSCW community with an interest in how human labour appears in the AI pipeline, with the aims of sharing ongoing research, facilitating relationships around shared research interests, and collectively reflecting on the roles and contributions of scholars working on socio-technical issues within AI.

Workshop Themes

The workshop is an open invitation for researchers and practitioners to develop our shared account of human labour in the production of AI and algorithmic systems. We encourage broad considerations of this topic, and offer some pressing challenges to begin the conversation. We open discussions for studies of the mundane work of AI, the sites at and through which this work takes place, and how we account for this work in its categories, its characteristics, and its scalar and connective production of the pipeline.

The work to make AI work

This workshop seeks to address the work that makes AI systems work, with the goal of articulating the human labour that creates, maintains, and sometimes breaks these systems. It is a longstanding tradition within CSCW (cf. [19, 24]) to account for the politics of visible and invisible work, as well as the consequences for the representations of people within socio-technical systems design. Across various systems – including the World Wide Web, digital libraries, call centres, online discussion fora, food delivery services, and social media – CSCW has attended to who is entangled in making technologies work. This workshop welcomes discussion of what workplace studies of the AI pipeline do or could look like, and what can be gained by approaching the AI pipeline from a practice-based research orientation [3].

Sites of human labour in the pipeline

A starting point for this workshop will be a mapping of the work across the AI pipeline. The different forms of labour – and the values associated with this work – will be captured, with emphasis on the ways these centre agency, power, and authority in AI imaginaries. This mapping will cut across the data work needed to build training datasets and fine-tune models (often outsourced to the global south) [5, 6, 12, 16, 17], the labours involved in building and designing tools and systems [13, 27], and the much valorised work done to develop AI models and algorithms [14]. This workshop will invite discussions on where these forms of work are done, spatially and geographically, and how the logics used to break down the pipeline celebrate some forms of labour and devalue others.

Categories and characteristics of human labour in the pipeline

The grand narrative of AI sits within the tension between technocratic logics and the shepherding of the magical black box [10]. Through technocratic framings and wizardry, the human labour of, for instance, working through messy datasets becomes the costless, abstract, and logical counting work [6] of sanitisation in data cleaning, while the work of piecing together and fabricating datasets gets rendered as effortless but valuable data enrichment. These categories of work are sequential and smooth, and as such often rendered invisible. Alternative categories of work – whether emotion work, body work, or ethical work – offer us new ways to approach how we orientate to the AI pipeline. For instance, understood as matters of care [20], datasets and the people who produce them [22] challenge the work that is, or is not, drawn into consideration of the AI pipeline. This extends how we map human labour in its character and scope – what we do and how we do it – and, both in work that is sited and in work across the pipeline, it asks: what are the consequences of our stories of the AI pipeline, and through what means can we account for it as/to matter?

One example of such a category is to consider that the human labour of AI and algorithmic systems can be understood to involve forms of emotional labour and emotion work – often distinguished only by whether one is paid for it or not [2]. In both cases, professionals working in the AI pipeline engage in work to utilise and regulate their own and others’ emotions. Concerns about the emotional cost of work have been raised for human labour in the AI pipeline, particularly for those working in content moderation [22, 25]. Moderators, confronted with only the materials deemed harmful, experience mental and emotional fatigue, even over the course of a single work day, directly impacting decisions made on data and content in the pipeline.

Scale and AI’s global supply chain

Another cut to be made in considering the AI pipeline, and in thinking across the labours that sustain AI, is through the webs and weaves of scale. Attending to scale, this workshop will invite participants to stand back and ask how CSCW might offer a productive analytical frame to examine the many and varied labours making up a global supply chain. Think here of the uneven and woven-together scales through which AI systems are developed and then operate. There are, for example, the many different settings and geographies in which people are needed to make OpenAI’s GPT-4o speak and translate multiple languages [9], ensure a car stops before striking a pedestrian [4], or allow checkout tills to be removed from supermarkets [8]. Across such scales, labours are of course local and contingent, yet at the same time they flow into and sustain vast global networks.

Reflections on scale also raise questions about the political and economic flows that AI operates through and across. The pipeline – as it is often conceived – masks the patchiness of labour and the prevailing political economy, namely late capitalism, that labour operates in relation to. Unsurprisingly, alternative and discounted categories of work and labour are found to sit awkwardly against the scalar logics that drive capitalist ruins [26]. Here, then, the workshop will use scale as a means of situating labours in worlds that are politically and economically organised. Again, the invitation will be to consider what orientations CSCW does (and doesn’t) have to make sense of human labour in the AI pipeline in these scalar terms.

References

[1] Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán. 2022. A Primer on AI in/from the Majority World: An Empirical Site and a Standpoint. Technical Report. Data & Society Research Institute. https://dx.doi.org/10.2139/ssrn.4199467

[2] Madeline Balaam, Rob Comber, Rachel E. Clarke, Charles Windlin, Anna Ståhl, Kristina Höök, and Geraldine Fitzpatrick. 2019. Emotion Work in Experience-Centered Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 602:1–602:12. https://doi.org/10.1145/3290605.3300832

[3] Pernille Bjørn, Luigina Ciolfi, Mark Ackerman, Geraldine Fitzpatrick, and Volker Wulf. 2016. Practice-based CSCW Research: ECSCW bridging across the Atlantic. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion (San Francisco, California, USA) (CSCW ’16 Companion). Association for Computing Machinery, New York, NY, USA, 210–220. https://doi.org/10.1145/2818052.2893365

[4] Ricardo Cano. 2024. One crash set off a new era for self-driving cars in S.F., San Francisco Chronicle. Accessed May 22, 2024. https://www.sfchronicle.com/projects/2024/cruise-sf-collision-timeline/.

[5] Srravya Chandhiramowuli and Bidisha Chaudhuri. 2023. Match Made by Humans: A Critical Enquiry into Human-Machine Configurations in Data Labelling. In Proceedings of the 56th Hawaii International Conference on System Sciences (HICSS 2023).

[6] Srravya Chandhiramowuli, Alex S. Taylor, Sara Heitlinger, and Ding Wang. 2024. Making Data Work Count. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 90 (April 2024), 26 pages. https://doi.org/10.1145/3637367

[7] Kate Crawford and Vladan Joler. 2018. Anatomy of an AI System.

[8] Wes Davis. 2024. Amazon gives up on no-checkout shopping in its large grocery stores. Accessed May 22, 2024. https://www.theverge.com/2024/4/2/24119199/amazon-just-walk-out-cashierless-checkout-ending-dash-carts.

[9] Clare Duffy. 2024. OpenAI unveils newest AI model, GPT-4o, CNN. Accessed May 22, 2024. https://cnn.com/2024/05/13/tech/openai-altman-new-ai-model-gpt-4o/index.html.

[10] M. C. Elish and danah boyd. 2018. Situating Methods in the Magic of Big Data and AI. Communication Monographs 85, 1 (2018), 57–80. https://doi.org/10.1080/03637751.2017.1375130

[11] Mary L. Gray and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt, Boston.

[12] Lilly C. Irani and M. Six Silberman. 2013. Turkopticon: interrupting worker invisibility in amazon mechanical turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). Association for Computing Machinery, New York, NY, USA, 611–620. https://doi.org/10.1145/2470654.2470742

[13] Shivani Kapania, Alex S Taylor, and Ding Wang. 2023. A hunt for the Snark: Annotator Diversity in Data Practices. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 133, 15 pages. https://doi.org/10.1145/3544548.3580645

[14] Steven Levy. 2024. 8 Google Employees Invented Modern AI. Here’s the Inside Story. Wired (2024). https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/

[15] Noortje Marres. 2015. Why map issues? On controversy analysis as a digital method. Science, Technology, & Human Values 40, 5 (2015), 655–686.

[16] Milagros Miceli and Julian Posada. 2022. The Data-Production Dispositif. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 460 (November 2022), 37 pages. https://doi.org/10.1145/3555561

[17] Milagros Miceli, Martin Schuessler, and Tianling Yang. 2020. Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 115 (October 2020), 25 pages. https://doi.org/10.1145/3415186

[18] Naja Holten Møller and Marisa Leavitt Cohn. 2023. Another Rant About Technology. In Torn Many Ways: Politics, Conflict and Emotion in Research. Springer, 55–71.

[19] Bonnie A. Nardi and Yrjö Engeström. 1999. A Web on the Wind: The Structure of Invisible Work. Computer Supported Cooperative Work (CSCW) 8, 1-2 (March 1999), 1–8. https://doi.org/10.1023/A:1008694621289

[20] María Puig de la Bellacasa. 2017. Matters of Care: Speculative Ethics in More than Human Worlds. Number 41 in Posthumanities. University of Minnesota Press, Minneapolis.

[21] Kristina Rolin. 2009. Standpoint theory as a methodology for the study of power relations. Hypatia 24, 4 (2009), 218–226.

[22] Minna Ruckenstein and Linda Lisa Maria Turunen. 2020. Re-Humanizing the Platform: Content Moderators and the Logic of Care. New Media & Society 22, 6 (June 2020), 1026–1042. https://doi.org/10.1177/1461444819875990

[23] Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. “Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 39, 15 pages. https://doi.org/10.1145/3411764.3445518

[24] Susan Leigh Star and Anselm Strauss. 1999. Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work. Computer Supported Cooperative Work (CSCW) 8, 1 (March 1999), 9–30. https://doi.org/10.1023/A:1008651105359

[25] Miriah Steiger, Timir J Bharucha, Sukrit Venkatagiri, Martin J. Riedl, and Matthew Lease. 2021. The Psychological Well-Being of Content Moderators: The Emotional Labor of Commercial Moderation and Avenues for Improving Support. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3411764.3445092

[26] Anna Lowenhaupt Tsing. 2015. The mushroom at the end of the world: On the possibility of life in capitalist ruins. Princeton University Press.

[27] Sabah Zdanowska and Alex S Taylor. 2022. A study of UX practitioners roles in designing real-world, enterprise ML systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 531, 15 pages. https://doi.org/10.1145/3491102.3517607