Working paper on “Testing ‘AI’: Do We Have a Situation?”

A new working paper, “Testing ‘AI’: Do We Have a Situation?”, based on a conversation between Noortje Marres and Philippe Sormani, has just been published as part of the Working Paper Series of “Media of Cooperation” at the University of Siegen. The paper can be found here, and further details are copied below.

The new publication »Testing ‘AI’: Do We Have a Situation?« in the Working Paper Series (No. 28, June 2023) is based on the transcription of a conversation between the authors Noortje Marres and Philippe Sormani on current instances of the real-world testing of “AI” and the “situations” they have given rise to, or, as the case may be, have not. The conversation took place online on the 25th of May 2022 as part of the Lecture Series “Testing Infrastructures” organized by the Collaborative Research Center (CRC) 1187 “Media of Cooperation” at the University of Siegen, Germany. The working paper is an elaborated version of that conversation.

In their conversation, Marres and Sormani discuss the social implications of AI through three questions. First, they return to a classic critique that sociologists and anthropologists have levelled at AI, namely the claim that the ontology and epistemology underlying AI development are rationalist and individualist and, as such, are marked by blind spots for the social, and in particular the situated or situational, embedding of AI (Suchman, 1987, 2007; Star, 1989). Secondly, they delve into the issue of whether and how social studies of technology can account for AI testing in real-world settings in situational terms. Thirdly, they ask what this tells us about possible tensions and alignments between the different “definitions of the situation” assumed in social studies, engineering, and computer science in relation to AI. Finally, they discuss the ramifications for their methodological commitment to “the situation” in the social study of AI.

Noortje Marres is Professor of Science, Technology and Society at the Centre for Interdisciplinary Methodology at the University of Warwick and Guest Professor at the Collaborative Research Centre “Media of Cooperation” at the University of Siegen. She has published two monographs, Material Participation (2012) and Digital Sociology (2017).

Philippe Sormani is Senior Researcher and Co-Director of the Science and Technology Studies Lab at the University of Lausanne. Drawing on and developing ethnomethodology, he has published on experimentation in and across different fields of activity, ranging from experimental physics (in Respecifying Lab Ethnography, 2014) to artistic experiments (in Practicing Art/Science, 2019).

The paper »Testing ‘AI’: Do We Have a Situation?« is published as part of the Working Paper Series of the CRC 1187, which promotes inter- and transdisciplinary media research and provides an avenue for the rapid publication and dissemination of ongoing research located at or associated with the CRC. The purpose of the series is to circulate in-progress research to the wider research community beyond the CRC. All working papers are accessible via the website.

Image caption: Ghost #8 (Memories of a mise en abîme with a bare back in front of an untamable tentacular screen), experimenting with OpenAI Dall-E, Maria Guta and Lauren Huret (Iris), 2022. (Courtesy of the artists)

“Algorithm Trouble” entry in A New AI Lexicon

A short piece on “Algorithm Trouble” for AI Now Institute’s A New AI Lexicon, written by Axel Meunier (Goldsmiths, University of London), Jonathan Gray (King’s College London) and Donato Ricci (médialab, Sciences Po, Paris). The full piece is available here, and here’s an excerpt:

“For decades, social researchers have argued that there is much to be learned when things go wrong.¹ In this essay, we explore what can be learned about algorithms when things do not go as anticipated, and propose the concept of algorithm trouble to capture how everyday encounters with artificial intelligence might manifest, at interfaces with users, as unexpected, failing, or wrong events. The word trouble designates a problem, but also a state of confusion and distress. We see algorithm troubles as failures, computer errors, “bugs,” but also as unsettling events that may elicit, or even provoke, other perspectives on what it means to live with algorithms — including through different ways in which these troubles are experienced, as sources of suffering, injustice, humour, or aesthetic experimentation (Meunier et al., 2019). In mapping how problems are produced, the expression algorithm trouble calls attention to what is involved in algorithms beyond computational processes. It carries an affective charge that calls upon the necessity to care about relations with technology, and not only to fix them (Bellacasa, 2017).”