troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this, we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris, and the Public Data Lab.

We’d be grateful for your help in sharing this call:

Further details can be found here and copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot, created by you or shared with you, of an interaction with AI that you find troubling, along with a short statement giving your interpretation of the image and the circumstances in which you got it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what is troubling for you about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 15th November 2024.

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris, and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

Call for papers: “Generative Methods: AI as collaborator and companion in the social sciences and humanities”, Copenhagen, 6-8th December 2023

A call for papers and experiments for Generative Methods: AI as collaborator and companion in the social sciences and humanities taking place in Copenhagen, 6-8th December 2023. Further details can be found here and copied below.


Working paper on “Testing ‘AI’: Do we have a situation?”

A new working paper on “Testing ‘AI’: Do We Have a Situation?”, based on a conversation between Noortje Marres and Philippe Sormani, has just been published as part of a working paper series from “Media of Cooperation” at the University of Siegen. The paper can be found here and further details are copied below.

The new publication »Testing ‘AI’: Do We Have a Situation?« in the Working Paper Series (No. 28, June 2023) is based on the transcription of a recent conversation between the authors Noortje Marres and Philippe Sormani regarding current instances of the real-world testing of “AI” and the “situations” they have given rise to (or, as the case may be, have not). The conversation took place online on the 25th of May 2022 as part of the Lecture Series “Testing Infrastructures” organized by the Collaborative Research Center (CRC) 1187 “Media of Cooperation” at the University of Siegen, Germany. This working paper is an elaborated version of that conversation.

In their conversation, Marres and Sormani discuss the social implications of AI along three questions. First, they return to a classic critique that sociologists and anthropologists have levelled at AI, namely the claim that the ontology and epistemology underlying AI development are rationalist and individualist, and as such are marked by blind spots for the social, and in particular the situated or situational, embedding of AI (Suchman, 1987, 2007; Star, 1989). Secondly, they delve into the issue of whether and how social studies of technology can account for AI testing in real-world settings in situational terms. And thirdly, they ask what this tells us about possible tensions and alignments between the different “definitions of the situation” assumed in social studies, engineering and computer science in relation to AI. Finally, they discuss the ramifications for their methodological commitment to “the situation” in the social study of AI.

Noortje Marres is Professor of Science, Technology and Society at the Centre for Interdisciplinary Methodology at the University of Warwick and Guest Professor at the Media of Cooperation Collaborative Research Centre at the University of Siegen. She has published two monographs, Material Participation (2012) and Digital Sociology (2017).

Philippe Sormani is Senior Researcher and Co-Director of the Science and Technology Studies Lab at the University of Lausanne. Drawing on and developing ethnomethodology, he has published on experimentation in and across different fields of activity, ranging from experimental physics (in Respecifying Lab Ethnography, 2014) to artistic experiments (in Practicing Art/Science, 2019).

The paper »Testing ‘AI’: Do We Have a Situation?« is published as part of the Working Paper Series of the CRC 1187, which promotes inter- and transdisciplinary media research and provides an avenue for the rapid publication and dissemination of ongoing research located at or associated with the CRC. The purpose of the series is to circulate in-progress research to the wider research community beyond the CRC. All Working Papers are accessible via the website.

Image caption: Ghost #8 (Memories of a mise en abîme with a bare back in front of an untamable tentacular screen), experimenting with OpenAI Dall-E, Maria Guta and Lauren Huret (Iris), 2022. (Courtesy of the artists)