troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

We’d be grateful for your help in sharing this call.

Further details can be found here and copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot of an interaction with AI that you find troubling – one that you have created, or that someone has shared with you – together with a short statement on your interpretation of the image and the circumstances in which you got it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what is troubling for you about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 15th November 2024.

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

Call for papers: “Generative Methods: AI as collaborator and companion in the social sciences and humanities”, Copenhagen, 6-8th December 2023

A call for papers and experiments for Generative Methods: AI as collaborator and companion in the social sciences and humanities taking place in Copenhagen, 6-8th December 2023. Further details can be found here and copied below.


“Algorithm Trouble” entry in A New AI Lexicon

A short piece on “Algorithm Trouble” for the AI Now Institute’s A New AI Lexicon, written by Axel Meunier (Goldsmiths, University of London), Jonathan Gray (King’s College London) and Donato Ricci (médialab, Sciences Po, Paris). The full piece is available here, and here’s an excerpt:

“For decades, social researchers have argued that there is much to be learned when things go wrong.¹ In this essay, we explore what can be learned about algorithms when things do not go as anticipated, and propose the concept of algorithm trouble to capture how everyday encounters with artificial intelligence might manifest, at interfaces with users, as unexpected, failing, or wrong events. The word trouble designates a problem, but also a state of confusion and distress. We see algorithm troubles as failures, computer errors, “bugs,” but also as unsettling events that may elicit, or even provoke, other perspectives on what it means to live with algorithms — including through different ways in which these troubles are experienced, as sources of suffering, injustice, humour, or aesthetic experimentation (Meunier et al., 2019). In mapping how problems are produced, the expression algorithm trouble calls attention to what is involved in algorithms beyond computational processes. It carries an affective charge that calls upon the necessity to care about relations with technology, and not only to fix them (Bellacasa, 2017).”