troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

We’d be grateful for your help in sharing this call:

Further details can be found here and are copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot, created by you or shared with you, of an interaction with AI that you find troubling, together with a short statement on your interpretation of the image and the circumstances in which you obtained it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what is troubling for you about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 10th December 2024 (extended from 15th November 2024).

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

New article: Staying with the trouble of networks

A new article on “Staying with the trouble of networks” co-authored by Daniela van Geenen, Jonathan Gray, Liliana Bounegru, Tommaso Venturini, Mathieu Jacomy and Axel Meunier has just been published in Frontiers in Big Data. It is available open access in html and PDF versions. Here’s the abstract:

Networks have risen to prominence as intellectual technologies and graphical representations, not only in science, but also in journalism, activism, policy, and online visual cultures. Inspired by approaches taking trouble as occasion to (re)consider and reflect on otherwise implicit knowledge practices, in this article we explore how problems with network practices can be taken as invitations to attend to the diverse settings and situations in which network graphs and maps are created and used in society. In doing so, we draw on cases from our research, engagement and teaching activities involving making networks, making sense of networks, making networks public, and making network tools. As a contribution to “critical data practice,” we conclude with some approaches for slowing down and caring for network practices and their associated troubles to elicit a richer picture of what is involved in making networks work as well as reconsidering their role in collective forms of inquiry.

“Algorithm Trouble” entry in A New AI Lexicon

A short piece on “Algorithm Trouble” for AI Now Institute’s A New AI Lexicon, written by Axel Meunier (Goldsmiths, University of London), Jonathan Gray (King’s College London) and Donato Ricci (médialab, Sciences Po, Paris). The full piece is available here, and here’s an excerpt:

“For decades, social researchers have argued that there is much to be learned when things go wrong.¹ In this essay, we explore what can be learned about algorithms when things do not go as anticipated, and propose the concept of algorithm trouble to capture how everyday encounters with artificial intelligence might manifest, at interfaces with users, as unexpected, failing, or wrong events. The word trouble designates a problem, but also a state of confusion and distress. We see algorithm troubles as failures, computer errors, “bugs,” but also as unsettling events that may elicit, or even provoke, other perspectives on what it means to live with algorithms — including through different ways in which these troubles are experienced, as sources of suffering, injustice, humour, or aesthetic experimentation (Meunier et al., 2019). In mapping how problems are produced, the expression algorithm trouble calls attention to what is involved in algorithms beyond computational processes. It carries an affective charge that calls upon the necessity to care about relations with technology, and not only to fix them (Bellacasa, 2017).”