Blog

What next for Memespector-GUI?

Since 2021, Memespector-GUI has helped over a thousand researchers use well-known cloud-based computer vision APIs to enhance their image datasets. The last major version of the tool was released about four years ago.

The developer of Memespector-GUI is considering continuing development of the tool. He would like to invite former, current, and potential future users to share their feedback on the current version, as well as their ideas for the next generation of the tool.

Questionnaire: https://forms.gle/BZJmMVEFFcN1zksMA

This short questionnaire has 12 questions and should take you less than 5 minutes to complete.

This survey will close on 31 March 2025.

If you have any questions or would like to discuss further, feel free to contact Jason Chao at chao@jasontc.net.

forestscapes listening lab at Science Gallery London, 21st March 2025

How can soundscaping prompt reconsideration of the lives, cultures and futures of forests?

To explore this we’re organising a forestscapes listening lab at Science Gallery London, as part of Pulse of the Planet on 21st March 2025 from 6.30pm. You can find out more and register here.

The listening lab is part of the forestscapes project, which examines how soundscaping can surface different ways of knowing, imagining and experiencing forests.

As part of this project we are developing generative arts-based methods for recomposing collections of sound materials to support “collective inquiry” into forests as living cultural landscapes.

At the listening lab we will be using SuperCollider for live algorithmic recomposition of collections of forest-related sounds – including field recordings from forest research and restoration projects, as well as sounds associated with forest sites and forest issues on online platforms such as YouTube and TikTok.
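The live recomposition at the lab itself is done in SuperCollider, but as a rough illustration of the underlying idea, here is a minimal Python sketch (using pydub and a hypothetical folder of field recordings) that layers a few randomly chosen recordings into a single mix:

```python
# Illustrative sketch only: the listening lab uses SuperCollider for live recomposition.
# Layers a few randomly chosen recordings from a (hypothetical) "recordings" folder.
import random
from pathlib import Path

from pydub import AudioSegment  # requires ffmpeg for mp3 decoding/encoding

def recompose(folder="recordings", n_layers=5, duration_ms=60_000):
    files = sorted(Path(folder).glob("*.mp3"))
    chosen = random.sample(files, min(n_layers, len(files)))
    mix = AudioSegment.silent(duration=duration_ms)
    for f in chosen:
        layer = AudioSegment.from_file(str(f))[:duration_ms] - 6  # trim, attenuate by 6 dB
        # start each layer at a random offset so the composition differs on every run
        mix = mix.overlay(layer, position=random.randint(0, duration_ms // 2))
    return mix

if __name__ == "__main__":
    recompose().export("soundscape.mp3", format="mp3")
```

Each run produces a different layering, which is (very loosely) the kind of generative recomposition explored live at the lab.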

In contrast to listening as individual immersion in curated recreations of nature, the lab will explore listening as a collective practice of unsettling and reconsidering nature-culture relations and how ecologies are mediatised, commodified, laundered and contested.

If you’d like to get updates on the forestscapes project you can sign up here. If you’re interested in collaborating or hosting a forestscapes workshop you can find contact details here.

The great Romanian election TikTok replay

In December 2024, the European Commission launched an investigation following “serious indications that foreign actors interfered in the Romanian presidential elections using TikTok.” The elections have been described as a “first big test” for “Europe’s digital police” and the Digital Services Act.

In January 2025 a group of us met at the Digital Methods Winter School at the University of Amsterdam to explore how TikTok was used during and after the elections.

We explored ways of playing back election TikTok video collections to understand what happened.

We experimented with formats for retrospective display – drawing inspiration from creative coding, algorithmic composition, multiperspective live action replays, and the aesthetics of forensic reconstruction.

Following research on visual methods for studying folders of images (Niederer and Colombo, 2024; Colombo, Bounegru & Gray, 2023) and analytical metapicturing (Rogers, 2021), these formats display multiple videos simultaneously to surface patterns and resonances across them.

Beyond evaluating informational content, group replay formats can also highlight the everyday situations, aesthetics and affective dimensions of election TikTok videos – from sexualised lip-syncing to rousing AI anthems, and from sponsored micro-influencer testimonials to post-communist nationalist nostalgia.

We explored two approaches for critically replaying Romanian election videos: making video composites based on viral candidate soundtracks, and making post-election hashtag soundscapes. For the former we used a Python script to display videos by theme and adjust opacity according to play count. For the latter we used soundscaping scripts developed as part of the Forestscapes project.
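As a rough sketch of the first approach (not the exact script from the sprint), the snippet below assumes a hypothetical videos.csv with columns file, theme and play_count, and uses the moviepy 1.x API to overlay the clips for one theme, scaling each clip's opacity by its relative play count:

```python
# Illustrative sketch: overlay the clips for one theme, weighting opacity by play count.
# Assumes a hypothetical videos.csv with columns: file, theme, play_count.
import csv

from moviepy.editor import VideoFileClip, CompositeVideoClip

def composite_for_theme(metadata_csv, theme, size=(1080, 1920), duration=15):
    with open(metadata_csv, newline="") as fh:
        rows = [r for r in csv.DictReader(fh) if r["theme"] == theme]
    max_plays = max(int(r["play_count"]) for r in rows)
    clips = []
    for r in rows:
        clip = VideoFileClip(r["file"]).resize(size)
        clip = clip.subclip(0, min(duration, clip.duration))
        # more widely viewed videos appear more opaque in the composite
        clips.append(clip.set_opacity(int(r["play_count"]) / max_plays))
    return CompositeVideoClip(clips, size=size)

if __name__ == "__main__":
    composite_for_theme("videos.csv", "lipsync").write_videofile("lipsync_composite.mp4", fps=24)
```

Weighting opacity by play count makes the most viral material visually dominant while keeping the long tail of less viewed videos faintly present in the composite.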

For the video composites we used as case studies two viral soundtracks, associated respectively with the ultranationalist Călin Georgescu and with the centre-right, pro-EU Save Romania Union candidate Elena Lasconi.

Our preliminary findings indicate that successful pro-Georgescu propaganda using the “Sustin Calin Georgescu” soundtrack relies on memetic imitation of the message and the affective resonances of the song. TikTok influencers and everyday users translate these into popular formats such as lip-syncs and ASMR videos, effectively blending textual, visual, and audio elements.

Gender, sexuality and race are prominent themes in the most engaged-with propagandist videos for both campaigns. In pro-Georgescu content, popular endorsement videos often feature white women in either sexualised roles or domestic family settings. Homophobic and transphobic videos featuring male characters in dresses parody the opponent’s and her party’s association with LGBTQ issues, fuelling the audience’s strong emotions towards minoritised groups.

For the “Hai Lasconi la Putere” propagandistic song, the most significant finding is its successful appropriation for counter-propaganda spreading racist, sexist, homophobic, and transphobic content targeting minoritised groups. These videos target not only Lasconi but, more worryingly, these groups themselves, amplifying fears and prejudices, as often reflected in the comments.

The second technique we explored was post-election hashtag soundscaping. We examined hashtags such as: #anularealegeri, #aparamdemocratia, #calingeorgescupresedinte, #cg, #cinetaceestecomplice, #demisiaccr, #demisiaiohanis, #lovituradestat, #romaniatacuta, #romaniavanduta, #stegarul, #stegaruldac and #votfurat.

For example, in the #stegaruldac soundscape the simultaneous replay of TikTok video soundtracks associated with this hashtag enables a synthetic mode of attending not only to the content of propaganda but also to the various settings in which propaganda unfolds in everyday life (e.g. in the home and on the street) as well as associated affective atmospheres.

You can explore our project poster and some of our video composites and soundscapes here.

Special Issue: Generative AI for Social Research – Going Native with Artificial Intelligence

We are proud to announce the publication of a special issue of the journal Sociologica, in which we take early stock of the ways in which social scientists have begun to play with so-called “generative artificial intelligence” as both research instrument and research object.

Because the encounter between AI and social science is still very new, the special issue aims at breadth rather than depth, hoping to highlight the diversity of the experiments that researchers have been running since the launch of popular generative tools such as ChatGPT and Stable Diffusion. At the same time, it takes a specific stance inspired by the digital methods approach (Pilati, Munk & Venturini, 2024), with its effort to overcome the quali-quantitative divide and its focus on digitally native methods.

The contributions to the special issue investigate how AI — initially developed for tasks like natural language processing and image generation — is being repurposed to meet the specific demands of social inquiry. This involves not only augmenting existing research methods, but also fostering new, digitally native and quali-quantitative techniques.

In his contribution, Gabriele de Seta (2024) introduces the concept of synthetic probes as a qualitative approach to explore the latent space of generative AI models. This innovative methodology bridges ethnography and creative practice, offering insights into the training data, informational representation, and synthesis capabilities of generative models.

In their contribution, Jacomy & Borra (2024) provide a critical examination of LLMs’ limitations and misconceptions, particularly focusing on their knowledge and self-knowledge capabilities. Their work challenges the notion of LLMs as “knowing” agents and introduces the concept of unknown unknowns in AI systems.

Studying model outputs can also focus on validation. Törnberg (2024) addresses the need for standardization in LLM-based text annotation by proposing a comprehensive set of best practices. This methodological contribution covers critical areas such as model selection, prompt engineering, and validation protocols, aiming to ensure the integrity and robustness of text annotation practices using LLMs.

Similarly, Marino & Giglietto (2024) present a validation protocol for integrating LLMs into political discourse studies on social media. Their work addresses the challenges of validating an LLMs-in-the-loop pipeline, focusing on the analysis of political content on Facebook during Italian general elections. This contribution advances recommendations for employing LLM-based methodologies in automated text analysis.
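As a generic illustration of what such LLM-based annotation with a basic validation step can look like (a rough sketch with placeholder names, not a protocol from either contribution), one might prompt a chat model for a single label per text and measure agreement against a small human-coded sample before annotating a full corpus:

```python
# Generic sketch of LLM-based text annotation with a simple validation check.
# Not a protocol from the special issue; model choice and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def annotate(text, labels=("positive", "negative", "neutral")):
    prompt = (
        "Label the stance of the following post towards the candidate as one of: "
        f"{', '.join(labels)}. Reply with the label only.\n\nPost: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep outputs stable for annotation tasks
    )
    return response.choices[0].message.content.strip().lower()

def agreement(gold_sample):
    """Share of items where the LLM label matches a human-coded gold label."""
    return sum(annotate(text) == label for text, label in gold_sample) / len(gold_sample)

# Usage: validate against a handful of human-coded posts before scaling up, e.g.
# print(agreement([("Great speech tonight!", "positive"), ("Shameful performance.", "negative")]))
```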

Finally, the focus of repurposing generative AI can also shift to how these tools are integrated into established research practices. Omena et al. (2024) thus introduce the AI Methodology Map, a framework for exploring generative AI applications in digital methods-led research. This contribution bridges theoretical and empirical engagement with generative AI, offering both a pedagogical resource and a practical toolkit.

Rossi et al. (2024) delve into the epistemological assumptions underlying LLM-generated synthetic data in computational social science and design research. Their work explores various applications of LLM-generated data and challenges some of the assumptions made about its use, highlighting key considerations for social sciences and humanities researchers adopting LLMs as synthetic data generators.

All of these approaches go beyond mere criticism of AI, recognizing that AI can have an astonishingly broad range of useful research applications, provided that the social sciences learn to understand these technologies’ perspectives and biases and actively shape and repurpose them for their own research needs.

de Seta, G. (2024). Synthetic Probes: A Qualitative Experiment in Latent Space Exploration. Sociologica, 18(2), 9–23. https://doi.org/10.6092/issn.1971-8853/19512

Jacomy, M., & Borra, E. (2024). Measuring LLM Self-consistency: Unknown Unknowns in Knowing Machines. Sociologica, 18(2), 25–65. https://doi.org/10.6092/issn.1971-8853/19488

Marino, G., & Giglietto, F. (2024). Integrating Large Language Models in Political Discourse Studies on Social Media: Challenges of Validating an LLMs-in-the-loop Pipeline. Sociologica, 18(2), 87–107. https://doi.org/10.6092/issn.1971-8853/19524

Omena, J.J. (2024). AI Methodology Map. Practical and Theoretical Approach to Engage with GenAI for Digital Methods-led Research. Sociologica, 18(2), 109–144. https://doi.org/10.6092/issn.1971-8853/19566

Pilati, F., Munk, A.K., & Venturini, T. (2024). Generative AI for Social Research: Going Native with Artificial Intelligence. Sociologica, 18(2), 1–8.

Rossi, L., Shklovski, I., & Harrison, K. (2024). Applications of LLM-generated Data in Social Science Research. Sociologica, 18(2), 145–168. https://doi.org/10.6092/issn.1971-8853/19576

Törnberg, P. (2024). Best Practices for Text Annotation with Large Language Models. Sociologica, 18(2), 67–85. https://doi.org/10.6092/issn.1971-8853/19461

Hybrid Event: Digital Methods in Brazil

[cross-post]

Call for Participation

This roundtable fosters dialogue about the current state of digital methods for Internet research in Brazil. We seek to celebrate emerging research practices and to kick off a Global South network, situating these practices within a transitional methodological moment in which digital methods and methodologies have been built with, in and about AI, web platforms and data visualisation. This roundtable does not provide an exhaustive overview of digital methods in Brazil. Instead, it focuses on approaches specifically developed within the Brazilian context, offering unique perspectives on the field. 🇧🇷

✏️Confirm your participation here. If you join us online, we will email you the link.

🔗You are welcome to join the Digital Methods Global South Network by collaborating with us to map Digital Methods in Brazil (click here!) The results of this form will be displayed here and updated continuously 🤓.

Join us in person or online! ✨👩🏻‍💻❣️


Job: Digital Methods Research Associate at Media of Cooperation, University of Siegen

The Media of Cooperation Research Centre at the University of Siegen is hiring a Digital Methods Research Associate. Further details can be found here and copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

Job title: Research Associate – Digital Methods / Scientific Programmer (SFB 1187)

Area: Faculty I – Faculty of Philosophy | Scope of position: full-time | Duration of employment: limited | Advertisement ID: 6274

We are an interdisciplinary and cosmopolitan university with currently around 15,000 students and a range of subjects from the humanities, social sciences and economics to natural sciences, engineering and life sciences. With over 2,000 employees, we are one of the largest employers in the region and offer a unique environment for teaching, research and further education.

In Faculty I – Faculty of Philosophy, SFB 1187 Media of Cooperation, we are looking to fill, as soon as possible, a research associate position in the field of Digital Methods / Scientific Programming under the following conditions:

100% = 39.83 hours per week

Salary group 13 TV-L

limited until December 31, 2027

YOUR TASKS

  • Support in the development and teaching of digital research methods within the framework of the SFB Media of Cooperation, and teaching in the media studies courses
  • Development, implementation and updating of software tools for working with digital research methods, as well as further development of existing open source research software, such as 4CAT
  • Support in the collection, analysis and visualization of data from online media within the framework of the research projects of the SFB Media of Cooperation, especially in the area of social media platforms, audiovisual platforms, generative AI, apps and sensory media
  • Administration and maintenance of the digital research infrastructure for data collection, archiving and analysis
  • Participation in the planning and implementation of media science research projects
  • Technical support for workshops and events
  • Networking with developers of research software, also internationally
  • Teaching obligation: 4 semester hours per week

YOUR PROFILE

  • Completed academic university degree (diploma, master’s, magister, teaching qualification, comparable foreign degree) in computer science, business informatics, media studies or a related discipline
  • Experience with system administration and support of server environments (Linux) as well as the operation of web-based applications (e.g. 4CAT)
  • Very good knowledge of developing applications with Python and database systems (MySQL or similar) or willingness to deepen this
  • Basic knowledge of web development with JavaScript, PHP, HTML, CSS, XML or willingness to acquire this
  • Affinity for working with data from platforms, apps, the web or other data-intensive media, for example using scraping or API retrieval
  • Ability to work in a team, creativity and very good communication skills
  • Fluent written and spoken English
  • Experience in the conception and development of research software and interest in supporting the research of the SFB Media of Cooperation

OUR OFFER

  • Promotion of your own scientific or artistic qualification in accordance with the Scientific Temporary Employment Act
  • Various opportunities to take on responsibility and make a visible contribution in the field of research and teaching
  • A modern understanding of leadership and collaboration
  • Good compatibility of work and private life, for example through flexible working hours and place of work as well as support with childcare
  • Comprehensive personnel development program
  • Health management with a wide range of prevention and advice services

We look forward to receiving your application by December 24, 2024.

Please only apply via our job portal (https://jobs.uni-siegen.de). Unfortunately, we cannot consider applications in paper form or by email.

German language skills are nice to have, but not required.

Contact: Prof. Dr. Carolin Gerlitz

New chapter on “#amazonfires and the online composition of ecological politics” in Digital Ecologies book

How are digital objects – such as hashtags, links, likes and images – involved in ecological politics?

Public Data Lab researchers Liliana Bounegru, Gabriele Colombo and Jonathan Gray explore this in a new chapter on “#amazonfires and the online composition of ecological politics” as part of the book Digital Ecologies: Mediating More-than-Human Worlds, which has just been published by Manchester University Press.

Here’s the abstract for the chapter:

How are digital objects such as hashtags, links, likes and images involved in the production of forest politics? This chapter explores this through collaborative research on the dynamics of online engagement with the 2019 Amazon forest fires. Through a series of empirical vignettes with visual materials and data from social media, we examine how digital platforms, objects and devices perform and organise relations between forests and a wide variety of societal actors, issues, cultures – from bots to boycotts, agriculture to eco-activism, scientists to pop stars, indigenous communities to geopolitical interventions. Looking beyond concerns with the representational (in-)fidelities of forest media, we consider the role of collaborative methodological experiments with co-hashtag networks, cross-platform analysis, composite images and image-text variations in tracing, eliciting and unfolding the digital mediation of ecological politics. Thinking along with research on the social lives of methods, we consider the role of digital data, methods and infrastructures in the composition and recomposition of problems, relations and ontologies of forests in society.

Here’s the book blurb:

Digital ecologies draws together leading social science and humanities scholars to examine how digital media are reshaping the futures of conservation, environmentalism, and ecological politics. The book offers an overview of the emerging field of interdisciplinary digital ecologies research by mapping key debates and issues in the field, with original empirical chapters exploring how livestreams, sensors, mobile technologies, social media platforms, and software are reconfiguring life in profound ways. The collection traverses contexts ranging from animal exercise apps, to surveillance systems on the high seas, and is organised around the themes of encounters, governance, and assemblages. Digital ecologies also includes an agenda-setting intervention by the book’s editors, and three closing chapter-length provocations by leading scholars in digital geographies, the environmental humanities, and media theory that set out trajectories for future research.

Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025

The annual Digital Methods Winter School in Amsterdam will take place on 6-10th January 2025 with the theme “Chatbots and LLMs for Internet Research?”. The deadline for applications is 9 December 2024. You can read more on this page (an excerpt from which is copied below).

Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025
https://wiki.digitalmethods.net/Dmi/WinterSchool2025

The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on ‘Chatbots for Internet Research?’. The format is that of a (social media and web) data sprint, with tutorials as well as hands-on work for telling stories with data. There is also a programme of keynote speakers. It is intended for advanced Master’s students, PhD candidates and motivated scholars who would like to work on (and complete) a digital methods project in an intensive workshop setting. For a preview of what the event is like, you can view short video clips from previous editions of the School.

Chatbots and LLMs for Internet Research? Towards a Reflexive Approach

Positions are now increasingly staked out in the debate concerning the application of chatbots and LLMs to social and cultural research. On the one hand there is the question of ‘automating’ methods and shifting some additional part of the epistemological burden to machines. On the other there is the rejoinder that chatbots may well be adequate research buddies, assisting with (among other things) burdensome and repetitive tasks such as coding and annotating data sets. They seem to be continually improving, or at least growing in size and apparent promise. Researcher experiences are now widely reported: chatbots have outperformed human coders, ‘understanding’ rather nuanced stance-taking language and correctly labeling it better than average coders. But other work has found that LLM labeling also has a tendency to be bland, given how the filters and safety guardrails (particularly in US-based chatbots) tend to depoliticise or otherwise soften their responses. As researcher experience with LLMs becomes more widely reported, there are user guides and best practices designed to make LLM findings more robust. Models should be carefully chosen, personas should be well developed, prompting should be conversational, and so forth. LLM critique is also developing apace, with (comparative) audits interrogating underlying discrimination and bias that are only papered over by filters. At this year’s Digital Methods Winter School we will explore these research practices with chatbots and LLMs for internet research, with an emphasis on bringing them together: how to deploy and critique chatbots and LLMs at the same time, in a form of reflexive usage?

There are rolling admissions and applications are now being accepted. To apply please send a letter of motivation, your CV, a headshot photo and a 100-word bio to winterschool [at] digitalmethods.net. Notifications of acceptance are sent within 2 weeks of application. The final deadline for applications is 9 December 2024. The full program and schedule of the Winter School will be available by 19 December 2024.


troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

We’d be grateful for your help in sharing this call:

Further details can be found here and copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot that you have created, or that someone has shared with you, of an interaction with AI that you find troubling, with a short statement on your interpretation of the image and circumstances of how you got it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what is troubling for you about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 10th December 2024 (deadline extended from 15th November 2024).

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

Hybrid event for special issue on critical technical practice(s) in digital research, 10th July 2024

There will be a launch event for the Convergence special issue on critical technical practice(s) in digital research on Wednesday 10th July, 2-4pm (CEST). This will include an introduction to the special issue, brief presentations from several special issue contributors, followed by discussion about possibilities and next steps. You can register here.