The great Romanian election TikTok replay

In December 2024, the European Commission launched an investigation following “serious indications that foreign actors interfered in the Romanian presidential elections using TikTok.” The elections have been described as a “first big test” for “Europe’s digital police” and the Digital Services Act.

In January 2025 a group of us met at the Digital Methods Winter School at the University of Amsterdam to explore how TikTok was used during and after the elections.

We explored ways of playing back election TikTok video collections to understand what happened.

We experimented with formats for retrospective display – drawing inspiration from creative coding, algorithmic composition, multiperspective live action replays, and the aesthetics of forensic reconstruction.

Following research on visual methods for studying folders of images (Niederer and Colombo, 2024; Colombo, Bounegru & Gray, 2023) and analytical metapicturing (Rogers, 2021), these formats display multiple videos simultaneously to surface patterns and resonances across them.

Beyond evaluating informational content, group replay formats can also highlight the everyday situations, aesthetics and affective dimensions of election TikTok videos – from sexualised lip-syncing to rousing AI anthems, sponsored micro-influencer testimonials to post-communist nationalist nostalgia.

We explored two approaches for critically replaying Romanian election videos: making video composites based on viral candidate soundtracks, and making post-election hashtag soundscapes. For the former we used a Python script to display videos by theme and adjust opacity according to play count. For the latter we used soundscaping scripts developed as part of the Forestscapes project.
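The script itself is not reproduced here, but the opacity logic can be sketched in a few lines. This is a minimal illustration under one assumption: play counts are mapped on a log scale, so that a handful of viral videos do not push all the others to near-invisibility in the composite. The function name and parameters are ours, not the project's.

```python
import math

def opacity_from_plays(play_counts, min_opacity=0.15, max_opacity=1.0):
    """Map raw play counts to per-video opacities for a composite.

    Uses a log scale so a few viral videos do not flatten the rest.
    Returns floats in [min_opacity, max_opacity], in input order.
    """
    logs = [math.log10(max(count, 1)) for count in play_counts]
    lo, hi = min(logs), max(logs)
    span = (hi - lo) or 1.0  # avoid division by zero when all counts are equal
    return [
        min_opacity + (value - lo) / span * (max_opacity - min_opacity)
        for value in logs
    ]

# Example: three videos with very different reach.
opacities = opacity_from_plays([120, 15_000, 2_400_000])
```

The resulting values could then be applied to each clip in a compositing library of choice before overlaying the videos on a shared canvas.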

For the video composites we used as case studies two viral soundtracks associated with ultranationalist Călin Georgescu and the centre-right, pro-EU, Save Romania Union candidate Elena Lasconi.

Our preliminary findings indicate that successful pro-Georgescu propaganda using the “Sustin Calin Georgescu” soundtrack relies on memetic imitation of the message and on the affective resonances of the song. TikTok influencers and everyday users translate these into popular formats such as lip-syncs and ASMR videos, effectively blending textual, visual, and audio elements.

Gender, sexuality and race are prominent themes in the most engaged-with propagandist videos for both campaigns. In pro-Georgescu content, popular endorsement videos often feature white women in either sexualised roles or domestic family settings. Homophobic and transphobic videos with male characters in dresses parody the opponent’s and her party’s association with LGBTQ issues, fuelling the audience’s strong emotions towards minoritised groups.

For the “Hai Lasconi la Putere” propagandistic song, the most significant finding is its successful appropriation for counter-propaganda spreading racist, sexist, homophobic, and transphobic content targeting minoritised groups. These videos target not only Lasconi but, more worryingly, these groups themselves, amplifying fears and prejudices, as often reflected in the comments.

The second technique we explored was post-election hashtag soundscaping. We examined hashtags such as: #anularealegeri, #aparamdemocratia, #calingeorgescupresedinte, #cg, #cinetaceestecomplice, #demisiaccr, #demisiaiohanis, #lovituradestat, #romaniatacuta, #romaniavanduta, #stegarul, #stegaruldac and #votfurat.

For example, in the #stegaruldac soundscape the simultaneous replay of TikTok video soundtracks associated with this hashtag enables a synthetic mode of attending not only to the content of propaganda but also to the various settings in which propaganda unfolds in everyday life (e.g. in the home and on the street) as well as associated affective atmospheres.
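The Forestscapes soundscaping scripts are not reproduced here, but the core operation of simultaneous replay can be sketched as summing aligned waveforms and rescaling to avoid clipping. This is a rough illustration under stated assumptions: the soundtracks have already been decoded to mono float arrays at a common sample rate, and the function name is ours.

```python
import numpy as np

def mix_soundscape(tracks, peak=0.9):
    """Overlay mono waveforms into a single soundscape.

    tracks: list of 1-D float arrays, assumed to share a sample rate.
    Shorter tracks are zero-padded to the longest; the summed signal
    is rescaled so its loudest sample sits at `peak`, preventing clipping.
    """
    length = max(len(track) for track in tracks)
    mix = np.zeros(length, dtype=np.float64)
    for track in tracks:
        mix[: len(track)] += np.asarray(track, dtype=np.float64)
    peak_level = np.abs(mix).max()
    if peak_level > 0:
        mix *= peak / peak_level
    return mix
```

The mixed array could then be written to disk with an audio I/O library for listening back as a single composite track.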

You can explore our project poster and some of our video composites and soundscapes here.

Special Issue: Generative AI for Social Research – Going Native with Artificial Intelligence

We are proud to announce the publication of a special issue in the Sociologica journal, in which we take early stock of the ways social scientists have begun to play with so-called “generative artificial intelligence” as both research instrument and research object.

Because the encounter between AI and social science is still very new, the special issue aims at breadth rather than depth, and hopes to highlight the diversity of the experiments that researchers have been running since the launch of popular generative AI systems such as ChatGPT and Stable Diffusion. At the same time, it takes a specific stance inspired by the digital methods approach (Pilati, Munk & Venturini, 2024), with its effort to overcome the quali-quantitative divide and its focus on digitally native methods.

The contributions to the special issue investigate how AI — initially developed for tasks like natural language processing and image generation — is being repurposed to meet the specific demands of social inquiry. This involves not only augmenting existing research methods, but also fostering new, digitally native and quali-quantitative techniques.

In his contribution, Gabriele de Seta (2024) introduces the concept of synthetic probes as a qualitative approach to explore the latent space of generative AI models. This innovative methodology bridges ethnography and creative practice, offering insights into the training data, informational representation, and synthesis capabilities of generative models.

In their contribution, Jacomy & Borra (2024) provide a critical examination of LLMs’ limitations and misconceptions, particularly focusing on their knowledge and self-knowledge capabilities. Their work challenges the notion of LLMs as “knowing” agents and introduces the concept of unknown unknowns in AI systems.

Studying model outputs can also focus on validation. Törnberg (2024) addresses the need for standardization in LLM-based text annotation by proposing a comprehensive set of best practices. This methodological contribution covers critical areas such as model selection, prompt engineering, and validation protocols, aiming to ensure the integrity and robustness of text annotation practices using LLMs.

Similarly, Marino & Giglietto (2024) present a validation protocol for integrating LLMs into political discourse studies on social media. Their work addresses the challenges of validating an LLMs-in-the-loop pipeline, focusing on the analysis of political content on Facebook during the Italian general elections. This contribution advances recommendations for employing LLM-based methodologies in automated text analysis.

Finally, the focus of repurposing generative AI can shift to how the technology is integrated into established research practices. Omena et al. (2024) thus introduce the AI Methodology Map, a framework for exploring generative AI applications in digital methods-led research. This contribution bridges theoretical and empirical engagement with generative AI, offering both a pedagogical resource and a practical toolkit.

Rossi et al. (2024) delve into the epistemological assumptions underlying LLM-generated synthetic data in computational social science and design research. Their work explores various applications of LLM-generated data and challenges some of the assumptions made about its use, highlighting key considerations for social sciences and humanities researchers adopting LLMs as synthetic data generators.

All of these approaches go beyond mere criticism of AI, recognizing that AI can have an astonishingly broad range of useful research applications, provided that the social sciences learn to understand its perspectives and biases and actively shape and repurpose these technologies for their research needs.

de Seta, G. (2024). Synthetic Probes: A Qualitative Experiment in Latent Space Exploration. Sociologica, 18(2), 9–23. https://doi.org/10.6092/issn.1971-8853/19512

Jacomy, M., & Borra, E. (2024). Measuring LLM Self-consistency: Unknown Unknowns in Knowing Machines. Sociologica, 18(2), 25–65. https://doi.org/10.6092/issn.1971-8853/19488

Marino, G., & Giglietto, F. (2024). Integrating Large Language Models in Political Discourse Studies on Social Media: Challenges of Validating an LLMs-in-the-loop Pipeline. Sociologica, 18(2), 87–107. https://doi.org/10.6092/issn.1971-8853/19524

Omena, J.J. (2024). AI Methodology Map. Practical and Theoretical Approach to Engage with GenAI for Digital Methods-led Research. Sociologica, 18(2), 109–144. https://doi.org/10.6092/issn.1971-8853/19566

Pilati, F., Munk, A.K., & Venturini, T. (2024). Generative AI for Social Research: Going Native with Artificial Intelligence. Sociologica, 18(2), 1–8.

Rossi, L., Shklovski, I., & Harrison, K. (2024). Applications of LLM-generated Data in Social Science Research. Sociologica, 18(2), 145–168. https://doi.org/10.6092/issn.1971-8853/19576

Törnberg, P. (2024). Best Practices for Text Annotation with Large Language Models. Sociologica, 18(2), 67–85. https://doi.org/10.6092/issn.1971-8853/19461

Hybrid Event: Digital Methods in Brazil

[cross-post]

Call for Participation

This roundtable fosters dialogue about the current state of digital methods for Internet research in Brazil. We seek to celebrate emerging research practices and to kick off a Global South network, situating these practices within a transitional methodological moment in which digital methods and methodologies have been built with, in and about AI, web platforms and data visualisation. The roundtable does not aim to provide an exhaustive overview of digital methods in Brazil; instead, it focuses on approaches developed specifically within the Brazilian context, offering unique perspectives on the field. 🇧🇷

✏️Confirm your participation here. If you join us online, we will email you the link.

🔗You are welcome to join the Digital Methods Global South Network by collaborating with us to map Digital Methods in Brazil (click here!). The results of this form will be displayed here and updated continuously 🤓.

Join us in person or online! ✨👩🏻‍💻❣️
