Job: Digital Methods Research Associate at Media of Cooperation, University of Siegen

The Media of Cooperation Research Centre at the University of Siegen is hiring a Digital Methods Research Associate. Further details can be found here and are copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

Job title: Research Associate – Digital Methods / Scientific Programmer (SFB 1187)

Area: Faculty I – Faculty of Philosophy | Scope of position: full-time | Duration of employment: limited | Advertisement ID: 6274

We are an interdisciplinary and cosmopolitan university with currently around 15,000 students and a range of subjects from the humanities, social sciences and economics to natural sciences, engineering and life sciences. With over 2,000 employees, we are one of the largest employers in the region and offer a unique environment for teaching, research and further education.

In Faculty I – Faculty of Philosophy, SFB 1187 Media of Cooperation, we are looking for a research associate in the field of Digital Methods / Scientific Programming, to start as soon as possible, under the following conditions:

100% = 39.83 hours per week

Salary group 13 TV-L

limited until December 31, 2027

YOUR TASKS

  • Support in the development and teaching of digital research methods within the framework of the SFB Media of Cooperation, as well as teaching in the media studies degree programmes.
  • Development, implementation and updating of software tools for working with digital research methods, as well as further development of existing open source research software, such as 4CAT.
  • Support in the collection, analysis and visualization of data from online media within the framework of the research projects of the SFB Media of Cooperation, especially in the area of social media platforms, audiovisual platforms, generative AI, apps and sensory media.
  • Administration and maintenance of the digital research infrastructure for data collection, archiving and analysis
  • Participation in the planning and implementation of media science research projects
  • Technical support for workshops and events
  • Networking with developers of research software, also internationally
  • Teaching obligation: 4 semester hours per week

YOUR PROFILE

  • Completed academic university degree (diploma, master’s, magister, teaching qualification, comparable foreign degree) in computer science, business informatics, media studies or a related discipline
  • Experience with system administration and support of server environments (Linux) as well as the operation of web-based applications (e.g. 4CAT)
  • Very good knowledge of developing applications with Python and database systems (MySQL or similar) or willingness to deepen this
  • Basic knowledge of web development with JavaScript, PHP, HTML, CSS, XML or willingness to acquire this
  • Affinity for working with data from platforms, apps, the web or other data-intensive media, for example via scraping or API retrieval
  • Ability to work in a team, creativity and very good communication skills
  • Fluent written and spoken English
  • Experience in the conception and development of research software and interest in supporting the research of the SFB Media of Cooperation

OUR OFFER

  • Promotion of your own scientific or artistic qualification in accordance with the Scientific Temporary Employment Act
  • Various opportunities to take on responsibility and make a visible contribution in the field of research and teaching
  • A modern understanding of leadership and collaboration
  • Good compatibility of work and private life, for example through flexible working hours and place of work as well as support with childcare
  • Comprehensive personnel development program
  • Health management with a wide range of prevention and advice services

We look forward to receiving your application by December 24, 2024.

Please apply only via our job portal (https://jobs.uni-siegen.de). Unfortunately, we cannot consider applications in paper form or by email.

German language skills are nice to have, but not required.

Contact: Prof. Dr. Carolin Gerlitz

New chapter on “#amazonfires and the online composition of ecological politics” in Digital Ecologies book

How are digital objects – such as hashtags, links, likes and images – involved in ecological politics?

Public Data Lab researchers Liliana Bounegru, Gabriele Colombo and Jonathan Gray explore this in a new chapter on “#amazonfires and the online composition of ecological politics” as part of the book Digital Ecologies: Mediating More-than-Human Worlds, which has just been published by Manchester University Press.

Here’s the abstract for the chapter:

How are digital objects such as hashtags, links, likes and images involved in the production of forest politics? This chapter explores this through collaborative research on the dynamics of online engagement with the 2019 Amazon forest fires. Through a series of empirical vignettes with visual materials and data from social media, we examine how digital platforms, objects and devices perform and organise relations between forests and a wide variety of societal actors, issues, cultures – from bots to boycotts, agriculture to eco-activism, scientists to pop stars, indigenous communities to geopolitical interventions. Looking beyond concerns with the representational (in-)fidelities of forest media, we consider the role of collaborative methodological experiments with co-hashtag networks, cross-platform analysis, composite images and image-text variations in tracing, eliciting and unfolding the digital mediation of ecological politics. Thinking along with research on the social lives of methods, we consider the role of digital data, methods and infrastructures in the composition and recomposition of problems, relations and ontologies of forests in society.

Here’s the book blurb:

Digital ecologies draws together leading social science and humanities scholars to examine how digital media are reshaping the futures of conservation, environmentalism, and ecological politics. The book offers an overview of the emerging field of interdisciplinary digital ecologies research by mapping key debates and issues in the field, with original empirical chapters exploring how livestreams, sensors, mobile technologies, social media platforms, and software are reconfiguring life in profound ways. The collection traverses contexts ranging from animal exercise apps, to surveillance systems on the high seas, and is organised around the themes of encounters, governance, and assemblages. Digital ecologies also includes an agenda-setting intervention by the book’s editors, and three closing chapter-length provocations by leading scholars in digital geographies, the environmental humanities, and media theory that set out trajectories for future research.

Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025

The annual Digital Methods Winter School in Amsterdam will take place on 6–10 January 2025 with the theme “Chatbots and LLMs for Internet Research?”. The deadline for applications is 9 December 2024. You can read more on this page (an excerpt from which is copied below).

Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025
https://wiki.digitalmethods.net/Dmi/WinterSchool2025

The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on ‘Chatbots for Internet Research?’. The format is that of a (social media and web) data sprint, with tutorials as well as hands-on work for telling stories with data. There is also a programme of keynote speakers. It is intended for advanced Master’s students, PhD candidates and motivated scholars who would like to work on (and complete) a digital methods project in an intensive workshop setting. For a preview of what the event is like, you can view short video clips from previous editions of the School.

Chatbots and LLMs for Internet Research? Towards a Reflexive Approach

Positions are now increasingly being staked out in the debate concerning the application of chatbots and LLMs to social and cultural research. On the one hand there is the question of ‘automating’ methods and shifting some additional part of the epistemological burden to machines. On the other there is the rejoinder that chatbots may well be adequate research buddies, assisting with (among other things) burdensome and repetitive tasks such as coding and annotating data sets. They seem to be continually improving, or at least growing in size and apparent promise. Researcher experiences are now widely reported: chatbots have outperformed human coders, ‘understanding’ rather nuanced stance-taking language and labelling it correctly more often than average coders. But other work has found that LLM labelling also has a tendency to be bland, given how filters and safety guardrails (particularly in US-based chatbots) tend to depoliticise or otherwise soften their responses. As researcher experience with LLMs becomes more widely reported, user guides and best practices are emerging that are designed to make LLM findings more robust: models should be carefully chosen, personas should be well developed, prompting should be conversational, and so forth.

LLM critique is also developing apace, with (comparative) audits interrogating the underlying discrimination and bias that are only papered over by filters. At this year’s Digital Methods Winter School we will explore these research practices with chatbots and LLMs for internet research, with an emphasis on bringing them together: how can we deploy and critique chatbots and LLMs at the same time, in a form of reflexive usage?

Admissions are rolling and applications are now being accepted. To apply please send a letter of motivation, your CV, a headshot photo and a 100-word bio to winterschool [at] digitalmethods.net. Notifications of acceptance are sent within 2 weeks of application. The final deadline for applications is 9 December 2024. The full programme and schedule of the Winter School will be available by 19 December 2024.


troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

We’d be grateful for your help in sharing this call:

Further details can be found here and are copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot that you have created, or that someone has shared with you, of an interaction with AI that you find troubling, together with a short statement on your interpretation of the image and the circumstances in which you obtained it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what is troubling for you about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 10th December 2024 (extended from the original deadline of 15th November 2024).

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

Hybrid event for special issue on critical technical practice(s) in digital research, 10th July 2024

There will be a launch event for the Convergence special issue on critical technical practice(s) in digital research on Wednesday 10th July, 2-4pm (CEST). This will include an introduction to the special issue, brief presentations from several special issue contributors, followed by discussion about possibilities and next steps. You can register here.

Critical Technical Practice(s) in Digital Research – special issue in Convergence

A special issue on “Critical Technical Practice(s) in Digital Research” co-edited by Public Data Lab members Daniela van Geenen, Karin van Es and Jonathan W. Y. Gray has been published in Convergence: https://journals.sagepub.com/toc/cona/30/1.

The special issue explores the pluralisation of “critical technical practice”, starting from its early formulations by Philip Agre in the context of AI research and development to the many ways in which it has resonated and been taken up by different publications, projects, groups, and communities of practice, and what it has come to mean. This special issue serves as an invitation to reconsider what it means to use this notion drawing on a wider body of work, including beyond Agre.

A special issue introduction explores critical technical practices according to (1) Agre, (2) indexed research, and (3) contributors to the special issue, before concluding with questions and considerations for those interested in working with this notion.

The issue features contributions on machine learning, digital methods, art-based interventions, one-click network trouble, web page snapshotting, social media tool-making, sensory media, supercuts, climate futures and more. Contributors include Tatjana Seitz & Sam Hind; Michael Dieter; Jean-Marie John-Mathews, Robin De Mourat, Donato Ricci & Maxime Crépel; Anders Koed Madsen; Winnie Soon & Pablo Velasco; Mathieu Jacomy & Anders Munk; Jessica Ogden, Edward Summers & Shawn Walker; Urszula Pawlicka-Deger; Simon Hirsbrunner, Michael Tebbe & Claudia Müller-Birn; Bernhard Rieder, Erik Borra & Stijn Peeters; Carolin Gerlitz, Fernando van der Vlist & Jason Chao; Daniel Chavez Heras; and Sabine Niederer & Natalia Sanchez Querubin.

There will be a hybrid event to launch the special issue on 10 July, 2-4 pm CEST.

Links to the articles and our evolving library can be found here:
https://publicdatalab.org/projects/pluralising-critical-technical-practices/.

If you’re interested in critical technical practices and you’d like to follow work in this area, we’ve set up a new mailing list here for sharing projects, publications, events and other activities: https://jiscmail.ac.uk/CRITICAL-TECHNICAL-PRACTICES

Image credit: “All Gone Tarot Deck” co-created by Carlo De Gaetano, Natalia Sánchez Querubín, Sabine Niederer and the Visual Methodologies Collective from Climate futures: Machine learning from cli-fi, one of the special issue articles.

New article on cross-platform bot studies published in special issue about visual methods

An article on “Quali-quanti visual methods and political bots: A cross-platform study of pro- & anti- bolsobots” has just been published in the special issue “Methods in Visual Politics and Protest” of the Journal of Digital Social Research, co-authored by Public Data Lab associates Janna Joceli Omena, Thais Lobo, Giulia Tucci, Elias Bitencourt, Emillie de Keulenaar, Francisco W. Kerche, Jason Chao, Marius Liedtke, Mengying Li, Maria Luiza Paschoal, and Ilya Lavrov.

The article provides methodological contributions for interpreting bot-associated image collections and textual content across Instagram, TikTok and Twitter/X, building on a series of data sprints conducted as part of the Public Data Lab “Profiling Bolsobot Networks” project.

The full text is available open access here. Further details and links can be found at the project page. Below is the abstract:

Computational social science research on automated social media accounts, colloquially dubbed “bots”, has tended to rely on binary verification methods to detect bot operations on social media. Typically focused on textual data from Twitter (now rebranded as “X”), these methods are prone to finding false positives and failing to understand the subtler ways in which bots operate over time and in particular contexts. This research paper brings methodological contributions to such studies, focusing on what it calls “bolsobots” in Brazilian social media. Named after former Brazilian President Jair Bolsonaro, the bolsobots refer to the extensive and skilful usage of partial or fully automated accounts by marketing teams, hackers, activists or campaign supporters. These accounts leverage organic online political culture to sway public opinion for or against policies, opposition figures, or Bolsonaro himself. Drawing on empirical case studies, this paper implements quali-quanti visual methods to operationalise specific techniques for interpreting bot-associated image collections and textual content across Instagram, TikTok and Twitter/X. To unveil the modus operandi of bolsobots, we map the networks of users they follow (“following networks”), explore the visual-textual content they post, and observe the strategies they deploy to adapt to platform content moderation. Such analyses tackle methodological challenges inherent in bot studies by employing three key strategies: 1) designing context-sensitive queries and curating datasets with platforms’ interfaces and search engines to mitigate the limitations of bot scoring detectors, 2) engaging qualitatively with data visualisations to understand the vernaculars of bots, and 3) adopting a non-binary analysis framework that contextualises bots within their socio-technical environments. 
By acknowledging the intricate interplay between bots, user and platform cultures, this paper contributes to method innovation on bot studies and emerging quali-quanti visual methods literature.

zeehaven – a tiny tool to convert data for social media research

Zeeschuimer (“sea foamer”) is a web browser extension from the Digital Methods Initiative in Amsterdam that enables you to collect data while you are browsing social media sites for research and analysis.

It currently works for platforms such as TikTok, Instagram, Twitter and LinkedIn, and produces an ndjson file which can be imported into the open source 4CAT: Capture and Analysis Toolkit for analysis.

To make data gathered with Zeeschuimer more accessible for researchers, reporters, students, and others to work with, we’ve created zeehaven (“sea port”) – a tiny web-based tool to convert ndjson into csv format, which is easier to explore with spreadsheets as well as common data analysis and visualisation software.

Drag and drop an ndjson file into the “sea port” and the tool will prompt you to save a csv file. ✨📦✨
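For anyone curious about what the conversion involves under the hood, here is a minimal sketch of the general ndjson-to-csv idea in Python (this is an illustration of the technique, not zeehaven's actual code): each line of an ndjson file is a standalone JSON object, and since different records may carry different fields, the header has to be the union of all keys seen.

```python
import csv
import io
import json

def ndjson_to_csv(ndjson_text: str) -> str:
    """Convert newline-delimited JSON records to CSV text.

    Uses the union of top-level keys across all records as the header,
    so records with differing fields share one consistent set of columns.
    """
    records = [json.loads(line) for line in ndjson_text.splitlines() if line.strip()]

    # Collect the union of keys, preserving first-seen order for a stable header.
    fieldnames: list[str] = []
    for record in records:
        for key in record:
            if key not in fieldnames:
                fieldnames.append(key)

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(records)  # missing fields are filled with restval
    return out.getvalue()
```

A real converter would also need to flatten nested objects (platform ndjson records are often deeply nested), which this sketch leaves out; zeehaven itself runs entirely in the browser, so no data leaves your machine.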

zeehaven was created as a collaboration between the Centre for Interdisciplinary Methodologies, University of Warwick and Department of Digital Humanities, King’s College London – and grew out of a series of Public Data Lab workshops to exchange digital methods teaching resources earlier this year.

You can find the tool here and the code here. All data is converted locally.

New article on GitHub and the platformisation of software development

An article on “The platformisation of software development: Connective coding and platform vernaculars on GitHub” by Liliana Bounegru has just been published in Convergence: The International Journal of Research into New Media Technologies.

The article is accompanied by a set of free tools for researching GitHub, co-developed by Liliana with the Digital Methods Initiative – including tools to:

  • Extract the metadata of organizations on GitHub
  • Extract the metadata of GitHub repositories
  • Scrape GitHub for forks of projects
  • Scrape GitHub for user interactions and user-to-repository relations
  • Extract metadata about users on GitHub
  • Find out which users contributed source code to GitHub repositories

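The DMI tools linked above handle these extractions; as a rough illustration of what repository metadata extraction involves, here is a minimal sketch (not part of the DMI toolset, and with hypothetical function names) that queries GitHub's public REST API for a repository's metadata:

```python
import json
import urllib.request

API_ROOT = "https://api.github.com"

def repo_metadata_url(owner: str, repo: str) -> str:
    """Build the REST API endpoint for a repository's public metadata."""
    return f"{API_ROOT}/repos/{owner}/{repo}"

def fetch_repo_metadata(owner: str, repo: str) -> dict:
    """Fetch public metadata (description, fork count, stars, etc.) for a repository.

    Note: unauthenticated requests are rate-limited by GitHub; sustained
    use requires a token passed in an Authorization header.
    """
    request = urllib.request.Request(
        repo_metadata_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

For example, `fetch_repo_metadata("digitalmethodsinitiative", "4cat")` would return a dictionary of fields such as the repository's description and fork count, which is the kind of raw material the metadata-extraction tools above work with.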
The article is available open access here. The abstract is copied below.

This article contributes to recent scholarship on platform, software and media studies by critically engaging with the ‘social coding’ platform GitHub, one of the most prominent actors in the online proprietary and F/OSS (free and/or open-source software) code hosting space. It examines the platformisation of software and project development on GitHub by combining institutional and cultural analysis. The institutional analysis focuses on critically examining the platform from a material-economic perspective to understand how it configures contemporary software and project development work. It proposes the concept of ‘connective coding’ to characterise how software intermediaries such as GitHub configure, valorise and capitalise on public repositories, developer and organisation profiles. This institutional perspective is complemented by a case study analysing cultural practices mediated by the platform. The case study examines the platform vernaculars of news media and journalism initiatives highlighted by Source, a key publication in the newsroom software development space, and how GitHub modulates visibility in this space. It finds that the high-visibility platform vernacular of this news media and journalism space is dominated by a mix of established actors such as the New York Times, the Guardian and Bloomberg, as well as more recent actors and initiatives such as ProPublica and Document Cloud. This high-visibility news media and journalism platform vernacular is characterised by multiple F/OSS and F/OSS-inspired practices and styles. Finally, by contrast, low-visibility public repositories in this space may be seen as indicative of GitHub’s role in facilitating various kinds of ‘post-F/OSS’ software development cultures.