Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025

The annual Digital Methods Winter School in Amsterdam will take place on 6-10th January 2025 with the theme “Chatbots and LLMs for Internet Research?”. The deadline for applications is 9 December 2024. You can read more on this page (an excerpt from which is copied below).

Chatbots and LLMs for Internet Research? Digital Methods Winter School and Data Sprint 2025
https://wiki.digitalmethods.net/Dmi/WinterSchool2025

The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on ‘Chatbots for Internet Research?’. The format is that of a (social media and web) data sprint, with tutorials as well as hands-on work for telling stories with data. There is also a programme of keynote speakers. It is intended for advanced Master’s students, PhD candidates and motivated scholars who would like to work on (and complete) a digital methods project in an intensive workshop setting. For a preview of what the event is like, you can view short video clips from previous editions of the School.

Chatbots and LLMs for Internet Research? Towards a Reflexive Approach

Positions are now increasingly staked out in the debate concerning the application of chatbots and LLMs to social and cultural research. On the one hand there is the question of ‘automating’ methods and shifting some additional part of the epistemological burden to machines. On the other there is the rejoinder that chatbots may well be adequate research buddies, assisting with (among other things) burdensome and repetitive tasks such as coding and annotating data sets. They seem to be continually improving, or at least growing in size and apparent promise. Researcher experiences are now widely reported: chatbots have outperformed human coders, ‘understanding’ rather nuanced stance-taking language and labeling it correctly more often than average coders. But other work has found that LLM labeling also has a tendency to be bland, given how filters and safety guardrails (particularly in US-based chatbots) tend to depoliticise or otherwise soften their responses. As researcher experience with LLMs becomes more widely reported, user guides and best practices are emerging to make LLM findings more robust: models should be carefully chosen, personas should be well developed, prompting should be conversational, and so forth. LLM critique is also developing apace, with (comparative) audits interrogating underlying discrimination and bias that are only papered over by filters. At this year’s Digital Methods Winter School we will explore these research practices with chatbots and LLMs for internet research, with an emphasis on bringing them together: how to deploy and critique chatbots and LLMs at the same time, in a form of reflexive usage?

There are rolling admissions and applications are now being accepted. To apply please send a letter of motivation, your CV, a headshot photo and a 100-word bio to winterschool [at] digitalmethods.net. Notifications of acceptance are sent within two weeks of application. The final deadline for applications is 9 December 2024. The full programme and schedule of the Winter School will be available by 19 December 2024.

The @digitalmethods.net Winter School in Amsterdam will take place on 6-10th January 2025 with the theme “Chatbots and LLMs for Internet Research?”. Apply by 9th December. 📝✨ publicdatalab.org/2024/11/29/d… #digitalmethods


— Public Data Lab (@publicdatalab.bsky.social) 5 December 2024 at 10:56

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

To explore this we’re launching a call for screenshots as part of a research collaboration co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

We’d be grateful for your help in sharing this call.

Further details can be found here and copied below.

[- – – – – – – ✄ – – – snip – – – – – – – – – -]

troubling AI: a call for screenshots 📸

How can screenshots trouble our understanding of AI?

This “call for screenshots” invites you to explore this question by sharing a screenshot that you have created, or that someone has shared with you, of an interaction with AI that you find troubling, together with a short statement on your interpretation of the image and the circumstances in which you got it.

The screenshot is perhaps one of today’s most familiar and accessible modes of data capture. With regard to AI, screenshots can capture moments when situational, temporary and emergent aspects of interactions are foregrounded over behavioural patterning. They also have a ‘social life’: we share them with each other with various social and political intentions and commitments.

Screenshots have accordingly become a prominent method for documenting and sharing AI’s injustices and other AI troubles – from researchers studying racist search results to customers capturing swearing chatbots, from artists exploring algorithmic culture to social media users publicising bias.

With this call, we are aiming to build a collective picture of AI’s weirdness, strangeness and uncanniness, and how screenshotting can open up possibilities for collectivising troubles and concerns about AI.

This call invites screenshots of interactions with AI inspired by these examples and inquiries, accompanied by a few words about what you find troubling about those interactions. You are invited to interpret this call in your own way: we want to know what you perceive to be a ‘troubling’ screenshot and why.

Please send us mobile phone screenshots, laptop or desktop screen captures, or other forms of grabbing content from a screen, including videos or other types of screen recordings, through the form below (which can also be found here) by 10th December 2024 (the deadline has been extended from 15th November 2024).

Your images will be featured in an online publication and workshop (with your permission and appropriate credit), co-organised by the Digital Futures Institute’s Centre for Digital Culture and Centre for Attention Studies at King’s College London, the médialab at Sciences Po, Paris and the Public Data Lab.

Joanna Zylinska
Tommy Shaffer Shane
Axel Meunier
Jonathan W. Y. Gray

Call for papers: “Generative Methods: AI as collaborator and companion in the social sciences and humanities”, Copenhagen, 6-8th December 2023

A call for papers and experiments is open for Generative Methods: AI as collaborator and companion in the social sciences and humanities, taking place in Copenhagen on 6-8th December 2023. Further details can be found here and are copied below.


“What actually happened? The use and misuse of Open Source Intelligence (OSINT)”, Digital Methods Winter School and Data Sprint 2023

Applications are now open for the Digital Methods Winter School and Data Sprint 2023 which is on the theme of “What actually happened? The use and misuse of Open Source Intelligence (OSINT)”.

This will take place on 9-13th January 2023 at the University of Amsterdam. Applications are accepted until 1st December 2022.

More details and registration links are available here and an excerpt on this year’s theme and the format is copied below:

The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on the ‘Use and Misuse of Open Source Intelligence (OSINT)’. The format is that of a (social media and web) data sprint, with tutorials as well as hands-on work for telling stories with data. There is also a programme of keynote speakers. It is intended for advanced Master’s students, PhD candidates and motivated scholars who would like to work on (and complete) a digital methods project in an intensive workshop setting. For a preview of what the event is like, you can view short video clips from previous editions of the School.


EASST session on ‘quali-quantitative’ methods in STS

The following post is from Judit Varga, Postdoctoral Researcher on the ERC-funded project FluidKnowledge, based at the Centre for Science and Technology Studies, Leiden University.

We would like to invite the Public Data Lab and its network of researchers and research centres to join and contribute to our session on quali-quantitative, digital, and computational methods in Science and Technology Studies (STS) at the next EASST conference, 6-9 July 2022.

Fitting with the Public Data Lab network’s activities, the session starts from the observation that engaging with digital and computational ways of knowing is crucial if STS and related disciplines are to study or intervene in them. The panel invites contributions that attempt or reflect on methodological experimentation and innovation in STS by combining STS concepts or qualitative, interpretative methods with digital, quantitative and computational methods, such as quali-quantitative research.

Over the past decade, STS scholars have increasingly benefited from digital methods, drawing on new media studies and design disciplines, among others. In addition, scholars have also recently called for new dialogues between STS and quantitative science studies (QSS), which have increasingly grown apart since the 1980s. Although the delineation of STS methods from neighbouring fields may be arbitrary, such delineation can help articulate methodological differences, which in turn can help innovate and experiment with STS methods at the borders with other disciplines.

We invite contributions that engage with the following questions. What do we learn if we try to develop digital and computational STS research methods by articulating and bridging disciplinary divisions? In what instances is it helpful to draw boundaries between STS and digital and computational methods, for whom and why? Conversely, how can STS benefit from not drawing such boundaries? How can we innovate STS methods to help trace hybrid and diverse actors, relations, and practices, using digital and computational methods? How can methodological innovation and experimentation with digital and computational methods help reach STS aspirations, or how might it hinder or alter them? What challenges do we face when we seek to innovate and experiment with digital and computational methods in STS? In what ways are such methodological reflection and innovation in STS relevant at a time of socio-ecological crises?

The current deadline for abstract submissions is the 7th of February 2022 (extended from the 1st of February 2022).