The following post is from Judit Varga, Postdoctoral Researcher on the ERC-funded project FluidKnowledge, based at the Centre for Science and Technology Studies, Leiden University.
We would like to invite the Public Data Lab and its network of researchers and research centres to join and contribute to our session on quali-quantitative, digital, and computational methods in Science and Technology Studies (STS) at the next EASST conference, 6-9 July 2022.
Fitting with the Public Data Lab’s activities, the session starts from the observation that engaging with digital and computational ways of knowing is crucial if STS and related disciplines are to study or intervene in them. The panel invites contributions that attempt, or reflect on, methodological experimentation and innovation in STS through combining STS concepts or qualitative, interpretative methods with digital, quantitative, and computational methods, such as quali-quantitative research.
Over the past decade, STS scholars have increasingly benefited from digital methods, drawing on new media studies and design disciplines, among others. In addition, scholars have recently also called for new dialogues between STS and quantitative science studies (QSS), which have increasingly grown apart since the 1980s. Although the delineation of STS methods from neighbouring fields may be arbitrary, delineation can help articulate methodological differences, which in turn can help innovate and experiment with STS methods at the borders with other disciplines.
We invite contributions that engage with the following questions. What do we learn if we try to develop digital and computational STS research methods by articulating and bridging disciplinary divisions? In what instances is it helpful to draw boundaries between STS and digital and computational methods, for whom, and why? Conversely, how can STS benefit from not drawing such boundaries? How can we innovate STS methods to help trace hybrid and diverse actors, relations, and practices using digital and computational methods? How can methodological innovation and experimentation with digital and computational methods help reach STS aspirations, and how might it hinder or alter them? What challenges do we face when we seek to innovate and experiment with digital and computational methods in STS? In what ways are such methodological reflection and innovation in STS relevant at a time of socio-ecological crises?
The deadline for abstract submissions has been extended from the 1st of February 2022 to the 7th of February 2022.
The next Digital Methods Initiative Winter School will take place on 10-14th January 2022 at the University of Amsterdam.
More details and registration links are available here and an excerpt on this year’s theme and the format is copied below.
The Digital Methods Initiative (DMI), Amsterdam, is holding its annual Winter School on ‘Social media data critique’. The format is that of a (social media and web) data sprint, with tutorials as well as hands-on work for telling stories with data. There is also a programme of keynote speakers. It is intended for advanced Master’s students, PhD candidates and motivated scholars who would like to work on (and complete) a digital methods project in an intensive workshop setting. For a preview of what the event is like, you can view short video clips from previous editions of the School.
Data critique and platform dependencies: How to study social media data?
Source criticism is the scholarly activity traditionally concerned with provenance and reliability. When considering the state of social media data provision, such criticism would be aimed at what platforms allow researchers to do (such as accessing an API) and not to do (such as scraping). It would also consider whether the data returned from querying is ‘good’, meaning complete or representative. How do social media platforms fare when considered against these principles? How to audit or otherwise scrutinise social media platforms’ data supply?
Recently Facebook has come under renewed criticism for its data supply through the publication of its ‘transparency’ report, Widely Viewed Content. It is a list of web URLs and Facebook posts that receive the greatest ‘reach’ on the platform when appearing on users’ News Feeds. Its publication comes on the heels of Facebook’s well-catalogued ‘fake news problem’, first reported in 2016, as well as a well-publicised Twitter feed that lists the most engaged-with posts on Facebook (using CrowdTangle data). In both instances those contributions, together with additional scholarly work, have shown that dubious information and extreme right-wing content are disproportionately interacted with. Facebook’s transparency report, which has been called ‘transparency theater’, suggests that this is not the case. How to check the data? For now, “all anybody has is the company’s word for it.”
For Facebook, as well as a variety of other platforms, there are no public archives. Facebook’s data sharing model is one of an industry-academic ‘partnership’. The Social Science One project, launched when Facebook ended access to its Pages API, offers big data — “57 million URLs, more than 1.7 trillion rows, and nearly 40 trillion cell values, describing URLs shared more than 100 times publicly on Facebook (between 1/1/2017 and 2/28/2021).” Obtaining the data (if one can handle it) requires writing a research proposal and, if accepted, compliance with Facebook’s ‘onboarding’, a non-negotiable research data agreement. Ultimately, the data is accessed (not downloaded) in a Facebook research environment, “the Facebook Open Research Tool (FORT) … behind a VPN that does not have access to the Internet”. There are also “regular meetings Facebook holds with researchers”. A data access ethnography project, not so unlike the one written about trying to work with Twitter’s archive at the Library of Congress, may be a worthwhile undertaking.
Other projects would evaluate ‘repurposing’ marketing data, as Robert Putnam’s ‘Bowling Alone’ project did and as is a more general digital methods approach. Comparing multiple marketing data outputs may be of interest, as may crossing those with CrowdTangle’s outputs. Facepager, one of the last pieces of software (after Netvizz and Netlytic) to still have access to Facebook’s Graph API, reports that “access permissions are under heavy reconstruction”. Its usage requires further scrutiny. There is also a difference between the user view and the developer view (and between ethnographic and computational approaches), which is also worth exploring. ‘Interface methods’ may be useful here. These and other considerations for developing social media data criticism are topics of interest for this year’s Winter School theme.
At the Winter School there are the usual social media tool tutorials (and the occasional tool requiem), but also continued attention to thinking through and proposing how to work with social media data. There are also empirical and conceptual projects that participants work on. Projects from past Summer and Winter Schools include: Detecting Conspiratorial Hermeneutics via Words & Images, Mapping the Dutchophone Fringe on Telegram, Greenwashing, in_authenticity & protest, Searching constructive/authentic posts in media comment sections: NU.nl/The Guardian, Mapping deepfakes with digital methods and visual analytics, “Go back to plebbit”: Mapping the platform antagonism between 4chan and Reddit, Profiling Bolsobots Networks, Infodemic everywhere, Post-Trump Information Ecology, Streams of Conspirational Folklore, and FilterTube: Investigating echo chambers, filter bubbles and polarization on YouTube.
Organisers: Lucia Bainotti, Richard Rogers and Guillen Torres, Media Studies, University of Amsterdam. Application information at https://www.digitalmethods.net.
- In order to enable more people to post more easily about various projects and activities, we’re now using WordPress as the backend for the site (along with static site templates and materials for use by different lab projects).
- We have added a people page so we can highlight a much wider group of people, groups and collaborators we work with at the Public Data Lab.
- We’ve added an updated projects page which includes more of what we’ve been up to than the previous site did, along with a little updating network diagram to show who has been working on what and the different clusters of our activities 🙂
- We’ll be using the blog to post short notes and updates on our various projects and activities across the Public Data Lab and its associated research centres, communities and institutions.
- We have lightly revised our mission statement to better reflect what we do (in light of activities over the past few years).
As always you can follow our activities on Twitter at @PublicDataLab and also get in touch if you’re interested in contributing to or collaborating with the lab.