Conference report: #ICA23
Institutionalized deletion on Twitter, YouTube recommendations, children on the internet, Wikidata labor, the death of the blogosphere, and other takeaways from the International Communication Association's annual convention in Toronto.
Two weeks ago, I joined a few thousand other academics and professionals in Toronto for the 73rd annual International Communication Association (ICA) conference. In the next blog post, I’ll share the paper I was there to present, but here I’d like to offer some highlights from the sessions I attended. At these large conferences, I like to take advantage of the diversity of ideas and presenters by going to a wide range of sessions and taking notes. It might not be the ideal way to immerse oneself in a particular field, but I tend to find inspiration and connections in a wide range of places.
This is a very limited selection of talks from a conference that spanned four full days (not even counting the pre- and post-conference events), but these are some I found particularly interesting. Fair warning: these are also just my personal takeaways rather than an attempt to properly summarize the authors’ main points. In some cases, I frankly wound up focusing on a side point and didn’t take note of key findings. 🙂
The Pros and Cons of an Age-Blind Internet: The Challenge from a Child Rights Perspective (Sonia Livingstone)
Sonia Livingstone gave the annual Steve Jones Internet Lecture on the first day of the conference. She started with the famous New Yorker cartoon, “On the internet, nobody knows you’re a dog.” The cartoon was widely seen as characterizing something positive about the internet at the time, but since then its connotations have shifted somewhat. In the context of this talk: “On the internet, nobody knows you’re a child.” Later she pulled the cartoon up again, paired with another comic panel depicting an NSA-like surveillance operation discussing silly personal information about the dog, learned from its online presence. Under surveillance capitalism, we’ve gone from a situation in which children are invisible to one in which they are hypervisible.
Livingstone adopted a child rights framework, beginning with the Universal Declaration of Human Rights – kids have all the rights adults have plus several others, including the right to be treated according to their evolving capacity. She went on to the Convention on the Rights of the Child, noting open questions about how to apply it all to the digital world.
Interesting bits and personal takeaways:
- One of the most striking things for me, as someone who isn’t so familiar with the literature on children and the internet, is how many gaps there are in research. According to Livingstone, there still are no good numbers about how many children have access to the internet – we know how many households are online and can combine it with how many have children, but that involves a lot of assumptions.
- As someone with a great interest in media literacy, I appreciated the shout-out to teachers, acknowledging that we are broadly worried about how kids use the internet while consistently failing to give educators enough training and resources.
- I was particularly interested in one of the documents she shared: Child Rights by Design – guidance for how designers, developers, entrepreneurs, etc. can incorporate consideration of child rights into their work.
- One of the reasons I was interested to attend this talk is because of an observation several of us made while undertaking a random YouTube video coding project recently: there are an awful lot of young kids in YouTube videos, including a lot of clear Terms of Use violations that YouTube simply hasn’t caught. This talk made me want to dig into that a bit more.
Go Down the Rabbit Hole or Into the Mainstream? Examining YouTube’s Recommendation System in the Context of COVID-19 Vaccines (Yee Man Margaret Ng; Katherine Hoffmann Pham; Miguel Luengo-Oroz)
If you start with a YouTube video from an authoritative source on COVID-19 vaccines, to what extent does YouTube’s recommendation system move you towards vaccine misinformation? More specifically, if someone tries to find vaccine misinformation among recommendations, how available is it and how many recommendation “hops” does it take to find an example? It turns out that for a real person (on a computer with a browser history, etc.), it’s hard to find an anti-vaccination video in the first five hops. Without a user history, it’s still hard to find, but most of the videos become less relevant.
Interesting bits and personal takeaways:
- This was a talk I suspected would be helpful to some of the work we’re doing at the Initiative for Digital Public Infrastructure to better understand YouTube’s recommendation algorithms.
- One of the more useful parts here was in the experiment design, setting up a variety of contexts through which to check recommendations. For example, a real human watching videos in a browser with a YouTube watch history and YouTube cookies will receive different recommendations than someone who just opened a new Chrome Incognito window, and both of those will be different from the results you get when querying the YouTube API for “related” videos. I hadn’t quite considered that “related” videos would have such a low rate of unique videos among recommendations – it makes me think that in the absence of other indications to go by, if you’re watching a video from a channel with other videos, it’s likely YouTube will just show you more from the same entity.
- It seems like YouTube may move people towards more entertaining rather than educational content, and towards popular content (something we’re working on demonstrating with our YouTube research, too).
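For a concrete point of reference, here is a minimal sketch of how one might query the YouTube Data API v3 for “related” videos – the API-based condition described above. This assumes the `search.list` endpoint’s `relatedToVideoId` parameter (which I believe is the feature slated for removal); the video ID and API key below are placeholders.

```python
# Sketch: fetching "related" video recommendations from the YouTube Data API v3.
# This is the context with no user history or cookies, unlike a logged-in browser.
import json
import urllib.parse
import urllib.request

API_URL = "https://www.googleapis.com/youtube/v3/search"

def related_videos_request(video_id: str, api_key: str, max_results: int = 10) -> str:
    """Build the search.list request URL for videos related to video_id."""
    params = {
        "part": "snippet",
        "type": "video",              # relatedToVideoId requires type=video
        "relatedToVideoId": video_id,
        "maxResults": max_results,
        "key": api_key,
    }
    return API_URL + "?" + urllib.parse.urlencode(params)

def fetch_related(video_id: str, api_key: str) -> list[str]:
    """Return the IDs of videos the API lists as related (one recommendation 'hop')."""
    with urllib.request.urlopen(related_videos_request(video_id, api_key)) as resp:
        items = json.load(resp).get("items", [])
    return [item["id"]["videoId"] for item in items]
```

Repeatedly calling `fetch_related` on each returned ID would walk the recommendation graph hop by hop, which is roughly how one could operationalize “how many hops to reach misinformation.”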
Excavating the State’s Memory Hole: What Watchdogs (Don’t) Want to Remember (Muira N. McCammon)
McCammon is working with a fascinating archive: FOIA-requested correspondence from journalists who were inquiring about deleted tweets by official United States government Twitter accounts. It started with the joint task force at Guantanamo Bay, when she noticed it was deleting tweets, and she began to wonder about “institutionalized deletion.”
Interesting bits and personal takeaways:
- I hadn’t heard of Politwoops before – it’s a project now hosted by ProPublica that tracks deleted tweets by public officials.
- I was surprised by the idea that not all official accounts have archives that are readily available. It feels like almost nothing is truly ephemeral anymore, but I suppose any research on link rot would disabuse me of that notion. It’s a reminder of how important large, searchable archives are (not unlike Media Cloud!).
Reclaiming the Commons: Wikidata’s Labor Alienation, Data Ethics, and Oracular Machines (Zachary J. McDowell; Matthew Vetter)
What Happened to the Political Blogosphere? (David Karpf)
Joining these two talks because I find myself thinking about them together.
As a long-time Wikipedia contributor/researcher, I’m always excited for a talk that presents a fresh take on the inner workings of Wikimedia communities. The idea of McDowell and Vetter’s paper is that the CC0 license that Wikidata (the linked data sister project of Wikipedia) uses, which differs from the Attribution-ShareAlike license Wikipedia uses, risks alienating the volunteer labor that makes the project possible. My initial reaction was skepticism about the prospect of changing the license, for two reasons: first, Wikidata – the project, and perhaps the community – sees maximizing usage and usefulness as one of its chief priorities. Like Wikipedia, it knows that this enriches for-profit companies, but accepts it anyway because that’s the only way to ensure the data can serve all of the other uses, too. Second, Wikidata incorporates lots of other CC0 and public domain databases, which may make changing licenses legally problematic. I’ll come back to this below.
Karpf has been tracking the blogosphere for many years and observed that it is effectively dead. Blogs are still around, but the blogosphere – the community of writers who link to each other’s work – is not. That is partly because the ad-based business model became impractical for all but the most successful bloggers, and partly because legacy media simply hired those most successful bloggers. Karpf traced interweaving institutional, economic, and technological trends, but the takeaway was clear: hey, social science researchers, do a better job of following the money.
After Karpf’s talk, I found myself thinking more and more about the implications of Wikidata’s license. The history of the internet and perhaps the history of technology is at least in part a story about communities creating and experimenting, observers making straight-line predictions about changing the world and techno-utopias, and companies or governments moving into the same space to appropriate, package, scale, and eventually swallow them up. One of the very few examples of “something people use the internet for” which hasn’t been completely incorporated into the giant tech platforms is Wikipedia. Maybe it’s worth taking a moment to consider the unknown risks of passively accepting a giant industry extracting value from volunteer labor. I’m looking forward to reading both of these papers.
More Is Not Always Better? The Curvilinear Relationship Between Cross-Cutting Exposure and Affective Polarization: The Moderating Role of Strength of Political Ideology (Han Lin; Xuejin Jiang; Janggeun Lee; Yi Wang; Yonghwan Kim)
Depolarization in the “Filter Bubble”: Attitudinal Impact of News Recommender Systems Based on Users’ Political Preferences (Katharina Ludwig; Nevena Nikolajevic; Philipp Müller)
These two papers were part of the same session, and I was struck by their very compatible findings. Both papers address political polarization and between them touch on the political climate in three countries (part of why I like ICA so much). Both start with existing research that shows that exposure to other views lessens negative feelings towards the other side.
Lin, et al. use a survey of South Korean adults to show that the relationship between cross-cutting exposure and affective polarization is curvilinear, highlighting a “sweet spot” for exposure to minimize feelings of disgust/distrust for the other side, while too much exposure can backfire.
Ludwig, et al. focus on news recommender systems. They tested showing people ideologically congruent and incongruent news content and compared them to a random mix. The subject here was German articles about migration. Ideologically congruent content was more likely to heighten polarization vs. random, and ideologically incongruent content was more likely to lessen polarization vs. random for politically moderate individuals.
Quick takes
- Myrthe Reuver, Allesandra Polimeno, and Ana I. Lopes presented a paper which uses a method I’d like to learn more about: argument mining / computational argumentation, using machine learning to distinguish between people who support and oppose something.
- I was inspired by the operationalization of something we often take for granted in a talk by Aspen K. Omapang, Breanna E. Green, Chao Yu, Roxana M. Muenster, and Drew Margolin – when conversations don’t begin in the world of politics, what happens when they become political? Unsurprisingly, the likelihood of toxic language increases, but I find it interesting to be able to pinpoint (and detect!) moments when a community is at risk of dealing with harmful content.
- Ricardo R. Ferreira presented a paper about the agency of Brazilian journalists in the country’s democratic backsliding. He pointed to many studies that place blame on the press, but they assume that journalists are just following orders, when in fact there are a range of motivations and kinds of actions they took. Maybe it’s tacky to say it called to mind a television show (an American show, no less), but I couldn’t help but think about season five of The Wire, following the journalists as they balance various demands with personal ambition and ethics.
- Anders Olof Larsson’s paper on the use of Twitter by political parties in Scandinavia showed some interesting trends. His longitudinal study found, among other things, that there’s a whole lot more retweeting than there used to be, perhaps marking a different style of engagement from Twitter’s early days (less conversing, more retweeting for followers).
- Annelien Van Remoortere, Susan Vermeer, and Sanne Kruikemeier reached out to Dutch politicians to see how often they responded and to whom. The researchers created multiple personas and reached out to politicians of different parties to see if some groups were more likely to respond, and if some citizen demographics were more likely to receive a response than others. They found there was no significant difference between political affiliations, and the results for someone with a Moroccan name were not significantly different from those for someone with a common Dutch name. Local politicians were much less likely to respond overall than national politicians, perhaps due to a smaller operating budget that doesn’t allow for a dedicated communications team.
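The argument-mining idea in the Reuver, Polimeno, and Lopes paper above can be sketched, in its simplest supervised form, as a stance classifier. This toy example is entirely my own illustration, not their method: the sentences and labels are invented, and a real study would use an annotated corpus and a far more capable model.

```python
# Toy sketch of stance classification (one flavor of argument mining):
# a bag-of-words model trained to separate supporting from opposing statements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples labeled by stance toward some proposal.
train_texts = [
    "This policy will help families and should be passed",
    "I fully support the proposal, it is long overdue",
    "Great idea, we need this change now",
    "This proposal is harmful and must be rejected",
    "I oppose the plan, it will hurt workers",
    "Terrible idea, nobody should vote for this",
]
train_labels = ["support", "support", "support", "oppose", "oppose", "oppose"]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["I support this change"])[0])
```

With only six training sentences the prediction is obviously fragile; the point is just the shape of the task – text in, stance label out.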
A comment on imperiled APIs
I heard about a whole lot of papers this year that study Twitter, Reddit, and YouTube – vital work for understanding harmful speech, political polarization, health communication, interpersonal communication, human relationships, and all manner of other topics. Both Twitter’s and Reddit’s APIs are currently in jeopardy, with the companies seeking exorbitant, unrealistic fees and shutting down existing research projects like Pushshift (which I heard mentioned by name a half dozen times at ICA). While YouTube’s API is not significantly changing at the moment, the feature used by Ng, et al. above is going away in August. I can’t help but leave wondering what media will be used for next year’s papers, and, in an age when so much of what we do online is concentrated on a small number of increasingly difficult-to-study platforms, what crucial developments will remain inscrutable. I’ll end with a link: the Coalition for Independent Technology Research.