Online polarization

There is ongoing controversy about the extent of online polarization. This page lists papers that have tackled this question.

Lê's overall impression

After reading ~10 recent research papers on the topic, Lê's overall impression is that online polarization does seem significant, though its mechanisms are likely more subtle than they appear, and offline polarization also seems important and worrying. Online polarization seems to be very extreme for a minority of users, though it is harder to assess for the majority of them.

Annoyingly, much of the research on the topic seems much less compelling than its claimed results (this is further discussed below). In particular, some studies arguably fail to factor in cognitive biases, even though BABBC+18 suggest that these biases are critical to online polarization.

Polarization overall

PewResearch18 shows increasing polarization in the US on a wide variety of political concerns PewResearch19a PewResearch19b PewResearch19c.

PerceptionGap highlights exaggerated misconceptions each side holds about the preferences of the other side, which they call the perception gap. Interestingly, independent voters' estimates fall in-between a side's actual views and the other side's estimate of those views. It is noteworthy that news consumption and education do not seem to help Mendelberg02 Muller08 KPDS13.

Online radicalization

TGBVS+18 surveys research on offline and online disinformation and polarization. It notes that the research is hard to analyze because of terminology inconsistencies across papers. Also, much of it relies on self-reporting, which seems unreliable given cognitive biases, especially on political questions. Moreover, the online environment is changing at a rapid rate, which means that past studies may become obsolete. Finally, much of the research seems to come from the deliberative tradition of political theory, which they argue lacks quantitative methods.

This calls for caution when analyzing results of polarization research, which is why we dedicate a "less compelling findings" section below to findings that don't seem very reliable. In particular, the study of online polarization seems to need unconventional, clever methods to probe what is going on in cyberspace.

By studying YouTube comments, ROWAM20 highlighted a pattern of radicalization: alt-right commenters used to be "intellectual dark web" commenters, who themselves used to be alt-lite commenters. They also found that, starting from intellectual dark web videos, alt-lite videos are much easier to reach through recommendations than alt-right videos. This suggests that commenters moved from the intellectual dark web to the alt-right thanks to the YouTube recommender.
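As a rough illustration of how such commenter migration could be quantified, here is a minimal sketch assuming a hypothetical table of comments with author, year, and channel-category columns (the file name, column names, and category labels are illustrative assumptions, not ROWAM20's actual pipeline):

    # Sketch: estimate how many commenters "migrate" between community categories
    # over time, given a hypothetical comments table with columns
    # (author, year, category), where category is e.g. "alt-lite", "IDW" or "alt-right".
    import pandas as pd

    comments = pd.read_csv("comments.csv")  # hypothetical data file

    # Set of authors active in each (year, category) pair.
    active = comments.groupby(["year", "category"])["author"].apply(set).to_dict()

    def migration_rate(cat_from, year_from, cat_to, year_to):
        """Fraction of cat_from commenters in year_from who later comment on cat_to."""
        origin = active.get((year_from, cat_from), set())
        target = active.get((year_to, cat_to), set())
        if not origin:
            return float("nan")
        return len(origin & target) / len(origin)

    # Example: what fraction of 2016 "IDW" commenters also commented on
    # "alt-right" videos in 2018?
    print(migration_rate("IDW", 2016, "alt-right", 2018))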

SBD18 distinguish two user profiles: influencers (the top 1%) and typical users. Influencers tend to write more polarized content than they receive, while typical users tend to write less polarized content than they receive. (Note though that SBD18b arguably oversells the results with a misleading title.)
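One way to read this distinction is to compare, for each user, the average polarity of what they write with the average polarity of what they receive. Below is a toy sketch with hypothetical polarity scores in [-1, 1] (the data and the absolute-polarity measure are illustrative assumptions, not SBD18's actual methodology):

    # Toy sketch: compare the polarity of what users write vs. what they receive.
    # posts[u] and feed[u] hold hypothetical polarity scores in [-1, 1].
    from statistics import mean

    def polarization_gap(posts, feed):
        """Mean |polarity| written minus mean |polarity| received, per user."""
        return {
            u: mean(abs(p) for p in posts[u]) - mean(abs(p) for p in feed[u])
            for u in posts
        }

    # Hypothetical data: an "influencer" writing more extreme content than they
    # receive, and a "typical" user doing the opposite.
    posts = {"influencer": [0.9, 0.8, 0.95], "typical": [0.1, 0.2]}
    feed = {"influencer": [0.3, 0.4, 0.2], "typical": [0.7, 0.6]}
    print(polarization_gap(posts, feed))  # positive gap: writes more polarized content than received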

KRNMKO18 showed that online hate is produced by rare but highly influential users. They also showed an association between low offline social capital and hate production (which is consistent with CacioppoCacioppo14 VGP17 Kurzgesagt19), but also an association between high online social capital and hate production. The causal relationship in this latter case seems hard to diagnose.

MKNS17 suggests the effectiveness of mass targeted persuasion through social media (more to come).

BABBC+18 studied a 1-week treatment in which Democrat and Republican Twitter users were paid to follow a bot retweeting tweets from leading figures of the opposing side. They found increased polarization, mostly among Republicans. While the paper (brilliantly) points out limits to the generalizability of its findings, this line of study is clearly critical for the design of robustly beneficial algorithms.

BGS17 showed a greater increase in polarization among older surveyed subjects. They also propose a model suggesting that the increase in polarization is greater for non-Internet users. Nevertheless, they observe increased polarization for all surveyed subjects. It remains to be explained why polarization increased so much over the last decade.

Less compelling findings

There are a large number of studies with conflicting conclusions. Unfortunately, as the conflicts suggest, many of these studies are not very compelling, as there is often a gap between the empirical data and the claimed findings.

Typically, "echo chamber" is sometimes measured by "automated estimates of sharing of contents we agree with" or "self-reported exposure to contents we disagree with". These two measures are very distinct. In fact, in the latter case, polarization may increase with decreased echo chamber, for instance if the contents we are exposed to are caricatures of the opponents' ideas. Unfortunately, paper's titles and abstracts are often vague about their actual findings, which adds to confusion. Annoyingly, they sometimes claim definite conclusions despite the weaknesses of their analysis. Publication bias should make us even more careful.

The core problem is that studying online polarization is just very, very, very hard. The internet has arguably become as complex as the human brain (if not more!), with thousands of billions of human and algorithmic nodes, and millions of billions of evolving interactions between these nodes via complex (private) messaging that often involves multimedia content. Such systems are not easily interpretable. In particular, online information is highly personalized, especially on social media or on YouTube. Today's greatest talents are needed to construct the appropriate tools to probe cyberspace.

Nevertheless, it seems worth listing a few findings, even if they are disputable, as long as their limits are highlighted, especially given that they have nevertheless gained attention.

LedwichZaitsev19 downplayed the radicalization impact of YouTube. But this work has been harshly criticized by Arvind Narayanan and RZCSB19.

DuboisBlank18 argue against echo chambers. But they rely on self-reported claims of being in echo chambers, which seems very unreliable given our cognitive biases on such matters. Their sample size also seems small (~350, which is then further controlled by 6 variables). The paper does make an interesting point, though, about the flaws of studies focused on a single medium.

HGB17 and NechushtaiLewis19 showed evidence of little personalization for identical queries on Google Search and Google News. However, Allgaier19 showed that YouTube Search gave very different answers to slightly different queries: "climate change" yielded videos in line with the scientific consensus, while "climate manipulation" returned climate change denialist videos.
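For reference, this kind of query sensitivity can in principle be probed with the public YouTube Data API v3. The sketch below compares the top results for two nearby queries (the API key placeholder and the Jaccard-overlap metric are illustrative choices, not Allgaier19's methodology):

    # Sketch: compare the result sets YouTube Search returns for two nearby queries,
    # via the YouTube Data API v3 (requires an API key).
    from googleapiclient.discovery import build

    API_KEY = "YOUR_API_KEY"  # placeholder
    youtube = build("youtube", "v3", developerKey=API_KEY)

    def top_video_ids(query, n=25):
        """Return the ids of the top-n video results for a search query."""
        response = youtube.search().list(
            q=query, part="id", type="video", maxResults=n
        ).execute()
        return {item["id"]["videoId"] for item in response["items"]}

    a = top_video_ids("climate change")
    b = top_video_ids("climate manipulation")
    overlap = len(a & b) / len(a | b)  # Jaccard similarity of the two result sets
    print(f"Result overlap: {overlap:.0%}")

Note that such a probe only sees non-personalized results, which is precisely the setting where HGB17 and NechushtaiLewis19 found little variation; personalized recommendations remain much harder to audit.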

MTHE18 argue that algorithms should not be blamed for the lack of diversity in recommendations. The study compares journalists' recommendations to classical recommendation algorithms (like matrix factorization) on a set of 1000 articles. But the study fails to factor in the impact of recommendation algorithms over time, which has been argued to be critical, as repeated exposure seems to take weeks to have a strong impact HOT15. Also, the algorithms they studied seem very different from YouTube's algorithms, and their pool of content to recommend is hugely smaller than the pool of YouTube videos. These major limitations strongly undermine MTHE18's radical claim.
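For concreteness, here is a minimal sketch of the kind of classical matrix-factorization recommender that MTHE18 compare against, fit by gradient descent on a toy user-article interaction matrix (the data, hyperparameters, and squared-error objective are illustrative assumptions, not MTHE18's exact setup):

    # Minimal matrix-factorization recommender: approximate a user-article
    # interaction matrix R by U @ V.T, then recommend unseen articles with the
    # highest predicted scores. Toy data and hyperparameters, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.integers(0, 2, size=(50, 200)).astype(float)  # 50 users x 200 articles (0/1 clicks)

    k, lr, reg, epochs = 10, 0.01, 0.1, 200
    U = 0.1 * rng.standard_normal((R.shape[0], k))  # user factors
    V = 0.1 * rng.standard_normal((R.shape[1], k))  # article factors

    for _ in range(epochs):
        E = R - U @ V.T                 # reconstruction error
        U += lr * (E @ V - reg * U)     # gradient step on user factors
        V += lr * (E.T @ U - reg * V)   # gradient step on article factors

    def recommend(user, n=5):
        """Top-n articles the user has not interacted with, by predicted score."""
        scores = U[user] @ V.T
        scores[R[user] > 0] = -np.inf   # exclude already-seen articles
        return np.argsort(scores)[::-1][:n]

    print(recommend(0))

Note that such a one-shot recommender has no notion of repeated exposure over time, which is exactly the dynamic HOT15 argue is critical.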

It is unfortunate that such limited evidence against filter bubbles has led to radical conclusions, e.g. Bruns19 calling "echo chambers" and "filter bubbles" the "dumbest metaphor on the Internet". While these terms are indeed imperfect, such criticisms seem to miss out on the quality of exposure to opposing views. It seems quite likely that social media highlight the most extreme opposing views, which would in fact increase polarization. In this sense, the "bubble" would not only correspond to individuals being exposed only to their own views; it would also correspond to individuals being exposed mostly to the most caricatural and easily criticizable views of the opposite side.

One challenge in designing robustly beneficial recommendation algorithms is that humans suffer from numerous cognitive biases, like the ingroup bias. LRL79 showed that subjects with strong views on capital punishment increase their confidence in their views when exposed to information on capital punishment, no matter what these views are. BABBC+18 seem to confirm this, as they even suggest that exposure to opposing views on social media can increase individuals' confidence in their own views (caveats apply!).

Moreover, YouTube seems quite different from other offline, online, and social media, as its recommender algorithm can pull any video from a gigantic reservoir of highly biased content. The algorithm, which is responsible for 70% of views, seems to play a much more critical role than on other platforms cnet18.