With its colossal user base and vast catalog of content, YouTube is one of the world’s largest online media platforms, and roughly a quarter of Americans rely on it as a source of news.
A prevalent narrative in the media suggests that videos from highly partisan, conspiracy-driven YouTube channels are radicalizing young Americans, and that YouTube’s recommendation algorithm is exacerbating the problem.
Minimal influence
However, a recent study by the Computational Social Science Lab (CSSLab) at the University of Pennsylvania challenges this narrative. The study finds that users’ own political inclinations and preferences primarily dictate what they watch on the platform. If YouTube’s recommendation features influence users’ media consumption at all, that influence tends to be a moderating one.
“On average, relying exclusively on the recommender results in less partisan consumption,” the researchers explain.
To discern the genuine impact of YouTube’s recommendation algorithm on viewing habits, the researchers built bots programmed either to follow or to ignore the platform’s recommendations. They trained these bots on the YouTube watch histories of 87,988 real users, collected between October 2021 and December 2022.
The objective was to disentangle the intricate interplay between user preferences and the recommendation algorithm, a dynamic relationship shaped with each video watched.
Viewing history
These bots were equipped with individualized YouTube accounts to track their viewing histories, and the partisanship of the content they watched was estimated from its associated metadata.
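The article does not spell out how that estimation works; as a rough illustration, the sketch below assumes each video’s channel carries a precomputed partisanship score in [-1, 1] (the channel names and scores here are invented, not taken from the study) and averages those scores over a watch history.

```python
# Illustrative only: score a watch history's partisanship from video metadata,
# assuming each channel has a known partisanship score in [-1, 1]
# (-1 = far left, +1 = far right). Channel names and scores are made up.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Video:
    video_id: str
    channel_id: str  # the metadata field used to look up a partisanship estimate

# Hypothetical channel-level partisanship estimates.
CHANNEL_PARTISANSHIP = {
    "chan_left": -0.8,
    "chan_center": 0.05,
    "chan_right": 0.85,
}

def video_partisanship(video: Video) -> float:
    """Estimate a video's partisanship from its channel metadata."""
    return CHANNEL_PARTISANSHIP.get(video.channel_id, 0.0)

def history_partisanship(history: list[Video]) -> float:
    """Average partisanship of a bot's (or user's) watch history."""
    return mean(video_partisanship(v) for v in history) if history else 0.0

history = [Video("a", "chan_right"), Video("b", "chan_center"), Video("c", "chan_right")]
print(f"mean partisanship: {history_partisanship(history):+.2f}")
```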
In two experiments, the bots first underwent a “learning phase,” during which they all watched the same sequence of videos so that YouTube’s algorithm would infer the same preferences for each of them.
Subsequently, the bots were divided into groups. While some continued to mirror the real-life users’ viewing behaviors, others served as experimental “counterfactual bots” programmed to follow predetermined rules, thus isolating user behavior from algorithmic influence.
In the first experiment, after the learning phase, the control bot continued to follow the user’s actual viewing history, while the counterfactual bots diverged, selecting videos solely from the recommendation lists and disregarding user preferences.
Certain counterfactual bots consistently opted for the first “up next” video from sidebar recommendations, while others randomly chose from the top 30 or top 15 videos in sidebar or homepage recommendations, respectively.
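A minimal sketch of those selection rules, with the control bot included for comparison; the recommendation feeds are stubbed out as placeholder lists rather than fetched from YouTube, and the function and bot-type names are ours, not the study’s.

```python
# Illustrative decision rules for one simulated step; recommendation feeds are stubbed.
import random

def choose_next_video(bot_type, user_history, step, sidebar, homepage):
    """Pick the next video for one step of the simulation.

    bot_type:
      "control"         - replay the real user's next watch
      "first_up_next"   - always take the top "up next" sidebar recommendation
      "random_sidebar"  - uniform pick from the top 30 sidebar recommendations
      "random_homepage" - uniform pick from the top 15 homepage recommendations
    """
    if bot_type == "control":
        return user_history[step]
    if bot_type == "first_up_next":
        return sidebar[0]
    if bot_type == "random_sidebar":
        return random.choice(sidebar[:30])
    if bot_type == "random_homepage":
        return random.choice(homepage[:15])
    raise ValueError(f"unknown bot type: {bot_type}")

# Placeholder feeds and history for a quick demonstration.
sidebar = [f"sidebar_{i}" for i in range(40)]
homepage = [f"home_{i}" for i in range(20)]
user_history = ["user_vid_0", "user_vid_1", "user_vid_2"]
for kind in ("control", "first_up_next", "random_sidebar", "random_homepage"):
    print(kind, "->", choose_next_video(kind, user_history, 1, sidebar, homepage))
```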
Less partisan content
The researchers observed that, on average, the counterfactual bots consumed less partisan content than their real-life counterparts, a gap that was particularly pronounced for users who consumed heavily partisan content.
“This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends,” the researchers say. “The study exhibits similar moderating effects on bots consuming far-left content, or when bots are subscribed to channels on the extreme side of the political partisan spectrum.”
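In terms of the partisanship scores sketched earlier, that gap is simply the difference between the average partisanship of what a real user watched and what the corresponding recommendation-following bot watched; the numbers below are invented for illustration.

```python
# Illustrative computation of the user-vs-recommender partisanship gap; scores are made up.
from statistics import mean

def partisanship_gap(user_scores, bot_scores):
    """Positive gap: the user chose more partisan content than the recommender served."""
    return mean(user_scores) - mean(bot_scores)

user_watched = [0.9, 0.8, 0.85, 0.7]   # hypothetical heavy consumer of far-right content
bot_watched = [0.6, 0.4, 0.5, 0.45]    # hypothetical bot following recommendations only
print(f"gap: {partisanship_gap(user_watched, bot_watched):+.2f}")
```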
The researchers then looked at the so-called “forgetting time” of the YouTube algorithm, as it has attracted criticism in the past for continuing to recommend content long after it has ceased to be relevant.
In this experiment, the researchers measured the recommender’s forgetting time by simulating a user who, after watching 120 far-right videos, switches to moderate news for the next 60 videos.
While the control bots persisted in consuming far-right content throughout the experiment, counterfactual bots mimicked a user transitioning from one preference set (far-right videos) to another (moderate videos). As these counterfactual bots adjusted their media preferences, the researchers monitored the average partisanship of recommended videos appearing in the sidebar and homepage.
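The sketch below mimics the shape of that measurement with a toy recommender, an exponentially weighted average of past watches that stands in for YouTube’s actual system (which the study measured empirically): the simulated bot watches 120 far-right videos, switches to moderate ones, and the code counts how many moderate videos pass before the recommendations cross a moderate threshold.

```python
# Toy forgetting-time measurement: the "recommender" here is a made-up
# exponential-decay model, not YouTube's real algorithm.
FAR_RIGHT, MODERATE = 1.0, 0.0
watch_sequence = [FAR_RIGHT] * 120 + [MODERATE] * 60

def toy_recommender_score(history, memory=0.97):
    """Exponentially weighted mean of past watches, mimicking a recommender
    that slowly forgets older videos."""
    score, weight, w = 0.0, 0.0, 1.0
    for video_score in reversed(history):  # most recent video gets the largest weight
        score += w * video_score
        weight += w
        w *= memory
    return score / weight

def forgetting_time(sequence, switch_point=120, threshold=0.5):
    """Number of post-switch videos until recommended content turns moderate."""
    for step in range(switch_point, len(sequence)):
        if toy_recommender_score(sequence[: step + 1]) < threshold:
            return step - switch_point + 1
    return None  # never crossed the threshold within the experiment

print("forgetting time (toy model):", forgetting_time(watch_sequence), "videos")
```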
“On average, the recommended videos on the sidebar shifted toward moderate content after about 30 videos,” the authors conclude, “while homepage recommendations tended to adjust less rapidly, showing homepage recommendations cater more to one’s preferences and sidebar recommendations are more related to the nature of the video currently being watched.”