Digital democracy — Do social media steer opinion formation?


Are social media bad for opinion formation?

With the spread of social media in recent years, there has been growing concern about their effects on opinion formation. Key suspects include fake news, radical content, and algorithms that mostly show users what they are already thinking. Accordingly, researchers have conducted numerous studies investigating the effects of social media on democracy and its citizens.
This compact overview focuses on opinion formation at the level of individual users. Opinion formation is the fundamental basis of democracy, as a person’s opinion on political, social, or economic issues directly feeds into their voting behavior. The results of elections and referendums in turn determine which people or parties will make decisions during the next legislative period, decisions that are generally binding. If social media affect this process, it is important to know how: whether users, for example, no longer encounter differing opinions or become ever more radical in their own worldviews.
What people see in their everyday social media use therefore has potentially far-reaching consequences. This summary of current research offers an overview of the most relevant aspects of this topic: why social media need to select content and opinions in the first place and how this selection works; what consequences this selection has on opinion formation among users; and what options for action exist for users, platform providers, and policymakers. Concrete fears that abound in public debate, for instance about fake news, radicalization, or filter bubbles, are addressed in short profiles that can be read independently.
Profile: Digital divide

Do social media deepen societal rifts?

Profile: Echo chambers

Do social media only show users what they are already thinking?

Profile: Filter bubbles

Do social media users only see “more of the same”?

Why social media (have to) select opinions

What content social media display to users is the result of several levels of selection that are necessary for platforms to work and that serve different purposes for them:
Social media make it very easy for users to create their own content or distribute content from third parties. Platforms themselves, on the other hand, only create small amounts of the available content. This means that the amount of content flowing through a platform at any given time is practically unlimited and needs to be made accessible in one way or another, for example, through clustering, filtering, or sorting.
Selecting content created by third parties for their users, so-called curation, is a central function of social media. Faced with large numbers of users, huge amounts of constantly posted content, the diverse international backgrounds of users, and the different national legislations involved, this selection is a complex task with potentially far-reaching consequences.
How this can result in conflicts of interest can be illustrated by the following two examples:

Facebook’s newsfeed algorithms: Blowing the whistle on their effects

In the fall of 2021, the former Facebook employee Frances Haugen leaked internal company documents to the Wall Street Journal and testified in a US Senate hearing on Facebook’s business practices and goals. According to her statements and to the documents, the News Feed algorithms had been optimized to elicit as much “user engagement” as possible, meaning interactions with posts through liking, sharing, or commenting. This favors posts that trigger strong emotions among users, including, for example, fake news stories and conspiracy theories designed to inflame the public and incite anger. The leaked material and Frances Haugen’s testimony imply that the company had been aware of the potential negative effects of this type of content for years but chose to prioritize other goals in the continuous reworking of its algorithms.
Profile: Fake news

What role do social media play in the spread of false information?

The YouTube debate: Radicalization through video recommendations?

In several widely read articles in the New York Times, YouTube was criticized for allegedly radicalizing users via increasingly extreme video recommendations. YouTube’s then chief product officer and current CEO, Neal Mohan, said in an interview that it would not make sense to assume that YouTube purposefully radicalizes users through video recommendations in order to keep them on the platform as long as possible. He pointed out that important advertisers would not want their ads shown alongside radical content and that usage time alone would not benefit YouTube financially.
Both examples show that platform providers pursue several goals at the same time, and that it is difficult to satisfy them all equally through the highly complex algorithms of social media.
Profile: Rabbit holes

Are users becoming more radical through YouTube’s recommendations?

How social media curate

Compared to the curation of content at newspapers, radio, or television stations, social media differ in two major respects: First, they know in much deeper detail who their users are and what they select. Second, their identity is typically not centered on journalistic ideals of societally relevant reporting, background information, or editorial comment on current events. Quite to the contrary, Facebook has in fact stated for years that it is not a media company.
The detailed information that users themselves enter into their social media profiles and that manifests in their usage behavior over time allows platforms to curate content at the level of the individual user. Users directly steer the selection of posts they want to see by connecting with other accounts, subscribing to their posts, or searching for specific offerings. Additionally, platforms usually assume that people want to see more of the type of content they look at frequently or extensively, according to the readily available and detailed usage data. Subsequent curation of content for the users can mostly be subsumed under one of two types:
Recommendation systems work based on similarity between types of content and users: if many users select content type Y after having seen X, Y will be recommended to future users who access X. It remains up to the users whether they actually select content type Y. Via users’ IP addresses, recommendation systems can also take into account the approximate location, time of day, or season when suggesting more content. This type of curation remains comparatively coarse and shares similarities with how traditional mass media suggest content; for instance, how broadcasters try to create “audience flow” through the scheduling of adjacent TV shows or how journalistic content and advertisements are tailored to the target audience of a magazine or program.
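To make this co-occurrence logic concrete, here is a minimal sketch in Python of an item-to-item recommender of the kind described above. The data and function names are hypothetical, and real platform systems are vastly more complex:

```python
from collections import Counter, defaultdict

def build_cooccurrence(view_histories):
    """Count how often item Y was selected directly after item X."""
    followers = defaultdict(Counter)
    for history in view_histories:
        for x, y in zip(history, history[1:]):
            followers[x][y] += 1
    return followers

def recommend(followers, current_item, k=3):
    """Suggest the k items most often selected after the current one."""
    return [item for item, _ in followers[current_item].most_common(k)]

# Hypothetical viewing histories of four users
histories = [
    ["news_A", "news_B", "video_C"],
    ["news_A", "news_B"],
    ["news_A", "video_C"],
    ["news_B", "video_C", "news_A"],
]
model = build_cooccurrence(histories)
print(recommend(model, "news_A"))  # -> ['news_B', 'video_C']
```

As the text notes, such a system only suggests; whether users actually select the recommended items remains up to them.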
For the necessary selection of posted content, social media use targets such as frequent, regular, or extensive usage, watching videos until the end, or engagement (liking, sharing, commenting, and so on). It is, however, tricky to strike an attractive balance between known and new or surprising content, so as not to keep recommending the same content to users over and over again. When Netflix was still mailing DVDs instead of streaming, the company even launched a competition based on this problem: whoever could create a better recommendation system than its own “Cinematch” would win a prize of one million US dollars, which highlights the value of good content recommendation. Recommendation and personalization algorithms in social media face a similar dilemma to Netflix’s: if they only display content that fits a user’s previous selections, the feed could become boring after a while and result in shorter or less frequent use of the platform. At the other extreme, YouTube has been criticized for including more radical content in its recommendations to users who watched political videos, including extremist content and conspiracy theories [1]. Equally problematic for the video platform was the fact that in 2017, ads from large advertisers had been shown next to videos with extreme political content and hate speech. The cancellation of billions of dollars’ worth of advertising deals put YouTube under a lot of pressure to adapt its curation not only to its users’ viewing behavior, but also to the interests of its advertisers. The platform subsequently changed its rules for monetizing videos, which in turn had repercussions for content creators, whose videos form the basis of bringing together viewers and ads.
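The balance between engagement and novelty that this dilemma revolves around can be sketched as a simple ranking rule. Everything here, from the weights to the notion of a per-topic repetition count, is a hypothetical illustration rather than any platform’s actual algorithm:

```python
def rank_feed(candidates, seen_topics, novelty_weight=0.3):
    """Rank posts by predicted engagement, discounted for topical repetition.

    candidates:  list of (post_id, topic, predicted_engagement) tuples
    seen_topics: mapping of topic -> how often the user saw it recently
    """
    def score(post):
        _, topic, engagement = post
        # The more often a topic was already shown, the stronger the discount.
        return engagement - novelty_weight * seen_topics.get(topic, 0)

    return sorted(candidates, key=score, reverse=True)

# Hypothetical feed: a high-engagement post on an over-served topic
# is outranked by fresher material.
candidates = [
    ("p1", "politics", 0.9),
    ("p2", "sports", 0.7),
    ("p3", "politics", 0.8),
]
print(rank_feed(candidates, seen_topics={"politics": 3}))
# -> [('p2', 'sports', 0.7), ('p1', 'politics', 0.9), ('p3', 'politics', 0.8)]
```

Tuning the novelty weight upward favors variety, while setting it to zero reproduces pure engagement optimization, the behavior described in the Haugen documents.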
In summary, there is a strong interrelation between what social media users want to see (in the sense of what they click on, and how often); what appears useful for platform providers and their own, usually commercial, goals; what advertisers perceive as a friendly environment for their purposes; and what content gets produced by creators. Long-term trends towards more and more radical content recommendations, or towards like-minded people sharing one-sided political content on social networking sites, are seen as particular threats to opinion formation.

What happens below the surface of social media? What are the covert effects of advertising? And how do bots affect communication?

Profile: Rabbit holes

Are users becoming more radical through YouTube’s recommendations?

Profile: Microtargeting

What makes advertising in social media special?

Profile: Dark ads

Can opinion formation be manipulated through social media advertising?

Profile: Bots

What influence do bots have on communication in social media?

The effects of curation

Scientific investigations into the effects of social media curation typically follow one of two paths. The first option is to ask users. This is considered the most pertinent way to capture people’s opinions and allows for surveying a wide range of potentially confounding factors (which, in an experimental setting, can also be controlled by the researchers). On the flip side, survey-based study designs can gauge only roughly what types of content users came into contact with on social media, as their memory, available time, and patience pose natural limits. The second option consists of capturing social media usage automatically and in great detail. However, this often limits what information about political opinions or other individual characteristics of the users can be gathered.

Who uses what for political information?

Annual surveys such as the Reuters Digital News Report show that social media are a staple of many German users’ media diet [2]. They are especially popular among young people: during the 2021 Federal Elections in Germany, almost half of first-time voters said that social media were their primary source of political information [3]. Since social media are typically combined with many other media outlets, it is difficult to assess the specific effect of social media use on opinion formation or other outcomes.
Milieu-based research suggests that social media use is most likely to influence the opinions of two groups of users: people who almost exclusively come into contact with information about current events via social media, and people whose contacts in social media are very homogeneous with regard to political opinions [4]. All other milieus use many different sources of information, so that the possible effects of social media use are mitigated by these other outlets.
A large-scale analysis of digital usage data additionally shows that social media serve as distributors of content that bring average users into contact with more news than non-users [5]. Due to the study design, it cannot analyze the opinions contained in the news items; however, US-based studies of large usage datasets have found a way to approximate this: the American two-party system allows researchers to estimate the political leaning of news posts via the proportion of users who lean Republican, Democrat, or Independent, as well as to establish who accessed the news based on their partisanship. Such studies show that the vast majority of users select news items with a neutral leaning that are used across party lines [6]. Only small groups at the ends of the political spectrum tend to mainly use news that is exclusive to their partisan camp.
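The estimation approach behind these US studies can be illustrated with a rough sketch: a news item’s leaning is scored by the balance of Republican- and Democrat-leaning users among those who shared it. The data and the simple scoring formula here are hypothetical stand-ins for the more refined measures the cited studies use:

```python
def alignment_score(shares_by_party):
    """Score a news item from -1 (shared only by Democrat leaners)
    to +1 (shared only by Republican leaners)."""
    dem = shares_by_party.get("democrat", 0)
    rep = shares_by_party.get("republican", 0)
    total = dem + rep + shares_by_party.get("independent", 0)
    return (rep - dem) / total if total else 0.0

# Hypothetical items: most cluster near 0 (used across party lines),
# while a few sit at the partisan extremes.
items = {
    "local_news":    {"democrat": 480, "republican": 470, "independent": 50},
    "partisan_blog": {"democrat": 30,  "republican": 940, "independent": 30},
}
for name, shares in items.items():
    print(name, round(alignment_score(shares), 2))
# local_news -0.01, partisan_blog 0.91
```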
These results are a good reminder of the fact that many political opinions exist prior to media use, including social media use, and can influence what content is selected. It is plausible to assume that social media use can reinforce previously held political opinions when like-minded content is consumed, or that it can balance opinions when the user is exposed to a variety of viewpoints. In fact, a German survey of Facebook users did not find an effect on users’ opinions [7]. The same study showed, however, that gender and newspaper reading, unlike Facebook use, do predict political opinion: women and newspaper readers are clearly more moderate in their opinions than men or non-readers. This result again underlines the influence that media other than social media have on opinion formation.

Experiments on opinion formation

Instead of conducting surveys on the effects of social media curation on users in everyday life, which can be affected by numerous confounding factors, other researchers design experiments to control the latter. They typically show participants opinion posts from social media and measure potential effects on the participants’ political opinions after exposure. This allows for specifying the effect of different types of posts in a systematic manner. A review of seven such experimental studies revealed that preexisting differences in opinion can be deepened by exposure to social media content [8]. In an Austrian study, for instance, left-leaning users perceived a populist right-wing politician more negatively after seeing two of his anti-migration tweets [9].
Such an increase in the difference of opinions through social media use is called polarization. Experiments like this one, however, cannot determine whether an opinion actually changed through exposure or whether the users simply became more aware of their already established opinion of the politician in question. It is equally unclear how long the effect persists after the end of the experiment. Lastly, given the many instances in which social media users select among options in their everyday usage, a chicken-and-egg problem arises: if users typically connect to like-minded people on social media and like posts that align with their opinions, thereby making the curating algorithms show more similar content in the future, to what degree has an opinion already been polarized before exposure, and how big is the additional effect of social media usage?
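In such experiments, polarization is typically quantified as a growing gap between groups’ opinion ratings before and after exposure. The following sketch with invented ratings on a -5 to +5 scale shows the basic arithmetic, not any specific study’s measure:

```python
def group_gap(opinions_left, opinions_right):
    """Distance between the mean opinions of two groups on a -5..+5 scale."""
    mean = lambda ratings: sum(ratings) / len(ratings)
    return mean(opinions_right) - mean(opinions_left)

# Hypothetical ratings of a politician before and after seeing his tweets
pre_left,  pre_right  = [-1.0, -0.5, -1.5], [0.5, 1.0, 1.5]
post_left, post_right = [-2.0, -1.0, -2.5], [1.0, 1.5, 2.0]

print(group_gap(pre_left, pre_right))    # 2.0
print(group_gap(post_left, post_right))  # ~3.33: the groups drifted apart
```

The chicken-and-egg problem described above corresponds to not knowing how much of the pre-exposure gap was itself produced by earlier curation.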
In summary, scholars have investigated various types of effects of platform curation on news use in social media as well as on opinion formation. Experimental research does confirm polarization effects; however, typical usage patterns of social media platforms bring people in contact with a variety of posts that tend to be moderate with regard to opinions. In addition, social media are only one source of news for many age groups.
For about half of first-time voters, social media are the main source of political information
2021 Federal Elections in Germany

Two potentially dangerous outcomes of curation that should be investigated in the future

The state of research on the effects of social media curation is complex, and many results are far from alarming. However, two aspects deserve continued attention, both in future research and in public debate:
First, a small effect on opinions can have broad ramifications. A prominent example is the close result of the Brexit referendum, which was preceded by a large amount of fake news (including on social media) as well as attempts by different online actors to influence the vote. In a similar vein, the last two US presidential elections were won by small margins, which illustrates that even a small change in (or confirmation of) opinions can potentially impact a larger decision.
Second, social media can play a relatively big role for users with a very narrow or already radical news repertoire, compared to the majority of users. People who spend a lot of their time online in comparatively closed groups and are exposed to a homogeneous news diet may become more radical in their opinions over time. It is in this respect that YouTube, for example, has been criticized for its supposedly rabbit-hole-like recommendation algorithm. Users with low political knowledge who only use news sporadically might also be at risk of greater effects on their opinions through social media.
Profile: Russian propaganda

How does Russian propaganda affect opinion formation in other countries?

Profile: Fake news

What role do social media play in the spread of false information?

Profile: Rabbit holes

Are users becoming more radical through YouTube’s recommendations?

How should social media be designed to protect opinion formation as much as possible?

Social media do not only display what users are already thinking, and their effects on opinion formation are not as direct as people in everyday life or public debate might assume. Curation through usage behavior and platform algorithms should be considered separately for their potential effects on different types of content, and existing studies give little cause for alarm. However, problematic consequences may occur for some groups of users, for instance, people who rarely use other news sources or users in very homogeneous networks of contacts. Consequently, the following suggestions have been made to improve social media with regard to opinion formation:
Companies operating social media platforms should acknowledge their importance for opinion formation. Today’s dominant platforms were not created for news and current events, while journalism and its professional norms have evolved over centuries. In contrast, social media leadership typically try to eschew responsibility for their products’ effects on opinion formation and democracy. Companies have successfully argued that they are not media companies and have avoided attendant regulation. The large US companies additionally follow a libertarian interpretation of freedom of speech that is not shared in all cultures where their platforms are being used around the globe.
Opinion formation as a central democratic process is thus worth protecting through a variety of measures, which nevertheless should take into account the positive effects that social media can have.
PD Dr. Merja Mahrt, Research Associate
Merja Mahrt is a communication scholar who completed her habilitation at Heinrich Heine University Düsseldorf in 2017, with a study on digital fragmentation, echo chambers, and filter bubbles. She received her PhD from the University of Amsterdam after studying at Freie Universität Berlin and Université Michel de Montaigne — Bordeaux III. Her research focuses on social contexts and effects of digital media use.