Experts grade Big Tech on readiness to handle midterm election misinformation

The 2016 US election was a wake-up call about the dangers of political misinformation on social media. With two more election cycles rife with misinformation under their belts, social media companies have experience identifying and countering misinformation. However, the threat that misinformation poses to society continues to shift in both form and targets. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns, which are deliberate efforts to spread misinformation.

Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but the companies vary in their approaches and effectiveness. We asked experts on social media to grade how ready Facebook, TikTok, Twitter, and YouTube are to handle the task.

2022 is looking like 2020

Dam Hee Kim, assistant professor of communication, University of Arizona

Social media are important sources of news for most Americans in 2022, but they can also be fertile ground for spreading misinformation. Major social media platforms have announced plans for dealing with misinformation in the 2022 US midterm elections, but experts note that these plans are not very different from the ones the platforms used in 2020.

One important consideration: Users are not confined to a single platform. One company’s intervention may backfire, pushing misinformation and the accounts that spread it onto other platforms and promoting cross-platform diffusion. Major social media platforms may need to coordinate their efforts to combat misinformation.

Facebook/Meta: C

Facebook was widely blamed for its failure to combat misinformation during the 2016 presidential election campaign. Although engagement with misinformation on Facebook (likes, shares, and comments) peaked at 160 million per month during the 2016 election, it remained high in July 2018 at 60 million per month.

More recent evidence shows that Facebook’s approach still needs work when it comes to managing accounts that spread misinformation, flagging misinformation posts, and reducing the reach of those accounts and posts. In April 2020, fact-checkers notified Facebook about 59 accounts that spread misinformation about COVID-19; as of November 2021, 31 of them were still active. In addition, Chinese state-run Facebook accounts have been spreading English-language misinformation about the war in Ukraine to their hundreds of millions of followers.

Twitter: B

While Twitter has generally not been treated as the biggest culprit in spreading misinformation since 2016, it is unclear whether its countermeasures are sufficient. In fact, shares of misinformation on Twitter increased from about 3 million per month during the 2016 presidential election to about 5 million per month in July 2018.

This pattern seems to have continued: between April 2019 and February 2021, more than 300,000 tweets (excluding retweets) included links that fact checks had flagged as false. Fewer than 3 percent of those tweets were presented with warning labels or pop-up boxes. Even among tweets that shared the same misinformation link, only a minority displayed the warnings, suggesting that the process of labeling misinformation is not automatic, uniform, or efficient. Twitter did announce that it has redesigned its labels to discourage further interaction and make it easier to click through for additional information.
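
One plausible way such labeling works is by matching the links in tweets against a database of URLs that fact-checkers have rated false, and the same story often circulates under many superficially different URLs. The sketch below is a minimal, hypothetical Python illustration of that matching step; the flagged URL set, the tracking-parameter list, and the example links are all invented.

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# Invented stand-in for a fact-check database of debunked links.
FLAGGED_URLS = {"https://example.com/fake-story"}

# Query parameters that vary between shares of the same link.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "s"}

def normalize(url: str) -> str:
    """Canonicalize a URL so variants of the same link compare equal."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse((parts.scheme, parts.netloc.lower(),
                       parts.path.rstrip("/"), "", urlencode(query), ""))

FLAGGED = {normalize(u) for u in FLAGGED_URLS}

def should_label(tweet_links: list[str]) -> bool:
    """True if any link in a tweet matches a flagged URL after normalization."""
    return any(normalize(link) in FLAGGED for link in tweet_links)

# The second share would be missed by naive exact-string matching.
print(should_label(["https://example.com/fake-story"]))                 # True
print(should_label(["https://example.com/fake-story/?utm_source=tw"]))  # True
```

Even with normalization, shortened links and mirror domains would slip through, which is one plausible contributor to the uneven label coverage described above.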

TikTok: D

As the fastest-growing social media platform, TikTok has two notable characteristics: Its predominantly young adult user base regularly consumes news on the platform, and its short videos often come with attention-grabbing images and sounds. While these videos are more difficult to review than text-based content, they are more likely to be recalled, evoke emotion, and persuade people.

TikTok’s approach to misinformation needs major improvements. In a September 2022 review, searches for prominent news topics turned up user-generated videos, 20 percent of which contained misinformation, and videos containing misinformation often appeared within the first five results. When neutral phrases were used as search terms, for example “2022 elections,” TikTok’s search bar suggested more charged phrases, for example “January 6 FBI.” TikTok also presents reliable sources alongside accounts that spread misinformation, making the two hard to tell apart.

YouTube: B-

Between April 2019 and February 2021, 170 YouTube videos were flagged as false by a fact-checking organization. Just over half of them were presented with “learn more” information panels, though the panels did not label the videos as false. YouTube appears to add these panels mostly by automatically detecting certain keywords tied to controversial topics like COVID-19, not necessarily by checking a video’s content for misinformation.
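
As a rough sketch of the keyword-triggered behavior described above, the following Python snippet attaches a panel whenever a video’s metadata mentions a monitored topic, regardless of whether the content itself was fact-checked. The topic list, panel text, and example titles are invented for illustration.

```python
from typing import Optional

# Invented topic-to-panel mapping; real panels point to authoritative sources.
PANEL_TOPICS = {
    "covid-19": "Learn more about COVID-19 from public health authorities",
    "election": "See election information from authoritative sources",
}

def info_panel(title: str, description: str) -> Optional[str]:
    """Attach a 'learn more' panel if the metadata mentions a monitored topic."""
    text = f"{title} {description}".lower()
    for keyword, panel in PANEL_TOPICS.items():
        if keyword in text:
            return panel
    return None  # No keyword hit: no panel, even if the video is misinformation.

print(info_panel("COVID-19 cures, ranked", ""))                # panel attached
print(info_panel("The cure they don't want you to know", ""))  # None
```

Keyword matching scales cheaply, but as the numbers above suggest, it decouples the panel from any judgment about whether the specific video is true.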

YouTube could recommend more content from reliable sources to mitigate the challenge of reviewing every uploaded video for misinformation. In one experiment, a user with an empty viewing history watched one video that fact checks had marked as false, and researchers then collected the list of videos YouTube recommended. Of those recommendations, 18.4 percent were misleading or hyperpartisan, and three of the top 10 recommended channels had a mixed or low factual reporting score from Media Bias/Fact Check.
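
To make the experiment’s bookkeeping concrete, here is a rough, hypothetical sketch of how such an audit could be scored. The channel names, recommendations, and ratings below are invented stand-ins for the study’s crawl and the Media Bias/Fact Check data.

```python
# Audit idea: start a profile with an empty watch history, watch one video
# that fact checks marked false, then score the videos recommended next.

def share_low_quality(recommended, rating_for):
    """Fraction of recommended videos whose channel rates mixed or low."""
    low = [v for v in recommended if rating_for(v["channel"]) in ("mixed", "low")]
    return len(low) / len(recommended)

# Invented stand-ins for the crawl results and factual-reporting ratings.
recommended = [
    {"id": "v1", "channel": "WireServiceNews"},
    {"id": "v2", "channel": "HyperPartisanClips"},
    {"id": "v3", "channel": "LocalStationNews"},
]
ratings = {"WireServiceNews": "high",
           "HyperPartisanClips": "low",
           "LocalStationNews": "high"}

print(share_low_quality(recommended, ratings.get))  # ≈ 0.33
```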
