
Twitter lifted its ban on COVID misinformation – research shows this is a grave risk to public health

The restraints on COVID-19 misinformation on Twitter are off. AP Photo/Jeff Chiu

 

By Anjana Susarla, Michigan State University

Twitter’s decision to no longer enforce its COVID-19 misinformation policy, quietly posted on the site’s rules page and listed as effective Nov. 23, 2022, has researchers and experts in public health seriously concerned about the possible repercussions.

Health misinformation is not new. A classic case is the misinformation about a purported but now disproven link between autism and the MMR vaccine, based on a discredited study published in 1998. Such misinformation has severe consequences for public health. Countries with stronger anti-vaccine movements against the diphtheria-tetanus-pertussis (DTP) vaccine, for example, faced a higher incidence of pertussis in the late 20th century.

As a researcher who studies social media, I believe that reducing content moderation is a significant step in the wrong direction, especially in light of the uphill battle social media platforms face in combating misinformation and disinformation. And the stakes are especially high in combating medical misinformation.

Misinformation on social media

There are three key differences between earlier forms of misinformation and misinformation spread on social media.

First, social media enables misinformation to spread at a much greater scale, speed and scope.

Second, content that is sensational and likely to trigger emotions is more likely to go viral on social media, making falsehoods easier to spread than the truth.

Third, digital platforms such as Twitter play a gatekeeping role in the way they aggregate, curate and amplify content. This means that misinformation on emotionally triggering topics such as vaccines can readily gain attention.

Video: How to spot online misinformation.

The spread of misinformation during the pandemic has been dubbed an infodemic by the World Health Organization. There is considerable evidence that COVID-19-related misinformation on social media reduces vaccine uptake. Public health experts have cautioned that misinformation on social media seriously hampers progress toward herd immunity, weakening society’s ability to deal with new COVID-19 variants.

Misinformation on social media fuels public doubts about vaccine safety. Studies show that COVID-19 vaccine hesitancy is driven by a misunderstanding of herd immunity and beliefs in conspiracy theories.

Combating misinformation

Social media platforms’ content moderation policies and their stances toward misinformation are crucial for combating it. In the absence of strong content moderation policies on Twitter, algorithmic content curation and recommendation are likely to boost the spread of misinformation by amplifying echo chamber effects and exacerbating partisan differences in exposure to content. Algorithmic bias in recommendation systems could also further accentuate global healthcare disparities and racial disparities in vaccine uptake.

There is evidence that some less-regulated platforms such as Gab may amplify the impact of unreliable sources and increase COVID-19 misinformation. There is also evidence that the misinformation ecosystem can lure people who are on social media platforms that invest in content moderation to accept misinformation that originates on less moderated platforms.

The danger, then, is not only that there will be more anti-vaccine discourse on Twitter, but that such toxic speech could spill over onto other online platforms that are investing in combating medical misinformation.

The Kaiser Family Foundation COVID-19 vaccine monitor shows that public trust in COVID-19 information from authoritative sources such as governments has fallen significantly, with serious consequences for public health. For example, the share of Republicans who said they trust the Food and Drug Administration fell from 62% to 43% between December 2020 and October 2022.

In 2021, a U.S. Surgeon General’s advisory called for social media platforms’ content moderation policies to:

  • pay attention to the design of recommendation algorithms.
  • prioritize early detection of misinformation.
  • amplify information from credible sources of online health information.

These priorities require partnerships between healthcare organizations and social media platforms to develop best practice guidelines to address healthcare misinformation. Developing and enforcing effective content moderation policies takes planning and resources.

In light of what researchers know about COVID-19 misinformation on Twitter, I believe that the announcement that the company will no longer ban COVID-19-related misinformation is troubling, to say the least.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Top image by Engin Akyurt from Pixabay 
