AUTHOR=Yao Mengni, Tian Sha, Zhong Wenming TITLE=Readable and neutral? Reliability of crowdsourced misinformation debunking through linguistic and psycholinguistic cues JOURNAL=Frontiers in Psychology VOLUME=15 YEAR=2024 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1478176 DOI=10.3389/fpsyg.2024.1478176 ISSN=1664-1078 ABSTRACT=Background

In the face of the proliferation of misinformation during the COVID-19 pandemic, crowdsourced debunking emerged as a counter-infodemic measure complementing the efforts of professionals and ordinary users. In 2021, X (formerly Twitter) launched its community-driven fact-checking program, Community Notes (formerly Birdwatch). The program allows users to attach contextual, corrective notes to misleading posts and to rate the helpfulness of others' contributions. The platform's effectiveness has been preliminarily verified, but mixed findings on its reliability indicate the need for further research.

Objective

This study assesses the reliability of Community Notes by comparing the readability and language neutrality of helpful and unhelpful notes.

Methods

A total of 7,705 helpful notes and 2,091 unhelpful notes, spanning January 20, 2021, to May 30, 2023, were collected. Measures of reading ease, analytical thinking, affect, and authenticity were derived using Wordless and Linguistic Inquiry and Word Count (LIWC). The non-parametric Mann–Whitney U-test was then employed to evaluate differences between the helpful and unhelpful groups.
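
A minimal sketch of the group comparison described above, assuming per-note scores have already been produced by Wordless and LIWC; the variable names and sample values are illustrative only, not taken from the study's data or code:

from scipy.stats import mannwhitneyu

def compare_groups(helpful_scores, unhelpful_scores, alpha=0.05):
    # Two-sided Mann-Whitney U-test between two independent samples.
    stat, p = mannwhitneyu(helpful_scores, unhelpful_scores, alternative="two-sided")
    return stat, p, p < alpha

# Hypothetical per-note scores for one measure (e.g., LIWC analytical thinking).
helpful = [78.2, 91.4, 85.0, 69.7, 88.3]
unhelpful = [55.1, 62.8, 47.9, 70.2]
u_stat, p_value, significant = compare_groups(helpful, unhelpful)
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, significant at 0.05: {significant}")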

Results

Both groups of notes are easy to read, with no notable difference between them. Helpful notes show significantly greater analytical thinking, authenticity, and emotional restraint than unhelpful ones. As such, the reliability of Community Notes is validated in terms of readability and neutrality. Nevertheless, the prevalence of prepared, negative, and swear language in unhelpful notes points to manipulative and abusive attempts on the platform. The wide value range within the unhelpful group and the overall limited consensus on note helpfulness also suggest a complex information ecology on the crowdsourced platform, highlighting the need for further guidance and management.

Conclusion

Based on statistical analysis of linguistic and psycholinguistic characteristics, the study validated the reliability of Community Notes and identified room for improvement. Future work could explore the psychological motivations underlying volunteering, gaming, or even manipulative behavior, enhance the crowdsourced debunking system, and integrate it with broader efforts in infodemic management.