Detecting "fake news" through lay groups

A study suggests that checks by groups of lay readers can be as effective as the work of professional fact-checkers.

Faced with serious concerns about misinformation, social networks and news organizations often use professional fact-checkers to separate the real from the false. But fact-checkers can only assess a small portion of the content that floods networks and media.

One problem with fact-checking is that there is simply too much content for professional verifiers to cover, especially within a reasonable amount of time.

A new study by researchers at the Massachusetts Institute of Technology (MIT) in the United States suggests an alternative approach: checks by groups of non-expert readers can be virtually as effective as the work of professional fact-checkers.

The study, entitled “Scaling up Fact-Checking Using the Wisdom of Crowds” and published in the journal Science Advances, examined more than 200 news items that Facebook's algorithms had flagged for further analysis. According to the authors, an alternative way to approach this verification may have been found: using relatively small, politically balanced groups of lay readers to evaluate the headlines and lead sentences of the news items.

The average rating of a group of 10 to 15 people correlated well with the decisions of professional fact-checkers. This can help with the scalability problem, because these evaluators were ordinary people with no training in fact-checking, and they simply read the headlines and made a judgment without spending time on any research.

This means the crowdsourcing method could be applied widely and cheaply. The study estimates that the cost of having readers rate news in this way is around €0.90 per story.

"There is no single thing that solves the problem of false news online," says David Rand, a professor at MIT and senior co-author of the study. "But we are working to add promising approaches to the anti-disinformation toolkit."


A critical mass of readers

To conduct the study, the researchers used 207 news articles that an internal Facebook algorithm had identified as requiring fact-checking, either because there was reason to believe they were problematic, because they were being widely shared, or because they concerned important topics such as health. The experiment involved 1,128 US residents.

These participants were shown the headline and lead sentence of the 207 news items and were asked seven questions: how "accurate", "true", "reliable", "trustworthy", "objective", and "impartial" each story was, and whether it was "describing an event that actually happened". The answers were combined to generate an overall accuracy score for each story.
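A minimal sketch of how such an overall accuracy score could be computed. The 1–7 Likert scale, the function name, and the example ratings are assumptions for illustration; the study's exact aggregation scheme may differ.

```python
# Hypothetical aggregation: average each participant's answers to the seven
# questions (assumed here to be on a 1-7 scale), then average those
# per-participant scores across the group to get one score per story.

def story_score(responses):
    """responses: list of per-participant lists of seven 1-7 ratings."""
    per_participant = [sum(r) / len(r) for r in responses]
    return sum(per_participant) / len(per_participant)

# Example: a three-person group rating one story on the seven questions
group = [
    [6, 6, 5, 6, 5, 6, 7],
    [4, 5, 5, 4, 5, 4, 5],
    [7, 6, 6, 7, 6, 6, 7],
]
print(round(story_score(group), 2))
```

Averaging first within each participant and then across the group keeps any single enthusiastic or skeptical rater from dominating the story's score.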

At the same time, three professional fact-checkers were given all 207 stories and asked to rate them after conducting their own research.

In line with other fact-checking studies, the fact-checkers' ratings were highly correlated with one another, but their agreement was far from perfect. In about 49 percent of cases, all three fact-checkers agreed on the verdict about a story's veracity; about 42 percent of the time, two of the three agreed; and about 9 percent of the time, all three gave different ratings.

Interestingly, when the non-professional readers recruited for the study were organized into groups with equal numbers of Democrats and Republicans, their average ratings were highly correlated with the professional fact-checkers' ratings. And once a double-digit number of readers was involved, the group ratings correlated as strongly with the fact-checkers as the fact-checkers did with each other.

These readers were not trained in fact-checking and read only the headlines and lead sentences, yet they were able to match the performance of the fact-checkers, the article notes.

While it may initially seem surprising that a group of 12 to 20 readers could match the performance of professional fact-checkers, this is another example of a classic phenomenon: the “wisdom of the crowds”.

In a wide range of applications, lay groups have been found to match or exceed the judgmental performance of experts. The current study shows that this can occur even in the highly polarizing context of misinformation identification.

Experiment participants also took a test of political knowledge and a test of their tendency to think analytically.

Overall, the ratings of people who were better informed about civic issues and who engaged in more analytical thinking were more closely aligned with the results obtained by professional fact-checkers.


Participation mechanisms

The researchers conclude that the results of this study could be applied in many ways, and they note that some platform owners are already actively trying to make crowdsourcing work.

Facebook has a program called Community Review, in which lay people are hired to review news content; Twitter has its own project, Birdwatch, which solicits readers' opinions on the veracity of tweets.

As for the usefulness and feasibility of the approach, the authors note that any organization using crowdsourcing needs to find a good mechanism for reader participation. If participation is open to everyone, the crowdsourcing process could be unfairly influenced by bad actors seeking to bias the judgments.

Finally, the authors note that this process has not yet been tested in an environment where anyone can freely choose to participate. On the other hand, news and media organizations would have to find ways to recruit a large enough group of people to actively evaluate content in order to make crowdsourcing work.


Author António Piedade is a biochemist and science communicator. He has published over 700 articles and columns of science popularization in the Portuguese press and 20 articles in international scientific journals.
He is the author of nine science popularization books, among which stand out “Iris Científica” (Mar da Palavra, 2005 – National Reading Plan), “Caminhos de Ciência”, with a preface by Carlos Fiolhais (Imprensa Universidade de Coimbra, 2011), and “Diálogos com Ciência” (Ed. Trinta por um Linha, 2019 – National Reading Plan), prefaced by Carlos Fiolhais.
He regularly organizes cycles of science popularization lectures, including the already very popular “Ciência à Seis”, at the Rómulo Centro Ciência Viva of the University of Coimbra.
He also gives science popularization lectures at schools and other institutions.