
Sex Trafficking, Russian Infiltration, Birth Certificates, and Pedophilia: A Survey Experiment Correcting Fake News

Published online by Cambridge University Press: 09 January 2018

Ethan Porter
Affiliation:
Assistant Professor, School of Media and Public Affairs, George Washington University, 805 21st NW, Washington D.C. 20037, e-mail: evporter@gwu.edu
Thomas J. Wood
Affiliation:
Assistant Professor, Department of Political Science, The Ohio State University, 2018 Derby Hall, 154 N Oval Mall, Columbus, OH 43210, e-mail: wood.1080@osu.edu
David Kirby
Affiliation:
Adjunct Scholar, Cato Institute, 1000 Massachusetts Ave NW, Washington, DC 20001, e-mail: davidrkirby@gmail.com


Type: Short Report
Copyright: © The Experimental Research Section of the American Political Science Association 2018

Following the 2016 U.S. election, researchers and policymakers have become intensely concerned about the dissemination of “fake news,” or false news stories in circulation (Lazer et al., 2017). Research indicates that fake news is shared widely and has a pro-Republican tilt (Allcott and Gentzkow, 2017). Facebook now flags dubious stories as disputed and tries to block fake news publishers (Mosseri, 2016). While the typical misstatements of politicians can be corrected (Nyhan et al., 2017), the sheer depth of fake news’s conspiracizing may preclude correction. Can fake news be corrected?

To answer, we exposed subjects (n = 2,742) on Mechanical Turk to multiple examples of fake news. As far as we know, this represents one of the first experimental tests of corrections on fake news.Footnote 1 Our fake news examples came from across the political spectrum. We used six fake news examples, randomly exposing each subject to two examples. For each fake news example, subjects randomly saw either the fake news story alone, or the story and a correction. All subjects were then asked how much they agreed with the position advanced by the fake news story.

Both the fake news examples and corrections came from the real world. We used a wide range of sources for stories and corrections: some came from traditional media, while others emanated from Internet message boards. For one of the fake stories, we varied the media type, showing subjects a video. For another fake story, we presented subjects with one of two real-world corrections. The full text of each fake story and correction appears in the appendix.

As the top row of Figure 1 shows, on every issue, corrected subjects on average became significantly less convinced by the fake news story. Corrections improved accuracy overall, even among those ideological cohorts who had a clear political interest in a fake news story. For example, despite ubiquitous claims about Russian political interference in the 2016 election, even liberals shown a correction to a story alleging Russian infiltration of a Vermont power utility subsequently evinced more accurate beliefs ($\hat{\beta }=-1.11$; p < 0.01). Likewise, conservatives exposed to a correction indicating that President Trump had not ordered an “unprecedented” crackdown on pedophilia became more accurate ($\hat{\beta }=-0.76$; p < 0.01). The second row of Figure 1 displays ideological results for all stories.

Figure 1 Correction effects by fake story, overall, and by ideology. Text labels report beta coefficients and p-values adjusted via the Bonferroni method for multiple comparisons. The second row reports average effects across both of the corrections used for the Russia/Vermont story. The bottom row reports the difference in effects by ideology. This figure summarizes the regression models described in Table 1.

Table 1 Regression Models by Issue

Note: For each issue, the first model measures the unconditional effect of a correction (larger values indicate agreement with the inaccurate statement). The second model for each issue reports the correction effect conditional on ideology. The auxiliary quantities beneath the coefficients report the significance of the corrections by ideology. *p < 0.1; **p < 0.05; ***p < 0.01.

Table 2 Regression Models for Vermont Power Grid Hacking, by Correction Type

Note: The Washington Post and the Glenn Greenwald corrections are indistinguishably corrective, as indicated by the insignificant difference between the two correction effects (the second group of auxiliary quantities). *p < 0.1; **p < 0.05; ***p < 0.01.

Table 3 Conditional Balance

Note: For categorical covariates, the three numerical columns report the proportional distribution of each variable within the variable class. For continuous variables, cells report correction exposure group means. Categorical relationships are tested with a chi-square test; continuous variables are tested with an F-test.

To be sure, there was some evidence of differential response to corrections by ideology. Furthermore, uncorrected subjects were credulous of the claims made by the fake stories. Yet for no issue was a correction met with factual backfire (Nyhan and Reifler, 2010; Wood and Porter, 2016). As with non-fake stories, corrections led to large gains in factually accurate beliefs across the ideological spectrum. While fake news may have had a significant impact on the 2016 election, upon seeing a correction, Americans are willing to disregard fanciful accounts and hew to the truth.

Supplementary Materials

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2017.32

Footnotes

1 While Pennycook and Rand (2017) evaluated Facebook’s corrective efforts, they neither provided corrections nor tested complete articles.

References

Allcott, H., and Gentzkow, M. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31 (2): 211–236.
Lazer, D., Baum, M., Grinberg, N., Friedland, L., Joseph, K., Hobbs, W., and Mattsson, C. 2017. “Combating Fake News: An Agenda for Research and Action.” The Shorenstein Center on Media, Politics and Public Policy. (https://shorensteincenter.org/wp-content/uploads/2017/05/Combating-Fake-News-Agenda-for-Research-1.pdf), accessed September 16, 2017.
Mosseri, A. 2016. “News Feed FYI: Addressing Hoaxes and Fake News.” Facebook.com (December 26). (https://newsroom.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/)
Nyhan, B., and Reifler, J. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32 (2): 303–330.
Nyhan, B., Porter, E., Reifler, J., and Wood, T. 2017. “Taking Corrections Literally But Not Seriously? The Effects of Information on Factual Beliefs and Candidate Favorability.” (https://ssrn.com/abstract=2995128), accessed June 29, 2017.
Pennycook, G., and Rand, D. G. 2017. “Assessing the Effect of ‘Disputed’ Warnings and Source Salience on Perceptions of Fake News Accuracy.” (https://ssrn.com/abstract=3035384), accessed September 12, 2017.
Wood, T. J., and Porter, E. 2016. “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence.” (https://ssrn.com/abstract=2819073), accessed August 6, 2016.