A public policy blog from AEI
Fake news has become a cause célèbre and fighting it has attracted some powerful players. Facebook just launched its “disputed” tag for possible fake news, and Google has promised to also go on the attack.
But can current tech firms really stop or even slow down fake news? Probably not. Frankly, these firms’ business models enable the economic engine that powers fake news, and the demand for a social media site’s version of the truth is probably quite low.
Fake news isn’t a new phenomenon. In 1782, Benjamin Franklin published a counterfeit issue of the Boston Independent Chronicle that included a fictional account of American scalps being sent to the king of England. During the era of yellow journalism, Joseph Pulitzer and William Randolph Hearst sometimes knowingly published false stories to rouse anti-Spanish sentiment over Spain’s colonization of Cuba. Ryan Holiday, author of “Trust Me, I’m Lying” (Penguin Group, 2013), describes numerous instances over the past decade where he fed made-up stories to a gullible media.
Sometimes false content feels right. As Nobel Laureate Daniel Kahneman has explained, we are all subject to mental shortcuts. One is confirmation bias, the tendency to accept data that confirms our prior beliefs. Economist Scott Wallsten shows that people in states that President Donald Trump carried in the election were more concerned about fake news before the election than after it, while people in the states that Sec. Clinton carried were more concerned about fake news after the election than before it.
Another mental shortcut is exaggerated emotional coherence, which is the situation in which we believe good things about people we think are good, and bad things about people we think are bad. According to a poll from Suffolk University before the election, 77 percent of Clinton supporters believed Trump is a racist, while 87 percent of Trump supporters believed he is not.
Some mental triggers cause people to share fake news through social media. In his book “Contagious” (Simon & Schuster, 2016), Jonah Berger explains that people pass information along to others when doing so makes the sender look good, the information has a strong emotional impact, or the content is useful to the recipient. Sharing fake news among like-minded people sometimes satisfies all three triggers. And receiving shared information then prompts other cognitive biases, such as believing something because it’s from a trusted source (such as a friend) or because the information is encountered multiple times, making it feel familiar or publicly confirmed.
Fake news has a powerful economic engine. Many of the bloggers and other writers that create or spread false stories are paid based on the number of views of the articles they write, so they have a strong incentive to create pieces that hit the triggers I just described. Fake news writer Cameron Harris sometimes made $1,000 an hour during the 2016 presidential campaign. Another fake news writer, Paul Horner, has made $10,000 a month from ads on his website.
Also, costs are low and ideological payoffs can be high. Holiday reported a fake Twitter scandal in 2012 that cost $200 to create and that generated 29 mentions in the media and more than 500,000 page views. According to Brooke Jarvis, people from Greenpeace and Occupy Seattle posted a fake YouTube video of Shell holding a private party to launch arctic oil rigs, and sent out fake threats from Shell regarding the video. The video has received more than 800,000 views.
Current tech leaders might help somewhat, but they are also part of the problem. The internet makes the fake news economic engine possible. Although about half of people age 49 and younger often get their news online – and 44 percent of those get news from Facebook – most consumers do not trust the information: 62 percent of consumers told Pew Research that they don’t trust the news they find on social media very much or don’t trust it at all.
The key to combating fake news probably lies in creating an economic engine that is more powerful than the one that drives fake news. Since costs are already minimal, the engine would have to give consumers more value. Sounds like we need a disruptive innovation, which is what new tech businesses are all about.
(Disclaimer: Dr. Jamison served as a consultant to Google in 2012 on whether Google search should be considered a public utility.)
This post was originally published on TechPolicyDaily.
© 2018 American Enterprise Institute