Yesterday we charted the course that led us to Post-Truth land. How might we find our way out? Hard to say, but fortunately, the second portion of Lewandowsky, Ecker and Cook’s piece offers some suggestions. They also coined a fun new phrase to embody the changes that need to be made: technocognition.
As the authors explain, technocognition is the idea that we should use what we know about psychology to design technology in a way that minimizes the impact of misinformation. By improving how people communicate, they hope, we can improve the quality of the information shared.
Fundamentally, the authors argue that we need to educate the public about trolls and fake news, and improve journalism so it can better fight misinformation. In addition to common-sense steps like disclosing pundits’ and writers’ conflicts of interest and encouraging broader participation to collectively reshape the norm into one where facts matter, media outlets should hire myth-busting fake news reporters and consider forming a common “Disinformation Charter” setting out acceptable behavior and standards of accuracy.
But the authors recognize that we can’t expect everyone to start playing by the rules, which is why there is a need for independent watchdogs to act as fake news referees, calling out errors and identifying when stories stray past the truth. The climate world, which had already formed important defenses against deniers even before one was elected president, has a couple of key actors in this space, including Climate Feedback. More broadly, there’s the UK’s Independent Press Standards Organisation, which recently forced a correction of a Daily Mail climate conspiracy.
Then there’s the techno side of the equation. These are the Silicon Valley fixes, like algorithms that automatically fact-check content to keep fake news out of searches and feeds, or mechanisms to flag fake news on social media.
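To make that a little more concrete, here’s a minimal sketch of what one of those automated checks could look like, assuming a hypothetical list of already-debunked claims. The names (`DEBUNKED_CLAIMS`, `flag_if_debunked`) and the similarity threshold are ours, not the authors’, and a production system would obviously be far more sophisticated than fuzzy string matching:

```python
from difflib import SequenceMatcher

# Hypothetical mini-database of claims that fact checkers have already debunked.
DEBUNKED_CLAIMS = [
    "global warming stopped in 1998",
    "climate scientists faked the temperature record",
]

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two claims, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_if_debunked(claim: str, threshold: float = 0.8) -> bool:
    """Flag a claim for review if it closely matches a known debunked claim."""
    return any(similarity(claim, known) >= threshold for known in DEBUNKED_CLAIMS)

print(flag_if_debunked("Global warming stopped in 1998!"))  # True
print(flag_if_debunked("Arctic sea ice hit a record low"))  # False
```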
Website moderators, the authors argue, need to do a better job of containing trolls in the first place. From screening phrases that are primarily used as fake news framing (a toy version of which is sketched below) to eliminating comment sections altogether, there are lots of potential ways to curate comments so they’re not such a cesspool of hate and lies. But more important than the comments is the content, which is why the authors suggest that an app giving reporters quick and easy determinations of what’s real and what’s an alternative fact would be useful: the Skeptical Science app, for example.
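Here is what that phrase-screening idea might look like in miniature. `SUSPECT_PHRASES` and `needs_review` are illustrative names we made up; a real moderation pipeline would build its phrase list empirically and route matches to human moderators rather than deleting anything automatically:

```python
import re

# Illustrative phrases that often signal troll or fake-news framing; a real
# system would derive and tune this list from data, not hard-code it.
SUSPECT_PHRASES = ["wake up sheeple", "lamestream media", "do your own research"]
SUSPECT_PATTERN = re.compile("|".join(map(re.escape, SUSPECT_PHRASES)), re.IGNORECASE)

def needs_review(comment: str) -> bool:
    """Hold a comment for human review if it contains a suspect framing phrase."""
    return bool(SUSPECT_PATTERN.search(comment))

comments = [
    "Wake up sheeple, the thermometers are rigged!",
    "Nice summary of the new sea level paper.",
]
print([c for c in comments if needs_review(c)])  # only the first is held
```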
And finally, while this is hardly an ask coming solely from the authors, tech companies should find ways to show people content from beyond their bubble. For example, while Facebook and Twitter primarily show users content based on their subscriptions, reddit’s /all and /popular pages show a mix of what everyone is looking at, regardless of personal preference. That gives users at least a glancing sense of the wider world outside their immediate feeds, whether they want to see it or not.
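In code, that kind of beyond-the-bubble feed could be as simple as reserving a slice of every page for site-wide popular items. The sketch below makes exactly that assumption; the function name and the arbitrary 20% share are ours, not anything reddit actually runs:

```python
import random

def mixed_feed(personalized, site_wide_popular, outside_share=0.2, size=10):
    """Build a feed that is mostly personalized, but reserves a slice for
    site-wide popular items the user never subscribed to."""
    n_outside = max(1, int(size * outside_share))
    # Only draw outside items the user wouldn't see in their own feed anyway.
    candidates = [item for item in site_wide_popular if item not in personalized]
    outside = random.sample(candidates, min(n_outside, len(candidates)))
    feed = personalized[: size - len(outside)] + outside
    random.shuffle(feed)  # interleave, so outside items aren't buried at the end
    return feed

# Illustrative use: eight stories from subscriptions, two from everyone else.
print(mixed_feed([f"sub-{i}" for i in range(20)],
                 [f"pop-{i}" for i in range(20)]))
```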
Reading through this list of recommendations, one gets the idea that with some simple tweaks from Silicon Valley, our post-truth problems could be solved. But is it enough?
For now, we hope that technocognition gets some techno-recognition. As unlikely as it may be, we find ourselves wishing for a way to make this anti-fake-news scholarship achieve a fraction of the viral shares that fake news regularly does.