Chris Mooney has an interesting article up at Mother Jones which draws on insights from psychology and neuroscience to explain why people tend to dismiss or distort scientific evidence so as to neutralise threats to, or exaggerate the strength of, their pre-existing beliefs. The nutshell is that, as he puts it, "we apply fight-or-flight reflexes not only to predators, but to data itself."
I recommend reading the whole thing, but I’ll just raise a couple of points I found intriguing.
1. Mooney’s discussion of “motivated reasoning” – an excellent phrase – offers some interesting findings about the affective dimension of reasoning:
“The theory of motivated reasoning builds on a key insight of modern neuroscience: Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment.”
This may be a key insight of modern neuroscience; it was also a key insight of the eighteenth century. Contrary to caricatures about the ‘age of reason’, the eighteenth century was one in which Hume could declare that “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them”. The point was not that reason is unimportant, but that the passions are what motivate human behaviour – action, decision, and indeed reasoning itself.
To give another example, the entanglement of passion and reason also underpinned the dispute between Rousseau and Voltaire about how to understand and respond to religious fanaticism. In his play Le Fanatisme, ou Mahomet le Prophète, Voltaire depicted the Prophet Muhammad as an impostor – “an armed Tartuffe” (cited in Toscano 2010:110) – who cynically manipulated the sincere fanaticism of his followers for base political and personal motives. The play wasn’t intended as a historical account; rather, Muhammad and Islam stand in for religious fanaticism per se. Voltaire’s aim was to show “into what horrible excesses fanaticism, led by an impostor, can plunge weak minds”. This account of fanaticism – still referred to in contemporary debates about terrorism as the “Voltaire thesis” – holds that fanaticism requires both impostors and dupes, and it points the way to Voltaire’s prescription for dealing with fanatics: the “exposure of their leaders as imposters”. For Voltaire, “[the] cure for fanaticism is less a frontal attack on fanatics than a stripping away of the veil from those who manipulate them” (Kelly 2009:179-80).
Rousseau shared Voltaire’s hostility to religious fanaticism (of a certain kind). But he rejected Voltaire’s solution on grounds much like those that Mooney, drawing on the findings of modern neuroscience, describes. Voltaire’s prescription for combating fanaticism appeared to assume that fanaticism is “essentially an error of understanding”, and as such can be “corrected by exposing the imposture that feeds it” (ibid.:181). For Rousseau, this understates the extent to which people’s reasoning processes are shaped and directed by their emotions. Fanaticism, for Rousseau, is not simply an error; it is a passion. This alternative explanation led Rousseau to advocate force, rather than enlightenment, as a weapon against fanaticism (though Voltaire was hardly a pacifist in this respect either), but it also led him to appreciate the potential benefits of fanaticism, if redirected to more useful political objectives.
One doesn’t have to agree with Rousseau’s prescriptions to think that, as modern neuroscience appears to confirm, he and Hume were right to take the emotive aspect of reasoning seriously. For instance, in his interview with New Left Project, author and former publisher Dan Hind suggests that people continue to treat the institutions and ideas responsible for the 2007-8 financial crash as if they had credibility because “we struggle with the idea that we are profoundly misinformed”. We don’t like to think of ourselves as having been deceived, and so “when it turns out that we have been profoundly misinformed, there is a desire to move on, forget about it, and get on with the business of being misinformed in a new way that we don’t realise.” Exposure of falsehood isn’t always enough, particularly when that exposure challenges strongly held perceptions about ourselves and our position in the world.
2. The evidence assembled by Mooney is fascinating as far as it goes, but there is a curious lacuna in his piece. Psychological explanations for public distrust of science shouldn’t obscure the often rational basis for that distrust. The scientific method is one thing; science as it is practised in the real world is quite another. The process by which scientific research is funded and reported on is often opaque, and can be distorted to suit the interests of powerful constituencies. As Hind argues, whereas in the eighteenth century religion was arguably the main intellectual source of authority, today “[science], not theology, has become the arena” through which interested parties attempt to legitimise their authority – “it is through their claims to rationality and scientific understanding that our guardians bind us in obedience to the established order” (Hind 2007:49). To the extent that scientific research agendas are determined by corporate priorities, and to the extent that the results of that research are selectively communicated by corporate media, it is understandable that people should treat scientific claims with caution. If people doubt what they are told about what the scientific evidence says, I would suppose that this is in no small part because corporate interests have recognised that “doubt is our product” (cf. Oreskes and Conway, Merchants of Doubt).
This is important because, while it may be impossible to transcend our psychological limitations, we can more realistically seek, through institutional (i.e. political) change, to make scientific research and the communication of scientific evidence more transparent and accountable – and thereby increase the extent to which people’s beliefs are accurately informed by that evidence.
Originally published at New Left Project