These days here in the "reality-based community", we like the idea of following the consensus opinion of scientific experts; but it could be argued that this is a form of the classical logical fallacy of "argument from authority".
There's a common resolution out there to this apparent contradiction: "argument from authority" gets rephrased as something like "argument from false or misleading authority". I'm not at all convinced that this is the right approach.
If you look around at the way we actually evaluate information, I think you can see that we use multiple stages; there are at least two levels of engagement with two different standards of evidence: the quick look and the close focus. A rule of thumb like "trust the experts" is excellent in the early stages, but the logical fallacies become important in the later stages.
We have two "gatekeepers", and both are very useful, but they work at two different phases in the intellectual process.
There are far too many voices in the world to treat all of them equally and thoroughly; instead, we try to first filter out things that don't seem promising. And we often do it using rules of thumb that would seem very sloppy if invoked in the more rigorous later stages of evaluation.
That initial filter does not seem to be discussed very often-- in fact, I think we often pretend that it doesn't exist or shouldn't exist, though really something like it is an absolute necessity.
Once we recognize that that initial filter is needed, then we can think about how it works, and how we might improve it...
re-writing the logical fallacies
There are some long-established, well-known principles of intellectual discourse, such as the various logical fallacies, including the ever popular (and sometimes correctly applied) "Argumentum ad hominem" and its less-heard-from flip-side "Argumentum ad verecundiam" (argument from authority). With the first you're asked to ignore a statement because of who said it; with the second you're asked to accept it for the same reason. The underlying principle is that you should "address the speech and not the speaker", because logically they really are separate: a true statement can come from a poor source ("even a stopped clock is right twice a day") and a false statement can come from an impressive source ("even Homer nods").
All of this, however, is an extremely high standard to maintain, and many of the modern write-ups you see of these classic principles include a lot of hedging, if not outright re-writing to soften them.
To take an example close to home, here at the dailykos, the often excellent SkepticalRaptor posted a write-up where he linked to a version of "Argumentum ad verecundiam" of his own, which he had reframed as "Argument from False or Misleading Authority".
Re-writing one of the classical logical fallacies strikes me as a bit high-handed, but I think I understand entirely where Skeptical Raptor is coming from. He has some respect for trained experts speaking about their field of expertise (and I do too, most of us do). Arguably the entire body of knowledge of modern civilization is based on a trust network of experts in different fields, but the rule of thumb to "trust the experts" is directly in conflict with this logical fallacy.
This attempt at reframing this principle to dodge one problem runs into a different one: the entire point of regarding "argument from authority" as a fallacy is to recognize that even someone who's an acknowledged expert can be wrong, even in their field of expertise-- e.g. Louis Agassiz was a brilliant, well-respected biologist, but he came down on the wrong side of the theory of evolution.
I think you can see a similar re-writing of the classics in other places such as the "rationalwiki", in the pages Argumentum ad hominem and Argumentum ad verecundiam.
Many reasonable people have noticed that there are some very unreasonable things you can do with the standard logical fallacies (e.g. a global warming denialist might very well claim that citing the consensus among climate experts is merely "argument from authority").
So evidently, many of these reasonable people have concluded that there are right and wrong ways of using the fallacies, and that you need to distinguish between the two when defining them. My own feeling is that these are very awkward intellectual maneuvers, and they may not be entirely necessary.
I suggest that the resolution of this apparent contradiction is not to complicate the definitions of the logical fallacies, but just to recognize explicitly that we use two different phases of evaluation with two different standards of judgment.
the two stages of evaluation
A number of issues are clarified once you think in terms of a dual-level process.
We have a long, highly developed tradition of what standards to apply at the second level when scholars are engaged in detailed study and debate. This is the world in which the logical fallacies were originally formulated.
Expecting these standards to work the same for the first level strikes me as a source of confusion.
There's no contradiction between "citing experts" and calling "arguing from authority" a fallacy if you think of seeking experts as a heuristic that gets applied at an early stage of screening, when trying to decide if a source of information is worth paying attention to at all. The standard logical fallacies are part of a different more rigorous approach that's applied to something that's made it through the initial stage. If someone keeps insisting that expertise settles everything once you've made it past the initial filter, then it's appropriate to complain about argument from authority.
Most of the explicit rules of intellectual engagement that we have effectively presume that you've made it through that initial filter. We have very little in the way of codified rules for how (or when) our initial filters are supposed to work.
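For what it's worth, the two-phase process described above can be sketched in code. This is purely a toy illustration of my own (the filter tests, field names, and thresholds are all invented for the example), not a claim about how judgment actually works:

```python
# A toy model of two-phase evaluation. Phase one is a cheap heuristic
# screen applied to every source; phase two is a slower, more rigorous
# check applied only to the survivors. All field names and thresholds
# here are invented for illustration.

def phase_one(source):
    """Quick-and-dirty filter: rules of thumb like 'trust the experts'
    or 'favor a good track record'. Fast, fallible, and fine with that."""
    return source.get("expert", False) or source.get("track_record", 0.0) > 0.7

def phase_two(source):
    """Close focus: judge the argument itself, not the arguer.
    This is where the classical logical fallacies apply."""
    return source["claim"].get("evidence_checks_out", False)

def evaluate(sources):
    survivors = [s for s in sources if phase_one(s)]   # cheap screen first
    return [s for s in survivors if phase_two(s)]      # then the rigorous check
```

Note what the sketch implies: an expert whose argument fails phase two is still rejected (authority doesn't settle the argument), and a sound claim from a source that fails phase one never gets a hearing at all, which is exactly the fallibility of the first filter discussed below.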
Maybe it's worth thinking about what we do and firming up our strategies, and possibly developing some new ones.
experts can be wrong, even in their field
Above, I mentioned that insisting that the consensus among climate scientists settles the issue could be called "argument from authority"-- and actually, that's technically correct: it is not at all a logical impossibility for the 97% (plus) consensus among climate experts to be wrong:
They may have a bad "groupthink" problem; or they may all have fallen into the same trap, misled by quirks of the evidence; or maybe there's an insanely huge conspiracy, and they've all been corrupted by the secret triumvirate of George Soros, Warren Buffett and Dr. Evil.
It would not, however, be a good bet that that's so, and no sensible person would assume that it is.
Before you get to the level of certainty of a logical proof, there are probabilistic rules of thumb to apply to decide what you'll even think about trying to prove.
the nuclear power debate
I'm probably going to write more about this some other time, but let's touch on the way the first-stage filters function in the nuclear power debate.
The anti-nuclear side has an awkward problem at this point, in that they have to deal with folks like myself comparing them to the climate change denialists. My take is that there are some striking similarities: both sides ignore the vast consensus of expert opinion and instead cherry-pick exceptions that agree with their preconceptions.
So, how does the anti-nuclear side deal with this accusation? They make two different, but related moves: (1) they insist they're not denying scientific evidence, they're just objecting to a particular technology, (2) they claim the technical experts in the nuclear power field are not independent and may very well be corrupted by their industrial connections.
(Note: you don't hear that second objection at the dailykos much, because it gets too close to violating some site rules that I have at best mixed feelings about: no "conspiracy theory", and don't call people "shills".)
The way I would respond to those two moves: (1) the distinction between science and technology is often greatly exaggerated (a reflection of the Cartesian mind-body dichotomy, I suspect), and evaluating evidence about a natural phenomenon and evaluating the efficacy of a technique are actually very similar problems; and (2) money can certainly corrupt, but then, money is everywhere, and presuming it only corrupts the people you don't want to listen to is really just a cover for cherry-picking. Further, financial concerns aren't the only force that can corrupt judgment: any sort of engagement with an idea has a way of making people feel committed to it.
The deep need to avoid admitting that you got something wrong seems like a tremendous force in human affairs...
The example at hand: people who are willing to risk the fate of the planet rather than use nuclear power to save it.
argumentum adversum krugman
Paul Krugman has had a number of exchanges with people who want to accuse him of "ad hominem" argument-- he's not always polite to the Very Serious People, and many people seem to use "argumentum ad hominem" to mean "mommy, he was rude to me!".
A notable occasion arose after he referred to Paul Ryan as "The Flim-Flam Man", in "Ad what?":
As I’ve always understood it, ad hominem attacks involve attacking the person in general rather than what the person has to say on a specific issue. ... I did point out that Ryan appears to be faking it in the selling of his plan — and I documented that assertion with specifics on the plan, on how he gamed the CBO process, and on the differences between how he talks about the deficit and what his plan would actually do.
In the comments, sblundy of Boston states it succinctly:
That's how I understood it. An ad hominem attack means to attempt to discredit an argument by asserting that the arguer is despicable. You tend to discredit arguments and then conclude that the arguer is despicable.
And Jean Baptiste Botul of Paraguay gets to the point that I've been making here:
... though ad hominems are fallacious in that they're not universally valid, they can be highly reliable inferences that lead to true conclusions a lot of the time.
There's a funny quirk to this business, by the way: If you say "don't listen to him, he engages in 'ad hominem' attacks", that in itself is an ad hominem argument.
don't trust known liars
Daniel Davies (aka D-Squared), as quoted by Brad DeLong:
The raspberry road that led to Abu Ghraib was paved with bland assumptions that people who had repeatedly proved their untrustworthiness, could be trusted. There is much made by people who long for the days of their fourth form debating society about the fallacy of "argumentum ad hominem". There is, as I have mentioned in the past, no fancy Latin term for the fallacy of "giving known liars the benefit of the doubt", but it is in my view a much greater source of avoidable error in the world.
Track records really do matter. "But that guy is always wrong!" really does qualify as an "ad hominem" argument (it's logically possible for someone who's been wrong in the past to be right this time); however, spending a lot of time carefully considering the opinions of people who've been wrong an awful lot would not seem advisable. Once again: there are two phases of engagement, and heuristics involving background and track records really do count, at least in the first phase.
The entire Very Serious Person problem that Krugman discusses so often arises because many people have broken first-stage filters that grant credibility to anyone who looks Serious, rather than, for example, seeking out people with a history of getting things right.
ignorable classes
On nearly any subject of interest, there are bound to be groups of people that aren't really worthy of serious attention, who you would rather not hear from at all. (Judge the speech and not the speaker-- provided they're not one of those guys.) But it's hard to find a general name for these groups classified as "ignorable", because there are many reasons that can happen. You might question their knowledge, their sanity, or their morality, or some combination of the three. They aren't all "crazies"; some, for example, may be hired shills: operatives who are eminently sane and well-informed, but with radically different values or interests than your own.
Different people make different judgment calls and consider different sets of people ignorable. And there are always factions out there trying to manipulate these perception filters-- if they can get the opposition filed away as "extremists", then they win the game. And even if all they can do is achieve the appearance of equivalence between both sides, that may be a good second best.
For this reason alone (it's not hard to think of others), it's arguably a good idea not to take your perception filters too seriously, however necessary they are in general--
(Of course I would say something like this; I'm acutely aware that I'm already filed in many people's "ignorable" category.)
the need for leaks
So, how to proceed? I suggest:
(1) embrace the fact that you really are using rules of thumb that automatically downgrade the opinions of some factions (there's no need to feel ashamed or lazy about this: it's a necessity).
(2) create some exceptions for your perception filters: engage with the "ignorables" with an open-mind on occasion, to see if you can learn something from them. This might even be formalized-- "Every third Sunday, I will park my snark-reflex, and try to talk reasonably with Those Guys."
Stage one is a very quick-and-dirty process: you have to expect it to be fallible... so ideally it should leak: it should be possible for a source to make it through even if the usual criteria aren't met. If it doesn't ever leak, we should look into opening up some holes to make it leak.
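A deliberately leaky filter of that sort is easy to describe concretely. Here's a minimal sketch (again, my own invented illustration): a wrapper that applies the usual screen but lets a small random fraction of the rejected sources through anyway, so the filter's mistakes have a chance of being noticed:

```python
import random

def leaky_filter(source, base_filter, leak_rate=0.05, rng=random.random):
    """Apply the usual phase-one screen, but let roughly `leak_rate` of
    the rejected sources through anyway. The `rng` argument exists so
    the leak can be made deterministic for testing."""
    if base_filter(source):
        return True           # passed the normal screen
    return rng() < leak_rate  # occasionally leak a rejected source
```

With a leak rate of 0.05, about one in twenty sources you would normally have dismissed still gets a hearing-- roughly the "every third Sunday" exception suggested above, made mechanical.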
The phenomenon that some people refer to as living in an "echo chamber" or a "bubble" is a disease of the first stage: if you use affinity to rigorously screen for voices to listen to, and never engage seriously with any dissenting voices, there's a risk of getting trapped in groupthink.
(Though conversely: if you do no screening at all, you can waste your life arguing with people who aren't worth your time, who haven't taken the trouble to master the basics, or perhaps aren't capable of it.)
One of my ideas is to decree a "blue moon day" that I set aside to engage with people who I normally might ignore. Actual blue moons are a little too infrequent for this purpose (the next is in 2018) so it should probably be something else... maybe "first prime Saturdays" would work. Let's try that: on October 3rd, 2015 we open the gates. Feel free to "celebrate" with me.