Daniel Kahneman (who is not to be confused with Dan Kahan, but good luck with that) is the author of a book called Thinking, Fast and Slow which, among other things, discusses the way we all have two ways of understanding and making decisions, intuition and reason, which he refers to by the (somewhat) standard jargon "System 1" and "System 2". System 1 is fast but unreliable; System 2 is much more thorough, but slow.
Kahneman's research in human understanding, following the usual practice of psychologists, has typically focused on individual behavior. There are many important aspects of human understanding that have to do with group behavior, though. Human beings working together have the potential to be smarter than human beings in isolation (and unfortunately, they also have the potential to be stupider).
I suggest that if each human being is working with a "System 1" and a "System 2", groups of human beings can have something you might call "System 3": when we get a System 3 that works well, groups of human beings can understand things in greater depth than could be achieved by individuals in isolation, though this overall process is frequently even slower than an individual's reason ("System 2")...
jargon
Kahneman uses the jargon "System 1" and "System 2" in his book-- though in some of his papers, when he wants to get the idea across fast in a limited space, I note that he uses more immediately comprehensible terms such as "intuition" and "reason". I gather that "System 1" and "System 2" were originally intended to be neutral terms that would allow these human characteristics to be investigated without the preconceptions that familiar terms like "intuition" and "reason" have. I'm afraid to my ear the terms "System 1" and "System 2" sound like you're trying to impress people with how scientific you are, and they have some baggage of their own about them (like, are they really "systems"?). I've tried to come up with other names, but the best I can do is things like "fire mind" and "ice mind", and if I started talking like that all the time I'd have to move to Marin County.
System 3
The prime example and inevitable poster-child for System 3 is Science. As I've said before:
... as most of us are aware at this point, scientific training does not turn human beings into perfectly objective, unbiased reasoning machines, and yet the scientific enterprise taken as a whole does a good job of converging on the truth.
For me, that's the existence proof that social groups can be smarter than individuals-- and it raises the question of what sorts of social institutions we might create that can increase our collective intelligence.
Other examples to consider (which have varying degrees of success) include the legal system, business organizations, military organizations, artists co-ops, and so on.
System 3 is a group phenomenon, a matter of the collective intelligence of networks of individuals, so the characteristics of System 3 depend on how we each evaluate information from others, as well as on how we decide what to try to communicate to others.
And it seems to me that our "System 3"s are often subdividable into two stages that parallel the distinction between System 1 and System 2, because System 3 necessarily employs those two systems: it is, after all, another cognitive process conducted by human beings.
This is a frequent theme with me these days, and you'll often find me saying things like:
If you look around at the way we actually evaluate information, I think you can see that we use multiple stages; there are at least two levels of engagement with two different standards of evidence: one that's rough and one that's fine-- the quick look and the close focus.
System 1 and System 2 (or more simply, intuition and reason) are common features of all of us; they're one of the givens of being human (as far as we know). System 3, on the other hand, is an emergent characteristic of social organizations.
Since there are multiple ways of organizing human beings, there's potential for many third brains...
We're not constrained to just fire and ice mind.
We're not limited to just two "thirds", but any third we build must be constructable by and out of our dual selves.
the many names for the intelligence of many
What I'm calling System 3 here is often discussed under a number of different names such as "collective intelligence", "social epistemology", or "cultural cognition". I've been poking around in the literature on this stuff a bit, off and on...
The source of that phrase "cultural cognition" is Edwin Hutchins (Cognition in the Wild, 1995) who did some field work studying how a large naval vessel is run: he concluded that the understanding of running the ship is shared between many individuals, and not actually held entirely by any one of them. He summarized his result with the catchy slogan "cognition is culturally distributed". Myself, I would not want to suggest that this is anything but some very solid work, but I'm afraid his central result may seem somewhat obvious to anyone who isn't a psychologist: in a typical business, for example, the accountants understand the finances better than the engineers, who understand the technical characteristics of the products better than the marketing department, who understand how to appeal to the customers better, and so on. Specialization is ubiquitous in the modern world.
Of late, however, you're far more likely to see the phrase "cultural cognition" employed in an almost opposite sense. Researchers such as Dan Kahan (who is not to be confused with Daniel Kahneman, but good luck with that) at the Cultural Cognition Project focus largely on phenomena like "motivated reasoning", where our judgement appears to be corrupted by our pre-existing opinions and our sense of group identity. This is more a matter of "collective stupidity" than collective intelligence.
But that collective stupidity is indeed a real phenomenon-- the idea that we can rely on "The Wisdom of Crowds" is complicated by the result that even a little cross-communication between individuals can result in group-think effects that get in the way of any attempt at polling multiple independent points-of-view (see Jan Lorenz et al. (2011)).
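To make that Lorenz-style result concrete, here's a toy simulation of my own (a sketch, not their actual experimental setup): a crowd of independent estimators is reasonably accurate on average, but a few rounds of averaging with randomly chosen peers collapses the diversity of opinion without making the group mean any better.

```python
import random
import statistics

def simulate(n=200, truth=100.0, noise=30.0, rounds=5, seed=1):
    """Toy model of social influence on crowd estimates.

    Each agent starts with an independent noisy guess at `truth`.
    In each round, every agent moves its estimate halfway toward
    the average of a few randomly chosen peers' estimates.
    Returns (mean, stdev) before and after the influence rounds.
    """
    rng = random.Random(seed)
    estimates = [truth + rng.gauss(0, noise) for _ in range(n)]
    before = (statistics.mean(estimates), statistics.stdev(estimates))
    for _ in range(rounds):
        new = []
        for e in estimates:
            peers = rng.sample(estimates, 5)
            new.append((e + statistics.mean(peers)) / 2)
        estimates = new
    after = (statistics.mean(estimates), statistics.stdev(estimates))
    return before, after
```

Run it and the spread of estimates shrinks dramatically while the group mean barely moves: after "communication", the crowd looks far more confident, but it is no more accurate, and its narrowed range may no longer even bracket the truth.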
Daniel Kahneman (you know, the other one, "Thinking, Fast and Slow") likes to use the strategy of first polling people in isolation, before sitting down with them all in a meeting where crowd-following effects can corrupt their perception.
"Groupthink" has been a well-recognized phenomenon for quite some time (uncle wikipedia says: term coined in '52, book published in '72, based on studies in the intervening years, credits go to Whyte and Janis, respectively). And if you do websearches on phrases like "how to avoid groupthink" you'll find many business advice articles all of which say essentially the same things (where's the irony mark on this keyboard?).
Clearly, individuals taken en masse don't necessarily get smarter or stupider than the individuals alone: social structures matter. The same individuals might work together better or worse depending on the situation they're in.
The great hope of the "free market" enthusiasts is that market-based systems can pool the information of large numbers of individuals to get an emergent wisdom of greater quality than a small group of experts could provide. But there's a real problem if we're going after "independent" opinions, because we just aren't independent: we all communicate with each other, we all react to the same news stories, and we all make guesses about other people's guesses about what to do.
There's another group of people who've adopted the phrase "Collective Intelligence" to apply to the internet "web 2.0" world of interlocking web sites, where collective intelligence refers to things such as wikipedia, or (even sexier!) trying to tease out useful information from google clicks or twitter traffic.
In his book Programming Collective Intelligence (O'Reilly, 2007), Toby Segaran takes the side of markets:
A well-known example is financial markets, where a price is not set by one individual or by a coordinated effort, but by the trading behavior of many independent people all acting in what they believe is their own best interest. Although it seems counter-intuitive at first, futures markets, in which many participants trade contracts based on their beliefs about future prices, are considered to be better at predicting prices than experts who independently make projections. This is because these markets combine the knowledge, experience, and insight of thousands of people to create a projection rather than relying on a single person's perspective.
A number of "prediction market" web sites have been created, where different people place bets on what they think is likely to happen. Back in the mid-Naughts, Kevin Kelly was arguing that the data showed they seemed to work well (See: Election Prediction Markets, Wisdom of Public Prediction Markets).
The Cosma Shalizi and Henry Farrell paper on Cognitive Democracy primarily makes a very abstract argument that markets are weaker decision-making systems than Hayek and company would have you believe, and that democratic systems have advantages over both markets and hierarchies. (In the authors' taxonomy, markets, hierarchies and democracies are the only three choices: they apparently have a very expansive definition of democracy.)
Shalizi and Farrell place a lot of emphasis on the need for decision making systems to be able to experiment with variations of themselves, and point to internet-based systems such as collaborative web sites as fertile grounds for future experimentation.
improving 1 and 2
Back to Kahneman (you know, the "Thinking, Fast and Slow" guy):
Kahneman is generally pessimistic about attempts at improving our thinking processes-- he's seen too many trained cognitive scientists fall into obvious traps when they really should have known better-- but he does concede that it's possible:
- Our intuition can be trained by our reason.
- Our intuition is capable of realizing when it needs to invoke our reason.
- The two systems can talk to each other and improve how they function together.
This all suggests that it might be possible to contrive some sort of training program to improve individual cognition. Some things like this exist already-- arguably, scientific training is something like this.
There's at least a possibility that we might do better. A few different angles of attack:
- come up with a variant of scientific training for ordinary citizens, who really need to learn what they can expect (or not expect) from scientific evidence, and need some guidance in how to evaluate and apply scientific results. (Personally, I would love to see the general public learn how to live comfortably with uncertainty to the same degree that scientists do.)
- a training program for people interested in learning to become resistant to confidence tricks (which one suspects would also be very useful training for citizens to function in a democracy).
- invent a qualification exam for people trained to avoid the known cognitive biases (a la Heinlein's "trained witnesses" from Stranger).
3 plus
Another angle of attack is to focus on our collective intelligence: to attempt to improve "System 3".
Kahneman comments approvingly on Atul Gawande's thesis presented in the 2009 book The Checklist Manifesto, which argues that systematic checklist-based diagnostic procedures (e.g. for medical purposes) often outperform the judgement of trained, experienced experts. I might point to this as an example of groups of experts collaborating to build a "System 3" that's smarter than an individual expert (or at least, smarter than the average expert).
Myself, I'm always trying to think of different types of collaborative web sites (which Shalizi and Farrell also suggest) that might be worth experimenting with. The existing ones all strike me as having some severe limitations as intellectual forums (this one included)... And I would hope trial-by-twitter-shit-storm is not the ultimate culmination of our social development.
wiring human networks
If we start from the idea that each of us has a dual nature, intuition and reason, that suggests some approaches toward building structures with human components.
You might try to specialize along the lines of these two functions: a group of flame-spotters might function as first-readers, doing quick evaluations of information to decide whether it should be passed to a team of experts who will attempt to do a more thorough evaluation.
You might try to do this with people who seem better at intuition or reason... or it just might be a matter of consciously choosing to perform one role or the other, a recognition that you're probably going to be better at engaging one mode of thinking at a time.
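That division of labor can be sketched in code (purely illustrative; the function names and the toy "urgency" predicates here are hypothetical stand-ins, not anyone's actual system): a cheap, sloppy first pass flags candidates, and only the survivors get the slow, expensive second look.

```python
def triage(items, quick_check, close_read, budget=10):
    """Two-stage evaluation: a fast "System 1"-style screen followed
    by a slow "System 2"-style review of the survivors.

    quick_check: cheap predicate, allowed to be sloppy (false
        positives are cheap; false negatives are the real cost).
    close_read: expensive evaluation applied only to flagged items,
        limited by `budget` (the experts' scarce attention).
    """
    flagged = [item for item in items if quick_check(item)]
    return [close_read(item) for item in flagged[:budget]]

# Hypothetical example: screen messages for urgency, then review them.
messages = ["server down!", "lunch?", "URGENT: invoice", "cat pics"]
urgent = triage(
    messages,
    quick_check=lambda m: "!" in m or "urgent" in m.lower(),
    close_read=lambda m: (len(m), m),  # stand-in for a careful review
)
```

The design point is the asymmetry: the flame-spotters should be tuned to over-flag rather than under-flag, since the expert stage can always dismiss an item, but it never sees what the first stage dropped.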
the connectivity parameter
One issue in "wiring human beings together" is how tightly they should be connected.
Susan Cain, with her book Quiet: The Power of Introverts in a World That Can't Stop Talking (2012) makes the point that there are at least some people who do better working with other people if they don't do it too closely. An article at fastcompany was apparently intended to contrast her opinions with those of Keith Sawyer (author of "Group Genius", from 2007), but to my eye they actually agree with each other quite a bit. Everyone from that world is apparently aware of a result that the group "brainstorming" sessions they were raised on don't have any experimental evidence behind them, and individuals on average are more creative when solving problems alone (this was popularized by the somewhat problematic Jonah Lehrer in an article in the New Yorker, and one hopes they fact-checked that one thoroughly).
Kevin Kelly, in his book Out of Control (1994), wrote about some work by Stuart Kauffman which looked at the adaptability of a network as a function of a connectivity parameter. He found that adaptability peaked at a certain optimum amount of interconnection: it's weak both for poorly connected systems and for highly connected ones.
Kauffman's Law states that above a certain point, increasing richness of connections between agents freezes adaptation. Nothing gets done because too many actions hinge on too many other contradictory actions. In the landscape metaphor, ultra-connectance produces ultra-ruggedness, making any move a likely fall off a peak of adaptation into a valley of nonadaption. Another way of putting it, too many agents have a say in each other's work, and bureaucratic rigor mortis sets in. Adaptability conks out into gridlock. For a contemporary culture primed to the virtues of connecting up, this low ceiling of connectivity comes as unexpected news. (p. 400)
And further:
We own the technology to connect everyone to everyone, but those of us who have tried living that way are finding that we are disconnecting to get anything done. (p. 401)
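Kauffman's result comes from his NK model of fitness landscapes, where N agents (or genes) each depend on K others. Here's a heavily simplified sketch of that setup (assumptions mine: random contribution tables, simple one-bit hill climbing as a crude proxy for "adaptation"), just to show where the connectivity parameter K lives:

```python
import random

def nk_fitness(genome, tables, K):
    """Mean of per-gene contributions; each gene's contribution depends
    on itself plus its K right-hand neighbors (wrapping around)."""
    N = len(genome)
    total = 0.0
    for i in range(N):
        key = tuple(genome[(i + j) % N] for j in range(K + 1))
        total += tables[i][key]
    return total / N

def climb(N=12, K=2, steps=200, seed=0):
    """One-bit hill climbing on a random NK landscape; returns the
    best fitness reached -- a crude proxy for adaptability."""
    rng = random.Random(seed)
    # each gene gets a random lookup table over its K+1 inputs
    tables = [
        {tuple((m >> b) & 1 for b in range(K + 1)): rng.random()
         for m in range(2 ** (K + 1))}
        for _ in range(N)
    ]
    genome = [rng.randint(0, 1) for _ in range(N)]
    best = nk_fitness(genome, tables, K)
    for _ in range(steps):
        i = rng.randrange(N)
        genome[i] ^= 1  # try flipping one bit
        f = nk_fitness(genome, tables, K)
        if f > best:
            best = f
        else:
            genome[i] ^= 1  # reject the move
    return best
```

As K approaches N, every flip disturbs nearly every gene's contribution, the landscape becomes rugged, and the climber stalls on poor local peaks-- which is the "ultra-connectance produces ultra-ruggedness" effect from the quote above. (A proper study would average over many random landscapes; a single run is just an illustration.)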
(And some of you millennials look like you might need to re-learn this bit of "unexpected news" from the mid-90s.)
speed of response
Isn't there a need for a System 3 that can respond fast?
It's all very well for the wheels of Science to grind slowly, but there are emergencies out there, cases where collective action must move more quickly-- or be beaten by the command-and-control competitors.
I note that Thinking in Time (Neustadt and May, 1986), as the title suggests, is acutely conscious of the time pressure on urgent decisions.
They discuss many "mini-methods", but they make it clear throughout that they understand there are limitations on how thorough you can be on the various steps.
E.g. making a timeline will involve selecting just the important stuff, but they're aware that there's no good way to spell out exactly how to do that on the fly.
I tend to presume that any System 3 is likely to be slower than an individual working on a problem, but at least some tasks are likely to be "parallelizable", so that multiple players can work on a problem in pieces without bogging down the whole effort.
crowded fields and/or open channels
Some of the directions you can go with this material may not seem promising at first glance. Schemes for improving the interplay between System 1 and System 2 could easily degenerate into a new self-improvement cult ("We're going to do EST right this time, dammit. You asshole.")
But then, at least that indicates there's an existing slot for such material, there's some interest in schemes like this, and if you're inspired to go in this direction you might actually find a receptive audience.
And schemes for developing new System 3s are in the same territory as business advice books: a crowded field full of shallow, disreputable work...
But once again, any established outlet for a set of ideas may be better than none at all.
And if you get rich with your new book "Enabling the Third Brain for Success in Sex and Real Estate", I want a cut.