In this installment we falsify Big Bang determinism of behavior and look at local causality. Try as I might, brevity eludes me, so to keep these articles within reasonable length I have to chop them into smaller pieces. This is the second piece; there will be more, probably “same time, same station” next week.
Many of the ideas in this article refer to my preceding article and the associated discussion. If you haven't read it, it's here:
www.dailykos.com/...
Big Bang determinism:
The principle of causality implies that if you know "the facts and the laws" (measurements of objects, and laws of nature), then in theory you can extrapolate the entire time-line of the measured objects. You can predict their state at any point in the future, and you can "retrodict" their state at any point in the past. This has been remarkably successful for a vast range of objects and phenomena. It gave us the basis to discover much of what we know in the physical sciences, including our theories of the inception of the universe.
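To make that concrete, here is a minimal sketch (the constant-velocity law is chosen purely for illustration, not as a claim about any particular system): given one measured state and a deterministic law, the same formula predicts forward in time and "retrodicts" backward.

```python
# Deterministic extrapolation: one measured state ("the facts") plus a
# law of motion ("the laws") fixes the state at every time, past or future.
def position(x0, v, t):
    # Constant-velocity law: x(t) = x0 + v*t, valid for negative t too.
    return x0 + v * t

x0, v = 10.0, 2.5             # measured position and velocity at t = 0
print(position(x0, v, 4.0))   # prediction:   20.0
print(position(x0, v, -4.0))  # retrodiction:  0.0
```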
Causality at the cosmic scale is also commonly used by those who hold that "free will = 0" (does not exist). Often they argue that if we knew the initial state of every particle at the moment of the Big Bang, we could infer not only the development of galaxies and star systems, but also the evolution of humans and the thoughts and behaviors of each individual. This is the position I call "Big Bang determinism." It's a misuse of causality and it's not even wrong.
The supernatural supercomputer:
Those who believe that "Big Bang determinism" rules out free will often make the argument that "if they had a supercomputer with sufficient sensors at the moment of the Big Bang, they could measure the starting state of every particle and thereby predict every subsequent event in the history of the universe." This would enable them to predict every thought and action of every person, demonstrating that free will does not exist.
That argument presupposes that the supercomputer and sensors, or at minimum the needed materials, blueprints, and algorithms, would have had to exist prior to the moment of inception of the universe. But anything that exists before, above, or outside of nature is by definition "supernatural."
Even if mathematics exists in a Platonic realm, the needed algorithms would still require instantiation in software running on hardware, so Platonic math doesn't obviate the dependence on supernatural hardware. That hardware would have to be all the more supernatural to survive the extreme temperatures of the earliest moments of the universe.
There's also the little problem of quantum uncertainty relations. One can't measure both the position and momentum of a particle with equally high resolution: any increase in the accuracy of one measurement comes at the cost of a decrease in the accuracy of the other. Not only must one have a supernatural supercomputer, it must also perform miraculous measurements.
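In standard notation, the Heisenberg relation puts a hard floor under the product of the two uncertainties: Δx · Δp ≥ ħ/2. Shrinking Δx forces Δp to grow, and vice versa; no instrument, supernatural or otherwise, gets both to zero.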
If one can't get at all of the particles in the early universe, one can't predict all that becomes of them. One can of course believe in such things as a matter of faith, but as with divine intervention, they are beyond the scope of science. The assertion that "free will = 0" based on causal determinism from the inception of the universe is so untestable, in both theory and practice, that it has no place in science-based arguments.
The randomness shield:
Physical randomness breaks causal chains. This is easily illustrated with simple arithmetic. Let A and B represent causally-determined values, and let R represent a random value. Compare the statement A + B = C to the statement A + B + R = C.
If we know the causal mechanisms that produce the values of A and B, then we can determine A and B, and “predict” the sum C in the first statement. We can also predict how changes to the causal inputs will alter the values of A and B and thereby affect C.
But in the second statement, where R intervenes, we can no longer predict C accurately, no matter how much we know about the causal inputs to A and B. The randomness of R "shields" the value C from some amount of the causal determinism embodied in A and B.
If we can establish the parameters (limits) of R, then we can establish the boundaries of the variability of C. But as long as R can take nonzero values, we can never eliminate that variability altogether.
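A few lines of code make the shield visible. This is a minimal sketch, assuming purely for illustration that R is drawn uniformly from [-1, 1]:

```python
import random

def c_deterministic(a, b):
    # No random term: C is fully fixed by its causal inputs.
    return a + b

def c_shielded(a, b, r_limit=1.0):
    # R is random but bounded, so C inherits a bounded unpredictability.
    r = random.uniform(-r_limit, r_limit)
    return a + b + r

a, b = 2.0, 3.0
print(c_deterministic(a, b))                           # always 5.0
print([round(c_shielded(a, b), 2) for _ in range(5)])  # varies within [4.0, 6.0]
```

Knowing the limits of R bounds C between 4 and 6, but no amount of knowledge about the causal inputs to A and B pins C down to a single value.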
The biological definition of evolution is change over time in the heritable characteristics of organisms. Evolution depends on replication, variation, and selection. To the extent that any of the processes that drive variation are truly physically random (e.g., a cosmic ray strikes a germ cell, causing a mutation), the causal chain from the first cells to ourselves is peppered with tiny increments of randomness. Together the increments add up and contribute to the diversity of life we observe. They also add up to the equivalent of the variable R in our simple arithmetic.
When we trace the lineage of life on Earth back to its origin, we do so empirically rather than algorithmically: no algorithm can break through the shield of accumulated randomness to match the empirical findings. That is, we can't conceive of a hypothetical supercomputer and program that, fed the facts of life on Earth at a particular point in history, could accurately reproduce our empirical findings about life substantially before or after that point. Similarly, no algorithm can predict the variations that will occur in the human species over Darwinian time going forward, variations that may eventually produce one or more successor species.
This will remain true even if our hypothetical supercomputer is fed the complete genetic code of an organism at one point in time. At best it could produce a set of possible results (genetic codes at selected points in the past or the future) and rank-order them by "all other factors equal" probability. The accuracy of its extrapolations will necessarily fall off as a function of the distance between its starting point and a chosen endpoint.
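To see why accuracy falls off with distance, suppose, purely for illustration, that each generation carries one independent chance p of a mutation no algorithm can foresee. The odds of an exact extrapolation then decay geometrically with the number of generations:

```python
def exact_prediction_odds(p_random, generations):
    # Each generation independently escapes unpredictable mutation with
    # probability (1 - p_random); an exact prediction requires all to.
    return (1 - p_random) ** generations

for n in (10, 100, 1000):
    print(n, round(exact_prediction_odds(0.01, n), 4))
# 10 generations:   ~0.90
# 100 generations:  ~0.37
# 1000 generations: ~0.0
```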
Further, individual brains (human and other) are infested with random processes. There are noisy neurons whose firing in response to a given stimulus can't be predicted. There is a background level of random noise in the brain at large that cognitive scientists have historically sought to filter out to get at meaningful signals. And there is the theory proposed by Roger Penrose and Stuart Hameroff that quantum mechanical computation occurs in microtubules, protein structures in the cytoskeletons of neurons.
The intervention of small increments of randomness at various points in the brain's signal chains also creates a "randomness shield" that limits the efficacy of causal determinism. Together these increments add up to the variable R in our simple arithmetic, and they make mush of any attempt to extend causal determinism beyond the scope of what I call "local causality."
You can summarize this section as: the greater the degree of randomness that intervenes in a causal chain, the greater the degree to which it shields subsequent events from deterministic prediction.
Local causality:
"Local causality" is causality that can be established at a high level of confidence based on empirically observable variables.
"Local determinism" occurs where an observable cause produces an observable effect with sufficient reliability that the sum of random inputs between the cause and the effect is insufficient to alter the outcome.
These definitions exclude "Big Bang determinism" and other metaphysical factors, and involve only those variables that can be measured over the course of experiments.
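Here is a toy sketch of local determinism under these definitions (the threshold model and its numbers are invented for illustration): the causal signal clears the threshold by more than the bounded noise can subtract, so the random inputs never flip the outcome.

```python
import random

def effect_occurs(cause_strength, threshold=1.0, noise_limit=0.1):
    # Bounded random input; locally deterministic whenever the margin
    # (cause_strength - threshold) exceeds noise_limit.
    noise = random.uniform(-noise_limit, noise_limit)
    return cause_strength + noise > threshold

# Margin 0.5 > noise_limit 0.1: the outcome is the same on every trial.
print(all(effect_occurs(1.5) for _ in range(10_000)))  # True
```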
If you want to design a machine that operates precisely, you'll seek to minimize the effects of random factors. For example, a laboratory-grade scale might be fitted into a glass enclosure to minimize the effects of random air currents when weighing very small quantities of material.
We can easily envision scenarios where we can use local causality to predict human behaviors. For example, knowing someone's answer to the question "Do you eat meat?" is sufficient to predict whether they will choose pizza with vegetable toppings vs. pizza with meat toppings. We don't need to dig for "ultimate causes" for their vegetarianism; knowing that it exists is sufficient to predict their choice of pizza toppings.
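As a toy sketch, the pizza example reduces to a one-variable predictor (the function name and inputs are invented for illustration):

```python
def predict_pizza_choice(eats_meat):
    # Local causality: one observable variable yields a high-confidence
    # prediction without appeal to any deeper or "ultimate" causes.
    return "meat toppings" if eats_meat else "vegetable toppings"

print(predict_pizza_choice(False))  # vegetable toppings
```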
This is important, as it gives us a basis for empirical testing of hypotheses about free will. If free will does not exist and we can specify the relevant starting conditions, then we should be able to predict individual behaviors.
As it turns out, it's not as easy as that.
Stay tuned!