Lost in the Garden of Forking Paths
Despite the imposition on the patience of my loyal readers, I would be DELIGHTED for the opportunity to discuss this particular paradox of philosophy with someone (anyone) other than Dr Andreassen. I will say only that he has no interest in this kind of conversation, and that his persistence puzzles me. I intend no further answers to him.
A reader with the explosive name of Plutonium writes in with a cogent and coherent argument in favor of materialism.
Let us examine the propositions:
1. All non-agent physical systems are physically decomposable into particles.
If by ‘non-agent physical systems’ you mean dead bodies in motion, things like stars and atoms and clocks; and if by ‘decomposable’ you mean that the one thing can be described and defined entirely in terms of the other, with nothing left over and nothing unexplained, then yes. I agree with this without reservation.
2. All non-agent particles (particles not in an agent system) interact with other particles in specific, deterministic (only one outcome) fashions.
If by this you mean that dead bodies in motion, things like stars and atoms and clocks, given the same initial positions and moved by the same external forces, will end up in the same end position in two different trials, then yes. I agree without reservation.
“1 and 2 are just the normal ‘physics’ assumptions. Tell me if you think these are bad.”
No, I am happy to speak with someone who seems to know what the normal assumptions of physics are. If you start telling me that Newton can predict Newton’s thoughts with mechaNewton, and that normal physics can measure beauty and checkmate and the width of the imaginary line dividing the sea from the sky at the horizon, I will strangle myself with that imaginary line.
3. The physical component of an agent system is physically decomposable into the same particles as the non-agent case.
If we restrict our case to the physical components only, then yes, although obviously the deterministic element falls out of this equation at this point.
4. These particles obey the same rules in the non-agent case as in the agent case.
Concerning external forces acting on the living body, if it happens to be a case where the deliberate and the non-deliberate body would react the same way, then yes. Various chemical reactions, molecular actions, gross physical motions such as the speed with which a man falls off the Leaning Tower of Pisa versus a wax mannikin, yes, all these are the same.
This seems to imply materialism (or effective materialism) to me.
I do not see why. There is some unspoken assumption you are making that I am not, or vice versa. Let us see if we can discover what it is.
Suppose for the sake of argument that we have two automatons. One has been visited by the Blue Fairy from Pinocchio, and can make decisions, and the other cannot. Let us call them Will and Malzel. The first one, Will, has decided to write a play called Hamlet, and has not decided whether or not to end it happily or tragically. The second one, Malzel, cannot make any decisions nor is he actually a “he” — he does not have a point of view. However, a cunning workingman named Descartes can install a clever system of wheels and gears in the gauntlets and vambraces of Malzel, and place a quill pen in his claws, so that Malzel can seem to an unobservant observer to be performing the exact same penmanship motions as Will. If a sheet of parchment is placed under the pen, Malzel will seem to write the same ending as Will.
Now here is where we have the first deviation in unspoken assumptions. Can Will have the exact same internal composition as Malzel? Suppose we take Will apart, and find the same gears and wheels in the same positions and with the same tension on the mainsprings as Malzel. Does this prove that Will never was able to decide how to end the play? That the decision was foreordained by the position of his gears? I do not see why it does.
But suppose Will does not have the exact same internal composition as Malzel. Suppose Will’s internal composition undergoes continual change and refinement as he operates, so that it is simply not clear what exactly is an internal as opposed to an external force operating on him. Will gets angry, let us say, or blushes at the sight of a Femmbot, or gets drunk — is this an external force, or an internal one? Suppose Will makes a resolution not to get angry, struggles with himself, and resists, but then thinks he should have given into the anger after all, because now his other robot friends think him a coward. Is the opinion of the friends an external force? He also is pricked by his conscience into thinking he should not get into fights, and pulled by his sense of honor that gentlemen, automatons or not, cannot back away from certain fights. Is that sense of honor external?
Now, you might be saying at this point, “Hm. I’m hungry. How about a cheese sandwich?” and I would agree that it would be a nice time for a sandwich. Or you might be saying, “But if Will is by definition an automaton, then by definition he cannot make decisions!” Unlike the cheese sandwich question, there I would have to disagree.
We simply and absolutely do not know how, or even if, volition is tied to the motions of the body or the composition of the body. Studies of the brain are inconclusive at best.
We know from experience that certain things are completely under the control of my will, such as whether I eat that cheese sandwich. Others, such as whether I can digest cheese correctly or not because of an ulcer, seem to be less under my volitional control. Still others are completely out of my control, such as whether the Moon is made of cheese, or whether the Moon can break the law of noncontradiction and be both cheese and non-cheese in the same sense at the same time.
We know from experience that a man can decide whether to write the words “Hamlet and Ophelia both die” or the words “Hamlet and Ophelia both wed.” Only a madman would claim that moving the pen hand one way is allowed by the laws of physics and that moving the pen hand another way is impossible, and breaks the Newtonian law of equal and opposite reaction, or breaks the laws of thermodynamics.
Nonetheless that is the claim that has to be made if we analyze Will the automaton, who by hypothesis can make decisions, as being nothing other than an articulated skeletal framework of gears and wheels. Our analysis will by definition exclude the decision making aspect of the reality which is Will.
But, you may be wondering, “If Will decides to end the play tragically, the first letter of the last word in the sentence will be a ‘d’, and he will make a curved line and a vertical upright; if happily, a ‘w’ and make four diagonals. The mechanical forces acting on the pen at this crucial second, time T, must either be to make the curve or the diagonal. Hence an examination of the position of his finger wheels should tell us whether it is possible for him to make one and not the other.”
Well, as far as that goes, it is sound reasoning, but in anything other than a hypothetical thought experiment, all that happens is that we trace back the chain of cause and effect so far and no farther.
We can see that Will’s finger parts are connected to the hand parts, the hand parts connected to the wrist parts, the wrist parts connected to the arm parts, and all of it connected to the head parts, and so on. There the matter is obscure: the brain gears are something like a black box. The head parts get energy from the stomach parts, which get energy from food, which get energy from the sun, which came from the big bang. Everything before and after the chain of cause and effect running through the brain box is unambiguous.
The only thing we know about the black box is that, no matter what the internal arrangement of decision making gears and wheels, no wheel can turn in violation of Newton’s third law of motion nor in violation of the Second Law of Thermodynamics.
While we are at it, we also know that no decision can violate the logical law of Non-Contradiction. Will cannot decide to write both a tragedy and not a tragedy at the same time in the same sense of the word.
Here finally we come to your point: when you pull apart the black box of Will’s head (who, by hypothesis, we said could make decisions) you do not find any wheel or regulator which causes him to obey the Second Law of Thermodynamics nor the Law of Non-Contradiction. Likewise you do not find where the decision making power is lodged.
Let us say we open up Malzel as well. He cannot make decisions: he is not even a he, but an it, an inert collection of mechanisms. Let us say, just to make the argument difficult, that the black box in his metal skull has EXACTLY the same internal arrangement as Will’s.
What do we conclude when this happens?
Here we approach the unspoken assumption. I see two possible assumptions:
If you assume that the decision making power is something like the Law of Noncontradiction, you cannot find it inside the black box of the skull helmet, nor would you expect to find it in any location, or to find it made of anything, because it is not a physical thing at all. It is something which, however, defines how a body acts.
The automaton of Will cannot write the tragic ending if Will has willed himself to write the happy ending, assuming he retains control of his arm and fingers. Will defines this part of how Will acts. Likewise, Will cannot write an ending that is both tragic and non-tragic in the same sense at the same time, because no one can do this. Logic, the nature of reality, defines how that part of Will acts.
Neither definition necessarily rests on setting a chain of physical cause and effect in motion.
For example: a law atom enforcing the law of noncontradiction does not push an inhibiting atom against an illogical atom, expunging it from reality, so that illogical things do not happen. It is just part of the description of reality that reality is logical.
Likewise: a willpower atom of tragic weight carrying the Kill Hamlet instructions is not pushed up against the lighter Happy Ending atom, breaking it into the components Hap and End and the Norse god Ing. It is just part of the description of Will that Will acts by willpower. That is what it means to be human (or, in this case, a decision making automaton).
The other assumption is that the decision making power is a physical thing after all, ergo something like an extra spring or gear that Will has but Malzel does not.
But, alas, this leads us immediately into the paradox of madness: for it means that if the extra spring is present, Will must make the curved line of tragedy with his hand mechanism, whether he wants to or not, whether he thinks he wants to or not (or, worse, because he can only make the curve of tragedy, he must think he wants to even though he actually has no choice).
If the extra spring is turned the other way, his finger mechanism must make the straight line.
Indeed, under this second assumption, it does not matter whether we are examining Will or Malzel: there is no difference between them. The decision making power does not and cannot exist. It is irrelevant to the description of the automaton.
Now, again, this second assumption is the easier one to make. It seems natural. It seems that if the arm gear moves the finger gear in the straight line of happiness, and something in the brain gear moves the arm gear, then, for the same reason that the finger cannot move other than as the arm determines, the arm cannot move other than as the brain gear determines, and the decision making power is just another word, an inaccurate word, for the brain gear.
Under this assumption, to set the brain gear to move the pen in a curve and see it move in a line would indeed be impossible, a violation of some or all laws of physics and logic: an action without a reaction, an action without a cause.
But the one idea I cannot seem to explain to anyone, albeit it seems clear enough to me, is that there is an ambiguity between what we mean when we say the brain gear moves the pen in a line and not in a curve, and what we mean when we say Will’s desire for a happy ending moved him to move the pen in a line and not in a curve.
These are not just two different kinds of motion; they are two different dimensions of reality.
The first is a description of efficient cause only. Efficient cause is admittedly deterministic. The second is a description of final cause. Final cause in the case of humans involves a choice of means and ends, and is inevitably not deterministic.
To have an efficient cause be both deterministic and not deterministic would be a logical impossibility. Likewise, to have a final cause be both deterministic and not deterministic would be a logical impossibility.
To have the efficient cause be deterministic and the final cause be not deterministic, however, strikes me as a paradox, that is, something so odd only a philosopher could believe it, but it also strikes me as the only possible description of experiential reality. It is obvious that we humans select means and ends, or otherwise we could not make decisions. It is also obvious that we humans live in a universe where there is cause and effect, and nothing happens without an efficient cause, or else again decision making would be impossible or meaningless.
At this point, I can only offer a myth or metaphor. Imagine time as a mental construct, or myth. When describing mental actions, that is, decisions, time is a garden of forking paths. We come to a cross road — Kill Hamlet or let him live? — and we see two options, both real and both possible to us. We choose one; the other falls into the past and can never come again. Hamlet dies.
When describing the same event’s physical actions, however, the myth of time is linear. The brain gear moves the hand mechanism in the curve of tragedy because that is the way the brain gear is shaped. There are no crossroads nor turn offs.
Imagine, if you will, walking down this garden of forking paths dragging a flexible chain behind you. The chain is the chain of cause and effect. When you come to the crossroads, the chain bends to the left if you go left. Once you pass the crossroad, the right-hand path does not exist and (from the point of view of the chain) never existed.
To make this image more confusing, imagine now that light waves always follow the chain, so that the chain looks straight, like an iron bar, with not a hair’s deviation to the left or to the right. From the chain’s point of view, it seems as if the garden is painted on plastic, or painted on fluid, and that at each decision crossroad all that happens is that the walker walks straight forward, without making a decision, because decision-making is impossible — but the fluid garden bends to the left or to the right to put the path he chooses under the foot of the man walking.
At first glance, this seems absurd, I grant you: but consider. The rules of physics say we must treat all physical actions like this chain, that is, like a bar that can predict the motions of the man walking. The experience of man says that we must treat all mental actions as semi-deliberate or deliberate, that is, like a walk down a garden of forking paths, bending left and right as we wish.
But there is no law that says these two points of view must occupy the same frame of reference, only that the description of one can map onto the description of another, in much the same way a Mercator projection map of the earth can map every point of longitude and latitude onto a corresponding point on a globe of the earth.
So, if you ask me whether the chain of cause and effect flexes to the left or right when a man decides to write a tragedy or a comedy, I say yes and no.
But before I am accused of absurdity, remember that we are discussing an ambiguous picture of the universe. Yes, from the point of view of the mental realm, where the man makes decisions based on final causes, his pursuit of what seems good to him, and No from the point of view of the physical properties of the physical elements involved in the event.
— which is, by the way, the one way in which we never discuss decision making processes, only the lapse of them.
Physical aspects of the brain, such as the chemical expression of sudden rage, or the lack of ability to form intent due to drunkenness or insanity, are only ever brought into a moral or ethical or legal discussion to excuse someone as lacking the capacity for decision making, not to define the decision nor to say how it is made.
Gratuitous picture of Catwoman (just so readers know this is still me writing):