The Cabinet of Wisdom
PART VIII
The Simple Error of the Argument from Complexity
There is one basic objection to the idea that non-self-aware tools do not think, and it is expressed in two arguments. The first is the argument from complexity, and the second is the argument from emergent properties. Let us address the first and weaker one here, and the second in another column.
The argument from complexity is the crux of the matter that causes so much woe and confusion.
A machine can mimic the motions produced by a man when he is deliberately acting in a mechanical, hence non-deliberate, way, that is, when he is plodding through a routine task limited to unambiguous options, as with a stack of Hexapawn matchboxes, a chessplaying cabinet, or a Chinese Room made by Fu Manchu that quotes Lao Tzu.
When his possible range of actions cannot be limited, he is not acting mechanically, and a machine cannot mimic him. This is because complex rote actions can be simplified into simpler rote actions, but unlimited, that is, undefinable, actions cannot be simplified into rote actions. See Goedel for details.
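For the curious reader, the matchbox mechanism mentioned a moment ago can be sketched in a few lines of Python. The position and move names below are placeholders of my own invention, not the full rules of the game; the sketch shows only the rote machinery of box, bead, and discarded bead.

    import random

    # A sketch (not a full game) of the matchbox mechanism: each box is labeled
    # with a board position, and the beads inside stand for the legal replies.
    # The position and move labels are invented placeholders.
    matchboxes = {
        "position-A": ["move-1", "move-2", "move-3"],
        "position-B": ["move-4", "move-5"],
    }

    def rote_reply(position):
        # The "decision" is nothing but drawing a bead at random from the box
        # that matches the current position.
        return random.choice(matchboxes[position])

    def punish_loss(position, losing_move):
        # After a lost game, the losing bead is discarded, so that rote response
        # is never given again. No step involves deliberation.
        if losing_move in matchboxes[position] and len(matchboxes[position]) > 1:
            matchboxes[position].remove(losing_move)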
Rote thought, or mechanical thought, is when a mind capable of deliberation and voluntary action obeys an instruction without deliberation — thinking without thinking about it, as it were.
If the thought-process is simple, logical, and mechanical, it can be tracked by a mechanical system, as when a man pushes an abacus bead each time he counts a peach put in a peach jar.
Lo and behold, once done, the number of peaches and the number of beads match, even though the man himself was not paying attention to the counting process.
Likewise, a simple machine, such as a turnstile placed over the mouth of a peach jar that moves an abacus bead once each time a peach rolls past it, will “count” the number of peaches rolled into the peach jar by moving the same number of beads on the abacus.
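Rendered as a sketch in Python, with the names invented for the illustration, the whole “mind” of such a peach-counting jar comes to this:

    # The whole "mind" of the peach-counting jar: a tally that advances one step
    # each time a peach rolls past the turnstile. No step involves any awareness
    # that peaches, or anything else, are being counted.
    class PeachJarTurnstile:
        def __init__(self):
            self.beads_moved = 0  # beads pushed across the abacus rack so far

        def peach_rolls_past(self):
            self.beads_moved += 1  # one click, one bead, nothing more

    jar = PeachJarTurnstile()
    for _ in range(7):           # seven peaches roll into the jar
        jar.peach_rolls_past()
    print(jar.beads_moved)       # prints 7; the meaning belongs to the observer, not the jar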
But the turnstile and the abacus beads are not actually doing any mental process. Please note that the only property the beads and the peaches have in common is a formal property, namely, they share the same number.
Only to a human observer do the abacus beads have a meaning when moved from one side of the bead rack to the other. A passing stranger, once the peaches are all inside the jar, seeing the beads but not opening the jar, does not know what the bead positions represent.
Again, if our peach-counting peach-jar used a cogwheel and escapement that clicked once clockwise for each peach rolled, the formal property of shared number would be abstract, namely, some fixed angle of turn of the wheel would match the number of peaches.
Likewise, if a clockmaker inscribes Arabic numerals on the teeth of a cogwheel or the face of a clock, the turning hands of the clock have a meaning to the human observer, but not to the clock.
Other types of thinking, such as classifying the peaches according to “freshness” or some other quality not easily reduced to a measurement, require judgment, which no abacus, no clock, no adding machine, can make.
Certain types of judgment in certain circumstances that can be reduced to simpler mechanical thought-processes, such as judging positions of chessmen on a chessboard, cloud the question of what is a mechanical process and what is not, because, in the case of a chessplaying cabinet, unlike Hexapawn, the reduction of myriad mechanical processes to one result is too complex for a human mind to grasp.
Also, as a matter of historical record, a well-known chessmaster said that his thought process was not mechanical, and could not be reduced to a mechanical operation, and he thus predicted — and predicted wrongly — that no chessplaying calculation engine would or could ever be made.
He mistook intuitive or indeterminate states of mind for complex yet mechanical actions. A chessplaying cabinet uses no intuition to determine its responses to a chess move.
Indeed, it does not deliberate, in that sense, at all; it merely gives a rote response to a given board situation, based on the mechanical motions of its interior parts, set in motion by keystrokes or some other input causing motions in one part of its machinery.
Those motions are meaningless from the viewpoint of the cabinet, because the cabinet has no viewpoint. The cabinet does not crave the sensation of victory when it presents the winning move in response to a player’s prior move. It is giving mechanical motions that copy the form of a rote response of a rote thought process.
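A minimal sketch, with board situations and replies invented for the illustration, of what a rote response to a given board situation amounts to:

    # The cabinet's "play," reduced to its bare form: a fixed table from board
    # situation to reply. The entries are invented placeholders; a real chess
    # machine compresses an unthinkably large table into calculation, but the
    # relation of input to output remains rote.
    ROTE_RESPONSES = {
        "board-after-e4": "reply-e5",
        "board-after-d4": "reply-Nf6",
    }

    def cabinet_move(board_situation):
        # No craving for victory, no viewpoint: merely a lookup.
        return ROTE_RESPONSES.get(board_situation, "resign")

    print(cabinet_move("board-after-e4"))  # "reply-e5"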
The rote may be complicated, indeed, more complicated than the human mind can grasp, but, from this, it does not necessarily follow that the chess machine thinks about chess, likes thinking about chess, or has any thoughts or feelings at all on the topic.
The chessplaying cabinet does not desire checkmate.
It may be that all thoughts could one day be mimicked by a mechanical process of sufficient complexity. It may be that certain types of intuitive or creative or self-referential thinking cannot be so mimicked. That is an issue irrelevant to the current question.
Now, to dispel this spurious objection, let me point out that it is simply a non sequitur to say that, since a simple mechanical operation is not thought, therefore a complex mechanical operation is thought.
The fact that no one man can follow the rapid opening and closing of mechanical relays when ten thousand abacuses all operate at once does not mean that ten thousand abacuses can think.
One objection that is endlessly repeated is that since a given machine, such as the Chinese Room or the Hexapawn cabinet, produces a result that a thinking human could have produced had he been there, or, in other words, that the machine can pass the ‘imitation game’ test suggested by Alan Turing, therefore the machine thinks.
But that simply does not follow.
The engineer who makes the machine thinks. The machine does no more than function as designed.
The mere fact that, in a specific case, the machine might produce an outcome neither the engineer nor any human thinker anticipated does not mean that the machine, rather than the engineer, engaged in the voluntary mental process of associational perception, abstraction, passion, and volition called thought.
The fact that a rote mental process is complex means it is hard to visualize. It does not mean it possesses free will or has a thought process of its own.
All it means is that a mechanical motion of parts has the same form as some rote thought process, and the similarity of form means that, if a symbolic meaning is associated by a human observer with the mechanical motions, then, by no coincidence, the mechanical motion will represent the same thing that the rote thought process would have represented, had anyone taken the trouble and time to think it through. The form is the same, just as the form of a bootprint in the snow is the same as the bottom of the boot of a stubborn Vermont hiker crossing a snowy field. It does not mean the bootprint is the boot, nor the hiker.
A simple example of a complex mental process might make this clear:
A human mind cannot visualize a chiliagon, for example. A mathematical expression can deduce the angle of a regular equilateral figure with a thousand angles as easily as it deduces the angle of a regular equilateral figure with three.
The human eye cannot distinguish between a 999-sided figure and a 1000-sided figure. But an instrument can, and a clockwork arrangement could represent the formula relating the interior angle to the number of sides in a regular figure.
For the record, the formula for the sum of the interior angles is (number of sides − 2) × 180°. When divided by the number of sides, this gives the degree of any one of the interior angles, which, since the figure is regular, all equal each other. Hence the interior angle of an equilateral triangle = 60°; of a square = 90°; of a pentagon = 108°; of a hexagon = 120°; of an octagon = 135°; of a dodecagon = 150°; and so on.
A human eye can see the proportions of an equilateral triangle and square at a glance, and, even without a protractor, might be able to estimate the interior angles of hexagons and dodecagons.
Now, as a thought experiment, let us take a polygon, such as a square, and have a clockwork called an Angle Calculation Hypothetical Engine. Let it turn wheels in mechanical response to keys pressed and a big red lever pulled, and let it display a wheel in a window reading the answer of ninety degrees.
This is a simple enough mathematical operation that even an otherwise innumerate observer could do the figuring in his head, or with the aid of a slate and chalk. Let us hypothesize that the Angle Calculation Hypothetical Engine cannot pass the Turing Test, nor does it think, nor is it self-aware.
Hence, for a thousand-sided figure, a chiliagon, the interior angle = 179.64°, and for a nine-hundred-ninety-nine-sided figure, which perhaps is called an enneahectaenneadecanonogon, the interior angle = 179.6396°. For the record, the interior angle of a myriagon, a ten-thousand-sided figure, is 179.964°.
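As a sketch of what such a hypothetical engine does (the function name is my own; the arithmetic is nothing more than the formula given above):

    # The whole "mind" of the Angle Calculation Hypothetical Engine, reduced to
    # one line of arithmetic: interior angle = (number of sides - 2) * 180 / sides.
    def interior_angle(sides):
        return (sides - 2) * 180 / sides

    print(round(interior_angle(4), 4))      # square: 90.0
    print(round(interior_angle(1000), 4))   # chiliagon: 179.64
    print(round(interior_angle(999), 4))    # 999-gon: 179.6396
    print(round(interior_angle(10000), 4))  # myriagon: 179.964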
Now, neither I, nor any human, Martian, or elf anyone has ever had the pleasure of meeting, can hold a visual image in his head of a chiliagon, an enneahectaenneadecanonogon, or a myriagon.
Now, I can visualize a square. I could draw a fair approximation of one freehand on the back of a paper envelope, and take a protractor and measure the resulting interior angle, which is a right angle. I could do the same with an equilateral triangle, and measure the result, which is one third of two right angles, or 60°.
I cannot visualize a myriagon, and if I drew one freehand on the back of an envelope, I suspect that, to the untrained eye, the resulting figure would look much like a circle.
Nonetheless, if I had an army of mathematically-trained surveyor slaves and a paper envelope the size of Kansas, under the whips of cruel overseers over a tragic decade, such a figure could be drawn, and a sufficiently sharp-eyed undergraduate armed with a sufficiently finely graded protractor could measure the angle, presumably confirming the value our calculation engine has predicted, namely, 179.964° — but I still could not visually picture the Kansas-sized regular polygon in my mind, and if I flew over it on a Zeppelin, and saw it with my eye, I doubt that my vision would be acute enough to distinguish it from a circle.
But if the triangle on the back of the envelope does not think, then the myriagon the size of Kansas does not think either. Nor does the protractor think. I made no comment about the sharp-eyed undergraduate. Likewise, if I deduce the interior angle of an imaginable figure, such as a triangle, square, or hexagon, using a clockwork engine, in no wise does it think.
So, when the Angle Calculation Hypothetical Engine deduces the interior angle of a literally unimaginable figure, or, in other words, produces a result no human mind anticipated or foreknew, and the answer is correct, does that prove that the calculation engine can think?
A common objection raised to the Chinese Room argument is that, since the answers from the front mail slot, written in red ink, are by hypothesis indistinguishable to an unobservant observer from the real and wise answers given by Lao Tzu, and since neither the man trapped in the Chinese Room, nor the evil genius who made the Chinese Room and stocked its library shelves, nor Lao Tzu himself wrote nor anticipated in detail this particular answer given from the front mail slot, and since the thoughtful answer must arise from a thinking entity, therefore, by process of elimination, the Chinese Room thinks.
By the selfsame logic used in objections raised to the Chinese Room argument, since the engineer of the ACHE cannot picture in his visual imagination the difference between a chiliagon and a myriagon, nor can he necessarily know nor guess the number value of the interior angle, given in degrees of a circle, of a nine-hundred-ninety-nine-sided enneahectaenneadecanonogon, and since the thought must arise from a thinking being, then, if it is not the engineer who thought the answer, it must be the engine, which must be a thinking being, QED.
By the selfsame logic, suppose I am an officer in the army and you are a scout. I write down an order on a bit of paper, saying “Search for the enemy to the north if the weather is clear. If it is foggy, discontinue the search.” You go out scouting. I do not know if it will be foggy or clear, so I do not know what you are going to do. You do not know, until you go north and see the weather conditions. The fog is not a living thing but an atmospheric condition, so it does not know. Fog rolls in. You look at the bit of paper. It says to discontinue the search. You stand by.
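The whole content of the bit of paper, rendered as the trivial conditional rule it is (the wording below is my own paraphrase):

    # The entire content of the written order, as a rule the scout carries out.
    # Nothing here decides anything; the rule merely maps a condition to an action.
    def written_order(weather):
        if weather == "clear":
            return "search for the enemy to the north"
        return "discontinue the search"

    print(written_order("foggy"))  # "discontinue the search"; the scout stands by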
Disaster happens! The lack of proper scouting has destroyed the King’s Third Regiment, who were eaten by jaguars. There is a court martial to inquire. The court wants to know: who made the decision to discontinue the search? I claim I was not there when you decided to discontinue, since I had retired to my bunk by that hour, and was not in radio communication with you. Therefore, I did not make the decision. You say you were following orders; therefore you did not make the decision. The fog is not available to give testimony at trial, but a public defender on its behalf says that the fog was merely in the area, minding its own business, does not speak English, and therefore the fogbank did not make the decision.
By process of elimination, the bit of paper made the decision.
This conclusion is sufficiently absurd that I hope the error is clear.
The map is not the territory; the word is not the thing it represents. A bootprint in the snow is not a stubborn Yankee in winter boots hiking across a snowy field in Vermont.
The boot and the bootprint have the same form. The word represents the object. A map copies the contours of the terrain.
In similar fashion, a stack of Hexapawn matchboxes, a chessplaying cabinet, a Chinese Room, or a peach-counting jar copies the contour, that is, the form, of thought.
From the resulting forms, a thought can be decoded or deduced by a thinking observer. To the observer the meaningless symbol manipulations of the matchboxes or cabinet drawers or rolodex catalogs have meaning. By no coincidence, it is the same meaning he himself would have deduced by a patient and error-free process of thought, could he live long enough to go through the endless permutations.
If an engineer builds a clockwork and it functions as designed, and it can mimic the same form as a rote thought process, even if he cannot anticipate the outcome of the mimicked actions, the clockwork is not thinking.
It is mimicking a form, following an algorithm, carrying out actions of one inanimate part pushing another pushing a third, which are meant to represent or reflect a mechanical parallel or mechanical analogy to a human thought process.
If all human judgment calls, and, indeed, all human decisions and thought processes, from jury verdicts to bluffing in poker to writing a poem to saying a prayer to falling in love to predicting the weather based on the smell of the wind and the ache in your toes, were mental processes of this complex mechanical type, then all human decision processes could be mimicked by a mechanical process, in the same way an abacus can mimic counting.
But that is an open question, which this argument does not address. The argument here is only that the thought represented by a symbol written or inscribed or drawn on a card, on a matchbox top, or the tooth of a gear, or, indeed, pushed together by a mechanical process in the forms of electrons and presented on a glowing screen, is not present as a material property of the card, top, tooth, or screen.
Likewise, the beads and rolodexes and clockworks, at no point of their moving process, contain the symbol as a material property.
Indeed, whether we are discussing the human brain or anything else, the material properties can only be discussed or described in material terms, that is, in terms of combinations of quantities of mass, length, duration, temperature, candlepower, current, moles.
The immaterial aspect of the human brain, or anything else, can only be discussed or described in imponderables, that is, in categories of true and false, reasonable and unreasonable, vicious and virtuous, fair and foul, and so on.
It is true that beads and blocks and keyboards and clockworks can be inscribed with shapes or marks that have meaning. If inscribed with Chinese ideograms, they will have meaning to Chinamen; if written in the Roman alphabet, to Frenchmen and the lesser races using their script; if in cuneiform, to Babylonians; if in hieroglyphs, to Egyptians; and the Voynich Codex has no meaning to any living person.
Because human thought, when it is logical and limited, can be copied in form by mechanical motions (as when a peach-counting jar can mimic the counting process by pushing beads on an abacus or turning a toothed gear), and since those resulting forms will have a meaning to the Chinamen or Frenchmen or Babylonians or Egyptians (and to no one else), the machine looks like it is thinking.
Certainly, such machines can serve as a useful prop or aid to thinking. This is particularly true in chess, which is designed to be slightly more complicated than a human mind can easily grasp.
Whether or not a machine, or a Chinese Room, can perform the routines of a human conversation with all its metaphors, nuances, and ambiguities is an open question, but makes no difference to the process described here.
By hypothesis, a Chinese Room sufficiently complex to anticipate all possible nuances of conversation, if such a thing is possible, is not doing anything that requires knowledge of the meaning of the symbols written on the cards or wheels or circuits or pages being manipulated by the mechanical process of the Room.
But it does not follow that this process is thought, nor anything like thought.