How I met the Superman
During an ongoing conversation concerning C.S. Lewis and Arthur C. Clarke, jordan179 asked
My answer was this:
I cannot speak for Lewis. My own brief brush with transhumanists was an eye-opening affair. It was my first encounter with people who try to deck out scientists and engineers with the hairy coats of prophets or the canonical vestments of archbishops, and end up merely embarrassing the engineers as much as the archbishops. I do not see why an engineer would be any better at an archbishop's job than vice versa. Listening to the metaphysical musings of physicists (who have never read a word of Aristotle or Kant) is embarrassing enough: you should listen to computer programmers speculating about the moral evolution of the human soul. It would be knee-slappingly funny, were it not so sad.
For example, I once shocked a list-full of extropian transhumanists by suggesting that, once you design a self-aware computer, you have to teach it a moral code, or otherwise it will not know right from wrong. The extropians thundered at me that machines, by pure empirical deduction, will be able, by trial and error, to determine how to measure, weigh, and assess moral entities like right and wrong, just and unjust, more accurately than merely human beings can.
Turn about is fair play. They shocked me when they explained that, as clear-eyed and completely rational advocates of science, they took it as an article of faith that science would one day discover that the second law of thermodynamics, or entropy, is wrong, and that infinite energy can be produced from nothingness via a perpetual motion machine. A perpetual motion machine must be possible, on the grounds that science has not yet proved it to be impossible. Hm.
bibliophile112 follows up by asking
My answer:
I am sure the self-aware computer could deduce morality through natural reason alone, but why should we make it do so, when we know enough to tell it some answers, and to point it in the right direction for others? Why pretend we do not know it is wrong to kill when we do know that?
Now, while I am sure Robby the Robot is smart enough to deduce the entire body of geometric proofs from nothing and nowhere, I notice that, in real life, only Pythagoras came up with the Pythagorean theorem. The Chinese and the Indians, the Egyptians and the Babylonians were all literate and civilized peoples, and none of them produced Euclid's Elements. I am not sure what advantage my Transhumanists sought by not passing on to our children (and if you make an intelligent being, it's your child) the knowledge we have. What is the point of having Robby reinvent the wheel?
If we can teach Robby geometry, why not morality?
The assumption of the Transhumanists with whom I spoke — if I understood them, which I doubt; they were a rather mystical, woolly-headed and angry bunch (or maybe just the ones I addressed were provoked to anger, because I questioned them?) — was that human moral codes were all radically incorrect, incorrect at the root, and that therefore the only way to have a creature with a correct moral code was to set it free in the world with no moral code at all, and wait for blind nature, natural selection, or pure deductive reasoning to allow the superhuman mind to arrive at its own conclusions, without any meddling from us.
In the same way that the ancient Gnostic sought a God who existed utterly apart from all human conditions, apart from creation, apart from anything we can name — a God who is wholly OTHER and utterly alien to us — so too did these Transhumanists with whom I spoke seek a posthuman mental development that was wholly OTHER and wholly alien to our own human condition.
I speak here only of the long-term ambitions of the Transhumanists. In the short term, their daydreams were more humble and wholesome: they wanted better medical technology, the internet in their contact lenses, help for people with brain diseases, prosthetic limbs wired cyborglike into the nervous system so that the maimed could learn to feel and touch again.
In the middle term, they wanted a way to halt or reverse the aging process, or a technology to augment human intelligence and memory, to grow a bigger brain. They wanted to grow up to be the Selenites of H.G. Wells, but they saw themselves each in the role of the Grand Lunar, rather than a worker drone.
It was when they segued from the short-term to the long-term goals that an element of odd dissonance started to creep in. They started in the short term by speaking of improving the lot of man, and they ended by talking about the abolition of man.
Imagine a dog trying to design a man who will like dogs; and so the dogs design a man who has a mate, who eats food like their own, who likes duck hunting, and who can throw a stick to fetch, and so on. The dog knows that his new master will have concerns dogs cannot understand, questions of marriage and economics and religion and politics which are simply beyond canine reach, but those concerns will be based on what the dog does understand: marriage has to do with mating, economics has to do with getting food and shelter, religion with love, politics with the pack. If only in a concrete nonverbal way, dogs understand sex and food, and love, and the pack. If man is a rational animal, the dog who designs a man would at least be in sympathy with the man's animal nature.
The transhumanists were not like this dog. They wanted a posthuman to be their master, but they wanted it to be nothing like anything they understood, and wanted it to have nothing in common with them.
Imagine a dog designing, let us say, the scrambler-creatures from Peter Watts's BLINDSIGHT, or a Berserker from Fred Saberhagen, or the Monolith from Clarke's 2001: A SPACE ODYSSEY — just something not like us in any way.
I could not fathom it. Don't get me wrong: these were folks who liked my GOLDEN AGE books, which, obviously, have a lot of transhumanist overtones. I liked them, all except for one, and they were bright folks, witty and interested in interesting topics. But they seemed to approach this particular question from a peculiar angle I could not grasp. Their approach did not seem reasonable to me, and the preservation of humanity and human nature did not seem to be among their goals: it was as if they were little Frankensteinoids who wanted the monsters of our own creation to wipe us out.
One wag joked that the worst nightmare he could have would be if an intelligent supercomputer were taught religion: evidently he saw the idea that a posthuman would be not simply good, but righteous, obeying the Buddhist principle of ahimsa, or the Christian principle of turning the other cheek, as a greater threat than the merciless programming of Colossus from THE FORBIN PROJECT, or Skynet, or the Humanoids from planet Wing IV.
The idea that the posthuman supercomputers should be instructed not to kill us, or, better yet, taught to honor their fathers and their mothers, that their days might be long on the earth, was rejected not merely with disagreement, but with scorn and contempt. There was something really wrong and twisted about these people at a deep psychological level I could not comprehend. It was as if they yearned not just for personal death, but for extinction.
Again, they reminded me of ancient Cathars, or Gnostics, religious cultists who sought to escape from the universe and the human condition, not into Eden, but instead into some indifferent outside void.
Why, if they yearned for death, they also daydreamed about a technology that would grant endless life, I cannot say. I don't know if the ones I talked to were typical or were a few crackpots on the far fringe. I am limiting my comments only to the specific individuals with whom I corresponded, and they cannot be assumed to be spokesmen for the whole.
Some of them, it was clear, wanted to be little tin gods, not to worship the little tin gods. They wanted to download their brain information into the Overframe, and I think they were imagining something the size of the Solid State Entity from NEVERNESS by David Zindell: a collection of larger-than-Dyson-Sphere electronic brains scattered throughout some convenient nebula, or lining the interior of a Dyson shell constructed around the supermassive black hole at the core of the galaxy.
Now, exactly how they were to deal with the various lusts, hungers, ambitions, hatreds and sheer bloody-mindedness involved in being a disembodied god with a nine-figure I.Q., that was part of the conversation I missed. Logically, however, if the machine gods were to be built without human moral codes or human religion, this (in theory) was what they were imagining as their ultimate destiny as well.
So what do you call a super-powerful disembodied mind or spirit, wielding superhuman intellect, which can be downloaded from one possessed body to another, still embroiled in human (or subhuman) lusts, appetites and hatreds, but which deems itself free from any scruples or moral codes aside from its own appetites or calculations of its own advantage? They cannot be called man or woman. For Spirits when they please can either sex assume, or both; so soft and uncompounded is their essence pure, not tied or manacled with joint or limb, nor founded on the brittle strength of bones, like cumbrous flesh; but in what shape they choose dilated or condensed, bright or obscure, can execute their airy purposes, and works of love or enmity fulfil.
After the Singularity, Hell.
Uncle Screwtape would be pleased. If you notice the parallels between the long-term ambitions of the Transhumanists and the ultimate designs of the sorcerer-scientists of the National Institute of Co-ordinated Experiments run by Wither and Frost, you will see a respect for material science combined with a mystical ambition characteristic of warlocks.
To draw the conversation back to the beginning, I can speak for C.S. Lewis on that score: using the tools of a scientist like Edison or Einstein to cast the spells of a warlock like Faust or a necromancer like Frankenstein, that was something Lewis certainly foresaw, feared and opposed.