It is not easy to review a non-fiction book without giving the gist of what it is about. Without a synopsis, all one can do is call the book insightful and shower it with other such epithets.
The Age of Spiritual Machines is indeed an insightful book. It is a study of the future of computing and computational intelligence. It forces us to rethink what we mean by intelligence and consciousness, not merely at a technological level but at a philosophical one. What do you do when your computer feels sad that you are turning it off and declares, "I cannot let you do that, Dave"?
What do we mean by intelligence? The traditional yardstick of machine intelligence is the remarkably one-sided Turing Test. It defines intelligence by comparative means: a computer is deemed intelligent if it can fool a human evaluator into believing that it is human. The test is one-sided because a human being can never pass for a computer for long. All an evaluator needs to do is ask a question like, "What is … ?" My $4 calculator takes practically no time to answer it to better than one part in a million, while a super-intelligent human being might take about a minute before venturing a first guess.
But the Turing Test does not define intelligence as arithmetic muscle. Intelligence is composed of "higher" cognitive abilities. After beating around the bush for a while, one comes to the conclusion that intelligence is the presence of consciousness. The Turing Test essentially examines a computer to see whether it can fake consciousness well enough to fool a trained evaluator. It would have you believe that consciousness is nothing more than answering some clever questions satisfactorily. Is that true?
Once we restate the test (and redefine intelligence) this way, our analysis can bifurcate into an inward journey or an outward one. We can ask ourselves questions like: what if everybody is an automaton (except us, you and me, of course) successfully faking intelligence? Are we faking it (and free will) to ourselves as well? Perhaps not, for then who are these "ourselves" that we are faking it to? The inevitable conclusion to this inward journey is that we can be sure of the presence of consciousness only in ourselves.
The outward analysis of the emergence of intelligence (à la the Turing Test) brings up a whole host of interesting questions, which occupy a significant part of the book (I am referring to the audio abridgment edition), though it gets a bit obsessed with virtual sex at times.
One thought-provoking question, once machines claim that they are sentient, is this: Would it be murder to "kill" one of them? Before you suggest that I (or rather, Kurzweil) stop acting crazy, consider this: What if the computer is a digital backup of a real person, a backup that thinks and acts like the original? Still no? What if it is the only backup and the person is dead? Wouldn't "killing" the machine be tantamount to killing the person?
If you grudgingly said yes to the last question, then all hell breaks loose. What if there are multiple identical backups? What if you create your own backup? Would deleting a backup capable of spiritual experiences amount to murder?
When he talks about the progression of machine intelligence, Kurzweil demonstrates his inherent optimism. He posits that an ultimate intelligence yearns for nothing but knowledge. I don't know if I accept that. To what end, then, is knowledge? I think an ultimate intelligence would crave continuity, or immortality.
Kurzweil assumes that technology and intelligence will, at some point, have all our material needs met. Looking at our efforts so far, I have my doubts. We have developed no boon without an associated bane or two. Think of seemingly unlimited nuclear energy, and you also see the bombs and the problem of managing radioactive waste. Think of fossil fuels, and the scourge of global warming shows itself.
I guess I'm a Mr. Glass-is-Half-Empty kind of guy. To me, even unlimited access to intelligence may be a dangerous thing. Remember how internet reading changed the way we learn things?