Tag Archives: consciousness

Man as Chinese Room

In the previous posts in this series, we discussed how devastating Searle's Chinese Room argument was to the premise that our brains are digital computers. He argued, quite convincingly, that mere symbol manipulation could not lead to the rich understanding that we seem to enjoy. However, I refused to be convinced, and found the so-called systems response more convincing. It was the counter-argument saying that it was the whole Chinese Room that understood the language, not merely the operator or symbol pusher in the room. Searle laughed it off, but had a serious response as well. He said, “Let me be the whole Chinese Room. Let me memorize all the symbols and the symbol manipulation rules so that I can provide Chinese responses to questions. I still don’t understand Chinese.”

Now, that raises an interesting question: if you know enough Chinese symbols, and Chinese rules to manipulate them, don’t you actually know Chinese? Of course you can imagine someone being able to handle a language correctly without understanding a word of it, but I think that is stretching the imagination a bit too far. I am reminded of the blindsight experiments where people could see without knowing it, without being consciously aware of what it was that they were seeing. Searle’s response points in the same direction: being able to speak Chinese without understanding it. What the Chinese Room is lacking is the conscious awareness of what it is doing.

To delve a bit deeper into this debate, we have to get a bit formal about syntax and semantics. Language has both. For example, a statement like “Please read my blog posts” has the syntax originating from the grammar of the English language: symbols that are words (syntactical placeholders), letters and punctuation. On top of all that syntax, it has a content: my desire and request that you read my posts, and my background belief that you know what the symbols and the content mean. That is the semantics, the meaning of the statement.

A computer, according to Searle, can only deal with symbols and, based on symbolic manipulation, come up with syntactically correct responses. It doesn’t understand the semantic content as we do. It is incapable of complying with my request because of its lack of understanding. It is in this sense that the Chinese Room does not understand Chinese. At least, that is Searle’s claim. Since computers are like Chinese Rooms, they cannot understand semantics either. But our brains can, and therefore the brain cannot be a mere computer.

When put that way, I think most people would side with Searle. But what if the computer could actually comply with the requests and commands that form the semantic content of statements? I guess even then we would probably not consider a computer fully capable of semantic comprehension, which is why if a computer actually complied with my request to read my posts, I might not find it intellectually satisfying. What we are demanding, of course, is consciousness. What more can we ask of a computer to convince us that it is conscious?

I don’t have a good answer to that. But I think you have to apply uniform standards in ascribing consciousness to entities external to you: if you believe in the existence of other minds in humans, you have to ask yourself what standards you apply in arriving at that conclusion, and ensure that you apply the same standards to computers as well. You cannot build circular conditions into your standards, such as requiring that others have human bodies, nervous systems and an anatomy like yours before they can have minds, which is what Searle did.

In my opinion, it is best to be open-minded about such questions, and important not to answer them from a position of insufficient logic.

Minds as Machine Intelligence

Prof. Searle is perhaps most famous for his proof that computing machines (or computation as defined by Alan Turing) can never be intelligent. His demonstration uses what is called the Chinese Room argument, which shows that mere symbol manipulation (which is what Turing’s definition of computation amounts to, according to Searle) cannot lead to understanding and intelligence. Ergo our brains and minds could not be mere computers.

The argument goes like this: assume Searle is locked up in a room where he gets inputs corresponding to questions in Chinese. He has a set of rules to manipulate the input symbols and pick out an output symbol, much as a computer does. So he comes up with Chinese responses that fool outside judges into believing that they are communicating with a real Chinese speaker. Suppose this can be done. Now, here is the punch line: Searle doesn’t know a word of Chinese. He doesn’t know what the symbols mean. So mere rule-based symbol manipulation is not enough to guarantee intelligence, consciousness, understanding and so on. Passing the Turing Test is not enough to guarantee intelligence.
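To make the setup concrete, here is a toy sketch of that rule-following operator (my own illustration, not Searle's; the rulebook entries are hypothetical): a program that produces Chinese replies purely by matching input symbols against rules, with no understanding anywhere in it.

```python
# A toy "Chinese Room": rule-based symbol manipulation, nothing more.
# The rulebook below is a hypothetical illustration; a convincing room
# would need vastly more rules, but the principle is identical.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def chinese_room(symbols: str) -> str:
    """Return an output symbol string chosen purely by pattern matching."""
    # The operator (this function) attaches no meaning to the symbols.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

The replies are syntactically fine, yet no component of the program “knows” Chinese, which is exactly the intuition the thought experiment pumps.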

One of the counter-arguments that I found most interesting is what Searle calls the systems argument. It is not Searle in the Chinese Room that understands Chinese; it is the whole system, including the ruleset, that does. Searle laughs it off saying, “What, the room understands Chinese?!” I think the systems argument merits more than that derisive dismissal. I have two supporting arguments in favor of the systems response.

The first is the point I made in the previous post of this series. In Problem of Other Minds, we saw that Searle’s answer to the question whether others have minds was essentially by behavior and analogy. Others behave as though they have minds (in that they cry out when we hit their thumb with a hammer), and their internal mechanisms for pain (nerves, brain, neuronal firings and so on) are similar to ours. In the case of the Chinese Room, it certainly behaves as though it understands Chinese, but it doesn’t have any analogs in terms of the parts or mechanisms of a Chinese speaker. Is it this break in analogy that is preventing Searle from assigning intelligence to it, despite its intelligent behavior?

The second argument takes the form of another thought experiment; I believe it is called the Chinese Nation argument. Let’s say we can delegate the work of each neuron in Searle’s brain to a non-English-speaking person. So when Searle hears a question in English, it is actually being handled by trillions of non-English-speaking computational elements, which generate the same response as his brain would. Now, where is the English language understanding in this Chinese Nation of non-English speakers acting as neurons? I think one would have to say that it is the whole “nation” that understands English. Or would Searle laugh it off saying, “What, the nation understands English?!”

Well, if the Chinese Nation could understand English, I guess the Chinese Room could understand Chinese as well. Computing with mere symbol manipulation (which is what the people in the nation are doing) can and does lead to intelligence and understanding. So our brains could really be computers, and minds software manipulating symbols. Ergo Searle is wrong.

Look, I used Prof. Searle’s arguments and my counter-arguments in this series as a sort of dialog for dramatic effect. The fact of the matter is, Prof. Searle is a world-renowned philosopher with impressive credentials, while I am a sporadic blogger — a drive-by philosopher at best. I guess I am apologizing here to Prof. Searle and his students if they find my posts and comments offensive. No offense was intended; only an interesting read was.

Problem of Other Minds

How do you know that other people have minds like you do? This may sound like a silly question, but if you allow yourself to think about it, you will realize that you have no logical reason to believe in the existence of other minds, which is why it is an unsolved problem in philosophy: the Problem of Other Minds. To illustrate: I was working on that Ikea project the other day, and was hammering in that weird two-headed nail-screw-stub thingie. I missed it completely and hit my thumb. I felt the excruciating pain, meaning my mind felt it, and I cried out. I know I have a mind because I felt the pain. Now, let’s say I see another bozo hitting his thumb and crying out. I feel no pain; my mind feels nothing (except a bit of empathy on a good day).
What positive logical basis do I have to think that his behavior (crying) is caused by pain felt by a mind?

Mind you, I am not suggesting that others do not have minds or consciousness — not yet, at least. I am merely pointing out that there is no logical basis to believe that they do. Logic certainly is not the only basis for belief. Faith is another. Intuition, analogy, mass delusion, indoctrination, peer pressure, instinct and so on are all bases for beliefs, both true and false. I believe that others have minds; otherwise I wouldn’t bother writing these blog posts. But I am keenly aware that I have no logical justification for this particular belief.

The thing about this problem of other minds is that it is profoundly asymmetric. If I believe that you don’t have a mind, it is not a problem for you; you know I am wrong the moment you hear it, because you know that you have a mind (assuming, of course, that you do). But I do have a serious issue: there is no way for me to attack my belief in the non-existence of your mind. You could tell me, of course, but then I would think, “Yeah, that is exactly what a mindless robot would be programmed to say!”

I was listening to a series of lectures on the philosophy of mind by Prof. John Searle. He “solves” the problem of other minds by analogy. We know that we have the same anatomical and neurophysical wiring in addition to analogous behavior. So we can “convince” ourselves that we all have minds. It is a good argument as far as it goes. What bothers me about it is its complement: what it implies about minds in things that are wired differently, like snakes and lizards and fish and slugs and ants and bacteria and viruses. And, of course, machines.

Could machines have minds? The answer to this is rather trivial — of course they can. We are biological machines, and we have minds (assuming, again, that you guys do). Could computers have minds? Or, more pointedly, could our brains be computers, and minds be software running on it? That is fodder for the next post.

Brains and Computers

We have a perfect parallel between brains and computers. We can easily think of the brain as the hardware and the mind or consciousness as the software or the operating system. We would be wrong, according to many philosophers, but I still think of it that way. Let me sketch the compelling similarities (according to me) before getting into the philosophical difficulties involved.

A lot of what we know of the workings of the brain comes from lesion studies. We know, for instance, that features like color vision, face and object recognition, motion detection, language production and understanding are all controlled by specialized areas of the brain. We know this by studying people who have suffered localized brain damage. These functional features of the brain are remarkably similar to computer hardware units specialized in graphics, sound, video capture etc.

The similarity is even more striking when we consider that the brain can compensate for the damage to a specialized area by what looks like software simulation. For instance, the patient who lost the ability to detect motion (a condition that normal people would have difficulty appreciating or identifying with) could still infer that an object was in motion by comparing successive snapshots of it in her mind. The patient with no ability to tell faces apart could, at times, deduce that the person walking toward him at a pre-arranged spot at the right time was probably his wife. Such instances give us the following attractive picture of the brain.
Brain → Computer hardware
Consciousness → Operating System
Mental functions → Programs
It looks like a logical and compelling picture to me.

This seductive picture, however, is far too simplistic at best, or utterly wrong at worst. The basic philosophical problem with it is that the brain itself is a representation drawn on the canvas of consciousness and the mind (which are again cognitive constructs). This abysmal infinite regression is impossible to climb out of. But even when we ignore this philosophical hurdle and ask ourselves whether brains could be computers, we have big problems. What exactly are we asking? Could our brains be computer hardware, and minds be software running on them? Before asking such questions, we have to ask parallel questions: Could computers have consciousness and intelligence? Could they have minds? If they had minds, how would we know?

Even more fundamentally, how do you know whether other people have minds? This is the so-called Problem of Other Minds, which we will discuss in the next post before proceeding to consider computing and consciousness.

The Age of Spiritual Machines by Ray Kurzweil

It is not easy to review a non-fiction book without giving the gist of what the book is about. Without a synopsis, all one can do is to call it insightful and other such epithets.

The Age of Spiritual Machines is really an insightful book. It is a study of the future of computing and computational intelligence. It forces us to rethink what we mean by intelligence and consciousness, not merely at a technological level, but at a philosophical level. What do you do when your computer feels sad that you are turning it off and declares, “I cannot let you do that, Dave?”

What do we mean by intelligence? The traditional yardstick of machine intelligence is the remarkably one-sided Turing Test. It defines intelligence by comparative means: a computer is deemed intelligent if it can fool a human evaluator into believing that it is human. It is a one-sided test because a human being can never pass for a computer for long. All that an evaluator needs to do is to ask a question like, “What is tan(17.32°)?” My $4 calculator takes practically no time to answer it to better than one part in a million precision. A super-intelligent human being might take about a minute before venturing a first guess.
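For what it’s worth, the calculator’s feat is trivial to reproduce; a couple of lines of Python (my own check, not from the book) do it instantly:

```python
import math

# tan of 17.32 degrees: instant for a machine, a minute of
# guesswork for even a very sharp human
angle_deg = 17.32
value = math.tan(math.radians(angle_deg))

# Double-precision floating point is accurate to roughly one part
# in 10^15, far better than the one-part-in-a-million claim.
print(f"tan({angle_deg} deg) = {value:.6f}")
```

The asymmetry of the test is right there: the machine’s weakness is faking our slowness, not matching our speed.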

But the Turing Test does not define intelligence as arithmetic muscle. Intelligence is composed of “higher” cognitive abilities. After beating around the bush for a while, one comes to the conclusion that intelligence is the presence of consciousness. And the Turing Test essentially examines a computer to see if it can fake consciousness well enough to fool a trained evaluator. It would have you believe that consciousness is nothing more than answering some clever questions satisfactorily. Is it true?

Once we restate the test (and redefine intelligence) this way, our analysis can bifurcate into an inward journey or an outward one. We can ask ourselves questions like: what if everybody is an automaton (except us, you and me, of course) successfully faking intelligence? Are we faking it (and free will) to ourselves as well? We would think perhaps not, or who are these “ourselves” that we are faking it to? The inevitable conclusion to this inward journey is that we can be sure of the presence of consciousness only in ourselves.

The outward analysis of the emergence of intelligence (a la Turing Test) brings about a whole host of interesting questions, which occupy a significant part of the book (I’m referring to the audio abridgment edition), although a bit obsessed with virtual sex at times.

One of the thought-provoking questions raised when machines claim that they are sentient is this: Would it be murder to “kill” one of them? Before you suggest that I (or rather, Kurzweil) stop acting crazy, consider this: What if the computer is a digital backup of a real person? A backup that thinks and acts like the original? Still no? What if it is the only backup and the person is dead? Wouldn’t “killing” the machine be tantamount to killing the person?

If you grudgingly said yes to the last question, then all hell breaks loose. What if there are multiple identical backups? What if you create your own backup? Would deleting a backup capable of spiritual experiences amount to murder?

When he talks about the progression of machine intelligence, Kurzweil demonstrates his inherent optimism. He posits that an ultimate intelligence would yearn for nothing but knowledge. I don’t know if I accept that. To what end, then, is knowledge? I think an ultimate intelligence would crave continuity, or immortality.

Kurzweil assumes that all technology and intelligence would have all our material needs met at some point. Looking at our efforts so far, I have my doubts. We have developed no boon so far without an associated bane or two. Think of the seemingly unlimited nuclear energy and you also see the bombs and radioactive waste management issues. Think of fossil fuel and the scourge of global warming shows itself.

I guess I’m a Mr. Glass-is-Half-Empty kind of guy. To me, even the unlimited access to intelligence may be a dangerous thing. Remember how internet reading changed the way we learned things?

Death of a Parent

My father passed away early this morning. For the past three months, he was fighting a heart failure. But he really had little chance because many systems in his body had started failing. He was 76.

I seek comfort in the fact that his memories live on. His love and care, and his patience with my silly, childhood questions will all live on, not merely in my memories, hopefully in my actions as well.

Perhaps even the expressions on his face will live on for longer than I think.

Death is as much a part of life as birth. Anything that has a beginning has an end. So why do we grieve?

We do because death stands a bit outside our worldly knowledge, beyond where our logic and rationality apply. So the philosophical knowledge of the naturalness of death does not always erase the pain.

But where does the pain come from? It is one of those questions with no certain answers, and I have only my guesses to offer. When we were little babies, our parents (or those who played the parents’ role) stood between us and our certain death. Our infant mind perhaps assimilated, before logic and rationality, that our parents will always stand face-to-face with our own end: distant perhaps, but dead certain. With the removal of this protective force field, the infant in us probably dies. A parent’s death is perhaps the final end of our innocence.

Knowing the origin of pain is little help in easing it. My trick to handle it is to look for patterns and symmetries where none exists, like any true physicist. Death is just birth played backwards. One is sad, the other is happy. Perfect symmetry. Birth and life are just the coalescence of star dust into conscious beings; and death the necessary disintegration back into star dust. From dust to dust. Compared to the innumerable deaths (and births) that happen all around us in this world every single second, one death is really nothing. Patterns of many to one and back to countless many.

We are all little droplets of consciousness, so small that we are nothing. Yet, part of something so big that we are everything. Here is a pattern I was trying to find: materially made up of the same stuff that the universe is made of, we return to the dust we are. So too spiritually, mere droplets merge with an unknowable ocean.

Going still further, all consciousness, spirituality, star dust and everything: these are all mere illusory constructs that my mind, my brain (which are again nothing but illusions) creates for me. So is this grief and pain. The illusions will cease one day. Perhaps the universe and stars will cease to exist when this little droplet of knowledge merges with the anonymous ocean of everything. The pain and grief also will cease. In time.

Siddhartha by Hermann Hesse

I don’t get symbolism. Rather, I do get it, but I’m always skeptical that I may be getting something the author never intended. I think and analyze too much instead of just lightening up and enjoying what’s right in front of me. When it comes to reading, I’m a bit like those tourists (Japanese ones, if I may allow myself to stereotype) who keep clicking away at their digital cameras often missing the beauty and serenity of whatever it is that they are recording for posterity.

But, unlike the tourist, I can read the book again and again. Although I click as much the second time around and ponder as hard, some things do get through.

When I read Siddhartha, I asked myself if names like Kamala and Kamaswami were random choices or signified something. After all, the first part, Kama, means something akin to worldliness or desire (greed or lust really, but not with so much negative connotation) in Sanskrit. Are Vasudeva and Govinda really gods, as the names suggest?

But, I’m getting ahead of myself. Siddhartha is the life-story of a contemporary of Buddha — about 2500 years ago in India. Even as a young child, Siddhartha has urges to pursue a path that would eventually take him to salvation. As a Brahmin, he had already mastered the prayers and rituals. Leaving this path of piety (Bhaktiyoga), he joins a bunch of ascetics who see the way to salvation in austerity and penances (probably Hatayoga and Rajayoga). But Siddhartha soon tires of this path. He learns almost everything the ascetics had to teach him and realizes that even the oldest and wisest of them is no closer to salvation than he himself is. He then meets with the Buddha, but doesn’t think that he could “learn” the wisdom of the illustrious one. His path then undergoes a metamorphosis and takes a worldly turn (which is perhaps a rendition of Grahasthashrama or Karmayoga). He seeks to experience life through Kamala, the beautiful courtesan, and Kamaswamy the merchant. When at last he is fully immersed in the toxic excesses of the world, his drowning spirit calls out for liberation from it. He finally finds enlightenment and wisdom from the river that he had to cross back and forth in his journeys between the worlds of riches and wisdom.

For one who seeks symbolism, Siddhartha provides it aplenty.

  • Why is there a Vaishnava temple when Siddhartha decides to forgo the spiritual path for a worldly one? Is it a coincidence or is it an indication of the philosophical change from an Advaita line to a patently Dwaita line?
  • Is the name Siddhartha (same as that of the Buddha) a coincidence?
  • Does the bird in the cage represent a soul imprisoned in Samsara? If so, is its death a sad ending or a happy liberation?
  • The River of life that has to be crossed — is it Samsara itself? If so, is the ferryman a god who will help you cross it and reach the ultimate salvation? Why is it that Siddhartha has to cross it to reach the world of Kamala and Kamaswamy, and cross it back to his eventual enlightenment? Kamala also crosses the river to his side before passing on.
  • The affection for and the disillusionment in the little Siddhartha is the last chain of bondage (Mohamaya) that follows Siddhartha across the river. It is only after breaking that chain that Siddhartha is finally able to experience Nirvana, enlightenment and liberation. Is there a small moral hiding there?

One thing I noticed while reading many of these great works is that I can readily identify myself with the protagonist. I fancy that I have the simple greatness of Larry Darrell, and fear that I secretly possess the abominable baseness of Charles Strickland. I feel the indignant torture of Philip Carey or Jay Gatsby. And, sure, I experience the divine urges of Siddhartha, no matter how much of a stretch each of these comparisons may be. Admittedly, this self-identification may have its roots more in my vanity than any verisimilitude. Or is it the genius of these great writers who create characters so vivid and real that they talk directly to the naked primordial soul within us, stripped of our many layers of ego? In them, we see the distorted visions of our troubled souls, and in their words, we hear the echoes of our own unspoken impulses. Perhaps we are all the same deep within, part of the same shared consciousness.

One thing I re-learned from this book is that you cannot learn wisdom from someone else. (How is that for an oxymoron?) You can learn knowledge, information, data — yes. But wisdom — no. Wisdom is the assimilation of knowledge; it is the end product of your mind and soul working on whatever you find around you, be it the sensory data, cognitive constructs, knowledge and commonsense handed down from previous generations, or the concepts you create for yourself. It is so much a part of you that it is you yourself, which is why the word Buddha means Wisdom. The person Buddha and his wisdom are not two. How can you then communicate your wisdom? No wonder Siddhartha did not seek it from the Buddha.

Wisdom, according to Hermann Hesse, can come only from your own experiences, both sublime and prosaic.

Zen and Free Will

Neuroscience has a finding that may question the way we think of our free will.

We now know that there is a time lag of about half a second between the moment “we” take a decision and the moment we become aware of it. This time lag raises the question of who is taking the decision because, in the absence of our conscious awareness, it is not clear that the decision is really ours. This finding has even cast doubt on our notion of free will.

In the experimental setup testing this phenomenon, a subject is hooked up to a computer that records his brain activities (EEG). The subject is then asked to make a conscious decision to move either the right hand or the left hand at a time of his choosing. The choice of right or left is also up to the subject. The computer always detects which hand the subject is going to move about half a second before the subject is aware of his own intention. The computer can then order the subject to move that hand, an order that the subject will be unable to disobey, shattering the notion of free will.
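The timing relationship in this experiment can be sketched as a toy simulation. To be clear, this is only an illustration of the half-second gap described above; the function name, the timing numbers and the random jitter are my own illustrative assumptions, not data from the actual EEG studies:

```python
import random

def simulate_trial(lag_ms=500, jitter_ms=50):
    """Toy model of one trial (illustrative numbers only).

    Detectable brain activity (the 'readiness potential') begins
    roughly half a second before the subject reports being aware
    of the decision to move.
    """
    # The subject decides to move at some arbitrary moment.
    awareness_time = random.uniform(1000, 5000)
    # Brain activity precedes that awareness by ~lag_ms.
    rp_onset = awareness_time - random.gauss(lag_ms, jitter_ms)
    # Which hand is already determined when the activity starts.
    hand = random.choice(["left", "right"])
    return rp_onset, awareness_time, hand

random.seed(0)  # reproducible illustration
for _ in range(3):
    rp, aware, hand = simulate_trial()
    print(f"{hand:5s} hand: detectable at {rp:.0f} ms, "
          f"felt as 'my decision' at {aware:.0f} ms "
          f"(gap {aware - rp:.0f} ms)")
```

In this toy model, an observer reading `rp_onset` knows the choice before the subject does, which is the unsettling point of the experiment.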

Free will may be a fabrication of our brain after the real action. In other words, the real action takes place by instinct, and the sense of decision is introduced to our consciousness as an afterthought. If we could somehow limit our existence to tiny compartments in time, as Zen suggests, then we might not feel that we had free will.

Ref: This post is an edited excerpt from my book, The Unreal Universe.