
Man as Chinese Room

In the previous posts in this series, we discussed how devastating Searle’s Chinese Room argument was to the premise that our brains are digital computers. He argued, quite persuasively, that mere symbol manipulation could not lead to the rich understanding that we seem to enjoy. However, I refused to be convinced, and found the so-called systems response more compelling. It was the counter-argument that the whole Chinese Room understood the language, not merely the operator or symbol pusher in the room. Searle laughed it off, but had a serious response as well. He said, “Let me be the whole Chinese Room. Let me memorize all the symbols and the symbol manipulation rules so that I can provide Chinese responses to questions. I still don’t understand Chinese.”

Now, that raises an interesting question: if you know enough Chinese symbols, and enough Chinese rules to manipulate them, don’t you actually know Chinese? Of course, you can imagine someone handling a language correctly without understanding a word of it, but I think that stretches the imagination a bit too far. I am reminded of the blindsight experiments, where people could see without knowing it, without being consciously aware of what it was they were seeing. Searle’s response points in the same direction: being able to speak Chinese without understanding it. What the Chinese Room lacks is conscious awareness of what it is doing.

To delve deeper into this debate, we have to get a bit formal about syntax and semantics. Language has both. For example, a statement like “Please read my blog posts” has syntax originating from the grammar of the English language: symbols that are words (syntactical placeholders), letters and punctuation. On top of all that syntax, it has content: my desire and request that you read my posts, and my background belief that you know what the symbols and the content mean. That is the semantics, the meaning of the statement.
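
To make the syntax/semantics split concrete, here is a toy sketch in Python. The word list and the expected pattern are entirely made up for illustration; the point is that the checker accepts the sentence purely on form, and nothing in it represents what any word means.

```python
# A checker that accepts "Please read my blog posts" on syntax alone.
# The lexicon and the pattern are invented for this example; no meaning
# is represented anywhere in the program.

LEXICON = {
    "please": "POLITE", "read": "VERB", "my": "POSSESSIVE",
    "blog": "NOUN", "posts": "NOUN",
}

def is_well_formed(sentence):
    """True if every word is known and the tags fit the pattern
    POLITE VERB POSSESSIVE NOUN... (pure shape-matching)."""
    tags = [LEXICON.get(word.lower().strip(".!?")) for word in sentence.split()]
    return (None not in tags
            and len(tags) > 3
            and tags[:3] == ["POLITE", "VERB", "POSSESSIVE"]
            and all(tag == "NOUN" for tag in tags[3:]))

print(is_well_formed("Please read my blog posts"))  # True, yet nothing was understood
```

The checker would happily bless any string with the right shape, which is exactly the gap between syntax and semantics that Searle is pointing at.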

A computer, according to Searle, can only deal with symbols and, based on symbolic manipulation, come up with syntactically correct responses. It doesn’t understand the semantic content as we do. It is incapable of complying with my request because of its lack of understanding. It is in this sense that the Chinese Room doesn’t understand Chinese. At least, that is Searle’s claim. Since computers are like Chinese Rooms, they cannot understand semantics either. But our brains can, and therefore the brain cannot be a mere computer.

When put that way, I think most people would side with Searle. But what if the computer could actually comply with the requests and commands that form the semantic content of statements? I suspect that even then we would not consider it fully capable of semantic comprehension; even if a computer actually complied with my request to read my posts, I might still not find it intellectually satisfying. What we are demanding, of course, is consciousness. What more can we ask of a computer to convince us that it is conscious?

I don’t have a good answer to that. But I think you have to apply uniform standards in ascribing consciousness to entities external to you. If you believe in the existence of other minds in humans, you have to ask yourself what standards you apply in arriving at that conclusion, and ensure that you apply the same standards to computers as well. You cannot build circular conditions into your standards, like requiring that others have human bodies, nervous systems and an anatomy like yours in order to have minds, which is what Searle did.

In my opinion, it is best to be open-minded about such questions, and important not to answer them from a position of insufficient logic.

Minds as Machine Intelligence

Prof. Searle is perhaps most famous for his proof that computing machines (or computation as defined by Alan Turing) can never be intelligent. His proof uses what is called the Chinese Room argument, which shows that mere symbol manipulation (which is what Turing’s definition of computation amounts to, according to Searle) cannot lead to understanding and intelligence. Ergo, our brains and minds could not be mere computers.

The argument goes like this — assume Searle is locked up in a room where he gets inputs corresponding to questions in Chinese. He has a set of rules to manipulate the input symbols and pick out an output symbol, much as a computer does. So he comes up with Chinese responses that fool outside judges into believing that they are communicating with a real Chinese speaker. Assume that this can be done. Now, here is the punch line — Searle doesn’t know a word of Chinese. He doesn’t know what the symbols mean. So mere rule-based symbol manipulation is not enough to guarantee intelligence, consciousness, understanding etc. Passing the Turing Test is not enough to guarantee intelligence.
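
The rule-following in the room is easy to caricature in code. Here is a minimal sketch in Python; the rulebook and the symbol names are invented for illustration, standing in for Searle’s stack of rules, and the point is that the lookup has no access to meaning at all.

```python
# The operator in the room: match the shape of the input symbols
# against a rulebook and emit the prescribed output symbol.

RULEBOOK = {
    ("symbol_A", "symbol_B"): "symbol_X",  # one formal rule
    ("symbol_C",): "symbol_Y",             # another formal rule
}

def operator(input_symbols):
    """Pure symbol manipulation: no step here knows what any symbol
    means, yet to an outside judge the answers may look fluent."""
    return RULEBOOK.get(tuple(input_symbols), "symbol_Z")  # stock reply otherwise

print(operator(["symbol_A", "symbol_B"]))  # -> symbol_X
```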

One of the counter-arguments that I found most interesting is what Searle calls the systems argument. It is not Searle in the Chinese room that understands Chinese; it is the whole system, including the ruleset, that does. Searle laughs it off saying, “What, the room understands Chinese?!” I think the systems argument merits more than that derisive dismissal. I have two supporting arguments in favor of the systems response.

The first one is the point I made in the previous post in this series. In Problem of Other Minds, we saw that Searle’s answer to the question whether others have minds was essentially by behavior and analogy. Others behave as though they have minds (in that they cry out when we hit their thumb with a hammer) and their internal mechanisms for pain (nerves, brain, neuronal firings etc.) are similar to ours. The Chinese room certainly behaves as though it understands Chinese, but it has no parts or mechanisms analogous to a Chinese speaker’s. Is it this break in analogy that prevents Searle from assigning intelligence to it, despite its intelligent behavior?

The second argument takes the form of another thought experiment, which I think is called the Chinese Nation argument. Let’s say we can delegate the work of each neuron in Searle’s brain to a non-English-speaking person. So when Searle hears a question in English, it is actually being handled by billions of non-English-speaking computational elements, which generate the same response as his brain would. Now, where is the English language understanding in this Chinese Nation of non-English-speaking people acting as neurons? I think one would have to say that it is the whole “nation” that understands English. Or would Searle laugh it off saying, “What, the nation understands English?!”
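
Here is a minimal sketch of the idea in Python, with made-up weights and a comically small “nation” of three people. Each person performs one trivial, meaning-free step, yet the collection maps inputs to outputs; whatever understanding there is belongs to the arrangement, not to any member.

```python
# Each "citizen" acts as one neuron: a weighted sum and a threshold.
# No citizen understands the question; the response is a property of
# the wiring. Weights and wiring are invented for illustration.

def person(inputs, weights, threshold=1.0):
    """One citizen's entire job: meaning-free arithmetic."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nation(stimulus):
    """Three citizens wired in two layers stand in for billions of neurons."""
    layer1 = [person(stimulus, [0.6, 0.6]), person(stimulus, [1.2, -0.4])]
    return person(layer1, [1.0, 1.0])

print(nation([1, 1]))  # the nation answers; no member knows the question
```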

Well, if the Chinese nation could understand English, I guess the Chinese room could understand Chinese as well. Computing with mere symbol manipulation (which is what the people in the nation are doing) can and does lead to intelligence and understanding. So our brains could really be computers, and minds software manipulating symbols. Ergo Searle is wrong.

Look, I used Prof. Searle’s arguments and my counter-arguments in this series as a sort of dialog for dramatic effect. The fact of the matter is, Prof. Searle is a world-renowned philosopher with impressive credentials while I am a sporadic blogger, a drive-by philosopher at best. I guess I am apologizing here to Prof. Searle and his students if they find my posts and comments offensive. No offense was intended; only an interesting read.

Problem of Other Minds

How do you know other people have minds as you do? This may sound like a silly question, but if you allow yourself to think about it, you will realize that you have no logical reason to believe in the existence of other minds, which is why it is an unsolved problem in philosophy – the Problem of Other Minds. To illustrate – I was working on that Ikea project the other day, and was hammering in that weird two-headed nail-screw-stub thingie. I missed it completely and hit my thumb. I felt the excruciating pain, meaning my mind felt it and I cried out. I know I have a mind because I felt the pain. Now, let’s say I see another bozo hitting his thumb and crying out. I feel no pain; my mind feels nothing (except a bit of empathy on a good day). What positive logical basis do I have to think that the behavior (crying) is caused by pain felt by a mind?

Mind you, I am not suggesting that others do not have minds or consciousness — not yet, at least. I am merely pointing out that there is no logical basis to believe that they do. Logic certainly is not the only basis for belief. Faith is another. Intuition, analogy, mass delusion, indoctrination, peer pressure, instinct etc. are all bases for beliefs, both true and false. I believe that others have minds; otherwise I wouldn’t bother writing these blog posts. But I am keenly aware that I have no logical justification for this particular belief.

The thing about this problem of other minds is that it is profoundly asymmetric. If I believe that you don’t have a mind, it is not an issue for you — you know that I am wrong the moment you hear it because you know that you have a mind (assuming, of course, that you do). But I do have a serious issue — there is no way for me to attack my belief in the non-existence of your mind. You could tell me, of course, but then I would think, “Yeah, that is exactly what a mindless robot would be programmed to say!”

I was listening to a series of lectures on the philosophy of mind by Prof. John Searle. He “solves” the problem of other minds by analogy. We know that we have the same anatomical and neurophysiological wiring, in addition to analogous behavior. So we can “convince” ourselves that we all have minds. It is a good argument as far as it goes. What bothers me about it is its complement: what it implies about minds in things that are wired differently, like snakes and lizards and fish and slugs and ants and bacteria and viruses. And, of course, machines.

Could machines have minds? The answer is rather trivial: of course they can. We are biological machines, and we have minds (assuming, again, that you guys do). Could computers have minds? Or, more pointedly, could our brains be computers, and our minds software running on them? That is fodder for the next post.

Dualism

After being called one of the top 50 philosophy bloggers, I feel almost obliged to write another post on philosophy. This might vex Jat who, while appreciating the post on my first car, was somewhat less than enthusiastic about my deeper thoughts. Also looking askance at my philosophical endeavors would be a badminton buddy of mine who complained that my posts on death scared the bejesus out of him. But, what can I say, I have been listening to a lot of philosophy. I listened to the lectures by Shelly Kagan on just that dreaded topic of death, and by John Searle (again) on the philosophy of mind.

Listening to these lectures filled me with another kind of dread. I realized once again how ignorant I am, and how much there is to know, think and figure out, and how little time is left to do all that. Perhaps this recognition of my ignorance is a sign of growing wisdom, if we can believe Socrates. At least I hope it is.

One thing I had some misconceptions about (or an incomplete understanding of) was this concept of dualism. Growing up in India, I heard a lot about our monistic philosophy called Advaita. The word means not-two, and I understood it as the rejection of the Brahman and Maya distinction. To illustrate with an example, say you sense something — like you see these words in front of you on your computer screen. Are these words and the computer screen really out there? If I were to somehow generate the neuronal firing patterns that create this sensation in you, you would see these words even if they were not there. This is easy to understand; after all, it is the main thesis of the movie The Matrix. So what you see is merely a construct in your brain; it is Maya, or part of the Matrix. What is causing the sensory inputs is presumably Brahman. So, to me, Advaita meant trusting only the realness of Brahman while rejecting Maya. Now, after reading a bit more, I’m not sure that was an accurate description at all. Perhaps that is why Ranga criticized me a long time ago.

In Western philosophy, there is a different and more obvious kind of dualism. It is the age-old mind-matter distinction. What is mind made of? Most of us think of mind (those who think of it, that is) as a computer program running on our brain. In other words, mind is software, brain is hardware. They are two different kinds of things. After all, we pay separately for hardware (Dell) and software (Microsoft). Since we think of them as two, ours is an inherently dualistic view. Before the time of computers, Descartes thought of this problem and said there was a mental substance and a physical substance. So this view is called Cartesian Dualism. (By the way, Cartesian coordinates in analytic geometry came from Descartes as well — a fact that might enhance our respect for him.) It is a view that has vast ramifications in all branches of philosophy, from metaphysics to theology. It leads to the concepts of spirit and souls, God, afterlife, reincarnation etc., with their inescapable implications for morality.

There are philosophers who reject this notion of Cartesian dualism. John Searle is one of them. They embrace the view that mind is an emergent property of the brain. An emergent property is a property of a collection that none of its parts has on its own (a related, fancier term is epiphenomenon, something that happens incidentally along with the main phenomenon while being neither its cause nor its effect). An emergent property in physics that we are familiar with is temperature, which is a measure of the average kinetic energy of a bunch of molecules. You cannot define temperature unless you have a statistically significant collection of molecules. Searle uses the wetness of water as his example to illustrate emergent properties. You cannot have a wet water molecule or a dry one, but when you put a lot of water molecules together you get wetness. Similarly, mind emerges from the physical substance of the brain through physical processes, and all the properties that we ascribe to mind are to be explained as physical interactions. There is only one kind of substance, and it is physical. This monistic philosophy is therefore called physicalism. Physicalism is a form of materialism (not to be confused with its current colloquial meaning, as in a material girl, for instance).
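
The temperature example can be made concrete in a few lines of Python. This is a minimal sketch under textbook assumptions (the ideal-gas relation T = 2⟨KE⟩/(3k_B) for translational kinetic energy, with randomly drawn molecular velocities); the point is only that the quantity is meaningful for a collection and meaningless for one molecule.

```python
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 4.65e-26      # mass of one nitrogen molecule, kg

def temperature(speeds):
    """Temperature from average kinetic energy: T = 2<KE> / (3 k_B)."""
    avg_ke = sum(0.5 * MASS * v * v for v in speeds) / len(speeds)
    return 2.0 * avg_ke / (3.0 * K_B)

# Draw velocity components for roughly room temperature (300 K),
# then combine the three components into speeds.
sigma = math.sqrt(K_B * 300 / MASS)
speeds = [math.sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(3)))
          for _ in range(100_000)]

print(temperature(speeds))      # close to 300 K: the collection has a temperature
print(temperature(speeds[:1]))  # one molecule: the number is physically meaningless
```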

You know, the trouble with philosophy is that there are so many isms that you lose track of what is going on in this wild jungle of jargonism. If I coined the word unrealism to go with my blog and promoted it as a branch of philosophy, or better yet, a Singaporean school of thought, I’m sure I could make it stick. Or perhaps it is already an accepted domain?

All kidding aside, the view that everything on the mental side of life, such as consciousness, thoughts, ideals and so on, is a manifestation of physical interactions (I’m restating the definition of physicalism here, as you can see) enjoys a certain currency among contemporary philosophers. Both Kagan and Searle readily accept this view, for example. But this view is in conflict with what ancient Greek philosophers like Socrates, Plato and Aristotle thought. They all believed in some form of continued existence of a mental substance, be it the soul, spirit or whatever. All major religions have some variant of this dualism embedded in their beliefs. (I think Plato’s dualism is of a different kind — a real, imperfect world where we live on the one hand, and an ideal, perfect world of forms on the other, where the souls and Gods live. More on that later.) After all, God has to be made up of a spiritual “substance” other than a pure physical substance. Or how could he not be subject to the physical laws that we, mere mortals, can comprehend?

Nothing in philosophy is completely disconnected from anything else. A fundamental stance such as dualism or monism, taken in dealing with questions of consciousness, cognition and mind, has ramifications for the kind of life you lead (Ethics), how you define reality (Metaphysics), and how you know these things (Epistemology). Through its influence on religions, it may even shape the political power struggles of our troubled times. If you think about it long enough, you can connect the dualist/monist distinction even to aesthetics. After all, Robert Pirsig did just that in his Zen and the Art of Motorcycle Maintenance.

As they say, if the only tool you have is a hammer, all problems begin to look like nails. My tool right now is philosophy, so I see little philosophical nails everywhere.