
Talk:Chinese room

From Wikipedia, the free encyclopedia

This article is within the scope of WikiProject Philosophy, which collaborates on articles related to philosophy. To participate, you can edit this article or visit the project page for more details.
This article has been rated as B-Class on the quality scale.
This article has not yet received an importance rating on the importance scale.


The Rulebook

The rulebook was written by an intelligent human who understood Chinese. That intelligent being is no longer part of the system. If the system understood Chinese then it would be able to add new pages to the rulebook. It can't. Ergo, the system doesn't understand Chinese.

The rulebook represents a computer program. It doesn't take any intelligence or understanding to run a computer program, but it does take a great deal of intelligent thought to write a meaningful program.

So, where's the argument? The "system" only understands Chinese if you include the human that wrote the rulebook in the "system".

Yes, that's certainly the intuition Searle wants you to have, but it's also wrong. The Chinese speaker didn't just write some handy book that explained Chinese, they recorded everything about their behavior into this "book". That's why a purely mechanical obedience of the rules leads to behavior identical to the Chinese author.
What misleads is that, with a regular book, the author is indeed outside of the system, but it's just not the case with this extreme hypothetical example. Your intuition, which works fine in real cases, has failed you in this one. Alienus 01:23, 7 January 2006 (UTC)
I don't think you make a clear point — you even put book in quotes ("book"). It is a book, just like any other book. Of course it isn't a book explaining Chinese; explanation implies understanding. This is a rule book (also known as a computer program). The book could never be complete. That's the argument. If you really think that this is a complete set of rules then that's your opinion; it needs to be proven. If you could prove that, then maybe we'd be on the way to proving that the system was intelligent. The system makes no allowances for adding pages to the book, so it's safe to define an intelligence outside the system that is required to add a new page to the book. The system can't add algorithms to the computer program. Oops, I mean can't add rules to the book.
Yes, just like any other book, except that it's so huge as to be able to specify what symbols to manipulate so as to duplicate all the behavior of a (Chinese) person. Size does matter, and the incredible magnitude of this book distorts our intuitions.
As you said, the book doesn't explain Chinese; it just tells us how to act so as to exactly imitate a person who speaks Chinese. I suppose that, in principle, the book could be analyzed to determine how Chinese is spoken, as well as to determine any other fact about the virtual person inside it. In practice, it's hard enough to just follow the instructions accurately.
As for the issue of adding to it, that would be a weakness in Searle's analogy. It could easily be repaired by giving the book many empty pages, such that there are instructions on how to fill them in response to input. This would give the book a memory and shrink it to more imaginable size. Alternately, we could say, as Searle did, that the book is really, really huge, so that it contains all possible responses to all possible inputs, including those that depend on memory.
Now, I'm not the one saying there could be a complete set of fixed rules for all occasions. That's what Searle is requiring for his thought experiment. His point, though, is that, even with such a book, the person inside the room who's following the rules in that book does not know Chinese.
In fact, I agree: that person doesn't know Chinese, anymore than a processor executing those rules would. However, the virtual person stored in the book and instantiated through the room and its motor-fingered occupant, does know the language quite well.
In principle, the behavior of any person, Chinese or otherwise, could be duplicated by a Turing machine. If it were, that machine as a whole (as opposed to the processor, the book, or any other specific component of it) would be experiencing consciousness. As it stands, proponents of hard AI say that such a machine already exists, and can be found in your skull. Alienus 23:50, 18 January 2006 (UTC)
Actually the system would be able to add new pages to the rulebook, in the sense that the system could be, for example, taught English. Rich Farmbrough 10:08 22 June 2006 (GMT).


PROVE TO ME WITH 100% THAT THERE IS A BOOK, EVERY TIME...SUCKERS... —Preceding unsigned comment added by 71.77.135.111 (talk) 02:47, 27 February 2008 (UTC)

There is no need to add anything to the rulebook, as the rulebook must be infinite: finite lookup tables aren't Turing complete. The argument is simply that you can simulate a Chinese speaker by taking as input a string of all the questions that have been asked of the computer up to that point, looking up the answer by using that string as an index into an infinite table that gives the corresponding answer acceptable to a Chinese speaker; and then, because the person inside the computer who's doing the lookup doesn't understand Chinese, the computer doesn't either. The last step is a matter of controversy.
Surely the ability to learn is fundamental to intelligence and understanding. You mention that the rulebook has to be infinite. So you're not just talking about a machine that understands Chinese; it would also be able to tell me how to do cold fusion in my kitchen. An infinite rulebook and the inability to learn are supposed to be features of an intelligent Chinese room.
Note that if you don't use a Turing machine but a finite computer as the model instead, the lookup table would still have to be mind-bogglingly huge (which isn't any better for intuition than countable infinities), as the indices would still be strings that contain all the input that has ever been given to the computer, and the table would have to contain entries for all possible indices. The table would be finite assuming a finite lifetime for the universe and a maximum possible rate for entering input, but there isn't enough matter in the universe to store such a table even for short input strings. There still isn't any need to add to the lookup table, as "all the questions the human asks are responded to appropriately", which won't happen if there's something missing from the table. Coffee2theorems 15:58, 25 August 2006 (UTC)
Another note: In my experience it is a widely held belief that the ability to predict (in this case, predict what is an acceptable answer to a Chinese speaker; this also counters any argument of compressing the lookup table to make the scheme more plausible, as compression depends on predictive ability) constitutes understanding. E.g. in The Crackpot Index John Baez gives "arguing that while a current well-established theory predicts phenomena correctly, it doesn't explain "why" they occur, or fails to provide a "mechanism"" as one indicator of a crackpot (i.e. it does not invalidate an argument, but makes it highly dubious). Beyond a certain point, it does not make much sense to question whether you really, truly understand something; if for all intents and purposes you act as if you did, it makes no difference anyway. If it walks like a duck and quacks like a duck... Coffee2theorems 17:13, 25 August 2006 (UTC)
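(For concreteness, here is a minimal sketch in Python of the lookup-table scheme described above. The names RULEBOOK and chinese_room are invented for the illustration; the table, keyed by the entire conversation so far, is the part that could never actually be built, while the lookup itself involves no understanding of Chinese.)

 # Sketch of the hypothetical lookup-table "Chinese room" described above.
 # RULEBOOK is assumed to map every possible conversation history -- every
 # sequence of questions asked so far -- to a reply a Chinese speaker would
 # accept. Filling in the table is the impossible part; looking answers up
 # requires no understanding.
 RULEBOOK = {
     ("你好吗？",): "我很好，谢谢。",
     # ... astronomically many further entries would be required ...
 }

 def chinese_room(history):
     # Return the table's reply for the whole conversation so far.
     return RULEBOOK.get(tuple(history), "？")

 print(chinese_room(["你好吗？"]))  # prints the canned reply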
Can I add my 2 cents? The thing is always what exactly we are trying to show or prove. The person in the room can surely react in a natural Chinese manner to the input in Chinese, which might be good enough. Does this mean that he understands Chinese? Of course not. But being a human being, if he stays in the room long enough, he will eventually get to understand Chinese. Supposedly a computer could also do that. Would it be meaningful and useful? Let's say that from one point on, the person in the room understands Chinese to the degree that he/she can converse with him/herself. So the input is self-provided. The conversation will of course be limited to the subset of input provided by the outside Chinese and its overlap with the rulebook. Is this understanding meaningful? If the person in the room is immortal, and after a thousand years another Chinese speaker drops by, neither the rulebook nor the past knowledge of Chinese will help the person in the room interact with the new Chinese speaker. The rulebook was written by someone who understood the collective state of the Chinese language at a particular point in time and in a particular physical and social environment. And it is meaningful only at that time. Dpser 13:22, 31 December 2006 (UTC)

older stuff

OK, I'll bite... Since Hofstadter's reply was the one Searle called the Systems reply and since you edited it out for no apparent reason, I'll put it and the other responses here and then we'll talk. My paper was actually on the whole drawn-out debate between Searle & H/D. The next part of my original paper was on Hofstadter's reply, but I left that out, since it has little bearing on the Chinese Room article. --Eventi
I was only cleaning up the language of this article--if I removed something it was probably a mistake (something like a cut with intention to paste at another location, which was forgotten). Thanks for putting it back. I agree that responses to Searle ought to go into other article(s). I'll see if I can come up with a few. I admit that my totally unsupported comment is just that--it's on a Talk page, after all--and I may well choose to back it up if I can find some time, but the impression I get from those in the AI field I know well--Minsky, Kurzweil, and others with whom I have conversed--is that no one takes Searle seriously except Searle. --LDC

I think the essay was well written, though a little out of date. It's far from neutral point of view, which would be the hardest part to fix. -LC

Thanks... What do you think is out of date? --Eventi

Replies to the Chinese Room

The first of the six replies is called the "Systems Reply", and it comes from Berkeley.

However, Searle forgets here that his role in this scheme is relegated to the role of a neuron in a human brain, or a set of neurons. One could argue that neurons, or other human cognitive subsystems composed of many neurons that mindlessly execute rules, functioning as part of the whole, are unaware in the sense that the whole, the totality of all subsystems, is aware. What is awareness anyway, and why is yours special over the more limited "awareness" of your subsystems, or, as a better example, the awareness of a cat? Also, one of the important qualities of being able to answer questions is being able to adapt and learn about the world at large, and the set of rules he'd be executing would have to accommodate in the background such functions of learning, or self-revision of the rules. That includes processing observations coming through the senses about the external world, logically filtering them through the existing worldview of the given consciousness to only accept consistent facts, or slightly modifying the worldview to accommodate new facts that fit and are demanded by more important consistency requirements - such as paradoxes about the constant speed measurements of light, required by the more important rule that experiment and observation is king over ad-hoc invented worldview beliefs. The Chinese language would simply be a mold capable of accepting concepts, a tool for manipulating the sensory data and constantly revising it, to dream about the world in Chinese (or "human" words that happen to have Chinese symbols) and toss out the garbage while committing new and important findings to long-term memory. Once he has a worldview, a coherent outlook about the world around him where everything falls neatly into place, and there are very few cracks called paradoxes that are such an important source of humor and fun, then he would be able to answer questions; without such qualities I doubt he'd ever be able to pass the Turing test, because he couldn't answer any questions. By the way, though I don't know what kind of music it will like, I predict that the first artificially intelligent computer will have quite a good sense of humor. The scary part is the cat example: if we think we are more aware and conscious than cats are, then there is most likely even more awareness possible than we are capable of. Do we want to find out? Is it inevitable? 4.159.92.189 03:10, 21 January 2006 (UTC) Sillybilly 20:22, 21 January 2006 (UTC)

Searle first points out that this reply is actually inconsistent with the strong AI belief that one does not have to know how the brain works to know how the mind works, since the mind is a program that runs on the brain's hardware. "On the assumption of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology (363)".

Searle foresees objections to his water pipe computer, namely the systems response: that it is the conjunction of the operator and the pipes that understands. Again he answers that in principle the operator could internalize the system as before, and there would still be no understanding. Searle says that this is because the system simulates the wrong things about the brain - only the formal description of the brain, and not its ability to produce "intentional states".

Another reply takes a different tack from the others, and really asks a different question. It's the "Other Minds" reply: how does anyone know that other people understand Chinese?

Cole (1991, 1994) develops the reply most extensively and argues as follows: Searle's argument presupposes, and requires, that the agent of understanding be the computer itself or, in the Chinese Room parallel, the person in the room. Searle's failure to understand Chinese in the room does not show that there is no understanding being created. Cole argues that the mental traits that constitute John Searle, his beliefs and desires, memories and personality traits — are all irrelevant and causally inert in producing the answers to the Chinese questions. If understanding of Chinese were created by running the program, the mind understanding the Chinese would not be the computer, nor, in the Chinese Room, would the person understanding Chinese be the room operator. The person understanding the Chinese would be a distinct person from the room operator. The reason is that the personality and memories of the Chinese understanding person would not be John Searle's personality and memories. The personality and memories, beliefs and desires, of the agent understanding Chinese would all depend on the programming, and the structure of the enormous database that it will have to draw upon in order to pass the Turing Test — it would not be John Searle's personality, memories, beliefs or desires. So if the activity in the room creates any understanding, it would not be John Searle's understanding. We can say that a virtual person who understands Chinese would be created by running the program.

In its essence, this is just like a computer program: it has an input, it computes something, and finally it spits out an output. Suppose further that the rulebook is such that people outside this room can converse with you in Chinese. For example, they send you the question 'how are you' and you, following the rulebook, would give a meaningful answer. So far, the computer program simulates a human being who understands Chinese.

One can even ask 'do you understand Chinese?' of the room and it can answer 'yes, of course' despite the fact that you, inside the room, would not understand a word of what is going on. You are just following rules, not understanding Chinese.

Yes, but then you are granting Searle his main point. You are attributing intentionality to a room that contains nothing more than a person with some slips of paper. Which is absurd to anyone other than a die-hard Strong AI proponent. Trac63 15:45, 19 May 2007 (UTC)
Why aren't the criticisms a larger part of the article? I think they'd go better in this article than on their own -- "criticism of the Chinese Room" doesn't stand independently. No one brings up the opposition to the Chinese Room except when presented with the Chinese Room argument. Also, it's an important part of the presentation of the argument to note that not many people take the experiment seriously for these reasons (unless I'm mistaken...). I wouldn't support breaking the section off unless the main article got too long -- which seems unlikely. Is anyone putting this material in somewhere/are there significant objections to its insertion (duly NPOVed, etc.)? Mindspillage 14:47, 27 Dec 2004 (UTC)
Firstly, there is such a thing as NPOV.
Secondly, you are mistaken. What the Chinese Room argument really does is illustrate the philosophical difficulties with the Turing Test -- and the objections to the Chinese Room argument generally satisfy only the Strong AI and Cognitive Science crowd. The rest of the philosophic world for the most part abandoned mind-body dualism centuries ago. Trac63 14:10, 19 May 2007 (UTC)
Arguably, there are people who, while not taking the thought experiment itself seriously, do take its suggested results very seriously. Alienus 23:50, 18 January 2006 (UTC)
The problem with this argument is that, in reality, people are not in some sealed room with just text/sound as input. If you forget about the room and start thinking about real life, you will be able to make sentences which can't be answered properly with a rulebook of any size. For example, "What kind of fruit am I eating right now?". It's simply impossible to answer that question correctly without actual understanding of language. Or you can use a sentence which is correct, but makes no sense in the given situation. A person/machine which understands will be confused; a machine which uses the rulebook will give an equally inappropriate answer. So you can easily check if other people understand Chinese (if you speak Chinese, of course).
If there were an inherently right answer to any sentence, there would be no reason for languages to exist. There would be no reason to ask if there were just one possible answer. Errorneous 01:40, 21 July 2007 (UTC)

What I'm not sure about, and which I'll ask Searle or look up information on the web, is whether what Searle is concerned about is parallel to consciousness, or what is called the experience of qualia. Hopefully, this would be easier to get agreement from AI scientists about: that the Chinese room system, or a series of pipes, or whatever, wouldn't have consciousness like people do. AI theorists have to either propose that a machine can have what we term consciousness, or that there is a mind-body dualism where the mind is either dispensable or epiphenomenal. What I guess Searle also wants to forward in addition to this is that the understanding and Intentionality of the human mind are unable to be explained without consciousness, and can't be emulated/simulated without it. Aside from Searle's argument, this seems intuitive: because, if it wasn't necessary, why would we have it? Of course, we wouldn't experience existence if we didn't have it, but that's not the point. That it exists, and we exist, as beings that are aware, suggests that it's necessary to thought. I should note that I have other concerns about consciousness; one is with epiphenomenalism and Wittgenstein's characterization of it (which I assume Searle rejects) -- my other is substantially broader -- I think that the issue of consciousness (how are we conscious) and the issue of existence (why does anything exist at all) have to be the same issue, and can't be separated, and ultimately we have to look at ontology. But I won't get into that here. Brianshapiro

I'm pretty sure Searle is critiquing functionalism's explanation of consciousness, not qualia. Why would you get agreement from functionalists? This is their hypothesis, that consciousness is not in principle human-specific. --snoyes 05:27, 30 Nov 2003 (UTC)
Searle is definitely supporting the notion of qualia as raw feels independent of behavior. A key point of the argument is that the room behaves as if it were a person but (ex hypothesi) lacks qualia. Alienus 23:50, 18 January 2006 (UTC)

the extreme bogosity reply

I think that the replies section needs to contain a subsection that conveys the POV that the Chinese Room argument is not valid. This view is popular enough and important enough to warrant inclusion. One verifiable source is Russell and Norvig's AI text.

A quick reminder of NPOV: "NPOV says that the article should fairly represent all significant viewpoints that have been published by a verifiable source, and should do so in proportion to the prominence of each."

I think there are two important points that need to be expressed. The first is that one doesn't need to prove that strong AI is possible in order to refute the Chinese Room argument. The second point is showing where the Chinese Room argument fails. In the thought experiment it is irrelevant whether or not Searle understands Chinese when he is executing the Chinese language program. The proper question would be whether or not the virtual computer running on Searle's brain understands Chinese. Then it becomes clear that because of the substrate independence of computation, the whole exercise of setting up the Chinese Room is pointless. Christocuz 22:03, 6 January 2007 (UTC)

I will second that the article seems quite biased towards Searle's point of view. I can think of several objections right off the top of my head that aren't listed (I'm not throwing them in because I believe this would be original research). They seem obvious and valid to me, so it is troubling that they aren't mentioned. David McCabe 21:05, 14 February 2007 (UTC)
Huh? The Chinese Room argument is by its very nature a POV, as is everything else in philosophy. I'm not sure it's a good idea to turn every philosophy article into a debate just because there's a large number of people who object to that viewpoint. Trac63 14:10, 19 May 2007 (UTC)

casual powers of the brain

From the perspective of somebody who has never heard this thought experiment before and therefore comes in with no pre-conceptions: The article makes sense, except that I am at a loss to what the "casual powers of the brain" are. Perhaps this can be expanded upon, or referenced to another article?

You're quite right. The only trouble is that the 'causal powers of the brain' are left fairly vague by Searle. I think he says in effect that 'computational power alone is insufficient to produce mind, so whatever causes mind is not computational – so let's just call it that causal part of the mind'. But this is just my POV, so I am loath to include it. Banno 21:10, Jan 2, 2005 (UTC)
Yes, "Causal," not "Casual."  :) I believe the "causal powers of the brain" refers to the particular capacity of a brain to "cause" a mind, through neurological activity. Strong AI wants a computer to be able to create a mind by running software, without needing to simulate the neurological activity of a brain. Such a simulation would presumably require complete understanding of brain function.

False premise reply

I'm not sure if the false premise argument is present in the historical literature, but it is commonly heard in discussions of the Chinese room. The version I put in the article is not quoted from one particular source, but is a simplified version of the typical argument. Here is one example of the false premise argument from a philosophy discussion site:
"So, how do you decide whether or not a system understands semantics? The Turing Test is one possible test. Indeed, the only way to test for semantics is to test for understanding in general. There is no measurable difference between understanding in general and an understanding of semantics. Syntax is sufficient for semantics. Searle's distinction between purely syntactical systems and semantic systems is illogical. There is no observable difference. If a system passes the Turing Test, it has demonstrated an understanding of semantics."

Given that this argument is not necessarily historic, does it merit inclusion in the article? Kaldari 00:06, 24 Jun 2005 (UTC)

Well, let's pick at the argument and see how it stands up.
1. the only way to test for semantics is to test for understanding in general
2. There is no measurable difference between understanding in general and an understanding of semantics.
therefore,
3. Syntax is sufficient for semantics.
4. But this contradicts Searle's second premise;
5. So Searle's argument cannot stand.
Is this the correct interpretation? If this is what you are suggesting, then the argument is a reductio. As such one can conclude either (syntax is sufficient for semantics) xor (syntax is insufficient for semantics). Given this, pedantically, the argument does not reach the conclusion it claims.
Remember that the Chinese room is an argument in support of (syntax is insufficient for semantics). What argument is presented against it? (1), above, appears non-controversial; it is almost tautologous, given that semantics is understanding. (2) above is less clear - I can't quite see what it might mean. But certainly, (3) simply does not follow from (1) and (2). Syntax is not even mentioned in (1) and (2); it just appears in (3).
So I'd say no, the argument is neither valid nor cogent, nor does it have a place in the literature. So it should not be included in the article. Banno 22:16, Jun 24, 2005 (UTC)

The real issue here is the false dichotomy between semantics and syntax. The distinction holds within purely formal systems, but breaks down completely otherwise, as in natural languages; these systems are schematic. If we imagine our brains/minds as being schematically structured (self-organizing, with tangled semantico-syntactic hierarchies) it really doesn't seem that far-fetched to imagine that a 'conscious' system can arise from a purely formal one... at precisely the level that this distinction fails. This is why using language as a metaphor is confusing in Searle's Chinese Room metaphor; no book of rules, even an infinitely long one, could ever exist for any natural language; you can't pluck one word out of Chinese, or any other natural language for that matter, and formalize its relationship with all the other words in the language, because this relationship is not 'fixed' in any way. The Chinese room is kind of a misleading mental exercise, as the premise of formalizing (creating a book of rules for) a system that is so obviously not purely formal (Chinese) is basically nonsensical. The notion of "purely syntactic rules" applies to a computer language or a mathematical language, but not Chinese! -hpblackett

second version
Hi Banno, the example I put on the talk page above was just a casual example of someone using the false premise argument as part of their criticism on a discussion board. The example was not especially well written or meant to be presented as a formal argument. Forget points 1 through 5. The false premise argument is actually extremely simple:
In the Chinese Room thought experiment, Searle asserts that purely syntactic rules (without semantics) are theoretically sufficient to pass the Turing test. He doesn't offer any justification for this assertion or its plausibility; he just says to "suppose" it happens. The false premise argument simply says that this assertion is wrong, i.e. it is impossible to pass the Turing test with purely syntactic rules, as the Turing test is essentially a test for semantics. Thus the conclusions drawn from this experiment are not justified, since it is based on a false premise. Here is the relevant original material from Searle:
"Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from tile point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers."
The false premise reply simply rejects this supposition as impossible. This seems like a pretty straight-forward and valid argument. What do you think? Kaldari 22:42, 24 Jun 2005 (UTC)

From the article:
It would be impossible for the person in the room to pass the Turing test merely with a book of syntactical rules. The rules would have to include semantics in order to actually fool the Chinese-speaking interviewer. If the rules included semantics, the person in the room would actually gain some understanding of Chinese. In other words, the Turing test is essentially a test for semantics.
By definition, if it is in the book of rules, it is syntax, not semantics. The point of the Chinese Room is that the person in the room does not understand the symbols in the way a native speaker would; all they are doing is following the rules. So the idea that the rules include semantics does not make sense. Perhaps Kaldari, you could fill out the argument to overcome this? Banno 22:29, Jun 24, 2005 (UTC)

You misunderstand the argument. Again, please disregard the example I gave above. It was confusing. The false premise argument does not assert that the rules include semantics. It accepts Searle's scenario that the rules do not include semantics. It merely states that such rules would not be sufficient to pass the Turing test. Kaldari 22:46, 24 Jun 2005 (UTC)
Here is perhaps a better statement of the argument:
Searle's assumption (Banno's emphasis) that it is possible to pass the Turing test using purely syntactical rules (without semantics) is wrong, since the Turing test is essentially a test for semantics. Thus the conclusions that Searle makes are based on a false premise.
Kaldari 23:25, 24 Jun 2005 (UTC)
Here is a better citation of the false premise argument: [1]. The most relevant section is the one titled "Holes in the Chinese Room". Kaldari 23:55, 24 Jun 2005 (UTC)

But - as it says in the article - Searle does not just assume that syntax is insufficient for semantics - he argues for it by presenting the Chinese Room argument: "Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese". Neither Searle-in-the-room nor the room as a whole understands Chinese; all they do is follow a set of rules for moving symbols around. Remember that the point of the Chinese Room argument is to show that it is not true, as strong AI claims, that "if a machine were to pass a Turing test, then it can be regarded as 'thinking' in the same sense as human thought". Nothing in the room "thinks" like a human; yet the room, by supposition, passes the test. Therefore according to Searle the claim of strong AI is false. Banno

Can you find a better citation? An unidentified blogger might not be the most reliable source; furthermore, points one and two appear to misunderstand the hypothetical nature of the Chinese Room argument; it is a presupposition of the thought experiment that the room passes the Turing test, so it is pointless to say that it might not. Point three is correct, but misses the point, which is that: if the room passes the Turing Test, and because it does not think like a human, it is wrong to suppose that a machine that passes the Turing Test ipso facto thinks like a human. Banno 07:48, Jun 25, 2005 (UTC)

monkeys and systems

I didn't write that Searle assumes that syntax is insufficient for semantics. I wrote that he assumes it is possible that the room passes the Turing test (without semantics). You write the same thing in your reply: "it is a presupposition of the thought experiment that the room passes the Turing test". And it is certainly not pointless to say that his presupposition is flawed! What if I made the following argument:
I have a monkey that is really smart. Suppose that I make it study day and night for five years so that it passes the SAT. The SAT is supposed to be a test to determine if someone is smart enough to get into college, but of course no matter how smart a monkey is, it would never belong in college. In this case, however, the monkey made it into college because of its great SAT score. Therefore the SAT is a flawed test.
The problem here is not in my logic, but in the assumption that if a monkey studies enough it would pass the SAT. Clearly, criticizing the presupposition is critical to debunking this ridiculous example. As the writer of the source I cited states: "for such an experiment to have any applicable results, the assumptions have to be possible at least."
As I have said before, please ignore that initial example I posted on the talk page, as it is far too scatterbrained to be a good example of the false premise argument. Kaldari 16:18, 25 Jun 2005 (UTC)

This is fun; we seem to be at cross-purposes here, and although there may be something hiding in your argument, it remains unclear to me what exactly that something is.

Remember that it is the proponents of strong AI who say that a machine that passes the Turing Test can think like a human, not Searle. Searle set the Chinese Room up as an example of a machine that does not think like a human, yet ex hypothesi it passes the test. Now, are you claiming that:

a) it is impossible for any machine, including the Chinese Room, to pass the Turing Test;

b) some machines might pass the Turing Test, but the Chinese Room is not one of them;

c) if the room passes the Turing Test, then ipso facto it is cognisant (it understands semantics)?

If (a), then you are simply denying the efficacy of the Turing Test, and not addressing the issues raised by Searle. If (b), then could you provide reasons for your claim? Banno

But I suspect that you wish to claim (c), which is the most interesting case. But this seems to me to be no more than a repeat of the systems reply. The systems reply says that the room indeed understands Chinese; Searle's reply is that, if it does, it does not do so in the same way that a human does. When Searle-in-the-room is talking about "rice", for instance, there is no understanding, no intentionality associated with "rice" - all that is done is the shuffling of symbols. Banno

So it seems to me that your reply is just a variation on the systems reply, and does not merit its own subheading. Banno 21:36, Jun 25, 2005 (UTC)

Banno, the false premise reply clearly claims B, as I have patiently tried to explain multiple times. If you need reasons to back up claim B, please refer to the citation in the article, particularly the section labeled "1". Basically, any set of purely syntactic rules is going to be limited in what questions it can answer in a meaningful or human-like way. As Chomsky points out, human language is "discrete but infinite". Thus in order for a set of syntactic rules to be able to pass the Turing test, the set of rules would also have to be infinite, which is impossible.
Also, I would appreciate it if you would remove your false characterization of the reply as representing claim C, as that claim is not central to the false premise argument. Thanks. Kaldari 28 June 2005 18:38 (UTC)

[edit] Style & POV

Just a note on style - your versions have asserted that Searle is wrong, rather than that if the reply is accepted, then Searle is wrong. So for instance, you had: "Thus the conclusions that Searle makes are based on a false premise", which asserts that Searle is wrong: clearly POV. This needs to be couched in a conditional, as in my: "if Searle-in-the-room passes the Turing Test, then ipso facto Searle-in-the-room understands Chinese; and therefore his second assumption is incorrect".

Also, it is a good idea to leave the "dubious" in until this discussion reaches some conclusion, at which point I will remove it. Banno 21:36, Jun 25, 2005 (UTC)

Banno, I wrote the false premise reply in the same style as the other replies. They all very obviously represent POVs. That is why they are called criticisms. If you want to NPOV the false premise reply, why not NPOV the other replies as well? For example, "Houser contends that blah blah blah" rather than "blah, blah, blah"? Adding your own rebuttal is not NPOVing! Kaldari 28 June 2005 18:51 (UTC)
I've used Searle as the main reference, essentially for convenience, since it is Searle who brings the objections together and replies to them. If someone wants to add an additional criticism, then given the contentious nature of the topic, it is reasonable to ask for attribution and for clarity. So, what is it that you see as POV in those replies? One could not object to the POV, as long as it is attributed. Since what you have presented lacks clarity and is not present in the literature, the only reasonable thing to do is to remove it until such time as it can be presented more suitably. Banno June 29, 2005 08:09 (UTC)
The objections themselves are not attributed; they are simply given as if dictated from the gods. Who says "perhaps the person and the room considered together as a system" understands Chinese? Who says that if you place the room inside a robot surely it can be said to "understand what it is doing"? They are written in the same sourceless POV style as I used for the false premise reply. The only reason I wrote it in that style was to be consistent with the other sections. The only sources attributed in the other 2 replies are the replies to the replies: which of course is Searle. If you want to completely NPOV the criticisms section, go ahead, but don't single me out just because I tried to match the style of the other sections! If anything, the false premise section is the most NPOV as it actually gives a small bit of context now: "Another argument, possibly only found outside academic discussion, is:" Kaldari 29 June 2005 17:56 (UTC)

That's why there are links at the bottom - where they should be. I recommend [Minds, Brains, and Programs], as it has replies to another four objections, much better than the concatenation I constructed here. I wrote the criticism section as an introduction, not an exhaustive account. If you want to improve it, please do, but use something with a bit more grunt than one blogger's opinion.

Far too many folk read one account and decide that the Chinese Room must be wrong, usually because they do not see what it is the argument is saying. The argument does not claim that AI is impossible. Nor that the brain is not a machine. But undergrad computer scientists the world over, after a quick scan of some third-party summary, decide it tries to do either or both. The argument claims that a machine's being able to pass the Turing test is not sufficient reason to suppose that the machine has a mind that is the same as a human mind. Your "false premise" does no more than claim that a machine that passes the Turing test passes the Turing test. It does not even address the issue. The statement "Searle's assumption that it is possible to pass the Turing test using purely syntactical rules is wrong" is not even close, since that assumption is not made by Searle, but by the advocates of strong AI. If you think otherwise, then tell us what more is involved in the Chinese room than syntactic rules.

So far as I can see, the argument you present is without merit, and should be removed. There is plenty of far better material you could use to mount a case against Searle. Banno June 30, 2005 12:34 (UTC)

Banno, I am not "mounting a case against Searle". I added the false premise reply because it is a point that often comes up in discussions of the Chinese Room on philosophy discussion boards. I'm sorry if the best-written version of it I could find was written by a blogger. It seems clear to me that you are rather obsessed with defending the Chinese Room argument, rather than writing a comprehensive article about it. It is apparent that you "own" the article and will only be satisfied once you have bullied me off with your nonsensical analyses of the "merits" of the argument, which, however simple, you seem completely incapable of grasping. Rather than try to explain for the nth time why your rebuttal is both absurd and original research, I'm going to stop wasting my time and leave the article to your disposal. I'm taking the article off my watchlist, so feel free to delete whatever you want and add pictures of pink bunnies or whatever suits your fancy. Kaldari 30 June 2005 14:56 (UTC)
That's unfair. The argument in "false premise" is inadequate; I had hoped that you would meet the challenge and produce a better argument. I think that the article as it stands is biased in favour of Searle, but I don't think that this should be fixed by simply adding any old junk that disagrees with him. I have been quite explicit, I think, in explaining the inadequacies of the section in question. If that peeves you, then tough, but don't blame me if you are unable to fix those inadequacies. Banno June 30, 2005 19:47 (UTC)

Argument's Failure

My comments in parentheses - Banno June 29, 2005 08:27 (UTC)

Searle's Chinese room argument's failing (clearly POV - Searle and others contend that the argument does not fail -Banno) stems from his inability to differentiate between the computational substrate and the computation itself. (Given what is said below, it appears that the anonymous author of this section is not aware of Searle's substantial discussion of the Background, which corresponds reasonably well with "computational substrate", but is a much clearer concept -Banno)

For a Chinese individual the computational substrate is the physical world with all its rules for chemical reactions and atomic interactions, the program is his brain. His computationally active brain (program) understands Chinese (the equation of "brain" and "program" is problematic - "brain" and "mind" might work -Banno).

In the Chinese room, the individual is the computational substrate. (Why? why isn't the physical world outside the room the computational substrate, as it is for the individual above? -Banno) He knows nothing of Chinese, merely manipulating characters. The program is the rulebook. The computationally active rulebook (program) computed by the individual understands Chinese.

In the Chinese room example, Searle takes the individual and claims to debunk strong AI by stating that he doesn't really understand Chinese. What Searle is doing is taking the computational substrate and stating it doesn't understand. None of Searle's brain's atoms and chemical reactions understands English. His active brain however does. (This appears to be a variation of the systems reply - the parts do not understand Chinese, but the whole does -Banno)

[interjection] Similarly one could say that in the water-pipe example Searle misinterprets the function of the human delivering the results. If we imagine the waterpipes as connections within the brain, the water as impulses, and the valves as synapses, we have (assuming equivalent structure and complexity) a snapshot of a Chinese speaker's brain, and therefore the system assumes any level of cognition that we could attribute to the Chinese speaker who served as a model for the system. The human in the room forms an input/output conduit and nothing more - transmitting stimuli/triggers to the brain and returning the appropriate responses dumbly as directed. A completely paralysed Chinese speaker may only have hearing for input and can only return sequences of blinks to signal his understanding; however, we would not say that this limitation was a result of his intelligence or understanding. What Searle does here is to deny that any such understanding takes place because the eyelids and ears themselves do not understand Chinese. This seems rather absurd to me; however, I appreciate that I may be reading his objection out of context.
I think, instead, a better argument would be that a Chinese speaker's brain was capable of configuring and adapting to ongoing stimuli to reach the point where it was able to understand Chinese. The waterpipe system, though complex enough to deliver snapshots of the Chinese speaker's current processes, is assumed incapable of such growth or self-adaptation. Surely the true test of any complex system's 'intelligence' is its ability to adapt to its environment ... one would require a system of pipes and plumbers which, from an initial configuration, could be taken anywhere in the world and would, over time, learn to provide 'appropriate' and 'self-determined' responses. To 'function' if you prefer.
Of course, I am a strong believer that there is nothing magical about the human brain or its thought processes. I believe that intelligence, free will, and determinism are largely illusory and result only from complexity at a level we will, as individuals, never understand. The facade is necessary for our evolutionary function and psychological wellbeing, so I think it is quite normal for many to find such a concept unbearable. Most of us would rather believe that there is a fundamental difference between brain and machine or between mind and program/data. I think we all need to get over our own sense of self-importance and accept ourselves as nothing more than complex pattern-matching systems which, in time, given the massive parallelism promised by quantum computing, will be not only emulated but inevitably superseded. From the humblest ant to a music-composing songbird, from algae to mankind ... we are all gods unto ourselves and dust unto the universe.
Of course, I have never composed a sonnet or symphony - so, in some people's evaluation, I may not be qualified to bring any meaningful ideas to this debate. GMC

Searle's counter argument that the individual could memorise the rulebook changes nothing. The substrate is still the individual's brain; the program is in his memory. When the individual runs the rulebook from memory, he doesn't understand Chinese but the active program does (This appears confused, since the active program would appear to be no more than the individual acting from the rule book...so how can the individual not understand Chinese when the active program does? - Banno)

To extend his counter argument, if a real Chinese person were to interact with the individual merely running the Chinese rule book from memory, we would have two entities that understand Chinese conversing with one another.

In a computer analogy, in the case of the real Chinese, the "Chinese" program, his brain, is run in "hardware" mode (what does "hardware mode" mean? -Banno). For the other individual, the "Chinese" program is run in "software" mode, the individual's brain's computational capabilities are used to run the "Chinese" program. In the end, there are two entities that understand Chinese conversing together.

The conclusion is one that has been known in computer science for a very long time. That computational substrate is irrelevant to the active computation. There is no such thing as fake computation. All computation is genuine. Computation is computation regardless of whether the substrate is physical or emulated. (? -Banno)

The poster is referring to Turing equivalence.--207.245.10.221 23:25, 2 December 2006 (UTC)

Mario eating magic mushrooms on a real Nintendo is the same Mario that eats magic mushrooms on a Nintendo emulated on my computer, and is the same Mario that eats magic mushrooms on a Nintendo emulated on a computer emulated on another computer.
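(A toy illustration in Python of the Turing-equivalence / substrate-independence point being made here; tiny_vm and program are invented names. The same trivial computation gives the same result whether it is run directly or one level down, inside an interpreter — the sketch takes no position on what that implies for understanding.)

 # The same computation, run "natively" and run inside a trivial interpreter.
 def run_native(a, b):
     return a + b

 # The identical computation expressed as data for a tiny stack machine.
 program = [("push", 2), ("push", 3), ("add",)]

 def tiny_vm(code):
     stack = []
     for instr in code:
         if instr[0] == "push":
             stack.append(instr[1])
         elif instr[0] == "add":
             y, x = stack.pop(), stack.pop()
             stack.append(x + y)
     return stack.pop()

 # Substrate (direct vs. emulated) makes no difference to the result.
 assert run_native(2, 3) == tiny_vm(program) == 5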

There are indeed responses to Searle that rely on differentiating emulation from simulation. Perhaps the author had these in mind. A clearer exposition would be most welcome. Banno June 29, 2005 08:27 (UTC)
This is all Original Research. Where are the sources that are making these arguments/counterarguments? Kaldari 29 June 2005 18:09 (UTC)
Agreed; and therefore the section should not be included. Banno June 29, 2005 18:32 (UTC)
A proof that Searle's argument is false: 1. Chinese-speaking humans "understand" Chinese. 2. The way in which they "understand" Chinese is by taking in stimuli through their ears, converting them to electric potentials, then sending them through the paths of the brain. 3. If you hollowed out the Chinese speaker's brain, and instead hooked up the ear nerves to a monitor, and sat a person down in front of the monitor with instructions for how to replicate the neural interactions that would have taken place in the brain, then carried out those interactions, then output the reply, the effect is reproduced. 4. By Searle's emulation argument, the Chinese speaker therefore does not understand Chinese. 5. By contradiction with the axiom given in point 1, Searle's argument is false. QED --Bmk 04:16, 17 June 2006 (UTC)
There's not necessarily any contradiction there. The Chinese speaker with the hollowed out brain might not understand Chinese. Remember, Searle is saying that there is more to understanding than just getting the right input/output correlations. Your axiom 2 assumes that there isn't any more to it than this, so you're just begging the question. Cadr 12:21, 22 June 2006 (UTC)


Axiom 2 looks like common sense to me, unless you wish to propose some mechanism by which non-physical processes may produce a physical result in the Chinese-speaker's behaviour. If we're not positing the brain as some sort of 'antenna' that communicates with an ethereal, non-physical entity that does all the 'understanding' itself and meat-puppets the human to act as if it understands, then we're stuck with the notion that electrical and chemical processes in the brain are enough to explain understanding. —The preceding unsigned comment was added by 204.174.23.210 (talk) 12:49, 8 January 2007 (UTC).

The short version

In this diagram, we show that "PROCESSOR + PROGRAM = PROCESS".

|----------|-----------------|--------------------|---------------------------------|
|          |    PROCESSOR    |      PROGRAM       |             PROCESS             |
|----------|-----------------|--------------------|---------------------------------|
| Searle:  | Laws of physics | Brain              | Mind, no Chinese understanding  |
|----------|-----------------|--------------------|---------------------------------|
| Chinese: | Laws of physics | Brain              | Mind with Chinese understanding |
|----------|-----------------|--------------------|---------------------------------|
| Room:    | Mind            | Rulebook           | Chinese understanding           |
|----------|-----------------|--------------------|---------------------------------|
| Memory:  | Mind            | Memorized rulebook | Chinese understanding           |
|----------|-----------------|--------------------|---------------------------------|

Notice in the last two cases that the mind (PROCESSOR) is not the one that has Chinese understanding.

From Searle's three-premise argument:

  1. Programs are purely formal (syntactic). (That is true, all the programs in the "PROGRAM" column are purely syntactic.)
  2. Human minds have mental contents (semantics). (That is true, the mind is a PROCESS that has semantics.)
  3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content. (That is true, the syntax(PROGRAM) is no closer to understanding Chinese than the PROCESSOR.)
  4. Therefore, programs by themselves are not constitutive of nor sufficient for minds. (That is true, the PROGRAM requires a PROCESSOR to create a PROCESS.)

We have to be careful not to compare apples and oranges. Although all of the above premises and the final deduction are true, the conclusion that Searle extrapolates from it - that computers (PROCESSOR) running human-made software (PROGRAM) cannot understand Chinese (PROCESS) - is a step beyond his logical arguments.

This is not Searle's conclusion. Quite the opposite. He does claim in several places that a suitable arrangement of a processor and interface could understand Chinese in the way a human does; he must, since he thinks that the human mind is such an arrangement. What the Chinese Room shows is that satisfaction of the Turing test does not imply possession of such a mind. Banno June 29, 2005 18:41 (UTC)
Is that all it's meant to demonstrate? It's not "directed at" the more general claim that "the appropriately programmed computer literally has cognitive states"? If not -- if this is a misconception so widespread that Searle himself believes it -- then we should take pains in the article to correct that misconception, starting with radical changes to the introduction. --Echeneida
Why? Banno 21:31, August 14, 2005 (UTC)


Searle's responses

Searle's defense against the systems argument is a little flawed. He suggests that if the man in the room memorizes all of the rules and leaves the room, he is able to converse in Chinese without knowing the language - then we have a case where he doesn't understand Chinese yet is able to speak it.

I would argue that we then have two intelligences in one brain. If you ask the guy a question in Chinese, he responds, intelligently, using his internalized rules. However, if you ask him in English what it was he just said, he won't know. This is the heart of Searle's defense - he doesn't understand Chinese. However, you can simply reverse the argument. Ask him a question in English - and then ask (in Chinese) what he just said - and his 'Chinese self' will be unable to answer.

There is complete symmetry here. You simply have two brains inside one head with no way (apart perhaps from speed of response and flaws in the Chinese rulebook) to know which is the 'real' man. That being the case, what reason have you to assume that either the Chinese brain or the English brain is the 'real' one? If you can't tell, what is to say that the Chinese half isn't intelligent?

This is what makes the Turing test a reasonable one. If you simply cannot in any way tell the difference between human responses and AI responses, then what reason do you have to say that the AI system isn't intelligent? To deny it intelligence is to claim that intelligence is a human-only property - a claim that is unfalsifiable. Unfalsifiable claims are not scientific. (unsigned)

Whoever wrote this understands the systems defense quite well, and has noticed a weakness in our description of it. I've added a sentence to try to fix that. Meanwhile, I've noticed that there are other defenses available, which probably deserve to be written up. Here's one place they can be found: http://www.iep.utm.edu/c/chineser.htm#SH2a [2]. Perhaps someone would be willing to do this. Alienus 06:13, 29 December 2005 (UTC)
I'd just like to add that while I think the anon comment was very insightful, I think the falsifiability issue is a bit of a red herring. Searle doesn't have to give you a test which will determine which half of the man's mind is the "real man". He is only seeking to establish the principled philosophical distinction between the two halves. In fact, Searle would be contradicting himself if he gave you such a test, because the indistinguishability of the Chinese room from a real Chinese speaker is a premise of Searle's argument. Cadr 21:57, 18 June 2006 (UTC)
In fact, one of the counterarguments is that Searle is contradicting himself. Searle's goal is to create a distinction, but it's not a principled one, nor is it a difference that makes any sort of difference. Consider the parallel of philosophical zombies, in which the premise states that a p-zombie acts in all ways like a regular person, yet the argument keeps distinguishing between p-zombies and people by positing what it sets out to prove: qualia. Both of these arguments are refuted, then, by operationalism (or, more specifically, behaviorism). Al 12:46, 22 June 2006 (UTC)
Well, there is no point getting into an argument about this here, so I'll try to keep this response brief. I agree that Searle's distinction is not principled, but it is backed by some pretty impressive intuitions. I think he's right that intuitively, we do feel that there is more to understanding than input/output correspondence. In my opinion, his most persuasive point in this regard (which I think is really much more persuasive than the Chinese room) is that if we ignore our intuitions about understanding/consciousness, we are pretty much forced to accept that livers, stomachs, etc. are conscious, since they can perfectly well be said to process information. Now, if we accept that it is unlikely that stomachs are in fact conscious, we have real evidence for qualia -- evidence which is stronger than the intuitive evidence from the Chinese room thought experiment, in my view.
Nobody says that information processing is intelligence. You can't have a conversation with a stomach. I don't know what you're on about here. —Preceding unsigned comment added by 67.180.15.30 (talk) 07:25, 5 May 2008 (UTC)
I'm not quite sure what you're getting at with behaviorism and operationalism. Behaviorism in the strict sense is incompatible with functionalism anyway. Operationalism amounts to a refusal to consider the possibility that conscious things have intrinsic qualities responsible for their conscious nature, and therefore doesn't refute anything except in the trivial sense that skepticism refutes everything. Searle raises an important question for functionalists -- the problem of distinguishing between things which think and things which don't -- which operationalism is inherently incapable of addressing.
Personally, I think Searle is probably wrong, I've just not come across any completely convincing refutation of his arguments. Cadr 14:37, 22 June 2006 (UTC)

I agree that we shouldn't belabor this point or turn Talk into Debate, but it does help to discuss these ideas a bit, so that we can figure out what parts are worth expanding, summarizing and so on. I'll try to be brief, though.

Searle's argument is based on starting with an intuition and magnifying it. The problem is that the intuition is itself suspect. Like many intuitions, it gives a reasonable approximation of the truth, at least when applied to typical, everyday situations. Yes, there is such a thing as consciousness, as what red looks like and even what it's like to be the Pope. However, it doesn't necessarily follow that these things exist in a manner that, even in principle, can be separated from behavior.

If we understand consciousness as the capacity to behave in the way that we define as conscious, if perceptions exist solely in terms of behavior and behavioral dispositions and what it's like is in principle shareable, then we keep all the necessary parts of consciousness without violating physicalism. This is what I meant by behaviorism and operationalism. If consciousness is operationally defined in terms of behavior, p-zombies become self-contradictory, and the Chinese room may well qualify as conscious. Al 17:59, 22 June 2006 (UTC)

He seems to make a related point here:
The only motivation for saying that there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. The example shows that there could be two 'systems', both of which pass the Turing test, but only one which understands; and it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands Chinese has a great deal more than the system that merely processes Chinese. In short, the systems reply simply begs the question by insisting without argument that the system must understand Chinese.
Cadr 22:12, 18 June 2006 (UTC)

[edit] What about Gödel's incompleteness theorems?

The rulebook is a fixed set of rules, presumably complex enough to be able to give answers about natural numbers and logical statements – therefore it is strong enough that Gödel's incompleteness theorem is applicable. That means that there are inputs that are just undecidable according to the rulebook – the person would not be able to deduce the correct reply, nor even find out beforehand whether he is able to find the reply (see Halting problem). Of course, the rulebook might not contain only first-order logic, but also second-order logic, and this complicates matters (but from the original description, it seems strongly that only first-order logic statements are used). You can distinguish this system from a native Chinese speaker by timing responses, or by giving double-meaning questions – unless there is some randomness in the rules (e.g. flip a coin and if it falls this way, write this character).

However, the real question is "can a natural language be axiomatized with a finite number of first-order (or even second-order) logic rules, or an axiom schema?" rado 10:19, 28 December 2005 (UTC)

How do you know the human brain can't be axiomatized with a finite number of first-order logic rules? Godel doesn't really come into this.--Robert Merkel 22:32, 28 December 2005 (UTC)
But the human brain is not a rule set. The Room is a machine. LoopZilla 10:05, 29 December 2005 (UTC)
You're indulging in Proof by assertion. How do you know the brain isn't a rule set? --Robert Merkel 10:11, 29 December 2005 (UTC)
That is in fact a deep philosophical question. Even if we are materialists, we cannot be quite sure - it comes down to the question of whether a physical object can be axiomatized, and that is not trivial to answer. Physics seems to be based on the assumption that it has to be axiomatizable, but in fact it is discovering more and more axioms as we go deeper and deeper (quite comparable to mathematics: in order to explain the whole theory we have to start adding additional axioms, and Gödel shows that we can never reach the end).
The human brain can be crudely axiomatized at the level of neurons and synapses, and almost perfectly at the molecular level, but if we want to get rid of the "almost", we have to go deeper into quantum wave functions and probabilities, and different quantum theory interpretations come forth. The "Chinese room" is even much cruder than the synaptic level, and therefore I think it cannot reach the level of sophistication needed to simulate an average Chinese-speaking person - for that, you need axioms and rules describing at least 10¹¹ neurons and their connections. rado 14:37, 30 December 2005 (UTC)
If there were a set of rules for a brain it would just follow those rules, with determined inputs. Most humans have brains that act with logic and also illogical aspects. Examples are fear and pain. LoopZilla 20:57, 29 December 2005 (UTC)

I just want to point out that timing the response is not a fair method for determining if there's really a mind inside. One of the inherent distortions of the Chinese room argument is that any implementation would have to be incredibly slow, on the order of centuries between responses. Dennett argues that this fact confuses our intuitions terribly. Alienus 19:00, 29 December 2005 (UTC)

To make a long answer short: the room is not required to give correct yes or no answers to any question expressible in Chinese. It just gives the answers a Chinese speaker would give.

[edit] Searle, Language and stuff like that

Searle's gedanken is really a smoke screen. The rule set of the room remains fixed, no matter how many questions are asked and answered.

Human intelligence creates language. New words, phrases, dialects and languages are born in a continuous manner.

TXT, slang, poetry, polari....

'Nuff said.

LoopZilla 18:25, 8 January 2006 (UTC)

Sorry, a computer program can be written to create language as well.

The Chinese Room is not capable of doing so, showing that it's a poor model of AI. A computer program can even be self-modifying, again something the Chinese Room can't do.

Exile 14:44, 17 January 2006 (UTC)

Uhm, if the virtual Chinese person were incapable of creativity, that would be a dead give-away, wouldn't it? In order for Searle's thought experiment to work, the rule book would have to be so huge as to include all creative acts that the virtual Chinese person would have been capable of. Strains credibility, but that's what Searle demands. Alienus 18:53, 8 January 2006 (UTC)
The Chinese room is of course a Turing machine. So the demand you speak of is not Searle's, but that of the advocates of AI. What Searle does is show us some consequences of the idea that a Turing machine might be capable of thought. Banno 20:24, 8 January 2006 (UTC)
That doesn't follow. See above for why. Alienus 23:50, 18 January 2006 (UTC)
Your argument about creativity doesn't necessarily hold up. According to Douglas Hofstadter, creativity is variation on a theme. Therefore, given thematic input, there is no reason why an extremely well programmed Turing machine could not write variations of the theme to itself. In fact, there is a fair amount of research into creative computers, including computers capable of writing stories and making music. And btw, Banno's interpretation of the purpose of Searle's thought experiment is correct. Shaggorama 10:08, 6 February 2006 (UTC)

The rulebook can have an arbitrarily large (but fixed) amount of memory by storing the memory content encoded in the currently executed rule number. It is an absurdly inefficient method, but it works. The rulebook can memorize earlier experiences in that memory and make decisions based on them later on.
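
As a rough illustration of that point (the rule numbers and phrases below are invented for the sake of the sketch, not taken from Searle), a fixed table of rules can "remember" what it has been told purely by which rule number it is currently on:

    # Minimal sketch: a fixed rulebook whose only "memory" is the number of
    # the rule currently in force. Being told the dog's name moves it from
    # rule 0 to rule 1, and rule 1 is the one that can answer the question.
    RULEBOOK = {
        # (current rule, input)          -> (reply, next rule)
        (0, "my dog is called Fido"):      ("nice name", 1),
        (0, "what is my dog called?"):     ("you haven't told me", 0),
        (1, "my dog is called Fido"):      ("yes, you said so", 1),
        (1, "what is my dog called?"):     ("Fido", 1),
    }

    rule = 0
    for question in ["what is my dog called?",
                     "my dog is called Fido",
                     "what is my dog called?"]:
        reply, rule = RULEBOOK[(rule, question)]
        print(question, "->", reply)

It is absurdly inefficient, as the comment above says, since every possible remembered fact needs its own copy of every rule, but nothing in principle stops a fixed rulebook from working this way.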

[edit] Second language speakers

It can be argued that, Assembly being the native language, Chinese would be a second language to the computer. I don't speak Chinese, but I did take several years of Spanish. When I have to pee around Spanish speakers, I formulate the English question "where are the bathrooms?", then translate it as follows: "where is Donde, bathrooms are Banos, plural so los banos, where are is donde esta, so donde esta los banos?" and only then do I ask the question in Spanish. The reply comes "alli, a la izquierdo de los sombreros". My brain takes the words and picks them apart: "alli is over there, so I should look where they're pointing. Los sombreros... oh, the hats. OK, by the hats... izquierdo, that's um... left. Left of the hats. Got it. Now I want to thank them. Thanks is... gracias." then I say "gracias." I am not fluent in Spanish, but I am considered to speak it. How is that different? Kuroune 03:40, 12 February 2006 (UTC)

Because you know what a bathroom is. —The preceding unsigned comment was added by 204.174.23.210 (talk) 13:01, 8 January 2007 (UTC).
This should never happen, not even at a basic level, and definitely not after several years. Spanish words should be like synonyms for the English ones, and Spanish grammar should just be a different way to create sentences. You should be able to formulate the question directly in Spanish. You probably learned it in some really wrong way.
What you are trying to do is not easy even for people who are fluent in both languages. Everyone can learn to speak many languages, but using two languages at once is an extremely hard thing which only a few people can do. Even people who speak both languages at a native level often fail if they try to translate between them in real time. Errorneous 03:42, 21 July 2007 (UTC)

[edit] Confusion

Maybe I'm just really confused, but I personally don't think this argument works. I mean, isn't our brains' understanding of natural languages really just a matter of tables of words that we cobble together into coherent (usually) sentences? How is this any different for a machine?

You're not necessarily confused. Many people who have studied artificial intelligence think that the argument is wrong. --Robert Merkel 06:32, 2 March 2006 (UTC)
I completely agree. Language is essentially a response to events, so for the Chinese Room argument to be correct one would have to assume that the computer is translating the Chinese on the fly via the rulebook into its 'native' language instead of responding directly to the Chinese with more Chinese. —Preceding unsigned comment added by 220.253.59.24 (talk) 06:01, August 24, 2007 (UTC)

[edit] Structure

The article is becoming quite messy. The argument is presented in the introduction in a form that does not appear in Searle's writing. The original paper is described twice, the citation being made the second time around; the thought experiment is described at least twice, in different parts of the text. The Turing test is poorly described. The history of the argument is presented out of sequence and is repetitive.

I suggest going back to an earlier version. Banno 23:18, 2 April 2006 (UTC)

[edit] [OT] Of course, students produce right answers without understanding...

One of the funny things about this is that it evokes what literally happens in educational settings, especially educational settings where progress is measured by test scores.

A student can memorize a sentence such as "the heart is the organ that pumps blood." He can then correctly answer a variety of questions, multiple-choice and otherwise. "What organ of the body pumps blood? ________ " The heart. "True or false: the liver pumps blood." F. "What fluid is most closely associated with the heart? a) Bile b) Phlegm c) Lymph d) Blood?" D. In fact, he could memorize a dozen such sentences, and then apply linguistic manipulation to them and paraphrase them to produce a short essay on circulation, good enough to get a C- anyway.

He can do all this without having the slightest understanding of hearts, blood, or pumps. It could just as well be "the gostak is the erigon that distims the doshes."

My own experience is that perhaps 80% of what I learned in college, including those subjects I aced, was this kind of learning. Not quite as bad as that; I wasn't just applying syntactic machinery to memorized verbal content. I had formed a mental model. (Offhand, I'd say that applying syntactic machinery to memorized verbal content is good enough to get you a C; any mental model will get you into the B+ or A- range).

But in many cases I had formed a sketchy, crude, or inadequate mental model. In the case of knowledge I actually used over the years, I acquired a much deeper understanding. For the material I really know, my test answers and scores would not be much different now than they were decades ago, but the mental model I would be using to produce those answers would be utterly different--and much more accurate when applied outside the narrow range being exercised in the test. Dpbsmith (talk) 12:34, 12 April 2006 (UTC)

The computer could 'understand' a topic once it is given more information about it though. For instance, using your example of not understanding what a heart is, one could give information to the computer on what the heart is, and further information given on each part of the definition it did not already know. While the data needed to make this happen would be an incredibly large amount, it is still possible.
Knowledge and, I would propose, consciousness, is basically the linking of words together in one's 'native' or usual language.
I would just like to add that I have no formal education in these areas, and my opinions are my own.

[edit] Problem with the article

The article makes an important mistake quite near the beginning when it says the following: "Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese symbols, looks them up on look-up table, and returns the Chinese symbols that are indicated by the table.".

It is essential to the functionalist argument that the book in the room is *not* a look-up table. Such a situation would be analogous to a 'computer' that simply had an enormous hard drive and responded to any input by simply retrieving the correct response and spitting it out. This does not fit the functionalist conception of mind [incidentally I'm not sure the article makes clear enough the relationship between machine functionalism and 'hard AI'] whereby a brain is a computer running a suitable program - ie it needs to *work out* what to respond with based on the input and various rules, not simply look it up.

In Searle's Chinese Room there are two piles of cards, one which constitutes a book in Chinese, and another which contains background information (again written in Chinese). The book, which takes the place of the program in the analogy, gives instructions in English for how to use cards from each pile to respond to an input based on the structure of that input. While most of the article does not continue to treat the argument in this fallacious way, that opening section should still be changed to avoid perpetuating the misunderstanding.


Another change that should be made is in the first of the objections. "This leads to the interesting problem of a person being able to converse fluently in Chinese without "knowing" Chinese, and a counterargument says that such a person actually does understand Chinese even though they would claim otherwise. A related argument is that the person doesn't know Chinese but the system comprising the person and the rule book does.". The last sentence is wholly superfluous, as it is not a 'related' argument but a restatement of the argument made at the very beginning of the paragraph. The first sentence is fine, but it would be good to add the supporting argument against the person understanding Chinese - i.e. that while they might be able to say the words for "the sun is shining", they could never know that those words actually related to the sun.

These problems have been fixed. ---- CharlesGillingham (talk) 09:22, 5 May 2008 (UTC)



This paragraph is ugly too. "But what if a brain simulation were connected to the world in such a way that it possessed the causal power of a real brain—perhaps linked to a robot of the type described above? Then surely it would be able to think. Searle agrees that it is in principle possible to create an artificial intelligence, but points out that such a machine would have to have the same causal powers as a brain. It would be more than just a computer program."


I'd be happy to make all these changes myself, but I have made no significant contributions to Wikipedia before, and would be nervous doing it now without some agreement.

[edit] Computer is NOT a look up table; original Chinese room definition wanted.

Quoting the article:

"Suppose that, many years from now, we have constructed a computer which behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese characters as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test."

Such a computer would fail the Turing test - it wouldn't be able to answer "What was my previous question?" or "How long have we been chatting?".

A computer usually has an internal state (memory, registers, etc), so "consults a large look-up table (as all computers can be described as doing)" is wrong or at least incomplete.

Could someone find and post the original definition of the Chinese room, whatever it was?

--Whichone 20:39, 16 June 2006 (UTC)

That is the original definition (not sure if it's a verbatim quote, but it's accurate). You shouldn't take the idea of a "look-up table" too seriously -- Searle wasn't concerned with the precise mechanism. The instructions would be sufficiently complicated that the room would be able to answer the questions you gave. The room is a thought experiment, it's not supposed to be something which could actually plausibly exist. Cadr 20:50, 16 June 2006 (UTC)
Yes, but the salient point here is that not all computers can be described as consulting lookup tables, so the assertion in the article is simply false. It's also not clear to me from Searle's original article that he's talking about lookup tables. He certainly talks about symbol sets, and correlating symbols from one set with symbols from others. But it's not clear that he's talking about a simple one-to-one mapping of questions to answers. (Indeed, in the responses section, he countenances the idea that the man in the room needs to perform various calculations in the course of selecting the correct set of answer symbols.)
Whether or not Searle's argument fails if he's talking about a lookup table, the article shouldn't say he's talking about a lookup table if he's not. 68.239.37.231 23:33, 13 September 2006 (UTC)VP
"The room is a thought experiment, it's not supposed to be something which could actually plausibly exist." It can't be Searle's opinion, otherwise the paradox degenerates into a trivial "Suppose we live in a world where both nondestructible objects and unstoppable forces exist - what would happen if they meet? " --Whichone 21:58, 16 June 2006 (UTC)
Well, first off, the Chinese room isn't a paradox. In point of fact, it would not in the least trivialise Searle's argument if it were shown that the Chinese room (or something like it) couldn't exist. Searle is trying to show that functionalist ideas about what constitutes thought or consciousness are wrong. He does this by imagining the existence of a hypothetical object which a functionalist would (Searle argues) be obliged to say was capable of thought/consciousness, and saying: "look, this hypothetical object is clearly capable of neither". The question of whether or not the object can or does exist is simply irrelevant. So far as I know, no criticisms of the Chinese room argument (of which there are many) disagree on this point. I suggest that you read Searle's original article to get an idea of what it is he's trying to argue. Cadr 22:43, 16 June 2006 (UTC)
You're correct here. While there are what I would consider convincing counter-arguments against the Chinese room, none are based directly on practicality. The closest is an aside by Dennett about how, in addition to being false on stricter grounds, the argument is misleading because it hides the fact that any attempt to emulate a person's behavior by explicitly manipulating symbols would lead to reaction times measured in decades, not milliseconds. The net effect of this distortion is to downplay the amount of work involved and to piggyback on the intuition that not enough work could be done in realtime by such a room to succeed in being conscious. Al 15:54, 17 June 2006 (UTC)
Even 'decades' falls far short of the true magnitude of the difference. Given current estimates of the computational power equivalent of a human brain of 10^14—10^17 operations per second, a Chinese room staffed by one human agent at a time could be expected to have typical response times on the order of millions or even billions of years.--207.245.10.221 21:09, 1 December 2006 (UTC)
Whichone is of course correct, however I think the problem here is the confusion of 'implausible' with 'impossible'—it is, after all, the very purpose of a thought experiment to establish the boundary between the two. A classic example in SR of a thought experiment invalidated by an impossible premise is the scenario in which one sends a signal faster than light by pushing on one end of a long and infinitely rigid rod. (SR places an upper bound on the rigidity of any physically possible material.)
A recent debate in GR concerned whether or not causality could be violated by a pair of parallel cosmic strings passing each other close by at relativistic speeds. While no-one truly expected to find such a thing out in the real universe, it raised serious questions about the nature of time in GR until it was definitively established that this arrangement simply could not appear in any spacetime with physically realistic boundary conditions. --207.245.10.221 22:49, 2 December 2006 (UTC)


[edit] Citations in replies

The replies are in great need of supporting citations. Banno 21:41, 12 September 2006 (UTC)

I do know that the "robot reply" is attributed at least somewhat to Jerry Fodor, but that's all I've got. One source that could be used is the textbook Philosophical Problems: An Annotated Anthology, edited by Laurence BonJour and Ann Baker. The argument would probably need to be rephrased to fit what's in the book. --Clpo13 06:17, 14 November 2006 (UTC)


[edit] Reductio Ad Absurdum

I may be wrong, but I'm pretty sure this argument as stated is not sound:

1. If Strong AI is true, then there is a program for L such that if any computing system runs that program, that system thereby comes to understand L.
2. I could run a program for L without thereby coming to understand L.
3. Therefore Strong AI is false.

Statement 1 only asserts that there must be 'a' program for L such that etc. Statement 2 then asserts that 'a' program can be run without gaining understanding. That, however, does not necessarily mean there is 'no' program that could give a computing system understanding of the language, just that there is definitely one that doesn't.
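
To make the scope issue explicit, here is one possible formalization (my notation, not Searle's or the article's), writing R(S,P) for "system S runs program P" and U(S) for "S understands L":

    (1)  \exists P \,\forall S \,\bigl( R(S,P) \rightarrow U(S) \bigr)
    (2)  \exists P \,\exists S \,\bigl( R(S,P) \wedge \neg U(S) \bigr)

(1) and (2) are jointly satisfiable, so the conclusion doesn't follow from them as stated; to refute (1), the second premise would need to be strengthened to \forall P \,\exists S \,\bigl( R(S,P) \wedge \neg U(S) \bigr), which is presumably how Searle intends the thought experiment to be read (the room is supposed to work for any program whatsoever).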

That section appears to be original research, anyway. Should be removed. Banno 21:24, 17 November 2006 (UTC)

[edit] Searle's Dualism Concept

I firmly believed Searle's Chinese Room argument was invalid until I read the lengthy document containing his original argument and all his responses. He talks about Dualism. Now I think he's not absolutely wrong. The concept of dualism is crucial to build intelligent machines. It basically says that we humans work at two levels, the mind and the brain. The mind is more abstract and is not concerned with how the brain actually processes information. The brain does all the work, and the mind is only aware of what's being done, not how it's being done.

A similar approach, I believe, would be useful for programming intelligent systems. An abstract layer, just like the mind, which is not concerned with internal processing details, interacts with the world. Another underlying layer, the brain of the computer, does the actual processing. It seems logical because by representing actual concepts or objects as symbols, processing becomes easier. I believe the brain also transforms input into some other form to facilitate processing.

I'll refer to the computer's mind and brain as simply mind and brain. An additional layer between this mind and brain would be useful. This layer is basically the interpreter between the two. It understands the worldly concepts, and also the underlying representation, and can convert to and from these forms as needed. So, the mind never needs to worry about internal representation. The intermediary layer converts the information to a form the mind can understand. Likewise, the brain does not worry about any sort of conversion.

It's not very different from how compilers work, except that the mind and the brain have an associated memory, which is ongoing and doesn't get deleted when the program stops executing.

But I still think we don't have to replicate the exact human structure to achieve this intelligence. These layers, which can be implemented as programs, could very well understand what's going on, and at the same time, exhibit intelligence.
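
As a very rough sketch of the layering being described (the three layers, the word/code tables and the example are all invented here purely for illustration, not a claim about how such a system would really be built):

    # Toy three-layer arrangement: a "mind" that deals only in words, a
    # "brain" that deals only in internal integer codes, and an interpreter
    # layer that translates between the two, so neither side needs to know
    # how the other represents things.
    WORD_TO_CODE = {"red": 1, "apple": 2}
    CODE_TO_WORD = {code: word for word, code in WORD_TO_CODE.items()}

    def interpreter_encode(words):    # mind -> brain
        return [WORD_TO_CODE[w] for w in words]

    def interpreter_decode(codes):    # brain -> mind
        return [CODE_TO_WORD[c] for c in codes]

    def brain_process(codes):         # internal processing on codes only
        return sorted(codes)

    def mind(words):                  # the mind never sees the codes
        return interpreter_decode(brain_process(interpreter_encode(words)))

    print(mind(["apple", "red"]))     # -> ['red', 'apple']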

[edit] ELIZA

I'm surprised that this page makes no mention of ELIZA, which seems to be a pretty strong argument in favor of the Chinese Room argument.

ELIZA is a poor argument in favour because it's a very poor example of an AI. It is certainly a *long* way from passing the Turing Test. ELIZA-like programmes are a dead end in Strong AI design (although they have other uses in a Weak AI context, such as computer-human interactions for smart telephone systems and the like). In fact, if anything, ELIZA and its ilk are a prime source of Searle's misunderstanding of AI and the reason he came up with such a specious argument in the first place. FSharpMajor 13:20, 5 March 2007 (UTC) (PS: this talk page is more interesting than the article)

[edit] Sources

I'm new to wikipedia, so sorry if I ask something trivial. Why is this article flagged as having no sources? Are the Related works not considered to be sources? Thanks Chiara 09:05, 16 February 2007 (UTC)

I was wondering the same. There are a lot of {fact} tags in there that aren't really needed, as most of the material is based on one primary source, the Minds, Brains and Programs article itself. There should be a cite for some of the criticisms, though. FSharpMajor 13:23, 5 March 2007 (UTC)

[edit] Representation the mind and AI

Searles "Chinese room" contains no represenations of the world external to language, or for that matter anything. The room operates purely at the level of if input then output. Real AI on the other hand would have to have an internal "picture" the real world because a table like Searle describes would take too long to write and too much processing power to look through. In other words the Chinese room doesn't understand anything not because it lacks the "casual powers of the brain" but because it lacks the right cognitive architecture. If the Chinese Room used represenations of the way the world is in order to generate it's responses, and was capable of adapting it's current representation of the world based on data instead of just a massive list of if then rules then it would be sentinent.

Did that make sense?

Yes, but Searle's argument doesn't say anything about the internal structure of the programme (as mentioned above, it's not just described as a look-up table but as a full-blown program, although he misleads sometimes because he describes the instructions as also being in Chinese). One thing someone alluded to above is that he is also very quiet about the amount of space the rule book would have to devote to conditional branching, loops and so on - that is, how little of it would relate to the content. So anyway: according to Searle the program can have any architecture at all, and it's still not conscious. (And incidentally, it certainly would have to have real-world knowledge, otherwise it wouldn't seem to be a native speaker of Chinese - or possibly it would seem like an aphasic native speaker of Chinese.) FSharpMajor 13:27, 5 March 2007 (UTC)

You're missing the point. All a computer does is manipulate symbols and move them from one place to another very quickly. A program is nothing more than a set of instructions for the manipulation and movement of those symbols. Searle-in-the-room is doing essentially everything that a computer does, except that the hardware is a human with a set of instructions and symbols on paper.

[edit] Lack or presence of dynamic state

The article does not mention any dynamic state, which is necessary to give contextually meaningful answers or more generally answers about the past. If Searle's scenario does include dynamic state, then that should be mentioned. If it doesn't, then that should be mentioned. The argument fails in different ways (category error versus irrelevancy) depending on whether there is a dynamic state. But it would perhaps be wrong to mention the failure modes since as far as I can recall from the SciAm article the simple ways that the argument fails were not discussed so that that would probably constitute "original research", no matter how trivial it is. Alfps 18:08, 27 March 2007 (UTC)

This annoyed me too. Strong AI would be able to continue to understand Chinese even as the language changes. A static set of rules with no state would not be able to do this, which would completely invalidate the argument. Whether there is a dynamic state is extremely important and should definitely be mentioned in the article. Herorev 03:16, 22 July 2007 (UTC)
An example: Imagine someone telling the machine the name of their dog. The machine processes this input through its rules. Then the person asks the machine what the name of their dog is. A machine without state would not be able to say, and would thus not be able to pass the Turing test. Therefore, dynamic state is an absolute requirement for a machine to pass the Turing test. I looked at the original description of the Chinese room by John Searle, but it is extremely vague and leaves out many details that are needed to properly consider it, so I don't really know whether it has state. Herorev 03:52, 22 July 2007 (UTC)
The rulebook can potentially contain a set of instructions equivalent to any Turing machine, so yes, the Chinese Room has dynamic state. Cadr 22:31, 6 October 2007 (UTC)
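A small sketch of that point (the rules and phrases below are mine, for illustration only): the instructions can stay completely fixed while still directing the operator to write on, and read back from, the scratch paper, which is all the dynamic state the dog's-name example above needs. Unlike the encode-everything-in-the-rule-number trick discussed earlier, here the rules stay tiny and the memory lives entirely on the paper.

    # Fixed rules plus writable scratch paper: the rules never change, but
    # they direct the operator to record facts and look them up later.
    def room_step(question, scratch_paper):
        if question.startswith("my dog is called "):
            scratch_paper["dog"] = question[len("my dog is called "):]
            return "nice name"
        if question == "what is my dog called?":
            return scratch_paper.get("dog", "you haven't told me")
        return "please rephrase"

    paper = {}
    for q in ["what is my dog called?",
              "my dog is called Rex",
              "what is my dog called?"]:
        print(q, "->", room_step(q, paper))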

[edit] Problem of definition

Either I don't understand this discussion, or there is a major flaw. It is all about whether the Chinese Room understands Chinese or not. But what is the definition of "understanding a natural language"? If that isn't defined very clearly, then it is not possible to prove whether the Chinese Room does it or not. --LarsPensjo 19:13, 19 August 2007 (UTC)

[edit] Bold Edit

I did a bold edit, because I know that this argument is not well received anywhere except in philosophy. I don't want to confuse a reader. It does fail the "argument from bogosity" test which another guy pointed out earlier. Likebox 09:31, 23 September 2007 (UTC)

[edit] Clearly biased article

I'm not sure whether this is the result of one person's edits or not, but this article is clearly insanely biased, especially the Criticism section, which literally states: "This point is well accepted by everybody." This statement is obviously patently false, and the rest of the section is similarly ludicrous. This needs to be fixed, or at the very least, flagged. —Preceding unsigned comment added by Max Martin (talk • contribs) 15:25, 28 September 2007 (UTC)

Hello, I wrote that, and it may be biased, but it's a point of view which is very common among scientists. It was expressed many times in the talk page, and it only takes up one section. It is one point of view, and it gets its airing. The other points of view are expressed too.
The point which is "well accepted by everybody" is that a computer program without a large store of memory cannot think. This is well accepted by the whole world. I don't know why you think it is controversial. A computer program without memory is just a giant lookup table, and cannot remember anything. It would fail the Turing test during the following exchange:
Hello! Are you a computer?
No, of course not.
What did I just say?
----?
And a computer program with a large memory is not a lookup table, but an enormous library of meticulously cross-referenced books which are constantly rewritten. This is not conveyed by the Searlish argument of the previous section. Likebox 03:04, 29 September 2007 (UTC)
Searle allows the operator inside the room to have a pencil and paper, so "without a memory" is not applicable.
Turing Equivalence means a system with a single large rule-book and a copious memory could do everything a complex system of multiple books could do. 1Z 15:08, 3 October 2007 (UTC)
Obviously true, but the argument as written did not mention memory, and only later talked about "moving symbols around", which is not the same as memory, since memory requires you not just to move symbols you receive around, but also to manipulate symbols that store the memory of the computer while the computer is just sitting there thinking. The phrasing of the argument makes this point completely unclear, and it seems to me on purpose, so as to obscure the massive manipulation of information going on inside the computer. Likebox 16:53, 3 October 2007 (UTC)


The article as written does mention paper and pencils; the point is not moot, it is just false. 1Z 20:21, 3 October 2007 (UTC)
That was added a few hours ago, by user:peterdjones. I couldn't do it because I didn't know if Searle included pencils and working paper in the original article. It certainly wasn't anywhere in the presentation here. Likebox 00:07, 4 October 2007 (UTC)
OK--- I've been thinking about this. I thought I was speaking for a huge community when I wrote "criticisms", with definite cites to (at least) Douglas Hofstadter and Daniel Dennett, and backup from at least two previous contributors to the talk page. But the words "insanely biased" really got to me. I would appreciate it if other people could chip in with their views, because I reverted the criticism section after a delete, and I don't know if that was justified. Perhaps the criticism belongs in the later section, where the Searlish original author put it. I don't know what the majority thinks. Likebox 05:21, 3 October 2007 (UTC)
The current Criticism section is completely unacceptable. The claim that the CR has no memory is false: from the original article, "The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols." 1Z 15:13, 3 October 2007 (UTC)

OK--- so that needs to be fixed in the previous section and then the criticism section can just focus on the system reply. But I don't want to touch the previous section because Searle's argument sounds so juvenile to my ears that I couldn't possibly be fair.Likebox 16:53, 3 October 2007 (UTC)

To say that an argument is "accepted by everybody" is biased phrasing, no matter what argument you're discussing. In this case, it's blatantly false that a criticism of Searle's argument is accepted by everybody, because everybody includes Searle himself. Even statements such as "the earth is round" are not accepted by everybody; whatever your opinion of someone who disagrees with you might be, you cannot pretend that they do not exist and that they do not have a different opinion. Including a section with criticisms of this argument is entirely appropriate; claiming that these criticisms are universally agreed upon is both foolish and arrogant. If everyone agreed that Searle's argument was nonsense, nobody would have bothered making a Wikipedia page about it. —Preceding unsigned comment added by Max Martin (talk • contribs) 17:45, 3 October 2007 (UTC)

It depends on what is said to be "accepted by everybody". I quote myself "Searle's argument at best establishes that a computer without memory cannot think or understand; a point which is well accepted by everybody". That point is well accepted both by Searle, and by his opponents. Searle does not think that any computer can think or understand, and the issue of memory is moot. None of Searle's opponents make the claim that a computer without memory can understand.
The disagreement only arises when Searle takes the further leap to say that any information processing system cannot think or understand, even one with memory. This argument is controversial, because the system with memory has a qualitatively different form than the system without memory. I would appreciate feedback from more than one person on this. Likebox 18:58, 3 October 2007 (UTC)
Apropos "if everybody agreed that Searle's argument was nonsense, nobody would have bothered..." The fact that there are a lot of foolish arguments in the world does not mean that they should be presented on wikipedia without a balanced counterargument. The counterargument was made many, many times in well respected places. Douglas Hofstadter discusses it at length in Godel Escher Bach, for example. The fact that philosophers are so ignorant of the nature of computers that they could be fooled into visualizing a computer as a no-memory device is tragic. Searle's argument about "memorizing" the rule book shows he doesn't understand the first thing about computers. The rule book could be as simple as a cellular automaton, which is trivial to memorize. The difficult part is memorizing the contents of all the scratch paper in the filing cabinets, the memory contents of the computer.Likebox 19:37, 3 October 2007 (UTC)

Perhaps you should publish this simple CA that can pass a TT in Chinese. 1Z 20:19, 3 October 2007 (UTC)

Dude, the CA is already published. It is cellular automaton 110, conjectured by Wolfram to be Turing complete, proven to be Turing complete by Cook, and discussed ad nauseam in Wolfram's book "A New Kind of Science". If a computer program can pass the Turing test in Chinese, so can this CA. This means that the "rulebook" in Searle's experiment can be trivial, except for maybe a slightly nontrivial code to translate some chunk of the CA to Chinese characters. The hard part for Searle to memorize wouldn't be the rules. It would be the "00111001010000110000110001001" state of the automaton at any time. Let's see him try to memorize that! Likebox 22:26, 3 October 2007 (UTC)
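For what it's worth, here is a minimal sketch of that automaton (my own transcription of the standard rule 110 update, not anything from Searle or from Wolfram's text), just to show how small the fixed "rulebook" part is compared to the ever-growing row of cells that plays the role of the scratch paper:

    # Elementary cellular automaton rule 110: the new value of each cell is
    # given by bit number (left, center, right) of the constant 110. These
    # eight transitions are the entire "rulebook"; the state is the row.
    def rule110_step(cells):
        n = len(cells)
        new = [0] * n
        for i in range(n):
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < n - 1 else 0
            neighborhood = (left << 2) | (cells[i] << 1) | right
            new[i] = (110 >> neighborhood) & 1
        return new

    # Example: start from a single live cell and watch the pattern grow.
    row = [0] * 31
    row[-1] = 1
    for _ in range(10):
        print("".join("#" if c else "." for c in row))
        row = rule110_step(row)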

Being Turing complete and being able to pass a Turing test are two entirely different issues. The operator in the room is supposed to already know how to perform substitutions, record states and so on (like the "head" of a TM). The "rulebook" is what you call the programme. 1Z 22:32, 3 October 2007 (UTC)

Yes, of course yes. But what you call "program" and what you call "data manipulated by the program" are hard to disentangle. For example, the "program" might be a BASIC interpreter, and then the "data manipulated by the program" would be a BASIC program stored in memory as data from the point of view of the interpreter. For a human who understands what is going on, the BASIC code is clearly a program, not data. But for the interpreter, it's data. The "program" might be the CA, and the data in the CA would be a representation of yet another program which passes the Turing test.
The idea that Searle is trying to convey is that the rulebook is the only data which can be interpreted as a program, and that all the scratch paper stuff, all the stuff in the filing cabinets, is just unstructured data. That is totally bogus. Any program that will pass the Turing test will need an enormous store of data which is irreducibly large and whose changes represent the states of internal programs, thoughts running through the head of the computer. These will have activity which could be thought of as lots of other programs running, but now encoded in the data. Likebox 23:18, 3 October 2007 (UTC)
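A tiny sketch of that program-vs-data point (the instruction set here is invented for illustration): the outer loop below is the fixed "rulebook", and the list it consumes is, from its point of view, just data, even though a human reads that list as a program in its own right.

    # The fixed "rulebook": a loop that blindly follows whatever instruction
    # tuples it is handed. To it, the instructions are data.
    def run(program, memory):
        pc = 0                        # program counter
        while pc < len(program):
            op, arg = program[pc]
            if op == "add":           # add a constant to the accumulator
                memory["acc"] += arg
            elif op == "print":       # emit the accumulator
                print(memory["acc"])
            elif op == "loop_below":  # start over while accumulator < arg
                if memory["acc"] < arg:
                    pc = 0
                    continue
            pc += 1

    # To a human reader this list is clearly a small program that counts to
    # three; to run() it is just a pile of symbols to be shuffled around.
    run([("add", 1), ("print", None), ("loop_below", 3)], {"acc": 0})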
I don't see what the big deal is. The data isn't totally unstructured, but it could have a very simple structure at the expense of a cumbersome, inelegant system. Of course it isn't supposed to be an achievable memory feat, it's a thought experiment. 1Z 23:25, 3 October 2007 (UTC)
You can't always simplify the data structures at the expense of more rules in the book, because the book is static and unchanging, and the programs represented as data in the computer could become more complex. A computer program could always generate new programs as data structures as it runs. Then if you wanted to incorporate them into the rulebook, the rulebook would have to constantly grow and change. That's the key point. Searle's intuition is that you can somehow trade off a complex program with lots of data into a longer, less elegant program with less complex data. That's not obviously true, and I think it is obviously not true. It is a critical misunderstanding of the nature of computer programs.
Now, if Searle had argued that in order to think a machine must have access to a source of random numbers, in order to evolve new programs, his argument might be interesting and cogent. But that's not what he is arguing. He is arguing that a computer program is a glorified rule-based lookup table of stimulus/response, and his argument is that such a system is incapable of thinking. This is very similar to Chomsky's argument against the behaviorists. But Chomsky came to a different conclusion. He concluded that in order to have thinking (or language) you need more than a rule-based lookup table. You need a full-fledged computer program. With memory. —Preceding unsigned comment added by Likebox (talkcontribs) 23:54, 3 October 2007 (UTC)
You can always simplify things down to a TM. Your "complex data" can be adequately represented as a linear data store of 1's and 0's. 1Z 09:08, 4 October 2007 (UTC)

What difference could it possibly make? What changes if it is stored as 1s and 0s or as sequences of "abcdefgh" or as notes of the musical scale? A sequence of 1s and 0s is no less complex than a sequence of text or a sequence of musical notes. The "structure" and "complexity" of data doesn't change when it is written as 1s and 0s or any other way. The structure and complexity of data is determined by how the algorithm acts on the data. —Preceding unsigned comment added by Likebox (talkcontribs) 17:31, 4 October 2007 (UTC)

Most of the algorithm can be reduced to 1's and 0's as well. 1Z 18:49, 4 October 2007 (UTC)

All of the algorithm can be "reduced" to 1's and 0's. There is no sense in which this is a reduction. Likebox 19:08, 4 October 2007 (UTC)
And we have to take your word for that? 1Z 20:18, 6 October 2007 (UTC)
No, you don't. You don't have to agree with me. Just represent the point of view, because it is common. Likebox 21:15, 6 October 2007 (UTC)

[edit] Criticism section

I did a bit of editing on the Criticism section, which was obviously biased and took a hectoring tone. It could still use some work, and I don't quite understand some of what the writer is saying. Also, it seems this section is based on Dennett's "Consciousness Explained", so why isn't it in the "Replies" section which already contains criticisms from Dennett? -Father Inire 10:23, 6 October 2007 (UTC)

The material in that section has been dealt with, and much better, elsewhere on the page.1Z 15:53, 6 October 2007 (UTC)

I wrote it, and it is also based on "Godel Escher Bach", and "The Mind's I" and the new Dennett book. But mostly it is just bloody obvious, at least to me, and I don't understand why Searle's argument is taken seriously by anybody. If you type "chinese room" on google, you will find at least a dozen articles that make the exact same point, so this is not a fringe opinion. I liked your rewrite, the tone was neutral. But now somebody got rid of the whole section.Likebox 17:14, 6 October 2007 (UTC)

1) If it is "based on" those sources, they should be mentioned.

Unfortunately, there are so many sources, it would be biased in another way to mention only one or two. As I said, this argument appears in print in dozens, if not hundreds of places. I cited one of them. Likebox 19:36, 6 October 2007 (UTC)
It remains the case that your section is under-sourced. Of course, the original article is the primary source.1Z 20:16, 6 October 2007 (UTC)
The original article can't be used as a primary source for criticism, because it is biased toward Searle's opinion. You cannot present a rebuttal from the point of view of an advocate. That's the main problem with the "replies" section. Likebox 20:38, 6 October 2007 (UTC)

2) Many critiques of the CR are already mentioned in the article; that does not justify repeating them in an unverifiable personal-essay style form.

No more and no less unverifiable than what you are saying here. I don't think we need to agree in order to present arguments fairly. Likebox 19:36, 6 October 2006 (UTC)
This is a talk page, not an encyclopedia article.1Z 20:16, 6 October 2007 (UTC)

3) The Systems Response is already covered.

1Z —Preceding signed but undated comment was added at 19:06, 6 October 2007 (UTC)

The critiques are all mentioned in a back-and-forth way that lends some aura of legitimacy to Searle's argument. It makes it sound like there is a serious debate among experts in AI (or even in this talk page) about the legitimacy of the argument. This is just false, as you can see by scrolling up. You can't write an argument from only one point of view.
I won't revert again. I have tried my best. Likebox 19:36, 6 October 2007 (UTC)
The back-and-forth accurately represents the state of play. I don't know of any arguments against Searle other than those mentioned. Of course a lot of people want to say he is wrong without giving any cogent reason why, but that is not considered notable, nor should it be. You may of course be able to source a new objection, but that is not what you have been doing so far. 1Z 20:16, 6 October 2007 (UTC)

The back-and-forth might accurately represent the state of play in philosophy, but it most certainly does not represent the state of play in science. The reason that the argument is still used in philosophy is because it is a nice elementary way to introduce "zombies" and "qualia", and all these other things which are philosophically interesting. But a lot of computer scientists read these pages, and I don't think that a philosophical argument that claims to demolish their field should be presented without a rebuttal. No matter how old or hoary the rebuttal sounds to your ears.Likebox 20:33, 6 October 2007 (UTC)

If you can quote a verifiable response from a notable CS you may do so. However, it sounds as though you are projecting your own attitudes. 1Z 08:22, 7 October 2007 (UTC)
Stop being ridiculous. Of course I am projecting my own attitudes, as are you. But I also obviously could find notable sources for any of the hackneyed comments I am making.Likebox 21:44, 7 October 2007 (UTC)
"Projecting" as in attributing to others. 1Z 22:35, 7 October 2007 (UTC)
Everything I say I attribute to myself alone. But in this case, I am not saying something original.Likebox 22:40, 7 October 2007 (UTC)


"Everything I say I attribute to myself alone." Nope: "a lot of computer scientists read these pages, and I don't think that a philosophical argument that claims to demolish their field should be presented without a rebuttal".1Z 08:23, 8 October 2007 (UTC)
Why so snipy? I made a mistake in my choice of words. I should have said "A lot of computer scientists read these pages, and I don't think that a philosophical argument that claims to demolish the subfield of AI should be presented without a rebuttal". But you knew very well what I meant, as did everyone else.Likebox 16:59, 8 October 2007 (UTC)
The Chinese room argument does not in any sense "demolish their [computer scientists'] whole field". It is an attack on Strong AI, not on computer science. The possibility of Strong AI is still pretty hotly debated, I don't think there's any consensus on it in any field. Cadr 22:25, 6 October 2007 (UTC)

[edit] Stomachs Thinking

One of the interesting consequences of rejecting Searle's argument is explained by Searle in his original paper. It is this: if you accept the systems reply, you are forced to conclude that any physical system which can serve as an information processor is in some strange sense conscious. This is very interesting, and in my opinion much more interesting than any of Searle's other arguments. First, I think he is absolutely right--- stomachs are conscious. Second, I think this ties in with other ideas about systems-level consciousness in philosophy, like the collective unconscious, the notion of humanism, and hive minds. While Searle intends the argument pejoratively, some people do believe that stomachs and bacteria are conscious in their own limited and peculiar way.Likebox 22:41, 6 October 2007 (UTC)

And Searle's argument is "illegitimate" because of this unproveable personal belief of yours? 1Z 08:24, 7 October 2007 (UTC)
Dude, I'm doing what is called "moving on". This is another unrelated topic. —Preceding unsigned comment added by Likebox (talkcontribs) 19:27, 7 October 2007 (UTC)
Yes, you're right, this is an interesting point: he also mentions John McCarthy's facetious bit about a thermostat having "beliefs" ("I believe that it's too hot in here"). Searle would say that McCarthy (and you) have made the idea of "belief" and "consciousness" so general that it doesn't mean the same thing any more. "Strong AI" was supposed to be able to explain that consciousness is the result of computation. If you say "everything is conscious" then you haven't explained anything. Sure, everything is computation, and if consciousness is computation, then everything is conscious. But now you're saying that "computation" doesn't distinguish thinking things from non-thinking things, so computation, by itself, isn't important. This disproves Strong AI: SAI said that computation is the thing that makes us different from stomachs or thermostats.
(By the way, I'm just saying what Searle thinks here) ---- CharlesGillingham 10:55, 19 October 2007 (UTC)
Hey-- finally! A response. I got unwittingly sucked into a big fracas at a "Big Name" page and I didn't get out of the vortex until today. But here's a response to this Searly argument. Saying that any computation is consciousness does not make a thermostat conscious, because a thermostat is a really trivial computer with a 1-bit memory (its on/off state). The issue of memory is crucial. If you say a thermostat is conscious, it is exactly 1 bit's worth of consciousness. Not everything is computing. For example, if you have a gas of molecules, they aren't in any sense computing, because the motion is chaotic. You can't store any data in the motion. If you have a regular system, the system can't compute, because it's regular. I'm taking this from "The Computational Theory of Biological Function I", BTW, if you're interested in reading more. The only systems which are capable of any non-trivial amount of computation are biological. As a matter of fact, that's how I would define a biological system: as a physical system which can compute with a huge amount of memory and processing speed.
So the crucial thing to ask, in my opinion, is how much memory the system has. In the case of a human brain, the answer is unclear, but it is certainly huge. Likebox 04:20, 23 October 2007 (UTC)

[edit] History section is "borrowed" from Stanford encyclopedia

See this, section 3. Is this a "public domain" source? I'm not sure what the rules are here. ---- CharlesGillingham 10:55, 19 October 2007 (UTC)

I have deleted the plagiarized section. ---- CharlesGillingham 23:27, 1 November 2007 (UTC)

[edit] Teacher look! Johnny is cheating on the (Turing) test!

I always thought that Turing anticipated many arguments against AI, including Searle-like arguments, when he created the test. The questioner may ask anything to determine whether the box is thinking, understanding, has Buddha nature or whatever else they feel separates human thought from a machine's. One rule: no peeking.

There's the rub. Searle looked inside and says "Hey wait a minute! Clearly nothing is understanding Chinese because there are only syntactic rules etc. So there is no understanding and therefore there can't be strong AI".

But he misses the point of the Turing Test. He doesn't get to look inside to determine if there is "understanding"; he must determine it from the outside.

Can Searle determine, from outside the box, that the machine is not really "understanding" Chinese? By the description of the test, he cannot. So the Turing Test has been passed and the machine is "thinking". The difference between syntax and semantics, the necessity of some component within the box to "understand", or the importance of distinctions between strong and weak AI are either not well defined or red herrings.

Does this make sense or am I missing something?

Gwilson 15:53, 21 October 2007 (UTC)

You're right, Turing did anticipate Searle's argument. He called it "the argument from consciousness". He didn't answer it, he just dismissed it as being tangential to his main question "can machines think?" He wrote: "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." (see philosophy of artificial intelligence). When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks". But he's aware this is only a convention and that he hasn't solved the hard problem of consciousness.
Turing never intended for his test to be able to determine if a machine has consciousness or mental states like "understanding". Part of Searle's point here is to show that it fails to do so. Turing was too smart to fall into this trap. Searle's real targets aren't careful thinkers like Turing, but others, who are given to loose talk like "machines with minds" (John Haugeland) or who think the computational theory of mind solves the hard problem of consciousness, like Jerry Fodor, Steven Pinker or Daniel Dennett. ---- CharlesGillingham 17:03, 22 October 2007 (UTC)
It's a shame if Charles's statement above can't be sourced anywhere, because it is eminently interesting and really well suited to a spot on the article (if it can be sourced somehow). If not, good job Charles on this interesting bit of thought.
Et Amiti Gel (talk) 07:39, 14 January 2008 (UTC)
Turing's answer to the "argument from consciousness" is in his famous 1950 paper Computing Machinery and Intelligence. In the last chapter of Norvig & Russell's standard AI textbook, they equate Searle's argument with the one that Turing answers. Turing's reply is a version of the "Other Minds Reply", which is mentioned in this article. ---- CharlesGillingham (talk) 18:52, 15 January 2008 (UTC)

[edit] Rewrite of replies

I have rewritten the "replies" section. I wanted to include a number of very strong and interesting proposals (such as Churchland's luminous room and Dennett's argument from natural selection) that weren't in the previous version. I also wanted to organize the replies (as Cole 2004 does) by what they do and don't prove.

I am sorry to have deleted the previous versions of the System and Robot replies. These were quite well written and very clear. I tried to preserve as much of the text as I could. All of the points that were made are included in the new version. (Daniel Dennett's perspectives are amply represented, for example). However, because there were so many points to be made, some of this text had to be lost. My apologies to the authors of those sections. ---- CharlesGillingham 14:52, 9 November 2007 (UTC)

[edit] Searle falls into a trap

He set himself up a trap and fell into it. If the computer can pass the Turing test, then it is irrelevant whether or not it "understands" Chinese. In order for it to be able to respond in a human manner, it would have to be able to simulate conversation. The answers have to come from somewhere, regardless of the language, if they are to seem natural. The thing is, Searle doesn't seem to realize that his argument is essentially equivalent to the normal definition of a Turing test. The human in his experiment is a manual Turing machine simulator. He basically tries to deny that a Turing machine can do something, but posits it as a premise in his argument. He presupposes his conclusion that a computer has no mind, and then uses an argument that has nothing to do with this conclusion at all. To sum up his argument: a computer can be built that easily passes a Turing test. A human can compute this program by hand. Therefore computers are stupid and smell bad. The only thing that the argument proves is that the human brain is at least Turing complete; I think everyone already knew that, Mr. Searle.--66.153.117.118 (talk) 20:44, 25 November 2007 (UTC)

This is encyclopedically irrelevant, and misses Searle's point that a TT passer can lack genuine semantics. 1Z (talk) 10:35, 26 November 2007 (UTC)
I'm glad that you seem to have gotten the point that the Chinese room is a universal Turing machine, and so anything a computer can do, the Chinese room can do. If a "mind" can "emerge" from any digital machine, of any architecture, it can "emerge" from the Chinese room. That's not Searle's main point (as 1Z points out), but it's essential to the argument. Searle's main point takes the form of an intuition: he can not imagine that a mind (with "genuine semantics") could emerge from such a simple setup. Of course, a lot of the rest of us can. The power of the argument is the stark contrast between Searle's understanding and the room's understanding, and the way it forces AI believers to "put up or shut up". Searle is saying "there's another mind in the Chinese room? I don't think so. Why don't you prove it!" And of course, at the end of the day, we really can't. We can only make it seem more plausible. But we thought it was plausible to begin with, and nothing will convince Searle. ---- CharlesGillingham (talk) 21:01, 26 November 2007 (UTC)
The irony is that the Turing test is also a "put up or shut up" test. I imagine Turing would have said to Searle: "If you think there is some difference in causality or understanding (or whatever ill-defined concept you posit is important) between the artificial and the human "mind", prove it. Show that you can determine, using the Test, which is which." Since the Test is passed in the Chinese Room Argument, we should conclude that "causality", "understanding" or "mind" are really just philosophical mumbo-jumbo and have nothing to do with the issue. I think Searle's "success" is that he sucked everyone into trying to "find the mind" in the CR (a task equally impossible as "finding the mind" in a living, breathing human). The response should have been "Show me yours and then I'll show you mine". Gwilson 14:51, 30 November 2007 (UTC)

[edit] To understand or not understand...

I have a more fundamental question on this whole experiment. What does it mean "to understand"? We can make any claim as to humans that do or don't understand, and the room that does or doesn't. But how do we determine that anything "understands"? In other words, we make a distinction between syntax and semantics, but how do they differ? These two are typically (to me) the two extremes of a continuous attribute. Humans typically "categorize" everything and create artificial boundaries, in order for "logic" to be applicable to it. Syntax is the "simple" side of thought, where application of grammar rules is used, e.g. "The ball kicks the boy." Grammar-wise very correct. But semantically wrong (the other side of the spectrum): rules of the world, in addition to grammar, tell us that whoever utters this does not "understand". In effect, we say that environmental information is captured also as rules, which validate an utterance on top of grammar. To understand is to perceive meaning, which in turn implies that you are able to infer additional information from a predicate by applying generalized rules of the environment. These rules are just as writable as grammar into this experiment's little black book. For me, the categorization of rules, and the baptism of "to understand" as "founded in causal properties" (again undefined), creates a false thought milieu in which to stage this experiment. (To me, a better argument in this debate on AI vs. thought is that a single thought is processing an infinite amount of data; think chaos theory and analog processing, whereas digital processes cannot. But this is probably more relevant elsewhere.) —Preceding unsigned comment added by 163.200.81.4 (talk) 05:35, 11 December 2007 (UTC)

I think Searle imagines that the program has syntactically defined the grammar to avoid this. Instead of something simple like <noun> <verb> <noun>, the grammar could be defined with rules like <animate object noun> <verb requiring animate object> <animate or inanimate noun>. So "kick" is a <verb requiring animate object>, "boy" is an <animate object noun> and "ball" is an <inanimate object noun>. The sentence "The ball kicks the boy" is then parsed as <inanimate object noun> <verb requiring animate object> <animate object noun>, which doesn't parse correctly. Therefore a computer program could recognize this statement as nonsense without having any understanding of balls, boys or kicking. It just manipulated the symbols into the category to which they belonged and applied the rules.
This is a simple example and the actual rules would have to be very complex ("The ball is kicked by the boy" is meaningful, so obviously more rules are needed). I'm not sure if anyone has been able to define English syntax in such a way as to avoid these kinds of semantic errors (or Chinese for that matter). Additionally, it is unclear to me how a syntax could be defined which took into account the "semantics" of previous sentences. (For example, "A boy and his dog were playing with a ball. The boy kicked it over the house." What did he kick? Searle also cites a more complex example of a man whose hamburger order is burnt to a crisp. He stomps out of the restaurant without paying or leaving a tip. Did he eat the hamburger? Presumably not.) However, if we assume that some program can pass the Turing Test, then we must assume that it can process syntax in such a way.
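A minimal sketch of the kind of category-based rule described above; the category names, the tiny lexicon, and the single accepted pattern are all invented for illustration and are nowhere near what a real parser would need:

    # Toy category grammar (Python): reject "The ball kicks the boy" using
    # word categories alone, with no model of what balls, boys or kicking are.
    CATEGORIES = {
        "boy": "animate_noun",
        "ball": "inanimate_noun",
        "kicks": "verb_requiring_animate_subject",
    }

    def parses(sentence):
        # Drop articles, then look up each remaining word's category.
        words = [w for w in sentence.lower().rstrip(".").split() if w != "the"]
        cats = [CATEGORIES.get(w) for w in words]
        # The only pattern this toy grammar accepts: <animate> <verb> <inanimate>.
        return cats == ["animate_noun",
                        "verb_requiring_animate_subject",
                        "inanimate_noun"]

    print(parses("The boy kicks the ball"))   # True
    print(parses("The ball kicks the boy"))   # False, flagged as nonsense

The rejection happens purely by shuffling category symbols, not by any understanding of the words, which is exactly the point being made above.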
I agree with you, however, that Searle fails to define what he means by some key terms like "understanding". He argues that a calculator clearly doesn't understand while a human mind does. This argument falls flat since the point in question is whether the Chinese Room "understands" or not. It also raises the question: if the Chinese Room (which has no understanding) cannot be differentiated from a human mind, then how are we sure that understanding is important to "mind", or that a human mind really does have "understanding"? Gwilson (talk) 15:58, 5 January 2008 (UTC)

[edit] The Refactoring

The "blockhead" map which turns a simulation into a lookup table ( or "refactors" or whatever) requires bounded size input--- if the input can be arbitrarily long, you cannot refactor it as written. However, it is easy to get around a size limitation by doing this

"I was wondering, Mr. Putative Program, if you could comment on Shakespeare's monologue in Hamlet (to be continued)"

"Go on"

"Where hamlet says ..."

But then there's a "goto X" at each reply step, which effectively stores the information received in each chunk of data in the quantity X. If the chunks are of size N characters, the refactored program has to be immensely long, so that the jumps can go to 256^N different states at each reply, and that length must be multiplied by the number of mental states, which is enormous. So again, the argument is intentionally, perversely misleading. The length of the program is so enormous that the mental state is entirely encoded in the "instruction pointer" of the computer, which tells you what line of code the program is executing. There is so much code that this pointer is of size equal to the number of bits in a human mind. Likebox (talk) 19:47, 5 February 2008 (UTC)
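A toy illustration of the refactoring being discussed, assuming bounded input chunks; the canned entries are invented, and the whole "mental state" is just the transcript so far, which is the role the instruction pointer plays in the estimate above:

    # Toy "refactored" program (Python): every reply is a lookup keyed on the
    # entire transcript so far. A real Turing-test passer treated this way
    # would need on the order of 256^N branches per N-character chunk, which
    # is why the table only exists as a thought experiment.
    TABLE = {
        ("Hello",): "Hello. What shall we talk about?",
        ("Hello", "Hamlet (to be continued)"): "Go on",
        ("Hello", "Hamlet (to be continued)", "Where Hamlet says ..."):
            "An interesting choice. Which lines in particular?",
    }

    def lookup_reply(transcript):
        # 'transcript' is the stored quantity X: everything received so far.
        return TABLE.get(tuple(transcript), "Go on")

    print(lookup_reply(["Hello"]))
    print(lookup_reply(["Hello", "Hamlet (to be continued)"]))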

Your analysis of the blockhead argument is absolutely correct. Computationalism and strong AI assume that "mental states" can be represented as symbols, which in turn can be coded as extremely large numbers (represented as X in this example). "Thinking" or "conscious awareness" can be represented as a dynamic process of applying a function recursively to a very large number. Refactoring this function into a goto-table is of course possible, and requires the exponential expansion of memory that you calculated.
However, since this is only a thought experiment, the fact that no such table could ever be constructed is irrelevant. The blockhead example just drives the point home that we are talking about "lifeless" numbers here. The details of the size of the program are not really the issue—the issue is whether the mind, the self, consciousness can be encoded as numbers at all. Our intuitions about "mind" and "self" tend to slip away when faced with the utter coldness of numbers. The failure of our intuitions has to do with our inability to see that extremely large numbers are as complex and interesting as the human spirit itself. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)
I'm not sure that this fact (the table could not be constructed) is irrelevant. If the table cannot be constructed, then it cannot be used as an argument in support of Searle's intuition. I might suggest that a Turing machine could not encode such a table even with an infinite tape, because the number of entries in the table might be uncountably infinite (i.e. an infinite number of entries in an infinite number of combinations).
I wanted to bring forth another point and this seems as good a place as any. What provision does Searle or the refactor algorithm make for words/characters which aren't in the lexicon, but which still make sense to those who "understand" the language? For one example, we've probably all seen the puzzles where one has to come up with the common phrase from entries like |r|e|a|d|i|n|g| or cor|stuck|ner (reading between the lines and stuck in a corner), and we can decipher smilies like ;-> and :-O. To refactor, one must anticipate an infinite number of seemingly garbage/nonsense entries as well as those which are "in the dictionary". How would Searle process such a string of characters, or even a string of Chinese characters one of which was deliberately listed upside down or on its side? Gwilson (talk) 19:48, 21 February 2008 (UTC)
Well, such a table could be constructed in theory (by an alien race living in a much larger universe with god-like powers). I meant only that building such a table is impractical for human beings living on earth. (My rough upper bound on the table length is 2^(10^15) -- one entry for each possible configuration of the memory of a computer with human level intelligence.)
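For a sense of scale, a memory of B bits has 2^B configurations, so the quoted bound follows from taking B around 10^15; a quick back-of-the-envelope check (my own arithmetic, not from the sources discussed here):

    import math

    # Number of decimal digits in 2**B, for B = 10^15 bits of memory.
    B = 10**15
    digits = B * math.log10(2)
    print(f"2^(10^15) has about {digits:.2e} decimal digits")
    # -> about 3.01e+14 digits: even the entry count is unwritable in practice.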
The size of the program or the complexity of its code are not the issue. The issue is whether a program, of any size or complexity, can actually have a mind. A convincing refutation of Searle should apply to either program: the ludicrously simple but astronomically large "blockhead" version or the ludicrously complex but reasonably sized neuron-by-neuron brain simulation. You haven't refuted Searle until you prove that both cases could, in theory, have a mind.
On the issue of infinities: it doesn't really affect the argument significantly to assume that the machine's memory has some upper bound, or that the input comes in packets (as Likebox proposes above). In the real world, all computers have limits to the amount of input they can accept or memory they can hold, so we can safely assume that our "Chinese speaking program" operates within some limits when it's running on its regular hardware. This implies that, for example, a Turing Machine implementation would only require a finite (but possibly very large) amount of tape and would have a finite number of states. Searle's argument (that he "still understands nothing") applies in this case just as easily as to a case with no limit on the memory, so the issue of infinities really does nothing to knock down Searle.
The answer to your second question is that, if the program can successfully pass the Turing test, then it should react to all those weird inputs exactly like a Chinese speaker would. Searle (in the room) is simply following the program, and the program should tell him what to do in these cases. Note that Searle's argument still works if he is only looking at a digitized representation of his input, i.e. he is only seeing cards that say "1" or "0". Searle "still understands nothing" which is all he thinks he needs to prove his point.
(And here is my usual disclaimer that just because I am defending Searle, it doesn't mean that I agree with Searle.) ---- CharlesGillingham (talk) 01:06, 22 February 2008 (UTC)
What I was hoping to show was that if you could drive the table size to infinity, then the algorithm could not be guaranteed to terminate and hence would not be guaranteed to pass the TT. The Blockhead argument only works if the program can pass the TT, since everyone agrees that a program that fails the TT does not have mind. I realize that this is pointless because Blockhead is really just copying its answers from the mind of a living human. Given two human interlocutors (A and B), one could easily program a pseudo-Blockhead program which will pass the TT. The pseudo-Blockhead takes A's input and presents it to B. It copies B's response and presents it back to A, and so on. Provided A and B are unaware of each other, A will consider pseudo-Blockhead to have passed the TT. The only difference between Blockhead and pseudo-Blockhead is that Blockhead asks B beforehand what his answer will be for every possible conversation with A. At the end of the day, though, Blockhead is using the mind of B to answer A, the same as pseudo-Blockhead.
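A minimal sketch of the pseudo-Blockhead relay described above; the function names and the canned stand-in for B are invented purely for illustration:

    # Pseudo-Blockhead (Python): relay A's questions to a hidden human B and
    # return B's answers verbatim. All the "understanding" lives in B.
    def pseudo_blockhead(get_question_from_a, ask_b, send_answer_to_a):
        while True:
            question = get_question_from_a()
            if question is None:                 # conversation over
                break
            send_answer_to_a(ask_b(question))    # A only ever sees the relay

    # Toy usage: B is simulated here by a canned dictionary.
    questions = iter(["How are you?", None])
    canned_b = {"How are you?": "Fine, thanks. And you?"}
    pseudo_blockhead(lambda: next(questions),
                     lambda q: canned_b.get(q, "Hmm."),
                     print)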
So, if Searle asks us to "Show me the mind" in Blockhead or pseudo-Blockhead, it's easy: it is B's mind which came up with the answers. I'm hoping this means that both Blockhead and pseudo-Blockhead do nothing to support Searle, since they are in fact merely displaying the product of a mind.
Getting back to the smilies and such: I recall that back around the time Searle was writing his paper, one of the popular uses of computers was to produce "ASCII art": pictures, some basic stick figures and others huge and detailed, printed on line printers or terminals using ASCII keyboard characters. These are instantly recognizable to anyone with "mind"; however, they do not follow any rules of syntax. In essence, they are all semantics and no syntax. Can Searle's argument, that the program is merely manipulating "symbols" according to syntactical rules without "understanding", apply when the input has no symbols and no syntax? Having those things in the input is, I think, somewhat crucial to Searle's argument. However, I find that Searle's argument is slippery; when cornered on one front, his argument seems to change. Does the CR not have mind because it processes only syntax without semantics, or because computers don't have causality? Are the symbols the 1's and 0's of the computer or the symbols of the language? I don't know. Gwilson (talk) 21:24, 23 February 2008 (UTC)
Well, no, the ASCII pictures are still made of symbols, and he's still manipulating them according to the syntactic rules in his program. So he's still just following syntactic rules on symbols. The semantics is the picture (as I recall, back when I was a kid, it was usually a Playboy centerfold). It's important for his argument that he can't figure out what the symbols mean, so it's important that he's never able to actually see the picture --- for example, if he gets the characters one at a time. He only manipulates them syntactically (i.e. meaninglessly, e.g. sorting them into piles, comparing them to tables, putting some of them into the filing cabinets, taking others out, comparing #20 to #401 to see if they're the same, counting the ones that match, writing that down and putting it in drawer #3421, going to filing cabinet #44539 and getting his next instruction, etc.), never noticing that all of these characters would make a picture if laid out on the floor in the right order. Eventually he gets to an instruction that says "Grab big squiggly boxy character #73 and roundish dotted character #23 (etc.) and put them through the slot." And the guy outside the room reads: "Wow. Reminds me of my ex-wife. You got any more?" The virtual Chinese Mind saw the picture, but Searle didn't.
The point is, his program never lets him know what the input or output means, and that's the sense in which his actions are syntactic. It's syntax because the symbols don't mean anything to him. He doesn't know what the Chinese Mind is hearing (or seeing) and he doesn't know what the Chinese Mind is saying.
In answer to your question: the argument is supposed to go something like this. The only step in the argument that is controversial is marked with a "*".
  1. CR has syntax. CR doesn't have semantics.* Therefore syntax is insufficient for semantics.
  2. Brains cause minds, i.e. brains must have something that causes a mind to exist, we don't know what it is, but it's something. Let's call it "causal powers". Brains use causal powers to make a mind.
  3. Every mind has semantics. CR doesn't have semantics. Therefore CR doesn't have a mind. Therefore CR doesn't have causal powers.
  4. Computers only have syntax. Syntax is insufficient for semantics. Every mind has semantics. Therefore computers can't have a mind. Therefore computers don't have causal powers.
Again, the only real issue is "CR doesn't have semantics". Everything else should be pretty obvious.
"Has syntax" means "uses symbols as if they didn't stand for anything, as if they were just objects."
"Has semantics" means "uses symbols that mean something", or in the case of brains or minds, "has thoughts that mean something."
Does that help? ---- CharlesGillingham (talk) 09:06, 24 February 2008 (UTC)
Yes, thanks, it helps me understand Searle's argument better. Part of the reason Searle's description of the CR is so engaging (to me) is that he makes it easy to see where the "illusion of understanding" comes from. The syntax of the input language (Chinese in this case) allows the program to decode inputs like <man symbol> <bites symbol> <dog symbol> and produce output <dog symbol> <hurt symbol> <? symbol> without an "understanding" of what dogs are or biting is. When the input has no rules of syntax for the program to exploit, I can't imagine how the program parses it and produces the "illusion of understanding". Of course, this is unimportant to Searle's argument. The CR can only process using its syntax rules; it has no "understanding". It doesn't matter to Searle where the "illusion" comes from, it only matters that there is no "real" understanding, since the CR uses only syntax to arrive at a response.
I want to ponder on this thought: if the input contains no rules of syntax which encode/hide/embed semantics, then any apparent semantics produced must be "real semantics" and not an "illusion of semantics". Once again, that will depend on what is meant by "semantics" and "understanding". Like a magician pulling a quarter out of your ear when beforehand we checked every possible place in the room, and on the magician, and on you (except your ear), and found no hidden quarters: if he could pull a quarter out of your ear, it's either really magic or the quarter was in your ear. Gwilson (talk) 15:33, 25 February 2008 (UTC)

(deindent) There is a presumption in the discussion here, and with Searle's argument in general, that it is relatively easy to imagine a machine which does not have "real semantics" and yet behaves as if it does, producing symbols like "dog hurt" from "man bites dog" in a sensible way without including data structures which correspond to any deep understanding of what the symbols mean.

This intuition is entirely false, and I don't think many people who have done serious programming believe it. If you actually sit down and try to write a computer program that extracts syntactical structure from text for the purpose of manipulating it into a reasonable answer, you will very quickly come to the conclusion that the depth of data structures required to make sense of the written text is equal to the depth of the data structures in your own mind as you are making sense of the text. If the sentence is about "dogs", the program must have an internal representation of a dog capable of producing facts about dogs, like the fact that they have legs, and bark, and that they live with people and are related to wolves. The "dog description" module must be so sophisticated that it can answer any conceivable intuitive question about dogs that people are capable of producing without thinking. In fact, the amount of data is so large and so intricately structured that it is inconceivable that the answer could predictably come out with the right meaning without the program having enough data stored that the manipulations of the data include a complete understanding. Since the data structures in current computers are so limited and remote from the data structures of our minds, there is not a single program that comes close to being able to read and understand anything, not even "One fish two fish red fish blue fish".
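A crude sketch of the kind of "dog description" module being pointed at above; the handful of facts and the single hard-coded question form are invented, and their shallowness is the point: a real Turing-test passer would need this at human depth for every concept it can be asked about.

    # Crude "dog description" structure (Python). Even the simplest sensible
    # answers need stored facts like these; shallow rules run out immediately.
    KNOWLEDGE = {
        "dog": {
            "is_a": "animal",
            "has": ["legs", "fur", "tail"],
            "can": ["bark", "bite"],
            "related_to": ["wolf"],
            "lives_with": "people",
        },
    }

    def answer(question):
        # Handles exactly one hard-coded question form; everything else fails.
        if question == "Can a dog bark?":
            return "Yes" if "bark" in KNOWLEDGE["dog"]["can"] else "No"
        return "I don't know"

    print(answer("Can a dog bark?"))             # Yes
    print(answer("Do dogs live with people?"))   # I don't know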

This is known to all artificial intelligence people, and is the reason that they have not succeeded very well at intuitive human tasks like picture recognition. Searle rewords the central difficulty into a principle: "It is impossible to produce a computational description of meaning!" But if you are going to argue that the Turing test program is trivial, you should at least first show how to construct a reasonable example of a program that passes the Turing test, where by reasonable I only mean requiring resources that can fit in the observable universe. Likebox (talk) 17:40, 25 February 2008 (UTC)

You've just given the "contextualist" or "commonsense knowledge" reply, served, as is customary, with a liberal sprinkling of the "complexity" reply. (Which I find very convincing, by the way. And so do Daniel Dennett and Marvin Minsky. Your reply is very similar to Dennett's discussion in Consciousness Explained.) You're right that "the depth of data structures that is required ... is equal to the depth of the data structures in your own mind", as it must be. Unfortunately for defeating Searle, he doesn't agree there are 'data structures' in your mind at all. He argues that, whatever's in your head, it's not "data structures", it's not symbolic at all. It's something else. He argues that, whatever it is, it is far more complicated than any program that you can imagine, in fact, far more complicated than any possible program.
Note that, as the article discusses (based on Cole, Harnad and their primary sources), an argument that starts out "here's what an AI program would really be like" can, at best, only make it seem more plausible that there is a mind in the Chinese Room. At best, they can only function as "appeals to intuition". My intuition is satisfied. Searle's isn't. What else can be said? ---- CharlesGillingham (talk) 18:34, 25 February 2008 (UTC)

[edit] Forest & Trees

I've made a number of changes designed to address the concerns of an anonymous editor who felt the article contained too much "gossip". I assume the editor was talking about the material in the introduction that gave the Chinese Room's historical context and philosophical context. I agree that this material is less on-point than the thought experiment itself, so I moved the experiment up to the top and tucked this material away into sections lower in the article where hopefully it is set up properly. I put the context in context, so to speak. If these sections are inaccurate in any way (i.e., if there are reliable sources that have a different perspective) please feel free to improve them. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)

[edit] Definition of Mind, Understanding etc

I think we've touched on this before in the talk section, but one of the things Searle doesn't do is define what he means by "has mind" or "understands". He says in his paper: "There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument." He makes this claim because we can all agree that whatever "understanding" is, we know that Searle doesn't have it in regard to Chinese.

However, at the end of the day, he has left a vital part of his argument undefined and, in doing so, prevented people from discovering potential undiscovered flaws in it. While Searle's argument has a certain mathematical "proofiness" about it, because he doesn't define key terms like "understanding" or "has mind" it isn't a real proof, only an interesting philosophical point of view.

What I'm wondering is, can we somehow get the fact that Searle doesn't define understanding into the first few paragraphs? Something like: "Searle does not attempt to define what is meant by understanding. He notes that "There are clear...." ".

The Turing test deliberately avoids defining terms like mind and understanding as well. So, I think we could follow those words with something like CharlesGillingham's earlier words here: "When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks"."

Does anyone feel that would improve the article? —Preceding unsigned comment added by Gwilson (talkcontribs) 14:49, 28 February 2008 (UTC)

This could fit in nicely right after (or mixed in with) the paragraph where David Chalmers argues that Searle is talking about consciousness. I like the Searle quote. The truth is, defining "understanding" (or what philosophers call intentionality) is a major philosophical problem in its own right. Searle comes from the ordinary language philosophy tradition of Ludwig Wittgenstein, J. L. Austin, Gilbert Ryle and W. V. O. Quine. These philosophers insisted that we always use words in their normal ordinary sense. They argue that, "understanding" is defined as: 'what you're doing when you would ordinarily say "Yes, I understand."' If you try to create some abstract definition based on first principles, you're going to leave something out, you're going to fool yourself, you're going to twist the meaning to suit your argument. That's what has usually happened in philosophy, and is the main reason that it consistently fails to get anywhere. You have to rely on people's common sense. Use words in their ordinary context -- don't push them beyond their normal limits. That's what Searle is doing here.
Turing could fit in two places. (1) In the paragraph that argues that the Chinese Room doesn't create any problems for AI research, because they only care about behavior, and Searle's argument explicitly doesn't care how the machine behaves. (2) In the "other minds" reply, which argues that behavior is what we use to judge the understanding of people. It's a little awkward because Turing is writing 30 years before Searle, and so isn't directly replying to the Chinese Room, and is actually talking about intelligence vs. consciousness, rather than acting intelligent vs. understanding. But, as I said above, I think that Turing's reply applies to Searle, and so do Norvig & Russell. ---- CharlesGillingham (talk) 10:00, 4 March 2008 (UTC)

[edit] Footnote format

This is just an aesthetic choice, but, as for me, I don't care for long strings of footnotes like this.[1][2][3][4][5] It looks ugly and breaks up the text. (I use Apple's Safari web browser. Perhaps footnotes are less obtrusive in other browsers.) So I usually consolidate the references for a single logical point into a single footnote that lists all the sources.[6] This may mean that there is some overlap between footnotes and there are occasionally several footnotes that refer to the same page of the same source. I don't see this as a problem, since, even with the redundancy, it still performs the essential functions of citations, i.e. it verifies the text and provides access to further reading. Anyone else have an opinion? (I'm also posting this at Wikipedia talk:Footnotes, to see what they think.) ---- CharlesGillingham (talk) 17:30, 24 March 2008 (UTC)

I have recombined the footnotes by undoing the edit that split them up. I admire the effort of the anonymous editor who undertook this difficult and time-consuming task, but unfortunately I found mistakes. For example, a reference to Hearn, p. 44 was accidentally combined with a reference to Hearn, p. 47. Also, as I said above, I don't think the effort really improved the article for the reader. Sorry to undo so much work. ---- CharlesGillingham (talk) 06:17, 27 March 2008 (UTC)

[edit] Empirical Chinese Rooms

I'm posting here in the discussion section because I have a conflict of interest, but I think this is the best place to bring my question....

Recently I self-published an article on Philica called The Real Chinese Room, in which I replicated Searle's experiment using a version of the ELIZA code. As I discovered later, Harre and Wang conducted a similar experiment in 1999 and published it in the Journal of Experimental & Theoretical Artificial Intelligence (11 #2, April). I haven't been able to find a copy, but from their very terse abstract it would appear that their experiment confirmed Searle's assumption. Mine did not.
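For readers unfamiliar with ELIZA, the flavour of program involved is a simple pattern-and-reflection rule set; the sketch below is generic and is not the code used in either experiment:

    import re

    # Generic ELIZA-style rules: match a pattern, reflect part of the input back.
    RULES = [
        (re.compile(r"I feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"I am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    ]

    def eliza_reply(text):
        for pattern, template in RULES:
            match = pattern.match(text)
            if match:
                return template.format(match.group(1).rstrip("."))
        return "Please tell me more."

    print(eliza_reply("I feel confused by the Chinese room."))
    # -> Why do you feel confused by the Chinese room?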

It seems like at least Harre and Wang's work would be a useful contribution to the article if anyone could locate it. My own article has not been peer-reviewed to date, so it is not a reliable source (yet). But I would hope that the article could be expanded to look at the empirical work done on this problem. Ethan Mitchell (talk) 21:31, 8 May 2008 (UTC)

