The Chinese Room argument begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.
The question is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese?
If you think the computer understands Chinese, consider the following. Suppose a man is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. He could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output. If the computer passed the Turing test this way, it follows that the man, simply by running the program manually, would pass it as well.
There is no essential difference between the roles of the computer and the man in the experiment. Each simply follows a program, step by step, producing behavior that is then interpreted as intelligent conversation. Yet the man does not understand the conversation he is mediating. Therefore, neither does the computer in the original example.
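To make the purely mechanical character of the room concrete, here is a minimal sketch in Python of a rule-following responder. The rule table, the replies, and the function name are invented for illustration only; a program capable of actually passing the Turing test would be vastly more complex. The point it illustrates is the same, though: the procedure maps input symbols to output symbols, and nothing in the procedure requires understanding either.

```python
# A toy "Chinese room": replies are produced by blind rule-following.
# The rule book below is invented purely for illustration; it makes no
# claim to linguistic competence.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # pattern -> canned reply
    "你叫什么名字？": "我叫小明。",
}

def run_room(incoming: str) -> str:
    """Follow the rule book exactly as the man in the room would.

    The function never interprets the symbols; it only matches their
    shapes against the book and copies out whatever the book dictates.
    """
    return RULE_BOOK.get(incoming, "对不起，我不明白。")

if __name__ == "__main__":
    # Slips of paper passed through the slot in the door.
    for question in ["你好吗？", "你叫什么名字？"]:
        print(question, "->", run_room(question))
```

Whether the rules live in silicon or in a filing cabinet consulted by a man with a pencil makes no difference to what the procedure does, which is exactly the parallel the argument trades on.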

Check out other work in the Life Of The Mind series here.
