The Chinese Room Experiment: Computers with a Mind?
The Chinese Room is a thought experiment proposed by the American philosopher John Searle to show that the ability to manipulate a set of symbols in an orderly way does not necessarily imply any linguistic understanding of those symbols. In other words, understanding does not arise from syntax alone, which calls into question the computational paradigm that the cognitive sciences have developed to explain how the human mind works.
In this article we will look at exactly what this thought experiment consists of and the philosophical debates it has generated.
Turing’s machine and the computational paradigm
The development of artificial intelligence was one of the great 20th-century attempts to understand, and even replicate, the human mind through computer programs. In this context, one of the most popular models has been the Turing machine.
Alan Turing (1912-1954) wanted to show that a programmed machine could hold conversations like a human being. To this end, he proposed a hypothetical situation based on imitation: if we program a machine to imitate the linguistic ability of speakers, put it before a panel of judges, and get 30% of those judges to believe they are talking to a real person, this would be sufficient evidence that a machine can be programmed to replicate the mental states of human beings; and, conversely, it would also serve as an explanatory model of how human mental states work.
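To make that pass criterion concrete, here is a minimal sketch in Python of the condition described above. It is only an illustration: the panel size, the `fool_probability` value and the function name are invented for the example, not part of Turing's proposal.

```python
import random

def passes_turing_criterion(num_judges: int, fool_probability: float,
                            threshold: float = 0.30) -> bool:
    """Toy version of the criterion described above: the machine 'passes'
    if at least `threshold` of the judges mistake it for a human.
    `fool_probability` is an invented stand-in for conversational skill."""
    fooled = sum(random.random() < fool_probability for _ in range(num_judges))
    return fooled / num_judges >= threshold

# A machine that fools each judge 35% of the time, facing 100 judges:
print(passes_turing_criterion(num_judges=100, fool_probability=0.35))
```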
Building on the computational paradigm, part of the cognitivist current suggests that the most efficient way to acquire knowledge about the world is through an ever more refined reproduction of the rules of information processing, so that, regardless of each person's subjectivity or history, we can function and respond in society. On this view, the mind is an exact copy of reality, the place of knowledge par excellence and the tool for representing the outside world.
After the Turing machine, several computer systems were programmed to try to pass the test. One of the first was ELIZA, designed by Joseph Weizenbaum, which answered users by matching their input against previously registered patterns, thus leading some interlocutors to believe they were talking to a person.
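As a rough illustration of how such a system can work, the sketch below mimics ELIZA's pattern-matching style in Python. The rules and canned replies are invented for the example and are far simpler than Weizenbaum's original script.

```python
import re

# Invented pattern/response pairs in the spirit of ELIZA; the real script
# contained many more rules, keyword rankings and transformations.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    """Pick a reply by matching surface patterns; no meaning is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```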
Among more recent inventions in the same spirit are, for example, CAPTCHAs, used to detect spam bots, and Siri, the assistant in the iOS operating system. But just as there have been those who tried to prove Turing right, there have also been those who have questioned him.
The Chinese room: does the mind work like a computer?
Reflecting on the programs that sought to pass the Turing test, John Searle distinguishes between Weak Artificial Intelligence (which simulates understanding but has no intentional states; that is, it describes the mind but does not equal it) and Strong Artificial Intelligence (when the machine has mental states like those of human beings, for example, when it can understand stories as a person does).
For Searle, creating Strong Artificial Intelligence is impossible, which he set out to prove with a thought experiment known as the Chinese room. The experiment poses the following hypothetical situation: a native English speaker who does not know Chinese is locked in a room and must answer questions about a story he has been told in Chinese.
How does he answer them? By means of a rule book, written in English, that specifies how to order the Chinese symbols syntactically without ever explaining their meaning; it only states how they are to be used. By following this procedure, the person inside the room answers the questions correctly, even though he has understood nothing of their content.
Now, suppose there is an outside observer: what does he see? That the person inside the room behaves exactly like someone who does understand Chinese.
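Reduced to code, the situation might look like the sketch below, assuming we caricature the rule book as a simple lookup table (the Chinese questions and replies are invented for the example). The program maps question shapes to answer shapes and at no point touches their meaning.

```python
# A caricature of the rule book as a lookup table. To whoever applies it,
# the keys and values are opaque shapes; the translations in the comments
# exist only for the reader, not for the rule-follower.
RULE_BOOK = {
    "主人公住在哪里？": "他住在北京。",      # "Where does the protagonist live?" -> "He lives in Beijing."
    "故事发生在什么时候？": "发生在冬天。",  # "When does the story take place?" -> "In winter."
}

def answer(question: str) -> str:
    # A purely syntactic step: match the shape of the input and emit the
    # shape the rules prescribe. Nothing here represents meaning.
    return RULE_BOOK.get(question, "请再问一遍。")  # "Please ask again."

print(answer("主人公住在哪里？"))  # fluent-looking output, zero understanding
```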
For Searle, this shows that a computer program can imitate a human mind, but that does not make the program the same as a human mind, because it has no semantic capacity and no intentionality.
Impact on the understanding of the human mind
Applied to the human realm, this means that the process by which we develop the ability to understand a language goes beyond possessing a set of symbols; it requires elements that computer programs cannot have.
Beyond that, the experiment has spurred studies on how meaning is constructed and where that meaning resides. The proposals are very diverse, ranging from cognitivist perspectives, which hold that meaning is in each person's head, derived from a set of mental states or given innately, to more constructionist perspectives, which ask how historically situated systems of rules and practices are socially constructed and give terms their social sense (a term has meaning not because it is in people's heads, but because it enters into a set of practical rules of language use).
Criticisms of the Chinese room thought experiment
Some researchers who disagree with Searle consider the experiment invalid because, even if the person inside the room does not understand Chinese, it may be that, in conjunction with the elements around him (the room itself, the furniture, the rule book), the system as a whole understands Chinese.
Searle responds with a new hypothetical situation: even if we remove the elements surrounding the person inside the room and ask him to memorise the rule book for manipulating the Chinese symbols, he would still not understand Chinese, and neither does a computer processor.
The reply to this has been that the Chinese room is a technically impossible experiment. The counter-reply, in turn, is that something being technically impossible does not make it logically impossible.
Another of the best-known criticisms comes from Dennett and Hofstadter, and it applies not only to Searle's experiment but to thought experiments in general: their reliability is doubtful because, rather than rigorous empirical grounding, they rest on speculation close to common sense, which makes them, above all, "intuition pumps".
Bibliographic references:
- González, R. (2012). The Chinese room: a thought experiment with a Cartesian bias? Chilean Journal of Neuropsychology, 7(1): 1-6.
- Sandoval, J. (2004). Representation, discursiveness and situated action. A critical introduction to the social psychology of knowledge. University of Valparaíso, Chile.
- González, R. (n.d.). "Intuition pumps", mind, materialism and dualism: Verification, refutation or epoché? University of Chile repository. [Online]. Consulted 20 April 2018. Available at http://repositorio.uchile.cl/bitstream/handle/2250/143628/Bombas%20de%20intuiciones.pdf?sequence=1.