Thursday, October 10, 2019
Discuss 'The Chinese Room' Argument Essay
In 1980, John Searle began a widespread dispute with his paper 'Minds, Brains, and Programs' (Searle, 1980). The paper presented a thought experiment which argued against the possibility that computers can ever have artificial intelligence (AI); in essence, a denial that machines will ever be able to think. Searle's argument rested on two key claims: that "brains cause minds and syntax doesn't suffice for semantics" (Searle, 1980, p. 417). Syntax here refers to the programming language used to create a program: a combination of code (illegible to the untrained eye) which provides the basis and the commands for a program running on a computer. Semantics refers to the study of meaning, the understanding behind the use of language. Searle's claim was that it is the existence of a brain which gives us our minds and our intelligence, and that no combination of programming language is sufficient to confer meaning on a machine and so allow the machine to understand. On his view, the apparent understanding of a computer is nothing more than the execution of programmed code, allowing the machine to extract answers from available information. He did not deny that computers could be programmed to behave as if they understand and have meaning; what he rejected was the claim of 'strong AI', which he summarised as the view that "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (Searle, 1980, p. 417).

Searle's argument was that we may be able to create machines with 'weak AI': we can program a machine to behave as if it were thinking, to simulate thought and produce a perceptible understanding. But the claim of 'strong AI', that machines running on syntax can have cognitive states as humans do, can understand, and can produce answers based on that cognitive understanding, that a machine really has (or is) a mind (Chalmers, 1992), is simply not possible. A machine is unable to generate fundamental features of the human mind such as intentionality, subjectivity, and comprehension (ibid.). Searle's main argument for this position came from his 'Chinese room' thought experiment, which has attracted much deliberation and denunciation from fellow researchers, philosophers, and psychologists. This essay aims to analyse the arguments, assess the counterarguments, and propose that John Searle was right: machines will never think as humans do, and the issue comes down to the simple fact that a computer is neither human nor biological in nature, nor can it ever be.

In 1950, Alan Turing proposed a method of examining whether a machine can be deemed intelligent, which became known as 'The Turing Test' (Turing, 1950). A machine that passes the test is, on this view, judged to be intelligent. Searle (1980) argued that the test is fallible, in that a machine without intelligence is able to pass it; 'The Chinese Room' is Searle's example of such a machine. The Chinese room is what physicists term a 'thought experiment' (Reynolds and Kates, 1995): a hypothetical experiment which is not physically performed, often without any intention of the experiment ever being executed.
It was proposed by Searle as a way of illustrating his view that a machine will never be able to possess a mind. Searle (1980) asks us to imagine ourselves as a monolingual English speaker (speaking only one language), locked inside a room with a large batch of Chinese writing together with a second batch of Chinese script. We are also given a set of rules in English which allow us to connect the first batch of writing with the second; the rules allow us to identify the symbols in each batch (the syntax) purely by their shape. We are then given a third batch of Chinese symbols along with further English instructions which make it possible to correlate particular items from the third batch with the preceding two, and which direct us to 'give back' particular Chinese symbols with particular shapes in response. Searle asks us to accept that the first batch of writing is a 'script' (a natural language processing data set), the second batch a 'story', and the third batch 'questions'. The symbols which are returned are the 'answers', and the English instructions are the 'computer program'; although, were you the one inside the Chinese room, you would not be aware of this. Searle suggests that your responses to the questions become so good that they are impossible to distinguish from those of a native Chinese speaker; yet you are merely behaving as a computer. Searle argues that while in the room and delivering correct answers, he still understands nothing. He cannot speak Chinese, yet he is able to produce the correct answers without any understanding of the Chinese language.

Searle's thought experiment demonstrated 'weak AI': we can indeed program a machine to behave as if it were thinking, to simulate thought and hence produce a perceptible understanding, when in fact the machine understands nothing. It is simply following a linear instruction set for which the answers are already programmed. The machine is not producing intuitive thought; it is providing a programmed answer.
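The point can be pictured in miniature as a purely syntactic lookup: rules that match the shapes of incoming symbols and return other symbols, with no representation of meaning anywhere in the process. The Python sketch below is only illustrative; the rule table and the Chinese phrases in it are invented for the example and stand in for Searle's far larger 'rule book'.

```python
# A minimal, purely illustrative sketch of the Chinese room as syntax alone.
# The "rule book" maps the shape of an incoming question to a canned answer.
# Nothing in the program represents the meaning of any symbol.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",   # invented example rule: question -> answer
    "你会说中文吗？": "当然会。",       # invented example rule
}

def chinese_room(question: str) -> str:
    """Return an 'answer' by matching symbol shapes against the rule book."""
    # The operator (or CPU) only compares character sequences; it never
    # attaches any semantics to them.
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "please say that again"

if __name__ == "__main__":
    print(chinese_room("你会说中文吗？"))  # fluent-looking output, zero understanding
```

To an outside observer the exchange may look competent, which is exactly Searle's point: the competence lies in the rule table supplied by the programmer, not in any understanding possessed by the system.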
Searle was presented with many critical replies to the Chinese room experiment, to which he offered rejoinders, looking at the room in different ways to address the counterarguments put forward by researchers in the field of AI. Harnad (1993) supports 'The Systems Reply' in rebuttal of Searle's work. This reply argues that we are encouraged to focus on the wrong agent, the individual in the room: the man in the room does not understand Chinese as a single entity, but the system in which he operates (the room) does. An evident objection to such a claim is that the system (the room) has no more way of connecting meaning to the Chinese symbols than the individual man did in the first instance. Even if the individual were to internalise (memorise) the entire set of instructions and be removed from the system (the room), how would the system compute the answers if all the computational ability is within the man? Furthermore, the 'room' cannot understand Chinese.

'The Robot Reply' is associated with Harnad (1989), who argued that meaning cannot be attached to the ciphers of Chinese writing because of the lack of a 'sensory-motoric' connection. That is, the symbols are in no way attached to a physical meaning, to something which can be 'seen' and comprehended. As children, we learn to associate the meaning of words by attaching them to physical 'things'. Harnad argues that the Chinese room lacks this ability to associate meaning with the words, and is thus unable to produce understanding. Searle's defence is that if we further imagine a computer inside a robot, producing a representation of walking and perceiving, then according to Harnad the robot would have understanding and other mental states. However, when Searle places the room (with the man inside) inside the robot and allows the symbols to come from a television camera attached to the robot, he insists that he still does not understand; his computational output is still merely a display of 'symbol representation' (Searle, 1980, p. 420). Searle also argues that 'The Robot Reply' in effect concedes that human cognition is not merely symbol manipulation, and as such undermines 'strong AI', since it admits the need for 'causal relations to the outside world' (ibid., p. 420). Again, the system simply follows a computational set of rules installed by the programmer and produces linear answers based upon those rules. There is no spontaneous thought or understanding of the Chinese symbols; the system merely matches them against what is already programmed. 'The Robot Reply' thereby suggests that programmed structure is enough to account for mental processes, for cognition: '[this suggests] that some computational structure is sufficient for mentality, and both are therefore futile' (Chalmers, 1992, p. 3).

Further to 'The Robot Reply', academics from Berkeley (Searle, 1980) proposed 'The Brain Simulator Reply', which raises the question of exactly what the man represents. Here it is proposed that the computer (the man in the room) simulates the neurons firing at the synapses of a native Chinese speaker. It is argued that we would then have to accept that the machine understood the stories; if we did not, we would have to assume that native Chinese speakers also did not understand them, since at the neuronal level there would be no difference. This objection defines understanding by the correct firing of neurons, which may well produce the correct responses from the 'machine' and an assumed, perceived understanding. But the question remains: does the machine (the man) actually understand what he is producing (answering), or is it again merely a computational puzzle solved through logical programming? Searle argues the latter. He asks us to imagine a man in the room using water pipes and valves to represent the biological process of neuronal firing at the synapse. The input (the English instructions) now tells the man which valves to turn on and off, and thus produces an answer (a set of flowing pipes at the end of the system). Again, Searle argues that neither the man nor the pipes actually understand Chinese. Yes, they have an answer, and yes, the answer is undoubtedly correct; but the elements which produced the answer (the man and the pipes) still do not understand what the answer means. They have no semantic representation of the output. The representation of the neurons is simply that: a representation, one which is unable to account for the higher functional processes of the brain and the semantic understanding therein.
A further argument suggests that a combination of the aforementioned elements, known as 'The Combination Reply', should confer 'intentionality' on the system, as proposed by academics at Berkeley and Stanford (Simon and Eisenstadt, 2002). The idea is that by combining all of the replies above into one system, the system should be able to produce semantic inference from the linear answers produced by the syntax. Again, Searle (1980) rejects such claims, as the sum of the parts does not amount to understanding. Not one of the replies was able to establish genuine understanding in the system, and as such the combination of the three counterarguments remains as ambiguous as when first presented. Searle writes: "if the robot looks and behaves sufficiently like us then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior… [i]f we knew independently how to account for its behavior without such assumptions… we would not attribute intentionality to it, especially if we knew it had a formal program" (1980, p. 421). Searle's argument is simple. If we did not know that a computer produces answers from specifically programmed syntax, then it would be plausible to accept that it may have mental states such as ours. The issue, however, is simply that we do know the system is a computational set, and as such it is no more a thinking machine than any other computational structure.

'The Chinese Room' thought experiment is undoubtedly notorious and controversial. It has been refuted and discredited repeatedly, yet persistently defended by Searle. His defensive stance has appeared to infuriate 'strong AI' theorists, resulting in questionable counterattacks that read more like a "religious diatribe against AI, masquerading as a serious scientific argument" (Hofstadter, 1980, p. 433) than a significant opposition. Searle (1980) argues that accurate programming can in no instance ever produce 'thought' in the sense in which we understand thought: not only the combined firing of vast numbers of neurons, but the underlying property which makes us what we are, namely consciousness. From a functionalist perspective, with the mind entwined within the brain and our bodies entangled further still, creating a machine which 'thinks' as a human does is nigh impossible. To do so would be to create an exact match of what we are, how we are constructed, and the substance of which we are made. If successful, we would not have created a thinking 'machine' but a thinking 'human'; a human which, alas, is not a machine. Searle (1982) argues that it is an undeniable fact that the earth contains particular biological systems, notably brains, which are able to create intellectual phenomena that carry meaning. Suggesting that a machine is capable of intelligence would therefore suggest that a machine would need computational power equivalent to that of the human mind. Searle (ibid., p. 467) states that he has offered an argument showing that no known machine is able, 'by itself', ever to be capable of generating such semantic powers.
It is therefore assumed that, no matter how far science is able to recreate machines with the behavioural characteristics of a 'thinking' human, such a machine will never be more than a programmed mass of syntax, computed and presented as thought, yet never actually existing as thought.

References:

Chalmers, D. 1992. 'Subsymbolic Computation and the Chinese Room', in J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Hillsdale, NJ: Lawrence Erlbaum.

Harnad, S. 1989. 'Minds, Machines and Searle'. Journal of Experimental and Theoretical Artificial Intelligence, 1, pp. 5-25.

Harnad, S. 1993. 'Grounding Symbols in the Analog World with Neural Nets'. Think, 2(1), pp. 12-78 (special issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.).

Hofstadter, D. 1980. 'Reductionism and Religion'. Behavioral and Brain Sciences, 3(3), pp. 433-434.

Reynolds, G.H. & Kates, D.B. 1995. 'The Second Amendment and States' Rights: A Thought Experiment'. William and Mary Law Review, 36, pp. 1737-1773.

Searle, J. 1980. 'Minds, Brains, and Programs'. Behavioral and Brain Sciences, 3, pp. 417-424.

Searle, J. 1982. 'The Myth of the Computer: An Exchange'. New York Review of Books, 4, pp. 459-467.

Simon, H.A. & Eisenstadt, S.A. 2002. 'A Chinese Room that Understands', in J. Preston & M. Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press, pp. 95-108.