Reading:
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Turing famously proposed the imitation game as a substitute for the question "Can machines think?" According to him, if a machine could imitate a human being well enough in the imitation game setting, then we should consider the machine to be "intelligent" or "capable of thinking."
This is certainly an interesting approach. Nevertheless, whether the imitation game really serves as a legitimate criterion for deciding the original question remains controversial. Certainly, a machine that could pass the imitation game would have to be extremely powerful in terms of its storage and computational capacities. Beyond that, it would need to be capable of natural language processing (NLP) and able to receive a wide range of inputs and transform and apply them as freely as humans do. There is thus no doubt that, even with our current technology, no machine comes close to fully passing the imitation game.
However, suppose such a machine really existed: are we so certain that it must have the ability to "think?" I believe this is still a matter of definition. While Turing is satisfied with his imitation game criterion, people who view "thinking" as involving "subjective" and "conscious" elements will probably be reluctant to accept the conclusion "machines can think" on this criterion alone. In his original paper, Turing responded to this objection, "The Argument from Consciousness," simply by saying that "most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position." However, he offered no real justification for this claim, and solipsism remains a coherent position.
From my personal point of view, John Searle's syntax-versus-semantics argument is still worth further discussion (see the Chinese Room thought experiment for more information). As long as we rely on our current programming approaches, machines may never be able to grasp the semantics behind the questions and answers. Even if such a machine can imitate a human being well enough to pass the imitation game, it is merely operating at the syntactic level, computing answers without "understanding" them.
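The kind of purely syntactic processing Searle has in mind can be caricatured as rule-book lookup: symbols in, symbols out, with no model of meaning anywhere. The sketch below is only an illustration; the rule book, questions, and replies are invented, and real systems are vastly more elaborate, but the philosophical point is that complexity alone does not obviously add "understanding."

```python
# A toy "Chinese Room": replies are produced by purely syntactic
# pattern matching against a fixed rule book. Nothing here represents
# the *meaning* of any question or answer.

RULE_BOOK = {
    "what color is the sky?": "The sky is blue.",
    "how are you?": "I am fine, thank you.",
}

def room_reply(question: str) -> str:
    """Match the input symbols and emit the paired output symbols."""
    return RULE_BOOK.get(question.strip().lower(), "I do not know.")

print(room_reply("What color is the sky?"))  # a rule matches
print(room_reply("Why is the sky blue?"))    # no rule: falls through
```

The operator of such a room can produce fluent answers to the questions it covers while understanding none of them, which is exactly the intuition the Chinese Room trades on.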
However, it is also possible that our intuitive distinction between syntax and semantics is itself an illusion. What we, or our brains, are essentially doing may likewise be nothing more than operating on symbols, representations of objects and concepts, and forming outputs in much the same fashion as computers do. In other words, it may only appear that we "understand" the "meanings" behind words, when what we are actually doing is computing on the basis of inductive rules, syntactic relationships, and the like.
To conclude, I think Turing's proposal is an interesting but controversial one. It may be a good target for engineers, but it may not, in the end, do much to help us answer the question "Can machines think?"