Readings:
Palmer, S. E. (1999). Visual awareness. In Vision Science: From Photons to Phenomenology.
Dennett, D. (1981). Where am I? In Brainstorms: Philosophical Essays on Mind and Psychology.
Dennett, D. (1981). Can Machines Think?
As a scientist particularly interested in neuroscience and the evolution of the human mind, and given our current knowledge of how intimately the mind and the brain are connected, I believe reductive materialism (or eliminative materialism, which I view as differing only in its attitude toward the everyday vocabulary of mind, emotions, and the like) will be the answer to the millennia-old mind-body problem. Nevertheless, I phrase this as a “belief” because it does not seem to be a satisfying answer for all the psychological experiences we have as living human beings, especially regarding the problem of consciousness or subjective experience, sometimes referred to as “qualia” in the philosophical literature.
From a materialist viewpoint, all psychological phenomena should ultimately be reducible to certain kinds of neural firing patterns or cell interactions, or at least have some material foundation. To some degree, a functionalist approach also seems plausible: what really matters might be just the interaction patterns or functions carried out by the system, independent of its exact composition. On this basis, some philosophers have suggested that it is theoretically possible for us to create robots that have consciousness, intelligence, or the ability to think, and some, most famously Turing, have devised tests to serve as criteria.
The answers to these questions, however, depend largely on how one defines the keywords involved. For example, if “intelligence” is understood as merely the ability to solve problems or perform high-level computations, then AIs and certain machines are almost undoubtedly intelligent. But if it is defined in a way that involves, for instance, the conscious experience of thinking, then the answer is far less certain. In other words, while “intelligence” and “thinking” are more cognitively grounded and could be understood as forms of information processing or computation, “consciousness” usually carries the connotation of being able to subjectively experience the world, which is, from my personal point of view, the ultimate question for us to address. Unfortunately, it is exactly this question for which we do not have a single clue about the answer.
As discussed under the problem of other minds, being conscious is such a subjective experience that we cannot even be certain whether the people around us are really conscious. The possibility that they are philosophical zombies exists, and there does not seem to be a definitive way to test this wild hypothesis. Of course, mere imaginability tells us nothing about the real world: such a creature may not even be possible in reality, since consciousness might arise automatically once a system’s complexity reaches a certain level, as Dennett has argued.
My conclusion in this short essay is that, for all the philosophical discussions of whether creatures or machines other than human beings could have consciousness, emotions, intelligence, or the ability to think, more clearly stated definitions of these words are necessary before the discussions can be meaningful. Personally, I view the problem of subjectivity (or consciousness) as the ultimate and most difficult question for us to answer. If we have no solid basis for knowing whether the person sitting next to us is conscious, finding the answer for AIs and other creatures will be even harder, perhaps leaving us with nothing but speculation.