The only way this would work is if the Chinese Room were continuously being fed information from the outside world. But once you add this stipulation, the “understanding” of the Chinese Room begins to look more genuine, because this simple “instruction manual” now has the ability to perform novel visual recognition and process complex cultural information, as well as being hooked up in real time to a complex causal network.
Rachel Anne Williams
But where does the understanding of the information coming from the outside world arise? Many of our computers interact with the environment, albeit in fairly rudimentary ways. However, none of them so far have shown any understanding of that environment, let alone any interest in finding out anything about the environment beyond what their programming dictates.
What’s special about biological (carbon-based) brains is their plasticity. They develop over time and can grow new neurons in response to environmental stimuli. Computers completely lack this ability. In addition, biology is bottom-up (emergent), while technology is top-down (designed).
In an article I recently wrote regarding computer “intelligence,” I cited Nicholas Carr, who in his book The Glass Cage refers to a problem raised by the University of Toronto computer scientist Hector Levesque. Levesque “provides an example of a simple question that people can answer in a snap but that baffles computers: The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?” The reason computers have a problem with this simple question is that they completely lack any understanding of what Styrofoam really is, of the relative hardness of objects, or any ability to construct an accurate picture of the context using the information available to them.
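The bind Levesque's example puts shallow methods in can be made concrete with a sketch. Below, `naive_resolve` is a hypothetical heuristic of my own devising (not anything from Levesque or Carr): it resolves the pronoun "it" to the nearest noun mentioned before it, with no knowledge of hardness or materials. Swapping "Styrofoam" for "steel" flips the answer a human would give, yet the heuristic, blind to what the words mean, returns the same noun both times:

```python
# A minimal sketch of why such questions defeat surface-level processing.
# naive_resolve is a hypothetical heuristic: it picks the candidate noun
# that appears closest before the pronoun "it", with no world knowledge.

def naive_resolve(sentence: str, nouns: list[str]) -> str:
    """Resolve 'it' to the nearest preceding candidate noun."""
    pronoun_pos = sentence.index(" it ")
    best, best_pos = None, -1
    for noun in nouns:
        pos = sentence.rfind(noun, 0, pronoun_pos)  # last mention before "it"
        if pos > best_pos:
            best, best_pos = noun, pos
    return best

nouns = ["ball", "table"]
s1 = "The large ball crashed right through the table because it was made of Styrofoam."
s2 = "The large ball crashed right through the table because it was made of steel."

# The heuristic gives the same answer for both variants...
print(naive_resolve(s1, nouns))  # table
print(naive_resolve(s2, nouns))  # table
# ...but a human flips the referent: Styrofoam points to the table,
# steel points to the ball. Getting that right requires knowing which
# material is fragile and which is hard, i.e., understanding.
```

The point is not that this particular heuristic is bad and a cleverer one would succeed; it is that no rule operating on the words alone can recover an answer that depends on what Styrofoam is.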
I put the problem this way in my article: “Curiosity is, among other things, one very important ‘feature of intelligence’ that so far hasn’t been ‘precisely described,’ and which no program so far devised has even come remotely close to simulating. For all the things our machines can do, they remain as incurious as a stone. So far no computer, no matter how sophisticated, has shown any desire to do any investigating to satisfy its own curiosity.”
In other words, until a machine shows an interest, of its own accord, in the meaning of the symbols entering the Chinese Room, any discussion of machine intelligence is moot. No self-awareness means no curiosity, which means no learning or understanding.