Imitation Does Not Equal Intelligence
There is something most extraordinary in the air at present: we simultaneously seek to understand the brain as a computer, and the computer as a brain. Though no one actually seeks to equate the two directly, the very attempt seems self-defeating, since the metaphor by which we try to deepen our understanding is itself unintelligible. If we do not understand the brain properly and profoundly, how can using it as a metaphor substantially enlarge our grasp of ‘Artificial Intelligence’?
The supposed analogy between computers and human brains has provoked fear, excitement and, ultimately, speculation about what role AI might soon play in society. In both popular culture – e.g., the works of Isaac Asimov and the Terminator films – and in ‘serious’ journalism, people do seem to imbue computers with intellect. There are now, it is true, computers that can defeat grandmasters at chess or distinguish pictures that contain cats from those that do not. These achievements imitate faculties of the human mind, the most astonishing of which is a certain apparent degree of abstraction. When a computer successfully distinguishes between a picture of a cat and a picture of a car, it is suggested that the computer understands ‘cat-ness’ – that which makes a cat a cat.
There is the rub. The computer only appears to abstract. It has no concept of ‘cat’ beyond what a human mind has programmed into it. The celebrated Ada, Countess of Lovelace (1815–1852), once said, ‘the Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ That, as she well knew, is all a computer can ever do: follow instructions (algorithms, to be precise). As the algorithms become more refined and complex, the range of what a computer can do certainly expands. Yet, no matter how impressive the outcome, it can still be only a reflection of intelligence in the programmer – not in the computer itself. Intelligent activity does not originate in the latter.
With each new development in neuroscience it becomes clearer that brains do not work like computers. Even something as apparently simple as memory works quite differently in the two. A computer runs out of memory, and so must delete information to make more space. Astonishingly, the human brain works the other way round: studies have shown that a mind learns new facts more quickly the more it already knows, while the ‘emptier’ a mind is, the slower it will be at learning new facts. If something as simple as memory (simple in a computer, that is) differs so profoundly, it is a wonder that we persist in associating brain and computer. And the comparison is more inadequate still, for we are significantly further from understanding the brain than the computer.
Classical philosophers believed the intellect was a faculty of the soul. Aristotle went further and said that having a mind (nous, rendered as ‘intellect’ or ‘reason’) is essential to a human being, just as sensory faculties are essential to animals. If we liberate ourselves from prejudice and ‘lazy’ associations, we open the door to a ‘paradigm shift’. That phrase was famously coined by the philosopher of science Thomas Kuhn to describe the reframing of scientific thought and its underlying assumptions that can allow a rapid advance in knowledge. Such a shift cannot originate within science itself; it emerges, rather, from a complex social process. Although we shall undoubtedly grow in understanding of the brain, that will never throw real light on the intellect.