Lanier compares Ray Kurzweil's idea of "the Singularity" to a religion, observing that "a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity," and wondering whether these tensions would be eased a bit if technologists were less messianic.
I think Lanier's ideas are valid and worth contemplating, but I'll take the general train of thought on a slight detour. One of the objectives of AI research has been to make machines think like people. This has often driven researchers to try to understand how people actually think - how our brain, mind, emotions, and body interact to form thoughts, premises, conclusions, convictions, beliefs, and all the rest; even how we recognize a person's identity from her or his face.
The more I learn about AI and human psychology - and I have learned only a very small amount about either - the more convinced I am that AI research not only mystifies our understanding of human nature (as Lanier recognizes), but also has the potential to clarify it.
Lanier writes:
In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

To me, this is also an example of how computers do not think like human beings, and that trying to make them think like us might be useful heuristically, but isn't really a desirable goal in and of itself. Why spend so much money trying to make more things that think like people when we already have several billion people who are already experts?
Perhaps we should recognize, and emphasize, that "artificial intelligence" only resembles human intelligence insofar as it can solve some problems that, heretofore, only humans have been able to solve. For the moment, I have yet to be convinced that AI is more than a really sophisticated hand-held calculator. We aren't metaphysically threatened by machines that can do arithmetic thousands of times faster and more accurately than we can; why should we be threatened by a handful of machines that seem to be able to hold a semi-coherent conversation with us under very narrow circumstances?
Ken Pimple, PAIT Program Director
1 comment:
I agree with Ken that AI both mystifies and illuminates.
Scientific models are heuristic fictions we have created to guide research. Brain models are interesting in that they exhibit a self-referential aspect. The result is not quite a paradox—more like a fetish. We create an idol (model) and prostrate ourselves before it by imagining our minds to function wholly within it. It is important not to start believing that our entire mental process is just some epiphenomenon of a model we created. Each model illuminates some aspect of us in the same way that all technology objectifies some aspect of ourselves, but to get trapped in one illusion we have spun is to get caught in the mystifying veil of Maya.