A recent article in the New York Times concerns "leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds" to debate "whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload."
According to the article, the scholars expressed concerns about how advances in AI might be used by criminals, whether smart machines might take jobs away from people, and the likelihood that emerging technologies will "force humans to learn to live with machines that increasingly copy human behaviors."
These worries sound very familiar to me. What useful technology cannot be used by criminals (think robbers and pantyhose)? Major technological changes have always taken jobs from people, and many have also created new ones. And although we don't encounter many machines today that copy human behaviors, we have adapted to more social changes than I can hope to name.
I don't know how well this article reflects the actual meeting, but I suspect (and hope) that the actual conversations went beyond clichés. I suppose the devil is in the details, and the New York Times probably isn't the place to look for a detailed discussion of complex social and technological issues. But this is the kind of coverage that advances in AI and pervasive technologies tend to get in the popular media, which suggests that it shapes public understanding of these issues. One challenge facing this group of scholars, and everyone involved in the PAIT project, is coping with such broad-brush and somewhat shallow portrayals of the issues.
Ken Pimple, PAIT Project Director