As I read it, Sharkey sees a pervasive acceptance of the view that artificial intelligence can now, or will soon be able to, emulate human intelligence in ways that will be useful and benign in everyday life, including, for example, robots that can sympathetically care for the sick. Sharkey believes that this view overstates the capacities of AI. At the risk of putting words into his mouth, I believe Sharkey is concerned that if we build health care robots using currently available (or near-future) technology while guided by this erroneous view of AI, we are bound to run into serious problems: not because AI will soon be superior to human intelligence, but because actual AI will fall short of our hopes and, more importantly, our expectations. I think of the many apparently well-intentioned projects of the past that failed disastrously in part because we expected and wanted them to succeed: urban renewal, the institutionalization of people believed to be mentally ill or mentally retarded, Prohibition, the war on drugs.
Whether or not my summary is accurate, one question and answer from the interview stand out as relevant to the PAIT project:
Is this why you are calling for ethical guidelines and laws to govern the use of robots?
In the areas of robot ethics that I have written about - childcare, policing, military, eldercare and medical - I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.
What do you think? Feel free to share your comments below.
Ken Pimple, PAIT Project Director