Monday, September 28, 2009

Noel Sharkey to speak at PAIT workshop

Noel Sharkey, Professor of Artificial Intelligence and Robotics, Professor of Public Engagement, and EPSRC Senior Media Fellow at the University of Sheffield, is the third speaker to be added to the roster of the PAIT workshop. He joins Fred H. Cate, Distinguished Professor and C. Ben Dutton Professor of Law, IU School of Law, and Director of the Center for Applied Cybersecurity Research at Indiana University Bloomington, and Helen Nissenbaum, Professor of Media, Culture, and Communication and Senior Fellow of the Information Law Institute at New York University.

In addition to formal presentations by these three distinguished scholars, the workshop will feature panel presentations and small-group breakout discussions. We already have an outstanding group of registrants and look forward to welcoming more.

Two quick notes:
  1. Today (September 28, 2009) is the deadline for travel subsidy applications.
  2. We have added a link on our Web site for reserving hotel rooms at the workshop.
Ken Pimple, PAIT Project Director

Wednesday, September 23, 2009

"Nationwide Warnings of Faulty Transit Sensor"

This news story from the New York Times (September 22, 2009) cites a recent report from the National Transportation Safety Board about the June 22, 2009, Metro crash in Washington, D.C., in which nine people died and dozens were injured. Although the NTSB has not yet come to a conclusion about the cause of the crash, the report notes that "a critical part of the sensing system was replaced days before the accident and that the subway’s managers did not respond aggressively to earlier system failures that did not result in death or injury."

Whatever the cause of this deadly accident, it stands as yet another reminder that technology is only as safe as the people who use and maintain it, the people who oversee them, the policies that guide the overseers (when they follow the policies), and numerous other links in an all-too-frail system.

Ken Pimple, PAIT Project Director

Monday, September 7, 2009

"Gadget Makers Can Find Thief, but Don’t Ask"

This New York Times story focuses on the frustration of some owners whose Kindle readers were lost or stolen. According to the article, Amazon won't work with customers to locate missing or stolen Kindles unless the owner can get a subpoena from the police. Owners are understandably irked that Amazon won't even deactivate the device, which would make it useless because the thief could not register it and get new e-books. It seems like a self-serving move: if a thief, or an honest person who finds a lost Kindle, registers the device, Amazon can continue to sell e-books through that device.

I'm a Kindle owner and this news makes me wary. I'm probably not the only person who has a sense of loyalty and even gratitude to the companies that make and support my favorite devices, and I do associate the pleasure I derive from my Kindle with Amazon. Being reminded that Amazon is a business, and that customer service is important to most businesses only insofar as it helps the bottom line, is distasteful. But then, real life is often distasteful.

Be that as it may, as portable and pervasive IT devices become more common, and we grow more dependent upon them, we are likely to see more of this kind of problem. Amazon has one good argument for its stance: it doesn't want to deactivate any Kindles by mistake. How can it know how the device changed hands? If it's too easy to get a device disabled, pranksters will have a field day.

With a reading device, this is a nuisance. With future technologies, it might be a life-or-death matter. Wouldn't it be nice to forestall problems like this?

Ken Pimple, PAIT Project Director

Friday, September 4, 2009

"Why AI is a dangerous dream"

The September 1, 2009, edition of New Scientist includes an interview with Noel Sharkey. The interview is accompanied by 194 comments from readers as of this writing, so it is with some temerity that I venture to summarize Dr. Sharkey's main contention - but here goes.

As I read it, Sharkey sees a pervasive acceptance of the view that artificial intelligence can now, or will soon be able to, emulate human intelligence in ways that will be useful and benign in everyday life, including, for example, robots that can sympathetically care for the sick. Sharkey believes that this view overstates the capacities of AI. At the risk of putting words into his mouth, I believe that Sharkey is concerned that if we build health care robots using actual available technology (or near-future technology) but are led by this erroneous view of AI, we are bound to run into serious problems, not because AI will soon be superior to human intelligence, but because actual AI will fall short of our hopes and, more importantly, expectations. I think of the many apparently well-intentioned projects of the past that failed disastrously in part because we expected and wanted them to succeed - urban renewal, the institutionalization of people believed to be mentally ill or mentally retarded, prohibition, the war on drugs.

Whether my summation is accurate or not, one question and answer stand out as relevant to the PAIT project:

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about - childcare, policing, military, eldercare and medical - I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

What do you think? Feel free to share your comments below.

Ken Pimple, PAIT Project Director

Tuesday, September 1, 2009

"A Casualty of the Technology Revolution: ‘Locational Privacy’"

A commentary in today's New York Times, citing the Electronic Frontier Foundation, outlines and raises concerns about widely used technologies that make it easy to record our every movement - probably not news to readers of this blog, but possibly an eye-opener to many people.

Here are the recommendations from near the end of the commentary.
What can be done? As much as possible, location-specific information should not be collected in the first place, or not in personally identifiable form. There are many ways, as the Electronic Frontier Foundation notes, to use cryptography and anonymization to protect locational privacy. To tell you about nearby coffee shops, a cellphone application needs to know where you are. It does not need to know who you are.

When locational information is collected, people should be given advance notice and a chance to opt out. Data should be erased as soon as its main purpose is met. After you pay your E-ZPass bill, there is no reason for the government to keep records of your travel.
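The coffee-shop example lends itself to a tiny sketch. This is not how any actual service is built; it is a minimal, self-contained Python illustration, with invented shop data and a hypothetical nearby_shops function, of the quoted principle: the query takes coordinates and a search radius, and nothing else - no name, no account, no device ID.

```python
import math

# Invented shop data, purely for illustration: name and (latitude, longitude).
SHOPS = [
    ("Java House", (39.1670, -86.5342)),
    ("Bean There", (39.1702, -86.5210)),
    ("Mocha Stop", (40.0000, -86.0000)),
]

def distance_km(a, b):
    """Approximate great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

def nearby_shops(lat, lon, radius_km=2.0):
    """Answer a 'what's near me?' query from coordinates alone.
    Note what is *not* a parameter: any user identity."""
    return [name for name, loc in SHOPS
            if distance_km((lat, lon), loc) <= radius_km]

print(nearby_shops(39.1683, -86.5264))  # → ['Java House', 'Bean There']
```

The point of the sketch is in the function signature: the server can answer the question without ever learning who asked it, which is exactly the separation the commentary calls for.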
Are these measures adequate? How can they be implemented? Please share your thoughts and comments.

Ken Pimple, PAIT Project Director