Friday, November 6, 2009

Taking a break

The PAIT workshop is now four months away and we have not had any new registrants for two weeks. This week we launched a LISTSERV e-mail list for current (and future) registrants for pre-workshop conversations, in the hope of making our face time as productive as possible.

My focus for the next few months will be on working with the people who will be at the workshop, rather than recruiting more. Thus I will be taking a break from updating this blog, probably until after the workshop.

Ken Pimple, PAIT Project Director

Monday, October 12, 2009

"What will talking power meters say about you?"

The October 9 entry on Bob Sullivan's MSNBC blog, "The Red Tape Chronicles," asks:
Would you sign up for a discount with your power company in exchange for surrendering control of your thermostat?  What if it means that, one day, your auto insurance company will know that you regularly arrive home on weekends at 2:15 a.m., just after the bars close?
The potential benefits of Smart Grid technology are many, including more efficient use of energy, fewer blackouts and brownouts, and lower energy costs. But utility companies will collect enormous amounts of data on consumers hooked up to the Smart Grid. Utility companies might sell that data to companies that will use it to protect themselves - like the hypothetical auto insurance company that might raise your premium or cancel your policy (or notify the police?) based on thin evidence that you drink and drive most weekends.
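
To see how little sophistication such inferences require, here is a toy sketch, in Python, of the kind of analysis a data buyer might run on hourly meter readings. The numbers and threshold are invented for illustration; real analytics would be far more elaborate, which only sharpens the worry.

```python
# Illustrative only: infers "arrival home" events from hourly smart-meter
# readings. The data, threshold, and inference rule are invented for this
# sketch; real analytics would be far more sophisticated.

# One week of hourly kWh readings for the hours 0:00-5:00 a.m. (toy data).
readings = {
    "Fri": [0.2, 0.2, 2.1, 1.8, 0.3, 0.2],  # spike at 2:00 a.m.
    "Sat": [0.2, 0.3, 2.3, 1.9, 0.2, 0.2],  # spike again
    "Sun": [0.2, 0.2, 0.2, 0.3, 0.2, 0.2],  # quiet night
}

BASELINE = 0.5  # kWh; usage above this suggests someone is up and about

for day, hours in readings.items():
    # Flag the first early-morning hour in which usage jumps past baseline.
    active = [hour for hour, kwh in enumerate(hours) if kwh > BASELINE]
    if active:
        print(f"{day}: activity begins around {active[0]}:00 a.m.")
    else:
        print(f"{day}: no late-night activity detected")
```

A dozen lines of code and a week of data, and the meter is already telling a story about when you come home.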

Strong data privacy laws in Europe may protect EU consumers, but the U.S. does not have such laws. Utility companies have financial incentives to adopt Smart Grid technology, and also to sell the data. Is there any force in the United States stronger than money, a force that can get laws implemented to protect consumers at some financial cost to business? If there is, it isn't Congress.

Thanks to Donald Searing of Syncere Systems for drawing my attention to this story.

Ken Pimple, PAIT Project Director

Monday, September 28, 2009

Noel Sharkey to speak at PAIT workshop

Noel Sharkey, Professor of Artificial Intelligence and Robotics, Professor of Public Engagement, and EPSRC Senior Media Fellow at the University of Sheffield, is the third speaker to be added to the roster of the PAIT workshop. He joins Fred H. Cate, Distinguished Professor and C. Ben Dutton Professor of Law, IU School of Law, and Director of the Center for Applied Cybersecurity Research at Indiana University Bloomington, and Helen Nissenbaum, Professor of Media, Culture, and Communication and Senior Fellow of the Information Law Institute at New York University.

In addition to formal presentations by these three distinguished scholars, the workshop will feature panel presentations and small-group breakout discussions. We already have an outstanding group of registrants and look forward to welcoming more.

Two quick notes:
  1. Today (September 28, 2009) is the deadline for travel subsidy applications.
  2. We have added a link on our Web site for reserving hotel rooms at the workshop.
Ken Pimple, PAIT Project Director

Wednesday, September 23, 2009

"Nationwide Warnings of Faulty Transit Sensor"

This news story from the New York Times (September 22, 2009) cites a recent report from the National Transportation Safety Board about the June 22, 2009, Metro crash in Washington, D.C., in which nine people died and dozens were injured. The NTSB has not yet come to a conclusion about the cause of the crash, but the story notes that "a critical part of the sensing system was replaced days before the accident and that the subway’s managers did not respond aggressively to earlier system failures that did not result in death or injury."

Whatever the cause of this deadly accident, it stands as yet another reminder that technology is only as safe as the people who use and maintain it, the people who oversee them, the policies that guide the overseers (when they follow the policies), and numerous other links in an all-too-frail system.

Ken Pimple, PAIT Project Director

Monday, September 7, 2009

"Gadget Makers Can Find Thief, but Don’t Ask"

This New York Times story focuses on the frustration of owners whose Kindle readers were lost or stolen. According to the article, Amazon won't work with customers to locate missing or stolen Kindles unless the owner can get a subpoena from the police. Owners are understandably irked that Amazon won't even deactivate the device, which would make it useless because the thief could not register it and get new e-books. It seems like a self-serving move: if a thief, or an honest person who finds a lost Kindle, registers the device, Amazon can continue to sell e-books through it.

I'm a Kindle owner and this news makes me wary. I'm probably not the only person who has a sense of loyalty and even gratitude to the companies that make and support my favorite devices, and I do associate the pleasure I derive from my Kindle with Amazon. Being reminded that Amazon is a business, and that customer service is important to most businesses only insofar as it helps the bottom line, is distasteful. But then, real life is often distasteful.

Be that as it may, as portable and pervasive IT devices become more common, and we grow more dependent upon them, we are likely to see more of this kind of problem. Amazon has one good argument for its stance: it doesn't want to deactivate any Kindles by mistake. How can it know how a device changed hands? If it's too easy to get a device disabled, pranksters will have a field day.

With a reading device, this is a nuisance. With future technologies, it might be a life-or-death matter. Wouldn't it be nice to forestall problems like this?

Ken Pimple, PAIT Project Director

Friday, September 4, 2009

"Why AI is a dangerous dream"

The September 1, 2009, edition of New Scientist includes an interview with Noel Sharkey. The interview is accompanied by 194 comments from readers as of this writing, so it is with some temerity that I venture to summarize Dr. Sharkey's main contention - but here goes.

As I read it, Sharkey sees a pervasive acceptance of the view that artificial intelligence can now, or will soon be able to, emulate human intelligence in ways that will be useful and benign in everyday life - including, for example, robots that can sympathetically care for the sick. Sharkey believes that this view overstates the capacities of AI. At the risk of putting words into his mouth, I believe he is concerned that if we build health care robots with the technology actually available now (or in the near future) but are guided by this erroneous view of AI, we are bound to run into serious problems - not because AI will soon be superior to human intelligence, but because actual AI will fall short of our hopes and, more importantly, our expectations. I think of the many apparently well-intentioned projects of the past that failed disastrously in part because we expected and wanted them to succeed: urban renewal, the institutionalization of people believed to be mentally ill or mentally retarded, Prohibition, the war on drugs.

Whether my summation is accurate or not, one question and answer stand out as relevant to the PAIT project:

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about - childcare, policing, military, eldercare and medical - I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

What do you think? Feel free to share your comments below.

Ken Pimple, PAIT Project Director

Tuesday, September 1, 2009

"A Casualty of the Technology Revolution: ‘Locational Privacy’"

A commentary in today's New York Times, citing the Electronic Frontier Foundation, outlines and raises concerns about widely used technologies that make it easy to record our every movement - probably not news to readers of this blog, but possibly an eye-opener for many people.

Here are the recommendations from near the end of the commentary.
What can be done? As much as possible, location-specific information should not be collected in the first place, or not in personally identifiable form. There are many ways, as the Electronic Frontier Foundation notes, to use cryptography and anonymization to protect locational privacy. To tell you about nearby coffee shops, a cellphone application needs to know where you are. It does not need to know who you are.

When locational information is collected, people should be given advance notice and a chance to opt out. Data should be erased as soon as its main purpose is met. After you pay your E-ZPass bill, there is no reason for the government to keep records of your travel.
Are these measures adequate? How can they be implemented? Please share your thoughts and comments.
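
To picture the "know where you are, not who you are" principle from the quoted passage, here is a minimal sketch, in Python, of a location query stripped of identity. The service name and API shape are my own invention for illustration; real systems would layer cryptographic protections on top, as the EFF suggests.

```python
# Illustrative sketch of the "where you are, not who you are" principle:
# the client coarsens its coordinates and sends no identifier at all.
# The service name and query format are invented for this example.

def coarsen(lat, lon, places=2):
    """Round coordinates to roughly 1 km so the server learns only a rough area."""
    return (round(lat, places), round(lon, places))

def build_query(lat, lon):
    # Note what is absent: no user ID, no device ID, no account token.
    area = coarsen(lat, lon)
    return {"service": "coffee_shops", "near": area}

query = build_query(39.16533, -86.52639)  # Bloomington, Indiana
print(query)  # {'service': 'coffee_shops', 'near': (39.17, -86.53)}
```

The application still finds nearby coffee shops; it simply never learns, and so can never leak, who was asking.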

Ken Pimple, PAIT Project Director

Monday, August 24, 2009

Case study: The Presence Clock

Comments on and discussion of this case study are welcome; please use the Comment function. - Ken Pimple

Sensing Presence and Privacy: The Presence Clock

Kalpana Shankar, Ph.D.,
Assistant Professor of Informatics and Computing,
Indiana University Bloomington

Oliver McGraw, B.S.

Tom lives about two hours away from his 89-year-old mother, Judy. He has been quite worried about her since she fell last year. Since she is otherwise healthy, she insists that she does not need a medical alert bracelet. So Tom purchased a pair of Presence Clocks for her birthday.

The two analog Presence Clocks (see picture) are equipped with motion sensors and lights to record motion and presence. The clocks are connected to each other via the Internet. In this way, a family member does not need to be present at the time of remote activity in order to “see” it. Each of the owners of the clocks can sense at a glance the remote activity of the other.

Tom put one of the clocks in Judy’s kitchen and the other in his own. When the clock at Judy’s house detects movement in her kitchen, a green light (near the 3) begins blinking on Tom’s clock. The clock also indicates when Judy was last in her kitchen: the intensity of the blue light at each hour marker shows how much time someone has spent near the clock during that hour. For example, a bright light at 4 and a dull light at 12 indicate that someone spent quite a bit of time near the clock in the 4:00 hour, and not as much in the 12:00 hour. Similarly, Judy’s clock lets her know when Tom is in his kitchen or when he was last there. Through these clocks, Judy and Tom can both feel as though they have had contact with each other during the day, and Tom can be reassured daily that his mother has not had an accident.
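
For a concrete picture of the display logic just described, here is a toy model in Python; the thresholds and function names are guesses for illustration, not the actual product's design.

```python
# A toy model of the Presence Clock's display logic as the case describes it:
# recent motion lights the green lamp, and each hour marker glows in
# proportion to time spent near the clock during that hour. All names and
# thresholds here are invented for illustration.
from datetime import datetime, timedelta

motion_events = [  # timestamps reported by the remote clock's motion sensor
    datetime(2009, 8, 24, 4, 5), datetime(2009, 8, 24, 4, 40),
    datetime(2009, 8, 24, 12, 50),
]

def hour_brightness(events, hour):
    """Crude dwell estimate: count of motion events in that hour, capped."""
    hits = sum(1 for e in events if e.hour == hour)
    return min(hits, 3)  # 0 = dark ... 3 = brightest

def green_blinking(events, now):
    """Blink the green light if motion was seen within the last few minutes."""
    return any(now - e < timedelta(minutes=5) for e in events)

now = datetime(2009, 8, 24, 12, 52)
print("green blinking:", green_blinking(motion_events, now))   # True
print("4 o'clock lamp:", hour_brightness(motion_events, 4))    # 2 (brighter)
print("12 o'clock lamp:", hour_brightness(motion_events, 12))  # 1 (dimmer)
```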

After a week, Judy began to feel uncomfortable, as though the Presence Clock were an invasion of her privacy. Tom has started to ask her specific questions: Where did you go this afternoon? Why did you not get up until nine? She knows that she is not as healthy as she used to be, but she wants to stay independent. It was all she could do after her fall last year to persuade him that she did not need to move into an assisted living facility. So, she keeps the clock and does not complain. If it makes Tom feel better, maybe he won’t make her move. She decides the clock is better than having to move, but she is worried about what Tom will think of next.

Although Tom certainly seems to be a loving, caring son who means well by his purchase of the Presence Clocks, what is clearly troubling in this scenario is the way in which the clock, combined with the use Tom makes of it, invades his mother’s privacy, treats her paternalistically, and ultimately disrespects her autonomy (i.e., her ability to run her affairs as she chooses).

Commentary

Sandra Shapshay, Ph.D.,
Assistant Professor of Philosophy,
Indiana University-Bloomington

Upon first hearing about this new technology, and having young children, my first thought was: that’s kind of like a baby monitor, but for checking in on seniors. How similar are these technologies?

Well, unlike the baby monitor, the Presence Clock does not pick up actual sounds (or, in fancier models, images and sound). Another dissimilarity is that the Presence Clocks pick up and transmit presence on both ends, whereas the baby monitor is a one-way surveillance device. The original intentions behind the devices also differ: the designers of the Presence Clock saw it largely as a kind of gentle “social networking device” - a way to allow people to feel more present in each other’s lives - whereas the baby monitor obviously has no real social networking side. So, there are significant differences between the Presence Clock and the baby monitor; but in the use that Tom is making of the clocks in this case, there is something similar going on, and it is unsettling.

Like almost any technology, the Presence Clock can be used in an ethically responsible or troubling manner. A hammer can be used by a carpenter to build a chair, or by a robber to break into someone’s house. I’m sure the Presence Clock could be used in a completely innocuous fashion, to help people feel better connected to each other, to make seniors and their children all feel safer. But in the case at hand, we see one way in which the technology can be abused.

Judy is an apparently competent 89-year-old woman. But the case suggests that since Judy’s fall, Tom has been treating her as less than competent to make her own decisions – note “It was all she could do after her fall last year to persuade him that she did not need to move into an assisted living facility.” Indeed, one can detect a subtle threat behind the use of the Presence Clock: Keep it, or else you’ll have to move out of your home. Judy is understandably disturbed. Imagine if you had a bout of depression, recovered, but then were told by a paternalizing adult child: You need to use this device or I’ll do what I can to have you committed to a psychiatric facility. Imagine if your son has a good rapport with your psychologist, so that he could probably make good on his threat to have you committed. It is not much of a stretch to see that Judy is in an analogous position. If she refuses to use the Presence Clocks, Tom might take more aggressive measures to have Judy moved to an assisted living facility, something she really doesn’t want.

Now, in both the case of the formerly depressed parent and the case of Judy, the clock might actually benefit all parties. If I should have a relapse of my depression and stay in bed for the entire day while my clock sits in the kitchen, my son might have an early warning of the situation. If Judy should have another fall and not be able to get to her kitchen all day, she might very well be thankful that Tom was minding his Presence Clock.

Nonetheless, if this expected benefit did not outweigh the burdensome invasion of my privacy, I would object that my autonomy – that is, my ability, as a currently competent adult, to run my affairs as I see fit – was being disrespected. The pressure to use the Presence Clock, and thus to give up some of the privacy I cherish, treats me as less than fully autonomous. It is the hallmark of a liberal society, such as ours, that competent adults be allowed to make decisions for themselves, so long as they don’t hurt anyone else in the process. Judy may be putting herself at greater risk without the Presence Clock, but, as a competent adult, that is her prerogative.

So this leads to my first ethical worry about this technology: It may be used even by well-meaning relatives and friends to treat adults with less respect for their autonomy than they deserve, especially if adults are vulnerable in some way. It is worrisome that their vulnerability may be used to leverage the relinquishing of some personal privacy with this device.

Does this ethical worry about this potential use of the Presence Clock mean that its sale or use should be restricted by law? Not really; a baby monitor might be used in a similar fashion as a surveillance device on adults, but I don’t think that the possibility of a technology’s abuse is a bona fide reason to restrict its sale, singling it out among the many other technologies that may be thus abused (e.g., hammers, baseball bats, household bleach). Nevertheless, it would be a good idea to raise awareness among seniors about this technology, and to empower them to say “no” to its use if it would represent an unwelcome invasion of their privacy, one not outweighed by potential benefits, all things considered.

There is another ethical worry I have with this technology, one that is much more diffuse and part of a wider cultural phenomenon in the United States: the substitution of technological connectedness for the actual presence of adult children in their parents’ lives. It is not uncommon for grown children to move rather far away from their parents; a recent New York Times article reported that adult children in the U.S. now live, on average, about two hours’ driving time from their parents. For better or for worse, aging parents are being cared for less and less by their adult children and more and more in assisted living facilities and nursing homes. It seems inherent in the logic of the Presence Clock, as applied to relations between seniors and their adult children, to substitute virtual presence for the actual presence of adult children in their aging parents’ lives.

Insofar as this is the case, does the Presence Clock merely rationalize a troubling situation, put a circular band-aid on the problem? Does the Presence Clock make it easier for adult children to feel a bit better about not living up to their filial obligations? Or is the Presence Clock a technological support to seniors and their adult children – making a potentially bad situation a lot better given these sociological realities? Do adult children fulfill their obligations better with technologies such as the Presence Clock? Do seniors find the Presence Clock comforting or a second-rate form of connectedness? Obviously, there are large and difficult questions here dealing with the nature of filial obligations, and whether the current state of care for seniors is right or good.

It is likely that some seniors prefer lesser involvement of their children in their daily lives and that some would prefer more. Each family dynamic is individual and highly complex. But I would like to voice this more diffuse worry: that the Presence Clock may be part of a larger, worrisome trend in relationships between adult children and seniors, one that does not conduce to the flourishing of seniors or of the relationship between seniors and their adult children.

In sum, the Presence Clock might very well fill a safety and connectedness need for seniors and their adult children. However, like any technology, it may be deployed even with the best of intentions in a manner that diminishes the privacy of seniors and disrespects their status as competent adults.



This case and commentary were prepared for and presented at the annual meeting of the Association for Practical and Professional Ethics, Cincinnati, Ohio, March 2009.

Copyright © 2009, Kalpana Shankar, Oliver McGraw, and Sandra Shapshay. All rights reserved.

Permission is hereby granted to reproduce and distribute copies of this work for nonprofit educational purposes, provided that copies are distributed at or below cost, and that the authors, source, and copyright notice are included on each copy. This permission is in addition to rights of reproduction granted under Sections 107, 108, and other provisions of the U.S. Copyright Act.

Thursday, August 20, 2009

“Engineering Towards a More Just and Sustainable World”

As if you needed any additional reasons to attend the PAIT workshop, you might also be interested in attending a mini conference at the APPE annual meeting. The following is from the APPE Web site:

A mini conference, “Engineering Towards a More Just and Sustainable World,” will be held from Saturday afternoon, March 6, through Sunday noon, March 7, 2010. Registration is $40 for those registered for the preconference workshop, “Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology (PAIT),” or for the Annual Meeting. Registration for the mini conference alone is $70.

Ken Pimple, PAIT Project Director

Marc Rotenberg in Cincinnati March 5, 2010

Here's another good reason to participate in the PAIT workshop: Marc Rotenberg, Executive Director of the Electronic Privacy Information Center (EPIC), will deliver the keynote address at the APPE annual meeting on the day after the PAIT workshop, in the same hotel. The following is from the APPE Web site:

The keynote speaker for the Nineteenth Annual Meeting will be Marc Rotenberg, the Executive Director of the Electronic Privacy Information Center (EPIC) in Washington, DC.

Marc Rotenberg teaches information privacy law at Georgetown University Law Center and has testified before Congress on many issues, including access to information, encryption policy, consumer protection, computer security, and communications privacy. He testified before the 9-11 Commission on “Security and Liberty: Protecting Privacy, Preventing Terrorism.”

Dr. Rotenberg has served on several national and international advisory panels, including the expert panels on Cryptography Policy and Computer Security for the OECD, the Legal Experts on Cyberspace Law for UNESCO, and the Countering Spam program of the ITU. He currently chairs the ABA Committee on Privacy and Information Protection and is the former Chair of the Public Interest Registry, which manages the .ORG domain. He is editor of Privacy and Human Rights and The Privacy Law Sourcebook, and co-editor (with Daniel J. Solove and Paul Schwartz) of Information Privacy Law (Aspen Publishing 2007).

He is a graduate of Harvard College and Stanford Law School and served as Counsel to Senator Patrick J. Leahy on the Senate Judiciary Committee after graduation from law school. He is a Fellow of the American Bar Foundation and the recipient of several awards including the World Technology Award in Law.
Ken Pimple, PAIT Project Director

Friday, August 7, 2009

Ethics and Assistive Technology Survey

Researchers at the University of British Columbia recently posted an Ethics and Assistive Technology Survey. The group's earlier survey on robot ethics appears still to be open.

From their Web site: "The purpose of the surveys is to facilitate well-informed discussion and explore attitudes about complex issues related to ethics (including animal welfare) and scientific and technological developments."

Thanks to Colin Allen (a member of the PAIT Planning Committee) for bringing these surveys to my attention.

Ken Pimple, PAIT Project Director

Monday, August 3, 2009

Working definitions

Comments on and discussion of these working definitions are welcome; please use the Comment function. Slight changes made 10/30/2009. - Ken Pimple

The PAIT Planning Committee developed these working definitions to help guide our efforts. They are intended to be useful rather than conclusive.

For the purposes of this workshop, we consider terms such as “pervasive computing,” “ubiquitous computing,” “ubicomp,” “everyware,” “ambient intelligence,” and “ambient computing” to be roughly synonymous. We use the term “information technology” to highlight the important role of hardware not usually associated with computers, such as advanced sensing and communication devices, involved in most pervasive IT. Our shorthand for these technologies and their application is PAIT.

Definition: Pervasive IT devices are small and/or unobtrusive (compared to a desktop computer, for example) and can be embedded in everyday objects (e.g., carpets, clothing, doorways, toys) to collect and/or act upon data generated by or important to human activity. Often the data collected can be wirelessly transmitted, stored, and shared on the Internet. In some instances, several devices will share data and work together toward a common goal. Some will be unobtrusive and generally unnoticed while others will interact perceptibly with people (asking questions, giving reminders).

Some pervasive technologies are also autonomous, or self-directing.

Definition: Autonomous systems are typically computer-based devices augmented with sensing devices beyond those found on a typical desktop computer, including analogues to vision and hearing. An autonomous system can operate for extended periods of time without direct human intervention and alter the way it performs by learning from its own experience. Some autonomous systems can also adapt to particular environments (e.g., by moving safely through a particular house) and some can perform based on non-linear calculations (e.g., Bayesian inference) such that performance cannot be completely predicted or characterized from the system’s programming. Many autonomous systems act only on and through data (as do most desktop computers), but others also act on the physical world (e.g., by welding joints). The latter are considered robots without regard to their physical shape or mobility status (they need not be humanoid and they can be bolted to a factory floor).
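
For readers who want a concrete picture, here is a minimal example, in Python, of the kind of Bayesian updating mentioned above; the scenario and all the numbers are invented for illustration. Note that the system's final belief depends on the stream of sensor readings it happens to receive, not just on its code - which is why such behavior cannot be completely predicted from the programming alone.

```python
# Minimal Bayesian update of the kind the definition alludes to: a household
# robot revises its belief that a doorway is blocked as noisy sonar pings
# arrive. The prior and sensor error rates are invented for illustration.

prior_blocked = 0.5          # initial belief that the doorway is blocked
P_PING_IF_BLOCKED = 0.9      # sensor reports "obstacle" when truly blocked
P_PING_IF_CLEAR = 0.2        # false alarm rate when the doorway is clear

belief = prior_blocked
for ping in [True, True, False, True]:   # a stream of noisy sonar readings
    if ping:
        like_blocked, like_clear = P_PING_IF_BLOCKED, P_PING_IF_CLEAR
    else:
        like_blocked, like_clear = 1 - P_PING_IF_BLOCKED, 1 - P_PING_IF_CLEAR
    # Bayes' rule: P(blocked | reading) ∝ P(reading | blocked) * P(blocked)
    numerator = like_blocked * belief
    belief = numerator / (numerator + like_clear * (1 - belief))
    print(f"after reading {ping}: P(blocked) = {belief:.3f}")
```

Run the same program against a different sequence of pings and it reaches a different conclusion - the behavior is determined by experience as much as by code.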

Friday, July 31, 2009

Registration, travel subsidies, people links

The PAIT Planning Committee has made substantial progress in preparing for the workshop. At our main Web site, you can now find
  • The workshop registration form (PDF format)
  • The policy and application form (the latter in PDF) for travel subsidies, intended to make it a bit easier for members of groups underrepresented in science and engineering to participate in the workshop
  • Links to many people with an interest in ethical issues associated with pervasive computing, autonomous systems, ubicomp, ambient intelligence, and the like
If you'd like your name added to the list, please let me know. I'm also interested in adding to the list of meetings and educational efforts on that same page; please send along any relevant links.

Stay posted for more updates!

Ken Pimple, PAIT Project Director

Thursday, July 30, 2009

"Swedish tourists miss island due to GPS typo"

A July 28 story from the Associated Press relates how a couple wanting to drive to the island of Capri misspelled the name in their GPS and wound up "some 400 miles (660 kilometers) away in the northern industrial town of Carpi."

Sometimes technology is only as smart as its users. How sophisticated would an AI have to be to prevent this mishap?
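
Perhaps not very sophisticated at all. Here is a minimal sketch, in Python, of how a navigation system might catch such a slip with plain edit distance; the candidate list and confirmation prompt are my own invention, and a real device would draw candidates from its gazetteer.

```python
# A navigation system could flag suspiciously similar place names with
# plain Levenshtein edit distance before routing the driver 400 miles astray.
# The candidate list and "did you mean" step are invented for illustration.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (one-row version)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

destination = "Carpi"
known_places = ["Capri", "Carpi"]  # in reality, a whole gazetteer
lookalikes = [name for name in known_places
              if name != destination and edit_distance(destination, name) <= 2]
if lookalikes:
    print(f"You entered {destination}. Did you mean {lookalikes[0]}?")
```

A single confirmation prompt - "Did you mean Capri?" - would have cost the couple two seconds instead of 400 miles.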

Ken Pimple, PAIT Project Director

Wednesday, July 29, 2009

Robots in Wired

The July 22 issue of Wired has an article on robot ethics - "Robo-Ethicists Want to Revamp Asimov's 3 Laws." As robots become more common in everyday settings and more people interact with them, there will certainly be more problems; it's just a fact of scale. But as robots get more sophisticated, as well as more common, novel problems are likely to arise, too.

One of the intriguing issues raised in this article concerns liability. Right now, if a product fails to perform as it should (automobile tires that fall apart at high speeds, for example), the manufacturer is liable for the damage. But at some point robots will be able to take actions that their designers and manufacturers have not foreseen and could not have predicted. Will the robot itself be liable for damages? Will the manufacturer? Or will there be some kind of shared liability? What would be the best approach?

Thanks to Colin Allen (a member of the PAIT Planning Committee) for bringing this article to my attention.

Ken Pimple, PAIT Project Director

AI on Wall Street

A recent article in the New York Times describes "high-frequency trading," a new system for trading stocks that depends on high-powered computers that "execute millions of orders a second and scan dozens of public and private marketplaces simultaneously. They can spot trends before other investors can blink, changing orders and strategies within milliseconds."

According to the article, only a handful of traders currently have access to these tools, giving them an edge in electronic trading. They are able to spot trends and act on them so quickly, and on such a large scale, that it's possible for a trader (that is, an AI trader) to buy in-demand shares and almost immediately sell them to slower traders at a profit.
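
To make the mechanism concrete, here is a deliberately crude sketch, in Python, of the speed advantage at work. All prices and sizes are invented, and real markets involve order books, fees, and competing fast traders; the point is only the arithmetic of reacting first.

```python
# Toy model of the speed advantage the article describes: a fast trader sees
# a large incoming buy order milliseconds early, takes the standing offer,
# and re-offers the same shares to the slower buyer at a markup.
# Every number here is invented for illustration.

ask_price, shares_available = 100.00, 1_000        # current best offer
incoming_buy = {"shares": 1_000, "limit": 100.10}  # a slow investor's order

# The fast trader reacts first: it buys the available shares...
fast_cost = shares_available * ask_price
# ...and immediately re-offers them just under the slow buyer's limit.
resale_price = incoming_buy["limit"] - 0.01
fast_revenue = incoming_buy["shares"] * resale_price

print(f"fast trader profit: ${fast_revenue - fast_cost:,.2f}")        # $90.00
print(f"slow investor pays: ${resale_price - ask_price:.2f}/share more")
```

Ninety dollars is trivial; repeated millions of times a day, it is a business model - and one available only to those who can afford the fastest machines.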

This use of artificial intelligence is not as glamorous as warrior robots, but it may have the potential to change the behavior of markets, not to mention create an uneven playing field based on access to this technology. The immediate changes are obviously welcomed by the traders who can indulge in high-frequency trading, but what are the ramifications down the road?

Thanks to Colin Allen (a member of the PAIT Planning Committee) for bringing this article to my attention.

Ken Pimple, PAIT Project Director

Monday, July 27, 2009

"Scientists Worry Machines May Outsmart Man"

A recent article in the New York Times concerns "leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds" to debate "whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload."

According to the article, the scholars expressed concerns about how advances in AI might be used by criminals, whether smart machines might take jobs away from people, and the likelihood that emerging technologies will "force humans to learn to live with machines that increasingly copy human behaviors."

These worries sound very familiar to me. What useful technology cannot be used by criminals (think robbers and pantyhose)? Major technological changes have always taken jobs from people, and many also create new jobs. And although we don't encounter many machines today that copy human behaviors, we have adapted to more social changes than I can hope to name.

I don't know how well this article reflects the actual meeting, but I suspect (and hope) that the actual conversations were more than cliches. I suppose that the devil is in the details, and the New York Times probably isn't the place to look for a detailed discussion of complex social and technological issues. But this is the kind of coverage that advances in AI and pervasive technologies tend to get in the popular media, which suggests that it shapes public understanding of these issues. One challenge facing this group of scholars - and everyone involved in the PAIT project - is coping with these broad-brush and somewhat shallow portrayals of the issues.

Ken Pimple, PAIT Project Director

Friday, July 24, 2009

Welcome to PAIT!

"Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology (PAIT)" is a two-day workshop to be held on March 3-4, 2010.

The workshop will precede the annual meeting of the Association for Practical and Professional Ethics, which will begin on Thursday, March 4, 2010, at the historic Hilton Cincinnati Netherland Plaza in Cincinnati, Ohio.

For background information on the workshop, see http://poynter.indiana.edu/pait/.

Watch this space for news about the workshop and activities leading up to it. If you would like to be added to our mailing list for occasional updates, please send me an e-mail message.

Kenneth D. Pimple, Ph.D., PAIT Project Director