Ken Pimple, PAIT Project Director
Tuesday, August 31, 2010
"Moral responsibility and autonomous media"
In this short article, PAIT participant Bo Brinkman argues that we can't shift the blame for bad results from the use of advanced technologies to the technologies themselves. No matter how smart our robots, bots, software, etc., become, the people who design, manufacture, sell, and use them share moral responsibility.
Monday, August 30, 2010
Call for papers: Ethics and Affective Computing
Call for Papers
IEEE Transactions on Affective Computing
Special Issue on Ethics and Affective Computing
The pervasive presence of automated and autonomous systems has driven the rapid growth of a relatively new area of inquiry called machine ethics. If machines are going to be turned loose on their own to kill and heal, explore and decide, the need to design them to be moral becomes pressing. This need, in turn, penetrates to the very foundations of ethics as robot designers strive to build systems that comply. Fuzzy intuitions will not do when computational clarity is required. So machine ethics also asks the discipline of ethics to make itself clear. The truth is that at present we do not know how to make it so. Rule-based approaches are being tried despite the acknowledged difficulty of formalizing moral behavior, and it is already common to hear that introducing affects into machines may be necessary to make them behave morally. From this perspective, affective computing may be morally required by machine ethics.
On the other hand, building machines with artificial affects might carry negative ethical consequences of its own. Designing robots and other automated computational devices to display emotion will help make humans more willing to accept them: if we like them, we will no doubt be more willing to welcome them. We might even pay dearly to have them. But do artificial affects deceive? Will they catch us with our defenses down, and do we have to worry about Plato's caveat in the Republic that one of the best ways to be unjust is to appear just? Automated agents that seem like persons might appear congenial even while ignoring any moral regard for us, making them dangerous culprits indistinguishable from automated "friends." In this light, machine ethics might demand that we exercise great caution in using affective computing. In radical cases, it might even demand that we not use it at all.
We would seem to have here a quandary. No doubt there are others. The purpose of this volume is to explore the range of ethical issues related to affective computing. Is affective computing necessary for making artificial agents moral? If so, why and how? Where does affective computing require moral caution? In what cases do benefits outweigh the moral risks? Etc.
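To make the formalization problem concrete, here is a minimal, hypothetical sketch of what a rule-based approach might look like in Python. The rules, action fields, and scenario are all invented for illustration; they come from no actual system.

```python
# A minimal, hypothetical sketch of a rule-based "ethical governor":
# a candidate action is permitted only if every explicit moral rule
# allows it. Rules and actions here are invented for illustration.

def no_harm(action):
    """Veto any action expected to injure a person."""
    return action.get("expected_harm", 0) == 0

def consent_required(action):
    """Veto actions that affect a person who has not consented."""
    return not action.get("affects_person", False) or action.get("has_consent", False)

RULES = [no_harm, consent_required]

def permissible(action):
    """Apply every rule; a single veto blocks the action."""
    return all(rule(action) for rule in RULES)

restrain = {"name": "restrain patient", "affects_person": True,
            "expected_harm": 1, "has_consent": False}
remind = {"name": "medication reminder", "affects_person": True,
          "expected_harm": 0, "has_consent": True}

for act in (restrain, remind):
    print(act["name"], "->", "allowed" if permissible(act) else "vetoed")
```

Even in a toy like this, the difficulty the call points to is visible: each rule must reduce a fuzzy moral intuition to a computable predicate, and the hard cases are precisely the ones that resist that reduction.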
Invited Authors:
- Roddy Cowie (Queen's University, Belfast)
- Luciano Floridi (University of Hertfordshire and University of Oxford)
- Matthias Scheutz (Tufts University)
Friday, August 20, 2010
Uberveillance and the Social Implications of Microchip Implants
Professor Katina Michael and Dr. M.G. Michael, University of Wollongong, Australia, have issued a call for chapter proposals for a book entitled Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies.
Professor and Dr. Michael define Uberveillance as "an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body. These embedded technologies can take the form of traditional pacemakers, radio-frequency identification (RFID) tag and transponder implants, biomems and nanotechnology devices."
From the call:
Submission Procedure
Researchers, practitioners and members of the general public are invited to submit on or before September 15, 2010, a 2 page chapter proposal clearly explaining the mission and concerns of his or her proposed chapter. Authors of accepted proposals will be notified by November 10, 2010 about the status of their proposals and sent chapter guidelines. Full chapters are expected to be submitted by January 30, 2011. All submitted chapters will be reviewed on a double-blind review basis. Contributors may also be requested to serve as reviewers for this project.
* * *
Important Dates
September 15, 2010: Proposal Submission Deadline
November 10, 2010: Notification of Acceptance
January 30, 2011: Full Chapter Submission
March 1, 2011: Review Results Returned
May 1, 2011: Final Chapter Submission
Please see the unusually detailed and helpful call for more details.
Disclosure: I am one of the 29 members of the Editorial Advisory Board.
Ken Pimple, PAIT Project Director
Monday, August 16, 2010
A grab-bag of goodies
Here are a few tidbits of possible interest that I have gathered over the last few months without managing to post them here. If only I had an autonomous agent to help me keep up with things.
- October 2009 - Mark Guzdial - How We Teach Introductory Computer Science is Wrong - Communications of the ACM blog
- April 2010 - Catharine Smith - Man Claims Google Street View Led Burglars to Target His Home - Huffington Post
- May 2010 - Michael Durbin - Fixing Wall Street’s Autopilot - The New York Times
- May 2010 - Hack Attacks Mounted on Car Control Systems - BBC News
- May 2010 - Jan Beyea - The Smart Electricity Grid and Scientific Research - Science
Ken Pimple, PAIT Project Director
Friday, August 13, 2010
"A high-tech solution to an older-age issue"
This story from yesterday's Marketplace Morning Report describes an alternative to renovating your home to make a welcoming space for your elderly parent. The "med-cottage" (or "medcottage;" it's spelled both ways in the transcript) is a "little prefab house that sits in the backyard. It leases for $2,000 a month. Behind that vinyl exterior there are motion sensors."
The sensors detect when the cottage's inhabitant gets out of bed, uses the bathroom, and more. "All that information feeds realtime to a website you can check like email. An iPod app is in the works."
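To make that pipeline concrete, here is a minimal, hypothetical sketch of how such sensor readings might be collected as timestamped events and summarized for a caregiver-facing page. The event names, the alert rule, and its threshold are invented; the report does not describe how the medcottage actually works.

```python
# A minimal, hypothetical sketch of an elder-monitoring event feed:
# motion-sensor readings become timestamped events that a caregiver
# page could poll. Event names and the alert rule are invented.
from datetime import datetime, timedelta

events = []  # a real system would use a database, not a list

def record(sensor, state, when=None):
    """Store one sensor reading as a timestamped event."""
    events.append({"sensor": sensor, "state": state,
                   "time": when or datetime.now()})

def last_activity():
    """Most recent event, for a dashboard to display."""
    return max(events, key=lambda e: e["time"]) if events else None

def inactivity_alert(threshold=timedelta(hours=8)):
    """Flag a long gap with no recorded movement."""
    latest = last_activity()
    return latest is not None and datetime.now() - latest["time"] > threshold

record("bed", "exit", datetime.now() - timedelta(hours=9))
record("bathroom", "motion", datetime.now() - timedelta(hours=8, minutes=55))
print("last event:", last_activity())
print("inactivity alert:", inactivity_alert())
```

Even in this toy version, every recorded event is a disclosure about an intimate activity, which is where the ethical questions begin.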
A team of researchers at Indiana University is examining ethical issues raised by this kind of high-tech elder monitoring. The project is called Ethical Technology in the Homes of Seniors, or E.T.H.O.S. It would be interesting to know how much effort the designers of the medcottage put into considering the ethical issues raised by their product.
Ken Pimple, PAIT Project Director
Monday, August 9, 2010
"The First Church of Robotics"
Today's New York Times includes an op-ed piece by Jaron Lanier bemoaning what I'd call the metaphysical pretensions of artificial intelligence - including the term itself, how it is used to make technologies seem more impressive than they are, and, most importantly, how the combination changes the way we think about ourselves. As Lanier writes, "by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people."
Lanier compares Ray Kurzweil's idea of "the Singularity" to a religion, observing that "a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity," and wondering whether these tensions would be eased a bit if technologists were less messianic.
I think Lanier's ideas are valid and worth contemplating, but I'll take the general train of thought on a slight detour. One of the objectives of AI research has been to make machines think like people. This has often driven researchers to try to understand how people actually think - how our brain, mind, emotions, and body interact to form thoughts, premises, conclusions, convictions, beliefs, and all the rest; even how we recognize a person's identity from her or his face.
The more I learn about AI and human psychology - and I have learned only a very small amount about either - the more convinced I am that AI research not only mystifies our understanding of human nature (as Lanier recognizes), but also has the potential to clarify it.
Lanier writes:
In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

To me, this is also an example of how computers do not think like human beings, and that trying to make them think like us might be useful heuristically, but isn't really a desirable goal in and of itself. Why spend so much money trying to make more things that think like people when we already have several billion people who are already experts?
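As a toy illustration of the reframing Lanier suggests, a "phrase-based search engine" can be read as ordinary retrieval: index the phrases in a collection of passages and return the passage that best overlaps the question. Nothing below reflects I.B.M.'s actual system; the corpus, bigram matching, and scoring are invented for this sketch.

```python
# A toy, hypothetical "phrase-based search": rank stored passages by
# how many word bigrams they share with the question. Nothing here
# reflects I.B.M.'s actual system; corpus and scoring are invented.
corpus = {
    "doc1": "the eiffel tower is in paris france",
    "doc2": "the leaning tower of pisa is in italy",
}

def phrases(text, n=2):
    """The set of n-word phrases (bigrams by default) in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def answer(question):
    """Return the passage sharing the most phrases with the question."""
    q = phrases(question)
    return max(corpus.items(), key=lambda item: len(q & phrases(item[1])))

print(answer("what tower is in paris"))
```

Described this way, the machinery is plainly search rather than thought, which is just Lanier's point about framing.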
Perhaps we should recognize, and emphasize, that "artificial intelligence" only resembles human intelligence insofar as it can solve some problems only humans have been able to solve heretofore. For the moment, I have yet to be convinced that AI is more than a really sophisticated hand-held calculator. We aren't metaphysically threatened by machines that can do arithmetic thousands of times faster and more accurately than ourselves; why should we be threatened by a handful of machines that seem to be able to hold a semi-coherent conversation with us under very narrow circumstances?
Ken Pimple, PAIT Project Director