Tuesday, December 28, 2010

"Cheaters Find an Adversary in Technology"

This article, published in the New York Times on December 27, 2010, describes Caveon Test Security, a company that detects cheating on standardized tests using "data forensics." The description of the tug-of-war between test designers/givers and test takers (especially would-be cheats) is intriguing on its own, but what does this have to do with pervasive information technology? Consider a case of state-wide testing:
With more than 100,000 students tested, proctors could not watch everyone - not when some teenagers can text with their phones in their pockets.
One of Caveon's clients is the Law School Admission Council. One of the challenges for the LSAT is that students who have recently completed the exam discuss the test online. Caveon "patrols the Internet for leaked questions." (The article doesn't say what it does with what it finds.)
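The article doesn't explain how the "data forensics" works, either, but a classic statistic in test-security work is unusual agreement on wrong answers: two examinees who miss many of the same questions with the same wrong choices deserve a closer look. Here is a minimal sketch of that idea in Python; the names, the toy data, and the fixed threshold are all my invention, and a real analysis would model how much agreement pure chance predicts rather than rely on a cutoff.

    from itertools import combinations

    def identical_wrong_answers(sheet_a, sheet_b, key):
        """Count questions both examinees missed with the same wrong choice."""
        return sum(1 for a, b, k in zip(sheet_a, sheet_b, key)
                   if a == b and a != k)

    def flag_suspicious_pairs(sheets, key, threshold):
        """Return (id, id, count) for pairs whose agreement meets the threshold."""
        return [(id_a, id_b, n)
                for (id_a, a), (id_b, b) in combinations(sheets.items(), 2)
                if (n := identical_wrong_answers(a, b, key)) >= threshold]

    # Toy data: a 16-question answer key and three answer sheets.
    key = "ABCDABCDABCDABCD"
    sheets = {
        "s001": "ABCDABCDABCDABCD",  # perfect score
        "s002": "ABDDACCDABCAABCB",  # four errors
        "s003": "ABDDACCDABCAABCB",  # the same four errors -- flagged below
    }
    print(flag_suspicious_pairs(sheets, key, threshold=4))
    # [('s002', 's003', 4)]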

Two pervasive technologies - texting and the World Wide Web - are cited as tools for cheating. Another technology (or bundle of technologies) is used to detect and prevent cheating. Is this an arms race, or will an equilibrium be achieved? You tell me.

Ken Pimple, PAIT Project Director

Wednesday, December 8, 2010

"Microsoft Introduces Tracking Protection to Its Browser"

An article published in the New York Times on December 7, 2010, reports that Microsoft has announced that the next version of Internet Explorer ("available next year") will include a feature "that would permit users to stop certain Web sites from tracking them."

The announcement comes shortly after the Federal Trade Commission advocated such features (see my earlier posts "Stage Set for Showdown on Online Privacy" and "Update: FTC and online privacy").

Since I made at least one snarky remark about Microsoft on this blog ("New Web Code Draws Concern Over Privacy Risks"), I feel obliged to praise Microsoft for this decision, which I assume has been in the works for some time.

My favorite sentence in the above-mentioned Times article: "Microsoft’s announcement comes at a time when some in the online advertising community fear that a government-mandated do-not-track system could have severe ramifications for their business models."

It's no surprise that online advertisers would be worried about this development, but their business model is not as important as the civil right to privacy.

Ken Pimple, PAIT Project Director

Thursday, December 2, 2010

Update: FTC and online privacy

An earlier post on this blog concerns a potential clash between the United States Federal Trade Commission (FTC) and Commerce Department on regulating online privacy. At the time, the New York Times reported that the FTC was "exploring" a "do not track" policy, similar to the "do not call" lists that are now popular across the United States, allowing many of us to enjoy dinner without interruption from telemarketers.

The "do not track" policy would allow consumers to opt out of being tracked by Web sites.

Yesterday's New York Times reports that the FTC has "advocated" such a plan. Good for the FTC.

Ken Pimple, PAIT Project Director

Wednesday, December 1, 2010

"Context Awareness is Future of Experience-Driven Design"

Francis Harvey brought this article from Outlook Series to my attention. It describes the keynote address delivered at the Intel Developer Forum on September 16, 2010, by Justin Rattner, Intel Vice President, Director of Intel Labs, and Intel Chief Technology Officer and Senior Fellow.

Context awareness covers a wide range of ways that computing devices, including smart phones, can be "aware" of the preferences, needs, and expectations of the device's user. A smart phone "knows" a great deal about its owner/user because it has access to her or his address book, calendar, e-mail, social networking information, pattern of outgoing and incoming telephone calls, real-time physical location, and more. In the near future, Rattner claims (and demonstrates in a short video), our smart phones will give us directions to the nearest restaurant of the kind we like best and can afford, and even suggest the entrée we'd like best.
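To make the restaurant scenario concrete, here is a deliberately toy sketch of the kind of scoring a context-aware recommender might perform, combining location, stated tastes, and budget. Everything in it is invented for illustration; a real system would infer these signals from the calendar, call logs, and location data described above rather than take them as arguments.

    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Restaurant:
        name: str
        cuisine: str
        avg_price: float
        x: float  # toy map coordinates
        y: float

    def recommend(restaurants, user_xy, liked_cuisines, budget):
        """Rank by cuisine match and affordability, breaking ties by nearness."""
        ux, uy = user_xy
        def score(r):
            fits_taste = r.cuisine in liked_cuisines
            fits_budget = r.avg_price <= budget
            return (fits_taste + fits_budget, -hypot(r.x - ux, r.y - uy))
        return sorted(restaurants, key=score, reverse=True)

    places = [
        Restaurant("Thai Garden", "thai", 12.0, 1.0, 2.0),
        Restaurant("Le Cher", "french", 45.0, 0.5, 0.5),
        Restaurant("Noodle Hut", "thai", 9.0, 4.0, 4.0),
    ]
    # Context the phone might supply: where I am, what I like, what I can afford.
    best = recommend(places, (0.0, 0.0), {"thai"}, 15.0)[0]
    print(best.name)  # Thai Garden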

The article gives a nod to security and privacy concerns. I've got to say that if I were in charge of U.S. espionage, I'd be working to get these devices into the hands of all sorts of people. Think of the ways access to this kind of information about a person's habits and past behavior would enhance blackmail, intelligence gathering, kidnapping, and assassination.

For the ordinary citizen, the technology might be useful and even attractive. The biggest selling point seems to be that it will save people time - the time it takes to ask the concierge about good restaurants, for example. But my observation has been that the more time I save, the busier I turn out to be; all that free time gets filled up very fast, and not typically with refreshing and rewarding experiences.

Besides, I don't want my smart phone to morph into a combination backseat driver and nag.

Ken Pimple, PAIT Project Director

Monday, November 15, 2010

Facebook "privacy," smart cars, and personal drones

Four links on three unrelated topics:

1. Privacy on Facebook (not): 

(a) The New York Times reported on October 22 that the capacity of Facebook to allow marketers to connect with very specific groups of people ("say, golf players in Illinois who make more than $150,000 a year and vacation in Hawaii") can inadvertently allow those marketers to learn sensitive personal information that Facebook claims it keeps confidential.

(b) Francis Harvey (to whom my thanks) sent me a link to a November 4 article about Air Force efforts to warn "Facebook users of a new location-based application that may pose a security risk because it publicises users' locations without their specific consent." As usual, the application is on by default, meaning that Facebook users must adjust their privacy settings manually.

2. Better software in cars. Francis Harvey (thanks again!) also alerted me to an October 15 press release from McMaster University announcing a major initiative to develop advanced software for use in automobiles. Cars already have multiple computer chips running thousands of lines of code, and the possibility that errors or malfunctions will cause trouble - including fatal trouble - is growing. New approaches to developing software are needed to address these challenges.

3. Military software for the masses. A breezy article brought to my attention by Don Searing (again, thanks!) entitled "Celebs beware! New Pandora's box of 'personal' drones that could stalk anyone from Brangelina to your own child" touches on potentially serious developments. Small flying drones (measuring about 3 feet from tip to tip of their helicopter-like blades) are being used by U.K. police, who pilot the drones using software developed by the U.S. military. If the past is prologue, we can expect ever-cheaper drones over the next few years, affordable first to corporations, then to small businesses, then to merely well-off individuals, who will be able to use them to spy on - anyone. Presumably it will be rare for inexpensive drones outfitted with weapons to be sold to the general public, but the cunning people who brought us the improvised explosive devices that have killed and maimed so many people in Afghanistan and Iraq must be salivating over the possibilities already.

Ken Pimple, PAIT Project Director

Wednesday, November 10, 2010

"Stage Set for Showdown on Online Privacy"

This New York Times article (November 9, 2010) describes a potential clash between the U.S. Federal Trade Commission (FTC) and Commerce Department over regulating online privacy.

Apparently the FTC's stance is pro-privacy (a.k.a. pro-consumer, pro-individual, pro-people) while the Commerce Department's stance is pro-business.

The FTC is thinking about requiring a "do not track" feature, much like the "do not call" lists that many Americans gratefully started using a few years ago to cut down - nearly eliminate, in my experience - telemarketing calls. The Internet equivalent would allow users to opt out of being tracked, presumably either one Web site at a time or globally. I'd like to see this implemented. There are sites where I appreciate being tracked; Amazon.com, for example, gives me useful suggestions by tracking my preferences over time, and wisely or foolishly I trust Amazon with that information.
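Mechanically, a global opt-out could be as simple as a flag the browser attaches to every request, which cooperating sites agree to honor. No such standard existed when I wrote this, so the header name and the server-side check below are purely illustrative:

    import urllib.request

    # Browser side: attach a (hypothetical) opt-out flag to an outgoing
    # request. The request is constructed here but not actually sent.
    req = urllib.request.Request("http://example.com/",
                                 headers={"DNT": "1"})

    # Server side: a cooperating site checks the flag before logging anything.
    def may_track(headers):
        """Track only visitors who have not opted out."""
        return headers.get("DNT") != "1"

    print(may_track({"DNT": "1"}))  # False: the user opted out
    print(may_track({}))            # True: no preference expressed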

The argument from business and the Commerce Department will be that such protections are too costly and will slow economic expansion. Some would characterize this as a battle between individual rights and commercial greed. I don't attribute greed to corporations; they are intended to make money and are generally well-designed to do so. It would be foolish of them not to argue their point. But I hope privacy wins out.

Ken Pimple, PAIT Project Director

Monday, November 8, 2010

Robot hands, robot guards

Two snippets on robots:
  • An article in ScienceNOW (Oct. 25, 2010) describes a new robotic "hand" that can pour the contents of a glass into a mug and draw a square with a pen. The hand has no fingers. It's a "thin rubber sack filled with coffee grains or small glass spheres." When its flexible surface is pushed onto an object, a pipe in the hand's "wrist" sucks out some air, and voilà! the sack contracts and can lift the object. There's a nice video illustrating its use. This hand is another example of how imitating humans might not be the best way to design robots - mimicking the human hand has not been very successful so far.
  • An article in the New York Times (Nov. 3, 2010) describes the increasing appeal (and decreasing cost) of using toy robots that can be remotely controlled via the Internet to keep an eye on one's home. The robots are relatively inexpensive and typically can move around the house and send sights and sounds to the remote user. The Times article mentions the entertainment value of these robots, but emphasizes their use as a security system. To me, they look more like an insecurity system; our family, at least, would feel more anxious going out of town if we left behind a robot we could use to check on our house. We can't check it now, so we don't think about it. Make it possible to check for trouble and we'll frequently think about trouble. I'll pass.
Ken Pimple, PAIT Project Director

Thursday, October 21, 2010

"Privacy vs. Profits"

In this blog entry (New York Times, October 19, 2010), journalist Robert Wright describes the benefits he hopes to reap from HTML 5, about which I posted an entry on October 15. Specifically, Wright opines that the personal information HTML 5 will soon broadcast to anyone who cares to collect it will allow journalists to earn an honest living on the Internet. With HTML 5, they'll be able to create a detailed profile of their readers, sell targeted advertising, and rake in the dough.

His discussion is more nuanced than my synopsis, of course, but the whole point can be found in his title. In the battle of privacy vs. profits, I suspect profits are likely to win, at least in the United States.

Thanks to Francis Harvey for alerting me to this.

Ken Pimple, PAIT Project Director

Friday, October 15, 2010

"New Web Code Draws Concern Over Privacy Risks"

The "Web code" described in this article (New York Times, October 10, 2010) is HTML 5. Why is it that "improvements" are so often hamstrung by major flaws? Hasn't anyone learned anything from Microsoft?

Ken Pimple, PAIT Project Director

"Aiming to Learn as We Do, A Machine Teaches Itself"

The machine mentioned in this article (New York Times, October 4, 2010) is, of course, a computer, and it is designed to teach itself semantics by trolling the Internet, primed with a few basic linguistic categories. To this non-expert, it looks inventive and promising. I'd like to see someone pursue a similar project to enable a computer to teach itself how to make moral judgments.

One problem with using the Internet is the heavy emphasis on pornography and celebrities. Perhaps there's a way to adjust for that.

Ken Pimple, PAIT Project Director

"Google Cars Drive Themselves, in Traffic"

Similar stories to this one from the New York Times appeared elsewhere. The last I had heard, self-driving cars would require a retrofit of roads. (I don't remember when or where I read that; it might have been 30 years ago in one of my father's issues of Popular Mechanics.) Google's approach seems more feasible, though how feasible that makes it in absolute terms I couldn't say.

The obvious concerns about this technology include safety and reliability, as well as hacking (imagine kidnapping, or killing, someone by taking over her or his car; in fact, it might be an effective way to frame someone).

Perhaps less obvious: Will self-driven cars get better gas mileage? Will Americans put up with them, even if they prove safer than human-driven cars? Will they be the end of the designated driver?

Ken Pimple, PAIT Project Director

Friday, October 8, 2010

Securing Emerging Technologies: Medical Devices, Robots, Cars, and More

Securing Emerging Technologies: 
Medical Devices, Robots, Cars, and More

Tadayoshi (Yoshi) Kohno
Assistant Professor
Department of Computer Science and Engineering (CSE)
University of Washington

Today’s and tomorrow’s emerging technologies have the potential to greatly improve the quality of our lives. Without the appropriate checks and balances, however, these emerging technologies also have the potential to compromise our digital (and physical) security and privacy. A key goal of the University of Washington CSE Computer Security Lab is to help us achieve the best of both worlds: The wonderful promises offered by the new technologies without the associated security and privacy risks. This talk will examine several strands of our research, including our discoveries of security vulnerabilities in emerging technologies ranging from wireless implantable defibrillators to cars, and our development of defenses to mitigate these vulnerabilities.

This lecture is free and open to the public.

Wednesday, October 20, 2010
4:00-5:30 pm
State Room East (IMU, 2nd floor)
Indiana University Bloomington

Tadayoshi Kohno is an Assistant Professor in the Department of Computer Science and Engineering at the University of Washington. His research focuses on computer security and privacy, broadly defined. In fact, he believes that almost every topic in computer science can have an exciting security-related twist. Originally trained in applied and theoretical cryptography, his current research thrusts span from secure cyber-physical systems (including wireless medical devices and automobiles) to private cloud computing. Kohno is the recipient of a National Science Foundation CAREER Award, an Alfred P. Sloan Research Fellowship, an MIT Technology Review TR-35 Young Innovator Award, and multiple best paper awards. He received his PhD in computer science from the University of California at San Diego.

Financial support for this lecture comes from the New Frontiers in the Arts and Humanities program (Office of the Vice Provost for Research) and the Poynter Center’s project on Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology (PAIT), made possible by the National Science Foundation (grant number SES-0848097).

A .PDF of this announcement is available at https://oncourse.iu.edu/access/content/user/pimple/Kohno-Oct20.pdf.

Ken Pimple, PAIT Project Director

Wednesday, October 6, 2010

A Faustian Exchange: What is to be human in the era of Ubiquitous Technology?

A Faustian Exchange: What is to be human in the era of Ubiquitous Technology?
25th Birthday Issue of AI & Society

We would like to invite you to submit a provisional title and abstract of your paper for the special 25th birthday issue of AI & Society. Please confirm by 15 November 2010 whether you will be submitting a paper.

We have had a very enthusiastic response to our discussions and contacts regarding the proposal to publish the special Birthday issue of AI & Society. The theme of the birthday issue, "A Faustian Exchange: What is to be human in the era of Ubiquitous Technology?," has been warmly welcomed. We very much appreciate the many useful suggestions that have been made.

We are seeking articles that review and reflect on developments over the recent past, and do so in a more general manner than the usual theoretical and highly structured papers appearing in the journal. We are hoping the resultant articles can be wide-ranging, visionary, reflective and opinionated in the best sense of the word, challenging and even provocative, and critically journalistic in form. This is not to suggest a dilution of academic rigour, but rather a way of presenting the important ideas in a form that is accessible not only to our current academic readership but also to those whose area of work is less highly specialised.

Timeline
  • 15 November 2010: Confirmation of the submission of papers
  • 15 December 2010: Title and Abstract (approx 500 words)
  • 15 July 2011: Full articles (up to 6000 words)
  • January-March 2012: Review process and submission to the publishers
  • Publication: July/August 2012
  • AI & Society 25th Birthday Conference/Workshop: Cambridge, UK Autumn 2012
Please pass this information to your colleagues and networks who may be interested in this call for papers.

[Thanks to Jason Borenstein for sending this to me - Ken Pimple, PAIT Project Director]

Monday, October 4, 2010

Call for papers: Geographic Information Ethics

This Call for Papers comes from Francis Harvey, University of Minnesota:
After two years of engaging sessions, Dawn Wright and I have moved ahead with a call for papers in a 2011 Geographic Information Ethics session at the AAG conference in Seattle, WA. The general theme is still the breadth of interactions with ethical issues and geographic information. The list of sample topics points to this breadth.
  • case studies, curriculum development, or the pedagogy of teaching GIS ethical issues;
  • issues of privacy, surveillance, inequity, erroneous or inappropriate data concerning geographic technologies;
  • codes of ethics and conduct of professional organizations;
  • GIS professional development;
  • reflections on the changing nature of ethical issues in GIS&T;
  • ethics of data publication and peer review.
This breadth has been advantageous in developing important discussions and creating a forum for work that engages ethical dimensions of geographic information technologies. Following last year's presentations and discussions, we have added the topic of data publication and peer review to the list.

At this point we write to bring this session to your attention and see if you may be interested in presenting. Also, if you could pass this notice about the call to colleagues working on ethics and geographic information, we'd appreciate your help in identifying other participants for the session.

The full call is online at: http://web.me.com/fhap13/AAG_GIS-Ethics_Sessions/GIS_Ethics_Sessions.html

I'm always happy to post relevant announcements.

Ken Pimple, PAIT Project Director

Tuesday, September 21, 2010

"Code Known as Flash Cookies Raises Privacy Concerns"

This article in today's New York Times concerns the perennial problem of online privacy. The cookies used by Adobe's Flash player are different from those with which most of us are familiar, and are not deleted by the methods available in most browsers to manage cookies. The purpose of Flash cookies is also worrisome:
“The core function of the cookie is to link what you do on Web site A to what you do on Web site B,” said Peter Eckersley, a technologist at the Electronic Frontier Foundation. “The Flash cookie makes it harder for people to stop that from happening.”
Many people appreciate the same-site services that ordinary cookies can provide - showing content based on what I have already read on that site, for example. But sharing that information with a different site is another issue altogether. No doubt it can be used to great advantage for consumers, but it is also an order of magnitude more intrusive than ordinary cookies.
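Part of the reason a browser's cookie controls miss Flash cookies is that they live outside the browser entirely, as ".sol" files ("Local Shared Objects") in the Flash player's own directories. Here is a short sketch of where to look for them; the paths are typical locations and may vary by operating system and Flash version:

    from pathlib import Path

    # Typical Flash Local Shared Object directories (may vary by OS/version).
    CANDIDATE_DIRS = [
        Path.home() / ".macromedia/Flash_Player/#SharedObjects",                     # Linux
        Path.home() / "Library/Preferences/Macromedia/Flash Player/#SharedObjects",  # macOS
        Path.home() / "AppData/Roaming/Macromedia/Flash Player/#SharedObjects",      # Windows
    ]

    def list_flash_cookies():
        """Enumerate .sol files -- the cookies browser controls don't delete."""
        found = []
        for base in CANDIDATE_DIRS:
            if base.is_dir():
                found.extend(sorted(base.rglob("*.sol")))
        return found

    for sol in list_flash_cookies():
        print(sol)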

The article touches on several issues that arise across many pervasive technologies. Flash cookies, being unfamiliar and "transparent to the user," are covert. They compile information without our consent or knowledge. And although they can be useful and welcome to users, in the end they are implemented in the pursuit of profit, whether by making our Web experience better (the benign purpose) or by manipulating us - or worse - by exploiting our personal interests.

Giving businesses and organizations who use these technologies the benefit of the doubt, I have to say that this practice is rude at best. Imagine shopping malls studying their security camera footage to analyze which stores individual consumers visited and used the information to send them targeted advertising. Wouldn't that be obviously intrusive? Why can't Internet-based businesses understand that we don't like to be stalked? Why do so many people assume profit automatically justifies a practice? Shouldn't we decide first whether a practice is acceptable at all, and then determine whether it is suitable to be used for profit?

Ken Pimple, PAIT Project Director

Monday, September 13, 2010

"How to Train Your Robot (to Lie)"

In an earlier entry I expressed my discontent with our current terminology about robots - specifically, that we use the word "robot" to cover too many distinct types of artifacts. This article from ScienceNow illustrates another problem with how we talk about robots, somewhat reminiscent of Jaron Lanier's concerns over The First Church of Robotics.

The claim in the article's title is that a robot has been taught how to lie, but it has only learned how to lay a false trail. This can be seen as deception, or misdirection, or perhaps even camouflage - all of which can be seen in non-human animals and even plants - but hardly lying. No doubt the work represents a breakthrough, but is the hyped-up language necessary or productive? Am I the only one who thinks longingly of the days when science news was a bit more sober and restrained?

Ken Pimple, PAIT Project Director

Wednesday, September 8, 2010

Surveillance for preschoolers; when algos take over

Two tidbits today.

(1) An editorial in today's New York Times, Keeping Track of the Kids, expresses a "worry that we are all becoming a little too blasé about our scrutinized lives" in which we accept security cameras, mobile telephones that allow us to be tracked minute-by-minute, and RFID chips on preschool children. The editorial's final sentence: "Though it may seem innocuous to attach a chip to our preschoolers’ clothes, do we really want to raise a generation of kids that are accustomed to being tracked, like cattle or warehouse inventory?"

It's a nicely phrased question. There are, of course, other ways to put it, like, "do we really want to raise a generation of kids who are accustomed to being protected from predation, not to mention simply being lost?"

What, indeed, would be lost - or gained - by this generation's getting accustomed to being tracked? Their elders apparently became so accustomed with very little fanfare.

(2) Colin Allen, to whom my thanks, brought this ABC program, The flash crash (August 29), to my attention. The program is summarized as follows:
A few months ago the US share market plunged 1000 points in a few minutes, and trillions were traded both up and down. What caused it, and can it happen again? Tiny high frequency computer algorithms - or algos - roam the markets, buying and selling in a parallel universe more or less uncontrolled by anyone. Did they go feral, or was it the fat finger of a coked out trader? In September US regulators bring out their findings.
One has to be more interested in stock markets than I am to listen to the entire program, or read the entire transcript, but you can find the good stuff easily enough at the very end - about the last 2 minutes in the audio. That's when Colin Allen himself sums up the underlying problems. Enjoy.

Ken Pimple, PAIT Project Director

Tuesday, September 7, 2010

"The Boss Is Robotic, and Rolling Up Behind You"

This article from the New York Times (September 4) is worth a look, if only for the graphic of 5 similar robots and a couple of useful videos of the robots in action. These robots are essentially video conferencing tools on wheels, remotely operated by one person (whose face typically appears on the robot's screen) from an ordinary desktop computer.

The appeal of this technology in certain settings is obvious: An entrepreneur with offices and staff in two or more cities can meet with his staff daily without having to travel; a physician can assist in a health clinic hundreds of miles away. I don't think these robots will become ubiquitous, though.

The terminology bothers me a bit, though. I wish we had widely used terminology that could distinguish between, for example, factory robots that are bolted in place and perform well-defined and highly patterned movements; mobile robots with some choice-making capacity but very few functions (the Roomba® is a good example); remote-controlled mobile devices like the ones described in this article; and fully mobile machines with a good deal of choice-making capacity. There might be other categories of which I am unaware (or am not bringing to mind just now). I also think it would be useful to distinguish between human-shaped (humanoid) robots, robots so human-like that they could be mistaken for a person (think C-3PO vs. Data), and non-humanoid robots (R2-D2).

If you know of such terminology, please post a comment sharing it with me and the readers of this blog.

Ken Pimple, PAIT Project Director

Monday, September 6, 2010

2nd World Congress on Computer Science and Information Engineering (CSIE 2011)

Call for Papers and Exhibits

17-19 June 2011, Changchun, China

CSIE 2011 intends to be a global forum for researchers and engineers to present and discuss recent advances and new techniques in computer science and information engineering. Topics of interests include, but are not limited to, data mining & data engineering, intelligent systems, software engineering, computer applications, communications & networking, computer hardware, VLSI, & embedded systems, multimedia & signal processing, computer control, robotics, and automation.

All papers in the CSIE 2011 conference proceedings will be indexed in Ei Compendex and ISTP, as well as included in the IEEE Xplore (The previous conference CSIE 2009 has already been indexed in Ei Compendex and included in the IEEE Xplore). IEEE Catalog Number: CFP1160F-PRT. ISBN: 978-1-4244-8361-7.

Changchun is the capital city of Jilin province, situated in the central section of China's northeast region. There are many natural attractions to entertain residents and visitors around Changchun. The grand Changbai Mountain, renowned for its spectacular landscape, charming scenery, glamorous legends, and rich resources and products, has been praised as the first mountain in the northeast and one of China's top-ten famous mountains. Other attractions in or around Changchun include Songhua Lake (Songhuahu), Jingyue Lake (Jingyuetan), Changchun Movie Wonderland, Changchun Puppet Palace (Weihuanggong), Changchun World Sculpture Park, and Changchun World Landscape Park.

Important Dates
  • Paper Submission Deadline: 20 September 2010
  • Review Notification: 15 November 2010
  • Final Paper and Author Registration Deadline: 6 January 2011
Contact Information
If you have any inquiries, please email us at CSIE2011@cust.edu.cn.

"A Strong Password Isn’t the Strongest Security"

This article from the New York Times (September 4, 2010) points out that strong passwords, on their own, are useless against keylogging software (the kind that captures your keystrokes and sends them to a bad guy somewhere who then has your username and password, no matter how strong). Cormac Herley, a Microsoft security expert, is quoted as saying, “Keeping a keylogger off your machine is about a trillion times more important than the strength of any one of your passwords."

Furthermore, requiring strong passwords encourages users to write them down, usually in places where they can easily be found. So much for security.

What I find most interesting about this article, though, is the emphasis on the responsibility of system administrators for security. All the rules for strong passwords shift an unreasonable - and possibly counter-productive - burden onto end users, who might be able to guard against keyloggers, but certainly not as well as sysadmins. As Mr. Herley says, "It is not users who need to be better educated on the risks of various attacks, but the security community. ... Security advice simply offers a bad cost-benefit tradeoff to users."

We're all in this together, to be sure, but let's delegate responsibility appropriately.

Ken Pimple, PAIT Project Director

Friday, September 3, 2010

"Technology Leads More Park Visitors Into Trouble"

This article, published on August 21 in the New York Times, describes some of the ways in which technologies - cell phones, cell phone cameras, satellite location devices - have been used to dangerous or costly effect in U.S. national parks. The stories recounted all strike me as caused by a lack of common sense, or by simple stupidity, abetted by technology. A more generous interpretation is that, at least in some cases, the technology leads some people to take greater risks, confident that they can be saved by dialing 911.

Do such events indict technology, individual boneheads, a society of boneheads, or the nanny state?

Thanks to Don Searing for bringing this article to my attention.

Ken Pimple, PAIT Project Director

Tuesday, August 31, 2010

"Moral responsibility and autonomous media"

In this short article, PAIT participant Bo Brinkman argues that we can't shift the blame for bad results from the use of advanced technologies to the technologies themselves. No matter how smart our robots, bots, software, etc., become, the people who design, manufacture, sell, and use them share moral responsibility.

Ken Pimple, PAIT Project Director

Monday, August 30, 2010

Call for papers: Ethics and Affective Computing

Call for Papers
IEEE Transactions on Affective Computing
Special Issue on Ethics and Affective Computing

The pervasive presence of automated and autonomous systems necessitates the rapid growth of a relatively new area of inquiry called machine ethics. If machines are going to be turned loose on their own to kill and heal, explore and decide, the need for designing them to be moral becomes pressing. This need, in turn, penetrates to the very foundations of ethics as robot designers strive to build systems that comply. Fuzzy intuitions will not do when computational clarity is required. So, machine ethics also asks the discipline of ethics to make itself clear. The truth is that at present we do not know how to make it so. Rule-based approaches are being tried even in light of an acknowledged difficulty to formalize moral behavior, and it is already common to hear that introducing affects into machines may be necessary in order to make machines behave morally. From this perspective, affective computing may be morally required by machine ethics.

On the other hand, building machines with artificial affects might carry with it negative ethical consequences. In order to make humans more willing to accept robots and other automated computational devices, creating them to display emotion will be a help, since if we like them, we will, no doubt, be more willing to welcome them. We might even pay dearly to have them. But do artificial affects deceive? Will they catch us with our defenses down, and do we have to worry about Plato's caveat in the Republic that one of the best ways to be unjust is to appear just? Automated agents that seem like persons might appear congenial, even as any moral regard is ignored, making them dangerous culprits indistinguishable from automated "friends." In this light, machine ethics might demand that we exercise great caution in using affective computing. In radical cases, it might even demand that we not use it at all.

We would seem to have here a quandary. No doubt there are others. The purpose of this volume is to explore the range of ethical issues related to affective computing. Is affective computing necessary for making artificial agents moral? If so, why and how? Where does affective computing require moral caution? In what cases do benefits outweigh the moral risks? Etc.

Invited Authors:
  • Roddy Cowie (Queen's University, Belfast)
  • Luciano Floridi (University of Hertfordshire and University of Oxford)
  • Matthias Scheutz (Tufts University)
Papers must not have been previously published, with the exception that substantial extensions of conference papers can be considered. The authors will be required to follow the Author’s Guide for manuscript submission to the IEEE Transactions on Affective Computing at http://www.computer.org/portal/web/tac/author. Papers are due by March 1st, 2011, and should be submitted electronically at https://mc.manuscriptcentral.com/taffc-cs. Please select the "SI - Ethics 2011" manuscript type upon submission. For further information, please contact guest editor, Anthony Beavers at afbeavers@gmail.com.

Friday, August 20, 2010

Uberveillance and the Social Implications of Microchip Implants

Professor Katina Michael and Dr M.G. Michael, University of Wollongong, Australia, have issued a call for chapters for a book entitled Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies.

Professor and Dr. Michael define Uberveillance as "an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body. These embedded technologies can take the form of traditional pacemakers, radio-frequency identification (RFID) tag and transponder implants, biomems and nanotechnology devices."

From the call:
Submission Procedure
Researchers, practitioners and members of the general public are invited to submit on or before September 15, 2010, a 2 page chapter proposal clearly explaining the mission and concerns of his or her proposed chapter. Authors of accepted proposals will be notified by November 10, 2010 about the status of their proposals and sent chapter guidelines. Full chapters are expected to be submitted by January 30, 2011. All submitted chapters will be reviewed on a double-blind review basis. Contributors may also be requested to serve as reviewers for this project.

* * *

Important Dates
September 15, 2010: Proposal Submission Deadline
November 10, 2010: Notification of Acceptance
January 30, 2011: Full Chapter Submission
March 1, 2011: Review Results Returned
May 1, 2011: Final Chapter Submission

Please see the unusually detailed and helpful call for more details.

Disclosure: I am one of the 29 members of the Editorial Advisory Board.

Ken Pimple, PAIT Project Director

Monday, August 16, 2010

A grab-bag of goodies

Here are a few tidbits of possible interest that I have gathered over the last few months without managing to post them here. If only I had an autonomous agent to help me keep up with things.
Ken Pimple, PAIT Project Director

Friday, August 13, 2010

"A high-tech solution to an older-age issue"

This story from yesterday's Marketplace Morning Report describes an alternative to renovating your home to make a welcoming space for your elderly parent. The "med-cottage" (or "medcottage;" it's spelled both ways in the transcript) is a "little prefab house that sits in the backyard. It leases for $2,000 a month. Behind that vinyl exterior there are motion sensors."

The sensors detect when the cottage's inhabitant gets out of bed, uses the bathroom, and more. "All that information feeds realtime to a website you can check like email. An iPod app is in the works."

A team of researchers at Indiana University is examining ethical issues raised by this kind of high-tech elder monitoring. The project is called Ethical Technology in the Homes of Seniors, or E.T.H.O.S. It would be interesting to know how much effort the designers of the medcottage put into considering the ethical issues raised by their product.

Ken Pimple, PAIT Project Director

Monday, August 9, 2010

"The First Church of Robotics"

Today's New York Times includes an op-ed piece by Jaron Lanier bemoaning what I'd call the metaphysical pretensions of artificial intelligence - including the term itself, how it is used to make technologies seem more impressive than they are, and, most importantly, how the combination changes the way we think about ourselves. As Lanier writes, "by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people."

Lanier compares Ray Kurzweil's idea of "the Singularity" to a religion, observing that "a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity," and wondering whether these tensions would be eased a bit if technologists were less messianic.

I think Lanier's ideas are valid and worth contemplating, but I'll take the general train of thought on a slight detour. One of the objectives of AI research has been to make machines think like people. This has often driven researchers to try to understand how people actually think - how our brain, mind, emotions, and body interact to form thoughts, premises, conclusions, convictions, beliefs, and all the rest; even how we recognize a person's identity from her or his face.

The more I learn about AI and human psychology - and I have learned only a very small amount about either - the more convinced I am that AI research not only mystifies our understanding of human nature (as Lanier recognizes), but also has the potential to clarify it.

Lanier writes:
In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.
To me, this is also an example of how computers do not think like human beings; trying to make them think like us might be useful heuristically, but it isn't really a desirable goal in and of itself. Why spend so much money trying to make more things that think like people when we already have several billion people who are experts?

Perhaps we should recognize, and emphasize, that "artificial intelligence" only resembles human intelligence insofar as it can solve some problems that only humans have been able to solve heretofore. For the moment, I have yet to be convinced that AI is more than a really sophisticated hand-held calculator. We aren't metaphysically threatened by machines that can do arithmetic thousands of times faster and more accurately than we can; why should we be threatened by a handful of machines that seem to be able to hold a semi-coherent conversation with us under very narrow circumstances?

Ken Pimple, PAIT Project Director

Monday, June 28, 2010

"Computing Ethics: Work Life in the Robotic Age"

In this column, published in the July 2010 issue of Communications of the ACM, Jason Borenstein warns of the economic displacement that advances in robotics might entail and urges the robotics community to "be diligent in dealing with emerging ethical issues."

Ken Pimple, PAIT Project Director

Friday, June 25, 2010

"Computers Learn to Listen, and Some Talk Back"

One occupational hazard of being an ethicist is a tendency to dwell on the unethical, dangerous, and otherwise undesirable features of any given phenomenon. This story published on the New York Times Web site reminds me why advances in robotics, artificial intelligence, and computing are exciting as well as (sometimes) scary. I found the accompanying time line, "Building Smarter Machines," ranging from 1936 to 2009, particularly compelling. Both are a part of the "Smarter Than You Think" series.

Ken Pimple, PAIT Project Director

Wednesday, June 16, 2010

2011 Conference of the Society for Philosophy and Technology: Technology and Security

From the conference Web site:
The University of North Texas is proud to host the 17th international biennial conference of the Society for Philosophy and Technology. SPT is the leading international organization devoted to the philosophical examination of technology. The 2011 conference theme is Technology and Security. We encourage contributions that examine the role of technology in fostering and sustaining security as well as creating or exacerbating insecurities. The theme of security suggests a wide range of issues such as national security, social security (poverty, age, disability), cyber security, food security, environmental security, energy security, etc. SPT 2011 will, of course, welcome contributions on any philosophical dimension of technology -- from the perspective of any academic discipline as well as perspectives outside of the academy.
Thanks to Colin Allen for bringing this to my attention.

Ken Pimple, PAIT Project Director

Tuesday, June 15, 2010

"Skipping Class? Sensors Now Take the Roll"

An article in the May 7, 2010 issue of The Chronicle of Higher Education (page A11) tells us that Northern Arizona University "is installing an electronic system that measures student attendance."

The system will be installed "using $75,000 in federal stimulus money," and will "detect the ID cards students are carrying as they enter large classrooms."

The opinions of exactly three people are mentioned in the article: The Vice Provost for Academic Affairs, who favors the initiative, and two students who do not. The students, one of whom created a Facebook group resisting proximity cards, express the opinion that class attendance is a matter of free choice and individual responsibility.

I suspect that some students are in favor of the initiative, too - namely those who responsibly attend class and resent slackers who get a free pass.

Ken Pimple, PAIT Project Director

Monday, May 17, 2010

Google's accidental snooping

On May 14, the New York Times, the Huffington Post, and no doubt other sources, reported that Google's Street View cars had unintentionally collected snippets of information from unsecured WiFi routers. These two sources seem to be quoting from Google's own blog post on the topic. As I write, Google's post lists some 48 links in several languages back to the post, including one entitled "We No Longer Trust Google." (But they do trust Google enough to include a button allowing readers to add the post's URL to Google Bookmarks.)

Google has explained how it happened, outlined steps it is taking to dispose of the inadvertently collected data and make sure this doesn't happen again, and apologized: "We are profoundly sorry for this error and are determined to learn all the lessons we can from our mistake."

I am not well informed about Google's other misdeeds, real or imagined, and I am not qualified to evaluate the ramifications of this incident, but standing on its own, it does not seem to me to carry the hallmarks of malicious activity. It is always a matter of alarm when the powerful make mistakes, though, because even innocent mistakes can have serious consequences. Let's hear it for vigilance.

But how many people do you suppose added security to their WiFi routers when they learned of this?

My thanks to Colin Allen for providing me with the links to the Huffington Post and Google blog posts.

Ken Pimple, PAIT Project Director

Tuesday, May 11, 2010

Call for Papers: Ethical and social aspects of mobiles/ubiquitous computing

The Nordic Journal of Applied Ethics is seeking papers for a thematic issue on ethical and social aspects of mobiles/ubiquitous computing. I have this announcement from Colin Allen, to whom my thanks.

Guest editors: Charles Ess, May Thorseth and Johnny Hartz Søraker

Twenty-five years ago, William Gibson presented a futuristic account of “cyberspace”: this space contained a complete virtual world so rich and complex as to be capable of replacing a real world and embodied existence. What was once clearly science fiction now becomes ever more the reality of our lives as increasingly intertwined with the multiple interactions made possible by computer-based communication networks. We invite papers that help us explore the various ethical and social dimensions of the contemporary world of ubiquitous computing. We are interested in the transformative powers of various technologies like mobiles, computer games and social networking services. Possible topics include: changing senses of selves, social interactions, privacy, intellectual property; the blurring of borderlines of virtual and real social interaction, online and offline presence and interactions among humans. Papers should include examples of contemporary technologies, although speculative thought experiments are also encouraged.

Submit your paper to: redaktor@etikkipraksis.org
Deadline: July 1, 2010

Thursday, May 6, 2010

Geographic Information Ethics and GIScience

The 2010 meeting of the Association of American Geographers included a session on "Geographic Information Ethics and GIScience." The full results from the session can be found at http://dusk.geo.orst.edu/aag_ethics10.html.

Thanks to Dawn ("Deepsea Dawn") Wright, Department of Geosciences, Oregon State University, for sharing this information.

Ken Pimple, PAIT Project Director

Thursday, March 25, 2010

PAIT Workshop - A success

I'm back after a long hiatus. The PAIT workshop ended three weeks ago today and I'm still trying to catch up on the many tasks that I let languish as I prepared for the workshop. I'm also just starting on the next phase of the PAIT project; more on that later.

For now I'll just say that the workshop went well. We had 36 participants who brought a wide variety of expertise and interests to the workshop. We learned from each other and made connections.

In the final session we asked participants to write what they plan to do next to further the cause. Although only 26 people (of perhaps 30 remaining at the very end) submitted their plans, I count 32 separate courses of action. The plans could be categorized differently, of course, but this is heartening. Some plans are general, others more concrete and specific. Nine of them are obviously shared by two or more people, and I know for certain of several that will be team projects.

I will be trying to update this blog fairly regularly, but my workload is pretty heavy until mid-June. Please feel free to ask questions or post comments.

Ken Pimple, PAIT Project Director