Friday, December 23, 2011

"Using Google’s Data to Sell Thermometers to Mothers"

I have been known to rant about advertising, which is intended to induce people to buy products they would not otherwise buy. In other words, advertising strives to manipulate behavior; it is (or wants to be) a kind of mind control. Perfect advertising would be perfect mind control. For this reason I deplore any advances in the art or science of advertising.

We learn in this New York Times article (Andrew Adam Newman, Dec. 22, 2011) of an advertising campaign for children's thermometers. The ads appear in "popular apps like Pandora," but only on devices used by mothers who live in areas experiencing a high rate of flu and who live "within two miles of retailers that carry the thermometer."
"Flu levels in your area are high," says the banner ad within an app. "Be prepared with [product name]."
This breakthrough in advertising is made possible by Google Flu Trends, a predictive model for flu outbreaks using Google's massive database of Internet searches. The model has "a reporting lag of only about a day, outdoing C.D.C. flu reports, which typically are published a week or two after breakouts."
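
For the technically curious: the published description of Google Flu Trends (Ginsberg et al., Nature, 2009) fits a simple linear model relating the log-odds of flu-related search activity to the log-odds of doctor visits for influenza-like illness (ILI). A toy sketch of the idea in Python, with made-up numbers standing in for the real query and C.D.C. data:

    import numpy as np

    def logit(p):
        """Log-odds transform used in the Google Flu Trends model."""
        return np.log(p / (1 - p))

    # Hypothetical weekly data: the fraction of searches that are
    # flu-related, and the fraction of doctor visits that are for ILI.
    query_share = np.array([0.002, 0.004, 0.008, 0.012, 0.009, 0.005])
    ili_share = np.array([0.010, 0.018, 0.034, 0.048, 0.038, 0.022])

    # Fit logit(ILI) = b0 + b1 * logit(query share) by least squares.
    b1, b0 = np.polyfit(logit(query_share), logit(ili_share), 1)

    # Estimate this week's ILI rate from today's searches - available
    # immediately, rather than a week or two after the fact.
    pred = b0 + b1 * logit(0.011)
    print("Estimated ILI rate: %.1f%%" % (100 / (1 + np.exp(-pred))))

The model knows nothing about medicine; it simply exploits the fact that worried people search before (or instead of) seeing a doctor.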

Arguably, the thermometer advertisers are performing a public service. I know from experience that it can be hard to take the temperature of a baby or toddler, and easier methods are welcome. It's also important to know whether your child has a fever and, if so, how severe it is. Getting the word out could be useful to mothers and even life-saving for children.

So I don't really object to this campaign. But I do deplore the advance in mind control.

Ken Pimple, PAIT Project Director

Monday, December 12, 2011

"One Million Apps, and Counting"

From "One Million Apps, and Counting" by Shelly Freierman (New York Times, December 11, 2011):
The pace of new app development dwarfs the release of other kinds of media. “Every week about 100 movies get released worldwide, along with about 250 books,” said Anindya Datta, the founder and chairman of Mobilewalla which helps users navigate the mobile app market. “That compares to the release of around 15,000 apps per week.”
That's a lot of apps. If 0.1% of them were defective in a way that would compromise a user's privacy or security, that would be just 15 dangerous apps per week, or about 1,000 so far. Good odds, or bad?
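
The back-of-the-envelope arithmetic, spelled out (the 0.1% defect rate is purely my own hypothetical):

    apps_per_week = 15000    # new apps per week, per Mobilewalla
    total_apps = 1000000     # apps released so far
    defect_rate = 0.001      # hypothetical: 0.1% compromise privacy or security

    print(apps_per_week * defect_rate)  # 15 dangerous apps per week
    print(total_apps * defect_rate)     # about 1,000 dangerous apps so far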

Ken Pimple, PAIT Project Director

Monday, October 24, 2011

"A Hearing Aid That Cuts Out All the Clatter"

Here's a rarity: A story with good news and, as far as my imagination can stretch, no downside ("A Hearing Aid that Cuts Out All the Clatter," by John Tierney, New York Times, October 23, 2011).

A relatively inexpensive "hearing loop" - "a thin strand of copper wire radiating electromagnetic signals that can be picked up by a tiny receiver already built into most hearing aids and cochlear implants" - installed in the floor around the edges of a room can transmit the signal from a microphone directly to the receiver, called a telecoil, or t-coil.

Hearing aids work best in relatively quiet places, where there are few sources of noise. In a subway or other crowded, busy place, hearing aids amplify all of the noise indiscriminately, creating a true cacophony from which it is difficult to distinguish the sounds that matter. The hearing loop / telecoil combination solves that problem, at least at events where microphones are used.

The technology "has been widely adopted in Northern Europe" and is catching on in the U.S.

Could it have a downside? I suppose so, but I don't see it.

Ken Pimple, PAIT Project Director

Wednesday, October 19, 2011

"Stuxnet Computer Worm’s Creators May Be Active Again"

John Markoff of the New York Times reports that the Stuxnet Computer Worm’s Creators May Be Active Again (October 18, 2011). Can anyone tell me what's more scary than Stuxnet?

Maybe I don't want to know.

Ken Pimple, PAIT Project Director

Monday, October 17, 2011

"Killer apps" and "big data"

Several years ago, I read an article - possibly a book review, probably in Science - that outlined the science underlying certain aspects of ancient and medieval warfare, including the mathematics of fortifications and catapults. Science and engineering have always been servants of war, for better and for worse, steadily producing more powerful and more accurate ways to kill and maim people, destroy property, and wreak havoc. My knowledge of military history is incomplete at best, and my knowledge of the history of diplomacy is probably even weaker, but I venture to guess that the "science" of negotiation and diplomacy - war by peaceful means - has not kept pace with the science of war, perhaps because it is so much easier to make better weapons than it is to make better men.*

This rumination was precipitated by two recent articles: A comment by P.W. Singer ("Military robotics and ethics: A world of killer apps," Nature 477:399-401, September 22, 2011) and a news article by John Markoff ("Government aims to build a 'data eye in the sky'," New York Times, October 10, 2011).

Singer opens his piece by commenting on the unexpected consequences of the Manhattan Project, which "opened up entirely new areas of physics, revolutionized the energy industry and transformed world politics."
What is different today is the speed with which our technology can outpace our ethical and policy responses to it. Astounding advances grab the headlines so frequently that the public has become numb to their significance - whether it is robotic planes, directed-energy weapons such as high-energy lasers, or 'electric skin', tiny sensors that are applied to the body like tattoos.

We are "giants" when it comes to technology, but "ethical infants" when it comes to understanding its consequences, as US Army general Omar Bradley remarked in 1948. Bradley was referring to nuclear research, but as the pace of technologic change takes off, that gulf - between our sophisticated inventions and our crude grasp of the consequences - continues to widen. We need to start bridging it.
In light of this perspective, we should all be alarmed by news that both DARPA and IARPA (U.S. agencies that support cutting-edge research; the acronyms stand for Defense/Intelligence Advanced Research Projects Agency/Activity) are interested in using social science techniques to "mine the vast resources of the Internet" with automated systems that will provide a "data eye in the sky" to follow and, they hope, predict "political and economic events" as well as "pandemics and other types of widespread contagion."

There's not much point in demanding the cessation of such initiatives; they will be pursued by someone. There might be a chance to direct and control them, however. If we know how.

Ken Pimple, PAIT Project Director

*I use the word "men" in the old-fashioned sense of "human beings" because that usage, in spite of its sexist connotations, has a certain power and dignity, at least to my ear. Besides, until recently, war was pretty much a monopoly held by the males of the species, so I think we deserve the blame.

Friday, September 30, 2011

"Moods on Twitter Follow Biological Rhythms, Study Finds"

I doubt that there are any surprises in this story from the New York Times. It's only worth mentioning insofar as it portends future, more sophisticated data mining from sites like Facebook and Twitter.

Ken Pimple, PAIT Project Director

Thursday, September 29, 2011

"Secret memo reveals which telecoms store your data the longest"

Francis Harvey, to whom my thanks, brought this item to my attention. An article by David Kravets of wired.com describes "a newly released Justice Department internal memo that for the first time reveals the data retention policies of America’s largest telecoms." The article includes a link to the one-page memo. Apparently AT&T has the biggest appetite for keeping track of its users' movements:
The biggest difference in retention surrounds so-called cell-site data. That is information detailing a phone’s movement history via its connections to mobile phone towers while it's traveling.

Verizon keeps that data on a one-year rolling basis; T-Mobile for “a year or more;” Sprint up to two years, and AT&T indefinitely, from July 2008.
Also of interest: Verizon keeps "text message content" for 3-5 days, but T-Mobile, AT&T, and Sprint don't keep it at all.

This information could be useful to policy makers, as the article recognizes.
The document release comes two months before the Supreme Court hears a case testing the government’s argument that it may use GPS devices to monitor a suspect’s every movement without a warrant. And the disclosure comes a month ahead of the 25th anniversary of the Electronic Privacy Communications Act, an outdated law that the government has invoked to obtain, without a warrant, the data the Justice Department document describes.
Stay tuned for future developments.

Ken Pimple, PAIT Project Director

Friday, September 16, 2011

"New emotion detector can see when we're lying"

BBC News reports that a research team from universities in England and Wales has developed a new computerized camera system that "successfully discriminates between truth and lies in about two-thirds of cases" when tested with volunteers (reported by Hamish Pritchard, September 13, 2011). The team spokesman, Hassan Ugail of Bradford University, is quoted as speculating that "In a real, high-stress situation, we might get an even higher success rate," even up to 90%, which is reportedly "similar to the performance of the polygraph."

The beauty of such a system for detecting terrorists at airports is obvious - just scan everyone and double-check anyone who seems to be lying. Employers will also be excited at the prospect of using it in job interviews. Private investigators will invest in a portable unit that they can hide in a potted plant at a restaurant where a woman can ask her husband, point blank, whether he's fooling around. The applications are endless. 

The mischief that such a device could create is nearly endless. The last I heard, the polygraph was widely considered ineffective and of dubious worth in criminal cases. I am also given to believe that polygraphs and fingerprints have never been adequately tested for reliability, so if this system is given more rigorous screening, it might prove to be better than I expect it will be. No matter how (in)accurate it turns out to be, people tend to be so credulous about lie-detecting machines that it will probably be taken as infallible.
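
One more reason for caution, which the article doesn't spell out: when the thing being screened for is rare, even an accurate detector flags mostly innocent people. A quick illustration with invented numbers (the 90% figure is the article's optimistic one; the rest are my assumptions):

    # Hypothetical airport screening with the article's optimistic figures:
    travelers = 1000000
    liars = 100                       # assume genuine bad actors are rare
    sensitivity = 0.90                # chance a liar is flagged
    specificity = 0.90                # chance an honest traveler is cleared

    true_alarms = liars * sensitivity                       # 90
    false_alarms = (travelers - liars) * (1 - specificity)  # 99,990

    flagged = true_alarms + false_alarms
    print("Travelers flagged: %.0f" % flagged)              # 100,080
    print("Odds a flagged traveler is lying: %.2f%%"
          % (100 * true_alarms / flagged))                  # about 0.09%

In other words, more than 99.9% of the people the machine "catches" would be innocent - a sobering number for anyone inclined to treat the machine as infallible.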

The kicker comes in the last paragraph: Like the polygraph, the new system detects "emotions, such as distress, fear or distrust, and not the act of lying itself. Fear can sometimes be the fear of not being believed rather than the fear of being caught." Or the fear of flying, or of being waterboarded.

Thanks to Colin Allen for drawing my attention to this.

Ken Pimple, PAIT Project Director

Monday, September 12, 2011

Mind-controlled robots, the Supreme Court on GPS, Wii without a wand, and computers that write

A small avalanche of PAIT-related articles buried me this weekend; let me see whether I can dig my way out.
  • Disabled Patients Mind-Meld With Robots by Sara Reardon (ScienceNOW, September 6, 2011) - Using Skype and wearing "a cap of tiny electroencephalogram (EEG) electrodes," two people "whose lower bodies were paralyzed and who had been bed bound for 6 or 7 years" controlled the movements of a modified commercial robot - Robotino - 100 kilometers away. They used only their brain waves - no moving eyes, no twitching fingers. The paralyzed subjects had been trained for 1 hour a week for 6 weeks. The system had been tested earlier with non-paralyzed people, and the paralyzed subjects "performed just as well as the healthy subjects." Think of what this would mean to paralyzed people and their friends and families - it would be a miracle.

    My enthusiasm is, of course, always tempered by caution. The Hollywood version would have someone hack the system and take control of the robot to frame a paralyzed person for murder. My real concern, though, is accessibility. There's no mention in the article of how much this rig would cost, and the manufacturer of Robotino, Festo Didactic, lists all of the prices associated with Robotino as "on request." I fear this is an instance of "if you have to ask the price, you can't afford it."

  • Court Case Asks if ‘Big Brother’ Is Spelled GPS by Adam Liptak (New York Times, September 10, 2011) - At least three federal judges have compared the use of global positioning system (GPS) devices by police to George Orwell's novel, 1984. In November, the Supreme Court "will address a question that has divided the lower courts: Do the police need a warrant to attach a GPS device to a suspect’s car and track its movements for weeks at a time?"

    The answer seems obvious to me (hint: it begins with a "y"), but with the current court I'm not making any bets.

  • Remote Control, With a Wave of a Hand by Anne Eisenberg (New York Times, September 10, 2011) - "Scientists at Microsoft Research and the University of Washington have come up with a new system that uses the human body as an antenna. The technology could one day be used to turn on lights, buy a ticket at a train station kiosk, or interact with a world of other computer applications. And no elaborate instruments would be required." No need for a Wii wand or the Kinect's cameras that track motion.

    Nothing is said about whether this technology could be used to identify and track specific individuals (is your repertoire of everyday gestures as distinctive as your face or fingerprints?).

  • In Case You Wondered, a Real Human Wrote This Column, allegedly by Steve Lohr (New York Times, September 10, 2011) - I had heard some time ago about the effort to make computers write newspaper-style sports articles based solely on the statistics of the game. That such writing would soon be indistinguishable from prose written by a human sports writer I did not doubt. The time seems to be near.

    Somehow I can't get excited by this one.
Ken Pimple, PAIT Project Director

Friday, August 26, 2011

"To Catch a Quake"

Science, the flagship journal of the American Association for the Advancement of Science (AAAS), has a weekly roundup of some of the most interesting recent science publications called "Editors' Choice." In this week's issue (v. 333, n. 6046, Aug. 26, 2011), one of the seven featured publications is described in a paragraph entitled "To Catch a Quake" by Nicholas S. Wigginton (p. 1072). If you or your institution doesn't have a subscription to Science, the link probably won't work.

Wigginton's synopsis of the article describes the Quake-Catcher Network, "a volunteer-based seismic network that employs personal computers as low-cost seismic stations by sending seismic data collected with a small USB accelerometer through the user's Internet connection." After Chile's huge earthquake in 2010, "volunteers rapidly installed nearly 100 accelerometers within weeks in and around the mainshock [sic] area."

The study showed that this network was able to collect aftershock data accurately. Such networks could be deployed inexpensively in high-risk areas to provide first responders real-time information on the areas most likely to need help.
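
Wigginton's synopsis doesn't describe the detection algorithm, but triggers in seismic networks are commonly built on a short-term-average/long-term-average (STA/LTA) ratio: flag the moment recent shaking jumps well above the background level. A toy sketch of that idea in Python (my illustration, not the Quake-Catcher Network's actual code):

    from collections import deque

    def sta_lta_triggers(samples, sta_len=50, lta_len=1000, threshold=4.0):
        """Yield indices where short-term average energy jumps well above
        the long-term background - a crude shaking trigger."""
        sta_win = deque(maxlen=sta_len)
        lta_win = deque(maxlen=lta_len)
        for i, accel in enumerate(samples):
            energy = accel * accel
            sta_win.append(energy)
            lta_win.append(energy)
            if len(lta_win) == lta_len:  # wait for a full background window
                sta = sum(sta_win) / sta_len
                lta = sum(lta_win) / lta_len
                if lta > 0 and sta / lta > threshold:
                    yield i

A real station would time-stamp each trigger and report it to a central server, which would declare an event only when many stations trigger at nearly the same moment - a laptop knocked off a desk shouldn't count as an aftershock.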

The citation of the original article, by "Chung et al.," provided by Science is "Seismol. Res. Lett. 82, 526 (2011)." (The cryptic, telegraphic style of citation is typical of the sciences.)

Don't let it be said that I only share bad news on this blog.

Ken Pimple, PAIT Project Director

Thursday, August 25, 2011

The ubiquity of surveillance

Arlo and Janis, one of my favorite newspaper comic strips, featured a PAIT-related subject yesterday. The strip is subtle and funny, and the subject is scary or annoying, depending on your point of view. To me it's both. Read the strip and let me know what you think (use the comment feature below).

Ken Pimple, PAIT Project Director

Monday, August 22, 2011

In the news: Privacy, security, hacking

Six tidbits in the news - some a bit late in being posted, others quite recent - in three categories:
  1. Privacy a: The Week in Privacy (Just Between You and Me) Peter Catapano (New York Times, June 17, 2011) reviews "the theory of the insidious plot to flood the minds and bodies of the American public with ever-more-mesmerizing and shinier technological gadgets and distractions so that those who already mostly control the world can for their own benefit further monitor and control the behaviors of the powerless masses, and that said powerless masses will be too busy operating or figuring out how to operate their new personal devices to even know what happened."
  2. Privacy b: Just Give Me the Right to Be Forgotten Natasha Singer (New York Times, August 20, 2011) wishes that people in the United States had something like "the data protection directive of the European Union," under which "people who have contracted with a company generally have a right to withdraw their permission for it to keep their personal data. Under this 'right to be forgotten,' Europeans who terminate frequent-flier memberships, for example, can demand that airlines delete their flight and mileage records."
  3. Security: Federal Push for ‘Cloud’ Technology Faces Skepticism Sean Collins Walsh (New York Times, August 21, 2011) reports on security concerns raised in the light of enthusiasm in some corners for using cloud hosts for some Federal agencies. The "selling points" include "lower cost [and] greater flexibility, because agencies can change the size of a project without having to add or subtract from their computing infrastructure." The unpromising history (so far) of security in Internet-linked computing has some people worried.
  4. Hackers a: Deploying New Tools to Stop the Hackers Christopher Drew and Verne G. Kopytoff (New York Times, June 17, 2011) describe some of the threats to the security of Internet-accessible computers, as well as some new approaches to fighting back.
  5. Hackers b: Web Site Ranks Hacks and Bestows Bragging Rights Riva Richmond (New York Times, August 21, 2011) reports on an "upstart" Web site which "offers a way to separate the skilled [hackers] from the so-called script kiddies by verifying hacks using codes that participants must plant somewhere on sites they have compromised."
  6. Hackers c: Master Hacker Kevin Mitnick Shares His 'Addiction' "Famed hacker" Kevin Mitnick is interviewed (All Things Considered, August 21, 2011) about his new book, Ghost in the Wires: My Adventures as the World's Most Wanted Hacker.
Arms races are alive and well.
Ken Pimple, PAIT Project Director

Friday, August 12, 2011

Skintight monitoring

A research article in the August 12, 2011 issue of Science, "Epidermal Electronics," describes a bandage-like "electronic skin" (as it's called in a commentary, "An Electronic Second Skin," in the same issue). According to the abstract of the research article, the material can incorporate "electrophysiological, temperature, and strain sensors, as well as transistors, light-emitting diodes, photodetectors, radio frequency inductors, capacitors, oscillators, and rectifying diodes."

The potential uses in medical applications alone are impressive. Zhenqiang Ma, author of the commentary, describes one current technology that this material may replace one day: "a patient who may have heart disease is usually required to wear a bulky monitor for a prolonged period (typically a month) in order to capture the abnormal yet rare cardiac events." Skintight monitors would eliminate the bulk and weight and have other benefits.

In addition to "physiological status monitoring," the authors of the research article say that the material could be used for "wound measurement/treatment, biological/chemical sensing, human-machine interfaces, covert communications, and others."

Naturally it's the last two examples that catch my attention. Human-machine interfaces? A cool and no doubt really useful application, but vaguely scary, too. And of course covert communications always seem nefarious.

Ken Pimple, PAIT Project Director

Friday, July 29, 2011

The allure of robots

It's been a long time since my last post, and it might be a long time before my next, due to time and deadline crunches. Just to let you know I'm still around, I'm sharing a link to a PAIT-related comic of PartiallyClips. My first thought on reading it: "One danger of advanced technology is that people expect too much from it."

Ken Pimple, PAIT Project Director

Monday, June 13, 2011

More on personalization; loving our devices; censor-proof Internet

Three interesting articles, two that have languished for weeks awaiting my attention, and one more recent.
  • The Trouble With the Echo Chamber Online by Natasha Singer (New York Times, May 28, 2011) draws on Eli Pariser's recent book, The Filter Bubble: What the Internet is Hiding From You (see my earlier post) and an interview with Pariser, among other sources, to explore the downside of personalized Web searches and other online experiences: Increasing isolation.
  • Liking Is for Cowards. Go for What Hurts. (New York Times, May 28, 2011) is an op-ed piece by Jonathan Franzen, drawn from his May 21 commencement address at Kenyon College, in which Franzen contrasts "the narcissistic tendencies of technology and the problem of actual love." It's a better than average commencement address, but, to my taste, the profundity mandate implicit in the genre gives the talk a sophomoric tone.
  • U.S. Underwrites Internet Detour Around Censors by James Glanz and John Markoff (New York Times, June 12, 2011) describes a "global effort" to develop technologies that will be able to do an end-run around Internet censorship, such as an Internet server that can fit in a suitcase. The goal is to empower groups struggling against "repressive governments."

    It's one of those ideas that give me a momentary thrill followed by a longer-lasting chill. I'm all for providing non-violent support to dissident groups, but it seems likely that these technologies could also be used for nefarious purposes. Once a technology is deployed, it will be imitated and re-deployed, and not necessarily for democracy-loving purposes. Would it be a boon for terrorists? For rogue governments?

    The only answer I have is that if the U.S. can develop these technologies with good intentions, others with bad intentions could develop them, too. Perhaps it's best to be ahead of the curve.

Ken Pimple, PAIT Project Director

Monday, May 23, 2011

"When the Internet Thinks It Knows You"

This opinion piece from the New York Times (May 22, 2011) was written by Eli Pariser, a founder and current president of the board of MoveOn.org. Pariser expresses concern about the "Internet giants - Google, Facebook, Yahoo and Microsoft -" who are so good at mining our browsing habits to customize advertising to the reader.

It isn't the advertising practices of these giants that bother Pariser; it's the search filtering. To Pariser, some of the democratization that the Internet has fostered is at risk.
[W]hen personalization affects not just what you buy but how you think, different issues arise. Democracy depends on the citizen’s ability to engage with multiple viewpoints; the Internet limits such engagement when it offers up only information that reflects your already established point of view. While it’s sometimes convenient to see only what you want to see, it’s critical at other times that you see things that you don’t.
Insofar as this is a threat (and I think it is), I would say that Facebook has the potential to do the most harm. Doesn't it seem likely that the more time a person spends living in Facebook-land, talking only to chosen friends, the more her or his worldview will narrow?

Pariser believes that companies that use this kind of technology should "give us control over what we see - making it clear when they are personalizing, and allowing us to shape and adjust our own filters."

Will they heed his call? I doubt it. Would it do any good if they did? Not much, I'd guess. Most people wouldn't take the time to shape their own filters. It's so easy and pleasant to let someone else filter for me and categorize my behaviors so I don't have to - it's like Shangri-La.

But I'd use the "search filter off" option sometimes.

Ken Pimple, PAIT Project Director

Monday, May 16, 2011

"Why Privacy Matters Even if You Have 'Nothing to Hide'"

The May 15, 2011, issue of The Chronicle of Higher Education includes an excerpt from Daniel J. Solove's new book, Nothing to Hide: The False Tradeoff Between Privacy and Security (Yale University Press), tackling a common response to governmental gathering of personal information:
"I've got nothing to hide," they declare. "Only if you're doing something wrong should you worry, and then you don't deserve to keep it private."
He points out that even people who have nothing to hide - because they have not committed any crimes, or done anything they are ashamed of - still would not care to have all of their private information made public. The nothing-to-hide argument is based on "the underlying assumption that privacy is about hiding bad things." Hiding bad things is one aspect of privacy, but only one. It's also about the ability to have a life that is not entirely on public view.

Solove identifies several harms that can arise from invasions of privacy.
  • When not-particularly-revealing data from multiple sources are combined, the aggregation can reveal more than the bits reveal on their own.  
  • Exclusion is characteristic of much data gathering; individuals are excluded when they are "prevented from having knowledge about how information about them is being used, and when they are barred from accessing and correcting errors in that data."
  • Related to exclusion is secondary use (sometimes called "re-purposing"), in which data gathered with one object in mind are used for another purpose. In the context of government surveillance, secondary use "can paint a distorted picture, especially since records are reductive—they often capture information in a standardized format with many details omitted."
Almost everything he says in this short, readable, and useful essay can also be applied to commercial data collection. I'm guessing his book is a good one.

Ken Pimple, PAIT Project Director

Sunday, May 8, 2011

"Now, to Find a Parking Spot, Drivers Look on Their Phones"

Those lucky people in San Francisco now have an iPhone app to help them find empty parking spaces, according to the New York Times (May 7, 2011, by Matt Richtel). This could be a good thing; it will probably reduce stress and frustration (and perhaps road rage) and alleviate downtown congestion, some 30% of which is estimated by city officials to be caused by drivers looking for a place to park.

The city installed sensors in nearly 20,000 parking spaces that alert a computer system when those spots are filled (or emptied) as part of a $20 million parking initiative. (Unless the initiative covered other projects, that's $1,000 per parking spot.)

As the article notes, San Francisco isn't the first city to try this out, but it is the most widespread (so far). Can anyone doubt that it will lead to more distracted drivers and more collisions - including automobile/pedestrian collisions?

When Google perfects its self-driving cars and hooks in this system, San Francisco will be driving paradise.

Ken Pimple, PAIT Project Director

Friday, May 6, 2011

"Preventing the Next Flash Crash"

In this editorial (New York Times, May 6, 2011), Edward E. Kaufman Jr., a former U.S. Senator (D-Delaware), and Carl M. Levin (D-Michigan), a current Senator and chairman of the Permanent Subcommittee on Investigations, decry the lack of regulatory reform on high-speed automated trading. They recall the 2010 flash crash:
One year ago, the stock market took a brief and terrifying nose-dive. Almost a trillion dollars in wealth momentarily vanished. Shares in blue-chip companies were traded at absurdly low prices. High-frequency traders, who use computers to look for microscopic price differences in stocks on different exchanges and other trading venues, stopped trading, while others immediately sold whatever they bought, mainly to each other, in what has been called “hot potato” trading.
Their tale of inaction and obstacles to action is depressing, and all too familiar. Here's an example of a practice with a demonstrated capacity to do tremendous harm to the world economy, balanced only by dubious claims of benefits and the religion of profit. The federal government clearly has the power and authority to remove the enormous risk but can't - or won't - take action.

It doesn't bode well for our cultural ability to deal with the far less dramatic and harmful, but still serious, ethical issues raised by other pervasive and autonomous information technologies whose risk has not yet been demonstrated (shall we always wait for disaster, or could we once in a while prevent it?) and for which no single entity with the capacity to control them can be found.

For more on the flash crash, see my earlier post.

Ken Pimple, PAIT Project Director

Monday, May 2, 2011

Online privacy; military gets hip; caterpillar robot; intelligent pricing

Four interesting items came to my attention yesterday and today:
  • In the New York Times, Randall Stross supports opt-in rules for online data (Opt-In Rules Are a Good Start), meaning that no one should be allowed to gather or use your digital information without your consent. For too many sites, the best we can get is an opt-out option, which requires us to say, "Hey, don't do this;" or, more typically, "Hey, stop doing this." Not surprisingly, the proposed Commercial Privacy Bill of Rights Act of 2011 supports an opt-out approach (we wouldn't want industry to be hampered by privacy concerns). Surprisingly, the article's poster child for opt-in is Facebook, which must have cleaned up its act while I wasn't looking.
  • Also in the Times, Andrew Martin and Thomas Lin write that some senior officials in the United States military are pushing to start using, or increase the use of, smartphones, iPads, video games, and virtual worlds in military training (Keyboards First. Then Grenades). Other senior officials are opposed. But some of these technologies are already being used and have proven effective, and they are appealing to young recruits. The smart money is on increased use.
  • ScienceNOW, a publication of the American Association for the Advancement of Science (AAAS), has a wonderful video of a 10-centimeter/4-inch robot that mimics the escape behavior of some caterpillars (Video: Caterpillar-Inspired Robots Rock 'n' Roll). You've got to see it to believe it. From the video it appears the robot is still on wires (presumably for power or control), but it's only a matter of time before the military develops it for intelligence gathering or assassination.
  • Finally, ScienceInsider, also an AAAS publication, reports that an out-of-print 1992 book on developmental biology available on Amazon.com recently offered "15 used from $35.54, and 2 new from $1,730,045.91 (+$3.99 shipping)" (The $23 Million Textbook). The biologist who noticed the outlandish price tracked changes for a while and noticed a pattern: "Whenever one seller changed the price of the book, the other seller reacted by offering the book at 99.83% of that price. In response, the first seller automatically started asking 127% of the other seller's new price - and so on. The price peaked on 18 April before a human being intervened and the prices came back to earth." The culprit was "algorithmic pricing." Thank goodness it wasn't used to order drone strikes. (A toy simulation of the feedback loop appears below.)
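
Each full round of the dance described in that last item multiplies the price by 0.9983 × 1.27, or about 1.268, so prices grow exponentially. A minimal simulation in Python (the $40 starting price is invented):

    price_a = 40.00                  # hypothetical starting price
    for _ in range(45):
        price_b = 0.9983 * price_a   # second seller undercuts by 0.17%
        price_a = 1.27 * price_b     # first seller asks 127% of that

    # Each round multiplies the price by about 1.268, so 45 rounds give
    # a factor of roughly 1.268**45, or about 43,000 - taking a $40 book
    # into seven figures, much as the sellers' bots did.
    print(f"A: ${price_a:,.2f}  B: ${price_b:,.2f}")
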
Ken Pimple, PAIT Project Director

Wednesday, April 13, 2011

"What You Should Know About the EU's New 'Internet of Things' Privacy Framework"

In this entry on the Glen Gilmore and Social Media blog, Gilmore describes “the Internet of Things” as

a predicted, transformative moment in time when nearly all “things” in the physical world will be interconnected, wirelessly, with communication capabilities linking the physical and virtual worlds for a variety of cooperative applications

Much of the linking and cooperation will be facilitated by RFID smart tags, over which the European Union (EU) “has expressed grave concerns about the privacy implications of an unregulated internet and unchecked technology.” Gilmore provides an outline of the “EU’s 2009 Internet of Things: 14-point Strategic Action Plan” and links to the EU’s 24-page Privacy and Data Protection Impact Assessment Framework for RFID Applications and a two-page press release describing the voluntary agreement between the EU, industry, and privacy protection groups.

Specifically, the framework establishes “guidelines for all companies in Europe to address the data protection implications of smart tags (Radio Frequency Identification Devices – RFID) prior to placing them on the market.”

Gilmore’s critique of the framework is to the point, but perhaps a bit understated.

Despite the fanfare of many signatures, the framework is voluntary, with no express auditing mechanisms, though record-keeping procedures are outlined, and no defined penalties for non-compliance.

Coincidentally, the announcement of the EU’s voluntary framework came within one week of the release of a report by Carnegie Mellon University showing “lagging compliance” with U.S. industry self-regulation in online behavioral advertising.

...

[T]he framework gives private stakeholders the green light to continue full-steam ahead with their already massive investment in RFID technologies and the “internet of things” it heralds. [Emphasis and link in original]

I take it that Gilmore thinks the voluntary agreement is unacceptably weak and that the United States, “ever lagging behind the EU’s privacy initiatives,” is even worse.

Thanks to Francis Harvey for bringing this to my attention.

Ken Pimple, PAIT Project Director

Tuesday, April 12, 2011

"3-D Avatars Could Put You in Two Places at Once"

Wouldn't you like to have a 3-D avatar bearing your own face that could attend 3-D virtual meetings with other talking heads on blocky bodies with pencil-thin limbs? I know I wouldn't.

John Tierney of the New York Times tells us (3-D Avatars Could Put You in Two Places at Once, April 11, 2011) that this will probably be available in the next five years.

Other selling points:
  • No more travel to meetings.
  • Your avatar can attend and participate in the meeting while you sleep. 
  • You can alter your avatar's face to make the other meeting participants like you more.

Although the title puts you in two places at once, if the avatar can function without your direction (because you're asleep), why couldn't you be in three, four, five places (in two, three, four meetings) at once? Think of the increased productivity (and the increased expectations for productivity).

I can't wait.

Ken Pimple, PAIT Project Director

Tuesday, March 15, 2011

"Poker Bots Invade Online Gambling"

This article from the New York Times by Gabriel Dance (March 13, 2011) describes the use of AI poker bots in online gambling. The bots apparently can win tens of thousands of dollars for the humans who deploy them. Professor Tuomas W. Sandholm of Carnegie Mellon University is quoted as saying that poker bots "can rival good players, but not the best - yet."

Naturally the use of the poker bots is defended by some, condemned by others. Those of us who don't play poker online are safe from them. But then again:

The poker bots’ arrival may be just another sign of an emerging world where humans, knowingly or unknowingly, encounter robots on an everyday basis. People already talk with computers when they call customer service centers or drive their cars.

Ken Pimple, PAIT Project Director

Tuesday, March 8, 2011

Book event: World Wide Mind: The Coming Integration of Humanity, Machines and the Internet

I received this meeting notice from Jason Borenstein, to whom my thanks. - Ken



Book event: World Wide Mind: The Coming Integration of Humanity, Machines and the Internet

Monday, March 21, 2011
12:15 p.m. - 1:30 p.m.

New America Foundation
1899 L St NW, Suite 400
Washington, DC 20036

What if digital communication felt as real as being touched? This question led acclaimed science writer Michael Chorost to explore profound new ideas triggered by lab research around the world, and has resulted in World Wide Mind: The Coming Integration of Humanity, Machines and the Internet (Free Press; February 2011) - the first book to explain exactly how humans and computers could be merged and the risks, implications, and amazing possibilities that await us in the future. World Wide Mind takes mind-to-mind communication out of the realm of science fiction and reveals how we are on the verge of a radical new understanding of human interaction.

Please join us for a conversation with writer Michael Chorost on how we communicate, how we can connect more fully with one another in a hyper technological age, and how our addiction to email and texting can be countered with technologies that put us - literally - in each other's minds.

Featured Speaker
Dr. Michael Chorost
Author, World Wide Mind

Moderator
Andrés Martinez
Director, Bernard L. Schwartz Fellows Program
New America Foundation

To RSVP for the event, go to the event page:
http://www.newamerica.net/events/2011/world_wide_mind

For questions, contact Stephanie Gunter at (202) 596-3367 or gunter@newamerica.net.

Monday, February 28, 2011

"Surrounded by Machines"

This article was published in the March 2011 edition of Communications of the ACM (pp. 29-31).

Although the article was authored by yours truly, it owes its title to Keith Miller and its publication to Rachelle Hollander, editor of CACM's ethics column, to both of whom my thanks. It briefly describes three of the presentations at the 2010 PAIT workshop.

The funding period for the project officially ends tomorrow (March 1, 2011).

In the same issue, "Catch me if you can" by Gregory Benford (pp. 112-111)1 traces the evolution of computer viruses and other malware, including Stuxnet. Benford claims that he wrote the first virus. I thank him for the article, but not for his invention.


Ken Pimple, PAIT Project Director

1This isn't a typo; the article begins on page 112 and ends on 111.

Wednesday, February 23, 2011

"Location Privacy: Is Privacy in Public a Contradiction in Terms?"

This entry by Robert Gellman on the GeoData Policy blog (Feb. 21, 2011) is an intelligent, but far from exhaustive, discussion of issues related to privacy in public places and changing technologies. I learned a number of things from it; the one that surprised me the most was Gellman's summary of a Supreme Court finding.
In United States v. Knotts, a 1983 Supreme Court decision, the police surreptitiously attached an electronic beeper to an item purchased by a suspect. They used the beeper to track the movements of the suspect’s car. The Court held that a person traveling in a car on public streets has no reasonable expectation of privacy in his movements. The Court didn’t care if the police watched or used technology. It found no Fourth Amendment violation either way.
I hadn't known about this decision, and it gave me food for thought. Gellman doesn't say whether the police had a warrant; if they did, the decision is in line with my understanding of police procedures. If they didn't, the decision strikes me as a serious escalation of police powers and degradation of civil rights. Furthermore, since the beeper was attached to an "item" and not to the car, it could have been used to track the suspect in relatively private spaces.

I found this a worthwhile read; you might also. Thanks to Francis Harvey for bringing it to my attention.

Ken Pimple, PAIT Project Director

Thursday, February 17, 2011

Watson, Facebook, and IP numbers

I go out of town for a couple of days and one of my favorite sources, the New York Times, publishes four articles relevant to this blog. Rather than wait until I have time to summarize and comment on each of them (which won't be soon), I'm going the cheap-and-easy way - three bullet points for four articles.
  • Watson, IBM's Jeopardy!-playing computer, gets two articles, both by John Markoff: A Fight to Win the Future: Computers vs. Humans (Feb. 14, 2011), a thoughtful, wide-ranging reflection on how Watson's language facility might (and might not) change economies and cultures; and Computer Wins on 'Jeopardy!': Trivial, It's Not (Feb. 16), which is more narrowly focused on describing Watson's performance on the TV show.
  • Facebook Officials Keep Quiet on Its Role in Revolts by Jennifer Preston (Feb. 14) highlights the critical role Facebook and other technologies have played in the recent (and ongoing) uprisings in Egypt and elsewhere. Facebook and Twitter, at least, take the stand that they are providers of social and communication services, not king-makers (the only safe stand they can take, of course).
  • I suppose it calls my nerd credentials into question that I didn't know that we were running out of IP numbers until I read Drumming Up More Addresses on the Internet by Laurie J. Flynn (Feb. 14). The 4.3 billion numbers that were created in 1977 are almost all used up. A fix, IPv6, is in hand, but as with Y2K, a lot of systems need to be updated individually. Some people scoff at the Y2K problem - "The world didn't end!" - but only because they don't realize that the problems were averted by thousands of people working hard to make it a triumph rather than a disaster. Let's hope we do as well with this transition. (A quick sense of the numbers follows this list.)
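
The numbers behind the shortage, for the curious (Python's standard ipaddress module understands both formats; the sample addresses below are reserved documentation examples):

    import ipaddress

    print(f"IPv4 space: {2 ** 32:,}")     # 4,294,967,296 addresses
    print(f"IPv6 space: {2 ** 128:.3e}")  # about 3.4e+38 - effectively inexhaustible

    # The two formats are incompatible, which is why routers, operating
    # systems, and applications must each be updated:
    print(ipaddress.ip_address("203.0.113.7"))   # IPv4 dotted quad
    print(ipaddress.ip_address("2001:db8::7"))   # IPv6 colon-hex
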
Ken Pimple, PAIT Project Director

Monday, February 14, 2011

"Malware Aimed at Iran Hit Five Sites, Report Says"

This article from the New York Times by John Markoff (February 11, 2011) summarizes a report from computer security software firm Symantec analyzing the Stuxnet worm. They found that there were "three waves of attacks."
Liam O Murchu, a security researcher at the firm, said his team was able to chart the path of the infection because of an unusual feature of the malware: Stuxnet recorded information on the location and type of each computer it infected.
Symantec analyzed samples of the worm from "various" computers and "determined that 12,000 infections could be traced back to just five initial infection points."

The tracking information was apparently intended to allow the attackers to learn whether the target computers became infected.

Sophisticated malware meets sophisticated analysis.

Ken Pimple, PAIT Project Director

Friday, February 11, 2011

Robots on stage

The New York Times reports that "Heddatron," a play by Elizabeth Meriwether, will open in Chicago today. The cast includes ten robots.
Five large robots are controlled remotely by cast members sitting offstage, and the remaining five are very small, autonomous "critter-bots," which basically just zip around the stage for the last five minutes of the play. 
I'd love to see this play and write a review for this blog. Anyone want to give me a travel grant?

Ken Pimple, PAIT Project Director

Thursday, February 10, 2011

Call for papers: Fourth Workshop on Roboethics

Fourth Workshop on Roboethics
May 13, 2011

CALL FOR PAPERS

Important dates:
  • February 28, 2011: Paper submission deadline
  • March 7, 2011: Notification of Acceptance/Rejection
  • March 14, 2011: Camera-Ready Submission Deadline
Organizers: IEEE RAS Technical Committee on Roboethics
  • Gianmarco Veruggio, CNR-IEIIT, Italy (corresponding co-chair)
  • Jorge Solis, Waseda University, Japan (co-chair)
  • Matthias Scheutz, Indiana University, USA (co-chair)
Scope: The proposed Full Day Workshop on Roboethics is the fourth biennial event, organized by the Technical Committee on Roboethics as part of the ICRA conference (previous workshops took place in 2005, 2007 and 2009). Roboethics is ethics applied to robotics, i.e., the human-centered ethics guiding the design, construction and use of robots. It deals with the study of the ethical, legal and social aspects of the introduction and use of robots in our daily lives. Progress in the field of computer science and telecommunications allows us to endow machines with enough intelligence so that they already can act autonomously (to some degree). However, as the application domains for robots are increasing and robots are coming out of the factory halls, robotics research is increasingly raising ethical implications, related to the emerging interactions between robots and human beings.

Roboethics shares many "sensitive areas" with computer ethics, information ethics, and bioethics; not only roboticists but also sociologists, psychologists and philosophers are discussing the potentialities and limits of robotics in helping to build a better human society.

This workshop will increase robotics researchers' ethical awareness, in the context of the ever-growing interdisciplinarity that characterizes the new generation of robotics research.

Goal: The theme of the ICRA 2011 conference is "Better Robots, Better Life", an expectation that robot technology will help build a better human society. But achieving this goal is not only a technical problem. Robotics applications raise ethical questions, related to emerging interactions between robots and humans. The application of ethics to machines, including robots and computer programs, has typically been limited to the question of whether designers and operators should take full responsibility for machines' actions. However, robotics is already developing machines with more open-ended behaviors and the ability to acquire new behaviors as a result of online learning during task execution. This kind of adaptation will likely limit the predictability of robot behaviors. Moreover, the types of interactions and the physical integrations of humans and robots are increasing rapidly. The social, economic, psychological, philosophical, and emotional impacts of this research are still unclear, however, and require careful analysis and attention by the research community. Among the objectives of the workshop is the opportunity to develop rules for roboethical quality assurance, aimed at preventing unethical uses of robotics research products. Long-term objectives include the increase of robotics researchers' ethical awareness, in the context of the ever-growing interdisciplinarity that will characterize the new generation of robotics research.

Topics: Contributions are welcome on (but not limited to) the ethical, legal and societal aspects of the following topics:
  • robot ethics (decision procedures/algorithms for moral behavior)
  • technical dependability (availability; reliability; safety; security)
  • military application of robotics (acceptability, advantages and risks, codes)
  • health (robotics in surgery; robotics in health care, assistance, prosthetics and therapy)
  • service (social robotics, personal assistants, companions)
  • economy (replacing humans in the workplace; robotics and the job market)
  • psychology (position of humans in the control hierarchy; robots and children)
  • law (robots and liability; deployment of autonomously acting robots)
  • environment (sustainable exploitation of resources; cleaning nuclear and toxic waste)
For more information, please see the web page at http://www.roboethics.org/icra2011/

Interested authors are encouraged to send their original contributions in the above or related areas to the organizers at info@roboethics.org

Extended abstracts (of two pages) or full papers of up to 6 pages (using the ICRA conference publication format) are welcome.

Friday, February 4, 2011

"'Death by GPS' in desert"

I've written about the unintended downside of relying too heavily on GPS, mobile telephones, and similar locational technologies in two earlier posts, the first about simply getting lost and the second describing several ways people visiting U.S. national parks have managed to get themselves into trouble using these devices.

This article by Tom Knudsen, published in The Sacramento Bee (Jan. 30, 2011), describes the grim stories of people who have died in Death Valley National Park and Joshua Tree National Park after they entered these rugged areas unprepared for 120 degree (F) temperatures and wound up on impassable roads.
These are not just stories of unimaginable suffering. They are reminders that even with a growing suite of digital devices at our side, technology cannot guarantee survival in the wild. Worse, it is giving many a false sense of security and luring some into danger and death.
According to Charlie Callagan, Death Valley wilderness coordinator, "Some of the databases on the GPS units are showing old roads that haven't been open in 40 years." He's been "working with technology companies to remove closed and hazardous roads from their navigation databases – but with only partial success."

There has been a huge increase in summertime visitors to Death Valley, "from 97,000 in 1985 to 257,500 in 2009," an increase of 165%. It would be interesting to know the causes of the increase and whether locational technologies have played a large role. There's no way to tell from the article whether the per-visitor death rate has risen in proportion to, more slowly than, or faster than the number of visitors. Whatever the case, clearly the providers of GPS services have a heavy responsibility to make their devices safer, at least by removing abandoned roads in dangerous areas, perhaps by removing places like Death Valley from their systems entirely.

Thanks to Don Searing for bringing this article to my attention.

Ken Pimple, PAIT Project Director

Tuesday, February 1, 2011

"Smart Meters Draw Fire From Left and Right in California"

According to a January 31, 2011, article in the New York Times,
Pacific Gas and Electric’s campaign to introduce wireless smart meters in Northern California is facing fierce opposition from an eclectic mix of Tea Party conservatives and left-leaning individualists who say the meters threaten their liberties and their health.
I discussed the "liberties" angle in an earlier post, but the "health" concern caught me by surprise. The key is not the meters themselves, but the wireless technology they use to transmit data back to the utility. It turns out that some people believe they are sensitive to radiation from mobile devices, WiFi, and smart meters, causing "dizziness, fatigue, headaches, sleeplessness or heart palpitations." It's called “electromagnetic hypersensitivity,” or E.H.S.

The article mentions that no health risks from such radiation have been confirmed. My impulse is to be skeptical of unconfirmed exotic conditions brought on by new technology, but lack of evidence is not proof that there's no effect. At any rate, the power company in question - Pacific Gas & Electric - is exploring the possibility of offering the option of hard-wiring the smart meters.

Ken Pimple, PAIT Project Director

Thursday, January 27, 2011

More on Stuxnet

In an earlier post, I wrote about the theory that Stuxnet was created and deployed by the U.S. and Israel. I deplored the deed because it also unleashed a powerful and - to my knowledge - unprecedented form of malicious software that will certainly be copied and re-used for all sorts of mischief.

The January 26, 2011, edition of the New York Times includes two op-ed pieces on Stuxnet. In "25 Years of Vandalism," William Gibson (author of Neuromancer and coiner of the word "cyberspace") traces the history of hacking to 1986. He also claims that it is less likely that Stuxnet is "a cyberweapon purpose-built by one state actor to strategically interfere with the business of another" than "a piece of hobbyist 'street' technology." If he's right, this is probably even worse news than I thought. It seems likely that hobbyist crackers - who are probably more numerous and even less discerning than governments - can adapt each other's code more readily than the kind of sophisticated worm Stuxnet has been described as elsewhere.

Indeed, the other op-ed, "From Bullets to Megabytes" by Richard A. Falkenrath, former "deputy homeland security adviser to President George W. Bush," describes Stuxnet as a "sophisticated half-megabyte of computer code." Falkenrath's analysis of the fallout from Stuxnet is also more sophisticated than mine, touching on the likely effect on relationships between governments and the global information technology industry as well as raising questions about the legality of the U.S. President's authorizing the use of such malware.

It's a scary place out there.

Ken Pimple, PAIT Project Director

Wednesday, January 26, 2011

"Attention Turns to the Dangers of Distracted Pedestrians"

Fresh on the heels of news that T-Mobile and other mobile phone carriers are serious about providing protection against distracted driving (see my earlier post), the New York Times (January 25, 2011) reports that several states - New York, Oregon, Virginia, California, and Arkansas are named - have passed, tried to pass, or are thinking of passing laws to ban pedestrians and bicyclists from using mobile phones and media players with headphones or ear buds.

A surprising number of people walk or run right in front of a moving car when entranced by their music, often with fatal consequences. The curmudgeon in me just wants to nominate such people for the Darwin Award, but death is an awfully steep penalty for a moment's distraction, and the innocent drivers involved in such collisions must be seriously traumatized.

Before legislation started popping up banning texting while driving, I wondered whether it would be possible or effective simply to define distracted driving as reckless driving. That legal move, plus a good deal of public education, might be a good deterrent. There's probably an analog for bicyclists, but is there for pedestrians?

At any rate, no matter how much we love our devices, and how much actual value they add to our lives, we really shouldn't let them eradicate our good sense.

Ken Pimple, PAIT Project Director

Tuesday, January 25, 2011

"Google and Mozilla Announce New Privacy Features"

According to this New York Times article, Google's browser (Chrome) and Mozilla's (Firefox) will soon have the capability to send a "do not track" signal to Web sites they visit. Although they take different approaches, the opt-out feature of both browsers will, in essence, ask each Web site visited not to track the user. The new features will have no effect at sites that are not so configured.
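
Mozilla's version, at least, works by attaching a "DNT: 1" header to every HTTP request the browser sends; a cooperating site reads the header and refrains from tracking. A minimal sketch of both sides of that exchange, using Python's standard library (my illustration of the mechanism, not either vendor's code):

    import urllib.request

    # Client side: a browser with the feature enabled adds the header
    # to every request it makes.
    request = urllib.request.Request("http://example.com/",
                                     headers={"DNT": "1"})

    # Server side: a cooperating site checks for the header before
    # setting tracking cookies. Nothing forces it to comply.
    def may_track(request_headers):
        return request_headers.get("DNT") != "1"

The weakness is visible in that last line: the header is a request, not a command.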

These features do not seem as robust as Microsoft's (which I mentioned in an earlier post), which will allow the user to block Web sites based on a do-not-track list that users will be able to import to their browsers or create themselves. (See Microsoft's announcement for details.)

The comments posted by readers to the article about Google and Mozilla are generally disdainful of their approach - relying on industry to voluntarily implement the software that will make the privacy features work. Safe and legitimate sites will probably do so, but predatory sites certainly will not.

The comments also offer a few suggestions for dealing with this problem, including already available plug-ins (extensions).

Ken Pimple, PAIT Project Director

Friday, January 21, 2011

"Cell Carriers Explore Ways to Limit Distracted Driving"

This article in the January 20, 2011, New York Times, describes the announcement by T-Mobile of a service that "for $4.99 a month, automatically disables rings and alerts and sends calls to voice mail when the phone is in a moving car." The feature can be disabled by passengers or foolhardy drivers. Other carriers are exploring the same idea.

I think that using a mobile phone while driving - whether texting or talking, hands-free or hands-on - should be illegal. Distracted driving is dangerous. But until we pass laws or change our culture, this kind of feature is welcome.

Ken Pimple, PAIT Project Director

"Israel Tests on Worm Called Crucial in Iran Nuclear Delay"

This article, published January 15, 2011, in the New York Times, lays out a case to show that the United States and Israel created and used the Stuxnet computer worm to delay Iran's nuclear program.
By the accounts of a number of computer scientists, nuclear enrichment experts and former officials, the covert race to create Stuxnet was a joint project between the Americans and the Israelis, with some help, knowing or unknowing, from the Germans and the British.
Stuxnet does its damage by taking over a specific controller, the Siemens P.C.S.-7, which is used to run all kinds of industrial machinery. In particular, Stuxnet targeted the controllers of the centrifuges Iran uses to enrich uranium into a form that can fuel a power plant or a nuclear weapon. The P.C.S.-7 is widely used, and it seems likely that Stuxnet could be adapted to attack other enrichment facilities or even other kinds of plants - water treatment facilities, power plants, and so forth.

It seems to be widely agreed that Stuxnet is too sophisticated to have been created by your run-of-the-mill, or even stand-out, cracker, meaning that it was most likely created by one or more governments or corporations. The claim that it was crafted by the United States with Israeli help strikes me as credible, and I am glad that Iran's nuclear ambitions have been delayed.

However, the origin and results of this (apparently) first use of Stuxnet are not my concern here. To me, the biggest issue is that this sophisticated software is out there, available for study. I find this to be the most disturbing paragraph in the article:
“It’s like a playbook,” said Ralph Langner, an independent computer security expert in Hamburg, Germany, who was among the first to decode Stuxnet. “Anyone who looks at it carefully can build something like it.” Mr. Langner is among the experts who expressed fear that the attack had legitimized a new form of industrial warfare, one to which the United States is also highly vulnerable.
Someone - the U.S. or someone else - carefully crafted a genie and then let it out of the bottle. The world may be a bit safer from Iran's nuclear program for the moment, but I can't help wondering whether it's a net gain in security.

Ken Pimple, PAIT Project Director

Tuesday, January 4, 2011

Update: "A Faustian Exchange"

This appears to be an update of an earlier post. My thanks to Jason Borenstein for sending this my way. - Ken

AI & SOCIETY: Celebrating the 25th anniversary

Call for Papers

Theme: ‘A Faustian Exchange: What is it to be human in the era of Ubiquitous Technology?’

As part of the celebration of the 25th anniversary of AI&Society in 2012, we are planning three inter-linked activities: a special birthday volume, an academic workshop/conference in Cambridge, and a public installation event at the Dana Centre, Science Museum, London. In the age of pervasive and streaming technologies, we get a deep sense that the more we get caught up in a process of self-commodification, the more we are threatened with the loss of our existential autonomy. We have become accustomed to perceiving and thinking in singularities and individualism, rooted deeply in the techno-industrial culture of competitiveness and the possibilities inherent in technology. Since its inception, the theme of Judgment to Calculation has been central to the ongoing debates in the journal. In the early days of AI, Prof. Weizenbaum, in his seminal book Computer Power and Human Reason (1976), warned us against instrumental reason and against giving machines the responsibility for making genuinely human choices.

There is a legitimate concern that further advances in pervasive technology could create profound social disruptions and even have dangerous consequences, forcing humans to learn to live with machines that increasingly copy human behaviours. But how is it possible to reconcile the widening gap between constructed reality and the basic reality of the human condition? The challenge is to recalibrate the spiral of Judgment to Calculation, moving forward from Calculation to Judgment. We feel that the time has now come to square the circle and provide a forum for a debate on the theme of ‘A Faustian Exchange: What is it to be human in the era of Ubiquitous Technology?’, reflecting the complex, uncertain, multicultural and interconnected world we live in.

Issues and Concerns
Pervasive technology has great potential in many realms of human society, including medicine, healthcare, agriculture, transportation, education, commerce, arts and culture, and scientific research and discovery. However, we should remain vigilant about the profound implications of these mediating technologies for human life.
  • What are the consequences of man’s reliance on technology in deciding and pursuing what is truly valuable?
  • What is it to be human when being mediated by technology in contrast to how we are in the presence of others?
  • How do we make our presence felt in the wilderness of the post-human and the extended mind?
  • How does this new pervasive technology affect society? How do we interact with the technologies embedded in our world? Have we gone beyond the frontiers of control?
  • How do we deal with the dilemma that singularity represents not simply the passing of humankind from center stage, but that it contradicts our most deeply held notions of being?
  • A robot for granny - is there a technocratic fix for every social “problem”?
  • What would it be like to design technological systems for nurturing the well-being of humankind?
  • What can arts, literature, music and culture contribute to the debate on Faustian Exchange?
  • Can the sorcerer’s apprentice shed some light on the increasing preoccupation with technologising the academy and turning universities into theme parks of extended websites?
  • How do we transcend the ‘bipolar tendency’ of the market culture, and ‘deal with the swings between prophesies of doom that serve only to paralyze us further, and the unbridled consumerism that makes things worse’?
  • Does the recent financial crisis at last make us see through the myth of the culture of ‘anti-intellectualism’ and the ‘end of history’?
  • What have we gained and what have we lost in the Faustian Exchange? Have we already bargained our soul for the seductive power of instrumental technology?
This special 25th anniversary issue of AI&Society will explore ways to optimize technology for society, beyond the questions of ‘could we’ and ‘should we’. We welcome contributions for this special volume, and look forward to receiving expressions of interest, position papers/abstracts, and full papers:

Call for papers: 5 October 2010
Abstracts (approx. 500 words): 25 January 2011
Full articles (up to 6,000 words): 15 July 2011
Publication: July/August 2012

Karamjit S Gill
Editor, AI & Society: Journal of Knowledge, Culture and Communication
kgillbton@yahoo.co.uk

"When Computers Keep Watch"

This article from the New York Times (January 1, 2011) describes advances and uses of computerized analysis of visual images of people, including face recognition. The first example is of a system that monitors a prison yard in an annual training exercise for correctional officers.
Perched above the prison yard, five cameras tracked the play-acting prisoners, and artificial-intelligence software analyzed the images to recognize faces, gestures and patterns of group behavior. When two groups of inmates moved toward each other, the experimental computer system sent an alert - a text message - to a corrections officer that warned of a potential incident and gave the location.
Other examples include a computer-vision system that reminds hospital personnel to wash their hands when they are supposed to; another, mounted behind a mirror, that can "read a man's face to detect his heart rate and other vital signs"; and a third that can "analyze a woman’s expressions as she watches a movie trailer or shops online, and help marketers tailor their offerings accordingly."
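
To make the underlying technique concrete: below is a minimal sketch of the face-detection step such systems build on, written with the open-source OpenCV library. The systems described in the article are far more sophisticated (and proprietary); this only shows how little code the entry-level version of a watching computer requires.

import cv2

# Load a pre-trained Haar-cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)  # the default webcam
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find face-like regions at several scales in the grayscale frame.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
camera.release()
cv2.destroyAllWindows()

Everything the article describes - identifying whose face it is, reading gestures, inferring vital signs - is layered on top of steps like this one.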

Like most pervasive technologies, these computer-vision systems clearly have the potential to be beneficial in many ways, but also could easily be misused to violate privacy and cause other kinds of harms. As I read the article, the possibility of abuse by employers occurred to me before I reached this passage:
At work or school, the technology opens the door to a computerized supervisor that is always watching. Are you paying attention, goofing off or daydreaming?
Some people will argue that such a use would be justified because it would lead to greater productivity and a thriving economy. Others, myself included, would call it tyrannical - and I'd go on to say that there may be problems with ever-growing economies, too.

The examples above of the mirror that reads vital signs and the computer that monitors the reactions of shoppers or movie watchers are made possible by the research of Rosalind W. Picard and Rana el-Kaliouby at M.I.T. They have worked "for years" to apply "facial-expression analysis software to help young people with autism better recognize the emotional signals from others that they have such a hard time understanding" and co-founded a company, Affectiva, to market the software.

I am most alarmed by the use of these technologies to improve marketing and advertising, the practical science(s) of behavior control. Big business has the money and the incentive to propel the use of this software far and fast. What if the marketers actually perfect their art? Perfect marketing is perfect behavior control, and it might be reached under the flag of economic development, with the blessing of our dominant paradigm. I find it bitterly ironic that all this may be made possible by the work of people who set out to help people with autism.

Ken Pimple, PAIT Project Director