Monday, May 23, 2011

"When the Internet Thinks It Knows You"

This opinion piece from the New York Times (May 22, 2011) was written by Eli Pariser, a co-founder and current president of the board of MoveOn.org. Pariser expresses concern about the "Internet giants - Google, Facebook, Yahoo and Microsoft -" who have become remarkably good at mining our browsing habits to tailor advertising to each reader.

It isn't the advertising practices of these giants that bother Pariser; it's the search filtering. To Pariser, this personalization puts at risk some of the democratization that the Internet has fostered.
[W]hen personalization affects not just what you buy but how you think, different issues arise. Democracy depends on the citizen’s ability to engage with multiple viewpoints; the Internet limits such engagement when it offers up only information that reflects your already established point of view. While it’s sometimes convenient to see only what you want to see, it’s critical at other times that you see things that you don’t.
Insofar as this is a threat (and I think it is), I would say that Facebook has the potential to do the most harm. Doesn't it seem likely that the more time a person spends living in Facebook-land, talking only to chosen friends, the more his or her worldview will narrow?

Pariser believes that companies that use this kind of technology should "give us control over what we see - making it clear when they are personalizing, and allowing us to shape and adjust our own filters."
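
What Pariser asks for is not technically hard. Here is a minimal sketch of a ranker with the kind of user-adjustable personalization switch he describes - the scoring scheme and data are invented for illustration, not how any real search engine works:

    # A toy ranker with a personalization switch - invented scoring,
    # not any real company's algorithm.
    def rank(results, user_interests, personalize=True):
        """Order results by relevance; optionally boost familiar topics."""
        def score(result):
            base = result["relevance"]
            if personalize:
                # boost items matching topics the user already favors
                base += sum(1 for t in result["topics"] if t in user_interests)
            return base
        return sorted(results, key=score, reverse=True)

    results = [
        {"title": "Opposing view", "relevance": 2.0, "topics": ["politics-right"]},
        {"title": "Familiar view", "relevance": 1.5, "topics": ["politics-left"]},
    ]
    interests = {"politics-left"}

    print([r["title"] for r in rank(results, interests)])
    # -> ['Familiar view', 'Opposing view']  (the bubble at work)
    print([r["title"] for r in rank(results, interests, personalize=False)])
    # -> ['Opposing view', 'Familiar view']  (the "filter off" view)

The "filter off" option is a one-line change; whether companies expose it is a business decision, not a technical one.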

Will they heed his call? I doubt it. Would it do any good if they did? Not much, I'd guess. Most people wouldn't take the time to shape their own filters. It's so easy and pleasant to let someone else filter for me and categorize my behaviors so I don't have to - it's like Shangri-La.

But I'd use the "search filter off" option sometimes.

Ken Pimple, PAIT Project Director

Monday, May 16, 2011

"Why Privacy Matters Even if You Have 'Nothing to Hide'"

The May 15, 2011, issue of The Chronicle of Higher Education includes an excerpt from Daniel J. Solove's new book, Nothing to Hide: The False Tradeoff Between Privacy and Security (Yale University Press), tackling a common response to governmental gathering of personal information:
"I've got nothing to hide," they declare. "Only if you're doing something wrong should you worry, and then you don't deserve to keep it private."
He points out that even people who have nothing to hide - because they have not committed any crimes or done anything they are ashamed of - still would not care to have all of their private information made public. The nothing-to-hide argument rests on "the underlying assumption that privacy is about hiding bad things." Hiding bad things is one aspect of privacy, but only one; privacy is also the ability to have a life that is not entirely on public view.

Solove identifies several harms that can arise from invasions of privacy.
  • When not-particularly-revealing data from multiple sources are combined, the aggregation can reveal more than the bits reveal on their own (see the sketch after this list).
  • Exclusion is characteristic of much data gathering; individuals suffer exclusion when they are "prevented from having knowledge about how information about them is being used, and when they are barred from accessing and correcting errors in that data."
  • Related to exclusion is secondary use (sometimes called "re-purposing"), in which data gathered with one object in mind are used for another purpose. In the context of government surveillance, secondary use "can paint a distorted picture, especially since records are reductive—they often capture information in a standardized format with many details omitted."
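The aggregation point is easy to demonstrate. Here is a toy sketch - every record and field name below is invented - of how two individually innocuous datasets can be joined to reveal something neither discloses on its own:

    # Toy illustration of Solove's "aggregation" harm; all data invented.
    pharmacy_purchases = [  # no names - looks harmless on its own
        {"zip": "47401", "birth_year": 1975, "item": "insulin"},
        {"zip": "47401", "birth_year": 1982, "item": "vitamins"},
    ]
    voter_roll = [          # public record - also harmless on its own
        {"name": "J. Doe", "zip": "47401", "birth_year": 1975},
        {"name": "R. Roe", "zip": "47408", "birth_year": 1982},
    ]

    # Joining on (zip, birth_year) links a named person to a purchase
    # that hints at a medical condition neither dataset reveals alone.
    for p in pharmacy_purchases:
        for v in voter_roll:
            if (v["zip"], v["birth_year"]) == (p["zip"], p["birth_year"]):
                print(v["name"], "probably bought", p["item"])
    # -> J. Doe probably bought insulin
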
Almost everything he says in this short, readable, and useful essay can also be applied to commercial data collection. I'm guessing his book is a good one.

Ken Pimple, PAIT Project Director

Sunday, May 8, 2011

"Now, to Find a Parking Spot, Drivers Look on Their Phones"

Those lucky people in San Francisco now have an iPhone app to help them find empty parking spaces, according to the New York Times (May 7, 2011, by Matt Richtel). This could be a good thing; it will probably reduce stress and frustration (and perhaps road rage) and alleviate downtown congestion, some 30% of which is estimated by city officials to be caused by drivers looking for a place to park.

The city installed sensors in nearly 20,000 parking spaces that alert a computer system when those spots are filled (or emptied) as part of a $20 million parking initiative. (Unless the initiative covered other projects, that's $1,000 per parking spot.)
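
The article doesn't describe the system's internals, but the basic plumbing is easy to imagine. A minimal sketch, with an invented event format and spot IDs (the real SFpark system surely differs):

    # Minimal sketch of a sensor-fed availability map an app could query.
    # The event format and spot IDs are invented for illustration.
    available = {}  # spot_id -> True if the spot is empty

    def on_sensor_event(spot_id, occupied):
        """Called when a pavement sensor detects a car arriving or leaving."""
        available[spot_id] = not occupied

    def open_spots(spot_ids):
        """What the phone app asks: which of these spots are free right now?"""
        return [s for s in spot_ids if available.get(s)]

    on_sensor_event("mission-0400-07", occupied=True)
    on_sensor_event("mission-0400-08", occupied=False)
    print(open_spots(["mission-0400-07", "mission-0400-08"]))
    # -> ['mission-0400-08']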

As the article notes, San Francisco isn't the first city to try this out, but it is the most widespread (so far). Can anyone doubt that it will lead to more distracted drivers and more collisions - including automobile/pedestrian collisions?

When Google perfects its self-driving cars and hooks them into this system, San Francisco will be a driving paradise.

Ken Pimple, PAIT Project Director

Friday, May 6, 2011

"Preventing the Next Flash Crash"

In this editorial (New York Times, May 6, 2011), Edward E. Kaufman Jr., a former U.S. Senator (D-Delaware), and Carl M. Levin (D-Michigan), a current Senator and chairman of the Permanent Subcommittee on Investigations, decry the lack of regulatory reform of high-speed automated trading. They recall the 2010 flash crash:
One year ago, the stock market took a brief and terrifying nose-dive. Almost a trillion dollars in wealth momentarily vanished. Shares in blue-chip companies were traded at absurdly low prices. High-frequency traders, who use computers to look for microscopic price differences in stocks on different exchanges and other trading venues, stopped trading, while others immediately sold whatever they bought, mainly to each other, in what has been called “hot potato” trading.
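
For readers unfamiliar with the mechanics, here is a toy sketch of the cross-exchange arbitrage the quote describes - made-up venues, prices, and threshold, nothing like production trading code:

    # Toy cross-exchange arbitrage: buy where a stock is momentarily cheap,
    # sell where it is momentarily dear. All numbers are invented.
    quotes = {"NYSE": 10.001, "BATS": 10.004}  # same stock, two venues

    def arbitrage(quotes, min_edge=0.002):
        """Report a buy/sell pair if the price gap exceeds a threshold."""
        buy_venue = min(quotes, key=quotes.get)
        sell_venue = max(quotes, key=quotes.get)
        if quotes[sell_venue] - quotes[buy_venue] >= min_edge:
            return (f"buy on {buy_venue} at {quotes[buy_venue]}, "
                    f"sell on {sell_venue} at {quotes[sell_venue]}")
        return "no trade: spread too small to cover costs"

    print(arbitrage(quotes))
    # -> buy on NYSE at 10.001, sell on BATS at 10.004

The trouble comes when thousands of such programs run at machine speed and all pull back at once, as the quote describes.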
Their tale of inaction and obstacles to action is depressing, and all too familiar. Here's an example of a practice with a demonstrated capacity to do tremendous harm to the world economy, balanced only by dubious claims of benefits and the religion of profit. The federal government clearly has the power and authority to remove the enormous risk but can't - or won't - take action.

It doesn't bode well for our cultural ability to deal with the far less dramatic and harmful, but still serious, ethical issues raised by other pervasive and autonomous information technologies - technologies whose risks have not yet been demonstrated (shall we always wait for disaster, or could we once in a while prevent it?) and for which no single entity with the capacity to control them can be found.

For more on the flash crash, see my earlier post.

Ken Pimple, PAIT Project Director

Monday, May 2, 2011

Online privacy; military gets hip; caterpillar robot; intelligent pricing

Four interesting items came to my attention yesterday and today:
  • In the New York Times, Randall Stross supports opt-in rules for online data (Opt-In Rules Are a Good Start), meaning that no one should be allowed to gather or use your digital information without your consent. For too many sites, the best we can get is an opt-out option, which requires us to say, "Hey, don't do this," or, more typically, "Hey, stop doing this." Not surprisingly, the proposed Commercial Privacy Bill of Rights Act of 2011 supports an opt-out approach (we wouldn't want industry to be hampered by privacy concerns). Surprisingly, the article's poster child for opt-in is Facebook, which must have cleaned up its act while I wasn't looking.
  • Also in the Times, Andrew Martin and Thomas Lin write that some senior officials in the United States military are pushing to start using, or increase the use of, smartphones, iPads, video games, and virtual worlds in military training (Keyboards First. Then Grenades). Other senior officials are opposed. But some of these technologies are already being used and have proven effective, and they are appealing to young recruits. The smart money is on increased use.
  • ScienceNOW, a publication of the American Association for the Advancement of Science (AAAS), has a wonderful video of a 10-centimeter/4-inch robot that mimics the escape behavior of some caterpillars (Video: Caterpillar-Inspired Robots Rock 'n' Roll). You've got to see it to believe it. From the video it appears the robot is still on wires (presumably for power or control), but it's only a matter of time before the military develops it for intelligence gathering or assassination.
  • Finally, ScienceInsider, also an AAAS publication, reports that an out-of-print 1992 book on developmental biology was recently listed on Amazon.com at "15 used from $35.54, and 2 new from $1,730,045.91 (+$3.99 shipping)" (The $23 Million Textbook). The biologist who noticed the outlandish price tracked changes for a while and spotted a pattern: "Whenever one seller changed the price of the book, the other seller reacted by offering the book at 99.83% of that price. In response, the first seller automatically started asking 127% of the other seller's new price - and so on. The price peaked on 18 April before a human being intervened and the prices came back to earth." The culprit was "algorithmic pricing"; a toy simulation of the loop appears below. Thank goodness it wasn't used to order drone strikes.
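
Out of curiosity, here is a minimal simulation of that feedback loop. The 99.83% and 127% ratios come from the article; the starting price (the cheapest used copy) and the loop itself are my own reconstruction:

    # Simulating the two sellers' "algorithmic pricing" feedback loop.
    # The ratios are from the article; everything else is assumed.
    UNDERCUT = 0.9983  # seller B lists at 99.83% of seller A's price
    MARKUP = 1.27      # seller A reacts at 127% of seller B's new price

    price, rounds = 35.54, 0
    while price < 23_000_000:  # roughly the reported peak
        price *= UNDERCUT * MARKUP  # one full round of mutual reaction
        rounds += 1

    print(f"${price:,.2f} after {rounds} rounds")
    # Each round multiplies the price by 0.9983 * 1.27, about 1.268,
    # so it takes only ~57 rounds to climb from $35.54 past $23 million.
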
Ken Pimple, PAIT Project Director