Tuesday, 26 July 2011 18:00

Steven Levy on Letting Bots Do Our Tweeting for Us

Illustration: Eero Pitkänen

My tweets generally reflect a set of parochial interests that I continually revisit: the shuffle function in iTunes, the Phillies’ crummy batting lineup, reviews of my book, and, of course, the latest tech news associated with my stories. That’s not necessarily a bad thing: I imagine that those are subjects my followers are interested in as well.

But the predictability led me to speculate how simple it might be to create a program that would do the tweeting for me. Then I wondered whether it might be possible for such a program to emulate all of my social network activity. If such an autopilot were well implemented, it could analyze my output on Facebook, Twitter, and Foursquare and use that data to understand my interests, the sort of things I like to recommend, and the voice I use to communicate in those compacted formats, continuously refining its ability to be my proxy. That way, if I went off the grid—say, on a safari—I could just let the bot take the wheel.
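The kind of autopilot described above is, at its simplest, a text generator trained on one's own archive. As an illustrative sketch only (the column proposes no implementation), a toy Markov-chain model can learn word transitions from a handful of past posts and emit new ones in the same voice; the `corpus` here is a hypothetical stand-in for a real Twitter export:

```python
import random
from collections import defaultdict

def build_model(corpus, order=1):
    """Map each word-tuple prefix to the words that follow it in the corpus."""
    model = defaultdict(list)
    for post in corpus:
        words = post.split()
        for i in range(len(words) - order):
            prefix = tuple(words[i:i + order])
            model[prefix].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Emit a short post by walking the chain from a random starting prefix."""
    rng = random.Random(seed)
    prefix = rng.choice(sorted(model.keys()))
    out = list(prefix)
    for _ in range(length - len(prefix)):
        choices = model.get(tuple(out[-len(prefix):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical archive of past posts standing in for a real account history.
corpus = [
    "the shuffle function in iTunes baffles me again",
    "the Phillies lineup baffles me again tonight",
]
model = build_model(corpus)
print(generate(model, length=8, seed=42))
```

A real autopilot would need far richer modeling than word bigrams, of course, but the shape is the same: every post you make is training data for your proxy.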

Ideally, no one would realize this social autopilot wasn’t really me. If I happened to get trampled by a rhino in the veld, the program would keep going, and a part of me would continue griping about the Phils, quipping about Microsoft, and checking out new restaurants in the East Village. When it came to my bot, death would have no dominion.

A seminal 1950 paper by Alan Turing predicted that computers might someday become intelligent enough to pass as human and proposed a way to determine when that day arrived. The Turing test has since become the gold standard of artificial intelligence and the basis for an annual competition. But as Brian Christian portrays it in his new book, The Most Human Human, the test’s stilted mechanics—five-minute dialogs between judges and “remotes,” who are either people or computers—bear little relation to how people interact in the wild. (Christian participated in the 2009 competition, garnering the most votes from judges convinced he was a person. As always, the computers fell short.)

Wouldn’t it be more interesting if the test for AI were whether a social autopilot could convincingly replace the multifaceted social networking presence of an actual person? Victory would come when people failed to distinguish the bots from the live humans in their social graphs—the digital equivalent of The Crying Game. “It would be an indictment of online culture to see how far the autopilot could go,” Christian says.

David Ferrucci, the researcher who led the IBM Watson team that vanquished the human champions of Jeopardy, thinks much of this mimicry is possible. “It would be frightening how good it could be,” he says. “But then you’d see chinks in the armor.” Just as Watson misread a few Jeopardy clues with a wacky denseness that betrayed its nonhumanness, so would a social autopilot program.

But as software gets better, I suspect that bots might make fewer such gaffes or even, in some ways, surpass our performance. In her book Alone Together, sociologist Sherry Turkle observes that our personas on social networks are already fake—they’re not so much who we are as idealized projections of who we want to be. “It’s like being in a play,” as the subject of one of her studies explains. “You make a character.”

Doing this is hard work, Turkle writes, because we have difficulty squaring the actual details of our lives with the images we want to project. But computers are free of the ego and pretense that cloud the process for us. Once they get the basics right, social bots could prove to be more authentically fake than we are.
