Thursday, March 15, 2007
I'm writing this from the hospital...two big guys came by last night with a "message from Mr. Mathers."
-------------
Ok. That is a joke. I am sick, though, and not able to make it out to LI. I will likely be posting material relevant to our work on Identity here over the weekend, so check back often. You'll get papers back next week and a new writing topic, as well.
Peace, out.
Sunday, March 11, 2007
Tattoos/Identity/Skin


Here is a link to Shelley Jackson's website, The Ineradicable Stain, where you can read more about "Skin," a story being published one word at a time on the skin of 2,095 volunteers. Above is a collage of some of the words, as well as a photo of Jackson working on an allied project: a life-size drawing of a "map" of the words.
You can also read an interview with Ms. Jackson and look through the Wikipedia entry on her and her work.
Friday, March 9, 2007
Say Everything



Highly relevant... from this week's New York Magazine (an update to the NYT "screenagers" article?):
Say Everything
As younger people reveal their private lives on the Internet, the older generation looks on with alarm and misapprehension not seen since the early days of rock and roll. The future belongs to the uninhibited.
By Emily Nussbaum
“Yeah, I am naked on the Internet,” says Kitty Ostapowicz, laughing. “But I’ve always said I wouldn’t ever put up anything I wouldn’t want my mother to see.”
She hands me a Bud Lite. Kitty, 26, is a bartender at Kabin in the East Village, and she is frankly adorable, with bright-red hair, a button nose, and pretty features. She knows it, too: Kitty tells me that she used to participate in “ratings communities,” like “nonuglies,” where people would post photos to be judged by strangers. She has a MySpace page and a Livejournal. And she tells me that the Internet brought her to New York, when a friend she met in a chat room introduced her to his Website, which linked to his friends, one of whom was a photographer. Kitty posed for that photographer in Buffalo, where she grew up, then followed him to New York. “Pretty much just wanted a change,” she says. “A drastic, drastic change.”
Her Livejournal has gotten less personal over time, she tells me. At first it was “just a lot of day-to-day bullshit, quizzes and stuff,” but now she tries to “keep it concise to important events.” When I ask her how she thinks she’ll feel at 35, when her postings are a Google search away, she’s okay with that. “I’ll be proud!” she says. “It’s a documentation of my youth, in a way. Even if it’s just me, going back and Googling myself in 25 or 30 years. It’s my self—what I used to be, what I used to do.”
We settle up and I go home to search for Kitty’s profile. I’m expecting tame stuff: updates to friends, plus those blurry nudes. But, as it turns out, the photos we talked about (artistic shots of Kitty in bed or, in one picture, in a snowdrift, wearing stilettos) are the least revelatory thing I find. In posts tracing back to college, her story scrolls down my screen in raw and affecting detail: the death of her parents, her breakups, her insecurities, her ambitions. There are photos, but they are candid and unstylized, like a close-up of a tattoo of a butterfly, adjacent (explains the caption) to a bruise she got by bumping into the cash register. A recent entry encourages posters to share stories of sexual assault anonymously.
Some posts read like diary entries: “My period is way late, and I haven’t been laid in months, so I don’t know what the fuck is up.” There are bar anecdotes: “I had a weird guy last night come into work and tell me all about how if I were in the South Bronx, I’d be raped if I were lucky. It was totally unprovoked, and he told me all about my stupid generation and how he fought in Vietnam, and how today’s Navy and Marines are a bunch of pussies.” But the roughest material comes in her early posts, where she struggles with losing her parents. “I lost her four years ago today. A few hours ago to be precise,” she writes. “What may well be the worst day of my life.”
Talking to her the night before, I had liked Kitty: She was warm and funny and humble, despite the “nonuglies” business. But reading her Livejournal, I feel thrown off. Some of it makes me wince. Much of it is witty and insightful. Mainly, I feel bizarrely protective of her, someone I’ve met once—she seems so exposed. And that feeling makes me feel very, very old.
Because the truth is, at 26, Kitty is herself an old lady, in Internet terms. She left her teens several years before the revolution began in earnest: the forest of arms waving cell-phone cameras at concerts, the MySpace pages blinking pink neon revelations, Xanga and Sconex and YouTube and Lastnightsparty.com and Flickr and Facebook and del.icio.us and Wikipedia and especially, the ordinary, endless stream of daily documentation that is built into the life of anyone growing up today. You can see the evidence everywhere, from the rural 15-year-old who records videos for thousands of subscribers to the NYU students texting come-ons from beneath the bar. Even 9-year-olds have their own site, Club Penguin, to play games and plan parties. The change has rippled through pretty much every act of growing up. Go through your first big breakup and you may need to change your status on Facebook from “In a relationship” to “Single.” Everyone will see it on your “feed,” including your ex, and that’s part of the point.
Hey Nineteen
It’s been a long time since there was a true generation gap, perhaps 50 years—you have to go back to the early years of rock and roll, when old people still talked about “jungle rhythms.” Everything associated with that music and its greasy, shaggy culture felt baffling and divisive, from the crude slang to the dirty thoughts it was rumored to trigger in little girls. That musical divide has all but disappeared. But in the past ten years, a new set of values has sneaked in to take its place, erecting another barrier between young and old. And as it did in the fifties, the older generation has responded with a disgusted, dismissive squawk. It goes something like this:
Kids today. They have no sense of shame. They have no sense of privacy. They are show-offs, fame whores, pornographic little loons who post their diaries, their phone numbers, their stupid poetry—for God’s sake, their dirty photos!—online. They have virtual friends instead of real ones. They talk in illiterate instant messages. They are interested only in attention—and yet they have zero attention span, flitting like hummingbirds from one virtual stage to another.
“When it is more important to be seen than to be talented, it is hardly surprising that the less gifted among us are willing to fart our way into the spotlight,” sneers Lakshmi Chaudhry in the current issue of The Nation. “Without any meaningful standard by which to measure our worth, we turn to the public eye for affirmation.”
Clay Shirky, a 42-year-old professor of new media at NYU’s Interactive Telecommunications Program, who has studied these phenomena since 1993, has a theory about that response. “Whenever young people are allowed to indulge in something old people are not allowed to, it makes us bitter. What did we have? The mall and the parking lot of the 7-Eleven? It sucked to grow up when we did! And we’re mad about it now.” People are always eager to believe that their behavior is a matter of morality, not chronology, Shirky argues. “You didn’t behave like that because nobody gave you the option.”
None of this is to suggest that older people aren’t online, of course; they are, in huge numbers. It’s just that it doesn’t come naturally to them. “It is a constant surprise to those of us over a certain age, let’s say 30, that large parts of our life can end up online,” says Shirky. “But that’s not a behavior anyone under 30 has had to unlearn.” Despite his expertise, Shirky himself can feel the gulf growing between himself and his students, even in the past five years. “It used to be that we were all in this together. But now my job is not to demystify, but to get the students to see that it’s strange or unusual at all. Because they’re soaking in it.”
One night at Two Boots pizza, I meet some tourists visiting from Kansas City: Kent Gasaway, his daughter Hannah, and two of her friends. The girls are 15. They have identical shiny hair and Ugg boots, and they answer my questions in a tangle of upspeak. Everyone has a Facebook, they tell me. Everyone used to have a Xanga (“So seventh grade!”). They got computers in third grade. Yes, they post party pictures. Yes, they use “away messages.” When I ask them why they’d like to appear on a reality show, they explain, “It’s the fame and the—well, not the fame, just the whole, ‘Oh, my God, weren’t you on TV?’ ”
After a few minutes of this, I turn to Gasaway and ask if he has a Web page. He seems baffled by the question. “I don’t know why I would,” he says, speaking slowly. “I like my privacy.” He’s never seen Hannah’s Facebook profile. “I haven’t gone on it. I don’t know how to get into it!” I ask him if he takes pictures when he attends parties, and he looks at me like I have three heads. “There are a lot of weirdos out there,” he emphasizes. “There are a lot of strangers out there.”
There is plenty of variation among this younger cohort, including a set of Luddite dissenters: “If I want to contact someone, I’ll write them a letter!” grouses Katherine Gillespie, a student at Hunter College. (Although when I look her up online, I find that she too has a profile.) But these variations blur when you widen your view. One 2006 government study—framed, as such studies are, around the stranger-danger issue—showed that 61 percent of 13-to-17-year-olds have a profile online, half with photos. A recent Pew Internet Project study put it at 55 percent of 12-to-17-year-olds. These numbers are rising rapidly.
It’s hard to pinpoint when the change began. Was it 1992, the first season of The Real World? (Or maybe the third season, when cast members began to play to the cameras? Or the seventh, at which point the seven strangers were so media-savvy there was little difference between their being totally self-conscious and utterly unself-conscious?) Or you could peg the true beginning as that primal national drama of the Paris Hilton sex tape, those strange weeks in 2004 when what initially struck me as a genuine and indelible humiliation—the kind of thing that lost former Miss America Vanessa Williams her crown twenty years earlier—transformed, in a matter of days, from a shocker into no big deal, and then into just another piece of publicity, and then into a kind of power.
But maybe it’s a cheap shot to talk about reality television and Paris Hilton. Because what we’re discussing is something more radical if only because it is more ordinary: the fact that we are in the sticky center of a vast psychological experiment, one that’s only just begun to show results. More young people are putting more personal information out in public than any older person ever would—and yet they seem mysteriously healthy and normal, save for an entirely different definition of privacy. From their perspective, it’s the extreme caution of the earlier generation that’s the narcissistic thing. Or, as Kitty put it to me, “Why not? What’s the worst that’s going to happen? Twenty years down the road, someone’s gonna find your picture? Just make sure it’s a great picture.”
And after all, there is another way to look at this shift. Younger people, one could point out, are the only ones for whom it seems to have sunk in that the idea of a truly private life is already an illusion. Every street in New York has a surveillance camera. Each time you swipe your debit card at Duane Reade or use your MetroCard, that transaction is tracked. Your employer owns your e-mails. The NSA owns your phone calls. Your life is being lived in public whether you choose to acknowledge it or not.
So it may be time to consider the possibility that young people who behave as if privacy doesn’t exist are actually the sane people, not the insane ones. For someone like me, who grew up sealing my diary with a literal lock, this may be tough to accept. But under current circumstances, a defiant belief in holding things close to your chest might not be high-minded. It might be an artifact—quaint and naïve, like a determined faith that virginity keeps ladies pure. Or at least that might be true for someone who has grown up “putting themselves out there” and found that the benefits of being transparent make the risks worth it.
Shirky describes this generational shift in terms of pidgin versus Creole. “Do you know that distinction? Pidgin is what gets spoken when people patch things together from different languages, so it serves well enough to communicate. But Creole is what the children speak, the children of pidgin speakers. They impose rules and structure, which makes the Creole language completely coherent and expressive, on par with any language. What we are witnessing is the Creolization of media.”
That’s a cool metaphor, I respond. “I actually don’t think it’s a metaphor,” he says. “I think there may actually be real neurological changes involved.”
CHANGE 1: THEY THINK OF THEMSELVES AS HAVING AN AUDIENCE
I’m crouched awkwardly on the floor of Xiyin Tang’s Columbia dorm room, peering up at her laptop as she shows me her first blog entries, a 13-year-old Xiyin’s musings on Good Charlotte and the perfidy of her friends. A Warhol Marilyn print gazes over our shoulders. “I always find myself more motivated to write things,” Xiyin, now 19, explains, “when I know that somebody, somewhere, might be reading it.”
From the age of 8, Xiyin, who grew up in Maryland, kept a private journal on her computer. But in fifth grade, she decided to go public and created two online periodicals: a fashion ’zine and a newsletter for “stories and novellas and whatnot.” In sixth grade, she began distributing her journal to 200 readers. Even so, she still thought of this writing as personal.
“When I first started out with my Livejournal, I was very honest,” she remembers. “I basically wrote as if there was no one reading it. And if people wanted to read it, then great.” But as more people linked to her, she became correspondingly self-aware. By tenth grade, she was part of a group of about 100 mostly older kids who knew one another through “this web of MySpacing or Livejournal or music shows.” They called themselves “The Family” and centered their attentions around a local band called Spoont. When a Family member commented on Xiyin’s entries, it was a compliment; when someone “Friended” her, it was a bigger compliment. “So I would try to write things that would not put them off,” she remembers. “Things that were not silly. I tried to make my posts highly stylized and short, about things I would imagine people would want to read or comment on.”
Since she’s gone to college, she’s kept in touch with friends through her journal. Her romances have a strong online component. But lately she’s compelled by a new aspect of her public life, what she calls, with a certain hilarious spokeswoman-for-the-cause affect, the “party-photo phenomenon.” Xiyin clicks to her Facebook profile, which features 88 photos. Some are snapshots. Some are modeling poses she took for a friend’s portfolio. And then there are her MisShapes shots: images from a popular party in Tribeca, where photographers shoot attendees against a backdrop. In these photos, Xiyin wears eighties fashions—a thick belt and an asymmetrical top that give me my own high-school flashback—and strikes a world-weary pose. “To me, or to a lot of people, it’s like, why go to a party if you’re not going to get your picture taken?”
Among this gallery, one photo stands out: a window-view shot of Xiyin walking down below in the street, as if she’d been snapped by a spy camera. It’s part of a series of “stalker photos” a friend has been taking, she informs me: He snaps surreptitious, paparazzi-like photos of his friends and then uploads them and “tags” the images with their names, so they’ll come across them later. “Here’s one where he caught his friend Hannah talking on the phone.”
Xiyin knows there’s a scare factor in having such a big online viewership—you could get stalked for real, or your employer could bust you for partying. But her actual experience has been that if someone is watching, it’s probably a good thing. If you see a hot guy at a party, you can look up his photo and get in touch. When she worked at American Apparel, management posted encouraging remarks on employee MySpace pages. A friend was offered an internship by a magazine’s editor-in-chief after he read her profile. All sorts of opportunities—romantic, professional, creative—seem to Xiyin to be directly linked to her willingness to reveal herself a little.
When I was in high school, you’d have to be a megalomaniac or the most popular kid around to think of yourself as having a fan base. But people 25 and under are just being realistic when they think of themselves that way, says media researcher Danah Boyd, who calls the phenomenon “invisible audiences.” Since their early adolescence, they’ve learned to modulate their voice to address a set of listeners that may shrink or expand at any time: talking to one friend via instant message (who could cut-and-paste the transcript), addressing an e-mail distribution list (archived and accessible years later), arguing with someone on a posting board (anonymous, semi-anonymous, then linked to by a snarky blog). It’s a form of communication that requires a person to be constantly aware that anything you say can and will be used against you, but somehow not to mind.
This is an entirely new set of negotiations for an adolescent. But it does also have strong psychological similarities to two particular demographics: celebrities and politicians, people who have always had to learn to parse each sentence they form, unsure whether it will be ignored or redound into sudden notoriety (Macaca!). In essence, every young person in America has become, in the literal sense, a public figure. And so they have adopted the skills that celebrities learn in order not to go crazy: enjoying the attention instead of fighting it—and doing their own publicity before somebody does it for them.
CHANGE 2: THEY HAVE ARCHIVED THEIR ADOLESCENCE
I remember very little from junior-high school and high school, and I’ve always believed that was probably a good thing. Caitlin Oppermann, 17, has spent her adolescence making sure this doesn’t happen to her. At 12, she was blogging; at 14, she was snapping digital photos; at 15, she edited a documentary about her school marching band. But right now the high-school senior is most excited about her first “serious project,” caitlinoppermann.com. On it, she lists her e-mail and AIM accounts, complains about the school’s Web censors, and links to photos and videos. There’s nothing racy, but it’s the type of information overload that tends to terrify parents. Oppermann’s are supportive: “They know me and they know I’m not careless with the power I have on the Internet.”
As we talk, I peer into Oppermann’s bedroom. I’m at a café in the West Village, and Oppermann is in Kansas City—just like those Ugg girls, who might, for all I know, be linked to her somehow. And as we talk via iChat, her face floats in the corner of my screen, blonde and deadpan. By swiveling her Webcam, she gives me a tour: her walls, each painted a different color of pink; storage lockers; a subway map from last summer, when she came to Manhattan for a Parsons design fellowship. On one wall, I recognize a peace banner I’ve seen in one of her videos.
I ask her about that Xanga, the blog she kept when she was 12. Did she delete it?
“It’s still out there!” she says. “Xanga, a Blogger, a Facebook, my Flickr account, my Vimeo account. Basically, what I do is sign up for everything. I kind of weed out what I like.” I ask if she has a MySpace page, and she laughs and gives me an amused, pixellated grimace. “Unfortunately I do! I was so against MySpace, but I wanted to look at people’s pictures. I just really don’t like MySpace. ’Cause I think it’s just so … I don’t know if superficial is the right word. But plastic. These profiles of people just parading themselves. I kind of have it in for them.”
Oppermann prefers sites like Noah K Everyday, where a sad-eyed, 26-year-old Brooklyn man has posted a single photo of himself each day since he was 19, a low-tech piece of art that is oddly moving—capturing the way each day brings some small change. Her favorite site is Vimeo, a kind of hipster YouTube. (She’s become friends with the site’s creator, Jakob Lodwick, and when she visited New York, they went to the Williamsburg short-film festival.) The videos she’s posted there are mostly charming slices of life: a “typical day at a school,” hula-hooping in Washington Square Park, conversations set to music. Like Oppermann herself, they seem revelatory without being revealing, operating in a space midway between behavior and performance.
At 17, Oppermann is conversant with the conventional wisdom about the online world—that it’s a sketchy bus station packed with pedophiles. (In fact, that’s pretty much the standard response I’ve gotten when I’ve spoken about this piece with anyone over 39: “But what about the perverts?” For teenagers, who have grown up laughing at porn pop-ups and the occasional instant message from a skeezy stranger, this is about as logical as the question “How can you move to New York? You’ll get mugged!”) She argues that when it comes to online relationships, “you’re getting what you’re being.” All last summer, as she bopped around downtown Manhattan, Oppermann met dozens of people she already knew, or who knew her, from online. All of which means that her memories of her time in New York are stored both in her memory, where they will decay, and on her site, where they will not, giving her (and me) an unsettlingly crystalline record of her seventeenth summer.
Oppermann is not the only one squirreling away an archive of her adolescence, accidentally or on purpose. “I have a logger program that can show me drafts of a paper I wrote three years ago,” explains Melissa Mooneyham, a graduate of Hunter College. “And if someone says something in instant message, then later on, if you have an argument, you can say, ‘No, wait: You said this on this day at this time.’ ”
As for that defunct Xanga, Oppermann read it not long ago. “It was interesting. I just look at my junior-high self, kind of ignorant of what the future holds. And I thought, You know, I don’t think I gave myself enough credit: I’m really witty!” She pauses and considers. “If I don’t delete it, I’m still gonna be there. My generation is going to have all this history; we can document anything so easily. I’m a very sentimental person; I’m sure that has something to do with it.”
CHANGE 3: THEIR SKIN IS THICKER THAN YOURS
The biggest issue of living in public, of course, is simply that when people see you, they judge you. It’s no wonder Paris Hilton has become a peculiarly contemporary role model, blurring as she does the distinction between exposing oneself and being exposed, mortifying details spilling from her at regular intervals like hard candy from a piñata. She may not be likable, but she offers a perverse blueprint for surviving scandal: Just keep walking through those flames until you find a way to take them as a compliment.
This does not mean, as many an apocalyptic op-ed has suggested, that young people have no sense of shame. There’s a difference between being able to absorb embarrassment and not feeling it. But we live in a time in which humiliation and fame are not such easily distinguished quantities. And this generation seems to have a high tolerance for what used to be personal information splashed in the public square.
Consider Casey Serin. On Iamfacingforeclosure.com, the 24-year-old émigré from Uzbekistan has blogged a truly disastrous financial saga: He purchased eight houses in eight months, looking to “fix ’n’ flip,” only to end up in massive debt. The details, which include scans of his financial documents, are raw enough that people have accused him of being a hoax, à la YouTube’s Lonelygirl15. (“ForeclosureBoy24,” he jokes.) He’s real, he insists. Serin simply decided that airing his bad investments could win him helpful feedback—someone might even buy his properties. “A lot of people wonder, ‘Aren’t you embarrassed?’ Maybe it’s naïve, but I’m not going to run from responsibility.” Flaming commenters don’t bug him. And ironically, the impetus for the site came when Serin was denied a loan after a lender discovered an earlier, friends-only site. Rather than delete it, he swung the doors open. “Once you put something online, you really cannot take it back,” he points out. “You’ve got to be careful what you say—but once you say it, you’ve got to stand by it. And the only way to repair it is to continue to talk, to explain myself, to see it through. If I shut down, I’m at the mercy of what other people say.”
Any new technology has its victims, of course: the people who get caught during that ugly interregnum when a technology is new but no one knows how to use it yet. Take “Susie,” a girl whose real name I won’t use because I don’t want to make her any more Googleable. Back in 2000, Susie filmed some videos for her then-boyfriend: she stripped, masturbated, blew kisses at the Webcam—surely just one of many to use her new computer this way. Then someone (it’s not clear who, but probably her boyfriend’s roommate) uploaded the videos. This was years before YouTube, when Kazaa and Morpheus ruled. Susie’s films became the earliest viral videos and turned her into an accidental online porn star, with her own Wikipedia entry.
When I reached her at work, she politely took my information down and called back from her cell. And she told me that she’d made a choice that she knew set her outside her own generation. “I never do MySpace or Facebook,” she told me. “I’m deathly afraid to Google myself.” Instead, she’s become stoic, walling herself off from the exposure. “I’ve had to choose not to be upset about it because then I’d be upset all the time. They want a really strong reaction. I don’t want to be that person.”
She had another option, she knows: She could have embraced her notoriety. “I had everyone calling my mom: Dr. Phil, Jerry Springer, Playboy. I could have been like Paris Hilton, but that’s not me. That thing is so unlike my personality; it’s not the person I am. I guess I didn’t think it was real.” As these experiences become commonplace, she tells me, “it’s not going to be such a big deal for people. Because now it’s happened to a million people.”
And it’s true that in the years since Susie’s tapes went public, the leaked sex tape has become a perverse, established social convention; it happens at every high school and to every B-list celebrity. At Hunter College last year, a student named Elvin Chaung allegedly used Facebook accounts to blackmail female students into sending him nude photos. In movies like Road Trip, “oops porn” has become a comic convention, and the online stuff regularly includes a moment when the participant turns to the camera and says, “You’re not going to put this online, are you?”
But Susie is right: For better or worse, people’s responses have already begun to change. Just two years after her tapes were leaked, another girl had a tape released on the Internet. The poster was her ex, whom we’ll call Jim Bastard. It was a parody of the MasterCard commercial: listing funds spent on the relationship, then his “priceless” revenge for getting dumped—a clip of the two having sex. (To the casual viewer, the source of the embarrassment is somewhat unclear: The girl is gorgeous and the sex is not all that revealing, while the boy in question is wearing socks.) Then, after the credits, the money shot: her name, her e-mail addresses, and her AIM screen names.
Like Susie, the subject tried, unsuccessfully, to pull the video offline; she filed suit and transferred out of school. For legal reasons, she wouldn’t talk to me. But although she’s only two years younger than Susie, she hasn’t followed in her footsteps. She has a MySpace account. She has a Facebook account. She’s planned parties online. And shortly after one such party last October, a new site appeared on MySpace: seemingly a little revenge of her own. The community is titled “The Society to Chemically Castrate Jim Bastard,” and it features a picture of her tormentor with the large red letters LOSER written on his forehead—not the most high-minded solution, perhaps, but one alternative to retreating for good.
Like anyone who lives online, Xiyin Tang has been stung a few times by criticism, like the night she was reading BoredatButler.com, an anonymous Website posted on by Columbia students, and saw that someone had called her “pathetic and a whore.” She stared at her name for a while, she says. “At first, I got incredibly upset, thinking, Well now, all these people can just go Facebook me and point and form judgments.” Then she did what she knew she had to do: She brushed it off. “I thought, Well, I guess you have to be sort of honored that someone takes the time to write about you, good or bad.”
I tell Xiyin about Susie and her sex tape. She’s sympathetic with Susie’s emotional response, she says, but she’s most shocked by her decision to log off entirely. “My philosophy about putting things online is that I don’t have any secrets,” says Xiyin. “And whatever you do, you should be able to do it so that you’re not ashamed of it. And in that sense, I put myself out there online because I don’t care—I’m proud of what I do and I’m not ashamed of any aspect of that. And if someone forms a judgment about me, that’s their opinion.
“If that girl’s video got published, if she did it in the first place, she should be thick-skinned enough to just brush it off,” Xiyin muses. “I understand that it’s really humiliating and everything. But if something like that happened to me, I hope I’d just say, well, that was a terrible thing for a guy to do, to put it online. But I did it and that’s me. So I am a sexual person and I shouldn’t have to hide my sexuality. I did this for my boyfriend just like you probably do this for your boyfriend, just that yours is not published. But to me, it’s all the same. It’s either documented online for other people to see or it’s not, but either way you’re still doing it. So my philosophy is, why hide it?”
FUTURE SHOCK
For anyone over 30, this may be pretty hard to take. Perhaps you smell brimstone in the air, the sense of a devil’s bargain: Is this what happens when we are all, eternally, onstage? It’s not as if those fifties squares griping about Elvis were wrong, after all. As Clay Shirky points out, “All that stuff the elders said about rock and roll? They pretty much nailed it. Miscegenation, teenagers running wild, the end of marriage!”
Because the truth is, we’re living in frontier country right now. We can take guesses at the future, but it’s hard to gauge the effects of a drug while you’re still taking it. What happens when a person who has archived her teens grows up? Will she regret her earlier decisions, or will she love the sturdy bridge she’s built to her younger self—not to mention the access to the past lives of friends, enemies, romantic partners? On a more pragmatic level, what does this do when you apply for a job or meet the person you’re going to marry? Will employers simply accept that everyone has a few videos of themselves trying to read the Bible while stoned? Will your kids watch those stoner Bible videos when they’re 16? Is there a point in the aging process when a person will want to pull back that curtain—or will the MySpace crowd maintain these flexible, cheerfully thick-skinned personae all the way into the nursing home?
And when you talk to the true believers, it’s hard not to be swayed. Jakob Lodwick seems like he shouldn’t be that kind of idealist. He’s Caitlin Oppermann’s friend, the co-founder of Vimeo and a co-creator of the raunchy CollegeHumor.com. Lodwick originated a popular feature in which college girls post topless photos; one of his first online memories was finding Susie’s videos and thinking she seemed like the ideal girlfriend. But at 25, Lodwick has become rather sweetly enamored of the uses of video for things other than sex. His first viral breakthrough was a special-effects clip in which he runs into the street and appears to lie down in front of a moving bus—a convincing enough stunt that MSNBC, with classic older-generation cluelessness, used it to illustrate a segment about kids doing dangerous things on the Internet.
But that was just an ordinary film, he says: no different from a TV segment. What he’s really compelled by these days is the potential for self-documentation to deepen the intimacy of daily life. Back in college, Lodwick experimented with a Website on which he planned to post a profile of every person he knew. Suddenly he had fans, not just of his work, but of him. “There was a clear return on investment when I put myself out there: I get attention in return. And it felt good.” He began making “vidblogs,” aiming his camera at himself, then turning it around to capture “what I’d see. I’d try to edit as little as possible so I could catch, say, a one-second glimpse of conversation. And that was what resonated with people. It was like they were having a dream that only I could have had, by watching this four or five minutes. Like they were remembering my memories. It didn’t tell them what it was like to hang out with me. It showed them what it was like to be me.”
This is Jakob’s vision: a place where topless photos are no big deal—but also where everyone can be known, simply by making him- or herself a bit vulnerable. Still, even for someone like me who is struggling to embrace the online world, Lodwick’s vision can seem so utopian it tilts into the impossible. “I think we’re gradually moving away from the age of investing in something negative,” he muses about the crueler side of online culture. “For me, a fundamental principle is that if you like something, you should show your love for it; if you don’t like it, ignore it, don’t waste your time.” Before that great transition, some Susies will get crushed in the gears of change. But soon, he predicts, online worlds will become more like real life: Reputation will be the rule of law. People will be ashamed if they act badly, because they’ll be doing so in front of all 3,000 of their friends. “If it works in real life, why wouldn’t it work online?”
If this seems too good to be true, it’s comforting to remember that technology always has aftershocks. Surely, when telephones took off, there was a mourning period for that lost, glorious golden age of eye contact.
Right now the big question for anyone of my generation seems to be, endlessly, “Why would anyone do that?” This is not a meaningful question for a 16-year-old. The benefits are obvious: The public life is fun. It’s creative. It’s where their friends are. It’s theater, but it’s also community: In this linked, logged world, you have a place to think out loud and be listened to, to meet strangers and go deeper with friends. And, yes, there are all sorts of crappy side effects: the passive-aggressive drama (“you know who you are!”), the shaming outbursts, the chill a person can feel in cyberspace on a particularly bad day. There are lousy side effects of most social changes (see feminism, democracy, the creation of the interstate highway system). But the real question is, as with any revolution, which side are you on?
Sunday, March 4, 2007
Nature, the Anti-Webkinz...

In light of Maria's (101 TTh) work on the effects of a computer-centric culture on children, I'm posting about two interesting and related articles I read today.
The blog Boing Boing led me to an interesting article in Orion Magazine, "Leave No Child Inside." The piece, by Richard Louv, describes efforts to reclaim the idea of outdoor play for kids, who are increasingly under house arrest. Louv ascribes many benefits to outdoor play beyond simple physical fitness -- a connection to the outdoor, physical world, the numinous moments of natural beauty, the psychological benefits to distracted, hyper kids.
From Louv's essay:
"Urban, suburban, and even rural parents cite a number of everyday reasons why their children spend less time in nature than they themselves did, including disappearing access to natural areas, competition from television and computers, dangerous traffic, more homework, and other pressures. Most of all, parents cite fear of stranger-danger. Conditioned by round-the-clock news coverage, they believe in an epidemic of abductions by strangers, despite evidence that the number of child-snatchings (about a hundred a year) has remained roughly the same for two decades, and that the rates of violent crimes against young people have fallen to well below 1975 levels.
Yes, there are risks outside our homes. But there are also risks in raising children under virtual protective house arrest: threats to their independent judgment and value of place, to their ability to feel awe and wonder, to their sense of stewardship for the Earth—and, most immediately, threats to their psychological and physical health. The rapid increase in childhood obesity leads many health-care leaders to worry that the current generation of children may be the first since World War II to die at an earlier age than their parents. Getting kids outdoors more, riding bikes, running, swimming—and, especially, experiencing nature directly—could serve as an antidote to much of what ails the young."
A Boing Boing reader then posted a link to this fascinating article about Pott Row First School in Norfolk, England: "Half of all classes at the school are held outside, regardless of weather. Every kid in the school is issued a one-piece waterproof jumpsuit and rubber boots, and they gleefully suit up and head outside for math and art class in the rain."
Wednesday, February 28, 2007
Dr. Jekyll/Mr. Hyde Effect...

This is the article I mentioned in class about the phenomenon of "flaming," or otherwise reasonable people acting overly aggressively online because of anonymity. It appeared in the New York Times, and I'm posting it all here, rather than giving you a link, because the Times usually requires you to make an account and sign in before you read any of their precious data:
Flame First, Think Later: New Clues to E-Mail Misbehavior
February 20, 2007
By DANIEL GOLEMAN
Jett Lucas, a 14-year-old friend, tells me the kids in his middle school send one another a steady stream of instant messages through the day. But there’s a problem.
“Kids will say things to each other in their messages that are too embarrassing to say in person,” Jett tells me. “Then when they actually meet up, they are too shy to bring up what they said in the message. It makes things tense.”
Jett’s complaint seems to be part of a larger pattern plaguing the world of virtual communications, a problem recognized since the earliest days of the Internet: flaming, or sending a message that is taken as offensive, embarrassing or downright rude.
The hallmark of the flame is precisely what Jett lamented: thoughts expressed while sitting alone at the keyboard would be put more diplomatically — or go unmentioned — face to face.
Flaming has a technical name, the “online disinhibition effect,” which psychologists apply to the many ways people behave with less restraint in cyberspace.
In a 2004 article in the journal CyberPsychology & Behavior, John Suler, a psychologist at Rider University in Lawrenceville, N.J., suggested that several psychological factors lead to online disinhibition: the anonymity of a Web pseudonym; invisibility to others; the time lag between sending an e-mail message and getting feedback; the exaggerated sense of self from being alone; and the lack of any online authority figure. Dr. Suler notes that disinhibition can be either benign — when a shy person feels free to open up online — or toxic, as in flaming.
The emerging field of social neuroscience, the study of what goes on in the brains and bodies of two interacting people, offers clues into the neural mechanics behind flaming.
This work points to a design flaw inherent in the interface between the brain’s social circuitry and the online world. In face-to-face interaction, the brain reads a continual cascade of emotional signs and social cues, instantaneously using them to guide our next move so that the encounter goes well. Much of this social guidance occurs in circuitry centered on the orbitofrontal cortex, a center for empathy. This cortex uses that social scan to help make sure that what we do next will keep the interaction on track.
Research by Jennifer Beer, a psychologist at the University of California, Davis, finds that this face-to-face guidance system inhibits impulses for actions that would upset the other person or otherwise throw the interaction off. Neurological patients with a damaged orbitofrontal cortex lose the ability to modulate the amygdala, a source of unruly impulses; like small children, they commit mortifying social gaffes like kissing a complete stranger, blithely unaware that they are doing anything untoward.
Socially artful responses emerge largely in the neural chatter between the orbitofrontal cortex and emotional centers like the amygdala that generate impulsivity. But the cortex needs social information — a change in tone of voice, say — to know how to select and channel our impulses. And in e-mail there are no channels for voice, facial expression or other cues from the person who will receive what we say.
True, there are those cute, if somewhat lame, emoticons that cleverly arrange punctuation marks to signify an emotion. The e-mail equivalent of a mood ring, they surely lack the neural impact of an actual smile or frown. Without the raised eyebrow that signals irony, say, or the tone of voice that signals delight, the orbitofrontal cortex has little to go on. Lacking real-time cues, we can easily misread the printed words in an e-mail message, taking them the wrong way.
And if we are typing while agitated, the absence of information on how the other person is responding makes the prefrontal circuitry for discretion more likely to fail. Our emotional impulses disinhibited, we type some infelicitous message and hit “send” before a more sober second thought leads us to hit “discard.” We flame.
Flaming can be induced in some people with alarming ease. Consider an experiment, reported in 2002 in The Journal of Language and Social Psychology, in which pairs of college students — strangers — were put in separate booths to get to know each other better by exchanging messages in a simulated online chat room.
While coming and going into the lab, the students were well behaved. But the experimenter was stunned to see the messages many of the students sent. About 20 percent of the e-mail conversations immediately became outrageously lewd or simply rude.
And now, the online equivalent of road rage has joined the list of Internet dangers. Last October, in what The Times of London described as “Britain’s first ‘Web rage’ attack,” a 47-year-old Londoner was convicted of assault on a man with whom he had traded insults in a chat room. He and a friend tracked down the man and attacked him with a pickax handle and a knife.
One proposed solution to flaming is replacing typed messages with video. The assumption is that getting a message along with its emotional nuances might help us dampen the impulse to flame.
All this reminds me of a poster on the wall of classrooms I once visited in New Haven public schools. The poster, part of a program in social development that has lowered rates of violence in schools there, shows a stoplight. It says that when students feel upset, they should remember that the red light means to stop, calm down and think before they act. The yellow light prompts them to weigh a range of responses, and their consequences. The green light urges them to try the best response.
Not a bad idea. Until the day e-mail comes in video form, I may just paste one of those stoplights next to my monitor.
Julian Dibbell article, "A Rape in Cyberspace"
I mentioned Julian Dibbell's oft-reprinted essay, "A Rape in Cyberspace," in Tuesday's class. Some students were interested in reading it for the assignment. Here's a link to it. Although it was written in 1993, much of what it has to say about how virtual societies work still applies. Or you could think about it in terms of how much has changed in cyberspace since the days of "textworlds" like LambdaMOO. Either way, I think you'll find it interesting.
Thursday, February 15, 2007
Life Hackers: Heroes or Victims of Multi-tasking?
It occurred to me after I posted the links to sites like 43 Folders that the whole "life hacking" phenomenon would be useful to consider for the paper.
You can dig up a lot on the life-hacking meme (and also find out what a "meme" is) just using Google. But I remembered a good article on the phenomenon that was in the New York Times a couple of years ago. Since it was published in their Sunday magazine supplement, you can't access it via the Historical New York Times database available through our library.
So I decided to post the whole thing here:
Meet the Life Hackers
By CLIVE THOMPSON
New York Times: October 16, 2005
In 2000, Gloria Mark was hired as a professor at the University of California at Irvine. Until then, she was working as a researcher, living a life of comparative peace. She would spend her days in her lab, enjoying the sense of serene focus that comes from immersing yourself for hours at a time in a single project. But when her faculty job began, that all ended. Mark would arrive at her desk in the morning, full of energy and ready to tackle her to-do list -- only to suffer an endless stream of interruptions. No sooner had she started one task than a colleague would e-mail her with an urgent request; when she went to work on that, the phone would ring. At the end of the day, she had been so constantly distracted that she would have accomplished only a fraction of what she set out to do. ''Madness,'' she thought. ''I'm trying to do 30 things at once.''
Lots of people complain that office multitasking drives them nuts. But Mark is a scientist of ''human-computer interactions'' who studies how high-tech devices affect our behavior, so she was able to do more than complain: she set out to measure precisely how nuts we've all become. Beginning in 2004, she persuaded two West Coast high-tech firms to let her study their cubicle dwellers as they surfed the chaos of modern office life. One of her grad students, Victor Gonzalez, sat looking over the shoulder of various employees all day long, for a total of more than 1,000 hours. He noted how many times the employees were interrupted and how long each employee was able to work on any individual task.
When Mark crunched the data, a picture of 21st-century office work emerged that was, she says, ''far worse than I could ever have imagined.'' Each employee spent only 11 minutes on any given project before being interrupted and whisked off to do something else. What's more, each 11-minute project was itself fragmented into even shorter three-minute tasks, like answering e-mail messages, reading a Web page or working on a spreadsheet. And each time a worker was distracted from a task, it would take, on average, 25 minutes to return to that task. To perform an office job today, it seems, your attention must skip like a stone across water all day long, touching down only periodically.
Yet while interruptions are annoying, Mark's study also revealed their flip side: they are often crucial to office work.
Sure, the high-tech workers grumbled and moaned about disruptions, and they all claimed that they preferred to work in long, luxurious stretches. But they grudgingly admitted that many of their daily distractions were essential to their jobs. When someone forwards you an urgent e-mail message, it's often something you really do need to see; if a cellphone call breaks through while you're desperately trying to solve a problem, it might be the call that saves your hide. In the language of computer sociology, our jobs today are ''interrupt driven.'' Distractions are not just a plague on our work -- sometimes they are our work. To be cut off from other workers is to be cut off from everything.
For a small cadre of computer engineers and academics, this realization has begun to raise an enticing possibility: perhaps we can find an ideal middle ground. If high-tech work distractions are inevitable, then maybe we can re-engineer them so we receive all of their benefits but few of their downsides. Is there such a thing as a perfect interruption?
Mary Czerwinski first confronted this question while working, oddly enough, in outer space. She is one of the world's leading experts in interruption science, and she was hired in 1989 by Lockheed to help NASA design the information systems for the International Space Station. NASA had a problem: how do you deliver an interruption to a busy astronaut? On the space station, astronauts must attend to dozens of experiments while also monitoring the station's warning systems for potentially fatal mechanical errors. NASA wanted to ensure that its warnings were perfectly tuned to the human attention span: if a warning was too distracting, it could throw off the astronauts and cause them to mess up million-dollar experiments. But if the warnings were too subtle and unobtrusive, they might go unnoticed, which would be even worse. The NASA engineers needed something that would split the difference.
Czerwinski noticed that all the information the astronauts received came to them as plain text and numbers. She began experimenting with different types of interruptions and found that it was the style of delivery that was crucial. Hit an astronaut with a textual interruption, and he was likely to ignore it, because it would simply fade into the text-filled screens he was already staring at. Blast a horn and he would definitely notice it -- but at the cost of jangling his nerves. Czerwinski proposed a third way: a visual graphic, like a pentagram whose sides changed color based on the type of problem at hand, a solution different enough from the screens of text to break through the clutter.
The science of interruptions began more than 100 years ago, with the emergence of telegraph operators -- the first high-stress, time-sensitive information-technology jobs. Psychologists discovered that if someone spoke to a telegraph operator while he was keying a message, the operator was more likely to make errors; his cognition was scrambled by mentally ''switching channels.'' Later, psychologists determined that whenever workers needed to focus on a job that required the monitoring of data, presentation was all-important. Using this knowledge, cockpits for fighter pilots were meticulously planned so that each dial and meter could be read at a glance.
Still, such issues seemed remote from the lives of everyday workers -- even information workers -- simply because everyday work did not require parsing screenfuls of information. In the 90's, this began to change, and change quickly. As they became ubiquitous in the workplace, computers, which had until then been little more than glorified word-processors and calculators, began to experience a rapid increase in speed and power. ''Multitasking'' was born; instead of simply working on one program for hours at a time, a computer user could work on several different ones simultaneously. Corporations seized on this as a way to squeeze more productivity out of each worker, and technology companies like Microsoft obliged them by transforming the computer into a hub for every conceivable office task, and laying on the available information with a trowel. The Internet accelerated this trend even further, since it turned the computer from a sealed box into our primary tool for communication. As a result, office denizens now stare at computer screens of mind-boggling complexity, as they juggle messages, text documents, PowerPoint presentations, spreadsheets and Web browsers all at once. In the modern office we are all fighter pilots.
You can dig up a lot on the life hacking meme (and also find out what a "meme" is) just using Google. But I remembered a good article on the phenomenon that was in the New York Times a couple of years ago. Since it was published in their Sunday magazine supplement, you can't access it via the Historical New York Times database available through our library.
So I decided to post the whole thing here:
Meet the Life Hackers
By CLIVE THOMPSON
New York Times: October 16, 2005
In 2000, Gloria Mark was hired as a professor at the University of California at Irvine. Until then, she was working as a researcher, living a life of comparative peace. She would spend her days in her lab, enjoying the sense of serene focus that comes from immersing yourself for hours at a time in a single project. But when her faculty job began, that all ended. Mark would arrive at her desk in the morning, full of energy and ready to tackle her to-do list -- only to suffer an endless stream of interruptions. No sooner had she started one task than a colleague would e-mail her with an urgent request; when she went to work on that, the phone would ring. At the end of the day, she had been so constantly distracted that she would have accomplished only a fraction of what she set out to do. ''Madness,'' she thought. ''I'm trying to do 30 things at once.''
Lots of people complain that office multitasking drives them nuts. But Mark is a scientist of ''human-computer interactions'' who studies how high-tech devices affect our behavior, so she was able to do more than complain: she set out to measure precisely how nuts we've all become. Beginning in 2004, she persuaded two West Coast high-tech firms to let her study their cubicle dwellers as they surfed the chaos of modern office life. One of her grad students, Victor Gonzalez, sat looking over the shoulder of various employees all day long, for a total of more than 1,000 hours. He noted how many times the employees were interrupted and how long each employee was able to work on any individual task.
When Mark crunched the data, a picture of 21st-century office work emerged that was, she says, ''far worse than I could ever have imagined.'' Each employee spent only 11 minutes on any given project before being interrupted and whisked off to do something else. What's more, each 11-minute project was itself fragmented into even shorter three-minute tasks, like answering e-mail messages, reading a Web page or working on a spreadsheet. And each time a worker was distracted from a task, it would take, on average, 25 minutes to return to that task. To perform an office job today, it seems, your attention must skip like a stone across water all day long, touching down only periodically.
Yet while interruptions are annoying, Mark's study also revealed their flip side: they are often crucial to office work.
Sure, the high-tech workers grumbled and moaned about disruptions, and they all claimed that they preferred to work in long, luxurious stretches. But they grudgingly admitted that many of their daily distractions were essential to their jobs. When someone forwards you an urgent e-mail message, it's often something you really do need to see; if a cellphone call breaks through while you're desperately trying to solve a problem, it might be the call that saves your hide. In the language of computer sociology, our jobs today are ''interrupt driven.'' Distractions are not just a plague on our work -- sometimes they are our work. To be cut off from other workers is to be cut off from everything.
For a small cadre of computer engineers and academics, this realization has begun to raise an enticing possibility: perhaps we can find an ideal middle ground. If high-tech work distractions are inevitable, then maybe we can re-engineer them so we receive all of their benefits but few of their downsides. Is there such a thing as a perfect interruption?
Mary Czerwinski first confronted this question while working, oddly enough, in outer space. She is one of the world's leading experts in interruption science, and she was hired in 1989 by Lockheed to help NASA design the information systems for the International Space Station. NASA had a problem: how do you deliver an interruption to a busy astronaut? On the space station, astronauts must attend to dozens of experiments while also monitoring the station's warning systems for potentially fatal mechanical errors. NASA wanted to ensure that its warnings were perfectly tuned to the human attention span: if a warning was too distracting, it could throw off the astronauts and cause them to mess up million-dollar experiments. But if the warnings were too subtle and unobtrusive, they might go unnoticed, which would be even worse. The NASA engineers needed something that would split the difference.
Czerwinski noticed that all the information the astronauts received came to them as plain text and numbers. She began experimenting with different types of interruptions and found that it was the style of delivery that was crucial. Hit an astronaut with a textual interruption, and he was likely to ignore it, because it would simply fade into the text-filled screens he was already staring at. Blast a horn and he would definitely notice it -- but at the cost of jangling his nerves. Czerwinski proposed a third way: a visual graphic, like a pentagram whose sides changed color based on the type of problem at hand, a solution different enough from the screens of text to break through the clutter.
The science of interruptions began more than 100 years ago, with the emergence of telegraph operators -- the first high-stress, time-sensitive information-technology jobs. Psychologists discovered that if someone spoke to a telegraph operator while he was keying a message, the operator was more likely to make errors; his cognition was scrambled by mentally ''switching channels.'' Later, psychologists determined that whenever workers needed to focus on a job that required the monitoring of data, presentation was all-important. Using this knowledge, cockpits for fighter pilots were meticulously planned so that each dial and meter could be read at a glance.
Still, such issues seemed remote from the lives of everyday workers -- even information workers -- simply because everyday work did not require parsing screenfuls of information. In the 90's, this began to change, and change quickly. As they became ubiquitous in the workplace, computers, which had until then been little more than glorified word-processors and calculators, began to experience a rapid increase in speed and power. ''Multitasking'' was born; instead of simply working on one program for hours at a time, a computer user could work on several different ones simultaneously. Corporations seized on this as a way to squeeze more productivity out of each worker, and technology companies like Microsoft obliged them by transforming the computer into a hub for every conceivable office task, and laying on the available information with a trowel. The Internet accelerated this trend even further, since it turned the computer from a sealed box into our primary tool for communication. As a result, office denizens now stare at computer screens of mind-boggling complexity, as they juggle messages, text documents, PowerPoint presentations, spreadsheets and Web browsers all at once. In the modern office we are all fighter pilots.
Information is no longer a scarce resource -- attention is. David Rose, a Cambridge, Mass.-based expert on computer interfaces, likes to point out that 20 years ago, an office worker had only two types of communication technology: a phone, which required an instant answer, and postal mail, which took days. ''Now we have dozens of possibilities between those poles,'' Rose says. How fast are you supposed to reply to an e-mail message? Or an instant message? Computer-based interruptions fall into a sort of Heisenbergian uncertainty trap: it is difficult to know whether an e-mail message is worth interrupting your work for unless you open and read it -- at which point you have, of course, interrupted yourself. Our software tools were essentially designed to compete with one another for our attention, like needy toddlers.
The upshot is something that Linda Stone, a software executive who has worked for both Apple and Microsoft, calls ''continuous partial attention'': we are so busy keeping tabs on everything that we never focus on anything. This can actually be a positive feeling, inasmuch as the constant pinging makes us feel needed and desired. The reason many interruptions seem impossible to ignore is that they are about relationships -- someone, or something, is calling out to us. It is why we have such complex emotions about the chaos of the modern office, feeling alternately drained by its demands and exhilarated when we successfully surf the flood.
''It makes us feel alive,'' Stone says. ''It's what makes us feel important. We just want to connect, connect, connect. But what happens when you take that to the extreme? You get overconnected.'' Sanity lies on the path down the center -- if only there was some way to find it.
It is this middle path that Czerwinski and her generation of computer scientists are now trying to divine. When I first met her in the corridors of Microsoft, she struck me as a strange person to be studying the art of focusing, because she seemed almost attention-deficit disordered herself: a 44-year-old with a pageboy haircut and the electric body language of a teenager. ''I'm such a spaz,'' she said, as we went bounding down the hallways to the cafeteria for a ''bio-break.'' When she ushered me into her office, it was a perfect Exhibit A of the go-go computer-driven life: she had not one but three enormous computer screens, festooned with perhaps 30 open windows -- a bunch of e-mail messages, several instant messages and dozens of Web pages. Czerwinski says she regards 20 solid minutes of uninterrupted work as a major triumph; often she'll stay in her office for hours after work, crunching data, since that's the only time her outside distractions wane.
In 1997, Microsoft recruited Czerwinski to join Microsoft Research Labs, a special division of the firm where she and other eggheads would be allowed to conduct basic research into how computers affect human behavior. Czerwinski discovered that the computer industry was still strangely ignorant of how people really used their computers. Microsoft had sold tens of millions of copies of its software but had never closely studied its users' rhythms of work and interruption. How long did they linger on a single document? What interrupted them while they were working, and why?
To figure this out, she took a handful of volunteers and installed software on their computers that would virtually shadow them all day long, recording every mouse click. She discovered that computer users were as restless as hummingbirds. On average, they juggled eight different windows at the same time -- a few e-mail messages, maybe a Web page or two and a PowerPoint document. More astonishing, they would spend barely 20 seconds looking at one window before flipping to another.
Why the constant shifting? In part it was because of the basic way that today's computers are laid out. A computer screen offers very little visual real estate. It is like working at a desk so small that you can look at only a single sheet of paper at a time. A Microsoft Word document can cover almost an entire screen. Once you begin multitasking, a computer desktop very quickly becomes buried in detritus.
This is part of the reason that, when someone is interrupted, it takes 25 minutes to cycle back to the original task. Once their work becomes buried beneath a screenful of interruptions, office workers appear to literally forget what task they were originally pursuing. We do not like to think we are this flighty: we might expect that if we are, say, busily filling out some forms and are suddenly distracted by a phone call, we would quickly return to finish the job. But we don't. Researchers find that 40 percent of the time, workers wander off in a new direction when an interruption ends, distracted by the technological equivalent of shiny objects. The central danger of interruptions, Czerwinski realized, is not really the interruption at all. It is the havoc they wreak with our short-term memory: What the heck was I just doing?
When Gloria Mark and Mary Czerwinski, working separately, looked at the desks of the people they were studying, they each noticed the same thing: Post-it notes. Workers would scrawl hieroglyphic reminders of the tasks they were supposed to be working on (''Test PB patch DAN's PC -- Waiting for AL,'' was one that Mark found). Then they would place them directly in their fields of vision, often in a halo around the edge of their computer screens. The Post-it notes were, in essence, a jury-rigged memory device, intended to rescue users from those moments of mental wandering.
For Mark and Czerwinski, these piecemeal efforts at coping pointed to ways that our high-tech tools could be engineered to be less distracting. When Czerwinski walked around the Microsoft campus, she noticed that many people had attached two or three monitors to their computers. They placed their applications on different screens -- the e-mail far off on the right side, a Web browser on the left and their main work project right in the middle -- so that each application was ''glanceable.'' When the ding on their e-mail program went off, they could quickly peek over at their in-boxes to see what had arrived.
The workers swore that this arrangement made them feel calmer. But did more screen area actually help with cognition? To find out, Czerwinski's team conducted another experiment. The researchers took 15 volunteers, sat each one in front of a regular-size 15-inch monitor and had them complete a variety of tasks designed to challenge their powers of concentration -- like a Web search, some cutting and pasting and memorizing a seven-digit phone number. Then the volunteers repeated these same tasks, this time using a computer with a massive 42-inch screen, as big as a plasma TV.
The results? On the bigger screen, people completed the tasks at least 10 percent more quickly -- and some as much as 44 percent more quickly. They were also more likely to remember the seven-digit number, which showed that the multitasking was clearly less taxing on their brains. Some of the volunteers were so enthralled with the huge screen that they begged to take it home. In two decades of research, Czerwinski had never seen a single tweak to a computer system so significantly improve a user's productivity. The clearer your screen, she found, the calmer your mind. So her group began devising tools that maximized screen space by grouping documents and programs together -- making it possible to easily spy them out of the corner of your eye, ensuring that you would never forget them in the fog of your interruptions. Another experiment created a tiny round window that floats on one side of the screen; moving dots represent information you need to monitor, like the size of your in-box or an approaching meeting. It looks precisely like the radar screen in a military cockpit.
In late 2003, the technology writer Danny O'Brien decided he was fed up with not getting enough done at work. So he sat down and made a list of 70 of the most ''sickeningly overprolific'' people he knew, most of whom were software engineers of one kind or another. O'Brien wrote a questionnaire asking them to explain how, precisely, they managed such awesome output. Over the next few weeks they e-mailed their replies, and one night O'Brien sat down at his dining-room table to look for clues. He was hoping that the self-described geeks all shared some common tricks.
He was correct. But their suggestions were surprisingly low-tech. None of them used complex technology to manage their to-do lists: no Palm Pilots, no day-planner software. Instead, they all preferred to find one extremely simple application and shove their entire lives into it. Some of O'Brien's correspondents said they opened up a single document in a word-processing program and used it as an extra brain, dumping in everything they needed to remember -- addresses, to-do lists, birthdays -- and then just searched through that file when they needed a piece of information. Others used e-mail -- mailing themselves a reminder of every task, reasoning that their in-boxes were the one thing they were certain to look at all day long.
In essence, the geeks were approaching their frazzled high-tech lives as engineering problems -- and they were not waiting for solutions to emerge from on high, from Microsoft or computer firms. Instead they ginned up a multitude of small-bore fixes to reduce the complexities of life, one at a time, in a rather Martha Stewart-esque fashion.
Many of O'Brien's correspondents, it turned out, were also devotees of ''Getting Things Done,'' a system developed by David Allen, a personal-productivity guru who consults with Fortune 500 corporations and whose seminars fill Silicon Valley auditoriums with anxious worker bees. At the core of Allen's system is the very concept of memory that Mark and Czerwinski hit upon: unless the task you're doing is visible right in front of you, you will half-forget about it when you get distracted, and it will nag at you from your subconscious. Thus, as soon as you are interrupted, Allen says, you need either to quickly deal with the interruption or -- if it's going to take longer than two minutes -- to faithfully add the new task to your constantly updated to-do list. Once the interruption is over, you immediately check your to-do list and go back to whatever is at the top.
''David Allen essentially offers a program that you can run like software in your head and follow automatically,'' O'Brien explains. ''If this happens, then do this. You behave like a robot, which of course really appeals to geeks.''
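[An aside from me, not part of Thompson's article: Allen's rule really is simple enough to write out as a few lines of code. Below is a toy Python sketch of the "two-minute" decision O'Brien is describing -- the task names, the time estimates and the single to-do list are all made up for illustration, not anything from Allen's actual materials.]

todo_list = ["finish quarterly report"]   # a stand-in for the "constantly updated to-do list"

def do_now(task):
    print("Handling immediately:", task)

def handle_interruption(task, estimated_minutes):
    # Allen's rule: deal with anything quick right away,
    # park everything else on the list, then return to the top item.
    if estimated_minutes <= 2:
        do_now(task)
    else:
        todo_list.append(task)
    return todo_list[0]

handle_interruption("answer a hallway question", 1)
next_task = handle_interruption("review a 30-page draft", 45)
print("Back to:", next_task)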
O'Brien summed up his research in a speech called ''Life Hacks,'' which he delivered in February 2004 at the O'Reilly Emerging Technology Conference. Five hundred conference-goers tried to cram into his session, desperate for tips on managing info chaos. When O'Brien repeated the talk the next year, it was mobbed again. By the summer of 2005, the ''life hacks'' meme had turned into a full-fledged grass-roots movement. Dozens of ''life hacking'' Web sites now exist, where followers of the movement trade suggestions on how to reduce chaos. The ideas are often quite clever: O'Brien wrote for himself a program that, whenever he's surfing the Web, pops up a message every 10 minutes demanding to know whether he's procrastinating. It turns out that a certain amount of life-hacking is simply cultivating a monklike ability to say no.
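[Again, my own aside: O'Brien's anti-procrastination nag is easy to imagine in code. A bare-bones Python version might look like the sketch below -- the ten-minute interval comes from the article, but everything else (the prompt text, the console input) is a placeholder for whatever his real program did.]

import time

NAG_INTERVAL_SECONDS = 10 * 60   # "every 10 minutes," per the article

def nag_loop():
    while True:
        time.sleep(NAG_INTERVAL_SECONDS)
        answer = input("It's been 10 minutes -- are you procrastinating? (y/n) ")
        if answer.strip().lower() == "y":
            print("Close the browser and check the top of your to-do list.")

if __name__ == "__main__":
    nag_loop()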
''In fairness, I think we bring some of this on ourselves,'' says Merlin Mann, the founder of the popular life-hacking site 43folders.com. ''We'd rather die than be bored for a few minutes, so we just surround ourselves with distractions. We've got 20,000 digital photos instead of 10 we treasure. We have more TV Tivo'd than we'll ever see.'' In the last year, Mann has embarked on a 12-step-like triage: he canceled his Netflix account, trimmed his instant-messaging ''buddy list'' so only close friends can contact him and set his e-mail program to bother him only once an hour. (''Unless you're working in a Korean missile silo, you don't need to check e-mail every two minutes,'' he argues.)
Mann's most famous hack emerged when he decided to ditch his Palm Pilot and embrace a much simpler organizing style. He bought a deck of 3-by-5-inch index cards, clipped them together with a binder clip and dubbed it ''The Hipster P.D.A.'' -- an ultra-low-fi organizer, running on the oldest memory technology around: paper.
In the 1920's, the Russian scientist Bluma Zeigarnik performed an experiment that illustrated an intriguing aspect of interruptions. She had several test subjects work on jigsaw puzzles, then interrupted them at various points. She found that the ones least likely to complete the task were those who had been disrupted at the beginning. Because they hadn't had time to become mentally invested in the task, they had trouble recovering from the distraction. In contrast, those who were interrupted toward the end of the task were more likely to stay on track.
Gloria Mark compares this to the way that people work when they are ''co-located'' -- sitting next to each other in cubicles -- versus how they work when they are ''distributed,'' each working from different locations and interacting online. She discovered that people in open-cubicle offices suffer more interruptions than those who work remotely. But they have better interruptions, because their co-workers have a social sense of what they are doing. When you work next to other people, they can sense whether you're deeply immersed, panicking or relatively free and ready to talk -- and they interrupt you accordingly.
So why don't computers work this way? Instead of pinging us with e-mail and instant messages the second they arrive, our machines could store them up -- to be delivered only at an optimum moment, when our brains are mostly relaxed.
One afternoon I drove across the Microsoft campus to visit a man who is trying to achieve precisely that: a computer that can read your mind. His name is Eric Horvitz, and he is one of Czerwinski's closest colleagues in the lab. For the last eight years, he has been building networks equipped with artificial intelligence (A.I.) that carefully observes a computer user's behavior and then tries to predict that sweet spot -- the moment when the user will be mentally free and ready to be interrupted.
Horvitz booted the system up to show me how it works. He pointed to a series of bubbles on his screen, each representing one way the machine observes Horvitz's behavior. For example, it measures how long he's been typing or reading e-mail messages; it notices how long he spends in one program before shifting to another. Even more creepily, Horvitz told me, the A.I. program will -- a little like HAL from ''2001: A Space Odyssey'' -- eavesdrop on him with a microphone and spy on him using a Webcam, to try and determine how busy he is, and whether he has company in his office. Sure enough, at one point I peeked into the corner of Horvitz's computer screen and there was a little red indicator glowing.
''It's listening to us,'' Horvitz said with a grin. ''The microphone's on.''
It is no simple matter for a computer to recognize a user's ''busy state,'' as it turns out, because everyone is busy in his own way. One programmer who works for Horvitz is busiest when he's silent and typing for extended periods, since that means he's furiously coding. But for a manager or executive, sitting quietly might actually be an indication of time being wasted; managers are more likely to be busy when they are talking or if PowerPoint is running.
In the early days of training Horvitz's A.I., you must clarify when you're most and least interruptible, so the machine can begin to pick up your personal patterns. But after a few days, the fun begins -- because the machine takes over and, using what you've taught it, tries to predict your future behavior. Horvitz clicked an onscreen icon for ''Paul,'' an employee working on a laptop in a meeting room down the hall. A little chart popped up. Paul, the A.I. program reported, was currently in between tasks -- but it predicted that he would begin checking his e-mail within five minutes. Thus, Horvitz explained, right now would be a great time to e-mail him; you'd be likely to get a quick reply. If you wanted to pay him a visit, the program also predicted that -- based on his previous patterns -- Paul would be back in his office in 30 minutes.
With these sorts of artificial smarts, computer designers could re-engineer our e-mail programs, our messaging and even our phones so that each tool would work like a personal butler -- tiptoeing around us when things are hectic and barging in only when our crises have passed. Horvitz's early prototypes offer an impressive glimpse of what's possible. An e-mail program he produced seven years ago, code-named Priorities, analyzes the content of your incoming e-mail messages and ranks them based on the urgency of the message and your relationship with the sender, then weighs that against how busy you are. Superurgent mail is delivered right away; everything else waits in a queue until you're no longer busy. When Czerwinski first tried the program, it gave her as much as three hours of solid work time before nagging her with a message. The software also determined, to the surprise of at least one Microsoft employee, that e-mail missives from Bill Gates were not necessarily urgent, since Gates tends to write long, discursive notes for employees to meditate on.
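[One more aside from me: the triage logic Thompson describes can be sketched in a few lines. This is not Horvitz's actual system -- the scoring weights, thresholds and field names below are invented -- but it shows the basic idea of weighing a message's urgency and sender against how busy you currently seem, and queuing whatever doesn't make the cut.]

from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    urgency: float            # 0..1, guessed from the message content
    sender_closeness: float   # 0..1, how important the sender is to you
    text: str = ""

@dataclass
class Inbox:
    busyness: float = 0.5            # 0..1, estimated from typing, calendar, etc.
    queued: List[Message] = field(default_factory=list)

    def receive(self, msg):
        score = 0.6 * msg.urgency + 0.4 * msg.sender_closeness
        if score > self.busyness:    # important enough to interrupt right now
            print("Deliver now:", msg.text)
        else:                        # otherwise hold it until the user frees up
            self.queued.append(msg)

    def on_user_idle(self):
        for msg in self.queued:
            print("Delivered later:", msg.text)
        self.queued.clear()

inbox = Inbox(busyness=0.8)
inbox.receive(Message(0.95, 0.9, "The server is down"))
inbox.receive(Message(0.2, 0.3, "A long, discursive memo"))
inbox.on_user_idle()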
This raises a possibility both amusing and disturbing: perhaps if we gave artificial brains more control over our schedules, interruptions would actually decline -- because A.I. doesn't panic. We humans are Pavlovian; even though we know we're just pumping ourselves full of stress, we can't help frantically checking our e-mail the instant the bell goes ding. But a machine can resist that temptation, because it thinks in statistics. It knows that only an extremely rare message is so important that we must read it right now.
So will Microsoft bring these calming technologies to our real-world computers? ''Could Microsoft do it?'' asks David Gelernter, a Yale professor and longtime critic of today's computers. ''Yeah. But I don't know if they're motivated by the lust for simplicity that you'd need. They're more interested in piling more and more toys on you.''
The near-term answer to the question will come when Vista, Microsoft's new operating system, is released in the fall of 2006. Though Czerwinski and Horvitz are reluctant to speculate on which of their innovations will be included in the new system, Horvitz said that the system will ''likely'' incorporate some way of detecting how busy you are. But he admitted that ''a bunch of features may not be shipping with Vista.'' He says he believes that Microsoft will eventually tame the interruption-driven workplace, even if it takes a while. ''I have viewed the task as a 'moon mission' that I believe that Microsoft can pull off,'' he says.
By a sizable margin, life hackers are devotees not of Microsoft but of Apple, the company's only real rival in the creation of operating systems -- and a company that has often seemed to intuit the need for software that reduces the complexity of the desktop. When Apple launched its latest operating system, Tiger, earlier this year, it introduced a feature called Dashboard -- a collection of glanceable programs, each of which performs one simple function, like displaying the weather. Tiger also includes a single-key tool that zooms all open windows into a bingo-card-like grid, uncovering any ''lost'' ones. A superpowered search application speeds up the laborious task of hunting down a missing file. Microsoft is now playing catch-up; Vista promises many of the same tweaks, although it will most likely add a few new ones as well, including, possibly, a 3-D mode for seeing all the windows you have open.
Apple's computers have long been designed specifically to soothe the confusions of the technologically ignorant. For years, that meant producing computer systems that seemed simpler than the ones Microsoft produced, but were less powerful. When computers moved relatively slowly and the Internet was little used, raw productivity -- shoving the most data at the user -- mattered most, and Microsoft triumphed in the marketplace. But for many users, simplicity now trumps power. Linda Stone, the software executive who has worked alongside the C.E.O.'s of both Microsoft and Apple, argues that we have shifted eras in computing. Now that multitasking is driving us crazy, we treasure technologies that protect us. We love Google not because it brings us the entire Web but because it filters it out, bringing us the one page we really need. In our new age of overload, the winner is the technology that can hold the world at bay.
Yet the truth is that even Apple might not be up to the task of building the ultimately serene computer. After all, even the geekiest life hackers find they need to trick out their Apples with duct-tape-like solutions; and even that sometimes isn't enough. Some experts argue that the basic design of the computer needs to change: so long as computers deliver information primarily through a monitor, they have an inherent bottleneck -- forcing us to squeeze the ocean of our lives through a thin straw. David Rose, the Cambridge designer, suspects that computers need to break away from the screen, delivering information through glanceable sources in the world around us, the way wall clocks tell us the time in an instant. For computers to become truly less interruptive, they might have to cease looking like computers. Until then, those Post-it notes on our monitors are probably here to stay.
Granularity, the art of breaking things down
Here's a possibly useful discussion of how to approach breaking things down for writing a paper. It's written by an English professor for the blog Life Hack. Life Hack is a website that covers all kinds of ways to become more productive and efficient and to get more value from your time and work. Take a look at Granularity For Students. You may find some useful advice (the comments underneath the post are also interesting, I think).
FYI, other popular sites devoted to "life hacking" are Lifehacker, 43 Folders and 43 Things.
First essay assignment
On the digital communications technology revolution.
Length: 3 1/2 to 4 double-spaced, typewritten and stapled pages
Due: to be decided
In class we’ve been examining various arguments about and examples of the “digital communications revolution.” We’ve looked at texts in our textbook, read things on the web and talked about them in class.
The first assignment is a place for you to bring some focus to our discussion and put your own thoughts into written form. For this assignment I would like you to write an essay reflecting on one aspect of “change(s)” brought about by the rapid proliferation of microchip technologies and the advent of technology-enabled communication media. Pick one social aspect of this change, give examples, summarize what you know about it, and then discuss your own experiences.
Very simple.
But also potentially difficult, since I am asking you to conceptualize: you need to sort through all your notes and class material and decide how various things can be categorized as “topics.” Then you have to decide whether or not these topics could be labeled “social effects/consequences of digital communications technology.” Then you have to break this down even further into subtopics and organize them into an essay.
Where to start?
One way would be to think about your essay in terms of its parts. It must have a beginning, a middle and an end. It must cover three things: 1) general arguments about “the third technological revolution,” 2) the specifically social aspect of this change that you’re focusing on, and 3) your experiences and practices as a “netizen.”
The thinking and planning parts will probably take as long as the writing part---that’s good, that’s what I intend. I want this assignment to be an actual opportunity for students to work with ideas. That’s why I’m not putting a due date on this yet. I want to talk in class about the “planning stages” of the paper: summarize our discussions, work on breaking this into separate topics and so on. But you should start your preliminary thinking now and try to figure out what it is that you want to focus your paper on.
NB: Remember that all essays must be titled. A title is one of the elements which distinguishes a piece of formal writing from an informal series of notes. Your title should reflect something pertinent to your discussion. "Paper One," "Essay," "Digital Communications Technology," and the like are not adequate essay titles.
Webkinz

----Portrait of a Webkinz Kiddie Krack Addict----
In Tuesday's class, Maria brought in a clipping about a new children's toy: Webkinz. Like Beanie Babies, they are "limited edition" collectible stuffed animals, but with a difference. "Webkinz pets are lovable plush pets that each come with a unique Secret Code. With it, you enter Webkinz World where you care for your virtual pet, answer trivia, earn KinzCash, and play the best kids games on the net!" at least according to the hype on their website.
They're hyper money makers, too. The marketing news website, MarketVox, has an interesting article about their use of "social media" as a marketing tool. All in all, Webkinz look to be a product thoroughly embedded in digital communications technology on every level: product, activity, sales.
Maria raised some interesting questions in class about the consequences of such a toy. While marketing pundit B.L. Ochman thinks they're the greatest thing since sliced bread, she makes her living hyping cash cows and not examining the social, political and emotional fallout of, well, greed. She raves, "Besides teaching children to type, it helps them learn reading, spelling, logical thinking," and, perhaps, also kiddy gambling. But the FAQs maintain, "While the Wishing Well and Wheel of WOW use visual metaphors that are sometimes associated with gambling, there is no gambling involved."
Somehow I don't go to a product's own FAQs for reassurance about its dark side. But then my job is to train people to think and hers is to stoke consumption frenzy.
A brief search via Technorati, the search engine for blogs linked to on the side of this page, produced this interesting discussion. More in line with Maria's remarks, this blogger wondered about how such web-based play would affect the already expanding problem of child obesity. She also quotes from a Washington Post article on the Webkinz phenomenon:
""Play always reflects the adult world," said Christopher Byrne, an independent toy analyst who goes by the Toy Guy. "It's kids aspiring to have a MySpace page, but cognitively and developmentally, they're not ready for that. This gives them the experience of sharing and connecting with friends."
While I have my doubts about Toy Guy's "independence" as an analyst----how are his remarks different from any industry shill's?----he does zero in better than B.S. Ochman on what the toy may in fact be teaching: how to use the internet for some purposes, but not others.
And I think that might be an interesting point of entry for a good essay.
Wednesday, February 14, 2007
Classes Cancelled for Bad Weather
I just got an email from the NCC administration that all day classes today---February 14---are canceled. Even though the MW 101 section begins at 5:00 pm, it's considered a day class, so you're all free.
Happy Valentine's Day!
On a more serious note, I will be posting the first essay assignment here later today or tomorrow. I hope students have a chance to read it and start thinking and maybe even asking questions over in the discussion area.
Don't worry about the due date, we'll sort that out when we're next in class February 26th.
Happy President's Day!
Monday, February 12, 2007
Keitais and Thumb Tribes
In our last discussion about the current culture of multi-tasking, some students in the TTh section brought up the way that many technogadgets today are "bundled" for multiple uses and multiple media. I mentioned the even more loaded keitai of Japan, cellphones which push the boundaries of all-in-one functionality.
I was reminded of this whimsical take on the phenomenon by the multi-tasking musician Momus, which first appeared in Wired Magazine:
My First Screen Kiss 
Walking through Tokyo's Ginza district one Friday evening last month I saw an extraordinary sight that will soon become an ordinary one: A businessman was talking into his keitai (the Japanese word for cell phone), holding it out in front of him rather than to his ear. Suddenly, smiling, he raised the device to his lips and kissed the screen.
It wasn't hard to piece together an explanation -- the man was making a video call to his lover. His lover had asked for a screen kiss, or perhaps they'd synchronized one. It was my first glimpse of this behavior, and it happened in Tokyo, but I knew it wouldn't be my last. Soon enough we will see this scene repeated in New York, London, Paris, Berlin and San Francisco.

As we've read by now in countless articles, the keitai isn't just a new technology, it's a new culture. New cultures bring new sights to the streets. Today I want to make a list of some of the ways keitai culture has made itself apparent in the two months I've been here in Japan.
First, though, I should explain that I'm very much a laptop guy. I like the big screen, the fully-featured internet with pictures. If I'm at home, I'm almost always on my laptop, surfing Wi-Fi. When I go out, I want to escape the web's sticky threads. I want to see people, and I want to see life. But there's a problem. Increasingly, when I go out here in Osaka, what I'm observing in public places is people silently surfing on their i-mode keitais. I tear myself away from the internet only to enjoy endless vistas of other people using it.
I shouldn't be surprised. Japan Media Review tells us that there are 89 million keitai subscriptions in Japan. Seventy percent of the population owns at least one keitai. This saturation has a very literal impact on my movements through the city: it's not unusual to have to jump out of the way of a young man wobbling along Osaka's narrow backstreets on a bicycle, concentrating on the glowing screen of his keitai. Perhaps he's lost and consulting a GPS navigation service, or, who knows, he may even be reading a Wired News column in translated, stripped-down Hotwired i-mode format. He may be reading me, which would be great, but has he seen me?
Xeni Jardin, reviewing a book called Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, compared the dextrous multi-tasking skills of Japan's oyayubi zoku (literally "thumb tribe") to Japanese folk heroes. Sontoku Ninomiya and Prince Shotoku Taishi were medieval multi-taskers so intelligent that they could, so the story goes, listen to what 10 people were saying, all speaking at once. But could they ride bicycles at the same time?
It's also a little worrying to see two girls in a cafe running out of things to say and sitting face to face in silence, each reading their keitai screen. The massive success of keitai culture in Japan is largely due to the decision, taken in the late '90s, to market the phones to women and young people. It would be sad if their online conversations had silenced their cafe conversations.
Then again, information ubiquity is great. You can sew facts into conversations on the fly. It's great, for instance, when you're in the middle of a six- or seven-hour drinking and eating session in a reggae izakaya, and someone mentions an island where there's an art installation, and with a few clicks you can call up and save the details of exactly how to get there.
It's also wonderful to see keitais being used for art themselves. My favorite Japanese photographer, Rinko Kawauchi, shows her photographs (taken with conventional cameras and mounted on gallery walls) all over the world. But if you can't get to one of her exhibitions, you can still see the photo diary Rinko keeps online, each day illustrated with one of her beautiful, subtle and understated keitai phone-camera snaps.
It may be a while before I see scenes like the ones I've been describing outside Japan; researcher Cho-Nan Michael Tsai estimates that "although the US was a dominant player in the telecommunication industry, today its cellular phone service industry is about 3 to 5 years behind Asia." Meanwhile, every day seems to bring more convergences and innovations: Keitais become mp3 players, they become e-wallets (you can now pay for goods in Japanese convenience stores by placing your phone against a reader), they become portable Playstations.
Given the theme of my last Wired News column about a return to the simple life, you might not be too surprised to hear that my favourite new keitai is one that comes full circle back to the simplest sort of telephone. Marketed to old people scared by today's increasingly complex everything-but-the-kitchen sink phones, the Tu-Ka S dispenses with all the frills. There's no camera, no mp3 player, no Playstation, no GPS, no e-cash, not even a screen. There's just voice service, a mike, an earpiece, a red Stop button, a green Go button and the numbers.
Don't be surprised if you see me kissing one.

Wednesday, February 7, 2007
Take a look at this video (for both sections):
The Machine is Us/ing Us
I found this little thing on YouTube. It's a nice, compact video essay that hits every one of the issues we've been unpacking in class: Web 2.0, hypertext, social networking, the non-linearity of digital text, blogging, new forms of creativity, etc., and it does so in a visually elegant fashion. It also points to many topics for further discussion that I've "asterisked" for down the road in class: identity, copyright, ethics, etc.
Watch it several times and take notes so we can talk about it in class. You may also want to check out the comments posted underneath it, especially the disparaging ones.