Five or six years ago, I attended a lecture on the science of attention. A philosopher who conducts research over in the medical school was talking about attention blindness, the basic feature of the human brain that, when we concentrate intensely on one task, causes us to miss just about everything else. Because we can't see what we can't see, our lecturer was determined to catch us in the act. He had us watch a video of six people tossing basketballs back and forth, three in white shirts and three in black, and our task was to keep track only of the tosses among the people in white. I hadn't seen the video back then, although it's now a classic, featured on punk-style TV shows or YouTube versions enacted at frat houses under less than lucid conditions. The tape rolled, and everyone began counting.
Everyone except me. I'm dyslexic, and the moment I saw that grainy tape with the confusing basketball tossers, I knew I wouldn't be able to keep track of their movements, so I let my mind wander. My curiosity was piqued, though, when about 30 seconds into the tape, a gorilla sauntered in among the players. She (we later learned a female student was in the gorilla suit) stared at the camera, thumped her chest, and then strode away while they continued passing the balls.
When the tape stopped, the philosopher asked how many people had counted at least a dozen basketball tosses. Hands went up all over. He then asked who had counted 13, 14, and congratulated those who'd scored the perfect 15. Then he asked, "And who saw the gorilla?"
I raised my hand and was surprised to discover I was the only person at my table and one of only three or four in the large room to do so. He'd set us up, trapping us in our own attention blindness. Yes, there had been a trick, but he wasn't the one who had played it on us. By concentrating so hard on counting, we had managed to miss the gorilla in the midst.
Attention blindness is the fundamental structuring principle of the brain, and I believe that it presents us with a tremendous opportunity. My take is different from that of many neuroscientists: Where they perceive the shortcomings of the individual, I sense an opportunity for collaboration. Fortunately, given the interactive nature of most of our lives in the digital age, we have the tools to harness our different forms of attention and take advantage of them.
It's not easy to acknowledge that everything we've learned about how to pay attention means that we've been missing everything else. It's not easy for us rational, competent, confident types to admit that the very key to our success—our ability to pinpoint a problem and solve it, an achievement honed in all those years in school and beyond—may be exactly what limits us. For more than a hundred years, we've been training people to see in a particularly individual, deliberative way. No one ever told us that our way of seeing excluded everything else.
I want to suggest a different way of seeing, one that's based on multitasking our attention—not by seeing it all alone but by distributing various parts of the task among others dedicated to the same end. For most of us, this is a new pattern of attention. Multitasking is the ideal mode of the 21st century, not just because of information overload but also because our digital age was structured without anything like a central node broadcasting one stream of information that we pay attention to at a given moment. On the Internet, everything links to everything, and all of it is available all the time.
Unfortunately, current practices of our educational institutions—and workplaces—are a mismatch between the age we live in and the institutions we have built over the last 100-plus years. The 20th century taught us that completing one task before starting another one was the route to success. Everything about 20th-century education, like the 20th-century workplace, has been designed to reinforce our attention to regular, systematic tasks that we take to completion. Attention to task is at the heart of industrial labor management, from the assembly line to the modern office, and of educational philosophy, from grade school to graduate school.
The Newsweek cover story proclaimed, "iPod, Therefore I Am."
On MTV News, it was "Dude, I just got a free iPod!"
Peter Jennings smirked at the ABC-TV news audience, "Shakespeare on the iPod? Calculus on the iPod?"
And the staff of the Duke Chronicle was apoplectic: "The University seems intent on transforming the iPod into an academic device, when the simple fact of the matter is that iPods are made to listen to music. It is an unnecessarily expensive toy that does not become an academic tool simply because it is thrown into a classroom."
What had those pundits so riled up? In 2003, we at Duke were approached by Apple about becoming one of six Apple Digital Campuses. Each college would choose a technology that Apple was developing and propose a campus use for it. It would be a partnership of business and education, exploratory in all ways. We chose a flashy new music-listening gadget that young people loved but that baffled most adults.
When we gave a free iPod to every member of the entering first-year class, there were no conditions. We simply asked students to dream up learning applications for this cool little white device with the adorable earbuds, and we invited them to pitch their ideas to the faculty. If one of their professors decided to use iPods in a course, the professor, too, would receive a free Duke-branded iPod, and so would all the students in the class (whether they were first-years or not).
This was an educational experiment without a syllabus. No lesson plan. No assessment matrix rigged to show that our investment had been a wise one. No assignment to count the basketballs. After all, as we knew from the science of attention, to direct attention in one way precluded all the other ways. If it were a reality show, we might have called it Project Classroom Makeover.
At the time, I was vice provost for interdisciplinary studies at Duke, a position equivalent to what in industry would be the R&D person, and I was among those responsible for cooking up the iPod experiment. In the world of technology, "crowdsourcing" means inviting a group to collaborate on a solution to a problem, but that term didn't yet exist in 2003. It was coined by Jeff Howe of Wired magazine in 2006 to refer to the widespread Internet practice of posting an open call requesting help in completing some task, whether writing code (that's how much of the open-source code that powers the Mozilla browser was written) or creating a winning logo (like the "Birdie" design of Twitter, which cost a total of six bucks).
In the iPod experiment, we were crowdsourcing educational innovation for a digital age. Crowdsourced thinking is very different from "credentialing," or relying on top-down expertise. If anything, crowdsourcing is suspicious of expertise, because the more expert we are, the more likely we are to be limited in what we conceive to be the problem, let alone the answer.
Once the pieces were in place, we decided to take our educational experiment one step further. By giving the iPods to first-year students, we ended up with a lot of angry sophomores, juniors, and seniors. They'd paid hefty private-university tuition, too! So we relented and said any student could have a free iPod—just so long as she persuaded a professor to require one for a course and came up with a learning app in that course. Does that sound sneaky? Far be it from me to say that we planned it.
The real treasure trove was to be found in the students' innovations. Working together, and often alongside their professors, they came up with far more learning apps for their iPods than anyone—even at Apple—had dreamed possible. Most predictable were uses whereby students downloaded audio archives relevant to their courses—Nobel Prize acceptance speeches by physicists and poets, the McCarthy hearings, famous trials. Almost instantly, students figured out that they could record lectures on their iPods and listen at their leisure.
Interconnection was the part the students grasped before any of us did. Students who had grown up connected digitally gravitated to ways that the iPod could be used for collective learning. They turned iPods into social media and networked their learning in ways we did not anticipate. In the School of the Environment, one class interviewed families in a North Carolina community concerned with lead paint in their homes and schools, commented on one another's interviews, and together created an audio documentary that aired on local and regional radio stations and all over the Web. In the music department, students uploaded their own compositions to their iPods so their fellow students could listen and critique.
After eight years in Duke's central administration, I was excited to take the methods we had gleaned from the iPod experiment back into the classroom. I decided to offer a new course called "This Is Your Brain on the Internet," a title that pays homage to Daniel J. Levitin's inspiring book This Is Your Brain on Music (Dutton, 2006), a kind of music-lover's guide to the brain. Levitin argues that music makes complex circuits throughout the brain, requiring different kinds of brain function for listening, processing, and producing, and thus makes us think differently. Substitute the word "Internet" for "music," and you've got the gist of my course.
I advertised the class widely, and I was delighted to look over the roster of the 18 students in the seminar and find more than 18 majors, minors, and certificates represented. I created a bare-bones suggested reading list that included, for example, articles in specialized journals like Cognition and Developmental Neuropsychology, pieces in popular magazines like Wired and Science, novels, and memoirs. There were lots of Web sites, too, of course, but I left the rest loose. This class was structured to be peer-led, with student interest and student research driving the design. "Participatory learning" is one term used to describe how we can learn together from one another's skills. "Cognitive surplus" is another used in the digital world for that "more than the sum of the parts" form of collaborative thinking that happens when groups think together online.
We used a method that I call "collaboration by difference." Collaboration by difference is an antidote to attention blindness. It signifies that the complex and interconnected problems of our time cannot be solved by anyone alone, and that those who think they can act in an entirely focused, solitary fashion are undoubtedly missing the main point that is right there in front of them, thumping its chest and staring them in the face. Collaboration by difference respects and rewards different forms and levels of expertise, perspective, culture, age, ability, and insight, treating difference not as a deficit but as a point of distinction. It always seems more cumbersome in the short run to seek out divergent and even quirky opinions, but it turns out to be efficient in the end and necessary for success if one seeks an outcome that is unexpected and sustainable. That's what I was aiming for.
I had the students each contribute a new entry or amend an existing entry on Wikipedia, or find another public forum where they could contribute to public discourse. There was still a lot of criticism about the lack of peer review in Wikipedia entries, and some professors were banning Wikipedia use in the classroom. I didn't understand that. Wikipedia is an educator's fantasy, all the world's knowledge shared voluntarily and free in a format theoretically available to all, and which anyone can edit. Instead of banning it, I challenged my students to use their knowledge to make Wikipedia better. All conceded that it had turned out to be much harder to get their work to "stick" on Wikipedia than it was to write a traditional term paper.
Given that I was teaching a class based on learning and the Internet, having my students blog was a no-brainer. I supplemented that with more traditionally structured academic writing, a term paper. When I had both samples in front of me, I discovered something curious. Their writing online, at least in their blogs, was incomparably better than in the traditional papers. In fact, given all the tripe one hears from pundits about how the Internet dumbs our kids down, I was shocked that elegant bloggers often turned out to be the clunkiest and most pretentious of research-paper writers. Term papers rolled in that were shot through with jargon, stilted diction, poor word choice, rambling thoughts, and even pretentious grammatical errors (such as the ungrammatical but proper-sounding use of "I" instead of "me" as an object of a preposition).
But it got me thinking: What if bad writing is a product of the form of writing required in college—the term paper—and not necessarily intrinsic to a student's natural writing style or thought process? I hadn't thought of that until I read my students' lengthy, weekly blogs and saw the difference in quality. If students are trying to figure out what kind of writing we want in order to get a good grade, communication is secondary. What if "research paper" is a category that invites, even requires, linguistic and syntactic gobbledygook?
Research indicates that, at every age level, people take their writing more seriously when it will be evaluated by peers than when it is to be judged by teachers. Online blogs directed at peers exhibit fewer typographical and factual errors, less plagiarism, and generally better, more elegant and persuasive prose than classroom assignments by the same writers. Andrea Lunsford, a professor of English at Stanford University, conducted longitudinal studies of student writers, assessing their writing year after year. Lunsford surprised everyone with her finding that students were becoming more literate, rhetorically dexterous, and fluent—not less, as many feared. The Internet, she discovered, had allowed them to develop their writing.
The semester flew by, and we went wherever it took us. The objective was to get rid of a lot of the truisms about "the dumbest generation" and actually look at how new theories of the brain and of attention might help us understand how forms of thinking and collaborating online maximize brain activity. We spent a good deal of time thinking about how accident, disruption, distraction, and difference increase the motivation to learn and to solve problems, both individually and collectively. To find examples, we spent time with a dance ensemble rehearsing a new piece, a jazz band improvising together, and teams of surgeons and computer programmers performing robotic surgery. We walked inside a monkey's brain in a virtual-reality cave. In another virtual-reality environment, we found ourselves trembling, unable to step off what we knew was a two-inch drop, because it looked as if we were on a ledge over a deep canyon.
One of our readings was On Intelligence (Times Books, 2004), a unified theory of the brain written by Jeff Hawkins (the inventor of the Palm Pilot) with Sandra Blakeslee. I agree with many of Hawkins's ideas about the brain's "memory-prediction framework." My own interest is in how memories—reinforced behaviors from the past—predict future learning, and in how we can intentionally disrupt that pattern to spark innovation and creativity. Hawkins is interested in how we can use the pattern to create next-generation artificial intelligence that will enhance the performance, and profitability, of computerized gadgets like the Palm Pilot. The students and I had been having a heated debate about his theories when a student discovered that Hawkins happened to be in our area to give a lecture. I was away at a meeting when suddenly my BlackBerry was vibrating with e-mails and IMs from my students, who had convened the class without me to present a special guest on a special topic: Jeff Hawkins debating the ideas of Jeff Hawkins. It felt a bit like the gag in the classic Woody Allen movie Annie Hall, when someone in the line to purchase movie tickets is expounding pompously on the ideas of Marshall McLuhan and then McLuhan himself steps into the conversation.
It was that kind of class.
"Jeff Hawkins thought it was odd that we decided to hold cla** when you weren't there," one student texted me. "Why wouldn't we? That's how it works in 'This Is Your Brain on the Internet.'"
Project Classroom Makeover. I heard the pride. "Step aside, Prof Davidson: This is a university!"
"Nonsense!"
"Absurd!"
"A wacko holding forth on a soapbox. If Prof Davidson just wants to yammer and lead discussions, she should resign her position and head for a park or subway platform, and pa** a hat for donations."
Some days, it's not easy being Prof Davidson.
What caused the ruckus in the blogosphere this time was a post I wrote on HASTAC, an online network dedicated to new forms of learning for a digital age that I co-founded in 2002. The post, "How to Crowdsource Grading," proposed a form of assessment that I planned to use the next time I taught "This Is Your Brain on the Internet."
It was my students' fault, really. By the end of the course, I felt confident. I settled in with their evaluations, waiting for the accolades to flow, a pedagogical shower of appreciation. And mostly that's what I read, thankfully. But there was one group of students who had some candid feedback, and it took me by surprise. They said everything about the course had been bold, new, and exciting.
Everything, that is, except the grading.
They pointed out that I had used entirely conventional methods for testing and evaluating their work. We had talked as a class about the new modes of assessment on the Internet—like public commenting on products and services and leaderboards (peer evaluations adapted from sports sites)—where the consumer of content could also evaluate that content. These students said they loved the class but were perplexed that my assessment method had been so 20th century: Midterm. Final. Research paper. Graded A, B, C, D. The students were right. You couldn't get more 20th century than that.
The students signed their names to the course evaluations. It turned out the critics were A+ students. That stopped me in my tracks. If you're a teacher worth your salt, you pay attention when the A+ students say something is wrong.
I was embarrassed that I had overlooked such a crucial part of our brain on the Internet. I contacted my students and said they'd made me rethink some very old habits. Unlearning. I promised I would rectify my mistake the next time I taught the course. I thought about my promise, came up with what seemed like a good system, then wrote about it in my blog.
My new grading method, which set off such waves of vitriol, combined old-fashioned contract grading with peer review. Contract grading goes back at least to the 1960s. In it, the requirements of a course are laid out in advance, and students contract to do all of the assignments or only some of them. A student with a heavy course load who doesn't need an A, for example, might contract to do everything but the final project and then, according to the contract, earn a B. It's all very adult.
But I also wanted some quality control. So I added the crowdsourcing component based on the way I had already structured the course. Since pairs of students were leading each class session and also responding to their peers' required weekly reading blogs, why not have them determine whether the blogs were good enough to count as fulfilling the terms of the contract? If a blog didn't pass muster, it would be the task of the student leaders that week to tell the blogger and offer feedback on what would be required for it to count. Student leaders for a class period would have to do that carefully, for next week a classmate would be evaluating their work.
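To make the mechanics concrete, here is a minimal sketch in Python of how such a contract-plus-peer-review scheme might be modeled. Everything in it is a hypothetical placeholder: the contract tiers, the word-count threshold, and the names are mine for illustration, not the system the course actually used, and in the real class the pass-or-fail call and the feedback were human judgments made by that week's student leaders.

```python
from dataclasses import dataclass, field

# Hypothetical contract tiers: each grade maps to the set of requirements
# a student agrees, up front, to complete (classic 1960s contract grading).
CONTRACTS = {
    "A": {"weekly_blogs", "class_leadership", "midterm_project", "final_project"},
    "B": {"weekly_blogs", "class_leadership", "midterm_project"},
    "C": {"weekly_blogs", "class_leadership"},
}

@dataclass
class Blog:
    author: str
    week: int
    text: str
    passed: bool = False
    feedback: str = ""

@dataclass
class Student:
    name: str
    contracted_grade: str
    completed: set = field(default_factory=set)
    blogs: list = field(default_factory=list)

def peer_review(blog: Blog, leaders: list) -> None:
    """That week's student leaders decide whether a blog passes muster.
    The word-count check is a stand-in; in practice this was human
    judgment plus written feedback."""
    if blog.author in leaders:
        return  # leaders don't evaluate their own posts
    blog.passed = len(blog.text.split()) >= 150
    if not blog.passed:
        blog.feedback = "Engage this week's readings more fully for the post to count."

def final_grade(student: Student, weeks: int) -> str:
    """A student earns the contracted grade only if every contracted
    requirement is met, including a passing blog for every week."""
    required = CONTRACTS[student.contracted_grade]
    blogs_ok = sum(b.passed for b in student.blogs) == weeks
    if "weekly_blogs" in required and not blogs_ok:
        return "incomplete: revise and resubmit the flagged blogs"
    if required - student.completed - {"weekly_blogs"}:
        return "incomplete: contracted work still missing"
    return student.contracted_grade

if __name__ == "__main__":
    alex = Student("Alex", contracted_grade="B")
    for week in (1, 2):
        post = Blog("Alex", week, "reflections on this week's readings " * 40)
        peer_review(post, leaders=["Jordan", "Sam"])
        alex.blogs.append(post)
    alex.completed.update({"class_leadership", "midterm_project"})
    print(final_grade(alex, weeks=2))  # prints "B" once all contracted work passes
```

The design point is simply that the instructor sets the contract up front, while the week-to-week quality control circulates among the students themselves.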
I also liked the idea of students' each having a turn at being the one giving the grades. That's not a role most students experience, even though every study of learning shows that you learn best by teaching someone else. Besides, if constant public self-presentation and constant public feedback are characteristics of a digital age, why aren't we rethinking how we evaluate, measure, test, assess, and create standards? Isn't that another aspect of our brain on the Internet?
There are many ways of crowdsourcing, and mine was simply to extend the concept of peer leadership to grading. The blogosphere was convinced that either I or my students would be pulling a fast one if the grading were crowdsourced and students had a role in it. That says to me that we don't believe people can learn unless they are forced to, unless they know it will "count on the test." As an educator, I find that very depressing. As a student of the Internet, I also find it implausible. If you give people the means to self-publish—whether it's a photo from their iPhone or a blog—they do so. They seem to love learning and sharing what they know with others. But much of our emphasis on grading is based on the assumption that learning is like cod-liver oil: It is good for you, even though it tastes horrible going down. And much of our educational emphasis is on getting one answer right on one test—as if that says something about the quality of what you have learned or the likelihood that you will remember it after the test is over.
Grading, in a curious way, exemplifies our deepest convictions about excellence and authority, and specifically about the right of those with authority to define what constitutes excellence. If we crowdsource grading, we are suggesting that young people without credentials are fit to judge quality and value. Welcome to the Internet, where everyone's a critic and anyone can express a view about the new iPhone, restaurant, or quarterback. That democratizing of who can pass judgment is digital thinking. As I found out, it is quite unsettling to people stuck in top-down models of formal education and authority.
Learn. Unlearn. Relearn. In addition to the content of our course—which ranged across cognitive psychology, neuroscience, management theory, literature and the arts, and the various fields that compose science-and-technology studies—"This Is Your Brain on the Internet" was intended to model a different way of knowing the world, one that encompasses new and different forms of collaboration and attention. More than anything, it courted failure. Unlearning.
"I smell a reality TV show," one critic sniffed.
That's not such a bad idea, actually. Maybe I'll try that next time I teach "This Is Your Brain on the Internet." They can air it right after Project Classroom Makeover.