“Only human.” It’s a downer of an idiom, used to convey the inevitable transgressions and inadequacies of our species. He cheated on his wife with a supermodel, but come on, he’s only human. No, she can’t write three blog posts a day and tweet every hour and read historical biographies in her spare time, she’s only human.
But, really, what’s “only” about human biology, emotions, behaviors and history? At the very least, they make for some good stories.
A cop in Florida once found a scientist dissecting an armadillo penis on the side of the road. A genetic screen made me reconsider my coffee habits. Poverty breaks down connections in a baby’s brain. Tourism in the Galápagos is simultaneously funding conservation efforts and destroying the things that need to be conserved. Stories about people — what we’re made of, what we do, why we do it — are what interest me most, and what you’ll find on this blog.
I’m kicking off with a story about the (maybe) uniquely human capacity to feel emotion through music. Why does a lullaby soothe a newborn, a dirge console the grieving, and a KoRn song make you want to rip your ears out?
According to a study out yesterday in the Proceedings of the National Academy of Sciences, our cognitive connection to music may have evolved from an older skill, the ability to glean emotion from motion. People will choose the same combination of spatiotemporal features — a certain speed, rhythm, and smoothness — whether pairing a particular emotion with a melody or with a cartoon animation, the study found. But most surprising, the results held true in people from two starkly different cultures: a rural village in Cambodia and a college campus in New England.
The study dates to an afternoon in the spring of 2008, when Beau Sievers sat down for a class on the origins of music at Dartmouth College, in New Hampshire. Sievers, a composer, was working on a Master’s degree in something called electroacoustic music (now called digital musics), an unusual program for people who want to study relationships between music, technology and cognitive science. That afternoon the class heard from a guest lecturer, psychology professor Thalia Wheatley, whose neuroimaging studies had pinpointed some of the brain regions involved in perceiving motion. Other labs had found that some of the very same regions activate during music perception, giving Wheatley the idea that the two skills are somehow linked in the mind. She presented the general hypothesis to Sievers’s class, adding that she hadn’t yet found a rigorous and quantitative way to test it.
After class, Sievers asked Wheatley if he could work on that for his Master’s thesis. She said sure, and over the next few months, the duo came up with a clever experiment.
The experiment hinges on a computer program, written by Sievers, that allows participants to create their own melodies or bouncing-ball animations by adjusting five slider bars. Each bar represents a different aspect of the sound or movie: rate sets how many beats per minute; jitter determines the predictability of those beats; smoothness can add a spiky texture to the ball and dissonance to the music; step size gives the height of the bounce and distance between notes; and direction controls whether the ball leans forward or backward and the pitch of the notes.
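To make the parameter space concrete, here is a minimal sketch of how those five sliders might be modeled in code. The names, ranges, and the beat-generation logic are my assumptions for illustration; the article doesn’t specify how Sievers’s actual program was implemented.

```python
import random
from dataclasses import dataclass

@dataclass
class SliderSettings:
    """Hypothetical encoding of the five sliders described above."""
    rate: float        # beats per minute
    jitter: float      # 0 = perfectly regular beats, 1 = highly irregular
    smoothness: float  # 0 = spiky ball / dissonant music, 1 = smooth / consonant
    step_size: float   # bounce height, or interval between successive notes
    direction: float   # -1 = backward lean / falling pitch, +1 = forward / rising

def beat_times(settings: SliderSettings, n_beats: int, seed: int = 0) -> list:
    """Generate beat onset times (in seconds) from the rate and jitter sliders.

    Each inter-beat interval is the nominal period (60 / bpm) perturbed by
    up to +/- jitter * period, so jitter = 0 yields a metronomic pulse.
    """
    rng = random.Random(seed)
    period = 60.0 / settings.rate
    t, times = 0.0, []
    for _ in range(n_beats):
        times.append(t)
        t += period * (1 + settings.jitter * rng.uniform(-1, 1))
    return times

# A plausible "happy" setting: fast, regular, smooth, rising.
happy = SliderSettings(rate=150, jitter=0.1, smoothness=0.9,
                       step_size=0.6, direction=+1)
print(beat_times(happy, 4))
```

The key design point the study exploits is that the same five numbers can drive either a synthesizer or an animation renderer, so a participant’s emotional choice is captured independently of the output medium.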
The study included 50 Dartmouth students. Half used the program to make songs and the other half made animations. After getting used to the program, participants were asked to tinker with the sliders until they had created a song or movie expressing a specific emotion — angry, happy, peaceful, sad or scared.
Surprising finding number one: For each emotion, the song group and the animation group chose essentially the same slider positions. Below are clips of the typical “happy” melody and “happy” movement:
…and of the typical “sad” melody and “sad” movement:
Sievers and Wheatley could have published a paper on those results alone, and were planning to. They thought the findings reflected a universal phenomenon in brain organization that would apply to anybody, anywhere. But a few colleagues raised eyebrows. “Musicologists and composers tend to be extremely skeptical of claims of cross-cultural universality,” Sievers says. One musicologist at Dartmouth said, “‘That’s interesting, but I wonder what would happen if you went somewhere else. It probably wouldn’t work’,” Sievers recalls. “I thought he was wrong.” To know for sure, of course, they had to go do it.
Years earlier, Sievers had volunteered for Cambodian Living Arts, an organization that makes archival recordings of folk musicians. He knew the country had many culturally isolated villages, with an agricultural lifestyle that couldn’t be more different from the co-eds in New Hampshire. So in late 2010, Sievers, Wheatley, another graduate student and a translator went to the Ratanakiri Province, near the Vietnam border, and set up shop in a village called L’ak.
A few hundred people live in L’ak, and they’re all Kreung, ethnic minorities who used to practice slash-and-burn agriculture and were constantly on the move. Their way of life began to shift in the late 1990s, when large-scale logging operations destroyed the forests they depended on. So some Kreung have settled in L’ak, and are gradually stepping into the modern world.
The Kreung (like every known human culture, in fact) make music, but theirs is very different from what we’re used to in the West. Here, a middle C sounds the same whether you’re playing a piano in New York or California. There, no such standardization exists. Kreung instruments are different, too. Sievers’s favorite is the mem, a string instrument played sitting down. “One end of the string is between your toes and the other end goes in your mouth,” Sievers says. “You bow the string with a piece of bamboo or a stick, and your mouth becomes a resonating chamber.”
The researchers made several adjustments to the experiment to make it work in L’ak. Most of the villagers couldn’t read or write and none had experience with computers. So the researchers swapped word labels for pictures and used an external controller with real sliders instead of a computer mouse. The team depended on two translators, one to change English into Khmer, the official language of Cambodia, and a second to change Khmer into the Kreung language. Even then it was tricky. The Kreung have no word for ‘peaceful’, so the translators opted for ‘sngap chet’, which translates to something like ‘still heart’. Despite the cultural divide, the villagers were warm, welcoming and curious. The experiments were completed in about a month.
Which leads to surprising finding number two, and the crux of the new study: For each emotion, the Kreung chose the same slider positions, more or less, as the Dartmouth college kids had. I didn’t quite believe it until I saw the end products. Here’s a comparison of the typical “angry” song made in New Hampshire and L’ak, respectively:
And of “peaceful” movies from New Hampshire (left) and L’ak (right):
The study is only the latest of many to ask how our minds make sense of music, and why we love it so. It’s a messy, controversial and absolutely fascinating subject, as fellow Phenomena contributor (whee!) Carl Zimmer wrote about a couple of years ago. Some scientists say music is just a side show, an evolutionary byproduct of our communicative behaviors that didn’t evolve for any specific, adaptive purpose. Harvard psychologist Steven Pinker went so far as to call it “auditory cheesecake,” much to the chagrin of Sievers and Wheatley.
“This idea that music is a frivolous add-on, and is not really serving a purpose, that it’s just a happy coincidence or auditory cheesecake or what have you — it just doesn’t feel right,” Wheatley says. Music is embedded in the rituals of every human culture, she points out, and helps people bond. “There must be something that music is providing for us, that helps us as a social species.”
The new study isn’t going to resolve the debate, but it does point to some intriguing theories. It could be, for instance, that our ancestors first learned to interpret emotion from movement — something that would be useful, say, if you encountered an angry saber-tooth cat. Those same brain systems, finely tuned to detect changes in rhythm and speed, could have also evolved to pick up similar changes in sounds, and later, to intentionally exploit this perceptual system by making music. Rather than waiting around for sounds that just happened to make us feel good or bad, Wheatley says, “we could compose music with the same effects and do so on demand.”
As for Sievers, these themes have made him think about his art, and his future, in a new way. “You know, if you have knowledge about how the human brain works, are you obliged as an artist to act on that knowledge?” he asks. “And once you know that’s what you’re doing when you’re composing music, you gradually become a scientist, right?”
This September, he officially joined Wheatley’s lab to work on a PhD.