As a reporter I spend a lot of time analyzing voices, and I’m often amazed by how much a voice can convey that written words cannot. Sure, you can write down (as I did in a transcribing session yesterday) that a woman said, “He never responded to me.” But that quote doesn’t tell you whether she was apathetic, or annoyed, or profoundly disappointed. It doesn’t tell you what part of the country she’s from, or how old she is, or if she smokes, or how much she trusts you.
A voice gives you all of that in just a few seconds. How do our brains make sense of this rich vocal stream, and so quickly? In 2000, scientists scanned people’s brains and discovered a piece of neural real estate that’s dedicated to the task: a spot above the ear that responds to vocal sounds more strongly than other types of sounds.
This result was intriguing partly because the neuroscience world was buzzing about the specificity of other brain areas, notes Pascal Belin, a neuroscientist at the University of Glasgow and the lead author of the 2000 voice study. Just a few years earlier, another group had shown that a region in the visual cortex, the fusiform face area, is tuned to faces. These studies posed obvious evolutionary questions, Belin says. “These voice regions respond to speech and non-speech. So, are they uniquely human, or not?”
That was answered in 2008, when another lab reported that macaque monkeys have a similar voice-sensitive region in their brains. The area responded more to the voices of other macaques than to vocalizations of other species or non-voice sounds. “With that paper, it became clear: these regions didn’t just appear with humans,” Belin says. “It’s evolutionarily much older.”
You can imagine how interpreting the voices of other members of your species would be evolutionarily advantageous, whether for discerning a rival’s fury or a lover’s desire. That monkey paper suggested that voice-sensitive brain regions existed in the last common ancestor of humans and macaques, which roamed the earth some 30 million years ago.
A brain-scanning study published today in Current Biology reports similar voice regions in the dog brain. One region responds selectively to dog vocalizations, while a nearby area responds to the emotional cues of a voice, regardless of whether the voice came from a dog or from a human. Researchers don’t agree on the evolutionary implications of these results (more on that later). Still, the study may shed light on why dogs and people get on so well.
“Dogs use very similar brain mechanisms to process social and emotional information as humans do,” says Attila Andics, a researcher in the MTA-ELTE Comparative Ethology Research Group in Budapest, who led the new study. “This probably helps the dogs tune in to the feelings of their owners, and also probably helps humans tune in to the feelings of their dog.”
If you’ve ever slid into an MRI machine, you know it’s not the most pleasant experience. The space is very tight and very noisy. And in order for the scans to come out well, you can’t move your head more than half a centimeter.
Dogs, however, can be trained to love resting in the machine. Andics's team used 11 dogs, all border collies and golden retrievers, and did training sessions about once a week for 20 weeks. "When the dogs just arrive in the vicinity of the building, they can't wait to get into the scanner room and can't wait to get on the scanner bed," Andics says.
Scanning other animals is far more difficult. Monkey experiments typically use anesthetized animals, or awake animals with helmets or surgically implanted head posts to keep them still.
“I find it a very impressive study in how they trained the dogs for scanning in a rather inhospitable environment,” says Christopher Petkov of Newcastle University Medical School, who was the lead author on the monkey study.
Once in the scanner, the dogs heard nearly 200 different sounds, including dog vocalizations (whining, playful barking, aggressive barking), human vocalizations (crying, cooing, laughing), and non-voice environmental sounds (cars, ringing phones). You can see the dogs in action and hear the sounds in this video abstract of the paper.
Andics also scanned the brains of 22 people while they listened to the same set of sounds. This is the first study, he says, to directly compare the brain function of people with that of a non-primate animal.
The researchers found some notable differences between dog and human brain responses. In people, just 3 percent of our auditory cortex responds more strongly to non-voice sounds than to voice sounds. In dogs, it's 48 percent. "The human auditory system is optimized to process vocal sounds, and the dog auditory brain in general is not as specialized," Andics says. What that difference implies is anyone's guess, though Andics speculates that it could be what allowed us to develop language.
Andics is more excited about the ways in which the dog and human brains are similar. It turns out that both species have an area of the brain that is tuned to the "emotional valence" of a voice, meaning it responds more strongly to positive emotions than negative emotions. And for this region, it doesn't matter whether the voice is human or canine; a burst of laughter is equivalent to a playful bark. This result agrees with a study his team published last month showing that certain acoustic properties convey emotion in both human and dog sounds. The shorter the burst of a call, for example, the more positively it is perceived.
Andics argues that because dogs and humans (and macaques) have regions dedicated to processing voices, this skill probably dates back 100 million years, to the common ancestor of humans and dogs.
Evolutionary biologists, however, don't think this is very plausible. T. Ryan Gregory, an associate professor of integrative biology at the University of Guelph, pointed me to a graphic showing the evolutionary tree of mammals.
“Humans and dogs are not closely related,” Gregory says. Our last common ancestor, he points out, is shared not only with primates and carnivores, but with tree shrews, rodents, bats, rabbits, whales, even-toed ungulates and odd-toed ungulates.
So if Andics’s theory is correct, then all or most of those species also carry the voice-sensitive brain area. Andics says this is entirely possible.
“What made mammals such a successful order? A hundred million years ago mammals were not a success story,” he says. “One reason might be that this sensitivity to emotional or social [information] evolved, and this made navigation in the social environment much more efficient.”
Gregory counters that that argument implicitly suggests that the ancestor of nearly all mammals had a complex social system. A far more likely scenario, he says, is that dogs and humans acquired this voice region independently, in what’s known as convergent evolution. Humans and squids, for example, independently evolved camera-like eyes, and lots of species independently evolved venom.
These evolutionary questions aren’t likely to be resolved soon, as it’s unlikely that scientists will be able to wrangle a wolf, rabbit, or squirrel into the MRI machine. (Gregory imagines the response to a cat experiment: “I don’t understand, doctor, the only part of the brain that is lighting up is the disdain center!”)
Happily, though, there’s still lots of interesting work to be done with dogs. Belin and Andics are planning to collaborate on a study investigating whether the voice-sensitive regions in dogs can code an individual’s identity, as the region is known to do in people. “Scanning dogs could become a very widespread model,” Belin says, particularly because of animal-rights activists’ objections to primate research. “The dogs have many, many advantages.”