Deep in your brain there are probably several thousand neurons that will respond only to the sight of Lady Gaga. Several thousand others probably only crackle to the sight of Justin Bieber. It might be nice to reassign those neurons to loftier thoughts. For now, though, neurology can’t help you. What neurology can do for you (if you’re up for a little invasive brain surgery) is let you use those Gaga and Bieber neurons to control a computer.
In an unprecedented fusion of pop culture and neurosurgery, scientists at Caltech have invented a surreal brain-machine interface. The history leading up to this discovery goes back to the 1990s, when Itzhak Fried, a neurosurgeon at UCLA, began to collaborate with neuroscientists who wanted to probe the brain from the inside. Fried would sometimes have to perform surgery on people with epilepsy in order to reduce their seizures. First he would implant electrodes in the brains of his patients, so that he could unleash small bursts of current from them. When one of the electrodes triggered epilepsy-like firing from the neighboring neurons, he knew he had found the patch of brain that had to be removed. Sometimes Fried would also implant thin wires into the same regions of the brain that could detect the activity of the neurons in the neighborhood. He then closed up the heads of his patients, and they spent several days hanging out in the hospital with electrodes and wires trailing from their heads. The neuroscientists could show them various pictures or play them various sounds, and listen to the response from the neurons. In some cases, the wires were positioned right next to individual neurons, allowing the scientists to listen to them one by one.
Fried’s collaborators discovered that some of these individual neurons responded faithfully to certain kinds of sights. Some only responded to faces with sad expressions, others only to happy faces. Some only responded to houses. In 2005, however, Christof Koch of Caltech and his colleagues decided to get more fine-grained. They showed pictures of actors and actresses. They found individual neurons that responded almost exclusively to Jennifer Aniston. Others only responded to Saddam Hussein, others to Pamela Anderson, and so on.
Later, the researchers found that people can develop these so-called “Jennifer Aniston neurons” for anyone they become familiar with in a matter of days. The neurons start out relatively weak, but get stronger with familiarity. The picture of a loved one will trigger a loved-one neuron to fire a lot more strongly than a neuron dedicated to an obscure D-list celebrity. Fortunately, these neurons are not limited to Hollywood celebrities. They seem to be the medium in which we encode any kind of concept. We can probably store ten to thirty thousand concepts in our brains, each of which is encoded in an estimated several thousand Jennifer Aniston neurons. (I talk more about the history of this research in Brain Cuttings, and in this column for Discover.)
In a flash of mad genius, Koch and his colleagues wondered if people could use biofeedback to control the strength of these neurons. They interviewed twelve patients, and in each case they identified four celebrities who triggered particularly strong responses from their individual neurons. Then they superimposed two of those celebrities (in one case, Josh Brolin and Marilyn Monroe) on a computer screen. The patients were told to try to shift the picture to one celebrity or the other. The computer was programmed to alter the balance of the images in response to the firing of the Brolin and Monroe neurons. As the Monroe neurons got stronger and the Brolin neurons weaker, for example, the screen would go all Monroe.
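The core of that feedback loop is simple enough to sketch in a few lines of code. What follows is a toy illustration, not the researchers’ actual software: the function names, the mixing rule, and the firing rates are all my own invented stand-ins, assuming only what the paragraph above describes, that the visibility of each image tracks the relative activity of the two neuron populations.

```python
def image_mix(rate_a: float, rate_b: float) -> float:
    """Return the opacity (0.0 to 1.0) of image A in the blended picture,
    given the firing rates (spikes/sec) of the neurons tuned to A and B.
    A hypothetical decoding rule: opacity follows the relative rate."""
    total = rate_a + rate_b
    if total == 0:
        return 0.5  # no signal from either population: show an even blend
    return rate_a / total

# Simulated trial: the "Monroe" population ramps up as the patient
# concentrates on her, while the "Brolin" population quiets down.
# (All rates here are made-up numbers for illustration.)
for step in range(5):
    monroe_rate = 10 + 8 * step
    brolin_rate = 30 - 5 * step
    opacity = image_mix(monroe_rate, brolin_rate)
    print(f"step {step}: Monroe at {opacity:.0%} opacity")
```

In this sketch the image drifts steadily toward Monroe as the ratio of firing rates shifts, which is the same push-and-pull the patients experienced on screen.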
A video from the study shows what happened. On the early trials, the image on the screen sometimes veered back and forth between Brolin and Monroe, as if shifting between channels. But within a few trials, the patients got the hang of the game and could push the screen to the correct image in a matter of seconds.
For now, this technology is profoundly limited. There’s no way to eavesdrop on individual neurons from outside the brain, so invasive brain surgery is mandatory. But this technology does have some big advantages over other brain-machine interfaces. Until now, these devices have typically allowed people to control a cursor on a screen or a robotic limb. But this new system taps directly into the concepts of our minds. Someday it might be possible to choose among thousands of concepts to put up on a computer screen for all to see. Let’s just hope that if that marvelous day ever comes, we can think about things worth sharing.
Reference: Cerf et al., “On-line, voluntary control of human temporal lobe neurons.” Nature, October 28, 2010. doi:10.1038/nature09510