Here’s the 17th piece from my BBC column
In a lab at Harvard Medical School, a man is using his mind to wag a rat’s tail. To send his command, he merely glances at a strobe light flickering on a computer screen, and a set of electrodes stuck to his scalp detects the activity triggered in his brain. A computer processes and relays the electrodes’ signal to an ultrasound machine poised over the rat’s head. The machine delivers a train of low-energy ultrasound pulses into the rat’s brain, stimulating its motor cortex – the area that governs its movements. The pulses are aimed precisely at a rice-grain-sized area that controls the rat’s tail. It starts to wag.
This link-up is the brainchild of Seung-Schik Yoo, and it works more than 94% of the time. When the volunteer looks at the flickering light, the rat’s tail starts to wag just over a second later. The connection between them is undeniably simple. The volunteer is basically flicking a switch in the rat’s brain between two positions – move tail, and don’t move tail. But it is still an impressive early example of something we will see more of in coming years – a way of connecting two living brains.
Science fiction is full of similar (if more flamboyant) brain-to-brain links. From the Jedi knights of Star Wars to various characters in the X-Men comics, popular culture abounds with telepaths who can read minds and transmit their thoughts without any direct physical contact or use of their senses. There’s no evidence that any of us mere mortals share that ability, but as Yoo’s study shows, technology is edging us in that direction. The question is: how far can we go in recreating telepathy with electronics? A human wagging a rat’s tail is one thing. Will we ever get to the point where we can share speech, emotions or memories?
The first step would be to decode what someone is thinking. Neuroscientists have made substantial progress in deciphering images from patterns of brain activity, and several groups are working on decoding inner speech. People have managed to commandeer computer cursors, artificial limbs and virtual drones through brain-computer interfaces (BCIs), which use brain activity to control man-made devices. But to achieve true telepathy, brain activity has to be decoded and used to influence another brain. “We’ve got brain-to-computer interfaces, but we need the other side of it – computer-to-brain interfaces,” says Yoo.
Last year, Christopher James from the University of Warwick built a very rudimentary one. Using scalp electrodes, he mentally controlled a set of LEDs, which flashed at one speed when he thought about moving his left hand, and at another when he imagined moving his right. James’ daughter was watching the LEDs, and though she couldn’t consciously distinguish between the two flashing speeds, her visual cortex – the part of the brain that processes sights – registered the difference. By measuring the activity in her brain, a second set of electrodes could work out what the LEDs were doing.
This may have been an electronic link-up between two human brains, but as James points out, it’s not telepathy. “It’s not like someone sits there imagining a complex thought, and it appears in the other person’s head,” he says. “My daughter was completely unaware. At no point did she say ‘Left’ or ‘Right’. It would have been more informative to put the words on the screen.” She also had to look at the LEDs to register what was happening, which violates the “no senses allowed” rule of true telepathy.
Miguel Nicolelis at Duke University provided another striking example earlier this year, by connecting the brains of two rats that faced the same task – press one of two levers to get a rewarding drink. When the first rat made its choice, electrical activity in its motor cortex was recorded and converted into a simpler signal – either a single electrical pulse or a train of them, depending on which lever it pressed. The signal was beamed to another implant in the motor cortex of the second rat, which had its own levers. If it picked the same one as its anonymous partner – which it did 64% of the time, well above the 50% expected by chance – both rats got an extra drink. Remarkably, one rat was in Durham, North Carolina, while the other was in Natal, Brazil.
These examples are impressive, but all of them involved the transmission of very simple information – nothing more complex than a binary choice. Left or right. One or zero. The telepathy of science-fiction is still looking pretty fictional.
What would it take to send a more complex message? A sensation? A memory? It is certainly possible to evoke vivid sensations by stimulating different parts of the brain. Target the retina or the visual centres and you can produce illusory flashes of light called phosphenes. And in the 1950s, the neurosurgeon Wilder Penfield famously elicited vivid colours, sounds and memories by stimulating different parts of a surgical patient’s brain.
But to do this in an accurate, targeted way is far more difficult. Just consider sharing something simple like the feel of a door. “You feel a lot of different sensations like temperature and texture, and multiple areas of the brain are being engaged to interpret what’s going on. You have to decode all of that,” says Yoo.
Now, think about opening the door. “You have to make the decision to open the door, know what a door is and looks like, identify a handle, know that the handle goes down, and instruct your arm to move,” says James. At the moment, our brain-computer interfaces can only cope with the last bit. Scientists have indeed used electrical implants to connect the motor cortex with muscles in an arm, allowing patients to move otherwise paralysed limbs. But the rest of the steps recruit many more parts of the brain involved in memory, language, decision-making, and more.
To make matters worse, all of this will vary from individual to individual. The neurons in my brain that encode the concept of a door may reside in the same general area as their counterparts in your brain, but not in exactly the same spot. To effectively decode complex content from one brain and encode it in another, you’d need to compile a thorough “dictionary” for each brain, linking neural activity across the whole grey blob with different concepts or sensations. “We have to individually customise it all,” says Yoo.
There are technological challenges too. Researchers like James and Yoo have relied on electroencephalography (EEG) – a technique that uses electrodes positioned on the scalp to measure underlying brain activity. On the plus side, it’s simple to use and doesn’t require surgery. Unfortunately, James compares it to “recording 150 conversations in a packed ballroom while you’re sitting outside with 50 microphones.”
Stimulating the brain from the outside is equally crude. Scientists are limited to techniques like transcranial magnetic stimulation, which use magnetic fields to zap large areas of the brain into excitement or submission. And Yoo’s technique – focused ultrasound – is newer, but still involves immobilising a rat’s head under a large machine. Neither technique is well suited for delivering precise sensations.
The best alternative is to actually drill through the skull and implant electrodes. These have improved to the point where we can record activity from a small, tight cluster of neurons and stimulate them with fast and careful timing. But you would have to plaster the whole brain with them. And did I mention the drilling? “Is it something that humans without severe medical problems should put up with?” asks Michael D’Zmura, a psychologist at the University of California, Irvine. “I think not.”
All of which raises the critical question: why would you bother? “You have to compare these options to what we’re capable of when we speak to one another,” says D’Zmura. We already have incredibly sophisticated biological hardware for making and interpreting sounds, which doesn’t rely on any implants or surgery.
That said, everyone mentions the possibility of communicating with locked-in patients, who are fully awake and aware but unable to move or talk. But it is hard to see what benefit a truly telepathic connection would provide beyond what simpler brain-computer interfaces could achieve. These machines have already allowed locked-in patients to control artificial limbs or send messages to their loved ones, and since these people are awake, getting messages to them is not the issue.
So the ideal of communicating complex information may be a red herring. James finds it easier to envisage situations where conveying simple sensations is more useful. “If you have a busy air-traffic controller whose senses are all over the place, you could imagine bypassing them all and delivering a type of alert when two aircraft are coming close to one another,” he says. It does not have to be a clear message. It could be something as simple as a tingling feeling – less Professor X’s psychic rallying cries, and more Spider-Man’s spider-sense.
More from Will we ever…?
- Will we ever regenerate limbs?
- Will we ever simulate the brain?
- Will we ever lose all our corals?
- Will we ever make a safe cigarette?
- Will we ever decipher everything about a life form based just on its DNA?
- Will we ever predict earthquakes?
- Will we ever photosynthesise like plants?
- Will we ever run the 100 metres in 9 seconds?
- Will we ever clone a mammoth?
- Will we ever have an HIV vaccine?
- Will we ever correct diseases before birth?
- Will we ever have a fool-proof lie detector?
- Will we ever talk to dolphins?
- Will we ever restore sight to the blind?
- Will we ever grow organs?
- Will we ever decode dreams?