Not Exactly Rocket Science
Will we ever decode dreams? My first BBC column
As mentioned before, I’ve got a new column at the BBC’s new sci/tech site, where I explore the steps we’ll take towards far-flung applications of basic scientific research. For reasons best understood by other people, no one in the UK can actually see the site, but I’ve acquired permission to republish my posts here with a short delay. So here’s the first one:
You wake up. You were dreaming, but in the haze of morning, you cannot quite remember what ran through your head. Childhood acquaintances were there. You were in Australia. One guy was a pirate. There was something about a cow. Perhaps. We have all had similarly murky memories of an earlier night’s dream. But what if you could actually record your dreaming brain? Could you reconstruct the stories that play out in your head?
It appears to be plausible. Science fiction is full of machines that can peer inside our heads and decipher our thoughts, and science, it seems, is catching up. The news abounds with tales of scientists who have created “mind-reading” machines that can convert our thoughts into images, and most of these stories include a throwaway line about one day recording our dreams. But visualising our everyday thoughts is no easy matter, and dream-reading is more difficult still.
The task of decoding dreams comes down to interpreting the activity of the brain’s 100 billion or so neurons, or nerve cells. And to interpret, you first have to measure. Contrary to the hype, our tools for measuring human brain activity leave a lot to be desired. “Our methods are really lousy,” says Professor Jack Gallant, a neuroscientist at the University of California, Berkeley.
Some techniques, like electroencephalography (EEG) and magnetoencephalography (MEG), measure the electric and magnetic fields that we produce when our neurons fire. Their resolution is terrible. At best, they can only home in on 5-10 millimetres of brain tissue at a time – a space that contains a few hundred million neurons. And because of the folded nature of the brain, those neurons can be located in nearby areas that have radically different functions.
More recently, some scientists have used small grids of electrodes to isolate the activity of a handful of neurons. You get much better spatial resolution, but with two disadvantages: you can only look at a tiny portion of the brain, and you need to open up a hole in the volunteer’s skull first. It is not exactly a technique that is ready for the mass market.
Other methods are indirect. The most common one, functional magnetic resonance imaging (fMRI), is the darling of modern neuroscience. Neurons need sugar and oxygen to fuel their activity, and local blood vessels must increase their supply to meet the demand. It’s this blood flow that fMRI measures, and the information is used to create an activation map of the brain. However, this provides only an indirect echo of neural activity, according to Gallant. “Imagine you tried to work out what was going on in an office, but rather than asking people what they did, you went into the kitchen to see how much water they used,” he says.
Despite these weaknesses, Gallant has repeatedly used fMRI to decipher the images encoded in our brain activity. For his latest trick, three of his team watched hours of YouTube clips while Gallant scanned the visual centres of their brains. He plugged the data into a mathematical model that acted as a brain-movie “dictionary”, capable of translating neural activity into moving images. The dictionary could later reconstruct what the volunteers saw, by scanning hours of random clips and finding those that matched any particular burst of brain activity.
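The “dictionary” idea above is, at its core, a nearest-match lookup: predict a feature vector from a burst of brain activity, then search a library of clips for the one whose features fit best. Here is a minimal sketch of that matching step, with entirely made-up random data standing in for the learned features – the library, feature dimensions, and scoring are illustrative assumptions, not Gallant’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each library clip is summarised by a feature
# vector, and a trained model (not shown) would map brain activity into
# the same feature space. Here both are faked with random numbers.
n_library_clips = 1000
n_features = 50
library_features = rng.normal(size=(n_library_clips, n_features))

def best_matching_clip(predicted_features, library):
    """Return the index of the library clip whose feature vector
    best matches the features predicted from brain activity."""
    # Score every clip against the prediction and pick the top match.
    scores = library @ predicted_features
    return int(np.argmax(scores))

# Pretend the model's prediction is a noisy copy of clip 42's features.
predicted = library_features[42] + rng.normal(scale=0.1, size=n_features)
best = best_matching_clip(predicted, library_features)
print(best)  # the closest match should be clip 42
```

The real reconstruction blends several top-scoring clips into a blurry composite rather than returning a single winner, which is one reason the output looks grainy.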
The reconstructed images were blurry and grainy, but Gallant thinks that this will improve with time, as we develop better ways of measuring brain activity, better models for analysing it and faster computers to handle the intense processing. “Science marches on,” he says. “You know that in the future, it will be possible to measure brain activity better than you can today.”
While Gallant decodes what we see, Moran Cerf from the California Institute of Technology is decoding what we think about. He uses tiny electrodes to measure the activity of individual neurons in the hippocampus, a part of the brain involved in creating memories. In this way, he can identify neurons that fire in response to specific concepts – say, Marilyn Monroe or Yoda. Cerf’s work is a lot like Gallant’s – he effectively creates a dictionary that links concepts to patterns of neural activity. “You think about something and because we learned what your brain looks like when you think about that thing, we can make inferences,” he says.
But both techniques share similar limitations. To compile the dictionaries, people need to look at a huge number of videos or concepts. To truly visualise a person’s thoughts, Cerf says, “That person would need to look at all the concepts in the world, one by one. People don’t want to sit there for hours or days so that I can learn about their brain.”
So, visualising what someone is thinking is hard enough. When that person is dreaming, things get even tougher. Dreams have convoluted stories that are hard to break down into sequences of images or concepts. “When you dream, it’s not just image by image,” says Cerf. “Let’s say I scanned your brain while you were dreaming, and I see you thinking of Marilyn Monroe, or love, or Barack Obama. I see pictures. You see you and Marilyn Monroe, whom you’re in love with, going to see Barack Obama giving a speech. The narrative is the key thing we’re going to miss.”
You would also have to repeat this for each new person. The brain is not a set of specified drawers where information is filed in a fixed way. No two brains are organised in quite the same fashion. “Even if I know everything about your brain and where things are, it doesn’t tell me anything about my brain,” says Cerf.
There are some exceptions. A small number of people have regular ‘lucid dreams’, where they are aware that they are dreaming and can partially communicate with the outside world. Martin Dresler and Michael Czisch from the Max Planck Institute of Psychiatry exploited this rare trait. They told two lucid dreamers to dream about clenching and unclenching their hands, while flicking their eyes from side to side. These dream movements translated into real flickers, which told Dresler and Czisch when the dreams had begun. They found that the dream movements activated the volunteers’ motor cortex – the area that controls our movements – in the same way that real-world movements do.
The study was an interesting proof-of-principle, but it is a long way from reading normal dreams. “We don’t know if this would work on non-lucid dreams. I’m sceptical that even in the medium-term future you’d ever have devices for reading dreams,” says Dresler. “The devices you have in wakefulness are very far from reading your mind or thoughts, even in the next couple of decades.”
Even if those devices improve by leaps and bounds, reading a sleeping mind poses great, perhaps insurmountable challenges. The greatest of them is that you cannot really compare the images and stories you reconstruct with what a person actually dreamt. After all, our memories of our dreams are hazy at the best of times. “You have no ground-truthing,” says Gallant. It is like compiling a dictionary between one language and another that you cannot actually read. One day, we might be able to convert the activity of dreaming neurons into sounds and sights. But how would we ever know that we have done it correctly?