A baby fish watches a tasty cell swimming in front of it. Neurons fire in its brain, and it recognises a potential meal. Its eyes converge, it beats its tail, and it heads in for the kill.
Trillions of baby fishes have enacted this little tableau for half a billion years, but this individual is special. It has been altered by a team of Japanese scientists so that its neurons give off a flash of light whenever they fire. And since its head is transparent, any onlooker can see its brain activity. In doing the rather mundane task of capturing a meal, this fish is putting on a nifty real-time lightshow. It’s showing us what a thought looks like.
At the moment, if you want to visualise the brain, you have to sacrifice detail. Scanning techniques like fMRI measure the flow of blood to millions of neurons and, despite producing pretty, blobby diagrams, their resolution is terrible in space and time. Alternatively, you can record the activity of single neurons, or perhaps even a handful, but that tells us little about the rich networks that allow us to think and compute.
What you want is to combine the fine detail you get from zooming in, with the overarching view you get from zooming out. You want to visualise the brain, or even a brain region, at the level of single neurons. And that’s impossible, at least in humans. Akira Muto from the National Institute of Genetics in Shizuoka, Japan has taken a step towards this goal in the larval zebrafish.
These tiny animals, whose brains are a fraction of a millimetre across, have a seductive allure for neuroscientists. Their bodies are transparent, giving a direct window into their brains. And those brains are small, with just 300,000 neurons compared to our 86 billion.
The team began by tweaking a jellyfish protein called GFP so that it gives off a bright green glow whenever it detects a rise in calcium ions, as happens when neurons pass signals to one another. GFP has been a workhorse of biology for decades, but the team produced an exquisitely sensitive version. They also targeted the protein to the fish’s optic tectum – the part of its brain that receives signals from its eyes. Whenever the zebrafish saw something interesting, any firing neurons in its tectum would start to glow.
First, Muto checked how the fish’s brain reacted to a tiny spot projected onto a screen next to it. He focused on the neuropil, an area where the tectum’s neurons hook up to those coming from the retina. By all accounts, the neuropil should act like a map of the space that the fish can see. Neurons on the far left, for example, should fire and glow when the fish sees something moving on its far left.
That’s exactly what Muto found. As he moved the spot up and down, the neuropil’s green glow also moved up and down. As the spot shook from left to right, the glow shook from left to right. The technique was even sensitive enough to identify individual neurons that are tuned to specific directions of movement, firing only when the spot zigged in one way but not when it zagged in the other.
The glowing protein worked. Now, for a real test. Muto tempted the fish with a paramecium—a single-celled creature that swims around by waving tiny hairs. As the cell danced in front of the immobilised larva, the glow of active neurons followed its path.
You can see this in the video below. The fish’s brain is the blue and purple blob at the bottom of the screen, and the paramecium is the violet speck darting madly all around it. As neurons fire, small patches of white and red show up. That’s the fish spotting a moving object. You’re watching an act of perception in a living animal on your screen.
Note that when the paramecium is on the left of the fish, the neurons on its right side fire, and vice versa. That’s what happens in our brains too – the signals from our eyes cross over into the opposite half of the brain.
And if the paramecium stayed still next to the fish’s eye, the neurons never fired. Like a Hollywood tyrannosaur, its vision is sensitive to movement.
If Muto unchained the larva, and allowed it to actually catch the paramecium, something slightly different happened. Just as its eyes converged and it darted forward, the front-most parts of its tectum became active. This region may contain specific circuits that control the larva’s prey-capture routine.
Other groups have managed to film a zebrafish’s brain activity while it watched artificial visuals, like a set of moving stripes. This is the first study to show us how its neurons behave towards a natural object—the paramecium. The team also say that their new version of GFP is more sensitive than older ones, and has a better resolution. However, Ed Boyden from MIT points out that they haven’t shown any detailed comparisons with popular existing tools. “It’s a little bit unclear to me whether the ability to image the visual response to a natural object was enabled.”
Regardless, Muto’s team are ploughing ahead. The tectum is a good place to start because it’s organised in a predictable way, and because perception is an easy act to test—just stick something in front of the fish and let it perceive. The team is now developing ways of targeting their glowing protein to other parts of the brain, so they can identify the circuits that are involved in movements, decision-making and other behaviours. And they hope that they’ll eventually be able to visualise the entire brain at once.
Other neuroscientists, like Boyden, are working on their own technologies for mapping the brain, but this is a step in the right direction. And it’s another sign that the humble zebrafish might help neuroscientists, as Virginia Hughes wrote in her superb recent feature, “to answer the biggest question in neuroscience: how a doughy mass of neurons in the brain gives rise to an exquisite suite of behaviours, absorbing information from the outside world and generating responses.”
Reference: Muto, Ohkura, Abe, Nakai & Kawakami. 2013. Real-Time Visualization of Neuronal Activity during Perception. Current Biology http://dx.doi.org/10.1016/j.cub.2012.12.040