The Where of What: How Brains Represent Thousands of Objects

These images look like the world’s messiest painter’s palettes, but they’re actually maps of brains. The weird shape results from flattening the convoluted surface of the brain onto two dimensions, just as cartographers distort the globe to fit it onto a flat map. And the colours? The colours show how different categories—whether people, or animals, or moving objects—are represented across different parts of the brain.

As a very rough guide, green tends to correspond to humans, yellow to other animals, turquoise to communication, dark blue to buildings, pink and purple to vehicles and landscapes, red to movement, and so on. These are maps of information, more detailed and comprehensive than anything that has come before. They show the where of what.

For decades, neuroscientists have found parts of the brain that respond to specific objects. The fusiform face area (FFA) specialises in recognising faces. The parahippocampal place area (PPA) becomes active when we see images of places. But there can’t possibly be a dedicated area for each of the thousands of categories of objects that we recognise. We’d soon run out of space. Indeed, several brain-scanning studies have failed to find dedicated hubs in the brain for food, clothes, household objects, or other common groups.

Instead, Alexander Huth from the University of California, Berkeley has shown that the brain organises concepts across a continuous map. A huge range of categories is organised according to different qualities, such as whether they are moving or still, or whether they are biological or artificial. Areas that represent similar concepts—like cars and motorcycles, or animals and humans—tend to sit close together, while more disparate categories sit farther apart. And these patterns are consistent across different individuals.

This arrangement is an efficient one. It takes a lot of energy for the neurons in the brain to communicate with one another, especially if they have long connections between them. “One can assume that the brain is trying to minimise the length of the wires,” says Jack Gallant, who led the research. “One way of doing that is to put bits that are doing similar computations next to each other.”

As the messy palette-like images show, the result isn’t a tidy wave of categories passing from the front of the brain to the back. Still, the maps provide a much richer picture of the brain’s geography than earlier studies, which matched single categories with single regions, like faces and the FFA. “It’s not that the other studies were wrong,” says Gallant. “It’s just that they were the tip of the iceberg.”

Imagine that the brain is a mountain range, where the heights of the different peaks represent how strongly different parts of the brain respond to, say, faces. The FFA is the equivalent of Everest – it’s the region that fires most strongly to faces, and the one that brain explorers will most readily find. Other areas also respond to faces, just less selectively, or less strongly. You need to map the brain’s responses more comprehensively if you want to start finding the rest of the Himalayas.

Making maps

Huth made his maps by asking five volunteers to watch two hours of video clips. He later labelled the objects and actions in the clips using the 1,705 most common nouns and verbs from WordNet, a hierarchy of English words. As the volunteers watched, Huth used an fMRI scanner to measure the blood flow in 30,000 different sections of the brain. (Blood flow is an imperfect but widely used stand-in for brain activity.) The result: a giant matrix showing how all 30,000 points on the brain respond to all 1,705 categories.
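The fitting step can be sketched as a regularised linear regression: each point’s blood-flow time course is modelled as a weighted sum of the category time courses, yielding the matrix of category weights. This is a minimal toy sketch, not the study’s actual pipeline; the sizes, random data, and `ridge_lambda` value are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_timepoints = 500    # fMRI volumes recorded while the clips play
n_categories = 1705   # WordNet nouns and verbs used as labels
n_voxels = 300        # toy stand-in for the ~30,000 cortical points

# X[t, c] = 1 if category c was on screen at time t (toy random labels)
X = (rng.random((n_timepoints, n_categories)) < 0.01).astype(float)
# Y[t, v] = blood-flow signal of point v at time t (toy random data)
Y = rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression: W = (X'X + lambda*I)^-1 X'Y
ridge_lambda = 10.0
W = np.linalg.solve(X.T @ X + ridge_lambda * np.eye(n_categories), X.T @ Y)

# W is the giant matrix: W[c, v] says how strongly point v
# responds to category c
print(W.shape)  # (1705, 300)
```

The ridge penalty is there because the category time courses are highly correlated (a “dog” is usually also an “animal”), so an unregularised fit would be unstable.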

Here’s just one example. This map shows how one of the 30,000 spots, in one of the five brains, responded to the different videos. There was a strong burst of blood flow whenever its owner saw images associated with man-made objects (like buildings or vehicles), a weaker burst when its owner saw people or outdoor scenes (“person”, “athlete”, “hill” or “grassland”), and weaker still for non-human biological things (“bird”, “fish” or “food”).

How one small part of the brain responds to the 1,705 categories of objects and actions in the videos. Courtesy of the Gallant lab

How does the brain organise these concepts? Does it do it by size? Relevance to living things? Or is it just a random mess? To find out, Huth analysed his smorgasbord of data using a mathematical technique called principal component analysis (PCA), which looks for strong patterns in vast data sets. He pulled out four different qualities, or “dimensions”, where similar categories are represented by nearby parts of the brain.

The first and strongest dimension distinguishes things that move, like animals and vehicles, from things that do not, like buildings.

The second distinguishes categories related to social interactions, like people, and communication verbs, from others.

The third separates categories associated with civilisation, man-made objects, people and vehicles, from those associated with nature, like other animals.

The fourth separates biological categories, like animals, plants and people, from others. For example, a dog would score strongly on the first and fourth dimensions (it moves, and it is biological), but weakly on the second and third (it is neither a social communicator nor man-made).

(Huth says that these descriptions shouldn’t be taken as gospel. They fit, but each dimension represents many other qualities – the fourth, for example, also happens to separate text from other categories. And there are almost certainly many more.)
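The PCA step can be sketched like this: treat each category’s 30,000 brain weights as a point in a high-dimensional space, and pull out the directions along which the categories vary most. A minimal sketch with toy sizes and random data; real analyses would use the fitted weight matrix, not noise.

```python
import numpy as np

rng = np.random.default_rng(1)

n_categories, n_voxels = 1705, 300  # toy sizes
# W[c, v] = weight of category c at brain point v (toy random data)
W = rng.standard_normal((n_categories, n_voxels))

# PCA via the singular value decomposition of the centred matrix
W_centered = W - W.mean(axis=0)
U, S, Vt = np.linalg.svd(W_centered, full_matrices=False)

# Each category's score on the leading components; the first four
# columns play the role of the four semantic "dimensions"
scores = U[:, :4] * S[:4]
print(scores.shape)  # (1705, 4)

# Fraction of the variance each of those four components captures
explained = (S[:4] ** 2) / (S ** 2).sum()
```

Categories with similar brain-wide weight patterns end up with similar scores, which is why “cars” and “motorcycles” land near each other along these dimensions.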

Next, Huth painted a map of the brain, using a colour scheme defined by the second, third and fourth dimensions (social, civilisation, and biological). The results are the chaotic swirls of colour in the topmost image. It’s messy, but amid the chaos you can see gradients, where related hues (and therefore, categories) flow into one another.
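The colouring idea can be sketched by rescaling each brain point’s scores on three dimensions into the red, green, and blue channels of a colour. Again a toy illustration under assumed data; the lab’s actual colour scheme is more carefully constructed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scores: each brain point's projection onto the social,
# civilisation, and biological dimensions
n_voxels = 300
dims = rng.standard_normal((n_voxels, 3))

# Rescale each dimension to [0, 1] and use it as one colour channel
lo, hi = dims.min(axis=0), dims.max(axis=0)
rgb = (dims - lo) / (hi - lo)  # rgb[v] = (R, G, B) colour of point v

# Points with similar semantic profiles now get similar colours,
# which is what produces the smooth gradients in the flattened maps
print(rgb.shape)  # (300, 3)
```

Because the colour varies continuously with the scores, related categories blend into one another on the map instead of forming sharp-edged patches.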

More semantic maps of brains. Courtesy of the Gallant lab

What this means

To Huth and Gallant, these “semantic space” maps are a striking visual reminder that the categories that we rely upon aren’t confined to specific parts of the brain, but spread throughout it as a continuous map. “It’s not that every category has its own place,” says Gallant. “It’s more like a continuous map of these semantic categories, which are represented in many different locations.”

For example, the FFA—that classic “face region”—does indeed show up as green, but it’s surrounded by yellow areas that correspond to animals. And many other parts of the brain respond to faces too. “I think it will really change the way people think at a fundamental level of the organisation of these more abstract concepts,” says Gallant.

But David van Essen, a neuroscientist from Washington University in St Louis, says that the study is “unlikely to settle the debate” about whether our mental abilities are governed by distinct modules, or continuous gradients. Although he praises Huth and Gallant’s maps, he notes that they relied on fMRI—a technique that has poor resolution in both space and time. Perhaps sharper ways of measuring the brain would detect more distinct modules for the different categories.

Gallant is not shy about discussing fMRI’s weaknesses. “It’s so indirect and spotty and has so many artefacts that it’s surprising the damn thing works at all,” he says. “It is… not awesome. No one would use it to measure the brain if we had a better method.” The problem is that other techniques, at least those that could be applied to the entire brain, are also poor. Until we get better technologies, we’re stuck.

Huth and Gallant’s work is similar to what scientists have done for the brain’s visual centres. For decades, we’ve known that neighbouring parts of our visual field are represented by neighbouring areas in the brain, and we’ve created atlases of those areas called “retinotopic maps”. “But this is the first time that we’ve come up with non-retinotopic maps of the same quality,” says Gallant. So far, he has just looked at how the brain responds to images, but he predicts that scientists will apply the same approaches to other types of information, like language.

And there’s a big caveat: These maps show where information is represented in the brain, but not how it’s represented. Each of the 30,000 “points” that Huth looked at contains millions of neurons, and we have no idea what they’re actually doing to encode a specific category. “Maybe areas that care about faces care about the eyes. Or maybe others care about the shape of the head,” says Gallant. “This doesn’t tell you how these computations arise, just what information about the world is represented where.”

Reference: Huth, Nishimoto, Vu & Gallant. 2012. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron
