The picture above is an X-ray computed tomography (CT) scan of a human lung. Go ahead and take a few seconds to look at it carefully.
How long did it take you to spot the gorilla?
The image takes a starring role in a fun study in press at Psychological Science. Trafton Drew and colleagues at the Brigham and Women’s Hospital showed that when people focus on searching these images for bright white cancer nodules, they never notice the gorilla. More shocking, radiologists — who are trained to read CT scans — usually miss it, too.
“It’s a vivid example that looking at something and seeing it are different,” says Drew, a postdoctoral fellow in Jeremy Wolfe’s lab. “You can put your eyes on something, but if you’re not looking for it, then you’re functionally blind to it.”
On Tuesday, science writer Wray Herbert wrote about the work for the Huffington Post, calling the data “really scary” with “life-threatening implications.” I have to respectfully disagree. I’d argue that it’s precisely because of this hyper-focused, selectively blind attention that expert radiologists are useful.
The study draws inspiration from the famous ‘invisible gorilla’ experiment done in 1999. In that work, researchers asked participants to watch a short video of people passing a basketball and count the number of passes made by those wearing white. About halfway through the video, somebody walks through the scene wearing a ridiculous black gorilla suit. The gorilla even does a little jig. Yet half of the study’s participants never noticed.
“We just wondered, how far could we push this?” Drew says. Would the eyes of trained experts also be vulnerable to this so-called ‘inattentional blindness’?
There were signs they might be. In one study, for example, researchers asked radiologists to review chest X-rays with a pretty big anomaly — a missing collarbone. About 60 percent of the experts didn’t notice it was gone.
In the new study, Drew asked 24 radiologists to do a typical lung cancer screening using the CT scans of five patients. This requires the radiologist to sit at a computer and look for small white blobs in hundreds of images, each showing a slightly different cross-sectional slice of the patient’s lung. “It’s incredible to watch them do this,” Drew says. “They go through these things in under three minutes.”
The researchers didn’t do anything tricky to the images from the first four patients, which included on the order of 1,000 scans. But hidden in the stack of 239 images from the fifth patient, the researchers inserted five consecutive scans showing the cartoon gorilla. They were sneaky about it, too. On its first appearance, the gorilla was 50 percent opaque; on the second, 75 percent; and on the third, fully visible. Then it faded back out over the last two scans.
Just 4 of the 24 radiologists reported seeing the gorilla. What’s more, the researchers used eye-tracking technology to chart exactly where on the scans the participants had been looking. “The majority of them looked directly at the gorilla for extended periods of time. They just don’t see it,” Drew says.
Drew repeated the experiment with 25 adults who had no medical training. All of them missed the gorilla.
Like the 1999 gorilla study, this one worked because participants were intensely focused on a very difficult task. As you’d expect (and hope!), the radiologists in the study were far better at spotting the cancer nodules than were the non-experts, with success rates of 55 percent and 12 percent, respectively. And that’s why I doubt the findings have dire implications for medical science.
Let’s say more of the experts had noticed the gorilla. That would necessarily mean that they weren’t as focused on the task they were instructed to perform: find the cancer. If it had been a real clinical setting, a broader focus might very well have caused the doctors to miss a few life-threatening nodules.
In any case, the radiologists didn’t seem very concerned about their competence when they found out about the gorilla at the end of the experiment. “They just thought it was funny,” Drew says.
The bigger lesson here — not just for radiologists, but for scientists, journalists, and anybody else — is about testing your biases and assumptions, Drew says. “It’s important to be willing to look for more than one thing, to set yourself up for success.”