We all know people who act very differently depending on the company they find themselves in. They can be delightful in some circles, and obnoxious in others. The same principles apply to the microbes in our bodies—our microbiome. They have important roles in digestion, immunity, and health, but none of them is inherently good. They can be helpful in one part of the body and harmful in another, beneficial when paired with certain partners and detrimental when teamed up with others.
This means that, as I’ve written before, there’s no such thing as a “healthy microbiome”. Context matters. And contrary to what some companies might tell you, we’re still not very good at predicting what any particular community of microbes means for our health. One common approach is to compare microbiomes in people with or without a disease, single out species that distinguish the two groups, and use their presence or absence to make predictions. But those same bugs might have the opposite effect, or none at all, in another setting.
Alyxandria Schubert from the University of Michigan used a less reductionist approach—one that embraces the complexity of the microbiome rather than shoving it aside.
She studied Clostridium difficile: a weedy bacterium, known colloquially as C-diff, which can cause debilitating bouts of diarrhoea. A thriving community of gut microbes can hold C-diff at bay, but when those communities are cleared by antibiotics, the weed can bloom freely. That’s why C-diff is the single biggest cause of hospital-acquired infections in the USA.
But not everyone who takes antibiotics gets infected. What separates them from those who succumb? Is it just luck? Is it the specific drugs they take? And can you look at someone’s microbiome after they take antibiotics, and accurately predict their risk of contracting C-diff? To find out, Schubert put mice on seven different antibiotics, and then exposed them to C-diff. Each drug changed the rodents’ gut bacteria in different ways, giving some species a boost while repressing others.
None of these changes could consistently account for an animal’s susceptibility to C-diff. For example, mice with a particular Bacteroides species were more likely to be colonised by C-diff if they had taken streptomycin, but less likely if they had taken cefoperazone.
Akkermansia, a microbe that seems to protect against both obesity and malnutrition, also failed to show a clean pattern. “If you picked the right antibiotic, you’d say Akkermansia is protective. Pick another one, and you’d see mice with just as much Akkermansia and high levels of C-diff,” says Pat Schloss, who led the study. “This is a bug that’s being used in probiotic trials, but we find it associated with inflammation and other stuff. It’s a pretty strong example of context-dependency.”
It’s not the action of any single microbe that protects a gut from C-diff incursions, but the interactions between them. So, rather than trying to identify a particular protective species, we need to study the community as a whole.
To do that, Schubert turned to a machine-learning technique called random forests. She fed her data from the various post-antibiotic microbiomes into a computer program, and asked it to pick features that could predict the level of C-diff colonisation. The program then built a “decision tree” based on those features—imagine a game of Twenty Questions. Does the community have lots of Bacterium A but little of B? Lots of C and D? Neither E nor F? Any one tree might be wrong a lot of the time, so the program generated many of them—an entire “forest”. It could then run any new microbiome through all of the trees, aggregate their responses, and make a prediction.
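To make the Twenty Questions picture concrete, here is a toy version of the idea (not the team’s actual pipeline): a forest of one-question “decision stumps”, each trained on a bootstrapped resample of invented bacterial-abundance data and a random subset of bacteria, all voting on whether a gut gets colonised. Every name and number below is made up for illustration.

```python
import random
from collections import Counter

def fit_stump(X, y, n_features_to_try, rng):
    """Find the single best question of the form
    'is bacterium f more abundant than threshold t?'."""
    candidates = rng.sample(range(len(X[0])), n_features_to_try)
    best, best_errors = None, len(y) + 1
    for f in candidates:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            left_label = Counter(left).most_common(1)[0][0]
            right_label = Counter(right).most_common(1)[0][0]
            errors = (sum(l != left_label for l in left)
                      + sum(r != right_label for r in right))
            if errors < best_errors:
                best, best_errors = (f, t, left_label, right_label), errors
    return best

def fit_forest(X, y, n_trees=101, seed=0):
    rng = random.Random(seed)
    n_try = max(1, int(len(X[0]) ** 0.5))  # features examined per tree
    forest = []
    for _ in range(n_trees):
        # bootstrap: resample the mice with replacement
        idx = [rng.randrange(len(X)) for _ in X]
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx], n_try, rng)
        if stump is not None:
            forest.append(stump)
    return forest

def predict(forest, microbiome):
    """Run one microbiome through every tree and let the trees vote."""
    votes = [left if microbiome[f] <= t else right
             for f, t, left, right in forest]
    return Counter(votes).most_common(1)[0][0]
```

Trained on data where one bacterium’s abundance truly drives colonisation, a forest like this usually recovers the rule even though most individual stumps are looking at noise—which is the whole point of aggregating many weak trees rather than trusting any one of them.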
When Schubert asked the program to predict the degree of C-diff colonisation, it explained 77 percent of the variation from the antibiotic experiment. When she gave it the simpler task of just predicting whether C-diff would colonise or not—yes or no—it got the right answer 90 percent of the time.
This is encouraging. Still, the team needs to test their program on a different data set than the one they used to build it. And although they measured how accurate it is, they need to show that it’s both sensitive (it rarely misses when a person is at risk) and specific (it doesn’t sound a false alarm when the risk is low). And obviously, they need to test it on people, rather than mice.
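Those two terms are worth pinning down, because a model can post an impressive overall accuracy while failing badly at one of them. A quick sketch, with invented numbers:

```python
def sensitivity_specificity(predictions, truths):
    """Sensitivity: of the cases truly at risk, how many did the model flag?
    Specificity: of the safe cases, how many did it correctly leave alone?"""
    tp = sum(p and t for p, t in zip(predictions, truths))          # true alarms
    fn = sum(not p and t for p, t in zip(predictions, truths))      # missed cases
    tn = sum(not p and not t for p, t in zip(predictions, truths))  # correct all-clears
    fp = sum(p and not t for p, t in zip(predictions, truths))      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: 10 mice, of which 4 were truly colonised.
truths      = [True, True, True, True,  False, False, False, False, False, False]
predictions = [True, True, True, False, False, False, False, False, False, True]

sens, spec = sensitivity_specificity(predictions, truths)
# sens = 0.75 (flagged 3 of the 4 colonised mice)
# spec ≈ 0.83 (cleared 5 of the 6 resistant mice)
```

Note that the same toy model scores 80 percent raw accuracy, which says nothing about whether its mistakes are missed cases or false alarms.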
Still, it’s the right sort of approach. When humans look at complicated data sets, we try to pare things back to manageable simplicities: this bacterium is protective and this one isn’t. Machine-learning avoids this problem, and grapples with all the complexities hidden in the data. “You’re not just looking at one organism but the whole collection,” says Schloss.
Other teams are doing the same. Last year, Sathish Subramanian and Jeff Gordon built a mathematical model that could work out if a baby’s microbiome was maturing at the right pace—if its microbiological age matched its biological one. And Schloss is using the same method to try to predict a person’s risk of colon cancer from their gut microbiome. “Maybe you’d go into the intensive care unit and we’d put you on antibiotics, we could predict your risk of C-diff or colon cancer or any number of diseases,” he says.
If the predictive models work, they could also be used to personalise treatments—another future goal for microbiome research. Rather than just offering everyone the same probiotics, or giving them a faecal transplant (yes—that’s a thing), doctors might be able to tailor a prescription of microbes to a person’s existing community. Given what they’ve got now, what do they need to make them healthier?
“One of my fears with microbiome research is that we’re finding all these associations and not doing anything with it. We have no deliverables,” Schloss says. “My hope is that we could translate this into humans.”
Reference: Schubert, Sinani & Schloss. 2015. Antibiotic-Induced Alterations of the Murine Gut Microbiota and Subsequent Effects on Colonization Resistance against Clostridium difficile. mBio. http://dx.doi.org/10.1128/mBio.00974-15