How You’re Fooled By Meta-Analysis

The number of participants isn’t the only thing that matters


We love nothing more than controversial findings from nutrition research: thrilling headlines about our favorite foods, about meat and fat, gluten and carbohydrates, protein and milk. Naturally, we prefer articles that cite academic papers, so that later, over dinner with friends, we can eyeball their plates and say, “But, you know, there’s actually a study saying you’ll get cancer from that!”

But be honest: how carefully did you actually read that study, and are you truly able to evaluate its quality? How deep is your knowledge of the facts you are so cocky about?

“But it was a meta-analysis combining over 32 observational studies, with more than 500,000 participants altogether,” you will say. “They cannot all be wrong!”

Trust me, they can.

Meta-analyses are summaries of multiple studies and their conclusions.

It’s logical: many researchers investigate one topic, and in the end everything is combined into a final thesis. With more data, statistical power grows and findings become more robust. But selecting the right studies without steering the results in a particular direction is a delicate craft.

Economists and the food industry have discovered that the term “meta-analysis” leaves a good impression, and they use it to fool people and to exploit science to commercialize their products.

Don’t compare apples and oranges

Nutrition is complex. In fact, food intake is more complex than many pharmaceutical applications or the biochemistry of drugs. The highly individual reactions of people to one and the same food puzzle scientists constantly. Food science is therefore a difficult field for meta-analyses. Studies often rely on clinical trials and nutritional interventions that deviate in fine details of their methodology. And however subtle these differences may be, however trivial they seem, they are anything but.

The most crucial point is the question of what is compared to what.

Differences in interventions define whether studies are comparable, and whether they’re suitable for meta-analysis. For example, a meta-analysis investigating the relationship between red meat and blood lipids examined 50 studies, most of them comparing meat with other types of meat and a few comparing meat with plant-based foods. How can such findings be logically summarized?

For a good meta-analysis, it is not the number of studies and participants that matters, but how comparable their methods are. Only if that is the case can you draw valid conclusions.
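This comparability can actually be quantified. A minimal sketch of Cochran’s Q and the I² index, the standard statistics for between-study heterogeneity — all effect sizes below are invented for illustration, not taken from any real analysis:

```python
def i_squared(effects, ses):
    """Cochran's Q and the I^2 heterogeneity index for a set of study
    effect estimates and their standard errors."""
    w = [1 / se**2 for se in ses]                   # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled)**2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # share of variation beyond chance
    return pooled, i2

# Hypothetical numbers: five "meat vs. meat" trials (small effect) thrown
# together with five "meat vs. plant-based" trials (large effect).
same_comparator = [0.10, 0.12, 0.09, 0.11, 0.10]
mixed_comparators = same_comparator + [0.60, 0.55, 0.65, 0.58, 0.62]

print(i_squared(same_comparator, [0.05] * 5))    # low I^2: the studies agree
print(i_squared(mixed_comparators, [0.05] * 10)) # high I^2: apples and oranges
```

An I² near zero says the studies tell one consistent story; an I² of 80–90% says you pooled things that shouldn’t have been pooled in the first place.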

The quality of studies

Quality over quantity. There are countless studies that can refute or confirm a thesis; you’ll always find something that supports your opinion. The question is what quality these studies have. The problem is that most of us are not able to evaluate that quality. By now, probably everybody knows how to use PubMed or Google Scholar and how to read the abstract of an academic paper. But that’s not all there is to it. The quality lies in the details, and everybody outside of science will have a hard time getting there.

Meta-analyses are often misused to mix in studies of inferior quality and thus produce fuzzy results.

For example, 10 excellent, unambiguous studies can be mixed with 20 studies of poor quality to arrive at a final finding of “the evidence was too inconsistent to draw definite conclusions”. But it’s not the underlying evidence that is inconsistent; it’s the selection that buried the high-quality studies.
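This dilution is easy to simulate. A sketch using fixed-effect, inverse-variance pooling, with entirely invented numbers (ten precise trials showing a clear effect, twenty imprecise ones scattered around zero):

```python
def pooled(effects, ses):
    """Fixed-effect inverse-variance pooled estimate."""
    w = [1 / se**2 for se in ses]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Ten rigorous trials agreeing on a clear effect...
good_e, good_se = [0.50] * 10, [0.05] * 10
# ...mixed with twenty sloppy null results with triple the standard error.
poor_e, poor_se = [0.00] * 20, [0.15] * 20

print(pooled(good_e, good_se))                     # clear signal
print(pooled(good_e + poor_e, good_se + poor_se))  # pulled toward the null
```

Note that even inverse-variance weighting, which down-weights imprecise studies, lets the pooled estimate drift toward the null; with cruder weighting, or with biased rather than merely noisy studies, the dilution is far worse.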

Choice and manipulation go hand in hand

Choosing the studies for your meta-analysis is a balancing act. Without strictly defined inclusion and exclusion criteria, you easily become biased and choose only studies that support your thesis.

For scientists, there’s no tool more powerful than the choice of studies for a meta-analysis.

A prime example is the 2014 meta-analysis that examined the relationship between saturated fats and coronary artery disease. After publication, dozens of scientists were outraged and pointed out gross errors in the analysis, whereupon the authors half-heartedly corrected the falsely published facts. But the damage was already done: news outlets had jumped on the story, and it spread like wildfire.

Interestingly, the authors of the meta-analysis had simply left several studies out of the calculations, because they didn’t support the thesis. Only after other scientists were given access to the raw data could the errors and the unscientific approach be exposed.

Everything for the sake of the controversy, right?

Can we trust meta-analyses at all?

Meta-analyses don’t have to be mere statistical summaries. They show us why studies with the same methods can produce different results. They can advance methodologies and improve our understanding of how research works.

If the analysis was prepared well, we can absolutely trust it. But we shouldn’t trust it more just because it says “meta-analysis” in the title; the label doesn’t mean “higher quality” or “more reliable”. If the chosen studies vary too much in their methods, the “meta” loses its value: the analysis is useless and draws false or incomplete conclusions. In that case, you are better off sticking to single well-conducted investigations.

To create useful and reliable meta-analyses, and to prevent misuse, we have to improve how science works: we need profound peer review that goes further than simply checking whether standard procedures were followed, and we need authors to share their raw data so that other scientists can check their calculations. They must make their decisions more transparent.

And we have to be more aware and less naive. This way we can prevent fake news about diets and health spreading through media in the first place.

For further reading
  • Eye-opening article from Neal D. Barnard (2016) on misuse of meta-analysis in nutrition research.
  • How the results of a meta-analysis can confuse rather than clarify therapeutic dilemmas when clinical heterogeneity among trials is ignored: Lecky (1996).
