Citizens expect 100 percent scientifically correct statements from the Federal Minister of Health. When it comes to nutrition, however, the epidemiologist (!) and self-proclaimed “saltless vegetarian” Karl Lauterbach is not so particular about the truth – preferring instead to spread ideologies that he has come to love.
Lauterbach recently warned in the “Bild am Sonntag” of a major causal danger on the plate: “Sausage, schnitzel or roast every day causes cardiovascular diseases, strokes, diabetes, colon cancer. Unfortunately, too much meat also makes us age faster.”
The problem with this statement is that there is no causal evidence that “sausage and meat every day” causes the diseases mentioned. And certainly not alone, i.e. as the sole trigger – because the major widespread diseases always rest on a highly complex mix of potential causes arising from the individual interplay of numerous personal lifestyle factors. None of this is easy to pin down scientifically, let alone to disentangle what exactly is responsible for what, and in what form.
From a scientific point of view there are, if anything, only assumptions and hypotheses based on vague correlations (statistical associations). And that, in turn, is because nutrition research is subject to massive limitations that rule out any cause-effect evidence – a “small selection” from this phalanx of scientific limitations follows at the end of this article.
Accordingly, Lauterbach could at best say: “Sausage, schnitzel or roast every day could probably cause cardiovascular diseases, strokes, diabetes, colon cancer.” (This, incidentally, is how even the German Society for Nutrition, DGE, has phrased it for several years: there, the talk is always of “nutrition-related” diseases.)
The World Cancer Report (WCRF) from 2020 is just as clearly couched in the subjunctive: “According to estimates, 30–50 percent of all cancer cases could be prevented by maintaining a healthy body weight, a healthy diet and sufficient physical activity, and by avoiding occupational carcinogens, environmental pollutants and certain long-term infections.”
But our Federal Minister of Health does not do that – consciously or unconsciously? Either way, constructing a “causal truth” al gusto from such weak data is simply unworthy of a federal minister who has committed himself to “acting based on evidence” as his public maxim. His untenable statements seem particularly paradoxical given that he himself is an epidemiologist – and should therefore know, and communicate correctly, both the limited informative value of observational studies and the difference between correlation and causality. But he doesn’t. Why not?
An inquiry to Lauterbach’s press office in the Federal Ministry of Health (BMG) resulted in “tacit refusal by the head of the house”; specifically, the BMG preferred not to comment on the minister’s current causal claims about meat. Regrettably, it also remains nebulous what Lauterbach means when he says that “too much meat unfortunately makes us age faster.” At least this offers an entertaining starting point for creative speculation about the notions on which this – likewise in no way scientifically proven – statement might be based.
In that spirit: here’s to amusing and stimulating conversations! And to go with them, a serving of the “Best of Limitations” – which our Federal Minister of Health may have forgotten amid all the Corona communication …
The limitations of nutritional research are extraordinarily diverse, but very easy to understand. Even if, for example, people keep babbling about “strengthening the immune system by eating XYZ” every winter infection season, it does not change the “eternal fact”: nutritional research cannot provide any causal evidence (i.e. cause-effect relationships), because essential prerequisites for this cannot be fulfilled in the study design. The relevant limitations that downgrade ecotrophological studies to crystal-ball level are:
Observational studies are the foundation of nutritional research. These studies, on which the current (semi-)knowledge about nutrition is based, cannot provide any evidence (causality), only vague assumptions and hypotheses derived from weak correlations.
Correlations are statistical associations about whose actual connection nothing is known. Example: red wine drinkers live longer. Is it the red wine, or the “rest” of their lifestyle – because these people have more money, better healthcare, better jobs, etc.? Correlation does not establish causality!
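To make the point tangible, a minimal sketch in Python with invented numbers: a single background factor – call it socioeconomic status – drives both wine consumption and lifespan here, while the wine itself does nothing at all. The correlation appears anyway.

```python
# Minimal sketch with invented numbers: one background factor ("socioeconomic
# status") drives both red-wine consumption and lifespan; the wine itself has
# zero effect -- yet a clear correlation shows up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ses = rng.normal(0, 1, n)                        # confounder: money, jobs, healthcare ...
wine = 2 + 0.8 * ses + rng.normal(0, 1, n)       # wine consumption rises with SES
lifespan = 80 + 3 * ses + rng.normal(0, 5, n)    # lifespan depends on SES only, not on wine

r = np.corrcoef(wine, lifespan)[0, 1]
print(f"correlation(wine, lifespan) = {r:.2f}")  # clearly positive (~0.3)
# An observational study would conclude "red wine drinkers live longer",
# although the data were generated with zero causal effect of wine.
```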
Causality – the cause-effect relationship – is the scarce commodity par excellence in ecotrophology. A simple example: people with scurvy have a vitamin C deficiency. If the deficiency is corrected, the disease disappears completely. Cause: vitamin C deficiency → effect: scurvy.
Only high-quality intervention studies can provide causal evidence for the decisive research targets (the hard clinical endpoints) such as heart attacks, strokes, cancer or life expectancy. Such studies do not exist in nutritional science – and they never will. Instead, food researchers have to make do with surrogate parameters.
Surrogate parameters are substitute values such as blood pressure or blood markers. Their informative value is limited, and they are usually only available as correlations anyway.
Randomization – the random allocation of people to the study groups so that those groups are comparable – is one of the most important study factors. In the area of the “big nutritional questions”, however, randomization is unrealistic if not impossible, because it is not feasible to prescribe a particular diet to a randomly assembled group and expect the participants to stick to it over the years or decades required.
If you wanted to know, for example, whether a vegetarian diet is healthier than “eating everything” and divided the test persons randomly into these two groups, which steak lover would like to hear: “Alea iacta est – the die has been cast: you have been drawn into the vegetarian group and are not allowed to eat meat for the five years of the study period.” Conversely, one does not want to imagine the outcry if a vegetarian were randomized into the omnivore group.
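Why insist on randomization in the first place? A small simulation – the “health consciousness” trait and all numbers are made up – shows what the coin flip achieves and what self-selection ruins:

```python
# Small simulation with a made-up hidden trait ("health consciousness"):
# with self-selection the trait piles up in one group; with a random draw
# the two groups stay comparable.
import random
import statistics

random.seed(1)
people = [random.gauss(0, 1) for _ in range(10_000)]   # hidden "health consciousness" score

# Self-selection: the more health-conscious, the likelier to choose the veggie arm.
veg_self, omni_self = [], []
for h in people:
    goes_veg = random.random() < 0.5 + 0.3 * (h > 0)   # health-conscious people opt in more often
    (veg_self if goes_veg else omni_self).append(h)

# Randomization: a coin flip decides, regardless of the trait.
shuffled = people[:]
random.shuffle(shuffled)
veg_rand, omni_rand = shuffled[:5_000], shuffled[5_000:]

print("self-selected:", round(statistics.mean(veg_self), 2), "vs", round(statistics.mean(omni_self), 2))
print("randomized:   ", round(statistics.mean(veg_rand), 2), "vs", round(statistics.mean(omni_rand), 2))
# The self-selected groups differ markedly in the hidden trait; the randomized
# ones are practically identical -- only then is a comparison fair.
```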
Add to that: placebo meat would also be needed, but that does not exist either. In general, there is not a single placebo food – and therefore no placebo group as a study arm. A great pity, because that is the best way to research the effectiveness of an intervention.
Double-blinding means that neither the physician running the study nor the subjects (study participants) know who is in the intervention, comparison or placebo group. This increases the probability of genuine results, because no expectations and wishes are read into the study (which usually distorts the findings). Unfortunately, nutritional research is flying blind here, because on the plate neither single nor double blinding is possible.
And then there is the dose-response relationship: most studies show what is known as a J- or U-shaped curve, meaning, for example, that people with both low and high sausage consumption die earlier than those with moderate consumption. Accordingly, there is no meaningful dose-response relationship pointing to causality, in which increasing consumption would have to be accompanied by an increasing risk.
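What such a curve does to any “more meat, more risk” story can be sketched with invented numbers: over a U-shaped relationship, a straight-line fit – the monotonic dose-response a simple causal story would predict – finds next to nothing, although the data are anything but flat.

```python
# Sketch with invented numbers: a U-shaped risk curve. A straight-line fit
# finds almost no slope, although the data are anything but "no relationship".
import numpy as np

rng = np.random.default_rng(42)
portions = rng.uniform(0, 10, 5_000)                              # weekly portions (arbitrary units)
risk = 5 + 0.4 * (portions - 5) ** 2 + rng.normal(0, 2, 5_000)    # lowest risk at moderate intake

slope = np.polyfit(portions, risk, 1)[0]
print(f"linear dose-response slope: {slope:+.3f}")                # close to zero

# Mean risk per consumption band makes the U visible:
for lo in range(0, 10, 2):
    band = risk[(portions >= lo) & (portions < lo + 2)]
    print(f"{lo}-{lo + 2} portions/week: mean risk {band.mean():.1f}")
```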
On top of that, the amounts of food consumed – the very basis of any such study – always rest on the unverifiable self-reports of the test persons. And here we know: people like to cheat, the answers are (almost) never 100 percent honest – keyword “underreporting”. For the sake of a clear conscience, supposedly “healthy” foods are over-reported, while the “bad” ones are “corrected downwards”. Ergo: the data basis cannot be taken seriously, because it is anything but valid. And often all of this is asked only once, at the start of a study that then runs for ten years or more.
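How badly this can distort the numbers is easy to sketch with a toy model (nothing here is real data): the true risk rises by one unit per portion, but the heavier the consumption, the larger the unreported share – and the slope estimated from the questionnaire answers comes out almost twice as steep.

```python
# Toy model, all numbers invented: true risk rises by 1 unit per portion, but
# heavy consumers report only part of what they actually eat -- the slope
# estimated from the questionnaire data comes out far too steep.
import numpy as np

rng = np.random.default_rng(7)
true_intake = rng.uniform(0, 10, 20_000)                      # actual portions per week
risk = 10 + 1.0 * true_intake + rng.normal(0, 3, 20_000)      # true effect: +1 per portion

# Underreporting: the more someone eats, the larger the unreported share.
reported = true_intake * (0.9 - 0.04 * true_intake)

print(f"slope vs. true intake:     {np.polyfit(true_intake, risk, 1)[0]:.2f}")  # ~1.0
print(f"slope vs. reported intake: {np.polyfit(reported, risk, 1)[0]:.2f}")     # ~1.9 -- same people, same risk
```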
Add to that: even if one could trust the statements of the test persons, no one can consistently stick to a prescribed, study-standardized diet for years or decades – least of all when individual foods or food groups are to be analysed and therefore consumed. Study requirements in the form of instructions such as “Eat at least one serving of broccoli, cauliflower or romanesco every day” or “Eat at least seven apples and pears a week” may work for the first few months – but at some point you no longer feel like it and may even develop a real aversion to the “intervention diet”.
It would be completely absurd, for example, to try to research the health effect of ready meals such as canned ravioli: the continued forced consumption – which nobody would go along with for more than a week or two anyway – would certainly also raise fundamental “ethical-culinary” questions. And that’s not all – have you heard of confounding factors?
These are the “infamous” confounders that have an undesirable, distorting effect on the results of observational studies: lifestyle factors such as sex, money and leisure time, all sorts of emotional and social interpersonal issues, but also “banal” things such as exposure to sunlight and air quality in different study countries. All of this influences and falsifies the results (sometimes massively), yet is not recorded in the questionnaires. The study designers use various statistical methods to try to eliminate these distortions – but no one knows for sure which of these factors distort the results, and in what way.
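A sketch with fabricated data illustrates the dilemma: meat is given zero real effect, while a single lifestyle factor (smoking, as a stand-in) drives both meat consumption and risk. Adjusting for the measured factor recovers the truth – but only because it was measured at all.

```python
# Sketch with fabricated data: statistical "adjustment" only removes the
# confounders that were actually measured. Meat has zero real effect here;
# smoking drives both meat intake and risk.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
smoking = rng.binomial(1, 0.3, n)                      # the lurking lifestyle factor
meat = 3 + 2 * smoking + rng.normal(0, 1, n)           # smokers eat more meat (toy assumption)
risk = 5 + 4 * smoking + rng.normal(0, 1, n)           # risk depends on smoking only

X_crude = np.column_stack([np.ones(n), meat])              # model: risk ~ meat
X_adjusted = np.column_stack([np.ones(n), meat, smoking])  # model: risk ~ meat + smoking

crude = np.linalg.lstsq(X_crude, risk, rcond=None)[0][1]
adjusted = np.linalg.lstsq(X_adjusted, risk, rcond=None)[0][1]

print(f"apparent meat effect, unadjusted: {crude:.2f}")     # clearly positive, although meat does nothing
print(f"apparent meat effect, adjusted:   {adjusted:.2f}")  # ~0 -- but only because smoking was measured
```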
And finally there is publication bias: the imbalance of the publications. The body of studies lists heavily to one side, because papers that arrive at contemporary, socially accepted results are more likely to be published than those that observe exactly the opposite. Example: two studies examine the connection between red meat and heart attacks. Only one of them observes a positive correlation – “the more bad steak, the more heart attacks” – so this paper will probably be published sooner, while the other, which finds nothing or even an inverse correlation, disappears into the drawer.
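How this skews the picture can be simulated (no real studies involved): 200 small studies all sample from a world in which the true effect is exactly zero, and only those that happen to “look positive” make it into print.

```python
# Simulated studies, nothing real: the true effect is exactly zero, yet the
# "published" subset -- only the positive-looking results -- reports a solid effect.
import numpy as np

rng = np.random.default_rng(11)
n_studies, n_per_study = 200, 50

estimates, z_scores = [], []
for _ in range(n_studies):
    sample = rng.normal(0.0, 1.0, n_per_study)            # one study's data, true effect = 0
    est = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n_per_study)
    estimates.append(est)
    z_scores.append(est / se)

published = [e for e, z in zip(estimates, z_scores) if z > 1.64]   # only "positive findings"
print(f"published: {len(published)} of {n_studies} studies")
print(f"mean effect, published literature: {np.mean(published):.2f}")   # clearly > 0
print(f"mean effect, all studies:          {np.mean(estimates):.2f}")   # ~0
```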
The following finding, which “Der Spiegel” reported in mid-2017, fits in with all of this: “Nutrition, of all things, a topic that affects everyone, defies a few basic rules of serious research: randomized double-blind studies? An absurd notion. Nutrition research has methodological weaknesses and scientific gaps. The fundamental question of lifestyle, of all things, is therefore surrounded by a wreath of myths made up of speculation and unproven hypotheses.”
Due to all these limitations, there is not a single clinical study meeting the highest EBM (evidence-based medicine) criteria that has been able to provide causal evidence for even a single hard endpoint. There are myriads of correlations, plus small, weak, short RCTs (randomized controlled trials) and mouse studies, all of which (can) only evaluate surrogate parameters. Strokes, heart attacks, cancer or mortality as a hard endpoint, causally proven for any diet, any special food (group) or even a single ingredient? Zero.
It is interesting in this context that even the DGE (German Society for Nutrition) publicly stated in November 2019 that studies delivering causal evidence are not to be expected in the future either: “However, it must be taken into account that in the area of nutrition, other paths must be taken for dietary recommendations than hoping for studies that cannot be carried out in practice…” Or, to put it in Donald Trump style: “We lack valid causal evidence, so we have to find ‘alternative evidence’ for healthy eating…” And what that is based on – well, you know that by now, don’t you?