Statistics and Causation
One of the main reasons we invest our time in research (whether performing the science or reading the reports) is to understand how to avoid problems. We want to know what we can do to improve our chances of a good outcome and enjoy the best health. We want to know which treatments are most effective when a problem arises. In short, we want to know what “causes” things: what causes a problem and what causes healing.
One of the first things you learn as a researcher is that demonstrating cause is terribly difficult. Why? To show causation, you have to show that it is this particular thing, and not all the other possible things, that is making something happen. When research was mostly about infectious diseases, a set of criteria was created to assess how strongly the evidence supported causation.
Known as Hill’s Criteria, the list includes things like:
- The Cause must always happen before the Outcome
- The Cause should have a stronger relationship with the Outcome than other potential causes have with the Outcome.
- If you have more of the Cause you should see more of the Outcome. Conversely, if you take away the Cause you shouldn’t see the Outcome anymore.
- The Cause and Outcome relationship should be seen in many studies using many methods.
These criteria were brilliant for helping scientists identify the causes of diseases like polio or syphilis, where a single microbe is responsible for the disease. They can also work for some genetic conditions where a small group of mutations or sequences can be isolated as responsible. They fit less well for conditions such as hypertension that result from an interaction of diet, exercise, smoking, and genetics. Unfortunately, in pregnancy and birth, most problems are the result of the interaction of multiple factors rather than one particular microbe or genomic sequence.
Researchers know this and read the literature accordingly. Different types of studies provide different types and strength of support for claims of causation. Each study answers a small question that fits like a puzzle piece into the big question. Slowly the research begins to reveal a picture that helps us understand how all the pieces fit together and how the condition happens. No single study provides proof of causation by itself.
Even “definitive” studies are only definitive because enough research has been done to provide strong support for a question, which allows a study to bring the variables of interest together. Remove the research that supports that question and you lose the foundation that allowed the final question to be asked. Think of it like a puzzle: if someone removes the border pieces, you have no way of knowing how large the puzzle was intended to be, how many pieces are missing, or what images are contained on those pieces.
Unfortunately, when research is reported in various media, you, as a reader, have no control over the expertise of the media writer. It is easy to be swept up in persuasive writing about studies that “prove” something, especially if the study proves something you believed to be true in the first place. Good science writers don’t make such claims about a single study because they understand the research process and the small contribution each study makes to the overall understanding.
Inexperienced science writers do make such claims, and in the process they can erode public trust in the scientific process. Friends and acquaintances have sent me articles claiming all kinds of nonsense about research definitively proving the cause of something, sharing them as reasons they no longer trust “the medical establishment.” What I usually find when I investigate the claims is that the writer is (probably unintentionally) misrepresenting the findings.
- If you don’t know that one study cannot be used to claim causation;
- And if you don’t know that scientists see each study as one small piece making one estimate of the truth;
- And if you don’t read all the other papers written on a subject and synthesize the findings…
It is easy to be angry at the results of one study that shows a result you don’t like. It is just as easy to latch on to one study that shows the results you do like.
It takes time to unlearn the natural human tendency to imply causation from every study. But you can do it. It may help to consider examples of ecological fallacies – interesting pieces of population data that if assumed to reveal the entire picture would lead to very wrong conclusions. Here is an article discussing two wrong conclusions about the cause of polio.
The Birth Worker Survey
We’ve already discussed that the Birth Worker Survey cannot provide reliable estimates because of its sampling. However, it can provide us with some wonderful examples of why it takes multiple studies with multiple methods to prove causation.
For example, the survey data reveals that all participants who gave birth via cesarean reported dissatisfaction with their birth, but only 12% of participants who gave birth vaginally reported dissatisfaction. The results were statistically significant with a p-value of 0.006, and having a cesarean had an odds ratio of 29 for dissatisfaction compared to vaginal birth. Does this prove that cesarean birth causes dissatisfaction?
We could do the same analysis comparing hospital to home birth. No one who reported home birth reported being dissatisfied, while 40% of those who gave birth in a hospital reported dissatisfaction. Again, this was statistically significant with a p-value of 0.018. Does this mean it was being in a hospital that caused the dissatisfaction?
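Numbers like these come from simple 2x2 contingency tables. As a sketch of how such a p-value and odds ratio are computed (using made-up counts for illustration, since the survey’s actual counts aren’t reproduced here), Fisher’s exact test handles small samples, and a continuity correction gives a finite odds ratio when every participant in one group reports the same outcome:

```python
from scipy.stats import fisher_exact

# Hypothetical counts for illustration only; these are NOT the
# survey's real numbers.
#                  dissatisfied  satisfied
cesarean = [5, 0]   # all cesarean births reported dissatisfaction
vaginal = [6, 44]   # 6 of 50 (12%) vaginal births reported dissatisfaction

table = [cesarean, vaginal]
sample_or, p_value = fisher_exact(table)

# The raw sample odds ratio is infinite because of the zero cell,
# so apply the Haldane-Anscombe correction (add 0.5 to every cell)
# to get a finite estimate.
a, b = cesarean
c, d = vaginal
corrected_or = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

print(f"p-value: {p_value:.4f}")
print(f"corrected odds ratio: {corrected_or:.1f}")
```

Note that a significant p-value and a large odds ratio in code like this still only describe an association. They say nothing about whether the exposure caused the outcome, or whether something else (for example, the complications that led to a cesarean or a hospital transfer in the first place) drove both.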