For today’s post, you might want to open a second window at the Cochrane Collaboration Website so you can scroll through what is available while we talk. One thing I have been trying to communicate (over and over and over) is that each study is only one small piece of the puzzle researchers use to help figure out what is going on. Today we are going to talk about one tool researchers use to synthesize the available data, the Systematic Review.
Sometimes you want to understand an outcome that not everyone in the group will experience in the same way. The best example in pregnancy and birth is the length of labor. When you try to calculate an average length of labor, you need to decide how to handle the labors that end in a cesarean birth.
You see, some women will have zero labor because they will have a planned cesarean. Other women will labor for a time and then have a cesarean. Most of the women will labor and give birth vaginally. If you include the cesarean births, you may get falsely low estimates of the length of labor. But if you exclude them, you lose data that helps you understand the trajectory of labor. What should you do?
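To make the trade-off concrete, here is a minimal sketch in Python. All the labor lengths below are made up for illustration:

```python
# Hypothetical labor lengths in hours. Planned cesareans contribute 0 hours;
# labors ending in cesarean are "censored" -- we know labor lasted at least
# that long, but not how long it would have continued.
planned_cesarean = [0.0, 0.0]            # no labor at all
censored_labors  = [6.0, 9.5]            # labored, then had a cesarean
vaginal_births   = [8.0, 12.0, 10.0, 14.0, 11.0]

# A naive average over everyone pulls the estimate down:
everyone = planned_cesarean + censored_labors + vaginal_births
naive_mean = sum(everyone) / len(everyone)

# Excluding cesareans throws away the information that two labors
# lasted *at least* 6.0 and 9.5 hours:
vaginal_mean = sum(vaginal_births) / len(vaginal_births)

print(f"naive mean:   {naive_mean:.2f} h")
print(f"vaginal-only: {vaginal_mean:.2f} h")
```

Neither number is "the" length of labor; survival-analysis methods exist precisely to handle censored observations like these.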
We are nearly done with our discussion of statistics, so I wanted to take a day to discuss study quality. When researchers talk about the quality of a study, they are considering the quality of the total package, not only the statistical significance of the findings. In fact, the quality of the study will affect the value of the findings.
There are two documents you should be familiar with before you begin to assess the quality of a study. The first is the CONSORT Statement and the second is the STROBE Statement. Continue reading
One of the main reasons we invest our time into research (whether performing the science or reading the reports) is to understand how to avoid problems. We want to know what we can do to improve chances of a good outcome and have the best health. We want to know what treatments are the most effective when a problem arises. In short, we want to know what “causes” things — what causes a problem and what causes healing.
One of the first things you learn as a researcher is that demonstrating cause is terribly difficult. Why? To show causation, you have to show that it is this particular thing, and not all the other possible things, that is making something happen. When research was mostly about infectious diseases, a set of criteria (the Bradford Hill criteria) was created to assess how strongly the evidence supported causation.
We talked about a random sample last week. Today we are going to talk about randomized controlled trials. The use of the term randomized in this context does not refer to the sampling method. Randomized controlled trials typically use convenience sampling, meaning they recruit whoever is available and meets the enrollment criteria. Because of this, randomized controlled trials are still subject to sampling bias.
But what randomized controlled trials provide is study groups who should not differ on the characteristics that might confound the outcome. For example, women who give birth at a birth center may think differently about birth than women who give birth at a hospital. Continue reading
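The key move can be sketched in a few lines of Python. The participant list and group sizes below are invented; the point is that chance, not preference, decides who lands in each arm:

```python
import random

# A hypothetical enrollment list (a convenience sample -- whoever met the
# criteria and agreed to participate). Randomization decides *group
# assignment*, not who gets recruited.
participants = [f"participant_{i}" for i in range(1, 21)]

random.seed(42)              # fixed seed so the sketch is reproducible
random.shuffle(participants)

# Split the shuffled list in half. Because assignment is random,
# characteristics that might confound the outcome (like attitudes
# toward birth) should balance out between the arms on average.
treatment = participants[:10]
control = participants[10:]

print(len(treatment), len(control))
```

Real trials use more careful allocation schemes (blocked or stratified randomization), but the principle is the same.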
On Wednesday we talked about why sample size matters. Today I want to focus on how the sampling method affects the data. Basically, there are two ways to obtain a sample. One way is to randomly select members of the population, and the other is to use whomever is available.
To understand the difference, I want you to imagine you are a member of a midwifery organization. That midwifery organization wants to learn something about its members — maybe they want to know how many families the average midwife works with each month. How is sampling method likely to affect the data the organization collects?
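A small simulation makes the contrast visible. The population size, the caseload numbers, and the assumption that only the busiest midwives happen to respond are all invented for illustration:

```python
import random

random.seed(1)

# Hypothetical population: monthly caseload for 500 midwives.
population = [random.randint(1, 10) for _ in range(500)]
true_mean = sum(population) / len(population)

# Random sample: every member has an equal chance of being selected.
random_sample = random.sample(population, 50)

# Convenience sample: suppose (purely as an assumption for illustration)
# that only the 50 busiest midwives return the survey.
convenience_sample = sorted(population)[-50:]

print(f"true mean:        {true_mean:.2f}")
print(f"random sample:    {sum(random_sample) / 50:.2f}")
print(f"convenience:      {sum(convenience_sample) / 50:.2f}")
```

The random sample lands near the true average; the convenience sample tells you about whoever answered, which may be a very different group.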
On Monday we talked about regression techniques and I explained that finding a statistically significant result would be difficult because of our sample size. Today we’ll explore how sample size affects the results you will get.
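One way to see the effect is through the standard error of the mean, which shrinks with the square root of the sample size. A quick sketch, assuming a hypothetical standard deviation of 4 hours for labor length:

```python
import math

# standard error = standard deviation / sqrt(n)
sd = 4.0  # hypothetical spread of labor lengths, in hours

for n in (10, 40, 160):
    se = sd / math.sqrt(n)
    print(f"n = {n:4d}: standard error = {se:.2f}")
```

Notice that quadrupling the sample size only halves the standard error, which is why small studies struggle to reach statistical significance.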
Like the other statistical techniques we’ve talked about, regression allows us to examine the relationship between variables. But regression goes a step beyond the Chi-Square and the T-Test because it can handle more than two variables at once: you can find the correlation of multiple variables at one time, and then see what portion of the variation is due to each individual variable. This allows you to control for potentially confounding variables, and is helpful for supporting the presence or absence of causation.
In studies that use regression techniques you may read the terms dependent and independent variables. These terms describe the relationship of the variables in the regression equation. The independent variable stands on its own and is considered to change on its own. The dependent variable is “dependent” because regression analyzes the portion of its change that responds to changes in the independent variables.
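A minimal one-variable sketch shows the roles the two kinds of variables play. The numbers are made up (hypothetical hours of labor support as the independent variable, a satisfaction score as the dependent one), and real studies would use many more observations and usually several independent variables:

```python
# Least-squares fit of y = intercept + slope * x, computed by hand.
x = [2.0, 4.0, 6.0, 8.0, 10.0]   # independent variable (made up)
y = [3.0, 4.5, 6.0, 7.5, 9.0]    # dependent variable (made up)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# slope = covariance(x, y) / variance(x)
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f} * x")
```

The slope is the estimated change in the dependent variable for each one-unit change in the independent variable; with multiple independent variables, each gets its own slope.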
On Wednesday we used a T-Test to see if there was a difference in mean labor time between women who worked as a doula for income and those who worked as a doula as a hobby. The next obvious question is: what is a T-Test, and why did we choose to use it?
Remember back to last week when we talked about the chi-square? With a chi-square we were able to see if two groups differed on a characteristic that was a categorical variable. This means we could look for differences between basically yes-or-no categories. For example, those who had gestational diabetes and those who did not, or those with an intact perineum and those without. This works well for some things, but what if you wanted to know how big the difference was?
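As a preview of where the T-Test takes us, here is a sketch of the pooled two-sample t statistic with invented labor times. It assumes roughly equal variance in the two groups (the classic Student's t-test assumption):

```python
import math

# Hypothetical labor times (hours) in two small groups:
group_a = [8.0, 10.0, 9.0, 11.0, 12.0]
group_b = [12.0, 14.0, 13.0, 15.0, 11.0]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

na, nb = len(group_a), len(group_b)
# Pooled variance: a weighted average of the two group variances.
pooled = ((na - 1) * sample_var(group_a) + (nb - 1) * sample_var(group_b)) \
         / (na + nb - 2)
# t = difference in means, scaled by its standard error.
t = (mean(group_a) - mean(group_b)) / math.sqrt(pooled * (1 / na + 1 / nb))

print(f"t = {t:.2f}")
```

Unlike the chi-square, the t statistic works directly with the size of the difference between the group means, not just category counts.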
On Monday we talked about how p-values tell us the probability of obtaining the result if the null hypothesis (the status quo) is true. Today we turn our attention to the confidence interval and the additional information using a confidence interval provides.
There are two things you need to remember to make sense of a confidence interval. First, your sample provides an estimate of the true value. Second, if you took a different sample, you would get a different estimate. So what does this mean? Continue reading
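Those two ideas can be put into a rough sketch of a 95% confidence interval for a mean, using invented labor lengths and the normal critical value 1.96 (a t critical value would be more appropriate for a sample this small):

```python
import math

# Hypothetical sample of labor lengths (hours):
sample = [9.0, 11.0, 8.0, 12.0, 10.0, 13.0, 7.0, 10.0]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)     # how much the sample mean varies sample to sample

# Approximate 95% interval around the estimate:
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```

The interval reflects the second point above: a different sample would give a different estimate, and the interval describes the range of values compatible with the data.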