You may have been wondering why I had not discussed qualitative research in this series. The un-glamorous answer is that while qualitative research helps to inform practice, it doesn’t actually use statistics as we think about them.
Statistics lives in the world of numbers, and is used in research that is called quantitative — basically because it counts things. But qualitative research isn’t about counting things. It is about openly exploring an area to gain perspective rather than statistical significance.
So while quantitative studies ask people to complete surveys and provide blood samples for testing, qualitative studies ask people open-ended questions or observe how they perform a task. This type of information is useful in a few different contexts.
One example is in early stages of research, before much is known about the problem being addressed. A common example is understanding different approaches to health in sub-populations to help identify sources of disparity. While counting the number of people with health insurance may give you a measure of “access,” interviewing people without health insurance helps you understand why certain people are falling through the cracks in the system. You need both types of research because they work together to help identify problems and solutions.
The differences don’t stop there. Remember we talked about the importance of sample sizes to quantitative studies? In qualitative studies the rules about sample size are different because you are not looking at distributions of means. Instead, your sample reflects the number of people needed to reach what is called “saturation”: the point at which talking to new people no longer yields new information.
To get a feel for the differences, I’ve included links to two studies you may find interesting.
Pay attention to the results, as this will give you an idea of what you can learn from qualitative research.
Last time we talked about the unique contributions of a systematic review. Today we will talk about how meta-analysis informs our practice. Remember, these are both techniques that synthesize existing data. This means just like a systematic review, a meta-analysis must be performed with rigor. A very specific question should be asked, and inclusion and exclusion criteria defined before collecting available studies. Multiple databases must be searched, and counts of excluded studies and the reasons should be kept.
Where a meta-analysis differs is in the actual analysis. Remember when we talked about the importance of sample size to obtaining accurate estimates? Meta-analysis uses statistical techniques to pool the data from multiple studies, providing us with better estimates. Here is an example: Vaginal birth after two caesarean sections (VBAC-2) – a systematic review with meta-analysis of success rate and adverse outcomes of VBAC-2 versus VBAC-1 and repeat (third) caesarean sections.
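To see what “pooling” means mechanically, here is a minimal sketch of one common approach, fixed-effect inverse-variance weighting. The success rates and sample sizes below are invented for illustration, not taken from the cited VBAC-2 review:

```python
import math

# Hypothetical studies: each reports a success proportion and a sample size.
# (These numbers are made up, not drawn from any real review.)
studies = [
    (0.66, 120),   # (success rate, number of women)
    (0.74, 300),
    (0.70, 85),
]

def pooled_estimate(studies):
    """Fixed-effect, inverse-variance weighted pooling of proportions."""
    weights, weighted_sum = 0.0, 0.0
    for p, n in studies:
        var = p * (1 - p) / n      # variance of a single study's proportion
        w = 1 / var                # bigger study -> smaller variance -> more weight
        weights += w
        weighted_sum += w * p
    pooled = weighted_sum / weights
    se = math.sqrt(1 / weights)    # standard error of the pooled estimate
    return pooled, se

pooled, se = pooled_estimate(studies)
print(f"pooled success rate: {pooled:.3f} (SE {se:.3f})")
```

Notice that the pooled standard error comes out smaller than any single study’s, which is exactly why pooling yields better estimates than any one study alone.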
Notice that in this abstract we are told which databases were searched, what search terms were used, and what studies were found. This is the same as providing methods in clinical research: it allows us to repeat the research if we choose, so we can verify the findings.
Not only can a meta-analysis provide better understanding by pooling data, it can also use statistical techniques to inform about publication bias. Publication bias happens when researchers (or publishers) are more likely to report positive findings than negative findings. This means research is more likely to report that an intervention worked than that it didn’t. By analyzing the distribution of the pooled data, meta-analysis techniques can identify when publication bias has likely skewed our understanding of a topic.
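A toy simulation can show how publication bias skews a literature. Here we assume an intervention with no true effect and pretend that only “positive-looking” results get published; both assumptions are invented for illustration:

```python
import random
random.seed(1)

TRUE_EFFECT = 0.0   # assumption: the intervention truly does nothing

def run_study(n):
    """Simulate one small study: the mean of n noisy observations."""
    return sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)) / n

all_studies = [run_study(25) for _ in range(200)]

# Publication bias: suppose only studies with a positive estimate appear in print.
published = [e for e in all_studies if e > 0]

mean_all = sum(all_studies) / len(all_studies)
mean_published = sum(published) / len(published)
print(f"mean effect, all studies:       {mean_all:+.3f}")
print(f"mean effect, published studies: {mean_published:+.3f}")
```

The full set of studies averages out near the true effect of zero, while the “published” subset suggests the intervention works. Funnel plots and related meta-analytic techniques detect exactly this kind of lopsided distribution.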
Like systematic reviews, meta-analyses are powerful tools for helping make sense of the available research. If you want to become more skilled at reading them, consider following this tutorial from Michigan State University on how to read a Meta-Analysis.
We have one more type of study to talk about before we wrap up this series, and we will discuss that on Monday.
For today’s post, you might want to open a second window at the Cochrane Collaboration Website so you can scroll through what is available while we talk. One thing I have been trying to communicate (over and over and over) is that each study is only one small piece of the puzzle researchers use to help figure out what is going on. Today we are going to talk about one tool researchers use to synthesize the available data, the Systematic Review.
Sometimes you want to understand the odds of an event that not everyone in the group will experience. The best example in pregnancy and birth is the length of labor. When you try to get an average length of labor you need to decide how to handle the labors that end in a cesarean birth.
You see, some women will have zero labor because they will have a planned cesarean. Other women will labor for a time and then have a cesarean. Most of the women will labor and give birth vaginally. If you include the cesarean births you may have falsely low estimates of the length of labor. But if you exclude the cesarean births you lose data that helps you understand the trajectory of labor. What should you do?
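One standard answer (not necessarily the only one) is survival analysis, which treats a cesarean as “censoring”: the labor lasted at least that long, but we never saw it finish. Here is a minimal Kaplan-Meier sketch; every number is invented for illustration:

```python
# Each labor: (hours observed, True if it ended in a vaginal birth).
# A cesarean "censors" the labor: we only know it lasted AT LEAST this long.
labors = [
    (0, False),                     # planned cesarean, no labor observed
    (6, True), (8, True), (9, True),
    (10, False),                    # labored 10 h, then cesarean (censored)
    (12, True), (14, True), (16, True), (18, True),
]

def km_median(data):
    """Kaplan-Meier estimate of the median labor length.
    Censored labors still count as 'at risk' until the hour they drop out,
    so their information is used without dragging the estimate down."""
    n_at_risk = len(data)
    survival = 1.0
    for hours, vaginal in sorted(data):
        if vaginal:
            survival *= (n_at_risk - 1) / n_at_risk
            if survival <= 0.5:
                return hours        # first time survival drops to half
        n_at_risk -= 1              # birth or censoring removes one from risk set
    return None                     # median never reached

vaginal_only = [h for h, v in labors if v]
print(f"mean, cesareans included: {sum(h for h, _ in labors) / len(labors):.1f} h")
print(f"mean, cesareans excluded: {sum(vaginal_only) / len(vaginal_only):.1f} h")
print(f"Kaplan-Meier median:      {km_median(labors)} h")
```

Including the cesareans pulls the simple mean down, excluding them discards real labor time, and the Kaplan-Meier estimate uses every observed hour without either distortion.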
We are nearly done with our discussion of statistics, so I wanted to take a day to discuss study quality. When researchers talk about the quality of a study they are considering the quality of the total package, not only the statistical significance of the findings. In fact, the quality of the study will affect the value of the findings.
There are two documents you should be familiar with before you begin to assess the quality of a study. The first is the CONSORT Statement and the second is the STROBE statement.
One of the main reasons we invest our time into research (whether performing the science or reading the reports) is to understand how to avoid problems. We want to know what we can do to improve chances of a good outcome and have the best health. We want to know what treatments are the most effective when a problem arises. In short, we want to know what “causes” things — what causes a problem and what causes healing.
One of the first things you learn as a researcher is that demonstrating cause is terribly difficult. Why? To show causation means you have to show that it is this particular thing, and not all the other possible things, that is making something happen. When research was mostly about infectious diseases, a set of criteria was created to assess how strongly the evidence supported causation.
We talked about a random sample last week. Today we are going to talk about randomized controlled trials. The use of the term randomized in this context does not refer to the sampling method. Randomized controlled trials use convenience sampling, meaning they recruit whoever is available and meets the enrollment criteria. Because of this, randomized controlled trials are still subject to sampling bias.
But what randomized controlled trials provide is study groups that should not differ on the characteristics that might confound the outcome. For example, women who give birth at a birth center may think differently about birth than women who give birth at a hospital.
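A quick simulation illustrates the point: even starting from a convenience sample, a coin-flip assignment tends to spread a confounding trait evenly across the two arms. The trait and its 30% prevalence here are made up:

```python
import random
random.seed(7)

# Hypothetical convenience sample: each participant either has a confounding
# trait (say, strongly prefers a low-intervention birth) or does not.
sample = [random.random() < 0.3 for _ in range(400)]

# Randomize to two arms by coin flip, not by choice or by any characteristic.
arm_a, arm_b = [], []
for person in sample:
    (arm_a if random.random() < 0.5 else arm_b).append(person)

rate_a = sum(arm_a) / len(arm_a)
rate_b = sum(arm_b) / len(arm_b)
print(f"confounder rate, arm A: {rate_a:.2f}")
print(f"confounder rate, arm B: {rate_b:.2f}")
```

Because neither the participants nor the researchers choose the arms, both rates land close to the sample-wide 30%, so any outcome difference between arms is hard to blame on this trait.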
On Wednesday we talked about why sample size matters. Today I want to focus on how the sampling method affects the data. Basically, there are two ways to obtain a sample. One way is to randomly select members of the population, and the other is to use whomever is available.
To understand the difference, I want you to imagine you are a member of a midwifery organization. That midwifery organization wants to learn something about its members — maybe they want to know how many families the average midwife works with each month. How is sampling method likely to affect the data the organization collects?
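Here is a sketch of how that could play out, with an invented population of midwives and an invented selection mechanism for the convenience sample (busier midwives being more likely to be handed the survey):

```python
import random
random.seed(42)

# A made-up population of 1,000 midwives and their monthly caseloads.
population = [random.gauss(4, 1.5) for _ in range(1000)]
true_mean = sum(population) / len(population)

# Random sample: every member has an equal chance of being selected.
rand_mean = sum(random.sample(population, 100)) / 100

# Convenience sample: pretend the chance of responding rises with caseload
# (an invented mechanism, purely for illustration).
convenience = [c for c in population if random.random() < c / 8]
conv_mean = sum(convenience) / len(convenience)

print(f"true mean caseload:     {true_mean:.2f}")
print(f"random-sample estimate: {rand_mean:.2f}")
print(f"convenience estimate:   {conv_mean:.2f}")
```

The random sample lands near the true mean; the convenience sample over-represents busy midwives and overstates the typical caseload, and no amount of extra respondents fixes that.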
On Monday we talked about regression techniques and I explained that finding a statistically significant result would be difficult because of our sample size. Today we’ll explore how sample size affects the results you will get.
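A small simulation previews the idea: the same true difference produces a weak test statistic in a small sample and a decisive one in a large sample. The effect size and standard deviation here are invented:

```python
import math
import random
random.seed(3)

def experiment(n):
    """Draw two groups of size n with a small true difference in means (0.5)
    and return the z-statistic for the observed difference."""
    a = [random.gauss(0.0, 2.0) for _ in range(n)]
    b = [random.gauss(0.5, 2.0) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    se = 2.0 * math.sqrt(2 / n)    # standard error shrinks as n grows
    return (mean_b - mean_a) / se

z_small = experiment(20)      # same true effect...
z_large = experiment(2000)    # ...much bigger sample

print(f"z with n=20:   {z_small:.2f}")
print(f"z with n=2000: {z_large:.2f}")
```

With 20 per group the signal is buried in the standard error; with 2,000 per group the same effect clears the conventional 1.96 cutoff easily. That is why an underpowered study can miss a real effect.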
Like the other statistical techniques we’ve talked about, regression allows us to examine the relationship between variables. But regression goes a step beyond the Chi-Square and T-Test because it can examine the relationships among multiple variables at one time, and then show what portion of the variation is due to each individual variable. This allows for control of potentially confounding variables, and is helpful for supporting the presence or absence of causation.
In studies that use regression techniques you may read the terms dependent and independent variables. These terms describe the relationship of the variables in the regression equation. The independent variable stands on its own, and is considered to change on its own. The dependent variable is “dependent” because regression is analyzing the portion of change in the variable that responds to changes in the independent variables.
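A compact sketch of a multiple regression makes these roles concrete. The scenario, coefficients, and noise level are all invented: birth weight is the dependent variable, and gestational age and smoking are the independent variables it responds to.

```python
import random
random.seed(0)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (tiny helper)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Made-up data: weight (grams) responds to weeks of gestation and smoking.
rows = []
for _ in range(300):
    weeks = random.uniform(36, 42)
    smokes = random.random() < 0.2
    weight = 120 * weeks - 200 * smokes - 1500 + random.gauss(0, 150)
    rows.append(([1.0, weeks, float(smokes)], weight))

# Ordinary least squares via the normal equations: (X'X) beta = X'y
k = 3
XtX = [[sum(x[i] * x[j] for x, _ in rows) for j in range(k)] for i in range(k)]
Xty = [sum(x[i] * y for x, y in rows) for i in range(k)]
intercept, coef_weeks, coef_smokes = solve(XtX, Xty)
print(f"weeks coefficient:   {coef_weeks:.0f}  (true value 120)")
print(f"smoking coefficient: {coef_smokes:.0f}  (true value -200)")
```

Each recovered coefficient describes the change in the dependent variable per unit change in one independent variable while holding the other constant, which is the “control for confounding” the post describes.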