I’ve decided to create a new “no longer useful” tag for posts about topics that social researchers seem to harp on a lot but for which it seems we have already derived all of the useful lessons to be had. I’ve gone back and appended this label to my post on the George Box quote that “all models are wrong but some are useful,” and to my most recent post on the lack of individual objectivity among scientists. I don’t think my posts will put these sorts of assertions to bed, but I feel better adding my voice to those who think they ought to be put to bed.
And now, my latest addition to the “no longer useful” category: the convention of distinguishing between qualitative and quantitative research, researchers, methods, etc. These two categories seem to be one of the most common ways to talk about social research. Many journals have built up reputations for publishing or being receptive to only one or the other category of research, and vocal members of different disciplines often criticize their journals and professional organizations for being too qualitative or too quantitative.
This debate may have been useful at some point, but it doesn’t seem very useful now. Part of the reason, I think, has to do with the ambiguity with which people use the terms. But mostly I’m concerned that qualitative stuff and quantitative stuff aren’t really different stuff. It doesn’t help to pretend that they are.
The Current Definitions Aren’t Clear
The qualitative/quantitative dichotomy seems to be used in two different ways. I’ve seen researchers use the terms in an ontological sense, to characterize the things that make up the focus of their research, but just as often I’ve heard researchers use the same terms in an epistemological sense, to characterize the manner in which they conduct their research. The sense in which people use the terms has implications for assessing the usefulness of those terms.
“Qualitative research” in the ontological sense is research employed to describe, predict, and/or explain differences in kind, whereas “quantitative research” in that same, ontological sense is employed to describe, predict, and/or explain differences in degree. I’ve tried hard to understand how it could be a good idea to make that ontological distinction, and it’s just not working for me. Quantitative differences are just qualitative differences that have been counted, and qualitative differences are often just quantitative differences that have been binned into categories for the sake of convenience. I don’t see what is to be gained by focusing on only one or the other side of the same coin.
The epistemological definitions are, for me, a little harder to pin down. “Qualitative research” in the epistemological sense is often talked about as research that describes or interprets, as opposed to “quantitative research,” which measures and counts. That seems like a weak distinction: there are plenty of ways to use numbers to describe and plenty of ways to describe numbers, and quantified information is no easier to understand without interpretation – and is no less able to aid in interpretation – than non-quantified information. I think a lot of researchers who fall at various points along the qualitative-quantitative spectrum of self-identification recognize this. Sometimes I get the sense that self-identified “qualitative” researchers consider themselves the researchers of unmeasurable things like “identity” or “meaning,” but that ignores all the research into those issues by cognitive scientists and neuroscientists, who generally (I think) don’t place themselves under the qualitative banner. It likewise ignores the fact that just recording an observation is an act of measurement, even if the observation is not recorded using a technical instrument or a standardized scale.
Clearer Definitions Make the Categories Unnecessary
I do think there is an underlying difference in research priority that warrants categorization. I just don’t think the qualitative/quantitative distinction characterizes that difference very well.
The useful distinction, I think, deals with the issue of replicability. I don’t mean replicability of results – I know the fairy-tale version of science tells a story of how one researcher conducts a study and gets a certain set of results, and then other scientists use similar methods in a similar setting and get similar results, and that bolsters the validity of the original findings. I think that standard of replicability is kind of unrealistic. For one thing, it seems like it’s pretty hard to get the funding to copy someone else’s stuff, and even if you get the funding it’s hard to publish a copy of someone else’s stuff. It happens, but it seems that researchers are much more focused on doing something different from what other people have done, and it’s easy for me to attribute that focus to the dynamics of the research market itself.
But even if replication were as easy to fund and publish as non-replication, I still don’t think it would be realistic to expect that replication to appreciably increase the tendency for social and behavioral researchers to coalesce in their opinions about specific topics or theories. Replication of findings requires a large degree of control over the research settings, and identification of many if not most of the different factors that could impact the result. Most social and behavioral research doesn’t take place under controlled conditions, and our collective understanding of most social and behavioral problems is shaky enough – and our data collection methods are limited enough – that most studies measure only a very small subset of the things that could impact the topic of interest.
I think, instead, it’s useful to distinguish between studies that lay out data collection, cleaning, organization, analysis, and reporting methods clearly enough that any researcher could potentially do the study him- or herself from start to finish without requiring any hand-holding from the original authors, and studies that do not. The studies in the first category are replicable in that they are technically able to be replicated. Studies in the second category do not provide enough information for even an attempted replication to be possible.
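To make that first category concrete, here is a minimal, entirely hypothetical sketch of what a “technically replicable” analysis can look like in practice: every step from data collection through cleaning, analysis, and reporting lives in one script, with a fixed seed standing in for documented collection procedures, so another researcher could rerun the whole thing end to end without any hand-holding from the original author. The study design, variable names, and numbers here are invented for illustration only.

```python
# Hypothetical sketch of a fully specified, rerunnable study pipeline.
import random
import statistics

def run_study(seed: int = 42) -> dict:
    rng = random.Random(seed)  # fixed seed: the simulated "collection" is repeatable

    # 1. Data collection (simulated): 200 survey responses on a 1-7 scale
    raw = [rng.randint(1, 7) for _ in range(200)]

    # 2. Cleaning: the exclusion rule is stated in code, not buried in a footnote
    cleaned = [r for r in raw if 1 <= r <= 7]

    # 3. Analysis: the exact computations behind every reported number are visible
    return {
        "n": len(cleaned),
        "mean": statistics.mean(cleaned),
        "stdev": statistics.stdev(cleaned),
    }

# 4. Reporting: the write-up quotes these values and nothing else
print(run_study())
```

Run twice with the same seed, the script produces identical results, which is the point: a skeptic can inspect or rerun any step and show exactly where a mistake was made, rather than having to take the author’s word for it.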
No matter whether a researcher defines the qualitative/quantitative distinction in ontological or epistemological terms, the replicability distinction remains important. Research that is technically able to be replicated can be used as the basis for future research, but it also earns fair admission into any argument about what people really tend to do or not do, or which theoretical concepts are really valid, or which findings have really stood the test of time.
No matter what other arguments someone might make, a researcher can always point to replicable research and say, “Look, I know you disagree with me. That’s fine. I’ve laid out exactly what I did and didn’t do. You can see all of it. Show me where I made a mistake with the data I had, or do your own replicable study and show me how I would have reached different conclusions if I had had different data. Otherwise, even though you may actually have valid points, you’re not giving me any reason to believe what you say.”
Basically, replicable research gives a researcher the right to make the put-up-or-shut-up argument. Non-replicable research doesn’t do that. I’ve done a lot of non-replicable research, and I think it’s valuable. It can help generate ideas, or counter preconceived notions, or even just give a researcher a “feel” for the subject he or she is studying. But it can’t be evidence. For observations to be evidence of a general tendency or pattern, a researcher needs to demonstrate that he or she did not cherry pick those observations (or the results based on an analysis of those observations). A non-replicable study basically says, “Trust me. I really saw what I said I saw and I really didn’t make any mistakes and I really didn’t omit any relevant information and I really understood the whole thing correctly.” No. I don’t trust you. I have no reason to trust you, and you have no reason to trust me. That’s why replicability is important.
Obviously, the replicable/non-replicable distinction describes a spectrum: studies can be more or less replicable, and there will always be fights over whether a study was replicable enough. But at least that’s a fight that matters. I don’t care if a researcher studies differences of kind or differences of degree, and I don’t care if a researcher prioritizes interpretation over description or the other way around. I care about having some reason to believe a set of findings, and the best way to find that reason is to be able to look at everything a researcher did, from start to finish, and not be able to find anything wrong that could have reasonably been done right. My minimum standard for trusting what a researcher says is that the researcher gives me at least the opportunity to find something wrong.
It seems sort of beside the point to state that a qualitative or a quantitative or a “mixed-methods” approach would have been more appropriate in this or that research situation. The distinction doesn’t help us evaluate the research or even have a useful debate about the merits of the findings. All such statements do is point out that there are always other methods a researcher could have used to tackle a problem. I think everyone already knows that.