A while ago I wrote a post about the broken-ness of the current standard publication model for academic research – write a manuscript, submit to a journal, have the journal send your manuscript to about three somebodies who by someone’s standard have some sort of “expertise” in something related to some part of what you wrote, have those somebodies give a thumbs up or down along with minimal comments that may or may not be supported by evidence or even citations, repeat until you publish or give up. Paul reminded me of that post when he sent me this Note from the Editor of the American Political Science Review.

The note’s behind a stupid paywall, so let me quote the relevant parts:

…we have two main observations. First, although the discipline as a whole is less fragmented than we had feared…some subfields—and you know who you are—continue to be riven by ideological or methodological conflict. Too often, a paper in one of those fields draws recommendations of “reject,” “minor revision,” and “accept” from three equally esteemed referees. When reviewers diverge so greatly, the editorial team’s judgmental burden increases significantly, compelling editors to discount some expert advice (if possible, without antagonizing reviewers) in order to provide coherent advice to authors. We have no answer to this puzzle, but simply note a pattern accurately described by one co-editor: increasing engagement across sub-disciplines, sustained fratricide within some.

Second, we have become painfully aware of how badly (or how little) some of our colleagues read. Articles are too often cited, by authors and by referees, as making the exact opposite of the argument they actually advanced. Long books are noted, with a wave of the rhetorical hand but without the mundane encumbrance of specific page or even chapter references; and highly relevant literatures, even in leading political science journals, are frequently ignored. We may have fallen victim to an occupational disease of editors, but we have often found ourselves moaning, “Doesn’t anybody read anymore?” It is cold comfort that this sloppiness extends well beyond political science. A recent study has shown that, even in “gold standard” medical research, articles that clearly refute earlier findings are frequently ignored, or even cited subsequently as supporting the conclusion they demolished.

I quote the first paragraph just to underline some of the points I made in my original post. In a discipline where three people who are supposedly equally competent to review a piece hand down wildly divergent reviews – and that is, I think, a commonplace occurrence in nearly all the social science disciplines – the solution is to get a whole lot more than three reviewers. I find it especially interesting that editors at a journal like the APSR, which is known for its emphasis on statistical findings, feel it is appropriate to base evaluations of contributions on a sample size of 3.
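Just to put a number on that sample-size complaint, here’s a toy simulation – mine, not anything from the APSR note – that treats each review as a noisy reading of a paper’s underlying quality and asks how much the panel’s average verdict bounces around at different panel sizes. The noise level, the latent quality, and the panel sizes are all invented for illustration.

```python
import random
import statistics

# Toy model: every review is the paper's latent quality plus independent
# noise. None of these numbers come from real data -- the noise level
# (0.2) and the latent quality (0.5) are made up for illustration.

def panel_average(quality, n_reviewers, noise_sd=0.2):
    """Average of n noisy reviewer scores for a single paper."""
    return statistics.mean(random.gauss(quality, noise_sd)
                           for _ in range(n_reviewers))

def verdict_spread(n_reviewers, trials=20_000, quality=0.5):
    """How much the panel average varies across repeated panels."""
    averages = [panel_average(quality, n_reviewers) for _ in range(trials)]
    return statistics.stdev(averages)

for n in (3, 10, 30):
    print(f"{n:>2} reviewers: spread of panel average ≈ {verdict_spread(n):.3f}")

# The spread comes out at roughly noise_sd / sqrt(n): about 0.115 with
# three reviewers versus 0.037 with thirty. A three-person verdict
# carries about three times the noise of a thirty-person one.
```

None of that is a deep result – it’s just the standard-error arithmetic the editors’ own discipline teaches – but it makes the complaint concrete.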

But it was the second paragraph that Paul pointed out to me as the more interesting one, and I agree. I remember discussing a body of research with some colleagues in grad school. At one point in the conversation, one colleague exasperatedly declared: “I’ve been going through these books and articles and looking up all the references they use to support their arguments, and none of the stuff they cited says what they say it says!” (This colleague was more meticulous about her research than I or anyone else in our cohort was or ever will be, so it didn’t surprise any of us that she had gone back and looked up every citation.)

I’m embarrassed to admit how many times I let myself write a manuscript or report that at some point said something like “Research has shown that [argument x] and [argument y] ([citation 1, citation 2, citation 3 …, citation n]).” It’s a bad practice, but it’s common. I try very hard to avoid that sort of thing now.

But I found it particularly interesting that the APSR editors felt that claiming a study supports an assertion when in fact it refutes it is a symptom of people not reading the study. I don’t doubt that that actually happens, but it seems to me it’s at least as plausible that people honestly see support for their arguments in the things they read. It’s really, really easy to interpret the exact same set of findings in many different ways, even when those findings are presented clearly. When the findings are bogged down in disciplinary jargon or hidden behind statistical-significance stars, that misinterpretation is practically guaranteed.

That’s why a small set of reviewers doesn’t ensure that quality analyses get published. It doesn’t matter that each reviewer is an “expert in the field,” especially in disciplines where fields are so idiosyncratically defined as to ensure only sporadic overlap across researchers. Reviewers are not experts in a body of literature so broad and so diverse in its style and scope that no individual or small group of individuals can hope to come across even a substantial minority of it in a professional lifetime. They’re not experts in honestly assessing whether they’ve honestly assessed an argument with which they disagree.

The best way to keep someone from saying a piece of research says something it doesn’t really say is to have them say it in public. Then someone else who has read the same research can disagree with them. I’d actually feel a lot better about the whole three-reviewer publication model – even with anonymous reviewers, which I think is completely unnecessary – if the editors made the reviewers’ comments public and allowed a comment period before making a decision about the paper, and if they appended a completely open and free (no paywall) link to the original reviews and comments to the print and electronic versions of everything they eventually decided to publish.

That’s the essence of what I was getting at in the last post I wrote on this subject. No editor or reviewer or any other individual knows enough of the literature, has a good enough grasp of the current state of all a discipline’s subdivisions, understands enough of all the available methods, and can interpret enough of all the available theory and jargon to be able to decide whether a piece of research is “quality” research. No editor or reviewer or any individual is competent to decide whether a piece of research is valuable or interesting. A more reasonable publication model would accept what is already true – that each reader decides those things for him or herself no matter what an editor or reviewer says – and give people the pre-publication comments to serve as tools in deciding whether an argument deserves to be believed.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at schaun.wheeler@gmail.com.