I came across this piece in the Chronicle of Higher Education a little while ago. The author’s opening caught my attention – a vignette about someone asking for his advice and then asking how much she owed him for his time – because I had that same experience for the first time not too long ago. I work in the private sector, not academia, and the offer still caught me off guard. It never occurred to me that I might charge someone for my advice. I guess that means I should be careful about pursuing a consulting career.

As I read further into the article, at first I thought I was seeing arguments similar to the ones I’ve made previously about academic publishing (see here and here), but it soon became clear that the author was only minimally concerned with the effects the current publishing model has on the quality of academic work:

Publishers can assure the quality of their products only if highly trained experts examine the articles on the academic production line and pick out the 10 percent to 20 percent that meet the highest standards for excellence. Without this free labor, the publishing companies’ entire enterprise would collapse.

When I referee an article for a journal, it usually takes three to four hours of my time. Recently, two Taylor & Francis journals asked me to review article submissions for them. In each case, I was probably one of 20 to 30 people in the world with the expert knowledge to judge whether the articles cited the relevant literature, represented it accurately, addressed important issues in the field, and made an original contribution to knowledge.

If you wanted to know whether that spot on your lung in the X-ray required an operation, whether the deed to the house you were purchasing had been recorded properly, or whether the chimney on your house was in danger of collapsing, you would be willing to pay a hefty fee to specialists who had spent many years acquiring the relevant expertise. Taylor & Francis, however, thinks I should be paid nothing for my expert judgment and for four hours of my time.

So why not try this: If academic work is to be commodified and turned into a source of profit for shareholders and for the 1 percent of the publishing world, then we should give up our archaic notions of unpaid craft labor and insist on professional compensation for our expertise, just as doctors, lawyers, and accountants do.

I think the author’s point about the monetary value of access to expertise is perfectly sound: in most cases, medical doctors can do things I can’t do myself, so if I find myself needing those things done, I’m willing to pay a doctor to do them. The money and time I spend on access to their expertise cost less than what it would cost me to acquire that expertise myself. The same goes for chimney inspections and property deed documentation.

Where he loses me is when he claims – well, when he assumes, really – that social researchers like anthropologists have clearly demonstrated that access to their expertise adds value that people couldn’t get otherwise, in the same way that doctors, lawyers, and accountants do.

He lists four areas where social researchers have “expert knowledge” that adds value. In each case, he’s saying social researchers, and presumably not other people, have the ability to judge the extent to which a piece of research:

  1. Cited the relevant literature.
  2. Represented that literature accurately.
  3. Addressed important issues in the field.
  4. Made an original contribution to knowledge in the field.

The first area of “expert knowledge” seems ludicrous to me. Given the number of researchers who align themselves with any particular academic discipline, the number of journals that cater to those researchers, the number of articles published in those journals each year, and the number of researchers and journals that do not align themselves with that discipline but nevertheless publish research related to it or even directly engaging with it, the task of even identifying the bounds of relevance seems quite impossible. When you consider the historical inability of any of the social science disciplines to develop a cohesive, core set of principles about how their subject matter works (see here and here), the task becomes almost laughable.

The second area – making sure the literature is represented accurately – is just as difficult as making sure all the relevant literature is cited, and it carries the additional difficulty of relying on individual people to decide what is or is not an accurate representation of an issue when the extent of any one person’s knowledge of that issue is itself questionable.

The third and fourth areas seem to me to cover the same issue. Both seem to be saying that social researchers have the training and experience to identify which new findings are interesting – which ones deserve the attention of people who care about the issue at hand. That proposition seems both unrealistic and egotistical.

A doctor is an expert if he or she can do things that other people can’t readily do, and if doing those things tends to create outcomes that are generally better than what would have happened if nothing had been done in the first place. I’m hard-pressed to find instances of social scientists doing things that other people can’t already do.

This is especially true of researchers who steadfastly refuse to consider any research method that looks like it might involve anything that resembles a number. I generally think the qualitative vs. quantitative distinction in social research is pretty useless, but I make an exception to that rule when I find researchers who loudly proclaim themselves to be “qualitative” researchers. In those cases, I find that self-identification pretty reliably identifies researchers whose work differs little in quality, tone, or assumptions from a standard piece of investigative journalism. I look at what those researchers do, and I just don’t see that they’ve done anything to clearly demonstrate that their understanding of their field consistently produces real-world outcomes that are better than what would have happened otherwise.

While I tend to prefer research that uses more systematic collection and analysis tools, it seems that social and behavioral researchers who use those tools are no less tempted to claim more wins for social science than the field really deserves. For example, Gary King is a researcher at Harvard whose work I greatly respect. I really like his MatchIt program for pre-processing data to facilitate causal inference, and I use his Amelia II program for imputing missing data all the time (a rough sketch of that kind of matching-and-imputation workflow appears after the list below). On slide 2 of this presentation he gave at the University of Virginia, King attributes the following to social science:

  • “transformed most Fortune 500 firms”
  • “established new industries”
  • “altered friendship networks (Facebook)”
  • “increased human expressive capacity (social media)”
  • “changed political campaigns”
  • “transformed public health”
  • “changed legal analysis”
  • “impacted crime and policing”
  • “reinvented economics”
  • “transformed sports (seen MoneyBall?)”
  • “set standards for evaluating public policy”
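
Before getting to King’s claims, here is the sketch I promised of the kind of workflow MatchIt and Amelia II support. Both are R packages, so this is only a loose Python stand-in built on scikit-learn, with made-up data and hypothetical column names; it does not reproduce either package’s actual algorithm, it just illustrates the two steps of filling in missing covariate values and then matching treated to control observations on an estimated propensity score.

```python
# A loose Python stand-in for the kind of workflow MatchIt and Amelia II
# support in R. The data and column names are made up for illustration;
# neither step reproduces those packages' actual algorithms.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "income": rng.normal(50_000, 12_000, n),
    "treated": rng.integers(0, 2, n),
})
df.loc[rng.choice(n, 20, replace=False), "income"] = np.nan  # simulate missingness

# Step 1: fill in missing covariate values. Amelia II uses a bootstrapped
# EM algorithm for multiple imputation; IterativeImputer is only a rough analogue.
covars = ["age", "income"]
df[covars] = IterativeImputer(random_state=0).fit_transform(df[covars])

# Step 2: match treated to control rows on an estimated propensity score.
# MatchIt offers many matching methods; 1-nearest-neighbor on a logistic
# propensity score is the simplest of them.
ps = LogisticRegression(max_iter=1000).fit(df[covars], df["treated"]).predict_proba(df[covars])[:, 1]
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control.index].reshape(-1, 1))
_, match_idx = nn.kneighbors(ps[treated.index].reshape(-1, 1))
matched_controls = control.iloc[match_idx.ravel()]
print(len(treated), "treated rows matched to", len(matched_controls), "control rows")
```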

I can see King’s point more easily on some of these items than on others. MoneyBall seems to be a pretty clear example of how systematic analysis of past behavior can improve expectations about future behavior. There also seem to be cases of large-scale data analysis improving law enforcement activities. There are certainly tons of cases of statistical analyses being used to inform political campaigns, and tons of promises that such analysis could inform industries like health care, but I have difficulty seeing how we can draw a clear line of causation from social science to any real or promised outcomes here.

Take any of the MoneyBall-type cases where people clearly did something useful with a systematic analysis. Should we attribute that useful outcome to the tool that accomplished it, or to the disciplinary affiliation of the people who used the tool? If I get an MRI and it catches some problem, and fixing that problem saves my life, I partially attribute that very good outcome to the MRI tool itself, but mostly I attribute it to a doctor who not only knew how to read the tool’s output but also knew enough about how the human brain works to connect the tool’s output to my particular needs.

I’m not clearly seeing how that sort of thing is happening, even in the MoneyBall sorts of situations. It’s definitely useful to break observations down into data points and then systematically analyze those data points in various ways that generate probabilistic statements about what to expect in the future. I’m totally cool with that. But that’s mostly dealing with the properties of the tools used to accomplish the analyses, not the discipline of the researchers using those tools. Yes, I know that a person needs to know some things about baseball or law enforcement or political campaigns to be able to do an analysis of those things, but do you need to know as much about those things as a doctor needs to know about the brain to be able to expertly interpret and act on an MRI output? It seems like a stretch to attribute those successes to “social science” instead of to “statistics.”
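
To make that concrete, here is a minimal sketch of the kind of analysis I have in mind. The player numbers and the league-average prior are made up, and the model is a completely generic Beta-Binomial update that shrinks each player’s raw on-base rate toward a league average; nothing in it depends on a theory from anthropology or sociology.

```python
# A generic Beta-Binomial shrinkage estimate of each player's future on-base
# rate. All numbers are made up; the only "domain knowledge" is a prior
# centered on a rough league-average rate.
import numpy as np

on_base = np.array([52, 31, 70, 44])          # times on base, past season
appearances = np.array([150, 120, 200, 160])  # plate appearances, past season

# Beta(alpha, beta) prior with mean alpha / (alpha + beta) of about .330
alpha, beta = 66.0, 134.0

raw_rate = on_base / appearances
# Standard conjugate update: shrink raw rates toward the league average,
# more strongly for players with fewer appearances.
expected_rate = (alpha + on_base) / (alpha + beta + appearances)

for raw, expected in zip(raw_rate, expected_rate):
    print(f"raw {raw:.3f} -> expected {expected:.3f}")
```

The prior is the only place any domain knowledge enters, and even that is just a league-wide average, not a theory of human behavior.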

And the causal arrow between these outcomes and social science could plausibly be entirely reversed in some of the cases, such as social media. Social media is a technology. When people use that technology, they leave behind traces of that use. Those traces can be very useful sources of data for researchers. If anything, it seems that much of current “quantitative social science” should be attributed to those social media technologies, not the other way around.

For me to accept that social science as a discipline deserves the same kind of regard, and potentially the same kind of remuneration, as medical science or engineering or any of the other fields that do command a decent amount of respect and money, I would need to see examples of outcomes created as a result of social science where the outcome was clearly better than the alternative, and where the outcome couldn’t just as easily be attributed to the use of a particular technology, or even just to the fact that people threw time, money, and attention at a problem.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at schaun.wheeler@gmail.com.