Stories of recent scientific fraud seem uncomfortably common. In many of those cases the scientists are blamed, and rightly so. Sometimes the criticism identifies more systemic problems: current scientific practice, institutions like the NSF or universities, or academia in general. Blame is also often laid on pop science and the popular science writers who try to tell a counterintuitive and interesting story, or who are under pressure to write frequently and on deadline.

All of these are valid targets of criticism. But not enough attention is paid to the scientifically inclined and interested public who are supposedly the victims of the fraudulent findings and stories. Given the incredible specialization of contemporary science, everyone is an untrained and naive reader in some, if not most, areas. But despite that technical ignorance, reading about science can be done usefully and well, and doing it well means maintaining a general attitude of scientific doubt.

Few things are as important as maintaining that skepticism. When it comes to science I’m a short-term pessimist and a long-term optimist. Science has made immense contributions to the world, and I think it will continue to do so. But on net, any particular paper is sadly likely to be invalid. Richard Feynman put it like this:

“When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darn sure of what the result is going to be, he is still in some doubt. We have found it of paramount importance that in order to progress we must recognize our ignorance and leave room for doubt.”

If what you’re reading packages the world into neat little explanatory presents, and then hands them to you across the page with a bow on top, then it makes sense to stop and think: “…nah, probably not.”

Take Jonah Lehrer, for example. Lots of attention was paid to the writing deadlines he was expected to meet, and to his urge to tell a good story. Less attention was paid to the fact that he actually wasn’t that great a science writer. Whatever the reasons for his apparently deliberate unethical behavior, they wouldn’t have been as interesting if he hadn’t been as widely read as he was. And he shouldn’t have been as widely read as he was. That he was is, in part, a failure of skeptical reading. So it seems worthwhile to try to develop ways of reading critically and skeptically in a systematic way.

I’m taking Gary King’s Gov 2001 class this semester. In a recent lecture he mentioned a paper he’d written about improving the interpretation and presentation of statistical analyses in social science. After class I went and read the paper. In the introduction, he lays out three steps for presenting any statistical procedure:

“More specifically, we show how to convert the raw results of any statistical procedure into expressions that (1) convey numerically precise estimates of the quantities of greatest substantive interest, (2) include reasonable measures of uncertainty about those estimates, and (3) require little specialized knowledge to understand.”

Transformed into questions, these three steps form a great guide for critically reading science writing:

  1. Does the article present numerically precise estimates of the quantities of greatest substantive interest?
  2. Does the article include reasonable measures of uncertainty about those estimates?
  3. Does the article require little specialized knowledge to understand?

I’ll demonstrate using a blog post I read recently in the MIT Tech Review, about a paper titled “The Mesh of Civilizations and International Email Flows”. The paper presents an interesting dataset, though I don’t understand why the authors decided to think about it in terms of a vague collection of strange ideas like Huntington’s “Clash of Civilizations” thesis. But whatever the merits and demerits of the paper, the blog post about it is a lesson in poor science writing. The questions above help expose that.

Numerical precision
A big part of science is the generation of questions and ideas that drive data collection, management, analysis, and presentation. But questions and ideas by themselves are not yet “science” or scientific results. If a post is about ideas, then you can set evidence aside and just enjoy the intellectual thrill of thinking creatively. But if it purports to describe scientific results, then the work it describes must involve systematic (usually statistical) analysis of numbers that represent some part of the world. When numbers are used, they should be precise. For example, when an analysis shows that a particular drug helps people lose weight, the write-up should say how much weight it helps people lose. When an analysis reports that a particular policy has reduced crime in a city, the description should say by how much crime was reduced, and that number should be substantively meaningful.
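To make that concrete, here’s a minimal sketch in Python, with entirely made-up numbers (nothing below comes from any actual study), of what a numerically precise report of a quantity of interest looks like:

```python
# Hypothetical weight-loss trial, invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulated weight change (kg) after six months: drug group vs. placebo group.
drug = rng.normal(loc=-4.0, scale=3.0, size=200)
placebo = rng.normal(loc=-1.0, scale=3.0, size=200)

# The quantity of greatest substantive interest: the average difference in
# weight change between the two groups.
effect = drug.mean() - placebo.mean()

# A numerically precise, substantively meaningful statement of that quantity.
print(f"On average, people taking the drug lost {abs(effect):.1f} kg more "
      f"than people taking the placebo over six months.")
```

The point isn’t the code; it’s the last line: a specific number attached to the quantity the reader actually cares about.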

The post does not present numerically precise estimates of the quantities of interest. Instead we get rambling and vague assertions. When a writer says something like “For example, a common border between two countries actually reduces the communication density between them”, he or she should report the size of that reduction. If the size isn’t reported, it’s worth suspecting that either the reduction was so small that it isn’t interesting, or that the writer doesn’t know, in which case what he or she has to say about the research isn’t very credible.

Reasonable uncertainty
If numbers are going to be used, then they should be reported precisely. However, uncertainty is an inherent part of the world, and precision should always be accompanied by a reasonable measure of that uncertainty. When results can be reported numerically (and if they can’t, then the research isn’t ready for public consumption, at least not in the guise of science) then uncertainty also can and should be reported numerically. If a common border reduces the communication density between countries, then readers should be told that that reduction is plus or minus some other number.
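Here’s the same kind of sketch for the “plus or minus”, again with invented numbers; a bootstrap interval is just one reasonable way to get it:

```python
# Hypothetical communication densities for country pairs that do and do not
# share a border -- invented numbers, not the paper's data.
import numpy as np

rng = np.random.default_rng(1)

border = rng.lognormal(mean=0.0, sigma=1.0, size=500)
no_border = rng.lognormal(mean=0.3, sigma=1.0, size=500)

# The estimate: the average difference in communication density.
observed = border.mean() - no_border.mean()

# The uncertainty: a 95% bootstrap interval around that difference.
diffs = []
for _ in range(2000):
    b = rng.choice(border, size=len(border), replace=True)
    n = rng.choice(no_border, size=len(no_border), replace=True)
    diffs.append(b.mean() - n.mean())
lo, hi = np.percentile(diffs, [2.5, 97.5])

print(f"Estimated difference in communication density: {observed:.2f} "
      f"(95% bootstrap interval: {lo:.2f} to {hi:.2f})")
```

An honest write-up reports both numbers together; the estimate alone looks far more certain than the analysis warrants.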

The post doesn’t come close to including reasonable measures of uncertainty. In fact, it doesn’t convey any kind of uncertainty at all. Instead we get misleadingly strong assertions about what the research “clearly reflects”.

Specialized knowledge
Methodological jargon blurs messy and inconvenient details, since confusion over particular terms stalls more substantive questions and debate. Jargon serves a purpose, but it rarely belongs in writing intended for a general, non-specialist audience.

The post isn’t terrible in its use of jargon (thankfully it never calls a finding “statistically significant”). But while the author splurges on space describing Huntington’s thesis (which was hardly the substance of the paper), where a link or two would have served just as well or better, the post does use jargon to blur over key details of the research. After noting the problems with the data and that the hard work was in cleaning it, the author describes that work as “a comprehensive ‘rescaling’ of the data to account for these effects.” The paper’s methods are summarized by saying that the authors “crunched the data using a number of community detection algorithms”. That latter description is an example of using jargon to make a simple and relatively empty statement seem like an impressive outline of a complex procedure. If the post had instead said something with the exact same meaning but in plain language, like “the authors used data and a step-by-step procedure for calculation”, you’d have to wonder what information it was providing. It could have said the same thing even more simply: “they did an analysis”. Which isn’t saying much.

I’d actually extend the original step here a little for reading purposes. Avoiding unnecessary technical and methodological jargon is important for communicating with a general audience. But it’s useful even for experts, as a way of avoiding lazy or sloppy thinking. In his book “Surely You’re Joking, Mr. Feynman!”, Richard Feynman describes discovering the poor quality of science education in Brazil at the time. His account of his talk to an audience of scientists in Brazil makes the point wonderfully:

“I have discovered something else,” I continued. “By flipping the pages at random, and putting my finger in and reading the sentences on that page, I can show you what’s the matter – how it’s not science, but memorizing, in every circumstance…So I did it. Brrrrup – I stuck my finger in, and I started to read: “Triboluminescence. Triboluminescence is the light emitted when crystals are crushed…”

I said, “And there, have you got science? No! You have only told what a word means in terms of other words. You haven’t told anything about nature – what crystals produce light when you crush them, why they produce light. Did you see any student go home and try it? He can’t. But if, instead, you were to write, ‘When you take a lump of sugar and crush it with a pair of pliers in the dark, you can see a bluish flash. Some other crystals do that too. Nobody knows why. The phenomenon is called “triboluminescence.”’ Then someone will go home and try it. Then there’s an experience of nature.”

There’s a joy to reading about science that few things come close to matching. Perhaps part of that joy comes from the fact that reading about science is itself a partly scientific endeavor. You start with a question, then you get an answer, and then you question that answer. Good science writing takes you through that process. Poor science writing exudes the gross kind of unquestioning confidence, garishly polished style, and flattering focus on the researcher that has become the mark of a TED talk. But poor science writing is ultimately only a problem if readers themselves can’t tell the difference.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at meinshap@gmail.com.