I’ve realized recently how the last few years have changed the way I think about my work. This post is an attempt to put that thinking into writing.

I left grad school feeling I wanted to do more “applied” work than what academia usually offers, but I still assumed that application was a matter of doing an analysis and then letting people who make decisions consume and implement the lessons of that analysis. I created a lot of those for-application sorts of analyses for the U.S. Department of the Army, but left feeling I wanted to be part of the decision-making process rather than just producing fodder for it. My current employer gave me the opportunity to work interactively with decision makers to clarify their goals and adapt my analyses to their needs, and also to be somewhat involved in the implementation side of things. So my career has followed a path of closer and closer integration of my analytic work with the decisions and implementation that my work is supposed to facilitate. I think that process has helped me better define how I think about “application.”

I’ve written a fair amount on the blog about the brokenness of the academic publication system (here and here, and Paul’s written about it here and here). I still think a completely open publication system (meaning no peer review required to publish) with robust tagging, search, and filter capabilities would be a whole lot more useful than our current system of firewalled and anonymously-reviewed publications, but I’ve also found myself caring less about the entire issue. Reports – articles, assessments, posters, presentations, etc. – just don’t appear to be a very good way to communicate research. Even if the sole goal of a research project is to generate an “academic” conversation about a topic, the report format seems practically designed to help people fixate on who was and wasn’t cited in a literature review, which jargon was or was not used consistently with previous usage, and a host of other issues that can distract from the actual data and findings of the analysis. A formal narrative encourages repetition of the narrative, not a discussion of the observations and methods that underpin it.

If the goal of the project is to help people make consequential decisions, reports make even less sense. I think application is most usefully defined as a product, not a process. I’ve heard a lot of people talk about analysis “informing their decision making.” I used to use that phrase a lot myself. I don’t think the goal of analysis should be to inform decision making – it should be to create a tool that actually makes decisions for people. It’s not that statistical models are oh-so-trustworthy compared to human judgments (although even a poor model is often better than intuition). It’s that hardwiring analytic results into the decision-making process leaves an audit trail.

As I understand it, scientific publications are supposed to inform people about new data, methods, or other developments. They’re supposed to act as a basis for people to replicate and improve upon past practices. They’re supposed to be a record of the state of a particular area of study. And, ideally, they’re supposed to inform the perspectives and decisions of people outside of that field of study. I don’t see how publications are the best way to accomplish any of that.

For example, if I model sales response and then automate the assignment of sales territories to representatives based on the model, I can go back and look at the results at the end of the sales season and figure out where the model failed and where it could be improved. If I just do the analysis, hand it to the sales team, and let them use it to “inform” their decisions by applying, modifying, and ignoring different parts of the analysis as they see fit, then there’s no audit trail and therefore no way to ensure a better model in the future. Again, I’m not saying “my model is right and no one has the right to change it.” I’m saying a big strength of a programmatic solution to a problem – the ability to look at results in light of an explicit, detailed history of what decisions preceded those results – is diluted when a human filter stands between the analysis and the implementation.
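To make the audit-trail point concrete, here is a minimal sketch of what hardwiring a model into the assignment step might look like. This is my own illustration, not code from any real workflow: the choice of Python, the round-robin assignment rule, and the account, score, and rep names are all hypothetical stand-ins. The point is only that every decision gets written out alongside the inputs that produced it, so the end-of-season review has something explicit to check against.

```python
# Hypothetical sketch: model scores drive territory assignment directly,
# and every assignment is logged with its inputs so it can be audited
# against actual results at the end of the sales season.
import csv
import datetime


def assign_territories(account_scores, reps):
    """Assign each account to a rep, round-robin by descending predicted sales."""
    ranked = sorted(account_scores.items(), key=lambda kv: kv[1], reverse=True)
    assignments = []
    for i, (account, score) in enumerate(ranked):
        assignments.append({
            "timestamp": datetime.datetime.now().isoformat(),
            "account": account,
            "predicted_sales": score,
            "assigned_rep": reps[i % len(reps)],
        })
    return assignments


def write_audit_log(assignments, path="territory_audit_log.csv"):
    """Persist every decision plus the inputs that produced it."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(assignments[0].keys()))
        writer.writeheader()
        writer.writerows(assignments)


if __name__ == "__main__":
    # Fake model output; in practice these scores would come from the sales-response model.
    scores = {"acme": 120000, "globex": 95000, "initech": 40000}
    log = assign_territories(scores, reps=["alice", "bob"])
    write_audit_log(log)
    # At the end of the season, actual sales can be joined back to this log to see
    # exactly which model-driven assignments paid off and which did not.
```

A real version would pull scores from the actual model and assign territories by something smarter than round-robin, but even a toy like this produces the kind of explicit decision history that a human filter between analysis and implementation tends to erase.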

I want my work to do more than generate a conversation. I like it when it generates conversations, and a sparked discussion can be a first step to actually doing something, but I don’t want all my work to end at the discussion. I even count a one-time implementation of something I do as just barely better than ending at discussion. If my goal is to create something durable, that means implementing it more than once, even if the subsequent implementations are (as they should be) heavily modified versions of what was there originally. Those applications are the best way to get people to use my analyses, and they’re also the best way to check the quality of my data and assumptions. It’s now strange to me to think of application as something that happens completely after, or in any way separated from, an analysis. Creating applications – products, not reports – seems to be the best way to accomplish almost all of what publication is supposed to do.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at schaun.wheeler@gmail.com.