I’ve thought a lot over the past few years about people’s aversion to “doing nothing,” specifically as the concept relates to thinking about, addressing, or planning for problematic social and political issues. The most recent incident that made me think of it was a discussion on LinkedIn where I commented:

I think the issues that genuinely compel explanation and action are very few and far between. We certainly feel like we need to do something about all of them, but we don’t really know that. In most cases, it seems the most appropriate response to a pressing issue that involves large populations would be to first admit that we haven’t got a clue about what is really influencing them, then take steps to systematically collect as much information about the actual behaviors as we can, and focus our efforts on preparing our own organizations/governments/societies for the event that the pressing issues will develop into something that directly impacts us.

What I was trying to say was that we’re often faced with social, political, or other problems that seem to require that we do something to change them – we feel that if we do nothing, the problems can do nothing but get worse. The belief that an unaddressed problem can only get worse is an assumption, and I’m not sure it’s a good one. In response to my remarks, one commenter said:

I very strongly disagree with your remark Schaun – it is a peculiar attitude for a researcher in general and an unacceptable stance for anyone working on safety and security issues.

I want to address both of those charges – that it is somehow strange for a researcher to adopt a wait-to-act approach to social problems, and that such an approach is actually unacceptable for people who are trying to protect people’s lives and well-being.

Moral Models and Uninformed Action

I know everyone’s college experience with his or her discipline is different, but my experience seemed designed to instill within all anthropology majors a deep-seated collective remorse for all of colonialism. A whole lot of anthropology developed as a sort of research wing of various governments that spent a lot of their time fighting and extracting resources from people who lived in places far away from those governments’ traditional seats of power. Then there were the situations where anthropologists had used their fieldwork as cover for spying on people in conflict areas. It seems that sometime around the 70s or 80s, a lot of anthropologists freaked out (academically speaking) about their history, and by the time I entered college right after the 90s I was dumped into a pile of “research” that focused on “demystifying” and “speaking truth to power.”

In the middle of all that, I came across an article published by Roy D’Andrade in Current Anthropology called “Moral Models and Anthropology.” D’Andrade argued that many anthropologists had become intent on making anthropology into a “moral discipline,” where research was conducted in order to facilitate goals that had already been identified as necessary, appropriate, or otherwise good. These goals usually had to do with helping people who normally had few resources to win political or legal victories over people who normally had more resources. I don’t think there’s necessarily anything wrong with these goals, and D’Andrade pointed that out – that research is often conducted with the intent of helping people. What he criticized was the tendency, as he saw it, to explicitly incorporate those probably admirable goals into the actual theory and design underlying the research.

This is, I think, what has happened with a lot of corruption research, as Paul pointed out in an earlier post. People have identified a bunch of behaviors and situations as “corruption,” based on the fact that those behaviors and situations often coincide with outcomes that many people consider undesirable. There’s nothing really wrong with that, but there is something wrong with lumping all those things together when collecting observations of corruption, or with looking for unified explanations of them all without evidence that they actually are part of the same conceptual thing. It seems a lot of social science has hurt itself by letting the end-of-the-road implementation and policy goals of research drive the beginning-of-the-road theoretical and design choices.

So it surprised me that someone would think it strange for a researcher to advocate a look-before-you-leap approach. I think we ought to try to understand a situation before we try to do something about it. The we-don’t-have-the-luxury-of-waiting argument just doesn’t do it for me. If we don’t understand an issue well enough to be able to give an explicit estimate of how accurate our understanding is, then how in the world do we know that we don’t have enough time to try to understand it better before acting?

Realistic Risk Assessment

I think most researchers generally agree, in principle, that we ought to understand things before we try to change them. I also think most researchers – especially applied researchers – generally agree that we don’t need absolutely complete information or absolutely complete understanding before we can justifiably act. Yes, better information and analysis are nice, but realistically we have to establish cutoff points at which we decide to move forward with whatever we have at the moment.

When I worked for the Army, it seemed that people tended to place that cutoff point earlier rather than later. (I have a suspicion this happens in businesses as well, but I haven’t been in the private sector long enough to get a feel for that yet.) In other words, any amount of information was enough of a basis for action, because the necessity of action was not determined at all by assessments of how well an issue was understood. Protests (usually mine) that we didn’t understand the issue well enough to justify such-and-such course of action were often met with replies that “the perfect is the enemy of the good” or “we’re looking for a 60% solution.” I think those catch-phrases embody the important principle that we need to be able to tolerate uncertainty, but they expose what I believe is a very flawed model of risk assessment.

When we say the perfect is the enemy of the good, we’re stating the obvious – that we’re not going to get something perfect. We’re also making the assumption that what we will get instead will be good. When we say we are going for a “60% solution,” we’re stating the obvious – that we’re not going to get a 100% solution – but then we’re making the assumption that the percent we do get will be above zero. We already know we’re not going to get a 100% solution, but how sure are we that we won’t get a -60% solution – an outcome that is actually worse than present conditions? I pointed out in my last post that U.S. (and coalition) emphasis on development projects in Afghanistan was often assumed to be at least some sort of solution (above 0%) to the insurgency, but that what little systematic study there has been on the issue suggests that the “solution” has been at best 0%, and very possibly somewhere below zero.

It is wrong to assess the potential costs of inaction only in comparison with the potential benefits of action. That’s irresponsible risk assessment. Potential costs of inaction should be weighed against the potential benefits of action and the potential costs of action. That forces us to ask how sure we are that doing something will make a situation better rather than worse. Moreover, however sure we are, it demands that we ask ourselves how we know that we’re that sure. It’s hubris to think that our actions can only ever help. Calls to action need to take place after we’ve honestly considered the ways those actions could hurt.
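To make that comparison concrete, here is a minimal sketch in Python. Every number in it is a made-up placeholder – the probabilities and payoffs are hypothetical, not drawn from any real case – but it shows how including the chance that an intervention backfires changes the arithmetic.

```python
# A minimal sketch of the risk-assessment point above, using made-up numbers.
# The probabilities and payoffs are hypothetical placeholders; the point is
# only to show how including the potential costs of action can flip a decision.

def expected_value(outcomes):
    """Sum probability-weighted payoffs for a course of action."""
    return sum(p * payoff for p, payoff in outcomes)

# "Do something": we hope for a 60% solution, but if our understanding is
# poor, there is some chance the intervention makes things worse (-60%).
act = [
    (0.5, 0.6),   # intervention helps
    (0.3, 0.0),   # intervention does nothing
    (0.2, -0.6),  # intervention backfires
]

# "Prepare and keep collecting data": smaller upside, but little chance of
# making the underlying problem worse.
prepare = [
    (0.7, 0.1),   # modest benefit from being better prepared
    (0.3, 0.0),   # no change
]

if __name__ == "__main__":
    print(f"expected value of acting now: {expected_value(act):+.2f}")
    print(f"expected value of preparing:  {expected_value(prepare):+.2f}")
    # With these numbers, acting still looks better on average. But shift the
    # backfire probability from 0.2 to 0.4 (and "does nothing" from 0.3 to 0.1)
    # and the ordering reverses.
```

The particular numbers don’t matter. What matters is that the decision can flip depending on how likely we think a below-zero outcome is, and that likelihood is exactly the thing we can’t estimate if we haven’t studied the issue.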

There’s No Such Thing as Lack of Action

“But we can’t do nothing,” is the response I commonly hear when I make these sorts of arguments. I’m not talking about paralyzing ourselves completely because of lack of information. That would be impossible, because there’s no such thing as a lack of action. When we choose to postpone intervention, we commit at the very least to continue whatever practices or policies we are currently implementing. We can also commit to defensive or preparatory measures. Imagine a situation in which there’s something happening that we think is bad. If we don’t understand that thing well enough to be reasonably sure that our trying to influence it will not make it worse, we can instead devote our time and resources to preparing ourselves (our company, organization, country, etc.) to be less drastically impacted in the event that the bad thing becomes worse all by itself.

The moral of the story here isn’t that we just need to act less. It’s that we need to do a better job of systematically collecting data – regarding our own actions, regarding those actions that concern us, and regarding the environments in which both take place – and making that data as accessible as possible. If we do that, then we’re less likely to be caught in our own ignorance when new problems arise. In cases where we lack the information to reasonably estimate the consequences of proposed actions, it is both reasonable and commendable to hold off, even when we have the resources to do something. There is nothing strange about pausing to assess a situation, and nothing unacceptable about admitting when we don’t know enough to justify intervention.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at schaun.wheeler@gmail.com.