One of the things I dislike most about being in academia is the feeling of creeping complacency. I don’t feel it very often – that’s why it’s ‘creeping’ – but when I do it’s painful and soul-frustrating. Working for the military was where I felt least complacent (despite the numerous other downsides), so perhaps it’s not surprising that the things that make me feel most complacent typically have to do with the military and irregular warfare. The current situation in Afghanistan should be enough in itself to make anyone stop and ask whether such a situation is really necessary and whether anything could be done about it. 326 members of the international coalition have died in Afghanistan so far this year; 3,021 Afghan civilians died in 2011; suicides among US troops have averaged one a day in 2012; and the Department of Veterans Affairs estimates that 18 veterans commit suicide every day. All of this is an ongoing and overlooked tragedy. But it’s not the overall tragedy that makes me feel complacent. For me, as for most people, it’s about as far beyond influence as the weather. No, what makes me stop and feel terribly complacent are the errors in thinking about social phenomena in Afghanistan that policy-makers and military analysts continue to make, and that social scientists seem incapable of helping correct (…perhaps because we’re often not so immune to them ourselves).

This past week the Secretary of State designated the Haqqani network a terrorist organization. Reading about it reminded me of several stories I read last year about a string of attacks in Kabul, including one that was the deadliest attack in 10 years. Most of the stories had titles like the following:

“According to US officials, the Haqqani network might have been involved in the bombing in Kabul this past weekend.”

Behind that claim lies a consistent, deeply ingrained, and detrimental error in the analyses that have guided U.S. efforts over the past decade. The error is the perception that the U.S. is fighting an enemy in Afghanistan, rather than operating within a deeply problematic social environment. It’s an orientation towards intentions and motivations, rather than towards the conditions and situations from which they emerge and which constrain, or fail to constrain, them.

Here are some examples of the “it’s the enemy” line of thinking:

The involvement of the Haqqani group, believed by Washington to be based in the mountains of North Waziristan on the Afghanistan-Pakistan border, would make the already tough task of bringing Afghanistan and its neighbors together even more difficult.

Officials said that while evidence of Haqqani involvement was by no means conclusive, the style of the attack and some of the equipment used in it raised that possibility.

“If it’s Pakistan, then it is definitely the work of the Haqqanis, but we are not certain as the investigation is underway,” he said, also speaking on condition of anonymity.

“It certainly has all the hallmarks of the Haqqanis,” the official said. “It’s part of their efforts to resist efforts to bring them to the negotiating table.”

The particular mistake behind the claims about the Haqqani network that I want to draw attention to is the mistake of seeing ourselves in the world around us: anthropomorphizing. In the conventional sense of the term, to anthropomorphize is to attribute human characteristics or mental states to non-human agents and objects. In one study’s sample, 79% of people verbally scold their computer, and 73% curse at it, when it fails to work the way they want it to (more accessible results from the paper here). But we don’t just do it with non-human things; we also do it with social phenomena. And that’s where it can be an especially pernicious mistake. While it’s fairly easy to remember that our computer doesn’t really hate us when it doesn’t work properly (though Microsoft Word might be a software exception), it’s not so intuitive that the things that happen in society aren’t necessarily the intentional and strategic workings of an individual person or group of people. (A very similar bias in thinking about the behavior of an individual person is the fundamental attribution error, and the belief in a homunculus is an even deeper problem.)

In the case of Afghanistan, the mistake has manifested in a constant attempt to identify some group and set of leaders behind the incidents of violence that beset the country. When I first started working on these issues in early 2009, the enemy was the “neo-Taliban”; a little while later the enemy became malign or disenfranchised tribes. By the time I left government service last summer, the Haqqani network had become the prime antagonist.

Schaun and I wrote about this in the context of tribes when we worked for the Army. I’ve written about it in relation to understanding influence in organizations more recently. In the past I’ve always addressed how this way of seeing groups is a mistake, but I’ve never really addressed how these kinds of mistakes are *made*. Sometimes analytical mistakes aren’t just analytical mistakes; they’re the result of certain cognitive bents or tendencies. Take the two tables below:

They look different, right? Well, they’re not – at least not in terms of their surface dimensions, which are exactly the same. Thinking they’re different could be seen as an analytical mistake: most people don’t measure the tables before answering. But more importantly, seeing and thinking they’re different is also a cognitive mistake, because what we see makes the wrong conclusion very intuitively appealing. That’s what I’m suggesting occurs with how we perceive the people who engage in violence in Afghanistan. Only in this case our mind tricks us in the opposite direction, and we assume they’re all the same, when it just isn’t so.

So, unfortunately, correcting analytical mistakes like the perception of the Haqqani network as a cohesive group – one carrying out its intent and responsible, as a group, for every bombing that “looks like a Haqqani attack” – is probably not just a matter of helping analysts and decision-makers better understand human behavior. Other corrections will have to occur before analysts and policy-makers begin to consistently avoid the error. In this case, the need not to feel helpless and ineffective may be served by anthropomorphizing social phenomena. For example, in the paper *Making Sense by Making Sentient*, the authors present some evidence for a particular explanation of why people anthropomorphize:

We suggest that anthropomorphism is also determined by effectance motivation – the basic motivation to be an effective and competent social agent. Effectance motivation entails a desire for understanding, predictability, and control over one’s environment…Explanations of others’ behavior typically focus on personal causality because … dispositional factors are seen as more stable, more predictable, and easier to control.

In a range of studies the authors demonstrate that increased effectance motivation increases anthropomorphism. The paper isn’t evidence that military analysts treat the Haqqani network as if it were a coherent and unified entity controlling the violence in Afghanistan because doing so helps those analysts cope with the unpredictability of the situation there. But the argument and evidence are plausible enough to discourage me from concluding that better analysis alone will produce better decisions and policies. It suggests that anthropomorphism isn’t just something we mistakenly do; it’s something we mistakenly do because it can make us feel better and more in control of the situations we find ourselves in.

Only, for a problem of the size and importance of the conflict in Afghanistan, feeling better about the problem we’re facing isn’t necessarily a good thing. In fact, it’s a downright bad thing if it comes at the expense of perpetuating a flawed understanding of that problem. The conflict in Afghanistan would make a lot more sense if a big part of it were all the workings of the big bad Haqqani network. But it isn’t. Perhaps if decisions and policies are consistently and repeatedly demonstrated to be ineffective, certain analytical mistakes will no longer satisfy the need to feel effective.

Here’s my “take-home”: if the bad analysis of groups is sustained by its ability to make analysts and decision-makers feel effective, then undermining that analysis on analytical grounds probably won’t help much. Instead, bad analysis can be undermined by consistently demonstrating to decision-makers that it is actually making them ineffective. And that’s where analysis can come back in: by repeatedly revealing the ineffectiveness of poor policies that are based on poor analysis. If the policies are consistently bad enough, or the analysts consistently good enough at revealing their inferiority to some alternative, they’ll probably change.


If you want to comment on this post, write it up somewhere public and let us know - we'll join in. If you have a question, please email the author at .