A couple of stories have been in the press recently that have some interesting implications for intelligence analysis.
First, courtesy of Discover magazine, is this piece summarizing research that seems to indicate that when people sign at the top of a document (before they’ve entered data or made a statement), their information is more accurate than when they sign at the bottom (after they’ve already done the work).
People are often dishonest in little ways on forms, rounding numbers in a beneficial direction or failing to mention a relatively small item as part of a larger list. If they sign a form once they’ve done all that, they don’t go back and correct it; instead, they’ve already woven a story to themselves—consciously or not—about why what they did was perfectly fine.
It’s worth noting that most intelligence products do not have the authors’ names attached. Now, there’s usually a very good reason for that: the analysis is supposed to represent the agency’s position and not the individual’s. There’s a security issue as well. Knowing that analyst ‘A’ is the one who writes all the stuff about security issues in Outer Mongolia opens that analyst up to targeting and influence.
That being said, I’ve heard analysts say things like ‘I don’t care, my name’s not on this.’ Anonymity often breeds what I recently heard described as ‘a culture of compliance rather than one of performance’. Check a box…if you get it wrong, who cares?
This isn’t just an individual issue, either. Take a look over at Public Intelligence and you can see all sorts of examples of poor analysis (and occasionally good). Very rarely are agencies held accountable for putting out bad, or just outright wrong, analysis, so we can’t simply pin the blame on individual analysts.
There’s got to be a way to address both problems.
The London School of Economics has this podcast about cognitive biases in support of the speaker’s book, ‘The Art of Thinking Clearly’. It’s a fun, easily accessible set of examples demonstrating the various ways in which cognitive biases cause us to make poor decisions.
One particular point Dobelli mentions that I like to emphasize when teaching critical thinking and analysis is that what we see as cognitive biases today are actually traits that were essential for survival for much of the human (and, I suppose, pre-human) evolutionary process. When you’re a hunter-gatherer traveling across the savannah and you see a shadow in the tall grass, your buddies take off running. Maybe it’s not a lion in the grass, but if it is, they’ve got a good shot at getting away. Meanwhile, while you’re trying to analyze the various possible hypotheses explaining the movement, some sabre tooth is picturing you with a nice mango salsa.
Another part of the lecture reminded me of a time I had written a product that then languished in editing/approval hell for an astounding 13 (!) months. Finally I suggested officially killing the project, since its contents were of dubious relevance anymore and I had increasing concerns about the validity of my original findings. My suggestion seemed to be the spark everyone else needed to decide that the product had to be disseminated right now! My lengthy, impassioned arguments laying out those concerns were brushed aside. After all, I was told: ‘We’ve already spent so much time on this…we can’t just let it go.’
When I mentioned the concept of ‘sunk costs‘ I got this sort of look:
For the record, I’m kind of used to those looks now…
The time already spent on project X is gone; it doesn’t justify spending more time on the project unless the project still makes sense and has value. But my overlords at the time saw that past time as some sort of investment and were determined to get some sort of return on it. Getting them to see that their ‘return on investment’ would, in fact, just leave readers confused about why they were getting a product about a year-old event took some doing.