Tag Archives: critical thinking

Can a tiger attack in Siberia tell us anything about the state of affairs in Syria and Iraq?

I’m currently reading ‘The Tiger: A True Story of Vengeance and Survival’.  At roughly the halfway point the author goes on a bit of a diversion from the main narrative to explain his method for attempting to understand the motive of the tiger and how that may overlap or conflict with the motives of various humans in his story.  To do that he brings us to a brief discussion of Jakob von Uexkull and the concept of Umwelt.  As I was reading it I thought there might be some interesting applications for intelligence analysis.  So, first let me take some quotes from the text to set the background….

When talking about understanding animal behavior, Uexkull recommended imagining “a soap bubble around each creature to represent its own world, filled with the perceptions which it alone knows.  When we ourselves then step into one of these bubbles, the familiar…is transformed.”

That bubble is referred to as the Umwelt, which is different from (but inseparable from) the Umgebung.  The Umgebung is the objective world, or reality, which none of us really sees or experiences because we can only access it from behind the hazy view of our soap-bubble Umwelt.

Back to the book:

In the umgebung of a city sidewalk, for example, a dog owner’s umwelt would differ greatly from that of her dog’s in that, while she might be keenly aware of a SALE sign in a window, a policeman coming toward her, or a broken bottle in her path, the dog would focus on the gust of cooked meat emanating from a restaurant’s exhaust fan, the urine on a fire hydrant, and the doughnut crumbs next to the broken bottle.  Objectively, these two creatures inhabit the same umgebung, but their individual umwelten give them radically different experiences of it.

Our Umwelt (if you’d like a more scientific analogy) might be thought of as being like our DNA.  Not only is every species different but there are also differences between individuals within a species.  One would expect that the umwelten of two people (regardless of their upbringing, ideology, etc.) would be much more similar than that of a person and a bird, for example.

One of the challenges we have in intelligence analysis is correctly identifying the interests, priorities and motives of opponents.  There are techniques like ‘devil’s advocacy’ or ‘red teaming’ that can have real value but are also susceptible to biases like mirror imaging if the practitioners don’t have sufficient training and experience in them.  This, in turn, can lead to false levels of confidence and poor decision making.

It may be possible to use the concept of Umwelt to reduce the risk of such failings, even though that wasn’t the original intent of this process.  Again, back to the author:

One way to envision the differences between these overlapping umwelten is to mentally color-code each creature’s objects of interest as it moves through space; the graphic potential is vast…and it can be fine-tuned by the intensity of a given color, the same way an infrared camera indicates temperature differences.  For example, both dog and mistress would notice the restaurant exhaust fan, but the dog would attach a ‘hotter’ significance to it-unless mistress happened to be hungry too.

And this was the point at which I thought about the conflict in Syria and Iraq.  In that conflict we have literally dozens of interested parties: nation states (Iran, Iraq, Turkey, Syria, Jordan, the United States, etc.) and non-state actors (ISIS, al-Nusra, the FSA, various Kurdish factions, ExxonMobil, various business interests, etc.), each with their own umwelt.  It’s simply not possible to hold all the various perspectives and motivations of all the players (or, I suspect, even the ‘major’ players) in one’s head.  I don’t think even a written document (the form in which many intelligence products arrive) can adequately give the consumer a suitable perspective to take into account all the moving parts necessary to craft good decisions.

Usually what happens is that we simplify the problem to a ‘manageable’ number of actors and then proceed.  Provided this reduced set of actors are the ones who have the ability to dominate events, that’s probably ‘good enough’.  It does not, however, take into account the possibility that ‘insignificant’ actors occasionally have an outsized influence under special circumstances.  Essentially what you’re doing is trading the risk of surprise for simplicity.  That may be a good deal…but if we don’t decide on that tradeoff early on we can forget that we’re even taking the risk.

Here’s one example of the sort of product you’ll see.  They certainly get more complex to cover more nuance and include more actors but I’m not convinced that lends itself to more understanding or better decision making.

There may be a graphic product, however, that captures the varied interests of the players as well as the (estimated) intensity of those interests.  From that point, it should be easier both to estimate the future decisions of each actor and to make more effective decisions ourselves.  And we needn’t confine this to only the extremely complex cases like the one we see in the Middle East now.  While probably too complex and time-consuming for every case, you could certainly apply it to long-standing criminal organizations as well as terrorist ones.
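To make that idea a bit more concrete, here is a minimal sketch in Python of how the data behind such a product might be structured.  The actor names, interests, and intensity scores are entirely made up for illustration; the point is only that each actor’s umwelt becomes a set of interests with a ‘heat’ attached, and overlaps between umwelten fall out of a simple comparison.

```python
# Hypothetical sketch: each actor's umwelt as a dict of interests, each scored
# 0-10 for intensity ("heat"). Actor names and numbers are illustrative only.
from itertools import combinations

actors = {
    "Actor A": {"border security": 9, "oil exports": 6, "local militias": 4},
    "Actor B": {"oil exports": 8, "foreign investment": 7, "border security": 2},
    "Actor C": {"local militias": 9, "border security": 5},
}

def shared_interests(a, b):
    """Interests both actors care about, with each side's intensity."""
    common = set(actors[a]) & set(actors[b])
    return {i: (actors[a][i], actors[b][i]) for i in sorted(common)}

# A real product might render this as a color-coded matrix
# (actors x interests, with color intensity standing in for the score).
for a, b in combinations(actors, 2):
    print(a, "/", b, "->", shared_interests(a, b))
```

The same structure scales to more actors; whether it scales to human comprehension is, of course, the real question.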

Here’s one graphic product that hints at what I’m talking about but it’s only looking at behavior (conflict/support) and doesn’t even get into the ‘why’ question or intensity of interaction.

I don’t have a fully formed idea of what this product would look like yet but give me some time…

What we have here is a failure of imagination….

I’m currently reading Bloodlands (a compelling and timely book given events in Ukraine) and came across this passage about the outlook of Polish intelligence services regarding the Soviet Union in the early ’30s.

After the famine, they generally lost any remaining confidence about their ability to understand the Soviet system, much less change it.  Polish spies were shocked by the mass starvation when it came, and unable to formulate a response.

I was really struck by that.  Soviet policy was so outside the norm that the Poles found themselves unable to understand what was going on.  The result was to essentially withdraw from contact.  I doubt that was a conscious decision but rather the problem just got too hard (or perhaps, too wicked) and the Polish intelligence service sort of collectively moved to easier problems.

Certainly we’ve seen similar things in modern times.  The pivot from the confusing war in Afghanistan to the traditional battle shaping up in Iraq in 2002/2003 was so whipsaw quick you could practically hear the sigh of relief from the policy and defense establishment about going to do a mission they thought they could get their heads around.

This was slower, however.  The starvation took place in the early ’30s and it would be another six years before the Soviet Union decided to join a German invasion of Poland.

I’ve already exceeded my knowledge of prewar Polish intelligence services but I thought it was a notable example of an inability to understand a dramatic change to the environment.

 

Deception in intelligence operations

Among the dispatches of the Finnish military on the 1st of January, 1940 was this statement:

The numbering of some of the Finnish divisions is changed in order to confuse Soviet intelligence.

Which got me thinking about deception operations and how intelligence analysts are supposed to account for them.  Deception usually gets a mention in analytical training but typically nothing more than ‘Make sure the information you’re using isn’t a part of a deception plan on the part of your foe.’ Not a whole lot on how to go about doing that.

Deception can be tricky all around.  After all, if your deception plan is too good you might fool your friends, allies and sympathizers which can be counterproductive.  In the example above, I imagine the Finnish armed forces had to do a lot of coordination ahead of time lest orders or supplies for Division X get delayed while some sergeant somewhere tries to figure out what happened to Division X and why there’s a Division Y all up in his business all of a sudden.

And when we think about deception we usually think about it as an intentional act caused by an opponent.  Sometimes, however, we unintentionally deceive ourselves.  Our minds often do a better job at deceiving us than an adversary ever could.

A great example of that at play can be found in movies and TV, where a recurring trope is the zany mix-up.  A conversation heard without context, or a misinterpretation of some information, leads the protagonist to believe in a reality which is at complete odds with what is actually happening.

A great example of that is the 2011 Horror/Comedy movie Tucker and Dale vs. Evil.

[embedded video]

The whole movie is based on all the characters misinterpreting the information they are receiving and deceiving themselves through their cognitive biases.  The actual attempts at deception (where Tucker and Dale decide to pretend to be the crazy hillbillies they are accused of being) don’t work nearly as well.

The movie does a great job of demonstrating how, at some point, we get so invested in a particular analytical line that we will ignore evidence (even highly credible and reliable evidence) to the contrary.  In that regard, that aspect of the film is more realistic than the filmmakers probably know.

 

Droning on about top 10 lists…

Mark Bowden has an interesting article about drones in the September issue of the Atlantic.  Specifically, I’d like to recommend the portion of the article that talks about target selection and approval.

I want to write about one brief, almost innocuous, passage in that portion of the article and how it applies to the intelligence process more broadly. In talking about the effectiveness of drones (and other means) in killing al-Qaida leadership, Bowden makes the point that drone strikes have declined in number.  Quoting a ‘senior White House official’, he writes:

The reduction in strikes is “something that the president directed.  We don’t need a top-20 list.  We don’t need to find 20 if there are only 10.  We’ve gotten out of the business of maintaining a number as an end in itself, so therefore that number has gone down.”

I remain both amused and concerned at the number of times I see or hear about ‘top 10’ lists.  I get it, we’re a base-ten species.  But really, do we need to treat our counter-terrorism efforts the same way we treat a David Letterman monologue or a Buzzfeed article?   The fact that we rely so heavily on the idea of ‘top ten’ can seriously distort our understanding of the environment.

For several years I worked on assessments of criminal street gangs and I would often get requests for the ‘top ten’ gang threats.  Sometimes the two or three ‘most serious’ gang threats (those that were the largest or most prone to violent activity, for example) would so eclipse the others that it just made no sense to include the rest in the same list.  The whole process was unhelpful, especially since few people would spend much time on anything other than whatever was #1 on the list.

And take counter-terrorism.  A reliance on something like the ‘top 10’ threats to the U.S. implies that there are 10 threats to the country that deserve consideration.  Maybe there are 4…maybe there are 14.  It seems to me that the rational thing to do is determine criteria for what’s important and then figure out how many (or few) subjects meet those criteria.  An alternate way to go would be to identify how many threats you have the resources to address (‘We can conduct 3 investigations simultaneously.’) and then determine criteria that will identify the three most important subjects.

If we assume that threat is made up of intent plus capability, why shouldn’t our priorities include the same components?  Our intent may be to eliminate all terrorism from the face of the Earth but our capabilities are far short of that so…bring them in line and get on with it.
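As a rough sketch of what that criteria-first approach might look like (hypothetical groups, made-up scores, and a deliberately crude threat = intent + capability calculation), the following returns however many subjects clear the bar rather than padding or trimming the list to ten:

```python
# Hypothetical sketch of criteria-first prioritization. Scores are invented;
# the point is the filter, not the numbers.
subjects = [
    {"name": "Group A", "intent": 8, "capability": 7},
    {"name": "Group B", "intent": 9, "capability": 2},
    {"name": "Group C", "intent": 3, "capability": 9},
    {"name": "Group D", "intent": 2, "capability": 2},
]

THRESHOLD = 12  # inclusion criterion: intent + capability must meet this bar

def priorities(subjects, threshold=THRESHOLD):
    """Every subject meeting the criterion, highest threat first - however many that is."""
    scored = [dict(s, threat=s["intent"] + s["capability"]) for s in subjects]
    return sorted(
        (s for s in scored if s["threat"] >= threshold),
        key=lambda s: s["threat"],
        reverse=True,
    )

print(priorities(subjects))  # might be 1 entry, might be 14 - not necessarily 10
```

The capacity-constrained alternative mentioned above would simply be something like priorities(subjects, threshold=0)[:3] if you only have the resources to run three investigations at a time.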

In any case, arbitrarily asking for ‘top 10’ lists doesn’t do much of anything.  It doesn’t even give us a workable number to evaluate priorities, if cognitive science is to be believed.  In the Psychology of Intelligence Analysis, Richards Heuer asserts that the human mind can only juggle between seven and nine facts or bits of information at one time.  There’s been some research indicating that even that was a very optimistic estimate and the real number is half that.

Top 10 lists are intellectual crutches that allow someone (the tasker…the analyst…whoever) to avoid making decisions about what’s important.  Rather than determining criteria for inclusion or exclusion, we just punt and say ‘Give me the top 10’.  And what do we do with that top 10?  How much consideration does #7 get?  Don’t most customers really spend their time looking at number 1 or 2?

So, what’s an analyst to do when asked to put together some sort of top 10 list?  Well, I think there are two ways to go about tackling this.  The first would be to develop ‘inclusion criteria’ for what it would take to make it onto any list and run that by whoever created the tasking…without telling them that this might mean that more or fewer entities make the cut.  My experience is that if you introduce that possibility too early, the response you’ll get is something along the lines of ‘That’s great…but you’re going to end up with 10, right?’

You’ll want to wait until the project is well along…ideally close to being completed before introducing the possibility that your list might not hit upon that nice, round number that everyone seems to love.


Once you’ve got your criteria, the number of entities you’ve determined are worthy of consideration will (probably) be either less than or greater than the magic number you were assigned to cram into a list.  If it’s less and you’re still *ahem* encouraged to beef up your list to the magical number, I’d recommend using images and language throughout your document to make it clear which items on your list are not worth consideration.  Images can be quite effective in this regard and hopefully even your overlord will, upon review, realize that including extraneous entities undermines the credibility of your project.

If you have more entities than the magic number you may be encouraged to arbitrarily create some cut-off mark.  You could try to retrofit your criteria in order to do so, which may be your safest bet since it will allow you to point out what is being eliminated and give your overlords the queasy feeling of wondering if eliminating terrorist group B from the list is a good idea just because they fell a bit short of their annual funding goal.

The bottom line is that intelligence is about telling your customer (whether that’s a patrol cop or the President of the U.S.) what they need to know, regardless of whether what they need to know is 2 things or 22.  Don’t get sucked into cultural idioms if they don’t advance the goal of providing clear, concise, relevant information in a timely manner.

Cognition and intelligence analysis

A couple of stories have been in the press recently that have some interesting implications for intelligence analysis.

First, courtesy of Discover magazine, is this piece summarizing research that seems to indicate that when people sign at the top of a document (before they’ve entered data or made a statement), the information they provide is more accurate than when they sign at the bottom (after they’ve already done the work).

People are often dishonest in little ways on forms, rounding numbers in a beneficial direction or failing to mention a relatively small item as part of a larger list. If they sign a form once they’ve done all that, they don’t go back and correct it; instead, they’ve already woven a story to themselves—consciously or not—about why what they did was perfectly fine.

It’s worth noting that most intelligence products do not have the authors’ names attached.  Now, there’s usually a very good reason for that.  Namely, that the analysis done is supposed to represent the agency’s position and not the individual’s.  Additionally, there’s a security issue as well.  Knowing that analyst ‘A’ is the one who writes all the stuff about security issues in Outer Mongolia opens that analyst up to targeting and influence.

That being said, I’ve heard analysts say things like ‘I don’t care, my name’s not on this.’ Anonymity often breeds what I recently heard described as ‘a culture of compliance rather than one of performance’.  Check a box…if you get it wrong, who cares?

This isn’t just an individual issue, either.  Take a look over at Public Intelligence and you can see all sorts of examples of poor analysis (and occasionally good).  Very rarely are agencies held accountable for putting out bad, or just outright wrong, analysis so we can’t just go out and hammer analysts.

There’s got to be a way to address both problems.

The London School of Economics has this podcast about cognitive biases in support of the speaker’s book, ‘The Art of Thinking Clearly’.  It’s a fun, easy-to-access set of examples that demonstrate the various ways in which cognitive biases cause us to make poor decisions.

One particular point Dobelli mentions that I like to emphasize when teaching critical thinking and analysis is that what we see as cognitive biases today are actually traits that were essential for survival for much of the human (and, I suppose, pre-human) evolutionary process.  When you’re a hunter-gatherer traveling across the savannah and you see a shadow in the tall grass, your buddies take off running.  Maybe it’s not a lion in the grass, but if it is, they’ve got a good shot at getting away.  Meanwhile, while you’re trying to analyze the various possible hypotheses explaining the movement, some sabre-tooth is picturing you with a nice mango salsa.

Another part of the lecture reminded me of a circumstance I had where I had written a product yet it languished in editing/approval hell for an astounding 13 (!) months.  Finally I suggested officially killing the project since its contents were of dubious relevance any more and I had increasing concerns about the validity of my original findings.  My suggestion seemed to be the spark that was needed for everyone else to decide that the product needed to be disseminated right now!  Lengthy, impassioned arguments discussing my concerns were brushed aside.  After all, I was told:  ‘We’ve already spent so much time on this already…we can’t just let it go.’

When I mentioned the concept of ‘sunk costs‘ I got this sort of look:

[embedded video]

For the record, I’m kind of used to those looks now…

The fact that the time spent on project X is already gone doesn’t justify spending more time on it unless project X still makes sense and has value, but my overlords at the time saw that past time as some sort of investment and were determined to get some sort of return on it.  Getting them to see that their ‘return on investment’ would just leave readers confused about why they were getting a product about an event that was a year old took some doing.

A few thoughts in the wake of Boston…

I’m writing this just a few hours after the news about the bombing in Boston.  You won’t see any speculation here about who’s responsible, thoughts on the immediate response or similar things.  Rather, I want to talk a bit about what the larger implications might mean in terms of threat and how an intelligence shop might best respond in a situation like this.

Ok…first things first.  A couple of rules to keep things in perspective.

  1. We should know by now that with events like this, information that comes our way in the first hours is going to be confused, full of inaccuracies and speculation.  Anyone who speaks with authority in the first few hours is likely to be a liar.
  2. The 24-hour news channels are terrible at covering events like this.  Since there is so little information to report they have to fill their air time with anything they can.  This means the noise will vastly outweigh the signal.  Once you get the broad outlines of the event and (possibly) see any footage of it, your best bet is to switch off the TV.

Since we’ve now got a few decades of data about terrorism from all around the world, there are some findings that might help us think about what might (might) come next.

First, a good place to look is the fine folks at the National Consortium for the Study of Terrorism and Responses to Terrorism (START).  I’d recommend reading this piece about the (un)predictability of terrorism and its ‘burstiness’.  I’d particularly like to mention this latter point.

As the people at START put it:

But in addition, terrorism has a bursty quality. When it is effective in a particular time and place, we get a lot of it rapidly.

Now, I think the key word here is ‘effective’.  On some level, attacks like Oklahoma City, Madrid, and 9/11 were successful, but I’m not sure they would be considered ‘effective’.  After all, in all of those cases the terrorist group (or individual) was captured or killed during or very shortly after the attack.   There was, in short, no one left to follow up on the success, and so no follow-up occurred.

But take something like London or (I’m sure) the terrorist activity we see in much of the Middle East and you’ll see a different definition of ‘effective’.  Since a ‘successful’ attack isn’t a requirement for a terrorist to be effective (because, remember, the point of terrorism is to elicit a particular response…not generally to do direct damage), you can ‘fail’ but still be effective.  I’d suggest that much of the Palestinian terrorism over the past few decades falls into this category.

So…if we don’t neutralize (in some way) the perpetrators in some reasonable amount of time, we might reasonably expect additional attacks by the same group or individual.

Conversely, this also means that we might not need to be too worried about ‘copy cats’ or others being inspired to action.  After all, al-Qaida has been trying to inspire people to take up the cause for years with little success.  White supremacists have been trying for decades with little to show for it.

It also means that the data suggests that the threat is going to be localized in time and space.  Might the perpetrators jet off to Idaho and launch attacks in Boise?  Sure, I guess, but I’m not sure I’d consider it particularly likely.

Also from START is this piece which states that we might see an increase in hate crimes over the coming weeks as a result of this attack.  Based on their data, the people at START have concluded that:

…in the weeks following a terrorist attack, the number of anti-minority hate crimes increased if the attacks were made against symbols of core American values (such as the Pentagon) or perpetrated by groups with a religious motivation.

Does the Boston marathon qualify?  I’d guess definitely in the immediate area.  I’m not sure how much resonance the event has on people further afield.  But, depending on who is identified as suspects, this could be an issue.

Readers of this blog know I often talk about small intelligence shops.  Events like the attack in Boston, because they are so rare, are going to attract the attention of just about every intelligence unit in the country.  Almost every one of them will be expected to publish some sort of ‘product’ about the event.  So, what should a small shop (I’m not talking the big three letter agencies of the federal government but rather the numerous state, local and joint agencies and centers around the country) do in situations like this?

Everything I’m going to write here is for those shops that don’t ‘own’ the territory where the attack took place.  If this attack took place in your area of operations then that’s another story for another time.

First…take a breath.  Look at observation #1 at the top of this post.  You’re highly unlikely to get much of value during the first 24 hours after an event so don’t expect to do more than summarize basic facts.

BUT…everyone is going to want to be seen to be doing something.  This is, after all, the big show.  So, even if there’s nothing to say, there will be incredible pressure to say something anyway.  In some cases this comes from a very real desire to ‘help’.  In other cases it comes from a very real desire to justify one’s existence.  It reminds me of a quote from Sir Humphrey:

“Politicians must be allowed to panic. They need activity. It is their substitute for achievement.”

Only, politicians aren’t the only ones susceptible to this.  If you don’t have a plan in place you’ll get sucked into the thankless (and useless) task of feeding regurgitated news to various overlords like a mother bird does with her chicks.

Instead of trying to compete with CNN, the New York Times or news agencies (which you’ll never succeed at doing) take advantage of this time to figure out what you need to know for your area of operations.  So, let’s say I was in charge of a shop in…North Carolina (or Montana…whatever) when this attack happened.  What’s going to be important to me initially?  Probably:

  1. Who committed the attack
    1. The specific individual(s)
    2. Any affiliated group
    3. Any linkage to my area of operations
  2. Why did they commit the attack
    1. What was their motivation
    2. Why did they pick that specific target(s)
  3. How did they commit the attack
    1. How did they acquire the explosive device
    2. How did they carry out the attack (emplacement, detonation, escape)

Now, as those questions get answered you’ll have follow ups and more specific ones but even a list like that disseminated to your staff will help them separate the wheat from the chaff during the early hours and days of the story.  Yes, eyewitness accounts may be compelling but if they don’t address those questions your people are really just wasting their time.

Second, if you do not have a compelling reason to call the agency(ies) responsible for handling the emergency, do NOT do so before their first press conference at the earliest.  Look, they’ve got a lot on their hands and the last thing they need to do is answer a bunch of questions from a yahoo like you because the leader of your agency 900 miles away wants the latest poop.  Remember, there are now literally hundreds of intelligence shops in the U.S….many of them are going to be calling the scene in order to be the first on their block to put out a product with an exclusive tidbit 1 to show how ‘high speed’ they are.  The last thing they need in that situation is an extra few dozen calls from people essentially saying ‘So…what’s up?’  Let them do their job and you’ll get your information when you need it.

Third, remember that one incident is NOT a trend.  Don’t start reorganizing your whole shop based on one event.  If your assessments of the threat were on solid ground before an attack like this, they should remain so.  One event should not nullify your analysis.  BUT…this is a good time (well, earlier was a better time but you slacked off, didn’t you? So we need to do this now) to identify the triggers that would cause you to reevaluate your analysis.

For example…I’ve been saying that al-Qaida is a has-been organization for some time now.  Assuming (for a moment) they were behind this attack would not change my opinion.  But I should be able to explain at what point I would say my analysis was crap.  That’ll keep me straight both when my ego is on the line and when tensions are running high and people start making claims that this or that event ‘changes everything!’

Fourth…if you have nothing to say about an event…say nothing.  The intelligence community is suffocating on a philosophy of ‘Send it to everyone…just in case they need it.’  This means it’s not uncommon to receive the same message three, four, five times or more.  It’s not uncommon to receive products that have no relevance to your area of interest.  Adding to the noise does nothing but guarantee that when you really do have something to say, it’ll be ignored.

 

  1. That’ll probably be released to the press before the product is even disseminated, making the whole thing moot.

On fauxhawks, cognitive biases and intelligence analysis

I’ve been out of the military now for slightly more than a year but still found myself adhering to AR 670-1 when it came to my trips to the barber.  Over the ears, over the collar…pretty short all over.  Some of that is necessity (my hair is very thick and festooned with cowlicks everywhere and, if left to its own devices, would soon turn into a bird’s nest) but mostly it was habit.  So, I decided to change that…here are the results 1:

[photo]

Note the Hitler Pillsbury Doughboy in the background. That’s a fuck you to white supremacists, not baked goods, for the record…

Now, the reaction from the people I work with was quite interesting.  My close co-workers are used to my hijinks, so for them this was just status quo, but for those a bit further out from the center of our social circle there was some consternation.  My coworkers and I received questions along the lines of:

‘What’s going on? Did he lose a bet? What does it mean?’

In short…none of these people could imagine a scenario in which someone like me (or, at least someone in our community/situation/etc.) would do this unless he was compelled to.

I, on the other hand, couldn’t imagine why I wouldn’t do such a thing.

So, what does this mean for intelligence analysis?  Part of an analyst’s job is to ‘think red’, or consider what may motivate our foes, what priorities they may have, and what actions they may take.  Part of doing that involves avoiding the cognitive bias of ‘mirror imaging‘.  Now, I’ve been working in this particular office for a couple of years and many of these people have seen me, heard me, and had the opportunity to get to know me, and with regard to my haircut they were under no pressure to reach a snap decision.  Yet these individuals were unable to come up with potential motivations for my actions, unable to put themselves ‘in my shoes’ to understand them.  How much more difficult when dealing with people involved in more complex activities, perhaps intentionally attempting to deceive, maybe with different cultural norms, with incomplete information and under time pressure?

Cognitive biases aren’t something to be addressed once and then considered ‘dealt with’ for all time.  We need to be aware that they are the default setting for our brains and without active measures to control for them, we’ll slip into the same old thinking ruts that can lead to shoddy analysis.

  1. Upon seeing this, my wife said ‘Oh no! Now no one at work will take you seriously!’ I replied: ‘Trust me, my hair is NOT the reason the people at work don’t take me seriously.’

Music and intelligence analysis

So, last time I talked about trying to incorporate different sensory inputs in order to improve analytical production.  Now I’m entering into speculative territory here, but while I was primarily looking at different types of visual stimuli (the written word, graphics, images, etc.), I’ve been thinking about the possibility of using our sense of hearing to improve either the analytical or the production process.

I submit to you, then, this interesting project.  It takes a piece of classical music and, while you’re listening to it, describes it with accompanying text.  In doing so it conveys more information than either the musical piece or the text individually AND more than if you experienced both separately.  The ‘extra’ value comes from getting the explanation at the same time the music is playing.  That not only reduces the chance of miscommunication (‘Is this supposed to be the teeth chattering or….this?’) but also helps improve the ‘stickiness’ of the information.  Associating the text with the music helps ‘anchor’ it in your mind.  The next time you listen to the music you’ll be more likely to remember the text.

Is there any value in incorporating music into the production process?  Might customers retain more with particular accompaniment?  Could music be used to emphasize particular pieces of information?  How about in terms of explaining probability, risk or threat?  Does the human mind respond consistently to certain types of music and sound, or is the process so individualistic that the incorporation of sound is just as likely to hinder the transference of meaning as to enhance it?

Up to now I’ve been talking about the production part of the intelligence cycle but music might have an easier fit in the analytical part of the cycle.  There’s evidence that distraction can assist in problem solving, particularly in helping identify weak connections between items or when thinking about difficult problems with multiple variables.  Sitting down and trying to force yourself to solve problems doesn’t work well compared to having your subconscious take a crack at it.

The goal is to get into the proper mental state:

It means not actively working on a problem but instead letting yourself happily mind-wander, freely associating and relaxing into a quiet mental state. It is like being okay to feel how you feel when you first wake up in the morning – relaxed, with diffuse, easy attention.

I’ve found that some of my best insights came about when I was most definitely not working on the problem that needed solving.  Running, reading, sleeping or…yes…listening to music.  I began wondering if there was any possibility of tapping into that insight potential collaboratively after playing with my latest time sink, turntable.fm.  Is there any benefit to having analysts working on the same problem simultaneously share something like music playlists and listen to the same songs at the same time?  If you assume that a person’s choice in music is a reflection of their mental state and preferences, would sharing music give you a glimpse into how other analysts are thinking?  If so, would that help you look at problems through a slightly different perspective and, therefore, improve your problem-solving skills?

Many questions for which I have no answers but interesting to think about.  Now, time to listen to some tunes….

Intelligence analysis, avalanches, and Sally Fields

An excellent article by the BBC uses archival footage to talk about the mutually dysfunctional relationship between Israel, Hamas and the Muslim Brotherhood.  It also demonstrates that while we often think the Arab-Israeli conflict has been unchanging for the last 60 years, there have, in fact, been significant changes in attitudes on both sides…and not for the good.

Speaking of interesting ways to present information, check out this amazing use of video and graphics to convey information about an avalanche that swept up a group of experienced skiers.

These sorts of stories are fine examples of how information can be transmitted more efficiently and effectively by mixing media.  We’re all familiar with the trope that people learn information differently, and we also know that the more senses we can engage with a piece of information, the more ‘sticky’ it becomes.  That’s one reason, for example, that the Obama campaigns in both 2008 and 2012 were insistent that campaign workers have at least three contacts with the voters they were targeting.  Voters who had such contact were more likely to vote for the President.  Now some of that might be a result of voters saying ‘Hey, they like me!  They really like me!’

[embedded video]

Some of that, however, is due to the voters internalizing the positions of the campaign by hearing the arguments repeatedly through different mediums.  A phone call, a knock on the door, an email, you get the point.

So, why not think about that in terms of intelligence products?  Frequently, products come out in one format *cough* pdf *cough* but why?  I’m convinced that a lot of it has to do with ingrained prejudices about what products are ‘supposed’ to look like.  But c’mon, that’s all based on style guides from 50 years ago when people were using typewriters and carbon paper (look it up).  At that time strict uniformity made some real sense, but we’re no longer getting our information primarily from the physical, written word.  Whole new venues have been opened up and yet the conventional wisdom seems to be that we should try to make our digital products mimic paper ones as much as possible.

That’s kind of like inventing the airplane but then only using it to taxi to where you want to go.

But we might want to think about this not just in terms of production but also analysis.  If one of the cornerstones of analysis is trying to understand some aspect of our environment by reducing bias and making connections, maybe there are ways to engage multiple areas of the brain at once.

More on this later….

Are bureaucratic functionaries any good at intelligence?

No.  Ok, thanks for coming and we’ll see you next time….

Well, perhaps a slightly longer answer is appropriate.

We are now 12 years past the September 11 attacks.  In those 12 years we have spent billions of dollars in the pursuit of ‘homeland security’ (a phrase which I have only grown to dislike all the more with the passage of time).  Regardless of whether or not you think the changes which have been wrought have been good or bad for us, no one can deny that our lives today are very different than they were 13 years ago.  The concepts of privacy, travel, state/citizen interactions and much more are fundamentally different than they were when, for example, I was a child.

All these changes, well, at least those that were *ahem* ‘planned’, were designed to protect America from the existential threat of terrorism.  Right?  Some of them were designed to reduce the threat, but many were designed to increase bureaucratic power and influence (see here) and others were designed merely to appear to reduce the threat (see here).  I’ll deal with the latter case today.

We had, according to a variety of very serious and very smart people at the time, a wily opponent that was always evolving, learning, recruiting, exploiting new technology and cultural shifts as they happen…able to strike anywhere and disappear back into the shadows.  A more dangerous threat than any we’ve faced in generations….perhaps ever.

And who did we (and do we) put in charge of organizations designed to do battle with these fiends?  Career civil servants.  Now, that’s not necessarily a deal breaker…I’ve been in government employ for years at a time and I’ve certainly seen people in all levels of government that are exceedingly competent, intelligent, imaginative and driven in their fields.  But let’s face it….those aren’t exactly the qualities that leap to mind when thinking of government bureaucrats.

After spending most of the past 12 years in and around homeland security circles, I’ve been continually astounded by the lack of imagination, curiosity and awareness of the world around them among many of the people in positions of authority.  So much so, in fact, that I’ve been forced to consider the possibility that much of homeland security is designed for appearances’ sake.  Or, to quote someone I was speaking with recently:

It’s an operational solution to a political problem.

If terrorism were really an existential problem in the United States, would we create and defend a system which has been described (accurately, if you want my humble opinion) as being comprised of ‘pools of ineptitude‘?

[embedded video]

Intelligence work, at its core, is an exercise in creativity.  It’s thinking about problems (or evaluating potential problems) in situations where you will never get complete information.  The deck, however, is stacked against us.  There are a host of evolutionary and cultural biases that make creativity and critical thinking difficult under the best of circumstances.  Leaving the responsibility for that sort of work in institutions that exemplify satisficing and conformity is like going to a gun fight with a rubber knife.

And that’s why, more than a decade after 9/11, our Intelligence Community, which has grown to enormously bloated proportions and scoops up vast quantities of data, remains unable to prevent strategic surprise or address new threats very well.

Or, as Josh Kerbel puts it in this very well done article (which I’ll expand upon in a later post):

…the intelligence community remains fixated on reacting to discrete actors rather than helping the federal government proactively shape the broader global environment.

In that vein, I’d recommend this article in Slate which summarizes research about how much we actually don’t like creativity despite what we’ve learned to say in job descriptions, pep talks, and such.

Staw says most people are risk-averse. He refers to them as satisfiers. “As much as we celebrate independence in Western cultures, there is an awful lot of pressure to conform,” he says. Satisfiers avoid stirring things up, even if it means forsaking the truth or rejecting a good idea.

So, what is to be done? How can intelligence analysis be done effectively in an environment where the conditions suppress its key components?  An important first step to addressing this, like any problem, is getting some widespread acceptance that it exists. That’s a herculean task in itself.

As much as I’d like to deeply erode the hierarchies that operate in most intelligence shops (as they tend to avoid providing the direction and prioritization decisions that should be their primary goal) that’s just not going to happen.  Much of the responsibility for improving things is going to have to rely on those fairly low on the food chain in ways that would probably be regarded as subversive by the existing powers.  The horse doesn’t just need to be led to water…it needs to be made to drink, either through force or trickery.