Two posts on strategic focus helped crystallize a major criticism I’ve had of the kind of work done in the puzzle palace… natch, make that the kind of work required of the big thinkers sitting in the puzzle palace, who are ultimately responsible for answering the requirements laid out by the stars and bars who run the place.
Drew Conway, picking up on Robert Haddick’s weekly This Week at War report at the Foreign Policy website, writes about stated military interest in developing decentralized, autonomous fighting units. I disagree with some of Drew’s observations. “From my experience,” he writes, “most terrorist networks are organized as highly clustered layers, with central leadership forming the center, pushing orders downrange to the periphery.” OK. “Terrorist foot soldiers are rarely, if ever, allowed to act without explicit consent from agents connected to the leadership.” Here I think Drew overgeneralizes, since there are few givens linking intent and implementation – a.k.a. command and control – and outcomes vary considerably.
Drew goes on to make some excellent points in his discussion of network specialization and niche expertise, which make for a useful basis for comparing terrorist networks and proposed military networks. One point he doesn’t make, and that I would add, is that deliberately enabling and accepting real tactical unit autonomy is a catch-22. Modern technology enables very senior people to focus on very granular issues. Many have argued that that’s a recipe for nano-management and inhibits strategic thinking – producing a peculiar counterpart to the proverbial strategic corporal: the tactical flag officer.
This is at the heart, I think, of what the other Drew – Andrew Exum – asks at Abu Muqawama. Citing Nir Rosen, Ex asks whether mass casualty events like yesterday’s truck bombing in Iraq have any strategic significance. Rosen’s analysis is worth revisiting:
The occasional al Qa’eda suicide attack can still kill masses of innocent civilians, but it has no strategic impact; in fact it is difficult to understand what motivates such attacks today, since their effect is almost nil. It would be naive to say that Iraq’s future is certain, or even likely, to be a peaceful one, but the war between Sunnis and Shiites is now over.
Some of the logic that pre-dates 9/11 and that was amplified by it has been that terrorism does what it does by virtue of the fact that it’s a form of psycho-theatre, so its impact is contingent on both the extent of damage done and, more importantly, on how much attention we pay to it (through fear, sensationalism, politicization, or what have you). Mass casualty incidents certainly emphasize the former, but I think there’s probably an argument to be made even in such cases that it’s the latter that amplifies things – and that raises questions about quantitative thresholds and serious cost-benefit analysis of appropriate countermeasures and responses.
The short version is that least likely though most dangerous scenarios – say, bin Laden himself deploying a backpack nuke – require a tactical level focus on individuals and their movements. So, the network fight at the core of counterterrorism and counterinsurgency may, under certain well-defined circumstances, involve a high-level focus on microscopic detail. But I’ve heard silly statements like “tactical events with strategic effect” applied way too many times to the most mundane details to believe that it’s anything more than an excuse, a default setting, for getting and staying stuck in the weeds.
I’m not sure that there’s any way out of the conundrum. Better filtering of information is always a good thing to strive for. Better judgement, too. In both legal and ethical terms, military commanders also have a responsibility to be as well informed as possible; the consequences of being ill-informed, much less wilfully so, are potentially disastrous. So where to draw the line between command level situational awareness, and the imperative to impose control over units, right down to the tactical level? Do we need to turn off the technology that enables it? That’s tantamount to turning a blind eye to what goes on below the strategic level; is it a necessary pre-condition for accepting small unit autonomy? Somewhere between cyberneticism run amok and autonomous battlefield tonka toys – things we’ve debated extensively at CTlab – there’s got to be a more effective, if not exactly happy, medium.