UPDATE: Thursday, Jun 29, 2023 · 10:25:18 PM +00:00 · Fffflats
https://thestrategybridge.org/the-bridge/2016/8/16/a-new-plan-using-complexity-in-the-modern-world
If war is a continuation of politics, war must have the same randomness and individuality as the human condition. Wang and Qiao, the authors of Unrestricted Warfare, suggest that planning which seeks to "tie a war to a set of ideas within a predetermined plan is little short of absurdity or naïveté." They instead recommend a circular process, with feedback loops and revisions, to keep the initiative. This approach may appear counter-productive: instead of marching linearly toward an end state, a circular schematic looks as if it achieves nothing and is perpetual. But war is less a march than a dance. Western strategy should therefore be slowed down; it only needs to stay ahead of the enemy’s decision-making ability. As Henry Kissinger explained:
We must never forget that henceforth the purpose of strategy must be to affect the will of the enemy, not to destroy him, and that war can be limited only by presenting the enemy with an unfavorable calculus of risks. This requires pauses for calculation. Every campaign should be conceived as a series of self-contained phases, each of which implies a particular political objective, and with a sufficient interval between them to permit the application of political and psychological pressures.
There exists an intangible line where the procedures we rely on begin to dilute both individual cognitive agility and collective organizational adaptability. Team members take fewer risks and stop fighting for new insight when they have processes to protect them. It’s not intentional; it’s a function of our innate propensity to seek homeostasis—a comfortable, predictable environment.
Stanley McChrystal suggests that "being effective in today’s world is less a question of optimizing for a known (and relatively stable) set of variables than responsiveness to a constantly shifting environment. Adaptability, not efficiency, must become our central competency." It is time we adopted adaptability, starting with the slides of an old fighter pilot like Boyd.
UPDATE: Saturday, Jun 17, 2023 · 11:58:17 PM +00:00 · Fffflats
Another example from Paul Krugman: https://www.nytimes.com/2023/06/16/opinion/core-inflation-statistics.html
“traditional core inflation is strongly affected by the price of shelter... Most tenants have fairly long leases, so the average rent tenants pay lags far behind the rents paid by new tenants, which more closely reflect the current state of the economy... In the past, it may have made sense to look at changes over the last year, but in an economy going through as much turmoil as we’ve seen recently, that’s just too long a lag. Monthly data is too noisy, so many economists are now focusing on either three- or six-month changes. My sense is that even three-month data is too noisy, so six months is better, but in any case, we don’t want to focus on annual rates of change.”
He’s correct that a year is too long, but he’s also saying there’s a Goldilocks zone: go any shorter and you bite off on high-frequency noise.
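Krugman’s windowing argument can be sketched numerically. Below is a toy illustration (all numbers are synthetic, not real inflation data): a true monthly trend that shifts mid-series, plus month-to-month noise, with annualized rates computed over trailing windows. Short windows bite off on the noise; long windows lag the regime change.

```python
import random

random.seed(0)

# Synthetic monthly price changes (%): a true underlying trend that
# shifts at month 24, plus month-to-month noise. Not real data.
true_trend = [0.2] * 24 + [0.5] * 24
monthly = [t + random.gauss(0, 0.15) for t in true_trend]

def annualized(series, end, window):
    """Average monthly change over the trailing `window` months ending
    at index `end` (exclusive), annualized to a yearly rate."""
    chunk = series[end - window:end]
    return (sum(chunk) / window) * 12

# Compare trailing windows six months after the trend shift (month 30):
# the 1-month figure jumps around, while the 12-month figure still
# averages in six pre-shift months and understates the new trend.
for w in (1, 3, 6, 12):
    print(f"{w:2d}-month window at month 30: {annualized(monthly, 30, w):5.2f}%")
```

The spread of the 1-month series is several times that of the 6-month series, while the 12-month figure is the slowest to register the shift; that is the Goldilocks trade-off in miniature.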
Recently I found myself in a LinkedIn conversation, via comments, regarding the decision cycles described by John Boyd as Observe, Orient, Decide, Act (OODA) loops. The conversation was tangential to the recent Alpha Dogfight Trials, in which an artificial intelligence (AI) system competed against a trained and proficient fighter pilot in what the Air Force calls Air Combat Maneuvers (ACM), what the Navy calls Basic Fighter Maneuvers (BFM), and what we may colloquially consider dogfighting. The AI won every engagement. While the developing technology is fascinating, I think the OODA discussion that stemmed from it is more immediately relevant. It has also made me think about “Levels of War” in a manner I have previously considered but never captured. And it has made me revisit Boyd’s work, reconsidering it in a manner that builds on that of Charles Kenny. This may seem a strange marriage. After all, Charles Kenny wrote Close the Pentagon seeking international cooperation, while Boyd worked from a competitive, zero-sum view. Yet I think it will work. I’ll write later about this marriage and how I actually find it natural. First, below, I’m sharing the OODA discussion as an initial primer. Before getting to the notion of Boyd in a Positive Sum World, I’ll provide a second primer considering Levels of War, including my own thoughts on the subject. Then I will deliver the Boyd, Kenny, and teamwork piece. As a bonus, it will also consider whether or not Chess, Starcraft, and other games are actually strategic.
Note: this OODA discussion has been copied and pasted in pieces to make what I feel is the easiest sense for the reader. As written, it flowed like a tree, from the trunk of the initial comment through multiple branches of parallel comments, which made it disjointed. Arranging it in strictly chronological order also seemed a bit off. I’ve also renamed the commenter with whom I was speaking, though not quoted authors nor referenced public figures.
Flats: “Side note: was great hearing these pilots reference the creator of the OODA Loop concept John Boyd as they narrated these AI engagements. I only wish Boyd was around to see this. I have no doubt he would be encouraging all strategic thinkers to apply appropriate thought to the impact of this news on the military.” - OODALoop, Bob Gourley
1. Umm... Boyd developed OODA in the Air Force and in the context of dogfights. Why would it be unexpected for him to be referenced? Shouldn’t hearing such talk be normal as opposed to great?
2. Fast OODA loops are usually good technically and tactically, but not always so operationally and strategically. Don’t hoorah OODA speed with strategic thinking. Fast OODA can bite off on high-frequency noise or miss slow responses, which also means loops can make mistakes and are susceptible to operational and strategic trickery. This work may have strategic impact; however, it is very much tactical-level in function. The terms are not interchangeable. Strategic gain here comes through potential acquisition shifts and improved capability, not through speed of thought. Actions applied need to allow sufficient time for effects to propagate. Assessment and new actions inside of this window can create problems. Observed strategic impact, assessment, and adjustment will be slow.
Example of an OODA loop too fast: pilot-induced oscillation (PIO). An input is made; the delay in airframe response is perceived as no effect achieved; a second, additive input is made. The first response is then observed and a counter made; the second response is observed and an additional counter made; and the oscillation builds.
Side note: Systems are susceptible to PIO if their responses take longer than 300 milliseconds after human input has been made. Think of all those times the computer gummed up and you got ahead of yourself trying to type.
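The PIO mechanics above can be sketched as a toy discrete-time loop (everything here is hypothetical: a pure-delay “airframe” and a proportional “pilot”). Commanding on every step while the response is still propagating builds an oscillation; waiting out the delay converges.

```python
def fly(gain, delay, act_every, steps=30):
    """Toy pure-delay plant: an input takes `delay` steps to appear in the
    output. The pilot commands gain * error, but only on steps divisible
    by `act_every`. Returns the output history."""
    target = 1.0
    u = [0.0] * steps        # command history
    y = [0.0] * (steps + 1)  # observed output
    for t in range(steps):
        if t % act_every == 0:
            u[t] = gain * (target - y[t])
        applied = u[t - delay] if t - delay >= 0 else 0.0
        y[t + 1] = y[t] + applied
    return y

# Impatient pilot: re-commands every step, before prior inputs take effect,
# so each "no effect yet" observation triggers another additive input.
impatient = fly(gain=1.0, delay=2, act_every=1)

# Patient pilot: waits out the two-step propagation delay before
# re-commanding, so each decision sees the effect of the last one.
patient = fly(gain=1.0, delay=2, act_every=3)
```

With these numbers the patient loop settles exactly on target, while the impatient loop overshoots, counters the overshoot, and diverges: an OODA loop spinning faster than effects can propagate.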
“George”: At what point does AI get so fast that "getting inside the opponents OODA Loop" becomes meaningless?
Flats: I would say being inside is discrete: you are inside or you are not, so getting even faster once inside becomes irrelevant. Regarding the technical and tactical levels, you can generally forecast your effects even before they are observed, and therefore carry them forward into subsequent iterations as part of your “observe” even though you haven’t actually observed yet. So at these levels, it’s not quite discrete. At the operational and strategic levels, you can’t count on predicted effects actually occurring, and you don’t necessarily know the unintended consequences, so you can’t get too fast or you may make things worse before realizing it. Think global warming: it develops so slowly that implementing corrections becomes difficult, while anti-science deniers have an easy time with their propaganda. Operationally and strategically, we also need to consider the environment, neutral third parties, partners, and allies, not just adversarial loops. They all make their own contributions to the feedback generating the effects that need to be observed.
Anti-vaxers could be another example of an OODA loop too fast. They don’t vaccinate and their kids seem healthy, yet the periodicity of old-age diseases has been slowed by herd-immunized populations. They gain a false sense of positive feedback while the environment degrades on a slower cycle, hitting with significant impact only after several iterations have occurred and the anti-vax subset of the population has grown. They’re spinning too fast to really see the effect of their actions.
Note: “old-age diseases” here means diseases that plagued humanity’s past but should essentially be vanquished now through modern medicine. It does not refer to old persons versus young persons as the likely victim set.
“George”: Great examples of OODA applied to non ACM activities. My rhetorical question relates to speed. Is there a point where actions are occurring so quickly that OODA ceases being valuable as a model?
“George”: PIO from a human would slow the loop a bit, but begs the question. Let's say AI vs AI. Only limits are airframe related. Do you see the possibility of a shoot-down scenario that occurs so quickly that defensive maneuvers don't matter?
Flats: A defensive or offensive flavor of excursion seems irrelevant to me with regard to fitting into an OODA loop; they’re both equal in this context, and an AI will do either when it sees one or the other as appropriate.
AI v AI would fall squarely in the tactical level with significant technological influence. In such cases, an OODA loop faster than the discrete threshold of being inside, or faster than the opponent’s loop, is OK because the effects of actions are fairly predictable: the AI can consider effects that have not yet occurred as part of the current observation simply by knowing the inputs to achieve them have already been made. The AI can further forecast opponent action without observing the opponent’s recent inputs, based on most-likely and most-dangerous considerations. Such ways of speeding one’s OODA loop fall apart at the operational and strategic levels. In the tactical fight of the AIs, we did observe them defending when threatened.
As to PIO, that’s really a technical aspect to which the AI would fall victim just as a human would, assuming neither forecasts the effect yet to propagate from an action already made. AI and human will both avoid it if they understand the causal relationship. Think Paveway II guidance laws and their consequences.
Note: An excursion is a sacrifice of a portion of one’s energy package (sum of kinetic and potential) in order to either deny or take a shot. It also generally trades against effort for optimum positioning as often shot opportunity and control position aren’t coincident. Sometimes they can be so, however.
“George”: Flats Appreciate the thoughtful response. As you opened with, would love to know how Boyd would respond to these developments. (Late comment, question not answered)
Flats: Planners can do the same anticipation, combined with most-likely and most-dangerous considerations, at the higher levels, though executors still need to observe action and adjust accordingly, which requires time for propagation at those levels. There are too many variables, including multiple players, with too many possible outcomes. Hence planners can consider multiple differing possible loops to try to prestage and enable faster loops, but they cannot move beyond the next loop; they’re tied to the speed of effect propagation, not just the speed of decision. To do otherwise is to buy massive risk. At the operational and strategic levels, the adversary’s opportunity to try a third way, and the fact that third parties vote, skew the opportunity to be faster. You want to be the slowest while still being faster than the opponent; otherwise you cede decision space, and Sandy Woodward would be upset with you. An analogy would be weaponeering: you want the smallest bomb that gets the job done. You need time to see whether desired effects have been achieved, and to learn and try to understand unintended consequences. Caveat: in the business world, you may be more concerned with the environment or customers than with competitors. Loop accordingly.
I believe this sort of thinking also resonates with Mattis when he wrote against effects-based planning. I personally am not against effects-based planning, though you need to limit yourself to first- and second-order effects. If you try to plan to third order or beyond, you run into the same problems of too many variables and too many players having a voice. The probability of your desired and expected outcome drops quickly. Without reason to expect an anticipated effect to occur, you need to wait to observe it occurring. Any OODA loops occurring inside this window are either wasted or, worse, counterproductive.
“George”: Flats I've read Mattis' book and another covering his combat career. He's clearly a Boydian. Not surprising given Boyd's work with the Marines.
Side plug for a friend (though I have yet to read this).
Flats: Thinking of Sandy Woodward working at the operational level of war, “... executive decisions should never be made until they have to be, particularly if circumstances could change in the meantime.” - One Hundred Days: The Memoirs of the Falklands Battle Group Commander
Similarly Truman working at the strategic level delayed the Potsdam Conference such that the bulk of it would occur after the Trinity Test. - The Decision to Use the Atomic Bomb, Gar Alperovitz
In both these instances, while their actions could be ascribed to the orient portion, these two leaders deliberately slowed their decision cycles. Two examples, one operational and one strategic, of moves for gain by slowing OODA.
Side note: if you decide to read Woodward, you should read Mike Clapp and Julian Thompson first.
Flats: and with this we should also consider George Kennan and the Long Telegram. It was foundational for a strategy whose effects took generations to propagate; no looping here.
Flats’ late contribution, noted well after the conversation: Heron Systems was using a slower OODA loop in the technical realm to greater effect, recalling the basic flight-instruction lesson that “fast is slow, while slow is smooth and smooth is fast.” Per Aviation Week, “Heron credits Falco’s fine-pointing of the F-16 to a control strategy that emphasized smoothness. “We’re controlling it around 10 Hz. It looked like a lot of our competitors were controlling at 50 Hz,” Bell says. That limited update rate required the AI agent to know its trajectory for the next 3 sec. to keep its opponent within the 1-deg. cone of the “gun.”” (Note: “strategy” is misused there; it’s a control law or technique, not strategy.)
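Back-of-the-envelope arithmetic on the quoted numbers (the gun range is my own hypothetical, chosen only to illustrate the error budget the 1-deg cone implies):

```python
import math

update_hz = 10.0      # Heron's stated control rate
competitor_hz = 50.0  # competitors' stated rate
horizon_s = 3.0       # stated look-ahead
cone_deg = 1.0        # stated gun cone

# A 10 Hz loop commands every 100 ms versus every 20 ms at 50 Hz, so the
# agent must plan horizon_s * update_hz = 30 control steps ahead.
steps_ahead = horizon_s * update_hz

# Hypothetical gun range, purely to put the 1-deg cone in linear terms.
gun_range_ft = 1000.0
lateral_budget_ft = gun_range_ft * math.tan(math.radians(cone_deg))

print(f"update period: {1000 / update_hz:.0f} ms vs {1000 / competitor_hz:.0f} ms")
print(f"planning horizon: {steps_ahead:.0f} control steps")
print(f"1-deg cone at {gun_range_ft:.0f} ft allows ~{lateral_budget_ft:.1f} ft lateral error")
```

The point of the slower loop, then, is not sluggishness but commitment: each command must stay valid across a longer propagation window, which forces smooth, deliberate trajectories rather than 50 Hz twitching.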
Thoughts found later, after the conversation ended: A lot of folks pick up on timing in OODA as a means to introduce chaos and confusion. They also speak of altering tempo and trying to be unpredictable: hit the half beat to knock an adversary offbeat. Guess what, folks: being unpredictable in a dogfight gets you killed. Dogfights are now so well understood that doing something merely to be unpredictable means making an error, upon which a savvy adversary will capitalize. Using your energy well and minimizing errors is the best way to survive a dogfight, though with today’s missiles, good luck; you’ll probably both die.

Now they try to transpose OODA to business and to strategy. Remember, Boyd was more a tactician. Think how well being unpredictable in negotiations does for you. Not well. Think saying one thing to make a deal and then doing another. You won’t be in business long. Now think strategy. Deterrence doesn’t work by being unpredictable. Coordinating partners and working across all levels of national power doesn’t work by being unpredictable. Such subterfuge also leads to market inefficiency, and thus is not in the national interest, as it raises costs and creates friction while potentially losing better industry. Competition doesn’t guarantee survival of the greatest or best; it merely flushes the worst. That raises the average but doesn’t breed excellence or create higher peaks.