Training for the Last Human Decision
Judgment Under Certainty: Refusal, Irreversibility, and Command Responsibility in AI-Enabled Operations
We trained decisiveness when uncertainty dominated.
AI changed the failure mode.
In AI-enabled command environments, the most dangerous moment is no longer confusion.
It’s clarity.
When systems converge early and confidently on a single course of action, friction disappears. Dissent collapses. Authorization starts to feel inevitable. And human judgment quietly compresses into confirmation, right when responsibility is most concentrated.
I’ve been working on a paper that names this failure mode, reframes refusal as a load-bearing command function, and proposes concrete training inserts and commander-level tools to preserve judgment at irreversibility thresholds.
This isn’t an argument against AI.
It’s an argument for training leaders to withstand certainty.
If you work in command training, doctrine, or AI-integrated operations, I’d welcome your thoughts.