Leadership · 8 min read

Commander's Intent: The Military Doctrine That Makes AI Actually Work

Most people use AI like a vending machine. Specific input, specific output. There's a better model—one I learned over 18 years in the Marine Corps.

Josh Caruso
October 22, 2025

Most people use AI like a vending machine. Put in a specific request, get out a specific result. Ask a question, get an answer. Give a task, get an output.

That works. But it's not how AI works best.

The better model is something I learned over 18 years in the Marine Corps: commander's intent.

What Commander's Intent Actually Is

In the Marine Corps, we don't use top-down, micromanaged leadership. We use decentralized command and something called commander's intent.

Here's how it works: the commander states the objective and the end state—what we're trying to achieve and what success looks like. They define the boundaries—what we absolutely cannot do, what constraints we're operating under. Then the Marines closest to the problem figure out how to achieve it.

The commander doesn't specify every step. They specify the destination and trust their people to find the path.

Why? Because the people on the ground have information the commander doesn't. They can see terrain the commander can't see. They encounter situations that couldn't be predicted from headquarters. If they had to radio back for instructions every time something changed, they'd be dead before the response arrived.

So instead, they get the intent. They understand what the commander is trying to achieve and why. And they have the autonomy to make decisions in the moment that serve that intent—even if those decisions weren't explicitly authorized.

This is exactly how AI works best.

The Vending Machine Model

When people first start using AI, they treat it like a vending machine. Specific input, specific output.

"Write me a 500-word blog post about X." "Give me a list of 10 ways to do Y." "Add rate limiting to this API using Redis with 100 requests per minute."

This works for simple tasks. You get exactly what you asked for.

But it also limits what AI can do. You're constrained by the specificity of your request. You have to know what to ask for before you can get it. You have to break down the problem, figure out the solution, specify each step.

That puts all the cognitive work on you. The AI is just executing your plan. If your plan is flawed, the output is flawed. If you don't know enough to specify the right approach, you get the wrong approach.

It's like a commander who tries to specify every movement of every Marine. Even if they could process that much information—which they can't—they'd be working with stale data. By the time the order arrived, the situation would have changed.

The Commander's Intent Model

Here's a different approach.

Instead of specifying the solution, express the concern:

Vending machine: "Add rate limiting using Redis with 100 requests per minute per user."

Commander's intent: "I'm worried about abuse. We need to be protected, but legitimate users shouldn't be impacted."

The first one gets you exactly what you asked for—which might not be what you actually need. Maybe Redis is overkill. Maybe 100 per minute is the wrong threshold. Maybe there's a better approach entirely. You'll never know because you specified the solution, not the problem.

The second one gives the AI room to think. It understands what you're trying to achieve (protection from abuse) and what constraints matter (don't hurt legitimate users). It can consider options you wouldn't have thought to ask for. It can bring knowledge you don't have.
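For instance, if the service runs as a single process, the AI might come back with a plain in-memory limiter instead of standing up Redis. Here's a minimal sketch of that kind of alternative; the class name, limits, and user ID are invented for illustration, not a recommendation for your system:

```python
# Hypothetical illustration: the kind of simpler option an intent-level
# prompt leaves room for. Assumes a single-process API where an in-memory
# sliding window is enough. Names and numbers are invented.
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Tracks recent request timestamps per user and rejects bursts."""

    def __init__(self, max_requests: int = 60, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._requests: dict[str, deque] = defaultdict(deque)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        window = self._requests[user_id]
        # Drop timestamps that have aged out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # over the limit for this window
        window.append(now)
        return True


# Usage: check the limiter before doing expensive work.
limiter = SlidingWindowLimiter(max_requests=60, window_seconds=60.0)
if not limiter.allow("user-123"):
    print("429 Too Many Requests")
```

The point isn't that this is the right answer. The point is that the intent-level prompt leaves room for it, and the solution-level prompt doesn't.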

This is commander's intent applied to AI. State the objective. Define the constraints. Trust the execution.

Why This Works

Commander's intent works in combat because the Marines on the ground have information the commander doesn't. They can see things, hear things, sense things that can't be communicated up the chain fast enough to matter.

AI works the same way.

When I'm building software, the AI can see the code, the error messages, the context, the dependencies—information I either don't have or would take me hours to gather. If I try to specify every decision, I'm working with less information than the AI has.

But if I set the intent—here's what I'm trying to achieve, here's what I'm worried about, here are the boundaries—the AI can use all the information it has to find the best path.

When conditions change—an unexpected error, an edge case, a conflict I didn't anticipate—the AI can adapt without waiting for new instructions. Just like a Marine doesn't radio back for permission to take cover when they're being shot at.

The Trust Requirement

This only works if you can trust the AI to make good decisions within the constraints you set.

In the Marine Corps, that trust is built through training, shared doctrine, and experience working together. A commander trusts their Marines because they've seen them perform. They know how they think. They've developed a shared understanding of how to operate.

With AI, the trust is built through iteration. You set intent, observe the output, course-correct when needed. Over time, you learn how the AI interprets your intent. You learn how to communicate more effectively. You develop a working relationship.

The first few times you try this, you'll need to verify carefully. You'll catch misunderstandings. You'll clarify constraints that weren't clear.

But as you work together more, the trust builds. You learn what the AI does well and where it needs more guidance. The AI—or at least, your understanding of how to work with it—gets better.

What This Looks Like in Practice

When I sit down to work with AI, I don't start with tasks. I start with intent.

"Here's what I'm trying to build and why it matters."

"Here's what I'm worried about."

"Here are the boundaries—what we absolutely can't do, what constraints we're operating under."

Then I let the AI propose an approach. I evaluate whether it makes sense given the intent. If it's off, I don't just correct the output—I clarify the intent. What did I fail to communicate? What constraint wasn't clear?

This is different from iterating on specific requests. I'm not saying "no, do it this other way." I'm saying "here's what I was trying to achieve—does that change your approach?"

Usually it does. Usually the AI comes back with something better than either its first attempt or what I would have specified directly.

The Skill Shift

Using AI with commander's intent requires different skills than the vending machine approach.

Vending machine users need to know how to ask specific questions. They need to break problems down into discrete tasks. They need enough expertise to specify the right solution.

Commander's intent users need to know how to articulate objectives. They need to define end states clearly. They need to identify the constraints that actually matter versus the ones that are just assumptions.

That's a leadership skill, not a technical skill. It's the same skill that makes a good commander: the ability to communicate intent clearly enough that people with more ground-level information can execute intelligently.

If you've ever led a team—in the military, in business, anywhere—you've practiced this skill. You know the difference between telling someone exactly what to do and telling them what you need accomplished.

AI rewards the second approach.

The Compound Effect

Here's what I've found over years of working this way: the relationship compounds.

Each session teaches me something about how to communicate intent more effectively. Each misunderstanding reveals a constraint I failed to specify or an assumption I didn't realize I was making.

Over time, I get better at setting intent. The AI gets better at executing within that intent. The trust builds. The output quality improves.

This is exactly what happens with a well-functioning military unit. The commander and their Marines develop a shared understanding that makes communication more efficient. Less needs to be said because more is understood.

With AI, that shared understanding lives in how you've learned to work together. The patterns you've developed. The shorthand that's emerged.

The Bottom Line

If you're using AI like a vending machine—specific input, specific output—you're leaving most of its capability on the table.

Try commander's intent instead.

State what you're trying to achieve. Define what success looks like. Set the boundaries that matter. Then trust the execution and verify the results.

You'll be surprised how much better the output gets when you stop specifying solutions and start communicating intent.

It's the same principle that makes decentralized leadership work in combat. The people closest to the problem—whether they're Marines on the ground or AI with access to context you don't have—can make better decisions than you can from a distance.

Your job isn't to make every decision. Your job is to set the intent clearly enough that good decisions get made without you.
