
What AI Still Gets Wrong in Media Operations Without Senior Judgment

AI can now produce media plans, performance summaries, measurement frameworks, and campaign setups at impressive speed. The problem is not that the output is obviously bad. The problem is that it is often good enough to pass a casual review while missing the business context that actually matters.

Over the last few months, as I have been building my course on AI-native media operations, I keep coming back to the same uneasy thought.

AI is getting good enough to be dangerous in a very specific way.

Not dangerous because it is obviously wrong. Dangerous because it is often plausibly right.

That is a very different failure mode.

If an AI model gives you a ridiculous answer, most people catch it. You laugh, maybe screenshot it, maybe post about it on LinkedIn, and move on.

But if AI gives you a campaign plan that is 80% right, a measurement framework that sounds complete, a reporting narrative that feels polished, or a channel recommendation that looks strategically coherent, then the failure is much more subtle.

Someone still has to ask:

  • Is this grounded in the actual business?
  • Does this fit the client context?
  • Does this reflect how the platform behaves in real life?
  • Does this create the right trade-offs, not just the cleanest-looking answer?

That is where senior judgment still matters. A lot.


The Problem Is Not "AI Is Bad at Media"

To be clear, I do not think AI is bad at media operations anymore.

In fact, I think that argument is getting weaker every month.

AI is already useful for:

  • first-draft media plans
  • audience hypotheses
  • reporting summaries
  • creative testing frameworks
  • competitive scans
  • campaign QA checklists
  • measurement documentation

If someone still says, "AI is just a toy," I think they are underestimating what is happening.

My concern is almost the opposite.

AI has become strong enough that many teams will trust it before they have built the judgment layer required to supervise it well.

And from my experience, media operations is full of judgment calls that do not show up neatly in documentation.


Five Things AI Still Gets Wrong

These are the patterns I keep seeing.

1. It optimizes for the visible metric, not the real business objective

AI is very good at following the target it was given.

That sounds obvious. But in media, the stated target and the real target are often not the same.

Maybe the KPI says leads, but the business really needs qualified pipeline. Maybe the brief says reach, but the client actually needs internal political confidence. Maybe the dashboard says efficiency, but the brand is quietly trying to protect premium positioning.

AI usually optimizes the thing that is legible.

Senior judgment is what asks whether the legible target is the correct one in the first place.

2. It treats platform guidance as reality

Platform best practices are useful. I have spent a large part of my career working with them.

But anyone who has actually run campaigns for years knows the gap between platform guidance and messy operational reality.

What works in the help center is not always what works for this client, this budget, this category, this market, this data maturity, or this deadline.

AI will often produce the textbook answer. The senior operator knows when the textbook answer breaks on contact with the real world.

3. It misses stakeholder politics

This is the quiet killer.

A media plan can be mathematically fine and still fail because it does not match stakeholder expectations.

Maybe the client needs visible brand investment in one channel because leadership believes in it. Maybe the regional team needs local flexibility. Maybe the sales organization distrusts black-box attribution. Maybe procurement cares less about elegance than vendor consolidation.

This does not mean we should surrender strategy to politics. I am not saying that.

I am saying media operations lives inside organizations, not inside clean diagrams.

Senior people usually know where the invisible tripwires are.

4. It smooths over exceptions

AI likes clean systems.

Real media operations is not clean.

There are exceptions everywhere:

  • a client with unusual approval gates
  • a market with platform restrictions
  • a measurement stack with known blind spots
  • legal constraints
  • legacy taxonomy problems
  • creative dependencies that slow everything down

The machine tends to give you a coherent operating model. The human has to notice the one ugly exception that breaks the whole thing.

5. It mistakes completeness for readiness

This one feels especially relevant to me because I see the same pattern in coding.

AI is fantastic at producing things that look done.

The deck has sections. The report has bullet points. The framework has categories. The recommendation has logic.

And yet, when you try to use it in a live environment, something is off.

The sequencing is wrong. The risk is understated. The validation step is missing. The recommendation assumes capabilities the team does not have.

That last step from "complete" to "ready" is still very human.


Senior Judgment Does Not Mean Seniority Alone

I should add an important nuance here.

When I say "senior judgment," I do not mean that the most senior title in the room automatically has the best answer.

In fact, one of the uncomfortable realities of media agencies is that the VP of strategy may not have touched the platform deeply in years. The planning director may not know the latest quirks in implementation. The person closest to the truth may be a more junior operator who still works inside the systems every day.

So I do not think the answer is:

"Let the AI do the work, then ask one senior executive to bless it."

I think the answer is closer to:

AI produces the first draft. Deep practitioners validate the operational truth. Senior people add business judgment, trade-off judgment, and organizational judgment.

That is a very different operating model from both the old agency hierarchy and the lazy version of "AI replaces the junior work."


The Eval Layer Is the Real Work

I wrote recently about depth becoming the differentiator when AI raises the floor.

I think the operational expression of that is evals.

Not just in the machine-learning sense. In the practical team sense.

What defines a good campaign setup? What defines a trustworthy report? What discrepancy threshold is acceptable? What counts as launch-ready? What should trigger a second review?

Those definitions are not administrative overhead. They are the judgment layer.

And the teams that build this layer well will get much more value from AI than the teams that stop at prompt libraries and generic automation.
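As a loose illustration of what that judgment layer can look like once it is written down, a team could encode its "launch-ready" definition as explicit, reviewable checks instead of tribal knowledge. This is only a sketch: every field name and threshold below is invented for illustration, and a real team would define its own.

```python
# Hypothetical sketch: a team's "launch-ready" definition expressed as
# explicit checks. All field names and thresholds are invented examples.

from dataclasses import dataclass


@dataclass
class CampaignSetup:
    naming_follows_taxonomy: bool   # does the setup match the agreed taxonomy?
    tracking_verified: bool         # has conversion tracking been validated?
    budget_pacing_set: bool         # is pacing configured, not left on defaults?
    reported_spend: float           # spend according to the reporting layer
    platform_spend: float           # spend according to the ad platform


def discrepancy_pct(setup: CampaignSetup) -> float:
    """Spend discrepancy between reporting and platform, as a percentage."""
    if setup.platform_spend == 0:
        return 0.0
    return abs(setup.reported_spend - setup.platform_spend) / setup.platform_spend * 100


def launch_ready(setup: CampaignSetup, max_discrepancy_pct: float = 5.0) -> list[str]:
    """Return the list of failed checks; an empty list means launch-ready."""
    failures = []
    if not setup.naming_follows_taxonomy:
        failures.append("naming does not follow taxonomy")
    if not setup.tracking_verified:
        failures.append("tracking not verified")
    if not setup.budget_pacing_set:
        failures.append("budget pacing not set")
    if discrepancy_pct(setup) > max_discrepancy_pct:
        failures.append(f"spend discrepancy above {max_discrepancy_pct}% threshold")
    return failures
```

The point is not this particular code; it is that once the checks are explicit, an AI's output can be graded against them, and "what counts as done" stops living only in one senior person's head.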


Where This Leaves Teams

I do not think the takeaway is "be afraid of AI."

The takeaway is more demanding than that.

Use AI aggressively. Let it do the first 75-80% of the work. But be extremely clear about where human judgment enters:

  • objective setting
  • validation
  • exceptions
  • trade-offs
  • stakeholder management
  • quality standards

That is not anti-AI. That is what a serious AI operating model looks like.

This is also why I built Module 1 of the course the way I did. I wanted the free module to show the full campaign lifecycle, yes, but also the bigger point underneath it: AI can touch every phase. That does not remove the need for experienced judgment. It changes where that judgment matters most.

That's it from me.

I would genuinely like to hear how other people are handling this in practice. If you are running media teams already, where do you see AI producing the most convincing wrong answers? And if you are earlier in your career, do you feel like the judgment bar is getting clearer or fuzzier?

Cheers, Chandler
