
The Top 3 Playbooking Mistakes (and What to Do Instead)

We taught thousands of people how to playbook this year. Along the way, we noticed a set of recurring mistakes that consistently held people back from getting real leverage with AI.

In our final newsletter of the year, we’re unpacking those mistakes and showing you how to correct them, so you can create better playbooks in 2026.

Playbooking Mistake #1: Starting with the tech instead of the work.

Many people assume the way to start creating a playbook is by asking things like:

  • “Can AI do this? What about this?”

  • “How do I build an agent?”

  • “How do I learn Claude Code?”

On the surface, these feel like smart questions. We’re talking about AI after all. But in practice, they derail your entire playbook. 

Why? Because they anchor your thinking in technical solutions instead of what’s operationally meaningful.

This is one of the fastest ways to waste time creating a playbook that never gets used because:

  • The playbook doesn’t make a real difference

  • You spend hours researching capabilities without a clear outcome to optimize for

  • You overbuild solutions to unclear problems

A more worthwhile starting point is asking: What problems or opportunities exist in this work today? (In marketing, sales, ops, delivery, etc.)

Once you start there, valuable playbook ideas become obvious - and AI becomes a lever, not the driver.

Playbooking Mistake #2: Treating work like a black box.

Once you have playbook ideas, you need to write your playbook. People get tripped up here in two ways. The first: they get stuck seeing their work as one big, indivisible action.

Let’s break down an example from one of our recent playbook challenges. One participant shared that their Cold Outreach Playbook only had two steps because they only “research the company, then send a cold email to the company.” 

This playbook wasn’t performing well.

So after further discussion, we helped them realize what they actually do is:

  1. Research a company using specific types of search queries

  2. Scan search results for particular signals

  3. Interpret those signals against their services or past experience

  4. Identify where there’s overlap or where opportunity exists

  5. Based on where there’s overlap, identify 3 ways they can help the company

  6. Write a cold email to the company, naming the personalized and specific ways they can help

Once their process was broken down and they learned to separate research from analysis, analysis from judgement, and judgement from execution, AI suddenly had clear, specific tasks to perform - and could do them well.
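To make the idea of “smaller, clearer units of work” concrete, here’s a minimal sketch of that six-step breakdown as separate, single-purpose functions. Every name and all the stubbed logic here are hypothetical stand-ins for the real work - the point is only that each step has one job and a clear handoff to the next:

```python
# Hypothetical sketch: each playbook step becomes its own small unit of work.

def research_company(name):
    # Steps 1-2: run targeted queries and scan results for signals.
    # Stubbed with canned signals for illustration.
    return ["hiring SDRs", "new product launch"]

def interpret_signals(signals, services):
    # Step 3: compare signals against your services or past experience.
    return [s for s in signals if any(word in s for word in services)]

def find_opportunities(overlaps):
    # Steps 4-5: turn overlap into up to three concrete ways to help.
    return [f"Offer support with: {o}" for o in overlaps][:3]

def draft_email(company, opportunities):
    # Step 6: name the personalized, specific ways you can help.
    bullets = "\n".join(f"- {o}" for o in opportunities)
    return f"Hi {company} team,\n\nWe noticed we could help with:\n{bullets}"

signals = research_company("Acme")
overlaps = interpret_signals(signals, services=["hiring", "launch"])
email = draft_email("Acme", find_opportunities(overlaps))
print(email)
```

Notice how the structure mirrors the decomposition above: research, interpretation, opportunity selection, and execution are separate, so any one unit could be handed to a person - or to AI - with a clear input and output.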

You can break up your black box of work and create smaller, clearer units of work by asking yourself:

  • What’s the first action I take?

  • How do I do that step?

  • What tips or tricks do I lean on each time I do this?

  • What decision do I make - or what artifact do I create - before moving to the next step?

Playbooking Mistake #3: Under-specifying the work.

The other mistake we see people make when writing their playbook is that even if they do get granular with their work, they don’t clearly explain what “good” looks like for each step. 

We often hear people say:

  • “I know good work when I see it.”

  • “I can tell when something is off.”

  • “My work feels intuitive.”

And that may be true for you. But if that intuition lives only in your head and can’t be articulated, you won’t be able to delegate it to a human… much less to AI.

This tends to show up as:

  • Playbooks with a ton of steps and yet no specificity on how any of them should be done

  • Playbooks that describe the outcome of the work, but not the standards for the work

AI doesn’t fail because it’s “not smart enough”. It fails because it was given underspecified work. 

As you write your playbook, you should consider:

  • What does “good” actually mean for each step? 

  • What assumptions am I making that need to be stated?

  • Where do I rely on taste, context, or nuance? Have I stated that anywhere in the playbook?

See You in 2026

As we look ahead, we’re increasingly convinced of one thing: Playbooking is the skill to know in 2026.

The people who will have so. much. leverage won’t just have expertise - they’ll know how to translate that expertise into clear, repeatable playbooks their teams can actually use.

If you want to master playbooking your team’s work, learn more about our flagship bootcamp here.

Until then, happy holidays from all of us at The AI Exchange. 🙂 

LINKS

For your reading list 📚

  • Merriam-Webster announced that 2025's word of the year is “slop”. In case you missed it, check out our thoughts on work slop and learn how we encourage our team to use it here

  • Have a winter-break reading list? Your Kindle app now answers questions about what you’re reading. (It’s supposed to be spoiler-free!)

  • Thanks to a deal with Disney, Sora will be able to generate videos drawing on more than 200 Disney, Marvel, Pixar, and Star Wars characters – an interesting move in the AI copyright conversation.

That's all!

We'll see you again soon. Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected].

Cheers,

🪄 The AI Exchange Team