Same AI, 7.2x less value. What?
PwC surveyed 1,200+ senior executives. The gap isn't what you think.
We recently released our first YouTube video explaining the difference between AI, agents, and playbooks.
Two days later, PwC published a study proving exactly why it matters: 74% of AI's economic value goes to just 20% of companies.
Here's what the other 80% are getting wrong.
PwC surveyed 1,200+ senior executives and found the top 20% are generating 7.2x more AI-driven gains than the average company. They're 1.9x more likely to run AI autonomously across multiple tasks.
Same week, GPT-5.4 beat humans at real desktop tasks for the first time. 75% vs. 72.4%. Cool, sure. But also... so what? If the AI was already good enough and most companies weren't getting value, why would "even better AI" fix anything?
The capability was never the bottleneck
AI has been "good enough" for most business tasks for over a year. The reason 80% of companies aren't seeing results isn't that the AI can't do enough.
It's because they haven't defined what they want done.
What the 80% actually looks like
You know this company. Maybe you are this company?
Someone opens ChatGPT, types something slightly different every time, gets something back that's "fine," spends 45 minutes editing it, and does the exact same thing tomorrow with a totally different prompt.
There's no standard. There's no documented process. There's no way to hand it to someone else on the team. Every single interaction starts from zero.
We talk to these teams every week. They're not lazy or behind. They just skipped a step that nobody told them existed.
GPT-5.4 is a fast executor with no playbook
Let's be real. GPT-5.4 is impressive. But here's the thing nobody's saying about it: a 75%-accurate system operating your desktop without a defined process is just a faster way to produce work you'll have to redo.
All you've done is automate the mess.
The 20% figured this out already. They didn't get access to some secret model. They just wrote down how their work is supposed to be done before they handed it to the AI.
The playbook is the gap
Here's what the winning companies did that the rest didn't. Before they let AI run anything autonomously, they answered two questions:
What does good output look like?
What's a process we can give AI to get there?
That's genuinely the whole difference. The 20% pulling away from the 80% aren't smarter or better resourced. They just did the process work first.
What to do this week
Pick one task you use AI for every week where the results are hit or miss.
Write down two things: what does "good" actually look like for this task, and what process could you give AI to reliably produce that output?
That's your playbook starter. If you want the full framework for how AI, agents, and playbooks fit together, we broke it all down in our first YouTube video!
What's the one task where AI keeps giving you inconsistent output? Hit reply and tell us. Seriously. We read every one of these and we'll tell you if it's a tool problem or a playbook problem.
LINKS
For your reading list 📚
44% of Gen Z workers admit to actively sabotaging their company's AI rollout. If you're wondering why your team isn't adopting the tools you bought them, start here.
Allbirds sold its shoes and rebranded as NewBird AI, a GPU rental company. Stock went up 700% in a day. We have no further commentary. 😅
A Nebraska lawyer got suspended for 57 defective citations, 20 of which were AI hallucinations. Courts have now issued $145K+ in sanctions for AI citation errors in Q1 alone.
OpenAI acquired its first media company and is projecting $2.5B in ad revenue this year.
That's all!
We'll see you again soon. Thoughts, feedback, and questions are much appreciated - respond here or shoot us a note at [email protected]
Cheers,
🪄 The AMP Team (formerly: the AI Exchange Team)