Anthropic changed their data policies; here’s the 4 steps to take.
We’re here to help you track the signal from the noise in AI. And that’s exactly what we’ll be doing today.
Anthropic made a quiet but massive move last month. Previously, Anthropic bragged about not touching people’s data for training. That era is done. If you don’t actively say no, your convos live in their training pipeline for up to five years.
What happened
Consumer/personal tiers (Free, Pro, Max): Your chats are NEWLY fair game for model training. The default is opted in. If you don’t want that, go check your settings ASAP (before Sept. 28th - this SUNDAY!).
Enterprise & API plans: You still have a free pass. Your data won’t be used for training.
What this really means
This move is all about survival. LLMs eat data for breakfast, and Anthropic’s staring down OpenAI and Google at the table.
Your real-world convos (coding attempts, problem-solving queries, and business conversations) are a goldmine. That’s why these companies want them; but you should be able to make that choice yourself.
4 steps to take to be smart on data privacy for your company:
Consumer/free tiers are where the risk is highest. Paid enterprise tiers are generally safer. But here’s how to play it smart:
Now’s the time to stop “we all share one paid account”. We see this all the time with our clients: companies that are serious about AI roll out paid subscriptions org-wide and invest in training. It takes LESS THAN 1 hour a month of saved time to pay back the cost (the $30/mo plans are JUST fine).
Even in a paid, secure account – set guardrails. Make an AI use policy. Spell out what’s safe to share and what’s off-limits.
Don’t trust defaults. Slow down. In general (and especially if you are the AI Operator or on the AI Operations team), read the fine print any time you sign up for a tool.
Audit often. Check for retention windows (how long they hold your data), training opt-outs, and data deletion rights regularly.
Lastly, this is where one of our core mantras comes in: Own the playbook, rent the tech.
You don’t own Anthropic, OpenAI, or whatever provider has the “best model”.
But what you do own is your processes, workflows, and frameworks for how AI integrates into your work.
When you own your playbook, policies and model changes become speed bumps instead of roadblocks. You can easily swap models and providers as you need.
So document your playbooks.
Keep a library of prompts and automations portable across models and providers.
And always assume the tech will keep shifting.
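To make the "portable" idea concrete, here’s a minimal Python sketch of what keeping your prompt library separate from the provider wiring can look like. Everything here is hypothetical (the prompt names, the `Provider` wrapper, and the dummy `vendor_a`/`vendor_b` backends are illustrations, not any real vendor’s SDK): the point is simply that when prompts live in your own library and providers sit behind one thin interface, swapping vendors is a one-argument change.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Your playbook: prompts you own, stored independently of any vendor.
PROMPT_LIBRARY: Dict[str, str] = {
    "summarize": "Summarize the following text in one sentence:\n{text}",
}

@dataclass
class Provider:
    name: str
    send: Callable[[str], str]  # takes a rendered prompt, returns a completion

# Dummy backends stand in for real vendor SDK calls (assumption for the sketch).
PROVIDERS: Dict[str, Provider] = {
    "vendor_a": Provider("vendor_a", lambda p: f"[vendor_a] {p[:30]}..."),
    "vendor_b": Provider("vendor_b", lambda p: f"[vendor_b] {p[:30]}..."),
}

def run(task: str, provider: str, **kwargs) -> str:
    """Render a prompt from the shared library and route it to any provider."""
    prompt = PROMPT_LIBRARY[task].format(**kwargs)
    return PROVIDERS[provider].send(prompt)

# Swapping providers is a config change, not a rewrite:
print(run("summarize", "vendor_a", text="Anthropic changed its data policy."))
print(run("summarize", "vendor_b", text="Anthropic changed its data policy."))
```

When a provider changes its data policy, only the `PROVIDERS` wiring needs to change; the prompt library and workflows you own stay intact.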
THAT WAS FAST
The AI Operator Bootcamp is SOLD OUT!
Apply to be first in line for our next cohort starting in December.
LINKS
For your reading list 📚
Google insists it can deliver AI summaries and preserve a healthy web despite Rolling Stone’s lawsuit over lost traffic.
Perplexity is getting sued for plagiarizing definitions – including the word “plagiarize” itself.
The FTC is demanding seven AI giants reveal how chatbots affect kids, spotlighting safety risks and protections for young users.
Ted Cruz’s proposed SANDBOX Act hands Big Tech a decade-long pass to ‘move fast and break things’.
That's all!
We'll see you again soon. Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected].
Cheers,
🪄 The AI Exchange Team