Anthropic Updates Data Policy: Users Must Choose to Opt Out or Share Data

Anthropic, the AI company behind Claude, has rolled out a major change to its data policies. All Claude users must now decide by September 28 whether they consent to their conversations being used for AI model training. This marks a significant shift from Anthropic’s previous approach, under which consumer chat data was not used for training at all.
What Has Changed?
- Data Use for Training: Anthropic will now use conversations and coding sessions from Claude Free, Pro, and Max users (including Claude Code) as training data for future AI models—unless users actively opt out.
- Data Retention: For those who do not opt out, the company will retain data for up to five years. Previously, consumer data was deleted after 30 days unless required for policy or legal reasons.
- Business Customers: Users of Claude Gov, Claude for Work, Claude for Education, and API access are not affected by this change, mirroring OpenAI’s policy of safeguarding enterprise data from training use.
Why Is Anthropic Making This Move?
According to Anthropic, the new policy is about giving users a choice and using real interactions to improve model safety and capabilities. The company claims that data from users will help refine model safety mechanisms and enhance skills like coding, analysis, and reasoning. However, it’s clear that, like other major AI companies, Anthropic needs vast amounts of real-world conversational data to stay competitive in the ongoing AI race.
Industry Pressure and Policy Trends
This update comes amid growing scrutiny on how AI companies handle user data. OpenAI, for instance, is currently facing a court order to retain all ChatGPT conversations indefinitely due to ongoing litigation from publishers. The industry trend is towards longer retention and broader use of consumer data for training—raising concerns about transparency and user consent.
User Experience and Consent Challenges
Anthropic’s new policy rollout highlights common issues in the tech industry regarding user awareness and consent:
- Existing users are prompted with a pop-up labeled “Updates to Consumer Terms and Policies.” The “Accept” button is prominent, while the toggle that permits data to be used for training is less visible and switched on by default.
- New users must choose their preference during signup.
Critics have raised concerns that users may quickly click “Accept” without realizing they are agreeing to share their data for AI training. This design mirrors patterns seen on other AI platforms, where privacy options are often buried or unclear.
Privacy and Regulatory Implications
Experts warn that the growing complexity of AI services makes meaningful consent difficult. The U.S. Federal Trade Commission has previously cautioned AI firms against making subtle or poorly disclosed changes to privacy policies. Whether regulators will take action remains unclear, particularly as the landscape continues to evolve rapidly.
What Should Claude Users Do?
- Review Anthropic’s updated consumer terms.
- Decide by September 28 whether you want your conversations used for AI training.
- Be aware that if you do not opt out, your data will be stored for up to five years and used to improve Anthropic’s models.
This shift is part of a broader trend in the AI industry where user data is increasingly valuable for model improvement. As always, staying informed and making conscious choices about your data is key.