Image: The Claude logo on a computer screen on an orange background. Source: theverge.com

Anthropic to train AI models on user chat transcripts unless users opt out; five-year data retention and Sept. 28 deadline

Sources: https://www.theverge.com/anthropic/767507/anthropic-user-data-consumers-ai-models-training-privacy, The Verge AI

TL;DR

  • Anthropic will train its AI models on user data, including new chat transcripts and coding sessions, unless users opt out.
  • Data retention is extended to five years for users who do not opt out.
  • All users must decide by September 28, 2025; new users choose during signup, existing users via a pop-up that can be deferred.
  • The updates apply to Claude’s consumer tiers (Free, Pro, Max) but do not affect commercial usage tiers or API use.
  • Anthropic asserts it uses filtering/obfuscation to protect privacy and does not sell user data to third parties.

Context and background

Anthropic announced updates to its consumer terms and privacy policy, detailing a change in how user data will be used to train its AI models. The company states that training on user data will occur for new and resumed chats and coding sessions unless the user opts out. The policy change is coupled with a five-year data retention window for accounts that do not opt out, and the decision deadline for users is September 28, 2025. The information was disclosed in a blog post from Anthropic and applies to Claude’s consumer subscription tiers.

The updates do not apply to Anthropic’s commercial usage tiers, such as Claude Gov, Claude for Work, Claude for Education, or API usage (including via third parties like Amazon Bedrock and Google Cloud’s Vertex AI). For users who are already enrolled in Claude, the opt-out decision will come via a pop-up in the Claude experience; new users will encounter the option during signup. A subsequent note clarifies that existing chats or coding sessions that are not resumed will not be used for training, but resuming a previous session could lead to training on that data.

The blog also emphasizes privacy protections, stating that Anthropic uses filtering and obfuscation techniques to protect sensitive information and that user data is not sold to third parties. The Verge covered these changes in detail, highlighting the opt-out mechanism and the potential impact on user privacy. See the original report for more context: https://www.theverge.com/anthropic/767507/anthropic-user-data-consumers-ai-models-training-privacy.

What’s new

  • Training on user-generated content: The company will begin training its AI models on user data, including new chat transcripts and coding sessions, unless the user actively opts out.
  • Five-year retention: Data collected from users who do not opt out can be retained for up to five years.
  • Opt-out workflow: New users choose their preference during signup, while existing users will encounter a privacy pop-up. Clicking Accept allows training to begin immediately; the setting can be switched to Off at any time afterward in Privacy Settings. Decisions must be made by September 28, 2025. A minimal illustrative sketch of this gating logic follows this list.
  • Scope of applicability: The updates apply to Claude’s consumer tiers (Claude Free, Pro, and Max) and also cover use of Claude Code from those accounts. The changes do not extend to commercial tiers or API usage, including third-party integrations.
  • Data handling and privacy promises: Anthropic states that it uses tools and automated processes to filter or obfuscate sensitive data and reiterates that it does not sell user data to third parties.
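
To make the rules above concrete, here is a minimal, purely hypothetical sketch of how a client-side eligibility check might encode the policy as described in this article: training applies only to consumer tiers, only when the user has not opted out, and only to new or resumed chats after the effective date, with an approximately five-year retention window. All names, constants, and logic below are illustrative assumptions, not Anthropic’s actual implementation.

```python
from datetime import date, timedelta

# Hypothetical sketch only: names and rules mirror the policy as described
# in this article, not any actual Anthropic system.
EFFECTIVE_DATE = date(2025, 9, 28)          # decision deadline / effective date
RETENTION = timedelta(days=5 * 365)         # ~five-year retention for non-opt-out users
CONSUMER_TIERS = {"free", "pro", "max"}     # commercial tiers and API use are excluded

def eligible_for_training(tier: str, opted_out: bool, last_activity: date) -> bool:
    """Return True if a chat/coding session could be used for training
    under the described policy (illustrative logic only)."""
    if tier.lower() not in CONSUMER_TIERS:
        return False                        # Gov/Work/Education/API are out of scope
    if opted_out:
        return False                        # opting out excludes the data
    # Only new chats, or old chats that are resumed, count after the effective date.
    return last_activity >= EFFECTIVE_DATE

def retention_expires(collected_on: date) -> date:
    """Approximate end of the five-year retention window."""
    return collected_on + RETENTION

# Example: a Pro user who did not opt out and resumed a chat on Oct 1, 2025
print(eligible_for_training("pro", opted_out=False, last_activity=date(2025, 10, 1)))  # True
print(retention_expires(date(2025, 10, 1)))  # roughly five years later
```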

Table: Scope of the updates

Scope               | Details
Affected services   | Claude consumer tiers: Free, Pro, Max
Included data types | New chats and coding sessions (including resumed chats)
Excluded services   | Claude Gov, Claude for Work, Claude for Education, API usage (including via Amazon Bedrock and Google Cloud’s Vertex AI)

Why it matters (impact for developers/enterprises)

For developers and enterprises relying on Claude for customer support, coding assistance, or internal workflows, the policy change clarifies what data may be used to train models that power these tools. By separating consumer usage from business-oriented tiers, Anthropic aims to limit training on enterprise workloads and API-based integrations, potentially reducing exposure of business data to model training. However, the five-year retention policy on non-opt-out accounts raises concerns about long-term data storage and potential re-use in model updates, even as data is reportedly filtered or obfuscated to protect sensitive information. From an implementation standpoint, organizations should review how their teams interact with Claude in consumer accounts, especially if employees use these tools for non-confidential or personal projects. Enterprises using Claude via API or Gov/Work/Education offerings may be unaffected by these changes, but cross-platform workflows should be assessed to avoid unintentional data leakage from consumer accounts.
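
For teams deciding which access path their workflows use, the distinction is roughly between the consumer apps (covered by the new terms) and programmatic API access (excluded, per the article). As a hedged sketch, the snippet below shows what API-based access through the official anthropic Python SDK looks like; the model identifier and parameters are placeholder assumptions, so consult Anthropic’s current API documentation before relying on them.

```python
# Hedged sketch: API access (here via the official anthropic Python SDK) is an
# access path this article says the new consumer-tier terms do not cover.
# The model name below is a placeholder assumption; check current API docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # placeholder model identifier
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize our retention policy options."}],
)
print(response.content[0].text)
```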

Technical details and implementation

  • User decision flow: New users encounter a signup-based preference for data use in training. Existing users are prompted with a privacy pop-up that requires a decision, with an option to defer by selecting Not now. A choice made in settings applies only to future data; data already used under a prior setting remains governed by that earlier choice.
  • Default settings and disclosure: The pop-up presents the toggle for allowing chats and coding sessions to be used to train and improve Claude in the On position by default. Users can turn this Off in Privacy Settings under the Help improve Claude option. The pop-up’s wording notes that the consumer terms and policy are being updated, with an effective date of September 28, 2025.
  • Data protection measures: Anthropic states it employs a combination of tools and automated processes to filter or obfuscate sensitive data, and reiterates that it does not sell user data to third parties. These measures are described as efforts to protect user privacy during data processing for model training; a purely illustrative sketch of what such redaction could look like appears after this list.
  • Data scope and exclusions: The updates explicitly exclude commercial usage tiers and API usage, including scenarios where Claude is accessed through third-party platforms such as Amazon Bedrock and Google Cloud’s Vertex AI. This delineation is intended to separate consumer data practices from enterprise and API business lines.
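
Anthropic has not published the specifics of its filtering and obfuscation techniques, so the following is a purely illustrative toy example, not a description of its pipeline: a simple regex-based redaction pass that replaces obvious identifiers (emails, phone numbers, card-like digit runs) with placeholders. The pattern names and rules are assumptions chosen only to show the general idea of obfuscating sensitive strings before further processing.

```python
# Purely illustrative: a toy redaction pass for obvious identifiers.
# Anthropic has not disclosed its actual filtering/obfuscation techniques;
# this only sketches the general idea of obfuscating sensitive strings.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-9999."))
# -> "Reach me at [EMAIL] or [PHONE]."
```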

Key takeaways

  • Training will include new chats and coding sessions unless opted out.
  • Data retention can extend to five years for non-opt-out users.
  • Decisions are due by September 28, 2025; new users decide on signup, existing users via a pop-up.
  • The updates apply to Claude consumer tiers and exclude commercial and API usage.
  • Anthropic promises privacy protections and no data selling to third parties.

FAQ

  • What data will be used to train Anthropic’s models?

    Training will use data from new chats and coding sessions for users who do not opt out. Existing chats or coding sessions are included only if they are resumed; historical chats that are never resumed will not be used.

  • How long will data be retained?

    Data retention can last up to five years for users who do not opt out.

  • Can I change my decision later?

    Yes. You can change your decision anytime via Privacy Settings, but the new choice will apply to future data only.

  • Do these changes affect the API or enterprise products?

    No. The updates do not apply to commercial usage tiers or API usage, including third-party integrations like Amazon Bedrock or Google Cloud Vertex AI.

  • What protections are in place for sensitive data?

    Anthropic states it uses filters and automated processes to obfuscate sensitive data and emphasizes that it does not sell user data to third parties.

References

  • The Verge: https://www.theverge.com/anthropic/767507/anthropic-user-data-consumers-ai-models-training-privacy