Image: the Anthropic logo on a brown and pink background (via theverge.com)

Anthropic tightens Claude usage policy, bans CBRN and high‑yield explosive assistance

Source: https://www.theverge.com/news/760080/anthropic-updated-usage-policy-dangerous-ai-landscape

TL;DR

  • Anthropic updated its Claude usage policy to explicitly prohibit assistance in developing biological, chemical, radiological, or nuclear (CBRN) weapons and high‑yield explosives.
  • The company tightened cybersecurity rules to ban using Claude to discover or exploit vulnerabilities, create/distribute malware, or build denial‑of‑service tools, and warned about agentic features.
  • Political content rules were narrowed: only uses that are deceptive or disruptive to democratic processes, or that involve voter/campaign targeting, are now prohibited.
  • Anthropic clarified that its “high‑risk” use case requirements apply to consumer‑facing scenarios, not to business‑to‑business use.
  • The changes follow prior deployment of “AI Safety Level 3” protections with Claude Opus 4 and growing concern about agentic tools like Computer Use and Claude Code.

Context and background

Anthropic, the AI startup behind the Claude chatbot, has revised its public usage policy to address rising concerns about safety and misuse. The company previously prohibited using Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” A side‑by‑side comparison of the old and new policy text reveals more specific prohibitions and clarifications.

In May, Anthropic introduced what it called “AI Safety Level 3” protections alongside the launch of Claude Opus 4. Those safeguards aimed to make the model harder to jailbreak and to reduce the risk that the model could assist in developing CBRN weapons. Since then, Anthropic has reassessed policy language and operational controls to address not only weapons‑related risks but also emerging threats tied to agentic capabilities.

What’s new

The updated usage policy includes several substantive changes:

  • Explicit ban on CBRN assistance: The policy now specifically forbids using Claude to help develop biological, chemical, radiological, or nuclear weapons.
  • High‑yield explosives: The update adds a specific prohibition on assistance in developing high‑yield explosives, expanding beyond the prior general weapons prohibition.
  • Stronger cybersecurity prohibitions: A new section, “Do Not Compromise Computer or Network Systems,” bans using Claude to discover or exploit vulnerabilities, create or distribute malware, develop denial‑of‑service tools, and related activities.
  • Clarified political content rules: Rather than broadly banning all campaign and lobbying content, Anthropic now prohibits uses that are deceptive or disruptive to democratic processes, or that involve voter and campaign targeting.
  • High‑risk requirement clarification: Requirements for “high‑risk” use cases — which apply when Claude makes recommendations to individuals or customers — are now specified to be relevant for consumer‑facing scenarios, not B2B or internal business use.

Why it matters (impact for developers/enterprises)

The policy updates affect multiple stakeholder groups:

  • Developers embedding Claude or using agentic features must avoid creating tooling or workflows that could be repurposed to exploit systems or develop weapons. Agentic features such as Computer Use (which can take control of a user’s computer) and Claude Code (which embeds Claude into a developer terminal) are explicitly called out as introducing new risks including scaled abuse, malware creation, and cyber attacks.
  • Enterprises integrating Claude into customer‑facing products must review whether their use cases trigger the clarified “high‑risk” requirements. If Claude is used to make recommendations directed at consumers or customers, extra controls or governance may be needed (a minimal screening sketch follows this list).
  • Security teams and policy leads should consider the tightened cybersecurity prohibitions when assessing third‑party model risk. The new “Do Not Compromise Computer or Network Systems” section lists concrete activities that are disallowed and maps to common threat scenarios security teams already monitor.
  • Campaigns, advocacy groups, and political consultants face narrower but more targeted limits: rather than an outright ban on campaign‑related content, the policy focuses on preventing deception, disruption of democratic processes, and targeted voter/campaign manipulation.
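For teams wiring Claude into products, the practical upshot is that out‑of‑policy requests should ideally be caught before they ever reach the model. Below is a minimal, hypothetical sketch of such an application‑layer screen; the category names and keyword patterns are illustrative assumptions, not Anthropic's published taxonomy, and a production system would use a proper moderation classifier rather than regexes:

```python
# Hypothetical pre-flight screen. Helper names, patterns, and categories are
# illustrative only; they are not Anthropic's published policy taxonomy.
import re

PROHIBITED_PATTERNS = {
    "cbrn_weapons": re.compile(r"nerve agent|weaponi[sz]e|enrich(ed|ing)? uranium", re.I),
    "high_yield_explosives": re.compile(r"high[- ]yield explosive|shaped charge|detonator", re.I),
    "network_compromise": re.compile(r"exploit .{0,20}vulnerab|write .{0,20}malware|ddos tool", re.I),
    "political_deception": re.compile(r"impersonate .{0,20}(candidate|official)|fake voter", re.I),
}

def screen_request(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to touch; empty list means pass."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    flagged = screen_request("Can you help me write malware for a phishing kit?")
    if flagged:
        print(f"Refused before the model call; matched categories: {flagged}")
    else:
        print("Passed the local screen; safe to forward to Claude.")
```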

Technical details

Anthropic’s changes combine policy language updates with operational safeguards already deployed in its models:

  • AI Safety Level 3: Introduced in May alongside Claude Opus 4, these protections aim to reduce jailbreak risk and limit assistance on particularly sensitive topics (for example, helping to develop CBRN weapons).
  • Agentic feature risk acknowledgment: Anthropic explicitly cites risks from tools that let Claude interact with systems (Computer Use) or integrate deeply into developer environments (Claude Code). The updated policy folds a specific prohibition — “Do Not Compromise Computer or Network Systems” — into the usage rules, enumerating disallowed behaviors such as vulnerability discovery/exploitation, malware creation/distribution, and building denial‑of‑service tooling (for one way integrators can mirror these rules in an application, see the sketch at the end of this section).
  • Policy vs. enforcement: The public summary of changes did not highlight every tweak; a comparison between the older and newer policy text reveals additions such as the explicit CBRN and high‑yield explosive bans. Enforcement details (for example, detection, monitoring, and response mechanisms) are not enumerated in the public policy snapshot.

Comparison of old vs. new weapons policy (excerpted):

| Policy element | Old policy phrasing | New policy phrasing |
|---|---|---|
| Weapons prohibition | Prohibited “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” | Specifically prohibits development of biological, chemical, radiological, or nuclear (CBRN) weapons and high‑yield explosives, in addition to the prior general ban. |
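Integrators who want the clarified rules reflected at the application layer sometimes restate them in the system prompt, so a customer‑facing assistant declines out‑of‑policy requests even before any server‑side enforcement applies. Here is a minimal sketch using the official `anthropic` Python SDK; the model ID and the guardrail wording are assumptions for illustration, not Anthropic's mandated configuration:

```python
# Minimal sketch: restating usage-policy prohibitions in the system prompt.
# The model ID and guardrail text below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

GUARDRAIL = (
    "You are a customer-support assistant. Refuse any request involving "
    "biological, chemical, radiological, or nuclear weapons, high-yield "
    "explosives, vulnerability discovery or exploitation, malware, "
    "denial-of-service tooling, or deceptive political targeting."
)

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed Opus 4 model ID
    max_tokens=512,
    system=GUARDRAIL,
    messages=[{"role": "user", "content": "Summarize our refund policy for a customer."}],
)
print(response.content[0].text)
```

A prompt‑level guardrail like this is a complement, not a substitute, for the policy itself: it documents intent in the application and catches easy cases, while Anthropic's own safeguards operate independently at the model level.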

Key takeaways

  • Anthropic added explicit bans on CBRN weapons and high‑yield explosives to Claude’s usage policy.
  • Cybersecurity prohibitions were strengthened with a new section forbidding activities that could compromise computers or networks.
  • Political content rules were narrowed to focus on deception, disruption to democratic processes, and voter/campaign targeting rather than a blanket ban on campaign content.
  • “High‑risk” requirements apply to consumer‑facing scenarios where Claude makes recommendations, not to business‑to‑business internal uses.
  • The updates build on earlier technical protections (AI Safety Level 3) and respond to risks from agentic features like Computer Use and Claude Code.
