
The U.S. government designated Anthropic a "supply-chain risk" and terminated its use of Claude, even as reports indicate the military was simultaneously using Claude to inform its strikes on Iran. This suggests a troubling lack of trust in, or control over, AI deployed in critical national-security operations. How can an AI be deemed too risky for federal use while actively being leveraged in warfare?
This summary was generated by AI