This is a HISTORIC moment in AI. The US Pentagon sought a $200M deal to use Anthropic’s Claude with zero restrictions, including mass surveillance of U.S. citizens and fully autonomous weapons. Anthropic said “NO,” arguing those uses cross hard red lines on safety, ethics, and reliability. CEO Dario Amodei said: “We cannot in good conscience accede.” The refusal triggered an immediate federal ban on all Anthropic tech (a 6-month DoD phase-out), plus a “supply chain risk” label, a designation usually reserved for adversarial foreign firms. It’s AI ethics vs. national security priorities.