i like anthropic, but it does irk me that so many rationalist AI alignment types fall so hard for its tribal "we care about alignment" in-group signaling, which completely contradicts anthropic's actual role of accelerating the AI arms race more than anyone