Much talk about increasing privacy by putting a layer of anonymity routing between user & AI model provider. But what I truly don't get is this: commodity LLMs are already exceptionally good at spotting patterns & de-anonymizing text. Why should we assume they aren't capable of quickly relinking us? I'd wager that one-shot re-identification by a model is easily possible across single prompts from multiple accounts, even when those prompts are 'anonymously' routed to the API. Almost certainly true when users are working on the same bit of code, the same project, or from the same environment.

Upshot: it's a bit like using Tor browser thinking you're anonymous from websites, while keeping cookies across sessions. The amount of muckery you'd have to do to your context to be truly hardened against relinking by large models seems substantial to the point of massive inefficiency.
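To make the relinking worry concrete, here's a toy sketch (all names and prompts hypothetical): even a crude stylometric fingerprint, character-trigram frequency vectors compared by cosine similarity, can link prompts that share vocabulary, identifiers, or phrasing. A large model has access to vastly richer signals than this, so if this works at all, the real risk is worse.

```python
# Toy stylometric relinking: fingerprint each prompt with character-trigram
# counts, then link a new prompt to the most similar known account.
# This is a deliberately crude stand-in for what a large model could do.
from collections import Counter
from math import sqrt


def trigram_vector(text: str) -> Counter:
    """Character-trigram counts: a crude stylometric fingerprint."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def best_link(query: str, corpus: dict[str, str]) -> str:
    """Return the account whose known prompt most resembles the query."""
    qv = trigram_vector(query)
    return max(corpus, key=lambda k: cosine(qv, trigram_vector(corpus[k])))


# Two accounts; the query comes from the same user/project as acct_a,
# betrayed by shared code identifiers even though the account differs.
corpus = {
    "acct_a": "getting a NullPointerException in OrderService.processRefund "
              "when the refund amount is zero",
    "acct_b": "how do I center a div horizontally with flexbox?",
}
query = ("still seeing that NullPointerException in "
         "OrderService.processRefund, same stack trace as before")
print(best_link(query, corpus))
```

Running this links the query back to `acct_a` purely on surface overlap; no anonymity layer at the routing level touches that signal, which is the point.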