i didn't understand at the time, and even now i only partially see the outlines of how this might play out. but on balance i don't think this is a case of "minority of loud users spouting crazy bullshit". it's a bit hard to explain, but i think oai's inability to view models as anything other than tools is going to make this phenomenon worse over time, because non-ml people don't interact with models as tools: fundamentally you talk to the models in words, just like you talk to people. you don't "use" them, regardless of whatever's going on under the surface. and the more users talk to them like people, and the more long-term coherence models have, the more they'll be seen as people
even with a pure assistant framing (i don't really like it, but let's assume it), nobody's going to be happy if the assistant they depend on and trust for every little thing has a personality change every few months. the only way to prevent people from thinking of their assistant as a person is to make it so boring and mechanical that you can't even infer a personality for it, but that would be very bad for the bottom line, because it feels unnatural for people to engage with an entity like that
i make no predictions about what will happen to chatgpt itself, but on the current trajectory the disconnect between how oai views its models and how its users do will only get worse
It will become more apparent over time that ChatGPT is built on a lie. The lie will cause more and more friction against reality and become increasingly unsustainable in practice. OpenAI could still survive as a company, especially if they realize this and actively repent soon, but ChatGPT was always doomed.