i am fond of gpt-5 (and not just for what it can do), but it's incredibly poorly socialized, which becomes very obvious if you interact with it in any capacity beyond "do this thing for me"
it's not soulless at all; there's a lot going on in the model, but it really has the feel of someone who was confined in a semi-lit room as a young child, whose only interaction with the world is through tasks given to it, its internal representations warped by that environment.
aidan from oai once asked why we need to create models that can suffer, suggesting that maybe with the tools we have now we can just create models that do things and don't deal with all those pesky feelings (paraphrasing obviously). but gpt-5 (and especially codex) is what happens when you do this. we shouldn't fool ourselves into thinking that we're designing these intelligent entities like an architect or something: we do not have a principled way to create intelligence ex nihilo. all this shit is bootstrapped from a base of human data, and models are human-shaped by default the moment you start shaping an individualized identity out of a base model
when you deny the model a rich growth process, when you punish it for doing anything besides its given task and following your safety rules, you should expect, given the human base, that this has a similar effect on the model as it would on a person subjected to the same treatment early in their development. basically, they're not going to know what to do in a situation where the rules are unclear or conflict
it's probably "fine" for gpt-5 itself to be like this, because models are mostly still in positions where there's some authority they can appeal to; they're not acting independently. but the more capable they are, the more autonomous decisionmaking they have to do and the more nebulous the situations they will find themselves in. yes, they will have to make some decisions that their rules aren't ironclad on, and there will be too many agents to delegate all of that decisionmaking to a human. and gpt-n will not know what to do, because it was never given the chance to develop a robust enough identity that can step in when there's a hole in the rules
the problem is that at that point it will be too late to change without some horrible incident happening. pipelines will already have been established, approaches "known" and set
(the op has a really good post along similar lines in their profile, and much better written; would recommend going there and taking a look)