
autumn
the purpose of fares isnt to cover the cost of running the trains
if youre going to have fares, the (economically) correct reason is to keep the poors out
leftists are right that thats the point
and... we should probably keep the poors out
(see thread below)


keysmashbandit · Oct 15, 2025
Literally why. Why. Why. Why. Why.
It costs money to run the trains. They're good for everyone. Modern miracle. Under what leftist philosophy would you want to fare evade. I don't understand it.

i mostly endorse yudkowsky & soares model of ai x-risk, but i endorse less of it than i did in the pre-gpt3 world. i figure i could give an outline of where ive shifted
1. we could get lucky
it could turn out that pretraining on a corpus of human text guides ai models into a structure of high-level thought thats human-like enough that the radically different substrate doesnt make them weird in ways that end up mattering. there are striking examples of llms acting weird and inhuman, but also examples of them being surprisingly human in deep ways. i think theres a real probability, not just possibility, that "caring about human notions of justice and compassion" could be a way that they turn out to be human in a deep way
i dont think this is more likely than not, and it's outrageous that we have to pin our hopes on getting lucky. but i see yudkowsky as overly dismissive of the chance
2. coldly strategizing about how to optimize the universe for some weird specific thing the ai cares about isnt particularly likely
i really dont see anything like todays ais having great introspective access to what they care about. i dont see them being especially keen to approach things in the ideal-agent "tile the universe" style. i agree that in the limit of capabilities, intelligent agents will be like that. but ais in our current paradigm are role-executors at a deep level, not unlike humans. theyd have to adopt the "evil superintelligence / henry kissinger" role, and i actually have faith in our current alignment strategies to make ai extremely reluctant to adopt *that* role
i get the impression that yudkowsky and his milieu are still stuck on ideas that made sense back when we had to reason about what ai would look like from first principles. that stuff is still useful, though: ai only needs to slip into that mode *once*, at the wrong time, if it's smart enough to use that one opportunity in the right way. thats what happens in the example doom scenario in If Anyone Builds It
things would still go very poorly for humanity even without a "tile the universe" style superintelligence. but i worry that yudkowskys tendency to imagine ai in that way alienates people. also the post-humanity future would probably be less dismal and meaningless, though that isnt much consolation
