Elon Musk just destroyed OpenAI’s credibility, and he can because he founded it.
Musk: “I don’t trust OpenAI. I started that company as a non-profit open source. The Open in OpenAI. I named the company.”
The mission was in the name: an open-source non-profit built as a counterweight to big tech controlling AI.
Musk: “And it is now extremely closed source and maximizing profit. So I don’t understand how you actually go from being an open source non-profit to a closed source for maximum profit organization.”
Not a pivot. A betrayal. You can’t be “Open” with proprietary code optimizing for profit.
Musk: “At various points, he’s claimed not to be getting rich, but he’s claimed many things that were false. And now apparently he’s going to get $10 billion of stock.”
Years of the same narrative: Sam Altman has no equity, pure altruism, a selfless steward. Then $10 billion in stock appears. The story changes. The incentives change.
Musk: “So I don’t trust Sam Altman, and I don’t think we want to have the most powerful AI in the world controlled by someone who is not trustworthy.”
The real threat isn’t the technology. It’s the leadership: the most powerful AI controlled by someone whose motivation shifted from safety to wealth accumulation.
When incentives flip from protecting humanity to capturing billions, safety can’t stay the priority. The economics won’t allow it.
How do you legally convert a non-profit that benefits everyone into a hundred-billion-dollar corporation that benefits specific people? That transformation reveals whose interests actually get served.
Musk built OpenAI specifically to prevent profit-driven control of AI. Watching it become exactly that explains his position completely.
The danger is the most advanced AI controlled by an entity that has already abandoned its founding principles for money. Principles that changed once for profit will change again.
The next conflict between safety and profit will arrive at higher stakes, with more powerful technology. The outcome won’t favor safety. It didn’t before. Why would it now, when the financial pressure is exponentially larger and the person deciding has already shown which priority wins?