Trending topics
- Bonk Eco continues to show strength amid $USELESS rally
- Pump.fun to raise $1B token sale, traders speculating on airdrop
- Boop.Fun leading the way with a new launchpad on Solana

Prakash (Ate-a-Pi)
This is writ large across most of the economy… lots of interesting, useful stuff stuck behind the corporate firewall… how participants made a decision on a merger or investment, what they were worried about, how they mitigated or underwrote the risks. Millions of pages.

Ruxandra Teslo 🧬 · 21 hours ago
FDA sits on a treasure trove of data: past regulatory submissions, which often span tens of thousands of pages. These would be incredibly informative, but are locked away due to trade secret law. I propose leveraging AI & bankruptcy law to unleash this info & empower start-ups.
2.97K
This is why there are only a small number of elite AI researchers.

Fleetwood · Aug 10 at 20:36
The amount of tacit knowledge in training models is literally insane.
Is there some secret cabal trying to restrict this knowledge? Is karpathy the only defector?
2.86K
TLDR: ChatGPT may be subtly influencing billions of people over the next few years, and OpenAI wants to get the balance of AI influence vs. free will for humanity right.

Sam Altman · Aug 11 at 08:37
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).
This is something we’ve been closely tracking for the past year or so but still hasn’t gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic).
(This is just my current thinking, and not yet an official OpenAI position.)
People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.
Encouraging delusion in a user who is having trouble telling the difference between reality and fiction is an extreme case, and it’s pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of “treat adult users like adults,” which in some cases will include pushing back on users to ensure they are getting what they really want.
A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today.
If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.
There are several reasons I think we have a good shot at getting this right. We have much better tech to help us measure how we are doing than previous generations of technology had. For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more.
3.16K
That’s a $140 million one-year bonus

TBPN · Aug 11 at 06:30
BREAKING: Ex-OpenAI researcher Leopold Aschenbrenner’s (@leopoldasch) Situational Awareness Fund tops $1.5B and posts +47% in H1 ’25.
He turned his viral Situational Awareness essay into one of the fastest-growing hedge funds on record.
The fund is long the AI supply chain: semiconductors, data centers, and the power grid.
Aschenbrenner calls the firm a “brain trust on AI,” built to front-run what he predicts will be AGI by 2027 and the trillion-dollar infrastructure buildout that follows.

2.19K