
swyx 🇸🇬
this might be my single biggest correct call i ever make in my lifetime lol


The Pragmatic Engineer · Sep 3, 01:01
State of the software engineering job market in 2025: Big Tech has started to hire more software engineers.
Also: tenure at Big Tech has dramatically increased since the 2023 layoffs (surprising!)
A lot more details in today's deepdive:

## Human-level Efficiency is necessary for AGI
ending a great 🇸🇬 trip where I got to hang out with @agihippo + got to see @jeffdean @quocleix @denny_zhou et al give "State of GDM" 2025 updates*
by far the #1 recurring theme in our convos is learning efficiency. inference time compute is all the rage now but #efficiencyiscoming next, and i'm usually 3-6 months ahead of the majority when this kind of frontier lab consensus reaches thru my thick skull.
people are often surprised, then generally agree, when i draw a through line from self supervised learning to RLAIF to RLVR: each breakthru was just a step change in performance gained per datapoint. the next exponent jump in this efficiency is obviously what Quoc confirmed Ilya is also working on (per his talk referenced below) + the "mountain" Ilya identified at NeurIPS.
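(quick toy sketch of what i mean by "performance gained per datapoint" — metric only, the curves and numbers below are made up for illustration:)

```python
# Toy sample-efficiency metric: marginal score gained per datapoint,
# read off two checkpoints of a learning curve. Numbers are invented.

def gain_per_datapoint(curve):
    """curve: sorted list of (n_datapoints, score) checkpoints."""
    (n0, s0), (n1, s1) = curve[0], curve[-1]
    return (s1 - s0) / (n1 - n0)

# Each paradigm shift (SSL -> RLAIF -> RLVR) reads as a step change in this slope.
supervised_curve = [(1_000, 0.50), (100_000, 0.80)]  # ~3.0e-6 score/datapoint
rlvr_like_curve = [(1_000, 0.50), (10_000, 0.80)]    # ~3.3e-5 score/datapoint

print(gain_per_datapoint(supervised_curve), gain_per_datapoint(rlvr_like_curve))
```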
I'd bet some money that the solution to this is a new non-transformers block/abstraction that isn't "just multiagents" and is more akin to realtime testing and resolution of potential world model hypotheses, kind of like how you solve a sudoku puzzle or play Cluedo. this is the only systematic way i know to boil down the human learning process, where we can few-shot learn concepts with 10-20x fewer datapoints per concept than current SOTA.
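(a minimal sketch of the Cluedo/sudoku-style elimination idea — my own toy framing, not anything the labs have shown. each labeled datapoint kills every hypothesis it contradicts, so the candidate space shrinks multiplicatively and you converge in a few shots:)

```python
import itertools

# Toy "Cluedo-style" learner over a tiny concept space: a hypothesis is a
# conjunction of feature constraints; each labeled example eliminates every
# hypothesis it contradicts, so the version space shrinks per datapoint.

FEATURES = ["red", "round", "small"]

def all_hypotheses():
    # Each feature is either unconstrained (None) or required True/False.
    for mask in itertools.product([None, True, False], repeat=len(FEATURES)):
        yield {f: v for f, v in zip(FEATURES, mask) if v is not None}

def consistent(h, example, label):
    predicted = all(example[f] == v for f, v in h.items())
    return predicted == label

def learn(examples):
    space = list(all_hypotheses())
    for example, label in examples:
        space = [h for h in space if consistent(h, example, label)]
        print(f"{example} -> {label}: {len(space)} hypotheses left")
    return space

# Target concept "red and round": two examples cut 27 candidates down to 4.
learn([
    ({"red": True, "round": True, "small": False}, True),
    ({"red": True, "round": False, "small": False}, False),
])
```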
If i were a researcher I'd start here now... and it'd be very ironic/cool if this was @ylecun's ultimate victory to have.
*+ did some meetings w local sovereign wealth + startups!
