
Jim Fan
NVIDIA Director of Robotics & Distinguished Scientist. Co-Lead of GEAR lab. Solving Physical AGI, one motor at a time. Stanford Ph.D. OpenAI's 1st intern.
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It leads to a cognitive dissonance for people outside the field: "so, robots can parkour & breakdance, but why can't they take care of my dog?" Trust me, my parents ask me about this more often than you'd think ...
The "Robot Moravec's paradox" also creates the illusion that physical AI capabilities are way more advanced than they truly are. I'm not singling out Unitree, as it applies widely to all recent acrobatic demos in the industry. Here's a simple test: if you set up a wall in front of the side-flipping robot, it will slam into it at full force and make a spectacle. Because it's just overfitting that single reference motion, without any awareness of the surroundings.
Here's why the paradox exists: it's much easier to train a "blind gymnast" than a robot that sees and manipulates. The former can be solved entirely in simulation and transferred zero-shot to the real world, while the latter demands extremely realistic rendering, contact physics, and messy real-world object dynamics - none of which can be simulated well.
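(For the curious: the standard recipe behind the "blind gymnast" is domain randomization - train under thousands of perturbed physics so that real hardware becomes just one more random draw. Here's a minimal toy sketch of the idea; the 1-D balance task, parameter ranges, and random-search "training" are purely illustrative, not our actual stack.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_physics():
    # Domain randomization: every episode draws new dynamics,
    # so the policy can't overfit one set of physics equations.
    return dict(
        gain=rng.uniform(0.8, 1.2),       # actuator strength
        friction=rng.uniform(0.05, 0.3),  # velocity damping
        noise=rng.uniform(0.0, 0.02),     # sensor noise
    )

def rollout(k, phys, steps=200):
    """Toy 1-D balance task: drive position x to 0 with a linear
    controller u = -k * x under the randomized dynamics."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        obs = x + rng.normal(0.0, phys["noise"])  # noisy sensor
        v += phys["gain"] * (-k * obs) - phys["friction"] * v
        x += 0.1 * v
        cost += x * x
    return -cost  # higher reward = tighter balance

# "Training": pick the gain that works across many randomized
# worlds, not the one that is best in a single fixed simulator.
gains = np.linspace(0.1, 2.0, 20)
scores = [np.mean([rollout(k, sample_physics()) for _ in range(50)])
          for k in gains]
print(f"robust gain: {gains[int(np.argmax(scores))]:.2f}")
# Deploy zero-shot: the real world is just one more random draw.
```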
Imagine training LLMs not on the internet, but on a purely hand-crafted text console game. Roboticists got lucky. We happen to live in a world where accelerated physics engines are so good that we can get away with impressive acrobatics using literally zero real data. But we haven't yet discovered the same cheat code for general dexterity.
Till then, we'll still get questioned by our confused parents.
My bar for AGI is far simpler: an AI cooking a nice dinner at anyone’s house for any cuisine. The Physical Turing Test is very likely harder than the Nobel Prize. Moravec’s paradox will continue to haunt us, looming larger and darker, for the decade to come.

Thomas Wolf · Jul 19 at 16:06
My bar for AGI is an AI winning a Nobel Prize for a new theory it originated.
I've been a bit quiet on X recently. The past year has been a transformational experience. Grok-4 and Kimi K2 are awesome, but the world of robotics is a wondrous wild west. It feels like NLP in 2018 when GPT-1 was published, along with BERT and a thousand other flowers that bloomed. No one knew which one would eventually become ChatGPT. Debates were heated. Entropy was sky high. Ideas were insanely fun.
I believe the GPT-1 of robotics is already somewhere on arXiv, but we don't know exactly which paper it is. Could be world models, RL, learning from human video, sim2real, real2sim, etc., or any combo of them. Debates are heated. Entropy is sky high. Ideas are insanely fun, instead of squeezing the last few % on AIME & GPQA.
The nature of robotics also greatly complicates the design space. Unlike the clean world of bits for LLMs (text strings), we roboticists have to deal with the messy world of atoms. After all, there's a lump of software-defined metal in the loop. LLM normies may find it hard to believe, but so far roboticists still can't agree on a benchmark! Different robots have different capability envelopes - some are better at acrobatics while others at object manipulation. Some are meant for industrial use while others are for household tasks. Cross-embodiment isn't just a research novelty, but an essential feature for a universal robot brain.
I've talked to dozens of C-suite leads from various robot companies, old and new. Some sell the whole body. Some sell body parts such as dexterous hands. Many others sell the shovels to manufacture new bodies, create simulations, or collect massive troves of data. The business idea space is as wild as research itself. It's a new gold rush, the likes of which we haven't seen since the 2022 ChatGPT wave.
The best time to enter is when non-consensus peaks. We're still at the start of a loss curve - there are strong signs of life, but we're far, far away from convergence. Every gradient step takes us into the unknown. But one thing I do know for sure - there's no AGI without touching, feeling, and being embodied in the messy world.
On a more personal note - running a research lab comes with a whole new level of responsibility. Giving updates directly to the CEO of a $4T company is, to put it mildly, both thrilling and all-consuming of my attention weights. Gone are the days when I could stay on top of, and dive deep into, every piece of AI news.
I’ll try to carve out time to share more of my journey.

The Physical Turing Test: your house is a complete mess after a Sunday hackathon. On Monday night, you come home to an immaculate living room and a candlelight dinner. And you couldn't tell whether a human or a machine had been there. Deceptively simple, insanely hard.
It is the next North Star of AI. The dream that keeps me in the lab at 12 am. The vision for the next computing platform that automates chunks of atoms instead of chunks of bits.
Thanks Sequoia for hosting me at AI Ascent! Below is my full talk on the first principles to solve general-purpose robotics: how we think about the data strategy and scaling laws. I assure you it will be 17 minutes you won't regret!
Some day in the next decade, we will have robots in every home, every hospital, and every factory, doing all the dull and dangerous jobs with superhuman dexterity. That day will be known as "Thursday". Not even Turing, in his wildest dreams, would have dared to dream up our lifetime.

signüll · Apr 21, 2025
we crossed the turing test & no one gave a shit. no parades. no front page headlines. just… a casual shrug. like “oh yeah, the machines are smart enough to fool us now. anyway, what’s for lunch?”
that silence tells you everything about the pace we’re moving at.
back in my cs classes, the turing test was treated like the final boss. now every breakthrough is another goddamn tuesday.
humanoid olympics in 2030 will be quite a spectacle

Jim Fan · Feb 5, 2025
We RL'ed humanoid robots to imitate Cristiano Ronaldo, LeBron James, and Kobe Bryant! These are neural nets running on real hardware at our GEAR lab. Most robot demos you see online are sped up. We actually *slow them down* so you can enjoy the fluid motions.
I'm excited to announce "ASAP", a "real2sim2real" model that masters extremely smooth and dynamic motions for humanoid whole body control.
We pretrain the robot in simulation first, but there is a notorious "sim2real" gap: it's very difficult for hand-engineered physics equations to match real world dynamics.
Our fix is simple: just deploy a pretrained policy on real hardware, collect data, and replay the motion in sim. The replay will obviously have many errors, but that gives a rich signal to compensate for the physics discrepancy. Use another neural net to learn the delta. Basically, we "patch up" a traditional physics engine, so that the robot can experience almost the real world at scale in GPUs.
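(If you want the gist in code, here's a minimal sketch of the delta-learning idea as described above - the network shapes, names, and training loop are illustrative, not the actual ASAP codebase.)

```python
import torch
import torch.nn as nn

class DeltaDynamics(nn.Module):
    """Residual net that patches the analytic simulator:
    next_state ≈ sim_step(state, action) + delta(state, action)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def fit_delta(delta, sim_step, real, epochs=200, lr=1e-3):
    """real = (s, a, s_next) tensors logged from hardware. Replaying
    (s, a) through the simulator and regressing the leftover error
    teaches the net exactly where the physics engine is wrong."""
    opt = torch.optim.Adam(delta.parameters(), lr=lr)
    s, a, s_next = real
    target = (s_next - sim_step(s, a)).detach()  # real minus sim
    for _ in range(epochs):
        loss = nn.functional.mse_loss(delta(s, a), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return delta

def hybrid_step(sim_step, delta, s, a):
    # The "patched" simulator the robot trains against at GPU scale:
    # classical engine + learned correction.
    return sim_step(s, a) + delta(s, a).detach()
```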
The future is hybrid simulation: combine the power of classical sim engines refined over decades and the uncanny ability of modern NNs to capture a messy world.
That a *second* paper dropped today with tons of RL flywheel secrets and *multimodal* o1-style reasoning was not on my bingo card. The papers from Kimi (another startup) and DeepSeek remarkably converged on similar findings:
> No need for complex tree search like MCTS. Just linearize the thought trace and do good old autoregressive prediction;
> No need for value functions that require another expensive copy of the model;
> No need for dense reward modeling. Rely as much as possible on the ground-truth end result (toy sketch at the end of this post).
Differences:
> DeepSeek takes the AlphaZero approach - purely bootstrapping through RL w/o human input, i.e. "cold start". Kimi takes the AlphaGo-Master approach: light SFT to warm up through prompt-engineered CoT traces.
> DeepSeek weights are MIT-licensed (thought leadership!); Kimi does not have a model release yet.
> Kimi shows strong multimodal performance (!) on benchmarks like MathVista, which requires visual understanding of geometry, IQ tests, etc.
> Kimi paper has a LOT more details on the system design: RL infrastructure, hybrid cluster, code sandbox, parallelism strategies; and learning details: long context, CoT compression, curriculum, sampling strategy, test case generation, etc.
Upbeat reads on a holiday!
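(Toy sketch of the shared recipe, as promised above - my illustration, not either lab's actual code. It assumes a HuggingFace-style causal LM and tokenizer, and an external `check_answer` verifier; the group-mean baseline stands in for the value function both papers drop.)

```python
import torch

def outcome_rl_step(model, tokenizer, prompt, reference,
                    check_answer, opt, k=8, max_new=256):
    """Sample k linear CoT traces autoregressively, reward only the
    final answer against ground truth (no tree search, no value net,
    no dense reward model), and reinforce whole traces.
    check_answer(text, reference) is an assumed verifier."""
    enc = tokenizer(prompt, return_tensors="pt")
    prompt_len = enc.input_ids.shape[1]
    traces = [model.generate(enc.input_ids, do_sample=True,
                             max_new_tokens=max_new) for _ in range(k)]
    rewards = torch.tensor([
        1.0 if check_answer(tokenizer.decode(t[0]), reference) else 0.0
        for t in traces])
    adv = rewards - rewards.mean()  # group mean as baseline: no
                                    # expensive second copy of the model
    loss = torch.zeros(())
    for t, a in zip(traces, adv):
        # Recompute logprobs with grad; score only the generated tokens.
        logp = torch.log_softmax(model(t).logits[:, :-1], dim=-1)
        tok_lp = logp.gather(-1, t[:, 1:, None]).squeeze(-1)
        loss = loss - a * tok_lp[:, prompt_len - 1:].sum()
    opt.zero_grad(); (loss / k).backward(); opt.step()
    return rewards.mean().item()  # fraction of correct traces
```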
