🦾 Great milestone for open-source robotics: pi0 & pi0.5 by @physical_int are now on @huggingface, fully ported to PyTorch in @LeRobotHF and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune & deploy on their robots!
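For a sense of what "experiment with & deploy" looks like, here is a minimal sketch of pulling the PyTorch port from the Hub with LeRobot and querying it for an action. The import path, the "lerobot/pi0" checkpoint id, the observation keys, and the tensor shapes are assumptions for illustration; check the LeRobot docs and the model card for the exact names in the release.

```python
# Minimal sketch: load the PyTorch port of pi0 via LeRobot and query one action.
# The import path, checkpoint id and observation keys below are assumptions --
# consult the LeRobot documentation / model card for the exact release names.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy  # assumed import path

policy = PI0Policy.from_pretrained("lerobot/pi0")  # assumed checkpoint id on the Hub
policy.eval()

# Dummy observation batch; a real deployment feeds camera frames and robot state.
batch = {
    "observation.images.top": torch.zeros(1, 3, 224, 224),  # assumed camera key / resolution
    "observation.state": torch.zeros(1, 14),                # assumed state dimension
    "task": ["pick up the pillow"],                          # language instruction
}

with torch.no_grad():
    action = policy.select_action(batch)
print(action.shape)
```

The same checkpoint can then be fine-tuned on your own demonstrations with LeRobot's training tooling rather than used zero-shot.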
As described by Physical Intelligence, π₀.₅ is a Vision-Language-Action model that represents a significant evolution from π₀, aimed at a central challenge in robotics: open-world generalization.
While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
Generalization must occur at multiple levels:
- Physical Level: Understanding how to pick up a spoon (by the handle) or a plate (by the edge), even for unseen objects in cluttered environments
- Semantic Level: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
- Environmental Level: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal Web Data: Image captioning, visual question answering, object detection
- Verbal Instructions: Humans coaching robots through complex tasks step-by-step
- Subtask Commands: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-Embodiment Robot Data: Data from various robot platforms with different capabilities
- Multi-Environment Data: Static robots deployed across many different homes
- Mobile Manipulation Data: ~400 hours of mobile robot demonstrations
This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
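To make the co-training idea concrete, a common way to implement this kind of mixture (purely an illustrative sketch, not Physical Intelligence's actual pipeline, and the weights below are hypothetical) is to draw each training example from a weighted blend of the data sources, so every source contributes at a fixed proportion per batch.

```python
# Illustrative sketch of heterogeneous co-training via weighted mixture sampling.
# The source names mirror the list above; the weights are made up for illustration.
import random

mixture = {
    "web_multimodal": 0.3,
    "verbal_instructions": 0.1,
    "subtask_commands": 0.2,
    "cross_embodiment_robot": 0.2,
    "multi_environment": 0.1,
    "mobile_manipulation": 0.1,
}

def sample_source(mixture: dict[str, float]) -> str:
    """Pick the data source the next training example is drawn from."""
    sources, weights = zip(*mixture.items())
    return random.choices(sources, weights=weights, k=1)[0]

# Each step, draw a source, then draw an example from that source's dataset.
counts = {name: 0 for name in mixture}
for _ in range(1000):
    counts[sample_source(mixture)] += 1
print(counts)  # roughly proportional to the mixture weights
```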
Huge thanks to the @physical_int team & contributors
Model:
LeRobot:
