
Sam Lehman
Investor @SymbolicVC & @0xbeaconcom | Co-Founder of @0xmitharvard | @MIT Alum
TL has been all RL environments for the last couple weeks. @willccbb and @PrimeIntellect are doing incredible work building a platform for open environments (in beta access right now). I've written and talked a bit about this before but just reiterating that open source collaboration on environments is going to be a huge unlock for generating SOTA open-source reasoning models. Godspeed to the PI team🫡


will brownAug 24, 15:40
i'll confess i do have a very specific mission in mind with this project. the semi-vague private beta rollout is part of it. the set of tasks we're sourcing is part of it. the GPU bounties are part of it. the shitposts are part of it. the podcasts are part of it. mindshare is crucial here. let me explain.
currently, a lot of the discussion around RL environments is focused on this new wave of startups whose business model is building and selling environments to a very small number of big labs on an exclusive basis. mechanize is the loudest, but there's a number of them. instead of spending on instruction-tuning samples and annotations, labs are eager to buy private environments as their next big consumable resource for model training.
this phenomenon is both a serious risk to the prospect of open-source models remaining competitive, as well as a major opportunity to tip the scales if we can shift the center of gravity. if good environments are all expensive and hidden, open-source models will fall even further behind. this is essentially what's happened with pretraining data. but if a sufficiently robust ecosystem of open-source tooling for environments and training can emerge, then the open-source option can also be the state-of-the-art. this is more-or-less what's happened with pytorch.
tipping the scales here is my goal. our goal. i joined prime intellect because everyone was insanely talented, was goddamn serious about the mission of open-source AGI for everyone and wasn't afraid to say it, and because the team had a singular structural advantage that meant we could actually take some real swings. we sell compute. we build infra to improve what you can do with that compute. we do research on how to make that compute interoperate in new ways. we're training bigger and better models. we have the right incentives to do the hard, necessary work. these pieces are all connected.
we can't do it alone. no one can. it'll take startups and enterprises and students and professors around the world. open research currently does not have the tools to study the questions that big labs have deemed most crucial to future progress. we have to find a way to build those tools. we're trying to make that easier. we all have to get better at working together, at not reinventing the wheel, at assembling individual pieces into bigger puzzles. let's take what we've collectively done so far, clean it up, make it work together, bring more people into the tent, and start playing more positive-sum games. if we can't find better ways to work together, we're heading towards an AI future where we collectively just *do not know what these models even are*, because the curtain is never lifted, and everything we can actually see is just a toy.
there is a different type of company you could build in this space; one which still lets you sell to the big labs, but not exclusively; one which still lets you have your trade secret moats and print sweet ARR, but doesn't make us collectively less informed about the future we're building.
browserbase. cursor. exa. modal. morph. and countless others. let's do more of these. you can build a great company by making powerful tools and harnesses for agents which reflect the high-value tasks people want models to actually do. have elements of it which are open to try freely, and elements which are hosted behind an API. charge by usage with some premium enterprise features. build the best LLM-shaped excel clone, or figma clone, or turbotax clone. change it just enough to avoid a lawsuit, and then let private customers see the more lawsuit-robust version. enjoy some healthy competition in the arena, and find ways to partner where it counts. find your angle and be so good that you can sell to everyone, whether for RL or for actual usage. hit critical mass and be so affordable that it's not worth it for anyone to try and rebuild what you've already made.
this is the timeline i hope we end up in. it's a world where the big labs can all still do great, and will likely offer the easiest ways to spend a bit more to get improved general performance. but it's also one where open-source models aren't far behind, and everyone who cares enough can basically see what's going on and understand how the models we use are actually trained. if you're thinking about starting or joining a company focused on RL environments, i urge you to think about which timeline you're implicitly betting on, and reflect on how you feel about that.
Huge congratulations to one of my favorite founders, @0xaddi, on this outcome! @thunderheadxyz has been steadily building some of the best products in liquid staking for years and this acquisition is a testament to all of that hard work. Cheers to the whole team🥂

ThunderheadAug 19, 23:01
Excited to share some big news today!
@stakedhype has been acquired by @ValantisLabs.
Our existing partnership around STEX has been incredible - there is not a better team to work with!
We can't wait to see their vertical integration vision come to life. ⚡
Sam Lehman reposted
Super excited that we have been acquired by Valantis!
When I found Hyperliquid in April 2023, I knew it was something special. Very grateful to have been part of the ecosystem since the early days and watch it grow from 3m to 3b in TVL.
HL has created many ways for LSTs to flourish. This acquisition is a natural next step: Valantis is well poised to capitalize on them all at once.
The past 4.5 years have been crazy. Many different people, products, and learnings. Thankful for everything that has transpired. Crypto and the various ecosystems we've been a part of have shaped me and given me many of the things I have today.
I'm excited for what's next. There's so much happening in the world right now :-)
Sam Lehman reposted
Grateful to share that I’m joining @psdnai as a founding member and VP of Strategy & Ops.
We’re announcing a $15M seed round led by @a16zcrypto and incubation by @StoryProtocol to build the data layer for AI, designed for the real world.
There are 3 competitive races in AI: models, compute, and data.
Most model architectures are open-sourced and rapidly replicated, shrinking their competitive advantage. The half-life of innovation at the model layer is getting shorter and shorter.
Compute is effectively an oligopoly: GPU access is controlled by a few incumbents like Nvidia, making scale a function of capital.
The data layer is wide open, and it's the most valuable piece of the AI stack that has yet to be solved.
My path here has followed a consistent throughline: how emerging technologies like blockchain and AI reshape coordination and value creation.
At Harvard, I helped launch and co-led the Crypto Lab with @skominers, researching how blockchains and marketplaces can reshape industries.
At @StoryProtocol, I worked as Head of Special Projects, mostly focused on the intersection of AI and IP. This included hosting conversations with leading AI figures, writing research deep dives on AI x Crypto (H/T to @svenwelly for the collaboration), and AI incubations.
Alongside, I’ve spent the last few years writing, advising, and helping launch ventures at the frontier of AI, crypto, and digital IP.
In the last few months, I've had the pleasure to work with @SPChinchali and @sarickshah to explore ideas in AI.
We followed a very 0 to 1 approach, speaking with a few dozen leading AI companies to understand where they were bottlenecked.
Over and over again, we heard they weren't bottlenecked at the model architecture or compute layer, but that data was running dry from the well of the Internet. What’s left no longer offers a competitive advantage because everyone has access to it.
What they needed instead was long tail / hard-to-get data that was ideally created for their use case.
Data like people doing common chores in first person, or people reading transcripts in dialects that weren't readily available. More importantly, they want the data to be IP-cleared so that they can legitimately commercialize whatever they build downstream.
AI’s data layer is a coordination game: how do we match supply and demand such that everyone is happy?
Poseidon is the most concrete realization of these needs that connects the dots:
→ Data is IP
→ IP needs infrastructure (cue @StoryProtocol)
→ Infrastructure needs to work for, not against, AI (cue @psdnai)
Poseidon aims to:
(1) create a data layer that coordinates the supply and demand for data
(2) enshrine the rights to that data on Story L1 as programmable IP so AI systems can legitimately use it
Poseidon is only possible on Story's IP blockchain.
We are excited to go on this journey and build at the intersection of two of the most important technologies of our lifetimes.
Thanks to all who supported, more to come soon!
Democratizing the creation of new gyms/environments, and not just rollouts, was something that really excited me about distributed RL. Really cool to see @gensynai release this!

gensynJun 26, 2025
1/
Introducing RL Swarm’s new backend: GenRL.
A modular reinforcement learning library built for distributed, fault-tolerant training - now powering RL Swarm from the ground up. 🧵