
Chamath Palihapitiya
God is in the details.
.@JDVance + the Besties!!

The All-In Podcast · 19 hours ago
Winning the AI Race: Vice President JD Vance
@JDVance sits down with the Besties!
(0:00) The Besties welcome JD Vance
(1:24) Immigration
(6:34) AI policy
(9:27) Relationship with China: Technology competition, striking a balance, trade
(12:42) Diplomatic relations when AI is at-scale, operating in a multipolar world
(16:41) AI’s impact on jobs
(22:02) Should America facilitate “National Champions” in strategic industries?
(24:09) Working at the White House with Sacks
@VP
105.87K
An interesting footnote in the AI wars will be how The Bitter Lesson applies to the human capital working on AI.
On one side is the current hiring spree (Google, Microsoft, Meta) and the kinds of hires being made right now (PhDs, human experts); on the other side is the xAI approach (compute and synthetic approaches).
Pay one person $1B OR buy $1B more compute.
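The "$1B person vs. $1B compute" framing above reduces to simple division. A minimal back-of-envelope sketch, where the GPU price is an illustrative assumption and not a figure from the post:

```python
# Sketch of the "$1B researcher vs $1B compute" trade-off.
# GPU_PRICE_USD is an assumed illustrative figure, not a sourced price.

GPU_PRICE_USD = 30_000        # assumed street price of one datacenter GPU
BUDGET_USD = 1_000_000_000    # the $1B from the post

gpus_bought = BUDGET_USD // GPU_PRICE_USD
print(f"$1B buys roughly {gpus_bought:,} GPUs at ${GPU_PRICE_USD:,} each")
# The Bitter Lesson framing: does one exceptional hire beat ~33k more GPUs?
```

Under these assumptions the same budget buys tens of thousands of accelerators, which is the scale the compute-first camp is betting on.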

NIK · Jul 23 at 04:41
🚨BREAKING: META JUST POACHED 3 GOOGLE DEEPMIND RESEARCHERS WHO BUILT IMO GOLD MEDAL MODEL
The Zucc cannot be stopped.


259.27K
.@JonathanRoss321 (the inventor and father of the TPU) and I made this bet in 2017. @GroqInc is now the fastest inference solution in the market today.
Here are some lessons learned so far:
- If we assume we get to Super Intelligence and then General Intelligence, the entire game will converge to Inference. The design architecture for Inference couldn’t be more different from that for Training: totally different memory and power considerations. Speed is paramount.
- Start on a proven, cheap, and highly available process node and stay on that curve. Stacking complexity (e.g., 2nm) and yield issues into an Inference chip will result in a cost per output token that won’t make sense commercially.
- It’s all about model availability and diversity. If your chip is essentially an ASIC for one model, you will either need to use it at mega scale internally (e.g., Google and the TPU) or you will need to incentivize app companies to build to it (e.g., Amazon with Anthropic and Inferentia).
Anyways, I’m a big fan of the direction, but Meta will need to avoid some potholes in execution strategy.
I wish them well - more solutions are better imo.
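The cost-per-output-token concern in the second lesson can be sketched as simple amortization arithmetic. Every figure below is an illustrative assumption, not data from the post:

```python
# Sketch: amortized cost per output token for an inference chip.
# All figures are illustrative assumptions, not sourced numbers.

def cost_per_million_tokens(chip_cost_usd, lifetime_years, power_kw,
                            electricity_usd_per_kwh, tokens_per_second):
    """Amortize chip cost plus electricity over lifetime token output."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * seconds
    energy_cost = power_kw * (seconds / 3600) * electricity_usd_per_kwh
    return (chip_cost_usd + energy_cost) / total_tokens * 1_000_000

# A cheap mature-node chip vs. a pricier cutting-edge one (assumed numbers):
mature = cost_per_million_tokens(20_000, 4, 0.5, 0.08, 800)
leading = cost_per_million_tokens(60_000, 4, 0.7, 0.08, 1_200)
print(f"mature node:  ${mature:.3f} per 1M tokens")
print(f"leading node: ${leading:.3f} per 1M tokens")
```

The point of the sketch: if the cutting-edge node triples chip cost but delivers less than triple the throughput, the mature node wins on cost per token, which is the commercial argument in the bullet above.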

Shay Boloor · Jul 23 at 18:19
$META CUSTOM AI CHIP OLYMPUS TARGETING $TSM 2NM NODE FOR 2027 MASS PRODUCTION TO CHALLENGE $NVDA RUBIN

186.43K
Or whoever makes Dojo…oh wait!

Aaron Levie · Jul 23 at 02:15
NVIDIA will be the first $10 trillion company
696.54K
An important corollary to this for those who want online importance:
Influence and popularity are absolute value exercises.
Negative or positive attention doesn’t matter. What matters is that the population is talking about you and not someone else.
When you measure your subscriber/follower growth in this way, you will see this hiding in plain sight.

signüll · Jul 22 at 10:19
being unhinged online is now a pre seed strategy.
the arc is almost too clean:
shitpost → audience → ideas → product → bag.
243.63K
With all the progress being made in model quality on “generic silicon” by the foundational model makers, it’s time to turn the models themselves to the task of designing their own specialized silicon per use case.
This happens to a small degree right now, but supply constraints should force more model makers to realize the need to design custom silicon.
Some constraints are important, though, for this design challenge.
The most important is maintaining a useful design at a non-cutting-edge process node so you don’t create supply constraints or yield issues and aren’t battling physics at the electron level.
The second is how you design memory usage: cache vs. HBM per use case is an obvious trade-off.
Cooling is yet another. I’ve been working with a great engineer designing a CO2-chilled heat pump. If it works, racks, systems, and chips should contemplate it and work to take advantage of these different cooling dynamics, because it yields fewer constraints.
Etc etc etc.
Anyways, it’s just clear to me that in five years, we’ll look back and silicon diversity will have exploded.
I’m expecting to see a ton of announcements about new chips starting with pre-training and then bleeding to inference.
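The process-node argument above rests on yield economics. One standard way to see it is the classic Poisson die-yield model, Y = exp(−A·D0); the defect densities below are illustrative assumptions, not process data:

```python
import math

# Classic Poisson yield model: Y = exp(-A * D0), where A is die area (cm^2)
# and D0 is defect density (defects/cm^2). The figures below are assumed
# for illustration, to show why a mature node helps cost per chip.

def die_yield(area_cm2, defect_density):
    return math.exp(-area_cm2 * defect_density)

mature_yield = die_yield(6.0, 0.05)    # assumed mature node: low D0
bleeding_yield = die_yield(6.0, 0.30)  # assumed new node: higher D0
print(f"mature node yield:   {mature_yield:.1%}")
print(f"bleeding-edge yield: {bleeding_yield:.1%}")
```

With the same die area, a higher defect density on a new node collapses yield, so each good die (and ultimately each output token) costs several times more, which is the supply-and-yield constraint the post warns about.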

Sam Altman · Jul 21 at 06:14
we will cross well over 1 million GPUs brought online by the end of this year!
very proud of the team but now they better get to work figuring out how to 100x that lol
134.35K