.@dylan522p gives a deep dive on the 3 big bottlenecks to scaling AI compute: logic, memory, and power, and walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers. Learned a ton about every single level of the stack.

0:00:00 – Why an H100 is worth more today than 3 years ago
0:24:52 – Nvidia secured TSMC allocation early; Google is getting squeezed
0:34:34 – ASML will be the #1 constraint for AI compute scaling by 2030
0:56:06 – Can't we just use TSMC's older fabs?
1:05:56 – When will China outscale the West in semis?
1:16:20 – The enormous incoming memory crunch
1:42:53 – Scaling power in the US will not be a problem
1:55:03 – Space GPUs aren't happening this decade
2:14:26 – Why aren't more hedge funds making the AGI trade?
2:18:49 – Will TSMC kick Apple out from N2?
2:24:35 – Robots and Taiwan risk

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify. Enjoy!