Anyone fading $TAO is setting themselves up for the biggest cope of the cycle. Need I remind you that Bittensor successfully trained the largest permissionless LLM to date, a 72B-parameter model, on Subnet 3 (Templar), over commodity internet with 70+ nodes, and that it outperforms LLaMA-2-70B on key benchmarks? And now Jensen Huang is publicly praising Bittensor's decentralized training capabilities. The big guys are watching. $TAO is not done yet.
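
For anyone still convinced you can't train a serious model over home internet, here's the basic intuition in a toy Python sketch: nodes don't ship full gradients, they ship compressed ones. Top-k sparsification shown below is just one member of the family of bandwidth tricks decentralized runs lean on; the node count, dimensions, and compression rule here are illustrative assumptions, not Templar's actual protocol.

```python
# Toy sketch (NOT Templar's real protocol) of bandwidth-efficient
# distributed training: each node uploads only its top-k gradient
# entries, cutting per-step traffic by orders of magnitude.
import numpy as np

rng = np.random.default_rng(0)
NUM_NODES = 70     # roughly the node count cited for the Templar run
DIM = 10_000       # stand-in for a model's parameter count (assumption)
K = 100            # each node uploads only its 100 largest entries (assumption)

def top_k_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

# Each "node" computes a noisy local gradient, then ships a sparse version.
local_grads = [rng.normal(size=DIM) for _ in range(NUM_NODES)]
sparse_grads = [top_k_sparsify(g, K) for g in local_grads]

# Aggregation averages the sparse updates; per-node upload drops from
# DIM floats to K (index, value) pairs per step.
update = np.mean(sparse_grads, axis=0)
print(f"bandwidth per node: {K}/{DIM} entries "
      f"({100 * K / DIM:.1f}% of a dense gradient)")
```

That 1% upload figure is the whole reason "commodity internet" isn't a meme: you pay a small accuracy tax per step in exchange for fitting the sync traffic through a residential connection.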