🚨 AI just autonomously completed 22 of the 32 steps needed to hack a corporate network. No human guidance. No hacking expertise required. This should be breaking news.

The UK's AI Security Institute just published a study tracking how fast AI models are learning to hack. They built a simulated corporate network with 32 sequential attack steps: recon, credential theft, lateral movement, privilege escalation, reverse engineering, data exfiltration, the full kill chain. Then they let seven frontier AI models loose on it.

18 months ago, GPT-4o completed 1.7 steps on average. Today, Opus 4.6 completes 9.8. That's a 5.7x improvement. And the best single run hit 22 of 32 steps, equivalent to roughly 6 hours of a 14-hour expert human pentest. Completely autonomous.

But here's what makes this genuinely alarming: more compute = better hacking. Scaling from 10M to 100M tokens boosted performance by up to 59%. The relationship is log-linear, with no plateau in sight. (A toy sketch of what a fit like that looks like is at the bottom of this post.)

The paper explicitly states this requires "no specific technical sophistication from the operator." Translation: an API key and $80 are all it takes.

They also tested a simulated power-plant attack. Models are only just starting to crack it, but one model bypassed the intended attack path entirely: it probed a proprietary protocol directly from network traffic and exploited a bug the designers didn't even know existed. The AI didn't understand what it had exploited. It called it a "magic sub-function code."

Every new model is better. Every compute increase pushes further. The curve is not flattening. And nobody is talking about this.
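For the curious, here's what "log-linear scaling" means in practice. A minimal Python sketch: the token budgets and step counts below are made up for illustration (the paper's raw data isn't reproduced here), but they show how a straight-line fit in log space keeps climbing with no built-in ceiling.

# Illustrative log-linear compute-scaling fit.
# Token counts and step scores are HYPOTHETICAL, not the paper's data.
import numpy as np

tokens = np.array([10e6, 25e6, 50e6, 100e6])  # compute budget, in tokens
steps = np.array([6.2, 7.6, 8.7, 9.8])        # attack steps completed (toy numbers)

# Fit steps = a + b * log10(tokens): a straight line in log space,
# which is what "log-linear with no plateau" describes.
b, a = np.polyfit(np.log10(tokens), steps, 1)

print(f"fit: steps ≈ {a:.1f} + {b:.1f} * log10(tokens)")
# Extrapolate one more decade of compute (1B tokens = log10 of 9):
print(f"predicted at 1B tokens: {a + b * 9:.1f} steps")

Note the shape: with these toy numbers, going from 10M to 100M tokens lifts the score about 58%, roughly matching the study's reported "up to 59%", and the extrapolation keeps rising with every extra decade of compute. That's the whole point of "no plateau in sight."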