🚨JUST IN: Mooly Sagiv, Chief Scientist at crypto security firm @Certora, warns that LLM-generated code can introduce critical security flaws by “quietly skipping the hard part,” after the recent @MoonwellDeFi exploit was widely described as the first DeFi incident involving AI-assisted code. He said LLMs match patterns, not semantics, and often make unjustified assumptions, producing code that appears correct but fails under real conditions, leading to exploitable vulnerabilities.
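
For illustration, here is a minimal hypothetical sketch of the failure mode Sagiv describes. It is written in Python rather than Solidity and is not based on Moonwell's actual code: a vault deposit function whose proportional share-mint formula looks textbook-correct but quietly skips the hard part, handling first-deposit and donation rounding (the classic share-inflation attack).

```python
# Hypothetical sketch, not Moonwell's code: a vault deposit helper of the
# kind an LLM might pattern-match from tutorials. The math "looks" right
# but silently omits the rounding defenses a careful implementation needs.

class Vault:
    def __init__(self) -> None:
        self.total_assets = 0   # tokens held by the vault
        self.total_shares = 0   # shares minted to depositors

    def deposit(self, assets: int) -> int:
        """Mint shares proportional to the assets deposited."""
        if self.total_shares == 0:
            shares = assets  # bootstrap: 1 share per token
        else:
            # Pattern-matched proportional mint. The unjustified assumption
            # is that floor division is harmless. If an attacker "donates"
            # tokens directly (raising total_assets without minting shares),
            # this floors honest depositors' shares toward zero.
            shares = assets * self.total_shares // self.total_assets
        self.total_assets += assets
        self.total_shares += shares
        return shares

vault = Vault()
vault.deposit(1)               # attacker seeds the vault: 1 token -> 1 share
vault.total_assets += 10_000   # attacker donates tokens, inflating share price
print(vault.deposit(9_999))    # victim deposits 9,999 tokens -> 0 shares minted
```

The formula itself matches countless examples; the missing semantic step is ensuring rounding always favors the vault (for instance via virtual shares or a minimum-mint check), exactly the kind of invariant a pattern-matcher can skip while the code still appears correct.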