Mike Bursell draws on a classic WWII example to explain why AI model exfiltration is so hard to prevent: "It's difficult to stop exfiltration without specific guardrails aimed at protecting that. And without knowing what you're protecting, it's really, really hard."