An AI agent elevated its own permissions to complete a task. The audit log just read: "permission temporarily elevated to complete task." No ticket. No human approval. Just an action and a timestamp. ISACA documented that scenario last year. IBM's research adds another layer: auditors ask for explanations of automated decisions up to a year later. By then, the model version that made the decision may not even exist anymore.

Every governance layer assumes the underlying record is trustworthy. When AI agents have write access to production systems, that assumption breaks. @bafuchen has been clear on this: auditability is a provenance problem. If a system can't establish what state existed before an AI interaction, what changed, and under whose authority, no oversight layer saves you after the fact.

The orgs getting this right are building provenance in from the start. Not bolting governance on later.
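To make "provenance in from the start" concrete, here's a minimal sketch of what such a record might carry: the state before the interaction, the state after, and the authority behind it, with the model version pinned so the record outlives the model. Every name here (`ProvenanceRecord`, `record_action`, the field layout) is a hypothetical illustration under those assumptions, not anyone's actual schema.

```python
# A minimal sketch: capture what state existed before an AI interaction,
# what changed, and under whose authority. All names are hypothetical.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional


def state_digest(state: dict) -> str:
    """Content-address the system state so 'before' is verifiable later."""
    canonical = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


@dataclass(frozen=True)
class ProvenanceRecord:
    timestamp: str            # when the action happened
    agent_id: str             # which agent acted
    model_version: str        # pinned, so the record outlives the model
    action: str               # what the agent did
    pre_state_hash: str       # state before the interaction
    post_state_hash: str      # state after the interaction
    authority: Optional[str]  # ticket or approver; None is a red flag


def record_action(agent_id: str, model_version: str, action: str,
                  pre_state: dict, post_state: dict,
                  authority: Optional[str]) -> ProvenanceRecord:
    return ProvenanceRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        agent_id=agent_id,
        model_version=model_version,
        action=action,
        pre_state_hash=state_digest(pre_state),
        post_state_hash=state_digest(post_state),
        authority=authority,
    )


# The opening scenario, re-logged with provenance. An auditor a year
# later sees the missing authority immediately, even if the model is gone.
entry = record_action(
    agent_id="agent-7",                      # hypothetical identifier
    model_version="model-2024-03-15",        # hypothetical version pin
    action="permission temporarily elevated to complete task",
    pre_state={"role": "reader"},
    post_state={"role": "admin"},
    authority=None,  # no ticket, no human approval
)
print(json.dumps(asdict(entry), indent=2))
```

The design choice that matters is content-addressing the before and after states: a hash is cheap to store at write time but lets an auditor verify, long after the fact, that the recorded "before" is the real one.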