The real question I have here is whether language is just a really, really long-lasting and valuable premature optimization. Visual cortex bandwidth is so much wider, but we throw most of it away under the premise that we should compress the "noise" out. Are those details noise just because we treat them as brittle? Could a better brain capture the full visual scene of a split second and unlock much better reasoning? A QR code is "noise" to us, yet for machines it's both robust and information-dense.
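
To make the QR point concrete, here's a minimal sketch (assuming the third-party `qrcode` Python package; the example string and variable names are mine). It encodes a short sentence at error-correction level H, which by the QR spec tolerates roughly 30% codeword damage, and compares the payload bits to the raw black/white module grid that carries them:

```python
# Rough comparison: payload bits vs. the raw module grid a QR symbol
# uses to carry them, at the highest error-correction level.
# Assumes the third-party `qrcode` package (pip install qrcode).
import qrcode

text = "language is a lossy compression of the visual scene"  # hypothetical example payload

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # level H: survives ~30% damage
)
qr.add_data(text)
qr.make(fit=True)  # pick the smallest symbol version that fits the data

modules = len(qr.modules)      # symbol is modules x modules binary cells
raw_bits = modules * modules   # one bit per module, incl. finder/timing/format overhead
payload_bits = len(text.encode()) * 8

print(f"version {qr.version}: {modules}x{modules} modules")
print(f"{payload_bits} payload bits carried by {raw_bits} raw modules")
print(f"~{raw_bits - payload_bits} bits spent on redundancy and structure")
```

The gap between payload bits and raw modules is the point: the symbol deliberately "wastes" most of its visual bandwidth on redundancy, and that is exactly what makes it robust rather than brittle. It looks like noise to us only because we can't decode it.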