Language Models are Provably Injective and Invertible! A new paper challenges the widely held assumption that LLMs lose information about their inputs. The authors prove mathematically, and confirm empirically across billions of collision checks, that distinct prompts map to distinct internal representations: the hidden states are effectively lossless, and the original input is in principle recoverable from them. That is a powerful result for transparency and interpretability.
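To make the empirical claim concrete, here is a minimal sketch of the kind of collision check it implies: feed distinct prompts through a model and verify that no two of them produce the same final hidden state. This uses GPT-2 via the Hugging Face `transformers` library as a stand-in; the model choice, prompts, and distance check are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal collision check: distinct prompts should yield distinct hidden states.
# Assumes `torch` and `transformers` are installed; GPT-2 is a stand-in model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def last_hidden_state(prompt: str) -> torch.Tensor:
    """Return the final-layer hidden state of the last token for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, -1]

prompts = [
    "The cat sat on the mat.",
    "The cat sat on the rug.",
    "The dog sat on the mat.",
]

# Injectivity predicts no collisions: every pairwise distance is strictly positive.
states = [last_hidden_state(p) for p in prompts]
for i in range(len(prompts)):
    for j in range(i + 1, len(prompts)):
        dist = torch.norm(states[i] - states[j]).item()
        print(f"distance({prompts[i]!r}, {prompts[j]!r}) = {dist:.4f}")
        assert dist > 0.0, "collision: two distinct prompts share a hidden state"
```

The paper's experiments run this idea at a vastly larger scale (billions of prompt pairs) without ever observing a collision, which is what supports the lossless-representation claim in practice.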