The Qwen3.5 series maintains near-lossless accuracy under 4-bit weight and KV cache quantization. In terms of long-context efficiency:

- Qwen3.5-27B supports 800K+ context length
- Qwen3.5-35B-A3B exceeds 1M context on consumer-grade GPUs with 32GB VRAM
- Qwen3.5-122B-A10B supports 1M+ context length on server-grade GPUs with 80GB VRAM

In addition, we have open-sourced the Qwen3.5-35B-A3B-Base model to better support research and innovation. We can't wait to see what the community builds next!
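
For readers who want to experiment with the 4-bit setup mentioned above, here is a minimal sketch using Hugging Face Transformers, assuming a checkpoint is available on the Hub under a placeholder id like `Qwen/Qwen3.5-35B-A3B`; the repository name, quantization backend, and generation settings are illustrative assumptions, not an official recipe.

```python
# Minimal sketch: 4-bit weight loading plus a 4-bit quantized KV cache with
# Hugging Face Transformers. Requires `bitsandbytes` and `optimum-quanto`.
# The repository id below is a placeholder, not a confirmed release name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen3.5-35B-A3B"  # placeholder Hub id (assumption)

# Quantize the model weights to 4-bit NF4 at load time.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Summarize the key ideas of this report:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Quantize the KV cache to 4 bits during generation to reduce memory
# at long context lengths (backend/nbits are illustrative settings).
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    cache_implementation="quantized",
    cache_config={"backend": "quanto", "nbits": 4},
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Actual memory savings and accuracy will depend on the quantization backend and hardware; treat this as a starting point rather than the configuration used for the numbers reported above.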