Tencent releases WeDLM-8B-Instruct, a diffusion language model that runs 3-6× faster than vLLM-optimized Qwen3-8B on math reasoning tasks.

- 3-6× faster than vLLM-optimized Qwen3-8B on math reasoning tasks
- Outperforms base Qwen3-8B-Instruct on most benchmarks
- Native KV cache compatibility (FlashAttention, PagedAttention, CUDA Graphs)