🚀 Scaling embeddings, not just experts: a new path for efficient LLMs.

Key Finding: In high-sparsity regimes, N-gram embeddings yield a better Pareto frontier than simply adding more MoE experts. Building on this insight, we introduce LongCat-Flash-Lite, the first open-source model designed around it.

⚙️ 68.5B Total Params (37.13B non-embedding) | 2.9B–4.5B Active
📊 High Performance: SWE-Bench 54.4 | τ²-Bench 72.8 | TerminalBench 33.75
📃 256K Context Window (YaRN-powered)
✨ Optimized for agentic/coding tasks, with strong general reasoning
⚡ ~700 tokens/s peak inference speed

The result: competitive performance at its scale, at significantly lower cost and latency.

Hugging Face:
Tech Report:
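For intuition, here is a minimal sketch of what "scaling embeddings" via hashed N-gram tables can look like. The class name, table size, bigram restriction, and hashing scheme are all illustrative assumptions for this sketch, not LongCat-Flash-Lite's actual implementation:

```python
import torch
import torch.nn as nn

class NGramAugmentedEmbedding(nn.Module):
    """Token embeddings plus hashed bigram embeddings (n=2 for brevity).

    Growing the sparse embedding tables instead of the expert count keeps
    active parameters low: each token reads one row per table no matter
    how large the tables become.
    """
    def __init__(self, vocab_size: int, dim: int, ngram_buckets: int = 1_000_000):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        # Large hashed n-gram table: total params grow, active params don't.
        self.ngram = nn.Embedding(ngram_buckets, dim)
        self.buckets = ngram_buckets

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # ids: (batch, seq) token indices
        h = self.tok(ids)
        # Hash each (previous, current) token pair into a bucket.
        prev = torch.roll(ids, shifts=1, dims=1)
        pair = (prev * 1_000_003 + ids) % self.buckets
        pair[:, 0] = 0  # first position has no full bigram; reuse bucket 0
        return h + self.ngram(pair)

emb = NGramAugmentedEmbedding(vocab_size=32_000, dim=512)
out = emb(torch.randint(0, 32_000, (2, 16)))
print(out.shape)  # torch.Size([2, 16, 512])
```

This illustrates why the parameter counts above split total from non-embedding params: the hashed tables add total capacity cheaply, since a forward pass activates only a handful of embedding rows per token.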