The Qwen3.5 series maintains near-lossless accuracy under 4-bit weight and KV cache quantization.
In terms of long-context efficiency:
- Qwen3.5-27B supports 800K+ context length
- Qwen3.5-35B-A3B exceeds 1M context on consumer-grade GPUs with 32GB VRAM
- Qwen3.5-122B-A10B supports 1M+ context length on server-grade GPUs with 80GB VRAM
In addition, we have open-sourced the Qwen3.5-35B-A3B-Base model to better support research and innovation.
We can't wait to see what the community builds next!
