Introducing GLM-5: From Vibe Coding to Agentic Engineering
GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.
- Try it now:
- Weights:
- Tech Blog:
- OpenRouter (Previously Pony Alpha):
- Rolling out, starting with Coding Plan Max users:

On our internal evaluation suite CC-Bench-V2, GLM-5 significantly outperforms GLM-4.7 across frontend, backend, and long-horizon tasks, narrowing the gap with Claude Opus 4.5.

For GLM Coding Plan subscribers: due to limited compute capacity, we're rolling out GLM-5 gradually.
- Max plan users: You can enable GLM-5 now by updating the model name to "GLM-5" (e.g. in ~/.claude/settings.json for Claude Code; see the sketch after this list).
- Other plan tiers: Support will be added progressively as the rollout expands.
- Quota note: Requests to GLM-5 consume more plan quota than GLM-4.7.
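
For reference, a minimal sketch of that change in ~/.claude/settings.json, assuming an existing GLM Coding Plan setup for Claude Code. Only the model name needs to change; the "env" entries below are illustrative placeholders and should match whatever your current configuration already uses:

{
  "model": "GLM-5",
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "<your Coding Plan API key>"
  }
}
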
Weights are also available on ModelScope: