My goal for the year: make local AI easy and pleasant to use: on your phone, laptop, coding agents, Discord, browser, and even on ESP.
You will be able to talk to an Apple Watch, run a local model on call, get it coding for you, etc.
Kimi on 150 GB VRAM
GLM-5 on 150 GB VRAM
MiniMax-M2.5 on 48 GB VRAM
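As a rough sanity check on figures like these: a model's weight memory is about parameter count times bits per weight divided by 8. A minimal sketch (the function name and parameter counts are illustrative, not the real model configs):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate VRAM for the weights alone: params * bits / 8 bytes.
    Ignores KV cache, activations, and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# ~300B parameters at 4-bit weights -> ~150 GB of weight memory
print(weight_vram_gb(300, 4))  # 150.0
```

Real budgets need headroom on top of this for the KV cache and activations, which is why a "150 GB" model wants more than exactly 150 GB of VRAM.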
QuantForge lets you take any model, on any hardware: select a target size and calibration datasets, and it prunes and quantizes the model to fit.
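The core of a calibration-driven flow like this can be sketched in a few lines. This is not QuantForge's actual algorithm, just a toy illustration of the idea: quantize each layer's weights (here, symmetric per-channel int8) and use a calibration batch to measure how much the layer's outputs degrade, which is the signal a target-size search would minimize.

```python
import numpy as np

def quantize_per_channel_int8(w: np.ndarray):
    """Symmetric per-output-channel int8 quantization: one scale per row."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

def calib_mse(w: np.ndarray, w_hat: np.ndarray, calib: np.ndarray) -> float:
    """Mean squared error between full-precision and quantized layer
    outputs on a calibration batch."""
    return float(np.mean((calib @ w.T - calib @ w_hat.T) ** 2))

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)      # toy weight matrix
calib = rng.normal(size=(32, 128)).astype(np.float32)  # toy calibration batch
q, scale = quantize_per_channel_int8(w)
err = calib_mse(w, dequantize(q, scale), calib)
print(f"int8 calibration MSE: {err:.4f}")
```

A real pipeline would do this per layer, mix bit-widths and pruning ratios, and pick the combination that hits the target size with the smallest calibration error.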
Working on my MacBook, I'm pruning and quantizing some tiny models.
By the end of the year I will make it so that anyone can get any model to fit any hardware. Right now it runs on local hardware, but I will integrate with Prime Intellect.
Going to add features for sharing datasets, and for building one up from many independent components.



