I have stumbled onto a way to improve agent steering. Namely, how to improve performance when you say "make sure you do this" and the LLM doesn't do it. Here it is:
Saying "remember to do X" is unreliable: it requires the agent to spontaneously initiate a procedural behavior. But presenting the agent with a specific, possibly-wrong claim ("You should be doing X - are you still doing it?") reliably triggers corrective behavior when the claim is wrong.
The agent doesn't need to remember to check. The mismatch between the presented state and the actual state creates a correction event that the LLM naturally responds to.
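A minimal sketch of the contrast, assuming a simple prompt-building setup (the function names and the memory-agent scenario are mine, purely illustrative):

```python
# Sketch of the "possibly-wrong claim" steering pattern.
# These helpers are hypothetical, not from any agent framework.

def reminder_prompt(behavior: str) -> str:
    """The unreliable form: asks the agent to remember on its own."""
    return f"Remember to {behavior}."

def claim_check_prompt(behavior: str, presumed_state: str) -> str:
    """The reliable form: assert a specific, possibly-wrong claim about
    the agent's current state. If the claim is wrong, the mismatch
    between presented and actual state triggers a correction."""
    return (
        f"You should be {behavior}. "
        f"My understanding is that {presumed_state}. "
        "Is that still accurate? If not, correct course now."
    )

# Example: steering a memory-writing agent.
weak = reminder_prompt("write a memory entry after each tool call")
strong = claim_check_prompt(
    "writing a memory entry after each tool call",
    "your last three tool calls each produced a memory entry",
)
print(weak)
print(strong)
```

The point is that the second prompt gives the model something concrete to verify and contradict, rather than relying on it to self-initiate the check.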
This reminds me of the old maxim that the best way to get a correct answer on the internet is to post a wrong one. That makes sense, since LLMs are predominantly the distilled "knowledge" of the internet.
Anyhow, I've been building a long-running memory system for my agents, and implementing it this way fixed a lot of problems.
