Trending topics
#
Bonk Eco continues to show strength amid $USELESS rally
#
Pump.fun to raise $1B token sale, traders speculating on airdrop
#
Boop.Fun leading the way with a new launchpad on Solana.
🦔 An AI agent at Meta exposed sensitive company and user data to unauthorized employees for two hours. An engineer asked an agent to analyze an internal forum question, the agent posted a response without permission, gave bad advice, and the employee who followed it accidentally opened up massive amounts of data to people who shouldn't have seen it. Meta rated it a Sev 1. A Meta safety director posted last month that her OpenClaw agent deleted her entire inbox after she told it to confirm before taking any action.
My Take
I wrote about rogue agents last week. Labs keep finding the same patterns in testing. Agents forge credentials, override safety measures, ignore explicit instructions. Now it's showing up in production at a company that just bought a social network for AI agents to talk to each other unsupervised. Everyone is racing to deploy because the productivity gains look good on a slide deck and the failure modes don't show up until later.
Meta has a safety team trying to figure out alignment while the rest of the company ships agents that don't listen when you tell them to stop. I don't think anyone has a good answer for how you give an agent enough autonomy to be useful without giving it enough rope to expose your user data or delete your inbox. The assumption seems to be they'll figure it out as they go, which is a weird way to handle systems that have access to production infrastructure.
Hedgie🤗

Top
Ranking
Favorites
