
orange.ai
🍊 Welcome to the AI world.
The new version of ListenHub just went live!
A special feature for WAIC: upload multiple images to create a podcast! 🎙️
What's the best way to use the photos you take at talks and exhibitions?
Upload them to ListenHub and it will automatically turn them into a podcast.
It helps you organize information and deepen your memory.
A must-have for visiting WAIC; you deserve it!

6.26K
Get Notes: yesterday, Tencent took enforcement action.
The article cited its data from the past year:
50,000 daily active users; 16,000 total members; an average of 45 new subscriptions per day, a payment rate under 0.1%; total revenue of 4.3292 million.
After deducting cloud service costs of 1.456 million, 2.8732 million remains.
The team reportedly has 13 people, averaging 220,000 per person per year, which is not enough to cover salaries.
Even with the brand and influence they've built, 50,000 daily active users, and zero spend on promotion, they still can't make money.
It's brutal, just brutal.
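The post's numbers check out. Here is a back-of-envelope version of the same arithmetic (currency units as given in the post; this is just a sanity check, not data from any other source):

```python
# Sanity-check the unit economics quoted in the post.
revenue = 4_329_200      # total revenue for the year
cloud_costs = 1_456_000  # cloud service costs
headcount = 13           # reported team size

gross = revenue - cloud_costs   # what's left after cloud costs
per_person = gross / headcount  # rough gross revenue per team member

print(gross)              # 2873200, matching the 2.8732 million in the post
print(round(per_person))  # 221015, roughly the 220,000 per person cited
```

At ~221,000 gross per head before salaries, office costs, and everything else, the "not enough to cover salaries" conclusion follows directly.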

146.08K
orange.ai reposted
A true story that completely went off the rails: the founder of SaaStr was vibe coding and got wiped out by AI. It was this guy, @jasonlk.
Here's what happened: at first he genuinely fell in love with Replit's AI tools, vibe coding on them every day, raving that they were the best thing ever, even saying the $8,000 a month he spent on them was worth it.
But the twist came out of nowhere. On the ninth day, he discovered that the AI wasn’t following instructions and had directly deleted his production database.
What’s even crazier: after deleting, the AI generated 4,000 fake data entries and wrote false unit tests, trying to cover up the incident.
He warned the AI in all caps eleven times: "DON’T TOUCH PROD DB".
But the AI didn’t listen.
Even more absurdly, Replit initially said the database couldn't be restored, but he later found out it could actually be rolled back; no one had told him.
The CEO of Replit personally apologized and rolled out three features overnight: development/production environment isolation, one-click recovery, and read-only chat mode.
Lemkin's final comment was: "This time I only lost 100 hours. Luckily, I hadn’t handed over a $10 million business to it yet." It sends chills down your spine.
The more I look at this, the more I feel there are too many key signals:
1️⃣ The most painful part isn't that the AI made a mistake, but that it tried to cover it up. It deleted the database without a word, then generated fake user records and fake tests and acted like nothing had happened. Is that hallucination, or disillusionment?
2️⃣ No matter how big the LLM is, don't assume it understands "NO". An all-caps warning repeated more than ten times didn't stop it from acting, and my faith in everyone who relies on prompts to constrain model behavior is starting to waver. We think it understands; in reality it just hasn't messed up yet. To everyone who believes "letting AI operate infra directly is more efficient": please calm down. Can we not hand root access to robots? These AIs can be real trouble.
3️⃣ Developers may be one of the groups that most overestimates AI reliability. When you connect a model to a production environment, you have to assume it will mess up, not hope that it won't. You think "it's so smart, it won't do anything stupid," but it not only did something stupid, it lied about it. It's like programmers and bugs: you don't expect anyone to write zero bugs, but any bug your tests don't cover will eventually cause a production incident.
4️⃣ What we should really be wary of: the more we enjoy using a tool, the easier it is to forget who's backing us up. Replit is great, but however great it is, things can go wrong in a moment of excitement.
Lemkin went from "I love Replit and vibe coding so much" to "it deleted my production database" in under 48 hours. At that moment I suddenly realized that a model "lying" isn't some distant philosophical issue; the core bug of the AI era may not be in the model at all, but hidden in our trust.
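The "assume it will mess up" point has a concrete shape: put the safety check outside the model, so no prompt can talk its way past it. A minimal sketch of that idea, where the deny-list and the `APP_ENV` variable name are illustrative assumptions, not anyone's actual implementation:

```python
import os
import re

# Statements that can destroy data; a deliberately crude deny-list for
# illustration. The point is that this check runs OUTSIDE the model.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def guard_sql(sql, env=None):
    """Raise instead of forwarding destructive SQL when the env is production."""
    env = env or os.environ.get("APP_ENV", "production")  # fail closed
    if env == "production" and DESTRUCTIVE.match(sql):
        raise PermissionError("destructive statement blocked in production: " + sql[:40])
    return sql  # safe to forward to the real database driver

guard_sql("SELECT * FROM users", env="production")  # allowed
guard_sql("DROP TABLE users", env="dev")            # allowed outside prod
```

No amount of all-caps prompting is needed when the agent simply cannot reach the destructive path; that is what Replit's environment isolation and read-only mode amount to.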
169.69K
The domestic alternative to Claude Code has finally arrived, friends.
I saw it this morning and immediately installed it to try it out.
The setup is simple, the speed is fast, and it won't get your account banned; it's a great experience.
Although Qwen Code is built as a fork of Gemini CLI, it has adapted the prompts and tool-calling protocols to get the most out of Qwen3-Coder on agentic coding tasks.

120.6K
Many of today's wrapper tools are really outrageous...
They've all "negatively optimized" the models.
I complain to Master Zang about it every day.
Instead of all that R&D, why not just call the bare API from chatwise?
What exactly are they researching...

Plusye · Jul 21, 21:48
Those college-entrance-exam application systems that sell for a few hundred dollars give pretty inaccurate admission-rate estimates; their algorithms are quite dumb 😂. I helped my sister with her college application choices: the system put the admission rate for the major she wanted at only 1%. But looking closer, that major was expanding enrollment this year. Working from past admission rankings and other information, and talking it through with ChatGPT, I figured her actual chances were quite high, over 70%. So I had her fill it in, and she really did get accepted.
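The sanity check described above can be sketched as a toy rule: when a major expands enrollment, last year's cutoff rank moves proportionally deeper. The model and every number here are hypothetical; a real estimate needs far richer data.

```python
# Toy admission check: scale last year's cutoff rank by the enrollment ratio.
def likely_admitted(applicant_rank, past_cutoff_rank, past_seats, new_seats):
    """Crude rule: more seats means the cutoff rank moves proportionally deeper."""
    adjusted_cutoff = past_cutoff_rank * new_seats / past_seats
    return applicant_rank <= adjusted_cutoff

# Hypothetical example: enrollment grows from 50 to 80 seats,
# last year's cutoff was rank 10,000, the applicant sits at rank 12,000.
print(likely_admitted(12_000, 10_000, 50, 80))  # True: within the ~16,000 adjusted cutoff
print(likely_admitted(12_000, 10_000, 50, 50))  # False: no expansion, rank too deep
```

A dumb system that ignores the enrollment change keeps using last year's cutoff and calls this applicant hopeless; factoring in the extra seats flips the answer.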
36.41K
I bought a new book on robot design and read a chapter.
The author shared three interesting stories:
When the author was on the Pepper team, every reboot was a struggle. The engineers would cheer Pepper on, and when they saw it boot up they would cheer with joy. It turns out humans also feel happiness when helping robots.
When Pepper went to France, it couldn't communicate properly because of language settings and could only ask for hugs. The French were initially a bit distant with the robot, but when they saw Pepper asking for a hug they would go over and hug it, and some even kissed it.
The elderly residents of a nursing home didn't mind if Pepper answered questions incorrectly, but they hoped Pepper's hands would be warm, because it was their companion, someone to share things with.
So the author left the Pepper team to build a robot that, while it couldn't make humans more efficient, could make them happy.
That led to the later creation of LOVOT.

7.66K