
响马
。。。。。。
响马 reposted
After nearly a week of struggling, I finally managed to reproduce the Transformer model, though not without difficulty. My feeling now is that the Transformer's construction is clearly a product of both genius and hard work; it is epoch-making both theoretically and in engineering. (Though saying this now feels a bit like hindsight.)
59.4K
My connection was bad for several days; only the Iceland node could connect. It suddenly improved this afternoon.

Jason Young · Aug 23, 19:41
The "ladder" (VPN) was down all afternoon; it feels like I've lost my connection to the world.
5.92K
AI is just a tool, and how well it is used depends on the user. The code AI writes can only be considered passable: it has seen too much code of mixed quality and lacks taste of its own, so it often generates terrible code. That code may look fine as a module-level snippet, but placed in a larger architecture it creates countless hidden issues.

宝玉 · Aug 23, 00:11
The most advanced AI models write decent code; at the module level they far exceed the average human programmer. If the output is poor, first check which model was selected, whether the context is sufficient, and whether the prompts need optimization.
Cross-module code is limited by the context window length and may require human help with design and planning. If the project structure is reasonable, AI can also reuse existing code and keep the project DRY.
6K
AI can read, but it can't remember. Only when we truly achieve memory within the model can we break through the constraints of context.

geniusvczh · Aug 22, 18:58
After careful consideration, the reason AI writes poor code is not that it can't write, but that it can't read. Humans look at many seemingly unrelated things to implement a feature, but it's precisely these seemingly unrelated things that provide the possibility of keeping the project DRY. For AI to become a pillar, it must first learn this.
2.34K
For those of you who have used AI for programming: have you noticed that AI actually makes you busier?
In the past, a single requirement took about two weeks, from describing the requirement and assembling a team through product, front-end, back-end, and testing, with a whole crowd of people mobilized.
Now, one person can get it done in one night.
But back then, you could use the excuse of waiting for the team and take the opportunity to slack off. Now, you can't slack off anymore. 😶🌫️
66.81K
When grave offerings are eaten by someone about to starve, it counts as merit for the one being commemorated.

garrulous abyss🌈 · Aug 22, 16:58
Don't even mention the Bodhisattva.
If you were out in the wild and about to starve... and you saw offerings at a grave by the roadside. You bow your head and say, "Sorry, I'm about to starve, can I take an offering to eat?" I don't think anyone or any ghost would make things difficult for you...

2.2K
When a field starts discussing philosophy, it indicates that a stage is about to end. 😂

𝙩𝙮≃𝙛{𝕩}^A𝕀²·ℙarad𝕚g𝕞 · Aug 22, 06:53
Why does it feel like AI engineering research is becoming more focused on language philosophy?

4.81K
Single-shot RAG is a first-generation enhancement. My current approach is to implement RAG as an MCP server exposing two APIs: ask and fetch. Retrieval still uses the traditional RAG mechanism, while for extended reading the AI can call fetch to read the surrounding context. Mechanically, this is similar to how Cline greps and then re-reads files.
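The ask/fetch split above can be sketched as two plain functions over a toy corpus. This is an illustrative sketch only, not the author's actual MCP implementation: the function names, the keyword-overlap scoring, and the line-based corpus are all assumptions made for the example.

```python
# Sketch of the "RAG as two APIs" idea: `ask` does a traditional retrieval
# pass and returns scored snippets with their location, and `fetch` lets the
# caller (the AI) pull in the surrounding context on demand. Scoring and
# data layout are illustrative assumptions, not a real implementation.

from dataclasses import dataclass

@dataclass
class Snippet:
    doc_id: str
    line_no: int   # where the hit is, so fetch() can expand around it
    text: str
    score: float

# Toy corpus: doc_id -> list of lines.
CORPUS = {
    "guide.md": [
        "Install the package with pip.",
        "Configure the retriever before indexing.",
        "The retriever supports keyword and vector search.",
        "Restart the service after changing the config.",
    ],
}

def ask(query: str, top_k: int = 3) -> list[Snippet]:
    """Traditional retrieval: rank lines by keyword overlap with the query."""
    terms = set(query.lower().split())
    hits = []
    for doc_id, lines in CORPUS.items():
        for i, line in enumerate(lines):
            overlap = terms & set(line.lower().split())
            if overlap:
                hits.append(Snippet(doc_id, i, line, len(overlap) / len(terms)))
    hits.sort(key=lambda s: s.score, reverse=True)
    return hits[:top_k]

def fetch(doc_id: str, line_no: int, radius: int = 2) -> str:
    """Extended reading: return the lines *around* a hit, like grepping a
    file and then re-opening it to read the neighborhood."""
    lines = CORPUS[doc_id]
    lo, hi = max(0, line_no - radius), min(len(lines), line_no + radius + 1)
    return "\n".join(lines[lo:hi])
```

A caller would first `ask("configure the retriever")`, then pass the top hit's `doc_id` and `line_no` to `fetch` to read around the match, rather than relying on the fragment alone.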

宝玉 · Aug 21, 07:39
I deeply agree: 1. Multi-agent parallel collaboration is less stable than single-threaded; 2. RAG is unreliable and worse than traditional retrieval; 3. The more instructions in the prompt, the less the model knows how to choose.
—— The original translation is as follows ——
On the road to building AI agents, our team @Cline has identified three "thought viruses." These so-called "thought viruses" are enticing ideas that sound brilliant but are completely impractical in practice.
The three viruses are:
* Multi-Agent Orchestration
* Retrieval Augmented Generation (RAG)
* More instructions = better results
Let's explore them!
1. Multi-Agent Orchestration
The kind of scene from a sci-fi movie—"back-end agents, supply agents, analysis agents, command agents" dispatching a large group of sub-agents, and then summarizing the results—sounds really cool. But the reality is that most useful agent work is single-threaded.
Complex collaboration processes rarely bring real value and often create chaos instead. It's important to know that just getting the model to work stably in a single thread is already quite difficult, let alone handling those parallel collaboration logics. This not only increases implementation complexity but also makes the model's understanding and decision-making process exceptionally complicated.
2. Building agents with RAG
RAG, or Retrieval-Augmented Generation, is also a thought virus. It looks powerful in theory, but in practice, especially in agent scenarios, even a basic text-search command like grep is sometimes more useful.
Why does the halo of RAG fade in actual agent workflows? Because the information retrieved is often fragmented, preventing the model from forming a coherent and useful "understanding."
A better approach is almost always: let the model list the files itself, search in a grep-like manner, and then open and read the entire file (just like humans do). The @Cline team started doing this early on, and later we saw @Amp — Research Preview and @Cursor also shift to this more pragmatic approach.
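The list/search/read loop described above can be sketched as three tool functions an agent might be given. The names and signatures are illustrative assumptions, not the actual tool APIs of Cline, Amp, or Cursor.

```python
# Sketch of the "navigate like a human" loop: list files, grep for a
# pattern, then read a whole file instead of pasting isolated retrieved
# fragments into the context. Hypothetical tool names, not a real agent API.

import os
import re

def list_files(root: str) -> list[str]:
    """Enumerate files so the model can see the project layout."""
    out = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            out.append(os.path.join(dirpath, name))
    return sorted(out)

def grep(root: str, pattern: str) -> list[tuple[str, int, str]]:
    """Return (path, line_no, line) matches, like `grep -rn`."""
    rx = re.compile(pattern)
    hits = []
    for path in list_files(root):
        try:
            with open(path, encoding="utf-8") as f:
                for i, line in enumerate(f, 1):
                    if rx.search(line):
                        hits.append((path, i, line.rstrip("\n")))
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
    return hits

def read_file(path: str) -> str:
    """Open and read the entire file, as a human would."""
    with open(path, encoding="utf-8") as f:
        return f.read()
```

The agent calls these in sequence: `list_files` for orientation, `grep` to locate candidates, then `read_file` on a hit so the model sees the full file, not a fragment.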
3. More instructions = better results
There is a widespread misconception: piling more "instructions" into the system prompt will make the model smarter. This is completely wrong.
Stuffing the prompt only confuses the model, as more instructions often mean conflicting suggestions and information overload.
The result is that you end up patching the model's various strange behaviors like playing whack-a-mole, rather than getting truly useful outputs. For most cutting-edge models today, the best approach is to stay out of their way rather than shouting from the sidelines trying to steer them in a specific direction. Make every word (and token) count.
In summary, all three of these ideas are highly tempting. If you don't deal with AI all day, you might find them very reasonable—however, that is not the case. Of course, as the capabilities of underlying models improve, our views on these methods may change in the future.
6.2K