
Saoud Rizwan
A few days ago, a tweet from our head of AI offended many people. While I don't believe his original tweet was meant to be offensive, his refusal to apologize does not reflect my stance or Cline's. We acknowledge that this caused real pain, and that deserves recognition and empathy.
He is no longer with Cline. While I disagreed with how he responded, no one deserves the threats and abuse he has received. Please leave him and his family alone.
To everyone who was hurt by this – I'm sorry.
Cline v3.39 can now generate diff view comments to explain the changes it makes. 🚀 You can also ask it for help reviewing pull requests, recent commits, and more! Writing code is easy – reviewing and approving is the new bottleneck, and we can't wait for you to try this new feature.
Coding agents struggle with complex tasks in large, messy repos, and that won't improve until we stop relying on saturated benchmarks whose tests look nothing like real engineering.
That's why we're putting $1 million behind cline-bench, our open benchmark of real-world coding tasks!

pash · Nov 21, 2025
We are announcing cline-bench, a real-world, open-source benchmark for agentic coding.
cline-bench is built from real-world engineering tasks from participating developers where frontier models failed and humans had to step in.
Each accepted task becomes a fully reproducible RL environment with a starting repo snapshot, a real prompt, and ground truth tests from the code that ultimately shipped.
For labs and researchers, this means:
> you can eval models on genuine engineering work, not leetcode puzzles.
> you get environments compatible with Harbor and modern eval tooling for side by side comparison.
> you can use the same tasks for SFT and RL so training and evaluation stay grounded in real engineering workflows.
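To make the shape of a task concrete, here is a minimal sketch of what one of these reproducible environments might hold. The field names (repo_snapshot, prompt, test_command) and the run_ground_truth_tests helper are illustrative assumptions, not the published cline-bench schema.

```python
import subprocess
from dataclasses import dataclass

# Illustrative sketch only: field names are assumptions, not the actual
# cline-bench schema. Per the announcement, a task pins a starting repo
# snapshot, carries the real prompt, and is scored with the ground-truth
# tests from the code that ultimately shipped.
@dataclass
class BenchTask:
    repo_snapshot: str       # e.g. a git commit SHA pinning the starting repo state
    prompt: str              # the real engineering prompt given to the agent
    test_command: list[str]  # command that runs the ground-truth tests

def run_ground_truth_tests(task: BenchTask, workdir: str) -> bool:
    """Run the shipped tests against the agent's edited checkout; pass = solved."""
    result = subprocess.run(task.test_command, cwd=workdir)
    return result.returncode == 0
```

Framing each task this way is what makes it usable both as an eval and as an RL environment: the same pass/fail signal from the shipped tests can serve as a benchmark score or a reward.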
Today we are opening contributions and starting to collect tasks through the Cline Provider. Participation is optional and limited to open source repos.
When a hard task stumps a model and you intervene, that failure can be turned into a standardized environment that the entire community can study, benchmark, and train on.
If you work on difficult open source problems, especially commercial OSS, I would like to personally invite you to help. We're committing $1M to sponsor open source maintainers to take part in the cline-bench initiative.
"Cline-bench is a great example of how open, real-world benchmarks can move the whole ecosystem forward. High-quality, verified coding tasks grounded in actual developer workflows are exactly what we need to meaningfully measure frontier models, uncover failure modes, and push the state of the art."
– @shyamalanadkat, Head of Applied Evals @OpenAI
"Nous Research is focused on training and proliferating models that excel at real world tasks. cline-bench will be an integral tool in our efforts to maximize the performance and understand the capabilities of our models."
– @Teknium, Head of Post Training @nousresearch
"We are huge fans of everything Cline has been doing to empower the open source AI ecosystem, and are incredibly excited to support the cline-bench release. High-quality open environments for agentic coding are exceedingly rare. This release will go a long way both as an evaluation of capabilities and as a post-training testbed for challenging real-world tasks, advancing our collective understanding and capabilities around autonomous software development."
– @willccbb, Research Lead @PrimeIntellect
"We share Cline's commitment to open source and believe making this benchmark available to all will help us continue to push the frontier coding capabilities of our LLMs."
– @b_roziere, Research Scientist @MistralAI
Full details are in the blog:

