
Saoud Rizwan
A few days ago, a tweet from our head of AI offended many people. While I don't believe his original tweet was intended to be offensive, his refusal to apologize does not reflect my position or Cline's. We recognize that this caused real hurt and that it deserves acknowledgment and empathy.
He is no longer with Cline. While I disagreed with how he responded, no one deserves the threats and abuse he received. Please leave him and his family alone.
To everyone who was hurt by this – I'm sorry.
Cline v3.39 can now generate comments on its diffs to explain the changes it makes 🚀 You can also ask it to help review pull requests, recent commits, and more! Writing code is easy; reviewing and approving is the new bottleneck, and we're excited for you to try this new feature.
Coding agents struggle with complex work in large, messy repositories, and that won't improve until we stop relying on saturated benchmarks whose tests look nothing like real engineering.
That's why we're investing $1 million in cline-bench, our open benchmark of real programming tasks!

pash · Nov 21, 2025
We are announcing cline-bench, a real-world, open-source benchmark for agentic coding.
cline-bench is built from real-world engineering tasks contributed by participating developers, where frontier models failed and humans had to step in.
Each accepted task becomes a fully reproducible RL environment with a starting repo snapshot, a real prompt, and ground truth tests from the code that ultimately shipped.
For labs and researchers, this means:
> you can eval models on genuine engineering work, not leetcode puzzles.
> you get environments compatible with Harbor and modern eval tooling for side by side comparison.
> you can use the same tasks for SFT and RL so training and evaluation stay grounded in real engineering workflows.
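To make the task shape concrete, here is a minimal, hypothetical sketch in Python. It is not the official cline-bench schema or the Harbor format; every name, field, and value below (BenchTask, repo_url, test_command, the example repo) is invented for illustration. It only shows the idea described above: pin a starting repo snapshot, carry the original prompt, and score an attempt by running the ground-truth tests from the code that ultimately shipped.

# Hypothetical sketch only: not the official cline-bench schema.
import subprocess
from dataclasses import dataclass

@dataclass
class BenchTask:
    task_id: str              # stable identifier for the task
    repo_url: str             # open source repo the task comes from
    base_commit: str          # commit pinning the starting repo snapshot
    prompt: str               # the real prompt the developer gave the agent
    test_command: list[str]   # ground-truth tests from the code that shipped

def evaluate(task: BenchTask, workdir: str) -> bool:
    """Score an agent's attempt by running the ground-truth tests
    inside its patched checkout of the repo snapshot."""
    result = subprocess.run(task.test_command, cwd=workdir)
    return result.returncode == 0

# Usage example (all values invented):
task = BenchTask(
    task_id="example/cache-invalidation-bug",
    repo_url="https://github.com/example/project",
    base_commit="abc1234",
    prompt="Cache entries are not invalidated when the config reloads; fix it.",
    test_command=["pytest", "tests/test_cache_invalidation.py", "-q"],
)
# passed = evaluate(task, workdir="/path/to/agent/checkout")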
Today we are opening contributions and starting to collect tasks through the Cline Provider. Participation is optional and limited to open source repos.
When a hard task stumps a model and you intervene, that failure can be turned into a standardized environment that the entire community can study, benchmark, and train on.
If you work on difficult open source problems, especially commercial OSS, I would like to personally invite you to help. We're committing $1M to sponsor open source maintainers to take part in the cline-bench initiative.
"Cline-bench is a great example of how open, real-world benchmarks can move the whole ecosystem forward. High-quality, verified coding tasks grounded in actual developer workflows are exactly what we need to meaningfully measure frontier models, uncover failure modes, and push the state of the art."
– @shyamalanadkat, Head of Applied Evals @OpenAI
"Nous Research is focused on training and proliferating models that excel at real world tasks. cline-bench will be an integral tool in our efforts to maximize the performance and understand the capabilities of our models."
– @Teknium, Head of Post Training @nousresearch
"We are huge fans of everything Cline has been doing to empower the open source AI ecosystem, and are incredibly excited to support the cline-bench release. High-quality open environments for agentic coding are exceedingly rare. This release will go a long way both as an evaluation of capabilities and as a post-training testbed for challenging real-world tasks, advancing our collective understanding and capabilities around autonomous software development."
– @willccbb, Research Lead @PrimeIntellect
"We share Cline's commitment to open source and believe making this benchmark available to all will help us continue to push the frontier coding capabilities of our LLMs."
– @b_roziere, Research Scientist @MistralAI
Full details are in the blog:

