Saoud Rizwan
A few days ago, a tweet from our Head of AI upset a lot of people. While I don't believe his original tweet was meant to offend, his response refusing to apologize does not reflect my position or Cline's. We recognize this caused real pain, and that deserves acknowledgment and empathy.
He is no longer with Cline. While I disagreed with how he responded, no one deserves the threats and abuse he has received. Please leave him and his family in peace.
To everyone who was hurt by this: I'm sorry.
Coding agents struggle with complex work in large, messy repositories, and that won't improve until we stop relying on saturated benchmarks whose tests look nothing like real engineering.
That's why we're committing $1M to cline-bench, our open benchmark for real-world coding tasks!

pash · November 21, 2025
We are announcing cline-bench, a real-world open source benchmark for agentic coding.
cline-bench is built from real engineering tasks contributed by participating developers: tasks where frontier models failed and a human had to step in.
Each accepted task becomes a fully reproducible RL environment with a starting repo snapshot, the real prompt, and ground-truth tests from the code that ultimately shipped.
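To make the shape of a task concrete, here is a minimal sketch of how such an environment could be represented. The BenchTask fields and the verify helper are my own hypothetical illustrations, not the actual cline-bench schema.

from dataclasses import dataclass
import subprocess

# Minimal sketch of a cline-bench-style task record. Field names and the
# verify() helper are illustrative assumptions, not the published schema.
@dataclass
class BenchTask:
    repo_url: str         # open source repo the task was drawn from
    snapshot_commit: str  # git commit pinning the starting repo state
    prompt: str           # the real prompt the developer gave the agent
    test_command: str     # runs the ground-truth tests from the shipped fix

    def verify(self, workdir: str) -> bool:
        # An agent's attempt passes if the ground-truth tests succeed
        # in its working copy of the repo.
        result = subprocess.run(self.test_command, shell=True, cwd=workdir)
        return result.returncode == 0

Because the snapshot, prompt, and tests are all pinned, the same record can serve both as an eval harness and as an RL reward signal.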
For labs and researchers, this means:
> you can eval models on genuine engineering work, not leetcode puzzles.
> you get environments compatible with Harbor and modern eval tooling for side by side comparison.
> you can use the same tasks for SFT and RL so training and evaluation stay grounded in real engineering workflows.
Today we are opening contributions and starting to collect tasks through the Cline Provider. Participation is optional and limited to open source repos.
When a hard task stumps a model and you intervene, that failure can be turned into a standardized environment that the entire community can study, benchmark, and train on.
If you work on difficult open source problems, especially commercial OSS, I would like to personally invite you to help. We're committing $1M to sponsor open source maintainers to take part in the cline-bench initiative.
"Cline-bench is a great example of how open, real-world benchmarks can move the whole ecosystem forward. High-quality, verified coding tasks grounded in actual developer workflows are exactly what we need to meaningfully measure frontier models, uncover failure modes, and push the state of the art."
– @shyamalanadkat, Head of Applied Evals @OpenAI
"Nous Research is focused on training and proliferating models that excel at real world tasks. cline-bench will be an integral tool in our efforts to maximize the performance and understand the capabilities of our models."
– @Teknium, Head of Post Training @nousresearch
"We are huge fans of everything Cline has been doing to empower the open source AI ecosystem, and are incredibly excited to support the cline-bench release. High-quality open environments for agentic coding are exceedingly rare. This release will go a long way both as an evaluation of capabilities and as a post-training testbed for challenging real-world tasks, advancing our collective understanding and capabilities around autonomous software development."
– @willccbb, Research Lead @PrimeIntellect
"We share Cline's commitment to open source and believe making this benchmark available to all will help us continue to push the frontier coding capabilities of our LLMs."
– @b_roziere, Research Scientist @MistralAI
Full details are in the blog:
