Andrew Ng released an “Agentic Reviewer” for research papers. It just hit near human-level agreement after training on real ICLR 2025 reviews.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗶𝘁 𝘁𝗮𝗿𝗴𝗲𝘁𝘀
Paper review is slow. Each cycle takes around six months. One student saw six rejections over three years. Iteration speed, not ideas, became the bottleneck.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀
The system learns from real conference feedback. It reads your paper, then searches arXiv for related work. The flow is simple:

1. Analyze claims and structure
2. Ground comments in published research
3. Produce structured reviewer-style feedback

It works best in fields with open literature.

𝗛𝗼𝘄 𝗴𝗼𝗼𝗱 𝗶𝘁 𝗶𝘀
Human-to-human review correlation sits at 0.41. AI-to-human correlation reaches 0.42. That matches human reviewer agreement today.
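The three-step flow above can be sketched as a minimal pipeline. Everything here is an assumption for illustration: the function names are hypothetical, and the claim extraction and arXiv search are stubbed out (the post does not describe the actual implementation).

```python
# Sketch of an agentic-review loop: extract claims, ground each one
# in retrieved literature, emit structured reviewer-style comments.
# All names are hypothetical; LLM and arXiv calls are stubbed.
from dataclasses import dataclass

@dataclass
class Comment:
    claim: str      # a claim found in the paper
    evidence: str   # a citation grounding the comment

def extract_claims(paper_text: str) -> list[str]:
    # Stand-in for LLM-based claim/structure analysis:
    # here, simply treat each non-empty line as one claim.
    return [line.strip() for line in paper_text.splitlines() if line.strip()]

def search_related_work(claim: str) -> list[str]:
    # Stand-in for an arXiv search; returns placeholder citations.
    return [f"arXiv result for: {claim[:40]}"]

def review(paper_text: str) -> list[Comment]:
    # Ground every extracted claim in at least one retrieved reference.
    comments = []
    for claim in extract_claims(paper_text):
        evidence = search_related_work(claim)
        comments.append(Comment(claim=claim, evidence=evidence[0]))
    return comments

feedback = review(
    "We achieve state-of-the-art accuracy.\nOur method scales linearly."
)
for c in feedback:
    print(f"- {c.claim} [grounded in: {c.evidence}]")
```

The key design point the post implies is that every comment is grounded in retrieved literature rather than generated freely, which is also why the system works best in fields with open-access papers.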
Link: