1/ The answer is you can’t — not cheaply. Verifying whether a paper is a breakthrough requires roughly the same expertise as producing one. Generation scales. Verification doesn’t.

Mar 21, 04:00
If AI scientists are writing millions of papers, many of which are slop, and some of which are incremental progress, how would we identify the one or two which come up with an extremely productive new idea?
In 1948, Shannon was one of hundreds of engineers at Bell Labs working on how to cleanly send voice signals over noisy copper wires. His paper sat in the same technical journal as reports on reducing static and building better filters.
How would you have recognized that he had come up with a very general framework for thinking about information and communication channels, one that over the coming decades would prove enormously useful in domains as far apart as cryptography, genetics, and quantum mechanics?
It can take fields multiple decades to recognize the significance of a unifying new concept, because it is on that time scale that its fruits show up as new discoveries across many different fields.
We’ve managed to solve this peer review problem for human scientists (at least somewhat). Now we’ll need to do it at a much greater scale for the mass of AI science that will be thrown at us.
2/ We call this the Measurability Gap. The harder a task is to verify relative to how hard it is to execute, the less safely you can automate it. AI science is the extreme case: infinite generation, fixed verification bandwidth.
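A toy model (our own illustration here, not the formalization in Section 5) makes the scaling concrete: if one breakthrough is hidden uniformly among n submissions and reviewers can verify only k papers per period, the expected delay before the breakthrough is verified grows roughly linearly in n. Generation scales n; the fixed bandwidth k does not.

```python
import math

def expected_delay(num_papers: int, verify_per_period: int) -> float:
    """Toy model: one breakthrough hidden uniformly among num_papers
    submissions; reviewers verify verify_per_period papers per period,
    working through a random ordering. A breakthrough at position i is
    verified in period ceil(i / verify_per_period), so the expected
    delay is the mean of that quantity over all positions."""
    k = verify_per_period
    return sum(math.ceil(i / k) for i in range(1, num_papers + 1)) / num_papers

# Verification bandwidth fixed at 50 papers/period; generation grows 100x.
for n in (100, 1000, 10000):
    print(n, expected_delay(n, 50))
# Delay grows roughly linearly with the number of papers:
# 100 -> 1.5, 1000 -> 10.5, 10000 -> 100.5
```

The numbers 50 papers/period and the uniform-position assumption are illustrative choices, but the linear growth in delay holds for any fixed verification bandwidth.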
3/ We formalize this, and map the conditions under which verification becomes the binding constraint on AI progress, in Section 5.