🚨 I JUST READ SOMETHING SHOCKING.
Researchers just trained an AI to predict which scientific ideas will succeed before any experiment is run.
It is now better at judging research than GPT-5.2, Gemini 3 Pro, and every top AI model on the market.
And it learned by studying 2.1 million research papers without a single human scientist teaching it what "good science" looks like.
Here is what they did.
A team of Chinese researchers built two AI systems. The first, called Scientific Judge, was trained on 700,000 matched pairs of high-citation vs low-citation papers. Every pair came from the same field and the same time period. The AI's only job: figure out which paper would have more impact.
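To make the setup concrete: training on matched pairs like this is essentially a pairwise comparison model — learn a scoring function so the higher-impact paper in each pair gets the higher score. Here is a minimal toy sketch of that idea in Python (a Bradley-Terry style logistic model on synthetic feature vectors; the features, sizes, and training details are my illustrative assumptions, not the paper's actual method):

```python
import math
import random

# Toy sketch of pairwise "judge" training: each paper is a small feature
# vector (synthetic here), and we fit weights so that score(high-impact)
# tends to exceed score(low-impact) across matched pairs.
random.seed(0)

def make_paper(impact):
    # hypothetical features, loosely correlated with underlying impact
    return [impact + random.gauss(0, 0.5) for _ in range(3)]

# matched pairs: (higher-citation paper, lower-citation paper)
pairs = [(make_paper(1.0), make_paper(0.0)) for _ in range(500)]

w = [0.0, 0.0, 0.0]

def score(p):
    return sum(wi * xi for wi, xi in zip(w, p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

lr = 0.1
for _ in range(50):
    for hi, lo in pairs:
        # P(hi beats lo) under a Bradley-Terry style logistic model
        p = sigmoid(score(hi) - score(lo))
        g = 1.0 - p  # gradient of the pairwise log-likelihood
        w = [wi + lr * g * (h - l) for wi, h, l in zip(w, hi, lo)]

# pairwise accuracy on fresh held-out pairs: does the high-impact
# paper in each pair get the higher score?
test_pairs = [(make_paper(1.0), make_paper(0.0)) for _ in range(200)]
acc = sum(score(hi) > score(lo) for hi, lo in test_pairs) / len(test_pairs)
print(round(acc, 2))
```

The real system reportedly does this at the scale of a language model reading full papers; the sketch only shows why "which of these two wins" is a trainable objective even without any human labels for what "good science" means.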
It worked.
The AI now predicts which research will succeed with 83.7% accuracy. That is higher than GPT-5.2. Higher than Gemini 3 Pro. Higher than every frontier model that exists.
Then they built the second system.
Scientific Thinker doesn't just judge ideas. It proposes them. You give it a research paper, and it generates a follow-up idea with high potential impact.
When tested head to head against GPT-5.2, Scientific Thinker's ideas were rated as higher impact 61% of the time. It is generating better research directions than the smartest AI models in the world.
It gets stranger.
They trained the Judge only on computer science papers.
Then they tested it on biology. Physics. Mathematics. Fields it had never seen. It still worked. 71% accuracy on biology papers it was never trained on. The AI didn't learn what makes good computer science. It learned what makes good science, period.
Then the researchers tested whether it could see the future. They trained it on papers through 2024, then asked it to judge 2025 papers. It predicted which ones would gain traction with 74% accuracy. The AI learned to spot winners before the scientific community did.
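The evaluation protocol here is a plain temporal holdout: everything up to a cutoff year is training data, everything after it is test data, so the model cannot have seen the outcomes it is judged on. A minimal sketch (the `papers` records and fields are hypothetical stand-ins, not the study's data):

```python
# Temporal holdout: train on papers through a cutoff year, evaluate on
# papers published after it. Toy records for illustration only.
papers = [
    {"id": "a", "year": 2023, "high_impact": True},
    {"id": "b", "year": 2024, "high_impact": False},
    {"id": "c", "year": 2025, "high_impact": True},
    {"id": "d", "year": 2025, "high_impact": False},
]

def temporal_split(papers, cutoff_year):
    """Split records into (train, test) by publication year."""
    train = [p for p in papers if p["year"] <= cutoff_year]
    test = [p for p in papers if p["year"] > cutoff_year]
    return train, test

train, test = temporal_split(papers, 2024)
print([p["id"] for p in train], [p["id"] for p in test])
# → ['a', 'b'] ['c', 'd']
```

The point of this design is that citation counts leak the future: a random split would let the model memorize which 2025 papers already succeeded, while a year-based cutoff forces genuine prediction.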
Here is what nobody is talking about. A 1.5 billion parameter model, tiny by today's standards, jumped from 7% to 72% accuracy after training. That is a 65-point leap. The ability to judge scientific quality isn't some emergent property of massive models. It can be taught to small, cheap, fast AI systems that anyone can run....

