While frontier models excel at agentic search, they are prohibitively expensive and slow for such token-intensive tasks. This matters because search precision tends to scale with the number of tokens processed. The solution is small, carefully RL-trained models tailored to individual search engines, which can outperform general-purpose frontier models while being one to two orders of magnitude cheaper and faster.
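To make the token economics concrete, here is a minimal sketch of an agentic search loop. The names (`agentic_search`, `Action`, `model`, `search`) are illustrative placeholders, not from any particular framework: the point is that the harness stays the same whichever model drives it, every step re-reads the growing transcript (which is why costs compound), and swapping a frontier model for a small engine-specific one changes only the `model` argument.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interfaces: `model` maps a transcript to the next action,
# `search` queries the specific search engine the model was tuned for.

@dataclass
class Action:
    kind: str      # "search" to issue a query, "answer" to finish
    content: str   # the query text or the final answer

def agentic_search(
    question: str,
    model: Callable[[List[str]], Action],   # frontier model or small RL-tuned model
    search: Callable[[str], str],           # the target search engine
    max_steps: int = 10,
) -> str:
    transcript: List[str] = [f"QUESTION: {question}"]
    for _ in range(max_steps):
        # Every step re-processes the entire transcript, so cost and latency
        # grow with the number of tokens the model reads per search task.
        action = model(transcript)
        if action.kind == "answer":
            return action.content
        results = search(action.content)
        transcript.append(f"QUERY: {action.content}")
        transcript.append(f"RESULTS: {results}")
    return "No answer found within the step budget."
```

Because the model is just a callable parameter here, the per-query cost difference between a frontier model and a small specialized one multiplies across every step of every search, which is where the one-to-two-orders-of-magnitude gap shows up in practice.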