I've found the marginal value of using AI while working in the same code base drops a lot over time. It's always useful, but it does asymptote pretty quickly. I think it's less about context prioritization and more that it just takes a lot more effort to describe things in relation to each other. Every feature has the potential to interact with every other feature, so there is n^2 complexity in the limit. The work converges on describing all of that and testing for it, which gets more formal over time. AI really is a dramatic step at tackling the first 80%, and it forces thinking in higher-level abstractions to limit the number of pairwise interactions. But as always, the remaining 20% is where all the time and value lies.
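To put rough numbers on that scaling (a back-of-the-envelope illustration of the n^2 point, assuming the worst case where any pair of features can interact): n features allow up to n(n-1)/2 pairwise interactions, so 10 features mean at most 45 pairs to describe and test, while 50 features mean 1,225. The feature list grows linearly, but the interaction surface you have to explain to the AI grows roughly quadratically.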