It's really remarkable how fast AI tools for Excel have evolved. Even three months ago I found them almost completely unusable. Today I was able to update my Uber model for the last four quarters in a fraction of the time, accurately, even counting the time I spent debugging and validating the key inputs.

The three big unlocks for me were: (1) creating my own skills files, which are recipe cards encoding an incredibly detailed dissection of every step of the financial modeling process (put together in an 86-page document, then crafted into six distinct modeling skills...unfortunately, I won't be sharing this at this time, but will consider it in the future); (2) connecting the Daloopa MCP to Claude in Claude Excel for accurate data; and (3) creating a validation space in Perplexity Computer to do final checks and debugging. (I am not sponsored by Daloopa, Perplexity, or any other vendor, for that matter.)

Obviously, this AI-augmented process is only valuable to the extent that it is 98%+ accurate overall and 100% accurate on critical metrics. Validation has to be a systematic process blending coding tools and human validation checklists (i.e., hand-checking key model variables and understanding where in the model there is tolerance for mistakes, and where there isn't). But the ability of new LLMs to read and analyze models (particularly GPT 5.4), and the rise of agentic workspaces like Perplexity Computer to route tasks to the right LLMs, seems to be driving big progress here. Really exciting stuff.

I have been a huge skeptic here...Excel-based models are the foundation of institutional decision making, and they are no place for AI slop. With the technology improving, particularly workflows around systematic validation, that skepticism is melting.
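To make the "tolerance for mistakes" idea concrete, here is a minimal sketch of what a coded validation pass might look like. The metric names, figures, and tolerances below are purely illustrative (not my actual checklist or real Uber data): critical metrics get zero tolerance, while softer estimates are allowed a small relative deviation.

```python
# Hypothetical sketch: compare model outputs against hand-checked reference
# values, with per-metric tolerances. Critical metrics must match exactly;
# estimated inputs are allowed a small relative deviation.

# (metric name, model value, hand-checked value, relative tolerance)
# All figures below are made up for illustration.
CHECKS = [
    ("gross_bookings", 40_970.0, 40_970.0, 0.0),        # critical: exact match
    ("revenue", 11_959.0, 11_959.0, 0.0),               # critical: exact match
    ("driver_incentives_est", 1_210.0, 1_205.0, 0.01),  # estimate: 1% slack
]

def validate(checks):
    """Return a list of (metric, passed) results."""
    results = []
    for name, model_val, ref_val, tol in checks:
        if tol == 0.0:
            passed = model_val == ref_val
        else:
            passed = abs(model_val - ref_val) <= tol * abs(ref_val)
        results.append((name, passed))
    return results

failures = [name for name, ok in validate(CHECKS) if not ok]
print("all clear" if not failures else f"FAILED: {failures}")
```

The point isn't the code itself, it's the discipline: every key variable gets an explicit, written-down tolerance, so "validated" means something specific rather than "it looked right."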