For most of my 25 years in finance, the bottleneck was never the thinking. It was the time between the thinking and the answer.
A senior leader has a question. The FP&A team is good — genuinely good — but they're also balancing three other priorities, a close cycle, and a board package due Friday. So the question gets queued. The model gets built when it gets built. The review happens when calendars align. And by the time a decision-ready answer lands, the moment has sometimes already passed.
That's not a people problem. That's an architecture problem. And Claude Opus 4.6 + Excel just changed the architecture.
Here's exactly what happened over the past few weeks at my firm.
Before: Model built in hours. Review delayed days. Back-and-forth rounds on assumptions. Decision-ready output eventually lands — after competing priorities allowed it.
After: Senior leaders built a working model in 30 minutes. CFO validation and upgrade in another 30. Decision-ready output by end of morning.
The Before: One Week to Decision-Ready
A senior leader asked my FP&A team member to analyze three investment options. The model itself took a few hours to build — that part was actually fast. But competing priorities pushed the review out by a few days.
When the review finally happened, it ran thirty minutes. Then another hour refining assumptions and scenarios. Then a few more back-and-forth rounds as the leader stress-tested their thinking against the numbers.
Total time from question to decision-ready answer: approximately one week.
Nobody did anything wrong. That's just how it works when skilled people are spread across multiple priorities. The analyst was doing exactly what a good analyst does; they just couldn't be in three places at once.
The After: Same Day
Last week, three senior leaders wanted to add a fourth opportunity into the same model. This time, instead of waiting on FP&A, they spent thirty minutes with Claude Opus 4.6 + Excel and built a model themselves.
Then they sent it to me.
"Hey John — here's what we did. Can you validate the numbers?"
Honestly? It was surprisingly solid for non-finance people. The logic held. The structure made sense. They'd asked Claude the right questions and it had given them a working framework.
But it also had gaps that a seasoned FP&A person catches immediately.
The model was static, not dynamic. Assumptions were hardcoded rather than referenced from a central inputs table. The moment a leader wants to change a growth rate or discount factor, they're editing raw cells instead of changing one input that flows through the whole model. One wrong edit and the integrity breaks silently.
There was a base case. That's it. A real investment decision needs at minimum a base, a downside, and an upside — ideally with sensitivity tables showing which assumptions the outcome is most sensitive to. Without that, you're not analyzing an investment. You're doing arithmetic.
The numbers were reasonable on the surface, but untested. What happens if the revenue ramp takes twice as long? What's the IRR at half the projected margin? Those are the questions that separate a model from a decision tool.
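The difference between arithmetic and a decision tool is easiest to see in miniature. Here's a minimal sketch of the structure described above — a central assumptions table, scenario overrides, and a one-way sensitivity. All figures are hypothetical, and the real version lives in Excel rather than code:

```python
# Minimal sketch of a dynamic, scenario-driven model.
# All numbers are illustrative, not from any real deal.

BASE = {
    "initial_investment": 1_000_000,  # upfront outlay
    "year1_revenue": 800_000,
    "revenue_growth": 0.10,           # annual growth rate
    "margin": 0.30,                   # operating margin
    "discount_rate": 0.12,
    "years": 5,
}

# Each scenario only states what differs from base.
SCENARIOS = {
    "base": {},
    "downside": {"revenue_growth": 0.03, "margin": 0.22},
    "upside": {"revenue_growth": 0.18, "margin": 0.35},
}

def npv(overrides):
    """Discount the cash flows implied by one set of assumptions."""
    a = {**BASE, **overrides}  # every input flows from one table
    value = -a["initial_investment"]
    revenue = a["year1_revenue"]
    for t in range(1, a["years"] + 1):
        value += (revenue * a["margin"]) / (1 + a["discount_rate"]) ** t
        revenue *= 1 + a["revenue_growth"]
    return value

for name, overrides in SCENARIOS.items():
    print(f"{name:>9}: NPV = {npv(overrides):>12,.0f}")

# One-way sensitivity: how much does the answer move
# as the growth assumption alone varies?
for g in (0.00, 0.05, 0.10, 0.15, 0.20):
    print(f"growth {g:.0%}: NPV = {npv({'revenue_growth': g}):>12,.0f}")
```

The point of the structure: change one number in the assumptions table and every scenario and sensitivity row updates with it — which is exactly what a spreadsheet full of hardcoded cells can't do.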
This is the last 30%. Not because the first 70% wasn't valuable — it absolutely was. But because this is exactly where 25 years of doing this earns its place.
What I Did Next
I spent about thirty minutes with Claude, giving it clearer and more specific instructions. Not vague directions — precise ones. I told it the model needed a dynamic input structure with a dedicated assumptions table. I told it to build three scenarios with named variables. I told it to add a sensitivity analysis on the two assumptions that would most affect the decision.
Then I grabbed a coffee and watched it rebuild the model, with commentary as it worked.
That last part matters more than it sounds. Claude doesn't just produce output — it explains what it's doing and why. Watching it restructure the model while narrating its reasoning is genuinely useful for anyone trying to learn. I found myself catching moments where its instinct differed from mine, which forced me to articulate why I'd do it differently. That's a surprisingly good way to pressure-test your own assumptions.
Thirty minutes later, I sent the upgraded file back. Deeper scenarios. Cleaner decision support. Stress-tested assumptions. A sensitivity table that immediately showed the leaders which variable they should spend the most time getting right.
Decision-ready output: same day.
AI didn't replace the team. It changed who can start — and what finished should look like.
Three Takeaways for CFOs and Finance Leaders
1. AI gets non-subject matter experts to 70% — fast. Your senior leaders don't need to become FP&A analysts. They need to be able to start, get 70% of the way there on their own, and ask better questions when the expert reviews their work. That changes the dynamic of every finance conversation in your organization.
2. The last 30% still belongs to the subject matter expert. Validation. Dynamic structure. Sharper scenario thinking. Stress-tested assumptions. Real judgment about which numbers actually matter for the decision. None of that goes away — it just moves earlier in the process and becomes more concentrated. The FP&A professional's job isn't disappearing. It's being elevated.
3. Don't do this alone. The gains compound faster when you learn in public, with other people, in real time. When those three senior leaders built that model together in thirty minutes, they weren't just getting an answer — they were building intuition about what AI can and can't do. That's an organizational capability, not just an individual one. Bring people into the process. The learning is the product.
What This Means for Your Finance Team
The before/after I described above isn't a one-time efficiency win. It's a new operating model.
When non-finance leaders can get to 70% on their own, your FP&A team stops being a production bottleneck and starts being a quality layer. They're not building models from scratch anymore — they're elevating models that already exist, catching the gaps that require real expertise, and spending their time on the judgment work that actually changes decisions.
That's a better use of a skilled person's time. And it's a better experience for the leaders who used to wait a week for an answer they now get the same morning.