There’s a structural problem with how AI consulting works. The people who sell the strategy are not the people who can build the product. And the people who can build the product rarely work at consulting firms.
This is the execution gap. It’s where AI projects go to die.
The Consulting Model Is Broken
Traditional consulting follows a predictable pattern: assess, strategize, recommend, leave. It works for some things — organizational design, market entry analysis, cost optimization. It doesn’t work for AI.
AI is a building problem, not a thinking problem. The hard part isn’t figuring out what to build. It’s building it. And you can’t outsource the building to a strategy firm.
What the Gap Looks Like
In practice, the execution gap shows up as a handoff. The consulting firm delivers a beautiful strategy document — 60 slides, clear recommendations, projected ROI. Then they hand it to your engineering team.
Your engineering team has never built an AI product. They’re good engineers, but they don’t know the gotchas. They don’t know that the model will need retraining every 6 weeks. They don’t know that the data pipeline will consume 70% of the engineering effort. They don’t know that the first version should be embarrassingly simple.
So they follow the strategy deck. They build the ambitious version. It takes twice as long. It works in the lab. It fails in production. And eventually, quietly, the project gets shelved.
Closing the Gap
The alternative is straightforward but uncommon: the people who develop the strategy should be the same people who execute it. Or, at a minimum, they should have built one before.
This means consultants who write code. Who’ve been in the trenches of a failing data pipeline at 2am. Who know that “97% accuracy” in the pitch deck means “3% of your customers are going to have a terrible experience.”
It means short engagements focused on building, not analyzing. Weeks, not months. Working products, not slide decks.
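To make that "97% accuracy" point concrete, here is a rough back-of-the-envelope sketch. The user count is an illustrative assumption, not a figure from any real engagement:

```python
# Back-of-the-envelope: what "97% accuracy" means for real users.
# Both numbers below are hypothetical, for illustration only.
monthly_users = 50_000   # assumed size of the user base
accuracy = 0.97          # the figure quoted in the pitch deck

failed_interactions = monthly_users * (1 - accuracy)
print(f"{failed_interactions:,.0f} users per month hit the failure case")
# Prints: 1,500 users per month hit the failure case
```

A strategy deck rounds that 3% away. The engineer on call at 2am does not get to.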
Why This Is Rare
It’s rare because it doesn’t scale the way traditional consulting scales. You can’t train a fresh MBA graduate to embed with an engineering team and ship an AI product in 12 weeks. It requires hard-won experience that takes years to build.
That’s also why it works. The experience is the moat. Not the framework. Not the methodology. The fact that you’ve shipped 20 products and you know exactly where this one is going to break.
The Test
Here’s a simple test for any AI consulting engagement: at the end of it, will there be working software? Not a roadmap. Not a prototype that lives on someone’s laptop. Working software, in production, delivering value to actual users.
If the answer is “we’ll hand off to your team for implementation,” you’re looking at the execution gap. And, more often than not, you’re looking at a failure.