I was sitting in a sprint demo with a client in Longueuil last November. Nine weeks of integration work. Complex piece of engineering, the kind that touches three different systems and requires somebody to understand how SAP talks to a custom warehouse module. During the demo, everything worked. Flawlessly, actually. The team had built exactly what they were asked to build.

Silence. The client watched, arms crossed, and when the demo ended said seven words: "This solves a problem we don't have."

Nine weeks. Gone. The team had implemented the stated requirement to the letter, done the job they were given, and done it well. But the stated requirement (written by a Business Analyst who'd assumed instead of validating) had never been checked against what the business actually needed. One assumption. Nine weeks.

This story plays out everywhere I look, and it's not because teams are incompetent. They're executing their instructions brilliantly. The instructions are wrong. The problem is upstream, always upstream, in the place where someone should have asked "why" before anyone asked "how."

47% of projects fail to meet their original objectives.

Why didn't AI coding tools fix the failure rate?

AI coding agents have arrived with genuine power. GitHub Copilot has millions of users. Development teams are using these tools to ship faster. The acceleration is real.

And yet project failure rates haven't moved. The PMI Pulse of the Profession 2025 reports 47% of projects still fail to meet their original objectives. The Standish Group's CHAOS data tells a similar story: the top three failure factors are incomplete requirements, lack of user involvement, and changing requirements. Not slow code. Not bad frameworks. Requirements.

Here's what makes it worse. A Faros AI study of 10,000 developers across 1,255 teams found that 75% of engineers now use AI coding tools, yet the productivity gains "evaporate at the company level." Individual developers merge more pull requests, sure. But organizational outcomes don't improve. The bottleneck isn't where the tools are pointed.

Why? Because the bottleneck was never writing code.

Software development is a chain. You start upstream with understanding what to build. Then you move downstream to actually building it. The failure in my sprint demo story happened upstream. The team's code execution was flawless. They didn't fail at the code layer. They failed at the requirements layer.

AI coding tools made the downstream part faster. They didn't change upstream at all. When 47% of projects are failing because of poor understanding, giving teams better code generation is like giving a kitchen better blenders when the recipe is wrong.

What are AI coding agents actually built to do?

Before I go further, I want to be direct about what these tools do well. They're extraordinary executors.

Give an AI coding agent a clear specification. Give it a stack. Give it constraints. It will build. It will iterate. It will debug. It will ship. The execution quality is often better than what you'd get from a human in the same time. That's not hyperbole. That's the product working as designed.

The agent doesn't hallucinate about your architecture. It doesn't get bored. It doesn't make tired mistakes at 10 PM. It takes your specifications and executes them well.

That's the entire design goal. These tools are built to operate at the execution layer. You give them clear input. They produce clear output. Brilliantly good at that job.

The limitation isn't a version number away. Not even close. It's structural, baked into the design philosophy itself, and no amount of parameter scaling or fine-tuning will change what these tools are fundamentally built to do: execute specifications, not question whether those specifications are right.

What structural gap do they leave open?

True story. Last March, a VP at a logistics company in Boucherville walked into a sprint review and said she wanted a dashboard tracking customer churn. Her exact words. The AI agent took that spec, and within two weeks the team had a working dashboard: clean, functional, real-time charts, the whole thing. Nobody asked her why she wanted to track churn, what decision the dashboard would actually inform, or whether churn was even the right metric for the problem she was trying to solve.

Nobody. Not the agent. Not the PM who typed the Jira ticket at 11:30 PM on a Wednesday.

I've watched software get built for 25 years across maybe 200 projects and I have never (not once, and I say this knowing how absolute it sounds) seen an automated coding tool push back on a requirement. Never seen one say "I think you're solving the wrong problem." That's not a bug. That's the architecture. And honestly? I'm not sure I'd want it to try, because the skill required to challenge a stakeholder's assumptions is fundamentally human and messy and context-dependent in ways that don't reduce to prompts.

A Business Analyst sits in a room (or on a Zoom call that should have been an email, let's be honest) and asks uncomfortable questions about intent. About the gap between what people say they want and what they actually need. Slow work. Irreplaceable.

Different discipline entirely.

Here's how the gap plays out. The team gets a requirement that looks buildable, so they build. Agent takes the spec, runs with it, ships in three weeks instead of eight. Six weeks later (and I keep seeing this exact timeline, saw it three times in Montreal last year alone) the stakeholder squints at the demo and says something that makes the room go quiet: "That's not what I meant." Built the wrong thing. On time. Under budget, even. Which somehow makes it worse.

Blame lands on the developer. Unfair. The developer used a tool designed to execute specifications; nobody asked that tool to validate them. The gap lived upstream, in a conversation that never happened, in a question nobody thought to ask.

The bottleneck was never the code. It was the conversation that should have happened before anyone opened their IDE.

Why does speed make this worse, not better?

Faster code generation is positive when requirements are right. Spectacular, even. Validated solutions that used to take six weeks ship in two. Pure win.

When requirements are wrong or ambiguous, speed becomes a liability. Bad assumptions now reach production in days. You discover the mistake faster, sure. But you discover it after deployment, after integration testing, after the team has moved on.

The cost structure flips. In the old world, a wrong requirement took months to reach production. That was expensive, but the long timeline gave you chances to catch the misunderstanding along the way. Requirements were often wrong, sure. But you had runway to fix them before anything shipped.

Now you have a week. The integration is built. The agent executed it flawlessly. The code is clean. The feature works as specified. And the business still doesn't want it.

And the perceived speed gain? Might be an illusion. A randomized controlled trial by METR (published July 2025) tested 16 experienced open-source developers on 246 real tasks. With AI tools, developers took 19% longer to complete tasks. Not faster. Slower. But here's the kicker: those same developers believed they were 24% faster. The perception gap is real, and it's dangerous, because teams are making staffing and timeline decisions based on the feeling of speed rather than measured outcomes.
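To put rough numbers on that perception gap, here's a back-of-the-envelope sketch. The 10-hour task is hypothetical; only the two percentages come from the METR study.

```python
# Back-of-the-envelope on the perception gap. The 10-hour task is
# hypothetical; only the two percentages come from the METR study.
baseline_hours = 10.0
felt = baseline_hours * (1 - 0.24)      # devs believed 24% faster -> 7.6 h
measured = baseline_hours * (1 + 0.19)  # measured 19% slower -> 11.9 h

print(f"felt like {felt:.1f} h, actually took {measured:.1f} h")
print(f"actual is {measured / felt:.0%} of the felt time")  # ~157%
```

Plan a timeline around the felt number and you need roughly half again as much time as you budgeted.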

Speed amplifies both good and bad inputs. Good requirements move fast to production. Bad requirements move just as fast. The acceleration doesn't discriminate.

What does Requirements Intelligence change?

This is where Requirements Intelligence enters. It's not the opposite of code generation. It's the complement. It's the upstream discipline that happens before any agent sits down to code.

Requirements Intelligence asks clarifying questions. What problem does this feature solve? Who experiences that problem? How do you know they experience it? What have they tried before? What constraints matter? What would success look like?

These questions surface contradictions. They reveal assumptions. They validate whether the stated requirement is actually the right requirement. They move from "build what I'm asking for" to "understand what I actually need."
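If you want to make those questions harder to skip, one option is to encode them as a gate a ticket has to pass before any spec reaches an agent. This is a minimal sketch; the structure, field names, and question set are my own illustration, not a standard.

```python
# A minimal sketch of a pre-build requirements gate. The structure and
# question set are illustrative; adapt them to your own process.
from dataclasses import dataclass


@dataclass
class RequirementCheck:
    feature: str
    problem_statement: str = ""   # What problem does this feature solve?
    affected_users: str = ""      # Who experiences that problem?
    evidence: str = ""            # How do you know they experience it?
    prior_attempts: str = ""      # What have they tried before?
    constraints: str = ""         # What constraints matter?
    success_criteria: str = ""    # What would success look like?

    def unanswered(self) -> list[str]:
        """Names of the questions nobody has answered yet."""
        answers = {
            "problem_statement": self.problem_statement,
            "affected_users": self.affected_users,
            "evidence": self.evidence,
            "prior_attempts": self.prior_attempts,
            "constraints": self.constraints,
            "success_criteria": self.success_criteria,
        }
        return [name for name, value in answers.items() if not value.strip()]

    def ready_for_build(self) -> bool:
        return not self.unanswered()


# Usage: the ticket stays blocked until every question has an answer.
req = RequirementCheck(feature="Churn dashboard")
print(req.unanswered())        # all six questions still open
print(req.ready_for_build())   # False: not ready to hand to an agent
```

The code isn't the point; the blocked ticket is. It forces the upstream conversation to happen before the downstream execution starts.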

When Requirements Intelligence is done well, the specification that goes to the AI agent is solid. It's been challenged. It's been validated. It reflects actual stakeholder need. Then when the agent executes it, you get speed and correctness together.

The gap closes because someone upstream asked the right questions first.

What are we seeing in practice?

A manufacturing company came to us with a requirements list for a production scheduling system. The top item was "integrate with legacy ERP to pull real-time inventory." The team was ready to build it. The budget was approved. The timeline was locked.

We asked why. What problem would real-time inventory integration solve?

The answer: they assumed their scheduling system needed live data to be accurate.

We dug deeper. When was the last time inventory changed between their morning planning window and the actual production run?

Silence. Then: "Almost never. Maybe once a quarter."

That one question saved them the entire integration effort. They needed daily snapshots, not real-time data. The architecture became simpler. The timeline compressed. The cost dropped.
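For a sense of how much simpler the snapshot architecture is, here's a minimal sketch of a once-a-day pull. The ERP export URL, response shape, and table names are all hypothetical, invented for illustration.

```python
# Hypothetical daily-snapshot job: pull inventory once per planning window
# instead of maintaining a real-time integration. All names are illustrative.
import sqlite3
from datetime import date

import requests  # assumes the ERP exposes a simple HTTP export


ERP_EXPORT_URL = "https://erp.example.internal/api/inventory/export"  # hypothetical


def pull_daily_snapshot(db_path: str = "scheduling.db") -> int:
    """Fetch today's inventory levels and store them for the morning planning run."""
    resp = requests.get(ERP_EXPORT_URL, timeout=30)
    resp.raise_for_status()
    rows = resp.json()  # e.g. [{"sku": "A-100", "qty": 42}, ...]

    con = sqlite3.connect(db_path)
    with con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS inventory_snapshot "
            "(snapshot_date TEXT, sku TEXT, qty INTEGER)"
        )
        con.executemany(
            "INSERT INTO inventory_snapshot VALUES (?, ?, ?)",
            [(date.today().isoformat(), r["sku"], r["qty"]) for r in rows],
        )
    con.close()
    return len(rows)


# Run once per day from cron before the planning window, e.g.:
# 0 5 * * *  python pull_snapshot.py
```

Compare that to building and operating a streaming pipeline against a legacy ERP. One question about how often the data actually changes is what made the simpler trade the right one.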

An AI coding agent would have built the real-time integration perfectly. It was a stated requirement. The code would have been clean. And it would have solved a problem they didn't have.

Requirements Intelligence caught it upstream.

What this means for your team

The trend is clear. Coding will get faster. AI agents will get smarter at execution. That's the trajectory.

The risk is equally clear. Speed is only valuable if you're moving in the right direction. If your team adopts faster code generation without improving upstream understanding, you're optimizing the wrong layer.

The fix isn't technical. It's process. Before any agent generates any code, someone needs to validate that the requirement is right. That person asks the questions the agent won't ask. That person challenges the specification. That person ensures the team is solving the actual problem.

Invest in Requirements Intelligence. Use AI agents to execute. The bottleneck will move from execution to understanding. That's progress.

What are the most common questions about AI agents and requirements?

Will AI coding agents replace business analysts?

No. AI coding agents excel at execution but don't question requirements. They take specifications and build them fast. Business analysts understand stakeholder intent, validate assumptions, and ask why before anyone writes code. Those are fundamentally different tasks.

The agent is an executor. The analyst is a discoverer. Both roles matter. The agent won't replace the analyst because the agent isn't designed to do what the analyst does.

Why haven't AI coding tools reduced the project failure rate?

AI coding tools solve the execution layer. The failure rate is driven by the requirements layer. According to the PMI Pulse of the Profession 2025, 47% of projects fail to meet their original objectives, with poor requirements as the leading cause.

Faster code generation doesn't fix upstream understanding problems. The agent accelerates the downstream work. It doesn't change what happens upstream. When the upstream is broken, the acceleration just makes the mistake faster.

What's the difference between code generation and Requirements Intelligence?

Code generation takes a clear specification and builds it quickly. Requirements Intelligence asks the right questions before anyone writes a specification.

One is downstream. One is upstream. Both matter, but the bottleneck today is upstream. Most teams have code generation covered now. Few have Requirements Intelligence covered.

Does faster code generation make bad requirements more expensive?

When requirements are right, speed is pure value. When requirements are wrong or ambiguous, speed amplifies the cost.

Bad assumptions now reach production in days instead of months. You find out later that the feature solves a problem the business doesn't have. The mistake is embedded in production faster. The fix is more expensive. The team moves on to the next project before realizing the failure.

What does Requirements Intelligence actually do?

Requirements Intelligence asks clarifying questions. Why does the stakeholder want this feature? What problem are they solving? Have they validated that this problem exists? Are there constraints or preferences they haven't mentioned?

It surfaces contradictions in stated requirements. It challenges whether a feature is solving the right problem. It validates assumptions. It's the discipline that turns good intent into good specifications. The AI agent can't do that because it's not designed to question the specification. It's designed to execute it.

Nicolas Payette
CEO and Founder, Specira AI

Nicolas Payette has spent 25 years in enterprise software delivery, leading digital transformations at companies like Technology Evaluation Centers and Optimal Solutions. He founded Specira AI to solve the root cause of project failure: unclear requirements, not slow code.