Twenty-five years of building software. I've watched the tools get faster, the frameworks get better, the deployment pipelines shrink from days to seconds. And in all that time, the thing that actually determines whether a project succeeds has barely changed.

It's not the code. Never has been.

It's whether the team truly understood what they were supposed to build before they started building it. Right now, in 2026, we have an industry investing billions into making developers write code faster, while the upstream problem that causes the majority of project failures remains almost completely untouched.

Why are requirements the conversation nobody wants to have?

Every software project begins with a conversation. A stakeholder has a vision. A product owner translates it. A developer interprets it. Somewhere in that chain of translation, things get lost.

Not because anyone's bad at their job. Translating a business idea into something precise enough to build correctly is genuinely one of the hardest things in software. We just don't talk about it. We talk about frameworks, architecture, sprint velocity, deployment strategy.

But the conversation that happens before any of that? The one where someone tries to explain what they see in their head, and someone else tries to turn it into a plan? Nobody wants to have that conversation.

$2.41T
The annual cost of poor software quality in the U.S. alone,
driven primarily by requirements-related rework

$2.41 trillion. That's not an abstract projection. I've sat in the room for maybe a dozen of those "that's not what I meant" conversations, and each one felt like watching three months of work evaporate in a single sentence. Multiply that feeling by every software team on the continent, every sprint that went sideways because nobody pushed back on an assumption early enough, and you start to see how the number gets that big.

What do 25 years of software project failures reveal?

Early in my career, I was leading a development team. We got a request from the business side. It was detailed. It had specs. We followed them to the letter.

Three months later, we delivered. And the stakeholder looked at the screen and said: "That's not what I meant."

We hadn't misread the document. We'd built exactly what was written. The problem was that what was written wasn't what was needed. I've since seen this play out at four different companies, in three different industries, at scales from 10 people to 20,000. The words change. The frustration doesn't.

In 2013, I was directing the software engineering teams at Quebecor's Retail Division. Nathalie, who ran operations at the book warehouse, kept sending requirements to my team to adjust their ERP system. Every time we made improvements, it was never quite right. We weren't understanding each other.

With the holiday season approaching, I told her I'd be at the warehouse at 9 AM the next day. She expected another meeting. Instead, I said: "I'm spending the day here. Treat me like an employee. I want to know everything. Do everything. I want to understand how you work."

I moved from station to station. I used their scanners. I watched the screens they worked with. I learned how inventory was navigated, where the friction was, what the actual workflow looked like beyond what any document could capture.

By the end of that day, I could finally translate their pain points into real requirements. Because here's the thing: the client understands their pain points, but they don't always understand their needs.

That day became one of my first real successes as a director, not just because the holiday order volume was handled smoothly for the first time in years, but because Nathalie finally felt understood.

I think about that day a lot. Not the technical outcome, though it worked. What stuck with me is simpler than that: Nathalie had been trying to tell us what she needed for months. We just weren't listening in the right way. The answer was never in the requirements document she emailed over. It was on the warehouse floor, in the way she squinted at a screen that showed inventory counts she didn't trust, in the workaround she'd invented with sticky notes because the system couldn't do what she actually needed. That's the gap where projects live or die, and no spec template in the world can close it without someone willing to show up and pay attention.

Why haven't AI coding tools improved project success rates?

Now, here's where it gets interesting. And honestly, a little concerning.

Over the past two years, AI coding assistants have transformed how software gets written. Copilot sits in 90% of Fortune 100 companies. A controlled study by GitHub and Microsoft clocked developers completing tasks 55% faster. By every conventional metric, a genuine leap. (For a deeper look at what these tools actually deliver, see The Copilot Hangover.)

But project outcomes haven't improved. Rework rates haven't dropped. The Standish Group CHAOS Report still shows 31% to 75% of enterprise projects fail to fully meet objectives. PMI Pulse of the Profession research? 12% outright failure. Another 40% deliver "mixed" results.

[Figure: The Acceleration Paradox. Code speed vs. project outcomes, from manual coding (2020) through AI-assisted coding at 10× (2024) to clarity-first AI (2026+). Speed climbs; outcomes stay flat until clarity comes before code.]
Coding speed has increased dramatically, but project success rates stayed flat. Until you address what comes before the code.

Sit with that for a second. We gave developers what amounts to a turbo button. And the scoreboard didn't move. I had a conversation last year with an engineering director at a Series B company (he asked me not to name them) who said something that stuck: "We shipped more features in Q3 than in the entire previous year. Our NPS went down." He wasn't confused. He knew exactly what happened. They'd built the wrong things, just at unprecedented speed.

I've started calling this the Copilot Hangover. It's that queasy realization, usually around month six of adoption, that velocity without clarity doesn't equal progress. It equals accelerated waste. And once you see it, you can't unsee it.

What is the hidden cost of unclear requirements?

Here's a number I wish I'd had earlier in my career: research consistently shows 40 to 50% of development effort goes toward rework caused by requirement gaps. Not bugs in the traditional sense. Features built correctly according to the spec, but the spec itself was wrong, incomplete, or misunderstood. I remember explaining this to a VP of Engineering once, and his first reaction was disbelief. Then he went and looked at his own team's Jira history. He came back pretty quiet. (We explore why this stays invisible to leadership in The Invisible Problem Costing Your Team Millions.)

Let me make the math personal. A 50-developer team with a $5M annual budget? Somewhere between $2M and $2.5M of that is going toward rebuilding things they already built once. Every single year. And that doesn't even touch the opportunity cost: the features that never shipped, the market windows that slammed shut while the team was stuck in rework cycles.
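If you want to sanity-check that math against your own numbers, here's a back-of-the-envelope sketch in Python, assuming the 40 to 50% rework range applies to your full development budget:

```python
# Back-of-the-envelope rework math, assuming the 40-50% range cited above
# applies to a team's full annual development budget.
annual_budget = 5_000_000          # 50-developer team, $5M per year
rework_share = (0.40, 0.50)        # share of effort lost to requirements rework
low, high = (annual_budget * s for s in rework_share)
print(f"Estimated annual rework cost: ${low:,.0f} to ${high:,.0f}")
# Estimated annual rework cost: $2,000,000 to $2,500,000
```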

The Standish Group's CHAOS research frames it even more starkly: 31% to 75% of enterprise software projects fail to meet objectives. The leading cause isn't technical debt. It's not bad developers. It's the upstream problem, the one nobody budgets for, of building the wrong thing because the requirements were incomplete or misread.

71%
of software projects are challenged or fail outright, with incomplete requirements as a top contributing factor

What does requirements intelligence look like in practice?

For most of my career, the answer to this problem was a person. Specifically, a senior business analyst who'd been burned enough times to develop a sixth sense for where the gaps would hide. Someone like Nathalie's counterpart on our side, the person who knew which questions to ask before anyone opened an IDE. I've been that person on some projects. It's exhausting, and you can't be everywhere.

Those people are increasingly hard to find. The U.S. Bureau of Labor Statistics projects roughly 98,100 management analyst openings per year through 2034, with the occupation growing 9% over the decade, much faster than the average occupation. Senior analysts run $120K to $180K per year (Glassdoor), and they're typically juggling three or four projects at once. You simply can't hire your way out of this. I tried at two different companies. It doesn't scale.

So here's the question that kept nagging at me: what if every project team could get structured, multi-perspective analysis from the very first working session? Not replacing the human judgment (I spent a day in a warehouse to prove that matters), but augmenting it. Making sure the hard questions get surfaced before the first line of code, not discovered after the first failed demo. (See how this works in practice in How to Compress Requirements Discovery from Weeks to Hours.)

That's what I've been building. I genuinely believe AI's most impactful role in software isn't writing code. It's structuring the thinking that happens before code should even be considered. I call it pre-code governance: the missing layer between having a business idea and being ready to build it. It sounds like a small distinction. In practice, it changes everything downstream.
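To make "pre-code governance" concrete, here's a minimal sketch of what that gate might check before a build is allowed to start. The criteria and field names are illustrative assumptions on my part, not Specira AI's actual checklist:

```python
# Minimal sketch of a pre-code governance gate. The checks below are
# illustrative assumptions, not a real product's criteria.
def ready_to_build(spec: dict) -> tuple[bool, list[str]]:
    """Return (passes, blockers) for a proposed build."""
    blockers = []
    if spec.get("unresolved_questions"):
        blockers.append(f"{len(spec['unresolved_questions'])} open questions")
    if not spec.get("stakeholders_signed_off"):
        blockers.append("stakeholders have not signed off")
    if not spec.get("trade_offs_stated"):
        blockers.append("trade-offs never said out loud")
    return (not blockers, blockers)

passes, blockers = ready_to_build({
    "unresolved_questions": ["Which inventory counts does ops trust?"],
    "stakeholders_signed_off": True,
    "trade_offs_stated": False,
})
print("Ready to build" if passes else f"Blocked: {', '.join(blockers)}")
```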

Why does the draft-first vs. decision-first AI distinction matter?

I want to draw a distinction here that took me a while to articulate, even to myself. Most AI tools approach requirements the same way: you give them a prompt, they give you a document. I thought that was progress, initially. I was wrong. Well, partially wrong.

ChatGPT, Copilot, and most general-purpose AI tools are what I'd call draft-first. Give them a prompt, they produce a draft. It looks polished. Reads well. A product manager could hand it to a developer and feel good about it. But it's one perspective, from one angle, with no structured analysis behind it. I've seen teams treat these outputs as finished specs. That's exactly what makes them dangerous: they feel complete when they're not.

What actually works (and I learned this the hard way, over about fifteen projects) is decision-first thinking. You don't start by generating a document. You start by mapping the decisions that need to be made. What do the users actually need? What are the technical constraints the architect is worried about? What security and compliance boxes does legal need checked? And critically: what trade-offs is the team making, and has anyone said those trade-offs out loud?

The difference isn't cosmetic. I've watched two teams start the same week on similar projects. One had a beautiful 30-page spec from ChatGPT. The other had a two-page decision matrix with twelve unresolved questions flagged in red. Guess which team shipped on time.
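For illustration, here's a hypothetical sketch of what one entry in that kind of decision matrix could look like. The fields and example questions are my own assumptions, not a prescribed format:

```python
# Hypothetical sketch of a decision-first matrix entry. Field names and
# example questions are illustrative, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class Decision:
    question: str            # the decision that has to be made out loud
    perspectives: list[str]  # who must weigh in: user, architect, legal...
    trade_off: str           # what each option gives up, stated plainly
    resolved: bool = False   # unresolved entries are the red flags

matrix = [
    Decision("Which inventory counts does the warehouse team trust?",
             ["operations", "developer"], "real-time sync vs. nightly batch"),
    Decision("Which compliance boxes must be checked before launch?",
             ["legal", "security"], "time-to-ship vs. audit readiness"),
]

open_questions = [d.question for d in matrix if not d.resolved]
print(f"{len(open_questions)} unresolved decisions block the build:")
for q in open_questions:
    print(f"  - {q}")
```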

The shift that changes everything

The industry is investing in acceleration. The breakthrough will come from alignment. Speed without clarity is just faster waste. The teams that figure this out first will have a structural advantage that compounds with every project.

The real question isn't "how fast can we code?" It's "how clear are we before we start?"

How does pre-code governance lead to faster delivery?

I hear the objection constantly. At conferences, on LinkedIn, in investor meetings. "Investing more in requirements means slowing down. More process. More bureaucracy." I get why people think that. I used to think it myself, back around 2008, when I was pushing teams to skip the discovery phase so we could "get to the real work faster." That didn't end well.

Here's what traditional requirements gathering actually looks like, in my experience: 3 to 6 weeks of separate meetings with different stakeholders, weeks of email follow-ups where context gets lost in forwarded threads, alignment sessions that somehow create more questions than answers. By the time developers start building, half the original decisions have already drifted because nobody remembers what was agreed in week one.

A structured, AI-guided approach compresses that into a single working session. Not by cutting corners (I spent a day in a warehouse; I'm not the "skip discovery" guy anymore), but by ensuring all critical perspectives get surfaced simultaneously rather than sequentially. Business needs, user experience, technical architecture, security and compliance, all in the same room, at the same time, with a structured framework keeping the conversation honest.

The result is counterintuitive until you live it: faster delivery of the right thing. Not slower delivery of everything. One of my former colleagues put it best: "We spent two extra days up front and saved two months of rework." That math holds up every time I've seen it applied.

Why does the future of software delivery start before code?

Look, I've been at this for twenty-five years. Four companies, three industries, team sizes from 10 to 20,000. The technology has changed completely. The frameworks, the cloud infrastructure, the deployment pipelines: all unrecognizable compared to when I started. And the fundamental problem? The gap between what someone pictures in their head and what actually gets built? Stubbornly, maddeningly unchanged. I watched it happen at Quebecor. I watched it happen at Technology Evaluation Centers. I watched it happen at three consulting clients last year.

I don't think it has to stay that way.

The same AI capabilities that are making developers code 5–10× faster can be pointed at the upstream problem. Not to replace the human judgment and empathy that make great requirements work. You can't automate what I learned on Nathalie's warehouse floor. But you can structure it. You can scale it. You can make sure the hard questions don't get skipped just because someone's feeling pressure about a release date.

The biggest breakthrough in software won't come from writing code faster. It'll come from knowing what to write in the first place. I've spent twenty-five years learning that lesson the hard way. Now I'm trying to make sure other teams don't have to.

Frequently asked questions

What is requirements intelligence?
Requirements intelligence is the practice of using AI to detect ambiguity, contradictions, and missing context in project requirements before development begins. Unlike traditional requirements gathering, it provides structured, machine-assisted analysis that catches the gaps human review often misses.

How is requirements intelligence different from requirements management tools?
Requirements management tools (like Jira, Azure DevOps, or IBM DOORS) store and track requirements. Requirements intelligence actively analyzes them: flagging vague language, identifying contradictions between sections, and surfacing unstated assumptions. It is the difference between a filing cabinet and an auditor.

Why don't AI coding tools fix this problem?
AI coding tools accelerate code generation, but they operate downstream of the real problem. If requirements are unclear or contradictory, generating code faster just produces the wrong thing more efficiently. The Standish Group CHAOS data shows project success rates have not improved despite widespread AI coding tool adoption.

What is pre-code governance?
Pre-code governance is a quality gate applied before any development work begins. It ensures that requirements are complete, unambiguous, and validated by stakeholders before a single line of code is written. This prevents the 40 to 50 percent of development effort typically wasted on rework caused by unclear specifications.

Will AI replace business analysts and requirements professionals?
No. AI cannot replace the human judgment, empathy, and domain expertise that skilled professionals bring to requirements work. What AI can do is structure, scale, and systematize the analysis, ensuring that critical questions are not skipped when deadlines are tight. It augments human expertise rather than replacing it.
Nicolas Payette
CEO and Founder, Specira AI

Nicolas Payette has spent 25 years in enterprise software delivery, leading digital transformations at companies like Technology Evaluation Centers and Optimal Solutions. He founded Specira AI to solve the root cause of project failure: unclear requirements, not slow code.