I've been building software for 25 years. I've watched the tools get faster, the frameworks get better, the deployment pipelines shrink from days to seconds. And in that entire time, the thing that actually determines whether a project succeeds has barely changed at all.
It's not the code. It's never been the code.
It's whether the team truly understood what they were supposed to build before they started building it. And right now, in 2026, we have an industry that is investing billions into making developers write code faster, while the upstream problem that causes the majority of project failures remains almost completely untouched.
Why are requirements the conversation nobody wants to have?
Every software project begins with a conversation. A stakeholder has a vision. A product owner translates it. A developer interprets it. And somewhere in that chain of translation, things get lost.
Not because anyone is bad at their job. But because translating a business idea into something precise enough to build correctly is genuinely one of the hardest things in software. We just don't talk about it. We talk about the frameworks. The architecture. The sprint velocity. The deployment strategy.
Nobody talks about the conversation that happens before any of that. The one where someone tries to explain what they see in their head, and someone else tries to turn that into a plan.
Between 31% and 75% of enterprise software projects fail to fully meet their objectives, driven primarily by requirements-related rework.
That number isn't theoretical. It's the accumulated weight of every feature that was built wrong, every sprint that went sideways because assumptions went unchecked, every project where the stakeholder looked at the final product and said: "That's not what I meant."
What do 25 years of software project failures reveal?
Early in my career, I was leading a development team. We got a request from the business side. It was detailed. It had specs. We followed them to the letter.
Three months later, we delivered. And the stakeholder looked at the screen and said: "That's not what I meant."
We hadn't misread the document. We'd built exactly what was written. The problem was that what was written wasn't what was needed. I've since seen this play out at four different companies, in three different industries, at scales from 10 people to 20,000. The words change. The frustration doesn't.
In 2013, I was directing the software engineering teams at Quebecor's Retail Division. Nathalie, who ran operations at the book warehouse, kept sending requirements to my team to adjust their ERP system. Every time we made improvements, it was never quite right. We weren't understanding each other.
With the holiday season approaching, I told her I'd be at the warehouse at 9 AM the next day. She expected another meeting. Instead, I said: "I'm spending the day here. Treat me like an employee. I want to know everything. Do everything. I want to understand how you work."
I moved from station to station. I used their scanners. I watched the screens they worked with. I learned how inventory was navigated, where the friction was, what the actual workflow looked like beyond what any document could capture.
By the end of that day, I could finally translate their pain points into real requirements. Because here's the thing: the client understands their pain points, but they don't always understand their needs.
That day became one of my first real successes as a director, not just because the holiday order volume was handled smoothly for the first time in years, but because Nathalie finally felt understood.
That experience taught me something that has shaped everything I've done since: the answer isn't in the document. It's in the room where the work actually happens. And the gap between what someone imagines and what gets built is where projects live or die.
Why haven't AI coding tools improved project success rates?
Now, here's where things get interesting, and honestly, a little concerning.
Over the past two years, AI coding assistants have transformed how software gets written. GitHub Copilot is deployed across 90% of Fortune 100 companies. A controlled study by GitHub and Microsoft found developers completed tasks 55% faster with Copilot. By every conventional metric, we've made a genuine leap forward. (For a deeper look at what these tools actually deliver, see The Copilot Hangover.)
But project outcomes haven't improved. Rework rates haven't dropped. The Standish Group CHAOS Report still shows that somewhere between 31% and 75% of enterprise software projects fail to fully meet their objectives. The PMI Pulse of the Profession research shows 12% of projects outright fail and another 40% deliver "mixed" results.
Think about what that means. We've dramatically increased the speed at which teams can write code, while the thing that actually determines whether that code solves the right problem hasn't changed. We're accelerating without aligning. Building the wrong thing, just faster than ever.
I've started calling this the Copilot Hangover: the moment when organizations realize that velocity without clarity doesn't equal progress. It equals accelerated waste.
What is the hidden cost of unclear requirements?
Here's a number that should keep every CTO awake at night: research consistently shows that 40–50% of development effort goes toward rework caused by requirement gaps. Not bugs in the traditional sense. Features that were built correctly according to the spec, but the spec itself was wrong, incomplete, or misunderstood. (We explore why this problem stays invisible to leadership in The Invisible Problem Costing Your Team Millions.)
For a team of 50 developers with a $5 million annual budget, that's $2 to $2.5 million spent building things that need to be rebuilt. Every year. And that doesn't account for the opportunity cost: the features that didn't ship, the market windows that closed, the innovation that was crowded out by rework.
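The arithmetic here is simple enough to sketch. The snippet below is a back-of-the-envelope calculation using the 40–50% rework share cited above; the budget figure is illustrative, not a measurement of any real team.

```python
# Back-of-the-envelope cost of requirements-driven rework.
# The 40-50% share comes from the research cited above; the
# $5M budget is the illustrative example from the text.

def rework_cost(annual_budget: float, rework_share: float) -> float:
    """Dollars per year spent rebuilding work caused by requirement gaps."""
    return annual_budget * rework_share

budget = 5_000_000  # 50-developer team, $5M annual budget
low = rework_cost(budget, 0.40)
high = rework_cost(budget, 0.50)
print(f"${low:,.0f} to ${high:,.0f} per year lost to requirements rework")
```

Run it against your own budget and rework share; even conservative inputs tend to produce a number larger than most tooling line items.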
There's an even sharper way to frame it. The Standish Group's CHAOS research consistently shows that 31% to 75% of enterprise software projects fail to meet their objectives, with requirements misalignment as the leading cause. Not technical debt. Not bad developers. The upstream problem of building the wrong thing.
What does requirements intelligence look like in practice?
For most of my career, the answer to this problem was experience. You needed a senior business analyst who had seen enough projects go sideways that they could anticipate where the gaps would be. Someone who knew the right questions to ask before anyone started writing code.
The problem is that those people are increasingly hard to find. The U.S. Bureau of Labor Statistics projects about 98,100 management analyst openings per year through 2034, in a field growing 9% over the decade, well over twice as fast as the average occupation. Senior analysts cost $120K–$180K per year (Glassdoor) and typically juggle multiple concurrent projects. You simply can't hire your way out of this problem.
But what if you could give every project team the benefit of structured, multi-perspective analysis from the very first session? Not replacing human judgment, but augmenting it. Ensuring that the hard questions get asked before the first line of code is written, not after the first failed demo. (See how this works in practice in How to Compress Requirements Discovery from Weeks to Hours.)
This is what I've been working on. The idea that AI's most impactful role in software isn't writing code. It's structuring the thinking that happens before code should even be considered. What I call pre-code governance: the missing layer between having a business idea and being ready to build it.
Why does the draft-first vs. decision-first AI distinction matter?
There's a fundamental difference between how most AI tools approach requirements and what actually moves projects forward.
ChatGPT, Copilot, and most general-purpose AI tools are what I'd call draft-first. You give them a prompt, they produce a draft. It looks polished. It sounds good. But it's one perspective, from one angle, with no structured analysis behind it. It's a starting point that feels like a finished product, and that's exactly what makes it dangerous.
What actually works in practice is decision-first thinking. Instead of generating a document, you structure the decisions that need to be made. What are the user needs? What are the technical constraints? What security and compliance requirements apply? What trade-offs are being made, and does everyone agree on them?
The difference isn't cosmetic. It's the difference between a team that starts building with a polished-looking spec that hides unresolved assumptions, and a team that starts building with genuine clarity about what they're doing and why.
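One way to make the distinction concrete: treat requirements as a set of decisions with explicit resolution status, rather than as a prose draft that merely looks finished. The sketch below is hypothetical (the `Decision` structure and `ready_to_build` check are my own illustration, not a real tool); the decision areas mirror the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One decision that must be resolved before building starts."""
    area: str                # e.g. "user needs", "security & compliance"
    question: str
    resolution: str = ""     # empty until stakeholders actually agree
    agreed_by: list[str] = field(default_factory=list)

def ready_to_build(decisions: list[Decision], stakeholders: set[str]) -> bool:
    # Draft-first output hides unresolved assumptions inside polished prose;
    # decision-first makes every open question explicitly blocking.
    return all(
        d.resolution and stakeholders <= set(d.agreed_by)
        for d in decisions
    )

decisions = [
    Decision("user needs", "Who uses this and what pain does it remove?"),
    Decision("technical constraints", "What must the existing system preserve?"),
    Decision("security & compliance", "Which data-handling rules apply?"),
    Decision("trade-offs", "What are we deliberately not building?"),
]
print(ready_to_build(decisions, {"product", "engineering", "operations"}))  # False: nothing resolved yet
```

The point isn't this particular data structure. It's that a draft can be generated in seconds and still leave every one of these checks failing, while a decision-first session doesn't end until they pass.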
The shift that changes everything
The industry is investing in acceleration. The breakthrough will come from alignment. Speed without clarity is just faster waste. The teams that figure this out first will have a structural advantage that compounds with every project.
The real question isn't "how fast can we code?" It's "how clear are we before we start?"
How does pre-code governance lead to faster delivery?
One of the most common objections I hear is that investing more in requirements means slowing down. Adding process. Creating bureaucracy.
The opposite is true. Traditional requirements gathering takes 3 to 6 weeks: separate meetings with different stakeholders, weeks of email follow-ups, alignment sessions that create more questions than answers. By the time developers start building, many of the original decisions have already drifted.
A structured, AI-guided approach can compress that into a single working session. Not by cutting corners, but by ensuring that all the critical perspectives (business needs, user experience, technical architecture, security and compliance) are addressed simultaneously rather than sequentially.
The result isn't slower delivery. It's faster delivery of the right thing. And the difference between those two outcomes is worth more than any coding speedup could ever provide.
Why does the future of software delivery start before code?
I've spent 25 years watching the same failure pattern repeat across every company I've worked at, from startups to enterprises with 20,000 employees. The technology keeps getting better. The tools keep getting faster. And the fundamental problem, the gap between what someone imagines and what gets built, stays stubbornly, persistently the same.
I don't think it has to be that way anymore.
The same AI capabilities that are making developers code 5–10× faster can be pointed at the upstream problem. Not to replace the human judgment and empathy that make great requirements work (the kind of understanding you only get by spending a day in someone's warehouse) but to structure it, scale it, and ensure that the hard questions don't get skipped when deadlines are tight.
The biggest breakthrough in software won't come from writing code faster. It'll come from knowing what to write in the first place.
And honestly, it's about time.
