Last fall, I watched a team of eight developers ship a feature in eleven days that would have taken them six weeks two years ago. Copilot wrote maybe 40% of the code. The pull requests were clean. The CI pipeline stayed green. Everyone felt productive. Then the client looked at the demo and said six words that erased all of it: "That's not what I asked for."
Eleven days. Eight people. A feature nobody wanted. And here's what got me: the team wasn't wrong. They'd followed the spec perfectly. The spec was the problem, and Copilot had made it possible to execute on that broken spec faster than anyone could catch the mistake.
That's the Copilot Hangover. I've been watching it play out across the industry for the past year, and the numbers confirm what I'm seeing on the ground: 90% of Fortune 100 companies now use GitHub Copilot, developers complete tasks 55% faster with AI assistance, and yet project success rates haven't moved. Not an inch.
What is the Copilot Hangover?
I used to think the problem was communication. Better user stories, more detailed acceptance criteria, tighter feedback loops. I was partially right, but the real issue runs deeper than that.
The Copilot Hangover isn't a bug in the AI. It's a consequence of how we chose to deploy it. We optimized for acceleration without ever addressing alignment. We made it possible to write code faster than we can confirm we're writing the right code. And that gap, between velocity and correctness, is where projects go to die.
You've probably seen the pattern yourself. A developer opens their IDE with a vague requirement. Copilot suggests completions that look polished, intelligent, ready to ship. Hours later the code is done. Days later it's deployed. Two weeks later the stakeholder squints at the screen and says: "That's not what I meant." The code was correct. The specification was followed to the letter. But the specification itself? Incomplete. Ambiguous. Built on assumptions that nobody examined. By the time anyone catches it, somewhere around 40 to 50 percent of the sprint's effort has already been poured into building the wrong thing.
Why isn't faster code improving project outcomes?
Look, I've managed development teams for 25 years, and the gap that actually kills projects hasn't changed in all that time: velocity versus alignment. Velocity is how fast a developer turns a specification into code. Copilot is spectacular at that. Alignment is whether the product owner, the architect, the developer, and the person who'll actually sit in front of this thing every day all have the same picture in their heads. Nobody's built a tool for that part yet.
The Copilot Hangover is acceleration without alignment. Building the wrong thing, just faster than ever.
I watched this happen at three different client engagements last year. A product owner writes a requirement. The developer reads it and pictures something different. The stakeholder who requested the feature has a third meaning in mind entirely. Nobody catches the disconnect until code is written, deployed, and failing to deliver value. At that point (and this still surprises people), fixing a requirements error costs 29x more than catching it during planning. The 2024 Stack Overflow Developer Survey confirmed something I've felt in my gut for years: developers spend more time understanding requirements and debugging misaligned code than writing new features. And yet AI tools remain focused almost exclusively on code generation. That mismatch frustrates me more than I can tell you.
80% of shipped features rarely or never used. That's not marginal risk. It's the accumulated weight of every sprint that went sideways because assumptions weren't validated, every feature that shipped without solving the actual problem, every project where the team delivered the specification perfectly but the specification was wrong. The invisible problem, hiding in plain sight across the industry.
What is the difference between draft-first and decision-first AI?
Here's something that took me a while to articulate. Most AI tools today (Copilot, ChatGPT, the lot of them) are what I've started calling draft-first. You give them a prompt. They hand you back a polished, complete-looking document. Professional tone, confident assertions, organized structure. And that's exactly the problem.
I sat in a planning meeting last March where the product owner had asked ChatGPT to write the requirements for a new reporting module. The document was beautiful. Headings, acceptance criteria, edge cases, the works. Everyone in the room nodded along. Nobody pushed back. Why would they? The AI had already pre-answered every question, and the answers looked right. Three sprints later we discovered that the module assumed real-time data access to a system that only syncs overnight. A detail that would have surfaced in the first five minutes of an actual conversation with the operations team.
That's what draft-first tools do: they solve the confidence problem while completely ignoring the alignment problem. I spent maybe six months trying to articulate what the alternative looked like before I landed on a name for it: decision-first thinking. The idea is dead simple. You don't start by generating a document. You start by listing the decisions that need to get made and then you force every person in the room to actually make them. Who are the actual users? What constraints exist? What are we trading away, and who's comfortable with that trade? Only after those decisions survive a real conversation (not a nod-along session with an AI-generated spec) should code enter the picture.
The difference is profound. Draft-first gives you speed masquerading as completeness. Decision-first gives you clarity. And clarity (not velocity) is what actually determines whether a project succeeds.
What is pre-code governance and why does it matter?
Between "we have an idea" and "we're ready to code," something is missing. I've been calling it pre-code governance, though I'll admit the name makes people wince. (Nobody gets excited about the word "governance." I get it.) What I mean is simpler than it sounds: before anyone opens an IDE, has someone actually confirmed that we're all building the same thing?
In 25 years, I can count on one hand the number of teams that do this well. Most jump from a vague Slack conversation to detailed code in hours. The result? Code that's fast and wrong. I was guilty of this myself early in my career, convinced that shipping faster was always better. It took watching maybe a dozen projects fail the same way before I accepted that the speed was the problem, not the solution.
Here's the counterintuitive part: pre-code governance doesn't slow delivery. It accelerates it. The moment you get clarity on what you're building and why, you stop pouring effort into the wrong things. You eliminate roughly 40 to 50 percent of downstream rework. Features actually ship because they solve the actual problem. We explored the mechanics of this when looking at compressing six weeks of requirements into one day: the time saved downstream dwarfs the upfront investment.
A pre-code governance session typically takes one working day. Four critical questions get addressed before any code is written (I'll sketch one way to record the answers right after this list):
Who exactly are the users, and what workflow are we fitting into? Not a persona document. A concrete description of the person who'll use this feature: what they're doing before and after, and what outcome they need.
What are the hard technical constraints? Latency targets, integration points, data volume limits, security boundaries. These shape the architecture. Discovering them after coding begins? That's how teams end up rebuilding from scratch.
What are we explicitly NOT building in this release? Scope is defined as much by what you exclude as what you include. Making exclusions explicit prevents scope creep and those "while we're at it" additions that derail timelines.
What trade-offs are we making, and who approved them? Every project involves trade-offs between speed, quality, cost, and scope. Implicit trade-offs become surprises. Explicit, signed-off trade-offs become decisions. Projects with clear requirements and stakeholder involvement have three times the success rate of those without, per the Standish Group CHAOS Report.
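To make that concrete, here's a minimal sketch of what the output of such a session can look like as a structured artifact instead of a prose document. Everything here is illustrative: the DecisionRecord structure, the field names, and the decisions.json file are my own invention, not a standard, and your team's version will look different.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# A minimal decision record: one entry per decision made in the
# pre-code governance session. Field names are illustrative, not a standard.
@dataclass
class DecisionRecord:
    decision_id: str    # e.g. "DEC-001", referenced later in pull requests
    question: str       # which of the four questions this answers
    decision: str       # the decision itself, in a sentence or two
    trade_off: str      # what we're giving up by deciding this way
    owner: str          # the named person who signed off
    decided_on: date
    not_building: list[str] = field(default_factory=list)  # explicit exclusions

# One record per decision, written before anyone opens an IDE.
records = [
    DecisionRecord(
        decision_id="DEC-001",
        question="Who exactly are the users?",
        decision="Warehouse supervisors reconciling overnight inventory syncs.",
        trade_off="We are not optimizing for head-office analysts this release.",
        owner="Jane Doe, product owner (a named person, not a role)",
        decided_on=date(2025, 1, 14),
        not_building=["real-time dashboards", "mobile app"],
    ),
]

# Persist as a reviewable artifact that lives next to the code.
with open("decisions.json", "w") as f:
    json.dump([asdict(r) for r in records], f, indent=2, default=str)
```

The format doesn't matter. What matters is that every decision has a named owner, an explicit trade-off, and a list of exclusions you can check against later.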
Back in October 2015, I walked into a new role at Technology Evaluation Centers, tasked with leading a digital transformation that spanned sales, legal, operations, and customer support. The team had been using Microsoft XRM, a no-code tool where they'd tried to build all their workflows themselves. They thought they'd just automate everything: link sales calls to inventory, legal approvals, the whole chain. What they'd actually built was a bazooka to kill a mosquito: an overcomplicated beast that no one really understood and that didn't work correctly.
So my architect Alexei Goldman and I sat down with each stakeholder and said, "Let's not start with how you want it done. Let's start with what's bugging you." We made them step back from tech talk and just tell us their pain points, their ideal outcomes, and where they were willing to compromise.
That human-first conversation saved us a ton of headaches. Yes, it took three months to get a solid package for the developers, which is typical once you account for aligning everyone's calendars and the back-and-forth of follow-up questions. But we ended up with a system that was actually maintainable and made sense to everyone. The big takeaway: before you jump into building, make sure you're all speaking the same human language first. Alignment is everything.
How can you adapt your AI coding strategy before it's too late?
If you're using Copilot (and at this point, who isn't?), you've got a choice. Keep using it the way most teams do: as a code-generation engine that makes shipping faster without anyone confirming the team is building the right thing. Or treat it as one piece of a broader approach where alignment happens before the first line of code. I know which one I'd pick. But I'm biased; I've been cleaning up the mess from the first approach for a quarter century.
Four steps that actually work. I've seen each of these make a concrete difference at real organizations:
1. One structured requirements session before coding begins. Not weeks of meetings. One day. Maybe less. The team sits down and aligns on user needs, technical constraints, security boundaries, and trade-offs. Every decision gets written down. Every trade-off gets a name attached to it. I ran one of these at a financial services client last September; it surfaced three critical integration constraints that would have cost the team six weeks of rework if they'd discovered them mid-sprint.
2. Make decisions the artifact, not documents. Who are we building for? What exactly are we building? What are we explicitly not building, and why not? Write those decisions down and reference them during code review to catch drift; there's a sketch of one way to enforce that after this list. Documents go stale. Decisions stay sharp.
3. Sit with the people who actually use the software. Not at the demo stage. At requirements. Embed yourself in their workflow the way I did at that Quebecor warehouse, scanning barcodes and moving boxes until I understood what the ERP screen couldn't tell me. Understanding what users actually need (versus what they think they can ask for) is a skill that no AI can replace.
4. Stop treating code velocity as a measure of progress. It's useful information, sure. But it measures output, not outcomes. Measure alignment instead. Are stakeholders saying "yes, exactly" during design rather than "that's not what I meant" during QA? That's the metric that predicts whether your project will actually succeed.
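To show what step 2 can look like in practice, here's a minimal sketch of a pre-merge check that refuses a pull request unless its description references at least one recorded decision. It assumes the decisions.json artifact and the DEC-### ID convention from the earlier sketch; both are my inventions, not an established tool.

```python
import json
import re
import sys

# Minimal pre-merge check: fail if the PR description doesn't reference
# a recorded decision. Assumes the decisions.json artifact and the
# "DEC-123"-style ID convention from the earlier sketch.
DECISION_ID = re.compile(r"DEC-\d+")

def check_pr(pr_description: str, decisions_path: str = "decisions.json") -> bool:
    with open(decisions_path) as f:
        known = {d["decision_id"] for d in json.load(f)}

    referenced = set(DECISION_ID.findall(pr_description))
    if not referenced:
        print("FAIL: PR references no recorded decision.")
        return False

    unknown = referenced - known
    if unknown:
        print(f"FAIL: PR references decisions never recorded: {sorted(unknown)}")
        return False

    print(f"OK: PR traces back to {sorted(referenced)}")
    return True

if __name__ == "__main__":
    # In CI you'd pass the real PR body; stdin is a stand-in here.
    sys.exit(0 if check_pr(sys.stdin.read()) else 1)
```

A check like this can't tell you whether the decision was right. What it does is force the question "which decision does this code implement?" to get answered before merge instead of after deployment.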
The real Copilot opportunity
We've spent two years using AI to accelerate the execution phase of software delivery. The real breakthrough comes when we point that same capability at the thinking phase: making it easier to ask the right questions before code, not cheaper to fix the answers afterward.
Teams that figure this out first will have a structural advantage that compounds with every project. They'll ship faster because they spend less time building the wrong thing. That's why the biggest breakthrough in software delivery isn't faster code; it's clearer thinking before code.