The Fallacy of 'We'll Refactor It Later': How Technical Debt Compounds
Examining the cost of deferring refactoring. Why 'later' never comes and how to embed quality into development velocity.
Core: Every engineer knows the promise: “We’ll refactor this later.” Almost nobody actually refactors the code written three years ago. The reason isn’t a lack of time; it’s compound interest. Poorly designed code spawns more poor code, which spawns still more, until refactoring requires rewriting the entire system.
The Geometry of Poor Decisions
Detail: I once inherited a system where the authentication module was poorly designed. Rather than a clean request/response abstraction, it was a 400-line God function that handled authentication, authorization, logging, metrics, and error handling all in one place. Refactoring it would have taken a week.
The problem was that every new feature had used that 400-line function as a template. By year two, we had 12 different places with similar 300-500 line functions. Refactoring required touching 12 different areas. By year three, 40 places. By year five, the codebase had 200 variations of authentication patterns.
The initial refactoring would have taken one week. After five years, it would have taken eight weeks. But we didn’t need eight weeks—we needed to rewrite the entire system because the poor pattern was now baked into every feature. That rewrite took six months.
The cost of the initial poor decision wasn’t the time to implement it; in fact, the shortcut was faster than doing it right. The cost was the compound effect of that decision becoming the codebase’s standard.
Application: Watch for patterns that become templates. When you see the same code copy-pasted more than twice, extract it into a shared pattern. The cost of extraction when it’s localized is 10% of the cost of extraction when it’s spread across 40 places.
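The extraction the Application describes can be sketched in miniature. This is a hypothetical example, not the system from the anecdote: each concern from the God function becomes one small, single-purpose function, so new features call the helpers instead of copy-pasting a 400-line block. All names and the token/role scheme are invented for illustration.

```python
# Hypothetical sketch: each concern from a copy-pasted auth block becomes
# one small function, so the pattern that gets templated is the good one.

def authenticate(token: str, valid_tokens: set) -> bool:
    """Single responsibility: is this token valid?"""
    return token in valid_tokens

def authorize(user_role: str, required_role: str) -> bool:
    """Single responsibility: may this role perform the action?"""
    return user_role == "admin" or user_role == required_role

def handle_request(token: str, role: str, required_role: str, valid_tokens: set) -> dict:
    # Each concern is one call; new endpoints reuse these instead of
    # copying a 400-line block and adapting it.
    if not authenticate(token, valid_tokens):
        return {"status": 401}
    if not authorize(role, required_role):
        return {"status": 403}
    return {"status": 200}
```

The point of the sketch is the shape, not the logic: once the helpers exist, the twelfth feature costs the same as the second, because the shared pattern is the thing being reused.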
Why Later Never Comes
Core: Refactoring isn’t a line item in sprint planning. Nobody says, “This sprint, we’ll improve code we wrote two years ago instead of shipping new features.” Refactoring happens when it’s forced—when a bug forces you to rewrite code, or when a new feature requires touching the broken area.
Detail: At one company, we had a “refactoring sprint” every quarter. We allocated 20% of engineering time to improving existing systems. Sounds great. In practice, the refactoring rarely happened. Sprint planning would discuss improvements (“let’s improve our logging infrastructure”), but then a customer bug would surface or a feature would take longer than expected. That 20% allocation would get reallocated. The refactoring sprint became a mythical concept.
The exception: when we tied refactoring to specific customer problems, it happened. When a customer experienced a bug traceable to poor logging, we fixed the logging. When pagination was causing database performance issues, we rewrote the pagination. These weren’t “refactoring sprints”—they were customer-driven improvements forced by reality.
The lesson: refactoring that’s decoupled from business value never happens. Refactoring that’s coupled to customer problems, performance issues, or onboarding pain always happens.
Application: Don’t create refactoring backlogs. Instead, create policies: “If we touch this code, we must improve it.” When a bug surfaces in legacy code, fixing it includes improvement. When a feature requires modifying poorly structured code, the feature includes refactoring. This ensures refactoring happens as a side effect of necessary work.
The Wrong Refactoring: Rewriting Instead of Improving
Core: The biggest mistake is treating “refactoring” as “rewriting.”
Detail: One team decided to “refactor” their payment processing system. They decided the current system was “too messy” to refactor, so they’d rewrite it from scratch. Eight months later, they had a new system that worked 90% as well as the old one and had introduced three new classes of bugs. The old system kept running because they couldn’t migrate off it yet.
Now they maintained both systems. The company paid the cost of both: the old system’s operational complexity plus the new system’s instability. Nobody had refactored anything—they’d duplicated the problem.
Real refactoring is incremental. You improve small pieces while maintaining the system’s operation. You don’t rewrite systems—you evolve them. Rewrites are expensive, risky, and rarely finish.
Application: When you encounter legacy code, ask: “What’s the smallest, safest change I can make?” Extract one function. Create one test. Improve one section. Over weeks and months, you’ve refactored the system without the risk of a rewrite.
The Cost of Shortcuts
Core: Every shortcut you take multiplies the cost of future work by a constant factor.
Detail: We had a reporting feature that needed to query data from ten different tables across three databases. The “right” way would have been to model the data properly or create a data warehouse. The shortcut was to extract the data and transform it in the application layer—lots of loops, lots of memory usage, but it worked.
That shortcut worked for a year. Then we needed a second report, so we copy-pasted the logic and adapted it. Three months later, a third report. Three shortcuts had become nine (each report had multiple code paths).
When we needed to change how data was extracted (upgraded database schema), we had to modify nine different code paths. A change that should have taken one day took three days across nine locations.
By year three, we had 20 different extraction patterns scattered across the codebase. Each new report took longer because we had to understand which existing pattern was closest and adapt it. Every extraction change required finding and updating all 20 patterns.
The shortcut saved three days. The shortcuts cost us six weeks every year. We would have paid back the initial cost in the first year and been ahead ever since.
Application: Ask yourself: “If I use this shortcut, will I use the same pattern again?” If yes, pay the upfront cost to do it right. If no, take the shortcut. Most engineers misjudge this—they think their shortcut is unique when it’s actually a repeating pattern.
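Paying the upfront cost here means one shared extraction path instead of twenty. The sketch below is hypothetical (the fetch function and table names are invented stand-ins, not the real system): every report becomes a thin adapter over one extractor, so a schema change touches one function instead of twenty scattered loops.

```python
# Hedged sketch: one shared extraction path, so a schema change is a
# one-place edit rather than a hunt through 20 copy-pasted variants.

def extract_rows(fetch, table: str, columns: list) -> list:
    """The single extraction path shared by every report."""
    # A column rename or schema upgrade now changes here, once.
    return [dict(zip(columns, row)) for row in fetch(table, columns)]

# Each report is a thin adapter over the shared extractor.
def sales_report(fetch):
    return extract_rows(fetch, "orders", ["id", "total"])

def fake_fetch(table, columns):
    # Stand-in for the real database call, for illustration only.
    return [(1, 9.99), (2, 5.00)]

print(sales_report(fake_fetch))  # → [{'id': 1, 'total': 9.99}, {'id': 2, 'total': 5.0}]
```

The design choice is the narrow waist: reports may multiply freely, but the part that knows about the schema never does.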
The Refactoring Trap: Perfection
Core: The dangerous refactoring is the one that chases perfection instead of improving specifics.
Detail: A team spent six months “modernizing” their authentication system. They read papers on OAuth2, learned about JWT tokens, studied best practices. They rewrote the authentication layer using every best practice they could find. The result: a system that was architecturally beautiful and operationally worse.
The old system had been “good enough” for three years. It authenticated users reliably. The new system was more elegant but had edge cases the old system didn’t. It took weeks to stabilize.
The mistake: refactoring for its own sake rather than for specific problems. The team should have asked: “What about our current authentication is causing us pain?” If the answer was “nothing,” refactoring was waste. If the answer was “JWT token expiration is confusing,” then focused refactoring made sense.
Refactoring toward perfection is endless. There’s always another layer of abstraction, another design pattern, another optimization. Refactoring toward specific problems has a clear endpoint: the problem is solved.
Hero Image Prompt: “Timeline visualization showing code quality degradation over time. Start with clean code architecture, then show copies multiplying (branching tree structure), showing compound growth of poor patterns. Include cost curves showing cost to refactor increasing exponentially over years. Visual representation of technical debt accumulation. Dark professional theme with red gradient showing increasing cost.”