One of the toughest challenges confronting the technology ‘owner’ of an older legacy application (i.e., the software technology provider, the programmers) is making the case for committing resources to periodic sprints to freshen and declutter the bones of the application. The business case for the service the technology provides is fairly clear: it rests on the functional efficiency of the automation lever the application originally handed the ‘business’. The business, however, is usually reluctant to invest a portion of its budget in periodic infrastructure de-cluttering, whose benefits are less immediate and obvious to all but the most seasoned technical stakeholders on the dev team itself. The future negative implications are often obscure to the business stakeholders who depend on the application.
Here I lay out a bird’s-eye view of the financial justification for these periodic freshen-up efforts.
What Does It Mean to ‘Refactor’ Legacy Software Systems As They Age?
Neglecting to periodically engage in efforts intended to keep legacy systems current can have a profound – and expensive – impact over time. A well-known historical example of such neglect is found in the Y2K panic that occurred in the late 1990s. Many business-critical legacy applications had, essentially, remained stagnant since their deployment. Why not? Once a software program has been deployed, it will continue to do the same calculations, over and over, into perpetuity unless external factors break it. In other words, software doesn’t rust. It doesn’t ‘wear out’. All other things being equal, those algorithms will execute the same job, faithfully, for as long as the supporting machinery remains unchanged.
And there’s the rub. In order to remain an ‘INDUSTRY’, the technology world depends on change. Otherwise it would cease to exist after the first version of an operating platform was released. Remember? Software doesn’t rust. It doesn’t wear out.
Yes. This means that the IBM 360/370 series of mainframes that provided every computation needed for men to land on the moon would provide the exact same computations today. So why don’t we see these systems continuing their assigned jobs today? The world moves on, requiring repeated, iterative upgrades to the entire stack and its environment to avoid being orphaned.
So, unless the organization has designed and built its own self-contained hardware and software ‘ecosystem’, change due to external factors (obsolescence of stack components, planned or otherwise) will inevitably catch up with all app deployments that don’t match the upgrade pace of their environmental bunkmates.
The Y2K Lesson
In the 20th century, especially in the USA, most software apps handled dates in the customary MM/DD/YY format, which featured two-digit notation for the year (YY). Why not? It’s good for a whole century. Except, as we all know, we were hurtling headlong into a brand new century. Many well-functioning, business-critical legacy systems, especially financial support systems, had remained essentially frozen for decades for lack of periodic sprints to keep them current. This led to the panicked, pressure-filled projects of the late 1990s to identify vulnerabilities and to refresh and update business-critical legacy systems before the clock struck midnight on December 31, 1999.
Why? Because the business owners poorly understood the threat that was lurking in their stagnant systems until it was almost too late. They only knew that the original development costs were $x. The systems had performed their jobs with little intervention since inception. All of the original programmers were who knew where. The ROI on that initial investment was spectacular. What’s to worry?
The pages of history chronicle the resulting panic. Business-side stakeholders are subject to the same turnover dynamic as any other portion of the workforce. By the time they started to ponder the consequences, the original business-side stakeholders were usually gone, and the new folks had a tenuous grasp on the realities of the threat, much less what to do about it. I, for one, slept soundly on December 31, 1999, as I had led a team to certify a large, business-critical app for a major financial company. We handled it. But it was a massive, white-knuckle effort to meet the project requirements by the deadline.
How Does Something That Doesn’t Rust or Wear Out Do It Anyway?
Legacy software systems are the backbone of many organizations, but as they age, absent periodic refreshing efforts, they become more difficult and costly to maintain. This happens even if not a single character of code in the application is changed. The code is static. But the world around it is not.
Refactoring (restructuring existing code without changing its external behavior) can provide both immediate and long-term financial benefits. Yet that very stability is the trip wire that lulls business stakeholders into a false sense of security. How many times have our concerns been dismissed with “If it ain’t broke, don’t fix it”? The proper response is, “It’s breaking right now. You just can’t see the cracks yet.”
Seeking A Breakthrough
Below is a financial defense that supports the case for periodic refactoring of a fictitious exemplar: A ten-year-old legacy software application. Point by point.
1. Maintenance Cost Reduction
Current Scenario: As software ages, the cost of maintenance grows. Legacy systems often accumulate technical debt due to outdated technology, hard-to-understand code, and lack of documentation.
Cost Implications: According to industry reports, 60-80% of IT budgets for legacy systems are allocated toward maintaining existing software rather than developing new features. This is unsustainable in the long run.
Refactoring Benefits: Periodic refactoring reduces technical debt by cleaning up the codebase, improving structure, and adopting modern development practices. Over time, this decreases the effort required for bug fixing and maintenance tasks, potentially saving 15-30% of yearly IT operating costs.
2. Improved Developer Productivity
Current Scenario: Legacy systems typically mean steadily declining productivity for developers, who need more and more time to understand and work around poorly structured code. The sources of most poorly structured code that accumulates over time are the following:
- Developer turnover. Over time, people come and go. A new developer walking into the middle of an existing legacy system usually does not have a clear understanding of the original imperatives and techniques adopted by the original developers, who are often long gone. A prevailing climate of austerity and demand for quick turnaround often has this fresh developer re-inventing wheels in the codebase (commonly called ‘code clutter’) to accomplish a task that was already handled, but handled in a section of the legacy code he or she is not yet familiar with. With deadline pressure looming, it’s faster and cheaper, in the short term, to just write your own function than to wade through hundreds of thousands of lines of legacy code to handle it ‘properly’. This leads to confusing logic redundancies and bloat that increasingly hamper all future developer efforts (see the sketch after this list).
- Reluctance to keep supporting platforms in the stack current over time. It is rare, if not unheard of, for today’s systems to have been developed in a vacuum. Most software today is built along the ‘stack’ model. There are usually multiple supporting layers, each possibly provided by a different author, in the stack upon which the original developers built their app.
Like anything else, these components are periodically updated and refreshed by their own authors. (Technical organizations, by and large, already fully embrace the benefits of refactoring.) As this happens, support for the older versions of the components becomes increasingly hard to obtain. If the organization does not approve periodic upgrades, due to perceived costs or other factors, this tends to orphan the legacy app over time. At some point there will be few people left who are familiar enough with a given stack component to provide technical assistance to the dev team. (COBOL? FORTRAN? Anyone? Bueller?) This leaves the dev team saddled with familiarizing themselves with deprecated platform components, whose own authors have essentially abandoned those versions (abandonware), before they can provide meaningful bug fixes and enhancements. Moreover, ambitious, committed, energetic professionals – you know, the kind of people dev teams lust after? – are loath to stay behind in the glass-ceiling world of outdated tools, and they move on frequently from these backwaters.
- Poor understanding of the financial benefits. The payback. This occurs most often due to what I call “frog in a pot syndrome”. Most people are familiar with the oddity of a frog placed in a pot of ambient-temperature water: when heat is gradually added to the pot, the frog remains unaware that it will eventually be boiled alive. The code-cluttering process occurs at such a gradual rate that the business’ perception of the accumulated dead weight that has crept in over time stays hazy at best.
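To make the ‘code clutter’ idea concrete, here is a minimal, hypothetical Python sketch. The duplicated date helpers stand in for the redundant functions a new developer writes under deadline pressure; every module and function name here is invented for illustration.

```python
# Hypothetical illustration of 'code clutter': two developers, years apart,
# each wrote their own version of the same logic in different corners of
# the codebase. All names are invented.

from datetime import datetime

# Written by the original team, buried in (say) billing/invoices.py.
def invoice_date_to_iso(date_str: str) -> str:
    month, day, year = date_str.split("/")
    # Note: silently mishandles two-digit years -- shades of Y2K.
    return f"{year}-{month.zfill(2)}-{day.zfill(2)}"

# Written years later by a new hire who never found the function above,
# buried in (say) reports/exports.py -- same job, subtly different behavior.
def format_report_date(d: str) -> str:
    return datetime.strptime(d, "%m/%d/%Y").strftime("%Y-%m-%d")

# After a refactoring sprint: one documented utility that both call sites
# import, so future fixes happen in exactly one place.
def to_iso_date(date_str: str) -> str:
    """Convert an MM/DD/YYYY date string to ISO 8601 (YYYY-MM-DD)."""
    return datetime.strptime(date_str, "%m/%d/%Y").strftime("%Y-%m-%d")
```

Multiply that pattern across hundreds of functions and a decade of turnover, and the bloat becomes very real.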
The following takes a deeper dive into quantifying these often-overlooked details.
The Man-Hour Quantifier and the Negative Consequences of Standing Pat
Developer salaries are likely the single largest operational expense in software development. Reduced productivity means more time (and thus more cost) to develop or fix features.
Refactored code is easier to read, maintain, and extend. This can boost developer productivity by 20-40%, leading to faster development cycles and lower labor costs. Over a multi-year period, that can add up to significant savings.
Quantifying Cost-Benefits
Let’s assume an average developer salary of $100,000 annually. At roughly 2,000 working hours per year, this yields a net hourly expense per staff member of $50 (sans any benefits or real-property investment). This is the Man-Hour unit of measure. Improving productivity by even 20% can save $20,000 per developer, per year. For a dev team with five members…Well. I think every reader can do that math.
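For readers who want the arithmetic spelled out, here is the Man-Hour math as a small Python sketch, using only the assumptions above (salary only, roughly 2,000 working hours per year):

```python
# Man-Hour arithmetic from the assumptions above: $100,000 salary,
# ~2,000 working hours per year, benefits and overhead excluded.

ANNUAL_SALARY = 100_000   # per developer, salary only
HOURS_PER_YEAR = 2_000    # ~40 hours/week x 50 weeks

man_hour_cost = ANNUAL_SALARY / HOURS_PER_YEAR  # $50 per Man-Hour

def annual_savings(team_size: int, productivity_gain: float) -> float:
    """Salary dollars recovered per year for a given productivity gain."""
    return team_size * ANNUAL_SALARY * productivity_gain

print(f"Man-Hour cost: ${man_hour_cost:.0f}/hour")
print(f"5 devs at a 20% gain: ${annual_savings(5, 0.20):,.0f}/year")  # $100,000
```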
3. Reduction in System Downtime and Bugs
Current Scenario: Older systems often suffer from more frequent bugs and system failures due to accumulated complexity and fragility. Every incident can result in costly downtime and loss of business operations.
Cost Implications: Industry estimates put the average cost of downtime anywhere from $100,000 per hour to $5,600 per minute, depending on the industry. Frequent downtime and slow recovery from errors directly impact the company’s bottom line.
Refactoring Benefits: Refactored systems are less prone to crashes and bugs because the code is simplified, and potential problem areas are removed. Even reducing downtime by 10% can save tens of thousands of dollars annually in industries where uptime is critical.
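To show how those downtime figures compound, here is a quick sketch using the per-minute figure quoted above; the incident counts and durations are hypothetical, chosen only for illustration:

```python
# Downtime cost sketch using the per-minute figure cited above.
# Incident frequency and duration are hypothetical.

COST_PER_MINUTE = 5_600

def annual_downtime_cost(incidents_per_year: int, avg_minutes: float) -> float:
    """Total yearly downtime cost for a given incident profile."""
    return incidents_per_year * avg_minutes * COST_PER_MINUTE

baseline = annual_downtime_cost(incidents_per_year=3, avg_minutes=20)
savings = baseline * 0.10  # the 10% reduction cited above
print(f"Baseline downtime cost: ${baseline:,.0f}/year")  # $336,000
print(f"A 10% reduction saves:  ${savings:,.0f}/year")   # $33,600
```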
4. Extension of System Lifespan
Current Scenario: Without periodic refactoring, legacy systems face the risk of obsolescence as newer technologies emerge, until replacement of the entire system becomes inevitable.
Cost Implications: A full system replacement can be prohibitively expensive, potentially costing millions of dollars, especially for enterprise systems deeply integrated into an organization.
Refactoring Benefits: By refactoring, the lifespan of the existing system can be extended, delaying or even avoiding the need for a full-scale system replacement. This results in significant capital expenditure savings. Even if refactoring costs 10-20% of what a full replacement would cost, the investment is recouped through prolonged system usability and avoidance of major disruptions.
5. Facilitating Scalability and Modernization
Current Scenario: Legacy systems often struggle to integrate with modern technologies (e.g., cloud services, APIs) and support scalability, limiting the company’s ability to innovate or meet growing demands.
Cost Implications: A system that cannot scale or integrate with new tools requires costly custom solutions or workarounds. This slows down innovation and increases operational costs.
Refactoring Benefits: By refactoring, the system can be made more modular, enabling easier integration with modern technologies and cloud-based infrastructures. This reduces future integration costs and prepares the company for growth. Improved scalability can save 10-20% in additional development costs for future projects.
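As a hypothetical sketch of what ‘more modular’ looks like in code, the example below isolates a legacy component behind a narrow interface so that a modern, cloud-backed implementation can be swapped in without touching any call sites. All class and method names are invented.

```python
# Hypothetical sketch: isolating a legacy component behind an interface
# so a modern back end can be swapped in later. All names are invented.

from typing import Protocol

class CustomerStore(Protocol):
    """The narrow seam the rest of the app codes against."""
    def get_customer(self, customer_id: str) -> dict: ...

class LegacyFlatFileStore:
    """Adapter over the existing legacy records (stand-in logic)."""
    def get_customer(self, customer_id: str) -> dict:
        # In reality this would parse the legacy fixed-width records.
        return {"id": customer_id, "source": "flat-file"}

class CloudApiStore:
    """Drop-in modern replacement calling a hypothetical REST service."""
    def get_customer(self, customer_id: str) -> dict:
        # e.g., an HTTP GET against a cloud customer-records API.
        return {"id": customer_id, "source": "cloud-api"}

def print_customer(store: CustomerStore, customer_id: str) -> None:
    # Call sites depend only on the interface, not on either implementation.
    print(store.get_customer(customer_id))

print_customer(LegacyFlatFileStore(), "42")  # today
print_customer(CloudApiStore(), "42")        # after migration; no call-site changes
```

The payoff is that the migration cost is paid once, at the seam, instead of at every call site scattered through the codebase.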
6. Regulatory Compliance and Risk Mitigation
Current Scenario: Legacy systems might not comply with the latest regulatory standards, leading to legal risks and penalties.
Cost Implications: Non-compliance with data privacy and security regulations (such as GDPR or HIPAA) can result in fines of up to 4% of annual revenue.
Refactoring Benefits: Regular refactoring ensures that the software meets modern security and compliance standards, reducing the risk of legal costs and fines. While the cost savings are more indirect, they help avoid large, unforeseen expenses in the event of a security breach or regulatory violation.
7. Opportunity Cost of Not Refactoring
Current Scenario: Every dollar spent on maintaining a cumbersome, outdated system is a dollar that cannot be invested in new innovations or revenue-generating activities.
Cost Implications: The opportunity cost of not refactoring is significant. The longer a company delays refactoring, the more resources are tied up in maintaining the status quo, and the less budget is available for innovation.
Refactoring Benefits: By refactoring and streamlining the current system, the business can reallocate resources to focus on growth initiatives and innovation, which can directly impact revenue.
Return on Investment (ROI) Calculation for Refactoring
Refactoring Costs: Assume the refactoring costs for a legacy system amount to $500,000 over a year.
Cost Savings:
- Maintenance reduction: 15% of a $2 million maintenance budget = $300,000 annually.
- Productivity gains: 20% productivity improvement for 10 developers earning $100,000 each = $200,000 annually.
- Reduced downtime: estimated reduction of $50,000 annually.
Total Annual Savings: $550,000.
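The same arithmetic as a runnable sketch; every input is one of the assumptions stated above, not measured data:

```python
# ROI arithmetic for the scenario above; all inputs are the stated assumptions.

refactoring_cost = 500_000

maintenance_savings  = 0.15 * 2_000_000     # 15% of a $2M budget = $300,000
productivity_savings = 0.20 * 10 * 100_000  # 20% gain, 10 devs   = $200,000
downtime_savings     = 50_000               # estimated reduction

total_annual_savings = maintenance_savings + productivity_savings + downtime_savings
payback_years = refactoring_cost / total_annual_savings

print(f"Total annual savings: ${total_annual_savings:,.0f}")  # $550,000
print(f"Payback period: {payback_years:.2f} years")           # ~0.91 years
```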
Bottom Line?
In this scenario, the investment in refactoring would break even within the first year, and the company would see positive ROI in subsequent years.
Conclusion
Periodic refactoring of a ten-year-old legacy software application offers substantial financial benefits. By reducing maintenance costs, improving developer productivity, minimizing downtime, extending the system’s lifespan, and enabling scalability, refactoring provides a solid return on investment. While the upfront cost may seem significant, the long-term savings and risk mitigation make it a financially sound decision for any organization dependent on legacy systems.
Key terms used: Man-Hour, Technical Debt, Frog in a Pot
The initial draft of this post was co-written with AI tools, the product of which was then tweaked by the author to improve formatting, grammar, clarity, and technical specificity.