Technical debt is a fairly recent metaphor in my industry, coined to describe the phenomenon that as we write and modify program code, its growing complexity costs us more and more.
The idea is that as programs age, they cost a company more money in developer time and expertise to maintain or enhance. Debt can also be a function of the resources the application consumes: computing power, database size, disk space, memory and network bandwidth.
This debt metaphor applies especially to the accumulation of code that's duplicated, tightly coupled, untested or outside any known test coverage. All of this creates the situation where a developer goes in to add a feature and finds either that he cannot do it without breaking something else, because of unforeseen dependencies between existing components, or that he must do a great deal more work because considerably more points in the source base demand his attention than anyone estimated.
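To make the coupling problem concrete, here is a minimal, hypothetical Python sketch (the Invoice and Report classes and their internals are invented purely for illustration): the second class reaches into the first one's private representation, so a seemingly local change to Invoice quietly breaks Report.

# Hypothetical example of tight coupling: Report reads Invoice's private
# line tuples directly, so changing that internal representation breaks it.

class Invoice:
    def __init__(self):
        # internal representation: a list of (description, amount) tuples
        self._lines = []

    def add_line(self, description, amount):
        self._lines.append((description, amount))

class Report:
    def total(self, invoice):
        # depends on Invoice's private _lines attribute and its tuple layout;
        # rename _lines or add a field to the tuple and this silently breaks
        return sum(amount for _, amount in invoice._lines)

invoice = Invoice()
invoice.add_line("widgets", 40.0)
invoice.add_line("shipping", 9.5)
print(Report().total(invoice))  # 49.5, until Invoice's internals change

The fix here is obvious enough: give Invoice its own total() and keep Report ignorant of the tuples. Multiplied across an entire source base, that sort of deferred refactoring is exactly the debt being described.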
Obviously, reducing existing debt is thought to be important; avoiding it in the first place is an even bigger goal.
And yet, this metaphor can be abused.
One might postulate that an application's debt can be viewed as a mortgage at a certain interest rate. If the company considers "paying it off," the exercise quickly becomes daunting: a great deal of effort must be expended to make that happen.
Of course, the tools used to measure technical debt are themselves suspect; their use and configuration cannot be taken lightly.
In the end, technical debt isn't really like a mortgage, because one needn't always pay it down. One particularly nasty component may, in the wisdom of a measuring tool, constitute a huge debt in terms of its complexity. But tools don't measure the fact that it may have no bugs, or that there is little or no need to go into it to do anything. Whatever its cost is estimated to be, if no developer ever has to touch it, that imaginary cost doesn't matter.
Other code that must be touched frequently, such as a user interface, is a different story, full of potential pitfalls. It is more important to concentrate on doing that code right than other components: choosing a widely understood framework, ensuring far greater test coverage, and so on.
Beyond these considerations is the longevity of an application. If an application is soon to be replaced by a new version or by another application altogether, it's not important to spend any more money on it. It's like spending money on janitors to clean up a building that will fall to the wrecking ball.
Nevertheless, the metaphor is apt. And it underlines the need for software development to be agile and invested in test- or behavior-driven development techniques, so that no code is written that isn't covered by tests.
Test first; then code. State behavior by means of a test first, then code to solve the challenge.
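As a minimal sketch of that rhythm, assuming Python and its standard unittest module (the slugify function and its rules are invented purely for illustration), the test states the expected behavior before the production code exists:

import unittest

# The test comes first and states the expected behavior; slugify() does not
# exist yet, so these tests fail until code is written to satisfy them.

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_replaces_spaces(self):
        self.assertEqual(slugify("Technical Debt"), "technical-debt")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  agile  "), "agile")

# Only now is the production code written, and only enough to make the tests pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()

The tests fail first, just enough code is written to make them pass, and the cycle repeats, so coverage grows with the code instead of lagging behind it.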