Warning:
This is long. I didn’t have time to make it shorter.

Chet and I were chatting about technical debt this morning. One foundation for our chat was this item from Ward Cunningham, who created the term. What Chet said to start the conversation was that he felt that the term “technical debt”, as used, is too mild to properly describe some pretty bad behavior. I confess that I went so far as to suggest that even Ward’s original coining of the phrase was perhaps too gentle.

The center of our concern is that debt is a thing that can be entered into wisely: we borrow money to go to college, or to buy a car, or to buy a house. In return for paying some interest, we can get the benefit just as if we had had the money all along. That can be a perfectly rational thing to do, depending on the interest rate, one’s ability to pay, and so on.

Now we all know cases of people who have taken on debt unwisely. At one point in my life, I had credit card debt amounting to about one-third of my annual income before taxes. That was unwise and took me some time to clear up. Even clearing it up was costly: my recollection is that I sold some stock before its time. Quite likely, payday loans, as popular as they are, are often unwise, because the interest rates are incredibly punitive. And so on. There are many ways to take on debt unwisely.

But, of course, no one here is dumb enough to do that. We all make judicious decisions about when to take on debt and when not, ensuring that we are always within our ability to pay, and never spending more than we can afford. We gain higher benefit from our borrowing, and can readily afford the cost of paying back the debt.

And we think the same thing about “technical debt” as the term is used in practice, namely taking some shortcuts in the code now, so as to build more features than we otherwise could in the short term. We imagine that it’s possible to make a rational decision about that, by weighing how many features we’ll get tomorrow against the slow-down we’ll suffer as we pay back the “debt”. And, for that matter, we imagine that we will be able to pay it back, and that we will in fact pay it back. I emphasize the word “imagine”.

Chet and I touched on a few topics, which I’ll talk about here:

  • what are we trying to accomplish with code quality?
  • can a tactical decision to reduce “quality” ever make sense?
  • do teams really make such decisions consciously?
  • how do experience and skill come into the quality-speed trade-offs?
  • how does “craftsmanship” apply to this topic?
  • what can be said about attempts to compensate for skill with tools and processes?
  • is the term itself a problem?

What is code quality for?1

The point of code quality, as I see it, is to deliver more working software, sooner, over the entire time the effort goes on. Since we do things highest value first, it seems likely that we could accept a decline in the rate of delivery as time goes on, in return for faster delivery at the beginning. Note, however, that if code quality declines rapidly enough, it makes each capability more and more expensive, even as that capability’s value declines, both because we’re doing highest value first, and because a capability’s value generally declines the later we ship it anyway. So we can’t allow ourselves to slow down too much, or we’ll fail entirely to deliver some key capabilities.
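Here’s a toy model of that trade-off, in Python, with numbers I’ve simply invented for illustration: one team holds quality and delivers at a steady rate; another cuts corners, starts out faster, and slows a little each period as the code pushes back.

    # A toy model of delivery over time. All numbers are invented;
    # only the shape of the curves is meant seriously.

    def cumulative(rate0: float, decay: float, periods: int) -> float:
        """Total capabilities delivered if the delivery rate decays each period."""
        total, rate = 0.0, rate0
        for _ in range(periods):
            total += rate
            rate *= (1.0 - decay)  # the code pushes back a bit more each period
        return total

    for n in (3, 6, 12):
        steady = cumulative(4.0, 0.00, n)  # quality held: 4 per period, no decay
        rushed = cumulative(6.0, 0.15, n)  # corners cut: faster start, 15% decay
        print(f"after {n:2d} periods: steady {steady:5.1f}, corner-cut {rushed:5.1f}")

With these made-up numbers, the corner-cutters lead at three periods, are roughly even at six, and are far behind at twelve. That’s the “can’t slow down too much” point in miniature.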

We keep the code at high quality, well-tested and bright, to keep delivering new capabilities. How does that work?

Let’s just consider two aspects of “code quality”: the amount of testing that has gone into the code, and the amount of design work that is embedded in the code.

If we do not do sufficient testing, we have more defects. If we have more defects, we have more defects that are bad enough that we have to fix them. Fixing a defect almost invariably takes much longer than it would have taken to prevent it. So defects slow us down.
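To make that concrete, here’s a tiny sketch in Python, around a hypothetical function of my own invention. The boundary test takes a minute to write; the defect it prevents could take hours to chase after shipping.

    # Hypothetical example: boundary defects ('>' typed where '>=' was meant)
    # are cheap to prevent with a test, expensive to find in production.

    def discounted_total(subtotal: float, threshold: float = 100.0) -> float:
        """Apply a 10% discount to orders at or above the threshold."""
        if subtotal >= threshold:  # easy to mistype as '>' and ship a defect
            return round(subtotal * 0.90, 2)
        return round(subtotal, 2)

    def test_discount_applies_exactly_at_threshold():
        # The boundary is exactly where that defect would hide.
        assert discounted_total(100.0) == 90.0

    def test_no_discount_below_threshold():
        assert discounted_total(99.99) == 99.99

    test_discount_applies_exactly_at_threshold()
    test_no_discount_below_threshold()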

If our design is poor, adding new capabilities takes more time, because the existing code does not support them, or actively pushes back against having them put in. Changing poorly-designed code causes us to inject more defects. Adding capabilities to well-designed code delivers more capabilities in better working order.
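Here’s what that push-back can look like in miniature, again with hypothetical code of my own. In the first version, every new shipping method means editing a conditional, and such conditionals tend to get copied all over the system; in the second, a new method is one line in one place.

    # Hypothetical code illustrating design push-back.

    # Poorly designed: each new shipping method means another branch,
    # and the branches tend to get duplicated wherever shipping comes up.
    def shipping_cost_v1(method: str, weight_kg: float) -> float:
        if method == "ground":
            return 5.00 + 1.00 * weight_kg
        elif method == "air":
            return 12.00 + 2.50 * weight_kg
        raise ValueError(f"unknown method: {method}")

    # Better designed: the rates live in one table; adding "overnight"
    # is one new line, and no existing code changes.
    SHIPPING_RATES = {
        "ground": (5.00, 1.00),  # (base, per kg)
        "air": (12.00, 2.50),
    }

    def shipping_cost_v2(method: str, weight_kg: float) -> float:
        base, per_kg = SHIPPING_RATES[method]
        return base + per_kg * weight_kg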

Bright, well-tested, well-designed code allows us to deliver more capability over a longer time period. It is a more timely and more cost-effective way to work. It best serves the business purpose of the software.

Tactical reduction in quality

In theory, theory and practice are the same. In practice, they are not.

  • A. Einstein

Clearly it is possible, in theory, to polish the code too much, or to write too many tests. Since each bit of polish, or each test, can be said to increase quality, it’s at least theoretically possible to go too far. Therefore, it’s at least theoretically possible that a reduction in quality of code would result in more features, sufficient sustainability, and few enough defects to make for an overall improvement.

In theory. It would be incredibly rare to see a piece of software with too many tests. It would be incredibly rare to see a piece of software whose code was too habitable, in the sense of slowing down real progress. In theory, we need to worry about the code being too bright, too well-designed, too well tested. In practice, this rarely happens. It is not happening on your project.

We’re all part of the same company, aren’t we?

  • A. Manager

Your Product Champion, Product Owner, or manager is hungry for new capabilities in the product. That’s their job: delivering the maximum value for the time invested. It is absolutely natural for them to push for more capability and to want to be sure you’re not wasting time on things that don’t matter. So we’ll hear things like this:

“But what about the short term? You’re pretty good programmers here, aren’t you? We hire only the best, don’t we? So surely you programmers can cut back a bit on testing and just pay more attention so as not to have too many defects. Surely a bit of code review could find problems, and save us all that testing, at least for a while.”

“Surely you can let the refactoring slide for a while, can’t you? OK, sure, the design will be a bit harder to work with, for a while, but you can push in a few key features for the next release, and improve it later, can’t you? We’re all working for the same company here, and the company is counting on you.”

Often, even without these things being said explicitly, developers will feel the pressure to deliver and try to go faster by relying on brain power to figure out complex code instead of fixing it, and relying on brain power to be sure it works rather than testing it. This is a fool’s game, but under pressure we’re all fools.

Still, it is probably possible to go a bit faster for a short while with a bit less attention to code quality. The capabilities we build will be a bit more buggy, the code will resist us a little more, but for a while, we can probably go faster. Probably. Possibly. Maybe. But when?

It’s likely that everyone within sight of these words has cut some corners on quality and managed to get away with it. The software did ship, the bugs weren’t too awful, the debugging overtime didn’t kill anyone, the customers weren’t too unhappy. Therefore, it was a good decision.

No!!! The fact that we got away with something does not mean it was a good decision. It means that it was not a fatal decision.2 There is really no evidence about what was fastest, much less best, coming from a situation where we cut corners and got away with it. No evidence at all.

But lack of evidence doesn’t mean it’s a bad idea, at least not always. Maybe there are cases where it’s a good idea to test less than we “should”, or to keep the code less bright than we “should”. The question becomes, what’s “should”?

“Should” is about our best judgment. “Should” is what we’d do if we were as good as we want to be, as conscientious as we want to be, as squared-away as we want to be. And when it comes to code, the best judgment in the room is the programmers’, not the managers’, Product Owners’, or Product Champions’. The programmers need to understand how good the code should be, and they need to hold themselves responsible for keeping it that way. If the managers, Product Owners, and Product Champions are any good at all, they’ll demand that the programmers keep the code as good as it “should” be, because that’s the way to deliver the most capability over the course of the effort.3

“Yeah, Ron, but is it ever right?”

Well, my best guess, based on over a half-century of programming, and nearly two decades of “Agile”, is that slacking on testing and refactoring is never right. When Chet and I are programming, we can feel the push-back in the code tomorrow if we didn’t test or refactor enough today. We will slow down tomorrow because of inferior work today!

Sometimes, we don’t even realize today that we’re not working carefully enough. We might not even notice tomorrow. Sooner or later, we’ll notice. And when we do, we don’t ask permission to test, we don’t ask permission to refactor. Instead, we bear down a bit on testing and refactoring. We do a bit more quality-focused work than our normal ideal level, because we are trying to clean up the campground a bit, leaving it better than we found it. Last time we were here, we messed it up more than we should have: this time, we bring it back.
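In code, cleaning up the campground is usually small and local, done while we’re in the neighborhood anyway. A made-up before-and-after of my own:

    # Hypothetical code. We came to this function for a business change,
    # found it cryptic, and leave it a little better than we found it.

    # Before: the code as we found it.
    def chg(d):
        return d * 0.015 if d > 30 else 0.0

    # After: same behavior, but the next visitor won't have to decode it.
    GRACE_PERIOD_DAYS = 30
    DAILY_LATE_RATE = 0.015

    def late_fee(days_overdue: int) -> float:
        """Fees accrue at the daily rate once the grace period is past."""
        if days_overdue <= GRACE_PERIOD_DAYS:
            return 0.0
        return days_overdue * DAILY_LATE_RATE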

Summing up on “tactical” reduction in quality, I hold that:

  • Tactical reduction is [almost] never a good idea;
  • Recovering from it is a technical concern for the technical team;
  • Recovering should [almost] only ever be done in code that’s already being worked on for business reasons;
  • The result is that once we slack (er, back off on quality for tactical reasons), we’ll never quite recover.

Yes. I hold that once we slack on quality, we’ll never get it all the way back. We’ll be slower forever by some amount.

Conscious decision?

I’ll be brief this time: I believe that teams generally slack on quality due to feeling pressure, and either do not make a conscious decision to do so, or rationalize the decision based on the notion that they can make up the time. I believe that any such decision is most likely to be wrong.

Most such decisions are survivable, mind you. They’re just not ideal. How far below ideal one falls depends on how often, and how long, one does it. Have you ever worked in a code base that really fought back against you? Decisions were made, consciously or unconsciously, that caused that.

The role of experience and skill

With higher skill in testing and refactoring, we are better equipped to do them, to do them well, and to see when they are needed. This is probably the definition of skill.

The idea of the “Agile Developer Skills” workshop that Chet and I do is to expose people to doing work while applying these skills, and to help them see what difference it makes when they apply them well, or not so well. We try to give enough of a taste of the ideas to give participants the beginnings of the ability to sense when their code is not of high enough quality, and the beginnings of the skills needed to improve it.

We’re trying to set them on a long road to doing things better and better.

The role of “craftsmanship”

The “Craftsmanship” movement, to my taste, is about doing the same thing: giving people a taste for code quality, a start at how to provide it, a sense of its impact. The movement also tries to imbue an almost moralistic view of doing these things. It says, nearly explicitly, that if you don’t do these things, you’re not working professionally.

I tend to agree with that view, though I do believe that people get to decide how much to invest in their professional life. I’d prefer to work with people who are rather deeply invested in improving, and in my own way I try to help people get that attitude.

I am very supportive of the Craftsmanship movement, if a bit leery of the moralistic tones sometimes taken on.

Compensating with tools, standards, or practices

Chet and I spoke briefly of some attempts we’re aware of to produce better code quality by causing inexperienced programmers to use patterns or frameworks provided by better programmers. This comes down, of course, on the “processes and tools” side, more than on the “individuals and interactions” side of the balance.

That said, I prefer working with a compiler to coding in binary, and I prefer using an editor that knows the calling sequences of library routines, and I prefer using libraries and other frameworks that help me.

When they help me. Not every library and framework really lets me go faster, nor lets me write better systems.

So this kind of thing needs to be used judiciously. Used well, teaching people to code using provided patterns might help them learn why the patterns are good, learn to tell the difference between good and not so good, and, in the end, become better programmers. Unfortunately, when we hire people who are not very good as a matter of policy, then when they become good, they’ll often move on, so that we do not get the benefit of their learning. This seems to us to be quite short-sighted. Still, companies get to do what they want to do.

That said, tools and processes can be quite useful when used well. I use lots of tools, and a new process every day, and I’m doing OK. To select a tool or process that works well with “Agile”, I think you need to be pretty darn good at “Agile”, whatever that is. Which means that such tools and processes should probably not be chosen by people who have not lived in code for a long time. But again, companies, and people, get to do what they want to do. I’m just here to tell you what I think.

The term “technical debt”

On the ground, the term “technical debt” suggests that just as we can borrow money judiciously to buy a car or a house or a rabbit, we can build slightly inferior code today to get more capability. My long experience suggests that the analogy is extremely poor:

  • We often defer the decision to people who don’t even understand code;
  • Most of us cannot make a judicious determination of when to do it, even if we do understand code;
  • Once we have “borrowed”, we can almost never pay it back.

For these reasons, as the term is used, I think the analogy to monetary debt is quite weak.

As for Ward’s original notion, it seems to me that he was referring to a gap between what the code “understands” and what we understand as its authors.

One of Kent Beck’s “Rules of Simple Design” is that the code expresses all our design ideas. In this light, I think what Ward is referring to is that while the code was the best we could write with our understanding then, it does not embody what we understand now. There’s a gap between what the code understands and what we understand. Today’s design understanding isn’t in the code, because when we wrote the code, we understood less.

Ward described how his team would observe the code getting further from what the team understood, and then collapsing back down to a better design. I like the gestures of expansion and contraction he used, because often a system feels that way to me. It’s getting bigger and bigger but not necessarily better. I feel like it’s expanding in ways that are not ideal. Then, one day, with luck, I see what it “should” have been.

What happens next? In a living system, such as Ward describes, we make the system what it “should” have been. We put our best knowledge back into the code. It collapses from the melange it was into something better shaped, something better designed.
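A tiny, hypothetical example of that collapse: a system grows a separate pricing function for each promotion, one special case at a time, until one day we see that they were all the same idea, a pricing rule, and today’s understanding goes back into the code.

    from typing import Callable

    # Hypothetical code. The melange the system grew into, one case at a time:
    def member_price(price):
        return price * 0.95

    def sale_price(price):
        return price * 0.80

    def coupon_price(price, off):
        return max(price - off, 0.0)

    # The collapse: one concept, a pricing rule, expressed in the code.
    PricingRule = Callable[[float], float]

    def percent_off(rate: float) -> PricingRule:
        return lambda price: price * (1.0 - rate)

    def amount_off(amount: float) -> PricingRule:
        return lambda price: max(price - amount, 0.0)

    def apply_rules(price: float, rules: list[PricingRule]) -> float:
        for rule in rules:
            price = rule(price)
        return price

    # A member, during a sale, with a coupon: one uniform shape.
    final = apply_rules(100.0, [percent_off(0.05), percent_off(0.20), amount_off(5.0)])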

Or, we just ship it, and resolve to do better next time.

To me, Agile software development is about turning “next time” into “this time”, as often as we can. It’s about improving the design as often as we can, as soon as we can. Sooner or later, it seems, we’ll have an idea or an understanding that we can’t put in, because we, or the software, has moved on. But if we can put off that moment as long as possible, we’ll do better.

TL;DR

Technical debt is never a good thing. It is sometimes inevitable. We should never take on technical debt on purpose, and we should pay it back as soon as we know how.



  1. What are prepositions for? Primarily, they are used to end sentences with. That’s what they’re about. 

  2. I once hit 135 MPH in my 911 Carrera 4 on I-96 on the way home from Chrysler, late at night, in very light traffic. I didn’t fly off the road, didn’t kill anyone, wasn’t killed myself. Didn’t even get a ticket. Good decision? You would be crazy to think so. Not everything we survive was a good idea. 

  3. This is why there should be no “refactoring stories” or “testing stories”. These are technical matters. There should be no “for versus while” stories or “if versus subclassing” stories. There should be no “technical stories” of any kind. Stories are about what capability we are to deliver. It’s the programmers’ job to decide how to deliver it.