In a previous post I defined Ugly Code as Code that is UnProtected by Tests, Ill-Structured, and UnReadable. Ugly Code is Code that has lots of Technical Debt…
There is lots of Ugly Code out there, and many of us have to maintain or extend it. One of the best books describing the technical aspects of how to work with such Code is Michael Feathers's book, Working Effectively with Legacy Code, which I recommend to any Coder who is serious about his or her craft.
That’s all well and good… but, what is our strategy for working with our Ugly Code? How much should we clean up the Code before we work on it? How much should we clean it up as we’re working on it? How clean is clean enough?
Before progressing further, let me remind you what it means to CleanUp Ugly Code. Since Ugly Code has no tests, we must Protect it with Tests. Since it is Ill-Structured and UnReadable, we must Repair it. The most common way to Repair it is with Refactoring, and all but the most trivial Refactorings require that the Code is already Protected by Tests. In simple terms, we have:
CleanUp = Protect + Refactor,
and we need to Protect the Code before we can Refactor it.
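The "Protect before Refactor" step above is often done with characterization tests: tests that pin down what the Ugly Code actually does today, right or wrong, so later Refactorings can prove nothing changed. Here is a minimal sketch; `legacy_discount` is a hypothetical stand-in for existing UnProtected Code, not anything from a real codebase.

```python
def legacy_discount(total, code):
    # Hypothetical existing Ugly Code: unclear logic we dare not change yet.
    if code == "VIP":
        return total - total * 0.2
    if code and code.startswith("X"):
        return total - 5
    return total

def test_characterize_legacy_discount():
    # A characterization test records current behavior as-is,
    # giving us the Safety Net we need before any Refactoring.
    assert legacy_discount(100, "VIP") == 80.0
    assert legacy_discount(100, "X42") == 95
    assert legacy_discount(100, "") == 100

test_characterize_legacy_discount()
```

Note that the test asserts what the Code does, not what anyone thinks it should do; fixing behavior comes later, and separately.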
Comments on Refactoring
First of all, “Refactoring” is not a synonym for “cleaning up Code.” Individual Refactorings (also called ‘micro-refactorings’) are specific, named techniques (recipes) that are used to improve the structure or readability of Code without changing its behavior. Refactorings always result in Code that is more Readable, more Cohesive, and less Coupled; and it is generally agreed that Refactorings pay for themselves: time spent Refactoring is regained later because the Code is easier to work with. Every Professional Coder should be practiced at using the most common Refactorings, such as ‘Extract Method,’ ‘Replace Conditional with Polymorphism,’ ‘Introduce Gateway,’ and so on; and you can find a Catalog of the more common ones here.
Most agilists (including me) consider Refactoring to be something done by Professional Coders as part of their work; it’s just something they do when working on Code. In other words, Coders should be Refactoring all the time; it’s just part of the ‘universal’ Definition of Done (DoD). In order to manage this work, a specific Story’s DoD might put some sort of time limit on how much Refactoring to do (you might not want a Coder to go on a 2-day Refactoring spree during a time crunch), but the Coders should determine which Refactorings to do within this time limit. Their job is to make the Code as Well-Structured and Readable as time allows.
So, Refactoring is something that should be done all the time. However, it is considered risky to do most Refactorings unless the Code is Protected by Tests. In fact, the (perhaps unwritten) first step of almost all refactorings is “Run the Tests to make sure they pass” (see Wikipedia). Some people do Refactoring without a Safety Net, but it’s not a good general practice.
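To make the "run the Tests first" discipline concrete, here is a hedged sketch of the ‘Extract Method’ Refactoring done with a Safety Net: the test passes before the change, the recipe is applied, and the same test passes after. The `summarize` function and its data are illustrative assumptions, not from any real project.

```python
def summarize(orders):
    # Before: totaling logic buried inline, harder to read and reuse.
    total = 0
    for qty, price in orders:
        total += qty * price
    return f"{len(orders)} orders, total {total}"

def order_total(orders):
    # Extract Method: the inline loop, given an intention-revealing name.
    return sum(qty * price for qty, price in orders)

def summarize_refactored(orders):
    # After: same behavior, better structure; the test proves it.
    return f"{len(orders)} orders, total {order_total(orders)}"

orders = [(2, 10), (1, 5)]
# Step 1 and final step of the recipe: the existing Test still passes.
assert summarize(orders) == summarize_refactored(orders)
```

Without that assertion (the Safety Net), we would only be hoping the behavior was preserved, which is exactly the risk the post warns against.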
Refactoring provides value to the Coders; it makes the Code more Extensible, Modifiable, and Readable. Since the primary benefit of Refactoring is to Coders, it makes sense that Coders would get to prioritize which Refactorings to do. Of course, if other people, outside the team, want the team to Refactor something, that should be done with its own Story, as it’s providing value to external stakeholders, and not the Team’s Coders… just sayin’…
In this blog I’m talking about working with Ugly Code, which has no Tests. Because it is Ill-Structured and UnReadable, it needs significant Refactoring. However, I will assume that we can’t Refactor it until it is Protected by Tests. This assumption makes things interesting…
Strategy Options and Discussion
Now what? Our Coders will be glad to start Refactoring once the Ugly Code is Protected, right? So we need to get the Ugly Code Protected — but what is our strategy for doing so? It seems to me that we have four basic options:
- We can do nothing, and produce even more UnProtected Code;
- We can Protect any new Code we add;
- We can Protect new Code as well as any Code that the new Code comes in contact with; or
- We can Protect everything before we start writing any more new Code.
Unfortunately, many Organizations choose either Option 1 or Option 4, which are the two worst options of the four. Option 1, which is the ‘we don’t know what to do yet, so we’ll do nothing’ option, is bad because you’re not even trying to pay back any Technical Debt, and you’re just making it worse. Option 4, which is the ‘let’s get everything Protected up front’ option, is bad because it might take a long time before you start adding any additional functional value to the Product.
So, we’re left with Options 2 and 3. Now, Option 2 should be automatic — it is truly the least you can do. The easiest way to do it is one test at a time. First, we specify the new functionality with an Acceptance Test, and then make sure that test gets automated as a Regression Test to Protect that new functionality. Specifying the Acceptance Test that defines the needed functionality is a ‘Product Owner thing’ that is largely done during Refinement… so let’s get cracking… the Team’s Product Owner needs to make sure that any new functionality to be added is specified by an Acceptance Test, and the Team adds this Acceptance Test to the Regression Test suite as part of the Story’s Definition of Done (DoD). This type of development (adding functionality one test at a time) is called “Sashimi Development.”
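One slice of the Sashimi Development described above might look like the following sketch: the Acceptance Test is written first (from Refinement), the new functionality is written to pass it, and the test then lives on in the Regression Test suite. The coupon Story and all names here are hypothetical, chosen only for illustration.

```python
def apply_coupon(total, percent):
    # New functionality, written only after its Acceptance Test existed.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (100 - percent) / 100, 2)

def test_acceptance_coupon_story():
    # The Acceptance Test from Refinement, now kept permanently
    # as a Regression Test protecting this new functionality.
    assert apply_coupon(200.0, 25) == 150.0

test_acceptance_coupon_story()
```

The key point is not the coupon math but the order of events: the test defines "done" before the Code exists, and protects the Code after.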
Discussion of Option 3
So, let’s talk about Option 3, which I consider an extension of Option 2. The Team is working on a Story based on a single Acceptance Test, and they have to modify some existing Ugly Code as part of it. They don’t want to break this existing Code, but there is no Test telling them what this Code does. So, they feel a need to do some Exploratory work to figure out what this Code does, and then write some Tests that Protect it, right? But should they? This Exploration and Protection could be a never-ending process: there could be lots of existing Ugly Code this new Code will interact with, and the Team will naturally want to Protect all of it with Regression Tests, right? There’s probably no way they have enough time to Protect as much as they’d like to (or need to). How does the Team determine which Functionality to Explore and Protect?
I suggest that the Team prioritize as they go, determining, on a case-by-case basis, whether or not a particular part of the Code they have come in contact with should be Explored and Protected. There is no way the Team can spend enough time and effort to Protect all the Functionality they touch, so the Team has to determine which Code (which Features) needs to be Protected, and which doesn’t. During development, Developers should constantly be asking each other questions like: “How about this Feature? Does it need Protection? Is it more important than that one?” “How much should we dig into this stuff? It’s a mess,” and so on… this is an important part of the Scrum Team’s self-organization in this case.
Now, the value of this Protection is to Protect existing Features, so its value is to the Product, not just the Code. Because the value of this Protection is to the Product, the prioritization of what to Protect belongs to the Product Owner. “Why?” you ask… Well, the PO is the Scrum Team Member who is accountable to the outside world for the value of the Scrum Team’s work, for prioritizing which work should be done, and determining which Code (or Features) should be Protected by Regression Tests is certainly a prioritization issue that affects the value of the Results. It also stands to reason that the Developers would want the Team’s PO to make these decisions, as the Team’s PO is the person who will be held accountable for them anyway.
So, let the Team’s PO decide which Features or Code needs to be Explored and Protected. This sort of prioritization and tradeoff is precisely why we have a PO, right? He/she owns this stuff…
If the PO decides to Protect existing Code while doing new functional Stories, it must be accounted for in the Backlog somehow: either as CleanUp Stories that are tightly coupled to the new functional Stories, or as Doneness Criteria on the new Functional Stories — in either case, this work should probably be time-boxed so it doesn’t get out-of-hand. This work should be carefully managed; the Team’s PO must make considered decisions about which code to Protect and how much effort to spend on it — remember that the Team’s PO is accountable for maximizing the overall value of the Team’s work, not just the value of the Product (see this blog).
Summary of Strategies
So, let me summarize my suggestions for dealing with Ugly Code — Code that is UnProtected by Tests, Ill-Structured, and UnReadable.
- When adding new Code, make sure the new Code is Protected by Tests.
- Note: This is the minimal requirement; I’d prefer that the Team do full-blown TDD for all new Code.
- Coders should be allowed to Refactor any Code they find that is already Protected by Tests; this includes both new and existing Code. I recommend that you time-box the Refactoring for each Story to keep it focused and under control.
- Note: This may include minimal Refactoring of UnProtected Code, because it might be impossible to draw a clean line between Protected Code and UnProtected Code. Try to keep this to a minimum, and be careful!
- The Team’s PO should be a member of (or working with) the Development Team to determine which existing UnProtected Code needs to be Protected with Tests, probably based on what existing Functionality needs to be Protected. The effort for Protecting this Code should be time-boxed, either by using time-boxed CleanUp Stories that combine Exploratory Testing and Automated Regression Test Development, or by adding time-boxed Doneness Criteria to the new Functional Stories themselves.
Combining these three suggestions into a coordinated Strategy results in two things:
- All new Code will be Clean (especially if the Team is using TDD), and
- The existing Ugly Code will be Cleaned Up incrementally, based on the Team’s Product Owner’s prioritization, as new Code comes in contact with it.
This is what we want…