Back in Time with the Cost of Change Curve

Clock Mechanism – Triberg, Germany

The first time I saw a Cost of Change Curve, similar to the one below, was more than 10 years ago. The curve shows that the cost of fixing a software defect increases exponentially the further along we are in the software life-cycle.

Is it still valid? After all, the software industry has changed since I first saw it. Nowadays we use different technologies and different methodologies. I believe we’ve made quite a lot of progress over the years, so I decided to dig into the origin of the curve. Where did this information come from?

I invite you to join me for a quick journey back in time.

Cost of Change Curve

We start our journey 20 years ago, deep in the ’90s. The days of Java 1.0, when we admired the graphics of the first Sony PlayStation and played Dune II, which marked the birth of real-time strategy games. In 1993, Steve McConnell’s excellent book Code Complete was first published. The Cost of Change Curve was there, gaining great popularity.

And where did Steve McConnell get his information? To find out, we need to travel 12 more years back in time. Follow me into the happy ’80s.
Michael Jackson released Thriller, we played Pac-Man, and Nintendo was the ultimate gaming machine. Barry Boehm published Software Engineering Economics in 1981, putting the Cost of Change Curve on printed paper.

But wait, don’t stop there. Just one more jump back in time, into the previous decade. Boehm actually first published his findings in the IEEE article “Software Engineering” way back in 1976.

The ’70s! The days of Led Zeppelin, Space Invaders and the first Star Wars movie. The programming languages of the day were COBOL and Fortran.
Today, after 40 years, we still listen to Led Zeppelin and watch Star Wars movies, but we use more modern programming languages and our development methodologies have evolved.

When you incorporate techniques like Continuous Delivery, TDD and similar practices that give you a short feedback loop, you undermine the basic assumptions that held in Boehm’s ’70s. Today unit tests and Continuous Integration are de-facto standards. We can fix a defect in production within minutes. We can move many of the tests to earlier stages of the development cycle, making it unclear whether, in TDD, the Test phase really comes after the Code phase at all. The boundaries between the two are not as distinct as they used to be.
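
To make that blurring concrete, here is a minimal, hypothetical test-first sketch (JUnit-style Java; the class and test names are invented for illustration). The test exists before the production code it exercises, so “Test” is no longer a phase that comes after “Code”:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The test is written first, before DiscountCalculator even exists.
    public class DiscountCalculatorTest {
        @Test
        public void appliesTenPercentDiscountFromOneHundred() {
            DiscountCalculator calc = new DiscountCalculator();
            // Fails (or doesn't even compile) until the code below is written
            assertEquals(90.0, calc.finalPrice(100.0), 0.001);
        }
    }

    // The production code is then written just to make the test pass,
    // so a defect here surfaces within minutes, not in a later test phase.
    class DiscountCalculator {
        double finalPrice(double basePrice) {
            return basePrice >= 100.0 ? basePrice * 0.9 : basePrice;
        }
    }

Whatever the exact stack, the point is the same: the defect shows up while the code is still on the screen, not weeks later in a separate testing stage.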

I’m writing this because many people still use the above curve to prove their claims about “the right things to do”. What was valid in the ’70s might not be valid for your project today.

Don’t blindly rely on data from the ’70s.

What’s your opinion? Do you think the Boehm curve is still valid? Do you have more up-to-date data?

  • Arlo Belshee

    Don’t stop there. TDD & CI shorten the feedback loops and decrease transactional costs, sure, but refactoring + pairing has an even more significant effect: it prevents bugs. Done well, this can change a team entirely from “first we write features and bugs, then we find all the bugs, then we remove them” to “we code in such a way that bugs don’t get written. Even our unit tests pretty much never catch anything.”
    This can obviate the entire argument. Not writing bugs at all is cheaper than detecting and fixing them, no matter when they are detected. The curve becomes undefined.
    As an example case, look to Hunter Technologies. They improved their practices (from pairing up to mobbing). They got something like an 8-10x improvement in productivity (number of products shipped per year), and went more than 50 products in a row without ever checking in a single bug. Only a tiny number were even written (ever appeared on the screen); all of these were found instantly by the mob or the tests.
    So developer testing helps & is probably sufficient to flatten the curve (or, rather, to ensure you are always in the left-most 1 hour of the curve, where it looks flat). Refactoring + pairing (or mobbing) makes the curve altogether irrelevant.

    • Uri Nativ (http://ToCodeIsHuman.com/)

      Spot on Arlo!

      Indeed, constant refactoring, pair programming and other practices significantly reduce the number of bugs, leading to higher productivity and better products for the end-user. I also talk about this in further detail in my presentation QA without QA (starting at slide #46).

      In my group, we work in BDD & TDD all the time. We pair program on almost every single task, and we have the whole team review each and every line of code. Our code coverage is over 98% and we ship high-quality software in frequent deliveries. And yet, some bugs do find their way into production code. So, while continuously improving our ways of preventing defects from reaching production, we put a lot of focus on keeping a very short feedback loop, making sure we can fix and deploy to production within minutes.

      For me, I cannot totally eliminate the curve, but I do make sure it stays very flat and low.



  • Tory Decker

    Here is the thing I have come to believe: tools and metrics are only as good as the developer writing the code. Sure, Scrum/Agile methodologies help us work more efficiently. But on the back side of that, anyone who can press keys on a keyboard is trying to get a foot in the door for a job in application development. There are also a lot of old developers who take a perfectly good object-oriented language and write completely procedural code that is not scalable without throwing more iron at the fire (mirrored servers, more memory to overcome memory leaks, etc.). On top of that, the code itself is a lot more complex than it was in the ’90s. So I would argue the Cost of Change graphic is still relevant. Maybe not so much at a top-notch development shop, but at your average shop, yes, the graphic still holds water.