Back in Time with the Cost of Change Curve

Clock Mechanism – Triberg, Germany

The first time I saw the Cost of Change Curve, similar to the one below, was more than 10 years ago. The curve shows that the cost of fixing a software defect increases exponentially as we proceed through the software life-cycle.
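To make the "exponential" claim concrete, here is a tiny illustrative sketch. The growth factor per phase is my own assumption for illustration, not Boehm's exact data:

```python
# Illustrative only: hypothetical relative cost of fixing one defect,
# assuming the cost multiplies by a fixed factor at each later phase.
phases = ["Requirements", "Design", "Code", "Test", "Production"]
MULTIPLIER = 3  # assumed growth factor per phase (not from Boehm's data)

costs = {phase: MULTIPLIER ** i for i, phase in enumerate(phases)}
for phase, cost in costs.items():
    print(f"{phase:>12}: {cost}x")
```

With these made-up numbers, a defect that costs 1 unit to fix at the requirements stage costs 81 units in production; Boehm's published figures were in the same order of magnitude (up to roughly 100x).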

Is it still valid? After all, the software industry has changed since I first saw it. Nowadays we use different technologies and different methodologies. I believe we've made quite a lot of progress over the years, so I decided to dig into the origin of the curve. Where did this data come from?

I invite you to join me for a quick journey back in time.

Cost of Change Curve

We start our journey 20 years ago, deep into the '90s. The days of Java 1.0, the days we admired the graphics of the first Sony PlayStation and played Dune II, which marked the birth of real-time strategy games. In 1993, the excellent book Code Complete by Steve McConnell was first published. The Cost of Change Curve was there, gaining big popularity.

And where did Steve McConnell get his data? To find out, we need to travel 12 more years back in time. Follow me into the happy '80s.
Michael Jackson released Thriller, we played Pac-Man, and Nintendo was the ultimate gaming machine. Barry Boehm published Software Engineering Economics in 1981, putting the Cost of Change Curve on printed paper.

But wait, don't stop there. Just one more time jump, half a decade further back. Boehm actually first published his findings in the IEEE article "Software Engineering" way back in 1976.

The '70s! The days of Led Zeppelin, Space Invaders and the first Star Wars movie. The programming languages back then were COBOL and Fortran.
Today, after 40 years, we still listen to Led Zeppelin and watch Star Wars movies, but we use more modern programming languages, and our development methodologies have evolved.

When you incorporate techniques like Continuous Delivery, TDD and others that give you a short feedback loop, you are undermining the basic assumptions that held back in Boehm's '70s. Today, unit tests and Continuous Integration are de facto standards. We can fix a defect in production within minutes. We can move many of the tests to earlier stages of the development cycle, making it unclear whether, with TDD, the Test phase even comes after the Code phase anymore. The boundaries between the two are not as distinct as they used to be.

I'm writing this because many people still use the above curve to prove their claims about "the right things to do". What was valid in the '70s might not be valid for your project today.

Don’t blindly rely on data from the ’70s.

What's your opinion? Do you think the Boehm curve is still valid? Do you have more up-to-date data?

We are not Building Bridges

Bridge by 96dpi, on Flickr

I gave my popular QA without QA talk at Nordic Testing Days 2013. When I finished my presentation, a question was raised: since no car manufacturer does Continuous Delivery or pushes the tests to the developers, why should these techniques be valid for software engineering?

I often hear this comparison between software engineering and civil engineering. Indeed, we can learn a lot from the art of building cars, bridges and buildings. However, I find this analogy misleading.

We use similar terminology: Design, Build, Architecture and many more. Yet the whole analogy to civil engineering is just wrong. We are not building bridges. We are building software.

In software, we can start by building a small house, then add ten additional floors, later add underground parking, and eventually convert it into a skyscraper. You just cannot work this way in civil engineering. You would probably need to tear down the entire building every time you wanted to change its purpose.

The flexibility we have in software engineering is of a higher magnitude than in any other engineering discipline. Have a look at your latest project. It may have more moving parts than the International Space Station, and it was probably built by a smaller team.

The error tolerance we have in software is also of a higher magnitude than in building bridges. Both should work as designed, and having your web site available 99.9999% of the time is amazing. Having your bridge collapse for just 31.5 seconds a year (0.0001% downtime) would be catastrophic.
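The arithmetic behind that comparison is simple enough to sketch (a minimal example; the helper name is my own):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 in a non-leap year

def yearly_downtime_seconds(availability_percent: float) -> float:
    """Seconds of allowed downtime per year for a given availability."""
    return SECONDS_PER_YEAR * (100.0 - availability_percent) / 100.0

for availability in (99.9, 99.99, 99.9999):
    print(f"{availability}% -> {yearly_downtime_seconds(availability):.1f} s/year")
```

Six nines (99.9999%) leaves roughly 31.5 seconds of downtime per year; "three nines" (99.9%) leaves almost nine hours.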

Keep on learning from building bridges, but don’t tell me it’s the same.