Confirmed: Code Coverage Is a Useless Management Metric
Discover the simple proof that dismantles the code coverage metric

There is a widespread belief that code coverage is a reliable metric for measuring the quality of a software product, a belief that has been passed along without question among tech leaders for many years. Its rationale seems sound on the surface: the more thorough the testing, the higher the code coverage, and consequently, the more robust and error-proof our software should be. That is the idea that has been firmly planted in our minds. But what if I had proof that code coverage is fundamentally flawed as a management metric? What if I could show you an idea so simple that it removes all doubt? So get ready and brace yourself.
Since this article shows which kind of metric is not useful for management (although it is very useful for developers) but does not show which ones you should follow, I recently wrote a follow-up explaining the four fundamental metrics you should use instead, and why, backed by scientific evidence:
Code Coverage
Code coverage, in its simplest form, measures how much of your code is 'touched', or covered, by your tests. Assume that our product has tests and that we run them at least before every release. When those tests execute, they exercise the product, causing its code to run. Soon we realize that if we track which code the tests execute, we can measure how much of the code is executed. The ratio of executed code to the total amount of code in the product is what we call 'code coverage':
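code coverage = (lines of code executed by tests / total lines of code) × 100%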
It is a very simple metric: if we have 100 lines of code but our tests only execute 75 of them, we have a code coverage of 75%.
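To make the arithmetic concrete, here is a minimal sketch in Python (the function name and line counts are illustrative assumptions, not part of any particular coverage tool):

def coverage_percent(lines_executed: int, lines_total: int) -> float:
    """Return code coverage as a percentage of total lines."""
    if lines_total == 0:
        return 0.0  # an empty codebase has nothing to cover
    return 100.0 * lines_executed / lines_total

# The example from the text: tests execute 75 of 100 lines.
print(coverage_percent(75, 100))  # prints 75.0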
And soon we realize something bigger: if the code coverage is not 100%, we have code that is never executed by our tests; in other words, we have untested code!
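Here is a small sketch of what that looks like in practice (the function and the test below are hypothetical, written in pytest style):

def absolute(x):
    if x < 0:
        return -x  # executed by the test below
    return x       # never executed by any test: untested code

def test_absolute_of_negative():
    assert absolute(-3) == 3  # exercises only the x < 0 branch

Running this single test yields less than 100% coverage, because the final return x line never executes.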
And untested code is dangerous, because it can contain bugs. Furthermore, it may…