I'm probably beating a dead horse here, but let's do it anyway: I don't really care about code coverage. In my opinion code coverage is fine as a heuristic, but it cannot be used to replace actually understanding your tests.
To give a trivial example of code coverage failing to catch a common bug, here's a Go package:
```
// foo.go
package foo

func Foo(overwriteDefault bool) int {
	var a *int
	if overwriteDefault {
		a = new(int)
	}
	return *a
}
```
```
// foo_test.go
package foo

import "testing"

func TestFoo(t *testing.T) {
	if res := Foo(true); res != 0 {
		t.Errorf("expected %d, got %d", 0, res)
	}
}
```
Here's the output of the test tool, showing 100% code coverage:
```
# go test ./foo -cover
ok      local-playground/foo    0.002s  coverage: 100.0% of statements
```
And here's that perfectly covered code panicking when it's called as `foo.Foo(false)`:
```
# go run main.go
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x480fd0]

goroutine 1 [running]:
main.foo(0xa0?)
	/tmp/tmp.6TH1EqLUWv/main.go:14 +0x30
main.main()
	/tmp/tmp.6TH1EqLUWv/main.go:18 +0x1b
exit status 2
```
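The main.go behind that output isn't shown in the post; as a rough sketch, a program like the following (with the body of Foo copied into main as a lowercase foo, as the stack trace suggests) reproduces the same nil pointer panic:

```
// main.go — a hypothetical reconstruction, not from the post.
package main

import "fmt"

// foo mirrors foo.Foo: a nil default that's only overwritten in one branch.
func foo(overwriteDefault bool) int {
	var a *int
	if overwriteDefault {
		a = new(int)
	}
	return *a // panics when overwriteDefault is false
}

func main() {
	fmt.Println(foo(false))
}
```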
You could argue that yeah, sure, it's easy to construct a specific test case where code-coverage-based testing fails to catch a bug. But if you look at this "specific" case, what you're really looking at is a very generic one. There's a variable with a default value, `a = nil`, and then an if statement which sets it to something else. The test covers the if statement, but it doesn't cover the case where the if statement isn't hit, which is the buggy case.
This kind of code pattern is extremely common; it's how you implement default values for arguments and fields in Go. And you can't rely on code coverage to properly detect bugs in the default case, which is (by definition) the most common case!
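To make that concrete, here's a small sketch of the same pattern applied to a struct field (the names are mine, not from the post). The default branch is exactly the one a coverage-driven test is most likely to skip:

```
// server.go — hypothetical illustration of the default-value pattern.
package server

import "fmt"

type Config struct {
	Port int
}

func listenAddr(cfg Config) string {
	addr := ":8080" // default, used whenever the caller leaves Port unset
	if cfg.Port != 0 {
		addr = fmt.Sprintf(":%d", cfg.Port)
	}
	return addr
}
```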
What's the lesson here? Code coverage is a useful tool, sure. It can show you parts of your code which you _thought_ were covered but which actually aren't. But it can't show you whether all cases are covered, no matter how high a percentage you hit. So 100% code coverage isn't really a useful goal to have.
I don't write tests for code coverage; I (try to) write tests for input coverage. Given all possible inputs to some piece of code, what are the expected outputs in all of those cases? If some of the inputs are non-deterministic, such as a database, an event bus, or the wall clock, those get wrapped in an interface and mocked.
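For the Foo example above, input coverage means enumerating both possible inputs rather than both lines. A table-driven test along these lines (a sketch, not taken from the post) would surface the nil pointer panic immediately, since the false case is exactly the one the original test misses:

```
// foo_test.go — a hypothetical rewrite driven by inputs rather than lines.
package foo

import "testing"

func TestFoo(t *testing.T) {
	tests := []struct {
		name             string
		overwriteDefault bool
		want             int
	}{
		{"overwrite default", true, 0},
		{"keep default", false, 0}, // panics with the buggy Foo above
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Foo(tt.overwriteDefault); got != tt.want {
				t.Errorf("Foo(%v) = %d, want %d", tt.overwriteDefault, got, tt.want)
			}
		})
	}
}
```

And for the non-deterministic inputs, wrapping them in an interface keeps them enumerable too. A minimal sketch for the wall clock case, with names that are illustrative rather than from the post:

```
// clock.go — a hypothetical interface so tests can pin "now" to a fixed value.
package foo

import "time"

type Clock interface {
	Now() time.Time
}

type fixedClock struct{ t time.Time }

func (c fixedClock) Now() time.Time { return c.t }
```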
Code-coverage-driven testing is a symptom of a larger problem, which I have yet to name, but which presents as a search for a perfected pipeline of development. Given some set of requirements, do this, that, and the other step, and you'll reliably get the desired product, where each step may (but ideally doesn't) require a human. Such a system eliminates all uncertainty, which is the white whale of all profit-seeking enterprises.
This perfect system cannot exist, and we are only distracting ourselves by trying to build it. Uncertainty will exist in your code, if not from within the code itself then simply because your code exists in a world which is changing unpredictably. Figure out all possible inputs, figure out the expected outputs, encode that mapping in tests, and test it manually as best you can before shipping. But accept that, in the end, there are no guarantees, and don't burn yourself out looking for them.
========================================
Published 2023-09-26 by mediocregopher
This post is part of a series!
Previous in the series: How to Errors Good
========================================