Just more grumblings about testing while sitting on a deployment

It's 2:17 am as I'm typing this, sitting on a phone bridge during a deployment of the new “Project: Lumbergh [1],” and I'm glad to know that I'm not the only one with a clicky keyboard [2]. My comment about it sparked a brief conversation about mechanical key switches, but it was short-lived as the deployment kept rolling on. It's sounding a bit like mission control at NASA (National Aeronautics and Space Administration) [3]. So while I wait to be asked a question (it's happened a few times so far), I thought I might go into some detail about my recent rants about testing.

It's not that I'm against testing, or even against writing test cases. I think I'm still coming to grips with the (to me) recent hard push for testing über alles in our department. The code was never set up for unit testing, and some of the harder scenarios, like database B returning results before database A, we tested manually, because such a test is hard to set up automatically. I mean, who programs an option to delay responses in a database?

It's especially hard because “Project: Lumbergh” maintains a heartbeat (a known query) to ensure the database is still online. Introducing a delay via the network will trip the heartbeat monitor, taking that particular database out of query rotation and thus defeating the purpose of the test! I did end up writing my own database endpoint (the databases in question talk DNS (Domain Name System)) and added an option to delay the non-heartbeat queries. But to support automatic testing, I now have to add some way to dynamically tell the mocked database endpoint to delay this query, but not that query. And in keeping with the theme, that's yet more testing, for something that customers will never see!
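
To give a feel for it, here's a minimal sketch of that mocked endpoint in Python (the real thing isn't written in Python, and the heartbeat name and delay table here are made up for illustration): a UDP server that pulls just the query name out of each DNS packet, answers the heartbeat immediately, and delays everything else according to a per-name rule table.

```python
import socket
import time

HEARTBEAT_NAME = "heartbeat.example.net"  # hypothetical heartbeat query name
DELAYS = {}  # per-query-name delays, e.g. DELAYS["slow.example.net"] = 2.0

def qname(packet):
    """Extract the query name from a raw DNS query (labels start at offset 12)."""
    labels, i = [], 12
    while packet[i] != 0:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii"))
        i += n + 1
    return ".".join(labels)

def mock_dns(host="127.0.0.1", port=5353):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        query, peer = sock.recvfrom(512)
        name = qname(query)
        if name != HEARTBEAT_NAME:             # never delay the heartbeat, or the
            time.sleep(DELAYS.get(name, 0.0))  # monitor yanks us out of rotation
        # cheap mock reply: echo the query back with the QR (response) bit set
        reply = query[:2] + bytes([query[2] | 0x80]) + query[3:]
        sock.sendto(reply, peer)
```

The test harness would still need some control channel to poke entries into DELAYS at runtime, and that control channel is itself more code that, presumably, also needs testing.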

Then there's the whole “checking to ensure something that shouldn't happen, didn't happen” thing. To me, it feels like proving a negative. How long do we wait until we're sure it didn't happen? Is such activity worth the engineering effort? I suspect the answer from management is “yes” given the push to Test All The Things™, but at times it feels as if the tests themselves are more important than the product.
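
For concreteness, such a negative test ends up looking something like this (a hypothetical sketch, not our actual harness): pick an arbitrary timeout, wait it out, and assert that nothing showed up. The timeout is pure guesswork, which is exactly my complaint.

```python
import queue

def assert_nothing_happened(captured, timeout=5.0):
    """Fail if the system under test emitted anything within `timeout` seconds.

    `captured` is a queue.Queue that the (hypothetical) harness fills with
    observed events; the 5-second window is pure guesswork.
    """
    try:
        event = captured.get(timeout=timeout)  # block, hoping nothing arrives
    except queue.Empty:
        return  # nothing happened -- at least, not while we were watching
    raise AssertionError(f"something that shouldn't happen, happened: {event!r}")
```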

I'm also skeptical about TDD (Test Driven Development) in general. There's this series on using TDD to write a sudoku solver: part one [4], part two [5], part three [6], part four [7], and part five [8].

Reading through it, it does appear to be a rather weak attempt at satire of TDD that just ends after five entries. But **NO!** This is from Ron Jeffries [9], one of the founders of Extreme Programming [10] and an original signer of the Manifesto for Agile Software Development [11]. If even he gave up on TDD for this example, why is TDD still a thing? In fact, looking over the Manifesto for Agile Software Development, the first tenet is: Individuals and interactions over processes and tools. But this “testing über alles” push appears to be nothing but processes and tools. Am I missing something?

And the deployment goes on …

[1] /boston/2018/09/11.2

[2] https://en.wikipedia.org/wiki/Model_M_keyboard

[3] https://www.nasa.gov/

[4] https://ronjeffries.com/xprog/articles/oksudoku/

[5] https://ronjeffries.com/xprog/articles/sudoku2/

[6] https://ronjeffries.com/xprog/articles/sudokumusings/

[7] https://ronjeffries.com/xprog/articles/sudoku4/

[8] https://ronjeffries.com/xprog/articles/sudoku5/

[9] https://en.wikipedia.org/wiki/Ron_Jeffries

[10] https://en.wikipedia.org/wiki/Extreme_programming

[11] http://agilemanifesto.org/
