I might come across as someone who hates testing, if the past two [1] months [2] have been anything to go by, but what I've really been complaining about is automating the tests, including some that are rather difficult to automate [3]. But honesty compels me to state this: the new regression test [4] has found another potential bug.
Because I added code to delay (or entirely block) responses from the various database sources, a few test cases were added for a problematic feature to ensure it was fixed. The Happy Path™ has indeed been fixed, but there's a Sad Path™ that was missed. We query two sources, A and B. In the scenario we're testing, the data we want comes from B—any data from A is ignored (but we have to query it anyway due to “reasons”). So the case where A has no data and B has data is fine. But when A doesn't return (or times out), the response from B is ignored when it probably shouldn't be (since that data does get back to us). And it would not surprise me if there are more cases like this.
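To make the sad path concrete, here's a minimal sketch in Go of the failure mode as I understand it. The names queryA, queryB, and lookup are hypothetical, and the real system is a state machine rather than goroutines, but the shape of the bug is the same: an error from A masks a perfectly good answer from B.

```go
// Hypothetical sketch of the sad path, not the actual code.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errTimeout = errors.New("timed out")

// queryA simulates the source whose data we ignore; here it hangs
// past the deadline to model the sad path.
func queryA(ctx context.Context) (string, error) {
	select {
	case <-time.After(2 * time.Second):
		return "a-data", nil
	case <-ctx.Done():
		return "", errTimeout
	}
}

// queryB simulates the source whose data we actually want.
func queryB(ctx context.Context) (string, error) {
	return "b-data", nil
}

// lookup shows the bug: it insists both queries succeed, so a
// timeout from A throws away a perfectly good answer from B.
func lookup(ctx context.Context) (string, error) {
	resA := make(chan error, 1)
	resB := make(chan string, 1)
	go func() { _, err := queryA(ctx); resA <- err }()
	go func() { b, _ := queryB(ctx); resB <- b }()

	b := <-resB
	if err := <-resA; err != nil {
		return "", err // BUG: B's data is discarded here
	}
	return b, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	if data, err := lookup(ctx); err != nil {
		fmt.Println("sad path: lost B's answer:", err) // this prints
	} else {
		fmt.Println("got:", data)
	}
}
```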
Normally, I wouldn't expect this to happen all that much [It doesn't. We have a KPI (Key Performance Indicator) for that, and I don't think it's worth worrying about; the largest spike I've seen over the past month is easily three orders of magnitude lower than our volume, and the rest barely show up on the graph. —Sean], and the re-engineering required to handle these cases might be significant, since it would require adding more states to the processing state machine. But that's not my call to make.