I am experiencing culture shock with the new testing regime.
So I had this exchange with the new manager, AG, over email. AG: “In which case or scenario does ‘Project: Sippy-Cup [1]’ send a 480 response code? Please see the attached packet capture.”
I answered, “‘Project: Sippy-Cup’ does not send a 480 response. I checked the packet capture, and the IP (Internet Protocol) address does not appear to be ours.”
AG replied, “Thanks for the information. Can you also confirm that there is no code in ‘Project Sippy-Cup’ to send a 480 response code?”
When I read that, I admit I got a bit upset. I did check the code the first time around! Does he not trust me? But upon reflection, I could see how AG might have thought I hadn't checked the code to “Project: Sippy-Cup” at all, only that the packet capture had the wrong IP addresses. I see I may have to be more explicit in my future responses.
Then during the regularly scheduled meeting AG asked if we had any tests for 480 responses. … What? I replied, “No, because ‘Project: Sippy-Cup’ does **NOT** send a 480 response.”
AG then asked if there was a list of the responses “Project: Sippy-Cup” replies with. Again, the answer was no, at least not explicitly written down anywhere. I then read through the code, saying out loud each response code it does return. Then AG asked if there were any tests for any of those responses, like the “version not supported” response.
…
I'm having a hard time wrapping my brain around a test for

    if message.request.version ~= "2.0" then
      info.socket:send(remote,pack_error(message,response.VERSION_NOT_SUPPORTED))
      return
    end

Really? Testing the convoluted business logic I can see, but this? What, did I get the logic wrong? Is there a bug in those two lines of code?
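For the record, here is a rough sketch of what such a test might look like. The handler name handle_request() and its signature are my own invention, and the fake socket just records whatever gets sent; only pack_error(), response, and the message layout come from the two lines above.

    -- Sketch only: handle_request() is a stand-in for whatever function in
    -- "Project: Sippy-Cup" actually wraps the version check.
    local sent  -- records whatever the handler writes to the socket

    local info =
    {
      socket =
      {
        send = function(self,remote,data)
          sent = data
        end
      }
    }

    local message =
    {
      request = { version = "3.0" }  -- anything other than "2.0"
    }

    handle_request(message,info,"127.0.0.1")

    assert(sent == pack_error(message,response.VERSION_NOT_SUPPORTED),
           "expected a 'version not supported' reply")
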
So now I have to find a way to inject invalid SIP (Session Initiation Protocol) messages into the regression test.
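One possible approach, assuming the test instance listens for UDP on 127.0.0.1 port 5060 (it may well not), is to hand-craft a bogus request and push it in with LuaSocket. The request text here is only an example; the point is the SIP/3.0 version.

    local socket = require "socket"            -- LuaSocket

    -- a deliberately invalid request: the version is SIP/3.0, not SIP/2.0
    local bad_request = table.concat(
    {
      "OPTIONS sip:test@localhost SIP/3.0",
      "Via: SIP/3.0/UDP 127.0.0.1:5061",
      "From: <sip:tester@localhost>;tag=12345",
      "To: <sip:test@localhost>",
      "Call-ID: regression-0001@localhost",
      "CSeq: 1 OPTIONS",
      "Content-Length: 0",
      "",
      ""
    },"\r\n")

    local udp = socket.udp()
    udp:settimeout(1)                          -- don't hang the regression run
    udp:sendto(bad_request,"127.0.0.1",5060)   -- wherever the test instance listens

    local reply = udp:receivefrom()            -- nil on timeout
    print(reply or "no reply")
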
I swear, mocking system calls is coming soon …