When Not to Test

When we were expecting our firstborn, months before her birth, the doctors tried to send us for all sorts of tests. For each of these, we had a simple process of questions: What does a positive result mean? What does a negative result mean? What’s the cost of the test in terms of money, discomfort, and risk?

For low-risk, mildly annoying, and cheap ultrasounds, we didn’t argue too much; besides, most of them are important to the mother’s safety and health. For an amniotic fluid tap, with an 8% risk of damaging the pregnancy, we politely declined. The one that stuck in my mind, though, was a different one: a simple blood test, with little risk or discomfort, and the cost wasn’t too high. We asked what a positive result would mean, and were told it meant a one in three hundred chance of a debilitating condition. A negative (I had the doctor look it up) would mean a one in thirty thousand chance of the same thing. We looked at each other, and knew we wouldn’t abort on a 0.3% chance.

We didn’t take that test, because there was simply no point in it. The point of testing is to derive knowledge that improves your product. If the test results will never yield an action item, don’t bother with that test.

Sometimes, we know that any bugs found in a specific area will simply not be fixed. The cost of correcting the issue outweighs the cost of having the bug in the first place. In that case, we will act the same way no matter the result of the test, and rightly so. If the test results will not influence your behavior, you don’t need that test.

There are times, however, when it is useful to know about an issue even if we won’t fix it: to prepare the support team, or to warn sales not to run into it at demos. But those, too, are actions.

Where there will be no actions, there should be no test. Use the time for testing something else.

Need help creating and optimizing an efficient and talented QA team? Get in touch.
