Wednesday, May 9, 2012

Throw someone else in to help QA it faster!

“Throw someone else in to help QA it faster!”

I've heard this statement many times in my career, but hearing it again recently got me thinking. Aside from the poor choice of words, "QAing" something (is that really a verb?), why would someone say this?

This seems to happen when management realizes it will take longer to test something than they initially planned, or when a client demands the product sooner. The most recent occurrence came when management didn't communicate the expected release date and then balked at the estimated test duration. My response was: you can have the product whenever you want, but let me tell you what won't be tested. This elicited the response, "No, we don't want to skip testing; how about we... throw someone else in to help QA it faster?" Clearly someone hasn't heard of Brooks's Law.

Brooks's Law, coined by Fred Brooks in his book The Mythical Man-Month, states that "adding manpower to a late software project makes it later." It also appears the person saying this doesn't understand what Quality Assurance (QA) means.

If the role of software testing is to help the team uncover vital information about the product, and you bring in someone who doesn't understand how this is accomplished, the value both provide is diminished. You slow down the primary tester as they coordinate with the new person and divide up work based on skill and comfort level. Configuring different machines, problems local to the new user, and a whole host of other issues can crop up as well. In other words, it takes time for people to become productive on a project.
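The coordination overhead behind Brooks's Law is often illustrated with a simple combinatorial fact (this sketch is my addition, not from the original post): every pair of people on a team is a potential communication channel, so channels grow as n(n-1)/2 while the people added grow only linearly.

```python
def communication_paths(team_size: int) -> int:
    """Number of pairwise communication channels on a team of n people.

    This is the quadratic growth Brooks points to: n * (n - 1) / 2.
    """
    return team_size * (team_size - 1) // 2

if __name__ == "__main__":
    # Adding one tester to a pair triples the channels; adding a few
    # more makes coordination the dominant cost.
    for n in (2, 3, 5, 8):
        print(f"{n} people -> {communication_paths(n)} channels")
```

Going from 2 to 3 people jumps from 1 channel to 3; by 8 people there are 28, which is one rough way to see why "throwing someone in" slows the primary tester down before it speeds anything up.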

Anyone who does time-based work (has a deliverable) can tell you it's helpful to know when you need to be done. Crazy, right? Testing a product is a never-ending journey, but specific periods of testing can end, for example when the product is needed in the field. There will always be more testing to do, but you don't always have the time, nor is it always a good use of resources. Dates can help. If this statement comes up often, management and the team have problems communicating with each other about when things need to be done. Setting dates isn't a surefire method, since dates can change, but so can the decision about what still needs to be tested and what's acceptable risk for the field.

While it's possible for an additional resource to add some incremental value to a project (they might have unique perspectives that can help, especially if they're subject matter experts), it's going to take them a while. Don't assume "throwing someone else in" will do anything other than make the testing take longer.


mpalmerlee said...

Due dates are great and helpful, but part of the problem, both with building software and with testing it, is that estimates are by nature inaccurate, and management assumes the job will get done as fast as possible. Part of the difficulty in estimating testing time is the inherent problem with Quality Assurance (as you've pointed out in your post "Role of testing by James Bach"): perfect quality is unattainable. So when someone asks a tester "how long will this take to test," you could say anything from one week to two years depending on how rigorous the testing should be. Of course, it's the business that should decide to what level testing is done, weighing the risk of reducing testing time and shipping with more bugs against the cost of the testing itself. The problem there is that it's difficult to know how many bugs you'll ship without doing thorough testing, the unknown unknowns.

Chris Kenst said...

Once you understand that perfect software is unattainable, estimating testing gets a little easier by narrowing the scope. Since you can never be completely done testing, what level of testing should you shoot for?

There are numerous things to take into account, including the level of testing, customer demands, and where possible problems might be, but that doesn't mean they are unknown unknowns. They're more like known unknowns. You know where the software has changed and where you haven't tested, so you can predict to some degree where new problems will be found.

It may not be in the best interest of the business to be concerned with the number of bugs they ship (how would you quantify that anyway?) but rather with the risk and severity of those bugs. Due dates can help with this. They require the team and management to focus for a period of time on what can be done, even if the dates are imperfect and will change.

mpalmerlee said...

Yes, but how do you come up with a due date if you don't know how long it should take? All you know is when you need it by. So if I say, "I need this in two weeks," a tester can't tell me, "OK, then you'll only find 50% of the bugs." A tester might be able to say, "I'll only be able to run 50% of the test cases I've made," which might be the best way to compromise. So probably, before a test estimate can be made, the test cases should be built, and then an estimate made based on how long it will take to run them? The due date would then be based on that estimate.

Chris Kenst said...

The due date doesn't need to come from testing; it can, and probably should, be based on an external source, like when the product is needed in production or UAT. Something management comes up with based on credible expectations and input from testing. This finite amount of time gives the tester the ability to prioritize their effort for the time given - like I said above, testing isn't finite. Like most self-interested groups, when testing has the say, they give large durations because there are so many possible ways to test. Instead of making up durations, how about starting with dates you already know? This can help align testing with the needs of the business or project.

One way to look at it: this is more like top-down than bottom-up estimation, which might be used when building an application. Different approaches to estimation are to be expected, since development and testing approach software differently.

How can you make estimates based on test cases that don't exist? If a product or change is new, you aren't going to have tests written for it until you start testing. By that time you hopefully already have a finish date in mind.

mpalmerlee said...

Would you build test cases based on the due date, or build test cases based on what should be tested and then decide, based on the due date, which test cases will be left out or which bugs won't be fixed?

I agree the due date should come from management based on when the client needs the release, but usually that date will be ASAP or yesterday, which is why it'll always be a compromise between releasing as soon as possible and finding as many bugs as possible. That's why it probably makes sense to set management's expectations by giving a clear picture of the tradeoffs between delivery time and tests performed. When asked "How long will it take to test this?" you could say, "That depends on how much you want it tested." Or better yet, "I'll look it over and we can decide which tests to run in order to get it out as fast as possible."

Chris Kenst said...

Writing test cases only to leave some out would be a waste of precious time. You'd figure out the areas to test in the given time and focus your testing (which includes test case writing) on those. The tradeoffs you make between time and testing will depend on various factors, including how many fixes you have to retest before release.

You're correct that this all depends on management understanding their role. If they ignore what you tell them, it's pretty easy to say what won't be tested. It's also not much use to the tester if management says "get this tested ASAP"; it's similar to saying "throw someone else in to help" in that it makes people less productive because you don't have an end goal to focus on.

mpalmerlee said...

I agree that you wouldn't want to write full test cases just to not actually exercise them, but