Matt Heusser sent the context-driven-testing email group a series of questions about handling regression testing. Specifically he asked:
How do your teams handle regression testing? That is, testing for features /after/ the 'new feature' testing is done. Do you exploratory test the system? Do you have a standard way to do it? Super-high level mission set? Session based test management? Do you automate everything and get a green light? Do you have a set of missions, directives, 'test cases', etc that you pass off to a lower-skill/cost group? (or your own group)? Do you run all of those or some risk-adjusted fraction of them? Do they get old over time? Or something else? I'm curious what your teams are actually doing. Feel free to reply with /context/ - not just what but why - and how it's working for you. :-)

I thought it was a good line of questioning, so I responded to Matt:
I worked for a small software company where I was the only tester among (usually) 3-4 developers. Our waterfall-ish development cycles ran a month or more in development, with about the same time given to testing. After the new features were tested, if we had time for regression testing, we did it through fairly high-level exploratory testing. I don't think I ever wrote out a "standard way" to do it, but it fit into my normal process of trying to anticipate where changes (new features, bug fixes) might have affected the product. If I had understood SBTM at the time, I would have used it.
We've never gotten around to automating regression testing. Part of that has to do with time constraints - it's a small company, I wear multiple hats, I could have managed my time better, etc. Other reasons involve not really knowing how to approach designing a regression suite. I've used Selenium IDE in the past, but automating parts of our web application's GUI isn't possible without changes to the application itself.
When I've had test contractors, we used a set of missions to guide the group so everyone could hit parts of the application using their own understanding and skill (although this was our approach, I don't think it was necessarily an understood approach =).) All of the testing, regression or otherwise, was prioritized in some sort of risk-based fashion.
Mostly I feel like I don't know enough about automated checking and how to design my testing approach to include an appropriate amount of automation. It seems reasonable to assume some automated regression checking could help provide some assurance that the build hasn't changed for the worse (at least for the areas you've checked). Although I continue to commit time to learning more about testing, I haven't committed much time to learning about automation, and I believe it's to my detriment. I guess I know where to focus more.
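As a sketch of what I mean by automated regression checking, a first step doesn't have to involve the GUI at all. A "golden values" check pins down known-good outputs from a build you trusted, so a later build that silently changes them fails loudly. (The function here is hypothetical, a stand-in for any piece of application logic - it's not from our product.)

```python
import unittest

# Hypothetical function under test -- stands in for any application
# logic whose current behavior we want to pin down.
def format_invoice_total(amount_cents):
    """Format a total in cents as a dollar string, e.g. 1999 -> '$19.99'."""
    dollars, cents = divmod(amount_cents, 100)
    return f"${dollars}.{cents:02d}"

class InvoiceRegressionChecks(unittest.TestCase):
    """Known-good outputs captured from a build we trusted.

    If a future change alters any of these, the check fails and a human
    decides whether the change was intended.
    """

    def test_known_good_values(self):
        golden = {
            0: "$0.00",
            5: "$0.05",
            1999: "$19.99",
            100000: "$1000.00",
        }
        for cents, expected in golden.items():
            self.assertEqual(format_invoice_total(cents), expected)
```

Run with `python -m unittest` from the directory holding the file. It's not the whole answer - it only gives assurance for the areas you've checked - but it's the kind of small, repeatable check I mean above.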