Wednesday, July 25, 2012

Rapid Testing Intensive 2012: Day 2

9:00 AM - Start of the day. James is talking about what we did yesterday; he's built a mind map. James and Jon are going over our schedule - gonna try to stick to it better than we did yesterday.

9:23 AM - Jon is doing a debrief from yesterday / project check-in. Talking about how good the bugs are that were filed. Nice job.

9:30 AM - Reviewing the TCOs from yesterday. Don't update them if it's going to cost too much. Being critical of one TCO that, according to James, could be affected by Visual Bias - testers only test the things they can see. This is why we use heuristic test strategies.

9:40 AM - The Lighting Center and Wheel Center for eBay Motors have been added to the scope of My Vehicles and Tire Center. Survey-the-functionality session until 10:45 AM. Modify your TCOs.

10:45 AM - Break time!

11:00 AM - eBay has been making updates to our bugs so make sure you take a look. eBay says they are seeing some cool bugs! The instructors will comment on what people post in Jira. You can start TCOs in a brainstormy kind of way. You can make edits later.

11:06 AM - You can assign risks to your TCO. You can assign risk to your survey of the product based on probability and impact. Make a list of the possible bugs and then group them together. Are they bad or not so bad?
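The probability-and-impact idea above can be sketched in a few lines of code. This is a hypothetical illustration, not anything shown in class: the risk names, the 1-5 scales, and the cutoff of 10 are all invented.

```python
# Hypothetical sketch: score risks by probability x impact (each 1-5),
# then group them into "bad" vs "not so bad" buckets.
# Risk names, scales, and the cutoff are invented for illustration.

def risk_score(probability, impact):
    """Combine probability and impact into a single score."""
    return probability * impact

risks = {
    "search returns irrelevant listings": (4, 3),
    "page load times too slow": (3, 4),
    "data loss in saved vehicles": (2, 5),
    "minor layout glitch": (4, 1),
}

# Arbitrary cutoff: a score of 10 or more counts as "bad".
grouped = {"bad": [], "not so bad": []}
for name, (prob, imp) in risks.items():
    bucket = "bad" if risk_score(prob, imp) >= 10 else "not so bad"
    grouped[bucket].append(name)

for bucket, names in grouped.items():
    print(bucket, "->", sorted(names))
```

The point isn't the arithmetic; it's that once bugs are scored and grouped, you can argue about the buckets instead of about individual worries.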

11:10 AM - Risk brainstorming time. First make a list of the kinds of bugs you worry about, then summarize them into risk areas (James says 5-12). Then create a component risk analysis based on your TCO.

11:45 AM - Debrief on the risk brainstorming. If you get stuck with a blank slate you can use the Heuristic Test Strategy Model's Quality Criteria categories to get un-stuck. Rob is describing how his team built their mind map and how they created their risk outline. The artifacts you come up with aren't as important as the mental preparation you go through and how it gets you ready.

12:01 PM - Time for Lunch.

1:12 PM - James talking about models and his Test Testing Framework diagram that he started talking about yesterday. Talking about deriving test cases from requirements as if there were no thinking going on makes it sound like there is a mathematical procedure, which there isn't. We shouldn't talk about testing that way. Different kinds of modeling and experiment design drive learning and new test cases. Confusion and struggling are a normal part of the learning experience, and James' diagrams help him work through the confusion.

1:29 PM - Risks from eBay Motors according to James' mind map: Usability, Feature Capability, Performance (page load times), International Consistency, Compatibility, Feature Consistency and Data Integrity. Participants will have risk categories that James missed.

1:38 PM - Rapid Testing focuses on Test Activities - the types of things you do when testing. Those things could map to test cases as long as it's something done by a tester and not a tool. A human using a tool is a test activity, but a tool by itself is not.

1:45 PM - James will talk about his and Michael Bolton's new ideas on what an oracle means - a medium. You interpret the product for the people whose opinion matters; you are an agent.

1:48 PM - James made a note that he needs to put up the most recent slides for RST. They aren't online yet.

2:03 PM - Jon is setting up a risk exercise: find a search result within one of the centers (Wheel, Light, Tire) which is not relevant to the query. Maybe the seller has put it in the wrong category. Online participants get to check the UK and DE sites versus US and perform an international comparison.

2:45 PM - End of risk exercise and time for a break.

3:00 PM - Back from break and James is doing a brief on a participant's search. The person who likes learning something new is going to be better the next time.

3:13 PM - Talking about oracles - most specifications will not contain oracles. Here comes the calculator question. "What do you expect from a calculator when someone enters 2 + 2?" There is a difference between expected and unexpected. You may expect the calculator to return the number 4, you may expect the calculator to remain on long enough so you can read the answer, you may expect the calculator not to blow up in your face, etc. Many expectations are inherent, and you aren't aware of them until they are violated.
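The calculator point can be made concrete as a tiny sketch: each assertion below encodes one of the tacit expectations. The `calculator_add` function is invented for illustration; it stands in for any product under test.

```python
# Hypothetical sketch: the "expected result" for 2 + 2 is really a
# bundle of expectations, each of which is a separate oracle.
# calculator_add is an invented stand-in for the product under test.

def calculator_add(a, b):
    return a + b

result = calculator_add(2, 2)

assert result == 4              # the obvious expectation
assert isinstance(result, int)  # a number, not text or an error
# A real oracle list goes further: the display stays readable, the
# app doesn't crash, etc. - expectations we only notice when violated.
print("all explicit expectations held")
```

The asserts capture only the expectations we thought to write down; the ones we didn't are exactly the ones James is warning about.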

3:26 PM - 10 Consistency Heuristics from the RST slides. The purpose isn't to teach someone to test but to help someone explain - you can push back against the process bullies. There is no oracle guaranteed to always solve a problem; they can't be perfect.

3:32 PM - Test activities in which we will use Oracles. Jon Bach and I are going to pair test in front of the room!

4:18 PM - Done! As Jon says, it seemed like we were going for less than 10 minutes! Jon is debriefing our live paired exploratory testing session. Someone managed to get a photo of our pair test session:

4:21 PM - James says we are going to use these reports to document test sessions. SBTM packages exploratory testing into session units because sessions of a fixed length give you a meaningful way to compare test activities. Sessions are relatively stable compared to a test case - no orders-of-magnitude differences. Sessions should be logically uninterrupted rather than physically uninterrupted; Jon and James had a little bit of an argument about this, but if you get interrupted you get that time back and continue.

4:35 PM - If you get chronically interrupted you can't do SBTM, but you can do thread-based test management, which is testing with checklists. TBTM includes a list of test activities arranged in a mind map, and you service the threads as you go. You can't count threads, but together they define the testing story. Artifact-based test management is where you test based on counting test cases - something James tries to get companies away from. You also have activity-based test management (which includes SBTM and TBTM) and people-based test management.
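The "fixed-length sessions make effort comparable" idea from SBTM can be sketched as simple accounting. This is a minimal illustration, not James and Jon's actual tooling: the session length, charter names, and record fields are all assumptions.

```python
# Hypothetical sketch of session-based accounting: because every
# session is a fixed-length unit (say 90 minutes), counting sessions
# per charter gives a rough, comparable measure of test effort.
# The session length, charters, and fields are invented for illustration.

from collections import Counter

SESSION_MINUTES = 90  # assumed fixed session length

sessions = [
    {"charter": "survey Tire Center search", "tester": "A"},
    {"charter": "survey Tire Center search", "tester": "B"},
    {"charter": "US vs UK site comparison", "tester": "A"},
]

effort = Counter(s["charter"] for s in sessions)
for charter, count in effort.items():
    print(f"{charter}: {count} session(s), ~{count * SESSION_MINUTES} min")
```

Contrast this with counting test cases, where one "case" can be a hundred times bigger than another: counting sessions only works because the unit is roughly uniform.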

4:40 PM - We will have test charters and will work from them tomorrow.

4:43 PM - James talks about humility, specifically epistemic humility.

4:45 PM - James says the way you manage these sessions is by managing the charters. You can list the charters, mind map them, organize them by activity or risk, etc. Jon shows his and James' set of charters; they've created a grid with details on the sessions.

4:50 PM - How do you ensure that the notes for sessions are readable? You train the testers. Tell the story of your testing briefly, in a reasonably sharp way. You can use the last 10 minutes of a session to finish up your notes; if you are taking longer than that, you are taking too many notes.

5:02 PM - Done. Dice game tonight!

7:00 PM - 9:30 PM - After hours, game night! James and Paul (Holland) started us off on the "mind reading" game, and the 10 of us figured it out in maybe 20 minutes - according to James, "the quickest of any of his students". Then we moved on to a series of dice games with escalating challenges, where we tried stumping Jon Bach and Paul Holland by creating our own!

Photos from the event have been posted on Flickr:
Check out the other days:
