Saturday, 6 December 2008

Evolution of Exploratory Testing

I wanted to talk a little about what we’re doing with Exploratory Testing where I work: where it’s come from, where I think it’s heading, and what we’ve done so far.

I’ve been at my current company for around 7 years and, as far as I can remember, “exploratory testing” has been around for the whole time. For some reason, though, until recently it was always used in an “ad-hoc” way, usually bolted on to the end of a largely scripted test cycle as something of a last-minute bug-hunt exercise.

Around 3-4 years ago a colleague of mine attended a couple of Exploratory Testing workshops held by James Lyndsay in London. From his attendance at those we developed a template which allowed test leads to structure an exploratory test; it highlighted the areas of a program on which a tester should focus when “exploratory testing”.

The next change in how we used Exploratory Testing came around 12 months ago, during a large project (2-3 years of development plus testing). We’d been asked to de-scope our testing effort because our test cycles were too large, so one of the things we looked at was reducing the cycle by doing more exploratory testing. We looked at one area of the program in particular, realised our scripted tests weren’t telling us much new about the product, and decided that we’d stop scripted testing and instead test that area exploratorily in each test cycle. We had the benefit that the testers assigned to that area of the product had a lot of product knowledge, so our only dilemma was how to plan each cycle and report progress.

This is where we came upon James Bach’s Session-Based Test Management, and we used the basics of it as a way of planning. We split the area of the product into chunks, turned the chunks into time-boxed sessions, set the testers some guidelines for items within each chunk which they should ensure they looked at, and followed each session with a de-brief where the tester discussed what he or she had tested with the test lead. This allowed the test lead to understand what had been tested, ask questions, and understand defects, and it allowed the tester to request more time if they felt they’d run out.

So where do we head next? Well, I still don’t think we’re doing Exploratory Testing as it’d be described in the industry; I don’t think we’re in a place where we’re doing “simultaneous learning, test design and test execution”, and that’s where I’d like us to be. I’d like us to have testers who can sit at a program, use their array of test techniques and tools, and test a product by using the results of their last test to guide the focus of the next test they perform – but we’re not there yet.

We’re looking at expanding how we use Exploratory Testing by trialling Microsoft’s Exploratory Tour idea, which looks like it does an excellent job of setting a tester’s mindset ahead of testing. This is key: I think mindset has a huge part to play in the quality of testing a tester performs. I’m hoping we can further understand how Microsoft use their tours and then either use the idea as-is, or adapt it, and trial it on a few projects – tying it in with SBTM as another advancement in how we do Exploratory Testing.

My goal is for Exploratory Testing to be seen as an equal to scripted testing within our test team, and the wider division, which isn’t something we’ll achieve overnight. However, our use of Exploratory Testing is progressing and it’s exciting to see where it’s come so far, and where it may end up.

3 comments:

Philk said...

Good to see you blogging, hope you continue as it's off to an interesting start

wvole said...

Simon, what you are describing would certainly qualify as industrial strength exploratory testing, particularly based on James Lyndsay's and the Bach Bros ideas. The definition is a "purist" one to some degree, as the reality of needing to support risk based testing (test most important stuff first), coverage goals (test across the app to assess quality), and synchronize testing by multiple people (using prioritized list of charters [or test ideas for chunks], typically revised and added to during testing) detracts from it, but it is still relevant within that scope.
Are you screen grabbing or videoing sessions so they can be revisited or replayed, in case bugs are missed the first time through? Most importantly, are you leveraging bug clusters? See my talk www.tinyurl.com/80-20-rules for more.... Once you start finding bugs, that will typically dictate where the next tests should be!
cheers,
Erik

Simon Godfrey said...

Hi Erik,

We screen grab for defects and video sessions (well, screen record) occasionally, but not consistently.

Part of what I want to work on is a consistent approach to our exploratory testing and I think these types of ideas will be picked up by that.

Your point's interesting; maybe I'm chasing a purist ideal, rather than applying a good idea to the context in which we need to operate?

Thanks for replying, the inspiration to blog will come from people taking an interest in what I have to say!

Simon