Thursday, 18 December 2008

Getting from the strategy to the tests

Our test team have had a bit of feedback, which was nice, but it's left me struggling to think of solutions.

The feedback: "I'd like to see how the test team go from the test strategy, to the test cases, and I'd like that process to be more visible to a wider audience".

This sounds like a request for a test design walk-through, which we don't do, and never have. This part usually just lives in the tester's head, doesn't it? We write a strategy which says what we're going to test, how long it'll take, how many testers we need, what environments we'll test on and how much it'll cost. Then we get some testers, or scripters, and we write the tests before we get the product and start testing. If we're lucky, a developer might review some of our tests or planned coverage and highlight some inaccuracies, but that's usually as good as it gets. And now someone wants us to make that whole thought process more visible; a reasonable request, I guess.

The reality is, I haven't a clue how we'd do this, and I haven't heard of anyone doing it elsewhere. Are people familiar with this kind of process? Do people need to do this? If so, how have you done it?

There's no reason why we couldn't write another document which details how we get from the strategy to the tests, maybe discussing risks and types of testing along the way. But when we feel we don't get much input into the strategy ourselves, it's going to be difficult to think of reasons to justify yet another document...

Wednesday, 17 December 2008

Wordle of my blog...

Inspired by Corey Goldberg's blog I did a Wordle for my blog (below) - quite interesting I think!

I wonder if team objectives show up anything...

Tuesday, 16 December 2008

Mindsets continued: Training testers how to think around problems

I’ve been doing a lot of reading around testing for a good couple of years, and I’m only just starting to get some clarity around the issues which face us as testers. When I read about testing I come across material which details ways to resolve problems, usually in isolation; very rarely does it tie up with other problems, which is where it becomes difficult to understand how all these models and techniques can come together.

For example, this post highlights two solutions to two problems, and is one of the few articles I’ve seen which links the two and shows how they overlap.

Problem 1: You don’t know what to test, or how the program works, or what to test first.
Solution 1: Use a mnemonic, or an application tour.
Problem 2: How do you transform those tours into tests?
Solution 2: You apply an oracle.
Result: Provide the tester with the tours, or mnemonics, and the possible oracles, and they can start to test a system using their own brain, rather than a prescriptive test.
Problem: It’s dangerous to assume that a tester understands all the problems detailed above:
  • I can’t test everything
  • Hold on, what is everything?
  • Ok, how do I now choose what to test?
  • Sometimes I won’t know where to start
  • Sometimes I won’t know whether I’ve got a problem or not
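The tour-plus-oracle pairing above can be sketched in a few lines. Everything here is hypothetical and deliberately tiny: toy "tours" that propose where to look, a toy "consistent with previous product" oracle, and two made-up builds standing in for old and new versions of a program.

```python
# Hypothetical sketch: a tour suggests observations, an oracle judges them.
# All names and data are illustrative; real tours and oracles are far richer.

def fast_train_tour(program):
    """Take the quickest route: observe just one typical input."""
    return [("typical", program("typical"))]

def slow_train_tour(program):
    """Stop and observe every input we know about."""
    return [(i, program(i)) for i in ("typical", "empty", "huge")]

def consistency_oracle(observed, previous_version):
    """'Consistent with previous product': flag outputs that changed."""
    return [(inp, out) for inp, out in observed if out != previous_version(inp)]

# Toy programs standing in for the old and new builds.
old_build = lambda inp: inp.upper()
new_build = lambda inp: inp.upper() if inp != "empty" else ""

suspects = consistency_oracle(slow_train_tour(new_build), old_build)
print(suspects)  # only the 'empty' input behaves differently between builds
```

The point isn't the code; it's that the tour decides *where the tester looks* and the oracle decides *whether what they saw is a problem*, and a tester needs both before their own brain can do the rest.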
In my experience, I worry that tester training is somewhat disjointed, and that we don’t do a very good job of educating new testers on the key problems which face testers, and the techniques they can use to solve those problems. Is it that we teach new testers how to run the tests, rather than how to think about what tests they need to run, and how to decide whether the results are problems which need escalating? Is it that we teach the solution, which is often inflexible, rather than how to think of a solution, which is infinitely more flexible?

The AST Black Box Software Testing Foundations course, along with some of the reading I’ve recently done, has highlighted some inadequacies in how I’ve seen testers trained, and how I’ve been trained previously, which makes me want to further understand what kind of knowledge and understanding any tester should have before they pick up a mouse and start testing on a project.

Therefore, I’d be interested in hearing how other teams train new testers, or what wisdom or reading experienced testers look back on as material which gave them clarity on what it means to be a tester, and the techniques we can use to overcome the problems we’re exposed to.

Finally, apologies for a fairly unstructured blog, free time + energy on topic = blog!

Friday, 12 December 2008

Testing: It’s largely about the mindset, isn’t it?

I was sat on the train to the recent SIGIST, thinking about what a colleague had told me about James Whittaker’s talk at EuroSTAR on the topic of Exploratory Testing, and how Microsoft are trialling this Exploratory Tour idea. The idea, I think, is fantastic, and I understand that James is writing a book which will cover the tours in detail. So, rather than talking about those, I want to use this blog to look at how I took the concept and applied it to the nearest thing I had that very morning when travelling to SIGIST: the train…

To recap: James Whittaker of Microsoft looked at using tour analogies to focus Exploratory Testing. Having reflected on the idea, it seems to be very much about the mindset the tester adopts when designing or executing tests, and ultimately it doesn’t matter whether he or she is doing ET, some form of prescriptive testing or an automated test; the mindset is still important and can be consistent across all forms of testing.

So, how can the mindset, or approach to testing, be linked directly with trains? Well, here’s what I came up with (and this was at 7.15am, so I make no apologies if it’s complete tosh):
  • The fast train tour: Take the quickest / shortest route through the program
  • The slow train tour: Stop and observe (test?) each point in the program
  • The Bullet Train tour: Review each function for performance; how quickly does this program perform?
  • The Flying Scotsman tour: See the heuristic oracle, “consistent with previous product”
  • The Rush hour tour: Apply lots of data / input at every opportunity (stress / load testing?)
  • The Viaduct tour: High level view of the system, what might a customer’s first impressions of this product be?
  • The Underground tour: Look at the program from a lower level; focus on data, the code…
  • The Rotate train tour (I don't know what these are called!): Ensure a consistent change of direction, maybe through testing navigation, whilst testing the program
  • The Signal Point tour: Test for correct instruction / messaging from the application
  • The Impatient Commuter tour: test for any unexpected delays in operation, possibly whilst performing the fast train tour
  • The Packed Station Tour: Get lots of users using the program, how does it react? How good is the information provided to different users? How do they perceive the usefulness of the information provided?
Irrespective of the tours themselves, I believe this kind of thinking would aid consideration of different types of tests; again, different mindsets are required to test an application. They’re almost heuristics, I guess…
Can you think of any more? How can you use this type of thinking to inspire thought around tests?
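To show how a test lead might actually hand these out, here's a small sketch that treats a few of the tours above as data and turns a tour plus a product area into a one-line session charter. The tour names come from this post; the descriptions are my paraphrases, and the `charter` helper is purely illustrative.

```python
# Hypothetical sketch: tours as data a test lead could hand to a tester.
# Tour names are from the post; descriptions are paraphrased, not official.

TOURS = {
    "fast train": "take the quickest / shortest route through the program",
    "slow train": "stop and observe each point in the program",
    "rush hour": "apply lots of data / input at every opportunity",
    "underground": "focus on data and code at a lower level",
}

def charter(tour, area):
    """Turn a tour plus a product area into a one-line session charter."""
    return f"Explore {area} using the {tour} tour: {TOURS[tour]}"

print(charter("rush hour", "the import dialog"))
```

Even something this simple makes the mindset explicit before the tester sits down, rather than leaving it implicit in their head.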

Final note: this is an expansion of James Whittaker / Microsoft’s idea, and I merely wish to build on, not take credit for, this type of thinking…

Wednesday, 10 December 2008

An informal research project: Understanding code coverage achieved through tests

Yesterday I attended the December SIGIST, and of particular interest was a talk by Microsoft’s BJ Rollison which looked to dispel a number of myths about Exploratory Testing. One thing in particular which interested me was where BJ referred to an experiment they’d been conducting, looking at the kind of code coverage testers could achieve on a program when doing either exploratory or scripted testing.

Now, I’ll not go into the detail in this post, but it got me thinking about code coverage and how, within our team, we never look at it; we don’t try to understand how much of the code our (the test team’s) tests cover. Therefore, I’m planning to run an informal research project which investigates just this.

It’s important to understand that I’m not going to use this to turn around and say, “Hey, we’ve achieved 80% code coverage so let’s release!”. No, what I want to understand is the rough numbers, and to use them to understand more about how we test. If on component X we’re achieving 20% code coverage, then I want to understand what that means. Should we be able to cover more? Are the developers covering the other 80%? Are we releasing products which have masses of untested code in them? Either way, I expect this will tell us something.
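The arithmetic behind those rough numbers is simple, whichever coverage tool ends up collecting the data. Assuming a tool has already recorded which lines each test run executed, the percentage is just executed lines over executable lines; the line numbers below are made up for illustration.

```python
# Illustrative only: computing a line-coverage percentage from data a
# coverage tool has already collected. All figures here are invented.

def coverage_percent(executed, executable):
    """Executed lines as a percentage of all executable lines."""
    if not executable:
        return 0.0
    return 100.0 * len(executed & executable) / len(executable)

executable_lines = set(range(1, 101))   # component X: 100 executable lines
test_team_hits = set(range(1, 21))      # our tests touched lines 1-20

print(coverage_percent(test_team_hits, executable_lines))  # 20.0
```

The interesting part isn't the sum; it's comparing our 20% against what the developers' unit tests hit, and asking who, if anyone, covers the rest.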

Whilst I hope the figures are positive, that is, that our tests cover a considerable amount of the code, part of me thinks a low figure may unearth some questions about how we design tests, and what information we should be using to ensure our tests gain maximum code coverage.

I’m unsure how feasible this will be, but I’ll keep the blog updated with any progress we make along the way…

Monday, 8 December 2008

Post Release Defects: What have I learnt from them?

A while ago I blogged about post release defects and how, as test teams, we can learn from the defects we didn’t spot prior to the release of the product. I think it’s important, as a tester, to continue to review your testing after the product’s been released, because there are questions we can ask of ourselves, such as: should we have tested for that? Why didn’t we spot that? What might we have done differently? And I’ve performed such an assessment on a project I’ve recently worked on.

The first problem I came across was trying to capture the issues which are found after the release of a product. Due to the large number of support calls being made, I decided that any issue which resulted in a software update, or a support document, would be counted in my measures. Because it was such a large project, released with a number of known issues, it wasn’t feasible to count every support call logged (although this might be feasible for smaller products).

This gave me a list of around 100 issues to review, which I split in to the following categories:
  • We found it before release (we knew the defect existed prior to release)
  • We didn’t find it; we could have found it, but we wouldn’t expect to find it
  • We should have seen it (we missed it in our coverage)
  • We didn’t see it, we couldn’t have seen it and we shouldn’t have seen it

By doing this, we were able to understand that 60% of the defects which were either fixed in updates, or which required support articles to inform customers of issues or provide a work-around, were known prior to release; we had decided to release with those defects in the system. As a test lead this was useful to know: it gave me confidence in our coverage and our decisions, and backed up the messages we’d been delivering prior to release of the product, i.e. these are bugs which will impact customers!

The other categories allowed us to understand some gaps in our testing: there were defects which were simply down to gaps in coverage, a scenario which hadn’t been tested, or a type of hardware which hadn’t been included, and these we can learn from. On top of this there were some defects we wouldn’t expect to find, such as defects caused by inconsistencies with 3rd party products, or defects in areas of the product which had been de-scoped from the testing effort earlier in the project (due to time pressures).

So, to conclude: this post-release defect review gave us confidence in the testing performed, something we’d started to question because of the volume of defects being found by customers. It also highlighted some gaps in our coverage which we can learn from in future projects, and finally, it told us that, generally, we’d done a pretty good job. For those who use DDP (Defect Detection Percentage), ours was around 95%.
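For anyone who hasn't met DDP before, the sum is straightforward: defects the team found before release as a share of all defects found, before and after. The figures below are made up to show the shape of the calculation; they are not the project's real numbers.

```python
# Defect Detection Percentage (DDP): defects the team found before release
# as a percentage of all defects found, pre- plus post-release.
# The example figures are invented, not the project's actual counts.

def ddp(found_before_release, found_after_release):
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

print(round(ddp(1900, 100), 1))  # 95.0 with these made-up figures
```

Note that DDP can only ever be provisional; it drifts downwards as customers keep finding defects, which is one reason to re-run a review like this some months after release.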

To keep this readable I’ve excluded a lot of detail, and decision making, from the process above but if anyone’s interested in knowing more I’d be happy to discuss on a 1-2-1 basis, or via another blog if the interest is there.

Saturday, 6 December 2008

Evolution of Exploratory Testing

I wanted to talk a little bit about what we’re doing with Exploratory Testing where I work, where it’s come from, where I think its heading and what we’ve done so far.

I’ve been at my current company for around 7 years, and as far as I can remember “exploratory testing” has been around the whole time, but for some reason, until recently, it’s always been used in an “ad-hoc” way, usually bolted on to the end of a largely scripted test cycle as something of a last-minute bug-hunt exercise.

Around 3-4 years ago a colleague of mine attended a couple of Exploratory Testing workshops held by James Lyndsay in London, and from his attendance at those we developed a template which allowed test leads to structure an exploratory test; it highlighted the areas of a program on which a tester should focus when “exploratory testing”.

The next change in how we used Exploratory Testing came around 12 months ago, when we were doing a large project (2-3 years of development and testing). We’d been asked to de-scope our testing effort because our test cycles were too large, so one of the things we looked at was reducing the cycle by doing more exploratory testing. We looked at one area of the program in particular, realised our scripted tests weren’t telling us much new about the product, and decided we’d stop scripted testing and start exploratory testing that area in each test cycle. We had the benefit that the testers assigned to that area had a lot of product knowledge, so our only dilemma was how to plan each cycle and report progress.

This is where we came upon James Bach’s Session-Based Test Management, and we used the basics of it as a way of planning. We split the area of the product into chunks, turned the chunks into time-boxed sessions, set the testers some guidelines for items within the chunk which they should ensure they looked at, and followed each session with a de-brief where the tester discussed what he, or she, had tested with the test lead. This allowed the test lead to understand what had been tested, ask questions and understand defects, and it also allowed the tester to request more time if they felt they’d run out.
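The session bookkeeping described above fits in a very small structure. This is a sketch of how one of our sessions might be recorded; the field names are mine, chosen for illustration, not official SBTM terminology.

```python
# Hypothetical sketch of session-based planning: a chunk of the product
# becomes a time-boxed session, followed by a de-brief with the test lead.
# Field names are illustrative, not official SBTM terms.

from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str                  # what the tester should focus on
    timebox_minutes: int = 90     # agreed time-box for the session
    areas_to_cover: list = field(default_factory=list)
    debrief_notes: str = ""       # filled in after the tester/lead discussion
    extension_requested: bool = False

session = Session(
    charter="Explore the report export area",
    areas_to_cover=["PDF output", "CSV output", "cancel mid-export"],
)

# After the session, the de-brief captures what was found and whether
# the tester felt they ran out of time.
session.debrief_notes = "Two issues in CSV quoting; PDF output fine."
session.extension_requested = True
```

The value isn't in the record itself; it's that the de-brief conversation happens at all, and the record gives the lead something concrete to ask questions against.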

So where do we head next? Well, I still don’t think we’re doing Exploratory Testing as it’d be described in the industry; I don’t think we’re in a place where we’re doing “simultaneous learning, test design and test execution”, and that’s where I’d like us to be. I’d like us to have testers who can sit at a program, use their array of test techniques and tools, and test a product by using the results of their last test to guide the focus of their next test. But we’re not there yet.

We’re looking at expanding how we use Exploratory Testing by trialling Microsoft’s Exploratory Tour idea, which looks like it does an excellent job of setting a tester’s mindset ahead of testing, and this is key: I think mindset has a huge part to play in the quality of testing a tester performs. I’m hoping we can further understand how Microsoft use their tours and then either use the idea as-is, or adapt it, and trial it on a few projects, tying it in with SBTM; another advancement in how we do Exploratory Testing.

My goal is to see Exploratory Testing treated as an equal to scripted testing within our test team, and the wider division, which isn’t something we’ll achieve overnight. However, our use of Exploratory Testing is progressing, and it’s exciting to see how far it’s come, and where it may end up.