Wednesday, 21 January 2009

Understanding risks

I’ve just been reading a blog post by BJ Rollison of Microsoft (http://blogs.msdn.com/imtesty/archive/2009/01/19/the-minefield-myth-part-1.aspx) and it struck a chord with me, touching on some of the concerns I have about regression testing.

When I first came across the minefield analogy I really liked it. I thought, “That’s what we’re doing. We’re running tons of regression tests and they’re not finding many bugs, so why don’t we take more of an exploratory approach?” Then I read BJ’s post and thought, “You’re right, regression testing does have its place!” But then I found myself wondering about the age-old problem of whether a test team ever has enough information to fully understand the risks.

So you run a bunch of tests (you wander through the minefield): you find some mines, you miss some mines, the mines you find are removed and the mines you don’t find are still there. Some mines aren’t removed properly and so still exist, probably pretty close to where you found the original. Then someone makes changes to the minefield: maybe they change its size, maybe they move some of the mines, maybe they change what the mines look like. Now you’re asked to scour the minefield and check for mines again, but nobody tells you what’s changed. How do you do that?

If you follow the same paths you may find some of the mines you missed before; if you look closely you may find some of the mines that weren’t removed properly; and if you’re lucky you may stumble across some of the mines introduced when the changes were made.

Here’s my problem: I don’t believe that, as testers, we get enough information about what’s changed. We know which defects should have been removed, so we can re-test them. We know where defects clustered before, so we can run regression tests in those areas. But what we don’t get, in my opinion, is enough information about what’s changed between test cycles. You exit a test cycle, you prepare your test environments and you want to revise your strategy, probably based on an updated risk assessment, but you can’t. You can’t because nobody seems able to tell you what’s changed: you don’t know how big the minefield now is, you don’t know what the new mines look like and you don’t know how many new mines have been added to the field.
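
For what it’s worth, one partial workaround, if the product’s source lives in a version control system such as git, is for the test team to dig out a rough picture of the changes themselves. The sketch below is purely illustrative, not something from any particular project: it assumes a git repository and two hypothetical release tags (“release-1.0” and “release-1.1”), and simply counts the changed files in each top-level area of the code.

    import subprocess
    from collections import Counter

    def changed_files(repo_path, old_tag, new_tag):
        """Return the files that differ between two release tags (assumes a git repository)."""
        result = subprocess.run(
            ["git", "diff", "--name-only", old_tag, new_tag],
            cwd=repo_path, capture_output=True, text=True, check=True,
        )
        return [line for line in result.stdout.splitlines() if line]

    def change_map(files):
        """Group changed files by top-level directory: a crude map of the new minefield."""
        return Counter(path.split("/")[0] for path in files)

    if __name__ == "__main__":
        # "release-1.0" and "release-1.1" are hypothetical tag names; substitute your own.
        files = changed_files(".", "release-1.0", "release-1.1")
        for area, count in change_map(files).most_common():
            print(f"{area}: {count} changed file(s)")

Even a crude tally like this is only a hint at where the new mines might be, but it’s a better starting point for an updated risk assessment than no information at all.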

This uncertainty costs testers time, effort and, of course, money. A lack of information leads to uncertainty, which leads to over-testing, often through masses of regression tests, and to a wild-goose chase which may not turn up any useful information. Testers thrive on information: information about risks, information about requirements and information about what’s changed. Until we get better at asking for that information, or the holders of that information become more sympathetic to our needs, we’re always going to be less efficient and less useful than we could be.

3 comments:

Phil said...

I have been in the situation where, even when I asked what had changed, the devs couldn't tell me for sure, as the code was such an unmaintainable pile of spaghetti that they never knew what effects changing a line of code would have.

This is when testers start to become QA, as without some sort of process life is just impossible.

Michael Bolton (http://www.developsense.com) said...

You've described the problem well, Simon. How are you going to fix it?

To Phil: "I have been in the situation where, even when I asked what had changed, the devs couldn't tell me for sure, as the code was such an unmaintainable pile of spaghetti that they never knew what effects changing a line of code would have."

May I suggest that you're done testing at this point? (Why? If you haven't already done so, read Weinberg's Perfect Software: And Other Illusions About Software Testing for the answer.)

"This is when testers start to become QA, as without some sort of process life is just impossible."

Danger! Unless I'm a manager, I have grave hesitations about trying to run the project or telling other people how to do their work, especially when I am not very experienced in the work that they do. My job, as a tester, is to provide information to management, especially about risk. My job is explicitly not to run the project. I don't have authority over schedule, budget, hiring, firing, training, product scope, market responses, contractual obligations, business relationships, and the like; nor do I have information about them; nor am I paid to make decisions about them. Limited experience + no authority = zero credibility.

What you've done in the situation you describe above, revealing that the code is an unmaintainable pile of spaghetti, is to provide information. Management may (or may not) choose to make decisions and to act on that information. Those decisions, and the actions that follow, are management's responsibility. I'd caution strongly against usurping that responsibility.

---Michael B.

Phil said...

Fair comment - but in my situation I did have the authority, having been in the place for 'x' years and having had to deal with the spaghetti. So, in your equation, I did have lots of credibility, some authority and a lot of information.

But your point is well taken.