Tuesday 27 January 2009

Back to school

Recently I blogged about sharpening the saw, where I mentioned that I wanted to start learning to code in VB as it was an area I'd identified as a weakness in my skill set (not the only weakness, I might add...).

Then, on Friday, I read this blog (Don't Forget to Test), which reminded me of another concern of mine: I haven't tested in ages. I've been in a management role for 18 months and, although I spent 12 of those months still doing some test leading, I haven't been doing any hands-on testing at all. This worries me; I'm meant to be in a position where I can coach testers on most things testing, yet I've not done any testing myself for a long time...

I've had this dilemma before and figured it wasn't feasible due to time and other constraints, but I've decided to challenge that. I've offered my team the chance to have a free tester, i.e. me (I won't put my time against the project), for a little bit of time each week to help them do some testing. I said I'd be more than happy to run some ET tests which they may be struggling to find the time to run, or to give a fresh perspective on some aspect of a program - whatever they want, really.

We'll see how this goes and I'll come back and blog about it. So far, the single biggest challenge I've found in team leading is staying up to date with the challenges the team face on a day-to-day basis. You can read all the blogs you like, but unless you allow yourself a little time to test, I'd worry you could end up removed from the realities of life as a tester!

Wednesday 21 January 2009

Understanding risks

I’ve just been reading a blog from BJ Rollison of Microsoft (http://blogs.msdn.com/imtesty/archive/2009/01/19/the-minefield-myth-part-1.aspx) and it struck a chord with me, touching on some of the concerns I have with regression testing.

When I initially came across the minefield analogy I really liked it. I thought, “That’s what we’re doing. We’re running tons of regression tests and they’re not finding many bugs, so why don’t we take more of an exploratory approach to it?” Then I read BJ’s blog and I think, “You’re right, regression testing does have its place!”, but then I find myself wondering about the age-old problem of a test team having enough information to fully understand risks.

So you run a bunch of tests (you wander through the minefield); you find some mines, you miss some mines, the mines you find are removed and the mines you don’t find are still there. Some mines aren’t properly removed and so still exist, probably pretty close to the original mine you found. Then someone makes some changes to the minefield: maybe they change the size of it, maybe they change where some of the mines are located, or maybe they change what the mines look like. What if you’re asked to scour the minefield and check for mines again but nobody tells you what’s changed; how do you do that?

If you follow the same paths you may find some mines that were missed; if you look closely you may find some of the mines which weren’t removed properly; and if you’re lucky you may find some of the mines which were introduced when someone made those changes.

Here’s my problem: I don’t believe that, as testers, we get enough information about what’s changed. We know what defects should have been removed, so we can re-test them. We know where the defects clustered before, so we can run regression tests in those areas. What we don’t get, in my opinion, is enough information about what’s changed between test cycles. You exit a test cycle, you prepare your test environments and you want to re-circulate your strategy, probably based on an updated risk assessment, but you can’t. You can’t because nobody seems to be able to tell you what’s changed – you don’t know how big the minefield now is, you don’t know what the new mines look like and you don’t know how many new mines have been added to the field.

This uncertainty costs testers time, effort and, of course, money. A lack of information leads to uncertainty, which leads to over-testing (often through masses of regression tests) and a wild-goose chase which may not throw up any useful information. Testers thrive on information: information about risks, information about requirements and information about what’s changed. Until we get better at asking for that information, or the holders of that information get more empathetic to our needs, we’re always going to be less efficient and less useful than we could be.

Tuesday 13 January 2009

Exploratory v Scripted testing – my understanding evolves further, I think...

On Friday afternoon our team sat down in a meeting room for our regular Friday Forum, a recurring meeting with the sole purpose of allowing anyone within the team to use the session to hold a brainstorm, discuss a new idea or technique, or give a demo of a new product. The first Friday Forum of the new year saw my manager relay James Whittaker’s Exploratory Tours presentation, which he’d given at Eurostar in Nov ’08; this was followed by a discussion on whether it was the kind of thing our test team might use and whether it’d be of benefit to us.

As we got to the discussion, one of our testers raised an interesting question: “How can we ensure we get good coverage when doing exploratory testing?”, at which point two or three of the team jumped in with, “Well, how can you ensure that with scripted testing?” I think the original question highlights the view of ET as ad hoc and unplanned, which leads to uncertainty over its value.

Until now, and despite reading about this elsewhere, I’ve struggled to explain how I think the two differ, or are similar, in a way which removes the doubt from the doubters. However, I think I’m starting to get a bit of clarity around it…

I think using the term “scripted testing” is misleading; ET can be scripted too. The difference is that “scripted” tests are simply written up front, usually before any testing has happened, whereas exploratory tests are thought up or created as we test and as our understanding of the program evolves. Surely we’re simply talking about tests which are written prior to the start of the test cycle versus tests which are written after the start of the test cycle? The concern of the tester mentioned above was that, usually, pre-designed tests are written against requirements and design documents, whereas ET (in his mind) is performed by exploring the program without the use of such material. I feel this is a misunderstanding, and it occurs because appropriate time isn’t allowed for ET during test cycles. Where has anyone written that you can’t use these documents during an ET session?

So I can now see why this tester finds it difficult to understand what coverage you’re getting when performing ET: in his view, you’re not using requirements or design documents as a basis for your tests.

I tried to think of another way in which we could define ET, but it’s difficult. I’d prefer to talk about pre-designed tests versus concurrent testing, if “testing” encapsulates the learning, designing and executing of tests. I’ve read about this before, and I understand it (I think), but it wasn’t until that tester asked that question that I seemed to get a bit more clarity and perspective on what I think ET is all about.

From an internal perspective I think we need a shift in how we think about these. The decision our test leads need to make is: “are we writing our tests up front?” or “are we going to write our tests as we test?” Both will use exploration, both will use documentation, and both will result in some form of script.

Thursday 8 January 2009

Sharpening the saw

As part of my “sharpening the saw” (Heusser) activities in 2009 I’ve decided to start learning VB as a way of familiarising myself with how code can be used, written and read. The reason for this is probably twofold: 1. I have a subconscious worry that a large majority of the testing experts and bloggers out there are able to code, and 2. I feel that, now being in a managerial role, I’m moving away from some of the technical aspects of the job which I used to enjoy a lot.

Why VB? I looked at other languages, but VB and the MS IDE seem to be a great way for a complete newbie such as myself to get into computer programming. It’s interesting that the IDE provides links explicitly for new developers, and the way the lessons are structured, via the online help, is already making it easy to start understanding some of the terminology and concepts.
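To give a flavour of where I’m starting, here’s the sort of trivial console program the beginner material gets you writing early on (a sketch of my own, with made-up names, rather than anything lifted from the lessons):

    Module FirstProgram
        Sub Main()
            ' Declare a couple of variables and write them to the console
            Dim testsRun As Integer = 10
            Dim testsPassed As Integer = 8

            Console.WriteLine("Tests run: " & testsRun.ToString())
            Console.WriteLine("Tests passed: " & testsPassed.ToString())
            Console.WriteLine("Pass rate: " & (testsPassed * 100 \ testsRun).ToString() & "%")

            Console.ReadLine() ' keep the console window open
        End Sub
    End Module

Nothing clever, but even something this small introduces variables, types, string concatenation and integer division, which is exactly the sort of vocabulary I’m trying to pick up.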

I’ll continue to blog about this, as it’ll hopefully show my progress and maybe inspire others to follow suit…

If you’re doing anything similar, I’d love to hear about it.

Finally, my “sharpening the saw” quote comes from an article by Matthew Heusser in the April 2008 edition of Better Software magazine.

Monday 5 January 2009

1st day of work in 2009 and straight into a coverage reporting conundrum

I’m working on something called Early Life Monitoring (as mentioned in a blog below), whereby I review any defects found after project release to understand whether they had been found prior to release and, if not, whether they should have been found by our test effort.

Today I’ve been looking through the list of issues, attempting to give each missed defect a catchy category which highlights why it wasn’t found prior to release, and that led me to this problem: how do you summarise defects which sit outside of your testing visibility? I.e. you didn’t test for them, and you didn’t communicate that you weren’t testing them.

To explain further: when I write a test strategy (or when I did, before a move into a management role) I’d define what I was testing and I’d define what I wasn’t testing, but there’s still a huge area of the product which isn’t covered by either of those. What is that area called? It’s like a testing abyss, those tests you don’t consider or communicate; how do you explain that to your departmental manager? Or to the technical support representative who’s wondering why we didn’t test for that?

Currently, I’m at a loss. Maybe there isn’t a word to summarise it, maybe I shouldn’t need to look for that word, maybe my brain’s not sharp enough to find an answer to this question today…

Happy new year to all.