Thursday, 21 May 2009

Breaking the back of automation

I’ve worked in this test team for almost 8 years now, and in that time we’ve had numerous cycles where we’ve tried to make automation work for us. Each time we’ve failed, for a number of reasons… and now we’re trying again.

Originally we tried with WinRunner, then TestPartner, then QTP, and now we’re trying Selenium. To clarify what I mean by “automation”: I’m talking about record & playback here. We’ve successfully used code to make other aspects of testing easier, but for the purposes of this post record & playback is the focus.

There are numerous reasons why I don’t think we’ve managed to make automation work for us in the past, and these are a few of them:

• Test leads not being aware of the benefits of automation, and therefore not thinking about automation early in product development and ensuring the product’s designed with automation/testability in mind
• Skillset – testers lacking the appropriate knowledge to do the coding around any record & playback
• Lack of interest
• Thinking too big, too soon
• Products not working with automation tools

We’re now coming back round to another automation cycle and this time the weapon of choice is Selenium. Selenium’s a hugely popular tool in the world of automation at the moment, and our development team are actually using it to automate their dev tests – they’re building a large range of tests which they run as part of nightly builds.

I’m really hoping we can finally get to a point where our team understand what opportunities a tool like Selenium can offer them and how they can use it to their benefit. I want to see tools being used to automate the mundane, checking/verification type tests that we currently use people to perform, so our testers can concentrate on thinking up complex tests which require them to spend time gaining a deeper level of understanding about the product – something a tool will never accomplish.
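As an illustration of the kind of mundane checking I mean, here’s a minimal sketch in Python (not Selenium itself, and not our product – the pages and titles are made up): a table of expected outcomes walked by a loop. In practice a tool like Selenium would drive a real browser; a stubbed lookup stands in here just to show the shape.

```python
# A table of checks a person would otherwise verify by hand, run by a loop.
# PAGES is a stub standing in for what the deployed build actually serves;
# with Selenium, each lookup would instead be a browser navigation.

PAGES = {  # hypothetical: page -> title the build serves
    "/login": "Log in",
    "/reports": "Reports",
    "/admin": "Administration",
}

EXPECTED = [  # the mundane verification checks
    ("/login", "Log in"),
    ("/reports", "Reports"),
    ("/admin", "Administration"),
]

def run_checks(pages, expected):
    """Return a list of (page, passed) results."""
    return [(page, pages.get(page) == title) for page, title in expected]

results = run_checks(PAGES, EXPECTED)
failures = [page for page, ok in results if not ok]
print(f"{len(results)} checks, {len(failures)} failures")
```

Once checks like these run unattended in a nightly build, the testers’ time is freed up for the thinking the tool can’t do.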

Hopefully I won’t be writing the same blog in 12 months time.

Wednesday, 13 May 2009

We ain’t got no SMART requirements

As a manager I write objectives for people in my team and I’m told these objectives have to be SMART; that’s Specific, Measurable, Achievable, Realistic and Timebound. And, as a manager, I have people within my team complaining about a lack of quality requirements across projects – they lack clarity, they don’t detail expectations, they’re ambiguous etc. So why don’t people write SMART requirements?

Requirements should be specific, because an ambiguous requirement isn’t going to be a testable requirement – the more open to interpretation a requirement is, the poorer it is. More importantly, your interpretation of a requirement may be incorrect, which means you could end up testing and providing information which is of no value to your project team.

Requirements should be measurable, and by measurable I mean testable. If you can’t measure whether the requirement’s been met, or whether it performs as expected, it’s not testable.

Requirements should be achievable. Ok, there is scope for highly desirable requirements which may make it into the product if time allows (ever seen that happen?!), but this should serve as a warning. Does it sound achievable to you? If it doesn’t, you probably want to question it before you spend time designing tests for it…

Requirements should be realistic. Well, yes, but sometimes we don’t know until we try to implement a requirement that it can’t actually be implemented in the given timescale, and therefore it’s not realistic. But again, if it doesn’t sound realistic it should probably be questioned before you start spending time designing those tests.

Requirements should be timebound. Think performance. “Requirement 1: Function A must do Y in N seconds.” Is that realistic? What if it doesn’t meet that target? Would we still release if it took an extra 10 seconds, 30 seconds or 10 minutes? How good are your performance requirements?
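To make that concrete, here’s a minimal sketch of what a timebound requirement looks like as an executable check – `function_a` and the 2-second budget are hypothetical stand-ins, not a real requirement of ours:

```python
import time

# "Function A must do Y in N seconds" turned into a check that passes
# or fails unambiguously. function_a is a placeholder for the real
# operation under test; N_SECONDS is the requirement's stated budget.

N_SECONDS = 2.0

def function_a():
    # stand-in workload for the real operation
    return sum(range(100_000))

start = time.perf_counter()
result = function_a()
elapsed = time.perf_counter() - start

within_budget = elapsed <= N_SECONDS
print(f"took {elapsed:.3f}s, within budget: {within_budget}")
```

The point is simply that a timebound requirement is, by definition, measurable: the check either passes or it doesn’t, and the release conversation can start from that fact.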

The SMART method of reviewing requirements has its flaws, but it does highlight the need for good requirements. You might use SMART as a guide to review requirements, as it’ll raise some questions for you to ask, or another method might be to try writing early, high-level test cases against requirements as a way of teasing out ambiguity or a lack of testability. Whatever your method, it’s key that we understand product requirements and, more importantly, that we ensure our understanding is aligned with that of the project team.

Friday, 1 May 2009

Get out of the office and be inspired

This isn’t particularly pertaining to software testing, but I thought I’d blog about it anyway. Yesterday I went down to London and spent the day at two schools, one primary, one secondary, in one of the more deprived parts of London. Working at an education company, it’s important that the employees here take the time to visit schools, our customers, and build empathy for our end users – something which can be quite difficult when you develop and test products at a distance from the people who use them.

The day in itself was absolutely fascinating. Schools have moved on dramatically since I was there (not that long ago) and it’s great to see students using web-based products to collaborate on assignments with students of other schools around the region, country, or world.

It’s been far too long since I’ve been to a school, probably a couple of years, but it was an inspiring day and I’ve come back to the office enthused about what we do as a company and how I can play my part in ensuring we meet the demands of our customers.

Similarly, I’ve felt the same when returning from something like SIGIST, or Eurostar. That time away from the office allows you to regain perspective, to think new thoughts, and to be inspired.

So, to my point! Getting out of the office to meet customers, go on a training course or attend work-related conferences is a great way to inspire and energise yourself. We can all become tired, or bogged down in our day-to-day work and responsibilities, and it’s important to know you don’t always need two weeks in the Maldives, however nice that’d be, to recharge your batteries!

Tuesday, 28 April 2009

An Observation on the challenge of being a tester

Last night, Monday night, was shopping night, so I met my girlfriend in Tesco’s after work and we set about our weekly shop. I noticed recently that Tesco’s have altered the setup of their tills slightly: customers are now able to scan their clubcards without the cashier having to do it, and the card payment device has been moved toward the end of the till, in the area where you place your shopping into bags. Whether this is to make the whole process quicker, or just to remove tasks from the cashier, I’m not sure.

Anyway, as we went through the tills and completed bagging up our goods I took my Tesco clubcard from my wallet and went to scan it through the clubcard scanner, which is situated at adult waist height, pointing slightly downward. “You can’t use that,” declared the cashier, “we’re not using them anymore.” “You’ve only just had them installed, haven’t you?”, said I. “Yeah, but they realised that the laser beams which scan the barcode can damage children’s eyes, so we can’t use them,” she said, adding, “you’d have thought they’d have tested that.” I wasn’t sure how to reply, as a hundred reasons why it might not have been tested flew through my head. I almost told her I was a tester, but I’d imagine that information would have fallen on deaf ears – what did she care what I did for a living? Instead, I just laughed, thanked her, and wandered off thinking that this was a good topic for a blog post.

So what makes it a good topic for a blog post? Well, for me it highlights the difficulties that we have as testers. On the one hand, it sounds like whoever manufactured and tested this product didn’t consider the types of users that would be using, or impacted by, this device, and for that there is no excuse. But there’s also the consideration that the barcode reader may have been through tens of test cycles, with masses of defects found & fixed. Or maybe the test team only had a week to test the damn thing before someone told them it had to be released, and so they spent the whole week ensuring it could actually read barcodes and transmit that data to the client, rather than testing or understanding how dangerous the rays from the device were.

As testers we have an almost impossible job, because there will always be untested situations, scenarios or environments at the point of release, and we never truly know if those untested areas will be the very areas which throw up a customer-critical defect upon release. With risk assessments and good judgement we can have a very good go at ensuring we’ve removed most of the risk ahead of release, but there will always be that percentage of risk we release with, and that’s what we’ll always worry about.

Friday, 24 April 2009

Are you doing your bit to save money?

*I just wrote this as a blog at work, so thought I'd share it*

As we all know, it’s key that in the current climate (not the Sauna!) we ensure we’re doing our bit to reduce spend and monitor cost across projects. So, off the top of my head here are a few things we can be doing to make efficient use of our money.

1. More Exploratory Testing – the savings aren’t clear, but it is evident that the big-upfront-test-scripting approach could be replaced by exploratory testing. There may be a need to invest more time in training for your testers, and you may take people out of their comfort zones, but choosing to adopt a more exploratory approach to your testing could reduce the cost of your test cycles.

2. Write less detailed test scripts – how much detail are you putting into your scripts? And, more importantly, is that level of detail really required? We know that people who’re new to a product require detailed steps to guide them through it but, as with the exploratory point, could appropriate training replace this? Less detailed scripts could also reduce maintenance effort later on in the project, when functions and features change and require script updates and re-writes.

3. Complete clarity on coverage PT1 – are we spending time and money testing on configurations which aren’t high priority? Will testing on those configurations provide useful information? Or is that testing which can wait for another time? Are the project team in agreement with your planned testing scope? Let’s ensure we’re not burning time and money running tests which aren’t producing useful information, and information which is required here and now.

4. Complete clarity on coverage PT2 – how are you ensuring that you’re not going to find showstopper or major defects via new tests in later test cycles? How can you make sure the really important bugs are found as early as possible? Are you talking to the right people to understand what the high-risk areas of the product are? Ultimately, are you testing the right stuff?

5. Risk Assessments – the primary aim of a tester should be to find important bugs fast and using a risk assessment is the main tool which facilitates this. So, have you performed a risk assessment? Are you testing the most important or complex areas first? How are you going to make sure you don’t come across a showstopper on the final day of your testing?
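For what it’s worth, the arithmetic behind a basic risk assessment can be sketched in a few lines – the areas and scores below are made up purely for illustration:

```python
# Score each product area as likelihood x impact (both on a 1-5 scale
# here), then test the riskiest areas first. Entirely illustrative data.

areas = [
    {"name": "payment flow", "likelihood": 4, "impact": 5},
    {"name": "report export", "likelihood": 2, "impact": 3},
    {"name": "login", "likelihood": 3, "impact": 5},
    {"name": "help pages", "likelihood": 2, "impact": 1},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# highest risk first: the order the test cycle should follow
ordered = sorted(areas, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in ordered])
```

Sorting by likelihood × impact is crude, but it at least forces the “what do we test first?” conversation, which is exactly what stops the showstopper turning up on the final day.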

There are undoubtedly numerous other ways in which we can actively control and reduce cost and it’s also worth considering that whenever a decision is made to cut cost there can be an impact on time and quality. I don’t want to teach my grandma to suck eggs but hopefully this will make you think a little about how you’re doing things, and whether you can be working smarter, or more efficiently, or just making decisions about your approach with one eye on the cost aspect.

Thursday, 23 April 2009

Usability: A second class citizen?

I’m sure we’ve all been in the situation where you get an opportunity to test a product, so you sit down and start testing for functionality: you run some system tests, you run some security tests and maybe a performance test or two, before you then run some usability tests. Your defects are raised, most of the functional defects get fixed, the big security and performance issues are fixed, but then you look at your usability defects and they’re set to “Fix in the Future” or “Not a problem” or “Proposing not to fix” – argh! Why is this?

Usually, as we discussed in our team meeting this morning, it’s for two reasons:
1. You’re testing for usability too late in the cycle
2. Time pressures mean the focus is on making the product functional, not usable

Quite often it seems that what appears to be a small usability issue actually requires a significant (> 1 hour) amount of time to fix, because the fix requires some re-working of how the UI works, or would have a significant impact on other functionality – which tells us we’re testing for usability too late in the software development lifecycle.

Our other battle is with members of the project team who lack empathy with the end user. “Why on earth would we want to hold up a project to fix these paltry 10 usability defects?” (I recognise that sometimes it’s absolutely the right decision to release with those 10 defects unfixed.) So we sit in defect reviews and watch as our usability defects get set to the statuses shown above, usually because we’ve failed to articulate the real end-user impact of those issues.

So, what are we doing about it?

Well, we’re doing a few things. We’re trying to use the power of the checklist to create a common set of usability heuristics which can be run against a design doc, a prototype, an early version of the product or a final version. As that implies, we also recognise that we need to test for usability earlier in the development of the product: we are testing for usability across most of our products, but generally we’re leaving it until too late in the day, so it’s key we start as early as we can and keep the focus on usability throughout the product’s creation. Finally, we recognise that we need to get better at influencing our project team on the impact of these defects. We need to be better at understanding our customers, and at finding those people within the business who understand those customers and can therefore help us articulate why these issues are important.
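As a sketch of what that checklist idea might look like as data rather than a document – so the same heuristics can be run against a design doc, a prototype or a build – here’s a minimal example. The heuristics and stages are illustrative, not our actual checklist:

```python
# The same usability heuristics applied at each lifecycle stage.
# None = not yet reviewed, True = passes, False = a finding to raise.

HEURISTICS = [
    "Error messages say what to do next",
    "Primary action is obvious on each screen",
    "Terminology matches what users actually say",
]

STAGES = ["design doc", "prototype", "final build"]

def blank_review(stages, heuristics):
    """One row per stage, each heuristic unanswered until reviewed."""
    return {stage: {h: None for h in heuristics} for stage in stages}

review = blank_review(STAGES, HEURISTICS)

# record a hypothetical finding against the prototype
review["prototype"]["Error messages say what to do next"] = False

open_findings = [
    (stage, h)
    for stage, answers in review.items()
    for h, ok in answers.items()
    if ok is False
]
print(f"{len(open_findings)} usability finding(s) so far")
```

The value isn’t the code, of course – it’s that the same questions get asked at every stage, so usability findings surface when they’re still cheap to fix.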

Approximately 50-60% of calls our support teams take are “How to” calls which suggests usability of our products is an area in need of improvement so hopefully some of the measures we’re putting in place will start to reduce this in future product releases.

I’m very interested in hearing others’ experiences with usability testing – have you seen the same problems? If so, what are you doing about it?

Wednesday, 18 March 2009

Accessibility requirements? Pfft

Q. What do you do when you find your product fails to meet the Accessibility standards which the requirements state it must adhere to?

A. Remove the requirement.

Quicker than fixing the bugs ennit?

Yes, this has recently happened.

Friday, 6 March 2009

Testing, what's that??

It's been a while since I blogged so to dust off the cobwebs I thought I'd mention what I've been doing for the last 5 weeks. The answer? Not reading much on testing.

It occurred to me recently that I spent a lot of time reading and thinking about testing, and I worried that perhaps I had my balance wrong, because that outweighed the time I spent thinking about leadership and management. So I've tilted the balance.

Instead, I've recently been reading blogs on leadership and innovation, and I've been reading books such as "The one minute manager" and "Who moved my cheese?" which are both thought-provoking and inspiring reads. Why? Because I felt the balance wasn't reflective of my daily priorities, and also, whilst testing is a passion of mine, it's perhaps not where my future aspirations lie and so I wanted to sharpen the saw in a different way.

I've found the change in focus quite energising, and it's been rewarding to try new ideas out and apply them to day-to-day life as a team leader.

I'll drop some links to what I've been reading if anyone's interested, and I might just have to transform this blog into a testing / leadership thing, the latter mentioning lightbulb moments rather than anything ground-breaking!

Tuesday, 27 January 2009

Back to school

Recently I blogged about sharpening the saw where I mentioned that I wanted to start learning to code in VB as it was an area I'd identified as a weakness in my skillset (not the only weakness, I might add...).

Then, on Friday I read this blog (Don't Forget to Test) which reminded me of another concern of mine; I haven't tested in ages. I've been in a management role for 18 months and although I spent 12 of those months still doing some test leading I haven't been doing any testing at all. This worries me; I'm meant to be in a position where I can coach testers about most things testing, but I've not done any testing myself in ages...

I've had this dilemma before and figured it wasn't feasible due to time / other constraints, but I've decided to challenge that. I've offered my team the chance to have a free tester, i.e. me (I won't put my time against the project), for a little bit of time each week to help them do some testing. I said I'd be more than happy to run some ET tests which they may be struggling to find the time to run, or I'd happily give a fresh perspective on some aspect of a program - whatever they want really.

We'll see how this goes and I'll come back and blog about it. I've found, so far, that the single biggest challenge of team leading is trying to stay up-to-date with the challenges the team face on a day-to-day basis. You can read all the blogs you like, but unless you allow yourself a little time to test I'd worry you could end up removed from the realities of life as a tester!

Wednesday, 21 January 2009

Understanding risks

I’ve just been reading a blog from BJ Rollison of Microsoft, and it struck a chord with me and some of the concerns I have with regression testing.

When I initially came across the minefield analogy I really liked it. I thought, “That’s what we’re doing. We’re running tons of regression tests and they’re not finding many bugs, so why don’t we take more of an exploratory approach?” Then I read BJ’s blog and I think, “You’re right, regression testing does have its place!”, but then I find myself wondering about the age-old problem of a test team having enough information to fully understand risks.

So you run a bunch of tests (you wander through the minefield), you find some mines, you miss some mines, the mines you find are removed and the mines you don’t find are still there. Some mines aren’t properly removed and so still exist, probably pretty close to the original mine you found. Someone makes some changes to the mine field, maybe they change the size of it, maybe they change where some of the mines are located or maybe they change what the mines look like? What if you’re asked to scour the minefield and check for mines again but nobody tells you what’s changed; how do you do that?

If you follow the same paths you may find some of the mines you missed, if you look closely you may find some of the mines which weren’t removed properly, and if you’re lucky you may find some of the mines which were introduced when someone made those changes.

Here’s my problem: I don’t believe that as testers we get enough information about what’s changed. We know what defects should have been removed, so we can re-test them. We know where the defects clustered before, so we can run regression tests in those areas. But what we don’t get, in my opinion, is enough information about what’s changed between test cycles. You exit a test cycle, you prepare your test environments and you want to recirculate your strategy, probably based on an updated risk assessment, but you can’t. You can’t, because nobody seems to be able to tell you what’s changed – you don’t know how big that minefield now is, you don’t know what the new mines look like and you don’t know how many new mines have been added to the field.

This uncertainty costs testers time, effort and, of course, money. A lack of information leads to uncertainty, which leads to over-testing, often through masses of regression tests, and a wild-goose chase which may not throw up any useful information. Testers thrive on information – information about risks, information about requirements and information about what’s changed – and until we get better at asking for that information, or the holders of that information get more empathetic to our needs, we’re always going to be less efficient and less useful than we could be.

Tuesday, 13 January 2009

Exploratory v Scripted testing – my understanding evolves further, I think...

On Friday afternoon our team sat down in a meeting room for our regular Friday Forum, a recurring meeting with the sole purpose of allowing anyone within the team to use the session to hold a brainstorm, discuss a new idea / technique or give a demo of a new product. The first Friday Forum of the new year saw my manager relay James Whittaker’s Exploratory Tours presentation, which he’d given at Eurostar in Nov ’08, followed by a discussion on whether this was the kind of thing our test team might use and whether it’d be of benefit to us.

As we got to the discussion, one of our testers raised an interesting question: “How can we ensure we get good coverage when doing exploratory testing?”, at which point two or three of our team jumped in with, “Well, how can you ensure that with scripted testing?” I think this question highlights the view of ET as ad hoc and unplanned, which leads to uncertainty over its value.

Until now, and despite reading about this elsewhere, I’ve struggled to explain how I think the two differ, or are similar, in a way which removes the doubt from the doubters. However, I think I’m starting to get a bit of clarity around it…

I think using the term “scripted testing” is misleading; ET can be scripted. The difference is that “scripted” tests are simply written upfront, usually before any testing has happened, whereas exploratory tests are thought up or created as we test, as our understanding of the program evolves. Surely we’re simply talking about tests which are written prior to the start of the test cycle and tests which are written after the start of the test cycle? The cause of concern for the tester mentioned above was that, usually, pre-designed tests are written against requirements & design documents, whereas ET (in his mind) is performed by exploring the program without the use of such material. I feel this is a misunderstanding, and one that occurs because appropriate time isn’t allowed for ET during test cycles. Where has anyone written that you can’t use these documents during an ET session?

So I’m now in a position where I’ve seen why this tester has this view that it’s difficult to understand what coverage you’re getting when performing ET because you’re not using requirements or design documents as a basis for your tests.

I tried to think of another way in which we could define ET, but it’s difficult. I’d prefer to say pre-designed testing versus concurrent testing, if “testing” encapsulates the learning, designing & executing of tests. I’ve read about this before, and I understand it (I think), but it wasn’t until that tester asked that question that I seemed to get a bit more clarity and perspective on what I think ET is all about.

From an internal perspective I think we need a shift in how we think about these. The decision our test leads need to make is: “are we writing our tests upfront?” or “are we going to write our tests as we test?”. Both will use exploration, both will use documentation, both will result in some form of script.

Thursday, 8 January 2009

Sharpening the saw

As part of my “sharpening the saw” (Heusser) activities in 2009 I’ve decided to start learning VB as a way of familiarising myself with how code can be used, written and read. The reason for this is probably two-fold: 1. I have a subconscious worry that a large majority of the testing experts / bloggers are able to code, and 2. I feel that, now being in a managerial role, I’m moving away from some of the technical aspects of the job which I used to enjoy a lot.

Why VB? I looked at other languages, but VB and the MS IDE seem to be a great way for a complete newb such as myself to get into computer programming. It’s interesting that the IDE provides links explicitly for new developers, and the way the lessons are structured, via the online help, is already making it easy to start understanding some of the terminology and concepts.

I’ll continue to blog about this, as it’ll hopefully show my progress and maybe inspire others to follow suit…

If you’re doing anything similar, I’d love to hear about it.

Finally, my “sharpening the saw” quote comes from an article in the April 2008 edition of Better Software magazine. The article is written by Matthew Heusser.

Monday, 5 January 2009

1st day of work in 2009 and straight into a coverage reporting conundrum

I’m working on something called Early Life Monitoring (as mentioned in a blog below), whereby I take any defects found after project release and review them to understand if they were found prior to release and, if not, whether they should have been found by our test effort.

Today I’ve been looking through the list of issues, attempting to give each missed defect a catchy category which highlights why it wasn’t found prior to release, which led me to this problem: how do you summarise defects which sit outside of your testing visibility? I.e. you didn’t test for it, and you didn’t communicate that you weren’t testing it.

To explain further: when I write a test strategy (or, when I did, before my move into a management role) I’d define what I was testing and I’d define what I wasn’t testing, but there’s still this huge area of the product which isn’t covered by those two. What is that area called? It’s like a testing abyss, those tests you don’t consider, or communicate. How do you communicate that to your departmental manager? Or to the technical support representative who’s wondering why we didn’t test for that?

Currently, I’m at a loss. Maybe there isn’t a word to summarise it, maybe I shouldn’t need to look for that word, maybe my brain’s not sharp enough to find an answer to this question today…

Happy new year to all.