Tuesday, 28 April 2009

An Observation on the challenge of being a tester

Last night, Monday night, was shopping night, so I met my girlfriend in Tesco’s after work and we set about our weekly shop. I noticed recently that Tesco’s have altered the setup of their tills slightly: customers are now able to scan their clubcards without the cashier having to do it, and the card payment device has been moved toward the end of the till, in the area where you pack your shopping into bags. Whether this is to make the whole process quicker, or just to remove tasks from the cashier, I’m not sure.

Anyway, as we went through the tills and finished bagging up our goods I took my Tesco clubcard from my wallet and went to scan it through the clubcard scanner, which is situated at adult waist height, pointing slightly downward. “You can’t use that”, declared the cashier, “we’re not using them anymore.” “You’ve only just had them installed, haven’t you?”, said I. “Yeah, but they realised that the laser beams which scan the barcode can damage children’s eyes, so we can’t use them”, she said. “You’d have thought they’d have tested that”, she then added. I wasn’t sure how to reply to that, as a hundred reasons why it might not have been tested flew through my head, and I almost told her I was a tester, but I’d imagine that information would have fallen on deaf ears – what did she care what I did for a living? Instead, I just laughed, thanked her, and wandered off thinking that this was a good topic for a blog post.

So what makes it a good topic for a blog post? Well, for me it highlights the difficulties that we have as testers. On the one hand, it sounds like whoever manufactured and tested this product didn’t consider the types of users who would be using, or impacted by, this device, and for that there is no excuse. But there’s also the consideration that the barcode reader may have been through tens of test cycles, with masses of defects found and fixed. Or maybe the test team only had a week to test the damn thing before someone told them it had to be released, and so they spent the whole week ensuring it could actually read barcodes and transmit that data to the client, rather than testing how dangerous the beams from the device were.

As testers we have an almost impossible job, because there will always be untested situations, scenarios or environments at the point of release, and we never truly know if those untested areas will be the very areas which throw up a customer-critical defect upon release. With risk assessments and good judgement we can have a very good go at ensuring we’ve removed most of the risk ahead of release, but there will always be that percentage of risk we release with, and that’s what we’ll always worry about.

Friday, 24 April 2009

Are you doing your bit to save money?

*I just wrote this as a blog at work, so thought I'd share it*

As we all know, it’s key that in the current climate (not the Sauna!) we ensure we’re doing our bit to reduce spend and monitor cost across projects. So, off the top of my head here are a few things we can be doing to make efficient use of our money.

1. More Exploratory Testing – the savings aren’t clear, but it is evident that the big-upfront-test-scripting approach could be replaced by exploratory testing. There may be a need to invest more time in training for your testers, and you may take people out of their comfort zones, but choosing to adopt a more exploratory approach to your testing could reduce the cost of your test cycles.

2. Write less detailed test scripts – how much detail are you putting into your scripts? And, more importantly, is that level of detail really required? We know that people who’re new to a product require detailed steps to guide them through it, but as with the exploratory point, could appropriate training replace this? Less detailed scripts could also reduce maintenance effort later on in the project, when functions and features change and require script updates and re-writes.

3. Complete clarity on coverage PT1 – are we spending time and money testing on configurations which aren’t high priority? Will testing on those configurations provide useful information? Or is that testing which can wait for another time? Are the project team in agreement with your planned testing scope? Let’s ensure we’re not burning time and money running tests which aren’t producing useful information, and information which is required here and now.

4. Complete clarity on coverage PT2 – how are you ensuring that you’re not going to find showstopper or major defects via new tests in later test cycles? How can you make sure the really important bugs are found as early as possible? Are you talking to the right people to understand what the high-risk areas of the product are? Ultimately, are you testing the right stuff?

5. Risk Assessments – the primary aim of a tester should be to find important bugs fast and using a risk assessment is the main tool which facilitates this. So, have you performed a risk assessment? Are you testing the most important or complex areas first? How are you going to make sure you don’t come across a showstopper on the final day of your testing?
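To make the risk-assessment point concrete, here’s a minimal sketch of risk-based prioritisation: score each product area by likelihood of failure and impact if it fails, then test the highest-scoring areas first. The area names and scores below are entirely made up for illustration – your own assessment would come from talking to the project team.

```python
def prioritise(areas):
    """Return areas ordered by risk score (likelihood * impact), highest first."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

# Illustrative scores on a 1-5 scale; in practice these come from the team.
areas = [
    {"name": "login",        "likelihood": 2, "impact": 5},  # stable code, critical feature
    {"name": "new checkout", "likelihood": 5, "impact": 5},  # brand new, customer-facing
    {"name": "help pages",   "likelihood": 3, "impact": 1},  # low impact if broken
]

for area in prioritise(areas):
    print(area["name"], area["likelihood"] * area["impact"])
```

Sorting by the combined score puts the brand-new, customer-facing work at the top of the test order, which is exactly the “find important bugs fast” aim – you hit the likely showstoppers on day one rather than the final day.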

There are undoubtedly numerous other ways in which we can actively control and reduce cost, and it’s also worth considering that whenever a decision is made to cut cost there can be an impact on time and quality. I don’t want to teach my grandma to suck eggs, but hopefully this will make you think a little about how you’re doing things, and whether you can be working smarter, or more efficiently, or just making decisions about your approach with one eye on the cost aspect.

Thursday, 23 April 2009

Usability: A second class citizen?

I’m sure we’ve all been in the situation where you get an opportunity to test a product, so you sit down and start testing for functionality: you run some system tests, some security tests and maybe a performance test or two before you then run some usability tests. Your defects are raised, most of the functional defects get fixed, the big security and performance issues are fixed, but then you look at your usability defects and they’re set as “Fix in the Future” or “Not a problem” or “Proposing not to fix” – argh! Why is this?

Usually, as we discussed in our team meeting this morning, it’s for two reasons:
1. You’re testing for usability too late in the cycle
2. Time pressures mean the focus is on making the product functional, not usable

Quite often what appears to be a small usability issue actually requires a significant (> 1 hour) amount of time to fix, because the fix requires some re-working of how the UI works, or it would have a significant impact on other functionality – which tells us we’re testing for usability too late in the software development lifecycle.

Our other battle is members of the project team who lack empathy with the end user. “Why on earth would we want to hold up a project to fix these paltry 10 usability defects?” (I recognise that sometimes it’s absolutely the right decision to release with those 10 defects unfixed.) So we sit in defect reviews and watch as our usability defects get set to the statuses shown above, usually because we’ve failed to articulate the real end-user impact of those issues.

So, what are we doing about it?

Well, we’re doing a few things about this. We’re trying to use the power of the checklist to create a common set of usability heuristics which can be run against a design doc, a prototype, an early version of the product or a final version of the product. As that implies, we also recognise that we need to test for usability earlier in the development of the product. We’re testing for usability across most of our products, but generally we’re leaving it until too late in the day, so it’s key that we test for usability as early as we can and continue to focus on it throughout a product’s creation. Finally, we recognise that we need to get better at influencing our project team on the impact of these defects. We need to be better at understanding our customers, and at finding those people within the business who also understand those customers and who can therefore help us articulate why these issues are important.
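As a rough illustration of the checklist idea, here’s a sketch of how a reusable set of heuristics could be run against any stage of a product – design doc, prototype or build. The heuristic questions below are invented for the example, not our actual set.

```python
# Hypothetical usability heuristics; a real set would be agreed by the team.
HEURISTICS = [
    "Does every error message tell the user what to do next?",
    "Can the main task be completed in five clicks or fewer?",
    "Is terminology consistent across all screens?",
]

def review(stage, answers):
    """Pair each heuristic with a yes/no answer and return the ones that failed."""
    failures = [q for q, passed in zip(HEURISTICS, answers) if not passed]
    return {"stage": stage, "failures": failures}

# The same questions asked of a prototype, long before a final build exists.
result = review("prototype", [True, False, True])
for question in result["failures"]:
    print(result["stage"], "-", question)
```

The point of keeping the checklist stage-agnostic is that the same questions raised against a design doc cost minutes to act on, while the same failures found in a final build can mean re-working the UI.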

Approximately 50-60% of calls our support teams take are “How to” calls which suggests usability of our products is an area in need of improvement so hopefully some of the measures we’re putting in place will start to reduce this in future product releases.

Very interested in hearing others’ experiences with usability testing – have you seen the same problems? If so, what are you doing about it?