I’ve worked in this test team for almost 8 years now, and in that time we’ve been through numerous cycles of trying to make automation work for us. Each time we’ve failed, for a number of reasons…and now we’re trying again.
Originally we tried WinRunner, then TestPartner, then QTP, and now we’re trying Selenium. To clarify what I mean by “automation”: I’m talking about record & playback here. We’ve successfully used code to make other aspects of testing easier, but for the purposes of this blog, “automation” means record & playback.
There are numerous reasons why I don’t think we’ve managed to make automation work for us in the past, and these are a few of them:
• Test leads not being aware of the benefits of automation, and therefore not thinking about automation early in product development or ensuring the product’s designed with automation/testability in mind
• Skillset – testers not having the appropriate knowledge to do the coding around any record & playback
• Interest – testers not having the motivation to learn or maintain the automation
• Thinking too big, too soon
• Products not working with automation tools
We’re now coming back round to another automation cycle, and this time the weapon of choice is Selenium. Selenium’s a hugely popular tool in the world of automation at the moment, and our development team are actually using it to automate their dev tests – they’re building a large range of tests which they run as part of nightly builds.
I’m really hoping we can finally get to a point where our team understand what opportunities a tool like Selenium can offer them and how they can use it to their benefit. I want to see tools being used to automate the mundane, checking/verification type tests that we currently use people to perform, so our testers can concentrate on thinking up complex tests which require them to spend time gaining a deeper understanding of the product – something a tool will never accomplish.
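To make “automating the mundane” concrete, here’s a minimal sketch of the kind of table-driven verification a tool like Selenium would drive. The URLs and page titles are invented for illustration, and the fetcher is a stub standing in for a real browser driver (with Selenium it would be roughly `driver.get(url)` followed by reading `driver.title`) so the sketch runs standalone:

```python
# A table-driven runner for mundane "does this page have the right title?"
# checks. In real use fetch_title would drive Selenium; here the fetcher is
# a stub so the sketch runs standalone. URLs and titles are hypothetical.

EXPECTED_TITLES = {
    "http://example.com/login": "Login",
    "http://example.com/reports": "Reports",
}

def run_checks(expected, fetcher):
    """Compare actual titles against expected ones; return a list of failures
    as (url, expected_title, actual_title) tuples."""
    failures = []
    for url, want in expected.items():
        got = fetcher(url)
        if got != want:
            failures.append((url, want, got))
    return failures

if __name__ == "__main__":
    # Stub fetcher standing in for a real Selenium driver,
    # with one deliberate mismatch to show a failure report.
    stub = {"http://example.com/login": "Login",
            "http://example.com/reports": "Report"}
    for url, want, got in run_checks(EXPECTED_TITLES, stub.get):
        print("FAIL: %s expected %r got %r" % (url, want, got))
```

The point of the table-driven shape is that adding another mundane check is a one-line data change, not new code – exactly the kind of repetitive verification a person shouldn’t be spending their day on.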
Hopefully I won’t be writing the same blog in 12 months time.
Thursday, 21 May 2009
Wednesday, 13 May 2009
We ain’t got no SMART requirements
As a manager I write objectives for people in my team and I’m told these objectives have to be SMART; that’s Specific, Measurable, Achievable, Realistic and Timebound. And, as a manager, I have people within my team complaining about a lack of quality requirements across projects – they lack clarity, they don’t detail expectations, they’re ambiguous etc. So why don’t people write SMART requirements?
Requirements should be specific because an ambiguous requirement isn’t going to be a testable requirement and the more open to interpretation a requirement is the poorer it is. More importantly, your interpretation of a requirement may be an incorrect interpretation and that means you could end up testing and providing information which is of no value to your project team.
Requirements should be measurable, and by measurable I mean testable. If you can’t measure whether the requirement’s been met, or whether it performs as expected, it’s not testable.
Requirements should be achievable. Ok, there is scope for highly desirable requirements which may make it into the product if time allows (ever seen that happen?!) but this should serve as a warning. Does it sound achievable to you? If it doesn’t, you probably want to question it before you spend time designing tests for it…
Requirements should be realistic. Well, yes, but sometimes we don’t know until we try to implement a requirement that it can’t actually be done in the given timescale, and therefore it’s not realistic. But again, if it doesn’t sound realistic it should probably be questioned before you start spending time designing those tests.
Requirements should be timebound. Think performance. “Requirement 1: Function A must do Y in N seconds.” Is that realistic? What happens if it doesn’t? Would we still release if it took an extra 10 seconds, 30 seconds or 10 minutes? How good are your performance requirements?
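One nice thing about a genuinely timebound requirement is that it’s cheap to check automatically. A minimal sketch in Python – the function under test and the 2-second budget are invented for illustration – times a call with a monotonic clock and compares it against the budget:

```python
import time

def meets_time_budget(func, budget_seconds, *args, **kwargs):
    """Run func and report whether it finished within budget_seconds.

    Returns (within_budget, elapsed_seconds) so a test can both assert
    the requirement and log how close to the limit the call came.
    """
    start = time.perf_counter()
    func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds, elapsed

if __name__ == "__main__":
    # Hypothetical "Function A must do Y in N seconds" check, N = 2.
    ok, elapsed = meets_time_budget(lambda: sum(range(100000)), 2.0)
    print("within budget:", ok, "elapsed: %.4fs" % elapsed)
```

Returning the elapsed time alongside the pass/fail result matters: a check that passes at 1.9 seconds against a 2-second budget is information the project team will want to see, not just a green tick.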
The SMART method of reviewing requirements has its flaws, but it does highlight the need for good requirements. You might use SMART as a guide to reviewing requirements, as it’ll raise questions for you to ask; another method is to write early, high-level test cases against requirements as a way of teasing out ambiguity or a lack of testability. Whatever your method, it’s key that we understand product requirements and, more importantly, that we ensure our understanding is aligned with that of the project team.
Friday, 1 May 2009
Get out of the office and be inspired
This isn’t particularly about software testing, but I thought I’d blog about it anyway. Yesterday I went down to London and spent the day at two schools, one primary, one secondary, in one of the more deprived parts of London. Working at an education company, it’s important that the employees who work here take the time to visit schools – our customers – and build empathy for our end users, something which can be quite difficult when you develop and test mass-market products.
The day in itself was absolutely fascinating. Schools have moved on dramatically since I was there (not that long ago) and it’s great to see students using web-based products to collaborate on assignments with students of other schools around the region, country, or world.
It’s been far too long since I’ve been to a school – probably a couple of years – but it was an inspiring day and I’ve come back to the office enthused about what we do as a company and how I can play my part in ensuring we meet the demands of our customers.
Similarly, I’ve felt the same when returning from something like SIGIST, or Eurostar. That time away from the office allows you to regain perspective, to think new thoughts, and to be inspired.
So, to my point! Getting out of the office to meet customers, go on a training course or attend work-related conferences is a great way to inspire and energise yourself. We can all become tired, or bogged down in our day-to-day work and responsibilities, and it’s important to know you don’t always need two weeks in the Maldives, however nice that’d be, to recharge your batteries!
Tuesday, 28 April 2009
An Observation on the challenge of being a tester
Last night, Monday night, was shopping night, so I met my girlfriend in Tesco’s after work and we set about our weekly shop. I noticed recently that Tesco have altered the setup of their tills slightly: customers are now able to scan their clubcards without the cashier having to do it, and the card payment device has been moved toward the end of the till, in the area where you put your shopping into bags – whether this is to make the whole process quicker, or just to remove tasks from the cashier, I’m not sure.
Anyway, as we went through the tills and finished bagging up our goods, I took my Tesco clubcard from my wallet and went to scan it through the clubcard scanner, which is situated at adult waist height, pointing slightly downward. “You can’t use that,” declared the cashier, “we’re not using them anymore.” “You’ve only just had them installed, haven’t you?” said I. “Yeah, but they realised that the laser beams which scan the barcode can damage children’s eyes, so we can’t use them,” she said, adding, “you’d have thought they’d have tested that.” I wasn’t sure how to reply, as a hundred reasons why it might not have been tested flew through my head. I almost told her I was a tester, but I’d imagine that information would have fallen on deaf ears – what did she care what I did for a living? Instead, I just laughed, thanked her, and wandered off thinking that this was a good topic for a blog post.
So what makes it a good topic for a blog post? Well, for me it highlights the difficulties that we have as testers. On the one hand, it sounds like whoever manufactured and tested this product didn’t consider the types of users that would be using, or impacted by, this device, and for that there is no excuse. But there’s also the consideration that the barcode reader may have been through tens of test cycles, with masses of defects found and fixed. Or maybe the test team only had a week to test the damn thing before someone told them it had to be released, and so they spent the whole week ensuring it could actually read barcodes and transmit that data to the client, rather than testing or understanding how dangerous the device’s beams were.
As testers we have an almost impossible job because there will always be untested situations, scenarios or environments at the point of release, and we never truly know if those untested areas will be the very areas which throw up a customer-critical defect upon release. With risk assessments, and good judgement, we can have a very good go at ensuring we’ve removed most of the risk ahead of release, but there will always be that percentage of risk we release with, and that’s what we’ll always worry about.
Friday, 24 April 2009
Are you doing your bit to save money?
*I just wrote this as a blog at work, so thought I'd share it*
As we all know, it’s key that in the current climate (not the Sauna!) we ensure we’re doing our bit to reduce spend and monitor cost across projects. So, off the top of my head here are a few things we can be doing to make efficient use of our money.
1. More Exploratory Testing – the savings are hard to quantify, but the big-upfront-test-scripting approach could often be replaced by exploratory testing. There may be a need to invest more time in training for your testers, and you may take people out of their comfort zones, but choosing to adopt a more exploratory approach to your testing could reduce the cost of your test cycles.
2. Write less detailed test scripts – how much detail are you putting into your scripts? And, more importantly, is that level of detail really required? We know that people who’re new to a product require detailed steps to guide them through it, but as with the exploratory point, could appropriate training replace this? Less detailed scripts could also reduce maintenance effort later in the project, when functions and features change and scripts require updates and re-writes.
3. Complete clarity on coverage PT1 – are we spending time and money testing on configurations which aren’t high priority? Will testing on those configurations provide useful information? Or is that testing which can wait for another time? Are the project team in agreement with your planned testing scope? Let’s ensure we’re not burning time and money running tests which aren’t producing useful information, and information which is required here and now.
4. Complete clarity on coverage PT2 – how are you ensuring that you’re not going to find showstopper or major defects via new tests in later test cycles? How can you make sure the really important bugs are found as early as possible? Are you talking to the right people to understand what the high-risk areas of the product are? Ultimately, are you testing the right stuff?
5. Risk Assessments – the primary aim of a tester should be to find important bugs fast and using a risk assessment is the main tool which facilitates this. So, have you performed a risk assessment? Are you testing the most important or complex areas first? How are you going to make sure you don’t come across a showstopper on the final day of your testing?
There are undoubtedly numerous other ways in which we can actively control and reduce cost and it’s also worth considering that whenever a decision is made to cut cost there can be an impact on time and quality. I don’t want to teach my grandma to suck eggs but hopefully this will make you think a little about how you’re doing things, and whether you can be working smarter, or more efficiently, or just making decisions about your approach with one eye on the cost aspect.
Thursday, 23 April 2009
Usability: A second class citizen?
I’m sure we’ve all been in the situation where you get an opportunity to test a product, so you sit down and start testing for functionality; you run some system tests, some security tests and maybe a performance test or two before you then run some usability tests. Your defects are raised, most of the functional defects get fixed, the big security and performance issues are fixed, but then you look at your usability defects and they’re set as “Fix in the Future” or “Not a problem” or “Proposing not to fix” – argh! Why is this?
Usually, as we discussed in our team meeting this morning, it’s for two reasons:
1. You’re testing for usability too late in the cycle
2. Time pressures mean the focus is on making the product functional, not usable
Quite often what appears to be a small usability issue actually requires a significant (>1 hour) amount of time to fix, because the fix requires some re-working of how the UI works, or it would have a significant impact on other functionality – which tells us we’re testing for usability too late in the software development lifecycle.
Our other battle is members of the project who lack empathy with the end user. “Why on earth would we want to hold up a project to fix these paltry 10 usability defects?” (I recognise that sometimes it’s absolutely the right decision to release with those 10 defects unfixed). So we sit in defect reviews and watch as our usability defects get set to the statuses shown above, usually because we’ve failed to articulate the real end-user impact of those issues.
So, what are we doing about it?
Well, we’re doing a few things. We’re trying to use the power of the checklist to create a common set of usability heuristics which can be run against a design doc, a prototype, an early version of the product or the final version. As that implies, we also recognise that we need to test for usability earlier: it’s clear we’re testing for usability across most of our products, but generally we’re leaving it until too late in the day, so it’s key we start as early as we can in the development of a product and continue to focus on usability throughout its creation. Finally, we recognise that we need to get better at influencing our project team on the impact of these defects. We need to understand our customers better, and also find those people within the business who understand those customers and who can therefore help us articulate why these issues are important.
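The checklist itself doesn’t need to be anything fancy – it can be as simple as a list of heuristics plus a script that turns reviewer answers into follow-up items. A sketch (the heuristic wording here is invented for illustration, not our actual list):

```python
# A minimal checklist runner: heuristics are plain strings, a review is a
# mapping of heuristic -> True/False, and anything answered False (or not
# answered at all) becomes a follow-up item. Wording is hypothetical.

USABILITY_HEURISTICS = [
    "Every screen makes its primary action obvious",
    "Error messages say what went wrong and what to do next",
    "Common tasks need no more than three clicks",
]

def review(answers):
    """Return the heuristics that failed, preserving checklist order."""
    return [h for h in USABILITY_HEURISTICS if not answers.get(h, False)]

if __name__ == "__main__":
    answers = {
        "Every screen makes its primary action obvious": True,
        "Error messages say what went wrong and what to do next": False,
        "Common tasks need no more than three clicks": True,
    }
    for item in review(answers):
        print("Follow up:", item)
```

The same checklist can then be run against a design doc, a prototype or a release candidate – the artefact changes, the heuristics don’t, which is what makes the results comparable across the life of the product.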
Approximately 50-60% of calls our support teams take are “How to” calls which suggests usability of our products is an area in need of improvement so hopefully some of the measures we’re putting in place will start to reduce this in future product releases.
Very interested in hearing others’ experiences with usability testing – have you seen the same problems? If so, what are you doing about it?
Wednesday, 18 March 2009
Accessibility requirements? Pfft
Q. What do you do when you find your product fails to meet the Accessibility standards which the requirements state it must adhere to?
A. Remove the requirement.
Quicker than fixing the bugs ennit?
Yes, this has recently happened.