Monday, 8 December 2008

Post-Release Defects: What I’ve Learnt From Them

A while ago I blogged about post-release defects, and how, as test teams, we can learn from the defects we didn’t spot before the product shipped. I think it’s important, as a tester, to keep reviewing your testing after the product has been released, because there are questions we can ask of ourselves: should we have tested for that? Why didn’t we spot that? What might we have done differently to catch it? I’ve recently performed such an assessment on a project I worked on.


The first problem I came across was capturing the issues that are found after a product’s release. Because of the large number of support calls being made, I decided that any issue which resulted in a software update or a support document would be counted in my measures. This was a large project, released with a number of known issues, so counting every support call logged wasn’t practical (although it might be for smaller products).
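
As a rough illustration of that counting rule, here’s a minimal sketch in Python that filters a set of support records down to the ones I’d have counted. The record structure and field names are hypothetical; the real data came from our support call logs, not a script.

```python
from dataclasses import dataclass

@dataclass
class SupportRecord:
    # Hypothetical fields -- the real information lived in our support logs.
    reference: str
    led_to_software_update: bool
    led_to_support_document: bool

def counts_for_review(record: SupportRecord) -> bool:
    """Counting rule: only issues that resulted in a software update
    or a published support document go into the post-release review."""
    return record.led_to_software_update or record.led_to_support_document

def issues_to_review(records: list[SupportRecord]) -> list[SupportRecord]:
    return [r for r in records if counts_for_review(r)]
```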


This gave me a list of around 100 issues to review, which I split into the following categories (there’s a rough sketch of the tally after the list):
· We found it before release (we knew the defect existed prior to release)
· We didn’t find it; we could have found it, but we wouldn’t have expected to find it
· We should have seen it (we missed it in our coverage)
· We didn’t see it, we couldn’t have seen it, and we shouldn’t have seen it
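
To make the split concrete, here’s a minimal sketch of that tally. Only the 60 “known before release” issues reflect a figure from the real review; the other counts are invented purely to show the arithmetic, and in practice this was a manual spreadsheet exercise rather than a script.

```python
from collections import Counter

# Category labels matching the list above. Apart from the 60 issues known
# before release, the per-category counts are hypothetical.
categories = [
    "known before release",        # we found it before release
    "findable, but not expected",  # we could have found it, but wouldn't expect to
    "missed in coverage",          # we should have seen it
    "not reasonably findable",     # we couldn't and shouldn't have seen it
]

# One label per reviewed issue, e.g. exported from the review spreadsheet.
reviewed_issues = (["known before release"] * 60
                   + ["findable, but not expected"] * 15
                   + ["missed in coverage"] * 15
                   + ["not reasonably findable"] * 10)

tally = Counter(reviewed_issues)
total = len(reviewed_issues)
for category in categories:
    share = 100 * tally[category] / total
    print(f"{category}: {tally[category]} ({share:.0f}%)")
```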

Doing this showed that 60% of the defects which were either being fixed in updates or which required support articles (to inform customers of an issue or provide a work-around) were known prior to release; we had decided to release with those defects in the system. As a test lead this was useful to know: it gave me confidence in our coverage and our decisions, and it backed up the message we’d been delivering before the product shipped, i.e. these are bugs which will impact customers!

The other categories helped us understand some gaps in our testing. There were defects which were simply due to gaps in coverage, a scenario which hadn’t been tested, or a type of hardware which hadn’t been included in our coverage, and these we can learn from. On top of this there were some defects we wouldn’t have expected to find, such as defects caused by inconsistencies with 3rd party products, or defects in areas of the product which had been de-scoped from the testing effort earlier in the project (due to time pressures).

So, to conclude: this post-release defect review gave us confidence in the testing performed, something we’d started to question because of the volume of defects being found by customers. It also highlighted some gaps in our coverage, which we can learn from on future projects. And finally, it told us that, on the whole, we’d done a pretty good job. For those who use DDP (Defect Detection Percentage), ours was around 95%.
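
For anyone unfamiliar with DDP, it’s usually calculated as the defects found by the test team divided by that figure plus the defects which escaped to customers. The numbers below are invented purely to show how a figure in the region of 95% comes about; they aren’t the project’s actual counts.

```python
# Defect Detection Percentage (DDP), as commonly defined:
#   DDP = found_by_testing / (found_by_testing + escaped_to_customers)
# Both counts below are invented for illustration only.
found_by_testing = 760     # defects the test team logged before release (hypothetical)
escaped_to_customers = 40  # post-release defects the team hadn't found (hypothetical)

ddp = found_by_testing / (found_by_testing + escaped_to_customers)
print(f"DDP = {ddp:.0%}")  # prints: DDP = 95%
```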

To keep this readable I’ve excluded a lot of the detail and decision-making from the process above, but if anyone’s interested in knowing more I’d be happy to discuss it on a 1-2-1 basis, or in another post if the interest is there.
