Friday, February 5, 2010

Who Does the Testing

As I discussed in my last post, there are many types of testing, but who should be doing the testing? That depends on what type of test is being conducted. Unit tests, regression tests, and integration tests can be automated in many cases; these are typically written by the developers during development or by the QA department.

Automated tests are extremely good at demonstrating that features and functionality work correctly, and they can also be used to measure performance. By always running the same tests, regressions can be identified quickly, and we can be assured that existing features continue to work as new features are added.
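As a rough illustration, here is what one of those automated tests might look like. This is a minimal sketch using Python's built-in unittest module; the apply_discount function is hypothetical, standing in for whatever logic your product actually contains. Because the same assertions run identically on every build, a regression shows up the moment an existing behavior changes.

```python
import unittest

# Hypothetical function under test -- it stands in for any piece of
# business logic whose behavior must not silently change.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 100.00 should always be 75.00.
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_no_discount_returns_original_price(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

if __name__ == "__main__":
    unittest.main()
```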

However, there are many tests that simply cannot be automated, especially if your application contains a lot of UI elements. Sure, there are programs that click buttons in a specific sequence and can "observe" the results (see the sketch below), but this isn't the same as a person interacting with the product. When a real person is clicking the buttons, they can also evaluate how good the workflow is, how easy it is to move through the steps of the task, and, most importantly, provide qualitative feedback.
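To make that concrete, here is a minimal sketch of a scripted UI test using Selenium's Python bindings. The URL and element IDs are hypothetical placeholders, not a real application; the script can drive the buttons and assert on the outcome, but it has no opinion about whether the workflow made sense.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical form on a hypothetical site -- the URL and element
# IDs below are placeholders for illustration only.
driver = webdriver.Firefox()
try:
    driver.get("http://example.com/signup")

    # Script the exact sequence a person would perform by hand.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()

    # "Observe" the result: the script can verify that the expected
    # confirmation text appeared...
    message = driver.find_element(By.ID, "status").text
    assert "Welcome" in message
    # ...but it cannot tell you whether the steps felt natural.
finally:
    driver.quit()
```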

Human testers with that evaluative eye are a rare find. A good QA person needs to be someone who enjoys working on repetitive tasks, or at the very least enjoys configuring several complex environments. The tester needs to be able to evaluate not only that the software is performing correctly but that it is doing the job in a usable way. Joel Spolsky has some advice on choosing testers.

If your program has a significant user interface, you should include some usability testing in your schedule. These tests should be conducted in a semi-controlled environment with little direction: a user is given a task to complete with the software, and the session is observed and/or recorded. This allows the developer(s) to see how easy their interface is to use. It can also identify places where additional error handling or explanation is necessary.

These kinds of tests should not be conducted by the engineer who built the interface. Let's face it: the interface that I build may make perfect sense to me because I know what it is supposed to do and how it does it, but using the form may not be as obvious to someone else working with the product. A sequence of actions that feels quite natural to me may be frustrating for many users. Jeff Atwood made a similar point in his article "Open Source Software, Self Service Software" when discussing self-service checkout:

There are certain rituals to using the self-service checkout machines. And we know that. We programmers fundamentally grok the hoops that the self-service checkout machines make customers jump through. They are, after all, devices designed by our fellow programmers. Every item has to be scanned, then carefully and individually placed in the bagging area, which doubles as a scale to verify the item was moved there. One at a time. In strict sequence. Repeated exactly the same every time. We live this system every day; it's completely natural for a programmer. But it isn't natural for average people. I've seen plenty of customers in front of me struggle with self-service checkout machines, puzzled by the workings of this mysterious device that seems so painfully obvious to a programmer. I get frustrated to the point that I almost want to rush over and help them myself. Which would defeat the purpose of a... self-service device.

I related to this example right away. It seems very natural to me to do things step by step, since that is the way we have to write programs, but most people don't break tasks down into such small chunks. A good tester can identify where these steps feel onerous, which allows the developers to streamline the task.

Finally, if you need some convincing on hiring testers, take a look at Top Five (Wrong) Reasons You Don't Have Testers.
