When I first started in the employee survey business, I was responsible for some big mistakes. Eventually, I realized that anything we did not test would go wrong; and that it wasn’t just testing that mattered, but testing thoroughly and intelligently. The latter realization came after an Excel copy/paste bug — where the first 20 and last 20 cells were correct, but everything in the middle was just a repeat of the first 50 or so cells, over and over.
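A bug like that one, where a long middle stretch of output is a periodic repeat of earlier values, is exactly the kind of thing an automated sanity check can catch. Below is a minimal sketch in Python; the function name, the period argument, and the thresholds are my own illustration, not part of any actual survey tooling.

```python
def repeated_block_spans(values, period, min_repeats=3):
    """Find spans where the sequence repeats with the given period.

    Returns a list of (start, end) index pairs covering every stretch
    where values[j] == values[j - period] long enough to contain at
    least `min_repeats` consecutive copies of a `period`-length block.
    """
    spans = []
    n = len(values)
    i = 0
    while i + period < n:
        # Extend j while each value echoes the one `period` cells back.
        j = i + period
        while j < n and values[j] == values[j - period]:
            j += 1
        run_len = j - i
        if run_len >= period * min_repeats:
            spans.append((i, j))
            i = j          # skip past the flagged run
        else:
            i += 1         # no run here; slide the window forward
    return spans
```

In practice you would scan each output column over a range of candidate periods and send any long flagged span to a human for review; real survey data can contain legitimate repetition, so this is a smoke alarm, not a verdict.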
Sadly, that’s not unique to surveys. As one example, back in the 1980s, an automaker launched a critical series of new engines. The engines had been through endurance testing in extreme weather conditions, yet dealerships were soon flooded with seized engines. The problem: the company had saved money by buying a single brand of oil for testing. Many dealers and owners used inferior oils with a high paraffin content that thickened in the cold far more than they should have. Engineers quickly changed the design to cope with glue-like oil, and the engines went on to develop a good record for longevity.
Lesson learned? Not quite. Thirty years later, the company made almost the same mistake again, this time with gasoline. Gasoline formulations differ from state to state, and different brands carry different additives. The company again used a single brand to save money, and despite otherwise thorough testing, ended up with an embarrassing and expensive problem: the heads of certain engines, when run on certain gasoline under certain conditions, would crack.
Between those failures of testing were many others — some as products were rushed into production, some because people didn’t ask the right questions about them. Billions of dollars were lost to poor quality — in warranty costs, incentives, and extra advertising and marketing to cover cars no longer seen as above-average in reliability.
Some products seem to lack any testing at all, particularly web sites and other software. In some cases, that is because all testing was done on a single operating system and web browser, and sometimes only on a single type of computer and monitor. The result can be a web site that works for only a fifth of its customers (or potential customers), or one that breaks as soon as the operating system is updated (as with any web site “designed for Internet Explorer 6”).
One form of testing is watching someone try to use a product, even an employee survey. You can quickly spot problems as people try to do things in unforeseen ways, interpret directions differently or ignore them entirely, miss major user interface features, and so on. Corporations large and small, and government agencies alike, often seem to skip these tests, or test only with “people of convenience” who may be too familiar with their group’s way of thinking, or with current or past versions of the product. That can cause big problems down the road, as in one organization where a critical control was not even visible on most people’s screens.
Another form of testing is the “mystery shopper” approach, even if done without actual mystery shoppers. Try contacting your own company (or agency, or university) with a problem, particularly one requiring human contact. See how the phone system works: does the introductory message run so long that, if you were a real customer, citizen, student, or instructor, you would be ready to clobber the first person who answers? Do the choices make sense? Is one of them appropriate? Do they actually work? The same approach can and should be used for most user interfaces, whether on a computer, on a dashboard, or in a company lobby.
It’s slow and mildly expensive to test everything thoroughly. It’s even more expensive and time-consuming to re-do and repair things that could have been done right the first time, and to lose or enrage customers.
“If it’s not tested properly, it’ll go wrong.” That became the paranoid mantra of our consulting practice, and it served us well for many years. Perhaps it could serve your organization well, too.