Understanding the Messy Process of Developing Product Performance Test Methods
The cleaning product market is flooded with competing products vying for your attention, with labels that include “better than,” “more energy efficient,” and “works faster.” Verifying those claims requires a process. Many companies support their claims with performance-based product testing. However, designing a sound test method is not as simple as making a mess and cleaning it up.
A major problem continually plaguing the industry is unrealistic or biased product performance tests.
Performance results are only as good as the procedure used to gather them, and a poorly designed test will not fairly or accurately represent a product’s “in-home” performance.
Unfortunately, most unrealistic claims only come to light when a competitor files a lawsuit against the company making them.
Renowned vacuum company Dyson filed two such lawsuits against its competitors. The competitors claimed that their vacuums had a 750W rating, but independent testing presented by Dyson showed that they could consume more than 1600W of power. Dyson claimed that one reason for the power discrepancy was that the vacuums were tested in a dust-free environment. If that claim is true, then the test procedure used to determine the power ratings was not consumer-relevant, and the ratings themselves were essentially false in a real-world setting.
Can companies accurately make consumer-relevant claims?
The best way to answer that is to understand what makes a good product test; doing so will also make an inadequate product test easier to identify. The main goal of any product performance test should be to make it as consumer-relevant as possible while still maintaining enough control over the system to gather meaningful, accurate and repeatable data. Generally, when a test procedure is being developed, not all of the relevant information is known at the outset, and some initial research is required before actual testing begins.
Many challenges can arise while attempting to take the completely uncontrolled world of the consumer and map it within the constraints of a scientific test procedure. With too much control, a test can become completely unrealistic, so the data gathered will not accurately mirror a real-life scenario. Such was the case in the aforementioned vacuum cleaner example. On the other hand, with too little control, the data are useless, because little can be determined from a test with numerous uncontrolled variables influencing the results.
Steps to a better test
1. Determine the question(s) the test is attempting to answer
There are many factors which contribute to a product’s overall performance including, but not limited to, its speed, effectiveness, power consumption, lifespan, performance deterioration and ease of use. A single test is not going to be able to measure all of these things simultaneously, so it’s necessary to decide what is going to be tested.
When possible, it’s best to design tests that measure data quantitatively versus qualitatively.
Quantitative data consist of objectively measured numerical values, while qualitative data consist of subjectively evaluated descriptions. If a test is designed to determine “how clean” something is, qualitative data would be based on the tester’s perception, which could introduce that tester’s bias as well as variability between testers. The same result could receive a “cleanliness” rating of five out of ten from one tester, while another would assign a seven. The same tester’s rating behavior can even drift over the course of several trials.
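The gap between the two kinds of data can be illustrated with a small sketch. The numbers below are hypothetical: five subjective “cleanliness” ratings of the same cleaned surface, versus five repeated weighings of the residual soil left on it.

```python
from statistics import mean, stdev

# Hypothetical ratings of the SAME cleaned surface by five testers (1-10 scale).
subjective = [5, 7, 6, 8, 5]

# A quantitative alternative: residual soil mass in grams, measured on a scale.
# Repeated weighings of the same surface vary far less than human judgments.
residual_soil_g = [0.42, 0.43, 0.42, 0.41, 0.42]

def cv(values):
    """Coefficient of variation: spread relative to the mean."""
    return stdev(values) / mean(values)

print(f"subjective CV:  {cv(subjective):.2f}")
print(f"gravimetric CV: {cv(residual_soil_g):.2f}")
```

The coefficient of variation of the subjective ratings is roughly an order of magnitude larger than that of the gravimetric measurement, which is exactly the tester-to-tester noise the article describes.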
2. Find out what is consumer-relevant
The basis of any good consumer testing should be focused on mimicking the environment and “use cases” that will occur in the real world. Information about these factors can be gleaned from several different sources, including test standards and user research. Test standards are useful in setting up a framework for a test, and they can provide guidance on how similar tests have been executed. However, it is important to always approach those test procedures – even the standardized ones – with a critical eye.
User research is one of the most relevant ways to gather data on typical use cases. User research can take several forms including surveys, in-home studies and user testing in a controlled lab.
Surveys represent a fast and easy way to gather information from many users. However, survey responses are not always captured accurately or quantitatively enough to serve as the basis for a sound test procedure.
In-home studies are useful in observing how products are being used in a realistic uncontrolled setting. They can provide insight on how consumers interact with a product while cleaning. This approach is often more useful than a survey because many consumers will oversimplify or omit steps when recounting their cleaning patterns in a survey. Observational research can also reveal unintended use cases and unmet needs of a product. The observed cleaning patterns and behavior can then be recreated by the test method.
User testing in a controlled lab setting can be used to gather specific data in order to mimic use behavior. For example, if the test is investigating how a user wipes up a spill with a towel, a test can be set up to determine the downward forces applied to clean the surface as well as the speed and motion of the moving towel. Variation in speed and force between users is to be expected, so multiple individuals and varying demographics need to be measured to determine typical forces and speeds for use in the test procedure.
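Turning a panel of measurements into a standardized test setting is a small statistics exercise. As a sketch with hypothetical numbers, the median wiping force and speed from a user panel could become the fixed settings in the procedure, with the interquartile range documenting the real-world variation the fixed setting hides:

```python
from statistics import mean, quantiles

# Hypothetical lab measurements of downward wiping force (newtons) and
# wipe speed (cm/s) from a panel of users across demographics.
force_n   = [8.2, 11.5, 9.8, 14.1, 7.6, 10.3, 12.9, 9.1, 13.4, 10.8]
speed_cms = [22, 35, 28, 41, 19, 30, 38, 25, 33, 29]

def summarize(name, values):
    # quantiles(n=4) returns the three quartile cut points [Q1, Q2, Q3].
    q1, q2, q3 = quantiles(values, n=4)
    print(f"{name}: mean={mean(values):.1f}, median={q2:.1f}, "
          f"IQR=[{q1:.1f}, {q3:.1f}]")

summarize("force (N)", force_n)
summarize("speed (cm/s)", speed_cms)
```

The median (or mean) becomes the standardized force or speed in the written procedure; a wide interquartile range is a signal that a single fixed setting may not represent all users.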
3. Standardize the test
One of the most difficult aspects of any cleaning test method is standardization of the mess itself. This single aspect can have a far-reaching influence on the test results. It is also one of the most complicated features of testing cleaning products, because in the real world messes are as different as snowflakes. User research can also provide the starting point for a standardized mess “recipe.”
An important factor to consider during mess development is the stability of every ingredient. It is necessary to consider how ingredients will be measured, as well as how any potential instability will influence those measurements. Ingredients such as dirt or grass clippings can have fluctuating moisture content, which may influence a product’s performance and present a problem if the test method incorporates weight measurements. If ingredient stability is a concern, additional investigation will be needed to determine if the ingredient is unstable enough to cause an issue and whether there are any steps which can be taken to stabilize that ingredient.
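One common way to handle fluctuating moisture, sketched below with hypothetical numbers, is to oven-dry a small sample of the ingredient to determine its moisture fraction, then convert every batch weight to a dry-mass basis before comparing results:

```python
# Hypothetical correction for fluctuating moisture in a soil ingredient.
# A small sample is oven-dried to find the moisture fraction, then each
# batch weight is converted to a dry-mass basis before comparison.

def moisture_fraction(wet_sample_g, dried_sample_g):
    """Fraction of the sample's weight that was water."""
    return (wet_sample_g - dried_sample_g) / wet_sample_g

def dry_mass(batch_weight_g, moisture):
    """Convert an as-weighed batch to its moisture-free equivalent."""
    return batch_weight_g * (1.0 - moisture)

# Two test days with the same nominal 50 g of soil but different humidity:
m_day1 = moisture_fraction(10.0, 9.2)   # ~8% moisture
m_day2 = moisture_fraction(10.0, 8.5)   # ~15% moisture

print(dry_mass(50.0, m_day1))  # day 1 dry mass, ~46.0 g
print(dry_mass(50.0, m_day2))  # day 2 dry mass, ~42.5 g
```

Without the correction, the two nominally identical 50 g messes differ by about 3.5 g of actual soil, which would silently bias any weight-based cleaning measurement.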
Mess composition is a complex factor that can influence product comparison results. Mess composition can sometimes unfairly favor one cleaning product over another.
An example of this can be seen in a recent lawsuit against the Swiffer Sweeper. The Swiffer packaging claimed that the “Swiffer Sweeper leaves your floors up to three times cleaner than a broom…on dirt, dust and hair.” In this particular case, the plaintiff, Libman, argued that the testing was unfairly skewed in favor of the Swiffer Sweeper, due in part to the fact “that larger particles that might not stick to a sweeper pad had been sifted out of the ‘dirt’ used for testing.”
4. Clarify the test procedure
The test procedure itself should be clear and precise. The ultimate goal should be to create a test that will produce repeatable results regardless of the individual performing the test. There should be no ambiguity that could cause a test executor to make assumptions or deviate from the intended procedure. One way a test procedure can be vetted for ambiguity is to recruit individuals who have no prior knowledge of the test and ask them to perform it using only the written procedure. Any deviation from the procedure or confusion should be noted, and any ambiguities in the procedure clarified.
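A vetting run of this kind can be scored numerically. In the hypothetical sketch below, three naive operators each execute the written procedure five times; comparing the spread within each operator to the spread between operators flags whether differences come from measurement noise or from the procedure being interpreted differently:

```python
from statistics import mean, stdev

# Hypothetical vetting run: three naive operators each execute the written
# procedure five times; the measured result is grams of mess removed.
runs = {
    "operator_a": [4.1, 4.3, 4.2, 4.0, 4.2],
    "operator_b": [4.2, 4.1, 4.3, 4.2, 4.1],
    "operator_c": [3.1, 3.3, 3.0, 3.2, 3.1],  # deviates: possible ambiguity
}

within  = mean(stdev(v) for v in runs.values())   # noise within one operator
between = stdev(mean(v) for v in runs.values())   # spread across operators

print(f"within-operator spread:  {within:.2f} g")
print(f"between-operator spread: {between:.2f} g")
```

A between-operator spread much larger than the within-operator spread suggests the procedure is being interpreted differently, not merely executed noisily, and the interviews with the deviating operator show exactly where the wording needs clarification.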
Improve the Performance of Your Performance Tests
The realm of product performance testing presents a lot of challenges, but valuable information and insights can result when a test procedure is constructed correctly.
The key to success is keeping the procedure consumer-relevant, accurate and repeatable.
The consumer-relevant portion can be addressed with user research. The accurate and repeatable requirements can be provided by standardizing the highly variable aspects of mess creation and removal. Depending on the test method, product performance testing can range from meaningful to worthless, but if proper care is taken, the results can be extremely useful for developing and marketing new products.