Statistics Abuse In QA/QC: 3 Lessons Learned
The tools we use to develop new or improved products are essential to delivering those products as quickly, and as effectively, as technology, testing, and experience permit. But we must take care to use those tools correctly. The worst that can happen is to use a tool incorrectly and, as a result, draw erroneous conclusions that lead to injury, costly litigation, or a tarnished reputation.
One of the best tools at our disposal in a quality assurance/quality control context (manufacturing, supplier quality, etc.) is statistics. Statistics quantifies uncertainty, which lets us draw conclusions with a known degree of confidence. It also helps us interpolate and extrapolate from data to make informed decisions. The key word here is informed: statistics is not a substitute for thinking. Remember, statistics is a tool. As a tool, it provides clarity around uncertainty so you can draw more appropriate conclusions from the data describing your product’s performance. This is true in every phase of a project, whether feasibility, development, validation, or process improvement. Statistics can be abused deliberately, by purposely misapplying it, or inadvertently, through improper application or interpretation.
Lesson #1: Statistics Is Not A Substitute For Thinking
A form of statistics abuse can be seen in the following example. A manufacturing site was updating a testing machine to a more automated version. Comparison testing between the two machines showed a 2-gram difference, and the p-value indicated that difference was statistically significant. The engineer concluded the machines were not the same, and significant debate ensued. The main concern was that a 2-gram difference was not measurable by the customer, yet the statistics implied it was significant. The test in question was a subjective, destructive test measuring a constantly changing resistance force. Things were at an impasse until informal testing of prepared samples was conducted with customers. The customer feedback showed that differences of less than 10 grams were undetectable; in fact, for larger products, differences of less than 35 grams were undetectable. The conclusion: despite the statistics flagging this small difference as significant, it did not affect performance or the customer’s perception of performance. In other words, the difference between the two machines was statistically significant but not practically significant.
This scenario shows how a practical litmus test, coupled with experience and additional testing, can bring valuable clarity to statistical uncertainty. Always remember that statistics is a tool to help you make a decision; it is not a substitute for thinking.
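The distinction is easy to demonstrate. Below is a minimal sketch in Python, using simulated data (the numbers are illustrative, not the original study’s), of how a trivially small shift can still yield a “significant” p-value when samples are large, and why a practical threshold belongs in the analysis:

```python
# A minimal sketch of statistical vs. practical significance.
# All numbers are simulated for illustration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Resistance-force readings (grams) from the old and new testing machines,
# with a built-in 2-gram mean shift between them.
old_machine = rng.normal(loc=200.0, scale=5.0, size=500)
new_machine = rng.normal(loc=202.0, scale=5.0, size=500)

t_stat, p_value = stats.ttest_ind(old_machine, new_machine)
mean_diff = abs(new_machine.mean() - old_machine.mean())

# Hypothetical practical threshold: the smallest difference customers noticed.
PRACTICAL_THRESHOLD_G = 10.0

print(f"p-value: {p_value:.2e}")            # tiny -> statistically significant
print(f"observed difference: {mean_diff:.1f} g")
if p_value < 0.05 and mean_diff < PRACTICAL_THRESHOLD_G:
    print("Statistically significant, but below any practical difference.")
```

With hundreds of measurements, even a 2-gram shift produces a vanishingly small p-value; only the practical threshold keeps the conclusion honest.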
Lesson #2: Set A Realistic Confidence Level
I’ll share another example from my personal experience: A manufacturing site was switching to another supplier and was under pressure to complete the conversion quickly. The site’s quality engineers presented a statistical comparison of the two manufacturers’ products, concluding that all five materials being switched were the same. This, of course, was great news and would eliminate a significant amount of work for the site. However, the euphoria was dashed when the statistical analysis was reviewed: the site engineers had used a 99% confidence level rather than the 95% typically used for such analyses. That seemingly slight difference was far from slight, because the higher the confidence level, the greater the margin of error and the harder it becomes to declare a difference. For example, say there are 10 horses in a race and you want to pick the winner with 99% confidence. The only way to do so is to state that the winner will be one of the 10 horses, a claim so broad it tells you nothing. At 95% confidence you could narrow the field to nine horses, accepting the risk that the excluded 10th horse wins, but now the statement actually distinguishes among the horses. By analyzing at a 99% confidence level, the site engineers had, in effect, set the bar so high that the data could not flag a difference, which allowed them to conclude the suppliers were the same. When they redid the statistics at a 95% confidence level, four of the five products turned out to be quite different.
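The effect of the confidence level is easy to see in a short sketch. The summary statistics below are invented for illustration; the point is that identical data can flag a difference at 95% and sail through at 99%:

```python
# A minimal sketch of how the chosen confidence level flips the verdict.
# The summary statistics are invented for illustration, not the site's data.
from scipy import stats

# Hypothetical results for one material: mean, std dev, sample size per supplier.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=100.0, std1=4.0, nobs1=30,   # incumbent supplier
    mean2=102.5, std2=4.0, nobs2=30,   # proposed supplier
)
print(f"p-value: {p_value:.4f}")       # ~0.019 for these inputs

for alpha, level in [(0.05, "95%"), (0.01, "99%")]:
    verdict = "materials differ" if p_value < alpha else "no detectable difference"
    print(f"at the {level} confidence level (alpha = {alpha}): {verdict}")
# The same data flag a difference at 95% but slip through at 99%.
```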
In this case, knowledge of statistics was used to avoid doing the proper testing; conversely, knowledge of statistics caught a potential error that could have been very troubling. Statistics is a double-edged sword: it can work for you or against you, depending on how it is employed and the intentions behind its use. That alone is good justification for having a statistician on a company’s payroll to ensure statistics are properly applied.
Lesson #3: Look Into The Causes Of Testing Anomalies
Statistics is a powerful, insightful tool. Used properly, it is extremely helpful and timesaving, and it can help drive you to the right solution. But statistics is only as good as the data and how the data are collected, and it must be applied correctly.
In another personal experience, a supplier was testing a material for release, but when the receiving company performed the same test, the results indicated the material failed. A review of the testing suggested the problem lay with the testing technicians, one of whom produced very different results from the others; this technician was presumed to be one of the reasons for the discrepancy. Using statistics over many weeks and many different test protocols, we discovered the real source: each technician had to dissolve samples of the supplied material to make a solution, and variations in the amount of solution dispensed by the different technicians were causing the testing anomalies. The fix was to purchase automatic dispensing machines that removed the variation from the mixing step, one for the supplier and one for the receiving company. Over 44% of the variation was eliminated. And that one rogue technician whose results were significantly different from everyone else’s? That technician turned out to be the most accurate and consistent of them all.
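For the curious, here is a minimal sketch of the kind of variance breakdown that pointed to the dispensing step. The data and per-technician biases are simulated; the 44% figure above came from the actual investigation, not from this toy example:

```python
# A minimal sketch of partitioning test variation by technician.
# Data and biases are simulated; the 44% in the text is from the real study.
import numpy as np

rng = np.random.default_rng(0)

# Suppose each technician dispenses the dissolving solution with a
# slightly different bias (hypothetical values).
dispense_bias = {"tech_A": 0.0, "tech_B": 1.5, "tech_C": -2.0}
results = {
    tech: rng.normal(loc=50.0 + bias, scale=1.0, size=25)
    for tech, bias in dispense_bias.items()
}

# One-way sum-of-squares decomposition: between-technician vs. total variation.
all_values = np.concatenate(list(results.values()))
grand_mean = all_values.mean()
ss_total = ((all_values - grand_mean) ** 2).sum()
ss_between = sum(v.size * (v.mean() - grand_mean) ** 2 for v in results.values())

print(f"share of variation attributable to technician: {ss_between / ss_total:.0%}")
# Removing the dispensing differences (e.g., with an automatic dispenser)
# would eliminate roughly this share of the observed variation.
```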
Points To Remember In Using Statistics
These cases show why it is important for your company to ensure that project teams receive solid initial and ongoing training in statistics. Teams must know how to collect data and how to preserve data integrity. Employ or consult a statistician who can review project analyses and guide those using statistics, ensuring the right methods are applied and the right conclusions drawn. A statistician is also a valuable resource for verifying that statistics are applied and interpreted correctly, consulting with product teams, and educating project teams to improve their knowledge and use of statistics. After all, statistics are only as good as the data fed into them and how those data are collected, and the decisions made from statistics are only as good as the statistics themselves. In the end, product safety, customer safety, product efficacy, and your company’s reputation and vitality are at stake.