Quality Engineers: Are You Making The Right Call?

by Eric Hinrichs, senior principal quality engineer (retired), Ethicon, part of the Johnson & Johnson Medical Device Companies

As a quality engineer, have you ever had the following conversation with a project leader?
Project leader: “Hey, do you have a minute?”
You: “Sure, what is it?”
Project leader: “Well, we need your signature on this validation completion report. As you know, the project is hot and the product launch is due next week. This approval is all we need to get the product released. You’re the last signature. Everything is fine, so just sign it and I will get things moving.”
You look at the report the project leader has handed you. You had reviewed it earlier when it showed up in your queue, but you saw red flags in the data analysis. There were outliers that were “explained away” and not included in the final analysis. Not just one or two, but many, and they were outliers by a lot. Something wasn’t right about them and the explanation to exclude them didn’t sit well with you. You know the product launch is the company’s number one priority; everyone is looking to this launch as a big win for the company, but these outliers….
You: “Sorry, but I need to understand these outliers you excluded. I do not align with the rationale in the completion report regarding them. I think there needs to be more investigation as to how and why they occurred.”
Project leader: “Ah, c’mon, the rationale is good. Everyone else has signed off on the report. They’re no big deal. Yeah, a few outliers happened, but that happens all the time. C’mon, we need this product launch to happen and you’re the last person to sign off. C’mon, as a favor for me. Trust me, the outliers are a fluke; there isn’t anything that is going to go wrong.”
So, what do you do? Sign off on the report as a favor or reject the report and ask for more data analysis and risk having management upset with you for the delay?
In quality we are often faced with such dilemmas. The above scenario is based upon a more innocent request early in my career. I wanted to be viewed as a team player. I was new and I didn’t want to look bad to management. I gave the project leader the benefit of the doubt. What happened? There was a problem with the product, and I became the project leader’s and management’s scapegoat, with the oft-stated comment, “Well, Quality approved it.” As if I was the only one that could have stopped the launch.
This is the tough part of the role Quality plays in a company: having the ethical fortitude to make the right decisions. Did I have a choice? I did, but I caved in favor of my rapport with the project leader and the fear of being perceived as a roadblock to the company’s goal of getting the product launched. My gut, my intuition, my sixth sense all told me not to approve the report, but the emotions behind helping a friend and not looking bad overwhelmed my common sense. The product did have issues and had to be redesigned.
What were my options? Did I have any? I did. What I should have done was get with the project’s quality engineer and review the raw data and their findings. Gain alignment on what really occurred during the validation. Once done, work up a synopsis of our findings, what options were available to ameliorate the situation, and what resources would be needed to fix the issue, as well as the best-case scenario for a new launch date.
True, no one would like this scenario, but I found that it’s best to be late and right than early and wrong. Management should understand the situation and align with it if the information and situation are clearly and properly framed and presented. No one likes delays, but delays for the right reason should be understood.
Pressure in the workplace is real and often overwhelming. Sometimes one needs to step back and take a deep breath and look at what options are available that are best, not only in the short term, but in the long term as well; remember to look at the bigger picture. One thing to consider in making a decision is what is often called the red-face test: In an audit, could you look at the auditor and tell them why you approved something without being embarrassed? Could you defend your decision and feel comfortable doing so?
That approach has helped me immensely in my career. You will often encounter people who will try to use your friendship or position to get something done, and when there is a problem later, they are the ones leaving you hanging.
Ethics and doing the right thing are difficult conversations to have when a job, a career, or a reputation is involved. Doing the right thing in the end is the best solution regardless of what influences are happening around you. Remember to utilize data to support your position, present your position in a clear and concise manner, and provide options as solutions with timing and resource needs. Doing this will leave you in the best light with management and show that you are also coming to the table with solutions. If you ever seem at a loss as to what to do, remember what Teddy Roosevelt said: “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.”
Good luck; you’ve got this.

Statistics Abuse In QA/QC: 3 Lessons Learned

The tools we use to develop new or improved products are essential in ensuring we deliver the product as soon as possible and make it as efficacious as technology, testing, and knowledge/experience permit. But we must be mindful to use those tools correctly and effectively. The worst that can occur is to use a tool incorrectly and, as a result, draw erroneous conclusions that could lead to injury, costly litigation, or a tarnished reputation.

One of the best tools at our disposal in a quality assurance/quality control context (manufacturing, supplier quality, etc.) is statistics. Statistics helps to quantify uncertainty, allowing us to draw conclusions with a stated degree of confidence. Statistics also helps us extrapolate and interpolate information in the form of data to make informed decisions. The key word here is informed. Statistics is not a substitute for thinking. Remember, statistics is a tool. As a tool, it provides more clarity around uncertainty so you can reach a more appropriate conclusion from the data that relate to your product’s performance. This is extremely important regardless of the project’s phase, whether it is feasibility, development, validation, or an aspect of process improvement. Statistics can be abused purposely, through deliberate misapplication, or inadvertently, through improper application or interpretation.

Lesson #1: Statistics Is Not A Substitute For Thinking

A form of statistics abuse can be demonstrated in the following example. A manufacturing site was updating a testing machine to a more automated version, and a statistical comparison of test data from the two machines showed that the p-value for a 2-gram difference between them was significant. The engineer determined that this meant the machines were not the same, and significant debate ensued. The main concern was that a 2-gram difference was not measurable by the customer, yet statistics implied the difference was significant. The test was a subjective destructive test measuring a constantly changing resistance force. Things were at an impasse until informal testing of prepared samples was conducted with customers. The customer feedback showed that differences of less than 10 grams were undetectable; in fact, for larger products, differences of less than 35 grams were undetectable. The conclusion: despite statistics flagging this small difference as significant, it did not impact the performance, or the perception of performance, by the customer. The difference was statistically significant but not practically significant — in practical terms, the two machines were the same.

This scenario illustrates where a practical litmus test coupled with experience and additional testing can provide valuable information or clarity around statistical uncertainty. Always remember that statistics is a tool to help you make a decision — it is not a substitute for thinking.
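As a rough illustration of the idea — not the original study’s data — the following Python sketch uses invented numbers: two simulated testing machines whose readings differ by a 2-gram offset. With enough samples the difference comes out statistically significant, yet it sits far below the 10-gram customer-detectability threshold described above.

```python
# Invented numbers throughout: 500 simulated peel-force readings (grams)
# from an "old" and a "new" testing machine, offset by 2 g.
import math
import random
import statistics

random.seed(42)
old_machine = [random.gauss(100, 5) for _ in range(500)]
new_machine = [random.gauss(102, 5) for _ in range(500)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(b) - statistics.fmean(a)) / math.sqrt(
        va / len(a) + vb / len(b))

diff = statistics.fmean(new_machine) - statistics.fmean(old_machine)
t = welch_t(old_machine, new_machine)

# |t| lands well above 2, so the 2 g shift is statistically significant,
# yet it is far below the (hypothetical) 10 g customer-detection limit.
print(f"difference = {diff:.1f} g, t = {t:.1f}")
print("practically meaningful?", abs(diff) >= 10)
```

The point survives any particular numbers: collecting more samples shrinks the standard error and makes ever-smaller differences “significant,” which is exactly why a practical threshold from customer testing has to sit alongside the p-value.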

Lesson #2: Set A Realistic Confidence Level

I’ll share another example from my personal experiences: A manufacturing site was switching to another supplier and was under pressure to complete the conversion quickly. The quality engineers from the site presented their data statistically comparing the two manufacturers’ products, with the conclusion that all five materials being switched were the same. This, of course, was great news and would eliminate a significant amount of work by the site. However, this euphoric conclusion was dashed when the statistical analysis was reviewed. The site engineers had used a 99% confidence level rather than the 95% typically used for such analyses. This seemingly slight difference was far from slight, because the higher the confidence level, the wider the interval, and the harder it is for a real difference to register as significant. For example, say there are 10 horses in a race and you want to pick the winner. The only way to be 100% confident is to name all 10 horses; if you will settle for 90% confidence, you can name just nine, accepting the chance that the 10th horse wins. Greater confidence comes at the price of a vaguer, wider statement. By running the comparison at a 99% confidence level, the site engineers made the intervals so wide that real differences vanished, and the statistics “concluded” the suppliers were the same. When the analysis was redone at a 95% confidence level, it showed that four of the five products were actually quite different.
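A minimal sketch of how the same data can flip between confidence levels, using invented tensile-strength numbers for one material from each supplier: the 95% interval for the difference of means excludes zero (the materials differ), while the wider 99% interval contains zero (they look “the same”).

```python
# Invented data: tensile strengths (arbitrary units) of one material
# as made by two hypothetical suppliers.
import math
import statistics

supplier_a = [50.1, 49.8, 50.6, 50.3, 49.9, 50.4, 50.2, 50.5]
supplier_b = [50.38, 50.68, 50.18, 50.48, 50.78, 50.28, 50.58, 50.88]

def diff_ci(a, b, z):
    """Approximate CI for the difference of means, normal critical value z."""
    diff = statistics.fmean(b) - statistics.fmean(a)
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return diff - z * se, diff + z * se

lo95, hi95 = diff_ci(supplier_a, supplier_b, 1.96)   # 95% level
lo99, hi99 = diff_ci(supplier_a, supplier_b, 2.576)  # 99% level

# The 95% interval excludes zero (a real difference); the wider 99%
# interval straddles zero, so the suppliers look "the same."
print("95% CI contains zero:", lo95 <= 0 <= hi95)
print("99% CI contains zero:", lo99 <= 0 <= hi99)
```

The normal critical values (1.96, 2.576) are an approximation; a t distribution would be the textbook choice at samples of eight, but the widening effect being illustrated is the same either way.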

In this case, a knowledge of statistics was being used to avoid doing the proper testing and, conversely, knowledge of statistics helped avoid a potential error that could have been very troubling. Statistics is like a double-edged sword — it can work for you or against you depending upon how it is employed and the intentions behind its use. This is a good justification for having a statistician on a company’s payroll to ensure statistics are properly applied.

Lesson #3: Look Into The Causes Of Testing Anomalies

Statistics is a powerful and insightful tool. When used properly, it is extremely helpful and time-saving, and it can help drive you to the right solution. Unfortunately, statistics is only as good as the data and how the data are collected. Further, it has to be applied correctly.

During another personal experience, a supplier was testing a material for release, but when the test was performed by the receiving company, the results indicated the material failed. A review of the testing suggested the problem lay with the testing technicians, with one in particular having very different results from the others; obviously, this technician was one of the reasons for the difference in test findings. Using statistics over many weeks and many different test protocols, we discovered that the testing variations arose because each technician had to dissolve samples of the supplied material to make a solution. Variations in the amount of solution dispensed by the different technicians were causing the testing anomalies. The fix was to purchase an automatic dispensing machine that removed the variation from the mixing step; we bought one for the supplier and one for the receiving company. Over 44% of the variation was eliminated. And that one rogue technician whose testing was significantly different from the others’? It turned out that technician was actually the most accurate and consistent of them all.
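The kind of analysis that points the finger at a process step rather than a person can be sketched with a simple one-way sum-of-squares breakdown. The technician names and dispensing volumes below are invented for illustration; the sketch shows what fraction of the total variation lies between technicians rather than within each technician’s own repeats.

```python
# Invented data: dispensed solution volume (mL) per technician across
# repeated sample preparations. Between-technician differences are
# deliberately exaggerated to make the decomposition obvious.
import statistics

volumes = {
    "tech_A": [10.2, 10.3, 10.1, 10.2, 10.3],
    "tech_B": [9.6, 9.7, 9.5, 9.6, 9.8],
    "tech_C": [10.8, 10.9, 10.7, 10.9, 10.8],
}

all_values = [v for vals in volumes.values() for v in vals]
grand_mean = statistics.fmean(all_values)

# Sum of squares between technicians vs. within each technician's repeats.
ss_between = sum(len(vals) * (statistics.fmean(vals) - grand_mean) ** 2
                 for vals in volumes.values())
ss_within = sum((v - statistics.fmean(vals)) ** 2
                for vals in volumes.values() for v in vals)

share = ss_between / (ss_between + ss_within)
print(f"share of variation between technicians: {share:.0%}")
```

When most of the variation sits between operators on a manual step, an engineering fix to that step (here, the automatic dispenser) removes it at the source, which is far more durable than retraining people to be identical.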

Points To Remember In Using Statistics

These cases and the points made show why it is important for your company to ensure that your project teams have solid initial and ongoing training in statistics. It is important that teams know how to collect data and how to preserve data integrity. Employ or consult with a statistician who can review project analyses and guide those using statistics to ensure they are using the right methods and drawing the right conclusions. A statistician is also a valuable resource to verify that statistics are applied and interpreted correctly, to be available as a consultant to product teams, and to educate project teams and improve their knowledge and use of statistics. After all, statistics are only as good as the input data and how they are collected, and decisions made based upon statistics are only as good as the statistics. In the end, product safety, customer safety, product efficacy, and your company’s reputation and vitality are at stake.