The Need for Generalists: Part 3


In failure analysis, there is a tendency to gravitate to a few common test protocols. But this approach can result in a mismatch of techniques to the problem.


There is a saying that has been attributed at various times to Mark Twain, Abraham Maslow, and many others: “If all you have is a hammer, everything looks like a nail.” Warren Buffett, although not the originator of the maxim by any account, cited it in the 1980s when critiquing academic studies of financial markets based on what he deemed inappropriate mathematical techniques.

It also has application in the field of failure analysis. There are perhaps as many as 60 different analytical techniques that can be used to analyze a polymer problem. But like it or not, most of us in the analytical services field have become comfortable with six to eight of these, and we tend to rely on them almost exclusively. Which techniques we select depends greatly upon our training, our professional path, and what is available to us.

Unfortunately, this often results in a mismatch of techniques to the problem. And when the data does not fit the emphasis of the chosen method, this simple fact is often ignored, or an attempt is made to shoehorn the data into a rather tortured scheme.

Several years ago, we worked on a problem that followed such a path. It involved a very low failure rate on an LDPE part. A solution to the problem had previously been sought by two facilities with well-defined specialties. The first group focused on internal stress in the parts and used instrumentation designed to make precise measurements of these stress levels.

They had taken numerous data points; however, when the numbers were crunched, the correlation between the measured stress and the likelihood of failure was poor. The next group had focused on composition. This facility was very strong in high-end techniques such as chromatography and nuclear magnetic resonance and literally pulled the PE molecule apart while at the same time analyzing the additives in the material. This approach also uncovered nothing that distinguished a good part from one that failed.

Problems with very low failure rates are the most challenging because they inevitably are due to multiple factors. A strategy that looks for one major contributor may work in a case where 10-20% or more of the parts are failing. But when the occurrence is in the range of 10-20 ppm, experience has shown that one influence significant enough to cause product failure simply does not come and go with this frequency. Instead, three or four factors will be involved and only when these multiple factors combine in a particular way will the part fail. 
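The arithmetic behind this observation is worth making explicit. As a hedged illustration, with invented per-factor rates rather than anything from the actual case: if each of four independent contributing conditions occurs in only a few percent of parts, their joint occurrence lands in the parts-per-million range.

```python
# Illustrative only: assumed per-part occurrence rates for four
# independent contributing factors (not data from the actual case).
factor_rates = [0.05, 0.05, 0.02, 0.2]

# Joint probability that all four factors coincide in one part.
joint = 1.0
for rate in factor_rates:
    joint *= rate

print(f"Joint occurrence: {joint:.2e} ({joint * 1e6:.0f} ppm)")
```

With these assumed numbers the joint occurrence works out to about 10 ppm, even though no single factor is rarer than 1 in 50, which is why a search for one dominant cause at these failure rates tends to come up empty.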

In this case, the failures were caused by a combination of poor color mixing, incidental damage caused by automated handling equipment, a tolerance stack-up between parts in the assembly that did result in elevated stress, and the effects of extended storage conditions on the crystallinity of the material. Therefore, part failure depended on polymer structure combined with dimensional considerations as well as manufacturing issues related to part handling and melt processing. No one of these factors alone was capable of producing a failure—they all had to be in play and to a sufficient magnitude to affect the performance of the polymer.

Facilities made up only of polymer chemists or stress analysts are not likely to bring to the table the range of skills needed to properly weigh all of these factors and assemble them into a cohesive picture. To uncover the confluence and the relative importance of each factor required multiple test techniques that represented these different disciplines, a careful evaluation of the raw data from each technique, and a practical appreciation of how significant each factor could be. Frequent conversations with the client were also invaluable since each call brought out new observations.

Many of the tests used to unravel the problem had not been performed previously, and not every test produced a positive result. One of the tasks of the analyst is to distinguish a significant result from an insignificant one. One test, differential scanning calorimetry (DSC), was of particular importance and had in fact been performed previously. However, a lack of thoroughness and a misinterpretation of the results led to a missed opportunity.

While most of the tests had included both good and failed parts, the earlier DSC tests had been run only on parts that failed. It was not until the tests were repeated with both good and failed parts that the significance of the results was noted.

The accompanying graph shows a comparison between good and failed parts. Every failed part exhibited the unusual secondary step in the thermogram just before the primary melting event. None of the good parts displayed this extra transition. 

The research literature is full of papers that discuss the ability of DSC to detect minor transitions in a material that can provide information on the thermal history of a molded part. Semi-crystalline polymers, when exposed to elevated application temperatures, will develop a new population of crystals that will melt at a temperature slightly above the temperature at which they were formed. Therefore, these inflections not only tell us that the material had been exposed to elevated temperatures, they also give us a good approximation of that temperature. In this case the temperature of the secondary transition was in good agreement with the summertime temperature inside the warehouse where these parts were stored for extended periods of time.
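The idea of reading an exposure temperature off a thermogram can be sketched with synthetic data. Everything below is illustrative, not case data: the heat-flow curve is fabricated, the main melting endotherm is placed near 112 °C as a typical LDPE value, and the small annealing endotherm is assumed at 55 °C, standing in for a transition a few degrees above the storage temperature at which the secondary crystal population formed.

```python
import numpy as np

# Synthetic DSC heat-flow curve; endotherms appear as downward dips.
# All peak positions and magnitudes are assumed for illustration.
T = np.linspace(25, 140, 1151)                         # temperature, C
heat_flow = (-1.00 * np.exp(-((T - 112) / 5) ** 2)     # primary melting
             - 0.05 * np.exp(-((T - 55) / 3) ** 2))    # small annealing endotherm

# Search for a minor endothermic dip well below the main melting region.
pre_melt = T < 90
secondary_T = T[pre_melt][np.argmin(heat_flow[pre_melt])]
print(f"Secondary transition near {secondary_T:.0f} C")
```

In the real analysis this reading would be compared against the storage history; here, the located dip at the assumed 55 °C stands in for the agreement the author found with summertime warehouse temperatures.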

It is also known that with an increased degree of crystallinity, the strength and stiffness of a semi-crystalline polymer will increase while the ductility will decrease. The DSC features we noted had been evident in the earlier analysis. But because those tests had been performed only on failed parts, their significance was missed, and the inflections were attributed to a different mechanism that turned out to be incorrect.

It should be evident that this elevated-temperature exposure was not, by itself, the cause of the failures. Hundreds of thousands, if not millions, of these parts went through storage under these summertime conditions. Failures appeared only in parts that had experienced this treatment, exhibited particularly poor color homogeneity, had been damaged by the high-speed automated assembly machinery, and were assembled from components at the extremes of the tolerance range.

This last factor is particularly important. None of the parts in the assembly were out of print. But when a part molded to the high end of its outer-diameter tolerance was inserted into a part with a mating inner diameter at the low end of its tolerance, the combination created the high-stress condition needed to tip the scales. Too often when a dimensional analysis is performed, parts are checked to determine whether they meet the print, and if they do, this factor is set aside as not worth considering.
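The stack-up logic can be sketched numerically. The dimensions and tolerances below are hypothetical, not from the actual assembly; the point is that two parts, each individually within print, can still combine into an interference condition at the extremes of their ranges.

```python
# Hypothetical mating pair: every dimension is within its own tolerance.
od_nominal, od_tol = 25.00, 0.05   # mm, outer diameter of inserted part
id_nominal, id_tol = 25.02, 0.05   # mm, inner diameter of receiving part

nominal_clearance = id_nominal - od_nominal
# Worst case: largest allowable OD inserted into smallest allowable ID.
worst_clearance = (id_nominal - id_tol) - (od_nominal + od_tol)

print(f"Nominal clearance:    {nominal_clearance:+.3f} mm")
print(f"Worst-case clearance: {worst_clearance:+.3f} mm")  # negative = interference
```

With these assumed values the nominal fit has positive clearance, but the worst-case pairing is an interference fit, which is exactly the elevated-stress condition a pass/fail check against the print would never flag.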

Before we leave this subject of specialization, there is one more feature of our modern organization structure that we need to address. And that involves the division of labor within the analytical laboratory. That will be the subject of my next column.

Read The Need for Generalists: Part 1

Read The Need for Generalists: Part 2

ABOUT THE AUTHOR: Mike Sepe is an independent, global materials and processing consultant whose company, Michael P. Sepe, LLC, is based in Sedona, Ariz. He has more than 40 years of experience in the plastics industry and assists clients with material selection, designing for manufacturability, process optimization, troubleshooting, and failure analysis. Contact: (928) 203-0408 • mike@thematerialanalyst.com.