Michael Sepe is an independent materials and processing consultant based in Sedona, Ariz., with clients throughout North America, Europe, and Asia. He has more than 35 years of experience in the plastics industry and assists clients with material selection, designing for manufacturability, process optimization, troubleshooting, and failure analysis. Contact: (928) 203-0408 • firstname.lastname@example.org
The Effects of Time
Last month we briefly discussed the influence of temperature on the mechanical properties of polymers and reviewed some of the structural considerations that govern these effects. We noted that this behavior distinguishes plastic materials from metals and makes them inherently less predictable. This month we will start to cover the second dimension of the map that governs the behavior of these materials: time.
Knowing the effects of temperature on the load-bearing properties of a material is important, but the measurements are made over a short time scale. If we extend the time scale of the measurement process, we encounter a continuing change in the response of the material. If the material is under constant stress, we observe this continual change as an ever-increasing level of strain, known as creep or cold flow. If the material is under constant strain, we observe a decay in the stress required to maintain this strain. The two responses, while not precisely the same, are similar. And one of the ways that we express this behavior is using a quantity known as the apparent modulus.
The modulus of a material is expressed as the applied stress divided by the resulting strain. On a data sheet this usually is given for that limited region of the stress-strain curve where the two values are consistently proportional. As such, this value is the slope of the stress-strain curve in the linear region of the curve and is sometimes referred to as the tangent modulus or Young’s modulus.
For some materials, a secant modulus is provided. This represents a straight line drawn from the origin to a particular point on the stress-strain curve such as 1% or 2%. This is most often done for materials like polypropylene and polyethylene. Since these polymers do not exhibit linear behavior at such high strains, the drawn line cuts the corner off the actual stress-strain curve and therefore the secant modulus will always be lower than the tangent modulus.
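The distinction can be made concrete with a short sketch. The stress-strain points below are hypothetical, chosen only to show why a secant modulus drawn to 2% strain always comes out lower than the tangent modulus for a material whose curve flattens with strain.

```python
# Hypothetical stress-strain points for a PP-like material that
# softens with strain (illustrative values, not from a data sheet).
strain = [0.000, 0.005, 0.010, 0.020]   # dimensionless (0.005 = 0.5%)
stress = [0.0, 8.0, 14.0, 22.0]         # MPa

# Tangent (Young's) modulus: slope of the initial, linear region.
tangent_modulus = (stress[1] - stress[0]) / (strain[1] - strain[0])

# Secant modulus at 2% strain: a straight line from the origin to the
# 2% point, cutting the corner off the actual curve.
secant_modulus = stress[3] / strain[3]

print(f"tangent: {tangent_modulus:.0f} MPa")   # ~1600 MPa
print(f"secant:  {secant_modulus:.0f} MPa")    # ~1100 MPa
assert secant_modulus < tangent_modulus        # holds for any softening curve
```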
When a material is placed under a specific stress, the strain realized initially will be a function of the modulus of the material. If the stress is maintained, the strain will continually increase. Therefore, if we calculate the modulus an hour later it will appear to have declined. If we wait longer, say 1000 hr, the modulus will appear to have declined even further.
The actual stiffness of the material is not really decreasing. If we remove the stress from the material and then reapply that same stress, we find that the relationship we observed between stress and initial strain is essentially the same. Consequently, the apparent modulus is just what its name suggests: the apparent stiffness of the material, calculated from a constant stress and an ever-increasing strain. It is, therefore, a mathematical construct for describing the effect of a constant stress on the manner in which strain increases over time.
As hard as it is to find continuous plots of modulus versus temperature, it is even more difficult to find continuous plots of apparent modulus as a function of time. Occasionally, a data sheet will provide a line item or two that probably look unfamiliar. They will refer to a property known as creep modulus and there will be a reference to a particular time. The accompanying table provides an example for an unfilled acetal copolymer.
The tensile modulus can be thought of as representing the instantaneous or “zero-time” response of the material. The creep modulus values tell us something about how the material behaves if a constant stress is maintained. The increase in strain is proportional to the ratio of the “zero-time” modulus to the creep modulus. So after 1 hr under constant stress, these values tell us to expect an increase in strain of approximately 12.7%. After 1000 hr the creep modulus predicts that the initial strain will have approximately doubled (2758/1352 = 2.04).
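The arithmetic behind this prediction is simple enough to script. A minimal sketch, using the acetal copolymer moduli quoted above; the applied stress is an arbitrary assumption, since the growth factor depends only on the ratio of the two moduli.

```python
E0 = 2758.0       # MPa, short-term ("zero-time") tensile modulus
E_1000h = 1352.0  # MPa, creep modulus after 1000 hr (from the table)

stress = 10.0     # MPa, assumed constant applied stress (any value works)

initial_strain = stress / E0      # instantaneous response
strain_1000h = stress / E_1000h   # strain after 1000 hr under load
growth = strain_1000h / initial_strain

print(f"initial strain: {initial_strain:.3%}")
print(f"1000-hr strain: {strain_1000h:.3%}")
print(f"growth factor:  {growth:.2f}")   # ~2.04: the strain roughly doubles
```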
This brings to light an interesting and extremely important aspect of the behavior of viscoelastic materials. The same changes in modulus that can be brought about by increases in temperature can also be simulated by increasing the time scale of the measurement process without changing the temperature. A plot of elastic modulus versus temperature for this material shows that the modulus of the material will decline to 1352 MPa at a temperature of 89 C (192 F). In other words, applying a constant stress at room temperature for 1000 hr produces the same response that would be observed instantaneously under the same stress at 89 C. This is an example of what is often referred to in treatments of viscoelastic materials as the time-temperature equivalence. Whatever can happen rapidly at an elevated temperature can also happen more slowly at a lower temperature. In this particular case, an increase of 66 C equals a time scale of 1000 hr under load.
If we plot elastic modulus vs. temperature on a linear scale and then plot apparent modulus as a function of time on a logarithmic scale, we find that the shapes of the curves are very similar (see Figs. 1 and 2). So if we have access to information regarding the temperature-dependent behavior of a material, we can infer, at least in a semi-quantitative way, what will happen to that material over an extended time period under constant stress. Using advanced techniques and the same dynamic mechanical analyzer that we employ to determine the temperature-dependent behavior of a material, we can establish quantitative relationships between time and temperature for any material over a wide range of conditions.
More importantly, because creep and stress relaxation proceed much more rapidly at elevated temperatures, we can leverage the time and temperature link to construct short-term experiments that allow us to make accurate predictions of long-term behavior without waiting for real-time results that might take years to obtain.
The only shortcoming of the apparent modulus measurements we have presented here is that they do not define the actual stress and strain at which the measurements are made. Modulus refers only to the ratio of stress to strain. Because plastic materials produce stress-strain curves that change slope considerably as they proceed from zero to the yield or break point, the picture is incomplete unless we define the applied stress. This is the third dimension needed to define the long-term load-bearing performance of a material. But before we tackle this dimension we have a little more to say about time. And we will cover that next month.
The Strain Rate Effect
Last month we briefly discussed the influence of time as the second important factor in the mechanical response of polymers. The flip side of time is strain rate.
The rate of loading for a plastic material is a key component of how we perceive its performance. High strain rates—events that occur over a short period of time—tend to favor the elastic properties of materials. Elasticity is associated with load-bearing performance as embodied in properties such as strength and stiffness. However, low strain rates—events that occur over a longer time frame—favor the viscous or energy-damping aspects of material behavior. Viscous flow is associated with energy management, often referred to as impact resistance or toughness.
Here we can see another manifestation of the relationship between time and temperature. We know that as temperature declines, materials become stronger and stiffer but less impact resistant, while increasing temperatures produce the opposite response. This suggests that we will observe the same effect on material properties by varying strain rate as we do when we change temperatures. This, in fact, turns out to be the case.
Consider these two applications. The first involves impact tests performed on a PVC compound. This is not your typical notched Izod impact test. This is a falling-dart impact test with an instrumented striking device called a tup. The placement of a transducer inside the tup allows us to collect thousands of data points during the impact event and display them immediately after the conclusion of the test.
Figure 1 shows the graphical result of such an impact test performed on the PVC at an impact velocity of 5 ft/sec. The solid line represents the load on the test specimen while the dotted line represents the energy collected during the course of the test. Both of these are plotted as a function of time. The load curve rises to a maximum value as the tup makes contact with the test specimen and then places increasing pressure on the specimen, causing an increase in the load.
At some point the load reaches a maximum and begins to decline as the tup begins to penetrate the sample. As this penetration occurs, we continue to collect a significant amount of additional energy, and at the conclusion of the test we can see that the area under the load curve is almost evenly divided between the time frame before and after maximum load. While the whole event is over in 20 millisec, this result is typical of a ductile response and indicates that the material is quite tough. If we reported this result on a data sheet, all we would see is the final result of 32 Joules, or about 23 ft-lb, but we would not know how the material failed because the failure mode is never reported on a standard data sheet.
Figure 2 shows the response of the same material when the velocity of the impact event is increased to 15 ft/sec. The result is significantly different. First, the impact event only lasts 3 millisec, seven times shorter, even though we only increased the velocity by a factor of three. In addition, the total energy required to fail the specimen declines to 21 J, or about 15 ft-lb. But the cause of both of these changes is related to the difference in the failure mode. In this case the material fails abruptly as soon as maximum load is achieved. Almost no additional energy is collected. This is characteristic of a brittle failure mode.
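The ductile-versus-brittle character of these two results can be read numerically from the load trace by splitting the absorbed energy at peak load. The sketch below uses a short hypothetical trace and approximates energy as impact velocity times the integral of load over time, which assumes the dart decelerates very little during the event.

```python
v = 1.52  # m/s, roughly a 5 ft/sec impact velocity

# Hypothetical load-time trace (a real instrumented tup records
# thousands of points); peak load occurs at t = 0.008 s.
t = [0.000, 0.004, 0.008, 0.012, 0.016, 0.020]   # s
F = [0.0, 1600.0, 2200.0, 1400.0, 600.0, 0.0]    # N

def trapz(ts, fs):
    """Trapezoidal integral of force over time (N*s)."""
    return sum((fs[i] + fs[i + 1]) / 2 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

peak = F.index(max(F))
E_before = v * trapz(t[:peak + 1], F[:peak + 1])  # J, energy up to peak load
E_after = v * trapz(t[peak:], F[peak:])           # J, energy after peak load

# Ductile failure collects substantial energy after the peak;
# brittle failure collects almost none.
print(f"before peak: {E_before:.1f} J, after peak: {E_after:.1f} J")
```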
So while holding temperature constant, we have produced a change from ductile failure to brittle failure by simply changing the velocity of the impact event. If we inspected specimens from the two tests we would see that the ductile failure would produce only a small hole punched through the specimen. This piece of displaced material would still be attached to the specimen; stress whitening would be evident around the impact zone; and no cracks would be visible. The brittle failure would show jagged edges around the impact zone where material has been removed from the specimen and secondary cracks would radiate from the primary failure zone. While energy to failure is certainly important, the failure mode is also critical in many applications, since sharp edges on a broken part can constitute an additional hazard. This also explains why the automotive industry requires that instrumented impact tests be performed at two velocities when a new material is being qualified.
Interestingly, if we examine the data sheet for this PVC, we can see this transition from ductile to brittle behavior in notched Izod impact test results performed at two different temperatures. The data sheet for this grade of PVC reports a value of 8 ft-lb/in. at 73 F but only 1 ft-lb/in. at 0 F. So just as there is a ductile-to-brittle transition temperature for plastic materials, there is also a ductile-to-brittle transition strain rate.
We can observe this behavior using a lower strain-rate test such as a tensile test. Tensile tests apply stress over a significantly longer time scale than impact tests. Instead of a few millisec, tensile tests can last for 30 to 300 sec. But within this longer time envelope we can vary the strain rate by pulling the sample at different rates. The ASTM method allows for crosshead speeds of 0.2, 2.0, or 20 in./min (5, 50, or 500 mm/min). If we perform tensile tests at all three of these speeds on a given material we can see the effect on the measurements we obtain.
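These crosshead speeds translate into nominal strain rates through the specimen gauge length. The sketch below assumes a 50-mm gauge length, which is typical of a standard tensile bar; the actual strain rate in the gauge section also depends on specimen geometry.

```python
gauge_length_mm = 50.0   # assumed gauge length of the tensile bar

for speed in (5, 50, 500):           # mm/min crosshead speeds from the method
    rate = speed / gauge_length_mm   # nominal strain per minute
    print(f"{speed:>3} mm/min -> {rate:.0%} strain per minute")
```

At 50 mm/min on a 50-mm gauge length, for example, the nominal strain rate is 100% per minute.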
Figure 3 shows this result for a polypropylene. As the strain rate increases we measure a higher yield stress and tensile modulus. What is not seen in this plot is the conclusion of the test represented by the strain at which the sample breaks. If it were displayed we would see that the material elongates to over 300% at 5 mm/min, 120% at 50 mm/min, and 30% at 500 mm/min. Higher ultimate elongation values are associated with increased toughness, so this provides us with another illustration of the trade-off between load-bearing properties and impact resistance. It also underscores the importance of making sure that the strain rate of the tensile test is specified when reporting the results.
Figure 4 shows the results of tensile tests performed on another grade of polypropylene. In this case the strain rate was held constant and the temperature was varied. As expected, as the temperature increases the strength and modulus of the material decline, mimicking the effects of changing strain rate. The equivalence of strain rate and temperature can be demonstrated by performing this simple exercise. Cover up the labels on the two graphs and then try to determine whether the variation in the curves was caused by a change in temperature or a change in strain rate. It is not possible. And just as we cannot tell the difference, neither can the polymer.
Both of these exercises demonstrate the equivalence of strain rate and temperature on the mechanical properties of polymers. Higher strain rates and lower temperatures have the same effect, while higher temperatures and lower strain rates have the same effect. Now that we have established this additional implication of the time-temperature equivalence, we are ready to tackle the third axis of this picture, stress. And this we will do next month.
The Effects of Stress
In the last two columns we have discussed the effects of temperature and time on the long-term behavior of polymers, and we took a small side trip while covering the subject of time to treat the topic of strain rate.
The third axis that completes the picture is stress. Normally we think of stress as a short-term property since data sheets provide information only at selected points on the stress-strain curve. These points usually involve a catastrophic event such as the point where the material yields or completely fails. However, this only tells us what we should not do with a material; it provides no guidance on what we can expect in a successful application.
Assuming that the stress-strain curve is generated at the application temperature, the yield or failure point will provide a useful upper limit as long as the duration of the application is very short. However, as the time frame of the loading increases, we find that in polymer systems the allowable stress that can be applied must be reduced if the part is to function as expected for the life of the product.
Figure 1 provides a stress-strain curve generated at room temperature for an unfilled polycarbonate. It shows a yield stress of 9500 psi (65.5 MPa) and a yield strain of 8%. We already know that the shape of this curve will change as a function of temperature, but for now let’s assume that the application environment will always remain at room temperature and these values will serve as a good guideline for the capabilities of the material.
If we examine this curve closely we can see that the relationship of stress to strain is linear for only a small portion of the curve. The modulus of the material, which is the ratio of stress to strain, is calculated in this linear region. For most polymeric materials the upper limit for this linear region occurs at or below 1% strain. For many materials this proportional limit takes place even below 0.5% strain. Beyond this point ever smaller increments of stress are required to produce the next interval of strain.
For example, since the modulus of PC is typically about 340,000 psi (2340 MPa), a strain of 0.5% is associated with a stress of 1700 psi (11.7 MPa). If the behavior of the material were strictly linear, then it would require three times this stress, or 5100 psi (35.1 MPa) to attain a strain of 1.5%. However, the graph shows that the stress required to reach 1.5% strain is only 4270 psi (29.45 MPa), or about 84% of what would have been required if the material maintained a proportional stress-strain relationship. It is apparent that the slope of the curve has started to decrease at this 1.5% strain and further increases in stress produce a continued decrease in this slope as the curve approaches the yield point.
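The arithmetic in this example is easy to verify. A sketch using the same PC numbers:

```python
E = 340_000.0   # psi, tangent modulus of the PC

stress_linear = E * 0.005             # 1700 psi at 0.5% strain, still linear
stress_if_linear = 3 * stress_linear  # 5100 psi if linearity held to 1.5%
stress_actual = 4270.0                # psi, read from the curve at 1.5% strain

ratio = stress_actual / stress_if_linear
print(f"actual stress is {ratio:.0%} of the linear prediction")  # ~84%
```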
This has implications for long-term behavior under load. We already know that a polymer system under constant stress exhibits a continual increase in deformation known as creep strain. The rate at which creep strain increases is related to the slope of the stress-strain curve at the applied stress.
For example, if a stress of 2000 psi (13.8 MPa) is applied to a design element produced from this PC, it will exhibit an instantaneous strain of approximately 0.6%. Experiments have shown that if this stress is maintained for 1000 hr this strain will increase to 0.8%, about one-third greater than the initial strain.
However, if we elevate the stress by a factor of 2.5 to 5000 psi (34.5 MPa), the initial strain increases by a factor of not 2.5 but more than three, to 1.9%, because we are now well into the non-linear, or plastic deformation, portion of the curve. And if this stress is maintained for the same 1000-hr period, the strain grows to 3%, a much faster rate of increase than we saw at the lower stress level.
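These two experiments can be tied back to the apparent-modulus idea from earlier in this series: dividing the constant stress by the growing strain shows the apparent stiffness eroding faster at the higher stress. A sketch using the values above:

```python
# (label, stress in psi, strain as a fraction): the four conditions above
cases = [
    ("2000 psi, initial", 2000.0, 0.006),
    ("2000 psi, 1000 hr", 2000.0, 0.008),
    ("5000 psi, initial", 5000.0, 0.019),
    ("5000 psi, 1000 hr", 5000.0, 0.030),
]

for label, stress, strain in cases:
    print(f"{label}: apparent modulus = {stress / strain:,.0f} psi")

# The 2000-psi apparent modulus falls ~25% over 1000 hr;
# the 5000-psi value falls ~37%. Creep accelerates with stress.
```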
Higher stresses give rise to faster increases in creep strain. Since all materials have a limiting strain beyond which they cannot function, this faster increase in strain means that the product life will shorten with increasing stress. Said another way, the longer a product must perform in the field, the lower the initial stress must be on the material to ensure proper functioning of the product.
This principle can be observed in Fig. 2. This plot contains a great deal of information about the long-term performance of the PC. However, one of the data points of greatest interest to this discussion is the “Time to Fracture” line. This gives the relationship between stress and time to failure at room temperature. Note that at very short time frames of less than an hour the maximum allowable stress is nearly equivalent to the yield stress measured in Fig. 1.
However, as the time scale increases, the allowable stress declines. In approximately 50 hr the maximum allowable stress has declined by almost 16% to 8000 psi (55.2 MPa) and by the time we reach the 50,000-hr point (about 5.7 yr) the value has declined to 7000 psi (48.3 MPa).
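Because the "Time to Fracture" line is roughly straight on a log-time axis, the allowable stress at an intermediate lifetime can be estimated by log-linear interpolation between the quoted points. A sketch under that straight-line assumption:

```python
import math

# (hours, psi) points read from the "Time to Fracture" line
points = [(50.0, 8000.0), (50_000.0, 7000.0)]

def allowable_stress(hours):
    """Log-linear interpolation of allowable stress vs. time under load."""
    (t1, s1), (t2, s2) = points
    f = (math.log10(hours) - math.log10(t1)) / (math.log10(t2) - math.log10(t1))
    return s1 + f * (s2 - s1)

# e.g., a roughly 7-month (5000-hr) service life
print(f"{allowable_stress(5000):.0f} psi")   # ~7333 psi
```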
This is an important consideration that shows why knowing the expected lifetime of a product is important to gauging the mechanical capabilities of the materials selected for the application. One of the key causes of failure involves extending the warranty of a product without taking the time to reassess the long-term capability of the materials of construction. While the behavior of metal components may be essentially constant when time, stress, and temperature are considered, this is not the case for plastic compounds.
There is another line on the plot in Fig. 2 that is of interest. It is the line marked “Crazing.” This represents the relationship between the applied stress and the time required to initiate crazing. Crazing represents the initial stages of failure in a material, particularly in amorphous polymers like PC. In the presence of certain chemicals that act as stress-cracking agents, crazing leads quickly to outright failure.
Notice that the stresses required to produce crazing are lower than those required to produce fracture, and that the crazing line is steeper than the “Time to Fracture” line. The stress required to produce failure in 50,000 hr produces crazing in less than 1 hr. At 50,000 hr, crazing will occur at a stress level of 4000 psi (27.6 MPa), less than 60% of the stress required to produce fracture when no chemicals are present.
Stress-cracking agents are everywhere. They can be in cleaning agents, lubricants and greases, plasticizers in mating rubber components, and protective coatings on metal parts that may be assembled into molded bosses. Stress cracking is the most common cause of plastic product failure and is particularly difficult to manage since the time frame of this type of failure is typically weeks or months. Therefore, the problem does not show up until a significant amount of product has been shipped.
This concludes our discussion on the key factors that influence the long-term performance of a plastic material. To recap, they are temperature, time, and stress. And they are interrelated. Anytime one of these three parameters changes, the other two must also be adjusted in order to ensure proper product function. An increase in temperature will require a corresponding decrease in some combination of product lifetime and applied stress.
Similarly, extending the lifetime expectations of a product will require a reduction in applied stress, temperature, or some combination of the two. This interaction, along with the shortage of good data that captures this behavior, is part of the challenge of selecting the appropriate polymeric material for an application.
The Importance of Melt & Mold Temperature
As I work with processors I find a general lack of appreciation for how significantly process conditions can influence the final properties of the molded part. The prevailing opinion appears to be that the selected material exhibits the properties published on the data sheet, independent of how the raw material is converted into the molded article.
Under this way of thinking, the processor’s job is simply to heat the material to the molten state, pass it through the appropriate piece of processing equipment, and re-solidify the polymer into the shape described by the print. As long as the part fulfills aesthetic expectations and the critical dimensions meet the print, the processor has done his job. Properties are the province of the material supplier.
Unfortunately, it is not that simple. In injection molding, for example, molding conditions have a significant influence on the final properties of the material regardless of the part design. Two of the process conditions that have a substantial influence on the behavior of the polymer are the melt temperature and mold temperature.
First, it is important to distinguish between these process conditions and the setpoints that we use to exercise control over them. Melt temperature is the actual temperature of the polymer as it exits the nozzle and enters the mold. The barrel setpoints represent the tools we use to arrive at the desired melt temperature, but they are not the same thing.
The mechanical work imparted to the material, the residence time, and the condition of the screw and barrel all play a significant role in determining the actual melt temperature. Similarly, the actual surface temperatures of the mold cores and cavities are related to, but not necessarily the same as, the temperature of the fluid passing through the channels in the mold.
Assuming that this is understood, we can examine the effects of these two parameters on the properties of the polymer. It is generally understood that melt temperature has an effect on viscosity. But melt temperature also has an influence on the final molecular weight of the polymer in the molded part.
For example, in an experiment involving parts molded in polypropylene, the polymer in parts molded at a melt temperature of 400 F (204 C) had a measurably higher average molecular weight than in parts molded at 480 F (249 C). This translated into better impact resistance as well as lower energy consumption in molding and a shorter cycle time.
Mold temperature has perhaps a less obvious but often more profound effect on final properties. In amorphous polymers such as ABS and polycarbonate, higher mold temperatures produce lower levels of molded-in stress and consequently better impact resistance, stress-crack resistance, and fatigue performance.
In semi-crystalline materials the mold temperature is an important factor in determining the degree of crystallinity in the polymer. The degree of crystallinity governs many performance parameters, including creep resistance, fatigue resistance, wear resistance, and dimensional stability at elevated temperatures. Crystals can only form at temperatures below the melting point but above the glass-transition temperature (Tg) of the polymer.
TIME TO CRYSTALLIZE
When molding semi-crystalline materials, the ideal mold temperature will be above the Tg in order to give the polymer adequate time to crystallize. Figure 1 compares the behavior of a high-temperature nylon (PPA) when molded at the proper mold temperature and at lower mold temperatures. The plot shows the modulus of the material as a function of temperature. As the mold temperature increases, the stiffness of the material at room temperature also increases.
But the more significant difference between the samples molded at the proper temperature and at the lower mold temperatures can be seen at elevated test temperatures. As the material approaches the glass-transition region at 130 to 140 C, the modulus begins to decline in material molded at lower temperatures and falls farther and faster at even lower mold temperatures. This behavior is in the hands of the processor.
Figure 2 shows the interaction of mold and melt temperatures in determining the impact performance of ABS, an amorphous polymer typically selected for its toughness. The contour plot captures falling-dart impact resistance as the mold temperature is varied from 29 to 85 C (85 to 185 F) and the melt temperature is adjusted from 218 to 271 C (425 to 515 F). It may be surprising to some that the impact resistance ranges from less than 2 N-m (1.4 ft-lb) to almost 50 N-m (36.5 ft-lb) simply as the result of these process changes.
The mold temperature is the dominant factor; however, the best results are obtained when higher mold temperatures are combined with lower melt temperatures. The ideal range of processing conditions, as well as the conditions that should be avoided, is very apparent in this plot.
This behavior is characteristic of all polymers. In general, optimal performance is produced by combining a lower melt temperature with a higher mold temperature. Unfortunately, this is the opposite of what we usually find on the production floor. Typically melt temperatures are running higher than is ideal because melt temperature is often considered to be the only available tool for reducing the melt viscosity. Higher melt temperatures increase energy consumption, degrade the polymer, and extend the cooling time needed to create a dimensionally stable part.
To compensate for this extended cycle time, processors will rely on reduced mold temperatures to gain back lost productivity. Yet a reduced melt temperature combined with a higher mold temperature often produces a part with the same or shorter cycle time and a better set of mechanical properties. When processors understand their role in establishing the properties of the polymer, they approach process development in a very different manner that ultimately reduces cost and improves quality.
Low-Cost Processing, Long-Term Problems
Last month I visited molding suppliers in an Asian country (not China). One of the visits unfolded in a manner that is all too familiar and raises questions about how the qualification process is performed when selecting suppliers in low-cost locations.
The molder was running a relatively large part in a polycarbonate-based alloy. An inspection of the drying equipment showed that it was a hot-air system as opposed to a dehumidifying desiccant dryer. The parts exhibited splay at various locations on their surfaces, a pattern consistent with the presence of excess moisture in the material. However, the parts were being painted, and in this case the paint masked the cosmetic effects of the splay. But with this type of resin, the cosmetic effects are the least of our worries. When this material is molded with excess moisture, the polymer undergoes degradation. The resulting decline in molecular weight causes the material to lose impact strength, a quality that is a primary reason for using this material in the first place.
This scenario plays out repeatedly in low-cost manufacturing locations. Fundamental aspects of material preparation such as drying are carried out using inadequate equipment. Not all materials suffer damage at the molecular level if they are dried improperly. Many resins, such as ABS, SAN, and acrylic, are dried prior to processing strictly for cosmetic purposes. However, materials such as polycarbonate, polyesters, nylons, and polyurethanes undergo hydrolysis when they are exposed to elevated levels of moisture at high temperatures. This chemical reaction can take hundreds of hours if the exposure occurs at end-use application conditions.
This is shown in the accompanying table as a progressive increase in melt flow rate (MFR) with time. Increases in MFR that exceed 40% are considered to be associated with excessive changes in average molecular weight. The data in the table show that this benchmark is reached in approximately 675 hr, or four weeks. In the barrel of an injection molding machine at a temperature of 300 to 320 C, this process takes just a few minutes.
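To make the 40% benchmark concrete, here is a minimal sketch, in Python, of how one might interpolate the exposure time at which the MFR increase crosses that threshold. The time/MFR pairs below are hypothetical placeholders for illustration, not the values from the table referenced above.

```python
def pct_increase(initial, current):
    """Percent change in melt flow rate relative to the starting value."""
    return (current - initial) / initial * 100.0

def hours_to_benchmark(samples, threshold=40.0):
    """Linearly interpolate the exposure time (hr) at which the MFR
    increase first reaches `threshold` percent. `samples` is a list of
    (hours, mfr) pairs in chronological order."""
    _, mfr0 = samples[0]
    prev_t, prev_pct = samples[0][0], 0.0
    for t, mfr in samples[1:]:
        pct = pct_increase(mfr0, mfr)
        if pct >= threshold:
            # Interpolate between the two bracketing measurements.
            frac = (threshold - prev_pct) / (pct - prev_pct)
            return prev_t + frac * (t - prev_t)
        prev_t, prev_pct = t, pct
    return None  # benchmark never reached within the test period

# Hypothetical moisture-exposure series: (hours, MFR in g/10 min)
data = [(0, 10.0), (250, 11.5), (500, 13.0), (750, 14.5)]
print(hours_to_benchmark(data))  # roughly 667 hr under these made-up numbers
```

The same interpolation applied to the actual tabulated data is what yields the approximately 675-hr figure cited in the text.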
To make matters worse, hydrolysis is not always accompanied by visual defects. All materials have a range of moisture contents where molecular damage can occur without producing any cosmetic evidence for the presence of moisture. PET polyester, for example, can become extremely brittle without showing any cosmetic signs of wet material. This is the reason that manufacturers of hydrolytically sensitive polymers instruct processors to use dehumidifying desiccant dryers capable of achieving a dewpoint of -40 F (-40 C). In order for the heated air passing through the hopper to remove moisture from the pellets, it must be extremely dry. If it is not, there is no tendency for the moisture in the raw material to migrate to the heated air. In fact, hot, humid air can actually add moisture to a resin.
Those are exactly the conditions found in many processing plants in South Asia. With hot-air dryers, the dewpoint of the air is essentially the same as the dewpoint of the ambient air in the plant. This will vary over the course of the year. In the cooler months the dewpoint may fall as low as 10 to 15 F (-12 to -9 C).
While far from ideal, this is frequently adequate in practical terms, particularly since the initial moisture content of the pellets in these cooler, drier months is lower and therefore less moisture must be removed. However, the summer months in that part of the world bring oppressive heat and humidity, and most processing plants are not climate-controlled. The climate inside the plant is the same as the climate outdoors.
During these times of year, dewpoints can be as high as 95 F (35 C), and hot-air dryers are not suitable for drying hydrolytically sensitive engineering polymers under such conditions. The parts may look satisfactory, but they are a ticking time bomb waiting for application conditions such as an abrupt impact event, chemical exposure, or a sustained load to produce premature failure.
Because the problems associated with improper drying are seasonal, and because the logistics of product flowing from thousands of miles away can make traceability very challenging, fluctuating product quality can be very difficult to relate to the root cause in the plant environment. The incidence of failed parts in the field rises and falls in a pattern that is very difficult to decipher, particularly when the OEM selecting the molder is not aware of the inadequacies within the overseas processing plant.
Who is signing off on these processors? How is it that companies making products where proper functioning is critical do not possess the proper equipment for sustained levels of successful processing? Who conducts reviews and audits of prospective suppliers and according to what criteria do they award the business?
Polymer degradation remains one of the primary causes for plastic part failure. It is not a coincidence that a disproportionately large number of these failures occur in the types of materials that we have mentioned. It is ironic and more than a little bit frustrating that in a world where the documentation process associated with quality assurance has grown by orders of magnitude over the last 20 years, our procurement practices have left OEMs barely treading water in the process of quality improvement, simply because no one knows what to look for in selecting a qualified supplier.
Even more distressing is the behavior that often takes place once the realization has dawned and it is time to take corrective action. The first step is to educate the part supplier as to the need for the proper equipment. When we informed the molder of the polycarbonate-alloy panels that he was not drying the material properly, he was initially stunned. As far as he was concerned, drying meant passing heated air of a particular temperature over the pellets in the hopper for a prescribed period of time. It took some time to explain the need for a more sophisticated approach. Dehumidifying desiccant dryers are more expensive and require more attention to detail in their operation and maintenance.
Once this reality has dawned on the molder, the price of the part typically goes up. In other words, it costs a little more to make good parts than it does to make junk, at least in the short term. But short term tends to be the focus these days, especially when we are looking for the best “deal.” Warranty costs, customer complaints, and even the occasional litigation are problems for other departments.
Some time ago, business author Tom Peters observed that you cannot cut your way to prosperity. Cost reduction is an important function in any organization, but it should be recognized that it is a self-limiting activity. At some point the product simply cannot be made for any less without some quantum leap provided by innovation. Continuing to move the responsibility for manufacturing to increasingly less capable venues for the sake of a lower piece price is just plain bad economics.
Working with Color Concentrates
One of the big advantages that plastics have over metals is the option of providing molded-in color. While many large parts are still painted in order to achieve the desired aesthetics, even in these cases the eventual goal is the elimination of the paint line in favor of color that is incorporated directly into the base polymer.
Because the consumer sees so many colored plastic articles, it is natural to assume that the process of achieving molded-in color is a simple one. Those of us in the industry know differently. Selecting a colorant system must take into account chemical compatibility with the base resin, the ability to achieve a suitable match to a color standard, survival through the stresses and temperatures of the fabrication process, and durability that matches the life of the product, all while meeting cost constraints. It is not as easy as it looks.
Once the appropriate colorant system has been developed for an application, a choice must be made between molding the part from a fully compounded material where the color has been incorporated uniformly into the raw material, or using a color concentrate that is added to the base resin at the machine just before molding the part. The concentrate can be in the form of a pellet, powder, or liquid. In all cases the principle is the same: Add a relatively small amount of concentrated color to a much larger amount of natural-colored polymer and produce a finished part of the correct color by using the temperatures and mechanical work of the process to distribute the color uniformly.
There are potentially significant economic incentives to the color concentrate approach. First, most resin suppliers today prefer not to tie up their capacity producing small lots of custom-colored product. This was not always the case. In the mid-1980s I visited what was then a Borg-Warner plant making various grades of ABS. Not only did Borg-Warner offer a wide variety of ABS resins with different properties, they took great pride in providing a wide range of fully compounded colors.
During the tour we saw many lines running everything from white to burgundy, and we were taken to the color lab where more than 40,000 different color formulations had been created over the years. Over 6000 of these, we were told, were commercially active. Color matching and compounding were a core competency, to use the buzzword vernacular of today, and it represented a skill set that was marketed as a competitive advantage.
Today, providing small lots of a wide variety of colors is not something that the “majors” want to spend their resources on unless the volumes are significant. Even in these cases there is no guarantee that the color compounding is being done at a facility owned and operated by the company whose name is on the bag or gaylord of material. The coloring operation is often outsourced to a compounding firm that makes its living producing these smaller lots.
The cost for converting natural-colored material to a specialty color can be high: as much as three to five times the cost of the base resin if volumes are low. While the cost per pound for color concentrates can be very high, these materials are typically added at letdown ratios of only 2 to 5 lb per 100 lb of natural resin. So even a very expensive color concentrate typically allows the processor to achieve an aggregate cost for the colored product that is substantially less than that of the fully compounded material.
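The arithmetic behind that claim is simple enough to sketch. The prices below are hypothetical placeholders chosen for illustration, not actual market figures.

```python
def colored_cost_per_lb(natural_price, concentrate_price, letdown_per_100):
    """Aggregate cost per lb of colored material when `letdown_per_100` lb
    of concentrate is added to every 100 lb of natural resin."""
    total_lb = 100.0 + letdown_per_100
    total_cost = 100.0 * natural_price + letdown_per_100 * concentrate_price
    return total_cost / total_lb

natural = 1.50       # $/lb, hypothetical natural resin price
concentrate = 6.00   # $/lb, an expensive concentrate (4x the natural resin)
precolored = 3.00    # $/lb, hypothetical fully compounded price

blend = colored_cost_per_lb(natural, concentrate, letdown_per_100=3.0)
print(f"{blend:.3f}")  # well under the hypothetical precolored price
```

Even with the concentrate priced at four times the base resin, the 3% letdown ratio keeps the blended cost close to that of the natural material.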
Then there is the matter of inventory. Managing multiple colors for a product line can be challenging, especially since consumption of different colors follows changes in the public’s tastes. One of the first jobs that I had in the industry involved going through the raw material section of the warehouse and disposing of the obsolete precolored materials that had been purchased at one time for production needs and then had languished in storage as demand sank. Thousands of pounds of avocado green, harvest gold, burnt orange, and something called desert sand were consigned to the dumpster at a cost of tens of thousands of dollars. If these colors had been achieved using color concentrates, the economic damage would have been limited to much smaller containers of concentrate and the natural resin could have remained natural for molding the newer colors that were in demand.
Lead times are also part of this picture. Once the matching process has been completed, most color concentrate orders can be delivered in two to three weeks. The lead time for fully compounded materials is typically much longer. I can recall one application for a relatively large part that we ran in an impact-modified, glass-reinforced nylon 6 in red, green, and yellow. Initially, the material was supplied in fully compounded form. But the supplier was constantly late with shipments. The red and the yellow would be delivered but the green would not show up for another two weeks. So we would put the mold in the press and produce part of our customer’s order and then have to hang the mold again two or three weeks later to finish the order.
All this costs money. Finally we ordered only natural resin from our material supplier and obtained color concentrates from our colorant supplier. And let’s not forget about that minimum order quantity. Today even the smaller producers want to make at least 1000 to 1500 lb of something once they have the extruder running. There are many applications where even this relatively small amount of material lasts for years.
Finally, there is the matter of thermal history. Many people have a phobia about reground materials. There are a lot of part drawings out there with notes that restrict or forbid the use of regrind in the manufacturing process because of a perception that once the material has been through the molding process it is somehow damaged goods and is only fit to be sold as scrap. The color compounding process subjects the original material to an additional heat history before it ever reaches the processor. Most of this compounding is accomplished without a significant loss in performance.
However, property changes are a potential result any time a material is melted and subjected to mechanical work such as occurs during the compounding process. Using natural plus color concentrate eliminates this intermediate step and has the ability to deliver a potentially superior final product.
So why is there still so much fully compounded material out there? There are practical challenges to bringing the coloring process into the molding plant. These include ratio control, proper color incorporation, consistency in the color of the base resin, and purchasing color concentrates that do not contain hidden problems caused by incorrect material and molecular-weight selection. We will discuss each of these in detail in upcoming columns.
Biopolymers: Time to Take a Deep Breath
The author argues that large-scale use of food crops—or land that could be used to cultivate them—to produce biopolymers would raise concerns about global food supplies, water use, and the environmental effects of agriculture. A more sustainable solution, he argues, would be use of inedible plant matter, preferably grown on land not suitable for food production. (Photo: RTP Co.)
The enthusiasm for polymers made from biologically derived materials is understandable. After years of plastics being assailed by environmental groups, the media, and the general public for being “part of the problem” of consumption of fossil fuels, it undoubtedly feels good to be able to point to the growing sector of biopolymers as evidence that the plastics industry is acting in a conscientious and proactive manner. It is consistent with the ethic behind already worn-out terms such as green and sustainable.
This newfound enthusiasm for a sector of the industry that seems to have finally reached critical mass after 35 years of painfully slow and uneven development has given rise to some bold and even outrageous predictions regarding the degree to which biopolymers will replace oil-derived plastics as this century unfolds. One online survey that was posted recently asked a question to the effect of, “How soon will biopolymers displace oil-based polymers as the dominant class of materials used in the plastics industry?” The multiple choice menu contained options like 10 years and 25 years. I checked “Never.”
Never is a long time, but my answer was motivated less by a belief that it can’t happen than by a conviction that it should not happen, at least in its current incarnation.
Deriving propanediol from grains or ethylene from sugar cane may sound like a good idea. However, this is an extension of the strategy that introduced alcohol derived from corn to the motor-fuel industry. This approach has contributed to a substantial increase in the price of food as the supply of staples has been reduced in favor of growing crops for fuel. This has gone largely unnoticed in Europe and the U.S., where the cost of food does not constitute the percentage of the family budget that it does in developing nations.
But it has had a large impact on the ability of large segments of the world’s population to feed itself.
There are some inescapable facts to be confronted about the state of food production in the world today and the growing problem of feeding a rapidly expanding population. And it begins with the fact that approximately 1 billion of the 7 billion inhabitants on the planet already suffer from chronic hunger. This is a conservative estimate; some place the number closer to 2 billion.
Add to this the projection that by 2050 we will add another 2 billion to 3 billion people, and that those already living or yet to be born into countries with rapidly developing economies will consume more food, and the best estimates indicate that world food production will need to double in the next 40 years. It is not entirely clear that this is an attainable objective given the fractured nature of the world political system. What is clear is that diverting large amounts of food crops to the production of polymers will only add to the burden of meeting it.
As things stand, only 60% of the food grown today ends up going directly to human consumption. An additional 35% is used for animal feed. It takes 30 lb of grain to make 1 lb of edible, boneless beef. The other 5% already goes to biofuels and other industrial products, including our burgeoning biopolymer sector.
The argument is made that the products grown for animals and industry do not detract from the human food sources since these products are not fit for human consumption.
One night while coping with a bout of severe insomnia, I watched proceedings on C-SPAN where testimony to this effect was being given to Congress by representatives of Big Alcohol. But this defense ignores the reality that there is a finite amount of land on which crops can be grown, and if the economic incentives dictate that crops will sell for higher prices when made into biofuels and biopolymers than when made into food, then land will be set aside to do just that.
Market theory would respond to this by using more land to produce all of the needed products. And this brings us to the next issue: the impact of agriculture on our environment, an impact exceeded only by that of energy production. Agriculture is the largest single source of greenhouse gases, accounting for 35% of all the carbon dioxide, methane, and nitrous oxide that we release. This is more than the worldwide emissions from transportation or electricity generation. In addition, agriculture has cleared or significantly transformed large percentages of prehistoric grasslands, savannahs, and temperate and tropical forests.
Finally, fresh water has been part of the collateral damage associated with agriculture. Irrigation has drawn so much volume away from natural waterways that many large rivers, such as the Colorado, have diminished flows or have dried up altogether, and many places have rapidly declining water tables, including regions in the U.S. And where water is not disappearing it is being contaminated. Fertilizers, herbicides, and pesticides are ubiquitous. While fertilizers have been an important ingredient in improved agricultural yields, nearly half of the applied fertilizer runs off and ends up in coastal waters where it impacts fishing grounds, another key element in the cycle of food production. And fish do not require large allotments of grain to be converted into food.
Kermit the Frog had it right: It’s not easy being green. Before we rush to replace petroleum with “renewable resources,” we need to pause, take a breath, and truly understand the impact of siphoning off key resources designed to feed people to make our polymers.
Biopolymers are an inherently good idea. But if this is to be done in a sustainable way, to use the vernacular, we need to make them from the parts of the plant that we do not eat or from crops that can grow in places and under conditions that would not sustain food production and therefore do not compete for those resources. Then we can get down to answering the technological questions regarding where biopolymers fit when it comes to requirements for efficient processing and the properties they offer relative to the incumbents.
Density & Molecular Weight in Polyethylene
Schematics show what we imagine molecules of HDPE, LLDPE, and LDPE to look like. A very linear PE chain can closely approach other PE chains of similar structure, creating a very densely packed network. This results in a high-density material that is relatively strong and stiff. But this network tells us nothing about the length of the individual molecules, which affects properties like toughness and creep resistance.
Polyethylene has been around as a commercial material for a little over 70 years. It was one of the early developments in synthetic thermoplastics. Given its long history, it could be assumed that we understand everything there is to know about this polymer. However, I am constantly surprised by the confusion that exists regarding two key parameters that define the properties and performance of PEs: density and molecular weight. Molecular weight is typically captured by a number known as melt index or melt flow rate. The higher the melt index of the material, the lower the average molecular weight of the polymer.
If you ask a processor what type of PE they are running, a typical response will be, “I’m running a 7-melt, 953 material.” Translation: a material with a nominal melt index of 7 g/10 min and a density of 0.953 g/cm3. These two numbers convey a lot of information regarding the balance of properties that can be obtained from parts molded in this material. For example, this particular selection would be typical for a 5-gal pail. These parts require a balance of creep resistance and toughness so that when stacked they do not collapse under their own weight and when dropped they do not crack.
While molders may know through experience what works for a given application, they often do not know why. They tend to think of melt index as a gauge of processability and they often believe that molecular weight and density are linked, when in fact they can be varied independently of one another. Part of the confusion is understandable. The ability to specify density as a property of a material may be unique to polyethylene. For most polymers, unless they are filled, the density is essentially an inherent property of the material. The density of blends may vary as a function of the ratio of the two materials being blended; but here again, once the recipe is fixed this property is a fundamental aspect of the product.
Not so with PE. Because of the ability of the material to form either linear or branched structures, the density of polyethylene can vary from a low of 0.857 g/cm3 to a maximum of 0.975 g/cm3. If you held two parts of the same design made from materials representing these two extremes you could readily tell the difference. The higher-density part would be much stronger and stiffer and would also be more likely to fail an impact test, particularly if it were conducted at a low temperature.
If you weighed the two parts you would also notice that the part made from the higher-density product was heavier. And this is where the confusion comes in. Many people assume that the weight of the part is related to the weight of the individual polymer molecules that make up the part. However, the two are not related at all. If you look at enough PE data sheets you will see that there are low-density materials with both high and low values for melt index and there are high-density materials with this same range of melt index values. If the two properties were linked this would not be possible.
Figure 1 helps to illustrate the difference. It shows schematics of what we imagine molecules of HDPE, LLDPE, and LDPE to look like. A very linear PE chain can closely approach other PE chains of similar structure, thus creating a very densely packed network. This results in a high-density material that is relatively strong and stiff.
But this network tells us nothing about the length of the individual molecules. They may be quite long or relatively short. The same is true of the branched structures. The branching keeps the individual chains farther apart, reducing the density and the load-bearing properties of the material. But here again, the individual chains may be quite long or quite short. If the chains are long the melt index will be low, the material will exhibit a higher melt viscosity and greater melt strength, and the resulting molded part will have better impact performance compared with a material of comparable density that is made up of shorter chains.
The fact that these two properties are independent variables is both good and bad news. The good news is that it offers the possibility for a very wide array of processing and performance options and is one of the reasons that polyethylene is such a versatile material. The bad news is that it makes understanding the relationship between these properties and part performance more difficult.
Sometimes these two properties work together to improve a particular aspect of performance, while at other times they work in opposition to one another. For example, creep resistance is improved by increasing either density or molecular weight. However, improving toughness is accomplished by increasing molecular weight or decreasing density. If the melt index of the pail material we discussed above were raised to make processing easier, the impact performance might decline to a point where the parts would not pass standard tests performed by the container industry. If the density of the material were then reduced to restore this lost toughness, a change of even a few thousandths of a gram per cubic centimeter might cause the creep resistance to become unacceptable. The right combination for a given application is critical.
This was illustrated more than a decade ago when manufacturers of small gas tanks for products such as lawnmowers and snowblowers changed the density of the polyethylene they were using from 0.946 to 0.952 g/cm3 when the original material became unavailable. The melt index remained the same at 4 g/10 min. Initially there were no apparent problems with the new material.
However, over time certain designs showed an increased tendency to crack while in use. This prompted a large recall and the experience has had lasting effects on the regulations for manufacturing these tanks. It is possible that the loss in toughness brought about by the increase in density could have been offset by a reduction in melt index. But most processors involved in molding the parts deemed the higher-molecular-weight materials to be too difficult to process. It was not until another PE supplier stepped in with a material having a molecular weight and density comparable to the original product that the problem was solved.
So as commonplace as PE may be, this “commodity” material is very complex. Selecting the correct grade from the thousands of commercial options requires a thorough understanding of the interaction between these two properties. The property envelope for this polymer extends from materials that border on elastomers to those that are relatively strong and rigid.
When It Comes to Nylon, Don’t Do the Math
Nylon 6/6, like all other nylon polymers with a nomenclature that involves two numbers, is produced by reacting two different chemicals to form a precursor. This precursor is then polymerized to form the final material. The numbers after the word nylon signify the number of carbon atoms that are present in each of the original reactants.
This is the chemical reaction used to produce nylon 6. The monomer is known formally as aminocaproic acid, but in its ring form it is called caprolactam. When the ring is opened the chemical can react with itself to produce the nylon 6 polymer. Note that this repeating unit contains six carbons, five that are attached only to hydrogen while the sixth one is part of the amide group.
Most people do not have fond memories of their formal education in chemistry. Other than the occasional story about crafting something in the lab that frothed, foamed, or exploded, bringing up chemistry class elicits only negative memories of endless battles with structures and reaction mechanisms. This is particularly true of organic chemistry, which is the basis for polymer chemistry. So it is perhaps not surprising that the working knowledge in the plastics industry of what is really going on with our materials is somewhat hazy at best.
This was demonstrated to me once again in a conversation I had just this week with one of my lab colleagues. He had analyzed a nylon 6/6 material for a client and had found the presence of nylon 6. This is not that unusual. Nylon 6 can be grafted onto the backbone of the nylon 6/6 molecule to produce a copolymer that processes at a lower temperature and more easily achieves a resin-rich finish in parts molded of highly reinforced grades.
More commonly, suppliers simply blend the two polymer types to create a balance of properties that falls somewhere between those of the two base resins. In the case of the copolymer, it is not easy to tell that the nylon 6 is present. The material presents a single melting point that falls just about halfway between those of nylon 6 and nylon 6/6. In the blends, it is a little easier to tell that something has changed because the melting points for the two polymers will appear as separate events when the material is tested by differential scanning calorimetry (DSC). But when my colleague’s client was told of the presence of nylon 6 in the compound they had believed was pure nylon 6/6, someone informed them that this was because the material was degraded and that nylon 6 forms when nylon 6/6 degrades.
This is so far from the real story that it is the organic chemistry equivalent of telling someone that the Earth is flat. This belief likely comes from a misunderstanding regarding the meaning of the numbers that come after the word nylon.
To the uninitiated, it may seem reasonable that a material called nylon 6/6 (or 66), when it degrades, loses one of the 6s to become nylon 6. Not true. Nylon 6 and nylon 6/6, despite their physical and chemical similarities, are different polymers made in very different ways.
Nylon 6/6, and all other nylon polymers with a nomenclature that involves two numbers, such as nylon 6/12 or nylon 4/6, is produced by reacting two different chemicals to form a precursor. This material is then polymerized to form the final material. The numbers after the word nylon signify the number of carbon atoms that are present in each of the original reactants. Nylon 6/6 begins as a diamine known as 1,6-hexane diamine (or hexamethylenediamine) and a diacid known as hexanedioic acid (or adipic acid).
The diamine contains six carbons and the diacid contains six carbons, as suggested by the “hexa-” root in both names, so when the polymer is formed there are two distinct segments in each repeating unit of the polymer chain that are six carbons in length.
Figure 1 shows the reaction and if you have the patience, you can count out the six carbons in each reactant as well as in the repeating unit of the chain segment.
One of these carbons is part of the key chemical group that gives nylon its distinct properties. This group is called an amide group (in most of the world nylons are called polyamides) and it is responsible for the relatively high melting point, strength, and affinity for moisture that nylons exhibit. This group is highlighted in Fig. 1 and shown in more detail in Fig. 2. In the two-number nomenclature, the first number refers to the diamine and the second to the diacid. So in nylon 4/6 the same diacid is used that is employed in making nylon 6/6, but the diamine contains only four carbons. This decreases the spacing between the amide groups, resulting in a material that is stronger, stiffer, and has a higher melting point than nylon 6/6.
In the case of nylon 6/12, the diamine remains the same but the diacid that is used contains 12 carbons. This increases the spacing between the amide groups, resulting in decreased mechanical properties and melting point. But because the amide group is responsible for the affinity that these polymers have for moisture, nylon 6/12 is more resistant to moisture gain and the associated changes in dimensions and mechanical properties that are experienced with nylon 4/6 and nylon 6/6.
Nylon 6 and other nylon polymers that contain only a single number in the name are made in a completely different manner. In these cases a single chemical is used that has a ring structure and contains both of the chemical groups needed to produce the amide functionality. Figure 3 shows the chemical reaction used to produce nylon 6. The monomer is known formally as aminocaproic acid, but in its ring form it is called caprolactam. When the ring is opened the chemical can react with itself to produce the nylon 6 polymer. Note that this repeating unit also contains six carbons, five that are attached only to hydrogen while the sixth one is part of the amide group.
Even though the average spacing between amide groups is the same in nylon 6 as it is in nylon 6/6, the nylon 6/6 has a melting point approximately 40°C higher and is slightly stronger and stiffer than nylon 6 because of details in the molecular spacing that are beyond the scope of this article. But the bottom line is that nylon 6 and nylon 6/6 are completely different materials made by completely different routes. While they can be combined to produce a wide range of properties, they cannot be converted into one another. When nylon 6/6 degrades it produces by-products that ultimately resemble the chemicals from which it was produced, not nylon 6.
Right now everyone is talking about nylon 12. The shortage in the world market was caused by an explosion at a plant that makes a chemical known as cyclododecatriene (CDT). Through a series of chemical reactions, CDT, a 12-carbon ring structure, becomes a chemical known as laurolactam, a 12-carbon ring with the same amide functionality built into it that is contained in caprolactam.
But when nylon 12 is made, the amide groups are twice as far apart as they are in nylon 6. This results in a material that has a melting point 50-60°C lower than nylon 6. But more importantly this material is much more flexible so that it can be made into hose, tubing, and even balloon catheters. By the same mistaken logic as was mentioned above, where nylon 6 supposedly forms when nylon 6/6 degrades, we should be able to solve the nylon 12 shortage by simply degrading some nylon 6/12. Better yet, by simple math it should be obvious that nylon 6/6 can be converted into nylon 12. Easy, right?
Obviously not. The automotive industry summit meeting in Detroit in mid-April to discuss the nylon 12 shortage was not convened because this is going to be simple. And while automotive is getting most of the attention in the press, it will be medical device suppliers that get to stand first in line when the limited supply is doled out. If other nylons do successfully stand in for nylon 12 (nylon 11 makes the most sense, but it is also in short supply), it will be due to a combination of skillful manipulations of chemistry coupled with tough decisions about design, processing, and even the consequences of using additives to optimize performance. Chemistry is seldom as simple as it looks. Polymer chemistry takes the complexity up a notch. Nylon chemistry is about much more than doing the math.
Use MFR Cautiously with Filled Materials
Recently I received a request to perform MFR testing on a glass-fiber reinforced PBT polyester to determine whether the material had been degraded during molding. Some of the parts were exhibiting brittle behavior, and a reduction in molecular weight—often referred to as degradation—is always a possible cause for reduced ductility. In many polymers, a relatively straightforward relationship exists between average molecular weight and melt flow rate (MFR). Therefore, comparing the MFR of the raw material to related molded parts can provide some information on how well the polymer survived the molding process.
It is difficult to obtain unanimous agreement on the benchmark value that should signal the need for concern regarding the molecular weight of the polymer. When using MFR, some cite an allowable increase of 30%, others use 40%, and a few even opt for changes as small as 20-25%. However, the original work on this problem, performed largely on materials like polycarbonate and PBT, actually used a measurement stated in terms of melt viscosity.
Using a correlation between melt viscosity and impact performance, a guideline was established that limited the viscosity reduction to 30%. This has been interpreted by many as justification for employing a limit of 30% when measuring an increase in MFR. But the two statements are not equivalent. A 30% reduction in melt viscosity is actually equal to a 43% increase in MFR when the tests are performed at the same conditions. Therefore, there is some justification for using 40% as an upper allowable limit.
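The arithmetic behind this equivalence can be sketched in a few lines. This is a minimal illustration, assuming (as the guideline above does) that MFR is inversely proportional to melt viscosity when both tests are run at the same conditions; the function name is mine, not part of any standard.

```python
# Sketch: relate a fractional melt-viscosity reduction to the equivalent
# fractional MFR increase, assuming MFR varies inversely with melt viscosity
# at fixed test conditions.

def mfr_increase_from_viscosity_drop(viscosity_drop: float) -> float:
    """Fractional MFR increase equivalent to a fractional viscosity
    reduction (e.g., pass 0.30 for a 30% viscosity drop)."""
    return 1.0 / (1.0 - viscosity_drop) - 1.0

# A 30% viscosity reduction works out to roughly a 43% MFR increase.
print(round(mfr_increase_from_viscosity_drop(0.30) * 100, 1))  # 42.9
```

This is where the 43% figure comes from: a melt that retains 70% of its original viscosity flows 1/0.7, or about 1.43 times, as fast.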
This guideline works for many polymers, provided that the compound is unfilled. Unfortunately, interpreting MFR results for filled materials is not as simple because during processing the fillers can also be affected in a manner that contributes to the increase in MFR of the compound. This is especially true of materials that contain a reinforcement such as glass fiber. First, consider that the addition of the glass increases the melt viscosity or decreases the MFR of the material even if the average molecular weight of the polymer is unchanged.
As an example, an unfilled polycarbonate with an MFR of 10 g/10 min, when filled with 10% glass fiber, will have an MFR of 7.5 g/10 min. At a 20% loading the MFR drops to about 4 g/10 min. The molecular weight of the polymer has not changed, but the addition of the fiber makes the compound more viscous and this is reflected in the MFR.
When an unfilled polymer is processed, the elevated temperatures and residence time associated with the process can break the polymer chains, reducing the average molecular weight and decreasing the melt viscosity. Short chains flow more easily than long ones, and the MFR will display a corresponding increase. Failure to remove enough moisture from the resin can aggravate this problem for some polymers such as polycarbonates and polyesters.
But many processes such as injection molding and extrusion also generate a substantial amount of mechanical work. This work has the effect of reducing the length of the glass fibers. And this reduction in fiber length will cause an increase in MFR that is independent of changes in the molecular weight of the polymer. So when MFR tests are performed on glass-filled materials, the change in MFR is a combined effect of changes in both the average molecular weight of the polymer and the length of the fibers. Consequently, the allowable increase in MFR for filled materials is greater, and the more filler there is in the compound, the greater this change can be before we become concerned about the health of the product.
This has been verified by correlating MFR results with another test called intrinsic viscosity (IV). IV tests work by dissolving the compound in a solvent and measuring the viscosity of the solution relative to that of the pure solvent. The higher the molecular weight the more viscous the solution becomes for a given concentration. In these tests the fillers are removed because they do not dissolve and they precipitate out before the test is performed. Tests have shown that relatively large increases in MFR for filled compounds are actually associated with comparatively small changes in IV, confirming that much of the MFR increase has nothing to do with polymer degradation.
But judging which changes are allowable and which are excessive can be tricky. Tests must be performed in order to correlate MFR increases with performance. The accompanying table shows the results of some of these studies on a range of materials and illustrates the effect that increasing amounts of glass fiber have on the permissible increases in MFR.
As straightforward as this concept is, it is not something of which many analysts are aware. So when a lab tested some brittle parts and obtained an MFR value of nearly 40 g/10 min based on a specification that appeared to read 18 g/10 min, they did the math, obtained a value of 111% for the increase, and declared the polymer degraded. But the compound contained 20% glass fiber, so based on the accompanying table the change that had been measured should not be related to the observed brittleness.
There was a second complicating factor. The MFR specification was not given as a mass flow but as a volumetric flow. A close look at the data sheet revealed that the specification read 18 cm3/10 min, but the lab reported its results in g/10 min. The melt density of the material is 1.40 g/cm3, so the mass-based MFR specification is actually 25.2 g/10 min (18 x 1.40). When this new baseline is used, the parts look even better, exhibiting an increase of only 60%. This is still above the benchmark value of 40% for an unfilled product and therefore could still be interpreted as signaling a processing issue; however, IV tests confirmed that the polymer was not degraded.
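The data-sheet math above can be sketched as a short calculation. This is only an illustration using the numbers from this case; the function names are mine.

```python
# Sketch: convert a volumetric flow spec (cm3/10 min) to a mass-based MFR
# (g/10 min) using melt density, then compute the percent increase of a
# measured result against that corrected baseline.

def volumetric_to_mass_mfr(spec_cm3: float, melt_density: float) -> float:
    """Mass MFR (g/10 min) from a volumetric spec (cm3/10 min) and
    melt density (g/cm3)."""
    return spec_cm3 * melt_density

def percent_increase(measured: float, baseline: float) -> float:
    return (measured - baseline) / baseline * 100.0

spec_volumetric = 18.0   # cm3/10 min, as printed on the data sheet
melt_density = 1.40      # g/cm3 for this glass-filled PBT
measured = 40.0          # g/10 min, the lab's result ("nearly 40")

mass_spec = volumetric_to_mass_mfr(spec_volumetric, melt_density)
print(round(mass_spec, 1))                              # 25.2
print(round(percent_increase(measured, mass_spec)))     # 59
```

Against the corrected 25.2 g/10 min baseline, the increase is about 59%, consistent with the roughly 60% cited above, rather than the 111% the lab originally calculated against the 18 figure.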
Judging the condition of filled materials becomes even more difficult when the type or amount of filler changes. Mineral fillers, for example, because of their lower aspect ratio, undergo smaller changes in geometry and therefore contribute less to any observed change in viscosity. Long glass fibers are extremely problematic because their average length is much greater than the size of the orifice in the MFR instrument. This can result in values that defy interpretation.
The solution is provided by the IV test, which eliminates the contribution of the filler. However, IV tests require an apparatus that is more expensive and more complicated than the MFR tester and involve working with some harsh chemicals. And some polymers such as PPS and PEEK are extremely difficult to dissolve, which can make this type of testing impractical. In addition, many suppliers of filled materials provide an MFR specification, which encourages the practice of performing this test on these compounds. But it is important to keep in mind that if MFR tests are used to evaluate the effect of processing on the average molecular weight of the polymer, the applicable rules must consider the contribution of the filler to the test result.
PBT and PET Polyester: The Difference Crystallinity Makes
The differences between PBT and PET are best understood by examining the chemical structure of the repeating unit that makes up the polymer chains. The essential feature that makes the materials distinctive is the terephthalate ester group that lends its name to this family of materials.
Last month we profiled similarities and differences between different chemistries in the acetal polymer family. This month we will begin making a similar comparison between different commercially available polyesters known as PBT and PET. Many years ago, while working for a custom injection molder, I worked closely with one of our customers to convert a series of parts from 30% glass-fiber reinforced PBT to a PET with the same level of filler. It served as a practical application of the differences between the materials on both processing and performance levels.
Fundamentally, the chemistry of PET and PBT is very similar. Polyesters are synthesized by reacting an organic acid, in this case terephthalic acid, with an alcohol. In the case of PBT the alcohol is generically referred to as butylene glycol while in PET it is ethylene glycol. The resulting polymers are known, therefore, as polybutylene terephthalate (PBT) and polyethylene terephthalate (PET).
The differences in the materials are best understood by examining the chemical structure of the repeating unit that makes up the polymer chains, as shown in Fig. 1. The essential feature that makes the materials distinctive is the ester group that lends its name to this family of materials. Other polymers such as PTT and PCT are also members of this chemical family that display slight variations on these structures.
Another key feature of the chemistry in this material is the six-sided ring that appears at regular intervals in the backbone. Known as a phenyl ring or more generally as an aromatic ring, this element provides stiffness to the polymer chain. This influences several important properties, including the glass-transition temperature, a region where polymers lose a significant percentage of their load-bearing properties.
It is not evident from a two-dimensional rendering of the chemical structure, but a 3D view would show that while many of the chemical groups in a polymer chain project into or out of the page, the aromatic ring sits in a plane. It also constrains the natural tendency of the other groups in the chain to rotate and vibrate. This is part of the stiffening effect of this ring structure. The reduced mobility and the bulky nature of the ring also influence the ability of the polymer to crystallize as it cools. PBT, with greater spacing between the aromatic rings, crystallizes more efficiently than PET. But PET, if it is successfully crystallized, provides for better mechanical properties, including strength, stiffness, and performance at elevated temperature.
Most consumers are familiar with PET in the containers that hold their bottled water or soft drinks. This type of PET is amorphous and is engineered to prevent crystallization. If bottle-grade PET did crystallize it would become cloudy and, more importantly, it would lose its impact resistance. So while there are probably a lot of parts molded in crystalline PET polyester under the hood of your car, where they encounter elevated temperatures and aggressive chemical environments, the vast majority of the PET in the world is consumed in packaging, where it is amorphous, unreinforced, and incapable of handling such rigorous environments.
The type of PET that we will be discussing in Part 2 of this article is semi-crystalline and almost always contains high levels of glass fibers and/or mineral fillers. PBT polyester, however, can be provided in its semi-crystalline form both filled and unfilled. In fact, because PBT crystallizes faster than PET, it is not possible under normal processing conditions to produce PBT parts that are amorphous. The polymer crystallizes efficiently enough to always achieve some level of organization in its structure. The stiffness of the ester groups and the aromatic rings is balanced by the flexibility and mobility of the butylene group. But in PET the shorter ethylene group makes crystallinity optional. We can have amorphous PET if we cool it rapidly or we can have semi-crystalline PET if we cool it slowly.
Most PET bottles start out as injection molded preforms. They are clear and tough and relatively thick-walled in order to allow for the thinning effect that the wall will undergo when the preform is reheated and stretched to form the bottle. If you have worked in a bottle manufacturing plant you know that if, during the preheat cycle, the preforms get too hot they turn cloudy—a sign of crystallization. (In fact, if you look closely at the gate area of a preform you will see a small amount of haze in this area due to the extra heat generated in this area of the part). If you try to blow a bottle from this cloudy, partly crystallized material, it will have reduced impact performance. The preform may even shatter during the blowing process if enough crystallinity develops. So the trick is to keep the material above its glass-transition temperature, but below its crystallization temperature. This temperature window may not be very wide, as can be seen in Fig. 2.
This graph shows the behavior of amorphous PET polyester, an unfilled clear material that is used to make parts that require toughness and transparency but do not need to withstand elevated temperatures. As the material is heated from room temperature, the first notable event is the glass transition. This appears as a step change in the heat content of the material, and in this compound that process is complete at 75 C (167 F). At this point the material has lost the rigidity it possessed at room temperature and is soft and pliable. As the temperature is increased, the viscosity of the softened polymer will decrease until it reaches a temperature near 110 C (230 F). This is the temperature at which the baseline of the scan starts to rise rapidly, and the gap between 75 C and 110 C represents the window of opportunity for blowing the bottle.
Most PET bottle plants that I have visited are running a preheat temperature near 100 C (212 F). Once the crystallization process has started, the material begins to turn cloudy. It will also begin to regain some of the stiffness that it lost as it went through the glass transition. If this process goes far enough, the polymer becomes fully crystallized at about 140 C (284 F). At this point the material will be opaque and brittle and will stay that way until the crystal structure melts at about 245 C (473 F). So PET can go either way—amorphous or semi-crystalline—depending on how we treat it.
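The processing window described above can be expressed as a simple check. This is only a sketch using the approximate transition temperatures from this compound's scan; real plants control preheat with far more nuance than a single pass/fail test.

```python
# Sketch of the blow-molding temperature window for amorphous PET:
# above the glass transition (~75 C, where the material softens) but
# below the onset of rapid crystallization (~110 C, where the scan's
# baseline starts to rise and the preform turns cloudy).

TG_C = 75.0            # glass transition complete
CRYST_ONSET_C = 110.0  # onset of rapid crystallization

def in_blow_window(temp_c: float) -> bool:
    """True if the preform temperature falls inside the usable window."""
    return TG_C < temp_c < CRYST_ONSET_C

print(in_blow_window(100.0))  # typical plant preheat: inside the window
print(in_blow_window(120.0))  # too hot: cloudiness and embrittlement
```

A 100 C preheat sits comfortably inside the 75-110 C window, which is consistent with the plant practice mentioned above.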
But PBT is always semi-crystalline under normal commercial circumstances. So to properly understand the differences in performance between PET and PBT we need to compare apples to apples—the semi-crystalline forms of each polymer. Because PET crystallizes very slowly, producing parts with a semi-crystalline structure requires the help of chemicals known as nucleating agents, as well as the presence of solid particles of fillers and reinforcements. Thus commercial semi-crystalline PET polyesters are always sold filled or reinforced, and to make a fair comparison of PET vs. PBT performance we therefore need to compare the materials with an equivalent level of the same type of filler. We will do this in Part 2 of this investigation and also discuss the differences in processing that molders will encounter as they work with these two polymer families.
Dimensional Stability After Molding—Part 2
My first exposure to the prolonged shrinkage process exhibited by some materials came when I was molding large parts in acetal homopolymer. Most of my experience with molding acetal parts to close tolerances had come from producing small gears with diameters of no more than half an inch.
However, our company landed a project that involved housings and quadrant gears with critical dimensions in the range of 3.5 to 4 in. and print tolerances of ±0.010 in. on parts with nominal wall thicknesses of 0.110 in. One specific dimension governing the spacing of alignment holes called for a specification of 4.046 ±0.010 in.
During initial sampling we produced parts with measurements that ranged from 4.038 to 4.042 in. in a 30-piece capability study. Statistical process control (SPC) was a relatively new concept in U.S. manufacturing at that point, so the fact that we were operating at one end of the tolerance range did not particularly concern us. We were quite satisfied that the dimensional range was tight and all the parts were to print.
But the parts had been measured about 90 minutes to two hours after they were produced. The parts were at room temperature to the touch, and based on our experience with other semi-crystalline materials we were satisfied that everything was fine. The next day the quality-assurance people pulled us into the lab to show us that half of the parts we had produced the day before were too small to meet the print. A review of all 30 parts showed that they had all continued to shrink and were now 0.004 in. smaller than the previous afternoon. Over the next day they shrank an additional 0.001 in., and then things seemed to settle down. Later I observed similar problems with large parts produced in polypropylene, even in filled PP.
In order to understand what was happening it is important to appreciate the relationship between mold shrinkage and crystallization in semi-crystalline materials. The more the material crystallizes the more it shrinks. Optimal levels of crystallinity are desirable. Semi-crystalline materials offer improved levels of fatigue and wear resistance compared with amorphous polymers and they generally provide improved creep resistance at elevated temperatures. But if the material is molded in a manner that prevents the development of crystallinity, these properties are not realized at the intended levels.
The opportunity for crystallization exists in a temperature window below the melting point of the polymer and above the glass-transition temperature (Tg). Processes like injection molding involve rapid cooling of the polymer as it enters the mold. Even when running a material such as PEEK, where the mold temperature may be 375 F (190 C), this represents a thermal shock to the flowing polymer that enters the mold at 700 F (371 C). This rapid reduction in temperature is needed to solidify the material so that the part can assume its intended shape. But as long as the polymer remains at a temperature above its glass transition, about 295 F (146 C) for PEEK, there will be sufficient mobility at a molecular level to allow the crystal structure to develop. Once the temperature of the material drops below this point no more crystals can form.
So what happens with acetal and PP? I often hear people state that these materials do not follow the rules. The rule they are referring to is that if the part feels like it is at room temperature then it is dimensionally stable. In fact, these materials follow the rules precisely. The problem is that the Tg of these materials is below room temperature. This is also true of polyethylene and ethylene copolymers such as EVA. The Tg of PP can vary from about -10 C to +15 C (14 to 59 F) depending upon the grade. The Tg of acetals is -78 C (-109 F), or the temperature of dry ice. This is a temperature only rarely experienced anywhere on our planet. So the chances are quite good that wherever you might be using an acetal part, it is above its Tg. Room temperature is about 100 C above the Tg of acetal. So as our molded parts sat in the quality lab overnight, they had more than enough freedom at a molecular level to continue crystallizing.
A properly packed out acetal part produced at the correct mold temperature will exhibit continued shrinkage of about 0.001 in./in. between the time the part reaches room temperature and the time that it is truly stable. This value may increase if the nominal wall is very thick. For small parts this change will be difficult to detect unless very precise measurements are made. But in larger parts it becomes a significant factor. Even after the dimensions stop changing at a measurable level, structural rearrangements continue to occur for weeks. Several years ago we studied the tensile strength and modulus, impact resistance, and dynamic mechanical properties of molded PP and acetal specimens over a period of five weeks after molding.
Properties were measured 48 hr after parts were produced, which is the ASTM guideline, and then additional tests were performed at intervals of seven days out to 35 days after the molding date. Tensile strength and modulus continued to rise over time, although the rate of increase declined as the time extended. Impact performance became worse. Some materials that were ductile two days after molding began to exhibit brittle behavior in 7 to 14 days. All of this was the direct result of continued crystallization.
Most processors do not consider the effect of processing conditions such as mold temperature on part performance. They are much more concerned with making parts that are to print. So when a process produces parts that are undersized or on the low end of the tolerance range, processors think in terms of making adjustments to correct this condition. The adjustment of choice is almost always mold temperature. Reduce the mold temperature and the part will be larger. The reason is simple: A lower mold temperature limits the ability of the polymer to crystallize. Fewer crystals mean less shrinkage.
But in a material that has a Tg below room temperature, these gains may be temporary. In every material there is an ideal spacing between molecules. The material “knows” what this spacing is and given the opportunity it will do everything it can to achieve it. Suppressing crystallinity in a material that possesses structural mobility at room temperature is a short-term strategy. This is why the suppliers of acetal materials have been so insistent over the years on the importance of mold temperature and proper packing to prevent something they call post-mold shrinkage.
It turns out that dimensional changes can also occur due to another mechanism that primarily affects amorphous materials, a class of polymers that is considered to have the upper hand when it comes to dimensional stability. However, these changes take much longer to unfold and they are smaller in magnitude. But if you are really focused on very close tolerances and the parts are large enough, these changes may still be of concern. We will deal with this mechanism in our next installment.
Dimensional Stability after Molding—Part 3
Amorphous polymers are generally considered to be a molder’s insurance policy against the problems of dimensional stability.
Amorphous structures undergo a smaller and more predictable change in volume as they cool from the melt to the solid, making them easier to mold to close tolerances. If you study the shrinkage behavior of a part molded in an amorphous material, you will also find that the part achieves stable dimensions in a shorter period of time. However, over time amorphous polymers also undergo a slow and subtle structural change that can result in continued shrinkage. This process is known as physical aging and it was not well understood in polymers until the 1970s.
Any process that involves melting and re-solidifying a polymer involves a compromise between achieving the perfect structure and producing a part that can be sold at a price that the market is willing to pay. Optimal structural stability is achieved by allowing the polymer chains in a system to reach their ideal configuration in terms of spatial separation. In a semi-crystalline material, as we have already discussed, this means attaining an optimal degree of crystallinity. In amorphous polymers, where no significant level of crystallinity is obtained, the ultimate objective is something called thermodynamic equilibrium. In both cases this involves allowing the polymer chains to reach their ideal arrangement at the molecular level. This is typically achieved by maintaining the material at an elevated temperature for a prolonged period.
Unfortunately, perfection requires too much time to be practical from an economic standpoint. So the objective of good process control is to achieve a level of structural stability that is adequate for applications in the real world. When a part is molded in an amorphous polymer, the material has not reached thermodynamic equilibrium. It spends the rest of the life of the product trying to get there. Physical aging can be thought of as the slow contraction of the polymer as the chains get closer, collapsing into the excess free volume that remained when the part was first produced. This closer approach produces a structure that is stronger and stiffer than the original matrix; however, it also loses toughness in the process.
The rate at which physical aging occurs is a function of the difference between the application temperature and the glass-transition temperature (Tg) of the polymer. The closer the temperature is to the Tg the more rapidly physical aging occurs. So it is not surprising, in retrospect, that one of the first practical implications of physical aging was observed in an amorphous polymer with a relatively low Tg, PET polyester.
The amorphous PET used to mold preforms and then blow beverage bottles has a Tg of about 172 F (78 C). At room temperature, the time needed for physical aging to produce a measurable change in mechanical properties can be as long as one to two years. However, when bottles were stored in a warehouse at 120 F (49 C), this reduced the gap between the application temperature and the Tg by about 50% and accelerated the rate of physical aging by approximately an order of magnitude. Bottles stored under these conditions exhibited a measurable loss in impact strength in just a few weeks.
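The Tg-gap arithmetic behind this example can be checked in a few lines. This is only a sketch: the 72 F room temperature is my assumption for illustration, and the order-of-magnitude acceleration is an empirical observation from the bottle example, not something derived here.

```python
# Back-of-the-envelope check of the Tg-gap argument: physical aging speeds
# up as the application temperature approaches Tg. Warehouse storage at
# 120 F roughly halved the gap relative to room temperature.

TG_F = 172.0  # approximate Tg of amorphous bottle-grade PET

def tg_gap_f(app_temp_f: float) -> float:
    """Gap between Tg and the application temperature, in deg F."""
    return TG_F - app_temp_f

room_gap = tg_gap_f(72.0)        # assumed room temperature: ~100 F gap
warehouse_gap = tg_gap_f(120.0)  # hot warehouse: 52 F gap
print(round(warehouse_gap / room_gap, 2))  # gap cut roughly in half
```

The halved gap is what accelerated the aging rate by roughly an order of magnitude in the bottles described above.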
Physical aging only occurs in what material scientists refer to as glasses, which are essentially any amorphous material that exists as a “solid.” All commercial polymers, even semi-crystalline ones like polyethylene and nylon, contain some amorphous regions and exhibit a corresponding glass transition. Therefore, technically it is possible for semi-crystalline materials to undergo physical aging and the associated changes in properties.
However, in most cases these changes are overshadowed by the contributions from the crystalline phase. But some semi-crystalline materials achieve a relatively low level of crystallinity that is highly dependent upon processing conditions. These tend to be the high-performance materials such as PEEK and PPS, where failure to cool the polymer at an appropriate rate will result in an almost completely amorphous structure. In these materials physical aging can occur and has been observed along with the associated property changes.
Because physical aging involves a change in volume, it can be detected as a change in dimensions. This dimensional change is relatively small as a percentage of the size of the part, so detecting it requires either very precise measurements or a very large part, so that the small percentage results in a readily measurable difference. Precise dimensions are not demanded in a beverage bottle, but there are industries that make measurements in microns. They have observed that parts molded in amorphous polymers with low glass-transition temperatures, such as rigid PVC, continue to exhibit a very small degree of dimensional change over a period of months. These changes are much smaller than the ones that occur due to the solid-state crystallization that we discussed previously, but they can still result in a part that drifts out of print over an extended period of time, even though the parts are only exposed to room temperature.
This process can be slowed down if the parts are stored at sub-ambient temperatures, but it cannot be stopped completely unless the temperature is lowered to the point where the polymer undergoes another more subtle transition called the beta transition. For most polymers this transition takes place at a very low temperature that would not be practical for part storage and certainly will not be encountered in most application environments. For example, the beta transition in PVC occurs near -58 F (-50 C) and in polycarbonate it is at -148 F (-100 C).
Up to this point, we have discussed long-term behavior that causes molded parts to become smaller over time. But there are some long-term influences that actually cause parts to grow. In the next installment we will discuss this behavior and look at the mechanisms behind it.
Dimensional Stability After Molding: Part 4
Up to this point our discussion of dimensional stability has focused on those influences that cause parts to get smaller. But there are environmental factors that also cause parts to increase in size over time. The best example of this is the dimensional growth that occurs when parts molded in nylon absorb moisture from the atmosphere.
Many polymers are hygroscopic; they absorb water. Since water vapor is always present in the atmosphere, this is the usual source of the water that becomes absorbed into the polymer. Most hygroscopic materials under normal atmospheric conditions can absorb 0.1-0.2% water over an extended period. But nylon, because it contains hydrogen bonding, attracts water to a much greater degree. At room temperature in a “normal” environment where the relative humidity is in the range of 35-65%, the equilibrium moisture content for an unfilled nylon will hover around 1.5-2% by weight. If continually immersed this can increase to 5-8%.
This level of moisture absorption changes the mechanical properties of the polymer. Many designers and engineers complain that nylon parts lack toughness in the cold, dry winter months when indoor humidity levels can drop to 5-10%, and this is why many processors will purposely moisture-condition their nylon parts after they are molded. It simply shortens the time frame required for the part to reach equilibrium from weeks or even months to a couple of days.
Moisture absorption begins the moment the part leaves the mold. As molded, the moisture content of the part is approximately equal to that of the pellets that went into the press. Since we assume that all nylon is properly dried before molding, this would place the moisture content for a part produced from unfilled nylon below 0.20% or 2000 ppm. This is what we refer to as dry-as-molded. From that point, the process of going from 0.2% to 2.0% starts.
Water is a plasticizer for nylon. This means it reduces the material's glass-transition temperature. For the workhorse polymers nylon 6 and nylon 6/6, this takes the glass transition from 65-70 C down to about 10 C. Water is capable of the same hydrogen bonding as nylon; when water enters a nylon part it has the same opportunity to become loosely attached to the nylon chain as does another nylon chain.
When the water molecule becomes positioned in this way, it forces the spacing between the polymer chains to increase. The greater the amount of moisture absorption, the greater the volumetric expansion. Someone measuring parts will view this as an increase in the size of the part. Typically the increase in size can equal 0.5-0.6% in an unfilled nylon 6 or nylon 6/6 if the exposure occurs at room temperature. If the temperature increases, the moisture-absorption levels will increase and the corresponding dimensional changes will become larger. Fillers and reinforcements reduce this dimensional change; however, even a highly filled nylon molded under optimal conditions will still expand by approximately 0.1%, or 0.001 in. for every inch of part dimension. This makes it important that the processor and the end user define the conditions under which parts are to be measured for first-article approval.
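The rule-of-thumb arithmetic above can be sketched in a few lines. The growth percentages are the ones cited in the text; the function name is illustrative only:

```python
def moisture_growth(dimension_in, growth_pct):
    """Estimated dimensional increase (in.) from moisture absorption.

    growth_pct: roughly 0.5-0.6 for unfilled nylon 6 or 6/6 exposed at
    room temperature; about 0.1 for a highly filled grade.
    """
    return dimension_in * growth_pct / 100.0

# A 2.000-in. dimension on an unfilled nylon part at 0.5% growth gains
# about 0.010 in.; a highly filled grade at 0.1% gains about 0.002 in.
print(moisture_growth(2.000, 0.5))
print(moisture_growth(2.000, 0.1))
```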
While nylon represents the extreme where moisture absorption is involved, the vast majority of polymers absorb water at some level. Materials such as polyethersulfone and polyetherimide can absorb as much as 2% by weight over time. However, because these polymers are amorphous and have extremely high glass-transition temperatures, the effect of this moisture absorption on dimensions and properties is much smaller than it is in nylon. But as the environment becomes more severe, even materials that are normally considered to be very dimensionally stable in moisture-laden environments can produce surprising responses, particularly in situations where close tolerances must be held in assemblies.
Acetal is often considered an alternative to nylon that avoids the difficulties associated with moisture absorption. The fact that acetal is rarely dried prior to processing reinforces the notion that moisture uptake is not a concern. At room temperature and 50% relative humidity an unfilled acetal will absorb only about 0.2% moisture and will expand by approximately 0.2%, effects that are much less problematic than those encountered in nylon.
But acetal is a polar polymer and is capable of taking on larger amounts of water under the right conditions. At 100% relative humidity the moisture absorption increases to almost 0.8% and the dimensional growth increases to 0.7%. This behavior led to a significant problem with the actuation of an acetal piston riding in a hole in a mating part. The tolerance stack-up between the piston and the hole ranged from 0.004 in. to 0.010 in. on a diameter of 0.600 in. At room temperature the assembly functioned as intended.
However, validation testing involved prolonged exposure to a conditioning process at a temperature of 85 C and a relative humidity of 95%. This allowed the acetal part to absorb enough moisture to grow by 0.004 in. Parts produced to the low end of the tolerance provided enough clearance between the two parts to allow for proper operation. But if the parts were molded to the high side of the tolerance, the subsequent expansion due to moisture gain was sufficient to cause the part to bind and resist movement.
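A minimal sketch of the stack-up arithmetic, using the 0.004-in. growth and the clearance range from the story above:

```python
def remaining_clearance(initial_clearance_in, diametral_growth_in):
    """Diametral clearance left after the acetal piston swells from moisture."""
    return initial_clearance_in - diametral_growth_in

growth = 0.004  # growth observed after the 85 C / 95% RH conditioning
for clearance in (0.010, 0.004):  # best and worst cases of the stack-up
    left = remaining_clearance(clearance, growth)
    print(clearance, "->", left, "binds" if left <= 0 else "clears")
```

The worst-case clearance is consumed entirely by the moisture growth, which is exactly the binding failure described in the story.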
Before we end this series on dimensional stability, we will discuss a reversible effect that results from a property that all materials have: the coefficient of thermal expansion. In Part 5 in the next issue, we will cover how this property influences the results that are obtained when parts are measured in different environments.
Dimensional Stability after Molding—Part 5
At the International Bureau of Weights and Measures near Paris sits a bar made of 90% platinum and 10% iridium with two marks engraved very precisely into it. The bar is stored under very controlled conditions, and at 0° C the distance between the two marks represents one meter. For more than 70 years this bar served as the worldwide standard for this fundamental unit of measure in the SI system (International System of Units).
Why would so much effort go into defining a unit of length? The answer lies in part in a property that all materials possess: the coefficient of thermal expansion (CTE). The vast majority of materials increase in size as their temperature increases and decrease in size as their temperature declines. For most metallic materials this value is approximately 10 x 10-6 m/m°C. For most unfilled polymers the number is approximately an order of magnitude greater, or about a tenth of a thousandth of an inch per inch per °C change in temperature.
A story from the world of producing and measuring close-tolerance parts provides an illustration of the importance of this property. In the first part of this series we reviewed a part molded from unfilled PBT polyester. This part was essentially a cylinder approximately 5 in. high and closed at the end at which it was gated. It had a critical depth dimension from the bottom flange to the other end of the part of 4.941 ±0.008 in. Maintaining this dimension was a challenge every time the job ran, particularly since the nominal wall thickness was 0.250 in. and the part was not dimensionally stable until 2 hr after it was molded.
This uncertainty was solved by creating the graph of the critical dimension vs. time discussed in Part 1. But this part had presented a number of challenges over the years that always had us looking over our shoulders for some unexpected calamity.
These parts went out for a machining operation and then came back to our plant for a final inspection before being shipped to our customer. One day the parts came back from the machinist and were dropped off on the dock. It was mid-January in Wisconsin and the outside temperature was typical for that time of year: 10 F (-12 C), a full 35° C below room temperature.
Since the boxes were heavy, rather than haul the parts upstairs to the QC lab the quality person took the depth micrometer down to the dock and began to perform a final inspection on this critical dimension. Upon finding the parts to be undersized, he announced this with the singular delight that only a QC person can display when delivering bad news to the production staff. When I picked up a part to verify the measurement, I noticed that it was ice cold. I advised QC to take the parts to the lab and allow them to come to room temperature before measuring them. When they did, magically, the parts were to print again.
Looking at the properties of the PBT polyester in question, the behavior was predictable. With a CTE of 0.000075 in./in.°C, multiplied by a 35° C change in temperature and a length of nearly 5 in., the parts had “moved” almost 0.013 in., or almost the entire span of the print tolerance. This illustrates the importance of measuring critical dimensions at standardized conditions.
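The back-of-the-envelope calculation above is worth making a habit. This sketch reproduces it; the function name is my own:

```python
def thermal_dimension_change(length_in, cte_per_degC, delta_t_degC):
    """Change in a dimension due to a temperature change (expansion if
    delta_t is positive, contraction if negative)."""
    return length_in * cte_per_degC * delta_t_degC

# The dock-inspection case: a ~5-in. PBT dimension, CTE of 0.000075
# in./in./deg C, measured 35 deg C below room temperature.
shift = thermal_dimension_change(4.941, 0.000075, 35)
print(round(shift, 4))  # about 0.013 in.
```

Nearly the full 0.016-in. print tolerance span, which is why the cold parts measured out of specification on the dock.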
Occasionally you may see a print that actually calls out a temperature range at which dimensions are to be checked. This reflects an awareness of the importance of CTE to the size of the part, a factor that escapes many. As with all of the other considerations we have been discussing, the larger the part the more important this becomes.
This also relates to a lesson that was learned when we first began to perform SPC at the molding machines in the early 1980s. The importance of checking and charting critical dimensions while the parts were running was one of the big messages of SPC. So it made sense to put the inspection tools right at the press, where the operator could check and input the dimensions and observe the control charts as they developed. Some of the gauges were measuring critical dimensions to the tenths of thousandths of an inch. When we would analyze the data for Cp and Cpk values, we found disappointingly low process-capability results, even though the ranges for any given set of samples taken at a particular time did not suggest that this should be the case. In addition, when the parts collected for SPC over the course of a run were reviewed by the quality people after the fact, the variation was much smaller.
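For readers unfamiliar with the capability indices mentioned above, this is a minimal sketch of how Cp and Cpk are typically computed from a set of measurements. The sample readings and spec limits here are hypothetical:

```python
import statistics

def cp_cpk(samples, lsl, usl):
    """Process-capability indices from a list of measurements.

    Cp  = (USL - LSL) / (6 * sigma)                  -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  -- penalizes off-center
    """
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical depth readings against a 4.941 +/- 0.008 in. print:
cp, cpk = cp_cpk([4.941, 4.943, 4.945, 4.947, 4.949], lsl=4.933, usl=4.949)
print(round(cp, 2), round(cpk, 2))
```

Note that any extra variation injected by the measurement environment inflates sigma and drags both indices down, which is the effect described in the next paragraphs.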
This was initially attributed to the greater expertise that the QC people brought to the measurement process and the fact that having a single person making all the measurements reduced some of the aspects of gauge repeatability and reproducibility. But further investigation showed that there was more to it. When the efforts to bring SPC to the production floor began, the plant had not yet gone through the conversion to a climate-controlled environment. Anyone who has worked in a molding plant without climate control knows that during the summer months, even in Wisconsin, the temperature in the building can vary significantly, from relatively cool conditions first thing in the morning to sweltering heat in early second shift.
Temperature gauges in the plant documented swings of as much as 50° F (28° C). When we looked at our control charts we could see the waveform associated with these temperature fluctuations superimposed onto the normal variation of our processes. Parts molded when the plant was at its hottest did not shrink as much as the parts molded when it was cooler, simply because of the effects of the CTE. Not only did this make the process look less capable, it often gave the impression of a trend that was not really there, and the technicians and quality people reacted to it.
So in addition to all of the other considerations that we have discussed relating to dimensional stability, we must add one more: Even parts that have become stable will still produce different measurement results when checked under different environmental conditions. The CTE of the material, the size of the part, and even the orientation of the polymer relative to the dimension being checked, will all have an influence on the sensitivity of the measurements to the temperature at which these measurements are made.
Melt Flow Rate Testing–Part 1
Melt indexers, or extrusion plastometers, are common lab tools used to determine the melt flow rate (MFR). The test, while often disparaged, gauges the relative average molecular weight (MW) of the polymer. Since MW is the driving force behind performance in polymers, it turns out to be a very useful number. (Photo: Tinius Olsen)
Melt flow rate testing can be characterized as the Rodney Dangerfield of material test methods—it gets no respect. People from all parts of the industry downplay the value of the test or denigrate its usefulness outright. Those who instruct industry professionals on processing are quick to point out that the melt flow rate (MFR) value for a material is a single point on a curve that characterizes viscosity as a function of shear rate. Since plastics are non-Newtonian, their viscosity varies with shear rate.
The melt flow rate test moves the molten material at a single flow rate, and therefore a single shear rate, and so fails to capture the full range of a material's behavior as a function of changing shear rate. To make matters worse, the shear rate is not even controlled. While the load on the material, and therefore the shear stress, is constant during the course of the test, the shear rate is an output. The MFR itself is a reflection of the shear rate at which the test ran, but it is a result of the test and not a controlled input.
Academicians do not like the units. How do you get from grams/10 minutes to anything meaningful in terms of fundamental polymer behavior? I have written extensively on the relative relationship between MFR and average molecular weight. This has earned me occasional irate e-mails from university professors insisting that I explain how you convert from g/10 min to the proper units for molecular weight, which are grams/mole. It is a good question and one that we will answer as completely as possible over the course of these next few articles.
In fact, melt flow rate testing is a poor tool for gauging processability, for reasons that we will explain in detail later in this series. But it was never intended to be a measurement of processability; this is an interpretation that has been applied by some in the processing community. The notion that a melt flow rate tester is some type of poor man’s capillary rheometer is fundamentally incorrect. In addition, the relationship between MFR and average molecular weight is a relative one. There are a lot of factors that can skew this relationship and make interpretation tricky.
For example, adding ingredients to a compound such as glass fibers, impact modifiers, and certain additives can alter the MFR of a material without changing the average molecular weight of the polymer one bit.
But if the test is so useless, why does the value appear on so many material data sheets? Not only is it a line item on the majority of published data sheets, it is often the key characteristic that distinguishes one grade of material from another within a given polymer family. In materials as diverse as polycarbonate, acetal, and polystyrene, the MFR may be the only value on the data sheet that varies significantly from grade to grade within a particular product offering. The reason is simple. Assuming that all other factors are kept constant, the MFR is a very good gauge of the relative average molecular weight of the polymer. Since molecular weight (MW) is the driving force behind performance in polymers, it turns out to be a very useful number.
Flow rate in a polymer is related inversely to viscosity. High-viscosity materials flow with greater resistance, and therefore more slowly under any particular set of conditions, than low-viscosity materials do. Therefore, higher-MW polymers have lower MFR values and lower-MW polymers have higher MFR values.
Practitioners of injection molding prefer the latter because it is easier to fill demanding flow paths in a mold with what we refer to as “high-flow” materials. Extruders and blow molders are more likely to prefer higher-MW materials because they provide higher melt strength, a factor that makes it easier to control the shape of a parison or a complex profile, die-swell considerations notwithstanding. End users prefer higher-MW polymers whether they know it or not, because higher MW correlates with better product performance. Impact resistance, fatigue performance, environmental stress-crack resistance (ESCR), and barrier properties (to name a few) all improve with higher MW.
My first exposure to the importance of MW as a material-selection criterion came on the molding floor over 30 years ago. We were molding traffic-light signal housings out of a high-MW injection grade of polycarbonate. The nominal MFR of the material was 5 g/10 min. The geometry of the part, coupled with the age of our molding machines, made this a very challenging part to produce. Because the performance of the part was quite critical, we conducted a falling-dart impact test once every hour to make sure that the process was under control. At the conclusion of the run we would randomly select 20 parts from the lot and repeat the impact test. The typical result was that 20 out of 20 parts passed.
One day we decided to see what would happen if we used a lower-MW grade. The reasoning went something like this: If we can use a material that flows more easily, we can reduce the melt temperature of the material and the pressures associated with injection and packing. This will reduce the stress on the material and improve the impact performance of the part or at least compensate for the reduced impact strength of the lower-MW polymer.
When we sampled a grade with a nominal MFR of 10 g/10 min we did, in fact, observe that we could reduce the melt temperature of the material by 40° F (22° C), and our pressure during first-stage filling declined by 10%. But when we ran our impact performance evaluation on 20 parts made from this material, only four of them survived the test. This large change in behavior occurred despite the fact that the notched Izod impact values published on the data sheets for these two grades were the same. This disparity between actual performance and the expectations created by the data sheet occurs every day in our industry, and we will address the reasons for this in a later article.
In our next article we will describe the MFR test procedure and discuss some of the strengths and weaknesses of the test. We will also explain the reason why so many material suppliers use the property, not only as a published line item but also as a key parameter for quality certification from lot to lot.
Melt Flow Rate Testing – Part 2
Because the viscosity of a polymer varies with flow rate, or shear rate, a complete characterization of viscosity must allow for measurements at multiple shear rates and the production of a graph that captures this relationship. A capillary rheometer can provide such data, but not a melt-flow-rate tester. Here is a capillary rheometer output for two polypropylenes.
In order to fully appreciate the strengths and weaknesses of the melt-flow-rate (MFR) test it is important to know something about the way the test is performed. The methodology is covered by ASTM D 1238, while the corresponding international standard is ISO 1133. There are small differences between the methods, but they essentially perform the same function.
Both establish the rate at which a polymer flows under very specific conditions through an instrument with a very specific geometry. For the ASTM method, the cylinder into which the material is loaded has a diameter of 0.376 in., and at the bottom of the cylinder is a removable insert with an even smaller opening or orifice, as it is usually designated. While a few materials are tested using a non-standard orifice, the standard opening has a height of 0.315 in. ±0.001 in. and a diameter of 0.0825 in. ±0.00025 in.
The tolerances here suggest that the dimensions of the flow path are considered to be critical…and they are. The MFR instruments come with a go/no-go gauge for the orifice diameter that must be used regularly to ensure that the opening is within specification. Orifices are available in different materials and some are more durable than others, particularly against the effects of aggressive cleaning that can enlarge the opening. In addition, cleaning the top and bottom surface of the orifice can reduce the height. These are factors that can detract from the accuracy of the measurements.
Assuming the physical equipment is in good condition, the test involves first establishing the appropriate temperature for the material being tested. The prescribed temperatures are polymer-specific: Polycarbonate (PC) is typically tested at 300 C, polyethylene at 190 C, etc. For some polymers there are two or even three recognized conditions that may be used and it is usually the resin supplier that decides which one it uses. Probably the best example of this is ABS, where one of three different temperatures can be used to perform the test. So when performing incoming quality-control tests it is important to use the same conditions as your material supplier if you expect to get the same results. Just as with orifice geometry, temperature calibration is very important.
The other key input parameter is the mass that is placed on the material sample once it has been loaded into the cylinder and brought up to the specified temperature. This is also a polymer-specific setpoint and it can be a single number agreed upon by all, such as 1.2 kg for PC, or it can be one of two or three values, as in the case of ABS. For each of the three temperatures that can be used to test ABS there is a particular mass that accompanies that temperature. You can use 200 C with a 5 kg load, 230 C with 3.8 kg, or 220 C with 10 kg. And, yes, each condition will give you a different result for the same compound. The actual MFR value provided by the test is expressed in grams/10 minutes and is governed by the test conditions and the composition of the material being tested.
Here is the important fundamental regarding this test: The load is the input that drives the material through the orifice; the flow rate is the output. Consequently, the MFR test is a constant-shear-stress test, not a constant-shear-rate test. By definition it is a pressure-limited configuration. In this respect it is different from a capillary rheometer, a device for measuring viscosity that can control and vary the flow rate of the test while measuring the force required to achieve that flow rate.
Capillary rheometry is a controlled-shear-rate test and can provide a true measurement of viscosity, or resistance to flow. And because the viscosity of a polymer varies with flow rate, or shear rate, a complete characterization of viscosity must allow for measurements at multiple shear rates and the production of a graph that captures this relationship. Figure 1 shows a capillary rheometer output for two polypropylene materials.
This is where one of the common criticisms of the MFR test comes in. Critics point out that while capillary rheometry provides a complete picture of the relationship between viscosity and shear rate across a wide range of conditions that mirror many different processes, the MFR captures only one point on the curve.
This is true. The question is, “So what?” This critique implies that the MFR test was designed to provide some indication of processability. That was never its primary purpose. Instead it is intended to provide a simple way of measuring the relative average molecular weight (MW) of the polymer. As MW decreases, MFR increases. Since MW drives performance in polymeric materials, this is something that should be of interest.
When a polycarbonate supplier creates a range of grades distinguished primarily by their MFR, it is identifying the products according to their average MW. Molecular weight influences impact performance, fatigue resistance, creep resistance, environmental stress-crack resistance, and barrier properties. The higher the MW, the better the performance. When a supplier establishes a specification range around a published nominal MFR value for a grade, it is not doing so because it is concerned with how the material will process in a particular piece of equipment. The supplier monitors the MFR because it knows this is an indicator that the average MW of the material is under control and within the agreed-upon range.
The actual MFR value certainly has implications for processing. No one would pretend that a polycarbonate with an MFR of 4 g/10 min will flow as far through the same flow path and under the same molding conditions as one with an MFR of 20. But the real difference in viscosity is not as large as these numbers would suggest, because the difference in the flow rate automatically means a difference in shear rate, and with higher shear rates come lower viscosity. The shear rate at which an MFR test is performed is actually proportional to the MFR value itself. Multiplying the MFR by approximately 2.2 will give the shear rate at which the test was performed. So a 4-melt material is tested at a shear rate of about 8.8 sec-1 while the 20-melt material is tested at 44 sec-1. Not only are these shear rates different, they don’t represent the shear rates experienced by polymers under most melt processes.
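The 2.2 multiplier makes this easy to sketch. Treat the factor as approximate, since it depends on the test geometry:

```python
def mfr_test_shear_rate(mfr_g_per_10min, factor=2.2):
    """Approximate shear rate (sec-1) at which an MFR test runs,
    using the rule of thumb: shear rate ~ 2.2 x MFR."""
    return factor * mfr_g_per_10min

# The 4-melt and 20-melt polycarbonates from the text:
for mfr in (4, 20):
    print(mfr, "g/10 min ->", round(mfr_test_shear_rate(mfr), 1), "sec-1")
```

Both values are far below the hundreds or thousands of reciprocal seconds typical of injection molding, which is why the MFR says little about behavior at processing shear rates.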
In the next article we will explore the quantitative relationship between MFR and MW, the utility of using low shear rates to measure MW, and the reasons why some processors are convinced that variations in their process are caused by lot-to-lot variation in the MFR of the material.
Melt Flow Rate Testing—Part 3
FIG. 1. You cannot measure viscosity at a shear rate of zero because viscosity is a measurement of resistance to flow. In order to measure resistance to flow you have to make the polymer flow, and as soon as you do so the shear rate becomes some non-zero value. However, logarithmic plots of viscosity vs. shear rate tend to level off as the shear rate approaches zero, so liberties are taken to extrapolate the curve back to the y-axis to arrive at the value for zero-shear viscosity.
FIG. 2. This graph shows an actual viscosity/shear rate curve for a commercial acetal copolymer with a nominal MFR of 9 g/10 min. The measurements cover shear rates from 1.4 to 1400 sec-1 and illustrate the early portion of the behavior shown in Fig. 1.
FIG. 3. The arrows show the shear rates at which the MFR tests are performed for the two materials. Because the shear rate for the test performed on the 22-MFR material is significantly higher than that used for the 4-MFR material, the differences in viscosity are exaggerated. This is an admitted imperfection of the MFR test. However, it also illustrates the principle that differences in viscosity caused by variations in molecular weight are most effectively measured at low shear rates.
A couple of years ago I received an indignant e-mail from a gentleman challenging my statement that melt flow rate measurements usually provide a good assessment of the relative average molecular weight of a polymer. How, he demanded, do you convert a flow-rate measurement expressed in units of grams/10 minutes, to a measurement of molecular weight, which is given in grams/mole? Researchers have addressed this question.
There is a well-established relationship between something called the weight-average molecular weight of a polymer and a parameter known as the zero-shear viscosity. While the exact relationship is somewhat dependent upon the polymer, the general equation that can be found in the literature is:
η₀ = kMw^3.4

where η₀ is the zero-shear viscosity, Mw is the weight-average molecular weight, and k is a constant that is specific to the polymer being evaluated. The exponent 3.4 does not apply universally; however, the values do tend to fall in a range between 3.2 and 3.9. In any event, this relationship clearly shows that relatively modest changes in molecular weight (MW) result in large changes in melt viscosity when this viscosity is measured at a very low shear rate.
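Because the polymer-specific constant cancels in a ratio, the power law makes it easy to see how strongly viscosity responds to MW. A minimal sketch using the 3.4 exponent:

```python
def zero_shear_viscosity_ratio(mw_ratio, exponent=3.4):
    """Relative change in zero-shear viscosity for a relative change in
    weight-average MW; the polymer-specific constant k cancels out."""
    return mw_ratio ** exponent

# A modest 10% increase in Mw raises zero-shear viscosity by roughly 38%:
print(round(zero_shear_viscosity_ratio(1.10), 2))
```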
Zero shear rate is a concept only a mathematician could love. In practice you cannot measure viscosity at a shear rate of zero because viscosity is a measurement of resistance to flow. So to measure resistance to flow you have to make the polymer flow, and as soon as you do so the shear rate becomes some non-zero value. But logarithmic plots of viscosity versus shear rate tend to level off as the shear rate approaches zero, so we can extrapolate the curve back to the y-axis to arrive at the value for zero-shear viscosity (see Fig. 1).
Figure 2 shows an actual viscosity/shear rate curve for a commercial acetal copolymer with a nominal melt flow rate (MFR) of 9 g/10 min. The measurements cover shear rates from 1.4 to 1400 sec-1 and illustrate the early portion of the behavior shown in Fig. 1. The viscosity does not change by a statistically significant amount at shear rates between 1.4 and 7 sec-1 before entering what is referred to as the non-Newtonian region of the curve. Figure 2 also pinpoints the shear rate at which the MFR test is performed on this material, about 20 sec-1. It can be seen that the MFR test is performed at a point very close to the plateau leading back to the zero-shear point on the curve.
This illustrates the usefulness of the MFR test in providing a relative measurement of the average MW of a polymer. Because the MFR test is performed at relatively low shear rates, the results approximate the zero-shear viscosity, despite the fact that the shear rate is not controlled. The fact that the shear rate is not controlled in an MFR test means that the test actually exaggerates the real differences in melt viscosity that occur as a function of molecular weight.
This can be shown by revisiting the viscosity vs. shear rate curves for the two polypropylenes shown in last month’s column. The MFRs for the two materials are often interpreted as indicating that the viscosity of the 4-MFR material is 5.5 times greater than that of the 22-MFR resin.
However, if we compare the viscosity of these two materials at any specific shear rate we can see that this is not the case. The largest difference in viscosity between the two materials is observed at the lowest shear rate, and even at this point the ratio of the two viscosities is not quite 3.5:1 (3460/1005=3.44).
Note that as the shear rate becomes higher, the difference in viscosity between the two materials decreases. At the highest shear rate on the graph of 10,000 sec-1 the ratio has declined to 18/13 or 1.38:1. The faster the materials flow the more similar their viscosities appear to be. This is caused by orientation of the long polymer chain, a phenomenon often referred to as shear thinning. This behavior allows processors to move polymer melts long distances through relatively restrictive flow paths, particularly in injection molding.
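The shrinking viscosity gap can be summarized with the values read from the curves (viscosity units assumed to be Pa-sec):

```python
# Ratios of the 4-MFR to the 22-MFR polypropylene viscosities:
mfr_ratio = 22 / 4               # what the MFR values alone suggest: 5.5:1
low_shear_ratio = 3460 / 1005    # at the lowest shear rate measured
high_shear_ratio = 18 / 13       # at 10,000 sec-1

print(round(low_shear_ratio, 2))   # ~3.44
print(round(high_shear_ratio, 2))  # ~1.38
```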
The reason that the MFR values suggest a larger difference than the viscosity measurements indicate is because the shear rate for the two MFR tests is not the same. The arrows in Fig. 3 show the shear rates at which the MFR tests are conducted for the two materials. As mentioned before, these shear rates are not controlled; they are a result of the rate at which the two materials flow under a constant load. But because the shear rate for the test performed on the 22-MFR material is significantly higher than that used for the 4-MFR material, the differences in viscosity are exaggerated. This is an admitted imperfection of the MFR test. However, it also illustrates well the principle that differences in viscosity caused by variations in MW are most effectively measured at low shear rates. As the shear rates get higher, materials with substantially different MW start to look very similar.
At the extremes, commercial raw materials can be found with MFR as low as 0.1 g/10 min and as high as 500 g/10 min. However, for the vast majority of commercial compounds where an MFR is published, the values fall between 1 and 50 g/10 min. This means that the shear rates of MFR tests typically fall between 2.2 and 110 sec-1.
The exaggerated differences in melt viscosity that the MFR test produces have an unfortunate effect on the processing community, particularly those processors involved with injection molding. Processors tend to think of the MFR values as being representative of the real differences in the flow of the materials.
When molded parts fail to perform as expected, one of the changes that can be made to improve performance is to increase the MW of the polymer being used to produce the parts. This involves changing to a lower MFR material. Often this change is dismissed out of hand by the processor because of the anticipated increase in processing difficulty suggested by the difference in the MFR values. The viscosity curves show that the real differences in viscosity are not as large as suggested by the MFR values.
But it is difficult to get people to think beyond the MFR numbers. Many performance problems that could have been solved with higher-MW materials have instead been approached with complicated changes in part design and process conditions simply because the MFR numbers looked too intimidating.
Next month we will explore the reasons for the perception of MFR as a barrier to processing and explain why processes that exhibit sensitivity to lot-to-lot fluctuations in MFR have fundamental problems related to the molding machine or the way the process is set up.
Melt Flow Rate Testing—Part 4
While melt flow rate testing is not extremely difficult, and the equipment is not very costly, few molders perform the test in-house. Of those that do, a relatively small percentage understand why they are doing it or what they are measuring. As we have mentioned previously, most material suppliers specify and control their products to a melt flow rate (MFR) specification because they know that it is related to the average molecular weight (MW) of the polymer they are producing. If the MFR is consistent, the average MW is consistent.
But many molders who check their incoming materials to verify the certifications they receive believe that they are testing to ensure process consistency. This belief is based on a poor understanding of the relationship between the MFR value and the actual viscosity of the material at processing conditions.
The capillary rheometry curves we have shown in previous columns illustrate that the viscosity of a polymer is dependent upon the shear rate. We also know that a component of the shear rate is the volumetric flow rate of the material. The other component is the geometry of the flow path, which should be a constant for any given mold. Of course, the shear rate varies with location in the flow path. It is different in the runner than it is at the gate, for example.
But for any given location within the flow path there is the expectation that the cross-sectional area of the flow path remains the same. Therefore, the only process variable that can influence the shear rate is the flow rate of the material as it is filling the mold. Flow rate is controlled by a machine parameter known to processors as injection speed. On today’s machines the technician selects the injection speed by finding the appropriate screen and entering a setting that is usually given as a linear distance per unit time such as in./sec or mm/sec.
Let’s assume for a moment that the setup document for a particular mold running in a particular machine calls for a first-stage injection speed of 3 in./sec. If the stroke distance from the beginning of the injection process to the point at which the machine transfers from first to second stage is 6 in., then the fill time should be 2 sec. Very few processors ever check to see if this is actually the case, even though fill-time clocks accurate to two or three decimal places are standard equipment on modern molding machines. Processors tend to enter the setpoint and trust that the machine is doing what it is told to do.
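The expected fill time in this example is simple arithmetic: stroke divided by speed. A minimal sketch, using the 3 in./sec and 6 in. figures above:

```python
# Expected first-stage fill time from injection speed and stroke,
# the same arithmetic as the example in the text.
def expected_fill_time(stroke_in, speed_in_per_sec):
    """Fill time (sec) = stroke distance / injection speed."""
    return stroke_in / speed_in_per_sec

# 6 in. of stroke at 3 in./sec should fill in 2 sec.
print(expected_fill_time(6.0, 3.0))  # 2.0
```

Comparing this calculated value against the machine's actual fill-time clock is the quickest way to find out whether the machine is really achieving the commanded velocity.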
There are a number of factors that could prevent the machine from responding the way it should, but in the end these all boil down to a lack of pressure needed to achieve the desired velocity. Processes that run without an abundance of first-stage injection pressure are called pressure-limited.
Viscosity is defined as resistance to flow. It can be thought of as the product of the pressure applied to the fluid and the time over which this pressure is applied. To move a fluid of higher viscosity a certain distance, it is necessary to either use more pressure over a particular period of time or apply the same pressure over a longer period of time. The SI units for viscosity, Pascal-seconds (Pa-s), reflect this relationship.
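For a Newtonian fluid in a round channel, this pressure-times-time picture can be made concrete with the Hagen-Poiseuille relation: the volume delivered is proportional to pressure times time divided by viscosity, so the pressure-time product needed to move a fixed volume scales directly with viscosity, and even carries the same units (Pa-s). In this sketch the channel dimensions, delivered volume, and viscosities are all illustrative assumptions:

```python
import math

def dp_times_t(eta_pa_s, length_m, radius_m, volume_m3):
    """Pressure-time product (Pa-s) needed to push a given volume
    through a round channel, from Hagen-Poiseuille:
    V = (pi R^4 / (8 eta L)) * dP * t  ->  dP * t = 8 eta L V / (pi R^4)
    """
    return 8.0 * eta_pa_s * length_m * volume_m3 / (math.pi * radius_m ** 4)

# Illustrative, assumed numbers: a 100 mm channel of 2 mm radius
# delivering 20 cm^3 of melt at 250 Pa-s vs. 500 Pa-s.
low  = dp_times_t(250.0, 0.1, 0.002, 2e-5)
high = dp_times_t(500.0, 0.1, 0.002, 2e-5)

# Doubling the viscosity doubles the required pressure-time product:
# either twice the pressure in the same time, or the same pressure
# applied for twice as long.
print(round(high / low, 3))  # 2.0
```

The geometry factor 8LV/(πR⁴) is dimensionless, which is why the product of pressure and time comes out in Pascal-seconds, the same units as viscosity itself.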
If a molding machine is to run in a velocity-controlled mode it must be set up to provide more pressure than is needed to deliver the desired volume of material to the mold in a fixed time. In this way, when the viscosity of the material increases due to a change in molecular weight, the pressure will increase proportionally but the time will remain the same. If the time needed to deliver the material remains the same, then the shear rate remains the same and will limit any change in viscosity to the inherent effect of the higher MW. The latter contribution is relatively small when the material is flowing at velocities typical of first-stage injection.
If the pressure is limited, either by the design of the molding machine or by the manner in which the machine has been set up, then when the material viscosity increases, the time required to deliver the material to the mold becomes longer. In other words, the injection speed slows down all by itself. A slower injection speed translates to a lower shear rate and this exaggerates the difference in the viscosity of the material—just like the MFR test does.
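This exaggeration can be sketched numerically with a power-law melt model, where viscosity = K × (shear rate)^(n-1). All of the numbers here are illustrative assumptions, not data: a 10% lot-to-lot rise in the consistency index K stands in for the molecular-weight effect, and a shear-thinning index n of 0.35 is typical of many melts:

```python
def viscosity_ratio_velocity_control(k_ratio):
    """Velocity-controlled: shear rate is held constant, so the
    apparent viscosity changes only by the MW-driven change in K."""
    return k_ratio

def viscosity_ratio_pressure_limited(k_ratio, n):
    """Pressure-limited: for a power-law fluid in a fixed channel,
    flow rate at fixed pressure scales as (1/K)**(1/n). The shear
    rate drops with the flow rate, so the apparent viscosity change
    (and the fill time) is amplified to k_ratio**(1/n)."""
    return k_ratio ** (1.0 / n)

k_ratio = 1.10  # assumed 10% lot-to-lot rise in K (higher MW)
n = 0.35        # assumed shear-thinning index

print(viscosity_ratio_velocity_control(k_ratio))               # 1.1
print(round(viscosity_ratio_pressure_limited(k_ratio, n), 2))  # ~1.31
```

Under these assumed numbers, a 10% viscosity shift that a velocity-controlled process would absorb as a 10% pressure rise becomes roughly a 31% swing in apparent viscosity, and a correspondingly longer fill time, when the machine is pressure-limited, which is the same amplification mechanism at work in the MFR test.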
The MFR instrument is pressure-limited. A constant load is applied and the operator simply observes the behavior of the material. If it is true that the molding machine in the plant “notices” the difference in the MFR of the material, it is because the machine is set up like the MFR tester, to be pressure-limited.
To put it another way, a velocity-controlled process minimizes the effect of changing viscosity on the stability of the process, while a pressure-controlled process exaggerates this effect. Normal MFR variations only influence the process if the machine runs with a limit on the available pressure.
Lot changes in raw material are not the only possible cause of a change in the viscosity. If regrind is blended into the virgin material, then fluctuations in the amount of regrind or changes in the MW of that regrind can cause variations.
In addition, many polymers exhibit changes in melt viscosity due to variations in their moisture content. For example, a 30% glass-filled PET polyester dried to 50 ppm may have a viscosity 10-15% higher than the same material dried to 200 ppm. Both of these materials are considered to be adequately dry for most applications, but they may process differently, depending upon how the machine is set up. This effect is even more pronounced in nylons, and it accounts for much of the folklore in the industry about nylon materials being “too dry.”
In short, if the molding process is properly established, typical fluctuations in MFR for a given grade of material should have no significant effect on the process or the parts coming out of that process, because the conditions of the MFR test do not resemble those of the molding process until you reach the pack and hold phase. If you are doing things correctly, by that point your mold cavity or cavities should be almost (but not completely) full. A robust process, properly established in a capable molding machine, should handle changes in viscosity much larger than the typical lot-to-lot variation observed in a given grade of material.
So if the MFR test is not intended as a gauge of processability, then what is it supposed to be used for? Next month we will address this topic.