Many people have asked this question regarding roughness parameters. Ramax? Rzmax? These parameters often appear on drawings but don’t generally appear in measuring systems.
So let’s dig into this a bit.
First off “max” isn’t max
The extension “max” has nothing to do with a “maximum” or an upper limit. The extension “max” is used to invoke what is called the “max rule”. The “max rule” deactivates the ISO default “16% rule”. (More on this in a minute.) In terms of specifics, when you put “max” at the end of a parameter name, you invoke the max rule and you are saying:
“no single trace on this surface is allowed to go outside this tolerance limit”.
For example, the “max” rule can be applied to upper and lower limits (based on ISO 1302 formatting):
The above example requires that all measurements of the “Ra” parameter lie between 0.2 (lower limit) and 0.6 (upper limit) micrometers. Did you catch that last sentence where it said “measurements of the Ra parameter”? “Ra” is the parameter that we report with our measuring system. “max” is the rule that we use to judge the part as being good or bad.
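As a sketch, the max rule amounts to a simple “every trace must pass” check. The function name here is hypothetical and the Ra readings are made-up values in micrometers, tested against the 0.2/0.6 µm limits from the example above:

```python
def max_rule_ok(trace_values, lower, upper):
    """Max rule: every individual trace result must lie inside the limits."""
    return all(lower <= v <= upper for v in trace_values)

# Five hypothetical Ra readings (um) against the 0.2 / 0.6 um limits
readings = [0.31, 0.44, 0.52, 0.28, 0.39]
print(max_rule_ok(readings, 0.2, 0.6))           # every trace is inside -> good
print(max_rule_ok(readings + [0.71], 0.2, 0.6))  # one trace exceeds 0.6 -> bad
```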
Wait a minute! I thought the limits are the limits…
ISO 4288 provides “decision rules” for using surface texture measurements to determine whether a specific part is good or bad. Buried in this standard is a default rule which says, in effect, “unless otherwise specified, up to 16% of the traces on a surface are allowed to yield results that are outside the tolerance”.
In other words, if you make 100 measurements on a specific surface on a specific piece of metal, you must treat the surface as “good” if no more than 16 of the measurements are outside the tolerance limits. Yes, that means you can get lots of “bad” readings, but the part must still be considered “good”.
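In code, the default decision rule is just a percentage check. This is a sketch with a hypothetical function name and made-up readings; the 0.2/0.6 µm limits come from the earlier example:

```python
def sixteen_percent_rule_ok(trace_values, lower, upper):
    """ISO default rule: accept the surface if no more than 16% of the
    trace results fall outside the tolerance limits."""
    n_bad = sum(1 for v in trace_values if not (lower <= v <= upper))
    return n_bad / len(trace_values) <= 0.16

# 100 readings, 15 of them out of tolerance: still a "good" surface
readings = [0.4] * 85 + [0.7] * 15
print(sixteen_percent_rule_ok(readings, 0.2, 0.6))  # True
```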
Yes… this would be a good part – even if all of the orange trace locations were outside the tolerance limits:
Why 16%? Why not 17%?
The 16% comes from statistics. For a normally distributed set of results, the mean value plus one standard deviation includes about 84% of the observations, leaving about 16% beyond the 1-sigma limit:
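You can verify that figure with Python’s standard library:

```python
from statistics import NormalDist

# Fraction of a normal distribution lying above the mean + one standard deviation
above_one_sigma = 1.0 - NormalDist().cdf(1.0)
print(round(above_one_sigma * 100, 1))  # ~15.9, rounded to "16%" in the standard
```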
What to use?
For the most part, designers want to control all of the surface. Thus they are inclined to think in terms of the “max” rule. However, many surfaces have flaws or defects that might not be part of the actual “texture” produced by the manufacturing process. These flaws or defects need to be controlled. However, they may be controlled by other means… not necessarily surface texture.
For example, in the optics industry many specifications contain specific details regarding “scratches and digs”. These “flaws” are described and controlled. Thus, when measuring surface roughness, if a flaw is encountered in a specific trace, that specific trace does not have to fall inside the roughness tolerance limits.
The calibration of roundness measuring systems is often misunderstood and misapplied. There are several factors at play here. Let’s take a look at the basic concepts and shed some light on this.
Let’s start with the language…
First off, the term “calibrate” is not properly used in the world of roundness. The word “calibrate” basically means “determine what you have, as compared to the proper value.” Gage blocks are calibrated.
In roundness measuring systems we often see the word “Calibrate” or the word “Calibration” to refer to the process of “adjusting” the system. Technically, this isn’t the correct use of the word “calibration” – the proper word is “adjustment” or “correction”.
For the sake of this discussion, I’ll try to keep the concepts clear between the two.
One other bit of terminology…
The roundness measuring sensor has two parts. The electronic part that senses motion (the “probe”) and the shaft/contact that touches the part (the “stylus”). These two things work in combination with the spindle in order to make a roundness measurement. So let’s go…
What are all these things that came with my roundness gage?
Typical roundness systems come with a few extra things that can help with calibration and/or adjustment. Instruments may come with things such as:
- a precision sphere/hemisphere,
- a flick/dynamic calibration standard,
- an optical flat with gage blocks
The Precision Sphere/Hemisphere
A precision sphere or hemisphere is sometimes included in the kit with a roundness measuring system. This is typically used for testing the instrument’s spindle. The sphere/hemisphere is very round. By measuring this very round sphere/hemisphere, your measurement results are primarily made up of the instrument’s errors. Thus, if you have a vibration problem, you will see it in this measurement.
Here’s where things can go wrong. This ball is made to be a “zero reference” (as if it can be considered to be perfect in comparison to the measuring instrument), but there is still some error in the ball. The actual errors in the ball are documented on the ball’s certificate of calibration.
The calibration lab typically arrives at the ball’s certified value by using extremely accurate methods and it often includes the use of a “reversal” which is something for another blog topic.
Here’s an example measurement on a precision hemisphere as analyzed by OmniRound:
This hemisphere had a calibrated value of approximately 0.040 µm. The screen shows a roundness of 0.065 µm (when using a 50 UPR Gaussian filter). This means that the errors in the roundness instrument combine with the errors in the hemisphere to give a result of 0.065 µm. A more advanced “reversal” can be used to separate the instrument from the ball, but this requires two measurements, some very precise fixturing and software that performs the correction.
Can I “calibrate” with the ball? (The short answer is: no, no, no, no, no!)
In some cases, people have used the certified value for the sphere/hemisphere as the nominal value for a “calibration” of their roundness system. This is VERY dangerous! Remember that the word “calibrate” typically means “adjust” in a roundness system. Thus, in this example, if we “calibrated” the instrument with the hemisphere we would introduce a scale factor that reduces the 0.065 µm value down to the certified 0.040 µm value. In this example, the factor would be about 0.6, meaning all measured values would be reduced to only about 0.6 of their actual value. A part that has 10 microns of roundness would only measure about 6 microns!
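The arithmetic of that mistake is easy to sketch, using the values from the example above:

```python
certified = 0.040  # um: value on the hemisphere's calibration certificate
measured = 0.065   # um: instrument errors + hemisphere errors combined

# "Calibrating" on the hemisphere would apply this gain factor to everything:
factor = certified / measured
print(round(factor, 2))       # ~0.62

# A part with 10 um of true roundness would then incorrectly read:
print(round(10 * factor, 1))  # ~6.2 um - dangerously understated
```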
Using a precision sphere/hemisphere to “calibrate/adjust” a roundness measuring system would be the same as using an optical flat to set the gain on an electronic indicator. There isn’t enough deflection to set the gain. Spheres/hemispheres and optical flats are “zero references” not gain adjusting tools.
Verifying and adjusting the probe gain
The probe sensitivity or “gain” is a factor that must be set and controlled in the roundness measuring system. For example, if a long stylus shaft is used there is a lower sensitivity. This sensitivity is represented by a “gain” value inside the instrument that must be set via a process that the system typically calls “calibration”. (Technically it’s an “adjustment” since we are changing things.)
To adjust the probe’s gain we need to exercise more of the probe’s motion. This is typically handled via a “Flick Standard” (also called a “Dynamic Standard”) or by gage blocks. Both of these approaches allow for checking the probe gain and/or adjusting the probe gain. Both have their pros and cons.
The Flick/Dynamic Standard
The “flick” or “dynamic” specimen is typically the easiest and can also be used for a quick check to see if the measuring system is giving back the expected roundness values. The specimen has a region that is very, very round with a “flat” ground into it. The roundness value is based on the depth of the flat area.
These specimens are typically more expensive and they must be properly calibrated. Furthermore, the measuring instrument needs to process the data in the same manner that the specimen was calibrated.
The calibration (I know I’m using the word to mean “adjustment”) process with a flick specimen involves:
- Center the flick standard – taking care to avoid the flick area while centering
- Measure the standard
- Scale the measured value to match the certified value via the instrument’s software
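Conceptually, the final scaling step is just a gain correction. This is a hypothetical helper with illustrative numbers; real instruments perform this adjustment inside their own software:

```python
def adjusted_gain(current_gain, measured_value, certified_value):
    """Scale the probe gain so the measured flick value matches the
    certified value. (Hypothetical helper for illustration only.)"""
    return current_gain * certified_value / measured_value

# Example: flick certified at 3.00 um, instrument currently reads 2.85 um
new_gain = adjusted_gain(1.000, measured_value=2.85, certified_value=3.00)
print(round(new_gain, 4))  # ~1.0526: the gain must be increased slightly
```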
The Gage Block Approach
The gage block (gauge block, slip gauge, Jo-block) approach involves wringing two certified gage blocks to an optical flat. The step height between the two blocks is used to test or set the probe’s gain.
The calibration (you’re right… I’m using that word to mean “adjustment”) process with gage blocks involves:
- Level the optical flat.
- Place the measuring stylus on one block and record the height.
- Place the measuring stylus on the other block and record the height difference.
- Scale the measured height difference to match the certified height difference.
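The underlying arithmetic is again a simple ratio. The block sizes and probe reading below are illustrative only:

```python
# Hypothetical example: two certified gage blocks wrung to an optical flat
block_a = 1.000  # mm, certified length
block_b = 1.010  # mm, certified length
nominal_step = (block_b - block_a) * 1000  # certified step height, in um

measured_step = 9.70  # um: height difference the probe actually reported

# The gain correction needed so the measured step matches the certified step
correction = nominal_step / measured_step
print(round(correction, 4))  # ~1.0309
```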
Wait a minute… gage blocks steps aren’t “roundness”!
Shouldn’t I use a radial (roundness-type) measurement to “calibrate” roundness not a vertical (gage-block) measurement? Not necessarily. The gain-setting operation should isolate the probe/stylus combination and move it through a known displacement. The orientation doesn’t necessarily matter other than it should be in a stable configuration.
How often should I do this?
The roundness measuring system should be checked and potentially “adjusted” whenever:
- A new stylus or probe is inserted. (Different probe lengths and insertions lead to different sensitivities.)
- A new stylus contact angle is used. (Changing the contact angle changes the effective probe length.)
- There is doubt about the condition. (Did anything happen while I wasn’t looking?)
The most efficient way of checking the probe gain is via a “flick” or “dynamic” calibration standard. Simply measure the standard and see if an adjustment is needed.
So there you have it…
Hopefully this will take some of the confusion out of the tools for roundness calibration. Digital Metrology always welcomes your questions and comments. Feel free to contact us today!
It seems like a rather trivial topic, but let’s think about our profile graphs…
The surface texture plot is often more important than the parameter value. Sure, the parameter value is the thing that is toleranced. But when a parameter like RzDIN goes out of tolerance, can you walk out to the manufacturing line and turn the “RzDIN knob”?
In order to control a process, it is likely that you will need to see the surface. Was RzDIN out of tolerance due to dirt on the surface? Was it due to porosity? Maybe noise in the measurement? These questions cannot be answered without a profile graph.
Many people (labs, manufacturing lines) provide profile graphs with their measurements. The good ones – provide consistent, fixed scaling so that the graphs look the same from measurement to measurement. This helps highlight subtle changes. Auto-scaling should only be used as a starting point while you are figuring out what your fixed scales should be.
Take a look at these two profile plots from OmniSurf. Notice any difference?
The above plot on the left is from a milled surface. It has an Ra value of 0.417 µm. The right profile is from an optical flat and it has an Ra value of 0.002 µm. That’s a huge difference!
This difference is more apparent with consistent, fixed scaling as these graphs show:
However, there’s more to this profile plotting topic:
Most people simply plot a roughness profile – after all, roughness is usually the thing that is toleranced on the drawing. However, OmniSurf’s default graph type is “Primary + Waviness”. This is for several good reasons.
1. A roughness graph doesn’t show waviness.
Here’s a roughness profile for a leaking shaft:
Here’s the same data with the primary and waviness profiles plotted. It’s very apparent that waviness is more significant than roughness.
2. The Primary+Waviness graph shows how the filter is working.
With the P+W graph you are able to see if the filtering “fits” the shapes of the profile. The roughness filtering operation takes place on the primary profile. Thus, it makes the most sense to display the filtering operation as applied to the primary profile. In the example below, we can easily see that the 0.8 mm filter cutoff does a better job of following the shape of the surface, and thus it will do a better job of describing the shapes that ultimately caused this component to fail.
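For the curious, here is a minimal sketch of how a Gaussian profile filter (in the spirit of ISO 16610-21) produces the waviness line from the primary profile. It uses simple clamping at the profile ends rather than the standard’s end corrections, and the function name is an assumption for illustration:

```python
import math

def gaussian_waviness(profile, spacing_mm, cutoff_mm):
    """Sketch of a Gaussian profile filter: the waviness (mean) line is the
    primary profile convolved with a Gaussian weighting function whose
    50%-transmission wavelength is the cutoff. Ends handled by clamping."""
    alpha = math.sqrt(math.log(2) / math.pi)  # ~0.4697, per the Gaussian filter definition
    half = int(cutoff_mm / spacing_mm)        # truncate weights at +/- one cutoff
    weights = [math.exp(-math.pi * (k * spacing_mm / (alpha * cutoff_mm)) ** 2)
               for k in range(-half, half + 1)]
    s = sum(weights)
    waviness = []
    for i in range(len(profile)):
        acc = 0.0
        for k in range(-half, half + 1):
            j = min(max(i + k, 0), len(profile) - 1)  # clamp at the ends
            acc += weights[k + half] * profile[j]
        waviness.append(acc / s)
    return waviness
```

A longer cutoff simply widens the Gaussian weighting window, making the waviness line follow longer wavelengths and ignore finer structure.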
3. It’s easy for your eye to “subtract”
The roughness profile is “everything that is above and below the waviness profile”. When you see a graph like this, your eye can easily see what is above and what is below the waviness profile. There really isn’t a need to even plot roughness!
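That “subtraction” can be stated numerically: at every position, roughness is simply primary minus waviness, so the Primary+Waviness plot already contains everything. The numbers below are toy values chosen for illustration:

```python
# Toy values: a primary profile and the waviness (mean) line obtained
# by low-pass filtering it. The roughness profile is just the difference.
primary = [1.25, 1.75, 1.0, 0.5, 1.0, 1.5]
waviness = [1.0, 1.25, 1.25, 0.75, 1.0, 1.25]

roughness = [p - w for p, w in zip(primary, waviness)]
print(roughness)  # [0.25, 0.5, -0.25, -0.25, 0.0, 0.25]

# ...and adding the waviness back reconstructs the primary profile exactly,
# which is why plotting roughness separately adds no new information.
reconstructed = [r + w for r, w in zip(roughness, waviness)]
assert reconstructed == primary
```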
4. It’s very difficult for your eye to “add”
It is hard to visualize how this roughness profile and this waviness profile combine to form the “real” surface:
Putting the two graphs on top of each other doesn’t help very much:
However, if we plot the Primary with Waviness profiles this is our view of the surface:
With this graph we can immediately see:
– The filtered waviness profile is moving up and down with the feedrate… might be time to consider using a longer filter cutoff
– The profile has a general “U-shape”: the middle is lower than the edges. This was very hard to pick out of the other graphs.
– The Primary+Waviness graph gives a much clearer picture of the actual peak-to-valley heights of the profile features. These actual peak-to-valley heights are much higher than those indicated on the roughness profile graph.
5. Don’t fall for filtering problems
When plotting the roughness profile for a surface with deep scratches or pores we often see high peaks on each side of the scratch or pore.
Sometimes these are real. Sometimes they are caused by the filter being “pulled” into the valleys. The Primary+Waviness plot helps us know for sure:
In this case, the filter is being pulled into the valleys. The areas above the waviness profile become the artificial “peaks” in the roughness profile. This is definitely a case where robust filtering is needed.
Hopefully, this helps you make more sense of your profile graphs and ultimately make better decisions based on your measurements!
For more information contact Digital Metrology today!