Calibration Results

  • Calibration Results

    Does anyone have anything specifically that they go by to determine whether or not calibration results are acceptable?

    Since there are so many variables that can affect the accuracy of the measurement (probe, stylus diameter, length, build-up, number of connections, accuracy of the machine, etc.), I was just curious as to what people use as an accepted practice.

    We currently run Brown and Sharpe Global Image 12-22-10 machines (volumetric accuracy of ~0.008mm). I have been told by some of the operators and programmers that they typically call results of less than 0.005mm acceptable.

    Thanks in advance for the help.

  • #2
    The STD-DEV is the first thing to look at, I go for less than 0.005mm (0.0002"). Then, the next thing SHOULD be the range of size, I want the sizes to all be the 'same'. Anything bigger than nominal size, IMO, is very suspect. Smaller is OK, BUT, they SHOULD all be the 'same' size. Length and hardware will determine HOW MUCH undersize is acceptable.
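
    Those two checks (STD-DEV under a threshold, all tips the 'same' size and never oversize) could be sketched roughly like this. The function name and every threshold here are illustrative, not from any standard; plug in your own machine and probe specs:

    ```python
    # Hypothetical acceptance check for probe-tip calibration results.
    # All thresholds are illustrative; set them from your own machine and probe specs.

    def calibration_ok(results, nominal_dia, max_std_dev=0.005,
                       max_undersize=0.02, max_range=0.0254):
        """results: list of (std_dev_mm, measured_dia_mm), one entry per tip angle."""
        for std_dev, dia in results:
            if std_dev > max_std_dev:              # scatter too large
                return False
            if dia > nominal_dia:                  # oversize is suspect
                return False
            if nominal_dia - dia > max_undersize:  # too far undersize
                return False
        dias = [dia for _, dia in results]
        return max(dias) - min(dias) <= max_range  # all tips the 'same' size

    # A 6mm tip calibrated at two angles (illustrative numbers, mm):
    print(calibration_ok([(0.002, 5.994), (0.003, 5.995)], nominal_dia=6.0))  # True
    ```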


    • #3
      Right... I understand that the STD-DEV is the first thing to look at. I guess my question is more geared toward: why is less than 0.005mm what you go for? Where did you get this number? Why 0.005? Why not 0.007? Is there some sort of criteria that you used to get that number?

      Also, by 'same' size, you don't literally mean the same size, correct? Just close?

      One more question to you, can you explain why oversize IYO is suspect? Just curious.

      Thanks again for the response. Sorry so many questions.



      • #4
        As for the value I use for the STD-DEV tolerance, well, that is kind of trial and error. In a perfect world, it would be zero, BUT, all of us in quality KNOW that nothing is ever perfect (except for ourselves!). First thing to look at is the specs on your cal-tool, then the specs on the tips you buy. You can not expect to have a value SMALLER than the roundness values of the two of those added together. I'm not saying you WON'T see values smaller, but in reality, you SHOULD expect to see "perfect" calibration results equal to those 2 values added together. And, since nothing is perfect, to keep from getting errors, you will need a value BIGGER than those 2 values added together. Also take into account the machine and other hardware you are using. See? It ain't as easy an answer as you thought, is it?

        BUT, with good quality tips and a good quality cal-tool, let's just look at the hardware. A TP2 (which is what I use) is more-or-less the bottom of the line of probing units available today. So, it is the weak point in my entire setup. A TP2 is tri-lobed due to its construction, BUT, they are supposed to be good within 0.0002" (I think, it has been a LONG time, and once set, this STD-DEV doesn't need to be messed with). Anyway, I think I have it set to the TP2's specs.

        NOW, one thing to remember, the STD-DEV is NOT a roundness value, BUT, if you DOUBLE it, you will get pretty darn close to the roundness value. SO, tip to unit, I am looking for something less than 0.0004" round for my tips.

        NOW, let's look at the machine itself. My machine's OEM specs are 0.0007". I am using just over 1/2 of that for my STD-DEV. So, as long as my tips calibrate better than twice as good as my machine, I am happy. Sounds funny, don't it? I demand (and get) better results than the machine's OEM specs.

        "SAME", yeah, close, that's why I "quoted" "same". I wish they would put that in as one of the settings in the calibration routine instead of just STD-DEV and ANY SIZE. I am happy if they are within 0.001" of each other (min to max less than 0.001"). Look above about machine specs to see why.

        OVERSIZE is suspect because of the mechanics involved. The flex of the probe shaft and the delay in the triggering of the touch both go toward making the calibrated size SMALLER than the actual size, so if it says it is bigger, well, that tells me something is wrong, either with the tool or the probe.
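
        The spec-stacking arithmetic above (cal-tool roundness plus tip roundness as the floor, STD-DEV being roughly half of roundness) can be jotted down as a quick sanity check. All the spec values here are made-up examples, not real datasheet numbers:

        ```python
        # Illustrative floor for a STD-DEV tolerance, per the reasoning above.
        # Spec values below are made-up examples; use your own cal-sphere and tip specs.

        sphere_roundness = 0.0001   # inches, calibration sphere spec (example)
        tip_roundness    = 0.0001   # inches, stylus ball spec (example)

        # Best-case calibration "roundness" you could ever expect to see:
        floor_roundness = sphere_roundness + tip_roundness

        # STD-DEV is roughly half the roundness value, so the floor in STD-DEV terms:
        floor_std_dev = floor_roundness / 2

        # Nothing is perfect, so the working tolerance needs headroom above the floor.
        std_dev_tolerance = 0.0002  # inches, chosen above the floor (TP2-spec example)

        print(floor_std_dev, std_dev_tolerance > floor_std_dev)
        ```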


        • #5
          I figured that it would be complicated. Everything with these machines usually is. Thanks again for the responses to all my questions. Everything that you said makes sense to me. I just wanted some input from other users to help me make some decisions.

          Thanks again



          • #6
            Here is a down-and-dirty (literally) way to do it. Take a brand-new probe, use it to calibrate, calibrate it a second time, look at the values, take the biggest value, multiply it by 1.5 and use that.

            Now, take some grease, wipe it on the probe, and calibrate it again and see if it gives you a bad result.

            Just an idea, BUT, it does let you play with grease!
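
            A quick sketch of that rule of thumb, with made-up numbers (the helper name and the data are illustrative; the 1.5 factor is just the margin suggested above):

            ```python
            # Derive a working STD-DEV tolerance from a double-calibration of a
            # new probe, per the rule of thumb above: worst second-cal value x 1.5.

            def tolerance_from_double_cal(second_cal_std_devs, margin=1.5):
                """second_cal_std_devs: STD-DEV per tip angle from the SECOND calibration."""
                return max(second_cal_std_devs) * margin

            # Second calibration of a fresh probe (illustrative values, mm):
            print(tolerance_from_double_cal([0.0012, 0.0018, 0.0015]))
            ```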


            • #7
              Now, why the 1.5?



              • #8
                Because after a double-cal, the second cal is as good as it will EVER be, and since that is as good as it will ever be, if you use the tiny amount it shows you, a sneeze in the room will make it tell you it's bad. You need some wiggle room.


                • #9
                  Originally posted by Matthew D. Hoedeman View Post
                  A TP2 (which is what I use) is more-or-less the bottom of the line of probing units available today. So, it is the weak point in my entire set up. A TP2 is tri-lobed due to its construction.
                  Matt,
                  Do you think TP20 is more accurate than TP2?


                  • #10
                    It is 'supposed' to be, else why would they have made a "new and improved" unit? BUT, the accuracy of your unit (TP2, TP20, TP200) will never be better than the accuracy of the machine. Also, I have my doubts (especially when monkeys are involved) about the real-world accuracy of ANY unit that is magnetically held in place (is the TP20 held on with magnets?). We all know how useless the monkeys are 99.9% of the time, and it sure don't take much of a piece of ANYTHING (lint, grit, grease, etc.) to screw up the mounting surface of a magnetically attached unit. Doing what I do, the TP2 has never been less than perfectly acceptable. The tri-lobe effect, can't even really see it on this machine. No matter what the specs of the machine, for example, THIS machine's OEM specs were 0.0007", well, the scales that came on the machine from the OEM, as well as the controller, had as their smallest increment 0.0008" (yes, I know this for a fact). So, since the smallest increment it could see was 0.0008", how did they come up with a spec of 0.0007"?

                    One thing about the TP2, the user has at his command the full ability to adjust the tension of the trigger, all the way down to where a 10x1 probe won't stay un-triggered, up to locked right down (NOT a good thing to do). So, I can go with a 2x10 tip on it, OR I can go up to 75mm of total probe build-up w/ a 6mm ball (that's about the limit, too). NOW, when you do that, you MUST take into account WHAT you are measuring and you have to be careful. I sure wouldn't do anything with a 0.010" or less tolerance. I will see that 6mm ball calibrate about 0.007 or 0.008" smaller than it will if it were just a 6x20, but, slow-n-steady and it works, and all with a single unit.


                    • #11
                      Well stated, Matt. I especially like the part about the scales, and how they came up with .0007" when the increments are .0008". Bonobo math.

                      Everyone in here knows about your fascination with grease.


                      • #12
                        Originally posted by Matthew D. Hoedeman View Post
                        One thing about the TP2, the user has at his command the full ability to adjust the tension of the trigger, all the way down to a 10x1 probe won't stay un-triggered up to locked right down (NOT a good thing to do). So, I can go with a 2x10 tip on it OR I can go up to 75mm of total length probe build up w/ 6mm ball (that's about the limit, too). NOW, when you do that, you MUST take into account WHAT you are measuring and you have to be careful. I sure wouldn't do anything with a 0.010" or less tolerance. I will see that 6mm ball calibrated about 0.007 or 0.008" smaller than it will if it were just 6x20, but, slow-n-steady and it works, and all with a single unit.
                        And how do you adjust the tension?