Thursday, September 14, 2023

Oops, my bad—tools CAN'T always calibrate each other!

I goofed last week. I said you can have two tools calibrate each other, and I didn't put any restrictions around that. I was wrong.

You remember the whole topic was whether two tools can calibrate each other. The question I asked was this: "Suppose you calibrate some of your own tools in-house, instead of sending them out. And suppose that when you calibrate Tool-1, you use Tool-2. Normally there's nothing wrong with that, so long as Tool-2 itself is also correctly calibrated. But back when you calibrated Tool-2, you did it using Tool-1. Is that a problem?"

I argued that you can do this, within the parameters of quality system standards like ISO 9001 and ISO 17025. And right away I got helpful feedback from commenters on LinkedIn telling me, "Not so fast!"

Christopher Paris pointed out a technical issue I had forgotten. When I described the calibration history of Tool-1 and Tool-2, I traced them both back to the day you bought them from the manufacturer. I assumed you got a certification from the manufacturer at that point. But Chris observed "that the original calibration certificate from the manufacturer is rarely traceable to national/international standards. It's typically some basic certificate that doesn't really provide much information. So tracking back to that doesn't get you full compliance to ISO 9001. If the OEM's cert doesn't list traceable standards used to calibrate the device, then the device still has to be subject to a third-party lab or some other traceable calibration."

So yes, I accept that correction. Tool-1 and Tool-2 both have to be calibrated at the beginning in a way that is traceable to the correct international standards.

But what about the part where you then use the tools to support each other? What about the way that they leapfrog one another on and on into the future forever?

Scott Kruger and John Schultz each flagged this as an improper use case, and we had some helpful discussion of the practical reasons why. But I wanted chapter and verse. If this is a bad practice, the relevant standards should forbid it; either that, or there's a hole in the standards and someone is going to exploit it.

I finally found it, but I had to dig. ISO 17025, clause 6.5.1, states: "The laboratory shall establish and maintain metrological traceability of its measurement results by means of a documented unbroken chain of calibrations, each contributing to the measurement uncertainty, linking them to an appropriate reference." [Emphasis mine.]

Let's apply this to my thought experiment in last week's post. Here's what I wrote then.

Remember that when you calibrate any tool, that measurement typically has a validity period. Maybe it's valid for one year. So let's say you calibrate Tool-1 every January, and that calibration is good from January to December. Then you calibrate Tool-2 every July, and that calibration is good from July to June.

Last month was July 2023. Time to calibrate Tool-2.

So you pull out Tool-1. Is it a valid tool to use? Check the sticker. It was calibrated in January 2023, and is good through December 2023. So it must be good to use.

But wait. Let's check the paperwork to make sure. According to the paperwork, when we calibrated it in January 2023 we used Tool-2. Hold on! Isn't that the same tool we're trying to check right now?

No. It's not.

The tool we are trying to check "right now" (meaning last month, when I've set this story) is "Tool-2-as-of-July-2023." The tool we used last winter back when we were calibrating Tool-1 was "Tool-2-as-of-January-2023." If you look at it right, those should count as different tools...

Stop right there.

What I should have seen is that as soon as I treat "Tool-2-as-of-July-2023" and "Tool-2-as-of-January-2023" as different tools, I've got a problem. What does Tool-2's unbroken chain of calibrations look like?

July 2023: Tool-2-as-of-July-2023 was calibrated by Tool-1-as-of-January-2023.

January 2023: Tool-1-as-of-January-2023 was calibrated by Tool-2-as-of-July-2022.

July 2022: Tool-2-as-of-July-2022 was calibrated by Tool-1-as-of-January-2022.

January 2022: Tool-1-as-of-January-2022 was calibrated by Tool-2-as-of-July-2021.

And so on.

Every single one of those measurements introduced an uncertainty. Maybe it was small, but it was there.

Just to make things simple, let's pretend the additional uncertainty is the same each time. (In real life, it might not be.) Call that uncertainty ε. [That's a Greek epsilon.] Then every year that you play this leapfrog game, the uncertainty for each tool increases by 2ε (adding one in January and one in July). If you've been using Tool-1 and Tool-2 to calibrate each other for ten years, you have added 20ε to the uncertainty of each one. 
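Just to make that arithmetic concrete, here is a little Python sketch. It's purely my own illustration, not from any standard: the value of ε is made up, and it uses the same simplifying assumption as above (a straight sum of identical contributions, which is cruder than a real uncertainty budget).

    # Illustration only: counting up the leapfrog chain. EPSILON is a made-up
    # per-calibration uncertainty contribution, not real calibration data.
    EPSILON = 0.001          # uncertainty added by each in-house calibration
    LINKS_PER_YEAR = 2       # one calibration in January (Tool-1), one in July (Tool-2)

    for year in range(1, 11):
        links = LINKS_PER_YEAR * year
        added = links * EPSILON
        print(f"After {year:2d} year(s): {links:2d} links, added uncertainty = {added:.3f}")

    # After ten years the chain behind each tool carries 20 links,
    # i.e. 20 * EPSILON of added uncertainty, as in the text.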

When does that accumulated uncertainty become too much? At what point does it make the tool worthless?

It all depends on what you normally use your tools for. How small an uncertainty do you need? Maybe if the tools start off a lot better than you actually need, you can get away with it for a while. But you must need some level of precision and accuracy, or you wouldn't bother calibrating your tools at all. And you probably didn't spend the extra money on tools that were 100x more precise and accurate than you really needed. So you can't really play this game for very long. Certainly not forever.
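If you want a rough feel for how long "a while" is, you can run the same numbers against a tolerance. Again, this is only a sketch with assumed values: a starting uncertainty from the original traceable calibration, and the largest uncertainty your measurements can tolerate.

    # Illustration only: how many years of leapfrogging before the accumulated
    # uncertainty uses up your margin. All three values are assumptions.
    EPSILON = 0.001               # uncertainty added per in-house calibration
    initial_uncertainty = 0.002   # from the original, traceable calibration
    required_uncertainty = 0.010  # the most your measurements can tolerate

    margin = required_uncertainty - initial_uncertainty
    years_of_slack = margin / (2 * EPSILON)   # the chain grows by 2 links per year

    print(f"Margin to burn: {margin:.3f}")
    print(f"Years before the tools are no longer good enough: {years_of_slack:.1f}")

With those made-up numbers you get about four years of slack, which is exactly the point: you can get away with it for a while, but not forever.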

As I say, I goofed. I was wrong, and I'm grateful for the corrections. Thank you, all. 

           
