What Happens When You Fail to Calibrate Your Flowmeters?

Failure to calibrate flowmeters can degrade performance, while calibrating too frequently incurs excessive cost with no added benefit. So the question is, how do you determine whether calibration is needed and how frequently it should be performed? Download this white paper to find out the answer!



  • <p>Paul, I enjoyed the Endress &amp; Hauser white paper on calibration. I'd like to see a better explanation of their 'verification tool.' It appears to be an ad hoc device that is periodically attached to the instrument to read internal parameters and determine whether the instrument needs calibration. That would make it brand specific. Also, it appeared that it 'tests' the flow instrument by disturbing the process with a known perturbation from given standards.</p> <p>Periodic, brand specific, flow only, and intrusive are not the characteristics of an ideal solution. I'd like to re-familiarize you with a couple of articles Putman's Pharmaceutical Magazine was kind enough to run in the past (Oct. 2008 and Jan. 2011) that suggested a more universal solution using on-line, non-intrusive statistical analysis tools.</p> <p>You buy an instrument for *one* purpose: its accurate assessment of the sensed parameter. An instrument's accuracy is a function of the manufacturer's design and correct installation. Its veracity is a function of its calibration. How precise and how certain you need to be affects the cost-benefit analysis of any solution. The E&amp;H white paper is a nice starting point, but there is a lot more exciting engineering to follow. Precision and certainty and the cost to perform difficult calibrations should drive the instrumentation community towards seeking *continuous*, *on-line*, *non-intrusive* solutions.</p>


  • <p>Thanks for your very interesting observations about the Endress + Hauser white paper! I reviewed your articles in Pharmaceutical Manufacturing and must admit, they describe an approach I had not heard about in 20 years covering process control and industrial maintenance. Are there commercial implementations out there, and if so, in industries other than pharmaceutical?</p>


  • <p>Paul, Thanks for getting back to me on the instrument calibration white paper.</p> <p>The techniques described in the two referenced 'Pharmaceutical Magazine' articles are based on textbook statistical formulas. They are used extensively in Six Sigma programs, where I used to be a certified Six Sigma Black Belt. The problem I uncovered after completing my Six Sigma stint and returning to a line position as Validation Manager on a pharmaceutical manufacturing construction site was that approximately 75 to 90 percent of all FDA compliance calibrations find that the instrument is operating within spec and did not need calibration. This is a tremendous cost, because validated calibrations require additional documentation. All this cost is then passed on to the consumer (and we wonder why medical costs are so high).</p> <p>The proposed solution in the referenced papers is not cheap. In the first paper, the ideal solution requires a second instrument to be installed, which increases the initial capital cost but dramatically reduces the annual metrology budget. However, using the Shewhart and Deming statistical control chart mathematics from the 1930s, you can identify a failing instrument before it goes out of spec and impacts product quality. This eliminates superfluous calibrations and increases product quality by detecting problems before off-spec product is produced. In the second paper, different statistical algorithms are used to identify a change in behavior of a single instrument that could be used to predict the need for a calibration. This second approach is much cheaper because it does not require a second instrument, but it is not nearly as deterministic as the first.</p> <p>It is most interesting to note that both approaches are generic - flow, temperature, pressure, specific gravity, turbidity, etc., and Brand A, Brand B, Brand C, etc. - non-intrusive, and require no modification to existing instruments. They are pure math that resides somewhere on the facility's network and sends need-for-calibration messages to the calibration department.</p> <p>Paul, you asked what other applications outside the pharmaceutical industry might also find this attractive. Three come to mind immediately. First, wherever the cost of the calibration is high or impossible to perform on demand but the accuracy of the physical parameter is critical: for example, inside a nuclear reactor or a blast furnace, or in a remote, inaccessible location like the Alaskan pipeline, an offshore oil rig or pump station, or an orbiting satellite.</p> <p>Second, custody-transfer meters, where an oil refinery has a meter on its side of the fence passing feedstock, say ethanol, to a chemical plant next door. Typically, the chemical plant will have an instrument on its side of the fence as well, and each month the two companies must reconcile the different instrument readings and balance-of-plant inventories before they can agree on an invoice amount. Providing a mutual calibration platform will reduce finger pointing, increase customer satisfaction, and shorten the payment cycle.</p> <p>The third is safety systems. There is a school of thought that believes in triple-mode redundancy: that is, install three instruments for each critical parameter and feed their signals into a safety system computer that has three relatively independent sets of circuits with a voting mechanism to determine if all three inputs are equal. If one differs from the other two, the oddball is ostracized and the safety system determines the 'safeness of the process' as a function of the remaining two inputs. The new statistical technology discussed here suggests that you only need two instruments; if one fails, the system will recognize which one failed and determine the 'safeness of the system' based on the good one. In both cases, the system recognizes a single failure, alarms it to the operator, and continues to function in a degraded mode. Neither system can continue to determine safeness when it is hit by a second failure, and both execute an orderly process shutdown when that happens. If you accept the parity of the two approaches, then you will be pleasantly surprised to realize that the statistical approach requires one-third fewer instruments (and wiring and maintenance) than a triple-mode redundant system. Since safety systems are nonproductive, process plant owners should be intrigued by the prospect of reducing this sunk cost by one third.</p> <p>Paul, like I wrote before, there is a lot of exciting engineering ahead in this arena. Enjoy the view from your editor-in-chief chair.</p>
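The Shewhart control-chart idea the commenter describes can be sketched in a few lines: watch the difference between a primary and a redundant instrument, establish control limits from an in-control baseline, and flag readings that drift outside them before either meter goes out of spec. This is a minimal illustration, not the method from the referenced articles; the baseline and drift values below are invented for demonstration.

```python
import statistics

def shewhart_limits(baseline, sigma_mult=3.0):
    """Center line and 3-sigma control limits from an in-control baseline."""
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return center - sigma_mult * sigma, center, center + sigma_mult * sigma

def flag_out_of_control(diffs, lcl, ucl):
    """Indices of difference readings that fall outside the control limits."""
    return [i for i, d in enumerate(diffs) if d < lcl or d > ucl]

# Baseline: differences between the primary meter and its redundant partner
# while both are known to be in spec (hypothetical values).
baseline = [0.02, -0.01, 0.00, 0.03, -0.02, 0.01, 0.00, -0.03, 0.02, -0.01]
lcl, center, ucl = shewhart_limits(baseline)

# New readings: the last two points show the primary meter drifting away
# from its partner - a need-for-calibration signal, long before a failure.
new_diffs = [0.01, -0.02, 0.00, 0.15, 0.22]
alarms = flag_out_of_control(new_diffs, lcl, ucl)  # -> [3, 4]
```

A production version would add the usual Western Electric run rules (trends, runs on one side of the center line) to catch slow drift even earlier, but the cost structure is as described: pure math on the network, no extra field hardware beyond the second instrument.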
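The triple-mode-redundancy voting scheme mentioned for safety systems can likewise be sketched simply: compare the three readings pairwise within a tolerance, ostracize the oddball, and carry on with the agreeing pair. This is a generic illustration of 2-out-of-3 voting, not any vendor's safety-system logic; the tolerance and readings are assumed values.

```python
def vote_2oo3(a, b, c, tol=0.5):
    """2-out-of-3 voter: return (trusted value, index of failed channel).

    failed is None when all three agree; (None, None) means no quorum,
    which a safety system would treat as a trip condition.
    """
    close_ab = abs(a - b) <= tol
    close_ac = abs(a - c) <= tol
    close_bc = abs(b - c) <= tol
    if close_ab and close_ac and close_bc:
        return sorted([a, b, c])[1], None   # all agree: take the median
    if close_ab:
        return (a + b) / 2, 2               # channel c is the oddball
    if close_ac:
        return (a + c) / 2, 1               # channel b is the oddball
    if close_bc:
        return (b + c) / 2, 0               # channel a is the oddball
    return None, None                       # no quorum: orderly shutdown

value, failed = vote_2oo3(100.1, 99.9, 104.7)  # third channel ostracized
```

The statistical two-instrument alternative replaces the third channel with the drift-detection math above, which is where the one-third hardware saving comes from.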

