How does calibration affect precision?

There are several circumstances where the calibration curve will not reduce or eliminate bias as intended. Some are discussed on this page. A critical exploratory analysis of the calibration data should expose such problems. Poor instrument precision or unsuspected day-to-day effects may result in standard deviations that are large enough to jeopardize the calibration. There is nothing intrinsic to the calibration procedure that will improve precision, and the best strategy, before committing to a particular instrument, is to estimate the instrument's precision in the environment of interest to decide if it is good enough for the precision required.
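
As a rough sketch of that pre-commitment check, the following Python snippet (with invented readings and a hypothetical `required_sd` requirement) estimates an instrument's repeatability from replicate readings of a single check standard:

```python
import numpy as np

# Hypothetical replicate readings of one check standard, taken under
# the conditions of interest (same operator, same environment).
readings = np.array([10.012, 10.008, 10.015, 10.011, 10.006,
                     10.013, 10.009, 10.014, 10.010, 10.007])

# Repeatability: the standard deviation of the replicates.
repeatability_sd = readings.std(ddof=1)

# Hypothetical precision requirement for the application.
required_sd = 0.002

print(f"estimated repeatability sd: {repeatability_sd:.4f}")
if repeatability_sd > required_sd:
    print("precision is not adequate; calibration will not fix this")
else:
    print("instrument precision meets the requirement")
```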

Outliers in the calibration data can seriously distort the calibration curve, particularly if they lie near one of the endpoints of the calibration interval.

Isolated outliers (single points) should be deleted from the calibration data. An entire day's results that are inconsistent with the rest of the data should be examined and rectified before proceeding with the analysis.
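
One standard way to screen for such points, sketched below in Python with invented data, is to fit a provisional straight line and flag observations whose residuals are large relative to a robust (MAD-based) scale; the 3-sigma cutoff is an illustrative choice, not a rule from the text above:

```python
import numpy as np

# Hypothetical calibration data: reference values x and instrument readings y.
# The last point is a deliberate gross outlier at the top of the interval.
x = np.arange(11, dtype=float)
y = np.array([0.01, 1.00, 1.99, 3.01, 4.00, 4.99,
              6.01, 7.00, 7.99, 9.01, 11.00])

# Provisional straight-line fit y = b0 + b1 * x.
b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)

# Robust residual scale (median absolute deviation), so the outlier
# does not inflate its own yardstick.
scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))

for xi, yi, r in zip(x, y, residuals / scale):
    flag = "  <-- examine before fitting" if abs(r) > 3.0 else ""
    print(f"x={xi:4.1f}  y={yi:6.2f}  scaled residual={r:+5.2f}{flag}")
```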

It is possible for different operators to produce measurements with biases that differ in sign and magnitude. This is not usually a problem for automated instrumentation, but for instruments that depend on line of sight, results may differ significantly by operator.

To diagnose this problem, measurements by different operators on the same artifacts are plotted and compared. Small differences among operators can be accepted as part of the imprecision of the measurement process, but large systematic differences among operators require resolution.
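
A minimal sketch of that diagnosis, assuming two operators have each measured the same set of artifacts (all numbers invented): paired differences isolate the operator effect, and the mean difference is then compared with its standard error:

```python
import numpy as np

# Hypothetical readings by two operators on the same five artifacts.
operator_a = np.array([5.01, 7.52, 10.03, 12.49, 15.02])
operator_b = np.array([5.06, 7.58, 10.08, 12.55, 15.07])

# Paired differences remove artifact-to-artifact variation.
diff = operator_b - operator_a
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(len(diff))

print(f"mean operator difference: {mean_diff:+.3f} (std. error {se:.3f})")
if abs(mean_diff) > 2 * se:
    print("systematic difference between operators; resolve before calibrating")
else:
    print("difference is within the imprecision of the measurement process")
```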

Possible solutions are to retrain the operators or to maintain separate calibration curves by operator. Calibration may also involve adjusting the instrument to bring it into alignment with the standard.

Calibration is worth the effort: even the most precise measurement instrument is of no use if you cannot be sure that it is reading accurately, or, more realistically, if you do not know what its error of measurement is.

Calibration is done by checking the instrument against known reference standards that have themselves been calibrated in an unbroken chain of measurements traceable back to the agreed international system of SI units, for example the volt, ampere, watt, metre, and litre.

Demonstrable control of measurement and test equipment is also a requirement of quality-management standards such as ISO 9001. Part of this is ensuring that instruments are calibrated on a rational periodic cycle and that records are maintained and reviewed. Traceability is best assured by using a calibration laboratory accredited to ISO/IEC 17025. Accreditation requires laboratories to demonstrate competence in both the technical aspects of the measurements and the quality-assurance aspects of the service: you will get the service you ask for if you have specific requirements, and a useful, valid certificate and set of results if you leave the detailed requirements to the laboratory.

How often an instrument should be calibrated depends on how important the measurements are to your product or service; the degree of wear and tear the instrument will experience in service; the stability of the instrument itself; and a review of the calibration records that already exist, to determine whether adjustment has been needed previously.

OTC recommends a starting calibration interval of 12 months for most instruments, shortening the interval to 6 or 9 months if adjustment is required, and extending it to 2 years once a sequence of annual calibrations has shown that no adjustment has been needed.
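
That rule of thumb is small enough to write down as a scheduling function; the 12-, 6-to-9-, and 24-month figures come from the recommendation above, while the choice of 6 months (rather than 9) and the three-calibration "clean run" needed to extend the interval are illustrative assumptions:

```python
def next_interval_months(history):
    """Suggest the next calibration interval, in months, from a history
    of calibration outcomes (True = adjustment was needed, most recent
    last). Encodes the recommendation quoted above: start at 12 months,
    shorten when adjustment is required, extend to 24 months after a
    clean run of annual calibrations.
    """
    if not history:
        return 12                      # starting periodicity
    if history[-1]:
        return 6                       # adjustment needed: calibrate more often
    if len(history) >= 3 and not any(history[-3:]):
        return 24                      # sustained stability: extend the interval
    return 12


print(next_interval_months([]))                     # 12 -> new instrument
print(next_interval_months([False, True]))          # 6  -> drifted at last check
print(next_interval_months([False, False, False]))  # 24 -> stable history
```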

It is very difficult to judge the performance of your instrument without a set of calibration results.


