Error analysis

Overview

A thorough, correct, and precise discussion of experimental errors is the core of a superior lab report, and of life in general. This page will help you understand and clearly communicate the consequences of experimental error.

What is experimental error?

The goal of a measurement is to determine an unknown physical quantity Q. The measurement procedure you use will produce a measured value M that in general differs from Q by some amount E. Experimental error, E, is the difference between the true value Q and the measured value M: E = Q − M.

“Experimental error” is not a synonym for “mistake,” although mistakes you make during the experiment can certainly result in errors.

Sources of error

Error sources are root causes of experimental errors. Some examples of error sources are: shot noise, electromagnetic interference, and miscalibrated instruments.

Error sources fall into three categories: fundamental, technical, and illegitimate. (Inherent is a synonym for fundamental.) Fundamental error sources are physical phenomena that place an absolute lower limit on experimental error. Experimental errors introduced by technical error sources can (at least in theory) be reduced by improving the instrumentation or measurement procedure, a proposition that frequently involves spending money. Illegitimate errors are mistakes made by the experimenter that affect the results. There is no excuse for those.

Pentacene molecule imaged with an atomic force microscope.[1]

Classify error sources into the three categories based on the way they affect the measurement. In order to come up with the correct classification, you must think each source all the way through the system: how does the underlying physical phenomenon manifest itself in the final measurement? For example, many measurements are limited by random thermal fluctuations in the sample. It is possible to reduce thermal noise by cooling the experiment. Physicists cooled the pentacene molecules shown at right to about 5 K in order to image them so majestically with an atomic force microscope. But not all measurements can be undertaken at such low temperatures. Intact biological samples do not fare particularly well anywhere near 5 K. Thus, thermal noise could be considered a technical source in one instance (pentacene) and a fundamental source in another (most measurements of living biological samples). There is no hard and fast rule for classifying error sources. Consider each source carefully.

Types of errors: random and systematic errors

Imagine you are conducting an experiment that requires you to swallow a large, orange, polka-dotted pill and take your temperature every day for a month. You have two instruments available: an analog thermometer and a digital thermometer. Both thermometers came with detailed specifications.

Accuracy versus precision.

The specification sheet for the analog thermometer says that it may have an offset error of up to two degrees. “Offset error” means that all measurements differ from their true value by the same amount. (This is also called zero-point error.) In other words, E = K_offset. K_offset is unknown to you and can take on any value between −2°C and +2°C. Assume you don't have an accurate temperature standard you can use to find the value of K_offset.

The specification of the digital thermometer says that it has 0°C offset, but noise in its amplifier causes the reading to vary randomly around the true value. The variation has an approximately Gaussian distribution with an average value of 0°C and a standard deviation of 2°C. The analog thermometer's specification states that it has a standard deviation of 0°C.
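
The two specifications are easy to turn into a simulation. The Python sketch below is illustrative, not part of the original text; the true temperature of 37°C and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

true_temp = 37.0  # assumed true body temperature, °C
n_days = 30

# Analog thermometer: one fixed offset, drawn once, applied to every reading.
k_offset = rng.uniform(-2.0, 2.0)
analog = np.full(n_days, true_temp + k_offset)

# Digital thermometer: no offset, independent Gaussian noise (sigma = 2 °C) per reading.
digital = true_temp + rng.normal(0.0, 2.0, size=n_days)

print(f"analog:  mean = {analog.mean():.2f} °C, std = {analog.std(ddof=1):.2f} °C")
print(f"digital: mean = {digital.mean():.2f} °C, std = {digital.std(ddof=1):.2f} °C")
```

Running it shows the analog readings identical to one another but biased by the unknown offset, while the digital readings scatter around the true value.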

In the context of observational errors, the terms precise and accurate have specific meanings. The analog thermometer is precise. The digital thermometer is accurate.

Which thermometer should you use?

The answer is: it depends.

Raw temperature data could be used in a variety of ways. It is easy to imagine experimental hypotheses that involve the average of, the change in, or the variance of your temperature. The accurate and precise thermometers cause different kinds of errors in each of these three circumstances, as the sketch below illustrates.
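
To see why it depends, extend the simulation to the three analyses. This is again an illustrative sketch: the true temperature is assumed constant at 37°C, so the true change and true variance are both zero.

```python
import numpy as np

rng = np.random.default_rng(0)
true_temp = 37.0  # assumed constant true temperature, °C
n_days = 30

analog = np.full(n_days, true_temp + rng.uniform(-2.0, 2.0))  # fixed unknown offset
digital = true_temp + rng.normal(0.0, 2.0, size=n_days)       # fresh Gaussian noise daily

for name, readings in (("analog", analog), ("digital", digital)):
    print(f"{name:7s} mean = {readings.mean():6.2f}  "
          f"change = {readings[-1] - readings[0]:6.2f}  "
          f"variance = {readings.var(ddof=1):5.2f}")
```

The analog thermometer biases the mean but reports the change and the variance perfectly, because a constant offset cancels in any difference. The digital thermometer gets the mean about right but scrambles the change and inflates the variance. Neither instrument is better in all three cases.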


Because nearly all measurements involve error sources that introduce random variation, repeated measurements rarely give identical values. One way to refine a measurement that exhibits random variation is to average N measurements: M = ⟨M_i⟩, i ∈ 1…N. If the measurement error is really random, the error terms E_i = Q − M_i will sometimes be positive and other times negative; sometimes large and other times small. When multiple measurements are averaged, the errors will tend to cancel each other. Averaging several measurements increases the precision of the result at the expense of measurement bandwidth. In other words, it takes longer to make the measurement.

Some error sources result in measurement errors that do not decrease when multiple measurements are averaged. These are called systematic errors. An example of a systematic error source is a scale that reads five pounds too light all the time. This is called a zero-point or offset error. Systematic errors reduce the accuracy of a measurement.

Bottom line: the magnitude of random errors tends to decrease with larger N; the magnitude of systematic errors does not.
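
This bottom line is easy to check numerically. In the sketch below (assumed values: true value Q = 100, random noise with σ = 2, and a systematic offset of +1), the random contribution to the error shrinks as more readings are averaged, while the offset survives untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

q = 100.0      # true value (illustrative)
sigma = 2.0    # standard deviation of the random error
offset = 1.0   # systematic (zero-point) error: every reading is 1 unit too high

for n in (1, 10, 100, 1000, 10000):
    readings = q + offset + rng.normal(0.0, sigma, size=n)
    m = readings.mean()
    print(f"N = {n:5d}   M = {m:8.3f}   E = Q - M = {q - m:+.3f}")
```

As N grows, E converges to −1 (the negative of the offset) rather than to zero: averaging removes the random error but leaves the systematic error intact.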

The central limit theorem provides a mathematical model for averaging multiple measurements. Informally stated, when you add independent random variables, their variances add and the distribution of their sum approaches a Gaussian.

According to the central limit theorem, the uncertainty in your estimate of Q in most cases decreases in proportion to the square root of the number of measurements you average, N. Averaging multiple measurements increases the precision of a measurement. Because the increase in precision is proportional to the square root of N, averaging is frequently a resource-intensive way to achieve precision. You have to average one hundred measurements to get a single additional significant digit in your result. The central limit theorem is your frenemy. It offers an elegant model of the benefit of averaging multiple measurements, but it could also have been called the Inherent Law of Diminishing Returns of the Universe. Each time you repeat a measurement, the value added by your hard work diminishes.
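
In symbols: if each of the N averaged measurements has standard deviation σ, the standard deviation of their mean is σ/√N, so one extra significant digit (a tenfold reduction in uncertainty) costs a hundredfold more measurements.

```latex
% Standard deviation of the mean of N independent measurements:
\sigma_M = \frac{\sigma}{\sqrt{N}}
% One additional significant digit requires a tenfold reduction:
\frac{\sigma}{\sqrt{N}} = \frac{\sigma}{10}
\quad\Longrightarrow\quad N = 100
```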

If you are measuring your body mass index, which is equal to your mass in kilograms divided by the square of your height in meters, using the scale from the example above that reads five pounds too light, your result M will be smaller than the true value Q. Your result will also include random variation from other sources. Averaging multiple measurements will reduce the contribution of random errors, but the measured value of BMI will still be too low. No amount of averaging will correct the problem.
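
A quick worked example (the mass and height are illustrative; five pounds is about 2.27 kg):

```python
KG_PER_LB = 0.4536

true_mass_kg = 70.0             # illustrative true mass
height_m = 1.75                 # illustrative height
scale_error_kg = 5 * KG_PER_LB  # scale reads five pounds (~2.27 kg) too light

true_bmi = true_mass_kg / height_m**2                         # ~22.9
measured_bmi = (true_mass_kg - scale_error_kg) / height_m**2  # ~22.1

print(f"true BMI = {true_bmi:.1f}, measured BMI = {measured_bmi:.1f}")
```

Adding random noise and averaging many readings would tighten the measured value around 22.1, not around the true 22.9.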

Types of errors

Systematic errors affect accuracy. Random errors affect precision.

Sample bias

Quantization error

Accuracy and precision

Experimenters usually worry about two types of error in measurements: random variation and systematic bias.


References

  1. Gross et al., "The Chemical Structure of a Molecule Resolved by Atomic Force Microscopy." Science, 28 August 2009. DOI: 10.1126/science.1176210.