Essential idea: Scientists aim to design experiments whose measurements come as close as possible to a "true value", but because measuring devices have limited precision, results are quoted with an associated uncertainty.
Understandings: Random and systematic errors; Absolute errors
Applications and skills: Explaining how random and systematic errors can be identified and reduced
Data booklet equations: None
Experiments and observations in physics involve a lot of measurement taking. We wish our measurements to be as close as possible to the true value. We want our measurements to be precise and accurate.
Precise measurements are close to each other. When making several measurements of the same situation they are precise if we get the same, or nearly the same, result every time.
Accurate measurements are close to the actual value. Repeated measurements might be spread out, but on average they will be close to the true figure.
High precision but low accuracy
High accuracy but low precision
For some measurements you will get a slightly different reading every time you repeat them, for example when measuring the height of a bouncing ball. This can be caused by minor changes in the controlled conditions, or by small, random inaccuracies in the use of the measuring tools or equipment.
As a minimum, the random error is taken as half the smallest scale division of the measuring tool, or the full smallest division if the tool is digital. e.g. If a balance reads to 0.01 g, then use ±0.01 g. But your process may introduce greater errors - e.g. timing by hand with a stopwatch is unlikely to be accurate to better than ±0.3 s.
To compensate for random errors, repeat your measurements and take an average.
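A minimal sketch of this in Python, using hypothetical bounce-height readings (the values are illustrative, not real data):

```python
# Hypothetical repeated measurements of a bounce height, in cm.
readings = [42.1, 41.8, 42.4, 42.0, 41.9]

# Averaging compensates for random error: overestimates and
# underestimates tend to cancel out across repeats.
mean = sum(readings) / len(readings)
print(round(mean, 2))  # → 42.04
```

The same calculation is what `AVERAGE()` does in Google Sheets or Excel.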
As described above, you should have calculated the best possible precision you could obtain, from the tools and process you used. However, once you have a complete set of data, analysis of the results can give a better sense of an appropriate uncertainty.
In this method you look at all your trials for the same value of the independent variable (in other words, look along the row). Find the largest value recorded and subtract the smallest value from it. This gives you the range. Divide the range by two to give your uncertainty. Remember you should only quote uncertainties to one or two significant figures.
This method is probably best suited to experiments with five trials or less.
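The half-range method above can be sketched in a few lines of Python. The trial values here are hypothetical pendulum periods, in seconds:

```python
# Hypothetical trials for one value of the independent variable.
trials = [2.01, 1.98, 2.05, 1.97, 2.03]

# Uncertainty = (largest value - smallest value) / 2
half_range = (max(trials) - min(trials)) / 2
print(round(half_range, 2))  # range 0.08 s → uncertainty ±0.04 s
```

So this set of trials would be quoted as 2.01 ± 0.04 s (mean ± half-range).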
The standard deviation is a statistical quantity that can be calculated for a set of data with a random error (the error must be random to give a meaningful result). The mechanics of standard deviation are a matter for the mathematics courses, but spreadsheets such as Google Sheets or Microsoft Excel offer it as a function. Given a truly random distribution and a large enough data set, standard deviation is probably the better method.
This method works best for data sets with at least six trials of each variation of the independent variable.
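A short sketch using Python's standard library, again with hypothetical pendulum-period trials; `statistics.stdev` computes the same sample standard deviation as the `STDEV()` function in Google Sheets or Excel:

```python
import statistics

# Hypothetical trials (s) - at least six, as this method prefers.
trials = [2.01, 1.98, 2.05, 1.97, 2.03, 2.00, 2.02, 1.99]

# Sample standard deviation, used as the uncertainty estimate.
sigma = statistics.stdev(trials)
print(round(sigma, 3))
```

As with the half-range method, the result would still be rounded to one or two significant figures before quoting.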
Below is the data for the Pendulum Lab used in the vodcast as a Google Sheet. Make your own copy to follow along with the video or make your own version.
Systematic errors, like random errors, cause a deviation between the measurement and the true value, but unlike random errors they carry a consistent bias: every reading is shifted in the same direction by the same amount.
Systematic errors can be caused by broken, faulty or badly calibrated equipment, or by consistently employing a flawed methodology - not taking parallax into account, for example.
Systematic errors can be difficult to spot. They are normally identified by comparison to other, expected, data or results.
Oxford Physics: pages 8 - 11, with worked examples
Hamper HL (2014): pages 8 - 10, 13 - 14 (order is a little different to how I do things)
Hamper SL (2014): pages 8 - 10, 13 - 14 (order is a little different to how I do things)
pages 11 - 13