Presenting numbers with excessive and artificial precision on product labels, in newspaper articles and in report tables does nothing for scientific credibility and sows confusion in the mind of the reader.
Garrison Keillor’s book Lake Wobegon Days states that “The lake is 678.2 acres, a little more than a section…” To me this is a master-stroke, providing corroborating detail that produces utter belief in the reader.
In contrast, a science-fiction novel about exploring a new planet in a home-made zeppelin claims that a crew member cried out, “Captain! That mountain must be at least five thousand five hundred and forty-five meters high!”
This paragraph stopped my reading dead in its tracks (to mix a metaphor). I guarantee that no entity, in this galaxy or anywhere in the universe, has ever gurgled or telepathed that “The mountain must be at least 5,545 glugs high!”
Obviously the original American edition said the mountain was “at least three miles high”, and the task of converting to metric values was then given to the publisher’s idiot nephew, who didn’t know enough to change “at least three miles” into “at least five kilometres”. Instead he relied blindly on the output of his hand calculator. Readers who know that visual measurement of distance is imprecise cannot be bamboozled. Overly precise numbers can be a source of amusement, but all too often they are a form of spin-doctoring. Commercial organisations are especially prone to reporting excessively precise numbers that pretend to an impossible degree of accuracy. I propose the term ‘hyperdigititis’ to describe such pseudo-scientific nonsense.
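The nephew’s blunder is easy to avoid mechanically: convert, then round the result back to the significant figures the original rough estimate actually had. A minimal sketch in Python (the function name and the one-figure default are my own illustration, not anything from the novel or its publisher):

```python
from math import floor, log10

METRES_PER_MILE = 1609.344  # exact, by the international definition of the mile

def convert_with_sig_figs(miles, sig_figs=1):
    """Convert miles to metres, then round back to the number of
    significant figures the original estimate actually carried."""
    metres = miles * METRES_PER_MILE
    # The position of the leading digit tells us where to cut off.
    return round(metres, sig_figs - 1 - floor(log10(abs(metres))))

print(convert_with_sig_figs(3))   # "at least three miles" -> 5000.0 m
```

Rounding the raw 4828.032 m back to one significant figure recovers the sensible “about five kilometres”.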
Under what circumstances do we accept improbably precise values, and when do we reject them? I suspect one important factor is whether we ourselves can estimate whatever is being measured, as opposed to invisible values only measurable by a white-coated scientist. Invisible units are typically over-specified whereas visible units are rounded to sensible values.
Excessive digits act as barriers to readers’ understanding. Table 1 is an example from a (name-protected) agricultural report.
| Treatment | Raw yield | % Sugar |
| --- | --- | --- |

Table 1. Excessive digits in an agricultural report.
A brief explanation is needed here: the LSD, or Least Significant Difference, indicates how far apart two averages must be in order to conclude that they differ significantly, with 95 percent confidence. The CV, or Coefficient of Variation, measures the variability of a measurement: in this case, about 15 percent for yield and eight percent for sugar percentage. An important lesson here is that all biological data has at least five percent variability.
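For readers who want to see where these two statistics come from, here is a minimal sketch; the replicate yields are invented for illustration, and in practice the t value would be looked up for the trial’s error degrees of freedom:

```python
import math

def coefficient_of_variation(values):
    """CV: the sample standard deviation as a percentage of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100 * sd / mean

def lsd(mse, n_reps, t_crit=2.0):
    """Least Significant Difference between two treatment means:
    t * sqrt(2 * MSE / n).  t_crit is the two-sided 5 percent t value
    for the error degrees of freedom (about 2 for a decent-sized trial)."""
    return t_crit * math.sqrt(2 * mse / n_reps)

# Hypothetical yields (t/ha) from four replicate plots of one treatment:
plots = [52.0, 47.5, 55.1, 49.8]
print(round(coefficient_of_variation(plots), 1))  # about 6.3 percent here
```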
The table above demonstrates an all too common misuse of numbers, to convince us that the authors are incredibly precise, rather than to present useful information. The large degree of uncertainty (LSD and CV) shows that none of the digits to the right of the decimal point are valid. That even applies to the LSD itself, since the LSD also has a certain amount of uncertainty.
So the figures ought to be as in Table 2.
| Treatment | Raw yield | % Sugar |
| --- | --- | --- |

Table 2. The same report figures with adjusted digits.
I think you’ll agree with me that the second version is much easier to understand, showing that Chemical B lowered yields but Chemicals A and C had no effect.
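The rounding rule applied here can be mechanised: keep digits only down to the decimal place occupied by the leading digit of the uncertainty. A sketch, with invented numbers:

```python
from math import floor, log10

def round_to_uncertainty(value, uncertainty):
    """Drop every digit of `value` below the decimal place occupied by
    the leading digit of `uncertainty` (an LSD, standard error, etc.)."""
    place = floor(log10(abs(uncertainty)))
    return round(value, -place)

# With an LSD of about 6 t/ha, a reported yield of 52.374 t/ha
# carries no trustworthy digits past the units place:
print(round_to_uncertainty(52.374, 6.0))   # -> 52.0
```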
I once had to compile comprehensive tables of animal feed-stuff compositions. Published reports usually had three-decimal precision, e.g. “4.35% arginine”. Never mind that analyses of different samples showed coefficients of variation of up to 19 percent.
Enormous tables showing 17 amino acids with three-decimal accuracy are bulky and impossible to understand. By dropping the unjustifiable precision, these tables became smaller and quite readable. After all, the readers of that report were mainly animal feed formulators, who probably don’t want to know more than low, medium or high. I was able to inform them that six independent analyses of, say, methionine in wheat showed a minimum of 0.10, a mean of 0.17 and a maximum of 0.22.
On 22 April 2009, the Christchurch Press published a beautifully illustrated half page to show that alcoholic beverages are energy-rich. This article inadvertently demonstrated the difference between invisible kilojoules and visible foods (blocks of chocolate).
The article claimed that one glass of wine contains 390 kJ, gin-and-tonic 400 kJ, and a shot of Baileys 408 kJ. (In addition, a pint of beer was measured, with incredible precision, as 1098 kJ.) Some credulous readers might have switched to drinking wine instead of Baileys, yet the published values were basically meaningless!
The energy value of wine depends on whether it’s red or white, dry or sweet. According to the November 2006 issue of Healthy Food, the energy value of 100 ml of white wine is between 345 and 395 kJ, while red wine is 340-365 kJ (www.healthyfood.co.nz/articles/2006/november/how-many-kjs-are-you-drinking).
Those figures are based on a ‘standard’ 100 ml serving of wine, rather than the 135 ml servings proclaimed on wine bottles (5.6 servings from 750 ml). Don’t bother working out ratios unless you are prepared to measure out beverages to three-place accuracy.
The real conclusion, entirely missed by the newspaper, is that a typical alcoholic drink has about 400 kJ regardless of whether it’s wine or spirits.
In stark contrast to the hyperdigitised kilojoule values, the article states that each drink is equivalent in fattening power to half a block of chocolate. Not 0.48 of a block! When the measurement involved something we can see for ourselves, the journalist automatically rounded correctly.
The consumer-food industry, world-wide, seems determined to confuse consumers with food composition tables filled with excessive and unjustifiable detail. To fit all these digits in, the tables are often printed in tiny fonts. Even with large fonts, the sheer length of the numbers makes life difficult for shoppers: processing “12.34” requires more than double the effort of handling “12”. (The decimal point is part of the problem.) I believe that hyperdigitised numbers are misleading because 1) they claim accuracy that is not there; and 2) the analytical methods employed provide only approximations to the food components purportedly measured.
Almost all food labels disregard biological variability, which is typically at least five percent. Other than near-pure chemicals like sugar and salt, most prepared foods are made from plants and animals that have different histories. What cultivar of wheat was used? Was the beef from a Friesian cow or another breed? What region? What soil type? Irrigated or dry-land? Many food labels state, with admirable honesty, that they represent indicative values based on averages. Unfortunately that doesn’t mean any reduction in unjustifiable precision. My candidate for worst offender is a packet of delicious Vietnamese snacks, the label of which proclaims that sodium per biscuit is 14.22 mg. Western food manufacturers are not much better.
Table 3 shows part of a Nutrition Information table from a tin of imported luncheon meat:
| Component | Per 56 g serve (sic) | Per 100 g |
| --- | --- | --- |
| Fat, total (g) | 12.0 | 21.4 |
| Carbohydrate, total (g) | 4.2 | 7.5 |

Table 3. Luncheon meat nutrition information.
Let me put energy values aside for just a moment, except to note that the calculated “259 Cal/100g” was almost surely provided by the same idiot nephew who worked on the science-fiction novel cited at the beginning of this article. Multiplying a value that is accurate to two places by a factor that is accurate to three or more places does not provide a three-place result.
The standard way to measure protein is to digest foodstuff in boiling sulphuric acid (Kjeldahl analysis). This converts all nitrogenous chemicals into ammonia. The liberated ammonia is measured and that value multiplied by 6.25 is reported as “crude protein”. Unfortunately, the correct multiplier depends on what’s being analysed. Factors as low as 5.71 and as high as 7.69 may apply. (Hint: the factor is the inverse of the percentage N, which in turn is related to the amino acid composition of each protein.)
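The arithmetic behind ‘crude protein’, and the spread introduced by the conversion factor alone, fits in a few lines. The nitrogen figure below is hypothetical, chosen only for illustration:

```python
def crude_protein(nitrogen_g, factor=6.25):
    """Kjeldahl 'crude protein': measured nitrogen times a conversion
    factor.  The default 6.25 assumes protein is 16 percent nitrogen;
    real foods call for factors anywhere from about 5.71 to 7.69."""
    return nitrogen_g * factor

nitrogen = 1.42  # hypothetical grams of Kjeldahl nitrogen per 100 g
for f in (5.71, 6.25, 7.69):
    print(f, round(crude_protein(nitrogen, f), 1))
```

The same nitrogen reading yields anywhere from about 8.1 to 10.9 g of ‘protein’ depending on the factor chosen, which is why even a two-digit label claim overstates what the assay can deliver.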
Many non-protein chemicals are converted to ammonia during the Kjeldahl procedure. That includes not only alkaloids and free amino acids, but also man-made chemicals like melamine. In an effort to improve the precision of protein analysis, an erudite committee of nutritionists has recommended that proteins should be hydrolysed gently, so that individual amino acids can be measured. That route is not only more expensive than digestion but also opens a Pandora’s box of complexity, because all proteins are not created equal. Proteins rich in lysine, methionine and perhaps threonine are more valuable for growing animals than other proteins. Do we need another data entry on the Nutritional Contents tables, showing relative protein values for children as opposed to adults?
With all these uncertainties about protein analysis, even a two-digit claim of “8.9 g protein” seems unjustifiable. Who needs such precision? A nutritionist who relied on these numbers to formulate a patient’s diet could be grossly misled. Consumers mostly need rough indications that a food is low, medium or high protein.
The FAO says that total carbohydrate can be estimated by difference, that is, everything left over once protein, fat, water, ash, and alcohol are subtracted. This is a friendly touch from the FAO. It allows ‘carbohydrate’ values that include fibre (polymeric carbohydrates) and organic acids.
Carbohydrates can be either soluble or insoluble; the insoluble fraction is mostly starch and ‘fibre’. Generally only starch is available for our nutrition, and then only after cooking, although heat may convert up to eight percent of total starch into indigestible ‘resistant starch’.
Soluble carbohydrates include small sugars as well as oligosaccharides, such as fructose-containing material from onions and artichokes. The latter are not utilised by the human body but rather by micro-organisms residing in our gut. Clearly, a simple chemical result of “7.5 g carbohydrate” is only a rough approximation to digestible carbohydrate.
For a change, measurement of fat as lipid-soluble material is straightforward. I’m not aware of any technical problems with estimates of saturated versus unsaturated fats. There are some issues about how mixtures of fats may not have the same digestibility as pure fats.
Strictly speaking, energy content should be measured by combustion of a sample of food, with another food sample being fed to someone who is willing to collect all his bodily excretions for the next day or so. Such volunteers are hard to find. Even the feedstuff people rarely use animal feeding studies, because they have equations that convert individual components into an estimate of digestible energy. For poultry, the formula is 0.34 × % fat + 0.16 × % protein + 0.13 × % sugars. It’s obvious that any errors in measurement of fat, protein or sugar will affect the final energy values.
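The poultry equation makes the error propagation concrete: because the formula is linear, a given percentage error in every component passes straight through to the energy estimate. A sketch with invented composition figures:

```python
def poultry_energy(fat_pct, protein_pct, sugars_pct):
    """Digestible-energy estimate from the poultry equation quoted in
    the text: 0.34 * fat + 0.16 * protein + 0.13 * sugars (percentages)."""
    return 0.34 * fat_pct + 0.16 * protein_pct + 0.13 * sugars_pct

base = poultry_energy(5.0, 20.0, 50.0)                  # hypothetical feed
biased = poultry_energy(5.0 * 1.05, 20.0 * 1.05, 50.0 * 1.05)
print(round(biased / base, 3))  # a 5 percent input error -> 5 percent output error
```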
For people, similar formulas are available, using ‘Atwater’ factors. There is a ‘general’ Atwater table and a ‘specific’ table that tries to compensate for different ingredients. There’s only a two percent difference when animal-based food values are crunched through the two Atwater methods, but for wheat flour the discrepancy is seven percent and for cabbage or snap beans 20 percent. How, then, can a claim of “1098 kJ” be justified for a food made from a mixture of ingredients?
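To make the point with numbers, here is the general-table calculation. The general Atwater factors of 17, 37 and 17 kJ per gram for protein, fat and carbohydrate are standard; applying them to the luncheon-meat figures, and treating the 8.9 g protein claim as a per-100 g value, is my own assumption for illustration:

```python
# General Atwater factors, kJ per gram.
KJ_PER_G = {"protein": 17, "fat": 37, "carbohydrate": 17}

def atwater_kj(protein_g, fat_g, carb_g):
    """Energy estimate using the general Atwater factors."""
    return (KJ_PER_G["protein"] * protein_g
            + KJ_PER_G["fat"] * fat_g
            + KJ_PER_G["carbohydrate"] * carb_g)

energy = atwater_kj(8.9, 21.4, 7.5)   # assumed per-100 g label figures
print(round(energy, -2))              # honest precision: about 1100 kJ
```

Even the two percent method discrepancy for animal-based foods amounts to roughly 20 kJ here, so the final digits of a four-digit claim are pure decoration.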
My suggested version
In view of all the uncertainties, I’d suggest a major simplification of nutritional information tables. Shorter numbers would be comprehensible and readable, while the present over-long numbers are mind-numbing rather than informative.
So Table 4 has my version of what I’d like to see on the luncheon meat container:
| Component | Per 56 g serve | Per 100 g |
| --- | --- | --- |
| Fat, total (g) | 12 | 21 |
| Carbohydrate, total (g) | 4 | 8 |

Table 4. Luncheon meat nutrition information, adjusted.
Mann, J. D. (1998). Feedstuffs of monogastric animals. NZ Institute for Crop and Food Research.

FAO. Methods of Food Analysis. www.fao.org/DOCREP/006/Y5022E/y5022e03.htm