If meetings really lower IQ…

… then there’s little hope for the world, says Alison Campbell, who attends far too many meetings. Fortunately, however, that may not be the case.

I attend a lot of meetings; that’s the nature of my job. Recently the Dean came in and waved the front section of the NZ Herald under my nose. “Look,” he said, “all those meetings are really bad for you.” Scenting a way of getting out of them, I grabbed the paper and found the article in question (syndicated from the UK paper, The Telegraph).

“Attending meetings lowers your IQ,” cried the headline, and the article goes on to say that:

“[the] performance of people in IQ tests after meetings is significantly lower than if they are left on their own, with women more likely to perform worse than men.”

The story is based on a press release about research carried out at Virginia Tech’s Carilion Research Institute – and the release itself showed that the research outcomes were more nuanced and complex than the newspaper story would have it. The research found that small-group dynamics – such as jury deliberations, collective bargaining sessions, and cocktail parties – can alter the expression of IQ in some susceptible people (Kishida et al. 2012).

In other words, meetings don’t necessarily lower your baseline IQ. What they may do is change how you express that IQ, particularly if you’re susceptible to peer pressure. The internal urge to conform can result in people making decisions as part of a group that they might not have made on their own, especially if they have concerns about their status in that group. (As the Virginia Tech release notes, this was shown to good effect in the superb film 12 Angry Men, with Henry Fonda leading a stellar cast.)

The researchers placed study participants in groups of five and studied their brain activity (using MRI scans) while the groups were engaged in various tasks. While the groups were working they were also given information about the intellectual status of group members, based on their relative performance on those cognitive tasks. (There’s a tendency for people to place great store on relative measures of IQ, and where they personally sit on the scale.) Afterwards, when participants were divided on the basis of their performance into high- and low-performing groups before their IQs were measured again, the two groups were found to differ quite significantly, despite the fact that all participants had statistically similar baseline IQs when tested at the beginning of the study.

“Our results suggest that individuals express diminished cognitive capacity in small groups, an effect that is exacerbated by perceived lower status within the group and correlated with specific neurobehavioural responses. The impact these reactions have on intergroup divisions and conflict resolution requires further investigation, but suggests that low-status groups may develop diminished capacity to mitigate conflict using non-violent means.”

As I said, this is altogether more nuanced, more complex, and much more interesting than the news story that caught the boss’s eye. I suspect I’ll be attending meetings for a while yet.

K.T. Kishida, D. Yang, K. Hunter Quartz, S.R. Quartz and R. Montague (2012) Phil. Trans. R. Soc. B 367(1589): 704-716.

Using pseudoscience to teach science

There may indeed be a place for creationism in the science classroom, but not the way the creationists want. This article is based on a presentation to the 2011 NZ Skeptics Conference.

We live in a time when science features large in our lives, probably more so than ever before. It’s important that people have at least some understanding of how science works, not least so that they can make informed decisions when aspects of science impinge on them. Yet this is also a time when pseudoscience seems to be on the increase. Some would argue that we should simply ignore it. I suggest that we put it to good use and use pseudoscience to help teach about the nature of science – something that Jane Young has done in her excellent book The Uncertainty of it All: Understanding the Nature of Science.

The New Zealand Curriculum (MoE, 2007) makes it clear that there’s more to studying science than simply accumulating facts:

Science is a way of investigating, understanding, and explaining our natural, physical world and the wider universe. It involves generating and testing ideas, gathering evidence – including by making observations, carrying out investigations and modeling, and communicating and debating with others – in order to develop scientific knowledge, understanding and explanations (ibid., p28).

In other words, studying science also involves learning about the nature of science: that it’s a process as much as, or more than, a set of facts. Pseudoscience offers a lens through which to approach this.

Thus, students should be encouraged to think about how valid, and how reliable, particular statements may be. They should learn about the process of peer review: whether a particular claim has been presented for peer review; who reviewed it; where it was published. There’s a big difference between information that’s been tested and reviewed, and information (or misinformation) that simply represents a particular point of view and is promoted via the popular press. Think ‘cold fusion’ – the claim that nuclear fusion could be achieved in the lab at room temperature. It was trumpeted to the world by press release, but subsequently debunked as other researchers tried, and failed, to duplicate its findings.

A related concept here is that there’s a hierarchy of journals, with publications like Science at the top and Medical Hypotheses at the other end of the spectrum. Papers submitted to Science are subject to stringent peer review processes – and many don’t make the grade – while Medical Hypotheses seems to accept submissions uncritically, with minimal review: for example, a paper suggesting that drinking cows’ milk raises the odds of breast cancer due to hormone levels in milk, despite the fact that the actual data on hormone titres didn’t support this.

This should help our students develop the sort of critical thinking skills that they need to make sense of the cornucopia of information that is the internet. Viewing a particular site, they should be able to ask – and answer! – questions about the source of the information they’re finding, whether or not it’s been subject to peer review (you could argue that the internet is an excellent ‘venue’ for peer review but all too often it’s simply self-referential), how it fits into our existing scientific knowledge, and whether we need to know anything else about the data or its source.

An excellent example that could lead to discussion around both evolution and experimental design, in addition to the nature of science, is the on-line article Darwin at the drugstore: testing the biological fitness of antibiotic-resistant bacteria (Gillen & Anderson, 2008). The researchers wished to test the concept that a mutation conferring antibiotic resistance rendered the bacteria possessing it less ‘fit’ than those lacking it. (There is an energy cost to bacteria in producing any protein, but whether this renders them less fit – in the Darwinian sense – is entirely dependent on context.)

The researchers used two populations of the bacterium Serratia marcescens: an ampicillin-resistant lab-grown strain, which produces white colonies, and a pink, non-resistant (‘wild-type’) population obtained from pond water. ‘Fitness’ was defined as “growth rate and colony ‘robustness’ in minimal media”. After 12 hours’ incubation the two populations showed no difference in growth on normal lab media (though there were differences between four and six hours), but the wild-type strain did better on minimal media. It is hard to judge whether the difference was of any statistical significance as the paper’s graphs lack error bars and there are no tables showing the results of statistical comparisons – nonetheless, the authors describe the differences in growth as ‘significant’.

Their conclusion? Antibiotic resistance did not enhance the fitness of Serratia marcescens:

… wild-type [S.marcescens] has a significant fitness advantage over the mutant strains due to its growth rate and colony size. Therefore, it can be argued that ampicillin resistance mutations reduce the growth rate and therefore the general biological fitness of S.marcescens. This study concurs with Anderson (2005) that while mutations providing antibiotic resistance may be beneficial in certain, specific, environments, they often come at the expense of pre-existing function, and thus do not provide a mechanism for macroevolution (Gillen & Anderson, 2008).

Let’s take the opportunity to apply some critical thinking to this paper. Students will all be familiar with the concept of a fair test, so they’ll probably recognise fairly quickly that such a test was not performed in this case: the researchers were not comparing apples with apples. When one strain of the test organism is lab-bred and not only antibiotic-resistant but forms different-coloured colonies from the pond-dwelling wild-type, there are a lot of different variables in play, not just the one whose effects are supposedly being examined.

In addition, and more tellingly, the experiment did not test the fitness of the antibiotic-resistance gene in the environment where it might convey an advantage. The two Serratia marcescens strains were not grown in media containing ampicillin! Evolutionary biology actually predicts that the resistant strain would be at a disadvantage in minimal media, because it’s using energy to express a gene that provides no benefit in that environment, so will likely be short of energy for other cellular processes. (And, as I commented earlier, the data do not show any significant differences between the two bacterial strains.)
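The point that fitness is context-dependent can be sketched with a toy simulation. All the numbers here (growth factors, the cost of resistance, the effect of ampicillin) are invented purely for illustration; they are not taken from the Gillen & Anderson paper:

```python
# Toy model: the relative fitness of an ampicillin-resistant strain depends
# on whether ampicillin is actually present. All rates are assumed values.

def simulate(ampicillin_present, hours=12):
    """Return (sensitive, resistant) population sizes after hourly growth steps."""
    sensitive = resistant = 100.0          # equal starting populations
    for _ in range(hours):
        if ampicillin_present:
            sensitive *= 0.7               # assumed: antibiotic kills sensitive cells
        else:
            sensitive *= 1.5               # assumed hourly growth factor
        resistant *= 1.45                  # slightly slower: assumed cost of resistance
    return sensitive, resistant

if __name__ == "__main__":
    for amp in (False, True):
        s, r = simulate(amp)
        winner = "wild-type (sensitive)" if s > r else "resistant"
        print(f"ampicillin present={amp}: {winner} strain dominates")
```

In antibiotic-free media the wild-type ‘wins’, just as evolutionary biology predicts; add the antibiotic and the ranking reverses – which is exactly the condition the paper never tested.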

What about the authors’ affiliations, and where was the paper published? Both authors work at Liberty University, a private faith-based institution with strong creationist leanings. And the article is an on-line publication in the ‘Answers in Depth’ section of the website of Answers in Genesis (a young-earth creationist organisation) – not in a mainstream peer-reviewed science journal. This does suggest that a priori assumptions may have coloured the experimental design.

Other clues

It may also help for students to learn about other ways to recognise ‘bogus’ science, something I’ve blogged about previously (see Bioblog – seven signs of bogus science). One clue is where information is presented via the popular media (where ‘popular media’ includes websites), rather than offered up for peer review, and students should be asking, why is this happening?

The presence of conspiracy theories is another warning sign. Were the twin towers brought down by terrorists, or by the US government itself? Is the US government deliberately suppressing knowledge of a cure for cancer? Is vaccination really for the good of our health or the result of a conspiracy between government and ‘big pharma’ to make us all sick so that pharmaceutical companies can make more money selling products to help us get better?

“My final conclusion after 40 years or more in this business is that the unofficial policy of the World Health Organisation and the unofficial policy of Save the Children’s Fund and almost all those organisations is one of murder and genocide. They want to make it appear as if they are saving these kids, but in actual fact they don’t.” (Dr A. Kalokerinos, quoted on a range of anti-vaccination websites.)

Conspiracy theorists will often use the argument from authority, almost in the same breath. It’s easy to pull together a list of names, with PhD or MD after them, to support an argument (eg micropalaeontologist Viera Scheibner on vaccines). Students could be given such a list and encouraged to ask: what is the field of expertise of these ‘experts’? For example, a mailing to New Zealand schools by a group called “Scientists Anonymous” offered an article purporting to support ‘intelligent design’ rather than an evolutionary explanation for a feature of neuroanatomy, authored by a Dr Jerry Bergman. However, a quick search indicates that Dr Bergman has made no recent contributions to the scientific literature in this field, but has published a number of articles with a creationist slant, so he cannot really be regarded as an expert authority in this particular area. Similarly, it is well worth reviewing the credentials of many anti-vaccination ‘experts’ – the fact that someone has a PhD is by itself irrelevant; the discipline in which that degree was gained is what matters. (Observant students may also wonder why the originators of the mailout feel it necessary to remain anonymous…)

Students also need to know the difference between anecdote and data. Humans are pattern-seeking animals and we do have a tendency to see non-existent correlations where in fact we are looking at coincidences. For example, a child may develop a fever a day after receiving a vaccination. But without knowing how many non-vaccinated children also developed a fever on that particular day, it’s not actually possible to say that there’s a causal link between the two.
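The need for a comparison group can be made concrete with some made-up numbers (purely hypothetical, not real vaccination data):

```python
# Hypothetical counts, for illustration only: fevers observed on a given day
# among vaccinated and unvaccinated children.
vaccinated = {"fever": 50, "no_fever": 950}
unvaccinated = {"fever": 50, "no_fever": 950}

def fever_rate(group):
    """Fraction of the group that developed a fever."""
    return group["fever"] / (group["fever"] + group["no_fever"])

print(fever_rate(vaccinated))    # 0.05
print(fever_rate(unvaccinated))  # 0.05
```

With identical base rates in both groups, the anecdote (“my child had a fever the day after the jab”) carries no evidence of causation; the 50 post-vaccination fevers are just the coincidences we should expect.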

A question of balance

Another important message for students is that there are not always two equal sides to every argument, notwithstanding the catch-cry of “teach the controversy!” This is an area where the media, with their tendency to allot equal time to each side for the sake of ‘fairness’, are not helping. Balance is all very well, but not without due cause. So, apply scientific thinking – say, to claims for the health benefits of sodium bicarbonate as a cure for that ‘fungal-based’ cancer (www.curenaturalicancro.com). Its purveyors make quite specific claims concerning health and well-being – drinking sodium bicarbonate will cure cancer and other ailments by “alkalizing” your tissues, thus countering the effects of excess acidity! How would you test those claims of efficacy? What are the mechanisms by which drinking sodium bicarbonate (or, for some reason, lemon juice!) – or indeed any other alternative health product – is supposed to have its effects? (Claims that a ‘remedy’ works through mechanisms as yet unknown to science don’t address this question; in addition, they presuppose that it does actually work.) In the new Achievement Standards there’s a standard on homeostasis, so students could look at the mechanisms by which the body maintains a steady state in regard to pH.

If students can learn to apply these tools to questions of science and pseudoscience, they’ll be well equipped to find their way through the maze of conflicting information that the modern world presents, regardless of whether they go on to further study in the sciences.


Thoughts on a billboard

On a recent visit to New Plymouth I was rather taken aback to see a billboard outside a central city church posing the question: “Evolution? How come we still have apes?” It wasn’t so much surprise that someone could know so little about evolutionary theory that they would think this was a persuasive argument – versions of this are often to be seen in the less sophisticated creationist publications – it was more that they should feel the urge to display their ignorance on a busy street corner.

The question is easily answered: it’s a bit like asking someone why there are still Scots if their ancestors came from Scotland. Evolution proceeds through localised change in sub-populations, not wholesale transformations of species across their entire range – and none of the modern ape species are ancestral to us in any case. One could also ask why, if humans were created separately from all other animals, there are animals which are so much like us – in other words if creationism is true, why are there apes at all?

I was reminded of a trivia word game my daughter once played, in which the clue was “Darwin’s theory of evolution”, and the answer was “natural selection”. The person who failed to answer this asserted she couldn’t be expected to know such things, since she didn’t believe in evolution. The same principle seems to apply at the New Plymouth church – decide you don’t believe in something, then refuse to learn anything about it. This has got it backwards, of course; if you’re going to disbelieve something, the least you can do is find out what it is that you don’t believe in.

The same challenge is often thrown at skeptics by believers who are convinced that if only we read the literature on homeopathy, or chiropractic, or UFOs or whatever, we would see the truth of their claims. While it isn’t necessary to have detailed knowledge of every last wacky idea – if it defies basic laws of physics and chemistry it’s almost certainly bunk – the irony is that many skeptics are very well informed about such things, and disbelieve because of what they know rather than what they don’t know. In the end though, it isn’t knowledge or the lack of it that makes the difference between a believer and a sceptic (whether they be sceptical of evolution or homeopathy), it’s the habit of critical thought – or the lack of it.

Every picture tells a story – sometimes they’re whoppers

Pictures don’t lie, right? Of course they do. And they were deceiving us long before Photoshop made the manipulation of images almost child’s play.

Today, nobody would bat an eye at a ghostly image of Abraham Lincoln standing behind his grief-stricken widow, apparently comforting her. But back in the 1860s, when William Mumler produced the first ‘spirit photographs’, the public was stunned. These photos appeared to show dead relatives hovering around the living subject who had posed for the picture. Photography was magical enough, so it didn’t seem such a stretch that the camera could see things that the human eye could not.

Mumler discovered ‘double exposure’ accidentally when he mistakenly used a previously exposed but undeveloped photographic plate. He immediately recognised the financial potential of this discovery and reinvented himself as a psychic medium who specialised in communicating with the other side through photographs. By today’s standards his efforts were amateurish but in the heyday of spiritualism they were readily accepted as authentic. Only when Mumler made the mistake of using images of people who were still alive as his ‘ghosts’, did his little scam crumble. But by this time many other ‘spirit photographers’ had recognised the lucrative nature of the business and had gotten into the game. And amazingly, the clever ruse even snared luminaries like Sir Arthur Conan Doyle and Sir William Crookes. Conan Doyle, the creator of Sherlock Holmes, was a physician and Crookes was a pioneer in chemistry and physics. One would think they would have known better.

Conan Doyle was a staunch believer in spiritualism, a position his famous detective would have taken a dim view of. But it was Sir Arthur’s championing of another type of fake photograph that best demonstrates the extent of his credulity. In 1917 two young girls produced a photo that purported to show fairies dancing in the woods. Conan Doyle was convinced the pictures were real and refused to believe that he had been fooled by the simple trick of hanging cardboard cutouts by a thread in front of the camera. It was inconceivable to him that a couple of uneducated girls could put one over on someone of his stature. The pictures therefore had to be evidence of the existence of fairies! In 1983 Elsie Wright and Frances Griffiths finally admitted that they had faked the photographs but nevertheless maintained they had actually seen real fairies.

By the time the ladies had unburdened their souls, Roger Patterson and Robert Gimlin had outdone the ‘Cottingley fairies’. In 1967 these two thrilled the world by capturing the first images of the fabled Bigfoot. Their short film shows a creature lumbering across the woods, looking very much like a man in a gorilla suit. There is good reason for that. It is a man dressed in a gorilla suit. The elaborate hoax was described in detail at a recent conference on magic history by Phillip Morris, a man who should know, since it was his costume company that provided and altered the gorilla suit used to stage the scene. Needless to say there are legions of Bigfoot believers who don’t buy Morris’ claim and remain convinced that some sort of giant ape-like creature prowls the Pacific Northwest.

With such ample historical evidence about photographic manipulation, it’s surprising how few people question the authenticity of a series of photographs being circulated on the internet purporting to show the results of a student’s science fair experiment. The pictures depict plants supposedly watered either with microwaved water, or with water that has been heated on a stove top. And guess what! The microwave-watered plants wither while the others flourish!

One can come up with all sorts of possible explanations for the difference. Was the soil the same for the two plants? Were they given equal amounts of water? Could they have been exposed to different lighting conditions? Was there some difference in the seeds? But how about a simpler possibility? Fraud. It isn’t very hard to set up two plants side by side and ensure that one thrives while the other dies. Just water one and not the other. Of course, the possibility that this is the way the pictures were created does not prove the case.

Heating water in a microwave oven does nothing other than raise its temperature. Any talk about “the structure or energy of the water being compromised” is plain bunk. But absurdly implausible arguments don’t prove that the pictures are fraudulent either. What proves it is the good old standard of science: reproducibility. Or the lack thereof.

I did the experiment. I watered plants with microwaved water, kettle-boiled water, and stove-top boiled water, feeling pretty silly about it, but I did it. The results? As expected, no difference. I didn’t take any pictures because, after all, how would you know that they are not faked? So here is the choice. You can take my word that the experiment cannot be reproduced, accept that science tells us that microwaves do nothing to water other than heat it, or take at face value some pictures in a circulating email that purport to show an effect that has eluded scientists around the world but was discovered by a student pursuing a science fair project. Better yet, do the experiment yourself!

As you might guess, I don’t believe in spirit photographs, fairies, Bigfoot or plants succumbing to the evils of microwaved water. And I would have put goats that climb trees into the same ‘unbelievable’ category. But I would have been wrong. It seems that some Moroccan goats have learned to climb the argan tree in search of its olive-like fruit. Legend has it that the undigested seeds that pass through the goats used to be collected and pressed into “argan oil,” a traditional food flavouring – a highly questionable claim. The oil, also used in the cosmetic industry, is actually pressed from fruit that has been picked by human hands, making the tree-climbing goats a nuisance. Still, one can appreciate their remarkable athleticism. It’s easy to find pictures of their exploits online. And pictures don’t lie. Right?

Avoiding the trap of belief-dependent realism

The Believing Brain: how we construct beliefs and reinforce them as truths by Michael Shermer. Times books, New York. 386pp. ISBN 978-0-8050-9125-0. Reviewed by Martin Wallace.

As a member of NZ Skeptics I have become increasingly aware of the huge and ever-growing list of unsubstantiated beliefs in our society, including religion, alternative medicine, alien abductions, ESP, flying saucers, vaccination refusal, and so on and on. Why are there so many such beliefs and so many adherents, and so few of us skeptics?

In his new book Michael Shermer sets out the reasons for this situation. It is our believing brains, evolved hundreds of thousands of years ago, that are responsible. Belief without evidence is a salutary behaviour when facing a trembling bush behind which a predator may be lurking. Don’t wait for evidence – just go! Survival is selected for by belief.

Michael Shermer is the founding publisher of Skeptic magazine in the US, writes a regular column in Scientific American, and is an adjunct professor at Claremont Graduate University. He lives in California.

In this book he explores beliefs in many fields, and how we select data after forming those beliefs in order to reinforce them. He describes how deeply inherent is our desire to detect patterns in our sensory information, and the evidence from neurophysiology and behavioural genetics that shows how and where this occurs. Religion, for example, exists in all cultures and can be called “a universal”.

Dr Shermer explores the history of empiricism and the extraordinary prescience of Francis Bacon (c 1620) in his recognition of those human behaviours which inhibit the determination of reality, and the need for a new approach.

He makes a strong argument for the teaching of scientific method in our schools as well as teaching the nature of the world revealed by that process. It is the unwillingness to apply that method which has resulted in the perseverance of our plethora of beliefs. We are not endowed by evolution with that aptitude, which after all is only 400 years old. We have to learn it.

Unsubstantiated beliefs have been part of our nature for a million years. This is why there are so many of them, and why they are so widespread. Shermer writes: “Science is the only hope we have of avoiding the trap of belief-dependent realism. It is the best tool ever devised to determine: does belief equate with reality?”

The prologue is available on Shermer’s web page (www.michaelshermer.com) and gives some idea of what lies within. There are liberal notes for each chapter and a comprehensive index.

I would recommend this book to anyone, sceptic or not, who wishes to better understand our human nature.

Martin Wallace is a retired physician who is resuming his education in literature, natural history, and in trying to understand human behaviour.

Everyone take a bow

The NZ Skeptics cast the net wide for the 2011 Bent Spoon.

The NZ Skeptics have awarded their annual prize for journalistic gullibility to all those media outlets and personalities who took Ken Ring’s earthquake prediction claims at face value, thereby misinforming the public and contributing to 50,000 people leaving Christchurch, with all the inconvenience, cost and emotional harm that caused.

We believe that it is the business of the professional media to ask pertinent questions on behalf of the public when presenting material as factual. We even have broadcasting standards which call for accurate reporting. Many, many media outlets and journalists failed the basic standards of their profession in failing to ask “where is the evidence?” in the face of Ken Ring’s claims to predict earthquakes. They did us all a disservice.

The group Bent Spoon award is an unusual one for the NZ Skeptics, but we felt that so little was asked by so many that it had to be a broader award this year. That said, we did single out some reporters and commentators whom we felt had made particularly poor journalistic efforts in this area. They include:
Marcus Lush (RadioLIVE), for giving great and unquestioning publicity to Ring’s claims that Christchurch would have a major earthquake – “one for the history books” – on 20 March, and for continuing to support Ring’s promotion as an earthquake predictor and weather forecaster.
Closeup’s Mark Sainsbury for giving Ring another platform to air his ideas with very little in-depth critique (12 July).

The best thing about Ken’s failure on March 20 was his long silence afterwards. Yet there he was back on what is supposed to be a credible current affairs show with more vague pronouncements and self-justifications. Surely Closeup had another Kate-and-William clip they could have played instead to maintain their level of journalistic quality.

The Herald on Sunday’s Chloe Johnson, who provided uncritical publicity for Ring which continued long after his failures had been well and truly demonstrated (26 June).

It’s been sad to see the Herald name devalued by the tabloid approach of the Herald on Sunday, especially when the spin-off can sometimes do good stuff such as its hard-hitting editorial headlined “Charlatan Ring merits contempt” (20 March).

Brian Edwards, described by one commentator as providing “banal and rigourless equivocations”, including such gems as “the evidence that the moon has some contributory influence on earthquakes seems slight … however, it is not impossible that it does”.

We’ve seen Edwards cogently skewer sloppy thinking in the past, so it was surprising to see just how wishy-washy he was in this particular case.

And what of the notorious John Campbell interview where the television interviewer lost his cool and boosted sympathy for Ring by shouting him down? This has given us the unusual situation of seeing nominations come in to give Campbell both the Bent Spoon and the society’s Bravo Award for critical thinking.

We appreciate what John was trying to do – introduce a little evidence and call into question some very dubious claims – but we knew he’d blown it as soon as he started to talk over the top of Ken.

Bravo Awards

The NZ Skeptics also applaud critical thinking with a number of Bravo Awards each year. This year’s recipients are:
Janna Sherman of the Greymouth Star for her item “Sceptics revel in Hokitika ‘earthquake’ non-event” (14 March). Ken Ring predicted an Alpine Fault rupture and/or an extreme weather event which would require Civil Defence to prepare for gales and heavy rain at the Hokitika Wildfoods Festival in March. As Sherman’s report noted:

“The 22nd annual Wildfoods Festival on Saturday was held under sunny skies, with temperatures climbing over 20 deg C.”

In science, a lack of evidence or a failed prediction can tell us a lot; in the media, we rarely see any stories about a non-event. That’s why it was great to see Sherman and the Star cover Ken’s failure – pseudo-scientists and psychics alike will only trumpet their successes as part of their self-promotion. To get the real picture, you need to hear about their failures too.

Philip Matthews, writing in the Marlborough Express, for a great article on 1080 that actually says there is really only one side to the story rather than introducing an alleged controversy with token ‘balance’ (22 June).

We don’t ask the Flat Earth Society to provide balance for a story on the International Space Station orbiting a spherical Earth. Why should we give a false impression of evidence-based ‘debate’ in other areas such as 1080 or immunisation? In discussing the entrenched views regarding the use of 1080, Matthews wrote:

“One of those ‘entrenched views’ is the weight of science; the other, emotive opinion. The debate is done a disservice by suggesting the views are somehow equivalent.”

The NZ Skeptics also commend Dr Jan Wright, the Parliamentary Commissioner for the Environment, who, while not part of the media herself, did a great job of evaluating the evidence on 1080 and presenting a report that clearly outlined it.

As always, the Bent Spoon was awarded telepathically by those gathered for the annual NZ Skeptics Conference.

The natural origins of morality

The Moral Landscape: How Science can Determine Human Values. Sam Harris. 2010. Free Press, New York. ISBN 978-1-4391-7121-9. Reviewed by Martin Wallace.

If faith is belief without evidence, then it is not open to scientific enquiry by a weighing of evidence. This attitude was supported and promulgated by Stephen Jay Gould. He claimed that there are “non-overlapping magisteria” of science and religion (NOMA).

However, what if it could be shown that there are events in the world of human brain physiology which can account for such “religious” activity as a sense of moral values?

This question is discussed brilliantly in this new book by Sam Harris. He says: “Questions about values are questions about the well-being of conscious creatures.” In sentient beings like us, a sense of well-being is dependent on cerebral events, and is therefore open to scientific investigation.

Well-being is engendered, for example, by happiness, kindness, and compassion. Harris is a neuroscientist and has studied brain function by magnetic resonance imaging while subjects consider propositions. He has shown that the same part of the brain is active when considering scientific suggestions as when considering moral or religious precepts. The process of belief is the same, irrespective of content.

The part of the brain involved is the same region in which activity is seen during the placebo effect.

Harris makes interesting comments about the damaging effects of religion and politics on our sense of well-being. Given his past writing, we can expect some acerbic comments:

“For nearly a century the moral relativism of science has given faith-based religion – that great engine of ignorance and bigotry – a nearly uncontested claim to being the only universal framework for moral wisdom.”

He dismisses “cultural relativism” as a creation of academics. Given the same conducive surroundings, well-being is shared by all members of all human cultures, just as our physiology is.

He is also very firm about “scientific relativism” and the inhibitory effect it has had on human well-being. There can be no such thing as Christian physics or Muslim algebra!

The text of this book is accompanied by an expansion of the arguments in extensive Notes which are listed in the Index. There is also an extensive list of references.

This book answers the question my mother put to me 60 years ago: “It is all very well to talk about your lack of belief in religion, but what will you put in its place?”

Yet more reasons why people believe weird things

Research at Victoria University of Wellington is shedding light on the often irrational processes by which people assess new information. This article is based on presentations to the 2010 NZ Skeptics conference.

Jacqui Dean was alarmed. The Otago MP had received an email reporting the deaths of thousands of people – deaths caused by the compound dihydrogen monoxide, which is commonly used as an industrial solvent and coolant, is fatal if inhaled, and is a major component of acid rain (see dhmo.org for more facts about dihydrogen monoxide). Only after she declared her plans to ban dihydrogen monoxide did she learn of its more common name: water (NZ Herald, 2007).

Ms Dean’s honest mistake may be amusing, but when large groups of people fail to correctly assess the veracity of information, that failure can have tragic consequences. For example, a recent US survey found 25 percent of parents believe that vaccines can cause autism, a belief that may have contributed to the 11.5 percent of parents refusing at least one recommended vaccine for their child (Freed et al, 2010).

Evidence from experimental research also demonstrates the mistakes people can make when evaluating information. Over a number of studies researchers have found that people believe:

  • that brand name medication is more effective than generic medication;
  • that products that cost more are of higher quality;
  • and that currency in a familiar form (eg, the US dollar bill) is more valuable than currency in a less familiar form (eg, a dollar coin) (Alter & Oppenheimer, 2008; for a review, see Rao & Monroe, 1989).

Why is it that people believe these weird things and make mistakes evaluating information?

Usually people can evaluate the veracity of information by relying on general knowledge. But when people have little relevant knowledge they often turn to feelings to inform their decisions (eg Unkelbach, 2007). Consider the following question: Are there more words in the English language that start with the letter K or have K in the third position? When Nobel prize winner Daniel Kahneman and his colleague Amos Tversky (1973) asked this question most people said there are more words that start with the letter K. And they were wrong. People make this error because words that start with the letter K, like kite, come to mind more easily than words that have a K in the third position, like acknowledge, so they judge which case is true based on a feeling – the experience of ease when generating K examples.

Generally speaking, information that is easy to recall, comprehend, visualise, and perceive brings about a feeling of fluent processing – the information feels easy on the mind, just like remembering words such as kite (Alter & Oppenheimer, 2009). We are sensitive to feelings of fluent processing (fluency), and we use it as a cue to evaluate information. For example, repeated information feels easy to bring to mind, and tends to be judged as more true than unrepeated information; trivia statements written in high colour contrast (Osorno is the capital of Chile) are easier to perceive, and are judged as more true, than the same statements written in low colour contrast; and financial stocks with easy-to-pronounce ticker symbols (eg KAR) outperform those with difficult-to-pronounce ticker symbols such as RDO (Alter & Oppenheimer, 2006; Hasher et al, 1977; Reber & Schwarz, 1999).

Most of the time, fluently processed information is evaluated more positively – we say it is true, we think it is more valuable. And on the face of it, fluency can be a great mental shortcut: decisions based on fluency are quick and require little cognitive effort. But feelings of fluency can also lead people to make systematic errors. In our research, we examine how feelings of fluency affect beliefs, confidence, and evaluations of others. More specifically, we examine how photos affect people’s judgements about facts; how repeated statements affect mock-jurors’ confidence; and how the complexity of a name affects people’s evaluations of that person.

Can decorative photos influence your beliefs about information?

If we told you that the Barringer Crater is on the northern hemisphere of the moon, would that statement be more believable if we showed you a photo of the Barringer Crater? Because the photo is purely decorative – that is, it doesn’t actually tell you anything about the location of the Barringer Crater (which is in fact in Arizona) – you probably wouldn’t expect it to influence your beliefs about the statement.

Yet, evidence from fluency research suggests that in the absence of relevant knowledge, people rely on feelings to make decisions (eg Unkelbach, 2007). Thus, not knowing what the Barringer Crater is or what it looks like, you might turn to the photo when considering whether the statement is true. The photo might bring about feelings of fluency, and make the statement seem more credible by helping you easily picture the crater and bring to mind related information about craters – even though this would still give you no objective information about where the crater is located. In our research, we ask whether decorative photos can lead people to be more willing to believe information.

How did we answer our research question?

In one experiment, people responded true or false to trivia statements that varied in difficulty; some were easy to answer (eg, Neil Armstrong was the first person to walk on the moon), some were more difficult (eg, Turtles are deaf). Half of the time, statements were paired with a related photo (eg, a turtle). In a second study, people evaluated wine labels and guessed whether each of the wine labels had won a medal. We told people that the wine companies were all based in California. In fact, we created all of the wine names by pairing an adjective (eg, Clever) with a noun (eg, Geese) to produce names like Clever Geese. Some of the wine labels contained familiar nouns (eg, Flower) and some contained unfamiliar nouns (eg, Quills). Half of the wine labels appeared with a photo of the noun.

What did we find?

Overall, when people saw trivia statements or wine names paired with photos, they were more likely to think that statements were true or that the wines had won a medal. However, photos only exerted these effects when information was difficult – that is, for those trivia statements that were difficult to answer and wine names that were relatively unfamiliar. Put more simply, decorative photos can lead you to believe claims about unfamiliar information.

Is one eyewitness repeating themselves as believable as three?

If you were a juror in a criminal case, you would probably be more willing to convict a man based on the testimony of multiple eyewitnesses, rather than the testimony of a single eyewitness. But why would you be more likely to believe multiple eyewitnesses? On the one hand, you might think that the converging evidence of multiple eyewitnesses is more accurate and more convincing than evidence from a single eyewitness, and indeed, multiple eyewitnesses are generally more accurate than a single eyewitness (Clark & Wells, 2008).

On the other hand, as some of the fluency research discussed earlier suggests, you may be more likely to believe multiple eyewitnesses simply because hearing from multiple eyewitnesses means hearing the testimony multiple times (Hasher et al, 1977). Put another way, it may be the repetition of the testimony, rather than the number of independent eyewitnesses, that makes you more likely to believe the testimony. In our research, we wanted to know whether it is the overlap of statements made by multiple eyewitnesses or the repetition of those statements that makes information more believable.

How did we answer our research question?

We asked subjects to read three eyewitness reports about a fictitious crime. We told half of the subjects that each report was written by a different eyewitness, and we told the other half that all three reports were written by the same eyewitness. In addition, half of these subjects read some specific claims about the crime (eg, The thief read a Newsweek magazine) in one of the eyewitness reports, while the other half read those same specific claims in all three reports. Later, we asked subjects to tell us how confident they were that certain claims made in the eyewitness reports really happened during the crime (eg, How confident are you that the thief read a Newsweek magazine?).

What did we find?

This study had two important findings. First, regardless of whether one or three different eyewitnesses ostensibly wrote the reports, subjects who read claims repeated across all three reports were more confident about the accuracy of the claims than subjects who read those claims in only one report. Second, when the claims were repeated, subjects were just as confident about the accuracy of a single eyewitness as the accuracy of multiple eyewitnesses. These findings tell us that repeated claims were relatively more fluent than unrepeated claims – making people more confident simply because the claims were repeated, not because multiple eyewitnesses made them.

Would a name influence your evaluations of a person?

Your immediate response might be that it shouldn’t – people’s names provide no objective information about their character. We hope that we make decisions about others by recalling information from memory and gathering evidence about a person’s attributes. Indeed, research shows that when we have knowledge about a topic, a person or a place, we do just that – use our knowledge to make a judgement – and we can be reasonably accurate in doing so (eg, Unkelbach, 2007).

But when we don’t know a person and we can’t draw on our knowledge, we might be influenced by their name. As we have described, when people cannot draw on memory to make a judgement, they unwittingly turn to tangential information to make their decisions. Therefore, when people evaluate an unfamiliar name, tangential information, like the complexity of that name, might influence their judgements. More specifically, we thought that unknown names that were phonologically simple – easier to pronounce – would be judged more positively on a variety of attributes than names that were difficult to pronounce.

How did we answer our research question?

We showed people 16 names gathered from international newspapers. Half of the names were easy to pronounce (eg, Lubov Ershova), and half were difficult to pronounce (eg, Czeslaw Ratynska). We matched the names on a number of factors to make sure any differences we found were not due to effects of culture or name length. So for example, people saw an easy and difficult name from each region of the world and names were matched on length. Across three experiments, we asked subjects to judge whether each name was familiar (Experiment 1), trustworthy (Experiment 2), or dangerous (Experiment 3).

What did we find?

Although the names were not objectively different from each other on levels of familiarity, trustworthiness, or danger, people systematically judged easy names more positively than difficult names. Put another way, people thought that easy-to-pronounce names were more familiar, more trustworthy, and less dangerous than difficult-to-pronounce names. So although we would like to think we would not evaluate a person based on their name, we may unwittingly use trivial information like the phonological complexity of a name in our judgements.


Why is it that people believe these weird things and make mistakes when evaluating information? Our research suggests that decorative photos, repetition of information, and a person’s name all influence the way people interpret information. More specifically, decorative photos lead people to think information is more credible; repetition leads mock-jurors to be more confident in eyewitness statements – regardless of how many eyewitnesses provided the statements; and an easy-to-pronounce name can lead people to evaluate a person more positively.

Relying on feelings of fluency can result in sensible, accurate decisions when we are evaluating credible facts, accurate eyewitness reports, and trustworthy people. But the same feelings can lead people into error when we are evaluating inaccurate facts, mistaken eyewitnesses, and unreliable people. More specifically, feelings of fluency might lead us to think false facts are true, be more confident in inaccurate eyewitness reports, and more positively evaluate an unreliable person.

A common finding across our studies is that the effect of fluency was specific to situations where people had limited general knowledge to draw on. In the real world, we might see these effects even when people have sufficient knowledge to draw on. That is because we juggle a lot of information at any one time and we do not have the cognitive resources to carefully evaluate every piece of information that reaches us – as a result we may turn to feelings to make some decisions. Therefore it is inevitable that we will make at least some mistakes. We can only hope that our mistakes are comical rather than tragic.

The authors thank Professor Maryanne Garry for her invaluable guidance and her inspiring mentorship on these and other projects.


Alter, A, Oppenheimer, D 2006: Proc. Nat. Acad. Sci. 103, 9369-9372.
Alter, A, Oppenheimer, D 2008: Psychonomic Bull. & Rev. 15, 985-990.
Alter, A; Oppenheimer, D 2009: Personality and Soc. Psych. Rev. 13, 219-236.
Clark, SE; Wells, GL 2008: Law & Human Behavior 32, 406-422.
Dihydrogen Monoxide – DHMO Homepage 2010: dhmo.org
Freed, G; Clark, S; Butchart, A; Singer, D; Davis, M 2010: Pediatrics, 125, 653-659.
Hasher, L; Goldstein, D; Toppino, T 1977: J. Verbal Learning & Verbal Behavior 16, 107-112.
NZ Herald 2007: www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10463579
Rao, A; Monroe, K 1989: J. Marketing Research, 26, 351-357.
Reber, R; Schwarz, N 1999: Consciousness & Cognition 8, 338-342.
Tversky, A; Kahneman, D 1973: Cognitive Psych. 5, 207-232.
Unkelbach, C 2007: J. Exp. Psych.: Learning, Memory, & Cognition 33, 219-230.

NearZero Inc: A sadly prophetic company name

Many people lost a lot of money investing in non-existent data compression software because well-established principles of information theory were ignored. This article is based on a presentation to the 2010 NZ Skeptics conference.

In the late 1990s, Nelson man Philip Whitley claimed to have invented a new data compression technology worth billions of dollars. Over the next decade money was raised on a number of occasions to develop this technology, culminating in a company called NearZero Inc raising $5.3 million from shareholders. According to a well-established body of theory, Whitley’s claims were obviously false. Unsurprisingly, within a few months of NearZero’s formation, it was in liquidation, with its funds gone.

I thought the saga of NearZero could be of interest to skeptics as it involves claims that were clearly false according to well-established theory, and those claims cost investors a lot of money.

But first, a quick introduction to how data is stored by computers, and how that data can be compressed. Computers store data digitally, using the digits 0 and 1 in a binary code. A piece of storage capable of storing a 0 or a 1 is known as a bit (short for binary digit). With 1 bit we can store two values: 0 and 1. While this might be enough to store a simple data value (such as whether someone is male or female), for most pieces of data we need to store a larger range of values. With each bit we add, the number of possible values doubles; by the time we get to 8 bits we have 256 different values. The byte (a group of 8 bits) has proved to be a very useful unit of storage; storage sizes are usually quoted in bytes.

Character data is usually stored 1 byte per character (in European languages). Lower case ‘a’ is represented as 01100001, for example. A picture is a grid of dots. Each dot is called a pixel, and usually 4 bytes are used to encode the colour of a pixel. Standards are needed so that everyone interprets bit patterns in the same way.
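Both facts are easy to check for yourself. A minimal Python sketch (my illustration, not part of the original article) showing that each added bit doubles the number of representable values, and that lower case ‘a’ really is stored as 01100001:

```python
# Each extra bit doubles the number of representable values;
# by 8 bits (one byte) we reach 256 distinct values.
for bits in (1, 2, 4, 8):
    print(bits, "bits ->", 2 ** bits, "values")

# In ASCII (and UTF-8), lower case 'a' is the byte 01100001 (97 in decimal).
print(format(ord("a"), "08b"))
```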

Data representation methods are often chosen based on how easy it is to process the data. Often, the same data can be stored more compactly at the cost of making it harder to process. The process of translating a piece of data into a more compact form (and back again) is known as data compression. Compressing data allows us to put more data onto a data storage device, and to send it more quickly across a communications link. The size ratio between the compressed version and the uncompressed version is known as the compression ratio.
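As a concrete illustration of the compression ratio, here is a small Python sketch using the standard-library zlib module (my example; the tests reported later in this article used gzip). Repetitive data compresses to a small fraction of its original size:

```python
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 100
compressed = zlib.compress(original)

ratio = len(compressed) / len(original)
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"(compression ratio {ratio:.1%})")
```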

In ‘lossless’ compression, the decompressed data is always identical (bit for bit) to the original data we started with. A compression method designed to work with any type of data must be lossless.

In ‘lossy’ compression, we are willing to accept small differences between the original data and the decompressed data. In some situations we do not want to risk data being changed by compression, and lossless methods must be used. With images and sound, however, small changes that are difficult for humans to detect are tolerable if they lead to big space savings. The JPG image format and the MP3 audio format have lossy compression methods built into them. Users can choose the tradeoff between quality and space.

A question of pattern

For it to be possible to compress data, there must be some pattern to the data for the compression method to exploit. Letter frequencies in English text are well known, and could be the basis for a text compression method. We can do better if we take context into account. The most frequent letter is ‘e’ (12.7 percent), but if we know the next letter is the first in a word then ‘t’ is the most likely (16.7 percent). If we know the previous letter was ‘q’ then the next will almost certainly be ‘u’. A compression method that takes context into account will do better than one that doesn’t, as the context-based one will be a better predictor of the next symbol.
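The ‘q is almost always followed by u’ observation is easy to verify on any sample of English. A toy Python sketch (the sample sentence is my own, not from the article):

```python
from collections import Counter

sample = "the quick queen quietly quit the quiz"
letters = [c for c in sample if c.isalpha()]

# Unconditional letter frequencies...
print(Counter(letters).most_common(3))

# ...versus a context-based prediction: what follows 'q'?
after_q = Counter(b for a, b in zip(letters, letters[1:]) if a == "q")
print(after_q)  # every 'q' in the sample is followed by 'u'
```

A compressor built on the conditional counts would assign ‘u’ after ‘q’ a very short code, which is exactly the advantage of context-based methods described above.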

Likewise images are not random collections of coloured dots (pixels). Rather, pictures typically include large areas that have much the same colour. Sequences of frames in a movie often differ little from each other, and this can be exploited by compression methods.

The effectiveness of a compression method depends on how predictable (or random) the data is, and how good the compression method is at exploiting whatever predictability exists. If data are random, then no compression is possible; in that case compression methods can actually create a compressed file larger than the original, because the methods have some overhead. A compressed file is much more random than the uncompressed version, because the compression method has removed the patterns that were present in the original.
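Both points can be demonstrated in a few lines of Python using the standard zlib module (a sketch of my own, assuming os.urandom supplies data with no exploitable pattern):

```python
import os
import zlib

patterned = b"abcd" * 2500       # 10,000 highly repetitive bytes
random_data = os.urandom(10000)  # 10,000 bytes with no pattern to exploit

print(len(zlib.compress(patterned)))    # tiny: the repetition is exploited
print(len(zlib.compress(random_data)))  # slightly MORE than 10,000 bytes
```

The random input comes out a little larger than it went in, because the compressed format carries a small fixed overhead and there is no redundancy to pay for it.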

In many branches of computer science it is important to establish the best possible way in which something could be done, to serve as a benchmark for current methods. In information theory, Shannon’s entropy is a measure of the underlying information content of a piece of data. A 1000-character extract from a book has more information content than 1000 letter ‘x’ characters, even though both might be represented using 1000 characters. To quote Wikipedia: “Shannon’s entropy represents an absolute limit on the best possible lossless compression of any communication”. Modern compression algorithms are so good that “the performance of existing data compression algorithms is often used as a rough estimate of the entropy of a block of data”. In other words, it is not possible to achieve large improvements over current compression techniques.
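For a single-character (zero-order) model, Shannon entropy is simple to compute. A Python sketch of my own (real English has lower entropy than this model suggests, because context narrows the choices):

```python
import math
from collections import Counter

def entropy(text: str) -> float:
    """Zero-order Shannon entropy, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy("x" * 1000))        # 0.0: a run of 'x' carries no information
print(entropy("abcdefgh" * 125))  # 3.0: eight equally likely symbols
```

The 1000-character run of ‘x’ scores zero bits per character, matching the intuition above that it carries far less information than 1000 characters from a book.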

The claims

It is time to have a look at Philip Whitley’s claims. He claimed that he could compress (losslessly) any file to under seven percent of its original size, but this is not credible. Compression potential varies widely depending on patterns in the original file. Many files are already compressed, so have little potential for further compression. Even for uncompressed files, seven percent is achievable only in exceptional cases (the entropy of English text means the best achievable for ordinary text is around 15 percent).

If it were possible to compress any file to less than seven percent of its original size, then it would be possible to compress any file down to a single bit. The first compression takes you down to under seven percent of the original size. Given that Whitley claimed his technique worked on any file, we could then compress the compressed file, reducing it to less than 0.5 percent of the original size, and so on until almost nothing remained.

Initial tests of Whitley’s technology were done on one computer. This made it easy to cheat. The ‘compression’ program can easily save a copy of the original file somewhere on disk as well as producing the ‘compressed’ version. Then, when the compressed version is ‘expanded’, the hidden copy can be restored. Whitley remained in control of the equipment, ostensibly to prevent anybody from stealing his software.

Critical assessment

Philip Whitley’s company Astute Software paid Tim Bell (an associate professor of computer science at the University of Canterbury) for an opinion on the technology. Tim Bell has an international reputation in the field of data compression; Microsoft has used him as an expert witness, and he has co-authored two well-known compression textbooks. An irony of the NearZero case is that New Zealand has more expertise in this field than you might expect for a small country (the co-authors of both textbooks are either New Zealand-born or live in New Zealand).

Tim Bell’s views were blunt: “The claims they were making at the time defied what is mathematically possible, and were very similar to claims made by other companies around the world that had defrauded investors.” One of his criticisms was that the tests were not two-computer tests. In such a test the compression is performed on one computer and the compressed file is transferred to a second computer, where it is decompressed. A two-computer test prevents the hidden-file form of cheating. It is reasonably easy to monitor the network cable between two computers, to check that the original file is not sent in addition to the compressed file (though the tester must be alert for other possible communication paths, such as wireless networks).

A two-computer test was subsequently conducted, and described in a 14-page report by Titus Kahu of Logical Networks. At first glance the report looks impressive, but on closer reading flaws quickly emerge. The two computers used were Whitley’s. The major flaw was that Kahu was limited to testing a set of 24 files selected by Whitley. The obvious form of cheating this allows is that the set of files can be placed on the second computer before the tests. Then all that the first computer needs to do is to include in the ‘compressed’ data details of which file is required (a number between 1 and 24 would suffice). The receiving computer can then locate the required file in its hiding place.

Titus Kahu did check the receiving computer to see if files with the names of those used in the test were present, but you would expect that someone setting out to deceive would at the very least rename the files.

The report makes for interesting reading. The files were of a number of types, including text files, pictures in JPG and GIF formats, MP3 audio files, and tar files. A tar file is a way of collecting a number of files together into a single file (zip files in Windows serve the same purpose).

One would expect text files to compress well, but JPG, GIF and MP3 files to compress poorly (they are all compressed formats). How well a tar file will compress depends on the files that it contains.

A simple comparison

To get some data to compare with the results in the report, I ran some tests using gzip (a widely used lossless compression method) on some text, tar and JPG files. I managed to locate two of the tar files used in the Titus Kahu tests: Calgary.tar and Canterbury.tar. Gzip achieved savings of 67.24 percent and 73.80 percent (so Calgary.tar was compressed to about one third of its original size, and Canterbury.tar to about one quarter). I also located three text files that were later versions of text files used by Kahu: on these gzip achieved savings of 63.08 percent, 62.05 percent, and 70.77 percent. Finally, I compressed a JPG file with gzip, and achieved a saving of just 2.34 percent.
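For anyone wanting to reproduce this sort of test, the ‘saving’ figures are simply 100 * (1 - compressed size / original size). A Python sketch using the standard gzip module (the sample data here is mine, not one of the Kahu test files):

```python
import gzip

def percent_saving(data: bytes) -> float:
    """Space saved by gzip, as a percentage of the original size."""
    return 100 * (1 - len(gzip.compress(data)) / len(data))

sample = b"compression methods exploit patterns in the data " * 200
print(f"{percent_saving(sample):.2f}% saved")
```

Running the same function over real text, tar and JPG files gives savings of the same order as those quoted above: substantial for text, negligible for already-compressed formats.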

There are no great surprises in my results. There was quite a variation in the compression achieved, even amongst files of the same data type (the three text files for example). Compressing a JPG file gave little extra compression (not enough to make it worth further compression with gzip).

By comparison, savings in the report were 93.52 percent for four files and 93.53 percent for the other 20. I suspect that the difference in the fourth significant figure is due to rounding the file size to the nearest byte. These results are not remotely believable. The compression achieved is too good to be true even for data that compresses well (such as text), let alone for data formats that are already compressed. The incredible consistency of the compression achieved is also not credible.


Having looked at some background, it is time to look at the chain of events that culminated in NearZero Inc’s rise and fall. Philip Whitley’s early forays into business were not promising. In 1995 he was adjudged bankrupt (discharged in 1998). In 1997 he became a shareholder in Nelic Computing Ltd, which went into liquidation in 1999, owing unsecured creditors $70,000.

In 1999 Philip Whitley formed a software company (Astute Software) with a number of Nelson investors (who put in $292,000). Astute worked on a number of projects, and developed the data compression technology. In early 2001 the ‘one-computer’ tests were done, and Tim Bell’s opinion was sought. In mid 2001 the Logical Networks ‘two-computer’ tests were done by Titus Kahu. In 2002, a Mr Cohen (an investor) asked for a (long-awaited) copy of the compression technology; he was told by Philip Whitley that the only copies had been accidentally burnt when he was cleaning out his safe. Later in 2002 work stopped due to Whitley becoming ill.

In 2005 Whitley resumed work on the technology. Some of the original investors put in a further $125,000. On 10 July 2006, NearZero was incorporated in Nevada, with Philip Whitley as president, treasurer and sole director. Later in 2006 Titus Kahu became engineering director for Syntiro (a Philip Whitley company doing development work for NearZero) on the generous salary of $250,000 a year.

In February to April 2007 NearZero share purchase meetings were held in Auckland, Wellington and Christchurch. A total of 490 investors invested $5.3 million. The investment opportunity brochure forecast that the near-term NearZero market capitalisation would be US$482 billion to $780 billion, and was expected to exceed one trillion US dollars. Note that the largest company in the world, Petrochina, is a US$405 billion company, and the largest US companies, including Exxon Mobil, Apple and Microsoft, are in the 200 to 300 billion bracket.

Things quickly went wrong. In May 2007, the Securities Commission started investigating the legality of the NearZero share offer (there was no registered prospectus, for example). Also in May, Price Waterhouse Coopers (PWC) were appointed as interim liquidators for NearZero, and moved to sell houses and cars. In June, PWC said $218,000 went to Richmond City Football Club, $523,000 on vehicles, $852,000 on property, $683,000 to US-based company secretary Sherif Safwat, and $270,000 on household expenses. They found little evidence of money spent developing compression technology.

In June Whitley invited investors to contribute money to fund legal action to prevent liquidation. Also in June PWC found no evidence of any compression technology. Whitley claimed to have wiped it; PWC found no evidence of use of wiping software.

Then in July Whitley made some rather curious statements in an affidavit sworn in relation to the liquidation: “I will however say that it isn’t binary and therefore not subject to Shannon’s Law of algorithmic limitation.” If there was a real technology that was not based on binary it is hard to see it being of widespread use in computer and communication systems that store, transmit and process all data in binary. The affidavit continues: “Shannon’s Law is attached to this affidavit as Annexure “Y” and it can be seen that this is a 1948 paper”. Claude Shannon founded information theory, which is the basis of how digital computers represent data (according to one tribute, the digital revolution started with information theory). Shannon coined the term bit, and introduced the concept of information entropy referred to earlier. It is interesting that Shannon’s fundamental research results are dismissed as being in “a 1948 paper”.

He also stated: “In regard to the item 3/ I have never asserted that the technology is based on an algorithm”. In computer science, an algorithm is simply a description of how to do something in a series of steps. A common analogy is to say that a cooking recipe is an algorithm for preparing food. If Philip Whitley’s compression technology is not based on an algorithm then that implies it cannot be described as a sequence of steps, and therefore cannot actually be implemented!

In November, Associate Judge Christiansen ordered NearZero’s liquidation, and ruled that the compression technology had no value. Then in August 2008 Whitley faced the much more serious charge of making fraudulent claims about his technology.

In September 2008 all shareholders were given the option of keeping their shares or getting their money back. They proved to be remarkably loyal: $3.1m voted to stay in; $2.2m voted for reimbursement. I’m not sure whether there was any money to reimburse those who voted that way (probably not). In August 2009 Philip Whitley was convicted and fined for making allotments without having a registered prospectus.

The trial

In February 2010 the fraud trial began in Nelson. Whitley was charged with making a false statement as a promoter between July 2006 and May 2007. There were many sad stories in the Nelson Mail about wasted money and time (and resulting stress). Some of the information to emerge in the trial:

  • Philip Whitley hired a team of seven bodyguards headed by “Oz” (Oswald Van Leeuwen), who was on a salary of $300,000. This level of security was needed because of the (supposed) enormous value of the compression technology.
  • According to Sherif Safwat, Philip Whitley believed a Chechnyan hit team had arrived in New Zealand on a Russian fishing boat.
  • Philip Whitley: “The [security guards] said that the Russians were trying to penetrate and we ended up with security guards living in my house, camped on the floor … I couldn’t go out of the house without having security … it just built up inside me to the point where I just lost it from a point of paranoia.”

In his summing up on May 27, the defence lawyer said:

  • “Whitley had a distorted view of reality which led him to believe his data compression technology was real.”
  • “… [we are] not challenging the evidence of … Prof Bell that Whitley’s claimed invention was mathematically impossible.”

In July Philip Whitley was found guilty on two counts of fraud (but maintains he still has his inventions).

On August 10, 2010, he was sentenced to five years and three months in prison.

The NearZero mess should not have happened. New Zealand has more researchers in this field than you would expect for a country of this size. One of the most prominent, Tim Bell, clearly stated in 2001 that the claims were false. However, investors still committed (and lost) millions of dollars over a number of years. Compression claims are easily tested (much more easily than medical claims, for example). Whitley refused to allow his technology to be independently tested using the excuse of protecting his intellectual property. Many people have been harmed, especially the investors. Moreover, this type of case is not good for the reputation of the IT industry, which struggles to attract investment.

I was asked at the conference how non-technical NearZero investors could have protected themselves. I had no answers at the time, but have given it some thought since. Some things they could have done:

  • Google the names of the company principals.
  • Check to see how the predicted market capitalisation compared to that of existing companies. Finding that the lowest estimate would make NearZero the biggest company in the world should have led to some scepticism.
  • Google the terms ‘data compression’ and ‘scam’.

Much of the information in this article is based on the Nelson Mail’s extensive reporting of the issue, for which they are to be congratulated. Another good source of information was nearzero.bravehost.com, a website set up by and for NearZero’s shareholders in 2007 in response to the liquidation of NearZero. An article by Matt Philp on Philip Whitley and NearZero appeared in the October 2010 issue of North & South.

Oxygenated food for the brain?

Alison Campbell finds some claims about raw foods hard to swallow.

I was reading a couple of articles about ‘raw foods’ today. This is ‘raw foods’ as in ‘foods that you don’t heat above 40°C in processing them.’ It’s also, in this case, a vegetarian diet. (I do rather enjoy vegetarian food, but I don’t think I could eat nothing but, all the time; I like meat too much.) Anyway, what caught my eye wasn’t so much the diet programme itself but the misuse of science to promote it. That did rather get my goat (or should that be my broccoli?).

Apparently you should get your kids to eat their greens (along with the rest of the diet) by telling them that plants do this wonderful thing: they turn sunlight into chlorophyll and – when you eat it – it will give you extra oxygen. Sigh… This concept was repeated in the second article, which told me that raw (but not cooked) foods are ‘oxygenated’ and thus better for your brain, which needs to be fully oxygenated to work properly.

Well, yes, and so do all your other bits and pieces, and they don’t get the oxygen from food. As Ben Goldacre once said, even if chlorophyll were to survive the digestive process and make it through to the intestine, it needs light in order to photosynthesise, quite apart from the fact that you don’t normally absorb oxygen across the gut wall. And it’s kind of dark inside you.

The second shaky claim related to digestive enzymes. Because raw foods are ‘alive’, we’re told, they are full of enzymes, and eating them will help you to digest your meals better.

Er, no. First, because when said enzymes – being proteins – hit the low pH environment of your stomach they are highly likely to be denatured. This change in shape means that they lose the ability to function as they should, and in fact they’ll be chopped up into amino acids like any other protein in your food, before being absorbed and then used by your cells to make their own enzymes.

And second – the raw foods diet is plant-based. Yes, plants and animals are going to have some enzymes in common. I’d expect that those involved in cellular respiration and DNA replication/protein synthesis would be very similar, for example, because these are crucial processes in any cell’s life and any deviations in form and function are likely to be severely punished by natural selection. But we already have those enzymes; they’re manufactured in situ as required. In other words, even if the plant enzymes somehow made it into cells intact and capable of functioning, they’d be redundant.

However, with a very few exceptions, plants aren’t in the habit of consuming other organisms so, in regard to plant cells being a good source of the digestive enzymes required for the proper functioning of an omnivore’s gut – no, I don’t think so. No.

Some might ask, why on earth do I bother about this stuff? After all, it’s not doing any harm. But the thing is – science is so cool, so exciting; it tells us so much about the world – why do people have to prostitute it in this way? Kids (and others) are fascinated by the way their bodies’ organ systems work, and I can’t see why there seems to be a need to provide ‘simple’ – and wrong! – alternative ‘explanations’ when the real thing is so wonderful.

The Unfortunate Experiment: Revisiting the Cartwright Report

This article is a response to ‘Truth is the daughter of time, and not of authority’: Aspects of the Cartwright Affair by Martin Wallace, NZ Skeptic 96.

The Cartwright Inquiry1 was held after the publication of “An Unfortunate Experiment at National Women’s” in Metro magazine in June 1987. The events leading up to the publication of the article and the findings of the subsequent inquiry have been contested ever since.

The inquiry heard from 67 witnesses, many of them doctors, as well as 84 patients and relatives, and four nurses. In addition, 1200 patient records were reviewed, with 226 used as exhibits. The final report released in August 1988 has had a long-lasting impact. It recommended many changes in the practice of medicine and research, including measures designed to protect patients’ rights and a national cervical screening programme. These have since been implemented. The Medical Council announced in 1990 that four doctors were to face disciplinary charges resulting from the inquiry’s findings of disgraceful conduct and conduct unbecoming a medical practitioner. Charges against Dr Herbert Green were dropped due to ill health.

The report of the Committee of Inquiry has withstood many challenges, including judicial reviews and many articles alleging its findings to be flawed. Yet there have been allegations of a miscarriage of justice, charges of a witch-hunt, even a feminist conspiracy.

Where does this leave Dr McIndoe and others who had mounting concerns for so many years? Why did so many women develop cancer? In this article I will explore the findings of the Cartwright Inquiry, its context, the research and the criticisms, and attempt to find a more nuanced understanding of the “unfortunate experiment” and its ongoing effects. Page numbers in parentheses refer to pages in the Cartwright Report. CIN3 and CIS are interchangeable terms for a lesion of the cervical epithelium which can be a precursor to invasive cancer.

The Findings of the Inquiry

The report found that Green, rather than developing a hypothesis, aimed to prove a point (p 21) that even at the time was known not to be the case. A 1961 compilation of studies from Paris, Copenhagen, Stockholm, Warsaw, and New York showed CIS progressed to invasive cancer in 28.3 percent of cases (p 23). As at 1958 the official policy was “… treatment of carcinoma of the cervix Stage 0, [CIS] should be adequate cone biopsy … provided the immediate follow-up is negative and … the pathologist is satisfied that the cone biopsy has included all the carcinomatous tissue” (p 26). Standard treatment of the time involved excising all affected tissue and the ‘conservative’ treatment of conisation was in use well prior to 1966.

Green’s initial proposal stated “… It is considered that the time has come to diagnose and treat by lesser procedures than hitherto, a selected group of patients with positive (A3-A5) smears. Including the four 1965 cases, there are at present under clinical, colposcopic, and cytological observation, 8 patients who have not had a cone or ring biopsy. All of these continue to have positive smears in which there is no clinical or colposcopic evidence of invasive cancer”… The minutes then record that “… Professor Green said his aim was to attempt to prove that carcinoma-in-situ (CIS) is not a premalignant disease”… (p 22). This appeared to come about because of concern about unnecessarily extensive surgery for CIS between 1949 and 1962. During this period, some centres were beginning to use cone biopsy as effective treatment; however there were limitations to its use (p 27).

There were some questions over whether the work was a research project. The inquiry concluded this was the case and that a research protocol, however flawed, was put in place (p 69). Green published in peer-reviewed journals on his hypothesis and findings. By 1969, three cases of invasive disease had occurred in patients with positive cytology monitored for more than a year, and this should have made it clear that following patients with persistent CIS was unsafe (p 52).

Green then explained those patients by concluding that they’d had invasive cancer that was missed at the outset. The report contends this was dangerous to the patients as it demonstrated that the proposal was incapable of testing the hypothesis. These patients were reclassified by Green and removed from the study (p 55). In addition, patients over the age of 35 were included in the research in breach of the protocol (p 49).

There were many subsequent issues, including lack of patient consent (p 136). Patients also had to return for repeated tests and other invasive procedures, often receiving general anaesthetics in the process (pp 42-49). Green also assembled a collection of cervices from foetuses and stillborn infants, and another of baby uteri in wax, for research which was later abandoned. This did not appear to comply with the Human Tissue Act (1964), as no consent was obtained from the parents of the stillborn infants (p 141).

As part of an earlier 1963 trial to test whether abnormal cytology in women later developing CIS or invasive cancer was present at birth (pp 34 & 140), 2,244 new-born babies had their vaginas swabbed without formal consent from the parents (there was a decision to abandon this trial soon after it started but this wasn’t communicated to nursing staff until 1966).

Procedures such as vaginal examinations and IUD insertions/removals on hysterectomy cases were performed by students without patient knowledge or consent while they were under anaesthetic (p 172). There was a further study on carcinoma of the cervix treatment, where patients either had radiotherapy alone or hysterectomy and radiation (p 170). The method of randomisation was by coin toss.

The Research

The idea that patients were divided into two experimental groups arose from McIndoe et al (1984)2. The patients were divided retrospectively into two groups which overlapped strongly but not completely with groups defined by Green, that he called “special series”. In his 1969 paper, cited in the report (p 40-41) he stated: “The only way to settle the question as to what happens to carcinoma in situ is to follow adequately diagnosed but untreated lesions indefinitely … it is being attempted at NWH by means of 2 series of cases. (I) A group of 27 women … are being followed, without ‘treatment’, by clinical, colposcopic, and cytologic examination after initial histological diagnosis of carcinoma in situ … has been established by punch biopsy … (II) A group of 25 women who have had a hysterectomy (4 for cervical carcinoma in situ) and who now have histologically-proven vaginal carcinoma in situ, has been accumulated …” This was done semi-randomly, with cases presenting themselves fortuitously.

The outcome for the group of 25 who were included in the punch biopsy “special series” was summarised in the McIndoe et al (1984) paper. Nine out of 10 women who were monitored with continuing positive smears developed invasive cancer. Only one out of 15 women who had normal follow-up cytology later developed invasive cancer. While Coney and Bunkle may have made a mistake, it’s clear the judge didn’t. The report states: “Green’s 1966 proposal was not a randomised control trial, but it was experimental research combined with patient care” (p 63).

Green’s interpretation of the data in his 1974 paper is suspect: he concluded that the progression rate was 7-10/750 (0.9 to 1.3 percent), or 6/96 (6.3 percent) of ‘incompletely treated’ lesions (p 54). These were explained by suggesting that either invasive cancer was missed at the start, or over-diagnosed at the end. Dr Jordan (expert witness) deemed this interpretation incorrect as of the 750 cases, 96 had continuing positive cytology, meaning that the other 654 patients could be considered free of disease. Of that 96, 52 patients had not been assessed further, making it impossible to know whether or not this group already had unsuspected invasion. Of the 44 patients remaining with ongoing carcinoma in situ who had more investigations, seven were found with invasive carcinoma. The incidence of known progression was therefore 7/44 (16 percent), which approximates McIndoe et al (1984) findings. This means that the proportion of invasive cancer cases in those inadequately treated was much higher compared with those who had returned to negative cytology, even before any cases where slides were re-read and excluded are considered.
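Dr Jordan’s correction is straightforward arithmetic, and can be checked directly. The figures below are taken from the report as quoted above; only the percentage formatting is mine:

```python
# Recomputing the progression figures from the inquiry report (p 54).
total_cases = 750
continuing_positive = 96        # cases with continuing positive cytology
not_assessed = 52               # of those 96, never assessed further
assessed = continuing_positive - not_assessed   # 44 assessed further
invasive = 7                    # invasive carcinoma found among the 44

# Green's denominator (all 750 cases) dilutes the rate; the correct
# denominator is the 44 patients actually assessed.
green_rate = invasive / total_cases     # ~0.9 percent
jordan_rate = invasive / assessed       # 7/44, ~16 percent
print(f"{green_rate:.1%} vs {jordan_rate:.1%}")
```

The choice of denominator is the whole dispute: dividing by every case, including the 654 free of disease and the 52 never assessed, makes persistent CIS look far safer than the assessed cases actually showed.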

McIndoe et al (1984) covered the follow-up data for 948 patients with a histological diagnosis of CIS who had been followed for a minimum of five years; a further paper in 1986 dealt with CIS of the vulva. The authors used the same method as Green to group women by cytology after diagnosis and treatment, but with the correct denominators and the original diagnoses. Patients who were diagnosed with invasive cancer within one year were excluded to avoid the possibility the cancer had been missed initially. The management was cone biopsy or amputation of the cervix in 673 patients, with 250 managed by hysterectomy. Twenty-five women had biopsy alone: punch biopsy (11), wedge preceded by punch biopsy (7) and wedge biopsy alone (7).

Twelve out of 817 (1.5 percent) of group 1 patients developed invasive cancer. Given the lengthy follow-up with negative cytology for group 1 patients, the authors concluded these represented the development of new carcinomas. There were marked differences in the completeness of excision between the two groups, and the second group shows markedly different results, with 29/131 (22 percent, a 24.8-fold higher chance) of those with positive cytology developing invasive cancer. At 10 years this was 18 percent, rising to 36 percent after 20 years, irrespective of the initial management or histologic completeness of excision. This needs to be explained, as those figures strongly suggest the progression of CIS to invasion when it is, and was, a totally curable lesion.

The answer is that a prospective investigation, as done by Green, has to establish that invasive disease is not present, while conserving affected tissue that is required for later study. The argument has been posed that women in the second group did get cone biopsies and hysterectomies. This ignores the fact that while many women were treated with various procedures, there was evidence of continuing disease, demonstrating that the intervention was inadequate. This was not followed up, posing a high risk of development of invasive disease.

This differs from group 1 patients, who were successfully treated at the outset. It’s pertinent to point out that the Cartwright Report did not rely on this study (or the Metro article) to reach its conclusions, but on review of patient records.

There have been two follow-up studies. McCredie et al (2008)3 examined medical records, cytology and histopathology for all women diagnosed with CIN3 between 1955 and 1976, whose treatment was reviewed by judicial inquiry. This paper gave a direct estimate of the rate of progression from CIN3 to invasive cancer. For 143 women that were managed by only punch or wedge biopsy the cumulative incidence was 31.3 percent at 30 years and 50.3 percent in a subgroup who had persistent disease at 24 months.

The cancer risk for 593 women who received adequate treatment and who were treated conventionally for recurrent disease was 0.7 percent at 30 years. These findings support McIndoe et al (1984) and extend the period of follow-up.

McCredie et al (2010)4 described the management and outcomes for women during the period 1965-74 and made comparisons with women diagnosed in 1955-64 and 1975-76. This showed that women diagnosed with CIN3 in 1965-74 were less likely to have treatment with curative intent (51 percent vs 95 percent and 85 percent), had more follow-up biopsies, were more likely to have positive cytology during follow-up and positive smears that were not followed by curative treatment within six months, and had a higher risk of cancer of the cervix or vaginal vault.

Those women initially managed by punch or wedge biopsy alone in the period 1965-74 had a cancer risk 10 times higher than women treated with intention to cure. This was despite the 1955-64 group being largely unscreened, which would have delayed diagnosis. This study is important as it shows the medical experience of the women, who were subjected to many interventions that were not meant to treat but rather to monitor.

Whistle blowing

Scientific misconduct happens, and for those trying to address it the risks are high. Brian Martin5 looked at several cases, and stated: “In each case it was hard to mobilize institutions to take action against prestigious figures. Formal procedures, even when invoked, were slow and often indecisive.”

McIndoe and others encountered similar difficulties and ultimately failed to get Green’s proposal reviewed. The concept of “clinical freedom” (p 127), where the doctor was the arbiter of the best course of action for the patient, was one major issue to emerge from the report. Colleagues tended to be very reluctant to intrude upon this, and this meant that the proposal could continue with little oversight or intervention. McIndoe had mounting concerns, particularly after 1969, which were disregarded or treated lightly.

These concerns were shared by the pathologist-in-charge, Dr McLean, and were raised internally with the Medical Superintendent, Dr Warren, who consulted the Superintendent-in-Chief, Dr Moody; an internal working party was set up to look at the issue in 1975. Twenty-nine cases that had developed invasive disease were referred to it; however, only 13 were examined, and, having set up its own terms of reference, the working party considered only whether the protocol had been adhered to, disregarding concerns about patient safety (p 83).

The 1966 proposal effectively ceased when McIndoe withdrew colposcopic services and Green reverted to cone biopsy in most new cases (p 88), but it was never formally terminated. While Green himself did not take any steps to prevent the review of records by McIndoe and colleagues, Bonham did, and wrote a letter to the Medical Superintendent (p 92).

There are some important lessons to be learned from this, including that those with the authority to deal with such situations should make every effort to form a balanced view, assess the claims fairly, and give the claimant a fair hearing.

The potential risks of Green’s proposal outweighed any benefits such as avoiding hysterectomy or cone biopsy. Invasive cancer could not be ruled out because there were poor safeguards against the risk of progression. This was unethical from the outset, regardless of the issue of informed consent. In addition, patients that developed invasive disease had their slides reclassified and were removed by Dr Green from the study. This would be considered research misconduct then and now as it manipulated the data.

It does not matter if the initial motivations were sincere; they ultimately fail on these points. This proposal had a very human cost. Moreover, Green’s views had long-term effects, including influence on undergraduate and postgraduate medical students, and support for the attitude that cervical screening was not worthwhile. This ‘atypical’ viewpoint was also promoted in the scientific literature and in the press, creating confusion within the medical community and among the public.

It can be incredibly hard to admit our failings and let go of old loyalties. In the aftermath of the report many doctors objected to cervical screening, ‘unworkable’ consent forms and the intrusion of lay committees on practice6. It’s true this had negative effects on the perception of doctors overall, particularly in regard to practices that were widespread in hospitals at the time, and there were times that unfair criticisms were aired. This impacted on the nursing profession as well, for nurses are meant to be patient advocates.

This was also about power. The really unfortunate thing is that medical responsibilities to patients are almost totally ignored in the midst of the argument, when they should be brought to the forefront. Likewise respect, justice and beneficence were lacking for the patients involved. No doctor raised concerns about the lack of consent, even though from the 1950s there was the growing expectation that this be sought, particularly with participants in research.

The Medical Association working party that examined this stated that it was “regrettable that the trial deteriorated scientifically and ethically and did not change as scientific knowledge advanced or as adverse results were observed”7. They found it deplorable that patients involved did not know they were part of a trial, and that it took a magazine article for it to be investigated.

Unfortunately, instead of addressing this and examining whether Dr Green made any errors or misinterpretations himself, the findings in McIndoe et al (1984) and other papers were not accepted. There is the unfortunate implication that, rather than there being mounting and valid concerns over decades, Green was unfairly toppled and the resulting inquiry was a whitewash.

The report couldn’t have been written without the assistance of the medical community as expert witnesses and advisors. It’s not surprising that there would be loyalty for a colleague, but perhaps instead of attempting to rehabilitate Green it’s time McIndoe and his colleagues were vindicated. Morality did not totally fail and attempts were made to prevent patients being harmed8.

Acknowledgements: many thanks to Dr. Margaret McCredie of Otago University who assisted me with my research.

  1. The Cartwright Report: www.nsu.govt.nz/current-nsu-programmes/3233.asp
  2. W.A. McIndoe; M.R. McLean; R.W. Jones; P.R. Mullins 1984: J. Am. Coll. Obst. 64(4).
  3. M.R.E. McCredie; K.J. Sharples; C. Paul; J. Baranyai; G. Medley; R.W. Jones; D.C. Skegg 2008: The Lancet Oncology DOI:10.1016/S1470-2045(08)70103-7
  4. M.R.E. McCredie; C. Paul; K.J. Sharples; J. Baranyai; G. Medley; D.C. Skegg; R.W. Jones 2010: A&NZ J. Obst. Gyn. DOI:10.1111/j.1479-828X.2010.01170.x
  5. B. Martin 1989: Thought and Action 5(2), 95-102.
  6. J. Manning (Ed.) 2009: The Cartwright Papers: Essays on the Cervical Cancer Inquiry 1987-88. Bridget Williams Books Ltd.
  7. L. Bryder 2009: A History of the “Unfortunate Experiment” at National Women’s Hospital. Auckland University Press.
  8. C. Paul 2000: BMJ 320, 499-503.

The fallibility of eyewitness memory

Eyewitness testimony is commonly regarded as very high quality evidence. But recent research has shown there are many ways memories of events can become contaminated. This article is based on a presentation to the NZ Skeptics conference in Wellington, 27 September 2009.

In 2003, a woman was tragically attacked and raped after leaving a bar in Christchurch. She remembered her assailant as a man with “rat-like” features. Later, she chose the police suspect from a photographic lineup, indicating that she was “90 percent sure” that he was her assailant. This identification became the central piece of evidence that convicted Aaron Farmer. But, in June 2007, Mr Farmer was exonerated after DNA proved that he could not have been the rapist – he had spent almost three years in prison.

Unfortunately, Mr Farmer’s case is not an isolated incident. Decades of legal and psychological research have shown that eyewitness identification error is the leading cause of wrongful conviction. Recently the former High Court judge, Sir Thomas Thorp, published an extensive review of legal research on miscarriages of justice. In that paper, he estimated that there are at least 20 innocent people in New Zealand prisons, and he emphasised eyewitness error as a leading cause of those wrongful convictions. This conclusion fits neatly with exoneration data from the Innocence Project, based in New York. Since 1992, the Innocence Project has exonerated over 250 wrongfully convicted people, over 75 percent of whom were identified by at least one eyewitness.

How can human memory be so fragile as to lead a witness to choose an innocent person from a lineup? Over 30 years of research has shed light on this question. Ultimately, this research has shown that memory can go wrong in several ways. The best way to understand these errors is to think of memory as a three-stage process: (1) encoding, (2) retention, and (3) recall.

At the encoding stage, information is perceived and transferred from the environment, through our senses. These perceptual processes allow us to lay down memory traces. Next, those traces are retained for a period of time. Of course this retention stage can last for anywhere between seconds and years, until finally we recall that information from memory. It is important to know that any one of these three stages can go awry.

Encoding

Encoding depends heavily on our ability to pay attention to information in the environment. However, our attentional systems are limited. We can only pay attention to a few things at once. Anything that does not receive the requisite amount of attention does not have the chance to make it through the encoding phase of memory.

Furthermore, many variables, such as stress, can limit our attentional processes even more. As a result, witnesses will often not pay attention to details that could be forensically relevant. For example, a witness under stress may pay particular attention to the weapon being brandished by the offender, rather than paying attention to his facial details. If this is the case, those facial details may never be stored in memory, and if information is not stored, it cannot be recalled later.

Retention

The information that makes it into memory can be distorted easily. Perhaps the best known psychological science research in this field is the misinformation effect pioneered by Elizabeth Loftus. This research shows that a simple suggestion can change witnesses’ memories. In a typical misinformation experiment, there are three stages.

First, participants watch a simulated crime, such as a man stealing a maths book from a bookstore. After a delay, participants are exposed to post-event information (PEI), which is usually a narrative describing the simulated crime. For some participants, the PEI is accurate but generic (eg, “the man stole a book”), and for others the details are misleading (eg, “the man stole a science book”).

Finally, participants are questioned to determine their memory’s accuracy for the event. These participants are often specifically told to ignore everything they read in the narrative and only rely on what they saw during the event. Typically, those participants who read misleading details during the PEI have less accurate memories than those who read generic information.

This research shows the ease with which a person’s memory can be changed. Decades of research have shown that people can come to remember having seen a crime when in fact they have seen an innocuous event. Using this paradigm people can even come to remember having seen an innocuous event, when in fact they have seen a crime. Witnesses can often be exposed to misleading details from co-witnesses, suggestive interviewing techniques or sometimes, media reports of the crime. Any of these sources can lead witnesses to remember details that did not happen.

Recall

Psychological science has also shown that the way we test witnesses can also affect their memories for what they have seen. Some of the most prolific research in this field has examined the way that we test witnesses’ memories for offenders’ faces using the lineup technique. Photographic lineups are the most common method of testing eyewitness recall for offenders.

Usually, a lineup depicts a police suspect surrounded by known innocent people – known as distracters. A witness chooses a person from a lineup in the same way that a person chooses an option from a multiple-choice question. When people choose the correct answer from a multiple-choice question it is considered evidence that they recognised the correct answer by relying on memory; and when witnesses choose the suspect from a montage, it is considered evidence that they recognised the suspect from the crime scene.

However, people do not always rely on their memory in either multiple-choice questions or lineups. A multiple-choice question can be biased towards the correct answer, as in this example:

What is the capital of Burundi?

Most people cannot rely solely on their memory to answer this question. Now consider these choices:

(a) Paris;
(b) Sydney;
(c) Wellington;
(d) Bujumbura.

You probably chose the correct answer (d), not because you had a memory for Burundi’s capital, but because you used a process of elimination to choose that answer. Similarly, a lineup is sometimes constructed so that witnesses do not need to rely on their memory for the offender; instead, they use a process of elimination – the suspect becomes the Bujumbura of the lineup.

Lineup bias

The danger arises when the wrong person is suspected of a crime and then included in a biased lineup. Research shows that witnesses will often choose from a lineup, even when the actual offender is not present. If the lineup has been constructed in a biased way (like the multiple choice question above), witnesses are even more likely to choose from the lineup. It is misidentifications like these that often lead to wrongful convictions.

Taken together, this research shows that witnesses’ memories are susceptible to several sources of error. As such, we need to ensure that we collect and test witnesses’ memories with scientifically valid interview and lineup techniques. Scientific recommendations regarding best practice procedures for witness evidence have been available for several decades, but few jurisdictions worldwide have taken them up. This lack of recognition for scientific validation is surprising given the relatively fast uptake of forensic science methods, such as DNA testing.

As a result, the best way to think of witness memory evidence is like biological evidence at a crime scene. If we were unlucky enough to stumble across a bloody crime scene, most people would be careful not to contaminate the scene by trampling through the blood spatter patterns, or handling any evidence. Similarly, we should treat witness memory with the same caution. When a witness has been exposed to a crime, we should not contaminate their memories with suggestive questioning and biased lineups. Instead, we should collect and preserve their memories with scientifically valid techniques. Only then can we hope to reduce the increasing number of wrongful convictions caused by erroneous witness evidence.