Using pseudoscience to teach science

There may indeed be a place for creationism in the science classroom, but not the way the creationists want. This article is based on a presentation to the 2011 NZ Skeptics Conference.

We live in a time when science features large in our lives, probably more so than ever before. It’s important that people have at least some understanding of how science works, not least so that they can make informed decisions when aspects of science impinge on them. Yet this is also a time when pseudoscience seems to be on the increase. Some would argue that we should simply ignore it. I suggest that we put it to good use and use pseudoscience to help teach about the nature of science – something that Jane Young has done in her excellent book The Uncertainty of it All: Understanding the Nature of Science.

The New Zealand Curriculum (MoE, 2007) makes it clear that there’s more to studying science than simply accumulating facts:

Science is a way of investigating, understanding, and explaining our natural, physical world and the wider universe. It involves generating and testing ideas, gathering evidence – including by making observations, carrying out investigations and modelling, and communicating and debating with others – in order to develop scientific knowledge, understanding and explanations (ibid., p28).

In other words, studying science also involves learning about the nature of science: that it’s a process as much as, or more than, a set of facts. Pseudoscience offers a lens through which to approach this.

Thus, students should be encouraged to think about how valid, and how reliable, particular statements may be. They should learn about the process of peer review: whether a particular claim has been presented for peer review; who reviewed it; where it was published. There’s a big difference between information that’s been tested and reviewed, and information (or misinformation) that simply represents a particular point of view and is promoted via the popular press. Think ‘cold fusion’, the claim that nuclear fusion could be achieved in the lab at room temperature. It was trumpeted to the world by press release, but subsequently debunked as other researchers tried, and failed, to duplicate its findings.

A related concept here is that there’s a hierarchy of journals, with publications like Science at the top and Medical Hypotheses at the other end of the spectrum. Papers submitted to Science are subject to stringent peer review processes – and many don’t make the grade – while Medical Hypotheses seems to accept submissions uncritically, with minimal review: for example, a paper suggesting that drinking cows’ milk would raise the odds of breast cancer due to hormone levels in milk – despite the fact that the actual data on hormone titres didn’t support this.

This should help our students develop the sort of critical thinking skills that they need to make sense of the cornucopia of information that is the internet. Viewing a particular site, they should be able to ask – and answer! – questions about the source of the information they’re finding, whether or not it’s been subject to peer review (you could argue that the internet is an excellent ‘venue’ for peer review but all too often it’s simply self-referential), how it fits into our existing scientific knowledge, and whether we need to know anything else about the data or its source.

An excellent example that could lead to discussion around both evolution and experimental design, in addition to the nature of science, is the on-line article Darwin at the drugstore: testing the biological fitness of antibiotic-resistant bacteria (Gillen & Anderson, 2008). The researchers wished to test the concept that a mutation conferring antibiotic resistance rendered the bacteria possessing it less ‘fit’ than those lacking it. (There is an energy cost to bacteria in producing any protein, but whether this renders them less fit – in the Darwinian sense – is entirely dependent on context.)

The researchers used two populations of the bacterium Serratia marcescens: an ampicillin-resistant lab-grown strain, which produces white colonies, and a pink, non-resistant (‘wild-type’) population obtained from pond water. ‘Fitness’ was defined as “growth rate and colony ‘robustness’ in minimal media”. After 12 hours’ incubation the two populations showed no difference in growth on normal lab media (though there were differences between four and six hours), but the wild-type strain did better on minimal media. It is hard to judge whether the difference was of any statistical significance as the paper’s graphs lack error bars and there are no tables showing the results of statistical comparisons – nonetheless, the authors describe the differences in growth as ‘significant’.
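To make the point about error bars and significance concrete, here is a minimal sketch of the kind of comparison the paper’s graphs should have supported. The growth figures below are invented purely for illustration – they are not the Gillen & Anderson data – but the procedure (standard errors as error bars, then a Welch’s t-statistic) is exactly what a reader would need to judge whether ‘significant’ is warranted:

```python
import statistics as st

# Hypothetical colony-growth measurements (arbitrary units) for two strains.
# These numbers are invented for illustration only.
wild_type = [7.9, 8.4, 8.1, 7.6, 8.3, 8.0]
resistant = [7.5, 8.2, 7.8, 7.4, 8.1, 7.7]

def mean_and_se(xs):
    """Return the sample mean and its standard error."""
    n = len(xs)
    return st.mean(xs), st.stdev(xs) / n ** 0.5

for name, data in [("wild-type", wild_type), ("resistant", resistant)]:
    m, se = mean_and_se(data)
    # An 'error bar' of +/- 2 standard errors approximates a 95% confidence interval.
    print(f"{name}: {m:.2f} +/- {2 * se:.2f}")

# Welch's t-statistic: how many standard errors apart are the two means?
m1, se1 = mean_and_se(wild_type)
m2, se2 = mean_and_se(resistant)
t = (m1 - m2) / (se1 ** 2 + se2 ** 2) ** 0.5
print(f"t = {t:.2f}")  # as a rule of thumb, |t| well above ~2 suggests a real difference
```

Without error bars or a statistic like this, a claimed difference in growth is just a pair of lines on a graph.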

Their conclusion? Antibiotic resistance did not enhance the fitness of Serratia marcescens:

… wild-type [S.marcescens] has a significant fitness advantage over the mutant strains due to its growth rate and colony size. Therefore, it can be argued that ampicillin resistance mutations reduce the growth rate and therefore the general biological fitness of S.marcescens. This study concurs with Anderson (2005) that while mutations providing antibiotic resistance may be beneficial in certain, specific, environments, they often come at the expense of pre-existing function, and thus do not provide a mechanism for macroevolution (Gillen & Anderson, 2008).

Let’s take the opportunity to apply some critical thinking to this paper. Students will all be familiar with the concept of a fair test, so they’ll probably recognise fairly quickly that such a test was not performed in this case: the researchers were not comparing apples with apples. When one strain of the test organism is lab-bred and not only antibiotic-resistant but forms different-coloured colonies from the pond-dwelling wild-type, there are a lot of different variables in play, not just the one whose effects are supposedly being examined.

In addition, and more tellingly, the experiment did not test the fitness of the antibiotic-resistance gene in the environment where it might convey an advantage. The two Serratia marcescens strains were not grown in media containing ampicillin! Evolutionary biology actually predicts that the resistant strain would be at a disadvantage in minimal media, because it’s using energy to express a gene that provides no benefit in that environment, so will likely be short of energy for other cellular processes. (And, as I commented earlier, the data do not show any significant differences between the two bacterial strains.)

What about the authors’ affiliations, and where was the paper published? Both authors work at Liberty University, a private faith-based institution with strong creationist leanings. And the article is an on-line publication in the ‘Answers in Depth’ section of the website of Answers in Genesis (a young-earth creationist organisation) – not in a mainstream peer-reviewed science journal. This does suggest that a priori assumptions may have coloured the experimental design.

Other clues

It may also help for students to learn about other ways to recognise ‘bogus’ science, something I’ve blogged about previously (see Bioblog – seven signs of bogus science). One clue is where information is presented via the popular media (where ‘popular media’ includes websites), rather than offered up for peer review, and students should be asking, why is this happening?

The presence of conspiracy theories is another warning sign. Were the twin towers brought down by terrorists, or by the US government itself? Is the US government deliberately suppressing knowledge of a cure for cancer? Is vaccination really for the good of our health or the result of a conspiracy between government and ‘big pharma’ to make us all sick so that pharmaceutical companies can make more money selling products to help us get better?

“My final conclusion after 40 years or more in this business is that the unofficial policy of the World Health Organisation and the unofficial policy of Save the Children’s Fund and almost all those organisations is one of murder and genocide. They want to make it appear as if they are saving these kids, but in actual fact they don’t.” (Dr A. Kalokerinos, quoted on a range of anti-vaccination websites.)

Conspiracy theorists will often use the argument from authority, almost in the same breath. It’s easy to pull together a list of names, with PhD or MD after them, to support an argument (eg palaeontologist Vera Scheiber on vaccines). Students could be given such a list and encouraged to ask: what is the field of expertise of these ‘experts’? For example, a mailing to New Zealand schools by a group called “Scientists Anonymous” offered an article purporting to support ‘intelligent design’ rather than an evolutionary explanation for a feature of neuroanatomy, authored by a Dr Jerry Bergman. However, a quick search indicates that Dr Bergman has made no recent contributions to the scientific literature in this field, but has published a number of articles with a creationist slant, so he cannot really be regarded as an expert authority in this particular area. Similarly, it is well worth reviewing the credentials of many anti-vaccination ‘experts’ – the fact that someone has a PhD is by itself irrelevant; the discipline in which that degree was gained is what matters. (Observant students may also wonder why the originators of the mailout feel it necessary to remain anonymous…)

Students also need to know the difference between anecdote and data. Humans are pattern-seeking animals and we do have a tendency to see non-existent correlations where in fact we are looking at coincidences. For example, a child may develop a fever a day after receiving a vaccination. But without knowing how many non-vaccinated children also developed a fever on that particular day, it’s not actually possible to say that there’s a causal link between the two.
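A small worked example may help here. The counts below are invented for illustration, but they show why the anecdote (“fever the day after vaccination”) only becomes evidence once we also know how often unvaccinated children ran fevers on the same day:

```python
# Hypothetical counts of children with and without fever on a given day.
# The numbers are invented purely to illustrate the base-rate comparison.
vaccinated = {"fever": 30, "no_fever": 970}
unvaccinated = {"fever": 28, "no_fever": 972}

def rate(group):
    """Proportion of the group that developed a fever."""
    return group["fever"] / (group["fever"] + group["no_fever"])

risk_ratio = rate(vaccinated) / rate(unvaccinated)
print(f"vaccinated: {rate(vaccinated):.1%}, unvaccinated: {rate(unvaccinated):.1%}")
print(f"risk ratio: {risk_ratio:.2f}")  # a ratio near 1.0 means no detectable link
```

Each individual feverish, vaccinated child is a compelling anecdote; it is only the comparison of the two rates that can tell us whether there is anything to explain.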

A question of balance

Another important message for students is that there are not always two equal sides to every argument, notwithstanding the catch cry of “teach the controversy!” This is an area where the media, with their tendency to allot equal time to each side for the sake of ‘fairness’, are not helping. Balance is all very well, but not without due cause. So, apply scientific thinking – say, to claims for the health benefits of sodium bicarbonate as a cure for cancer (which its proponents assert is a fungal disease). Its purveyors make quite specific claims concerning health and well-being – drinking sodium bicarbonate will cure cancer and other ailments by “alkalizing” your tissues, thus countering the effects of excess acidity! How would you test those claims of efficacy? What are the mechanisms by which drinking sodium bicarbonate (or for some reason lemon juice!) – or indeed any other alternative health product – is supposed to have its effects? (Claims that a ‘remedy’ works through mechanisms as yet unknown to science don’t address this question; what’s more, they presuppose that it does actually work.) In the new Achievement Standards there’s a standard on homeostasis, so students could look at the mechanisms by which the body maintains a steady state in regard to pH.

If students can learn to apply these tools to questions of science and pseudoscience, they’ll be well equipped to find their way through the maze of conflicting information that the modern world presents, regardless of whether they go on to further study in the sciences.


Have universities degraded to teaching ‘only’ scientific knowledge?

Alison Campbell considers the current state of tertiary education.

The title for this article is taken from one of the search terms used by people visiting my ‘other’ blog, Talking Teaching, which I share with Marcus Wilson and Fabiana Kubke. It caught my eye and I thought I’d use it as the basis of some musings.

We’ll assume that this question is directed at science faculties. Using the word ‘degraded’ suggests that a university education used to provide more than simply a knowledge base in science.

(If I wanted to stir up a bit of controversy I could say that it’s just as well that they ‘only’ teach scientific knowledge, however that’s defined. My personal opinion is that the teaching of pseudoscience, eg homeopathy, ‘therapeutic touch’ etc, has no place in a university, and it’s a matter of some concern that such material has appeared in various curricula in the US, UK and Australia, among other countries. Why? Because it’s not evidence-based, and close investigation – in one case, by a nine-year-old schoolgirl – shows that it fails to meet the claims made for it. You could teach about it, in teaching critical thinking, but as a formal curriculum subject? No way.)

Anyway, back to the chase. Did universities teach more than just ‘the facts’, in the past? And is it a Bad Thing if we don’t do that now?

I’ll answer the second question first, by saying that yes, I believe it is a Bad Thing if all universities teach is scientific knowledge – if by ‘knowledge’ we mean ‘facts’ and not also a way of thinking. For a number of reasons. Students aren’t just little sponges that we can fill up with facts and expect to recall such facts in a useful way. They come into our classes with a whole heap of prior learning experiences and a schema, or mental construct of the world, into which they slot the knowledge they’ve gained. Educators need to help students fit their new learning into that schema, something that may well involve challenging the students’ worldviews from time to time. This means that we have to have some idea of what form those schemas take, before trying to add to them.

What’s more, there’s more to science than simply ‘facts’. There’s the whole area of what science actually is, how it works, what sets it apart from other ways of viewing the world. You can’t teach that by simply presenting facts (no matter how appealingly you do this). Students need practice in thinking like a scientist, ‘doing’ science, asking and answering questions in a scientific way. And in that sense, I would have to say that I think universities may have ‘degraded’.

Until very recently, it would probably be fair to say that the traditional way of presenting science to undergraduates, using lectures as a means of transmitting facts and cook-book labs as a means of reinforcing some of those facts (and teaching practical skills), conveyed very little of what science is actually all about. And it’s really encouraging to see papers in mainstream science journals that actively promote changing how university science teaching is done (eg Deslauriers et al, 2011, Haak et al, 2011, and Musante, 2012).

Of course, saying we’ve ‘degraded’ what we do does make the assumption that things were different in the ‘old days’. Maybe they were. After all, back in Darwin’s day (and much more recently, in the Oxbridge style of university, anyway) teaching was done via small, intimate tutorials that built on individual reading assignments and must surely have talked about the hows and the whys, as well as the whats, of the topic du jour.

However, when I was at university (last century – gosh, it makes me feel old to say that!) things had changed, and they’d been different for quite a while. Universities had lost that intimacy and the traditional lecture (lecturer ‘transmitting’ knowledge from up the front, and students scrabbling to write it all down) was seen as a cost-effective method of teaching the much larger classes that lecturers faced, particularly in first-year.

In addition, the sheer volume of knowledge available to them had increased enormously, and with it, the pressure to get it all across. And when you’re under that pressure to teach everything that lecturers in subsequent courses require students to know before entering ‘their’ paper, transmission teaching must have looked like the way to go. Unfortunately, by going that route, we’ve generally lost track of the need to help students learn what it actually means to ‘do’ science.

Now, those big classes aren’t going to go away any time soon. The funding model for universities ensures that. (Although, there’s surely room to move towards more intimate teaching methods in, say, our smaller third-year classes? And in fact I know lecturers who do just that.) But there are good arguments for encouraging the spread of new teaching methods that encourage thinking, interaction, and practising a scientific mindset, even in large classes. Those papers I referred to show that it can be done, and done very successfully.

First up: there’s more to producing a scientifically literate population than attempting to fill students full of facts (which they may well retain long enough to pass the end-of-term exam, and then forget). We need people with a scientific way of thinking about the many issues confronting them in today’s world. Of course, we also need a serious discussion at the curriculum level, about what constitutes ‘must-have’ knowledge and what can safely be omitted in favour of helping students gain those other skills. (This is something that’s just as important at the level of the senior secondary school curriculum.)

And secondly: giving students early practice at doing and thinking about science may encourage more of them to consider the option of graduate study, maybe going on to become scientists themselves. In NZ graduate students are funded at a higher rate than undergraduates, and the PBRF system rewards us for graduate completions, so there’s a good incentive for considering change right there!

Deslauriers, L.; Schelew, E.; Wieman, C. (2011): Improved learning in a large-enrollment physics class. Science, 332 (6031), 862-4.
Haak, D. C.; HilleRisLambers, J.; Pitre, E.; Freeman, S. (2011): Increased structure and active learning reduce the achievement gap in introductory biology. Science, 332 (6034), 1213-6.
Musante, S. (2012): Motivating tomorrow’s biologists. Bioscience 62(1): 16.

Every picture tells a story – sometimes they’re whoppers

Pictures don’t lie, right? Of course they do. And they were deceiving us long before Photoshop made the manipulation of images almost child’s play.

Today, nobody would bat an eye at a ghostly image of Abraham Lincoln standing behind his grief-stricken widow, apparently comforting her. But back in the 1860s when William Mumler produced the first ‘spirit photographs’ the public was stunned. These photos appeared to show dead relatives hovering around the living subject who had posed for the picture. Photography was magical enough, so it didn’t seem such a stretch that the camera could see things that the human eye could not.

Mumler discovered ‘double exposure’ accidentally when he mistakenly used a previously exposed but undeveloped photographic plate. He immediately recognised the financial potential of this discovery and reinvented himself as a psychic medium who specialised in communicating with the other side through photographs. By today’s standards his efforts were amateurish but in the heyday of spiritualism they were readily accepted as authentic. Only when Mumler made the mistake of using images of people who were still alive as his ‘ghosts’, did his little scam crumble. But by this time many other ‘spirit photographers’ had recognised the lucrative nature of the business and had gotten into the game. And amazingly, the clever ruse even snared luminaries like Sir Arthur Conan Doyle and Sir William Crookes. Conan Doyle, the creator of Sherlock Holmes, was a physician and Crookes was a pioneer in chemistry and physics. One would think they would have known better.

Conan Doyle was a staunch believer in spiritualism, a position his famous detective would have taken a dim view of. But it was Sir Arthur’s championing of another type of fake photograph that best demonstrates the extent of his credulity. In 1917 two young girls produced a photo that purported to show fairies dancing in the woods. Conan Doyle was convinced the pictures were real and refused to believe that he had been fooled by the simple trick of hanging cardboard cutouts by a thread in front of the camera. It was inconceivable to him that a couple of uneducated girls could put one over on someone of his stature. The pictures therefore had to be evidence of the existence of fairies! In 1983 Elsie Wright and Frances Griffiths finally admitted that they had faked the photographs but nevertheless maintained they had actually seen real fairies.

By the time the ladies had unburdened their souls, Roger Patterson and Robert Gimlin had outdone the ‘Cottingley fairies’. In 1967 these two thrilled the world by capturing the first images of the fabled Bigfoot. Their short film shows a creature lumbering through the woods, looking very much like a man in a gorilla suit. There is good reason for that. It is a man dressed in a gorilla suit. The elaborate hoax was described in detail at a recent conference on magic history by Phillip Morris, a man who should know, since it was his costume company that provided and altered the gorilla suit used to stage the scene. Needless to say there are legions of Bigfoot believers who don’t buy Morris’ claim and remain convinced that some sort of giant ape-like creature prowls the Pacific Northwest.

With such ample historical evidence about photographic manipulation, it’s surprising how few people question the authenticity of a series of photographs being circulated on the internet purporting to show the results of a student’s science fair experiment. The pictures depict plants supposedly watered either with microwaved water, or with water that has been heated on a stove top. And guess what! The microwave-watered plants wither while the others flourish!

One can come up with all sorts of possible explanations for the difference. Was the soil the same in the two plants? Were they given equal amounts of water? Could they have been exposed to different lighting conditions? Was there some difference in the seeds? But how about a simpler possibility? Fraud. It isn’t very hard to set up two plants side by side and ensure that one thrives while the other dies. Just water one and not the other. Of course the possibility that this is the way the pictures were created does not prove the case.

Heating water in a microwave oven does nothing other than raise its temperature. Any talk about “the structure or energy of the water being compromised” is plain bunk. But absurdly implausible arguments don’t prove that the pictures are fraudulent either. What proves it is the good old standard of science: reproducibility. Or the lack of it.

I did the experiment. I watered plants with microwaved water, kettle-boiled water, and stove-top boiled water, feeling pretty silly about it, but I did it. The results? As expected, no difference. I didn’t take any pictures because, after all, how would you know that they are not faked? So here is the choice. You can take my word that the experiment cannot be reproduced, accept that science tells us that microwaves do nothing to water other than heat it, or take at face value some pictures in a circulating email that purport to show an effect that has eluded scientists around the world but was discovered by a student pursuing a science fair project. Better yet, do the experiment yourself!

As you might guess, I don’t believe in spirit photographs, fairies, Bigfoot or plants succumbing to the evils of microwaved water. And I would have put goats that climb trees into the same ‘unbelievable’ category. But I would have been wrong. It seems that some Moroccan goats have learned to climb the argan tree in search of its olive-like fruit. Legend has it that the undigested seeds that pass through the goats used to be collected and pressed into “argan oil,” a traditional food flavouring. Highly questionable. The oil, also used in the cosmetic industry, is actually pressed from fruit that has been picked by human hands, making the tree-climbing goats a nuisance. Still, one can appreciate their remarkable athleticism. It’s easy to find pictures of their exploits online. And pictures don’t lie? Right?

Resistance to science

Alison Campbell reviews a study of why so many struggle with scientific concepts.

One of the topics that comes up for discussion with my Sciblogs colleagues is the issue of ‘resistance to science’ – the tendency to prefer alternative explanations for various phenomena over science-based explanations for the same observations. It’s a topic that has interested me for ages, as teaching any subject requires you to be aware of students’ existing concepts about it, and coming up with ways to work with their misconceptions. So I was interested to read a review paper by Paul Bloom and Deena Weisberg, looking at just this question.

Bloom and Weisberg conclude there are two key reasons why people can be resistant to particular ideas in science. One is that we all have “common-sense intuitions” about how the world works, and when scientific explanations conflict with these, often it’s the science that loses out. The other lies with the source(s) of the information you receive. They suggest that “some resistance to scientific ideas is a human universal” – one that begins in childhood and which relates to both what students know and how they learn.

Before they ever encounter science as a subject, children have developed their own understandings about how the world works. This means they may be more resistant to an idea if it’s an abstract concept and not one that they have experienced – or can experience – on the personal level. Bloom and Weisberg cite research showing that the knowledge that objects are solid, don’t vanish just because they’re out of sight, fall if you drop them, and don’t move unless you push them, is developed when we are very young children. And we develop similar understandings about how people operate (eg, that we’re autonomous beings whose actions are influenced by our goals) equally early.

Unfortunately for science educators, these understandings can become so ingrained that if they clash with scientific understandings, those particular science facts can be very hard to learn. It’s not a lack of knowledge, but the fact that students have “alternative conceptual frameworks for understanding [these] phenomena” that can make it difficult to move them to a more scientific viewpoint. The authors give an example based on the common-sense understanding that an unsupported object will fall down – for many young children, this can result in difficulty seeing the world as a sphere, because people on the ‘downwards’ side should just fall right off. This idea can persist until the age of eight or nine.

And it seems that psychology also affects how receptive people are to scientific explanations. When you’re four, you tend to view things “in terms of design and purpose”, which means (among other things) that young children will provide and accept creationist explanations about life’s origins and diversity. Plus there’s dualism: “the belief that the mind is fundamentally different from the brain”, which leads to claims that the brain is responsible for “deliberative mental work” but not for emotional, imaginative, or basic everyday actions. This in turn can mean that adults can be very resistant to the idea that the things that make us who and what we are can emerge from basic physical processes. And that shapes how we react to topics such as abortion and stem cell research.

In other words, those who resist the scientific view on given phenomena do so because the latter is counterintuitive, although this doesn’t really explain the fact that there are cultural differences in willingness to accept scientific explanations. For example, about 40 percent of US citizens accept the theory of evolution – below every country surveyed with the exception of Turkey (Miller et al. 2006). Part of the problem seems to lie with the nature of ‘common knowledge’: if everyone regularly and consistently uses such concepts, children will pick them up and internalise them (believing in the existence of electricity, for example, even though it’s something they’ve never seen). For other concepts, the source of information is important. Take evolution again: parents may say one thing about evolution, and teachers, another. Who do you believe? It seems, according to Bloom and Weisberg, that it all depends on how much you trust the source.

The authors conclude:

“These developmental data suggest that resistance to science will arise in children when scientific claims clash with early emerging, intuitive expectations. This resistance will persist through adulthood if the scientific claims are contested within a society, and it will be especially strong if there is a nonscientific alternative that is rooted in common sense and championed by people who are thought of as reliable and trustworthy.”

Yet we live in a society where ‘alternative’ explanations are routinely presented by media in a desire to present ‘balance’ where there isn’t any, or indeed, without any attempt at balance at all. And the internet makes it even easier to present non-scientific views of the world in an accessible, authoritative and reasonable way. As science communicators and educators, my colleagues and I really are up against it, and I would say there’s a need for Bloom and Weisberg’s findings to be much more widely read.

Bloom, P; Weisberg, DS (2007): Childhood origins of adult resistance to science. Science 316 (5827), 996-7.
Miller, JD; Scott, EC; Okamoto, S (2006): Public acceptance of evolution. Science 313, 765-6.

Science as a human endeavour

If students are to pursue careers in science, they need to be able to see themselves in that role. One way to encourage this may be through the telling of stories. This article is based on a presentation to the 2008 NZ Skeptics Conference in Hamilton.

New Zealand’s new science curriculum asks us to develop students’ ability to think critically. As a science educator I think that’s about the most important skill we can give them: the ability to assess the huge amount of information that’s put in front of them from all sorts of sources. We also need to recognise that the ideas and processes students are hearing about have come to us through the activities of people – it’s people who develop science understanding. Science changes over time, as people’s ideas change. It’s fluid, it’s done by people, and it’s a human endeavour.

This puts science in an interesting position. It has its own norms, and its own culture, but it’s embedded in the wider culture as well. Those norms of science include its history. I find it sad that many of my students have no idea of where the big ideas in science came from. They don’t know what the people who were developing those ideas were like.

The new curriculum document recognises that the nature of science is an important strand in the curriculum, because it is what gives science its context, and lets students see science as a human endeavour. They’re going to learn what science is, and how scientists do science. They will become acquainted with the idea that scientists’ ideas change as they’re given new information; that science is valuable for society. And students are going to learn how it’s communicated.

Our future prosperity depends on students continuing to enter careers in the sciences. Richard Meylan, a senior adviser at the Ministry of Research, Science and Technology, said to me recently that somewhere between the end of year 13 and that two-month break before they go to university, we seem to be losing them. The universities are tending to see a drop in the number of students who have picked science as something that they want to continue in. Students don’t seem to see it as a viable career option, and there are many reasons for that.

We need more scientists, we need scientifically-literate politicians, and we need a community that understands science: how science is done, how science is relevant; one that sees science and scientists as being an integral part of the community. But how are we going to get there? What sorts of things can we do that are going to make young people want to carry on in science? Students often don’t choose science – how are we going to change that?

One of the reasons, perhaps, is that they often don’t see themselves as scientists. We did a bit of research on this at Waikato University last year, asking what would encourage our first-year students to continue as scientists. And what they were saying was, “Well, a lot of the time I don’t see myself as a scientist.” We asked, what would make a difference? The response: “Seeing that my lecturers are people.” People first, scientists second.

When I googled ‘scientist’ I had to go through eight or nine pages of results before finding something that looked like my own idea of a scientist. (‘Woman scientist’ gives slightly better results!) Almost all the men have moustaches, they’ve all got glasses, all the women are square-shaped. Students don’t see themselves in this. We need them (and the rest of the community!) to see science as something that ordinary people do.

Now, what sorts of things are those ordinary people doing? They’re thinking; they’re speculating; they’re saying ‘what if?’ They’re thinking creatively: science is a creative process and at its best involves imagination and creativity. Scientists make mistakes! Most of the time we’re wrong, but that doesn’t make for good journal articles; negative results usually go unpublished. So you just hear about the ‘correct’ stuff. Scientists persist when challenged, and when things aren’t working well.

Science stories

One way of fostering students’ engagement with science, and seeing themselves in it, is to tell them stories, to give them a feeling of how science operates. Brian Greene, a science communicator and physicist in the US, says:

I view science as one of the most dramatic narratives our species can tell. The story of our search to understand the Universe and ourselves. When that search is conveyed using the power of story – the story of discovery – we can all feel part of the journey.

So I’m going to tell you stories. And I’m going to tell stories about old, largely dead, people because one of my passions at the moment is the history of science. A lot of science’s big ideas have a history that stretches back 300-400 years. But they’re just as important today, and I think that an understanding of the scientists who came up with those ideas is also important today.

I think it’s important that kids recognise that a lot of scientists are a bit quirky. But then, everyone’s a bit quirky – we’re all different. One example of someone ‘a bit different’ is Richard Feynman. Famous for his Nobel Prize-winning work in quantum electrodynamics (and for a lecture that anticipated nanotechnology), he was a polymath: a brilliant scientist with interests in a whole range of areas – biology, art, anthropology, lock-picking, bongo-drumming. He was into everything. He also had a very quirky sense of humour, and he was a gifted teacher who showed that from an early age. His sister Joan has a story about when she was three, and Feynman was nine or so. He’d been reading a bit of psychology and knew about conditioning, so he’d say to Joan: “Here’s a sum: 2 plus 1 more makes what?” And she’d bounce up and down with excitement. If she got the answer right, he’d give her a treat. The Feynman children weren’t allowed lollies for treats, so he let her pull his hair till it hurt (or, at least, he behaved as if it did!), and that was her reward for getting her sums right.

Making mistakes

We get it wrong a lot of the time. Even the people we hold up as these amazing icons – they get it wrong. Galileo thought the tides were caused by the Earth’s movement. At the time, no-one had developed the concept of gravity. How could something as far away as the Moon possibly affect the Earth? We look back at people in the past and we think, how could they be so thick? But, in the context of their time, what they were doing was perfectly reasonable.

Louis Pasteur, the ‘father of microbiology’, held things up for years by insisting that fermentation was due to some ‘vital process’, not a chemical one. He got it wrong.

And one of my personal heroes, Charles Darwin, got it completely wrong about how inheritance worked. He was convinced that inheritance worked by blending. When Darwin published The Origin of Species in 1859, Mendel’s work on inheritance hadn’t yet been published. It was published in Darwin’s lifetime, and Mendel’s ideas would have made a huge difference to Darwin’s understanding of how inheritance worked – part of the mechanism for evolution that he lacked. But he never read Mendel’s paper.

Scientists do come into conflict with various aspects of society. Galileo had huge issues with the Church. He laid out his understanding of what Copernicus had already said: the Universe was not geocentric; it didn’t go round the Earth. The Church model was that the Universe was very strongly geocentric: everything went round us. Galileo was accused of heresy, and shown the various instruments of torture: devices for pulling out thumbnails and crushing feet. He did recant, and he was kept under house arrest until his death. The Church officially apologised to him in 1992. A long-running conflict indeed.

And there’s conflict with prevailing cultural expectations. Beatrice Tinsley was an absolutely amazing woman; a New Zealander who has been called a world leader in modern cosmology, and one of the most creative and significant theoreticians in modern astronomy. She went to the US to do her PhD in 1964, and finished it in 1966. Beatrice published extensively, and received international awards, but she found the deck stacked against her at the University of Texas, where she worked. She was asked if she’d design and set up a new astronomy department, which she did. The university duly opened applications for the new Head of Department. Beatrice applied. They didn’t even respond to her letter. So she left Texas. (Yale did appreciate her, and appointed her Professor of Astronomy.) A couple of years later she found she had a malignant melanoma, and was dead by the age of 42. The issue for Beatrice was a conflict between societal expectations and the area where she was working: women didn’t do physics.

Science versus societal ‘knowledge’

Raymond Dart was an Australian-born anatomist who worked at the University of the Witwatersrand in South Africa. He was widely known among the locals for his fondness for fossils; you could trundle down to Prof Dart’s house, bring him a lovely bit of bone, and he’d pay you quite well. One day in 1924 the workers at Taung quarry found a beautiful little skull – a face, a lower jaw, and a cast of the brain – in real life it would sit in the palm of your hand. Dart was getting ready for a wedding when the quarry workers arrived, and he was so excited by this find that when his wife came in to drag him off to be best man, he still didn’t have his cuffs and his collar on and there was dust all over his good black clothes. He was absolutely rapt.

Dart looked at this fossil and saw in it something of ourselves. He saw it as an early human ancestor. The jaw is like ours, it has a parabolic shape, and the face is more vertical – relatively speaking – than in an ape. He described it, under the name Australopithecus africanus, as being in our own lineage and went off to a major scientific meeting, expecting a certain amount of interest in what he’d discovered. What he got was a fair bit of doubt, and some ridicule. How could he be so foolish? It was surely an ape.

By 1924 evolution was pretty much an accepted fact in the scientific community. But there was a particular model of what that meant. In some ways this built on the earlier, non-evolutionary concept of the Great Chain of Being. Scientists of the day also had a model that tended to view the epitome of evolutionary progress as white European males. It followed from this that humans had evolved in Europe, because that’s where all the ‘best’ people came from. Black Africans were sometimes placed as a separate species, and were regarded as being lower down the chain.

Yet here was Dart saying he’d found a human ancestor in Africa. This would mean the ancestor must have been black – which didn’t fit that world-view. It’s a racist view, but that reflected the general attitudes of society at the time, and the scientists proposing that view were embedded in that society just as much as we are embedded in ours today.

Another difficulty for Dart had to do with prevailing ideas about how humans had evolved. By the 1920s Neanderthal man was quite well known. Neanderthals have the biggest brains of all the human lineage – a bigger brain, on average, than we have. And the perception was that one of the features that defined humans, apart from tool use, was a big brain. It followed from this that the big brain had evolved quite early. Dart was saying that Australopithecus was a hominin, but Australopithecus as an adult would have had a brain size of around 400cc. We have a brain size of around 1400cc. Australopithecus didn’t fit the prevailing paradigm. The big brain had to come first; everybody knew that.

And belief in that particular paradigm – accepted by scientists and non-scientists alike – helps to explain why something like Piltdown man lasted so long. Over the period 1911-1915 an English solicitor, Charles Dawson, ‘discovered’ the remains of what appeared to be a very early human indeed in a quarry at Piltdown. There were tools (including a bone ‘cricket bat’), a skull cap, and a lower jaw, which looked very old. The bones were quite thick, and heavily stained. This was seized upon with joy by at least some anatomists because the remains fitted in with that prevailing model: old bones of a big-brained human ancestor.

People began to express doubts about this fossil quite early on, and these doubts grew as more hominin remains were confirmed in Africa and Asia. But it wasn’t completely unmasked as a fake until the early 1950s. The skull looked modern because it was a modern (well, mediaeval) skull that had been stained to make it look really old. The jaw was that of an orangutan, with the teeth filed so that they looked more human and the jaw articulation and symphysis (the join between right and left halves) missing. When people saw these remains in the light of new knowledge, they probably thought, how could I have been so thick? But in 1914 Piltdown fitted with the prevailing model; no-one expected it to look otherwise. And I would point out that it was scientists who ultimately exposed the fraud. And scientists who re-wrote the books accordingly.

Thinking creatively

The next story is about Barry Marshall, Robin Warren, and the Nobel Prize they received in 2005. (These guys aren’t dead yet!) Here’s the citation:

[The 2005] Nobel Prize in Physiology or Medicine goes to Barry Marshall and Robin Warren, who with tenacity and a prepared mind challenged prevailing dogmas. By using technologies generally available… they made an irrefutable case that the bacterium Helicobacter pylori is causing disease.

The prevailing dogma had been that if you had a gastric or duodenal ulcer, you were a type-A, stress-ridden personality: the high degree of stress in your life generated excess gastric juices, and these ate a hole in your gut. Marshall and Warren noticed that this bacterium was present in every preparation from patients’ guts that they looked at. They collected more data, and found that in every patient they examined, H. pylori was present in the diseased tissue. Marshall famously drank a test-tube full of H. pylori broth. He got gastritis – inflammation of the stomach lining and a precursor to a gastric ulcer – then took antibiotics, and was cured. The pair treated their patients with antibiotics and their ulcers cleared up.

Because they were creative, and courageous, they changed the existing paradigm. And this is important – you can overturn prevailing paradigms, you can change things. But in order to do that you have to have evidence, and a mechanism. Enough evidence, a solid explanatory mechanism, and people will accept what you say.

Which was a problem for Ignaz Semmelweis. He had evidence, alright, but he lacked a mechanism. Semmelweis worked in the Vienna General Hospital, where he was in charge of two maternity wards. Women would reputedly beg on their knees not to be admitted to Ward 1, where the mortality rate from puerperal fever was about 20 percent. In Ward 2, mortality was three or four percent. What caused the difference? In Ward 2 the women were looked after exclusively by midwives. In Ward 1, it was the doctors. What else were the doctors doing? They were doing autopsies in the morgue. And they would come from the morgue to the maternity ward, with their blood-spattered ties, and I hate to think what they had on their hands. Then they would do internal examinations on the women. Small wonder so many women died. Semmelweis felt that the doctors’ actions were causing this spread of disease and said he wanted them to wash their hands before touching any of the women on his ward. Despite their affronted reactions he persisted, and he kept data. When those doctors washed their hands before doing their examinations, mortality rates dropped to around three percent.

The trouble was that no-one knew how puerperal fever was being transmitted. They had this idea that disease was spread by miasmas – ‘bad airs’ – and although the germ theory of disease was gaining a bit of traction, the idea that disease could be spread on the doctors’ clothes or hands still didn’t fit the prevailing dogma. Semmelweis wasn’t particularly popular – he’d gone against the hospital hierarchy, and he’d done it in quite an abrasive way – so when he applied for a more senior position he didn’t get it, and he left the hospital soon after. He was in the unfortunate position of having data but no mechanism, and the change in the prevailing mindset had to wait for the conclusive demonstration by Koch and Pasteur that it was single-celled organisms that actually caused disease.

Collaboration and connectedness

Scientists are part of society. They collaborate with each other, are connected to each other, and are connected to the wider world – although there have been some notable exceptions. Take Henry Cavendish, whose family name the Cavendish Laboratory in Cambridge bears. He was a true eccentric. He did an enormous amount of science but published very little, and was quite reclusive – Cavendish just didn’t like talking with people. If you wanted to find out what he thought, you’d sidle up next to him at a meeting and ask the air, “I wonder what Cavendish would think about so-and-so.” If you were lucky, a disembodied voice over your shoulder would tell you what Cavendish thought. If you were unlucky, he’d flee the room.

But most scientists collaborate with each other. Even Newton, who was notoriously bad-tempered and unpleasant to people whom he regarded as less than his equal, recognised the importance of that collaboration. He wrote: “If I have seen further it is by standing on the shoulders of giants.” Mind you, he may well have been making a veiled insult to Robert Hooke, to whom he was writing: Hooke was rather short.

What about Darwin? Was he an isolated person, or a connected genius? We know that Darwin spent much of the later years of his life in his study at Downe. He had that amazing trip round the world on the Beagle, then after a couple of years in London he retreated to Downe with his wife and growing family, and spent hours in his study every day. He’d go out and pace the ‘sandwalk’ – a path out in the back garden – come back, and write a bit more. Darwin spent eight years of that time producing a definitive work on barnacles, and he didn’t do it alone. He wrote an enormous number of letters to barnacle specialists, and to other scientists asking to use work that they’d done, or to use their specimens to further the work he was doing.

He was also connected to a less high-flying world: he was into pigeons. This grew from his interest in artificial selection and its power to change, over a short period of time, various features in a species. So he wrote to pigeon fanciers, and the pigeon fanciers would write back. The fanciers were often from a lower social class, and Darwin’s family and friends may well have been a bit concerned that he spent so much time talking to ‘those people’ about pigeons. And Darwin had a deep concern for society as well. He was strongly anti-slavery, and he put a lot of time (and money) into supporting the local working-class people in Downe. He was still going in to London to meet with his colleagues, men like Lyell and Hooker, who advised him when Alfred Wallace wrote to him concerning a new theory of natural selection. Now there’s an example of connectedness for you, and of the impact of other people’s thought on your own! It was Wallace who kicked Darwin into action, and led him to publish The Origin of Species.

That’s enough stories. I’m going to finish with another quote from Brian Greene:

Science is the greatest of all adventure stories, one that’s been unfolding for thousands of years as we have sought to understand ourselves and our surroundings. Science needs to be taught to the young and communicated to the mature in a manner that captures this drama. We must embark on a cultural shift that places science in its rightful place alongside music, art and literature as an indispensable part of what makes life worth living.
Science lets us see the wonder and the beauty of the stars, and inspires us to reach them.

“Intelligent Design” in the Science Classroom

A critique of “Walking with Beasts”, by Ian Wishart, Investigate Magazine, June 2002

A prominent English state school, Emmanuel City Technology College, has recently decided to include creationism as a viable alternative to evolution in the science classroom. In the wake of this, Ian Wishart of Investigate magazine has written an article, “Walking With Beasts”, in which he conveys the impression that the status of organic evolution is very fragile indeed. Therefore he asks: “If Darwin’s Theory of Evolution is on such shaky ground in the upper reaches of science, why are New Zealand high school students still being taught the subject without any reference to the many controversies now dogging it?”

The article is a mixture of the old and the new – arguments against evolution which have long been the province of young-earth creationism and some from the most recent version of creationism, Intelligent Design (ID) theory. ID theory has its roots in creation “science”, which probably accounts for the retention of some of the arguments associated with that movement, and in some ways it can be regarded as a more sophisticated version of its predecessor. Most significantly, when examined closely, it turns out to be the old Argument from Design in modern garb. At its core is the view that Darwinian theory is unable to account for life’s complexity – hence an Intelligent Designer must be invoked.

Sound familiar? William Paley’s watch immediately springs to mind. The only real difference between Paley and modern IDers is the incorporation of factors and processes at the biochemical and cellular levels of which Paley, of course, was unaware. Prominent names in the ID movement are Phillip Johnson (Darwin On Trial), Michael Behe (Darwin’s Black Box), Jonathan Wells (Icons of Evolution) and William Dembski (Intelligent Design: The Bridge Between Science and Theology).

A major contention of Wishart’s article is that “scientists are increasingly doubting the theory of evolution”. Unfortunately, he never really distinguishes clearly between the occurrence of evolution and its proposed mechanism, of which natural selection (Darwinism) is generally regarded as the chief agent of change. Consequently, the article switches from one aspect to the other in disconcerting fashion, such that, to the uninitiated, evolution itself appears seriously in doubt. Argument – the sign of a healthy science, not one in decline – now pertains to the “how” of the process.

The idea that evolution is on its last legs will be familiar to those conversant with creationist attacks over the years. The article repeats the hoary and long discounted argument that the fossil record lacks the expected transitional forms. “Nowhere,” writes Wishart, “are there fossils that show a cat-monkey, or a horse-giraffe, or any other of the alleged half-breed species said to have existed.” Setting aside such ludicrous caricatures, excellent examples of transitional forms between major groups do exist (see Evolution: the fossils say YES! NZ Skeptic, Summer 2001).

Somewhat ironically, Wishart sheds extreme doubt on the possibility of modern whales originating from a “carnivorous, cow-like creature about the size of a wolf … in a short period of geological time”. Apart from the “short period” amounting to at least 20 million years, the record of the rocks has revealed a fascinating series of forms, from whales with functional legs and ears like those of land mammals, to amphibious, wading and diving forms. (See Scientific American, May 2002). [Incidentally, based on new fossil evidence, the mantle of whale ancestor has shifted from the mesonychids (alluded to above) to a related group, the artiodactyls, and more specifically to the hippopotami.]

He is equally astray when he refers to “the lack of evidence for human evolution”. Apparently, he is unaware of early ape-like hominids, such as Ardipithecus ramidus, Australopithecus anamensis, and A. afarensis (“Lucy”), let alone later members of Australopithecus and early members of the genus Homo, the genus to which our species belongs. He is similarly dismissive of early bird evolution. Worth noting in this regard is a recent burst of fossil discoveries which has revealed a great diversity of Mesozoic birds; even older finds of feathered dinosaurs have corroborated prediction. Scientists await in keen anticipation further plugging of gaps in these and other transitional phases of vertebrate evolution.

Evolution well-supported

Has evolution occurred? The answer is a resounding “yes”! Darwin himself established this fact, based on an impressive consilience of evidence from several independent lines of inquiry: comparative morphology, embryology and geographical distribution, to name just a few. Since Darwin’s day, new research areas such as genetics, cell biology and molecular biology have only strengthened the level of consilience, as have many significant finds in the fossil record. Contrary to the impression continually being conveyed by anti-evolutionists, the occurrence of evolution is no longer an issue in biological science. The comparatively few scientists who question its validity appear to have allowed their philosophical and religious beliefs to cloud their scientific judgment, in some cases to the extent of advocating what amounts to the teaching of “theistic science”, and hence threatening the integrity of science in the classroom.

The key reason why ID and other forms of creationism must be kept out of science education is that the former have, as an inherent element, an appeal to an entity which lies outside the scope of science, whereas science deals with that part of reality amenable to empirical inquiry. Alternative explanations must be testable against the natural world. As Eugenie Scott, an American anthropologist and science educationist, has pointed out, science today is based on a necessary methodological materialism, which is not to be confused with philosophical materialism or naturalism, to which scientists and others may or may not adhere. (Wishart, to his credit, does seem to recognise the distinction between acceptance of evolution and non-scientific implications derived from it. Unfortunately, this distinction, like that between the reality of evolution and the “how” of the process, tends to become blurred in the writing.) Scott continually stresses that science neither denies nor opposes the supernatural, but ignores it for methodological reasons. She has expressed this necessary approach in colourful fashion: “You can’t put God in a test tube (or keep it out of one).” (For “God”, in the current context, read “Intelligent Designer”.)

Other points of confusion in the article are the conflation of “the origin of life” and “Big Bang theory” with organic evolution. There is a postulated continuity linking all aspects of an evolutionary universe, but each phase presents its own set of problems and requires its own specialised methodology. The conclusion that evolution has taken place, for example, rests on the evidence for it; the undoubted problems associated with the origin of the universe or with the origin of the very first life forms on this planet are irrelevant as far as organic evolution is concerned.

God of the Gaps

ID proponents tend to focus on such problem areas, which is akin to the God of the Gaps argument of earlier times. This unscientific approach is particularly apparent when a cornerstone (a very unstable one, I might add) of the ID movement is examined, namely, the idea of irreducible complexity, an idea alluded to in Wishart’s article. “By irreducibly complex”, writes Michael Behe, “I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively stop functioning”. He cites as examples of irreducible complexity, blood clotting and the movement of flagella (whiplike structures used by many microscopic organisms as swimming organelles). Such irreducibly complex structures and mechanisms, maintain IDers, could not have evolved in functional steps. The answer: intelligent design.

A Return to Paley

Setting aside the fact that reasonable naturalistic explanations do exist for many of these systems and structures (not yet satisfactorily formulated in other cases, admittedly), readers, I trust, will recognise a return to Paley in the whole idea of irreducible complexity. Drawing a line beyond which science is presumed unable to proceed is antithetical to the spirit of unfettered scientific inquiry. Is this the attitude we would wish to instil in developing and inquiring minds? And, as if this restriction were not enough, IDers would invoke some mysterious outsider as the “answer” to allegedly insoluble problems. (See the reviews of Darwin’s Black Box: Nature 383: 227-228; American Scientist 85: 474-475.)

The use of selective quotations is a favourite ploy of creationists. They are lifted from the evolutionary literature in such a way as to convey meanings not intended by their authors. In his article, Wishart provides several quotations intended to show that all is not well in evolutionary circles. Space restriction allows extended discussion of only two. However, these will serve to illustrate how misleading some selective quotations can be.

Lynn Margulis, Distinguished University Professor of Botany at the University of Massachusetts at Amherst, is regarded in evolutionary circles as both innovator and maverick. She has been lauded for her work on cellular evolution, but her almost fanatical support of the Gaia hypothesis, considered by many scientists as unscientific, has not met with universal approval. In the article under review, several quotes by Margulis are gleaned from a profile article on her in Science 19 April 1991: 378-381. Here is how two of her statements (in italics for clarity) appear in Wishart’s article: “Darwinists, she goads, wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin…Neo-Darwinism, which insists on [the slow accrual of mutation] is in a complete funk.”

The Statements in Context

Now let us consider Margulis’ first statement (in italics) in context: “Margulis defends herself and Gaia with the rhetorical verve that has long startled her colleagues. Her critics, she said in 1988, just wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him.” Note that Wishart makes no mention of Gaia; yet it is clearly its rejection on this occasion which particularly annoyed her and prompted this tirade. Such verbal salvoes may be grist for the creationist mill (especially when misused), but what really matters in the end is that disputes of this kind are generally resolved by the self-correcting mechanism of science.

The second statement (shown again in italics below) is preceded in the Science article with a brief discussion of Margulis’ valuable contribution to evolutionary change at the bacterial level. The writer then points out that “the controversial part of Margulis’ argument comes after that [with] her insistence that such changes could not have come through the slow buildup of chance mutations, and that therefore neo-Darwinism, which insists on that, is in a complete funk.” Addressing an audience at the University of Massachusetts, Margulis continues: “I have seen no evidence whatsoever that these changes can occur through the accumulation of gradual mutations. There’s no doubt, of course that they exist, but the major source of evolutionary novelty is the acquisition of symbionts – the whole thing then edited by natural selection. It is never just the accumulation of mutations.” [By acquisition of symbionts is meant the incorporation of free-living bacteria (e.g. mitochondria) into other bacteria to form a more complex organism.]

Original Setting Important

The above examples emphasize how vital it is to read selective quotations in creationist writings in their original setting. With reference to the second quotation, Margulis is not jettisoning natural selection entirely, merely playing down its influence as far as the production of evolutionary novelty is concerned. In this she is at odds with prominent evolutionists, a point which is stressed in the Science article. Most significantly, contrary to what might be concluded from Wishart’s article, she is not questioning evolution itself. In spite of differences with her colleagues, she is still very much an evolutionist. It is worth noting that in Wishart’s article the two quotations are linked, even though they were uttered about three years apart!

Hopeful Monsters

Wishart repeats the creationist mantra that the theory of punctuated equilibria “is similar to what became dubbed ‘the hopeful monster theory’ of the 1940s, whereby a dinosaur laid an egg and out of it hatched a bird.” This, continues Wishart, “is tantamount to admitting a miracle – divine intervention – according to creationists”. But, as Stephen Jay Gould, co-author of the punctuated equilibria theory, has observed, “the theory advances no defenses for saltational models of speciation…” (Saltation is postulated abrupt change resulting from a major mutation, that can give rise to a new class or type.)

The writer refers to the “many controversies” within evolutionary theory, which in his opinion receive curt coverage in the science curriculum. Certainly, if it is true that debate at 7th form bursary level is limited to “Darwin vs Lamarck”, then such concern is justified. However, what really concerns Wishart is revealed by the following: “Lamarck was an evolutionist like Darwin with a slightly different spin on the process. He wasn’t a Creationist.” (Emphasis added). Clearly, he wants ID creationism taught alongside evolution as an alternative explanation for biological reality.

People, of course, should be free to believe what they like, but when beliefs which are clearly non-scientific, such as the belief in an intelligent designer, are promoted as legitimate alternatives to evolution in a science curriculum, any opposition to such a move is entirely justified. It surely is the duty of educators and others genuinely concerned with the quality of science education to resist any such intrusions and so uphold the integrity of science in the classroom.

Teaching Evolution to the Alienated

Presenting the evidence just isn’t enough

Bill Peddie

In his book Unpopular Essays, Bertrand Russell remarks that, although he was fully aware of the notion that the human is a rational animal, despite years of searching he could find no evidence to support the assertion. For those hoping to batter the creationist opponents of evolution into submission with logical, rational argument, Russell’s comment should at the very least sound a note of caution.

As a second year student in zoology at Canterbury University, more years ago than I care to remember, I went armed with my genetics and evolution notes to a lecture with the intriguing title Darwin Debunked. The lecturer was the Roman Catholic chaplain and Thomist scholar, Father George Duggan – and his talk even today would stand as a good example of creation science at its thoughtful best. What puzzled me was that, even after the zoologists and geologists in the audience had torn his arguments asunder with devastating counter-examples, this Rhodes scholar and trained Catholic thinker remained totally unmoved.

It was much later that I gradually came to realise that where matters of faith and cultural belief are concerned, there is too much at stake for conventional argument to produce a shift in position.

Let me illustrate with three examples.

On several occasions, Jehovah’s Witnesses have arrived at my door and unsuspectingly offered me literature which I have previously checked out for myself. The pattern has usually been that some time later they escape in disarray (thankfully, I suspect), promising to return with answers to the questions raised. They do not return. Yet it is a hollow victory: the same visitors, with the same neat little sports jackets, the same briefcases no doubt still containing the same flawed literature, and the same smiles of the truly saved, can be seen walking up the front paths of houses in the same neighbourhood the very next weekend. One-on-one tutorial teaching does not necessarily lead to total success.

My second example is from the transcript of an interview I had with a seventh form Polynesian pupil. The transcript included the following exchange.

Me: How old do you think the earth is?
Pupil: Six thousand years old.
Me: And the universe?
Pupil: The same.
Me: If someone was to give you very strong evidence that the world was older than that – and that for example there was geological evidence there was no flood of the size that would cover the world – how would you react?
Pupil: Evidence like what?
Me: Annual tree ring data going back seven or eight thousand years for the bristlecone pine; annual deposits of ice layers which, when counted, give values of many thousands of years; fossils which from every test appear very ancient; and radioactive dating of rocks leading to estimates of not millions but billions of years – those sorts of things for the age of the earth. And then for Noah’s flood – the fact that scientists have calculated you would need three or four times the total amount of water in the sea, atmosphere and under the earth in order to cover the highest mountains like Mount Everest.
Pupil: (there was a pause, then…) Well, I would have to say that God is greater than that. But I am glad you told me about that – because if someone had hit me with that on the street – if I was, say, witnessing – I would have been struck dumb. I don’t know about that sort of thing. Now I can get ready with an answer.
Me: But it wouldn’t change what you think about the age of the Earth.
Pupil: No!
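The flood figure quoted to the pupil can be checked with back-of-the-envelope arithmetic. Here is a minimal sketch, assuming rounded published values (mean Earth radius about 6371 km, Everest about 8.85 km, ocean volume about 1.34 billion cubic kilometres) and treating the required water as a thin spherical shell:

```python
import math

R_EARTH = 6.371e6        # mean Earth radius in metres (assumed rounded value)
H_EVEREST = 8.849e3      # height of Mount Everest above sea level, metres
OCEAN_VOLUME = 1.335e18  # approximate volume of the oceans, cubic metres

# A layer of water deep enough to cover Everest is, to a good
# approximation (since h << R), a thin spherical shell of volume 4*pi*R^2*h.
shell_volume = 4 * math.pi * R_EARTH**2 * H_EVEREST

print(f"Water required: {shell_volume:.2e} m^3")                    # 4.51e+18
print(f"Ratio to ocean volume: {shell_volume / OCEAN_VOLUME:.1f}")  # 3.4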

My third example of an attempt to educate those with a built-in resistance comes from a few years ago, courtesy of physiologist Professor Roger Short of Monash University. After discovering that 27 percent of his first year medical students held a creationist view, he gave eight lessons on evolution to his class and retested them. Despite their having completed an assignment on the subject matter of the lectures, he found no change in the creationist views.

What crystallised my thinking on the nature of the problem was an interview I did when I was a few months into my PhD study into the nature of the creation/evolution debate. In the course of this interview a Maori studies lecturer made the comment that the ultimate in alienation would be to be a Maori evolutionist.

When I protested this with the counter-example of Rangi Walker’s son – a well known zoologist who, as far as I know, still accepts a concept of evolution – my informant’s reply was instructive.

“To the extent he believes in evolution he is not a Maori.”

Group Identity

This reply suggests a way of looking at the debate. For many, the position taken on the debate is one of identity with a group associated with a viewpoint, rather than a rationally constructed, evidence-based position. I am not even convinced that this is itself entirely irrational behaviour. After all, if your family – your whanau – has a discernible set of characterising beliefs, and you think it is important to identify with that family, that religion, that culture, is it simply a question of logical analysis to cut yourself off from the group by questioning what you believe to be one of its underlying major tenets? In today’s politically correct age it is ironic that those who bay for the creationist blood of fundamentalist Christians fall uncomfortably silent when asked to consider the creationism that is so much a part of the thinking of many Maori and Polynesians.

Not the Desired Effect

As already stated, the first surprise for the teacher with creationist pupils of the more extreme sort is that the usual classroom rehearsing of a few well-chosen facts supporting evolution does not have the desired effect on those alienated by their belief system. The second problem is that such is the fervour of the strict creationist camp that their leaders have taken the trouble to assemble a most detailed and documented case, one which is both technical in flavour and at least superficially plausible. They will, for example, quote figures to cast doubt on the reliability of radioactive decay measurements, talk glibly about astrophysicists’ problems with the speed of light, and quote examples where apparently old rocks show signs of recent formation. And they have amongst their number some surprisingly well educated and well qualified supporters. It is true that they have few who are actually doing research or who are specialists in the areas they quote, but criticising them for basing their case on much that is second-hand, and for interpreting science in amateur ways, is not the way to deliver the knockout punch.

With some degree of embarrassment, might I dare suggest that familiarity with research and logical analysis may not always be a central plank of the acquired wisdom of the pro-evolutionary camp either. After all, even amongst the Skeptics I am prepared to guess there are some who accept the validity of radiometric dating of rocks without ever having handled a Geiger counter, and without having the faintest idea of the relative merits, limitations and likely error bars of carbon dating, potassium-argon dating, uranium-lead dating or fission-track analysis. There would be those who accept the idea of pre-hominid ancestors without having seen the fossil collections – or even without the faintest idea of how the process of identification is made.
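For readers who have never worked through one of these dates themselves, the arithmetic behind any parent–daughter method is short. The following is a minimal sketch, not any particular laboratory’s procedure, and it assumes an idealised closed system with no daughter isotope present at formation – exactly the assumptions whose validity the error bars and debates are about:

```python
import math

def radiometric_age(half_life_years, daughter_to_parent_ratio):
    """Age from a measured daughter/parent isotope ratio:
    t = (T_half / ln 2) * ln(1 + D/P), assuming a closed system
    and no daughter isotope present when the mineral formed."""
    return half_life_years / math.log(2) * math.log(1 + daughter_to_parent_ratio)

# A mineral containing equal amounts of uranium-238 and its end product
# lead-206 (half-life roughly 4.47 billion years) is one half-life old:
print(f"{radiometric_age(4.47e9, 1.0):.2e} years")  # 4.47e+09 years

# Three daughter atoms for every remaining parent atom means two half-lives:
print(f"{radiometric_age(4.47e9, 3.0):.2e} years")  # 8.94e+09 years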

Methods of Study

Let’s face it – a huge percentage of our knowledge comes from received, predigested knowledge. The sources of knowledge for the “creation scientist” are admittedly different, but the methods of study are probably sufficiently similar to explain why apparently otherwise well educated people can be found sincerely claiming that the Earth is of the order of six thousand years old. It is important to remember that this hard-won, so-called knowledge is based on hours of study of a different literature and unquestioning acceptance of the textbook assertions of such worthies as Duane Gish, Henry Morris, Ken Ham or, for that matter, the apparently authoritative and profusely referenced claims by the anonymous authors of those nicely printed Watchtower publications.

Unfortunately, although science teachers may know the conventional science of their subject disciplines from university, in my view they are ill prepared to identify the characteristics – or, for that matter, the dangers – of the pseudoscience of creationism as it is sometimes introduced into our schools. For example, I received an extraordinary document through the mail the other day entitled “Understanding The Young Earth Model”. Yes, I have spotted plenty of serious errors and misinterpretations of mainstream science in this publication, which incidentally is called a “science teacher resource booklet”. But you have to remember that I have a relatively recent PhD in the topic. A first encounter with the claims – especially by someone unfamiliar with the quoted sources – may well produce understandable confusion. Many of our science teachers have no geology in their degrees, and it is possible to get right through a university science course without coming up against the raft of evidence which supports acceptance of the ancient past of the universe and the old Earth.

They are not to know that the PhD in the qualifications cited by an author comes from the same university that, Ian Plimer once told me, had for the not inconsiderable sum of US$19 inadvertently awarded a Doctor of Divinity to the slobbery blue heeler belonging to his next-door neighbour. Nor, unless they are very well read, are teachers likely to know which of the creationist assertions are founded on thoroughly discredited experiments or total misrepresentations of the literature.

Human and Dinosaur Footprints

For example, a few years ago impressions of human footprints were reported beside dinosaur footprints in the Paluxy River area in Texas. The creation science case was not helped when one of the creation science assistants reported that he had witnessed the Reverend Dr Carl Baugh carving out some new fossil human footprints by torchlight. The Paluxy River human footprints are now considered of no consequence by palaeontologists, but they still surface in creationist literature.

In a number of instances I encountered evidence of what is at worst deliberate intellectual dishonesty, or at best extremely sloppy and ill-informed research technique, on the part of the leading creationists. In one of his recent lectures in Auckland, John MacKay supported his case by quoting from a book by Derek Ager entitled The New Catastrophism. He underlined the significance of Ager’s comments by stressing the authority of the book, posing a rhetorical question: “Who of you has had a book published by Oxford University Press?” Unfortunately for MacKay, the copy in the University of Auckland (which, incidentally, claims to be published by Cambridge University Press) has a preface. There, in bold type, is the disclaimer –

“…in view of the misuse my words have been put to in the past, I wish to say that nothing in this book should be taken out of context and thought in any way to support the views of the “creationist”, who I refuse to call “scientific”.”

The existence of creationist influences in our schools raises some fundamental questions about the role of the school as an agent of society. If you are teaching in a comfortable white middle-class suburb away from a bible-belt enclave or marae, the worst you are likely to encounter is a weekend missionary visit offering you a chance of spiritual enlightenment. But if you are teaching at the type of school which demands a signature attesting a fundamentalist acceptance of bible literalism as a prerequisite for employment, or in an area where the parental customer base represents unquestioning acceptance of Adam and Eve and the Noah flood and the board of trustees is known to have a bible-literalist or creationist stance, it is legitimate to question how far you should take heed of in loco parentis.

For me, with my training in science and a formal higher degree in science education focused on this very debate, there is normally no contest. I am totally convinced of the case for evolution, and I personally find the evidence overwhelming, as I believe is the case for believing the Earth was created vastly earlier than 6000 years ago. Making room for a discussion of the extreme form of “creation science” is a little like being asked to condone those wishing to waste my pupils’ time on a case for the flat Earth, fake cures for cancer or career guidance by astrology. I do, however, concede that since we have to teach pupils as they are rather than as they should be, the probability that they have already encountered, or are likely to encounter, this set of beliefs makes it more reasonable to tackle the problem. Since they have to learn what constitutes pseudoscience as well as good science, there is also a case for using creation science as a case study.

I also believe that as a science teacher I have a responsibility to represent mainstream science views and attitudes fairly, and not to imply a justifiable case where none exists. But when it came to deciding – as I had to a few years ago – whether or not I should share my understanding with four Exclusive Brethren pupils, knowing that the penalty for heresy might be ostracism by their families, I was less confident. The point is that even if the teacher sees “creation science” as almost devoid of redeeming features, I believe we owe pupils and their families the right to choose their own religion and their own place in society. I must also stress that for many teachers the debate is likely to be a non-issue. It is really only in those schools where the contributing community contains a significant number of creationists – or vehement creationists intent on spreading their message in the schools – that there is likely to be an issue for the teacher.

The regulations governing what happens in schools are of little help. While the Education Act safeguards the right of university lecturers to raise controversial issues and question cherished beliefs (with the possible exception of revisionist histories of the Holocaust), there is no such clear direction for teachers at the secondary level.

What, then, should the teacher do about someone offering creationist literature to the school, or offering to come in and share creationist assertions with his or her pupils?

My main word of advice is that teachers should make themselves thoroughly familiar with the nature of the literature. I first entered the arena assuming it was just a question of assembling the conventional evidence à la the prescription and thereby overwhelming the counter-case. I rapidly discovered that there is a difference between evidence derived from a pseudoscience and that of the more conventional scientific literature. I find it helps my students to teach them how to read such evidence critically; the way I now use such material in the classroom is to demonstrate how science can be misrepresented.

I also believe that as teachers we should be sensitive to the fact that we may be dealing here with matters of religious or cultural belief, and avoid direct confrontation where it is possible to do so. My personal answer is to introduce some geological and astronomical principles to my pupils early on, and leave the evolution of man till much later in the piece. My preferred strategy is to show a variety of simple methods for establishing that the world is very old, inviting the pupils to draw their own conclusions as well as conveying the majority point of view. These might include photographs of varves, annual and daily ring formation in coral deposits, speed-of-light data from distant stars and galaxies, and a highly simplified account of radioactive dating.
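The speed-of-light method is the easiest of these to put into numbers: a light-year is defined so that an object N light-years away is seen as it was at least N years ago. A minimal sketch, using rounded, commonly quoted distances (assumed values):

```python
# Distance in light-years gives a minimum light-travel time in years.
DISTANCES_LY = {
    "Proxima Centauri": 4.2,
    "centre of the Milky Way": 27_000,
    "Andromeda galaxy": 2_500_000,
}

for obj, ly in DISTANCES_LY.items():
    print(f"{obj}: light now arriving left about {ly:,.0f} years ago")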

I give examples of variation in species, then examples of observed speciation. In the senior school I use the examples of new species including the cichlid fishes, Primula kewensis and the ring species of the Black-backed Gulls and Herring Gulls.

I find even fifth form pupils are fascinated by skeleton photos of related species and of hominid and prehominid fossils.

I also give simplified accounts of protolife experiments such as those of Urey and Miller – and the Fox experiments.

After this I believe the pupils are more ready to make some of their own judgements about evolution when it is formally studied.

I also think that whatever the religious belief of the teacher it does no harm to point out that most mainstream religious believers now accept evolution. If I am asked I make no secret of the fact that I am a lay preacher in the Methodist church and have no problems with reconciling my interpretation of the bible with my scientific understanding of an ancient Earth and processes of evolution.

I think that, whatever the constraints of the exam prescription, as science teachers my colleagues and I have an obligation to teach the difference between pseudoscience and science. Where creation science is helpful is in highlighting for senior pupils how science can be misrepresented.

Finally, rather than lament the entanglement of science, education and entrenched world views, we might do worse than allow the last word to John C Greene.

“I am convinced that science, ideology and world view will forever be intertwined and interacting. As a citizen concerned for the welfare of science and of mankind generally, however, I cannot help but hope that scientists will recognise where science ends and other things begin.”

PC Chemistry in the Classroom

One of the fictions of the “naive-greens” and other “irrationalists” is that “chemicals” are bad while natural products (non-chemicals?) are good. When asked whether water is a chemical, and hence evil, or whether cyanide, nicotine or the botulism toxin are natural, and hence benign, they change the subject. You might think that our classrooms are immune to such nonsense; in the November issue of Chemistry in New Zealand, Ian Millar of Carina Chemical Laboratories Ltd tells us we are wrong.

Mr Millar’s sister is a secondary school chemistry teacher who had received some official guidelines titled “Chemical Safety Data Sheets for Teaching Laboratories”, promoting the safe use of chemicals in schools. Mr Millar looked up a typical laboratory chemical to see what its data sheet had to say. Some excerpts follow:

  • Personal protection — dust respirator
  • Ventilation — extraction hood
  • Gloves — rubber or plastic
  • Eye — glasses, goggles or face shield
  • Other — plastic apron, sleeves, boots if handling large quantities
  • Disposal — dispose through local authorities if appropriate facilities are available, otherwise pass to a chemical disposal company
  • First Aid — Eyes: irrigate thoroughly with water. Skin: wash off thoroughly with soap and water. Ingested: wash out mouth thoroughly with water. In severe cases obtain medical attention.

Now this chemical is clearly pretty nasty stuff and you might be thinking that it’s right and proper that our schools should be encouraging such sound practice.

But left to our own devices, most of us would dispose of the stuff by throwing it into the sea, reasoning that the sea wouldn’t suffer too much damage as a result. After all, this apparently dangerous chemical is nothing more than sodium chloride – better known as common salt.

Mr Millar points out that he enjoys bathing in a 3.5% solution of NaCl (the sea) and even eats it as table salt.

Can we now expect to see television chefs decked out in gloves, safety glasses, and plastic aprons, and calling in a chemical disposal company to clean up the kitchen afterwards? Should we ban children from our domestic kitchens because of the obvious risks to their health? These instructions are not only nonsense — they are dangerous nonsense. They are so ludicrous that they may well encourage people to ignore safety recommendations when handling genuinely dangerous chemicals such as cyanide or nitric acid. Or they may create a generation stricken with chemophobia.

To argue that it is good to err on the side of caution is wrong. This information is simply inaccurate. Nobody washes out their mouth after eating salt or taking in a mouthful of surf. I believe that this data sheet does not represent a simple error of judgement but unfortunately reflects an ideology which holds that all “chemicals” are bad and destructive of life and the environment.

I might have taken some comfort from the belief that whatever has been happening to the teaching of English, history, or anthropology, the objectivity of the process of science would make it immune to such victim-promoting political correctness. Could parents among our membership find out if the government’s chemical police have decided that NaCl is a politically incorrect “chemical” and needs all these precautions, while “Sea-Salt” is a “natural” product which can be used with safety?

Maori Science

Can traditional Maori knowledge be considered scientific?

The idea of a separate indigenous science, practised by Maori before European settlement and passed on to their descendants, is an appealing one. The phrase “Maori science” has cropped up in school curriculum reform and in Museum of New Zealand planning documents. Courses on it have been taught at university level. The Department of Conservation has decided it is “highly relevant to future policies for science and research”. But does “Maori science” even exist?

At first, this seems a silly question. After all, we know that Maori possessed a huge body of knowledge about their environment, passed on orally for generations, even if today much of it has been lost. The knowledge of how to make bird snares, process karaka berries to destroy their toxins, and differentiate dozens of varieties of harakeke surely qualifies as science.

But science is more than a body of in-depth knowledge about the world. Other bodies of knowledge include history, literary theory, gardening, auto mechanics and rugby. If knowing a lot about flax is enough to make you a scientist, then so is knowing a lot about rugby. Although scientists tend to know a lot about their area of study, as astronomer Carl Sagan has said, “science is a way of thinking much more than it is a body of knowledge”.

Defining Science

The aim of science is to understand how the world really works. Not just collecting facts about the world, but questioning the mechanisms behind those facts. Knowing how to prepare karaka berries is knowledge; trying to find out why and how they are poisonous, and how your preparation is removing the poison, is science. A perfect scientist (most are mere human beings) is continually questioning, never accepting hearsay or declaring an area closed to inquiry. This aim of science, and all the methods that flow from it, is responsible for the extraordinary understanding of the natural world we have today.

Dr Ian Hawthorn of Waikato University defines science as “objective rational co-operative knowledge acquisition”. That is, it deals with the real or empirical world as opposed to subjective opinion or personal belief; it assumes that the world can be understood rationally, without recourse to the supernatural; and it operates through the sharing of knowledge by scientists.

Under this definition of science, how does Maori knowledge measure up? The answer, it seems, is not very well.

Kaumatua Morris Grey has pointed out that there was no demarcation between religion and knowledge in Maori culture. Religion’s goal is not to understand the natural world, but to help people to live in it. It operates on faith and authority. However good the knowledge database possessed by Maori, questioning (“Why don’t kakapo fly? Why is the sky blue? What is a rainbow?”) would quickly bring you up against religious and supernatural explanations, which by their nature are not open to questioning.

Maori culture was not alone in this, of course. On the contrary, every society in the world until very recently operated much the same way. Society then was what we today would call authoritarian, where the authority of your elders and gods was not up for challenge. In Maori society, knowledge was not freely available, but imparted to those who were deemed worthy in a controlled environment. Knowledge was power, and had to be restricted. It was legitimised by the authority of your teacher.

A society in which science can develop needs to have people with sufficient technology and leisure time to do research. It also has to have a good communications network, and ways of reliably storing, disseminating and duplicating information. This state was nearly reached in several ancient societies, but the right conditions were only achieved a few hundred years ago in Europe, and it is only an accident of history that science began there and not in China or South America. Maori society had neither the communications network nor the social structure for collaborative research to go on between different iwi.

So Maori knowledge acquisition was neither objective (relying as it did on religious faith), rational (it mixed supernatural with mundane explanations), nor co-operative (it relied on authority rather than challenge and consensus).


It seems then that “Maori science” doesn’t qualify as science. What should it be called then? Botanist Murray Parson has suggested the useful word matauranga, one Maori term for knowledge, and one which makes no assumptions about how scientific that knowledge is.

The phrase “Maori science” is problematic in a second sense. Most scientists would agree that the universality of science is one of its strongest features. Science is only accidentally European and, more importantly, can be practised by any culture. So the terms “Pakeha science” or “Western science” do not make sense — either a practice is science or it is not, regardless of the practitioner’s culture.

Maori knowledge or matauranga seems to have concentrated more on getting along in the world than understanding what makes it tick; it has more to do with technology than science. The words science and technology are often used together or interchangeably, but biologist Lewis Wolpert has argued that until quite recently the two areas had very little to do with each other — the technology our ancestors used for hunting, farming and building houses was uninformed by science until the 19th century. So matauranga may not be science, but that is only one of the problems that would assail anyone who tried to defend it as a research method or a curriculum subject.

Demeaning Traditional Knowledge

Calling matauranga a science demeans it. Maori knowledge — a mixture of religion, mythology and observed facts — is sometimes inconsistent and often resorts to an appeal to authority to justify a statement. It has different aims and standards from science. Moreover, to contrast it with “Pakeha” science, which is wider in scope and both more detailed and more accurate in almost every case, will teach Maori children that they are heir to a “science” that is less comprehensive and often simply wrong. Scientific standards are the wrong ones to use when examining matauranga.

Consider the story quoted by early anthropologist Elsdon Best about the pukeko arriving in New Zealand on the Aotea or Horouta canoes. This is a good example of the sort of knowledge claim that might be put forward in a Maori science class. It is also empirically testable. Ornithologists will point out that although pukeko are indeed found throughout most of the Pacific, New Zealand pukeko belong to the Australian subspecies, not the Pacific one. This is consistent with other facts, such as the ancestors of takahe being pukeko which settled here long before humans, and the number of other bird species that have arrived here from across the Tasman. It is not, however, consistent with matauranga.

Such contradictions and anomalies are not rare. If matauranga were to qualify as science, it would have to play by the rules of the game and discard its mythological and religious elements. To many, and I am sure to most Maori, this seems a ludicrous solution, one which would rob matauranga of its coherency and richness.

There is another problem with the concept of Maori science. Although some of its promoters have the laudable aim of making science more accessible to Maori children, setting up an opposition between Maori and Pakeha science will have a different effect. The message conveyed will be that “real” science, with its wide-ranging and powerful explanations, is owned by Pakeha, and that Maori own only a lesser version.

As artist Cliff Whiting has pointed out, this ignores the fact that any race and culture can practise science. Members of historically excluded groups, such as Maori and women, should be encouraged to participate in science, not taught that it is the tool of the dominant culture and that to study it is to sell out.

Why Indigenous Science?

Given that there are so many problems with the notion of indigenous science, why is it being promoted at all?

The seminal publication in this area is a paper by Liz McKinley, Pauline Waiti and Beverley Bell, published in 1992 in the International Journal of Science Education. It advocates studying the culture of Maori students to encourage their achievement in science. The proponents are not cynical and malicious, as the creationist movement in the US has been in its struggle to introduce religion into science classes. They genuinely believe that Maori knowledge is science and should be taught. The problem here is that criticising their solution could be misinterpreted as criticising the very real problem of poor Maori participation in science.

About half the paper offers constructive suggestions for making science relevant to Maori. Again and again, however, the authors slide from this point to actively defending a separate indigenous science. Their use of the term “Maori science” seems to be an attempt to legitimise matauranga in Pakeha eyes, by borrowing the cloak of science to confer some mana. As Mere Roberts, a zoologist studying kiore, has pointed out, this is a little like the situation of some decades ago, when some Maori discarded their language and culture by “trying to be Pakeha”. Why should Maori have to “legitimise” their matauranga by trying to turn it into science?

Maori science is not being talked about only in academic journals. In 1992, the Department of Conservation, in response to the debate generated over the poisoning of kiore, the Polynesian rat, gave a bicultural presentation. Roberts talked about kiore from a scientific point of view, Bradford Haami from that of matauranga (which DoC called tikanga Maori, or Maori custom/protocol). The message was that each of these “techniques” of data-gathering is of equal value when doing research, and that this approach was highly relevant to future policies for science and research.

In 1993, McKinley and Waiti are on contract to the Ministry of Education to translate the NZ Curriculum Science Statement into Maori. An interesting point made in their paper is that some scientific concepts will not be crossing the language barrier; the concepts taught in Maori may not be the same as those taught in English. Their example is that in Maori, “wind” would be termed “Tawhirimatea”, the name of the Maori god of wind. They defend the inclusion of religion in a science course by pointing out that concepts of energy taught by a physics and a chemistry teacher also differ, which hardly seems a reasonable analogy even if it is true.

The idea of Maori science seems to make sense at first hearing, partly because of a vernacular but inaccurate definition of science as “a body of knowledge”, and partly because it appeals to the fairness of teachers, who genuinely want different perspectives and to tell both sides of the story. The latter appeal is misleading, and echoes creationist requests for equal time for their story. Presenting two alternative viewpoints is only appropriate if the viewpoints are genuine alternatives; that is, if they are seeking to do the same thing in different ways. Science and matauranga do not seek to do the same thing.

The transitions going on in New Zealand society at the moment mean that discussions of cultural beliefs can become emotionally polarised, with misquotation and misunderstanding running riot. Posturing, name-calling or Maori/Pakeha-“bashing” will not help answer these issues. It is vital that critical and constructive argument can occur instead.