Confessions of a New Age Skeptic

How should a skeptic relate to those who have other belief systems?

What does a skeptic and atheist do when part of a broader group that is quite loose on empirical evidence and critical thinking? A lot of us experience this to some degree, but I’ve wrestled with my engagement with a particular group I’m fond of for the last 20 years.

Convergence: Beyond 2000 (previously, “Towards 2000”) is an annual camping event that takes place in North Canterbury over the New Year break. Its tag line is: “Gathering every year for a co-creative festival celebrating nature, spirituality, love, and healing”. The event is alcohol and drug-free, has good facilities, and includes about 350 people.

Convergence is a place where the cultural norm is one of suspension of disbelief. All of the typical energy healing models are practised and taught there in a workshop context by volunteer facilitators. Reiki, guru aspirants, channelers, tarot card readers, Mayan calendar adherents, fairy lovers, tantric energy, The Secret, massage healers … well, where do you stop?

I first came along to the event in 1992. I’d migrated from Canada, and my flatmate and all his friends, who were a playful, friendly bunch, went every year; I was drawn into it. I was still coming out of 12 years of study and work as a mechanical engineer installing computer systems in paper mills, and was quite happy to regress into a less linear approach to my perception of life and how to live it.

My first year I was quite guarded, aware that there are people out there who attempt to lure others away to events “just like this” with the aim of drawing them into some sect or other. All the warmth, playfulness and affection that seemed to be happening was pretty overwhelming and I felt I stuck out like a sore thumb. Fortunately, it wasn’t a sect, I wasn’t pressured to be “one of us”, and I was generally engaged with at a warm, receptive level.

At Convergence in the first few years I remember often feeling discomfort while the friend I might be walking or talking with would leap joyfully into the arms of someone they knew from previous events. It took a lot of self-reassurance to stick with it, and in time I found myself being outrageously affectionate as well, and carrying that forward into my life. I’ve made a lot of friends at Convergence, and found my last two partners there as well (having a child with both of them). So, there have been a lot of good times inside my relationship with the group.

My other exposures to “hooey” weren’t disturbing. I’d lived already for a few years on a hippy commune near Motueka where I’d seen any number of loose approaches to life. In a way, it made me feel more sane being around people that I was genuinely very fond of but that obviously had one or two screws loose and rattling around.

This Xmas, having recently turned 50 and after having gobbled up the Skeptics Guide to the Universe (and other skeptic podcasts) I joined the ranks of the NZ Skeptics. I’ve finally come to the conclusion that I’m an atheist, a humanist, and I’m going to share that when it is relevant.

It’s still a learning experience for me. When do I say something? If a friend talks about the great course in acupuncture that they are in their final year of, do I say what I believe? No, I haven’t, not often. But I do wonder about the cost of not saying something. Did we lose an opportunity for intimacy? Did I miss the chance to challenge their chosen life path, possibly sparing them some wasted years of hand-waving healing modalities? I’m still not clear on that one, being new to this.

“What’s the harm” is a classic response. I’ve reflected on my hippy years and now realise there was harm. The anti-vax/DIY home-birthing (without adequate support) crowd had three kids that are still paying the price. I’ve supported the deaf community as a social worker and found that there are years during which a lot of them go through milestone birthdays (anti-vax again). I’ve had my kids treated with bogus, outwardly professional therapies (waste of cash and time).

This year, when I went to Convergence I found the issue of my personal beliefs much more emotionally charged. I told quite a few people that I met that I had ‘come out’ as a skeptic. In saying this, I found others that shared my feelings.

Encouraged by my gathering support, in front of the whole crowd I ‘testified’ as an atheist/critical thinker and offered a workshop on the issue. The crowd barked with laughter and good will as I did it humorously. It turned out the others I’d spoken to prior to the meeting had initiated a workshop already!

In the workshop people spoke about the fear of diverging from the group norm, and holding their tongue while others spoke about their wild unfounded beliefs. They mentioned the discomfort of “having to” participate in opening rituals (blessing to the four directions…yadda yadda). And not knowing others that felt the same. We agreed that our general perspective was a healthy one for the festival, and one to be openly celebrated.

Next year we’ll open with a workshop for sceptics. It’s a beautiful event, and the acceptance is big enough to include critical thinking. And who knows, we may make us a few converts!
www.convergence.net.nz/wordpress/

Science as a human endeavour

If students are to pursue careers in science, they need to be able to see themselves in that role. One way to encourage this may be through the telling of stories. This article is based on a presentation to the 2008 NZ Skeptics Conference in Hamilton.

New Zealand’s new science curriculum asks us to develop students’ ability to think critically. As a science educator I think that’s about the most important skill we can give them: the ability to assess the huge amount of information that’s put in front of them from all sorts of sources. We also need to recognise that the ideas and processes students are hearing about have come to us through the activities of people – it’s people who develop science understanding. Science changes over time, as people’s ideas change. It’s fluid, it’s done by people, and it’s a human endeavour.

This puts science in an interesting position. It has its own norms, and its own culture, but it’s embedded in the wider culture as well. Those norms of science include its history. I find it sad that many of my students have no idea of where the big ideas in science came from. They don’t know what the people who were developing those ideas were like.

The new curriculum document recognises that the nature of science is an important strand in the curriculum, because it is what gives science its context, and lets students see science as a human endeavour. They’re going to learn what science is, and how scientists do science. They will become acquainted with the idea that scientists’ ideas change as they’re given new information; that science is valuable for society. And students are going to learn how it’s communicated.

Our future prosperity depends on students continuing to enter careers in the sciences. Richard Meylan, a senior adviser at the Ministry of Research, Science and Technology, said to me recently that somewhere between the end of year 13 and that two-month break before they go to university, we seem to be losing them. The universities are tending to see a drop in the number of students who have picked science as something that they want to continue in. Students don’t seem to see it as a viable career option, and there are many reasons for that.

We need more scientists, we need scientifically-literate politicians, and we need a community that understands science: how science is done, how science is relevant; one that sees science and scientists as being an integral part of the community. But how are we going to get there? What sorts of things can we do that are going to make young people want to carry on in science? Students often don’t choose science – how are we going to change that?

One of the reasons, perhaps, is that they often don’t see themselves as scientists. We did a bit of research on this at Waikato University last year, asking what would encourage our first-year students to continue as scientists. And what they were saying was, “Well, a lot of the time I don’t see myself as a scientist.” We asked, what would make a difference? The response: “Seeing that my lecturers are people.” People first, scientists second.

When I googled ‘scientist’ I had to go through eight or nine pages of results before finding something that looks like my own idea of a scientist. (‘Woman scientist’ is a bit better!) Almost all the guys have moustaches, they’ve all got glasses, all the women are square-shaped. Students don’t see themselves in this. We need them (and the rest of the community!) to see science as something that ordinary people do.

Now, what sorts of things are those ordinary people doing? They’re thinking; they’re speculating, they’re saying ‘what if?’ They’re thinking creatively: science is a creative process and at its best involves imagination and creativity. Scientists make mistakes! Most of the time we’re wrong but that doesn’t make good journal articles; usually no-one publishes negative results. So you just hear about the ‘correct’ stuff. Scientists persist when challenged, when things aren’t always working well.

Science stories

One way of fostering students’ engagement with science, and seeing themselves in it, is to tell them stories, to give them a feeling of how science operates. Brian Greene, a science communicator and physicist in the US, says:

I view science as one of the most dramatic narratives our species can tell. The story of our search to understand the Universe and ourselves. When that search is conveyed using the power of story – the story of discovery – we can all feel part of the journey.

So I’m going to tell you stories. And I’m going to tell stories about old, largely dead, people because one of my passions at the moment is the history of science. A lot of science’s big ideas have a history that stretches back 3-400 years. But they’re just as important today, and I think that an understanding of the scientists who came up with those ideas is also important today.

I think it’s important that kids recognise that a lot of scientists are a bit quirky. But then, everyone’s a bit quirky – we’re all different. One example of someone ‘a bit different’ is Richard Feynman. Famous for his Nobel Prize-winning work in quantum electrodynamics (and for a lecture that anticipated nanotechnology), he was a polymath: a brilliant scientist with interests in a whole range of areas – biology, art, anthropology, lock-picking, bongo-drumming. He was into everything. He also had a very quirky sense of humour. He was a gifted teacher, too, and he showed that from an early age. His sister Joan has a story about when she was three, and Feynman was nine or so. He’d been reading a bit of psychology and knew about conditioning, so he’d say to Joan: “Here’s a sum: 2 plus 1 more makes what?” And she’s bouncing up and down with excitement. If she got the answer right, he’d give her a treat. The Feynman children weren’t allowed lollies for treats, so he let her pull his hair till it hurt (or, at least, he behaved as if it did!), and that was her reward for getting her sums right.

Making mistakes

We get it wrong a lot of the time. Even the people we hold up as these amazing icons – they get it wrong. Galileo thought the tides were caused by the Earth’s movement. At the time, no-one had developed the concept of gravity. How could something as far away as the Moon possibly affect the Earth? We look back at people in the past and we think, how could they be so thick? But, in the context of their time, what they were doing was perfectly reasonable.

Louis Pasteur, the ‘father of microbiology’, held things up for years by insisting that fermentation was due to some ‘vital process’ rather than to chemistry. He got it wrong.

And one of my personal heroes, Charles Darwin, got it completely wrong about how inheritance worked. He was convinced that inheritance worked by blending. When Darwin published The Origin of Species, in 1859, Mendel’s work on inheritance hadn’t been published. It was published in Darwin’s lifetime – Mendel’s ideas would have made a huge difference to Darwin’s understanding of how inheritance worked – part of the mechanism for evolution that he didn’t have. But he never read Mendel’s paper.

Scientists do come into conflict with various aspects of society. Galileo had huge issues with the Church. He laid out his understanding of what Copernicus had already said: the Universe was not geocentric, it didn’t go round the Earth. The Church model was that the Universe was very strongly geocentric: everything went round us. Galileo was accused of heresy, and shown the various instruments of torture, the ones for pulling out thumbnails and crushing feet. He did recant, and he was kept under house arrest until his death. And the Church officially apologised to him in 1992. A long-running conflict indeed.

And there’s conflict with prevailing cultural expectations. Beatrice Tinsley was an absolutely amazing woman; a New Zealander who has been called a world leader in modern cosmology, and one of the most creative and significant theoreticians in modern astronomy. She went to the US to do her PhD in 1964, and finished it in 1966. Beatrice published extensively, and received international awards, but she found the deck stacked against her at the University of Texas, where she worked. She was asked if she’d design and set up a new astronomy department, which she did. The university duly opened applications for the new Head of Department. Beatrice applied. They didn’t even respond to her letter. So she left Texas. (Yale did appreciate her, and appointed her Professor of Astronomy.) A couple of years later she found she had a malignant melanoma, and was dead by the age of 42. The issue for Beatrice was a conflict between societal expectations and the area where she was working: women didn’t do physics.

Science versus societal ‘knowledge’

Raymond Dart was an Australian-born anatomist who worked at the University of Witwatersrand in South Africa. He was widely known among the locals for his fondness for fossils; you could trundle down to Prof Dart’s house, bring him a lovely bit of bone, and he’d pay you quite well. One day in 1924 the workers at Taung quarry found a beautiful little skull – a face, a lower jaw, and a cast of the brain – in real life it would sit in the palm of your hand. Dart was getting ready for a wedding when the quarry workers arrived, and he was so excited by this find that when his wife came in to drag him off to be best man, he still didn’t have his cuffs and his collar on and there was dust all over his good black clothes. He was absolutely rapt.

Dart looked at this fossil and saw in it something of ourselves. He saw it as an early human ancestor. The jaw is like ours, it has a parabolic shape, and the face is more vertical – relatively speaking – than in an ape. He described it, under the name Australopithecus africanus, as being in our own lineage and went off to a major scientific meeting, expecting a certain amount of interest in what he’d discovered. What he got was a fair bit of doubt, and some ridicule. How could he be so foolish? It was surely an ape.

By 1924 evolution was pretty much an accepted fact in the scientific community. But there was a particular model of what that meant. In some ways this built on the earlier, non-evolutionary concept of the Great Chain of Being. They also had a model that tended to view the epitome of evolutionary progress as white European males. It followed from this that humans had evolved in Europe, because that’s where all the ‘best’ people came from. Black Africans were sometimes placed as a separate species, and were regarded as being lower down the chain.

Yet here was Dart saying he’d found a human ancestor in Africa. This would mean the ancestor must have been black – which didn’t fit that world-view. It’s a racist view, but that reflected the general attitudes of society at the time, and the scientists proposing that view were embedded in that society just as much as we are embedded in ours today.

Another difficulty for Dart had to do with prevailing ideas about how humans had evolved. By the 1920s Neanderthal man was quite well known. Neanderthals have the biggest brains of all the human lineage – a bigger brain than we have. And the perception was that one of the features that defined humans, apart from tool use, was a big brain. It followed from this that the big brain had evolved quite early. Dart was saying that Australopithecus was a hominin, but Australopithecus as an adult would have had a brain size of around 400cc. We have a brain size of around 1400cc. Australopithecus didn’t fit the prevailing paradigm. The big brain had to come first; everybody knew that.

And belief in that particular paradigm – accepted by scientists and non-scientists alike – helps to explain why something like Piltdown man lasted so long. Over the period 1911-1915 an English solicitor, Charles Dawson, ‘discovered’ the remains of what appeared to be a very early human indeed in a quarry at Piltdown. There were tools (including a bone ‘cricket bat’), a skull cap, and a lower jaw, which looked very old. The bones were quite thick, and heavily stained. This was seized upon with joy by at least some anatomists because the remains fitted in with that prevailing model: old bones of a big-brained human ancestor.

People began to express doubts about this fossil quite early on, and these doubts grew as more hominin remains were confirmed in Africa and Asia. But it wasn’t completely unmasked as a fake until the early 1950s. The skull looked modern because it was a modern (well, mediaeval) skull that had been stained to make it look really old. The jaw was that of an orangutan, with the teeth filed so that they looked more human and the jaw articulation and symphysis (the join between right and left halves) missing. When people saw these remains in the light of new knowledge, they probably thought, how could I have been so thick? But in 1914 Piltdown fitted with the prevailing model; no-one expected it to look otherwise. And I would point out that it was scientists who ultimately exposed the fraud. And scientists who re-wrote the books accordingly.

Thinking creatively

The next story is about Barry Marshall, Robin Warren, and the Nobel Prize they received in 2005. (These guys aren’t dead yet!) Here’s the citation:

[The 2005] Nobel Prize in Physiology or Medicine goes to Barry Marshall and Robin Warren, who with tenacity and a prepared mind challenged prevailing dogmas. By using technologies generally available… they made an irrefutable case that the bacterium Helicobacter pylori is causing disease.

The prevailing dogma had been that if you had a gastric or duodenal ulcer, you were a type A stress-ridden personality. The high degree of stress in your life was linked to the generation of excess gastric juices and these ate a hole in your gut. Marshall and Warren noticed that this bacterium was present in every preparation from patients’ guts that they looked at. They collected more data, and found that in every patient they looked at, H. pylori was present in the diseased tissue. One of them got a test-tube full of H. pylori broth and drank it. He got gastritis: inflammation of the stomach lining and a precursor to a gastric ulcer. He took antibiotics, and was cured. The pair treated their patients with antibiotics and their ulcers cleared up.

Because they were creative, and courageous, they changed the existing paradigm. And this is important – you can overturn prevailing paradigms, you can change things. But in order to do that you have to have evidence, and a mechanism. Enough evidence, a solid explanatory mechanism, and people will accept what you say.

Which was a problem for Ignaz Semmelweis. He had evidence, alright, but he lacked a mechanism. Semmelweis worked in the Vienna General Hospital, where he was in charge of two maternity wards. Women would reputedly beg on their knees not to be admitted to Ward 1, where the mortality rate from puerperal fever was about 20 percent. In Ward 2, mortality was three or four percent. What caused the difference? In Ward 2 the women were looked after exclusively by midwives. In Ward 1, it was the doctors. What else were the doctors doing? They were doing autopsies in the morgue. And they would come from the morgue to the maternity ward, with their blood-spattered ties, and I hate to think what they had on their hands. Then they would do internal examinations on the women. Small wonder so many women died. Semmelweis felt that the doctors’ actions were causing this spread of disease and said he wanted them to wash their hands before touching any of the women on his ward. Despite their affronted reactions he persisted, and he kept data. When those doctors washed their hands before doing their examinations, mortality rates dropped to around three percent.

The trouble was that no-one knew how puerperal fever was being transmitted. They had this idea that disease was spread by miasmas – ‘bad airs’ – and although the germ theory of disease was gaining a bit of traction the idea that disease could be spread by the doctors’ clothes or on their hands still didn’t fit the prevailing dogma. Semmelweis wasn’t particularly popular – he’d gone against the hospital hierarchy, and he’d done it in quite an abrasive way, so when he applied for a more senior position, he didn’t get it, and left the hospital soon after. He was in the unfortunate position of having data, but no mechanism, and the change in the prevailing mindset had to wait for the conclusive demonstration by Koch and Pasteur that it was single-celled organisms that actually caused disease.

Collaboration and connectedness

Scientists are part of society. They collaborate with each other, are connected to each other, and are connected to the wider world. Although there have been some really weird people that weren’t. Take Henry Cavendish – the Cavendish laboratory in Cambridge is named after him. He was a true eccentric. He did an enormous amount of science but published very little, and was quite reclusive – Cavendish just didn’t like talking with people. If you wanted to find out what he thought, you’d sidle up next to him at a meeting and ask the air, I wonder what Cavendish would think about so-and-so. If you were lucky, a disembodied voice over your shoulder would tell you what Cavendish thought. If you were unlucky, he’d flee the room.

But most scientists collaborate with each other. Even Newton, who was notoriously bad-tempered and unpleasant to people whom he regarded as less than his equal, recognised the importance of that collaboration. He wrote: “If I have seen further than others, it is because I have stood on the shoulders of giants.” Mind you, he may well have been making a veiled insult to Robert Hooke, to whom he was writing: Hooke was rather short.

What about Darwin? Was he an isolated person, or a connected genius? We know that Darwin spent much of the later years of his life in his study at Downe. He had that amazing trip round the world on the Beagle, then after a couple of years in London he retreated to Downe with his wife and growing family, and spent hours in his study every day. He’d go out and pace the ‘sandwalk’ – a path out in the back garden – come back, and write a bit more. Darwin spent eight years of that time producing a definitive work on barnacles, and he didn’t do it alone. He wrote an enormous number of letters to barnacle specialists, and to other scientists asking to use work that they’d done, or to use their specimens to further the work he was doing.

He was also connected to a less high-flying world: he was into pigeons. This grew from his interest in artificial selection and its power to change, over a short period of time, various features in a species. So he wrote to pigeon fanciers. And the pigeon fanciers would write back. These correspondents were often of a lower social class, and various family and friends may well have been a bit concerned that he spent so much time speaking to ‘those people’ about pigeons. And Darwin had a deep concern for society as well. He was strongly anti-slavery, and he put a lot of time (and money) into supporting the local working-class people in Downe. He was still going in to London to meet with his colleagues, men like Lyell and Hooker, who advised him when Alfred Wallace wrote to him concerning a new theory of natural selection. Now there’s an example of connectedness for you, and the impact of other people’s thought on your own! It was Wallace who kicked Darwin into action, and led to him publishing the Origin of Species.

That’s enough stories. I’m going to finish with another quote from Brian Greene:

Science is the greatest of all adventure stories, one that’s been unfolding for thousands of years as we have sought to understand ourselves and our surroundings. Science needs to be taught to the young and communicated to the mature in a manner that captures this drama. We must embark on a cultural shift that places science in its rightful place alongside music, art and literature as an indispensable part of what makes life worth living.
Science lets us see the wonder and the beauty of the stars, and inspires us to reach them.

Behind the Screen

Mass screening programmes have generated considerable controversy in this country. But these programmes have inherent limitations, which need to be better understood.

In 1996 the Skeptical Inquirer published an article by John Allen Paulos on health statistics. Among other things this dealt with screening programmes. Evaluating these requires some knowledge of conditional probabilities, which are notoriously difficult for humans to understand.

Paulos presented his statistics in the form of a table; a modified version of this is shown in the table below.

                  Have the      Do not have      Totals
                  condition     the condition
Test Positive           990            9,990     10,980
Test Negative            10          989,010    989,020
Totals                1,000          999,000  1,000,000

Table 1

Of the million people screened, one thousand (0.1%) will have the condition. Of these 1% will falsely test negative (10) and 99% will correctly exhibit the condition. So far it looks good, but 1% of those who do not have the condition also test positive, so that the total number who test positive is 10,980. Remember that this is a very accurate test. So what are the odds that a random person who is told by their doctor that s/he has tested positive, actually has the condition? The answer is 990/10,980, or 9%.

In this hypothetical case the test is 99% accurate, a much higher accuracy rate than any practical test available for mass screening. Yet over 90% of those who test positive have been diagnosed incorrectly.
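The arithmetic above is a direct application of conditional probability, and is easy to check with a few lines of code. A minimal sketch in Python (the function and variable names are my own, not from the article):

```python
def positive_predictive_value(base_rate, sensitivity, specificity,
                              population=1_000_000):
    """Return (true positives, total positives, PPV) for a screening test."""
    have = population * base_rate                         # people with the condition
    true_pos = have * sensitivity                         # correctly detected
    false_pos = (population - have) * (1 - specificity)   # healthy but flagged
    total_pos = true_pos + false_pos
    return true_pos, total_pos, true_pos / total_pos

# Table 1: base rate 0.1%, test 99% accurate in both directions
tp, total, ppv = positive_predictive_value(0.001, 0.99, 0.99)
print(round(tp), round(total), round(ppv, 3))  # 990 10980 0.09
```

Changing the inputs shows how sharply the 9% figure depends on the base rate rather than on the accuracy of the test.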

In the real world (where tests must be cheap and easy to run) a very good test might achieve 10% false negatives and positives. To some extent the total percentage of false results is fixed, but screening programmes wish to reduce the number of false negatives to the absolute minimum; in some countries they could be sued for failing to detect the condition. This can only be done by increasing the chance of false positives or inventing a better test. Any practical test is likely to have its results swamped with false positives.

Consider a more practical example where the base rate is the same as previously, but there are 10% false negatives and positives, ie the test is 90% accurate. Again 1 million people are tested (see Table 2 below).

                  Have the      Do not have      Totals
                  condition     the condition
Test Positive           900           99,900    100,800
Test Negative           100          899,100    899,200
Totals                1,000          999,000  1,000,000

Table 2. Base rate is 0.1%. Level of false positives = 10%; level of false negatives = 10%.

This time the total number testing positive is 100,800. But nearly one hundred thousand of them do not have the condition. The odds that any person who tested positive actually has the condition are 900/100,800, or a little under 1%. This time, although the test is 90% accurate, 99% of those who test positive have been diagnosed incorrectly.
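Table 2, or any variant of it, can be reproduced with a short script that builds the whole 2×2 table of counts. A sketch, with names of my own choosing:

```python
def screening_counts(base_rate, false_neg_rate, false_pos_rate,
                     population=1_000_000):
    """Whole-number counts for the four cells of a screening test's table."""
    have = round(population * base_rate)
    not_have = population - have
    true_pos = round(have * (1 - false_neg_rate))    # sick, correctly flagged
    false_neg = have - true_pos                      # sick, missed
    false_pos = round(not_have * false_pos_rate)     # healthy, wrongly flagged
    true_neg = not_have - false_pos                  # healthy, cleared
    return true_pos, false_pos, false_neg, true_neg

# Table 2: base rate 0.1%, 10% false negatives, 10% false positives
tp, fp, fn, tn = screening_counts(0.001, 0.10, 0.10)
print(tp, fp, tp + fp)           # 900 99900 100800
print(round(tp / (tp + fp), 4))  # 0.0089 -> a little under 1%
```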

In both these cases the incidence of the condition in the original population was 0.1%. In the first example the screened population testing positive had an incidence two orders of magnitude higher than the original population, but this was unrealistic. In the second example those testing positive in the screened population had an incidence one order of magnitude higher than the general population.

This is what a good mass screening test can do – raise the incidence of the condition by one order of magnitude above the general population. However, any person who tests positive is unlikely to have the condition, and all who test positive must now be further investigated with a better test.

So screening programmes should not be aimed at the general population, unless the condition has a very high incidence. Targeted screening does not often improve the accuracy of the tests, but it aims at a sub-population with a higher incidence of the condition. For example, screening for breast cancer (a relatively common condition anyway) is aimed at a particular age group.

Humans find it very difficult to assess screening, and doctors (unless specifically trained) are little better than the rest of the population. It has been shown fairly convincingly that data are most readily understood when presented in tables as above. For example, the data in Table 3 were presented to doctors in the UK. Suppose they had a patient who screened positive; what was the probability that that person actually had the condition?

When presented with the raw data, 95% of them gave an answer that was an order of magnitude too large. When shown the table (modified here for consistency with previous examples) about half correctly assessed the probability of a positive test indicating the presence of the disease.

                  Have the      Do not have      Totals
                  condition     the condition
Test Positive         8,000           99,000    107,000
Test Negative         2,000          891,000    893,000
Totals               10,000          990,000  1,000,000

Table 3. Base rate is 1%. False negative rate = 20%; false positive rate = 10%.

This time the total number who test positive is 107,000. But nearly one hundred thousand of them do not have the condition. The odds that any person who tested positive actually has the condition are 8,000/107,000, or about 7.5%. Now remember that nearly half the UK doctors, even when shown this table, could not deduce the correct result. If your doctor suggests you should have a screening test, how good is this advice?
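The same 7.5% figure follows directly from Bayes’ theorem, without building the table at all. A sketch, using the rates given for Table 3:

```python
prevalence = 0.01       # base rate: 1% have the condition
sensitivity = 0.80      # 20% false negative rate
false_pos_rate = 0.10   # 10% of healthy people test positive anyway

# P(condition | positive test) by Bayes' theorem
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_pos_rate * (1 - prevalence))
print(round(ppv, 3))  # 0.075
```

The table and the formula are the same computation; the table simply makes the hundred thousand false positives visible as a count rather than hiding them inside a denominator.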

Patients are supposed to be supplied with information so that they can make an informed decision. Anybody who presents for a screening test in NZ may find it impossible to do this. My wife attempted to get the data on breast screening from our local group. She had to explain the meaning of “false negative”, “false positive” and “base rate”. The last is a particularly slippery concept. From UK figures the chance of a 40-year-old woman developing breast cancer by the age of 60 is nearly 4% (this is the commonest form of cancer in women). However, when a sample of women in the 40-60 age group is screened, the proportion who should test positive is only about 0.2%. Only when they are screened each year will the total of correct positives approach 4%.

The number of false positives (again using overseas figures) is about 20 times the number of correct positives, so a woman in a screening programme for 20 years will have a very good chance of at least one positive result, but a fairly low probability of actually having breast cancer. I do not think NZ women are well prepared for this.
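The cumulative effect of repeated screening is easy to estimate if we assume the annual tests are independent. The per-screen false-positive probability below is an illustrative assumption of mine, not a figure from the article:

```python
# Chance of at least one (almost certainly false) positive over repeated
# annual screens, assuming independent tests. The 5% per-screen
# false-positive probability is an illustrative assumption.
per_screen_false_pos = 0.05
years = 20
p_at_least_one = 1 - (1 - per_screen_false_pos) ** years
print(round(p_at_least_one, 2))  # 0.64
```

Even a modest per-screen error rate compounds into a better-than-even chance of a false alarm over a 20-year programme, which is the point being made above.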

The Nelson group eventually claimed that the statistics my wife wanted on NZ breast cancer screening did not seem to be available. But, they added, “we (the local lab) have never had a false negative.” From the recent experience of a close friend, who developed a malignancy a few months after a screening test, we know this to be untrue. What they meant was that they had never seen a target and failed to diagnose it correctly as a possible malignancy requiring biopsy. This may have been true but it is no way to collect statistics.

Screening for breast cancer is generally aimed at the older age group. In the US a frequently quoted figure is that a woman now has a one in eight chance of developing breast cancer, which is higher than in the past. This figure is correct but it is a lifetime incidence risk; the reason it has risen is that on average women are living longer. The (breast cancer) mortality risk for women in the US is one in 28. A large number who develop the condition do so very late in life and die of some other condition before the breast cancer proves fatal.

Common Condition

Breast cancer is a relatively common condition and would appear well suited to a screening programme. The evaluation of early programmes seemed to show they offered considerable benefit in reducing the risk of death. However, later programmes showed less benefit; in fact, as techniques improved, screening apparently became less effective. This caused some alarm, and a study published in 1999 by the Nordic Cochrane Centre in Copenhagen looked at programmes worldwide and attempted to better match screened populations with control groups. The authors claimed that women in screening programmes had no better chance of survival than unscreened populations. The reaction of those running screening programmes (including those in NZ) was to ignore this finding and advise their clients to do the same.

If there are doubts as to the efficacy of screening for breast cancer, there must be greater doubts about screening for other cancers in women, since other cancers are rarer. Any other screening programme should be very closely targeted. Unfortunately the risk factors for a disease may make targeting difficult. In New Zealand we have seen cases where people outside the target group have asked to be admitted to the screening programme so they too “can enjoy the benefits”. Better education is needed.

Late-onset diabetes is more common among Polynesians than among New Zealanders in general, and Polynesians have very sensibly accepted that this is true. Testing Polynesians over a certain age for diabetes makes sense, particularly as a test is quick, cheap and easy to apply. Testing only those over a certain body mass would be even more sensible but may get into problems of political correctness.

Cervical cancer is quite rare so it is a poor candidate for a mass screening programme aimed at a large percentage of the female population. The initial screening is fast and cheap. If the targeted group has an incidence that is one order of magnitude higher than the general population, then the targeting is as good as most tests. Screening the whole female population for cervical cancer is a very dubious use of resources.

My wife and I were the only non-locals travelling on a bus in Fiji when we heard a radio interview urging “all women” to have cervical screening done regularly. The remarkably detailed description of the test caused incredible embarrassment to the Fijian and Indian passengers; we had the greatest difficulty in concealing our amusement at the reaction. The process was subsidised by an overseas charity. In Fiji, where personal hygiene standards are very high, and (outside Suva) promiscuity rates pretty low, and where most people pay for nearly all health procedures, this seemed an incredibly poor use of international aid.

Assessment Impossible

Screening for cervical cancer has been in place in NZ for some time. Unfortunately we cannot assess the efficacy of the programme because proper records are not available. An attempt at an assessment was defeated by a provision of the Privacy Act. The recent case of a Gisborne lab was really a complaint that there were too many false negatives coming from a particular source. However this was complicated by a general assumption among the public and media that it is possible to eliminate false negatives. It should be realised that reducing false negatives can only be achieved by increasing the percentage of false positives. As can be seen from the data above, it is false positives that bedevil screening programmes.
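
The trade-off described above — fewer false negatives only at the price of more false positives — falls directly out of any test that classifies by a threshold. A minimal sketch with hypothetical score distributions (the overlap between the two populations, not the particular numbers, is the point):

```python
from statistics import NormalDist

# Hypothetical test scores: healthy and diseased populations overlap.
healthy = NormalDist(mu=0.0, sigma=1.0)
diseased = NormalDist(mu=2.0, sigma=1.0)

for threshold in (1.5, 1.0, 0.5):   # lowering the cut-off for calling "positive"
    false_negatives = diseased.cdf(threshold)     # sick people scored below the line
    false_positives = 1 - healthy.cdf(threshold)  # healthy people scored above it
    print(f"threshold {threshold}: "
          f"FN {false_negatives:.1%}, FP {false_positives:.1%}")
# As the threshold drops, false negatives fall but false positives rise —
# there is no setting that eliminates both.
```

Demanding that a lab "never miss a case" is therefore a demand to flood the system with false alarms; the Gisborne complaint implicitly assumed a free lunch that no test can provide.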

Efforts to sue labs for false negatives are likely to doom any screening programme. To some extent this has happened in the US, with many labs refusing to conduct breast X-ray examinations, as the legal risks from the inevitable false negatives are horrendous.

Large sums are being spent in NZ on screening programmes; taxation provides the funds. Those running the programmes are convinced of their benefits, but it is legitimate to ask questions. Is this spending justified?

Some Post-Scripts:

January 15 2000 New Scientist P3: Ole Olsen & Peter Gøtzsche of the Nordic Cochrane Centre in Copenhagen published the original meta-analysis of seven clinical trials in 2000. The resulting storm of protest, particularly from cancer charities, caused them to take another look. They have now reached the same conclusion: mammograms do not reduce breast cancer deaths and are unwarranted.

October 2001: In recent TV interviews, some people concerned with breast cancer screening in NZ were asked to comment on this meta-analysis. Once again the NZ commentators stated firmly that they were certain screening programmes in NZ “had saved lives”, but offered no evidence to support their view.

March 23 2002 New Scientist P6: The International Agency for Research on Cancer (IARC), funded by the WHO, claims to have reviewed all the available evidence. It concludes that screening women below the age of 50 is not worthwhile. However, screening women aged 50-69 every two years reduces the risk of dying of breast cancer by 35%.

According to New Scientist, the figures from Britain are that of 1000 women aged 50, 20 will get breast cancer by the age of 60 (2%); of these, six will die. Screening every two years would cut the death rate to four. [It is obvious that these are calculations, not the result of a controlled study!]
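
It is worth separating the two ways of stating that benefit. Six deaths cut to four per 1000 is a relative reduction of about a third (close to the quoted 35%), but an absolute reduction of only 2 per 1000 screened:

```python
deaths_unscreened = 6   # per 1000 women aged 50, by age 60 (New Scientist figures)
deaths_screened = 4     # with two-yearly screening

relative_reduction = (deaths_unscreened - deaths_screened) / deaths_unscreened
absolute_reduction = (deaths_unscreened - deaths_screened) / 1000
number_needed_to_screen = 1 / absolute_reduction

print(f"relative risk reduction: {relative_reduction:.0%}")   # 33%
print(f"absolute risk reduction: {absolute_reduction:.1%}")   # 0.2%
print(f"women screened per death averted: {number_needed_to_screen:.0f}")  # 500
```

Programmes are almost always promoted with the relative figure; the absolute figure is what an individual woman actually needs for informed consent.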

The IARC states that organised programmes of manual breast examination do not bring survival benefits (they call for more studies on these). If NZ has similar rates then screening programmes aimed at 50-60 year old women should save approximately 50 lives per annum.

Medical Evidence

In the second of a two-part series, Jim Ring looks at what evidence means to different people

Scientific evidence is often difficult to interpret, in medicine in particular. ‘An Unfortunate Experiment’ was the title given to an account of the treatment of some women after screening for cervical cancer. In this case science was considered by the legal profession and apparently found wanting. The doctor involved was castigated and publicly humiliated for experimenting on humans. But no real experiments were ever done; it appeared he did not understand scientific methodology. Neither did the journalists and legal people involved. The point is that no proper controls were used, so it was very poor science.

Were the women disadvantaged? It is difficult to tell, but many were certainly outraged. It generally escaped notice that the surgeon was responding to public pressure for less radical surgery and that a group of patients involved seem to have had on average a slightly better outcome than the norm.

One of the most unfortunate ideas that came out of the long legal case was the emphasis on privacy for the individuals involved which implied their records should not be available for medical study. There is a difference between privacy and anonymity. It is very important to explain to those involved in medical procedures that for medicine to progress it is essential to collect data. Women appeared on TV complaining bitterly that they had been used in an experiment without their consent. But all good medicine is experimental.

We are not much closer to determining whether mass screening for cervical cancer does improve the chances for the screened population, and now we have another scandal in New Zealand. Public expectation of screening programmes is far in excess of what they can deliver. Efforts to sue Dr Bottrill, and compensation claims from ACC, seem to imply that patients think a false negative reading is necessarily medical error. Women have appeared on TV claiming their lives have been devastated because they had a false negative. Surely this is wrong; they are rightly upset, but because further tests showed a medical problem, not because of the earlier reading itself. Of course some who died might have been saved if an early intervention had resulted from a correct positive reading; however, this does not seem to be the main thrust of their complaint.

False Negatives vs False Positives

It is possible to reduce the number of false negative readings at the expense of an increase in the number of false positives. This may seem desirable but there is a cost. In Britain large numbers of women in a screening project reacted very badly to finding they might have a ‘pre-malignant’ condition. This included some members of the medical profession. There is a clear indication that patients were not well informed before screening.

Patients involved in any medical procedure are supposedly asked for their ‘informed consent’. It now seems obvious that ‘informed consent’ is largely lacking in mass screening for both cervical and breast cancer. Several of those involved in the public hearing were surprised to find that screening is less than 100 per cent accurate. All mass screening procedures are likely to have a high error rate, as they are designed to be rapid, cheap and simple, leading to more precise testing if there is a positive result. Is a large and expensive inquiry, using legal methods, a suitable way of investigating scientific questions?

Cervical cancer, unlike breast cancer, is strongly correlated with environmental factors. The former is very rare in the general population with a relatively high incidence in a certain sector. However it is politically incorrect to target the high-risk population for screening because the risk correlation is with such factors as poverty, poor hygiene and sexual promiscuity.

A recent case of a gynaecologist accused of misconduct raises some interesting issues. The unfortunate patient would seem to be outside the high-risk group for cervical cancer, thus an assumption may have been made that the correct diagnosis was very unlikely. But no physical vaginal examination was made. Feminist literature once strongly criticised the medical profession for over-use of this procedure, which one writer described as ‘legalised rape’. It would be interesting to know the rate at which this procedure is used today compared with, say, 30 years ago. Is the medical profession responding to crusades in a way that disadvantages patients?

Objections to trials

Medical ethicists – now a profession – have objected to various drug trials, saying it is unethical to provide some patients with a placebo that will not improve their condition. This is in effect a claim to certain knowledge – that the drug being trialled is the ideal treatment. Patients receiving a placebo are not disadvantaged when the new drug may do more harm than good. We can sympathise with terminally ill patients, who know that they will die in the absence of treatment and for whom anything seems a better bet than a placebo. But it is essential that drugs be properly tested before being used routinely.

Experiments have even been done in surgery. In 1959, patients were randomly assigned to treatment or control, but all were prepared for surgery and had the chest cavity opened. Only then did the surgeon open an envelope and follow the instruction: either to perform the procedure or immediately close the chest. Although some ethicists have objected (one stated that such surgery would never take place in the UK), a double-blind study of brain surgery was recently done in the US. Not only did it pass an ethics committee, but patients welcomed the chance to take part even though it involved drilling the skulls of both real and placebo patients. In this case there was considerable improvement in those under 60 who had the real operation.

This indicates people may be willing to give consent to risky experiments providing they are given good information.

Most evidence in medicine comes not from experiments but from epidemiology. This requires the collection of huge amounts of data and sometimes produces conflicting results. Two populations, which differ only in the factor under investigation, should be matched and this is difficult to achieve. Recently, in a world-wide study, doubt has been cast on the efficacy of breast-cancer screening. New analysis purports to show that when populations are matched correctly, the screened population has no better chance of survival than an unscreened population.

Demands for safety

Some demand that all medical procedures should be ‘safe’, though curiously this is not required of alternative medicine. Suppose a new drug has fatal consequences for one patient in 100,000. It is quite likely that this will not be discovered during testing. Should such a tiny risk preclude the use of a drug that gives significant benefits to the vast majority of patients? New medicines are introduced when they show a clear advantage over a placebo. When very large numbers are involved in a study, it is possible for a drug to show a significant advantage yet not be worth introducing. Significance is a technical term, and it is possible to find that an advantage of only 0.1% is ‘significant’, though it may not be worth taking such a product.
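
The point is easy to demonstrate: with enough patients, a 0.1% advantage passes a standard significance test. A sketch with hypothetical trial sizes, using a simple two-proportion z-test:

```python
from math import sqrt

# Hypothetical trial: drug helps 30.1% of patients, placebo 30.0%.
n = 5_000_000            # patients per arm — enormous, but that is the point
p_drug, p_placebo = 0.301, 0.300

pooled = (p_drug + p_placebo) / 2
z = (p_drug - p_placebo) / sqrt(pooled * (1 - pooled) * 2 / n)
print(f"z = {z:.1f}")  # well above the 1.96 cut-off, so 'statistically significant'
# Yet the advantage is one extra recovery per 1000 patients treated.
```

Statistical significance measures how unlikely the difference is under pure chance; it says nothing about whether the difference is big enough to matter clinically.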

It was this confusion that bedevilled early experiments on ESP. Rhine in America and Soal in England recorded the success of subjects guessing unseen cards. The experimenters wrongly assumed controls were unnecessary; instead they compared guesses with a theoretical chance result. A few subjects scored correct guesses at slightly more than chance, and because huge numbers of guesses were involved, statistical tests showed these results had ‘significance’. That is, there was a very high probability that the guesses were not simply ‘lucky’.
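
The ESP case is the same statistical trap. Guessing one of five Zener cards has a chance rate of 20%; with a hypothetical run of 100,000 guesses (the particular numbers here are illustrative, not Rhine's data), a hit rate of just 20.4% is already ‘significant’ by the usual test:

```python
from math import sqrt

guesses = 100_000   # hypothetical run; Rhine's subjects made huge numbers of calls
chance = 0.20       # one card in five
hit_rate = 0.204    # just 0.4 percentage points above chance

z = (hit_rate - chance) / sqrt(chance * (1 - chance) / guesses)
print(f"z = {z:.1f}")  # about 3.2 — far beyond 'luck', yet a tiny effect
# Ruling out chance does not identify the cause: ESP, recording errors,
# fatigue and cheating all fit the same excess equally well.
```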

Enthusiasts then made the enormous leap to say that because the guesses were not due to chance they must be due to a previously undiscovered human faculty, extra-sensory perception or ESP. Disinterested observers, not just skeptics, should have concluded that other explanations, such as poor experimental design, badly recorded results, fatigue, or just plain cheating were more likely. A great deal of time, money and effort was spent pursuing this will-o’-the-wisp.