September 18, 2015

Even heroes get it wrong sometimes

I recently read David Sackett's 2004 paper from Evidence-Based Medicine, “Superiority trials, non-inferiority trials, and prisoners of the 2-sided null hypothesis” (Evid Based Med 2004;9:38-39; doi:10.1136/ebm.9.2.38). [Links don't seem to be working; will edit later if I can.]

In it I found this:

“As it happened, our 1-sided analysis revealed that the probability that our nurse practitioners’ patients were worse off (by ⩾5%) than our general practitioners’ patients was as small as 0.008.”

I’m pretty sure that 0.008 probability isn’t from a Bayesian analysis and is a misinterpretation of a p-value. It isn’t the probability that the null hypothesis is true! It really isn’t! Obviously that got past the reviewers of this manuscript without comment.

Edit: I've got the paper now. It's a result from a one-tailed test for non-inferiority. The null hypothesis is that the intervention group was worse by 5% or more on their measure of function, p=0.008 so they reject the hypothesis of inferiority. But, as usual, that's the probability of getting the data (or more extreme data) if the null hypothesis is true - not the probability of the null hypothesis.
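To make the distinction concrete, here is a minimal sketch of a one-sided non-inferiority test of the kind described. The observed difference (+2%) and standard error (2.9%) are hypothetical numbers I've chosen so that the p-value comes out close to Sackett's 0.008; they are not taken from the paper.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def noninferiority_p(obs_diff, se, margin=-0.05):
    """One-sided test of H0: true difference <= margin (i.e. the
    intervention group is worse by 5% or more).  Returns the
    probability of a difference at least this favourable IF H0 is
    true -- it says nothing about the probability that H0 is true."""
    z = (obs_diff - margin) / se
    return 1 - phi(z)

# Hypothetical: observed difference +2%, standard error 2.9%
print(f"p = {noninferiority_p(0.02, 0.029):.3f}")  # p = 0.008
```

The 0.008 is a statement about the data under an assumed hypothesis; turning it into "the probability the patients were worse off" needs a prior and a Bayesian calculation.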

May 02, 2015

New test can predict cancer. Oh no it can't!

A story in several UK papers including the Telegraph suggests that a test measuring telomere length can predict who will develop cancer "up to 13 years" before it appears. Some of the re-postings have (seemingly by a process of Chinese whispers) elaborated this into "A test that can predict with 100 per cent accuracy whether someone will develop cancer up to 13 years in the future has been devised by scientists" (New Zealand Herald) - which sounds pretty unlikely.

What they are talking about is this study, which analysed telomere lengths in a cohort of people, some of whom developed cancer.

It's hard to know where to start with this. There are two levels of nonsense going on here: the media hype, which has very little to do with the results of the study, and the study itself, which seems to come to conclusions that are way beyond what the data suggest, through a combination of over-reliance on significance testing, poor methodology and wishful thinking. I'll leave the media hype to one side, as it's well established that reporting of studies often bears little relation to what a study actually did; in this case, there was no "test" and no "100% accuracy". But what about what the researchers really found out, or thought they did?

The paper makes two major claims:

1. "Age-related BTL attrition was faster in cancer cases pre-diagnosis than in cancer-free participants" (that's verbatim from their abstract);

2. "all participants had similar age-adjusted BTL 8–14 years pre-diagnosis, followed by decelerated attrition in cancer cases resulting in longer BTL three and four years pre-diagnosis" (also verbatim from their abstract, edited to remove p-values).

They studied a cohort of 579 initially cancer-free US veterans who were followed up annually between 1999 and 2012, with blood being taken 1-4 times from each participant. About half had only one or two blood samples, so there isn't much in the way of within-patient comparisons of telomere length over time. Telomere length was measured from these blood samples (this was some kind of average, but I'll assume intra-individual variation isn't important).

Figure 1 illustrates the first result:

[Figure 1: blood telomere length (BTL) against age, with fitted regression lines for the cancer and cancer-free groups]

The regression lines do look as though there is a steeper slope through the cancer group, and the interaction is "significant" (p=0.032 unadjusted and p=0.017 adjusted) - but what is ignored in the interpretation is the enormous scatter around both of the regression lines. Without the lines on the graph you wouldn't be able to tell whether there was any difference in the slopes. Additionally, as relatively few participants had multiple readings, it isn't possible to compare within-patient measures of change in telomere length, which might be less noisy. Instead we have an analysis of average telomere length at each age, with a changing set of patients. So, on this evidence, it is hard to imagine how this could ever be a useful test for distinguishing people who will develop cancer from those who will not. The claim of a difference seems to come entirely from the "statistical significance" of the interaction.
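The general point - that a clearly "significant" difference in slopes can coexist with scatter so large that individuals are indistinguishable - is easy to reproduce with simulated data. Everything below is invented for illustration (sample sizes, slopes, noise level); none of it comes from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300
age0 = rng.uniform(50, 90, n)   # simulated cancer-free group
age1 = rng.uniform(50, 90, n)   # simulated cancer group
noise = 0.5                     # large individual scatter

# Lines cross at age 70; the "cancer" group declines twice as fast
btl0 = 1.0 - 0.02 * (age0 - 70) + rng.normal(0, noise, n)
btl1 = 1.0 - 0.04 * (age1 - 70) + rng.normal(0, noise, n)

fit0 = stats.linregress(age0, btl0)
fit1 = stats.linregress(age1, btl1)

# z-test for the difference in slopes (the "interaction")
z = (fit0.slope - fit1.slope) / np.hypot(fit0.stderr, fit1.stderr)
p_interaction = 2 * stats.norm.sf(abs(z))

print(f"interaction p = {p_interaction:.2g}")   # comfortably "significant"
print(f"group means differ by {abs(btl0.mean() - btl1.mean()):.2f}, "
      f"vs within-group SD of about {noise}")   # yet the groups overlap almost completely
```

The interaction p-value is tiny, but any single person's telomere length tells you almost nothing about which group they belong to - which is the problem with reading a "significant" slope difference as a prospective test.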

The second claim, that in people who develop cancer BTL stops declining and reaches a plateau 3-4 years pre-diagnosis, derives from their Figure 2:

[Figure 2: age-adjusted BTL by year pre-diagnosis (cancer group) or pre-censoring (cancer-free group)]

Again, the claim derives from the difference between the two lines being "statistically significant" at 3-5 years pre-diagnosis, and not elsewhere. But looking at the red line, it really doesn't look like a steady decline, followed by a plateau in the last few years. If anything, the telomere length is high in the last few years, and the "significance" is caused by particularly low values in the cancer-free group in those years. I'm not sure that this plot is showing what they think it shows; the x-axis for the cancer group is years pre-diagnosis, but for the non-cancer group it is years pre-censoring, so it seems likely that the non-cancer group will be older at each point on the x axis. Diagnoses of cancer could happen at any time, whereas most censoring is likely to happen at or near the end of the study. If BTL declines with age, that could potentially produce this sort of effect. So I'm pretty unconvinced. The claim seems to result from looking primarily at "statistical significance" of comparisons at each time point, which seems to have trumped any sense-checking.

April 09, 2015

Journal statistical instructions – is that it??


I submitted a manuscript to the journal Resuscitation recently. It's a pretty well-regarded medical journal, with an impact factor (for 2013) of 3.96, so a publication there would be a good solid paper. While formatting the manuscript I had a look at the statistical section of the Instructions for Authors. This is what I found:

"Statistical Methods
* Use nonparametric methods to compare groups when the distribution of the dependent variable is not normal.
* Use measures of uncertainty (e.g. confidence intervals) consistently.
* Report two-sided P values except when one-sided tests are required by study design (e.g., non-inferiority trials). Report P values larger than 0.01 to two decimal places, those between 0.01 and 0.001 to three decimal places; report P values smaller than 0.001 as P<0.001."

That's it! 69 words (including the title), more than half of which (43) are about the reporting of p-values. I really don't think many people would find this very useful (for example, what does "use measures of uncertainty consistently" mean?). Moreover, it seems to start from the premise that statistical analysis IS null hypothesis significance testing, and there are lots of reasons to take issue with that point of view. And finally (for now), it is questionable whether two-sided tests are usually the right thing to do, as we are usually interested in whether a treatment is better than another, not merely in whether it is different (better or worse); I won't go further into that now, but suffice to say it is a live issue.
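To be fair, the p-value reporting rule is at least precise enough to code up. A small sketch of my reading of it:

```python
def format_p(p: float) -> str:
    """Format a p-value following the journal's stated rules:
    two decimal places above 0.01, three decimal places between
    0.001 and 0.01, and 'P<0.001' below that."""
    if p < 0.001:
        return "P<0.001"
    if p < 0.01:
        return f"P={p:.3f}"
    return f"P={p:.2f}"

print(format_p(0.0984))   # P=0.10
print(format_p(0.008))    # P=0.008
print(format_p(0.0001))   # P<0.001
```

Which rather proves the point: the one part of the statistical guidance that is fully specified is the cosmetic part.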

January 25, 2015

"Classical" statistics

There is a tendency to describe traditional frequentist methods as "classical" statistics, often making a contrast with modern Bayesian methods, which are (or at least appear in their modern guise) much newer and a break with tradition. That's kind of fair enough, but I don't like the term classical being applied to traditional statistics, for two main reasons.

1. "Classical" is already in use for describing other types of thing (music, literature, architecture) and it has connotations of quality that aren't really applicable to statistics. These are classical:



2. It's inaccurate. Frequentist statistics dates from the early-to-mid 20th century. Bayesian statistics goes back much further, to Laplace (early 19th century) and Bayes (18th century) - so if anything should be called "classical", it is Bayesian methods.

December 23, 2014

The Cochrane diamond

You know, the one at the bottom of your meta-analysis that summarises the pooled result? This one:


Well, I don't like it. Why not? I think it's misleading, because the diamond shape (to me at least) suggests it is representing a probability distribution. It puts you in mind of something like this:

And that seems to make sense - the thick bit of the diamond, where your point estimate is, ought to be the area where the (unknown) true treatment effect would be most likely to be, and the thin points of the diamond are like the tails of the distribution, where the probability of the true value is getting smaller and smaller. That would be absolutely right, if the analysis was giving you a Bayesian credible interval - but it isn't.

It's a frequentist confidence interval, and as lots of people have been pointing out recently, frequentist confidence intervals do not represent probability distributions. They are just an interval constructed by an algorithm so that, if the experiment were repeated many times, 95% of the intervals would include the true value. They are NOT a distribution of the probability of any value of the treatment effect, conditional on the data, although that is the way they are almost always interpreted. They don't say anything about the probability of the location of the true value, or even whether it is inside or outside any particular interval.
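The long-run sense in which the 95% is true is easy to see by simulation. This sketch uses made-up normal data: it repeats an "experiment" many times, builds a 95% interval for the mean each time, and counts how often the interval contains the true value.

```python
import random
import statistics
from math import sqrt

random.seed(1)
true_mean, n, reps = 0.0, 30, 2000
covered = 0
for _ in range(reps):
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / sqrt(n)
    # normal approximation to the usual t interval
    covered += m - 1.96 * se <= true_mean <= m + 1.96 * se

print(f"coverage over {reps} repeats: {covered / reps:.1%}")  # close to 95%
```

That coverage is a property of the procedure across repetitions. It is not a statement that any one interval, once computed, has a 95% probability of containing the true value - which is exactly the misreading the diamond invites.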

I think a solid bar would be a more reasonable way to represent the 95% confidence interval.

For more info:

Hoekstra R, Morey RD, Rouder JN, Wagenmakers EJ. Robust misinterpretation of confidence intervals. Psychon Bull Rev 2014; doi:10.3758/s13423-013-0572-3

August 28, 2014

Treatment success in HTA trials: thoughts on Dent & Raftery 2011

Dent & Raftery analysed how many trials funded by the HTA programme showed treatment benefit, harm, or were inconclusive (Trials 2011;12:109). They found that 24% of trials showed a "significant" result (19% in favour of the new intervention and 5% in favour of control). How many of these results are likely to be correct?

Power and the proportion of interventions that are really effective determine the number of significant results that we see. If all interventions are effective, and the power of all the trials is 90%, then 90% of trials will give a "significant" result. If no interventions are effective then we expect 5% of results to be significant (testing at the conventional 5% level); these are all false positives. See the graph below.

Proportion of truly effective interventions in a field (i.e. the “population” of interventions that could be tested) against the proportion of “significant” effects that will be found, for different values of power. Unless power is high, the proportion of truly effective interventions exceeds the proportion found to be significant.

So, reading across from 24% significant results, the population of interventions could contain anywhere from about 25% (if power is 90%) to 100% (if power is less than about 25%) truly effective interventions. Power probably isn't 90%; most trials are designed to have 80-90% power, usually for an optimistic effect size, so given that many effect sizes are smaller than anticipated and many trials recruit fewer participants than expected, power might typically be 60-70%. This suggests that around 30% of the interventions evaluated by HTA trials are truly effective.
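The back-calculation can be sketched with a simple model: the significant fraction is a mix of true positives (effective interventions detected with the given power) and false positives (5% of the ineffective ones). The post's figures were presumably read off the graph, so the numbers here are in the same ballpark rather than identical.

```python
def true_effective_fraction(sig_rate, power, alpha=0.05):
    """Invert sig_rate = p*power + (1 - p)*alpha for p, the fraction
    of interventions in the population that are truly effective."""
    return (sig_rate - alpha) / (power - alpha)

# Reading across from the observed 24% significant results:
for power in (0.9, 0.7, 0.5):
    p = true_effective_fraction(0.24, power)
    print(f"power {power:.0%}: ~{p:.0%} truly effective")
```

At 60-70% power this gives roughly 30% truly effective interventions, matching the argument above.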

How often do significance tests get it right? If power is 70%, 86.86% of significant differences correspond to truly effective interventions, meaning that 13.14% are false positives. As power decreases, the proportion of significant differences that are really effective decreases - the positive predictive value of a significant result gets worse.
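The positive predictive value follows from the same mixture model. With roughly 30% of interventions truly effective and 70% power this sketch gives about 86% - close to the figure quoted above, which presumably used a slightly different effective fraction.

```python
def ppv(p_effective, power, alpha=0.05):
    """Positive predictive value of a 'significant' result: the
    probability that a significant difference reflects a truly
    effective intervention."""
    true_pos = p_effective * power
    false_pos = (1 - p_effective) * alpha
    return true_pos / (true_pos + false_pos)

print(f"PPV at 70% power: {ppv(0.30, 0.70):.1%}")
print(f"PPV at 50% power: {ppv(0.30, 0.50):.1%}")  # lower power drags PPV down
```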


Proportion of significant effects that are truly effective (PPV) for different values of power.

If a low proportion are really effective, a lot of significant effects will be false positives. Low power also makes this worse.

July 17, 2014

The EAGeR trial: Preconception low–dose aspirin and pregnancy outcomes

Lancet Volume 384, Issue 9937, 5–11 July 2014, Pages 29–36

Some extracts from the abstract:
Overall, 1228 women were recruited and randomly assigned between June 15, 2007, and July 15, 2011, 1078 of whom completed the trial and were included in the analysis.
309 (58%) women in the low-dose aspirin group had livebirths, compared with 286 (53%) in the placebo group (p=0·0984; absolute difference in livebirth rate 5·09% [95% CI −0·84 to 11·02]).
Preconception-initiated low-dose aspirin was not significantly associated with livebirth or pregnancy loss in women with one to two previous losses. .... Low-dose aspirin is not recommended for the prevention of pregnancy loss.
So the interpretation is of a so-called "negative" trial, i.e. one that did not show any evidence of effectiveness.
BUT... the original planned sample size was 1600, with 1254 included in analyses (the other 346 being the 20% allowance for loss to follow up), which was calculated to have 80% probability of a "significant" result if there was in reality a 10% increase in live births in the intervention group from 75% in the control group.
In fact the trial recruited 1228 and lost 12.2%, so only 1078 were included in the analyses (86% of the target). The placebo group incidence was different from expectation (53% compared with 75%), and the treatment effect was about half of the effect the sample size was calculated on (an absolute difference of 5% rather than 10%), though the two were more similar expressed as risk ratios than as risk differences (1.09 compared with 1.13). Nevertheless the treatment effect was quite a bit smaller than the effect the trial was set up to find.
So is concluding ineffectiveness here reasonable? A 5% improvement in live birth rate could well be important to parents, and it is not at all clear that the 10% difference originally specified represents a "minimum clinically important difference". So the trial could easily have missed a potentially important benefit. This isn't addressed anywhere in the paper. The conclusions seem to be based mainly on the "non-significant" result (p=0.09), without any consideration of what the trial could realistically have detected.
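A back-of-envelope power calculation makes the point. This sketch uses a normal approximation for a two-sided test of two proportions, assumes roughly 539 analysed per group (1078 split evenly, which is my assumption, not a figure from the paper), and ignores the negligible contribution of the opposite tail.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_proportions(p1, p2, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided z-test (alpha = 0.05) for a
    difference between two proportions, normal approximation,
    ignoring the far tail."""
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return phi(abs(p2 - p1) / se - z_crit)

# 53% vs 58% livebirth rates, ~539 analysed per group
print(f"power for the observed 5% difference: "
      f"{power_two_proportions(0.53, 0.58, 539):.0%}")  # roughly 38%
```

So the trial as analysed had well under 50% power to detect the difference it actually observed - a "non-significant" result was always the most likely outcome even if the 5% benefit is real.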

July 02, 2014

And I agree with David Colquhoun!

David C on the madness of the REF

April 07, 2014

David Colquhoun agrees with me!

On the hazards of significance testing. Part 2: the false discovery rate, or how not to make a fool of yourself with P values

makes much the same points as I have made elsewhere in this blog, though he doesn't go as far and recommend Bayesian analyses. But I can't see how you can sensibly interpret p-values without a prior, and if you're going to go that far, a fully Bayesian analysis is the natural thing to do surely?

February 20, 2014

Australian homeopathy review – surely they are kidding?

The NHMRC in Australia's strategic plan identified "'examining alternative therapy claims' as a major health issue for consideration by the organisation, including the provision of research funding". Well, that seems OK; they include a wide range of treatments under "complementary and alternative therapies", from the relatively mainstream (meditation and relaxation therapies, osteopathy) to the completely bonkers (homeopathy, reflexology), so it is reasonable to investigate the effectiveness of some of these.

But hold on! Further down the page we find a "Homeopathy review", and NHMRC have convened a "Homeopathy Working Committee" to oversee this. The plan seems to be to conduct an overview of systematic reviews on the effectiveness of homeopathy, to produce an information paper and position statement. The Working Committee includes some eminent names, and one member who, as a teacher of homeopathy, has a clear conflict of interest. I suppose you can argue that it is important to have a content expert in a review team, but in this case, where someone cannot help but have a personal interest in one particular outcome, it doesn't seem right. Like asking a committed Christian to weigh up dispassionately the evidence for the existence of god(s); unlikely to work.

I am somewhat staggered that this review is going ahead as it can only come to one credible conclusion, and I am struggling to understand the NHMRC's motivation. Did the homeopathy lobby push for this as part of its effort to be seen as evidence based and mainstream? Or did the NHMRC think that this was the best way to put homeopathy to bed for good? If the latter, I doubt it will be successful, as there will always be odd "statistically significant" results from trials of homeopathy, caused by bias or chance, that will keep the possibility of effectiveness alive in the minds of the credulous.

I have contacted the Homeopathy Working Committee to encourage them to use Bayesian methods with an appropriate prior!

UPDATE 25 July 2014

The report has been published and you can read it here:

The conclusion is less than scintillating:

"There is a paucity of good-quality studies of sufficient size that examine the effectiveness of homeopathy as a treatment for any clinical condition in humans. The available evidence is not compelling and fails to demonstrate that homeopathy is an effective treatment for any of the reported clinical conditions in humans."

At least it concluded lack of effectiveness, but the comments on the lack of good quality studies might encourage people to keep doing homeopathy studies - which would in my view be completely misguided.
