July 17, 2014

The EAGeR trial: Preconception low–dose aspirin and pregnancy outcomes

Lancet Volume 384, Issue 9937, 5–11 July 2014, Pages 29–36

Some extracts from the abstract:
Overall, 1228 women were recruited and randomly assigned between June 15, 2007, and July 15, 2011, 1078 of whom completed the trial and were included in the analysis.
309 (58%) women in the low-dose aspirin group had livebirths, compared with 286 (53%) in the placebo group (p=0·0984; absolute difference in livebirth rate 5·09% [95% CI −0·84 to 11·02]).
Preconception-initiated low-dose aspirin was not significantly associated with livebirth or pregnancy loss in women with one to two previous losses. .... Low-dose aspirin is not recommended for the prevention of pregnancy loss.
So - the interpretation is of a so-called "negative" trial, i.e. one that did not show any evidence of effectiveness.
BUT... the original planned sample size was 1600, with 1254 included in analyses (the other 346 being the allowance of around 20% for loss to follow-up), and this was calculated to give an 80% probability of a "significant" result if there was in reality a 10% increase in live births in the intervention group, from 75% in the control group.
In fact the trial recruited 1228 and lost 12.2%, so only 1078 were included in the analyses (86% of the target). The placebo group incidence differed from expectation (53% compared with 75%), and the treatment effect was about half the size of the one the sample size was calculated on (an absolute difference of 5% rather than 10%), though the two were more similar when expressed as risk ratios rather than risk differences (1.09 compared with 1.13). Nevertheless, the treatment effect was quite a bit smaller than the effect the trial was set up to find.
So is concluding ineffectiveness here reasonable? A 5% improvement in live birth rate could well be important to parents, and it is not at all clear that the 10% difference originally specified represents a "minimum clinically important difference". So the trial could easily have missed a potentially important benefit. This isn't addressed anywhere in the paper. The conclusions seem to be based mainly on the "non-significant" result (p=0.09), without any consideration of what the trial could realistically have detected.
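
As a rough illustration of that last point, here is a minimal back-of-the-envelope sketch in Python (my own calculation, not anything reported in the paper). It approximates the power the analysed sample had to detect a 5% absolute difference in live birth rate, using a normal approximation to the two-proportion z-test and group sizes back-calculated from the reported counts and percentages (roughly 535 and 543, consistent with the 1078 analysed).

    # Rough power calculation (my own approximation, not from the paper):
    # what chance did ~539 women per group have of giving p<0.05 if the true
    # live birth rates really were 58% vs 53% (a 5% absolute difference)?
    from scipy.stats import norm

    def approx_power_two_proportions(p1, p2, n_per_group, alpha=0.05):
        """Approximate power of a two-sided two-proportion z-test."""
        p_bar = (p1 + p2) / 2
        se_null = (2 * p_bar * (1 - p_bar) / n_per_group) ** 0.5
        se_alt = (p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group) ** 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf((abs(p1 - p2) - z_crit * se_null) / se_alt)

    # Roughly 0.38 -- far below the usual 80%, so a 5% benefit could easily be "missed"
    print(approx_power_two_proportions(0.58, 0.53, 539))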

July 02, 2014

And I agree with David Colquhoun!

David C on the madness of the REF

http://www.dcscience.net/?p=6636


April 07, 2014

David Colquhoun agrees with me!

On the hazards of significance testing. Part 2: the false discovery rate, or how not to make a fool of yourself with P values

http://www.dcscience.net/?p=6518

This makes much the same points as I have made elsewhere in this blog, though he doesn't go as far as recommending Bayesian analyses. But I can't see how you can sensibly interpret p-values without a prior, and if you're going to go that far, surely a fully Bayesian analysis is the natural thing to do?


February 20, 2014

Australian homeopathy review – surely they are kidding?

In its strategic plan, the NHMRC in Australia identified "‘examining alternative therapy claims’ as a major health issue for consideration by the organisation, including the provision of research funding" (http://www.nhmrc.gov.au/your-health/complementary-and-alternative-medicines). Well, that seems OK; they include a wide range of treatments under "complementary and alternative therapies", from the relatively mainstream (meditation and relaxation therapies, osteopathy) to the completely bonkers (homeopathy, reflexology), so it is reasonable to investigate the effectiveness of some of these.

But hold on! Further down the page we find a "Homeopathy review", and NHMRC have convened a "Homeopathy Working Committee" to oversee this. The plan seems to be to conduct an overview of systematic reviews on the effectiveness of homeopathy, to produce an information paper and position statement. The Working Committee includes some eminent names, and one member who, as a teacher of homeopathy, has a clear conflict of interest. I suppose you can argue that it is important to have a content expert in a review team, but in this case, where someone cannot help but have a personal interest in one particular outcome, it doesn't seem right. Like asking a committed Christian to weigh up dispassionately the evidence for the existence of god(s); unlikely to work.

I am somewhat staggered that this review is going ahead as it can only come to one credible conclusion, and I am struggling to understand the NHMRC's motivation. Did the homeopathy lobby push for this as part of its effort to be seen as evidence based and mainstream? Or did the NHMRC think that this was the best way to put homeopathy to bed for good? If the latter, I doubt it will be successful, as there will always be odd "statistically significant" results from trials of homeopathy, caused by bias or chance, that will keep the possibility of effectiveness alive in the minds of the credulous.

I have contacted the Homeopathy Working Committee to encourage them to use Bayesian methods with an appropriate prior!

UPDATE 25 July 2014

The report has been published and you can read it here: https://www.nhmrc.gov.au/your-health/complementary-medicines/homeopathy-review.

The conclusion is less than scintillating:

"There is a paucity of good-quality studies of sufficient size that examine the effectiveness of homeopathy as a treatment for any clinical condition in humans. The available evidence is not compelling and fails to demonstrate that homeopathy is an effective treatment for any of the reported clinical conditions in humans."

At least it concluded lack of effectiveness, but the comments on the lack of good quality studies might encourage people to keep doing homeopathy studies - which would in my view be completely misguided.


January 24, 2014

The CAPTIVATE trial

...published a couple of years ago in the American Journal of Respiratory and Critical Care Medicine (Wunderink et al. Am J Respir Crit Care Med 2011; 183(11): 1561-1568). It's a trial of recombinant tissue factor pathway inhibitor (tifacogin) for patients with severe community acquired pneumonia, which randomised people to tifacogin 0.025 mg/kg/h, tifacogin 0.075 mg/kg/h, or placebo. The rationale for it was that tifacogin seemed to be beneficial in the subgroup with severe community acquired pneumonia in a previous trial of patients with sepsis (which rings alarm bells with me, but that's another issue). The trial was international, involving 188 centres, and randomised 2138 patients, so it was a major undertaking.

The interesting point about it was that they performed an interim analysis, as a result of which they stopped randomisation to the higher dose of the drug because of lack of efficacy (futility), but continued to randomise to the lower dose. This seems extraordinary; if the high dose isn't doing anything, it seems pretty unlikely that the low dose would. I could understand it if the high dose had been stopped because of toxicity or an increase in adverse outcomes, such as death, but that doesn't seem to have been the case.

Unsurprisingly, the final trial results showed no difference in mortality between tifacogin (18%) and placebo (17.9%). Has there ever been a case where a promising-looking subgroup result was shown in a subsequent trial to be correct?


December 03, 2013

"Significance testing" and prior probabilities


I recently came across a helpful account of an issue that has been bothering me: the interpretation of significance tests. It was in a slightly unexpected place – the GraphPad software online statistics guide:


http://www.graphpad.com/guides/prism/6/statistics/index.htm?stat_more_misunderstandings_of_p_va.htm


The issue is about how you interpret a “significant” p-value. Say you compare a drug to placebo to see if it cures people, and you get a “significant” effect (p < 0.05). Does that mean the drug works? Not necessarily. Apart from the obvious 5% of occasions on which you will get a “significant” effect when the drug does nothing, it also depends on the prior probability that a drug is effective. It’s exactly the same issue as with diagnostic tests, where the prevalence of a disease has a huge effect on the positive predictive value of a test. If a disease is very rare, even a test with extremely high sensitivity and specificity can be essentially useless, because almost all of the positives will be false positives.


So it is with trials. If your trial has 80% power and a 5% Type I error rate, and the prior probability of the drug being effective is 80%, then in 1000 replicates of the experiment you will get, on average:


Prior probability = 80%

                             Drug really works   Drug really doesn't work   Total
P<0.05, “significant”                      640                         10     650
P>0.05, “not significant”                  160                        190     350
Total                                      800                        200    1000


So on 640/650 (98.46%) of occasions where you get a “significant” result, the drug will really be effective. [It would also be really effective in nearly half of the experiments with a “non-significant” result (160/350).]


However, if there is only a 10% chance that the drug really works, things look a lot worse.


Prior probability = 10%

                             Drug really works   Drug really doesn't work   Total
P<0.05, “significant”                       80                         45     125
P>0.05, “not significant”                   20                        855     875
Total                                      100                        900    1000


Now the drug is only really effective in 64% of trials with a “significant” result. With 1% prior probability of the drug’s effectiveness, it really works in only 14% of trials with “significant” results.


So the prior probability of the treatment’s effectiveness is absolutely crucial in interpretation of the results of trials. But I don’t think I have ever seen this mentioned in the results or discussion of a paper. I'm really not sure how you would go about downgrading your confidence in a frequentist result based on the prior probability; there isn't a mechanism for doing this. But this is undoubtedly a major cause of misinterpretation of trial results. When you consider that most trials have pretty low power (maybe 50-60% at best) to detect realistic treatment effects, and that the majority of interventions that are tested probably don't work (maybe at best 20% are effective?), then the false positive rate is going to be substantial.
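
For anyone who wants to reproduce these figures, here is a minimal sketch in Python (just restating the arithmetic of the tables above, not any established software): the proportion of “significant” results that are true positives, given the prior probability that the treatment works, the power and the significance level.

    # Proportion of "significant" results that are true positives,
    # given the prior probability, power and alpha (same arithmetic as the tables above).
    def prob_effective_given_significant(prior, power=0.80, alpha=0.05):
        true_pos = prior * power           # really works and comes out "significant"
        false_pos = (1 - prior) * alpha    # doesn't work but comes out "significant" anyway
        return true_pos / (true_pos + false_pos)

    print(f"{prob_effective_given_significant(0.80):.1%}")  # ~98.5% (first table)
    print(f"{prob_effective_given_significant(0.10):.1%}")  # ~64.0% (second table)
    print(f"{prob_effective_given_significant(0.01):.1%}")  # ~13.9%
    # A more pessimistic but arguably realistic scenario: 50% power, 20% prior
    print(f"{prob_effective_given_significant(0.20, power=0.50):.1%}")  # ~71%, i.e. ~29% false positives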


This is another way in which Bayesian methods score over standard traditional analyses; they force us to consider the prior probabilities of hypotheses, and to include them explicitly in the analysis. The issue seems always to be swept under the carpet in traditional analyses, with potentially disastrous consequences. Actually, saying it is swept under the carpet is probably inaccurate - most people are completely unaware that this is even an issue.


September 26, 2013

Asking the wrong question

A study proposal came across my desk recently about evaluation of a new test for infection in a certain group of patients. The potential benefit was that the test uses a chemical marker of infection that is thought to increase rapidly early in the infection process, so it would potentially allow earlier diagnosis and treatment of the infection.

However, the analysis proposed was to look for differences in the levels of the marker between patients with confirmed infection and those without. This is asking the wrong question: the issue is not whether the levels of the marker differ between infected and non-infected patients. If this is being proposed as a test that will identify infected patients, presumably there is already a pretty good idea that levels of the marker differ. The important issue here is whether the marker is good at identifying those patients who have real infections, i.e. it is a diagnostic question of sensitivity, specificity and predictive values. If the test misses a lot of infected patients, or flags a lot of patients who turn out not to be infected, it isn’t going to be much use in clinical practice.
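
To make the distinction concrete, here is a minimal sketch in Python using entirely made-up numbers (nothing to do with the actual proposal): a marker whose mean level differs between infected and uninfected patients with an overwhelmingly "significant" p-value can still perform quite poorly as a diagnostic test.

    # Made-up example: marker levels in infected vs uninfected patients.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    infected = rng.normal(loc=1.0, scale=1.0, size=200)      # hypothetical marker levels
    uninfected = rng.normal(loc=0.0, scale=1.0, size=2000)   # infection is the rarer state

    # The "wrong question": do mean marker levels differ between the groups?
    res = stats.ttest_ind(infected, uninfected)
    print(f"t-test p-value: {res.pvalue:.1e}")    # overwhelmingly "significant"

    # The "right question": how well does the marker classify patients at a cut-off?
    threshold = 1.0
    tp = (infected >= threshold).sum()     # true positives
    fn = (infected < threshold).sum()      # false negatives
    fp = (uninfected >= threshold).sum()   # false positives
    tn = (uninfected < threshold).sum()    # true negatives
    print(f"sensitivity = {tp / (tp + fn):.2f}")   # ~0.5
    print(f"specificity = {tn / (tn + fp):.2f}")   # ~0.84
    print(f"PPV         = {tp / (tp + fp):.2f}")   # ~0.24: most positives are false alarms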

A similar situation arose in a systematic review we did a few years ago of risk factors for chronic disability after whiplash injury. In this, a number of studies had recorded risk factors of whiplash-injured patients (such as injury severity, pre-existing pain, and so on) and whether they developed long-term problems, then analysed whether the risk factors differed between the patients who had recovered and those who had ongoing problems. Again, this is not addressing the right question. What we want to know is how good risk factors that a clinician can assess early on are at predicting long-term whiplash-associated problems.


September 06, 2013

Missing data in systematic reviews –an unappreciated problem?

Most systematic reviews state in their results sections and abstracts how many studies they included. But you usually find that not all outcomes are reported by all studies; it's quite common for important outcomes to be reported by only a minority of studies. What is usually done in this situation is essentially nothing; the subset of studies that have data is used to calculate the estimated treatment effects and this is presented as the review's result.

For example, in the Cochrane review of "Interventions for preventing falls in older people living in the community" [1], there were 40 studies that evaluated multifactorial interventions (interventions that consist of several components, for example exercises for strength or balance, medication review, vision assessment, home hazard assessment and so on; patients are assessed to find out what risk factors for falling they have, and specific interventions for these are then provided). The review looked at the number of fallers as one outcome and also, more importantly, the number of participants sustaining fractures. The meta-analysis of the number of fallers included 34 studies, so only six did not provide data on this outcome. However, the meta-analysis of fractures included only 11 studies (27.5% of the studies included in the review), so the conclusion about fractures is based on an analysis in which most of the data are missing. Obviously, this outcome exists for all the studies that were conducted; the participants either had a fracture or didn't during the follow-up period, but we only know how many did and didn't for 11 trials. For the other 29, the information is missing.

The big problem here is the risk of introducing bias. When conducting trials and considering them for inclusion in a systematic review, incomplete outcome data are one of the criteria for judging risk of bias. A common rule of thumb is that more than 20% missing data puts a study at high risk of bias (though obviously that is simplistic, and its origin is obscure). More than 50% missing data would be very worrying, and you would not expect to put much credence on the results. So surely in a situation like the falls review, with data missing from 72.5% of the studies, we should have major reservations about the estimated treatment effect? Yet treatment effects estimated from a subset of studies are routinely presented on an equal footing with results with small amounts of missing data. This doesn't seem right.

If there is an important outcome (like death) that is only reported by a few studies, and there happens to be a difference in those studies, that is likely to be prominently featured in the review's results and conclusions. But the participants in all of the other trials either died or didn't die; the results for these trials exist but weren't recorded. It is quite possible that if they were known they would completely negate the positive effects in the trials that reported death. Maybe the reason those trials reported it was precisely because of the treatment benefit?

[1] Gillespie LD, Robertson MC, Gillespie WJ, Sherrington C, Gates S, Clemson LM, Lamb SE. Interventions for preventing falls in older people living in the community. Cochrane Database of Systematic Reviews 2012, Issue 9. Art. No.: CD007146. DOI: 10.1002/14651858.CD007146.pub3.


June 17, 2013

Sample size and the Minimum Clinically Important Difference

Performing a sample size calculation has become part of the rigmarole of randomized trials and is now expected as a sign of “quality”. For example, the CONSORT guidelines include reporting of a sample size calculation as one of the items that should be included in a trial report, and many quality scales and checklists include presence of a sample size calculation as one of the quality markers. Whether any of this is right or just folklore is an interesting issue that receives little attention. [I’m intending to come back to this issue in future posts]

For now I want to focus on one aspect of sample size calculations that seems to me not to make much sense.

In the usual idealized sample size calculation, you assume a treatment effect that you want to be able to detect. Ideally this should be the “minimum clinically important difference” (MCID): the smallest difference that it would be worthwhile to know about, or the smallest difference that would lead to one treatment being favoured over the other in clinical practice. Obviously this is not an easy thing to establish, but leaving practical issues to one side for the moment, in an ideal situation you would have a good idea of the MCID. Having established the MCID, it is used as the treatment effect in a standard sample size calculation, based on a significance test (almost always at the 5% significance level) and a specified level of power (almost invariably 80% or 90%). This gives the number of patients that need to be recruited, which will give a “statistically significant” difference the specified percentage of the time (the power) if the true difference is the MCID.

The problem here is that the sample size calculation is based on finding a statistically significant result, not on demonstrating that the difference is larger than a certain size. If you have identified a minimum clinically important difference, what you want to be able to say with a high degree of confidence is whether the treatment effect exceeds it. However, the standard sample size calculation is based on statistical significance, which is equivalent to finding that the difference is non-zero. If the true effect is around the MCID, the confidence limit nearer to the null will typically lie close to no difference at all, and will only rarely be far enough away to exclude effects smaller than the MCID. Hence the standard sample size may have adequate power to show that there is a non-zero difference, but very little power to show that the difference exceeds the MCID. Most results will therefore be inconclusive: they will show that there is evidence of benefit, but leave uncertainty about whether it is large enough to be clinically important.

As an example, imagine the MCID is thought to be a risk ratio of 0.75 (a bad outcome occurs in 40% of the control group and 30% of the intervention group). A standard sample size calculation gives 350 participants per group. So you do the trial and (unusually!) the proportions are exactly as expected: 40% in the control and 30% in the intervention group. The calculated risk ratio is 0.75 but the 95% confidence interval around this is 0.61 to 0.92. So you can conclude that the treatment has a non-zero effect but you don’t know whether it exceeds the minimum clinically important difference. With this result you would only have a 50% chance that the real treatment effect exceeded the MCID.
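
For anyone who wants to check that arithmetic, here is a minimal sketch in Python (the standard log risk ratio approximation; my own calculation rather than anything taken from a particular trial report).

    # Risk ratio and 95% CI for the worked example: 40% vs 30% with 350 per group.
    import math

    n = 350
    p_control, p_intervention = 0.40, 0.30
    events_control = p_control * n            # 140 events
    events_intervention = p_intervention * n  # 105 events

    rr = p_intervention / p_control           # 0.75, exactly the assumed MCID
    # Standard error of log(RR): sqrt(1/a - 1/n1 + 1/b - 1/n2)
    se_log_rr = math.sqrt(1 / events_intervention - 1 / n + 1 / events_control - 1 / n)

    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")  # about 0.61 to 0.92
    # The interval excludes 1 ("statistically significant") but includes values
    # weaker than the MCID of 0.75, so it cannot show the effect is at least that large.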

So sizing a trial based on the MCID might seem like a good idea, but in fact if you use the conventional methods, the result is probably not going to give you much information about whether the treatment effect really is bigger than the MCID or not. I suspect that in most cases the excitement of a “statistically significant” result overrides any considerations of the strength of the evidence that the effect size is clinically useful.


Randomised trial of the LUCAS mechanical chest compression device

Follow-up to Diary of a randomised controlled trial 25 July 2008 from Evidence-based everything

Recruitment finally finished on 10th June 2013. Over 400 ambulance service vehicles included, and more than 4300 patients. Fantastic effort by everyone involved.

PS final total sample size was 4471 - I missed out on the sweepstake to predict the final total by 1, as my guess was 4472!

