September 21, 2017

Best sample size calculation ever!

I don't want to start obsessing about sample size calculations, because most of the time they're pretty pointless and irrelevant, but I came across a great one recently.

My award for least logical sample size calculation goes to Mitesh Patel et al, Intratympanic methylprednisolone versus gentamicin in patients with unilateral Meniere's disease: a randomised, comparative effectiveness trial, in The Lancet, 2016, 388: 2753-62.

The background: Meniere's disease causes vertigo attacks and hearing loss. Gentamicin, the standard treatment, improves vertigo but can worsen hearing. So the question is whether an alternative treatment, methylprednisolone, would be better - as good at reducing vertigo, and better in terms of hearing loss. That's not what the trial did, though - it had frequency of vertigo attacks as the primary outcome. You might question the logic here: if gentamicin is already good at reducing vertigo, methylprednisolone might give no improvement or only a small one, but it might not cause as much hearing loss. So what you really want is for methylprednisolone to be better in terms of hearing loss, as long as it's nearly as good as gentamicin at reducing vertigo.

Anyway, the trial used vertigo as its primary outcome, and recruited 60 people, which was its pre-planned sample size. But when you look at the sample size justification, it's all about hearing loss! Er... that's a completely different outcome. They based the sample size of 60 people on "detecting" a difference of (i.e. getting statistical significance if the true difference was) 9 dB (SD 11). Unsurprisingly, the trial didn't find a difference in vertigo frequency.
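For anyone curious how a justification like that is usually put together, here's a minimal sketch of the standard normal-approximation calculation for comparing two means. The 9 dB difference and SD of 11 dB are the figures quoted in the paper; the two-sided alpha of 0.05 and 90% power are illustrative assumptions of mine, not necessarily what the authors used.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for comparing two means
    with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# 9 dB difference and SD 11 dB are the figures quoted in the paper;
# alpha = 0.05 and 90% power are my illustrative assumptions.
print(n_per_group(delta=9, sd=11))
```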

This seems to be cheating. If you're going to sign up to the idea that it's meaningful to pre-plan a sample size based on a significance test, it seems important that it should have some relation to the main outcome. Just sticking in a calculation for a different outcome doesn't really seem to be playing the game. I guess it ticks the box for "including a sample size calculation" though. Hard to believe that the lack of logic escaped the reviewers here, or maybe the authors managed to convince them that what they did made sense (in which case, maybe they could get involved in negotiating Brexit?).

Here's their section on sample size, from the paper in The Lancet:

[Images: the sample size section from the paper]


September 13, 2017

Confidence (again)

I found a paper in a clinical journal about confidence intervals. I’m not going to give the reference, but it was published in 2017, and written by a group of clinicians and methodologists, including a statistician. Its main purpose was to explain confidence intervals to clinical readers – which is undoubtedly a worthwhile aim, as there is plenty of confusion out there about what they are.

I think there is an interesting story here about what understanding people take away from these sorts of papers (of which there are quite a number out there), and how wording that is arguably OK can lead the reader to a totally wrong understanding.

Here’s the definition of confidence intervals that the authors give:

“A 95% confidence interval offers the range of values for which there is 95% certainty that the true value of the parameter lies within the confidence limits.”

That’s the sort of definition you see often, and some people don’t find problematic, but I think most readers will be misled by it.

The correct definition is that in a long series of replicates, 95% of the confidence intervals will contain the true value, so it’s kind-of OK to say that a 95% CI has a “95% probability of including the true value,” if you understand that means that “95% of the confidence intervals that you could have obtained would contain the true value.”
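If that distinction feels slippery, a quick simulation makes the long-run meaning concrete. This is just an illustrative sketch with made-up numbers (a normal outcome with true mean 50, SD 10, samples of 30):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
mu, sigma, n, reps = 50.0, 10.0, 30, 50_000   # made-up true mean, SD, sample size
crit = t.ppf(0.975, n - 1)                    # critical value for a 95% CI

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half_width = crit * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half_width) <= mu <= (x.mean() + half_width)

print(covered / reps)  # close to 0.95: about 95% of the intervals contain the true mean
```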

Where I think this definition goes wrong is in using the definite article: “THE range of values for which there is 95% certainty…” That seems to be saying pretty clearly that we can conclude that there is a 95% probability that the true value is in this specific range. I’m pretty sure that is what most people would understand, and the next logical step is that if there is 95% probability of the true value being in this range, if we replicate the study many times, we will find a value in this range 95% of the time.

That’s completely wrong – the probability that a replicate estimate falls within a particular 95% CI varies depending on exactly where that CI falls in relation to the true value. If you’ve got a CI that happens to be extreme, the probability of a replicate estimate landing in that range might be very low. On average it’s around 83.4% (see Cumming & Maillardet 2006, ref below).
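Here's a small sketch of that point, using the simple known-sigma case and made-up numbers. The capture probability for each individual interval depends on where that interval happened to land; only the average across intervals comes out at about 83%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu, sigma, n = 0.0, 1.0, 50        # made-up example
se = sigma / np.sqrt(n)

# Each "original" study gives a 95% CI for the mean (known-sigma case)
orig_means = rng.normal(mu, se, 10_000)
lo, hi = orig_means - 1.96 * se, orig_means + 1.96 * se

# Exact probability that a single replicate mean lands inside each interval
p_capture = norm.cdf(hi, loc=mu, scale=se) - norm.cdf(lo, loc=mu, scale=se)

print(p_capture.mean())  # about 0.834 on average
print(p_capture.min())   # much lower for the unlucky, extreme intervals
```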

The problem is that “95% probability of including the true value” is a property of the population of all possible confidence intervals, and unless we are very careful about language, it’s easy to convey the erroneous meaning that the “95% probability” applies to the one specific confidence interval that we have found. But in frequentist statistics it doesn’t make sense to talk about the probability of a parameter taking certain values; the parameter is fixed but unknown, so it is either in a particular confidence interval or it isn’t. That’s why the definition is as it is: 95% of the possible confidence intervals will include the true value. But we don’t know where along their length the true value will fall, or even whether it is in or out of any particular interval. It’s easy to see that “95% probability of the location of the true value” (which seems to be the interpretation in this paper) can’t be right; replications of the study will each have different data and different confidence intervals. These cannot all show the location of the true value with 95% certainty; some of them won’t even overlap!

What the authors seem to be doing, without realising it, is using a Bayesian interpretation. This is no surprise; people do it all the time, because it is a natural and intuitive thing to do, and many probably go through an entire career without realising that this is what they are doing. When we don’t know a parameter, it is natural to think of our uncertainty in terms of probability – it makes sense to us to talk about the most probable values, or a range of values with 95% probability. I think this is what people are doing when they talk about 95% probability of the true value being in a confidence interval. They are imagining a probability distribution for the parameter, with the confidence interval covering 95% of it. But frequentist confidence intervals aren’t probability distributions. They are just intervals.

I guess this post ought to have some nice illustrations. I might add some when I’ve got a bit of time.

Cumming G, Maillardet R. Psychological Methods 2006; 11(3): 217–227.


August 19, 2017

Trial results infographics

There is a fashion for producing eye-catching infographics of trial results. This is a good thing in some ways, because it’s important to get the results communicated to doctors and patients in a way they can understand. Here’s one from the recent WOMAN trial (evaluating tranexamic acid for postpartum haemorrhage).

[Infographic: WOMAN trial results]

What’s wrong with this? To my mind the main problem is that if you reduce the messages to a few headlines then you end up leaving out a lot of pretty important information. One obvious thing missing from these results is uncertainty. We don’t know, based on the trial’s results, that the number of women bleeding to death would be reduced by 30% – that’s just the point estimate, and there’s substantial uncertainty about this.

Actually, the 30% reduction isn’t the trial’s main result, which was a risk ratio for death due to haemorrhage of 0·81, 95% CI 0·65–1·00. That’s a point estimate reduction of 19%, with a range of effects “consistent with the data” (or not significantly different from the data) running from a 35% reduction down to no reduction at all. The 30% figure seems to come from a subgroup analysis of women treated within 3 hours of delivery. A bit naughty to use a subgroup analysis as your headline result, but this highlights another problem with the infographic – you don’t really know what you’re looking at. In this case the investigators have chosen to present a result that they presumably feel represents the real treatment effect – but others might take a different view, and there is no way of telling that the results have been selected to support a particular story.
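Just to spell out the arithmetic behind those percentages (nothing here beyond converting a risk ratio into a relative reduction):

```python
rr, ci_low, ci_high = 0.81, 0.65, 1.00  # risk ratio and 95% CI for death due to haemorrhage

# Relative risk reduction (%) is 100 * (1 - risk ratio)
print(round(100 * (1 - rr), 1))       # point estimate: a 19% reduction
print(round(100 * (1 - ci_low), 1))   # largest reduction consistent with the data: 35%
print(round(100 * (1 - ci_high), 1))  # smallest: no reduction at all (0%)
```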

[I’m guessing that the justification for presenting the “<3 hour” subgroup is that there wasn’t a clear effect in the “>3 hour” subgroup (RR 1.07, 95% CI 0.76, 1.51), so the belief is that treatment needs to be given within 3 hours to be effective. There could well be an effect of time from delivery, but it needs a better analysis than this.]

WOMAN trial: Lancet, Volume 389, No. 10084, p2105–2116, 27 May 2017

PS And what’s with the claim at the top that the drug could save 1/3 of the women who would otherwise die from bleeding after childbirth? That’s not the same as 30%, which wasn’t the trial’s result anyway. I guess a reduction of 1/3 is a possible result but so are reductions of 25% or 10%.


July 18, 2017

The future is still in the future

I just did a project with a work experience student that involved looking back through four top medical journals for the past year (NEJM, JAMA, Lancet and BMJ), looking for reports of randomised trials. As you can imagine, there were quite a lot - I'm not sure exactly how many because only a subset were eligible for the study we were doing. We found 89 eligible for our study, so there were probably at least 200 in total.

Of all those trials, I saw only ONE that used Bayesian statistical methods. The rest were still doing all the old stuff with null hypotheses and significance testing.


May 30, 2017

Hospital-free survival

One of the consequences of the perceived need for a “primary outcome” is that people try to create a single outcome variable that will include all or most of the important effects, and will increase the incidence of the outcome, or in some other way allow the sample size calculation to give you a smaller target. There has for some time been a movement to use “ventilator-free days” in critical care trials, but a recent trend is for trials of treatments for cardiac arrest to use “hospital-free survival” or “ICU-free survival,” defined as the number of days that a trial participant was alive and not in hospital or ICU, up to 30 days post randomisation.

A recent example is Nichol et al (2015), who compared continuous versus interrupted chest compressions during CPR. It was a massive trial, randomising over 23,000 participants, and found 9% survival with continuous compressions and 9.7% with interrupted. Inevitably this was described as “continuous chest compressions during CPR performed by EMS providers did not result in significantly higher rates of survival.” But it did result in a “significant” difference in hospital-free survival, which was a massive 0.2 days shorter in the continuous compression group (95% CI −0.3 to −0.1, p=0.004).

A few comments. First, with a trial of this size and a continuous outcome, it’s almost impossible not to get statistical significance, even if the effect is tiny – as you can see. I very much doubt that anyone would consider a difference in hospital-free survival of 0.2 days (that’s about 4 hours 48 minutes) of any consequence, but it’s there in the abstract as a finding of the trial.

Second, it’s a composite outcome, and like almost all composite outcomes, it includes things that are very different in their importance; in this case, whether the patient is alive or dead, and whether they are in hospital. It’s pretty hard to interpret this. What does a difference of about 5 hours in the time alive and out of hospital mean? Would a patient think that was a good reason to use the intervention? I doubt it. They would surely be more interested in the chances of surviving, and maybe secondarily whether the amount of time they might spend in hospital would be different.

Third, and this is especially true for cardiac arrest trials, the mean is a terrible way to summarise these data. The survival rate in this trial was about 9%. The vast majority of deaths would have occurred either before reaching hospital or in hospital, so all of those patients would have hospital-free survival of zero. The 9% or so of patients that survived to hospital discharge would have a number of hospital-free days between 0 and 30. So the means for each group will be pulled strongly towards zero by the huge number of participants with zero hospital-free days. The means for each group are presented in Table 3 of the paper, as 1.3 ± 5.0, and 1.5 ± 5.3, without comment, even though that seems to imply negative hospital-free survival. Definitely a case here for plotting the data to see what is going on; the tabulated summary is inadequate. The difference is almost certainly driven by the 0.7% higher survival in the interrupted compression group, which was possibly an important finding. However, because it was non-significant it is pretty much ignored and assumed to be zero.
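To see why the mean is such a poor summary here, a short simulation with made-up numbers (roughly 9% survival, survivors getting somewhere between a few days and a few weeks alive and out of hospital) reproduces the same odd-looking “mean ± SD” pattern:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12_000                              # roughly the size of one arm (illustrative)

# Made-up assumptions: about 9% survive to discharge; survivors get
# somewhere between 5 and 25 days alive and out of hospital, everyone else gets 0.
survived = rng.random(n) < 0.09
days = np.where(survived, rng.uniform(5, 25, n), 0.0)

print(days.mean(), days.std())            # mean around 1.3, SD around 4.5
print(np.percentile(days, [50, 75, 90]))  # all zero: most patients contribute nothing but zeros
```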

Nichol G et al. Trial of Continuous or Interrupted Chest Compressions during CPR. NEJM 2015; 373: 2203-2214.


April 07, 2017

Rant

[Photo: slide from Doug Altman's talk on hypothesis tests and p-values]

Here’s a photo of a slide from a recent talk by Doug Altman about hypothesis tests and p-values (I nicked the picture from Twitter; additions are mine). I wasn’t there so I don’t know exactly what Doug said, but I totally agree that hypothesis testing and p-values are a massive problem.

Nearly five years ago (July 2012 if I remember correctly) I stood up in front of the Warwick Medical School Division of Health Sciences, in a discussion about adopting a “Grand Challenge” for the Division, and proposed that our “Grand Challenge” should be to abandon significance testing. The overwhelming reaction was blank incomprehension. There was a vote for which of the four or five proposals to adopt, and ONE PERSON voted for my idea (for context, as a rough guess, there were probably 200-300 people in the Division at that time).

It was certainly well-known before 2012 that hypothesis tests and p-values were a real problem, but that didn’t seem to have filtered through to many medical researchers.


March 05, 2017

Sample size statement translation

Here are a couple of statements about the justification of the sample size from reports of clinical trials in high-impact journals (I think one is from JAMA and the other from NEJM):

We estimated that a sample size of 3000 … would provide 90% power to detect an absolute difference of 6.3 percentage points in the rate of [outcome] between the [intervention] group and the placebo group.

The study was planned to detect a difference of 1.1 points in the [outcome score] between the 2 interventions with a significance level of .05 and a power level of 90%.

There is nothing remarkable about these at all; they were just the first two that I came across in rummaging through my files. Statements like this are almost always found in clinical trial reports.

A translation, of the first one:

“We estimated that if we recruited 3000 participants and the true absolute difference between intervention and placebo is 6.3 percentage points, then if we assumed that there was no difference between the groups, the probability (under this assumption of no difference) of getting data that were as unusual or more unusual than those we actually obtained would be less than 0.05 in 90% of a long series of replications of the trial.”

That’s what it actually means, but I guess most clinicians and researchers would find it pretty impenetrable. An awful lot is hidden by the simple word “detect” in sample size justification statements. I suspect the language (“detect a difference”) feeds into the misunderstandings of “significant” results – that it’s a real difference, not due to chance, and so on.
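If it helps, the operational meaning of “90% power” can be shown in a few lines of simulation. The numbers below are entirely made up (a control event rate of 30%, a true absolute difference of 10 percentage points, 400 per group) rather than taken from either quoted trial; the point is simply that “power” is the proportion of imaginary replications of the trial that come out with p < 0.05.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Made-up design assumptions (not from either quoted trial)
p_control, p_treat, n_per_group, reps = 0.30, 0.20, 400, 10_000

significant = 0
for _ in range(reps):
    x1 = rng.binomial(n_per_group, p_control)
    x2 = rng.binomial(n_per_group, p_treat)
    p1, p2 = x1 / n_per_group, x2 / n_per_group
    pooled = (x1 + x2) / (2 * n_per_group)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
    z = (p1 - p2) / se
    significant += 2 * norm.sf(abs(z)) < 0.05   # two-sided p-value below 0.05?

# The "power" is just the proportion of these imaginary trials with p < 0.05
print(significant / reps)  # roughly 0.9 with these made-up numbers
```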


February 11, 2017

Andrew Gelman agrees with me!

Follow-up to The Fragility Index for clinical trials from Evidence-based everything

I’ve slipped in my plan to do a new blog post every week, but here’s a quick interim one.

I blogged about the fragility index a few months back (http://blogs.warwick.ac.uk/simongates/entry/the_fragility_index/). Andrew Gelman has also blogged about this, and thought much the same as I did (OK, I did ask him what he thought).

See here: http://andrewgelman.com/2017/01/03/statistics-meets-bureaucracy/


December 14, 2016

Bayesian methods and trials in rare and common diseases

One of the places that Bayesian methods have made some progress in the clinical trials world is in very rare diseases. And it’s true, traditional methods are hopeless in this situation, where you can never get enough recruits to get anywhere near the sample size that traditional methods demand for an “adequately powered” study, and it’s unlikely that a result will be “statistically significant”. Bayesian methods really help here, because they give you a result in terms of probability that a treatment is superior. This is good for two main reasons. First, it’s helpful to quantify the probability of benefit, and its size and uncertainty. This tells us a lot more than simply dichotomising it into “significant” and “non-significant”, with the unstated assumption that “significant” means clinically useful. Second, there isn’t a fixed probability of benefit that means an intervention should be used; it will vary from situation to situation. For example, if there is almost no cost to using a treatment, it might only need a small probability of being better to be worthwhile. If we don’t estimate this probability we can’t make this sort of judgement.

But (and this is something I have experienced several times now in a variety of places so I think it is real) – this seems to have had an unfortunate side effect. A perception seems to have grown that Bayesian methods are something to consider using when a “proper” trial (with all of the usual stuff: interpretation based on p < 0.05 in a null hypothesis test, fixed pre-planned sample size based on a significance test, 80% or 90% power and so on) isn’t feasible. In reality, the ability to quantify probability of benefit would be helpful in just about all situations, even (or especially) large Phase 3 trials that are looking for modest treatment benefits. How many of these trials don’t “achieve statistical significance” but have results that would show a 70% or 80% probability of benefit? They might still provide good enough evidence to make decisions about treatments (based on, for example, cost-effectiveness), but at the moment they tend to get labelled as “non-significant” or “negative trials.”
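As a concrete (and entirely invented) example of what “probability that the treatment is superior” looks like in practice, here’s a minimal sketch using independent Beta(1, 1) priors on the event rate in each arm and Monte Carlo draws from the posteriors. With these invented counts a conventional test wouldn’t reach p < 0.05, yet the posterior probability that the treatment reduces the event rate is high.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented trial results: events / patients in each arm
events_control, n_control = 45, 300
events_treat, n_treat = 32, 300

# Beta(1, 1) priors; the posterior for each event rate is then a Beta distribution
draws = 200_000
rate_control = rng.beta(events_control + 1, n_control - events_control + 1, draws)
rate_treat = rng.beta(events_treat + 1, n_treat - events_treat + 1, draws)

# Posterior probability that the treatment reduces the event rate
print(np.mean(rate_treat < rate_control))  # well above 0.9 with these invented counts
```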


November 12, 2016

“The probability that the results are due to chance”

One of the (wrong) explanations that you often see of what a p-value means is “the probability that data have arisen by chance.” I think people may struggle to see why this is wrong, as I did for a long time. A p-value is the probability of getting the data (or more extreme data) if the null hypothesis (no difference) is correct – right? So that would mean the specific result you got must have been due to chance variation, doesn’t it? So why isn’t the p-value the probability that the result was due to chance?

The problem is that there are two ways of interpreting “the probability that a result is due to chance.”
1. The probability that chance or random variation was the process that produced the result;
2. The probability of getting the specific data (or more extreme data) that you got in your experiment, if chance was the only process operating.

The second of these is what the p-value tells you; but the first is the interpretation that most people give it. The p-value tells you nothing about the process that produced the result, because it is calculated on the assumption that the null hypothesis is correct.
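A simulation makes the gap between the two interpretations tangible. Everything here is invented for illustration: 10,000 two-group experiments with 50 per group, 90% of which have no real difference and 10% a true standardised difference of 0.5. The p-value behaves exactly as interpretation 2 says, but the proportion of “significant” results that really were just chance (interpretation 1) is far higher than 5%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

experiments, n_per_group = 10_000, 50
se_diff = np.sqrt(2 / n_per_group)          # SE of a difference in means, unit-SD outcome

null_true = rng.random(experiments) < 0.9   # 90% of experiments have no real effect...
true_diff = np.where(null_true, 0.0, 0.5)   # ...the rest have a true difference of 0.5 SD

obs_diff = rng.normal(true_diff, se_diff)   # observed difference in means for each experiment
p = 2 * norm.sf(np.abs(obs_diff) / se_diff)

sig = p < 0.05
print(np.mean(p[null_true] < 0.05))  # about 0.05: the p-value behaves as interpretation 2 says
print(np.mean(null_true[sig]))       # about 0.4: the chance a "significant" result was just chance
```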

