p-values

Even in the most advanced discussions of modern patient management, I have not once met a colleague who understood p-values. Not that I do.

In last Friday’s session we did the classical experiment (described e.g. in this article) of offering the audience various explanations of how to interpret p-values, with none of them actually correct. Which is funny. Then we discussed the core definition. Which is simple. And we worked through the examples in the Wikipedia entry (by the way: Wikipedia provides excellent articles on statistics!), such as computing p for simple random variables (for example, the number of heads in n tosses of a coin). Which is hard.

Just for completeness: the p-value is the probability of obtaining the observed test statistic, or a more extreme value, assuming the null hypothesis to be true.
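
For the coin example, here is a minimal sketch of the computation in Python (the counts are made up, and defining “more extreme” as “at most as probable under the null” is one of several conventions for a two-sided test):

```python
# Two-sided p-value for the coin-toss example: k heads in n tosses,
# null hypothesis: the coin is fair (P(heads) = 0.5). Counts are made up.
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k_obs = 20, 14                     # e.g. 14 heads in 20 tosses
p_obs = binom_pmf(k_obs, n)

# The p-value sums the probabilities of all outcomes that are at least as
# extreme (here: at most as probable) as the observed one, under the null.
p_value = sum(binom_pmf(k, n) for k in range(n + 1) if binom_pmf(k, n) <= p_obs)
print(f"p = {p_value:.4f}")
```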

Unfortunately, we did not get far enough to discuss the Bayesian alternatives to p-values and their decision-theoretic applications.

Here are my take-home messages:

  • p-values are a statistical property of the data.
  • You require a statistical model and have to assume the null hypothesis to be true to be able to compute them.
  • They say nothing about the truth of the null hypothesis or (even less so!) any alternative hypothesis.
  • They are no measure of effect size: with a large enough sample a tiny effect can yield a minuscule p, while a large effect in a small sample may not (see the sketch after this list).
  • They cannot be compared among studies.
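
To see the point about effect sizes in action, here is a small simulation sketch (made-up data; assumes numpy and scipy are available):

```python
# A tiny effect in a huge sample can produce a smaller p-value than a
# large effect in a small sample. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Tiny effect (mean shift of 0.05 SD), 100,000 observations per group
tiny = stats.ttest_ind(rng.normal(0.05, 1, 100_000), rng.normal(0.0, 1, 100_000))

# Large effect (mean shift of 1 SD), 10 observations per group
large = stats.ttest_ind(rng.normal(1.0, 1, 10), rng.normal(0.0, 1, 10))

print(f"tiny effect, huge sample:   p = {tiny.pvalue:.2e}")
print(f"large effect, small sample: p = {large.pvalue:.3f}")
```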

 


How to read the publication of a randomized controlled trial

Traditional evidence-based medicine à la Sackett has it that you should consider a couple of aspects of a randomized controlled trial before you believe it. Using the chapter on RCTs in the Users’ Guides and the SPARCL trial in the NEJM, we discover quite a lot of flaws in this not-quite-so-new trial, which spawned a lot of controversy about the bleeding complications of high-dose statins – a topic of its own.

On my hidden agenda, I also wanted to sharpen minds about the disadvantages of large-scale RCTs. While I would not go as far as James Penston, who in his book and in his articles condemns them outright (hey, they are still the best – after n-of-1 trials – we’ve got in terms of research), it is worthwhile to understand why phase 3 RCTs are more an economic than a medical undertaking. So go on and read one of his articles, say this one.

Quite curiously, we did not go into this topic as far as I would have wished and instead focused on p-values, confidence intervals and frequentist statistics – a topic we have to deal with again in the future (after I have read up on Fisher vs. Neyman-Pearson).

Sensitivity, specificity, predictive values, likelihood ratios – a journey through terminology using jolt accentuation

Since I am a mathematician, I don’t scare easily; at least I am not scared by statistical terminology. Still, it is hard to remember the difference between

  • sensitivity, specificity
  • positive and negative predictive value
  • likelihood ratios

So I tried to find a good example to study these and came up with jolt accentuation for suspected bacterial meningitis. In short, jolt accentuation means (exacerbation of) headache upon turning the head at about two turns per second, and it is purported to be more sensitive than neck stiffness by the first publication and by this JAMA evidence-based medicine review. To get some real data, we used a tiny collection of Iranian patients and then started constructing the four-field table.

Here is the grid:

                   Diseased           Healthy
  Test positive    True positive      False positive
  Test negative    False negative     True negative
  All              All diseased       All healthy

And here are the formulas (a small code sketch follows the list):

  • Sensitivity: TP / (TP + FN)
  • Specificity: TN / (FP + TN)
  • Positive predictive value: TP / (TP + FP)
  • Negative predictive value: TN / (FN + TN)
  • Positive likelihood ratio: sensitivity / (1 – specificity)
    = (TP / (TP + FN)) / (FP / (FP + TN))
  • Negative likelihood ratio: (1 – sensitivity) / specificity
    = (FN / (TP + FN)) / (TN / (FP + TN))
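
Putting the formulas into code, a minimal sketch with placeholder counts (not the numbers from the jolt accentuation study):

```python
# Diagnostic accuracy measures from a four-field (2x2) table.
# The cell counts are placeholders, not data from any of the cited studies.
tp, fp = 90, 30   # test positive: diseased / healthy
fn, tn = 10, 70   # test negative: diseased / healthy

sensitivity = tp / (tp + fn)
specificity = tn / (fp + tn)
ppv = tp / (tp + fp)
npv = tn / (fn + tn)
lr_plus = sensitivity / (1 - specificity)
lr_minus = (1 - sensitivity) / specificity

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
print(f"LR+ = {lr_plus:.2f}, LR- = {lr_minus:.2f}")
```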

Want more?

  • You can use the likelihood ratios to calculate the posttest odds from the pretest odds: post odds = pre odds * likelihood ratio. The problem is that you need to use odds rather than probabilities, and we usually don’t. Of course: odds = probability / (1 – probability) and conversely probability = odds / (1 + odds). If that is too much of a hassle, use a calculator on your iPhone or Fagan’s nomogram (see the sketch after this list).
  • Likelihood ratios are independent of disease prevalence – the prevalence enters only through the pretest probability.
  • Read up on these statistics in this wonderful short pdf.
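
And here is the odds bookkeeping from the first bullet as a minimal sketch (the pretest probability and the likelihood ratio are assumed values):

```python
# Pretest probability -> pretest odds -> posttest odds -> posttest probability.
# The pretest probability and the likelihood ratio are assumed for illustration.
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

pretest_prob = 0.20   # assumed pretest probability of disease
lr_plus = 3.0         # assumed positive likelihood ratio of the test

posttest_odds = prob_to_odds(pretest_prob) * lr_plus
posttest_prob = odds_to_prob(posttest_odds)
print(f"posttest probability = {posttest_prob:.2f}")   # 0.25 * 3 = 0.75 odds, i.e. 0.43
```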

N-of-1 studies: individual trials

A patient asks you to prescribe lacosamide for prevention of her migraine. She seems to have discovered by chance (when she was given lacosamide for presumed seizures, which then turned out to be syncopes) that her migraines – troubling her severely for up to 2 days a week – improved on lacosamide, getting milder if not disappearing totally.

How do you substantiate her claim of benefit? How do you convince her HMO to pay for it?

The answer – as usual – lies in EBM. We discuss the n-of-1 trial, the statistics and the ethical dimensions, using the following resources:

By the way: the JAMA evidence home page is restricted to registered (paying) users. But the former version of the book above is still online, available at usersguides.org!
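
To give a rough idea of the statistics involved: in an n-of-1 trial the patient runs through several randomized (ideally blinded) treatment/placebo periods, and the outcome is compared within the resulting pairs. Here is a minimal sketch with made-up headache counts and a paired t-test as one possible choice of analysis:

```python
# Sketch of an n-of-1 analysis: headache days per treatment period, in
# randomized drug/placebo period pairs. All numbers are made up.
from scipy import stats

on_drug = [1, 2, 0, 1, 2]   # headache days per period on lacosamide
placebo = [4, 3, 5, 4, 3]   # headache days per period on placebo, same pairs

# Compare the paired periods; a paired t-test is one simple (if small-sample)
# choice, a Wilcoxon signed-rank test another.
result = stats.ttest_rel(on_drug, placebo)
print(f"paired t-test: p = {result.pvalue:.3f}")
```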