This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
Well, the first article comes with the usual extrapolated-deaths caveat.
Still, I really don't understand our bending author's (or anyone else's) obsession with Vioxx. Vioxx is a crap product. Clinical trials, observational studies, and post-marketing pharmacovigilance all showed a large increase in the risk of heart attack and stroke, and of deaths resulting from them, particularly with the long-term use that clinical trials do not address.
That is not an acceptable risk profile for a painkiller.
So you believe that a placebo prevents heart disease.
I have friends who absolutely loved Vioxx. They used it to reduce the aches and pains of playing volleyball. Ibuprofen still works wonders, but Vioxx was apparently much better.
Would that a placebo could prevent coronary thrombosis. However, in placebo-controlled clinical trials we generally attribute treatment effects (foreseen or unforeseen) to the active treatment, unless there is some reason to think otherwise, or some quirk of randomisation leaves us with groups so different that it might affect our interpretation of the results.
Vioxx was pulled from the market after a randomised, blinded, placebo-controlled clinical trial (i.e. not the non-interventional data-dredge epidemiology that you rightly criticise) showed a nearly three-fold increase in the risk of thrombotic cardiac events and a doubling of stroke risk. Highly unusually, that trial was statistically powered for, and intended specifically to assess (among other things), this particular safety issue. Most of the difference was myocardial infarction.
I find it curious that your criticism of the study is based largely on the Kaplan-Meier estimates showing little divergence in the first 18 months, for Kaplan-Meier is rather like performing thousands of interim analyses (something you also rightly criticise): the estimate is revised at every event time. In other words, it's not a criticism compatible with saying (as you also do, and as I support) that one should wait until whatever predefined endpoint is reached before doing any unblinded analysis.
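To make that point concrete, here is a minimal Kaplan-Meier estimator (the durations below are invented for illustration, not APPROVe's data). Notice that the survival estimate changes at every event time, which is why reading off the curve mid-trial amounts to an unplanned interim look.

```python
def kaplan_meier(durations, observed):
    """Return (event_times, survival_probs) for a right-censored sample.

    durations: time on study for each subject.
    observed:  1 if the subject had the event, 0 if censored.
    """
    pairs = sorted(zip(durations, observed))  # order subjects by time
    n_at_risk = len(pairs)
    surv = 1.0
    times, probs = [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = censored = 0
        # Gather everyone whose follow-up ends at this time point.
        while i < len(pairs) and pairs[i][0] == t:
            if pairs[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            # The product-limit step: the estimate is updated here,
            # at this event time, and at every later one.
            surv *= 1 - deaths / n_at_risk
            times.append(t)
            probs.append(surv)
        n_at_risk -= deaths + censored
    return times, probs

# Hypothetical sample: 7 subjects, 5 events, 2 censored.
times, probs = kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 1, 0])
```

The point is structural, not numerical: each event triggers a fresh re-estimate of the curve, so eyeballing early divergence is exactly the kind of repeated peeking that inflates false conclusions.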
Given these results, it was not ethically or commercially viable for MSD to leave the product on the market. Even in extremis for control of RA, it is not acceptable to the FDA to relieve one form of disability only to replace it with another. Let me assure you from personal experience that the FDA are the most conservative drug regulator in the world.
The FDA also lives in the real world and even were the risk/benefit ratio considered acceptable for, say, RA, they know that a lot of off-label use occurs. So the selfishness of people dosing up on it to play volleyball (and some of them maiming themselves via stroke or heart attack in the process) plays a part in deciding to deny it to the crippled.
I should add, as John probably won't know this, that the incidence over time of almost anything bad almost always declines in long-term clinical trials. There is nothing unusual about the behaviour of the placebo group in APPROVe.
The absolute classic example is contraceptive trials. You know some pregnancies are going to occur. Some are treatment failures and will occur more or less at random throughout the trial. Others are user failures: people incapable of correct use of contraceptives, people who don't use them reliably, the promiscuous who go out most nights to hook up and forget to use them, as against the married couple with four kids who are lucky to manage it twice a month and really, really don't want any more sprogs. Not to mention that a contraceptive trial provides ideal cover for spermjackers.
All those people will tend to get pregnant early on. Sure, some of the incapable will be lucky and last longer than others, but in general the higher risk patients have their event early in the trial.
And most importantly - once that happens they are out of the trial and the risk profile of the remaining patients is changed - to a lower risk - and this is subsequently reflected in a lower pregnancy rate later in the trial.
Exactly the same phenomenon happens with adverse events (in this case coronary thrombosis). Your higher-risk patients are, by definition, going to drop out earlier on average than the lower-risk patients. Towards the end of the trial the rate of heart attack falls off in the placebo group because you've already selected out a lot of the people who were going to have one. This isn't a population snapshot: it's a closed group of people, and as time goes on you weed some of them out. So it doesn't behave like a population, and it's wrong to expect it to. It behaves like a group of people in which you are constantly selecting against the very thing you are looking for.
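The weeding-out effect is easy to demonstrate with a toy simulation (the risks and cohort sizes below are invented for illustration, nothing to do with APPROVe): mix high-risk and low-risk subjects in a closed cohort, remove each subject after their event, and the per-period event rate falls even though no individual's risk ever changes.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical closed cohort: 2,000 high-risk subjects (10% chance of an
# event per period) and 8,000 low-risk subjects (1% per period).
cohort = [0.10] * 2000 + [0.01] * 8000

def events_per_period(per_subject_risk, periods):
    """Run the cohort forward. Subjects leave after their event, so each
    period's rate is computed only over those still at risk."""
    rates = []
    at_risk = list(per_subject_risk)
    for _ in range(periods):
        survivors = []
        events = 0
        for p in at_risk:
            if random.random() < p:
                events += 1          # event occurs: subject exits the cohort
            else:
                survivors.append(p)  # still at risk next period
        rates.append(events / len(at_risk))
        at_risk = survivors
    return rates

rates = events_per_period(cohort, 12)
# rates[0] is close to the mixed average (about 2.8%); by period 12 most of
# the high-risk subjects are gone and the rate among survivors is well below
# that, despite every individual's own risk being constant throughout.
```

That is the naturally falling background against which the active arm should be read: the same weeding out happens there too, so a rate that fails to fall is itself informative.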
That means the treatment effect on CV events is probably real. In the active group the risk profile at later times has been reduced exactly as in the placebo group, yet the rate of events stays the same as at earlier timepoints.
So, unfortunately, there are subtleties in the interpretation of results of experiments performed on humans that may not be immediately apparent to those who have not worked in the field for some time. Sadly people do not behave like atoms. They are more complex, each has things we don't know about, no two people are alike, and it costs substantially more per person to do a clinical experiment than per atom to do a physical experiment. So we have to accept a lower degree of certainty in the clinical sciences than the physicists would tolerate.
That's interesting and prompts a few questions, more about the Vioxx side of things than the efficacies of 'lifestyle choice' recreational treatments.
Is there any attempt (assuming it could be done ethically) to look for common factors (DNA or previous health events come to mind, but there may well be others) among those who are adversely affected by drugs in the way the Vioxx trial identified?
If there is, and a regular pattern could be confidently elicited from the results, notably for the early negative outcomes, it would suggest that individuals could be pre-screened for unsuitable treatments. Maybe.
On that basis the moral clouds of health risk during trials could be better tolerated ("for the greater good"), and pre-treatment testing could be implemented to identify, and exclude from treatment, those thought to be at risk. Those not thought to be at risk might then progress to a successful treatment.
Presumably at some point the risk associated with long-term use, beyond anything covered by previous controlled trials, would have to be considered, but in the meantime the many would have had access to a presumably effective (for them) treatment, and the few with an anticipated high RR would have been protected from the risk.
Assuming that whatever information could be gathered from those who succumbed to the risk was both relatively conclusive and cost-effective (and I appreciate that might be a big IF), thus making the idea a practical proposition, what other factors, moral or scientific, would kick such an approach onto the sidelines?
Some of that is done, for example with tumour genotypes or expression profiles for oncology drugs.
Of course, most of this work is speculative and experimental, and clinical trials are rarely statistically powered to be informative in terms of "these patients are more/less likely to respond to treatment".
Of course, faced with ten potential treatments for whatever disease, you'd like to know which one the patient is most likely to respond to. That's the holy grail, if you like: not treating someone with a drug they won't respond to anyway, but going straight to the one that will work for them.
To the extent that that is down to individual physiology rather than chance, it's largely something for the future, when you can genetically profile everyone cheaply. The compounding problems are that you are then looking for probably small differences in response between potentially hundreds of thousands of subgroups, and that genetic influences are multifactorial anyway, so a correlation between genetic background and response in one set of people is unlikely to apply to another. Not to mention that we are entirely neglecting phenotype here.
So tailor-made treatment is theoretically very nice, but is likely to remain as much art as science for a long time.