This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
Exactly what I expected. The numbers always seem to get really small when discussing these things. I seem to remember one of the first lectures in Statistics stating something to the effect of "Small numbers are always dangerous in statistics".
I like the confounding factors issue noted below.
Where are the "cooler" heads that are supposed to squish stupidity in the scientific arena?
Only a little though.
I didn't get the article -- but extrapolating from what they said and what the Wiki says:
80% of colon cancer victims survive,
so out of 130 obese men we would expect 26 to die early.
An HR of 1.35 implies that in reality about 35 died early.
Therefore the results of the study rest on roughly 9 extra deaths...
I may have gotten that wrong, but I suspect it isn't that far off.
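A quick sketch of that arithmetic. (Caveat: a hazard ratio isn't strictly a risk ratio, so treating 1.35 as a simple multiplier on deaths is itself a back-of-envelope assumption, as are the 130 patients and 20% baseline mortality taken from the figures above.)

```python
# Back-of-envelope check of the excess-deaths estimate.
# Assumptions: 130 obese patients, 20% baseline mortality (80% survive),
# and an HR of 1.35 read loosely as a risk ratio.
n_obese = 130
baseline_mortality = 0.20
hazard_ratio = 1.35

expected_deaths = n_obese * baseline_mortality        # 26.0
observed_deaths = expected_deaths * hazard_ratio      # ~35.1
excess_deaths = observed_deaths - expected_deaths     # ~9.1

print(round(expected_deaths), round(observed_deaths), round(excess_deaths))
```

So the headline association hangs on about nine deaths, which is why small numbers are dangerous.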
The abstract should say "We were unable to find a real link between BMI and colon cancer outcomes".
I will leave the idiocy of BMI for another time (the measurement error in calculating BMI is greater than the CI).
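To illustrate that aside: since BMI = weight / height², errors in the two measurements compound, with height counted twice. The specific error sizes below (±1 kg in weight, ±2 cm in height) are illustrative assumptions, not figures from the study.

```python
# First-order error propagation for BMI = weight / height**2.
# Relative errors add, and height contributes twice because it is squared.
weight, d_weight = 90.0, 1.0     # kg, assumed measurement error
height, d_height = 1.80, 0.02    # m, assumed measurement error

bmi = weight / height**2
rel_err = d_weight / weight + 2 * d_height / height
d_bmi = bmi * rel_err

print(round(bmi, 1), round(d_bmi, 1))   # roughly 27.8 +/- 0.9
```

Even modest measurement errors give an uncertainty of nearly a full BMI unit, comparable to the effect sizes being reported.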
You're probably thinking along the right lines Brad, and the excess number of deaths is probably about what you have surmised. However, the article says that the subjects were patients classified as having Stage II or Stage III colon cancer, not simply all colon cancers. The conditional probability of a patient dying, given that they fall into this category, may be greater than 20%, which is apparently the figure for all colon cancers.
Reporting precise P values, now that these can be calculated easily from the relevant distributions rather than looked up in a dusty old book of tables, is quite common, and indeed sensible. It remains the case that the statistical tests used were designed for experimental settings and are of limited use in data-dredging operations. What might be a significant result for a predetermined primary endpoint in a randomised clinical trial is not necessarily so in epidemiology, but we all know that.

Credibility could be restored to epidemiological studies, where people understandably want to get the most out of a large amount of data, by the simple expedient of correcting for multiple testing (essentially dividing the required P value by the number of cause-and-effect combinations tested), but that would quickly invalidate the entire study and probably mean that no significant associations were found. Alternatively, even low(ish) relative risks could become credible if all such associations, positive, negative, and neutral, were publicly available from all relevant studies, in which case a real but modest association should be seen in a majority of studies. Obesity of course is confounded by a whole range of other issues.
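The multiple-testing point is easy to see numerically: with many independent tests at the conventional 0.05 level, a false positive somewhere becomes almost certain, which is what the Bonferroni-style correction (dividing alpha by the number of tests) guards against. A small sketch, using arbitrary test counts:

```python
# Family-wise false-positive risk at alpha = 0.05 for m independent tests,
# and the corresponding Bonferroni-corrected per-test threshold.
alpha = 0.05
for m in (1, 5, 20, 100):
    family_wise_fp = 1 - (1 - alpha) ** m   # P(at least one false positive)
    bonferroni_threshold = alpha / m        # required per-test P value
    print(m, round(family_wise_fp, 3), bonferroni_threshold)
```

With 20 cause-and-effect combinations tested, the chance of at least one spurious "significant" result is already about 64%, which is why an uncorrected dredging exercise proves very little.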
Seems that those folks left out location as a confounding factor that has a high degree of correlation.
Moving from one county to another within the same state can increase a man's risk of dying from colon cancer by 100%!!!!
See the colon cancer density by location in these charts.