This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
Misleading number 150,000 -- This may be a real number, but it is thrown in at the lead to make the subsequent information seem more important.
4381 -- The number of patients with cancer. Excellent.
20% of 4381 = 876.
The relative risk for the obese? 35%. They don't give us the important numbers, though: how many died in each group.
150,000 is not a Trojan number in this article. That simply is the number of cases diagnosed each year in America. It is only there to demonstrate the scale of the problem. The authors make no pretence of there being 150,000 people involved in the study.
4,381 is the number of colon cancer sufferers involved in the study.
However, the authors actually say that "men in the highest body mass index category for obesity had a 35 percent increased risk of death compared to normal weight patients."
Looking at the abstract, here:
It mentions that 868 were obese and that only 262 were classified as having class 2 or 3 obesity (BMI more than 35). We are not told how many of these were men and how many women. However, if we assume an even split, then it was only the 130 or so men in this category who were at a supposedly elevated risk of mortality.
It mentions that the 95% confidence interval extends down to a risk ratio of 1.02. i.e. it effectively includes 1.
Another thing I find curious is that up until a few years ago, the P values given in these studies were values like 0.05 or 0.01 and, more rarely, 0.001. But now one sees values that change for each separate test: 0.017, 0.030, 0.0017, 0.039, 0.045, 0.006, 0.019. Is this possibly on account of features added to statistical analysis software packages? The 95% confidence interval around the statistic we were discussing, for example, is 1.02-1.79; is P=0.039 the level at which the confidence interval would just extend below 1.00?
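That question can be cross-checked. Under the usual assumption that the hazard ratio is normally distributed on the log scale (a Wald-type interval -- an assumption on my part, since the paper doesn't say how its intervals were computed), the reported CI implies a P value in the same neighbourhood as the published one:

```python
import math

def p_from_ci(hr, lo, hi, z_crit=1.96):
    """Back out a two-sided Wald P value from a hazard ratio
    and its 95% confidence interval, working on the log scale."""
    se = (math.log(hi) - math.log(lo)) / (2 * z_crit)  # implied standard error
    z = math.log(hr) / se                              # Wald test statistic
    # Two-sided normal tail probability: P(|Z| > z).
    return math.erfc(abs(z) / math.sqrt(2))

# The 1.35 hazard ratio and 1.02-1.79 interval quoted from the abstract.
p = p_from_ci(1.35, 1.02, 1.79)
print(round(p, 3))  # close to the reported P = 0.039; the small gap
                    # is consistent with rounding in the published figures
```

So yes: with a lower limit of 1.02, the interval sits only just above 1, and the P value correspondingly sits only just below 0.05.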
As you can see from the many classifications of the subjects and the number of things tested for, the study is a data dredge, and they are publishing the "best" of their results. How many non-results out of all their tests did they get?
The article seems to confuse the risk of getting cancer with the risk of dying from it if you're unlucky enough to get it. They establish that obese men are less likely to survive surgery for cancer of the colon than others but then seem to make a leap of faith to the idea that there is a link between obesity and cancer of the colon.
Exactly what I expected. The numbers always seem to get really small when discussing these things. I seem to remember one of the first lectures in Statistics stating something to the effect of "Small numbers are always dangerous in statistics".
I like the confounding factors issue noted below.
Where are the "cooler" heads that are supposed to squish stupidity in the scientific arena?
Only a little though.
I didn't get the article -- but extrapolating from what they said and what WIKI says:
80% of colon cancer victims survive,
so out of 130 obese men we would expect 26 to die early.
A 1.35 HR indicates that in reality about 35 died early.
Therefore the results of the study rest on 9 extra deaths...
I may have gotten that wrong, but I suspect it isn't that far off.
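The back-of-envelope arithmetic above is easy to check. A quick sketch, using the figures quoted in this thread (130 obese men, an 80% survival rate, HR 1.35) and the rough simplification that the hazard ratio can be applied directly to the death count:

```python
# Rough check of the "9 extra deaths" estimate.
n_obese_men = 130           # ~half of the 262 class 2/3 obese patients
baseline_death_rate = 0.20  # if ~80% of colon cancer patients survive
hr = 1.35                   # reported hazard ratio for the obese group

expected = n_obese_men * baseline_death_rate  # deaths expected at baseline
observed = expected * hr                      # deaths implied by the HR
print(round(expected), round(observed - expected))  # prints: 26 9
```

So the headline finding does indeed come down to something on the order of nine extra deaths.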
The abstract should say "We were unable to find a real link between BMI and colon cancer outcomes".
I will leave the idiocy of BMI for another time (calculating BMI carries an error greater than the CI).
You're probably thinking along the right lines, Brad, and the excess number of deaths is probably about what you have surmised. However, the article says that the subjects were patients classified as having "Stage II or Stage III colon cancer", not simply all colon cancers. The conditional probability of a patient dying, given that they fall into this category, may be greater than the 20% that apparently applies to colon cancers overall.
Reporting precise P values, now that they can be calculated easily from the relevant distributions rather than looked up in a dusty old book of tables, is quite common, and indeed sensible. It remains the case that the statistical tests used were designed for experimental settings and are of limited use in data-dredging operations. What might be a significant result for a predetermined primary endpoint in a randomised clinical trial is not necessarily so in epidemiology, but we all know that. Credibility could be restored to epidemiological studies, where people understandably want to get the most out of a large amount of data, by the simple expedient of correcting for multiple testing: essentially dividing the required P value by the number of cause-and-effect combinations tested. But that would quickly invalidate the entire study and probably mean that no significant associations were found. Alternatively, even low(ish) relative risks could become credible if all such associations (positive, negative, and neutral) were publicly available from all relevant studies, in which case a real but modest association should show up in a majority of them. Obesity, of course, is confounded by a whole range of other issues.
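To illustrate the multiple-testing point, here is a minimal sketch of a Bonferroni correction applied to the seven P values quoted earlier in the thread. It assumes, purely for illustration, that only seven tests were run; a data dredge of this kind almost certainly ran many more, which would make the threshold stricter still:

```python
# P values quoted upthread; the true number of tests performed is unknown.
p_values = [0.017, 0.030, 0.0017, 0.039, 0.045, 0.006, 0.019]

alpha = 0.05
# Bonferroni correction: divide the significance level by the test count.
threshold = alpha / len(p_values)

survivors = [p for p in p_values if p < threshold]
print(round(threshold, 4), survivors)  # only 0.0017 and 0.006 survive
```

Even under this generous assumption, most of the "significant" results vanish, including the P = 0.039 attached to the headline hazard ratio.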
Seems that those folks left out location as a confounding factor that has a high degree of correlation.
Moving from one county to another within the same state can increase a man's risk of dying from colon cancer by 100%!
See the colon cancer density by location in these charts.