
Number Watch Web Forum

This forum is about wrong numbers in science, politics and the media. It respects good science and good English.

Re: Meta-studies and probability

You have created an apparent paradox by your lax use of terms. You have replaced the conventional “heads” and “tails” with the new names “true” and “false”, and in your argument you go on to use the latter pair in two different senses. The binomial theorem, which is simple applied common sense, tells us that, as the number of coins increases, the probabilities of all-heads and all-tails both go down. Whether either of these outcomes is “true” or “false” depends on the veracity of the hypothesis you are testing, but you have not stated one.
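To put a number on it: with n fair coins the probability of all heads is (1/2)^n, and the same for all tails, so with 10 coins each is (1/2)^10 = 1/1024, and both shrink towards zero as n grows.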
The scam in most meta-studies is based on the assumption that you can combine two or more tests and make them look as though they were one larger (and therefore more significant) test; hence our definition of the meta-study as an attempt to make a strong chain out of weak links. It becomes a fraud when you omit tests whose results do not fit your requirement (as in the case of the EPA meta-study on passive smoking).

Re: Meta-studies and probability

"The scam in most meta-studies is" that like the studies themselves they are a pack of bull****.

Let us suppose that someone has a substance X that they wish to test for effect Y, say jam and lung cancer. How likely is it that jam causes lung cancer? Rephrase: how likely is it that jam is one of the things that cause lung cancer? Rephrase: what fraction of things cause lung cancer? For the sake of a number, choose 1 in 1000, although the actual likelihood is probably much, much less.

Collect data, generate a relative risk, generate a 95% confidence interval on that relative risk, find significance. What inference can you draw?
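A minimal sketch of that procedure in Python, assuming a simple cohort 2-by-2 table; the counts are invented purely for illustration, and the interval uses the standard log-relative-risk (Katz) approximation:

import math

def rr_ci(cases_exposed, total_exposed, cases_control, total_control, z=1.96):
    # Relative risk: incidence among the exposed over incidence among controls.
    rr = (cases_exposed / total_exposed) / (cases_control / total_control)
    # Approximate 95% confidence interval on the log scale.
    se = math.sqrt(1 / cases_exposed - 1 / total_exposed
                   + 1 / cases_control - 1 / total_control)
    low = math.exp(math.log(rr) - z * se)
    high = math.exp(math.log(rr) + z * se)
    return rr, low, high

# Invented counts: 30 cancers in 1000 jam eaters, 20 in 1000 abstainers.
rr, low, high = rr_ci(30, 1000, 20, 1000)
print(rr, low, high)
# "Significance" here means the interval excludes a relative risk of 1.
print("significant" if low > 1 or high < 1 else "not significant")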

Choose 1000 things at random and follow the above procedure. Then...

In about 950 cases there will be no link, therefore the parameter (relative risk) will be 1. The interval will contain the parameter, so it will contain 1, so it will not be significant.

In about 50 cases there will be no link, therefore the parameter will be 1. The interval will not contain the parameter, so it will not contain 1, so it will be significant.

In 1 case there will be a link, therefore the parameter will not be 1. The interval might contain the parameter, or not, and it might contain 1, or not.
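To watch those numbers fall out, here is a small Monte Carlo sketch in Python; the power of 0.8 given to the one real link is my assumption, since nothing above fixes it:

import random

random.seed(1)
N = 1_000_000   # hypotheses tested
PRIOR = 0.001   # 1 in 1000 has a real link
ALPHA = 0.05    # false-positive rate of the 95% interval
POWER = 0.8     # assumed chance of detecting a real link

significant = real_and_significant = 0
for _ in range(N):
    real = random.random() < PRIOR
    # A real link is detected with probability POWER;
    # a non-existent one is "detected" with probability ALPHA.
    if random.random() < (POWER if real else ALPHA):
        significant += 1
        real_and_significant += real

print(significant, real_and_significant,
      real_and_significant / significant)
# Expect roughly 51000 significant results, of which only about 800 are real.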

So if you find significance, the chance of there being a real link is going to be worse than 50 to 1 against. Given that you must go with the most likely option, clearly significance means that you should conclude that you have produced an error, and that there is no link.
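The arithmetic behind that figure, assuming the test never misses the one real link: among 1000 things you expect about 50 false positives and at most 1 true positive, so P(real link | significant) is at most 1/(50 + 1), i.e. roughly 50 to 1 against.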

In order for significance to mean anything other than error, the probability of there being something to find has to be much better than the significance level of the confidence interval, in this case 1 in 20. (In fact, theoretically, it must be at least 1 in 2.) When scientists do statistics, the prior probability of there being something to find is never anything like that high. So in all scientific research, significance means error.
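Making that precise (my formalisation, not the poster's): if p is the prior probability of a link and the test detects a real one with probability w at significance level a, then a significant result is more likely truth than error only when p·w > (1 - p)·a. Even a perfect test (w = 1) at a = 1/20 requires p > 1/21, and since w can in principle be barely above a, only a p of at least 1 in 2 guarantees the conclusion whatever the power of the test; hence the figure of 1 in 2 above.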

If a meta study finds significance, it is because it has found an error.