Number Watch Web Forum

This forum is about wrong numbers in science, politics and the media. It respects good science and good English.

Re: Meta-studies and probability


Ok, the greater the number of studies that are combined, the lower the probability of the results being true.

Re: Meta-studies and probability

For the answer to that see our vocabulary.

Re: Meta-studies and probability

To give you a few comments, Gary:

(a) You seem to be arguing that, as the probability of all the studies being simultaneously true goes down, the 'meta-study' must be heading in the direction of being more false than a single study. However, if you take the probability of each single study being false as 0.05, then the probability of two studies both being false is 0.05 x 0.05 = 0.0025, so the probability of the whole thing being false is actually going down.
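
As a sketch of that arithmetic (Python, using the illustrative 0.95 and 0.05 figures from above):

    # Probability that n independent studies are ALL true, and ALL false,
    # assuming each single study is true with probability 0.95.
    p_true, p_false = 0.95, 0.05
    for n in (1, 2, 5, 10):
        print(n, p_true ** n, p_false ** n)
    # Both columns shrink as n grows: all-true falls from 0.95 towards
    # zero, and all-false falls much faster (0.05 x 0.05 = 0.0025 at n=2).

So 'all true' and 'all false' both become less likely as studies are added; neither figure, by itself, says whether the meta-study is right or wrong.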

(b) Multiplying probabilities together assumes that you are dealing with 'independent events', or 'statistically independent events'. In the case of tossing coins this independence is obviously a reasonable assumption, as one coin 'knows' nothing about the other coin-tossing events, but it is less reasonable for scientific studies. Study B might have been influenced by study A, and the people compiling the meta-study may be deliberately selecting studies which point towards the overall result they want.
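
To see how much the independence assumption matters, compare the two extremes (a sketch; the perfectly dependent case is a deliberately artificial one):

    # Independent studies: P(both false) multiplies.
    p = 0.05
    print(p * p)  # 0.0025
    # Perfectly dependent studies (study B merely echoes study A):
    # B is false exactly when A is, so no multiplication applies.
    print(p)      # 0.05, twenty times larger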

(c) Estimating a value to assign to the probability of even a single study being true isn't straightforward. You may have based your 0.95 probability values on the 'p-value', which is often taken as 0.05 in soft-science work. There was an article in Nature magazine a few months ago criticising the use of the p-value in scientific work, which amongst other things discussed the tendency of many people to assume that the p-value gives a direct indication of the probability that the study is true.

link

Re: Meta-studies and probability

Hmmmmmm.

The probability of 'false' goes down while the probability of 'true' also decreases.

Math is a marvelous thing; but can we base decisions on partial truth?

Is the concept of a study being less true and less false, at the same time, a paradox?

Does a meta-study fail like a chain, where one broken link fails the whole, or like a string of lights, where failure means that current no longer passes all the way through?

I am not great at math and am looking for a concept to give to those that are even less numerate than myself.


Mainly, I think that a meta-study has less validity regarding the truth of a concept, and that the greater the number of combined studies, the smaller the chance of validity.

Re: Meta-studies and probability

You have created an apparent paradox by your lax use of terms. You have replaced the conventional “heads” and “tails” with the new names “true” and “false”. In your argument you go on to use the latter pair in two different senses. The binomial theorem, which is simply applied common sense, tells us that, as the number of coins increases, the probabilities of all-heads and all-tails both go down. Whether either of these is “true” or “false” depends on the veracity of the hypothesis you are testing, but you have not stated one.
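
In coin terms, a sketch:

    # With n fair coins, P(all heads) = P(all tails) = 0.5 ** n.
    # Both fall together as n grows; there is nothing paradoxical in that.
    for n in (1, 2, 5, 10):
        print(n, 0.5 ** n)

Which of those two shrinking probabilities deserves the label “true” or “false” is a property of the hypothesis under test, not of the arithmetic.
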
The scam in most meta-studies is based on the assumption that you can combine two or more tests and make them look as though they are one larger (and therefore more significant) test; hence our definition of trying to make a strong chain of weak links. It becomes a fraud when you omit tests whose results do not fit your requirement (as in the case of the EPA meta-study on passive smoking).

Re: Meta-studies and probability

"The scam in most meta-studies is" that like the studies themselves they are a pack of bull****.

Let us suppose that someone has a substance X that they wish to test for effect Y, say jam and lung cancer. How likely is it that jam causes lung cancer? Rephrase: how likely is it that jam is one of the things that causes lung cancer? Rephrase: what fraction of things cause lung cancer? For the sake of a number, choose 1 in 1000, although the actual likelihood is probably much much less.

Collect data, generate a relative risk, generate a 95% confidence interval on that relative risk, find significance. What inference can you draw?

Choose 1000 things at random and follow the above procedure. Then...

In about 950 cases there will be no link, therefore the parameter (relative risk) will be 1. The interval will contain the parameter, so it will contain 1, so it will not be significant.

In about 50 cases there will be no link, therefore the parameter will be 1. The interval will not contain the parameter, so it will not contain 1, so it will be significant.

In 1 case there will be a link, therefore the parameter will not be 1. The interval might contain the parameter, or not, and it might contain 1, or not.

So if you find significance, the chance of there being a real link is going to be worse than 50 to 1 against. Given that you must go with the most likely option, clearly significance means that you should conclude that you have produced an error, and that there is no link.
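
A rough simulation of that count (a sketch in Python; the 1-in-1000 prior is the figure chosen above, while the 80% chance of detecting a real link is an extra assumption of mine):

    import random

    random.seed(0)
    trials, prior, alpha, power = 100_000, 1 / 1000, 0.05, 0.8
    true_hits = false_hits = 0
    for _ in range(trials):
        real = random.random() < prior  # is there actually a link?
        # A real link is detected with probability `power`; a non-existent
        # one comes out 'significant' with probability `alpha`.
        significant = random.random() < (power if real else alpha)
        if significant:
            if real:
                true_hits += 1
            else:
                false_hits += 1
    print(true_hits, false_hits)
    # Roughly 80 real links against roughly 5,000 spurious 'significant'
    # results: a significant finding is overwhelmingly likely to be error.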

In order for significance to mean anything other than error, the probability of there being something to find has to be much better than the significance level of the confidence interval, in this case 1 in 20. (In fact, theoretically, it must be at least 1 in 2.) When scientists do statistics it is never anything like that high. So in all scientific research, significance means error.
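
That threshold can be made explicit with Bayes' rule (again a sketch; the 80% power figure is my assumption):

    # P(real | significant) = prior*power / (prior*power + (1-prior)*alpha);
    # this exceeds 1/2 only when prior > alpha / (alpha + power).
    alpha, power = 0.05, 0.8
    for prior in (1 / 1000, 1 / 100, 1 / 20, 1 / 2):
        post = prior * power / (prior * power + (1 - prior) * alpha)
        print(f"prior {prior:.3f} -> P(real | significant) {post:.3f}")
    # At a 1-in-1000 prior, under 2% of significant results are real.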

If a meta study finds significance, it is because it has found an error.