Number Watch Web Forum

This forum is about wrong numbers in science, politics and the media. It respects good science and good English.

confidence levels

If I flip a coin, there is a 50% chance it will come up heads (a 50% confidence level).

If I flip a second coin, there is also a 50% chance it will come up heads (a 50% confidence level).

If I flip two coins at the same time, there is not a 50% chance that they will both come up heads.

There is only a 25% chance (confidence level) that they will both come up heads.
One multiplies the probabilities: 0.5 x 0.5 = 0.25 (25%).
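To make that concrete, a quick simulation (Python; fair, independent coins assumed):

```python
import random

random.seed(0)
trials = 100_000
# Flip two fair coins per trial and count the trials where both land heads
both_heads = sum(
    random.random() < 0.5 and random.random() < 0.5 for _ in range(trials)
)
print(f"Fraction with both heads: {both_heads / trials:.3f}")  # ~0.25 = 0.5 x 0.5
```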

If you have two studies, each with a 95% confidence level, and combine the data into one study, does that new study still have a 95% confidence level, or is the confidence level only about 90% (0.95 x 0.95 ≈ 0.90)?

If you combine 5 such studies into one, will the confidence level for the new study be 95%, or will it be only about 77% (0.95 x 0.95 x 0.95 x 0.95 x 0.95 ≈ 0.77)?
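The arithmetic the question assumes, spelled out:

```python
# Joint confidence if n independent results at the 95% level must all be right
for n in (2, 5):
    print(f"{n} studies: 0.95^{n} = {0.95 ** n:.2%}")
# 2 studies: 90.25%; 5 studies: 77.38%
```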

Re: confidence levels

If you go from the Number Watch index to FAQs and then to "What has the weakest link to do with fallacies in medical statistics?", you will find a table with the results you require.
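If, as the follow-up below applies it, that table gives the joint confidence 0.95^n for n results at the 95% level, it can be reproduced in two lines (an assumption about the table's contents, not a quotation of it):

```python
for n in range(1, 11):
    print(f"n = {n:2d}: 0.95^{n} = {0.95 ** n:.1%}")
# n = 10 gives 59.9%, the ~60% figure quoted below for a 10-study analysis
```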

Re: confidence levels

Thank you Dr. B.

But does that table apply to a meta-analysis of 10 studies used to determine the probability of one disease being caused by, or associated with, one particular factor?

Using that table, am I correct in finding that the 1993 EPA meta-analysis (10 studies) on SHS and lung cancer has only a 60% confidence level?

Re: confidence levels

It is important not to confuse two equal and opposite frauds. A data dredge takes one big survey and pretends it is a lot of little trials, to which the table applies. A metastudy takes a lot of little insignificant trials and pretends they are one big significant one. How these data are combined is a more obscure process that I have never fathomed.
That EPA metastudy has five clear frauds in it (see for example April 2003), the biggest being that they began work on the anti-smoking legislation four years before they started to manufacture the test data. The results actually indicate that SHS is harmless.
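To illustrate the data-dredge half of that: slice one big null survey into twenty subgroups, test each at the 95% level, and the chance of at least one spurious "significant" result is 1 - 0.95^20, about 64%. A sketch with made-up null data (independent subgroups assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dredges, subgroups, n = 2000, 20, 50
hits = 0
for _ in range(dredges):
    # Null data: "exposed" and "unexposed" outcomes drawn from the same distribution
    exposed = rng.normal(size=(subgroups, n))
    unexposed = rng.normal(size=(subgroups, n))
    pvals = stats.ttest_ind(exposed, unexposed, axis=1).pvalue
    hits += bool((pvals < 0.05).any())
print(f"Dredges with at least one 'significant' subgroup: {hits / dredges:.0%}")
print(f"Theory: 1 - 0.95**20 = {1 - 0.95 ** 20:.0%}")  # ~64%
```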

Re: confidence levels

"A metastudy takes a lot of little insignificant trials and pretends they are one big significant one."

Would it be true to say that a metastudy does little more than increase the probability that a conclusion is incorrect?

It would seem that the amount of said increase would be governed by the maths of 'conditional probability'.

Going back to coin flipping.

If you take two coins in your hand and then throw them under a cloth:

Each coin has a 50% chance of being heads; but, given the condition that your first coin comes out from under the cloth as heads, there is only a 25% probability that the second coin will also do so.

Perhaps part of the problem with metastudies is that there is no consideration of 'conditional probability'?

Re: confidence levels

Now you have lost me.

Re: confidence levels

That is certainly not your fault.

I was trying to explain a concept and it was, I'm afraid, poorly done.
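For the record, what the cloth example gives under independence: the joint probability of two heads is 25%, but the conditional probability for the second coin, once the first has shown heads, remains 50%. A quick check:

```python
import random

random.seed(0)
flips = [(random.random() < 0.5, random.random() < 0.5) for _ in range(100_000)]
both = sum(a and b for a, b in flips)
second_given_first = [b for a, b in flips if a]
print(f"P(both heads)            = {both / len(flips):.3f}")  # ~0.25
print(f"P(2nd heads | 1st heads) = "
      f"{sum(second_given_first) / len(second_given_first):.3f}")  # ~0.50
```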

Re: confidence levels

This free-to-view paper from the Lancet should give an idea of how the confidence interval for the meta-analysis relates to the confidence intervals of the individual contributing studies. The meta-analysis still has a 95% confidence level.

The paper seems to be using a technique for combining results from the contributing studies called the Mantel-Haenszel method (it quotes some other methods as well); details are given on a linked webpage.
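For the curious, a minimal sketch of the fixed-effect Mantel-Haenszel pooled odds ratio across several 2x2 tables (the study counts below are invented for illustration, not taken from the paper):

```python
# Each study is a 2x2 table (a, b, c, d):
# a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls
def mantel_haenszel_or(tables):
    """Pooled odds ratio: sum(a*d/n) / sum(b*c/n) over all strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

studies = [(10, 90, 8, 92), (25, 175, 20, 180), (6, 44, 5, 45)]  # hypothetical
print(f"Pooled odds ratio = {mantel_haenszel_or(studies):.2f}")
```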

Re: confidence levels

Meta-analysis can work fine if you are combining several similarly executed trials, which then gives you the statistical power to see effects that the smaller trials individually were not powered to detect.

The problem with most of the epidemiological stuff is not the use of meta-analysis per se; it is that the statistical testing is applied to large numbers of post-hoc hypotheses, and the tests are really designed to look for the effects of interventions.

Since you can't ethically perform an interventional study with tobacco smoke (we know it's bad for you, and the study has no prospect of benefitting the participants), you have to assign "treatment groups" on the basis of asking people about past, incidental exposure. This is not only notoriously inaccurate, it introduces a range of biases you can't control for. One odd result off the top of my head: Dutch tea drinkers are more likely to smoke than Dutch non-tea drinkers. If the correlation is strong enough (or the study large enough), you could demonstrate that drinking tea causes heart disease in the Netherlands.

Bigger studies also take disproportionate effort to do important things like age and sex matching of controls; importantly, failing to do this is likely to dilute real effects but potentiate stochastic ones.
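The tea example is classic confounding, easy to reproduce with invented numbers (all the rates below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
smoker = rng.random(n) < 0.3
# Assumption: tea drinking is more common among smokers (the Dutch oddity above)
tea = rng.random(n) < np.where(smoker, 0.7, 0.4)
# Heart disease depends on smoking only, never on tea
disease = rng.random(n) < np.where(smoker, 0.10, 0.05)

def rate(mask):
    return disease[mask].mean()

print(f"Disease rate, tea drinkers:     {rate(tea):.3f}")   # higher
print(f"Disease rate, non-tea drinkers: {rate(~tea):.3f}")  # lower
# Stratify by smoking and the 'tea effect' vanishes:
for s in (False, True):
    print(f"smoker={s}: tea {rate(tea & (smoker == s)):.3f} "
          f"vs no tea {rate(~tea & (smoker == s)):.3f}")
```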

In the clinical world, statistically significant results (uncorrected for multiplicity) on things other than your powered, primary efficacy variable are considered interesting findings that might or might not be worthy of further investigation. At best they are supportive of an efficacy claim; you couldn't usually base a claim strong enough to get a marketing license on them, for example. In the public health world, one P<0.05 among twenty risks (never benefits) tested for is considered adequate justification for draconian legislation.
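The multiplicity point in numbers: run twenty independent tests at P<0.05 on pure noise and the chance of at least one spurious "finding" is about 64%; a Bonferroni-style correction restores the intended 5% overall error rate (a generic sketch, not any particular agency's procedure):

```python
tests, alpha = 20, 0.05
print(f"P(at least 1 false positive, uncorrected) = {1 - (1 - alpha) ** tests:.0%}")
bonferroni = alpha / tests  # per-test threshold
print(f"Bonferroni per-test threshold             = {bonferroni:.4f}")
print(f"P(at least 1 false positive, corrected)   = {1 - (1 - bonferroni) ** tests:.1%}")
```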