I hesitate to put my tuppence-worth in but here goes.

Assuming the original question was set up correctly, the work has been done in such a way as to avoid biasing the result (big assumptions), and the calculations have been done correctly, then the OR, of course, is valid. "All" we are arguing about is whether it matters.

I would like to suggest that if someone is reporting this in a paper, the result is highly unlikely to be as significant as claimed. Why?

1) It is unlikely that the uncertainty of the CI itself has been taken into account. Simplistically, the standard error is multiplied by a factor, often 1.96 or 2. But the standard deviation behind that standard error is itself an estimate with its own uncertainty, which is hardly ever accounted for. The cynic in me suggests this is because accounting for it would widen the CI and increase the probability of the result being obtained by chance.
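A quick simulation (my own illustration, not from the original post) makes point 1) concrete: when the SD is estimated from a small sample and you naively multiply the standard error by 1.96, the resulting interval covers the true mean less often than 95% of the time. The t-distribution factor exists precisely to account for the SD's own uncertainty. All numbers below (sample size, seed, trial count) are arbitrary choices for the demonstration.

```python
# Simulate coverage of "mean +/- factor * SE" intervals when the SD is
# itself estimated from a small sample. The z factor (1.96) ignores the
# SD's uncertainty; the t factor corrects for it.
import random
import statistics

random.seed(1)
n = 5                 # small sample, so the SD estimate is noisy
true_mean = 0.0
trials = 20_000
z = 1.96              # normal-based factor, ignores SD uncertainty
t = 2.776             # t-distribution factor for n - 1 = 4 df, 95%

cover_z = cover_t = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5   # estimated standard error
    if abs(m - true_mean) <= z * se:
        cover_z += 1
    if abs(m - true_mean) <= t * se:
        cover_t += 1

print(f"z-based coverage: {cover_z / trials:.3f}")  # noticeably below 0.95
print(f"t-based coverage: {cover_t / trials:.3f}")  # close to 0.95
```

With n = 5 the naive z-based interval typically covers the truth only around 88% of the time, i.e. the real false-positive rate is roughly double the nominal 5%.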

2) There is invariably a wide range of statistical tests that could be applied to a given set of results. Equally, there are many ways of setting up the test or of obtaining the data. These will all give different 95% CIs. Is the author likely to take the set of results and the statistical test that give the widest CI?

It is likely that the author will choose the test that gives the narrowest CI. This may give a value that comes out significant at the 95% level while all the others would look worse.
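To illustrate point 2) (again, my own sketch with made-up counts): even for one fixed 2x2 table, the 95% CI for the odds ratio depends on which method you pick. Below, a plain Wald interval on the log odds ratio versus the same calculation with the Haldane-Anscombe correction (add 0.5 to every cell). Neither is "wrong", but they give different intervals, which is exactly the selection opportunity described above.

```python
# Two legitimate 95% CI methods for the odds ratio of one 2x2 table.
import math

a, b, c, d = 12, 5, 7, 14   # hypothetical 2x2 table counts

def wald_ci(a, b, c, d, z=1.96):
    """Wald 95% CI for the odds ratio, computed on the log scale."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - z * se)
    hi = math.exp(math.log(or_hat) + z * se)
    return lo, hi

plain = wald_ci(a, b, c, d)
# Haldane-Anscombe: add 0.5 to each cell before the same calculation.
corrected = wald_ci(a + 0.5, b + 0.5, c + 0.5, d + 0.5)

print(f"plain Wald:       ({plain[0]:.2f}, {plain[1]:.2f})")
print(f"Haldane-Anscombe: ({corrected[0]:.2f}, {corrected[1]:.2f})")
```

Same data, two defensible methods, two different intervals; an author shopping for significance can simply report whichever one looks best.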

A further factor is to consider the implications of getting the answer wrong. 95% is arbitrary but conventional: you are willing to be wrong one time in 20 (assuming you could make the same decision a large number of times). But suppose someone says this involves a chemical plant holding a huge quantity of dangerous material. If you conclude that you have a better than 95% chance of being OK, is that good enough? What are the risks of getting it wrong?

It seems to me that your "simple" question is not simple at all!!

Unless it does not give the answer you want, in which case you change the level of significance or the test. Alternatively, you split the results up in different ways until something becomes significant. As I suggested before, many of these things have probably already been done before the work is published. You don't need to tell people this is what you did, of course - that would only confuse them ...

/sarc But we can probably all give some examples of where this has been done ...

--- --- --- --- --- --- --- --- ---

Replying to:

After searching around a lot, I found an answer.

The key idea is significance, and significance is what implies validity.

When dealing with odds ratios (ORs), the magic number is 1.0, the null hypothesis (i.e., no effect).

If you can get your CI completely above 1.0, you have at least a claim on the effect (although many other factors come into that assessment).

If the interval is completely below 1.0, you have an inverse correlation (with the same qualifications).

But if the interval straddles (or even includes) 1.0, you have squat.
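The rule in the quoted reply can be sketched in a few lines (my own illustration; the table counts and function name are hypothetical): compute a 95% Wald CI for the odds ratio from a 2x2 table and classify it against the null value of 1.0.

```python
# Classify an odds ratio's 95% CI relative to the null value 1.0.
import math

def classify_or(a, b, c, d, z=1.96):
    """Return the OR, its 95% Wald CI, and which side of 1.0 the CI falls on."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - z * se)
    hi = math.exp(math.log(or_hat) + z * se)
    if lo > 1.0:
        verdict = "entirely above 1.0: a claim on a positive association"
    elif hi < 1.0:
        verdict = "entirely below 1.0: a claim on an inverse association"
    else:
        verdict = "includes 1.0: squat"
    return or_hat, (lo, hi), verdict

print(classify_or(30, 10, 15, 25))   # CI entirely above 1.0
print(classify_or(20, 20, 21, 19))   # CI straddles 1.0
```

As the longer reply above argues, this classification is only as trustworthy as the study design and the CI calculation behind it.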