Unless it does not give the answer you want, in which case you change the significance level or the test. Alternatively, you split the results up in different ways until something comes out significant. As I suggested before, many of these things have probably already been done before the work is publicised. You don't need to tell people this is what you did, of course - that would only confuse them ...

/sarc But we can probably all give some examples of where this has been done ...
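The "split it up until something is significant" trick is easy to demonstrate with a quick simulation. Everything below (the sample sizes, the 20 subgroups, the normal-approximation z-test) is just my own illustration of the multiple-comparisons problem, not taken from any real study:

```python
import math
import random

random.seed(42)

def p_value_two_sample(x, y):
    """Two-sided p-value for a two-sample z-test (normal approximation,
    pure stdlib - good enough for an illustration)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Null world: there is NO real effect anywhere. But we slice the cohort
# 20 different ways and run a separate test on each slice.
trials = 1000
hits = 0
for _ in range(trials):
    found = False
    for _subgroup in range(20):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        if p_value_two_sample(a, b) < 0.05:
            found = True  # at least one "significant" subgroup
    hits += found

print(f"Datasets with at least one 'significant' subgroup: {hits / trials:.0%}")
```

With 20 independent looks at pure noise, roughly 1 - 0.95^20, about two thirds of datasets, will hand you at least one p < 0.05 to publish.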

--- --- --- --- --- --- --- --- ---

Replying to:

After searching around a lot, I found an answer.

The central idea is significance, and significance is taken to imply validity.

When dealing with odds ratios (ORs), the magic number is 1.0, the null value (i.e., no effect).

If you can get your CI entirely above 1.0, you have at least a claim to an effect (although many other factors come into that assessment).

If the interval is entirely below 1.0, you have an inverse association (with the same qualifications).

But if the interval straddles (or even includes) 1.0, you have squat.
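That check is easy to do by hand. Here is a minimal sketch, assuming a standard 2x2 table and the usual Wald (normal-approximation) confidence interval on the log odds ratio - the counts are made up for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        exposed:   a cases, b controls
        unexposed: c cases, d controls
    Assumes no zero cells (a continuity correction would be needed otherwise).
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the sum of reciprocal cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# Lower bound above 1.0 -> a claim to an effect; interval straddling 1.0 -> squat.
```

For these invented counts the whole interval sits above 1.0, so this hypothetical study could claim an effect; shrink the samples and the interval widens until it swallows 1.0.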