Actually Brad, I don't think the first half of the Ziliak article, endorsed by Briggs, is talking all that much sense. The second half of the article, where he is going on about people using statistical significance tests with little regard to the size of the effect for which statistical significance is being demonstrated, is more sound, and this issue seems to have been the central theme of a book Ziliak wrote a few years ago called "The Cult of Statistical Significance".

The argument in the first half of the Ziliak article has not been particularly well-received elsewhere on the internet, an example being a heated response from Luboš Motl, theoretical physicist and AGW sceptic blogger.

The impression I got from the first half of Ziliak's article is that neither he nor Briggs is all that familiar with laboratory work. When you do a physics experiment in secondary school, often only one measurement is made. The full statistical treatment of the experimental data, where you estimate a standard deviation for the measurement, is missed out, probably due to time constraints and possibly to avoid making lab work look too dull, which might put people off studying physics at university. All experimentally determined values need a standard deviation quoted for them, or you don't know how accurate they are. Once you accept the idea that a measurement has to have a standard deviation, you sign up to the idea of wee p-values.
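For what it's worth, the "full statistical treatment" the school labs skip amounts to only a few lines. A minimal sketch in Python, using made-up timings of a ball rolling down a ramp (the numbers are invented for illustration):

```python
import statistics

# Hypothetical data: five repeated timings (seconds) of the same ramp run.
timings = [2.31, 2.28, 2.35, 2.30, 2.33]

mean = statistics.mean(timings)      # best estimate of the true time
sd = statistics.stdev(timings)       # sample standard deviation (n-1 denominator)
sem = sd / len(timings) ** 0.5       # standard error of the mean

print(f"t = {mean:.3f} +/- {sem:.3f} s  (single-measurement sd: {sd:.3f} s)")
```

The standard error shrinks as the square root of the number of repeats, which is exactly why one measurement tells you so little about how accurate you are.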

I suspect that if you were to ask Professor Briggs, he would agree with you. When you are measuring the velocity and mass of objects that are going to collide, you have errors associated with the measurements. Multiple measurements are necessary, and evaluating the standard deviation of those measurements is useful.

I don't think it is the evaluation of the measurements these guys are going after (although I need to read the book "Cult of Statistical Significance" to be sure). It is the focus on the wee p-value that is of concern. In physics they aren't playing with p<0.05. They use statistical significance, but I am pretty sure they are using the flip side of it. In epidemiology, that p-value is the prize winner. In physics, it is a reference to see if you should be worried.
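To put rough numbers on that difference: particle physics conventionally demands a five-sigma excess before claiming a discovery, which corresponds to a far smaller one-sided p-value than the familiar 0.05 cut-off. A quick sketch of the conversion (the 1.645 and 5.0 z-scores are the standard one-sided thresholds, nothing from the article itself):

```python
import math

def one_sided_p(sigma):
    """One-sided tail probability of a standard normal at the given z-score."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p_2sigma = one_sided_p(1.645)  # roughly 0.05, the usual one-sided cut-off
p_5sigma = one_sided_p(5.0)    # roughly 2.9e-7, the "discovery" bar
```

Five sigma is about five orders of magnitude stricter than p<0.05, which is the sense in which physicists "aren't playing" with the epidemiologists' threshold.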

A friend and I were talking about business the other day. There is a simple model to follow to make sure you stay in business: Revenue > Expenses. A fundamental, yes? How do you evaluate it? My dad tried to teach this idea to the CEOs of several healthcare organizations. He would open the bank account: if there was more money at the end of the month than at the beginning, chances are you weren't going to shut the doors next month.

The wee p-value ignores this idea.

Open the world population table. If you have more people this month than last month, world depopulation is not going to happen tomorrow. If the average age of death is increasing, chances are people are living longer.

Back to physics. If they say that they have found the Higgs boson because the p-value was less than 0.05, I will post a comment saying "Physics is no longer a real subject". If they say, "Here is the Higgs boson: X target was hit Y times, Z target V times, the chart of the data is H, and the p-value is rho (where rho is a pretty small number, significantly less than 0.05)," I won't feel like statistical significance has overruled sense in the physics world.
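That style of reporting can be sketched with invented counts. Suppose (purely hypothetically, these numbers are not from any real experiment) the background model predicts 100 events in the signal region and 160 are seen; the p-value is then the Poisson probability of seeing at least that many events from background alone:

```python
import math

def poisson_excess_p(observed, expected):
    """P(N >= observed) for a Poisson background with mean `expected`."""
    # Sum the probabilities of all counts below `observed` in log space
    # (avoids overflow in expected**k), then take the complement.
    cdf = sum(math.exp(-expected + k * math.log(expected) - math.lgamma(k + 1))
              for k in range(observed))
    return 1.0 - cdf

# Invented counts: 100 background events expected, 160 observed.
p = poisson_excess_p(160, 100)
```

Reported alongside the raw counts and the chart, a number like this tells you both the size of the excess and how surprising it is, rather than just whether it cleared an arbitrary bar.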

Statistical significance is a tool. It can be used well. If chemists, physicists, engineers, and other hard-science people start using statistical significance the way the epidemiologists use it, we have a problem.

At the end of the day, the physicist can still load a cannon with the help of a chemist, lob Coke cans filled with concrete at targets, and get better at hitting the target. Epidemiologists, on the other hand, keep watching them and noting the color of their shoes, the length of their fingernails, the sway of their hair and the cadence of the clapping done to get the chemicals off their hands, push the data through SMS, and discover great things about their effect on the projectiles.