This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
Among engineers, great concern is expressed about computer models and spreadsheet calculations.
The use of such "shortcuts" and evaluations is everywhere hedged around with admonitions to fully understand and check the work done automatically by a spreadsheet or program, even on "well understood" subjects.
The use of computer models is even more cautiously treated.
This link is to a blog by a well respected engineer.
One of the most startling revelations from the Climategate emails was actually in Harry's notes. The evidence seemed to suggest that, far from any top-line computer programming specialists being involved, the programs appear to have been written by a school drop-out.
There was not much to commend the standard of the model, far less the calculations used.
Yet the truth is now out.
It isn't about the science, it is about the money and those who will get rich (Al Gore, Mr Clegg's wife, Mr Cameron's father-in-law and so on) and the politics of anarchy and envy. Those who want to see social engineering on a scale that makes eugenics look like a kindergarten game.
There are some simple minded folk at play here and there are some very dangerous people playing in the background.
Yet still the MSM are proclaiming AGW as proven.
We live on a planet where people are strangely illogical. I think I'd like to move to Vulcan.
In the good old days you tested your theory out in a lab or with a test rig. If your theory was proven you wrote it up and published a paper. Ideally your paper contained enough information for anyone else with sufficient interest to replicate your results. Such corroboration established your method and ensured your immortality (only joking) or, at the very least, got your method included in the textbooks as an accepted approach.
As computers have become more prevalent over recent decades they have been used, not surprisingly, in more and more research. Let's face it, computers are really good at number crunching, and as cheaply as you please. What they are not good at, though, is accuracy, especially the errors that can build up cumulatively, rounding step by rounding step; modelling of complex systems is a prime example. They will also slavishly follow your methodology, and if that methodology is fundamentally flawed then your computer program will be similarly flawed.
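The point about cumulative error can be seen in a toy Python sketch (an illustration of floating-point drift in general, not taken from any climate model): 0.1 has no exact binary representation, so every addition carries a tiny rounding error, and over millions of operations the drift becomes measurable.

```python
# Minimal sketch: cumulative floating-point error.
# 0.1 cannot be represented exactly in binary, so each addition
# introduces a tiny rounding error that accumulates over many steps.
total = 0.0
for _ in range(10_000_000):
    total += 0.1

exact = 1_000_000.0
print(total)               # close to, but not exactly, 1000000.0
print(abs(total - exact))  # the accumulated drift
```

Ten million additions of an "exact" 0.1 do not give an exact million; the longer the run, the more such drift a complex model can accumulate.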
Concern was raised some years ago when published research started to be based on large, complex program runs. And with good reason. As I know myself, such runs of huge coding sets rarely even repeat results (especially if you use stochastically based methods). So even you yourself fail the repeatability test! All you can do is repeated runs that produce a trend. If the trend successfully supports your hypothesis then wheyhey! you go ahead and publish.
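The repeatability problem can be illustrated with a toy Monte-Carlo average (a hypothetical Python sketch, not anyone's actual model): unseeded runs give a different number every time, so only the trend over many runs means anything; fixing the random seed is what makes a single run exactly repeatable.

```python
import random

def noisy_mean(n, seed=None):
    """Toy stochastic 'model': the mean of n random draws."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

# Two unseeded runs almost never agree exactly:
a = noisy_mean(1_000)
b = noisy_mean(1_000)
print(a, b)  # different numbers on every run

# Seeding the generator restores exact repeatability:
c = noisy_mean(1_000, seed=42)
d = noisy_mean(1_000, seed=42)
assert c == d
```

Published model runs that neither fix their seeds nor report run-to-run spread leave a reader with nothing exact to replicate.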
But who is going to check and replicate your results? The primary problem is not your basic principles (which I assume you put into your paper) but your code: who is going to check it, assuming you even make it available – and most, as we know, don't? The answer is no one. Peer review will not. Who has the time to go through thousands of lines of code to see if it does what you say it does? And who will know where those little fudge factors are that you slipped in one rainy Sunday afternoon just to make the thing work? The answer, again, is no one.
When you write any sort of program it will always be wrong when you first run it. If it has any level of complexity it will be wrong for an awfully long time before the results start to make any sort of sense. It is dreadfully difficult to get a complicated computer program working even when you already know what the results should be. At least then you have a benchmark to aim for. When you are modelling something like the climate, where is the benchmark? Where is the set of figures that tells you that you are wrong? There is none – other than the people who will be around in fifty years' time, who will by then have completely forgotten you.
I propose that output from computer models should never be accepted as proof of anything. Models are fine for testing a hypothesis, as a step towards a proper test using something more tangible. But they can never be used as the defining proof of a hypothesis. We see examples every day of such things being shown up for what they are. Wrong!
Then too, if you have the code but not a Cray supercomputer, how can you verify it?
Will you be able to get a time slot to use the computer?
We take PCs for granted.
The Apollo space program, which used to require row upon row of skilled technicians, each with a computer, could now be run on a low-end laptop.
But at the other end of the spectrum there is number crunching and number crunching.
If the program is so complex and so time-consuming that you need to break it down and run it as screen savers on 20 million PCs, or you need a supercomputer, then it gets much more difficult to replicate the work.
The thing is that engineers tend to use computers to do the grunt work. Mostly it is to perform calculations and procedures that they can do and normally would do by hand. The computer is simply a tool to make work easier.
But can climate models be checked by hand? Do they simply make calculations easier, or is there no other way?
"The thing is that engineers tend to use computers to do the grunt work. Mostly it is to perform calculations and procedures that they can do and normally would do by hand. The computer is simply a tool to make work easier."
I don't think it is quite as simple as that. Some of the more technical engineering industries like aerospace, nuclear and offshore oil have used computer analysis since the 1960s, and maybe even before that. Nobody would use a computer in the form that was available in the 1960s for convenience; it would only be used if the calculations couldn't be performed by hand. The computer programs would, however, normally be verified against some textbook hand-calculation cases, which would give confidence that they could handle other cases that could not be easily checked by hand.
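That kind of textbook verification can be as simple as the following (a generic Python sketch, not any particular firm's procedure): before trusting a numerical routine on cases you cannot do by hand, check it against a case with a known closed-form answer.

```python
def trapezoid(f, a, b, n):
    """Numerically integrate f over [a, b] using n trapezoidal strips."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

# Textbook check: the integral of x^2 over [0, 1] is exactly 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1_000)
assert abs(approx - 1.0 / 3.0) < 1e-5  # routine agrees with the hand calculation
```

Passing such a check does not prove the routine is right for every input, but failing it proves the routine is wrong, which is exactly the benchmark that unverified models lack.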
Some engineering firms were able to avoid using computers until desk-top PCs became available, or fashionable, in the 1980s and 1990s. Firms of this sort, which avoided getting involved with all the card-punch machines, IBM mainframes, VAX mini-computers and computer bureau services that existed before PCs, must have been able to do whatever calculations they had to do by hand, and are likely to be using computers only for convenience.
I suppose the classic example of an industry that uses computers for convenience would be "The City" (London's financial district). As far as I'm aware they showed absolutely no interest in computers whatsoever until the mid-80s, and then adopted desk-top computing in a big way to gain an advantage over rival financial centres. But without realising it, they and other financial centres have moved into financial risk modelling in recent years, which can't really be checked by hand, and some people think this was a big contributory factor to the financial meltdown of a couple of years ago.
As a guy who writes web applications, I have to say that figuring out why my applications give the results they do takes up at least half my time. The code is pretty straightforward. It is the data that keeps changing. Understanding why the data looks the way it does takes a little skill.