This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
Having a tangential discussion on the measurement of temperature extremes on Warwick Hughes' blog. (From the 3rd comment)
Back in the old days, brief temperature transients would have been "filtered out" by the slow response time of e.g. the mercury thermometer; especially when compared to modern methods of measuring air temperature.
An interesting train of thought: the "new records" may represent no more than our improved ability to detect short-term temperature transients. Unless care has been taken to ensure that modern equipment has the same "time constants" as the old, the records are simply not comparable.
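The filtering argument can be sketched with a toy first-order sensor model. All of the numbers below (time constants, spike size and duration) are illustrative assumptions, not actual instrument specifications:

```python
# Sketch: how a sensor's time constant filters short temperature transients.
# A first-order sensor obeys dT_reading/dt = (T_air - T_reading) / tau.

def simulate_peak(tau, dt=0.1, duration=600.0, spike_start=100.0,
                  spike_len=30.0, base=25.0, spike=5.0):
    """Maximum reading of a first-order sensor with time constant tau (seconds)
    exposed to a rectangular air-temperature spike of the given length."""
    reading = base
    peak = base
    for i in range(int(duration / dt)):
        t = i * dt
        air = base + (spike if spike_start <= t < spike_start + spike_len else 0.0)
        # Euler step of the first-order lag
        reading += (air - reading) * (dt / tau)
        peak = max(peak, reading)
    return peak

slow = simulate_peak(tau=60.0)  # assumed sluggish mercury-in-glass-like sensor
fast = simulate_peak(tau=5.0)   # assumed quick electronic probe
# The fast sensor records almost the full 30 degC peak of the 30-second spike;
# the slow sensor only reaches about 27 degC before the spike ends.
```

Under these assumptions, the two instruments report different daily maxima from identical air, which is exactly the comparability problem raised above.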
This looks right up our host's alley.
Some VERY good points raised with that. A long time ago I worked for a couple of years as a furnaceman at a foundry and ran into the same problem with a change in the type of disposable temperature probe tips. The newer ones were of much lighter construction, and while the average temperature readings were the same, the peaks were much higher. It caused some problems initially, as a plot of the temperature was only available via printout and so was rarely used; instead the peak temperature, shown on a digital display, was the preferred reading. It was quite astounding how much variation in temperature there was within the eddies and swirls of a mass of molten steel.
Sounds entirely plausible, but one assumes the effect would be to see more extremes in both directions, in roughly equal numbers? Of course that might be happening, with the new record lows either being culled by reporting bias, or also being blamed on global warming.
Actually, I guess the probe is always going to be playing catch-up with the air temperature. A true extreme will never be measured unless the probe is infinitesimally small, or the temperature remains at exactly that extreme for long enough that the remaining difference falls below the measurement error. That latter point might also affect the contention: measurement error was presumably much higher with mercury than with modern probes. So I am no longer sure that I'd expect to see more extremes with modern equipment.
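The catch-up time can be made concrete. For a first-order sensor, the gap to the true air temperature decays exponentially, gap(t) = gap(0)·exp(−t/τ), so the time to read within a given error is t = τ·ln(gap(0)/error). The time constants and error figure below are assumptions for illustration:

```python
# Sketch: how long a first-order probe needs to close the gap to the true
# air temperature to within a given measurement error.
import math

def settle_time(tau, initial_gap, error):
    """From gap(t) = initial_gap * exp(-t / tau), solve gap(t) = error
    for t, giving t = tau * ln(initial_gap / error)."""
    return tau * math.log(initial_gap / error)

# Illustrative: a 2 degC step, read to within 0.1 degC
t_slow = settle_time(tau=60.0, initial_gap=2.0, error=0.1)  # ~180 seconds
t_fast = settle_time(tau=5.0, initial_gap=2.0, error=0.1)   # ~15 seconds
```

So under these assumed numbers the slow instrument needs the extreme to persist about twelve times longer before it registers, which is the commenter's point about extremes having to "remain for a very long time".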
Or perhaps, due to the directional flow of heat, one is more likely to encounter small packets of warmer air than colder ones.
There's a follow-up by Warwick Hughes, who received some expert feedback on the transition in temperature-measurement technology.
His expert's response (in part) says:
The move to AWS and the concurrent shift of many sites from Town Post Office to airports is like a fault line through the Australian climate records. In many cases the site change is not registered in the data because both have the same station number. This fault line is papered over by the analysis of the data using GIS procedures – any analysis is based on the stations reporting for that day/month and does not require a continuity of record at the site. Thus the ‘Australian’ daily maximum, and other BoM published analyses represents the available data interpolated to grid points and then areal averaging the gridpoints, not the data. For most practical purposes this is OK; the problems arise when trends or new records are claimed!