This forum is about wrong numbers in science, politics and the media. It respects good science and good English.
I concede that you know more about this subject than I do, but I am not sure what point you are making with your curves or where you believe I have been misleading.
There is a difference between “saturating” and “saturated”. It is like the difference between “ill” and “dead”. If a system is “saturating” its gain decreases as the input variable increases. It is a matter of immense importance in the theory of feedback instability. In vague generalities, a logarithmic response in the loop will militate against runaway instability of the kind we are frequently warned about, unless, of course, there is a counteracting exponential response elsewhere in the loop.
I note that you have chosen an exponentially increasing input range, so for a logarithmic response one would expect a linear increase in response. At the outer wavelengths your incremental responses look to be greater than linear, but perhaps I have misunderstood.
On second thoughts my last comment was rather naive and needs further thought.
Like John I don't quite follow your argument.
Obviously the true logarithmic variation tending towards saturation, with a diminishing contribution of absorbed energy for each successive incremental thickness, applies only to systems having a top-hat, square-edged characteristic against wavelength (or whatever metric is appropriate). Hence the concept is excellent for thermal conduction but needs to be used with care in radiation systems, where there are no sharp edges and increasing concentrations bring up low-value wings.
However the basic situation remains the same, as the contribution from the low-value wings is, by definition, small unless the concentration of the absorber becomes absurdly high. This rising-wing effect can never drive runaway increases; it will merely damp the approach of the absorption function towards unity.
Rounding a quick-and-dirty estimate of the areas under the curves in your example to maximise the differential gives 50 at 0.5 times current atmospheric CO2 and 60 at 4 times current CO2. An 800% variation in the driver gives less than a 20% variation in the output. At these levels it's pretty much irrelevant whether the curve of variation is logarithmic, linear, parabolic or Gordon Brown's economic mean. The practical effect is pretty negligible. Assuming that CO2 radiation blocking accounts for 3 °C of warming at 0.5 times current levels, an extra 20% is 0.6 °C. As the actual warming is around 30 to 35 °C, the overall increase can only be around 2%, which, in any real-world measurement, will be lost in the noise. Clearly the output response to the 800% change in the input driver is far too small to drive any sort of runaway feedback.
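For anyone who wants to check the back-of-envelope arithmetic, here it is in a few lines of Python. The 50 and 60 areas are my rough eyeball estimates from the curves, and the 3 °C and 30-35 °C figures are the assumptions stated above, not measured values:

```python
# Rough check of the arithmetic above; the 50 and 60 area figures are
# eyeball estimates, and 3 deg C / 30-35 deg C are the assumed CO2
# contribution and total greenhouse warming quoted in the post.
area_low, area_high = 50.0, 60.0          # areas at 0.5x and 4x current CO2

driver_ratio = 4.0 / 0.5                  # eightfold change in the driver
output_change = area_high / area_low - 1  # fractional change in output, ~0.2

extra_warming = 3.0 * output_change       # ~0.6 deg C on the assumed 3 deg C
relative = extra_warming / 32.5           # against ~30-35 deg C total, ~2%

print(driver_ratio, round(output_change, 2), round(extra_warming, 2),
      round(relative * 100, 1))           # prints: 8.0 0.2 0.6 1.8
```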
In the days when I was involved with long-range thermal detection I arbitrarily assumed that any absorption region over 80% was blocked by a top-hat function and ignored the wings. I got answers correct to 0.1 °C or better, whilst more sophisticated models frequently needed a good helping of Monks Coefficient to get it right. Of course, having got answers correct to 0.1 °C under carefully controlled conditions, it proved almost impossible to get across to people that real-world measured variations could easily be 2 °C or more, so relative variations might track quite well but reliable absolute measurements at that level were impossible.
My point is that a logarithmic relationship for infrared absorption, which by the way follows straightforwardly from the Beer-Lambert law, is not the same as a saturating absorption.
Every doubling of CO2 gives a forcing of 3.7 W/m2.
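The 3.7 W/m2 figure comes from the widely used simplified forcing expression of Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m2. A quick sketch (the concentrations in ppm are purely illustrative):

```python
import math

# Simplified CO2 forcing expression (Myhre et al., 1998):
#   dF = 5.35 * ln(C / C0)  [W/m^2]
# Logarithmic: every doubling adds the same ~3.7 W/m^2.
def co2_forcing(c, c0=280.0):
    """Radiative forcing in W/m^2 of concentration c relative to c0 (ppm)."""
    return 5.35 * math.log(c / c0)

for c in (280, 560, 1120, 2240):
    print(c, round(co2_forcing(c), 2))
# each successive doubling contributes the same ~3.71 W/m^2
```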
Now if you want to know how much warming a forcing of 3.7 W/m2 gives, then you come into the realm of climate sensitivity of which Nir Shaviv has a lot of sensible things to say:
But as I said earlier: I am a luke-warmer, and "warm is good" is my philosophy.
Your statement :-
Every doubling of CO2 gives a forcing of 3.7 W/m2.
ignores the shape of the absorption-versus-wavelength curve and can only be true either for an open-ended "infinite source" function or for a closed function significantly below saturation.
For illustrative purposes consider a very oversimplified description:-
The radiation source for forcing is the terrestrial surface, a grey body within a few degrees of 0°C. As the surface is (broadly) close to constant temperature it can be assumed to be in radiation balance. Energy absorbed in CO2 (and other) bands cannot be radiated from the system, so the surface temperature must rise slightly to increase the energy lost at wavelengths outside the absorption bands, thereby restoring balance. This temperature-forcing effect rapidly falls off as an absorption band saturates and degenerates to the contribution from the low-value wings. For all practical purposes the CO2 absorption band shown at http://home.casema.nl/errenwijlens/co2/co205124.gif is saturated at current or slightly above current atmospheric CO2 levels, reducing the radiative forcing to a very low level.
If you want to use a simple single-value forcing function you have to do the analysis incrementally at suitably small wavelength intervals, calculating the energy absorption of each over the required range of CO2 concentrations. Obviously the forcing function must be appropriate to each increment. The logarithmic variation applies to each increment just fine, but the total energy absorption can clearly be seen to tend rapidly towards saturation.
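As a toy illustration of this incremental bookkeeping, here is Beer-Lambert absorption applied per wavelength bin with a made-up Gaussian line shape; the units and numbers are arbitrary and nothing here is real CO2 spectroscopy:

```python
import math

# Band-averaged absorption from per-bin Beer-Lambert, with an
# illustrative Gaussian line shape (not real CO2 spectroscopy).
def band_absorption(u, half_width=5.0, n_bins=2001):
    """Band-averaged fractional absorption for absorber amount u."""
    total = 0.0
    for i in range(n_bins):
        x = (2.0 * i / (n_bins - 1) - 1.0) * half_width  # offset from band centre
        k = math.exp(-x * x)                 # Gaussian line shape
        total += 1.0 - math.exp(-k * u)      # Beer-Lambert in this bin
    return total / n_bins

for u in (1, 2, 4, 8, 16, 32):
    print(u, round(band_absorption(u), 3))
# once the band centre saturates, each doubling of u adds a smaller
# increment: only the low-value wings still contribute
```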
Every doubling of CO2 gives a forcing of 3.7 W/m2.
In the range from 150 to 1200 atm cm, which runs from the lowest ice-age level to the worst-case emissions scenario. Of course the spectrum will completely saturate in the Venus case, with a pCO2 of 90 atm, but that is far from the Earth's case.
I agree that the CO2 absorption band is partly saturated, but that was not the issue, was it? The logarithmic relationship is valid for the complete range of CO2 concentrations that we are currently interested in. Spectrum saturation is definitely not an issue.
I have trouble believing an author when he’s sloppy with terminology. Nir Shaviv defines GCM as "Global Circulation Model." I've studied GCMs for several years. I can only conclude one of three things: 1) he unintentionally used the wrong name implying a lack of due diligence, 2) he intentionally used the wrong name for reasons unknown, or 3) there really is a new class of model called "Global Circulation Model."
The paper he referenced--Cess et al. (1989)--has the following title: "Interpretation of Cloud-Climate Feedback as Produced by 14 Atmospheric General Circulation Models." (In this case, we are specifically talking about AGCMs.) This isn't even the latest word, because there's a later paper: Cess et al. (1996), "Cloud feedback in atmospheric general circulation models: An update." The more recent papers on the subject have apparently been overlooked.
The names of the models in his figure are all GENERAL circulation models (from the cited paper). By the way, the cloud feedback differences in the second paper aren’t as great.
Therefore, 1) it appears that there isn't a new class of climate model; 2) I wasted time trying to figure out what Nir Shaviv is really saying; and 3) his argument against GCMs doesn’t seem to carry as much weight.
I'm sure Shaviv was delighted when you e-mailed him to point out his error. Slips of the pen which we are all prone to make are extremely annoying. I bet you got a courteous reply though.
Alas, I'm not as masterful as you give me credit for, because I didn't send him an email. I'm glad to hear it was only a "slip of the pen."
Every doubling = exponential
Exponential, but not saturating
Many thanks Hans for the reference to Nir Shaviv's web site. Really clear and not at all difficult to understand.
I still feel the main point is being missed here. If you leave out the (admittedly complicated) problem of frequency response, a logarithmic response within an otherwise linear feedback loop ensures that there is ALWAYS a signal level at which the incremental loop gain is less than one. This means that runaway feedback instability cannot happen. That is why the concept of “saturating” is so important.
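To put a number on this, here is a toy loop with a logarithmic element; G, k and the drive level are arbitrary illustrative values, not anything derived from climate data:

```python
import math

# Toy positive-feedback loop with a logarithmic element.  The incremental
# loop gain G * k / (1 + x) falls as x grows, so there is always a signal
# level at which it drops below one: iterating the loop settles to a fixed
# point instead of running away.  All values are illustrative.
G, k, drive = 3.0, 1.0, 1.0

x = 0.0
for _ in range(200):
    x = drive + G * k * math.log(1.0 + x)   # one pass round the loop

incremental_gain = G * k / (1.0 + x)        # below 1 at the fixed point
print(round(x, 3), round(incremental_gain, 3))
```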
The CO2 spectrum can saturate (look at Venus), but just not for the optical thicknesses that are relevant for the near future and the ice-age past of the Earth (150-1200 atm cm).
I begin to understand. We are divided by a common language. When a systems engineer speaks of saturation in an amplifier or material he means that the incremental gain decreases as the input signal increases; there is no implication of any physical mechanism analogous to a sponge taking up water. Saturation can be weak, strong or total. The logarithmic input/output characteristic is saturating because the incremental gain decreases monotonically with signal level. Over a wide range of sciences saturation is taken to mean the deviation from linearity, not just the flat bit (try googling, for instance, "onset of saturation"). It is important in the situation under discussion because runaway instability is impossible with a logarithmic characteristic in the feedback loop (unless of course there is a countervailing exponential characteristic).
As it happens, there is some relevance of the sponge analogy in this case, but it is modified by the competing process of spontaneous emission.