By Christopher Monckton of Brenchley
[Originally Posted on 27th Sept 2011 by Anthony Watts on WUWT]
- updated 29th Sept. + 28th Dec. 2011 & 15th Jan. 2012
“My commentary written for Remote Sensing on the empirical determination of climate sensitivity, published by the splendid Anthony Watts some days ago, has aroused a great deal of interest among his multitudes of readers. It is circulating among climate scientists on both sides of the debate. Several of Anthony’s readers have taken the trouble to make some helpful comments. Since some of these are buried among the usual debates between trolls on how awful I am, and others were kindly communicated privately, I have asked Anthony to allow me, first and foremost, to thank those readers who have been constructive with their comments, and to allow his readers the chance to share the comments I have received.”
Joel Shore pointed out that Schwartz, whose paper of 2007 I had cited as finding climate sensitivity to be ~1 K, wrote a second paper in 2008 finding it close to 2 K. Shore assumed I had seen but suppressed the second paper. By now, most of Anthony’s readers will perhaps think less ungenerously of me than that. The new .pdf version of the commentary, available from Anthony’s website (here), omits both Schwartz papers: but they will be included in a fuller version of the argument in due course, along with other papers which use observation and measurement, rather than mere modeling, to determine climate sensitivity.
Professor Michael Asten of Monash University helpfully provided a proper reference in the reviewed literature for Christopher Scotese’s 1999 paper reconstructing mean global surface temperatures from the Cambrian Era to the present. This, too, has been incorporated into the new .pdf.
Professor Asten also supplied a copy of a paper by David Douglass and John Christy, published in that vital outlet for truth Energy & Environment in 2009, and concluding on the basis of recent temperature trends that feedbacks were not likely to be net-positive, implying climate sensitivity ~1 K. I shall certainly be including that paper and several others in the final version of the full-length paper that underlies the commentary published by Anthony. This paper is now in draft and I should be happy to send it to any interested reader.
A regular critic, Lucia Liljegren, was, as all too often before, eager to attack my calculations. She erred in publishing a denial that I had sent her a reference which I can prove she received; she was not factually accurate in blogging that “Monckton’s” Planck parameter was “pulled out of a hat”, when I had shown her that in my commentary I had accepted the IPCC’s value as correct; she misled her readers by not telling them that the “out-of-a-hat” relationship she complains of is one which Kiehl and Trenberth (1997) had assumed, with a small variation (their implicit λ0 is 0.18 rather than the 0.15 I derived from their paper via Kimoto, 2009); and she was selective in not passing on that I had told her they were wrong to assume that a blackbody relationship between flux and temperature holds at the surface (if it did, as my commentary said, it would imply a climate sensitivity ~1 K).
A troll (commenter on WUWT) said I had “fabricated” the forcing function for CO2. When I pointed out that I had obtained it from Myhre et al. (1998), cited with approval in IPCC (2001, 2007), he whined at being called a troll (so don’t accuse me of “fabricating” stuff, then, particularly when I have taken care to cite multiple sources, none of which you were able to challenge) and dug himself further in by alleging that the IPCC had also “fabricated” the CO2 forcing function. No: the IPCC got it from Myhre et al., who in turn derived it by inter-comparison between three models. I didn’t and don’t warrant that the CO2 forcing function is right: that is above my pay-grade. However, Chris Essex, the lively mathematician who did some of the earliest spectral-line modeling of the CO2 forcing effect, confirms that Myhre and the IPCC are right to state that the function is a logarithmic one. Therefore, until I have evidence that it is wrong, I shall continue to use it in my calculations.
Another troll said – as usual, without providing any evidence – that I had mis-stated the result from process engineering that provides a decisive (and low) upper bound to climate sensitivity. In fact, the result came from a process engineer, Dr. David Evans, who is one of the finest intuitive mathematicians I have met. He spent much of his early career designing and building electrical circuitry and cannot, therefore, fairly be accused of not knowing what he is talking about. Since the resulting fundamental upper limit to climate sensitivity is as low as 1.2 K, I thought readers might be interested to have a fuller account of it, which is very substantially the work of David Evans. It is posted below this note.
Hereward Corley pointed out that the reference to Shaviv (2008) should have been Shaviv (2005). Nir Shaviv – another genius of a mathematician – had originally sent me the paper saying it was from 2008, but the version he sent was an undated pre-publication copy. Mr. Corley also kindly supplied half a dozen further papers that determine climate sensitivity empirically. Most of the papers find it low, and all find it below the IPCC’s estimates. The papers are Chylek & Lohman (2008); Douglass & Knox (2005); Gregory et al. (2002); Hoffert & Covey (1992); Idso (1998); and Loehle & Scafetta (2011).
I should be most grateful if readers would be kind enough to draw my attention to any further papers that determine climate sensitivity by empirical methods rather than by the use of general-circulation models. I don’t mind what answers the papers come to, but I only want those that attempted to reach the answer by measurement, observation, and the application of established theory to the results.
Many thanks again to all of you for your interest and assistance. Too many of the peer-reviewed journals are no longer professional enough or unprejudiced enough to publish anything that questions the new State religion of supposedly catastrophic manmade global warming. Remote Sensing, for instance, has still not had the courtesy to acknowledge the commentary I sent. Since the editors of the learned journals seem to have abdicated their role as impartial philosopher-kings, WattsUpWithThat is now the place where (in between the whining and whiffling and waffling of the trolls) true science is done.
The fundamental constraint on climate sensitivity
A fundamental constraint rules out strongly net-positive temperature feedbacks acting to amplify warming triggered by emissions of greenhouse gases, with the startling result that climate sensitivity cannot much exceed 1.2 K.
Sensitivity to doubled CO2 concentration is the product of three parameters (Eq. 1):
- the radiative forcing ΔF2x = 5.35 ln 2 = 3.708 W m–2 at CO2 doubling (Eq. 2), from the function in Myhre et al. (1998) and IPCC (2001, 2007);
- the Planck zero-feedback climate sensitivity parameter λ0 = 0.3125 K W–1 m2 (Eq. 3), equivalent to the first differential of the fundamental equation of radiative transfer in terms of mean emission temperature TE and the corresponding flux FE at the characteristic-emission altitude (CEA, one optical depth down into the atmosphere, where incoming and outgoing fluxes are identical), augmented by approximately one-sixth to allow for latitudinal variation (IPCC, 2007, p. 631 fn.);
- the overall feedback gain factor G (Eq. 4), equivalent, where feedbacks are assumed linear as here, to (1 – g)–1, where the feedback loop gain g is the product of λ0 and the sum f of all unamplified temperature feedbacks f1, f2, … fn, such that the final or post-feedback climate sensitivity parameter λ is the product of λ0 and G.
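The arithmetic of Eqs. (1)-(4) is easily checked. The following short Python sketch (my own illustration, not the models' code) composes the three parameters; the zero-feedback and IPCC-central cases fall out at once:

```python
import math

DF_2X = 5.35 * math.log(2)   # Eq. (2): forcing at CO2 doubling, ~3.708 W m-2
LAMBDA_0 = 0.3125            # Eq. (3): Planck parameter, K W-1 m2

def gain_factor(f_sum):
    """Eq. (4): overall gain G = (1 - g)^-1, with loop gain g = LAMBDA_0 * f_sum."""
    g = LAMBDA_0 * f_sum
    if g >= 1.0:
        raise ValueError("g >= 1: the singularity; runaway feedback, G undefined")
    return 1.0 / (1.0 - g)

def sensitivity(f_sum):
    """Eq. (1): climate sensitivity dT_2x = dF_2x * lambda_0 * G, in Kelvin."""
    return DF_2X * LAMBDA_0 * gain_factor(f_sum)

print(sensitivity(0.0))    # no feedbacks at all: ~1.16 K
print(sensitivity(2.064))  # the IPCC's implicit feedback sum: ~3.26 K
```

With no feedbacks the product is ~1.16 K; feeding in the feedback sum of ~2.06 W m–2 K–1 implicit in the IPCC's numbers reproduces its central estimate of 3.26 K.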
The values of the first two of the three parameters whose product is climate sensitivity are known (Eqs. 2-3). The general-circulation models, following pioneering authors such as Hansen (1984), assume that the feedbacks acting upon the climate object are strongly net-positive (G > 1: the IPCC’s implicit central estimate is G = 2.81). In practice, however, neither individual temperature feedbacks nor their sum can be directly measured; nor can feedbacks be readily distinguished from forcings (Spencer & Braswell, 2010, 2011; but see Dessler, 2010, 2011).
Temperature feedbacks – in effect, forcings that occur because a temperature change has triggered them – are the greatest of the many uncertainties that complicate the determination of climate sensitivity. The methodology that the models adopt was first considered in detail by Bode (1945) and is encapsulated at its simplest, assuming all feedbacks are linear, in Eq. (4). Models attempt to determine the value of each distinct positive (temperature-amplifying) and negative (temperature-attenuating) feedback in Watts per square meter per Kelvin of original warming. The feedbacks f1, f2, … fn are then summed and mutually amplified (Eq. 4).
Fig. 1 schematizes the feedback loop:
Figure 1. A forcing ΔF is input (top left) by multiplication to the final sensitivity parameter λ = λ0G, where g = λ0f = 0.645 is the IPCC’s implicit central estimate of the loop gain and G = (1 – g)–1 = 2.813 [not shown] is the overall gain factor: i.e., the factor by which the temperature change T0 = ΔF λ0 triggered by the original forcing is multiplied to yield the output final climate sensitivity ΔT = ΔF λ = ΔF λ0 G (top right). To generate λ = λ0 G, the feedbacks f1, f2, … fn, summing to f, are mutually amplified via Eq. (4). Stated values of λ0, f, g, G, and λ are those implicit in the IPCC’s central estimate ΔT2x = 3.26 K (2007, p. 798, Box 10.2) in response to ΔF2x = 5.35 ln 2 = 3.708 W m–2. Values for individual feedbacks f1–f4 are taken from Soden & Held (2006). (Author’s diagram from a drawing by Dr. David Evans).
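The implicit values stated in the caption can equally be recovered by working backwards from the IPCC's published central estimate alone, a useful cross-check (a Python sketch of my own; the numbers, not the code, are the IPCC's):

```python
import math

dF_2x = 5.35 * math.log(2)   # Eq. (2): 3.708 W m-2
lambda_0 = 0.3125            # Eq. (3): K W-1 m2
dT_2x = 3.26                 # IPCC central estimate (2007, p. 798, Box 10.2)

lam = dT_2x / dF_2x          # final sensitivity parameter lambda: ~0.879 K W-1 m2
G = lam / lambda_0           # overall gain factor: ~2.813
g = 1.0 - 1.0 / G            # loop gain: ~0.645
f = g / lambda_0             # implied feedback sum: ~2.06 W m-2 K-1
print(G, g, f)
```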
The modelers’ attempts to identify and aggregate individual temperature feedbacks, while understandable, do not overcome the difficulties in distinguishing feedbacks from forcings or even from each other, or in determining the effect of overlaps between them. The methodology’s chief drawback, however, is that in concentrating on individual rather than aggregate feedbacks it overlooks a fundamental physical constraint on the magnitude of the feedback loop gain g in Eq. (4).
Paleoclimate studies indicate that in the past billion years the Earth’s absolute global mean surface temperature has not varied by more than 3% (~8 K) either side of the 750-million-year mean (Fig. 2):
Figure 2. Global mean surface temperature over the past 750 million years, reconstructed by Scotese (1999), showing variations not exceeding 8 K (<3%) either side of the 291 K (18 °C) mean.
Consistent with Scotese’s result, Zachos et al. (2001), reviewing detailed evidence from deep-sea sediment cores, concluded that in the past 65 Ma the greatest departure from the long-run mean was an increase of 8 K at the Poles, and less elsewhere, during the late Paleocene thermal maximum 55 Ma BP.
While even a 3% variation either side of the long-run mean causes ice ages in one era and hothouse conditions in another, in absolute terms the temperature homeostasis of the climate object is formidable. At no point in the geologically recent history of the planet has a runaway warming occurred. The Earth’s temperature stability raises the question of the maximum feedback loop gain consistent with the long-term maintenance of stability in an object upon which feedbacks operate.
The IPCC’s method of determining temperature feedbacks is explicitly founded on the feedback-amplification equation (Eq. 4, and see Hansen, 1984) discussed by Bode (1945) in connection with the prevention of feedback-induced failure in electronic circuits. A discussion of the methods adopted by process engineers to ensure that feedbacks are prevented in electronic circuits will, therefore, be relevant to a discussion of the role of feedbacks acting upon the climate object.
In the construction of electronic circuits, where one of the best-known instances of runaway feedback is the howling shriek when a microphone is placed too close to the loudspeaker to which it is connected, electronic engineers take considerable care to avoid positive feedback altogether, unless they wish to induce a deliberate instability or oscillation by compelling the loop gain to exceed unity, the singularity in Eq. (4), at which point the magnitude of the loop gain becomes undefined.
In electronic circuits for consumer goods, the values of components typically vary by up to 10% from specification owing to the vagaries of raw materials, manufacture, and assembly. Values may vary further over their lifetime from age and deterioration. Therefore engineers ensure long-term stability by designing in a negative feedback to ensure that vital circuit parameters stay close to the desired values.
Negative feedbacks were first posited by Harold S. Black in 1927 in New York, when he was looking for a way to cancel distortion in telephone relays. Roe (2009) writes:
“He describes a sudden flash of inspiration while on his commute into Manhattan on the Lackawanna Ferry. The original copy of the page of the New York Times on which he scribbled down the details of his brainwave a few days later still has pride of place at the Bell Labs Museum, where it is regarded with great reverence.”
One circuit parameter of great importance is the (closed) feedback loop gain inside any amplifier, which must be held at less than unity under all circumstances, because runaway positive feedback sets in at g ≥ 1. The loop gain typically depends on the values of at least half a dozen components, and the actual value of each component may randomly vary. To ensure stability, the design value of the feedback loop gain must be held one or two orders of magnitude below unity: g < 0.1, or preferably < 0.01.
Now consider the common view of the climate system as an engine for converting forcings to temperature changes – an object on which feedbacks act as in Fig. 1. The values of the parameters that determine the (closed) loop gain, as in an electronic circuit, are subject to vagaries. As the Earth evolves, continents drift, sometimes occupying polar or tropical positions, sometimes allowing important ocean currents to pass and sometimes impeding or diverting them; vegetation comes and goes, altering the reflective, radiative, and evaporative characteristics of the land and the properties of the coupled atmosphere-ocean interface; volcanoes occasionally fill the atmosphere with smoke, sulfur, or CO2; asteroids strike; orbital characteristics change slowly but radically in accordance with the Milankovitch cycles; and atmospheric concentrations of the greenhouse species vary greatly.
In the Neoproterozoic, 750 Ma BP, CO2 concentration (today <0.04%) was ~30%: otherwise the ocean’s magnesium ions could not have united with the abundance of calcium ions and with CO2 itself to precipitate the dolomitic rocks laid down in that era. Yet mile-high glaciers came and went twice at sea level at the equator.
As in the electronic circuit, so in the climate object, the values of numerous key components contributing to the loop gain change radically over time. Yet for at least 2 Ga the Earth appears never to have endured the runaway greenhouse warming that would have occurred if the loop gain had reached unity. Therefore, the loop gain in the climate object cannot be close to unity, for otherwise random mutation of the feedback-relevant parameters of vital climate components over time would surely by now have driven it to unity. It is near-certain, therefore, that the value of the climatic feedback loop gain g today must be very much closer to 0 than to 1.
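That argument can be put in figures. From Eq. (4) alone, the fractional upward drift in the loop gain that would reach the singularity at unity is (1 – g)/g; a couple of lines of Python (my own illustration) make the contrast stark:

```python
def headroom(g):
    """Fractional increase in the loop gain g that reaches the singularity at g = 1."""
    return (1.0 - g) / g

print(headroom(0.645))  # IPCC's implicit loop gain: only ~55% of drift to runaway
print(headroom(0.1))    # the engineering bound: a 900% drift would be needed
```

At the IPCC's implicit g = 0.645 a mere 55% upward drift in the feedback-relevant parameters would trigger runaway feedback; at g = 0.1 the climate object could withstand a ninefold increase.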
A loop gain of 0.1, then, is in practice the upper bound for very-long-term climate stability. Yet the loop gain values implicit in the IPCC’s global-warming projections of 3.26[2, 4.5] K warming in response to a CO2 doubling are well above this maximum, at 0.64[0.42, 0.74] (Eq. 8). Values such as these are far too close to the steeply-rising segment of the climate-sensitivity curve (Fig. 3) to have allowed the climate to remain temperature-stable for hundreds of millions of years, as Zachos (2001) and Scotese (1999) have reported.
Figure 3. The climate-sensitivity curve at loop gains –1.0 ≤ g < +1.0. The narrow shaded zone at bottom left indicates that climate sensitivity is stable at 0.5-1.3 K per CO2 doubling for loop gains –1.0 ≤ g ≤ +0.1, equivalent to overall feedback gain factors 0.5 ≤ G ≤ 1.1. However, climate sensitivities on the IPCC’s interval [2.0, 4.5] K (shaded zone at right) imply loop gains on the interval (+0.4, +0.8), well above the maximum loop gain that could obtain in a long-term-stable object such as the climate. At a loop gain of unity, the singularity in the feedback-amplification equation (Eq. 4), runaway feedback would occur. If the loop gain in the climate object were >0.1, then at any time conditions sufficient to push the loop gain towards unity might occur, but (see Fig. 2) have not occurred in close to a billion years (author’s figure based on diagrams in Roe, 2009; Paltridge, 2009; and Lindzen, 2011).
Fig. 3 shows the climate-sensitivity curve for loop gains g on the interval [–1, 1). It is precisely because the IPCC’s implicit interval of feedback loop gains so closely approaches unity, which is the singularity in the feedback-amplification equation (Eq. 4), that attempts to determine climate sensitivity on the basis that feedbacks are strongly net-positive can generate very high (but physically unrealistic) climate sensitivities, such as the >10 K that Murphy et al. (2009) say they cannot rule out.
If, however, the loop gain in the climate object is no greater than the theoretical maximum value g = 0.1, then, by Eq. (4), the corresponding overall feedback gain factor G is 1.11, and, by Eq. (1), climate sensitivity in response to a CO2 doubling cannot much exceed 1.2 K. No surprise, then, that the dozen or more empirical methods of deriving climate sensitivity that I included in my commentary cohered at just 1 K. If that is indeed the answer to the climate sensitivity question, it is also a mortal blow to climate extremists worldwide – but good news for everyone else.
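The whole climate-sensitivity curve of Fig. 3 follows from Eqs. (1)-(4); here is a short Python tabulation (my own sketch) of the points discussed above:

```python
import math

DF_2X = 5.35 * math.log(2)   # Eq. (2): forcing at CO2 doubling, W m-2
LAMBDA_0 = 0.3125            # Eq. (3): Planck parameter, K W-1 m2

def dT_2x(g):
    """Climate sensitivity (K) at loop gain g, per Eqs. (1) and (4)."""
    return DF_2X * LAMBDA_0 / (1.0 - g)

for g in (-1.0, 0.0, 0.1, 0.42, 0.645, 0.74):
    print(f"g = {g:+.3f} -> dT_2x = {dT_2x(g):.2f} K")
```

The loop gains 0.42 and 0.74 duly return the ends of the IPCC's [2.0, 4.5] K interval, while g ≤ 0.1 pins sensitivity below ~1.3 K per doubling.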
Bode, H.W., 1945, Network analysis and feedback amplifier design, Van Nostrand, New York, USA, 551 pp.
Chylek, P., and U. Lohman, 2008, Aerosol radiative forcing and climate sensitivity deduced from the last glacial maximum to Holocene transition, Geophys. Res. Lett. 35, doi:10.1029/2007GL032759.
Dessler, A.E., 2010, A determination of the cloud feedback from climate variations over the past decade, Science 330, 1523-1527.
Dessler, A.E., 2011, Cloud Variations and the Earth’s Energy Budget, Geophys. Res. Lett. [in press].
Douglass, D.H., and R.S. Knox, 2005, Climate forcing by the volcanic eruption of Mount Pinatubo, Geophys. Res. Lett. 32, doi:10.1029/2004GL022119.
Douglass, D.H., and J.R. Christy, 2009, Limits on CO2 climate forcing from recent temperature data of Earth, Energy & Environment 20:1-2, 177-189.
Gregory, J.M., R.J. Stouffer, S.C. Raper, P.A. Stott, and N.A. Rayner, 2002, An observationally-based estimate of the climate sensitivity, J. Clim. 15, 3117-3121.
Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner, 1984, Climate sensitivity: analysis of feedback mechanisms, Meteorological Monographs 29, 130-163.
Hoffert, M.I., and C. Covey, 1992, Deriving global climate sensitivity from palaeoclimate reconstructions, Nature 360, 573-576.
Idso, S.B., 1998, CO2-induced global warming: a skeptic’s view of potential climate change, Clim. Res. 10, 69-82.
IPCC, 2001, Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom, and New York, NY, USA.
IPCC, 2007, Climate Change 2007: the Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, 2007 [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Avery, M. Tignor and H.L. Miller (eds.)], Cambridge University Press, Cambridge, United Kingdom, and New York, NY, USA.
Kimoto, K., 2009, On the confusion of Planck feedback parameters, Energy & Environment 20:7, 1057-1066.
Lindzen, R.S., 2011, Lecture to the American Chemical Society, Aug. 28.
Loehle, C., and Scafetta, N., 2011, Climate change attribution using empirical decomposition of climatic data, Open Atmos. Sci. J. 5, 74-86.
Murphy, D.M., S. Solomon, R.W. Portmann, K.H. Rosenlof, P.M. Forster, and T. Wong, 2009, An observationally-based energy balance for the Earth since 1950, J. Geophys. Res. 114, D17107, doi:10.1029/2009JD012105.
Myhre, G., E. J. Highwood, K. P. Shine, and F. Stordal, 1998, New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett. 25:14, 2715–2718, doi:10.1029/98GL01908.
Paltridge, G., 2009, The Climate Caper, Connor Court, Sydney, Australia, 110 pp.
Roe, G., 2009, Feedbacks, Timescales, and Seeing Red, Ann. Rev. Earth. Planet. Sci. 37, 93-115.
Schwartz, S.E., 2007, Heat capacity, time constant, and sensitivity of Earth’s climate system, J. Geophys. Res. 112, D24S05, doi:10.1029/2007JD008746.
Schwartz, S.E., 2008, Reply to comments by G. Foster et al., R. Knutti et al., and N. Scafetta on “Heat Capacity, time constant, and sensitivity of Earth’s climate system”, J. Geophys. Res. 113, D15015, doi: 10.1029/2008JD009872.
Scotese, C.R., A.J. Boucot, and W.S. McKerrow, 1999, Gondwanan paleogeography and paleoclimatology, J. African Earth Sci. 28:1, 99-114.
Shaviv, N., 2005, On climate response to changes in the cosmic-ray flux and radiative budget, J. Geophys. Res., doi:10.1029.
Soden, B.J., and I.M. Held, 2006, An assessment of climate feedbacks in coupled ocean-atmosphere models. J. Clim. 19, 3354–3360.
Spencer, R.W., and W.D. Braswell, 2010, On the diagnosis of radiative feedback in the presence of unknown radiative forcing, J. Geophys. Res. 115, D16109.
Spencer, R.W., and W.D. Braswell, 2011, On the misdiagnosis of surface temperature feedbacks from variations in Earth’s radiant-energy balance, Remote Sensing 3, 1603-1613, doi:10.3390/rs3081603.
Zachos, J., M. Pagani, L. Sloan, E. Thomas, and K. Billups, 2001, Trends, Rhythms and Aberrations in Global Climate 65 Ma to Present, Science 292, 686-693.
Sequel: 1 K or not 1 K? That is the question
By Christopher Monckton of Brenchley
[Originally Posted on September 29, 2011 by Anthony Watts]
“I am very grateful for the many thoughtful postings in response to my outline of the fundamental theoretical upper bound of little more than 1.2 K on climate sensitivity imposed by the process-engineering theory of maintaining the stability of an object on which feedbacks operate. Here are some answers to points raised by correspondents.”
Iskandar says, “None of these feedbacks or forcings are ever given in the form of a formula.” In fact, there are functions for the forcings arising from each of the principal species of greenhouse gas: they are tabulated in Myhre et al., 1998, and cited with approval in IPCC (2001, 2007). However, Iskandar is right about temperature feedbacks. Here, the nearest thing to a formula for a feedback is the Clausius-Clapeyron relation, which states that the space occupied by the atmosphere is capable of carrying near-exponentially more water vapor as it warms. However, as Paltridge et al. (2009) have indicated, merely because the atmosphere can carry more water vapor there is no certainty that it does. The IPCC’s values for this and other feedbacks are questionable. For instance, Spencer and Braswell (2010, 2011, pace Dessler, 2010, 2011) have challenged the IPCC’s estimate of the cloud feedback. They find it as strongly negative (attenuating the warming that triggers it) as the IPCC finds it strongly positive (amplifying the original warming), implying a climate sensitivity of less than 1 K. Since feedbacks account for almost two-thirds of all warming in the IPCC’s method, and since it is extremely difficult to measure – still less to provide a formula for – the values of individual temperature feedbacks, an effort such as mine to identify a constraint on the magnitude of all feedbacks taken together is at least worth trying.
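The Clausius-Clapeyron point can be made concrete with a Magnus-type approximation to the saturation vapor pressure of water (the coefficients below are Bolton's 1980 values, my choice purely for illustration, not anything in the IPCC's method):

```python
import math

def e_sat(t_celsius):
    """Saturation vapor pressure over water in hPa (Magnus-type fit, Bolton 1980)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Near-exponential growth: each 1 K of warming raises the atmosphere's
# carrying capacity for water vapor by roughly 6-7%...
growth = e_sat(16.0) / e_sat(15.0) - 1.0
print(f"{growth:.1%}")
# ...but, as Paltridge indicates, capacity is not the same thing as content.
```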
Doug says we cannot be sure when the dolomitic rocks were formed. What is certain, however, according to Professor Ian Plimer, who gave me the information, is that they cannot form unless the partial pressure of CO2 above the ocean in which they form is 30%, compared with today’s 0.04%. Yet, during the long era when CO2 concentrations were that high, glaciers came and went, twice, at sea level, and at the equator. Even allowing for the fact that the Sun was a little fainter then, and that the Earth’s albedo was higher, the presence of those glaciers where there are none today does raise some questions about the forcing effect of very high CO2 concentrations, and, a fortiori, about the forcing effect of today’s mere trace concentration. However, in general Doug’s point is right: it is unwise to put too much weight on results from the paleoclimate, particularly when there is so much scientific dispute about the results from today’s climate that we can measure directly.
Dirk H and the inimitable Willis Eschenbach, whose fascinating contributions to this column should surely be collected and published as a best-seller, point out that I am treating feedbacks as linear when some of them are non-linear. For the math underlying non-linear feedbacks, which would have been too lengthy to include in my posting, see e.g. Roe (2009). Roe’s teacher was Dick Lindzen, who is justifiably proud of him. However, for the purpose of the present argument, it matters not whether feedbacks are linear or non-linear: what matters is the sum total of feedbacks as they are in our own time, which is multiplied by the Planck parameter (of which more later) to yield the closed-loop gain whose upper bound was the focus of my posting. Of course I agree with Willis that the non-linearity of many feedbacks, not to mention that all or nearly all of them cannot be measured directly, makes solving the climate-sensitivity equation difficult. But, again, that is why I have tried the approach of examining a powerful theoretical constraint on the absolute magnitude of the feedback-sum. Since the loop gain in the climate object cannot exceed 0.1 (at maximum) without rendering the climate so prone to instability that runaway feedbacks that have not occurred in the past would be very likely to have occurred, the maximum feedback sum before mutual amplification cannot exceed 0.32 W m–2 K–1: yet the IPCC’s implicit central estimate of the feedback sum is 2.06 W m–2 K–1 (equivalent to a loop gain of 0.645 and an overall gain factor of 2.81).
Roger Knights rightly takes me to task for a yob’s comma that should not have been present in my posting. I apologize. He also challenges my use of the word “species” for the various types of greenhouse gas: but the word “species” is regularly used by the eminent professors of climatology at whose feet I have sat.
R. de Haan cites an author whose opinion is that warming back-radiation returned from the atmosphere to the surface and the idea that a cooler system can warm a warmer system are “unphysical concepts”. I know that the manufacturers of some infra-red detectors say the detectors do not measure back-radiation but something else: however, both Mr. de Haan’s points are based on a common misconception about what the admittedly badly-named “greenhouse effect” is. The brilliant Chris Essex explains it thus: when outgoing radiation in the right wavelengths of the infrared meets a molecule of a greenhouse gas such as CO2, it sets up a quantum resonance in the gas molecule, turning it into a miniature radiator. This beautifully clear analogy, when I recently used it in a presentation in New Zealand, won the support of two professors of climatology in the audience. The little radiators that the outgoing radiation turns on are not, of course, restricted only to radiating outwards to space. They radiate in all directions, including downwards – and that is before we take into account non-radiative transports such as subsidence and precipitation that bring some of that radiation down to Earth. So even the IPCC, for all its faults, is not (in this respect, at any rate) repealing the laws of thermodynamics by allowing a cooler system to warm a warmer system, which indeed would be an unphysical concept.
Gary Smith politely raised the question whether the apparently sharp ups and downs in the paleoclimate temperature record indicated strongly-positive feedbacks. With respect, the answer is No, for two reasons. First, the graph I used was inevitably compressed: in fact, most of the temperature changes in that graph took place over hundreds of thousands or even millions of years. Secondly, it is the maximum variance either side of the long-run mean, not the superficially-apparent wildness of the variations within that envelope, that establishes whether or not there is a constraint on the maximum net-positivity of temperature feedbacks.
Nick Stokes asked where the limiting value 0.1 for the closed-loop gain in the climate object came from. It is about an order of magnitude above the usual design limit for net-positive feedbacks in electronic circuits that are not intended to experience runaway feedbacks or to oscillate either side of the singularity in the feedback-amplification equation, which occurs where the loop gain is unity.
David Hoffer wondered what evidence the IPCC had for assuming a linear rise in global temperature over the 21st century given that the radiative forcing from CO2 increases only at a logarithmic (i.e. sub-linear) rate. The IPCC pretends that all six of its “emissions scenarios” are to be given equal weight, but its own preference for the A2 scenario is clear, particularly in the relevant chapter of its 2007 report (ch. 10). See, in particular, fig. 10.26, which shows a near-exponential rise in both CO2 concentration and temperature, when one might have expected the logarithmic forcing response to cancel the exponential concentration increase, leaving a merely linear temperature rise. However, on the A2 scenario it is only the anthropogenic fraction of the CO2 concentration that is increased exponentially, and this has the paradoxical effect of making temperature rise near-exponentially too – but only if one assumes the very high climate sensitivity that is impossible given the fundamental constraint on the net-positivity of temperature feedbacks.
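Mr. Hoffer's point is readily illustrated: pass an exponentially-growing concentration through the logarithmic Myhre function and the forcing comes out linear in time (a Python sketch of my own; the 0.5%-per-year growth rate is picked purely for illustration):

```python
import math

def forcing(c_ppmv, c0_ppmv=280.0):
    """CO2 radiative forcing in W m-2: dF = 5.35 ln(C/C0) (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

# Concentration growing exponentially at 0.5%/yr, sampled once per decade:
f = [forcing(280.0 * 1.005 ** (10 * n)) for n in range(4)]
increments = [f[n + 1] - f[n] for n in range(3)]
print(increments)  # identical decadal steps: exponential CO2 -> linear forcing
```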
DR asks whether anyone has ever actually replicated experimentally the greenhouse effect mentioned by Arrhenius, who in 1895/6 first calculated how much warming a doubling of CO2 concentration would cause. Yes, the greenhouse effect was first demonstrated empirically by John Tyndall at the Royal Institution, London (just round the corner from my club) as far back as 1859. His apparatus can still be seen there. The experiment is quite easily replicated, so we know (even if the Stefan-Boltzmann equation and the existence of a readily-measurable temperature lapse-rate with altitude did not tell us) that the greenhouse effect is real. The real debate is not on whether there is a greenhouse effect (there is), but on how much warming our rather small perturbation of the atmosphere with additional concentrations of greenhouse gases will cause (not a lot).
Werner Brozek asks whether the quite small variations in global surface temperature either side of the billion-year mean indicate that “tipping-points” do not exist. In mathematics and physics the term “tipping-point” is really only used by those wanting to make a political point, usually from a climate-extremist position. The old mathematical term of art, still used by many, was “phase-transition”: now we should usually talk of a “bifurcation” in the evolution of the object under consideration. Since the climate object is mathematically-chaotic (IPCC, 2001, para. 14.2.2.2; Giorgi, 2005; Lorenz, 1963), bifurcations will of course occur: indeed any sufficiently rare extreme-weather event may be a bifurcation. We know that very extreme things can suddenly happen in the climate. For instance, at the end of the Younger Dryas cooling period that brought the last Ice Age to an end, temperatures in Greenland, as inferred from variations in the ratios of different isotopes of oxygen in air trapped in layers under the ice, rose by 5 K (9 °F) in just three years. “Now, that,” as Ian Plimer likes to say in his lectures, “is climate change!”
But the idea that our very small perturbation in temperature will somehow cause more bifurcations is not warranted by the underlying mathematics of chaos theory. In my own lectures I often illustrate this with a spectacular picture drawn on the Argand plane by a very simple chaotic function, the Mandelbrot fractal function. The starting and ending values for the pixels at top right and bottom left respectively are identical to 12 digits of precision; yet the digits beyond 12 are enough to produce multiple highly-visible bifurcations.
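The point can also be illustrated numerically, without any graphics. The sketch below (an illustration of my own devising, not the picture from the lectures) iterates the same quadratic recurrence z → z² + c that generates the Mandelbrot set, restricted to the real axis with c = −2, a parameter at the chaotic tip of the set, from two starting values that agree to about 13 significant digits:

```python
def trajectory_separation(x0: float, eps: float = 1e-13, steps: int = 80):
    """Iterate x -> x^2 + c (the Mandelbrot recurrence on the real axis,
    with c = -2, a chaotic parameter) from two starting values that differ
    by eps, returning the gap between the two orbits at each step."""
    c = -2.0
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a = a * a + c
        b = b * b + c
        gaps.append(abs(a - b))
    return gaps

gaps = trajectory_separation(0.5)
# The initial gap is ~1e-13; it roughly doubles each step, so after some
# 50 iterations the two orbits bear no resemblance to one another.
```

After 80 iterations the two orbits, initially identical to 13 digits, are typically of order 1 apart: the numerical footprint of the bifurcations described above.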
And we know that some forms of extreme weather are likely to become rarer if the world warms. Much – though not all – extreme weather depends not upon absolute temperature but upon differentials in temperature between one altitude or latitude and another. These differentials tend to get smaller as the world warms, so that outside the tropics (and arguably in the tropics too) there will probably be fewer storms.
Roy Clark says there is no such thing as equilibrium in the climate. No, but that does not stop us from trying to do the sums on the assumption of the absence of any perturbation (the equilibrium assumption). Like the square root of -1, it doesn’t really exist, but it is useful to pretend ad argumentum that it might.
Legatus raised a fascinating point about the measurements of ambient radiation that observatories around the world make so that they can calibrate their delicate, heat-sensitive telescopes. He says those measurements show no increase in radiation at the surface (or, rather, on the mountain-tops where most of the telescopes are). However, it is not the surface radiation but the radiation at the top of the atmosphere (or, rather, at the characteristic-emission altitude about 5 km above sea level) that is relevant: and that is 239.4 Watts (no relation) per square meter, by definition, because the characteristic-emission altitude (the outstanding Dick Lindzen’s name for it) is that altitude at which outgoing and incoming fluxes of radiation balance. It is also at that altitude, one optical depth down into the atmosphere, that satellites “see” the radiation coming up into space from the Earth/atmosphere system. Now, as we add greenhouse gases to the atmosphere and cause warming, that altitude will rise a little; and, because the atmosphere contains greenhouse gases and, therefore, its temperature is not uniform, consequent maintenance of the temperature lapse-rate of about 6.5 K/km of altitude will ensure that the surface warms as a result. Since the altitude of the characteristic-emission level varies by day and by night, by latitude, etc., it is impossible to measure directly how it has changed or even where it is.
Of course, it is at the characteristic-emission altitude, and not – repeat not – at the Earth’s surface that the Planck parameter should be derived. So let me do just that. Incoming radiation is, say, 1368 Watts per square meter. However, the Earth presents itself to that radiation as a disk but is actually a sphere, so we divide the radiation by 4 to allow for the ratio of the surface areas of disk and sphere. That gives 342 Watts per square meter. However, 30% of the Sun’s radiation is reflected harmlessly back to space by clouds, snow, sparkling sea surfaces, my lovely wife’s smile, etc., so the flux of relevant radiation at the characteristic-emission altitude is 342(1 – 0.3) = 239.4 Watts per square meter.
From this value, we can calculate the Earth’s characteristic-emission temperature directly without even having to measure it (which is just as well, because measuring even surface temperature is problematic). We use the fundamental equation of radiative transfer, the only equation to be named after a Slovene. Stefan found the equation by empirical methods and, a decade or so later, his Austrian pupil Ludwig Boltzmann proved it theoretically by reference to Planck’s blackbody law (hence the name “Planck parameter”, engagingly mis-spelled “plank” by one blogger).
The equation says that radiative flux is equal to the emissivity of the characteristic-emission surface (which we can take as unity without much error when thinking about long-wave radiation), times the Stefan-Boltzmann constant 5.67 x 10^–8 Watts per square meter per Kelvin to the fourth power, times temperature in Kelvin to the fourth power. So characteristic-emission temperature is equal to the flux divided by the emissivity and by the Stefan-Boltzmann constant, all to the power 1/4: thus, [239.4 / (1 x 5.67 x 10^–8)]^¼ = 254.9 K or thereby.
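For readers who would like to check the sums, here they are in a few lines of Python (a sketch of the arithmetic above, nothing more):

```python
# Characteristic-emission temperature from the Stefan-Boltzmann equation,
# using the round figures in the text.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 1.0     # taken as unity for long-wave radiation

def characteristic_emission_temperature(tsi=1368.0, albedo=0.3):
    """Divide total solar irradiance by 4 (disk-to-sphere ratio), remove
    the reflected fraction, then invert the Stefan-Boltzmann equation."""
    flux = tsi / 4.0 * (1.0 - albedo)            # 239.4 W m^-2
    return (flux / (EMISSIVITY * SIGMA)) ** 0.25

print(round(characteristic_emission_temperature(), 1))  # 254.9 K
```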
Any mathematician taking a glance at this equation will at once notice that one needs quite a large change in radiative flux to achieve a very small change in temperature. To find out how small, one takes the first differential of the equation, which (assuming emissivity to be constant) is simply the temperature divided by four times the flux: so, 254.9 / (4 x 239.4) = 0.2662 Kelvin per Watt per square meter. However, the IPCC (2007, p. 631, footnote) takes 0.3125 and, in its usual exasperating way, does not explain why. So a couple of weeks ago I asked Roy Spencer and John Christy for 30 years of latitudinally-distributed surface temperature data and spent a weekend calculating the Planck parameter at the characteristic-emission altitude for each of 67 zones of latitude, allowing for latitudinal variations in insolation and adjusting for variations in the surface areas of the zones. My answer, based on the equinoxes and admittedly ignoring seasonal variations in the zenith angles of the Sun at each latitude, was 0.316. So I’ve checked, and the IPCC has the Planck parameter right. Therefore, it is of course the IPCC’s value that I used in my calculations in my commentary for Remote Sensing, except in one place.
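The differential is easily checked in the same way (again a sketch of the simple arithmetic in the text only, not of the latitudinally-distributed calculation, which needs the 30 years of temperature data):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_parameter(flux):
    """First differential of T = (F / sigma)^(1/4): dT/dF = T / (4 F)."""
    temperature = (flux / SIGMA) ** 0.25
    return temperature / (4.0 * flux)

print(round(planck_parameter(239.4), 4))  # 0.2662 Kelvin per Watt per sq. m
```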
Kiehl & Trenberth (1997) publish a celebrated Earth/atmosphere energy-budget diagram in which they show 390 Watts per square meter of outgoing radiative flux from the surface, and state that this is the “blackbody” value. From this, we know that – contrary to the intriguing suggestion made by Legatus that one should simply measure it – they did not attempt to find this value by measurement. Instead, they were taking surface emissivity as unity (for that is what defines a blackbody), and calculating the outgoing flux using the Stefan-Boltzmann equation. The surface temperature, which we can measure (albeit with some uncertainty) is 288 K. So, in effect, Kiehl and Trenberth are saying that they used the SB equation at the Earth’s surface to determine the outgoing surface flux, thus: 1 x 5.67 x 10^–8 x 288^4 = 390.1 Watts per square meter.
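That back-calculation is easy to verify (a sketch under the same blackbody assumption):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temperature, emissivity=1.0):
    """Stefan-Boltzmann flux for a surface at the given temperature;
    emissivity of unity is what defines a blackbody."""
    return emissivity * SIGMA * temperature ** 4

print(round(blackbody_flux(288.0), 1))  # 390.1 W m^-2
```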
Two problems with this. First, the equation holds good only at the characteristic-emission altitude, and not at the surface. That is why, once I had satisfied myself that the IPCC’s value at that altitude was correct, I said in my commentary for Remote Sensing that the IPCC’s value was correct, and I am surprised to find that a blogger had tried to leave her readers with a quite different impression even after I had clarified this specific point to her.
Secondly, since Kiehl and Trenberth are using the Stefan-Boltzmann equation at the surface in order to obtain their imagined (and perhaps imaginary) outgoing flux of 390 Watts per square meter, it is of course legitimate to take the surface differential of the equation that they themselves imply that they had used, for in that way we can determine the implicit Planck parameter in their diagram. This is simply done: 288 / (4 x 390) = 0.1846 Kelvin per Watt per square meter. Strictly speaking, one should also add the non-radiative transports of 78 Watts per square meter for evapo-transpiration and 24 for thermal convection (see Kimoto, 2009, for a discussion) to the 390 Watts per square meter of radiative flux, reducing Kiehl and Trenberth’s implicit Planck parameter from 0.18 to 0.15. Either 0.15 or 0.18 gives a climate sensitivity ~1 K. So the Planck parameter I derived at this point in my commentary is, of course, not the correct one: nor is it “Monckton’s” Planck parameter, and the blogger who said it was had been plainly told all that I have told you, though in a rather more compressed form because she had indicated she was familiar with differential calculus. It is not Monckton’s Planck parameter, nor even Planck’s Planck parameter, and it is certainly not a plank parameter – but it is Kiehl & Trenberth’s Planck parameter. If they were right (and, of course, I was explicit in using the conditional in my commentary to indicate, in the politest possible way, that they were not), then, like it or not, they were implying a climate sensitivity a great deal lower than they had perhaps realized – in fact a sensitivity of around 1 K. I do regret that a quite unnecessary mountain has been made out of this surely simple little molehill – just one of more than a dozen points in a wide-ranging commentary.
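The two figures quoted above fall straight out of the same surface differential (a sketch of the arithmetic only; whether the 390 Watts per square meter is physically meaningful at the surface is, of course, the very point at issue):

```python
def implicit_planck_parameter(temperature, flux):
    """Surface differential dT/dF = T / (4 F) of the Stefan-Boltzmann
    relation that Kiehl & Trenberth imply at the surface."""
    return temperature / (4.0 * flux)

# Radiative flux alone:
print(round(implicit_planck_parameter(288.0, 390.0), 4))            # 0.1846
# Adding the 78 + 24 W m^-2 of non-radiative transports (Kimoto, 2009):
print(round(implicit_planck_parameter(288.0, 390.0 + 78 + 24), 4))  # 0.1463
```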
And just to confirm that it should really have been obvious to everyone that the IPCC’s value of the Planck parameter is my value, I gave that value as the correct one both in my commentary and in my recent blog posting on the fundamental constraint on feedback loop gain. You will find it, with its derivation, right at the beginning of that posting, and encapsulated in Eq. (3).
“Thank you all again for your interest. This discussion has generally been on a far higher plane than is usual with climate discussions. I hope that these further points in answer to commentators will be helpful.”
Following on from an extensive discussion about these matters across several articles posted at Anthony Watts’ website, Lord Monckton has composed the following continuations:

Sense and sensitivity
By Christopher Monckton of Brenchley
[Originally Posted on December 28, 2011 by Anthony Watts]
Reed Coray’s post here on Boxing Day, commenting on my post of 6 December, questions whether the IPCC and science textbooks are right that without any greenhouse gases the Earth’s surface temperature would be 33 Kelvin cooler than today’s 288 K. He says the temperature might be only 9 K cooler.

The textbook surface temperature of 255 K in the absence of any greenhouse effect is subject to three admittedly artificial assumptions: that solar output remains constant at about 1362 Watts per square meter, taking no account of the early-faint-Sun paradox; that the Earth’s emissivity is unity, though it is actually a little less; and that today’s Earth’s albedo or reflectance of 0.3 would remain unchanged, even in the absence of the clouds that are its chief cause.
These three assumptions are justifiable provided that the objective is solely to determine the warming effect of the presence as opposed to absence of greenhouse gases. They would not be justifiable if the objective were to determine the true surface temperature of the naked lithosphere at the dawn of the Earth. My post of 6 December addressed only the first objective. The second objective was irrelevant to my purpose, which was to determine a value for the system climate sensitivity – the amount of warming in response to the entire existing greenhouse effect.
Since Mr. Coray makes rather heavy weather of a simple calculation, here is how it is done. According to recent satellite measurements, 1362 Watts per square meter of total solar irradiance arrives at the top of the atmosphere. Since the Earth presents a disk to this insolation but is actually a sphere, this value is divided by 4 (the ratio of the surface area of a disk to that of a sphere), giving 340.5 Watts per square meter, and is also reduced by 30% to allow for the fraction harmlessly reflected to space, giving a characteristic-emission flux of 238.4 Watts per square meter.
The fundamental equation of radiative transfer, one of the few proven results in climatological physics, states that the radiative flux absorbed by (and accordingly emitted by) the characteristic-emission surface of an astronomical body is equal to the product of three parameters: the emissivity of that surface (here, as usual, taken as unity), the Stefan-Boltzmann constant (5.67 x 10^–8), and the fourth power of temperature. Accordingly, under the three assumptions stated earlier, the Earth’s characteristic-emission temperature is 254.6 K, or about 33.4 K cooler than today’s 288 K. It’s as simple as that.

The “characteristic-emission” surface of an astronomical body is defined as that surface at which the incoming and outgoing fluxes of radiation are identical.
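The whole calculation fits in a few lines (a sketch using the figures just given):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def no_greenhouse_temperature(tsi=1362.0, albedo=0.3, emissivity=1.0):
    """Characteristic-emission temperature under the three textbook
    assumptions: constant solar output, unit emissivity, albedo 0.3."""
    flux = tsi / 4.0 * (1.0 - albedo)          # ~238.4 W m^-2
    return (flux / (emissivity * SIGMA)) ** 0.25

t = no_greenhouse_temperature()
print(round(t, 1), round(288.0 - t, 1))  # 254.6 K, i.e. 33.4 K below 288 K
```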
In the absence of greenhouse gases, the actual rocky surface of the Earth would be its characteristic-emission surface. As greenhouse gases are added to the atmosphere and cause warming, the altitude of the characteristic-emission surface rises.

The characteristic-emission surface is now approximately 5 km above the Earth’s surface, its altitude varying inversely with latitude: but its temperature, by definition, remains 254.6 K or thereby. At least over the next few centuries, the atmospheric temperature lapse-rate (its decline with altitude) will remain near-constant at about 6.5 K per km, so that the temperature of the Earth’s surface will rise as greenhouse gases warm the atmosphere, even though the temperature of the characteristic-emission surface will remain invariant.
It is for this reason that Kiehl & Trenberth, in their iconic papers of 1997 and 2008 on the Earth’s radiation budget, are wrong to assume that (subject only to the effects of thermal convection and evapo-transpiration) there is a strict Stefan-Boltzmann relation between temperature and incident irradiance at the Earth’s surface.
If they were right in this assumption, climate sensitivity would be little more than one-fifth of what they would like us to believe it is.

So, how do we determine the system sensitivity from the 33.4 K of “global warming” caused by the presence (as opposed to the total absence) of all the greenhouse gases in the atmosphere? We go to Table 3 of Kiehl & Trenberth (1997), which tells us that the total radiative forcing from the top five greenhouse gases (H2O, CO2, CH4, N2O and stratospheric O3) is 101[86, 125] Watts per square meter. Divide 33.4 K by this interval of forcings.
The resultant system sensitivity parameter, after just about all temperature feedbacks since the dawn of the Earth have acted, is 0.33[0.27, 0.39] Kelvin per Watt per square meter.

Multiply this system sensitivity parameter by 3.7 Watts per square meter, which is the IPCC’s value for the radiative forcing from a doubling of the concentration of CO2 in the atmosphere (obtained not by measurement but by inter-comparison between three radiative-transfer models: see Myhre et al., 1998). The system sensitivity emerges. It is just 1.2[1.0, 1.4] K per CO2 doubling, not the 3.3[2.0, 4.5] K imagined by the IPCC.
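The division and multiplication just described can be sketched thus:

```python
def system_sensitivity(warming=33.4, forcing=101.0, co2_doubling=3.7):
    """System sensitivity parameter (K per W m^-2) scaled to the forcing
    from a doubling of CO2, per the method in the text."""
    return warming / forcing * co2_doubling

print(round(system_sensitivity(), 1))               # central: 1.2 K
print(round(system_sensitivity(forcing=125.0), 1))  # low:     1.0 K
print(round(system_sensitivity(forcing=86.0), 1))   # high:    1.4 K
```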
Observe that this result is near-identical to the textbook sensitivity to a doubling of CO2 concentration where temperature feedbacks are absent or sum to zero. From this circumstance, it is legitimate to deduce that temperature feedbacks may well in fact sum to zero or thereby, as measurements by Lindzen & Choi (2009, 2011) and Spencer & Braswell (2010, 2011) have compellingly demonstrated.

Therefore, the IPCC’s assumption that strongly net-positive feedbacks approximately triple the pre-feedback climate sensitivity appears to be incorrect. And, if Mr. Coray were right to say that the warming caused by all of the greenhouse gases is just 9 K rather than 33 K, then the system sensitivity would of course be still lower than the 1.2 K we have determined above.
This simple method of determining the system climate sensitivity is quite robust. It depends upon just three parameters: the textbook value of 33.4 K for the “global warming” that arises from the presence as opposed to the absence of the greenhouse gases in the atmosphere; Kiehl & Trenberth’s value of around 101 Watts per square meter for the total radiative forcing from the top five greenhouse gases (taking all other greenhouse gases into account would actually lower the system sensitivity still further); and the IPCC’s own current value of 3.7 Watts per square meter for the radiative forcing from a doubling of atmospheric CO2 concentration.

However, it is necessary also to demonstrate that the climate sensitivity of the industrial era since 1750 is similar to the system sensitivity – i.e., that there exist no special conditions today that constitute a significant departure from the happily low system sensitivity that has prevailed, on average, since the first wisps of the Earth’s atmosphere formed.
Thanks to the recent bombshell result of the Carbon Dioxide Information and Analysis Center in the US (Blasing, 2011), the industrial-era sensitivity may now be as simply and as robustly demonstrated as the system sensitivity. Dr. Blasing has estimated that manmade forcings from all greenhouse gases since 1750 are as much as 3.1 Watts per square meter, from which we must deduct 1.1 Watts per square meter to allow for manmade negative radiative forcings, notably including the soot and other particulate aerosols that act as little parasols sheltering us from the Sun.
The net manmade forcing since 1750, therefore, is about 2 Watts per square meter. According to Hansen (1984), there had been 0.5 K of “global warming” since 1750, and there has been another 0.3 K of warming since 1984, making 0.8 K in all. We can check this by calculating the least-squares linear-regression trend on the Central England Temperature Record since 1750, which shows 0.9 K of warming. So 0.8 K warming since 1750 is in the right ballpark.
The IPCC says that we caused between half and all of the warming since 1750 – i.e. 0.6[0.4, 0.8] K. Divide this interval by the net industrial-era anthropogenic forcing of 2 Watts per square meter, and multiply by 3.7 Watts per square meter as before, and the industrial-era sensitivity is 1.1[0.7, 1.5] K, which neatly and remarkably embraces the system sensitivity of 1.2[1.0, 1.4] K. So the industrial-era sensitivity is near-identical to the low and harmless system sensitivity.
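Again, a few lines of arithmetic suffice to check (a sketch only):

```python
def industrial_era_sensitivity(warming, net_forcing=2.0, co2_doubling=3.7):
    """Industrial-era sensitivity: observed anthropogenic warming divided
    by the net anthropogenic forcing, scaled to a CO2 doubling."""
    return warming / net_forcing * co2_doubling

for w in (0.4, 0.6, 0.8):  # IPCC's interval of manmade warming since 1750
    print(round(industrial_era_sensitivity(w), 1))
# The interval embraces the system sensitivity of 1.2 K found earlier.
```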
Will the IPCC take any notice of fundamental results such as these that are at odds with its core assumption of a climate sensitivity thrice what we have here shown it to be? I have seen the first draft of the chapter on climate sensitivity and, as in previous reports, the IPCC either sneeringly dismisses or altogether ignores the growing body of data, results and papers pointing to low sensitivity. It confines its analysis only to those results that confirm its prejudice in favor of very high sensitivity.

In Durban I had the chance to discuss the indications of low climate sensitivity with influential delegates from the US and other key nations. I asked one senior US delegate whether his officials had told him – for instance – that sea level has been rising over the past eight years at a rate equivalent to just 2 inches per century. He had not been told, and was furious that he had been misled into thinking that sea level was rising at a dangerous rate.

Having gained his attention, I outlined the grounds for suspecting low climate sensitivity and asked him whether he had been told that there was a growing body of credible and robust evidence that climate sensitivity is small, harmless, and even beneficial. He had not been told that either. Now he and other delegates are beginning to ask the right questions. If the IPCC adheres to its present draft and fails to deal with arguments such as that which I have sketched here, the nations of the world will no longer heed it. It must fairly consider both sides of the sensitivity question, or die.

Sense and Sensitivity II – the sequel
By Christopher Monckton of Brenchley
[Originally Posted on January 15, 2012 by Anthony Watts]
Joel Shore, who has been questioning my climate-sensitivity calculations, just as a good skeptic should, has kindly provided at my request a reference to a paper by Dr. Andrew Lacis and others at the Goddard Institute of Space Studies to support his assertion that CO2 exercises about 75% of the radiative forcings from all greenhouse gases, because water vapor, the most significant greenhouse gas because of its high concentration in the atmosphere, condenses out rapidly, while the non-condensing gases, such as CO2, linger for years.
Dr. Lacis writes in a commentary on his paper: “While the non-condensing greenhouse gases account for only 25% of the total greenhouse effect, it is these non-condensing GHGs that actually control the strength of the terrestrial greenhouse effect, since the water vapor and cloud feedback contributions are not self-sustaining and, as such, only provide amplification.”
Dr. Lacis’ argument, then, is that the radiative forcing from water vapor should be treated as a feedback, because if all greenhouse gases were removed from the atmosphere most of the water vapor now in the atmosphere would condense or precipitate out within ten years, and within 50 years global temperatures would be some 21 K colder than the present.
I have many concerns about this paper, which – for instance – takes no account of the fact that evaporation from the surface occurs at thrice the rate imagined by computer models (Wentz et al., 2007). So there would be a good deal more water vapor in the atmosphere even without greenhouse gases than the models assume.
The paper also says the atmospheric residence time of CO2 is “measured in thousands of years”. Even the IPCC, prone to exaggeration as it is, puts the residence time at 50-200 years. On notice I can cite three dozen papers dating back to Revelle in the 1950s that find the CO2 residence time to be just seven years, though Professor Lindzen says that for various reasons 40 years is a good central estimate.
Furthermore, it is questionable whether the nakedly political paragraph with which the paper ends should have been included in what is supposed to be an impartial scientific analysis. To assert without evidence that beyond 300-350 ppmv CO2 concentration “dangerous anthropogenic interference in the climate system would exceed the 25% risk tolerance for impending degradation of land and ocean ecosystems, sea-level rise [at just 2 inches per century over the past eight years, according to Envisat], and inevitable disruption of socioeconomic and food-producing infrastructure” is not merely unsupported and accordingly unscientific: it is rankly political.
One realizes that many of the scientists at GISS belong to a particular political faction, and that at least one of them used to make regular and substantial donations to Al Gore’s re-election campaigns, but learned journals are not the place for über-Left politics.
My chief concern, though, is that the central argument in the paper is in effect a petitio principii – a circular and accordingly invalid argument in which one of the premises – that feedbacks are strongly net-positive, greatly amplifying the warming triggered by a radiative forcing – is also the conclusion.
The paper turns out to be based not on measurement, observation and the application of established theory to the results but – you guessed it – on playing with a notorious computer model of the climate: GISS ModelE. The model, in effect, assumes very large net-positive feedbacks for which there is precious little reliable empirical or theoretical evidence.
At the time when Dr. Lacis’ paper was written, ModelE contained “flux adjustments” (in plain English, fudge-factors) amounting to some 50 Watts per square meter, many times the magnitude of the rather small forcing that we are capable of exerting on the climate.
Dr. Lacis says ModelE is rooted in well-understood physical processes. If that were so, one would not expect such large fudge-factors (mentioned and quantified in the model’s operating manual) to be necessary.
Also, one would expect the predictive capacity of this and other models to be a great deal more successful than it has proven to be. As the formidable Dr. John Christy of NASA has written recently, in the satellite era (most of which in any event coincides with the natural warming phase of the Pacific Decadal Oscillation) temperatures have been rising at a rate between a quarter and a half of the rate that models such as ModelE have been predicting.
It will be helpful to introduce a little elementary climatological physics at this point – nothing too difficult (otherwise I wouldn’t understand it). I propose to apply the IPCC/GISS central estimates of forcing, feedbacks, and warming to what has actually been observed or inferred in the period since 1750.
Let us start with the forcings. Dr. Blasing and his colleagues at the Carbon Dioxide Information and Analysis Center have recently determined that total greenhouse-gas forcings since 1750 are 3.1 Watts per square meter.
From this value, using the IPCC’s table of forcings, we must deduct 35%, or 1.1 Watts per square meter, to allow for negative anthropogenic forcings, notably the particles of soot that act as tiny parasols sheltering us from the Sun. Net anthropogenic forcings since 1750, therefore, are 2 Watts per square meter.
We multiply 2 Watts per square meter by the pre-feedback climate-sensitivity parameter 0.313 Kelvin per Watt per square meter, so as to obtain warming of 0.6 K before any feedbacks have operated.
Next, we apply the IPCC’s implicit centennial-scale feedback factor 1.6 (not the equilibrium factor 2.8, because equilibrium is thousands of years off: Solomon et al., 2009).
Accordingly, after all feedbacks over the period have operated, a central estimate of the warming predicted by ModelE and other models favored by the IPCC is 1.0 K.
We verify that the centennial-scale feedback factor 1.6, implicit rather than explicit (like so much else) in the IPCC’s reports, is appropriate by noting that 1 K of warming divided by 2 Watts per square meter of original forcing is 0.5 Kelvin per Watt per square meter, which is indeed the transient-sensitivity parameter for centennial-scale analyses that is implicit (again, not explicit: it’s almost as though They don’t want us to check stuff) in each of the IPCC’s six CO2 emissions scenarios and also in their mean.
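The chain of multiplications in the last few paragraphs can be sketched end-to-end:

```python
def industrial_era_warming(gross_forcing=3.1, negative_forcing=1.1,
                           planck=0.313, feedback_factor=1.6):
    """Net anthropogenic forcing since 1750, times the pre-feedback
    Planck parameter, times the IPCC's implicit centennial-scale
    feedback factor; returns (net forcing, warming after feedbacks)."""
    net = gross_forcing - negative_forcing         # 2.0 W m^-2
    pre_feedback = net * planck                    # ~0.6 K
    return net, pre_feedback * feedback_factor     # ~1.0 K

net, warming = industrial_era_warming()
print(round(warming, 1))        # 1.0 K after feedbacks
print(round(warming / net, 2))  # 0.5 K per W m^-2 transient parameter
```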
Dr. Lacis’ paper is saying, in effect, that 80% of the forcing from all greenhouse gases is attributable to CO2. The IPCC’s current implicit central estimate, again in all six scenarios and in their mean, is in the same ballpark, at 70%.
However, the IPCC’s own forcing function for CO2 – 5.35 times the natural logarithm of the ratio of the perturbed to the unperturbed concentration, here 390 ppmv and 280 ppmv respectively over the period of study – gives just 1.8 Watts per square meter.
Multiply this by the IPCC’s transient-sensitivity factor 0.5 and one gets 0.9 K – which, however, is the whole of the actual warming that has occurred since 1750. What about the 20-30% of warming contributed by the other greenhouse gases? That is an indication that the CO2 forcing may have been somewhat exaggerated.
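The sum runs as follows (a sketch of the arithmetic in the text):

```python
import math

def co2_forcing(perturbed=390.0, unperturbed=280.0):
    """IPCC forcing function for CO2 (Myhre et al., 1998):
    5.35 ln(C / C0), in W m^-2."""
    return 5.35 * math.log(perturbed / unperturbed)

f = co2_forcing()
print(round(f, 1))        # 1.8 W m^-2 from CO2 alone since 1750
print(round(f * 0.5, 1))  # 0.9 K at the transient-sensitivity parameter 0.5
```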
The IPCC, in its 2007 report, says no more than that between half and all of the warming observed since 1950 (and, in effect, since 1750) is attributable to us. Therefore, 0.45-0.9 K of observed warming is attributable to us. Even taking the higher value, if we use the IPCC/GISS parameter values and methods CO2 accounts not for 70-80% of observed warming over the period but for all of it.
In response to points like this, the usual, tired deus ex machina winched creakingly onstage by the IPCC’s perhaps too-unquestioning adherents is that the missing warming is playing hide-and-seek with us, lurking furtively at the bottom of the oceans waiting to pounce. However, elementary thermodynamic considerations indicate that such notions must be nonsense.
None of this tells us how big feedbacks really are – merely what the IPCC imagines them to be. Unless one posits very high net-positive feedbacks, one cannot create a climate problem. Indeed, even with the unrealistically high feedbacks imagined by the IPCC, there is not a climate problem at all, as I shall now demonstrate.
Though the IPCC at last makes explicit its estimate of the equilibrium climate sensitivity parameter (albeit that it is in a confused footnote on page 631 of the 2007 report), it is not explicit about the transient-sensitivity parameter – and it is the latter, not the former, that will be policy-relevant over the next few centuries.
So, even though we have reason to suspect there is a not insignificant exaggeration of predicted warming inherent in the IPCC’s predictions (or “projections”, as it coyly calls them), and a still greater exaggeration in GISS ModelE, let us apply their central estimates – without argument at this stage – to what is foreseeable this century.
The IPCC tells us that each of the six emissions scenarios is of equal validity. That means we may legitimately average them. Let us do so. Then the CO2 concentration in 2100 will be 712 ppmv compared with 392 ppmv today. So the CO2 forcing will be 5.35 ln(712/392), or 3.2 Watts per square meter, which we divide by 0.75 (the average of the GISS and IPCC estimates of the proportion of total greenhouse forcings represented by CO2) to allow for the other greenhouse gases, making 4.25 Watts per square meter.
We reduce this value by about 35% to allow for negative forcings from our soot-parasols etc., giving 2.75 Watts per square meter of net anthropogenic forcings between now and 2100.
Next, multiply by the centennial-scale transient-sensitivity parameter 0.5 Kelvin per Watt per square meter. This gives us a reasonable central estimate of the warming to be expected by 2100 if we follow the IPCC’s and GISS’ methods and values every step of the way. And the warming we should expect this century if we do things their way? Well, it’s not quite 1.4 K.
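Put together, the projection of the last three paragraphs runs like this (a sketch, with the 35% negative-forcing deduction applied as a factor of 0.65):

```python
import math

def warming_to_2100(co2_2100=712.0, co2_now=392.0, co2_share=0.75,
                    negative_fraction=0.35, transient=0.5):
    """IPCC-style central estimate of anthropogenic warming to 2100,
    following the steps in the text."""
    co2 = 5.35 * math.log(co2_2100 / co2_now)   # ~3.2 W m^-2 from CO2
    all_ghg = co2 / co2_share                   # ~4.25 W m^-2, all GHGs
    net = all_ghg * (1.0 - negative_fraction)   # ~2.75 W m^-2 net
    return net * transient

print(round(warming_to_2100(), 1))  # not quite 1.4 K
```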
Now we go back to that discrepancy we noted before. The IPCC says that between half and all of the warming since 1950 was our fault, and its methods and parameter values seem to give an exaggeration of some 20-30% even if we assume that all of the warming since 1950 was down to us, and a very much greater exaggeration if only half of the warming was ours.
Allowing for this exaggeration knocks back this century’s anthropogenic warming to not much more than 1 K – about a third of the 3-4 K that we normally hear so much about.
Note how artfully this tripling of the true rate of warming has been achieved, by a series of little exaggerations which, when taken together, amount to a whopper. And it is quite difficult to spot the exaggerations, not only because most of them are not all that great but also because so few of the necessary parameter values to allow anyone to spot what is going on are explicitly stated in the IPCC’s reports.
The Stern Report in 2006 took the IPCC’s central estimate of 3 K warming over the 21st century and said that the cost of not preventing that warming would be 3% of 21st-century GDP. But GDP tends to grow at 3% a year, so, even if the IPCC were right about 3 K of warming, all we’d lose over the whole century, even on Stern’s much-exaggerated costings (he has been roundly criticized for them even in the journal of which he is an editor, World Economics), would be the equivalent of the GDP growth that might be expected to occur in the year 2100 alone. That is all.
To make matters worse, Stern used an artificially low discount rate for inter-generational cost comparison which his office told me at the time was 0.1%. When he was taken apart in the peer-reviewed economic journals for using so low a discount rate, he said the economists who had criticized him were “confused”, and that he had really used 1.4%. William Nordhaus, who has written many reviewed articles critical of Stern, says that it is quite impossible to verify or to replicate any of Stern’s work because so little of the methodology is explicit and available. And how often have we heard that before? It is almost as if They don’t want us to check stuff.
The absolute minimum commercially-appropriate discount rate is equivalent to the minimum real rate of return on capital – i.e. 5%. Let us oblige Stern by assuming that he had used a 1.4% discount rate and not the 0.1% that his office told me of.
Even if the IPCC is right to try to maintain – contrary to the analysis above, indicating 1 K manmade warming this century – that we shall see 3 K warming by 2100 (progress in the first one-ninth of the century: 0 K), the cost of doing nothing about it, discounted at 5% rather than 1.4%, comes down from Stern’s 3% to just 0.5% of global 21st-century GDP.
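The sensitivity of the result to the discount rate can be sketched in Python. This is a hedged illustration, not Stern's actual methodology (which, as noted above, is not fully published): it assumes a single unit of damage falling a century ahead, and simply shows how much a 1.4% rate inflates its present value relative to a 5% commercial rate:

```python
# Hedged sketch of discount-rate sensitivity. Assumption: one unit of
# climate damage incurred 100 years ahead; not Stern's damage profile.
def present_value(cost, rate, years):
    """Present value of a single cost incurred `years` ahead."""
    return cost / (1 + rate) ** years

pv_low  = present_value(1.0, 0.014, 100)   # Stern's stated 1.4 % rate
pv_high = present_value(1.0, 0.05, 100)    # a 5 % commercial rate
print(round(pv_low / pv_high, 1))          # the low rate inflates the PV roughly 33-fold
```

The exact reduction from 3% to 0.5% of GDP depends on when the damages fall within the century, but the direction and rough magnitude of the effect are clear from this toy calculation.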
No surprise, then, that the cost of forestalling 3 K of warming would be at least an order of magnitude greater than the cost of the climate-related damage that might arise if we just did nothing and adapted, as our species does so well.
But if the warming we cause turns out to be just 1 K by 2100, then on most analyses that gentle warming will be not merely harmless but also beneficial. There will be no net cost at all. Far from it: there will be a net economic benefit.
And that, in a nutshell, is why governments should shut down the UNFCCC and the IPCC, cut climate funding by at least nine-tenths, de-fund all but two or three computer models of the climate, and get back to addressing the real problems of the world – such as the impending energy shortage in Britain and the US because the climate-extremists and their artful nonsense have fatally delayed the building of new coal-fired and nuclear power stations that are now urgently needed.
Time to get back down to Earth and use our fossil fuels, shale gas and all, to give electricity to the billions that don’t have it: for that is the fastest way to lift them out of poverty and, in so doing, painlessly to stabilize the world’s population. That would bring real environmental benefits.
And now you know why building many more power stations won't hurt the climate, and why – even if there were a real risk of 3 K warming this century – it would be many times more cost-effective to adapt to it than to try to stop it.
As they say at Lloyd's of London, "If the cost of the premium exceeds the cost of the risk, don't insure." And even that apophthegm presupposes that there is a risk – which in this instance there isn't.
– The Viscount Monckton of Brenchley