Wednesday, May 23, 2007

More Uncertain

Here is a nice bit about Michael Mann, of climate hockey stick graph fame. [bolding is mine]

Perhaps the most damage to Michael Mann's credibility came from Michael Mann himself in his 2006 testimony before a congressional oversight committee where he stated, "Hundreds of scientists work in this field and we are a competitive bunch. We compete for scarce research dollars, academic recognition and professional standing." He further testified that the word "likely" only carried a "65% probability" and that his work in 1998 that was accepted by the IPCC was temperature reconstruction in its infancy. If anything in his 2006 testimony is valid, it is that most studies in the seven years since "...using different data and different statistical methods have re-affirmed...Northern Hemisphere warmth appears to be unprecedented over at least the past 1,000 years."

What Michael Mann and his supporters never mentioned is that solar activity is also unprecedented for the past 1,000 years, and this was known to science at that time. What was not known to science at that time is that solar activity is actually unprecedented for the last 11,000 years (Solanki et al. 2006).
So who else comes up with that 65% number? Nir Shaviv.
Evidently, we do not know the total Anthropogenic forcing. We don't know its sign. We also don't know its magnitude. All we can say is that it should be somewhere between -1 to +2 W/m². Sounds strange, but we may have actually been cooling Earth (though less likely than warming). It is for this reason that in the 1970's, concerns were raised that humanity is cooling the global temperature. The global temperature appeared to drop between the 1940's and 1970's, and some thought that anthropogenic aerosols could be the cause of the observed global cooling, and that we may be triggering a new ice-age.
Let's see. A forcing range of 3. A likely positive value of 2 (at most). Two divided by 3 in percent is 66.7%. Hey! That looks a lot like 65%.
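For what it's worth, here is that arithmetic as a minimal sketch, assuming (purely for illustration) that the net forcing is equally likely to fall anywhere in Shaviv's -1 to +2 W/m² range:

```python
# Back-of-the-envelope check of where the "65%" could come from, assuming (as a
# simplification) that the net anthropogenic forcing is equally likely to fall
# anywhere in the -1 to +2 W/m^2 range quoted above. Illustrative arithmetic only.
low, high = -1.0, 2.0                      # W/m^2, range quoted from Shaviv
p_positive = (high - 0.0) / (high - low)   # fraction of the range above zero
print(f"P(net anthropogenic forcing > 0) ~ {p_positive:.1%}")  # ~66.7%
```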

So why all the uncertainty? My crystal ball is cloudy. However, Nir to the rescue.
There is however one HUGE drawback, because of which GCMs are not suited for predicting future change in the global temperature. The sensitivity obtained by running different GCMs can vary by more than a factor of 3 between different climate models!

The above figure explains why this large uncertainty exists. Plotted are the sensitivities obtained in different GCMs (in 1989, but the situation today is very similar), as a function of the contribution of the changed cloud cover to the energy budget, as quantified using ΔQcloud/ΔT.

One can clearly see from fig. 1 that the cloud cover contribution is the primary variable which determines the overall sensitivity of the models. Moreover, because the value of this feedback mechanism varies from model to model, so does the prediction of the overall climate sensitivity. Clearly, if we were to know ΔQcloud/ΔT to higher accuracy, the sensitivity would have been known much better. But this is not the case.

The problem with clouds is really an Achilles heel for GCMs. The reason is that cloud physics takes place on relatively small spatial and temporal scales (km's and mins), and thus cannot be resolved by GCMs. This implies that clouds in GCMs are parameterized and dealt with empirically, that is, with recipes for how their average characteristics depend on the local temperature and water vapor content. Different recipes give different cloud cover feedbacks and consequently different overall climate sensitivities.
So what does he mean by small spatial scales? They are on the order of kilometers. Pretty big and hard to miss? Well, no. The models are based on chunks 250 km on a side. I don't know what temporal scales the models operate on, but it is probably a similar situation, to allow the simulations to be computed in a reasonable amount of time.
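For a sense of the mismatch, here is a toy calculation (the ~1 km cloud scale is my own assumption for the example, following Nir's "kilometers" figure):

```python
# Illustrative only: how much sub-grid detail a single 250 km GCM cell hides if
# cloud processes act on a ~1 km scale. The 250 km figure is from the post; the
# 1 km cloud scale is an assumption made for this example.
gcm_cell_km = 250.0
cloud_scale_km = 1.0
patches_per_cell = (gcm_cell_km / cloud_scale_km) ** 2
print(f"~{patches_per_cell:,.0f} cloud-scale patches per GCM grid cell")  # ~62,500
```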

Supposedly, despite all this uncertainty, the models can be made to predict the past. How can this be? Simple: you adjust other parameters, feedbacks, and lags until, with your chosen number for the cloud effect, you get outputs that look like the past. How relevant is this for the future? Not very, because there is no way to tell if you got things right, and the further into the future you look, the more likely the model is to diverge from reality.

Commenter Froblyx on this post at Classical Values has an interesting point about determining the value of models.

The way to establish that the system of equations really does describe reality is to compare its results with reality. The better the match, the more confidence we have in the results.

That is a valid point if we KNOW all the parameters involved and include them all.

Then you test it by introducing perturbations in the real world and see if the results follow the model.

Since we can't disturb just one element of the real climate system and follow the results, we have to assign values to the various sensitivities and see if what happens in the future is correct. Yet we are not sure of our models because of ALL the interactions involved. We may have assigned incorrect values to the interactions, let alone to the things we think we know well.

We currently have models that do not include the solar variation of about .5% over 300 years and the cloud/solar magnetism/cosmic ray effect. We know a priori that the values assigned to various interactions that simulate past behavior are WRONG, since they do not include these effects. In addition, the latest and better models (much better than 10 years ago, we are told) have not been around long, so we can't be sure that they model the future well, because we do not have much future to test them against.

So to be sure the models are correct we should wait a while.

BTW frob, global temperatures have been declining since 1998. Do the models tell us why? Do they explain the anomalous year of 2004 when temperatures spiked? In other words, do the models produce noisy data the way the real world does? Or is it all averaged and smoothed, i.e. a rough approximation?

To get the models to run in a reasonable amount of time, we have chunked the earth's surface into segments 250 km on a side. Is this good enough to get the required accuracy? As Nir points out above, not likely.

An excellent model of an engineered servo system, where all the inputs and outputs can be measured to within .1%, can come within +/- 1% of real world behavior. Can the climate modelers, with their much more complex system, come within that range of error? They claim a model error band of .5% from a measurement series where, at best (at least until recently), the error band is around .2% for around 70% of the data, and possibly worse, since I have seen no reports on how instrument calibrations might affect the measurements over the last 100 years.

Even in the best case of .2% data error, that hardly gives much confidence in the .5% model error band (+1.5 to +4.5 deg C of predicted change on a roughly 300 deg K base). Normally you want data that is at least 3X as good as the signal you are looking for, and the preference for reliable results is data 10X as good as the signal, to make sure you are not measuring noise. Basically, what all this means is that the estimated signal is not far from the noise level of the data. In fact, if you look at the figure referenced above, it is not impossible that the estimated signal is equal to the noise level.
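Here is a minimal sketch of that arithmetic, using only the figures quoted in this post (no actual temperature data):

```python
# A minimal sketch of the percentages argued above. All figures are taken from
# the post itself, not from any dataset: predicted warming relative to an
# absolute baseline, compared against the claimed data error band.
baseline_K = 300.0                       # rough global mean temperature, deg K
predicted_change_C = (1.5, 4.5)          # predicted warming range, deg C
signal_pct = [100 * dT / baseline_K for dT in predicted_change_C]
data_error_pct = 0.2                     # best-case measurement error band claimed above
print(f"signal: {signal_pct[0]:.1f}% to {signal_pct[1]:.1f}% of the baseline")
print(f"signal-to-noise at the low end: {signal_pct[0] / data_error_pct:.1f}x "
      f"(vs. the 3x minimum / 10x preference mentioned above)")
```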

"Pick a number" is not science. Science is when you have real data of the required accuracy and known relations between inputs and outputs. That is, the equations are fixed by scientific understanding, and thus known in advance of any predictions (even of the past), and they make reliable predictions without having to adjust the models.

Let me touch on the servo question again. You do not have 13 models of a servo system all making varying predictions. The science of servo systems is well understood. There is one model.

As I have shown, we have no such a priori model for clouds. Heck, we are not even sure of the sign, let alone the magnitude. Plus, we know for sure that the solar magnetic influence on clouds is not in the models, because that understanding was only made public within the last year.

And that is only one of many problems in the models. Take this one, mentioned by Dr. Theodor Landscheidt of the Schroeter Institute for Research in Cycles of Solar Activity, with reference to the global mean temperature:
The cyclic variation in the data cannot be explained by general circulation models in spite of the entailing great expense. There is not even an attempt to model such complex climate details, as GCMs are too coarse for such purposes. When K. Hasselmann (a leading greenhouse protagonist) was asked why GCMs do not allow for the stratosphere's warming by the sun's ultraviolet radiation and its impact on the circulation in the troposphere, he answered: "This aspect is too complex to incorporate it into models."
You have to wonder what else they are leaving out.

GIGO

H/T JR

5 comments:

Anonymous said...

The pseudo-greens have made other ridiculous assumptions based on bad data and worse models.

The meta-model for climate used by the Intergovernmental Panel on Climate Change, when fed data up to 1980, was unable to predict the regional temperatures seen from 1981-2000. The IPCC bragged that the meta-model predicted (within 2 degrees C) annual mean temperatures for >90% of regions. What the IPCC failed to mention was that predicted temperatures for arctic and antarctic regions were 6 degrees C too high. Despite the incredible overestimate of arctic and antarctic temperatures, the IPCC used this model to predict global warming and to "prove" that global warming would melt the ice caps and flood all the coastal areas of the world.

I have a climate equation:
IPCC = organized liars

Anonymous said...

The global circulation models (GCMs) are the best models that the most highly qualified people on the planet can build. There are still uncertainties - and with respect to the ultimate amount of GW, they can go either way.

Recently, at a debate on the topic, the anti-GW team trounced the mainstream scientists in terms of winning the audience. (Debates are often like that: debating is a skill of its own, not necessarily related to being correct.) But a little-noted event occurred in the Q&A session: A reporter (Rivken?) point-blank asked them if they thought global warming was actually happening. Put on the spot, they each had to admit that it was; they were just arguing about how much of it was human-caused, and how serious the impact would be.

This is a retreat from their earlier position of only a couple of years before, when they were saying GW simply wasn't happening.

Who were they? Singer, well-known skeptical physicist; Stott, somewhat well-known skeptical climatologist; and Crichton, very well-known skeptical science-fiction writer.

I'm convinced that the next time there's a shift in the anti-GW position, it will be to the "Oh, well by now it's too late to do anything about it. So we might as well party on." stage.

M. Simon said...

Neal,

Once the data was solid I changed my mind on global warming. As any honest person would.

Now that global cooling seems to have set in I may have to change my mind again.

In any case how much is human caused is still in question.

Anonymous said...

M. Simon:

Where do you see evidence of "global cooling"? Keep in mind that the climate has lots of dynamics of its own, even without external drivers like CO2 (or a putative variation in luminosity): the El Niño-Southern Oscillation (ENSO) occurs every 2 to 7 years. So you have to do a running average to smooth out jitter.
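As a toy illustration of that smoothing step (the anomaly values below are made-up placeholders, not measurements):

```python
# Toy sketch of the running-average smoothing mentioned above: averaging annual
# anomalies over a multi-year window to damp ENSO-scale jitter. The anomaly
# values are hypothetical placeholders, not real data.
def running_mean(values, window=5):
    """Return the mean of each full `window`-length slice of `values`."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

anomalies = [0.10, 0.35, 0.18, 0.22, 0.40, 0.28, 0.31]  # hypothetical deg C anomalies
print([round(v, 3) for v in running_mean(anomalies)])
```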

Here's the current view of scientists on the topic: It's not slowing down.

The prima facie case for CO2 is pretty good:

- CO2 has gone up 33% over 100 years, clearly attributable to fossil fuels (carbon-14 ratios indicate this).
- There's a well-understood model (not computerized!) that predicts this effect. It has been studied for about 100 years.
- The global average temperature has increased over the same period by an amount pretty close to the prediction.

In my opinion, there's no doubt that a grand jury would indict. And I also believe a jury would convict, shown the full evidence.

(Wouldn't it be grand if we could get as many people as tuned in to the O.J. Simpson trial to actually go through all the arguments and evidence in this matter, with expert witnesses and cross-examination? What a thought!)

M. Simon said...

Most of the man made CO2 rise was after 1940.

Most of the warming happened before 1940.

Since 1998 global temperatures have been on a slight downward trend. I would not quibble if it was called flat.

I do not disagree about noise in the system. I propose waiting another 10 years to see what happens.

BTW if the models are so good, why wasn't the flattening predicted?

We are looking for a 2% signal in a "noise" of 98% (1 deg C out of a 50 deg C annual variation). To properly find that signal you have to have long periods of averaging. Not only that, you need good data sets covering the globe.

The GCMs are parameterized in chunks 250 km on a side. To tell if those parameters are representative you would have to have weather stations every 25 km or so, especially in areas where there are lots of hills and mountains.
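A quick back-of-the-envelope version of that point (illustrative arithmetic only, using the figures above):

```python
# How many stations a single 250 km x 250 km GCM cell would need at roughly
# 25 km spacing to check whether its parameterization is representative.
# Both figures are taken from the comment above; this is only an illustration.
cell_km = 250.0
station_spacing_km = 25.0
stations_per_cell = (cell_km / station_spacing_km) ** 2
print(f"~{stations_per_cell:.0f} stations per GCM grid cell")  # ~100
```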

Not even the USA is that densely metered.

Then you have the other 70% of the planet, where data gathering is sparse and not continuous, plus methods have changed significantly over 100 years, e.g., 20 ft off the ocean in wooden ships vs. 100 ft off the ocean in oil tankers. Plus the microclimate of a fuel-burning ship is different from the microclimate of a sailing ship.