Climate Alchemy - Turning Hot Air Into Gold
I have been having an ongoing discussion at Jane Galt about climate change. The discussion has been wide ranging, but what I want to focus on is the input data for the climate models and some of the problems with the models themselves.
I'm going to reprise my remarks here with block quotes. Block quotes in italics will be saved for other commenters. Revised and extended of course.
So let us look at the temperature record and how much reliance we should place on the data:
Temperature measurement instruments are not well enough calibrated to measure a 1 deg F signal over a 100 year span. A really expensive and well calibrated instrument re-calibrated weekly could probably measure down to the .1 deg F level in the field. If you have an above average instrument calibrated yearly you might get 1 deg F. Now try extrapolating that back to the days of glass thermometers and humans reading the numbers.
And you want to tell me that within that error band you can find a 1 deg. F (.6 deg. C) signal (temp rise over the last 100 years)?
Oh yeah. Moving the measuring site 30 ft could easily cause as much as 1 deg F difference due to micro climate. Would you care to bet on how many measuring stations have moved 30 ft in the last 100 years? Would you want to make a trillion dollar bet?
OK. We are all libertarians here. When do I get my share of the really good hemp?
I'm an electronics engineer by trade. I worked in aerospace which is one step below rocket science. Let me add that my aerospace specialty was instrumentation and control. So it is quite possible that I actually know something about the problems of measurement and control.
I never got offered the hemp. Just as well. :-)
Commenter Brian Despain (who is excellent on this topic BTW) on March 14, 2007 at 5:28 PM said in a list of talking points:
d) Sorry, the urban heat island effect has been accounted for in the models.
The urban heat island effect is the idea that a measuring station will in part be measuring the heat output of the surrounding population and industry if a city grows up around it. Basically, heating and air conditioning affect the local temperature and give a long term signal that looks like global warming when there is no actual change in the local climate. Or it will exaggerate the warming, or reduce a cooling signal, depending on what is actually happening. I cribbed some of my information from Climate Audit, which looks at the heat island effect in Russia. The comments are chock full of stuff you never hear about from the "CO2 is causing global warming" folks.
There is some doubt as to whether the heat island correction is correct.
When the signal equals the noise the value of the data is very questionable. Typically, at a minimum, you want a signal that is twice the noise contribution FROM ALL SOURCES.
Hard to tell, since the models and the data used in them are hard to extract from the "scientists".
In any case the error band is assumed to be .006 deg C per decade, which is .06 deg C per 100 years, or 10% of the ESTIMATED global warming signal. And that is the best case estimate of uncorrected error from one cause. How many other causes of error are there? Enough so that the signal equals the noise?
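To put some arithmetic behind that question, here is a minimal sketch of how independent error sources combine and how the total compares to the claimed signal. The individual error magnitudes are my own illustrative assumptions, not measured values from any station network.

```python
# Illustrative error budget. The magnitudes below are assumptions made for
# the sake of the example, not measured values from any station network.
import math

signal_c = 0.6  # claimed warming over ~100 years, deg C

# Hypothetical uncorrected error sources over the same century, deg C
error_sources = {
    "instrument calibration drift": 0.3,
    "station moves / micro-climate": 0.3,
    "residual urban heat island": 0.06,   # the .006 deg C/decade figure
    "observer and reading practice": 0.2,
}

# Root-sum-square combination, which assumes the sources are independent
total_error = math.sqrt(sum(v ** 2 for v in error_sources.values()))

print(f"combined error : {total_error:.2f} deg C")
print(f"claimed signal : {signal_c:.2f} deg C")
print(f"signal / noise : {signal_c / total_error:.2f}  (want at least 2)")
```

With those assumed numbers the signal is only a little larger than the combined noise, nowhere near a 2X margin, which is the point: the conclusion is only as good as the error budget behind it.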
Brian says:
Why is it that we don't see critiques in specific climate models?
I think he means "Why is it that we don't see critiques of specific climate models?". Which is the question I answer.
Simple really. The "scientists" do not make their models or data sets public except under duress. Even with pressure the releases are often partial.
I have designed servo systems where the models are excellent and based on first principles (physics and control theory). A very good model will give results within 10% of real performance. And we are to assume that the much more complicated climate models are within 1%? I don't think so.
Climate models (as far as is known) are full of fudge factors to make the numbers come out right. So the models are probably not good predictors and are not based on first principles.
Just one example - an error of 1% in the "cloud" factors would destroy model validity. Doable, you say? The data sets are no better than 1% accurate. Probably worse. Plus it is not known if the sign of the cloud factor is positive or negative, let alone within 1% of the real value. It is just assumed to be some value.
GIGO
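To see how touchy the cloud factor could be, here is a toy feedback calculation. The numbers are made up for the illustration and are not taken from any climate model; the point is only that a small shift in an assumed feedback fraction, or getting its sign wrong, moves the answer a lot.

```python
# Toy feedback calculation - illustrative numbers only, not from any model.
# Standard feedback form: response = direct_response / (1 - f)
direct_response_c = 1.2  # assumed warming with no feedback at all, deg C

for f in (-0.3, 0.0, 0.5, 0.55, 0.6):  # assumed net feedback fractions
    gain = 1.0 / (1.0 - f)
    print(f"feedback f = {f:+.2f} -> warming = {direct_response_c * gain:.2f} deg C")
```

Moving f from 0.50 to 0.60 changes the answer from 2.4 to 3.0 deg C, and near f = 0.6 each 0.01 change in f moves the output by roughly 2.5%. A negative f gives less warming than no feedback at all. If the cloud term is not known to that precision, or even in sign, that spread is built into the output.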
We are to spend trillions based on models that are poor and data that is far from solid at the required accuracy?
Measurement to high enough accuracy is very difficult today. Then go back 100 years (the start of the data) and the accuracy is worse.
Based on unknown models (where the transfer function factors are not known to the required accuracy) and poor data sets you want to place a trillion dollar bet?
It is a con game.
Which just goes to show that if stuff is complicated enough you can design a good con without too much effort if you can conceal your methods. Better than alchemy.
Hot air into gold. Climate alchemy.
Some posters in the thread are suggesting places to look for honest sceptics.
Nir Shaviv's sciencebits.com blog, in particular the CO2orSolar post and comments.
I also highly recommend Nir's stuff.
For the more technically minded:
www.climateaudit.org
From Nir Shaviv's blog:
Another interesting point is that the IPCC and the climate community as a whole prefer to rely on global circulation models which cannot predict the sensitivity (to better than a factor of 3 uncertainty!), and they choose to ignore the empirical evidence which shows that the climate sensitivity is actually on the low side.
IPCC bias
More from Nir:
Second, if he would have sent me that article before publishing, I would have pointed out various inaccuracies in it. Here are some details in the article:
* "Against the grain: Some scientists deny global warming exists" - given that I am the only scientist mentioned in the article, I presume it is meant to describe me. So, no, I don't deny global warming exists. Global warming did take place in the 20th century, the temperature increased after all. All I am saying is that there is no proof that the global warming was anthropogenic (IPCC scientists cannot even predict a priori the anthropogenic contribution), and not due to the observed increase in solar activity (which in fact can be predicted to be a 0.5±0.2°C contribution, i.e., most of the warming). Moreover, because empirically Earth appears to have a low sensitivity, the temperature increase by the year 2100AD will only be about 1°C (and not 2°C and certainly not 5°C), assuming we double the amount of CO2 (It might certainly be less if we'll have hot fusion in say 50 years).
No Interview
Did he say hot fusion? You know how that gets me going.
Hot fusion has very good prospects:
Easy Low Cost No Radiation Fusion
So far this program can't even get $2 million in funding for the preliminary research. Or the $200 million for a test reactor.
So tell me. If AGW is real why haven't the AGW folks latched on to this?
I claim politics. A cheap and easy solution is not in the interest of the control freaks.
If spending trillions on reducing CO2 output is a good idea why is it so hard to raise .02% of that amount for a program that will reduce electrical costs by a factor of 10X and be ready for production in 5 to 7 years?
Such a lowering of electricity production costs would ensure a very quick roll out.
Then I look at some of the known model uncertainties. Nir again.
The 15% temperature variations were a misquote by Kathleen Wong from a long ago published article in the California Wild. Her error was that she accidentally omitted "cloud cover". It should have been "as much as a 15% variation in the cloud cover, which cause large temperature variations" (of order 5 to 10 degs).
From the above "No Interview" link.
Then there is this great reply from one of the commenters:
M. Simon:
Thank you for the points as to the relative accuracy of air temperature measurements. I have similar experience and training (MSEE) and agree that a +/- 1 degree F accuracy over any reasonable time span is about the best one could expect from the current location and equipment contained in the typical NOAA site. Considering those records and comparing them to prior records obtained from mercury thermometers adds another source of variance. Yet, we are told that a supposed increase of 0.6 degrees C in mean annual "Global" temperature since 1900 (?) is not only accurate but meaningful. Prior to November of '06, NOAA's records indicated that the decade of the 1930's was the warmest decade of the recent past and that, at least in Ohio, the warmest year was not 1998 (or 1999 depending on what news conference one selects) but rather 193x. However, Dr. Hansen, et al, have now quietly "adjusted" those records so that the 1930 temperatures are now in "3rd place". Reportedly, a "computer error" resulted in the loss of the prior data. Additional comments about this development can be found at climateaudit.org. Of course, this is just my opinion and according to previous posts, I am biased against science - see my posting "name" to confirm that.
Posted by: MikeinAppalachia on March 15, 2007 1:07 PM
Then I respond to another of Brian's points.
Brian says:
However the speed of the current change is different than previous climate changes.
Are you telling me we can go back say 3 million years and get data in even 10 year increments for that period?
If your sampling rate is less than 2X the frequency you are trying to assess, the data points are useless. It is called the Nyquist limit, for those interested in data reconstruction. Normally for rate of change stuff, if you want an accurate value, you would like data at 5X or 10X the frequency you want to see.
Looking at data points 1,000 years apart tells nothing about 100 year rates of change. For 100 year rates of change data every 10 years would be good and every 5 years would be better.
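Here is a small synthetic illustration of that point; the numbers are invented for the demo and are not real proxy data. A 100-year-period wiggle riding on a slow trend is completely invisible when there is only one sample every 1,000 years, so such a record can say nothing about 100-year rates of change.

```python
# Synthetic illustration of the sampling-resolution point - not real proxy data.
import numpy as np

years = np.arange(0, 10_000)                      # "true" yearly record
slow_trend = 0.0005 * years                       # slow background drift, deg
wiggle = 0.5 * np.sin(2 * np.pi * years / 100)    # 100-year-period signal, deg
temp = slow_trend + wiggle

for step in (1000, 10, 5):                        # sampling interval in years
    idx = np.arange(0, len(years), step)
    detrended = temp[idx] - slow_trend[idx]       # what is left after the trend
    seen = detrended.max() - detrended.min()
    print(f"one sample every {step:4d} yr -> apparent 100-yr wiggle "
          f"peak-to-peak: {seen:.2f} deg (true value 1.00)")
```

At a 1,000-year spacing the century-scale wiggle simply never shows up in the samples; at 10-year or 5-year spacing it is recovered almost exactly, which is the Nyquist point in miniature.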
Then in response to the general discussion:
Re: clouds,
We don't even know if we have the sign right let alone the magnitude and yet clouds are admitted by all to be one of the most significant variables.
Since clouds are so significant and the latest experiments show that the sun's magnetic field has a very large influence on clouds (through cosmic rays) then the current models which do not include solar effects on cloud formation are USELESS except for doing very general sensitivity studies.
If current models can predict the past without getting clouds right the models are either lucky or faked.
So are ya feeling lucky?
Back to Brian again:
So Brian,
What is your opinion on the Nyquist limit vs the current data sets? For prehistoric data sets? For geologic data sets? The resolution gets worse the farther back you go.
Given that micro-climate can have a large (1 deg F) impact on the data and a move of as little as 30 ft can cause such a change do we have the information on the location of the climate stations to within 30ft for the last 100 years?
Real science is hard.
Brian eventually gets back to me on the Nyquist question and admits I have made a good point. First I'm going to cover another of Brian's points about clouds.
This is an assertion - the experiments showed an effect. You have added qualifiers such as "very large" - how large is yet to be determined.
Yep. And until we know how large, the models are not to be trusted for making trillion dollar bets.
BTW we do not even properly know (leaving out the cosmic ray stuff) if the cloud factor is currently reinforcing or counteracting. It is assumed reinforcing, i.e. worst case.
So OK the models show the worst case based on our present state of knowledge.
So do you want to place a trillion dollar bet based on the dice showing sevens 10 times in a row?
I work with model builders for servo systems where everything is bounded. The environment is benign. Measurement is easy and accurate (better than 1%) and we still feel good about a result within 10% of the real world. Climate is 100X harder. Data accuracy is worse, and you want me to believe that you can come in at better than 1% (3 deg C) accuracy?
You know I find that hard to swallow.
It has to be based on faith, not engineering (my field) or science.
I'm looking forward to your exposition on the Nyquist limit with reference to data reconstruction.
"What is your opinion on the Nyquist limit vs the current data sets? For prehistoric data sets? For geologic data sets? The resolution gets worse the farther back you go."Brian is starting to think about the data and its reconstruction. Excellent.
Background for everyone else.
Nyquist_frequency
Simon has a good point. The proxy data we have for older climate data is in my mind crap. Other than ice cores, the various other proxy data (Bristlecone pines etc) are not too trustworthy. Modern data sets are vastly superior (and far larger)
"do we have the information on the location of the climate stations to within 30ft for the last 100 years?"
This might surprise you but yes we have their location. Weather stations are largely used in weather prediction which is pretty important in modern society. I am a little interested in your resolution of 30 ft - How did you determine it? Most weather stations have had fixed locations for a number of years. It makes forecasting difficult to randomly move things around.
Posted by: Brian Despain on March 15, 2007 9:45 PM
Brian is starting to think about the data and its reconstruction. Excellent.
Brian,
Good on ya re: Nyquist. However, that calls your "unprecedented rate of change" statement into question.
Yes the models are on to something. They bound the problem. However, they are suspect because they all come in at the high end of the range. If the cloud sign is negative they would come in at the low end. Or possibly even show that CO2 in the current range makes no difference.
As to moving 30 ft for recording stations. That is a heuristic I gathered from a guy who has multiple sensors in his back yard. (comment #75)
BTW did the Russian or Chinese stations move in the last 100 years? Did they keep good records during their many political upheavals? I trust American stations. We have been very politically stable since about 1865. The rest of the world is not so fortunate.
Apply the 30 ft rule to ocean air temperatures measured on the bridge of a ship. Maybe 10 ft off the ocean in wooden ship days and maybe 90 ft with steel vessels. Then you have the problem of powered ships changing the micro-climate vs sailing ships.
Climate Audit
has some good bits on the location stability of recording stations around the world. In any case we are placing our bets based on a measuring system not designed for climate measurements.
Then we went from glass thermometers with human readers to electronic instruments with automatic data recording. The older measurements are only as good as the person doing the measurements. Were they done by random duty officers (for ocean data) or someone who cared? Were the thermometers recalibrated at the end of the voyage and corrections made for instrument drift? Were the logs faked (a common practice if measurements were inadvertently skipped - I'm an ex Navy Nuke so I know how these things are done in the real world)? How accurately were lat. and long. known? Before GPS it depended on the skill of the navigator. Errors of 1 mi. in sailing days were very common. Errors of 10 mi. frequent. Errors of 60 mi. not unknown. How well were chronometers kept in 1865? Was recalibration done at the end of the voyage and were positions corrected for chronometer drift?
Also: Increases in solar output over the last 100 years account for .5 (+/- .2) deg C of the .6 deg C change. Meaning that increased CO2 probably accounts for maybe 20% of the "recorded" temperature change. The cloud thingy could cover the rest.
So it is more than possible that increased CO2 accounts for approximately zero of the change in "evidence". Do the models show this possibility? None that I've seen reported.
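The arithmetic behind that range, using the 0.6 deg C total and Nir's 0.5 +/- 0.2 deg C solar figure quoted above:

```python
# Residual attribution arithmetic using the figures quoted above.
total_warming = 0.6                  # deg C over roughly the last century
solar_mid, solar_err = 0.5, 0.2      # Nir Shaviv's 0.5 +/- 0.2 deg C estimate

residual_mid = total_warming - solar_mid                  # central case
residual_low = total_warming - (solar_mid + solar_err)    # least room for CO2
residual_high = total_warming - (solar_mid - solar_err)   # most room for CO2

print(f"non-solar residual: {residual_low:+.1f} to {residual_high:+.1f} deg C "
      f"(central value {residual_mid:+.1f})")
print(f"as a share of the total: {residual_low / total_warming:+.0%} to "
      f"{residual_high / total_warming:+.0%}")
```

The central case leaves roughly 20% of the recorded change for CO2 and everything else, and the low end of the range leaves nothing at all, which is the "approximately zero" possibility above.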
At best the models tell us what kinds of things to look for. At worst they are totally useless.
i.e. if cloud sign and magnitude are this, CO2 is important. If cloud sign and magnitude are that, CO2 is unimportant.
Yet we hear the multitudes croaking that the science is definite and results confirmed.
I smell a rat.
Brian wants some help with Nyquist. So naturally I volunteer.
Brian,
Let me help with Nyquist. At a sampling rate of 2X the frequency of interest it is possible to determine if a frequency is present if the sampling rate and the frequency to be determined are not coherent. A long data set would be required to determine the magnitude.
However, if higher frequencies than 1/2 the sampling rate are present at a decent magnitude you have an aliasing problem. i.e. the high frequencies are translated to a lower frequency band, making low frequency bins higher (or lower) than they should be.
In engineering we like to have low pass filtered data with a sampling frequency of 5X for a sharp cut off analog filter (which smears the high frequency data - phase and magnitude get screwed) or 10X or even 20X for a first order analog filter (which would smear the high frequencies much less).
Look at the early digital audio stuff. Filters very sharp and sampling at about 2.2X the highest frequencies. OK for audio since the ear is not very phase sensitive at 20KHz. Today the filters are much less sharp (which means the filters themselves are less likely to produce artifacts - which sharp analog filters do) and we sample at higher frequencies and then reduce the data with digital FIR filters which do not have the artifact problem. Which means the recordings are pretty accurate these days. Playback still has the sharp filter problem.
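A small sketch of the aliasing problem described above, using pure synthetic tones (nothing to do with any particular recording): a tone above half the sampling rate produces exactly the same samples as a much lower tone, so once it is in the data there is no way to tell them apart.

```python
# Aliasing demo with synthetic tones - illustrative only.
import numpy as np

fs = 1000.0                      # sampling rate, Hz
t = np.arange(16) / fs           # a few sample instants

f_high = 900.0                   # above fs/2, so it will alias
f_alias = fs - f_high            # it folds down to 100 Hz

high_tone = np.cos(2 * np.pi * f_high * t)
low_tone = np.cos(2 * np.pi * f_alias * t)

# At the sample instants the two tones are numerically identical.
print("max difference between the sampled 900 Hz and 100 Hz tones:",
      f"{np.max(np.abs(high_tone - low_tone)):.1e}")
```

That is why the anti-alias filter has to come before sampling; once the 900 Hz energy has landed in the 100 Hz bin, no amount of later processing can tell it from a real 100 Hz signal.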
BTW, the misunderstandings in communication between us have been excellent with respect to giving laymen an understanding of the problems involved in the science, so I count that as a gain and not a loss.
Let me also note that if the high frequency events are transient you need the higher sampling rates - 5X to 20X - for good fidelity. Otherwise the high frequencies are time smeared.
In the control area these days we like to sample at 100X or even 200X to get precision control. Not possible in the early days of relatively slow computers. The high sampling rates ensure peaks will be recorded accurately.
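A quick synthetic check of the transient point (the rates and pulse width are made-up illustration values, not from any real control loop): a pulse much narrower than the sample spacing gets its peak badly under-recorded, while heavy oversampling pins it down.

```python
# Synthetic transient-capture demo - illustrative only.
import numpy as np

def recorded_peak(sample_rate_hz, pulse_width_s=0.0002, center_s=0.01033):
    """Largest sample of a narrow Gaussian pulse whose true peak is 1.0."""
    t = np.arange(0.0, 0.02, 1.0 / sample_rate_hz)
    pulse = np.exp(-((t - center_s) / pulse_width_s) ** 2)
    return pulse.max()

for rate in (500.0, 2_000.0, 10_000.0, 100_000.0):   # samples per second
    print(f"sampling at {rate:>9,.0f} Hz -> recorded peak "
          f"{recorded_peak(rate):.2f} of true 1.00")
```

The low-rate records are not wrong in the aliasing sense so much as blind: the peak happens between samples and is simply never seen at its full height.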
The discussion sort of petered out there so I'm going to leave it at that - for now.
Commenter AMcGuinn has posted this link on weather station siting.
Cross Posted at Classical Values
2 comments:
There has been some discussion of the effects of the particular siting of weather stations, beyond the known heat island effect.
See http://climatesci.colorado.edu/2006/08/21/can-mulit-decadal-temperature-trends-from-poorly-sited-locations-be-corrected/
You have more patience than I.
And that has to be one of the best post titles ever written.
ROFLMAO!
I haven't spent a lot of time reading or worrying much about the global warming issue du jour. I'm sticking with what for many is a Luddite position.