Monthly Archives: June 2011

Climate Models Not Ready For Prime Time


The preceding posting, Climate Modelers are Wizard of Oz's Spawn, noted that the backcasting used to prove the models was not scientifically viable or honest. I worked in systems operations in manufacturing facilities where solutions to problems were proposed and then tested to see if they worked in the real world. The technique of backcasting to fit an experience curve has been around for a long time. When the model seemed to match history, the "solution" resulting from that model was employed going forward. Sometimes it worked and sometimes it did not. In the real world, you have to test, test and retest your premises to assess your confidence that the solution is right. As far as I can tell, that standard of proving your solutions is not the norm in global warming climate modeling. And in my view the global climate is a vastly bigger dynamic than any of the problems we were solving in the operating facilities, so the likelihood of obtaining a high degree of certainty is problematic.
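To illustrate that "test and retest" point, here is a minimal sketch (my own construction, run on purely synthetic, made-up data) of judging a model by data it never saw rather than by how well it backcasts the fitting period:

```python
# A minimal sketch (not any modeling group's actual procedure) of out-of-sample
# testing: fit a simple trend on an early period, then score it on held-out
# later years instead of judging it only by how well it reproduces the past.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly "temperature anomaly" series; purely synthetic.
years = np.arange(1950, 2011)
anomaly = 0.01 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# Fit only on the early (training) years...
train = years < 1990
slope, intercept = np.polyfit(years[train], anomaly[train], deg=1)

# ...then check the fit against the later years it never saw.
prediction = slope * years[~train] + intercept
rmse = np.sqrt(np.mean((prediction - anomaly[~train]) ** 2))
print(f"Out-of-sample RMSE: {rmse:.3f} degrees C")
```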

Let's look at a summary of a recent posting that lists 10 issues demonstrating that the models are not ready for prime time. It comes from The Hockey Schtick blog, where more detail is provided; the full posting can be read by clicking here.

1. The IPCC admits climate models have not been verified against empirical observations to assess confidence.

2. The IPCC admits it is not clear which tests are critical for verifying and assessing confidence in the models.

3. Of 16 identified climate forcings, the IPCC admits only two have a high level of understanding. Most of the others are said to have a low level of understanding.

4. The two identified as having a high level of understanding (greenhouse gases and positive feedback) are actually not well understood: empirical satellite data show a sensitivity to doubled CO2, with feedback, of only about 0.7°C, a factor of 4 less than the IPCC climate models (a rough arithmetic check follows this list).

5. Climate models falsely assume that "back-radiation" from greenhouse gases can heat the oceans. In fact, IR radiation can penetrate the surface only a few microns, with all the energy used in the phase change of evaporation, which actually cools the oceans.

6. UV radiation is capable of penetrating the ocean to a depth of several meters. The IPCC models ignore UV.

7. The IPCC is not certain whether clouds have a net cooling or warming effect, even though it has been shown empirically that clouds are many times more important than greenhouse gases.

8. Ocean oscillations can have huge effects on climate, and these are not incorporated into the models.

9. The traditional climate models fail to reconstruct the correct amplitude of the climate oscillations that have a clear solar/astronomical signature.

10. Climate models continue to greatly exaggerate sensitivity to CO2, by 67%. Although the climate modelers have admitted this, they are unwilling or unable to adjust the models to match observed temperatures.
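As a rough check of the arithmetic in item 4 (my own back-of-envelope numbers, not The Hockey Schtick's):

```python
# If the empirical sensitivity is ~0.7 degrees C per CO2 doubling and the models
# run about 4x higher (item 4), the implied model sensitivity lands near the
# IPCC's oft-quoted ~3 degrees C mid-range value.
observed_sensitivity = 0.7        # degrees C per doubling, the claimed empirical figure
model_factor = 4                  # item 4: models said to be a factor of 4 higher
implied_model_sensitivity = observed_sensitivity * model_factor
print(implied_model_sensitivity)  # 2.8 degrees C
```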

cbdakota

Climate Modelers are Wizard of Oz’s Spawn


If you look closely, it is not demonstrated science but the climate models that are the basis for all the forecasts of the catastrophe said to result from manmade global warming. The models cited by the IPCC in its reports supposedly demonstrated that the global temperatures recorded from 1978 to 1998 could only have occurred because of additional atmospheric CO2 from the increased use of fossil fuels. Thus we are to believe that they have modeled the atmosphere so well that when the models look to the future they must give accurate projections.

But we know that these same models do not forecast worth a damn. How can it be that the models which all showed agreement with the past don't get the future right? And perhaps more importantly, why don't the future forecasts agree with one another? That mystery is explained by Warren Meyer in his 9 June 2011 posting in Forbes:

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has different climate sensitivity, only one (at most) should replicate observed data.  But they all do. 

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl. To understand his findings, we need to understand a bit of background on aerosols. Aerosols are man-made pollutants, mainly combustion products, which are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

The problem, of course, is that matching history is merely a test of the model — the ultimate goal is to accurately model the future, and arbitrarily plugging variable values to match history is merely gaming the test, not improving accuracy.

This is why, when run forward, these models seldom do a very credible job predicting the future.  None, for example, predicted the flattening of temperatures over the last decade.  And when we look at the results of these models, or at least their antecedents, from twenty years ago, they are nothing short of awful.  NASA’s James Hansen famously made a presentation to Congress in 1988 showing his model runs for the future, all of which show 2011 temperatures well above what we actually measure today.

Meyer adds that: “Rather than real science, the climate models are in some sense an elaborate methodology for disguising our uncertainty.  They take guesses at the front-end and spit them out at the back-end with three-decimal precision.  In this sense, the models are closer in function to the light and sound show the Wizard of Oz uses to make himself seem more impressive, and that he uses to hide from the audience his shortcomings.”

So there we have it: the modelers jigger the system with enough adjustable variables that the predetermined assumptions, such as the positive feedback that boosts the CO2 effect by a multiple of 3 or 4, are offset when doing the backcast, and then they drop the jiggering (in this case, the aerosols) for the future forecasts. A toy illustration of how that works follows.
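Here is a toy sketch of my own to make that concrete (it is not Kiehl's actual analysis, and every number in it is made up for illustration): three hypothetical "models" with different climate sensitivities each get exactly the aerosol forcing needed to reproduce the same historical warming, yet their forecasts diverge sharply once the aerosols are assumed to go away.

```python
# Toy "plug variable" demo: pick the aerosol forcing that makes each model's
# hindcast match history, then forecast with the aerosols removed.
observed_hist_warming = 0.6   # degrees C of historical warming to match (made up)
hist_ghg_forcing = 1.6        # W/m^2 of historical greenhouse forcing (made up)
future_ghg_forcing = 3.7      # W/m^2 of future forcing, aerosols assumed gone (made up)

for sensitivity in (0.5, 0.8, 1.2):   # degrees C per W/m^2; three hypothetical models
    # The "plug": whatever aerosol cooling is needed to reproduce history.
    aerosol_forcing = observed_hist_warming / sensitivity - hist_ghg_forcing
    hindcast = sensitivity * (hist_ghg_forcing + aerosol_forcing)
    forecast = sensitivity * future_ghg_forcing
    print(f"sensitivity {sensitivity:.1f}: aerosol plug {aerosol_forcing:+.2f} W/m^2, "
          f"hindcast {hindcast:.2f} C, future forecast {forecast:.2f} C")
```

Every run hindcasts the same 0.6 degrees, which is the point: agreement with history tells you nothing about which sensitivity, if any, is right.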

cbdakota

“Cheshire Cat Sunspots”-Livingston and Penn


Future sunspots may behave like the Cheshire Cat: "the smile is there (magnetic fields) but the body is missing (no dark markings)." Dr. Bill Livingston and Dr. Matt Penn of the National Solar Observatory have been recording the average magnetic field strength of sunspots for the past 13 years. What they have found is a decline of about 50 gauss per year during Cycle 23, continuing into Cycle 24. See their chart below:

Typical sunspot magnetic field strength registers about "2500 to 3500 gauss" based upon their research. But Cycle 24 spots are running about 2000 gauss, and Livingston and Penn estimate that if the sunspot field strength drops to 1500 gauss, "the spots will largely disappear as the magnetic field is no longer strong enough to overcome forces on the solar surface." That could occur within the next ten years, coinciding with Cycle 25.
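The ten-year figure is just the quoted numbers divided out; here is that back-of-envelope arithmetic:

```python
# Back-of-envelope timing using the figures quoted above (my arithmetic, not
# Livingston and Penn's published projection).
current_field = 2000   # gauss, roughly typical of Cycle 24 spots
threshold = 1500       # gauss, level below which spots reportedly stop forming
decline_rate = 50      # gauss per year of observed decline
years_to_threshold = (current_field - threshold) / decline_rate
print(years_to_threshold)  # 10.0 years, i.e. around the expected start of Cycle 25
```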

Traditionally the measurement of sunspots seems to have focused on visible light and magnetic flux. Livingston and Penn emphasized sunspot IR and magnetic field strength, and that has brought a new perspective which seems to correlate with the other recent discoveries announced on 14 June 2011 at the AAS meeting in Las Cruces, NM.

Dr. Leif Svalgaard used Livingston and Penn's data to illustrate the interrelationship of magnetic field strength and visibility.

The pink line is visibility, where a value of 1 means the spot is invisible. This "visibility/invisibility" terminology is somewhat counterintuitive: the spot will no longer be seen as a dark marking because its temperature at the Sun's surface is essentially the same as that of the surrounding gases. The black line is magnetic field strength.

Sunspots appear dark because they are cooler than the rest of the solar surface. From a posting by Space.com: "The dark heart of a sunspot, called the umbra, is surrounded by a brighter edge known as the penumbra, which is made of numerous dark and light filaments more than 1,200 miles long. They are relatively thin, at approximately 90 miles in width, making it difficult to resolve the detail of how they arise."

A photo of a sunspot taken in May 2010, with Earth shown to scale. The image has been colorized for aesthetic reasons. This image with 0.1 arcsecond resolution from the Swedish 1-m Solar Telescope represents the limit of what is currently possible in terms of spatial resolution.

Now scientists have discovered these columns are rapid downflows and upflows of gas, matching recent theoretical models and computer simulations suggesting these filaments are generated by the movement of hot and cold gases known as convective flow.

The researchers used the Swedish 1-meter Solar Telescope to focus on a sunspot on May 23, 2010. They found dark downflows of more than 2,200 miles per hour (3,600 kph) and bright upflows of more than 6,600 miles per hour (10,800 kph). The models suggest that columns of hot gas rise up from the interior of the sunspot, widen, cool and then sink downward while rapidly flowing outward.

Solar Cycle 24-A Game Changer Revisited


On the 14th of June, at the AAS conference in Las Cruces, a group of scientists from the National Solar Observatory (NSO) suggested that the familiar sunspot cycle may be shutting down. They observed that the spots were fading (getting weaker), that the current Cycle 24 was showing fewer spots, and that Cycle 25 was behind the normal schedule in its formation.

Sunspots have been recorded for hundreds of years, and they are a very visible proxy for solar activity. Solar activity is also visible in the number and strength of flares and coronal mass ejections (CMEs). The solar cycle is nominally about 11 years in duration. It begins with a relatively quiet Sun, then sunspots and other activity ramp up, peaking about halfway through the cycle. At that point the Sun's north and south magnetic poles "flip," and sunspots and other activity ramp back down to a relatively quiet Sun.

Dr. Frank Hill of the NSO explains that he and his team are using "helioseismology to measure sun-wide oscillations of the solar surface." Sound waves of extremely low frequency that emanate from deep within the Sun induce up-and-down oscillations in the Sun's outer gas layer. Measurements of these surface motions can be used to make maps of solar surface velocity, called Dopplergrams, from which physical conditions such as temperature, composition and the interior magnetic field can be inferred. Dr. Hill reported on "a jet-stream-like flow within the sun that they have been monitoring since 1995 using helioseismology."

The stream, which is coincident with the sunspots, is an east-west zonal flow inside the Sun at about 4000 miles beneath the Sun's surface. The following figure, presented at the conference, illustrates what Hill and his team have discovered.

The annotated chart's yellow and red bands trace the solar jet streams. The black contours denote sunspot activity. The Cycle 24 (current cycle) streams can be seen beginning about 1998-1999 at about 60° latitude north and south. These streams then converge toward the equator. At about 22°, sunspot activity begins. Ultimately the streams reach the equator at the time of solar maximum. See Cycle 24 and the Butterfly Diagram for more on this.

The stream that began at 60° latitude splits, with part of it going toward the poles and the other part toward the equator.

Note that the Cycle 23 stream heading for the equator was more active when it reached approximately 22° than the Cycle 24 stream is, and that its angle of approach to the equator was steeper than what is currently occurring in Cycle 24. Dr. Hill reports that it took Cycle 24 three years to cover a ten-degree range that Cycle 23 covered in only two years. Thus Cycle 24 is "slower" than Cycle 23.

This magnetic jet stream was first detected by the Michelson Doppler Imager instrument on the SOHO satellite, launched December 2, 1995. That instrument was succeeded in February 2010 by the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory satellite. The HMI is said to be many times more sensitive, and it reports almost continuously. The unit uses a 16-million-pixel camera configured to show blue where the Sun's oscillations are moving the surface toward the HMI camera and red where they are moving it away. The satellite is in a geosynchronous orbit about 22,000 miles above the Earth's surface.

Richard Altrock, manager of the Air Force's coronal research program, has observed that the remnants of the magnetic jet stream move poleward as far as 85°, where they die out.

Returning to the figure, it can be noted that the Cycle 24 magnetic jet stream was forming in the 1998-2000 timeframe. Noted on the figure is "Cycle 25??? 2019? 2030?". Dr. Hill points out that the magnetic jet stream for Cycle 25 should already have begun forming, but there is no sign of it yet. The press release regarding this situation suggests that Cycle 25 will be greatly reduced or may not happen at all.

Later, I will post on the work by Matt Penn and William Livingston that shows a weakening trend in the strength of sunspots.

So what do we make of this? Because of the satellite programs underway, primarily in the US and Europe, we are probably doubling our knowledge of the Sun every few years. But we still don't know much about the Sun. Reading the postings on this topic leads me to believe that the solar experts are not of one mind on the idea that this means the climate is about to get much cooler.
My bias is to say that we are going to see years of global cooling. I say that based upon the reconstructed history of the Maunder and other minimums. The only good thing I believe can come from a period of cooling is to put a stake in the heart of the corrupt science that is the AGW theory. I am not sure we can say with any certainty that more CO2 in the atmosphere, and perhaps more naturally caused global warming, is a bad thing. Who is to say that 2 or 3 more degrees would be bad? Only the models, in their ignorance, are sure of this. But extended cold could cause a lot of starvation. Let's hope this does not happen.

So, stay tuned.

cbdakota

Volt and the Leaf News—May Sales and Other Items


The numbers are in for May sales of the Volt and the Leaf. Leafs outsold Volts in May: Leaf sales were 1,142, up from 573 in April, while Volt sales were 481, down from 493 in April. Year-to-date sales of the Volt are 2,184, while the Leaf's total is 2,167.

The US sales goal for the Leaf was originally set at 20,000 for 2011. Nissan now estimates between 10,000 and 12,000 this year. Nissan began accepting "reservations" in April 2010; you had to submit $99 to be put on the list. Nissan stopped accepting reservations in September when it reached 20,000, and it now reports that only about half of them are resulting in sales, which squares with the revised 10,000 to 12,000 estimate. Nissan gives many reasons why only about half are resulting in purchases. To read the full report and the reasons, click here.

And lastly, some disturbing news regarding the Leaf: apparently some of them won't start. The Torque News posting about the problem is as follows:

Since Nissan hasn’t determined the exact cause of the Leaf electric vehicles that won’t start, the automaker has not yet decided whether they will issue a safety bulletin but if the problem continues to grow and they cannot discover a fix – a recall could be in order even though this issue doesn’t propose a direct safety issue. Right now, reports indicate that the company is looking at both the electrical components and programming involved with the air conditioning system but the longer it takes Nissan to figure out what is causing the no-start issues and how to address them, the consumer perception of the Leaf could take a massive hit.

cbdakota

Ideology vs Economics-Feds Plan to Buy 116 Electric Cars


The Obama administration plans to spend some $4-plus million on electric vehicles to save $116 thousand in annual fuel costs. USA Today reports that the administration is planning to buy 116 electric vehicles and to install charging stations in five cities.

The usual ideology, that the country needs to boost alternatively powered vehicles to reduce CO2 emissions and cut foreign crude oil use, is in play here. This administration's devotion to the man-made global warming theory is going to drive us to third-world status if we don't vote them out of power at the next election.

Based on Consumer Reports' analysis, the government would be better off buying a Prius at half the cost, and the Prius gets better mileage. On March 3, 2011, USA Today posted the following:

Consumer Reports magazine offers its initial assessment of the two reigning wondercars of our times, the Chevrolet Volt and Nissan Leaf, in its April issue and finds both may not be such good deals after all.

Not only has Consumer Reports' test car been coming in at the low end of the electric-only mileage range — 23 to 28 miles, not 25 to 50 miles as billed — before the gasoline power kicks in, but CR had to pay over list to get the car. It says it had to pay $48,700 — full price plus options and a $5,000 windfall to the dealer.

It gets worse. CR figures the cost of recharging the Volt would work out to about 5.7 cents a mile for electric mode and 10 cents a mile for gas. Yet a Toyota Prius, which gets about 50 miles a gallon, would cost 6.8 cents a mile to operate. A Prius costs half as much as a Volt.

Using the Consumer Reports price of $48,700 and a Prius price of nominally $28,000, the extra cost of getting the Volt would be about $20,000 per car. Is the actual price of the Volt $48 thousand versus the MSRP of $41 thousand? The Los Angeles Times reports that some Volt dealers are inflating selling prices to "more than $20,000 above GM's suggested retail price of $41,000."

Using the difference in purchase price, $20,000 × 116 = $2.3 million in premium for the same (or possibly worse) gas mileage. There is also a cost for the charging stations. Chuck Rogers, in an American Thinker posting, estimated the five charging stations at "$75,000, including any and all land purchase or site lease costs."
Roughly, we have about a $2.4 million premium to achieve the same annual fuel savings that could be had by buying the same number of Priuses. (Or is the plural of Prius "Prii"?) A quick check of that arithmetic is sketched below.
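This is a minimal sketch of the arithmetic using the figures quoted in this post; the prices and the $75,000 charging-station estimate are the blog's assumed numbers, not official procurement figures, and using the unrounded $48,700 price gives a slightly larger premium than the rounded $2.3 million above.

```python
# Rough premium of a 116-car Volt fleet over an equal-size Prius fleet,
# using the prices cited above (assumed, not official figures).
volt_price = 48_700               # $ per car, Consumer Reports' as-paid price
prius_price = 28_000              # $ per car, nominal
fleet_size = 116
charging_stations = 75_000        # $, American Thinker estimate for the five stations
claimed_annual_savings = 116_000  # $ per year in fuel, the administration's claim

premium = (volt_price - prius_price) * fleet_size + charging_stations
years_to_recover = premium / claimed_annual_savings

print(f"Premium over a Prius fleet: ${premium:,}")          # about $2.5 million
print(f"Years of claimed fuel savings to cover it: {years_to_recover:.1f}")
```

Even crediting the full $116,000 annual saving against the premium, it would take more than two decades to recover it; against a Prius fleet delivering roughly the same fuel savings, the premium is never recovered.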

OK, so "buy US" is a good thing for our government to do. But my guess is that the Chevy Cruze would be a better buy. Ideology gets in the way of common sense.

cbdakota