
Wednesday, May 18, 2022

Time to Ditch the 'Climate Models'?

 NOTE:
As I have repeatedly stated on this blog, computers "predict" whatever they are programmed to predict. About 40 years ago, the General Circulation Climate Models were programmed to predict rapid, dangerous global warming. Those computer model predictions matched the pre-computer predictions. The programs were set up to SUPPORT existing predictions.

The computer predictions (projections, simulations or BS -- you can choose the word you prefer) were, on average, for a global warming rate 100% faster than reality -- that is, double the actual measured warming rate. For 40 years, the goal has been scary predictions, not accurate predictions.

The one computer model that over-predicted global warming the least -- the Russian INM model -- gets no individual attention, because accurate predictions are NOT the goal. Scary predictions are the goal. If accurate predictions were the goal, the models would have been refined over the past 40 years to be more accurate. In fact, they have been manipulated to predict even FASTER global warming in the future, even though they had grossly over-predicted the warming rate for the past 40 years.

The latest group of 38 CMIP6 models has an ECS range from +1.83°C to +5.67°C per CO2 doubling. Observations since the 1950s suggest about +1.0 to +1.5 degrees C. per CO2 doubling, but ONLY if you assume CO2 was the only cause of global warming, which is very unlikely to be true.
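
For readers who want to check that arithmetic, here is a minimal sketch of how a "warming per CO2 doubling" figure can be backed out of observations. The concentration and warming numbers below are assumed round values for illustration, not figures quoted above, and the result moves around substantially with the chosen baseline year, the warming estimate, and whether non-CO2 influences are ignored.

    import math

    # Illustrative round numbers (assumptions, not measurements from this post):
    co2_start = 310.0        # ppm, approximate CO2 concentration around 1950
    co2_now = 420.0          # ppm, approximate recent CO2 concentration
    observed_warming = 0.9   # degrees C of warming over the same period (approximate)

    # CO2 forcing is roughly logarithmic, so count the fraction of a doubling so far
    doublings = math.log2(co2_now / co2_start)

    # Warming per doubling IF all of the warming is attributed to CO2 alone
    implied = observed_warming / doublings
    print(f"~{doublings:.2f} of a doubling -> ~{implied:.1f} C per doubling")
    # With these inputs the result is roughly 2 C per doubling; different baselines,
    # warming estimates, or non-CO2 forcings move the figure up or down substantially.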

I call climate models "computer games". In fact, they are political computer games meant to fool gullible people, who are impressed by super computers and scientists. The people who are impressed are not the brainiacs of our society.
Ye Editor

SOURCE:

"Just about every projected environmental catastrophe going back to the population bomb of the late 1960s, the “Club of Rome” and “Global 2000” resource-exhaustion panics of the 1970s, the ozone depletion crisis of the 1980s, and beyond has depended on computer models, all of which turned out to be wrong, sometimes by an order of magnitude.

No putative environmental crisis has depended more on computer models than "climate change." But in the age of high confidence in supercomputing and rapidly advancing “big data” analytics, computer climate models have arguably gone in reverse, generating a crisis in the climate-change community.



The defects of the computer climate models—more than 60 are in use at the present time—on which the whole climate crusade depends have become openly acknowledged over the past few years, and a fresh study in the mainstream scientific literature has recently highlighted the problem: too many of the climate models are “running hot,” which calls into question the accuracy of future temperature projections.

Nature magazine, one of the premier “mainstream” science journals, last week published “Climate simulations: recognize the ‘hot model’ problem,” by four scientists all firmly established within the “consensus” climate science community. 

It is a carefully worded article, aiming to avoid giving ammunition to climate-change skeptics, while honestly acknowledging that the computer models have major problems that can lead to predictions of doom that lack sufficient evidence.

“Users beware: a subset of the newest generation of models are ‘too hot’ and project climate warming in response to carbon dioxide emissions that might be larger than that supported by other evidence,” the authors write. 

While the article affirms the general message that human-caused climate change is a serious problem, the clear subtext is that climate scientists need to do better lest the climate science community surrender its credibility.

One major anomaly of the climate modeling scene is that, as the authors write, “As models become more realistic, they are expected to converge.” But the opposite has happened—there is more divergence among the models. Almost a quarter of recent computer climate models show much higher potential future temperatures than past model suites, and don’t match up with known climate history: “Numerous studies have found that these high-sensitivity models do a poor job of reproducing historical temperatures over time and in simulating the climates of the distant past.”

What this means is that our uncertainty about the future climate is increasing. To paraphrase James Q. Wilson’s famous admonition to social scientists, never mind predicting the future; many climate models can’t even predict the past.

A quick primer: in general, computer climate models predict that a doubling of the level of greenhouse gases (GHGs), principally carbon dioxide (CO2), by the end of this century would increase global average temperature by somewhere in the range of 1.5 degrees C to 4.5 degrees C. At present rates of GHG emissions, we’re on course to double the GHG level in the atmosphere about 80-100 years from now.
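
The "about 80-100 years" figure can be sanity-checked with very rough arithmetic. The sketch below is not the article's own calculation; it assumes round numbers for today's CO2 concentration and a constant growth rate, and the answer depends heavily on whether "doubling" is counted from pre-industrial or present levels and whether other GHGs are folded in as CO2-equivalent.

    # Back-of-the-envelope time to a doubling of atmospheric CO2,
    # assuming (illustratively) a constant linear growth rate.
    current_ppm = 420.0        # approximate present CO2 concentration
    preindustrial_ppm = 280.0  # conventional pre-industrial baseline
    growth_ppm_per_year = 2.5  # rough recent growth rate, assumed constant

    years_to_double_preindustrial = (2 * preindustrial_ppm - current_ppm) / growth_ppm_per_year
    years_to_double_present = (2 * current_ppm - current_ppm) / growth_ppm_per_year

    print(f"Doubling of the pre-industrial level (560 ppm): ~{years_to_double_preindustrial:.0f} years")
    print(f"Doubling of today's level (840 ppm):            ~{years_to_double_present:.0f} years")
    # ~56 and ~168 years with these inputs; the article's 80-100 year figure
    # presumably reflects CO2-equivalent GHGs and different growth assumptions.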

Why is the range so wide, and why does it matter? First, the direct thermal effect of doubling GHGs is only about 1.1 degrees. So how do so many models predict 4.5 degrees or more? Two words: feedback effects. That is, changes in atmospheric water vapor (clouds, which both trap and reflect heat), wind patterns, ocean temperatures, shrinkage of ice caps at the poles, and other dynamic changes in ecosystems on a large scale.
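
To see why feedbacks dominate the arithmetic, here is a minimal sketch using the standard textbook gain relation (an illustration with assumed feedback fractions, not a formula taken from the article): the direct warming of about 1.1 degrees is divided by (1 - f), where f is the net feedback fraction.

    # Standard textbook feedback-gain relation (illustrative sketch):
    #   delta_T = delta_T0 / (1 - f)
    # where delta_T0 is the direct (no-feedback) warming from a CO2 doubling
    # and f is the net feedback fraction from water vapor, clouds, ice albedo, etc.
    direct_warming = 1.1  # degrees C, the no-feedback figure cited in the article

    for f in (0.0, 0.3, 0.5, 0.7, 0.75):  # assumed example values of f
        amplified = direct_warming / (1.0 - f)
        print(f"feedback fraction f = {f:.2f} -> equilibrium warming ~{amplified:.1f} C")
    # f = 0.0 gives 1.1 C; f = 0.75 gives ~4.4 C -- small changes in the assumed
    # feedbacks swing the headline sensitivity across the whole quoted range.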

Yet it is precisely these feedback effects where the computer models are the weakest and perform most poorly. The huge uncertainties in the models (especially for the most important factor—clouds) are always candidly acknowledged in the voluminous technical reports the U.N.’s Intergovernmental Panel on Climate Change (IPCC) issues every few years, but few people—and no one in the media—bother to read the technical sections carefully.

Why are climate models so bad? And can we expect them to improve any time soon? Steven Koonin, a former senior appointee in the Department of Energy in the Obama administration, explains the problem concisely in his recent book Unsettled: What Climate Science Tells Us, What It Doesn’t and Why It Matters. The most fundamental problem with all climate models is their limited “resolution.”

Climate models are surprisingly crude, as they divide the atmosphere into 100 km x 100 km grid squares, which are then stacked like pancakes from the ground to the upper atmosphere. Most climate models have about one million atmospheric grid squares, and as many as 100 million smaller (10 km x 10 km) grid squares for the ocean. The models then attempt to simulate what happens within each grid square and sum the results. It can take up to two months for the fastest supercomputers to complete a model “run” based on the data assumptions input into the model.
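
Those grid counts can be roughly checked with a few lines of arithmetic. The sketch below is an order-of-magnitude check only; the vertical layer counts (about 20 for the atmosphere, 30 for the ocean) are assumptions for illustration, not figures from the article.

    import math

    earth_radius_km = 6371.0
    surface_area_km2 = 4 * math.pi * earth_radius_km**2   # ~5.1e8 km^2

    # Atmosphere: 100 km x 100 km columns, stacked in an assumed ~20 vertical layers
    atm_columns = surface_area_km2 / (100 * 100)
    atm_cells = atm_columns * 20
    print(f"Atmospheric grid cells: ~{atm_cells:,.0f}")   # on the order of 1 million

    # Ocean: ~70% of the surface at 10 km x 10 km, with an assumed ~30 depth layers
    ocean_columns = 0.7 * surface_area_km2 / (10 * 10)
    ocean_cells = ocean_columns * 30
    print(f"Ocean grid cells:       ~{ocean_cells:,.0f}") # on the order of 100 million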

The problem is that “many important [climate] phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms).” In other words, the accuracy of the models is highly limited. Why can’t we scale down the model resolution? Koonin, who taught computational physics at Caltech, explains: “A simulation that takes two months to run with 100 km grid squares would take more than a century if it instead used 10 km grid squares. The run time would remain at two months if we had a supercomputer one thousand times faster than today’s—a capability probably two or three decades in the future.”
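
The "two months becomes more than a century" claim follows from a simple scaling argument, sketched below under assumed simplifications: cutting the grid spacing from 100 km to 10 km multiplies the number of horizontal cells by 100, and the stable time step typically shrinks by a further factor of about 10, for roughly 1,000 times the work.

    # Rough scaling of simulation cost with horizontal resolution (illustrative).
    # Assumes cost ~ (number of horizontal cells) x (number of time steps), with the
    # stable time step shrinking in proportion to the grid spacing (a CFL-style
    # constraint) and the vertical layer count held fixed.
    baseline_spacing_km = 100.0
    refined_spacing_km = 10.0
    baseline_runtime_months = 2.0

    refinement = baseline_spacing_km / refined_spacing_km   # 10x finer grid
    cost_factor = refinement**2 * refinement                # 100x cells * 10x steps = 1000x
    refined_runtime_years = baseline_runtime_months * cost_factor / 12

    print(f"Cost factor: ~{cost_factor:.0f}x")
    print(f"Run time at 10 km: ~{refined_runtime_years:.0f} years on the same machine")
    # ~167 years, i.e. "more than a century" -- and a supercomputer 1,000 times
    # faster would bring it back to about two months, as Koonin notes.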

But even if the models get better at the dynamics of what happens in the atmosphere on a more granular scale, the models still depend on future GHG emissions forecasts, and there is a wide range of emissions scenarios the modelers use. The high-end temperature forecasts depend on extreme projections of future emissions that are no longer credible, such as one model included in previous U.N. reports that relied on a six-fold increase in the use of coal over the next 80 years, an outcome no one thinks is going to happen (or only with massive carbon-capture technology if it does).

Emissions forecasts made just 20 years ago turned out to be much too high for today. Nearly all of the most alarming claims about the effects of future warming depend on these discredited forecasts, but the media have failed to keep up with the changing estimates. It’s a classic garbage-in, garbage-out problem.

The Nature article is candid about this problem:

    The largest source of uncertainty in global temperatures 50 or 100 years from now is the volume of future greenhouse-gas emissions, which are largely under human control. However, even if we knew precisely what that volume would be, we would still not know exactly how warm the planet would get.

The authors of the Nature article are taking a risk in dissenting from the politicized party line on climate science, however cautiously worded, and deserve credit for their candor and self-criticism of climate modeling."

Author Steven F. Hayward is a resident scholar at the Institute of Governmental Studies at UC Berkeley, and lecturer at Berkeley Law. His most recent book is "M. Stanton Evans: Conservative Wit, Apostle of Freedom." He writes daily at Powerlineblog.com.