Climate Model Problems Persist, Changes Reduce Accuracy Further – Watts Up With That?
H. Sterling Burnett February 10, 2022
YOU SHOULD SUBSCRIBE TO CLIMATE CHANGE WEEKLY.
IN THIS ISSUE:
- Climate Model Problems Persist, Changes Reduce Accuracy Further
- Podcast of the Week: Industrial Wind: How to Fight It in Your Hometown and Win! (Guest: John Droz)
- Disaster Losses Declining as a Percentage Of GDP
- Coral Collection Greater Threat to Coral Than Climate Change
- Climate Comedy
- Video of the Week: Authoritarianism in the Climate Change Debate
- BONUS Video of the Week: The Trajectory of Electrical Power Generation
- Recommended Sites
Climate Model Problems Persist, Changes Reduce Accuracy Further
In the past week, two more articles have joined the growing body of literature documenting that climate models have consistently failed, since their inception, to project the Earth’s temperatures and temperature trends accurately.
As if that were not bad enough, The Wall Street Journal and Power Line report that climate models’ projections of future temperatures have gotten worse over time. As new generations of supposedly improved climate models are produced and refined, the accuracy of their temperature simulations decreases. Each new generation of general circulation models diverges further from measured temperature changes and trends than the previous generation did.
This makes a joke of the Intergovernmental Panel on Climate Change’s (IPCC) claims that climate models have improved, which in normal discourse would mean they’ve become more accurate.
Every time the IPCC issues a new report, from the First Assessment Report, issued in 1990, through the Sixth and most recent Assessment Report (AR6), released in August 2021, it claims the newest generation of models it uses is more accurate than the prior generation. The “Summary for Policymakers” for the Third Assessment Report stated, “Confidence in the ability of models to project future climate has increased.” Yet instead of narrowing the range of possible future temperatures from the previous report, the range nearly doubled. That’s like doubling the size of the bullseye on a target, barely hitting the outside edge of the larger bullseye, and claiming it’s a sign the shooter’s accuracy has increased.
The IPCC’s AR6 report states,
“These models include new and better representation of physical, chemical and biological processes, as well as higher resolution, compared to climate models considered in previous IPCC assessment reports. This has improved the simulation of the recent mean state of most large-scale indicators of climate change and many other aspects across the climate system.”
How can their simulations be “improved” when, as I discussed in Climate Change Weekly 407, the modelers themselves were forced to admit, just weeks before AR6 was released, that the models were projecting even hotter temperatures and steeper temperature trends than the previous iteration, the simulated temperatures of which were already too hot, failing to represent measured temperatures accurately?
Only a government bureaucracy or a con artist could claim with a straight face that a technology has improved when it does not perform its required task as well as poor-performing previous versions. It’s like confidently asserting a class of electric vehicles is improving based on laboratory modeling even as the miles they can travel between recharges is declining and the amount of time it takes to recharge them is getting longer. Worse performance is not better, unless the goal is to fail.
The fact that computer models are flawed and produce untrustworthy climate projections has long been recognized. Reports by the National Center for Policy Analysis (which I edited when I worked there), written in 2001 by environmental scientist Kenneth Green, Ph.D., and in 2002 by David Legates, Ph.D., then director of the Center for Climatic Research at the University of Delaware-Newark, detailed the numerous flawed projections computer models had made, and they explained why the failures occurred and were likely to continue to be the norm.
Models are limited in important ways, including:
- an incomplete understanding of the climate system,
- an imperfect ability to transform our knowledge into accurate mathematical equations,
- the limited power of computers,
- the models’ inability to reproduce important atmospheric phenomena, and
- inaccurate representations of the complex natural interconnections.
These weaknesses combine to make GCM-based predictions too uncertain to serve as the basis for public policy responses to future climate change.
Whereas computing power has improved markedly over time, modelers’ knowledge of the myriad factors and interconnections that drive climate change has not. In part, this is because the IPCC has always focused on understanding the human factors that affect climate, to the exclusion of other factors, even though it admits other factors do have some effect.
To their credit, assessment reports 1 through 5 acknowledged that natural factors—the Sun, clouds, ocean currents, etc.—play at least some role in climate change, however poorly understood at the time. The IPCC has provided lists of factors, natural and human, that affect temperatures. The list has changed over time, as have the estimates of the direction in which the various factors drive temperature and by what amount. What never changes, however, is the amount of confidence or degree of understanding the IPCC has about the temperature effects of non-anthropogenic factors, because study of these is largely ignored. Previous assessment reports consistently admitted the IPCC had low or very low understanding of each of the natural factors that drive temperature changes. Nonetheless, the IPCC has been confident in dismissing them as significant sources of present climate change.
That is like trying to understand how a car functions, admitting you know nothing about radiators, alternators, timing belts, oil pumps, and myriad other systems, but being confident that having a full tank of gasoline and a key in the ignition are the only real factors that make a car function. Then, when the car doesn’t start, you state with confidence the only reason it could have failed was because the key was broken or the car was out of gas.
The IPCC got away with this nonsense by claiming that when they run their models without carbon dioxide it does not produce the warming they expect, regardless of changes in assumptions about the other factors, but when they add carbon dioxide, the models produce significant warming. That’s circular reasoning at best and idiotic at worst. It should not surprise us that the assumptions modelers make about carbon dioxide and other greenhouse gas emissions are the only ones that produce the results they expect and are getting.
In AR6, the IPCC abandons even the appearance of scientific curiosity about nonhuman factors’ effects on climate. If you read only AR6’s summary for policymakers, you wouldn’t know clouds existed unless humans caused them by creating aerosols. Yet water vapor is by far the dominant greenhouse gas, accounting for more than 97 percent of all the greenhouse gases in the atmosphere, and clouds have huge long-term and short-term effects on surface temperatures. The IPCC acknowledged as much in previous assessment reports, admitting climate models account poorly for the role changes in cloud cover play in climate change.
AR6 virtually ignores any effect the Sun has on climate change. The report barely acknowledges solar irradiance as having any role at all in climate change, in a graphic on page SPM-8 (Summary for Policy Makers). There is no mention of solar cycles, which we know from history correlate with climate changes. Nor does the report even mention that increases and decreases in cosmic rays resulting from solar fluctuations affect cloud cover and thus temperatures. Except for volcanoes, all other factors—such as large-scale decadal ocean circulation patterns—are lumped into a category called “Internal Variability” to which AR6 attributes almost no effect on climate change.
Climate modelers’ response to the fact that their models perform poorly and their performance has worsened over time is not to admit it is a matter of “garbage in, garbage out” that should lead them to question their fundamental assumptions about whether human greenhouse gas emissions are the sole or even dominant factor driving temperature changes. Instead, as The Wall Street Journal reports,

“They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements [emphasis mine]. They tested the equations, debugged them and tested again. The scientists would find that even the best tools at hand can’t model climates with the sureness the world needs as rising temperatures impact almost every region.”
That’s right, climate modelers’ response to the consistent failure of their models to reflect real climate conditions is to spend more money and time adding complexity to their models. Complexity is not in and of itself a virtue.
The climate system is certainly complex. Even so, there is no reason to believe making models more complex will make them more accurate. We don’t adequately understand all the factors that drive climate changes or how the Earth responds to different perturbations on the overall climate. What one doesn’t understand, one can’t model well. Absent that basic understanding, adding more lines of code and making increasingly complex assumptions about climate feedback mechanisms that are even more poorly understood than the basic physics only makes models more error-prone. Every line of code and every complex calculation is one more area, formula, or operator where a flawed assumption or simple mistake of math or code punctuation can cascade throughout the model. Complexity introduces more opportunities for errors or “bugs” in the code, which can throw off the projections.
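The compounding-error point above can be sketched numerically. The toy recursion below is purely illustrative—nothing like a real general circulation model—and the coefficients are invented for the example. It shows how a small error in one added "feedback" coefficient grows over many time steps rather than averaging out:

```python
# Illustrative sketch only (invented coefficients, not a real climate model):
# a toy temperature-anomaly recursion where a small mis-specification of one
# feedback coefficient compounds across time steps.

def run(steps, feedback, forcing=0.02, error=0.0):
    """Iterate T[n+1] = T[n] * (1 + feedback + error) + forcing, from T=0."""
    t = 0.0
    for _ in range(steps):
        t = t * (1.0 + feedback + error) + forcing
    return t

# "Correct" coefficient vs. the same model with a tiny 0.005 error added
# to the feedback term: after 100 steps the two runs differ by over 30%.
true_val = run(100, feedback=0.01)
biased = run(100, feedback=0.01, error=0.005)
print(true_val, biased)
```

Because the mis-specified term multiplies the running state at every step, the divergence grows with run length—one hedged way of seeing why each added coupled term is a new place for a small mistake to cascade.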
The fact that as modelers make their models increasingly complex their simulated climate outputs increasingly diverge from real-world climate data should serve as an indicator that complexity is a weakness of the models. Modelers simply don’t know what they don’t know. That’s a fact they should admit, instead of building their ignorance into their models by pretending elegant mathematical formulae reflect reality simply because they are elegant and complex. The first step in getting out of a hole you have dug is to stop digging.
A second indicator that complex climate models are inherently flawed is the fact that simpler climate models perform better in matching real-world temperature data. Simple models reject assumptions about how different aspects of the climate system will add to or reduce relative warming as greenhouse gas emissions rise. Absent the additional forcing from modeled feedback mechanisms or loops, simple models project a modest warming in response to rising emissions. In this respect, the simple models reflect well what Earth has actually undergone.
There has been no runaway warming, and there is little or no reason to expect such a thing to occur from any reasonably expected future rise in atmospheric greenhouse gas concentrations. If models don’t get right their basic projections—temperatures—there is no reason to trust their ancillary or projected secondary effects which are supposed to be driven by rising temperatures.
SOURCES: Intergovernmental Panel on Climate Change; The Wall Street Journal; Power Line
Podcast of the Week
Wind energy is touted as “clean”, “environmentally friendly”, and effective, when the opposite is true. Industrial wind turbines kill animals that are essential to agriculture, are inefficient as a power source, and can even have a direct effect on your health.
These facts can be used to convince local councils to take a second look at proposed wind projects and draft siting regulations and rules that can stop wind projects from getting a foothold.

Subscribe to the Environment & Climate News podcast on Apple Podcasts, iHeart, Spotify or wherever you get your podcasts. And be sure to leave a positive review!
Disaster Losses Declining as a Percentage Of GDP
Roger Pielke Jr., Ph.D., reports U.S. losses from natural disasters have declined as a percentage of overall economic outputs, measured as gross domestic product (GDP), as documented by the U.S. disaster loss database at the Center for Emergency Management and Homeland Security at Arizona State University. The Federal Emergency Management Agency (FEMA) uses the university’s Spatial Hazard Events and Losses Database for the United States (SHELDUS) data set to estimate expected annual losses from disasters in the United States.
Based on SHELDUS’s data, FEMA estimates that when all the loss numbers are finally in, disaster losses in the United States in 2021 will be around $141 billion.
That would confirm what other data sources have reported: since 1990, there has been a significant downward trend in U.S. disaster losses as a proportion of U.S. GDP (see the figure below), as monitored by the Office of Management and Budget, despite tremendous population growth and land-use development.
Pielke points out the United Nations’ preferred methodology for calculating disaster costs is as a fraction of GDP, per the Sendai Framework for Disaster Risk Reduction.
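The percentage-of-GDP metric is simple arithmetic. As a rough illustration, the sketch below divides FEMA's ~$141 billion 2021 loss estimate by a ballpark $23 trillion figure for 2021 U.S. GDP (the GDP number is an assumption for this sketch, not taken from the article):

```python
# Illustration of the losses-as-a-share-of-GDP metric described above.
# The ~$23 trillion 2021 U.S. GDP figure is a rough assumed value for
# this example; the $141 billion loss estimate is the article's number.

def loss_share_of_gdp(losses_usd, gdp_usd):
    """Disaster losses expressed as a percentage of GDP."""
    return 100.0 * losses_usd / gdp_usd

share = loss_share_of_gdp(141e9, 23e12)
print(f"{share:.2f}% of GDP")  # roughly 0.6% of GDP
```

Normalizing by GDP is what lets losses be compared across decades despite economic and population growth, which is the point of the Sendai Framework convention Pielke cites.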
The declining disaster losses in the United States are part of and consistent with a “broader global trend of declining vulnerability to weather and climate extremes, which has been documented around the world and for a wide range of weather and climate phenomena,” Pielke writes.
Did you miss the headlines announcing disaster costs were falling amid climate change? I know I sure did. Instead, I am constantly barraged with headlines claiming disaster costs are rising and setting records because global warming is causing more extreme weather, with no recognition of context, population and demographic trends, or price inflation.
SOURCES: Climate Change Dispatch; SHELDUS
Heartland’s Must-read Climate Sites
Coral Collection Greater Threat to Coral Than Climate Change
With the Australian government announcing plans to spend $1 billion to save the Great Barrier Reef from bleaching ostensibly caused by global warming, and maintain its status as a UNESCO World Heritage Site, Jennifer Marohasy, Ph.D., notes approximately 200 tons of coral from the reef, in addition to sea life interacting with it, are being dug up each year as part of the aquarium trade. As an aside, additional thousands of tons are excavated each year to satisfy the international demand for coral jewelry.
Marohasy notes the amount of coral removed represents only a small amount of the reef as a whole, yet it is likely to be more than is replanted with the government’s $1 billion in funds.
This is on top of $443 million Australia’s government granted to the small but politically connected Great Barrier Reef Foundation for research, protection, and recovery efforts in 2018. That grant set aside more than $86 million for administrative expenses.
How will the money be spent? Some of it will go to a consortium committed to replanting corals, creating jobs for scuba divers and charter boats, Marohasy writes. Their work will be filmed by underwater videographers, and marine scientists will participate in the effort, monitor it, and collect data.
It is unclear whether any money will go to those gathering and providing coral for the foreign aquarium trade, but Marohasy writes,
“… an October 2021 assessment of the Queensland Coral Fishery by the federal Department of Agriculture, Water and the Environment explains there is a quota of 200 tonne total allowable catch, split between ‘specialty coral’ (30 per cent) and ‘other coral’ (70 per cent).”
Many of the corals are listed under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). The assessment report does mention that there is some concern around the lack of harvest limits for CITES-listed coral species and the lack of adequate mechanisms to enforce harvest limits. It also explains that the take of corals has been increasing.
Interestingly, all this money is being spent to save the Great Barrier Reef from death induced by climate change even though the data indicate the Great Barrier Reef is doing well, with most of its coral successfully recovering from the limited bleaching it experienced in recent years.
Nor, based on historical records, have ocean temperatures where the reef resides increased over the past 150 years, as reported at Climate Realism:
Temperatures are monitored at eighty sites within the Great Barrier Reef by the Australian Institute of Marine Sciences, and individual records do not show a long-term warming trend. There are no studies showing either a deterioration in coral cover or water quality.
In short, it seems all this money is being spent to satisfy UNESCO and a small but vocal group of politically connected researchers who say global warming is threatening the Great Barrier Reef, without any confirming evidence.
SOURCES: The Spectator (AU); Climate Realism
Video of the Week: Climate Change Roundtable: ESG Scores
The Climate Change Roundtable team tackles the emergence of environmental, social, and governance (ESG) scores. ESG scores are part of the growing elitist movement known as the Great Reset. Corporations are incorporating ESG scores into their decision-making processes, causing executives to ignore their fiduciary duty by not acting in the best interests of shareholders.
Andy Singer, Linnea Lueken, Anthony Watts, Bette Grande, and H. Sterling Burnett discuss how ESG scores threaten the freedoms America was founded on.
BONUS Video of the Week: What Drives Global Temperature Trends, Q&A
Anthony Watts and Ross McKitrick, Ph.D., take questions after their presentations on the latest global temperature trends, and explain what’s driving them. Recorded at The Heartland Institute’s 14th International Conference on Climate Change at Caesars Palace in Las Vegas.
via Cartoons by Josh