William A Gardner
Culture and Societies
Modeling for Control
Our lives have recently been subject to public policy mandates based on scientific predictions of catastrophic climate change and disease spread. The science behind both of these so-called crises has relied heavily on complex computer models that use algorithms to predict future trends. Considering the significant harm that these policies have caused, and are causing, to society, should we be allowing questionable algorithms in these models to circumscribe our lives and freedoms?
People have always looked for ways to forecast the future. In ancient Greece it was common for a person of wealth or standing to take the long, narrow and winding trail around the base of Mount Parnassus until they reached the sacred town of Delphi. Here on the rocky slopes above the Gulf of Corinth they could pay to hear a prognostication from one of the oracles of Delphi – a Pythia. Of course the words of the priestess were most often ambiguous and thus subject to interpretation. A wise ruler would consider very carefully any action taken on the advice of such an oracle. An example is the prophecy made to the very wealthy King Croesus of Lydia that if he attacked the Persians he would destroy a mighty empire. He mistakenly took that to mean the Persian Empire. So much for wealth, inexperience, and trust in the mystical words of an oracle.
Power goes hand in hand with obfuscation. We hold the mysterious and complex with a certain awe, and it is these traits in our character that have been used for millennia by soothsayers, fortune tellers, witches, magicians and prognosticators to control people and extract money from them. Who, in ancient and uncertain times, would question the words and methods of a tribal witch doctor as he or she mumbles incantations and casts bone runes while crouched beside a flickering fire in a dark hut? Superstition and obedience to authority1 are ingrained in us. While the scene has transmogrified from the stone hut to the clean computer room, human nature has not changed.
Today the role of the priestess or witch doctor has been taken over by the purveyors of computer modeling - the high priests of academia and science with their mathematical algorithms that purport to predict the future of climate, pandemics, and the environment. By draping these algorithms in the robes of science and complexity they demand unquestioned belief.2 An understanding of history suggests that such predictions should be weighed carefully on the scale of common sense. Precipitous action should be avoided. Unfortunately, the leaders of today appear to be no wiser than King Croesus.
Since computer modeling of future events became common some fifty years ago, there have been many marked failures. One could fill many pages with tales of predictions that detonated in spectacular fashion when later faced with actual data. Strangely enough, despite such known history, leaders still place great faith in these models and the people who promote them. So what are these questionable predictive tools on which governments love to base public policy?
Iterative computer models – the modern Pythia
The computer models used in forecasting such things as pandemics, climate, and population are simply mathematical representations of some aspect of the real world using algorithms. These algorithms are generally classed into four types.3 The predictive models on which public policy has recently been based are mainly of the 'recursive' type, using the 'divide and conquer' method to break the model world down into smaller parts. Algorithmic functions can then be applied to each part and the results combined into an output. That is, the model structure uses various algorithms and interlocking parameter-driven functions to operate on a set of entities which make up the model world. These entities might be people, squares of land, cubic sections of atmosphere, or whatever is being modeled. Each entity has an initial state. A time period is selected and the model entities are processed by the algorithms to see how they change during one slice of time. The entities change during the iterations but the algorithms do not. The new state of these entities after each iteration is then used as the input for the next time period. This process is repeated over and over to some end point. The result of each iteration may be used to graph the state of one or more entities or factors over the relevant time period. As the model increases in complexity the results are not readily predictable. One must keep in mind that increasing complexity4 does not mean increasing accuracy. The application of artificial intelligence (AI)5 to these models will not necessarily make them more reliable.
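To make the structure concrete, here is a minimal sketch in Python of the iterative loop just described. The entity values, the update rule, and the number of time slices are all invented for illustration; no real predictive model is this simple.

```python
# A minimal sketch of the iterative model structure described above.
# The entity values, update rule, and step count are invented examples.

def run_model(entities, step_fn, n_steps):
    """Apply the same algorithm to every entity, once per time slice.

    The algorithm (step_fn) never changes; only the entity states do.
    Each iteration's output becomes the next iteration's input.
    """
    history = [list(entities)]
    for _ in range(n_steps):
        entities = [step_fn(e) for e in entities]
        history.append(list(entities))
    return history

# Example: two entities whose state grows by 2% per time slice.
trace = run_model([100.0, 50.0], lambda x: x * 1.02, n_steps=10)
print(trace[-1])  # final state after ten iterations
```

Everything interesting about such a model lives in the initial states and in `step_fn`; the loop itself is trivial, which is exactly why errors in those inputs dominate the output.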
The real world is not known for simplicity. One important aspect - a major weakness, actually - of these iterative predictive models that try to simulate the real world is their tendency to be linear and to magnify errors. That is, a small error in the initial model parameters, or assumptions, is magnified by each iteration, leading to potentially large errors in output. In addition, they do not properly take into account the long and short-term cycles that naturally occur in the real world - for example, the changing energy output of the sun.6 Most models are similar in this respect and thus can be made to show whatever is desired by making small changes in the model assumptions.
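The compounding of a small input error can be shown with a toy calculation. The starting value, growth rates, and iteration count below are invented purely for illustration:

```python
# Toy demonstration of how a small error in one initial parameter
# compounds over many iterations. The two runs use the same algorithm;
# only the growth-rate parameter differs: 0.030 versus 0.031, which is
# roughly a three percent input error.

def iterate(state, rate, n_steps):
    for _ in range(n_steps):
        state = state * (1.0 + rate)  # identical rule applied every iteration
    return state

baseline = iterate(1000.0, 0.030, 100)
perturbed = iterate(1000.0, 0.031, 100)
gap_pct = 100.0 * (perturbed - baseline) / baseline
print(f"after 100 iterations the outputs differ by {gap_pct:.1f}%")
```

A three percent error in one input becomes roughly a ten percent error in the output after a hundred iterations; with more parameters and more iterations the divergence only worsens.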
To illustrate the concept, consider a simple example of a model used to predict the future population of a country. The model is set up with an initial population parameter and then calculates how the population will change during one year by using other parameters such as annual birth rate, annual death rate, plus annual in-migration and out-migration. This calculation is then done for each year into the future. So long as the parameters are reasonably correct the model can predict pretty closely what the population will be in ten, or even a hundred years. Other sub-algorithms can be added to make the model even more complex, such as modeling unusual disease outbreaks with a stochastic method. However, the programmer is unlikely to include a sub-algorithm which models a rare, unforeseen event - often called a black swan event - which wipes out most of the population. Importantly, such models are most often coded with one or a small number of primary drivers which become dominant over multiple iterations. In the case of this simple population model, the primary drivers are birth and death rates, with these parameters based on past historical data.
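The population model just described can be sketched in a few lines. All the rates here are invented round numbers, not real demographic data:

```python
# Toy version of the population model described above. All rates are
# invented round numbers for illustration, not real demographic data.

def project(population, birth_rate, death_rate, net_migration, years):
    """One iteration per year; each year's output is next year's input."""
    for _ in range(years):
        population += (population * birth_rate
                       - population * death_rate
                       + net_migration)
    return population

# Primary drivers: 1.2% annual births, 0.9% annual deaths, plus a fixed
# 20,000 net in-migrants per year, starting from 10 million people.
growing = project(10_000_000, 0.012, 0.009, 20_000, years=100)

# The same model with the birth rate cut to 0.6% - below replacement -
# shows how a single dominant parameter steers the long-run result.
shrinking = project(10_000_000, 0.006, 0.009, 20_000, years=100)
```

Note that the model contains no sub-algorithm for a black swan event, and nothing in it anticipates a cultural shift in the birth-rate parameter itself - exactly the weaknesses discussed above.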
When researchers started using such models for global population predictions they didn’t foresee the cultural shift to very low - below replacement - birth rates in much of the world. A number of countries are facing the possibility of a rapidly shrinking population7 with resulting serious financial and social challenges. We should recall the famous book by Professor Paul Ehrlich of Stanford University titled The Population Bomb, which predicted continuing and unlimited global population growth using a computer model. In the book he demanded immediate action to limit the human population in order to prevent overpopulation, starvation and social upheaval. It was a classic predictive-model failure.
My criticism of these models is based on past experience. A major in maths during my science degree plus an interest in computer programming led me to code my first recursive computer model in 1981. It was a fun game I called Kingdom where a Ruler tried to increase his wealth of grain by battling invaders and the elements of nature in an agricultural setting. My fun model was primarily based on deterministic and stochastic algorithms and was recursive – aka iterative - which means that, after the initial parameters were set, it executed a sequence of steps to produce a set of outputs and then those outputs, augmented with new decisions by the player, were used as the input parameters to run through the same sequence of steps again. This occurred over and over until the game was lost – the Ruler ran out of grain or was overthrown - which was the normal ending.
While it was trivial compared to the models used today, it nevertheless illustrated a great deal about the inherent characteristics of iterative models. One characteristic is that most types, as pointed out above, tend toward linearity due to the primacy of one or a few key parameters and algorithms. It also showed that small changes in the initial parameters can result in large differences in output. Most importantly, as the number of iterations increases, the result becomes less and less likely to match reality because, again, such models normally do not take into account poorly understood real-world cycles of cause and effect - in particular, human nature and natural cyclic phenomena. The models can be made to fit measured data initially by tweaking parameters, but ultimately their predictions diverge from the real world.
Recent prediction fails
Western governments have in the past few decades based major policy initiatives on model-predicted apocalyptic futures. Some organizations have even predicted a type of extinction event if certain actions aren't soon adopted.8 Government leaders appear reluctant to challenge the veracity of these models. Either they are intimidated by the complexity and science behind the models, or the prediction conveniently aligns with a political agenda. Strangely enough, despite spectacular past failures of prediction, they continue to create public policy based on model prognostications. One would think that our leaders would be wiser and more cautious than King Croesus. Hubris leads to folly and hidden agendas ultimately are revealed.
I have already mentioned the population prediction by Ehrlich where the real bomb was the actual prediction. Another prediction is that of rising sea levels. In 1989 the Associated Press ran a story about climate change that included this quote: "A senior U.N. environmental official says entire nations could be wiped off the face of the Earth by rising sea levels if the global warming trend is not reversed by the year 2000." The agencies of the United Nations have been particularly fond of apocalyptic computer model predictions, especially when demanding more funding from members. Many people point to flooding from recent storm surges as proof of rising ocean levels. I showed evidence of ocean rise in a previous blog.
Another is the prediction by Professor Neil Ferguson of Imperial College London, whose past modelling efforts predicted in 2002 that as many as 150,000 people could die from BSE9 in the U.K. The reality turned out to be just 177 deaths, but the prediction led to the slaughter of huge numbers of sheep and cattle - a tragedy that was devastating for farmers. As for the COVID pandemic, his model predicted that 500,000 people would die (not just fall ill) from COVID-19 in the United Kingdom and that the health service would be swamped. Yet despite his past failure, the U.K. government used the prediction to bring in harsh lockdowns and other health mandates. His model also predicted two million such deaths in the United States. Again, his doomsday prediction was completely off the mark. When compared to measured real data, the assumptions and parameters within the model - including the R-nought (reproduction number) and the IFR (infection fatality rate) - were revealed to be wrong. Nevertheless, the model, which naturally tended toward linearity, resulting in a gross overestimation of mortality and seriousness, was used by governments to initiate exceptionally damaging measures, many of which continue today.
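To illustrate how heavily such projections lean on a few input parameters, here is a bare-bones SIR (susceptible-infected-recovered) epidemic model. This is emphatically not Ferguson's model - his was far more elaborate - and every number below (population, R-nought, IFR, recovery rate) is an invented illustration. The sketch simply shows that the headline death toll moves in lockstep with the assumed IFR:

```python
# Bare-bones discrete-time SIR model. NOT Ferguson's actual model; all
# parameter values are invented for illustration only.

def projected_deaths(population, r0, ifr, recovery_rate=0.1, days=365):
    """Run a simple SIR epidemic; deaths = IFR times everyone ever infected."""
    beta = r0 * recovery_rate          # daily transmission rate implied by R0
    s, i, r = population - 1.0, 1.0, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return ifr * r

population = 67_000_000                # roughly the U.K., for scale
high = projected_deaths(population, r0=2.5, ifr=0.009)  # pessimistic IFR
low = projected_deaths(population, r0=2.5, ifr=0.001)   # lower IFR, same model
print(f"deaths with IFR 0.9%: {high:,.0f}; with IFR 0.1%: {low:,.0f}")
```

A nine-fold change in one assumed parameter produces a nine-fold change in the projected death toll, which is why getting the IFR wrong dooms the entire projection.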
As for the predictions of climate change, aka global warming, the model predictions are starting to display the characteristics explained above: as more data is collected and the years pass, the generally linear model predictions start to diverge from real measured data. That is, the climate tends to be cyclical rather than linear. There has been a bias for models to overstate warming,10 and this warming bias has been amplified by media which use a variety of means - even subconscious ones, such as color, with red signalling danger - to influence our belief in uncontrolled, dangerous warming. The screen grab below from weather reports in the UK is one example. You will notice that there is no significant change in temperatures, but the news media used a red background in the later story to suggest an excessive temperature rise.
A common problem develops when a model predicts poorly. Developers who have invested significant time in building the structure and writing the code are highly reluctant to abandon the model. As a result, they keep tweaking the parameters and algorithms to match current data rather than starting over. And when political decisions have already been made based on the model, there is even more reluctance to admit that the model is simply inappropriate for long-term predictions. No politician likes to admit they made the wrong decision. Nevertheless, even the IPCC's Working Group One, in its sixth assessment report, began to doubt the veracity of the models.11
A reasonable deduction
Computer models have limitations. If there is one lesson from recent experience, it is that common sense must be paramount in public affairs and policy. Even though a new technology can appear, in some cases, to be a magic prognosticator, governments must take a wider view and consider opposing opinions when creating public policy. The fact that some computer program predicted an apocalyptic event is no excuse for poorly thought-out public policy. A set of algorithms created by some coder in a cubicle, based on some agenda, should not be allowed to define one's life and freedoms, nor be used as an excuse to suspend human rights and ignore constitutional protections.
Computer models are exceptionally useful in many fields, including medicine, engineering, aeronautics, physics, agriculture and more.12 The importance of being able to create 3-D models of certain phenomena to aid in visualization and understanding of complex behavior cannot be overstated. Computer software that can ferret out optimal behaviors to increase efficiency (e.g. expert systems) has important applications in many endeavours. And of course there is the entertainment value of computer games enjoyed by so many adults and children. I myself am a big fan of such models.
Problems only arise when people and institutions with a poor understanding of the limitations of computer models use the results to control human behavior, limit personal freedoms, and carry out personal agendas. As "artificial intelligence" software becomes more ubiquitous there will be an increased reluctance by people and governments to challenge the conclusions and policy recommendations of such analytical tools, thus potentially leading to very poor public policy, including health policy, even when it goes against common sense. Beware the gaslighting13 of us all.
Notes and References
- Stanley Milgram’s 1974 book, Obedience to Authority, was based on his psychology experiments at Yale in the early 1960s, which changed our perception of people’s morality and free will. It showed that most people, when ordered by an authority figure, will perform actions which would normally be considered at least undesirable and at worst morally wrong. His experiments originated from the question of why so many German people (and people from other countries such as Ukraine) participated in the persecution and killing of Jews.
- The entire demand for unquestioned belief in the results of models predicting apocalyptic climate change has been the opposite of real science from the beginning. Remember the mantra, "The science is settled." Real science welcomes questioning and debate. The claim that some 99% of scientists agree that the climate is warming and that it is the fault of human activity has been shown to be the result of statistical sleight-of-hand. The narrative that the climate will continue to get hotter as the level of CO2 increases, and that the result will be catastrophic for mankind, has not been properly subjected to rigorous scientific examination. Such debate has been stifled, accompanied by denigration and name-calling of those who question the standard narrative.
- Computer algorithms are generally classified into four types: recursive (iterative), divide and conquer, dynamic, and greedy. One type or more is used in most current predictive models depending on the problem to be solved.
- Bjorn Lomborg, in his book titled False Alarm, speaks about the complexity of many of the computer models used in predicting future climate and says, "But there are also much smaller, faster global warming models that simplify the inputs solely to carbon dioxide and a few other emissions, and model the total change to the global temperature. One of these models is called MAGICC. It was developed partly with funding from the US government’s Environmental Protection Agency, and has been used by the UN’s panel of climate scientists for all of their reports.” Note that he points out how the models focus on the parameters of carbon dioxide and a few other emissions, reflecting what I outlined above. It is perhaps telling that the model acronym is pronounced like 'magic'.
- The entire field of Artificial Intelligence (AI) - a very much overused term - is evolving rapidly and definitions within it are still fluid. A true AI system is one that learns AND can mimic human reasoning and behavior. A great recent example is the interesting ChatGPT system, which is free and available to anyone. Nevertheless, most such systems today are actually Machine Learning (ML) types. An ML system, unlike a true AI system, is an Expert System and is not meant to pass the classic Turing Test. AI and ML systems include a user interface (sensing and other input/output types), a knowledge base (a database to store states and facts, sometimes called objects and attributes), and an inference engine (the brain, with the rules of how to interact with and derive information from the knowledge base depending on the current query). Many of these are Neural Nets. Any type of AI differs from most current predictive models in that its algorithms not only change the model entities; they change the way the model itself functions. A neural net is constantly morphing. That is, the program itself learns - the inference engine modifies its own rules as to how it interacts with the knowledge base. There is great opportunity in this technology, but a danger arises when we start to base more and more decisions on the output of such systems, especially when just how a model has modified its own inference rules is unknown.
- The sun changes in terms of the number of sunspots on a regular basis. This solar cycle repeats on average every 11 years and is caused by the poorly understood magnetohydrodynamics of the sun itself. We are currently in what is called solar cycle 25, which began about 2019. It should reach its maximum around 2025. There is a correlation between the number of sunspots and Earth's climate. The Maunder minimum is perhaps the most evident historical example of this: a period of drastically reduced sunspots between 1645 and 1715 during which the Earth experienced decades of unusually cold weather. The Maunder minimum coincided with the coldest part of what is known as the “Little Ice Age” (c. 1500–1850) in the Northern Hemisphere.
- Many countries have experienced or are experiencing a decline in birth rates to at or below replacement levels. This includes countries such as Canada, the United States, most European countries, Russia, Japan, and even China. The implications of this trend toward lower fertility have yet to be properly researched, but the most important problem will be economic. A November 15, 2022, report from the International Monetary Fund (IMF) titled 'Aging is the Real Population Bomb' lays out the problem in detail.
- Recall the 'How Dare You' Greta Thunberg lecture at the United Nations. Then there is the Metronome clock repurposed in 2020 as a climate clock in Union Square, New York, that purportedly shows the remaining time to take drastic action to avert a climate disaster. Other countdown climate clocks have been erected in other cities around the world. There is even a world climate clock website that declares, "We are in a climate emergency."
- The BSE (Bovine Spongiform Encephalopathy) scare in the United Kingdom in the 1980s and later led to the slaughter of over four million head of cattle and eleven million sheep. BSE is a neurodegenerative disease found in cattle that can be transmitted to humans through the meat. The infection originated in the practice of feeding cattle a high-protein diet derived from animal sources. Essentially, the cattle were fed cattle and sheep byproducts, including spinal cord and brain tissue (no doubt this animal 'cannibalism' seemed a good idea at the time). The disease is related to Creutzfeldt-Jakob disease, is caused by misfolded proteins known as prions, and results in degeneration of the brain and ultimately death.
- A July 2020 paper on a study of warming bias in Earth and Space Science says, "The tendency of climate models to overstate warming in the tropical troposphere has long been noted. Here we examine individual runs from 38 newly released Coupled Model Intercomparison Project Version 6 (CMIP6) models and show that the warm bias is now observable globally as well."
- Dr. Judith Curry in an interview stated in a very careful way, "...the IPCC Assessment Working Group One, they also sort of rejected the climate models to some extent."
- Even physicists have trouble with their predictive models. A recent story in the Independent says, "despite more advanced computer modeling than ever, researchers recently found that particles in fusion reactions at the US National Ignition Facility (NIF) behave very differently than models said they should." Well, back to the drawing board/coding terminal. It would be refreshing some day for the climate modelers to admit a similar problem.
- The term gaslighting originates from the title of a 1938 play wherein a husband tries to control his wife by making her believe that she is going insane. His mysterious activities in the attic cause the house’s gas lights to dim, but he insists to his wife that the lights are not dimming and that she can’t trust her own perceptions. Thus it is the psychological manipulation of a person, or groups, by making them question their own perception of reality. It promotes the idea that if what you see and hear is different from what you are being told then there is something wrong with you, do not trust yourself, believe only what you are being told. For the victim it leads to confusion, loss of confidence and self-esteem, uncertainty of one's emotional or mental stability, and a dependency on the perpetrator. In today's world, as more and more real data accumulates, one must ask whether the standard narratives over climate change and pandemics are examples of such psychological manipulation. Curious people might be interested in a book titled The Psychology of Totalitarianism by Dr. Mattias Desmet.
Read related previous post: Climate Change and Rising Seas