Every once in a while, a big, bold new theory rises up and takes over a whole field, or even a whole domain of knowledge, and promises to explain darn near everything. General relativity and quantum mechanics were 20th-century examples from physics. Today, a concept called the “free energy principle” is rapidly marching across neuroscience and even the social sciences. “Free energy” isn’t a new way of talking about perpetual motion machines. Instead, it’s a principle that describes the ways in which living systems maintain dynamic equilibrium – that is, fight against entropy. And it might just apply to culture – and even religion.
The free energy principle’s main champion and formulator is the British neuroscientist and psychiatrist Karl Friston. In a 2010 paper, Friston outlined his idea that the human brain works like a Bayesian inference machine whose superordinate prior is just its own existence. That is, the core prediction that the brain is always seeking to confirm is that it exists – that it hasn’t died yet.
Like any good Bayesian, the brain is constantly seeking to match its internal models with incoming data, updating its beliefs about the world on the basis of shifting evidence and new information. But Friston argues that the brain – like all living things – not only shifts its beliefs in relation to its environment, but also seeks to change its environment in order to match its beliefs or internal models. He calls this “active inference.”
Active inference is critical for maintaining the core beliefs that define propositions describing the conditions that must be met if an organism is going to stay alive. That was a complicated sentence, so I’ll rephrase. The brain can happily update predictions such as “it’s raining outside” on the basis of incongruent evidence – for example, if you look out the window and see that, in fact, it’s sunny and birds are chirping. But other predictions are reflective of the fundamental conditions that the body needs to stay alive – to resist entropy. When the evidence doesn’t match these models, the brain can’t just cheerily change its models. It has to act on the environment in order to change the data.
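The ordinary, belief-updating half of this story can be sketched in a few lines of code. This is just an illustration of Bayes' rule applied to the "it's raining outside" example; all the probabilities are invented for the sake of the demo.

```python
# A minimal sketch of ordinary Bayesian belief updating, using the
# "it's raining outside" example. All numbers are made up for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one observation."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Prior belief: 70% chance it's raining (say, the forecast said so).
p_rain = 0.70

# Observation: sunshine and chirping birds. Such a scene is very unlikely
# if it's raining (5%) and very likely if it isn't (90%).
p_rain = update(p_rain, likelihood_if_true=0.05, likelihood_if_false=0.90)

print(round(p_rain, 3))  # → 0.115: the belief collapses toward "not raining"
```

This is the kind of prediction the brain can cheerfully revise. The next section is about the predictions it can't.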
For example: if the brain has the prior belief that “an organism like me occupies a temperature range of 97-99 degrees Fahrenheit,” but then discovers that in fact its body temperature is 96 degrees, it won’t just shrug (or the brain-y equivalent thereof) and conscientiously revise its model. If it did, you would die of hypothermia, and then the brain’s single most important model – the model that predicts that the brain exists and inhabits a living body – would be falsified. In technical Bayesian terminology, this would be No Good.
So instead, the brain tells the body to go inside from the cold, sit near the heater, and drink a hot cup of cocoa. Pretty soon, your body temperature has recovered, and the Bayesian prior predicting that the body will occupy a temperature range between 97 and 99 degrees is once more nicely corroborated by the data.
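The contrast with ordinary updating can be made concrete with a toy loop. Here the prior ("my temperature is 97-99 °F") is held fixed, and when the data violate it, the system acts on the world until the data fit the model again. The numbers and the 0.5 °F-per-step "cocoa" increment are invented for illustration.

```python
# A toy model of active inference, using the body-temperature example.
# The prior is fixed; the *data* get changed until they match it.

PRIOR_LOW, PRIOR_HIGH = 97.0, 99.0

def prediction_error(temp):
    """How far the observed temperature falls outside the prior range."""
    if temp < PRIOR_LOW:
        return PRIOR_LOW - temp
    if temp > PRIOR_HIGH:
        return temp - PRIOR_HIGH
    return 0.0

def active_inference(temp):
    """Act on the world (warm up) until the observation matches the model."""
    steps = []
    while prediction_error(temp) > 0:
        temp += 0.5          # heater, cocoa, etc.
        steps.append(temp)
    return temp, steps

final, steps = active_inference(96.0)
print(final, steps)  # → 97.0 [96.5, 97.0]
```

Note what the code never does: it never touches `PRIOR_LOW` or `PRIOR_HIGH`. That refusal to revise the model is the whole point.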
In all cases, the goal is to get the internal models to match external reality, or to minimize the gap between model predictions and data. (This is why the free energy principle is often thought of as an offshoot, or maybe culmination, of predictive-processing models of the brain.)
Surprise and Free Energy
“Free energy” is, roughly, a measure of the extent to which internal models about the external world are inaccurate. The process of optimizing our models of the world is, then, the minimization of free energy. Free energy, in turn, always places an upper bound on “surprise,” which is a quantity that has nothing to do with birthday parties. Instead, it can be thought of as negative model evidence or prediction error. From Friston’s 2010 paper:
minimizing surprise is the same as maximizing sensory evidence for an agent’s existence, if we regard the agent as a model of its world.
Which clears everything right up, doesn’t it? Friston is widely regarded as almost completely impenetrable, but also (unintentionally?) often funny:
Entropy is also the average self information or ‘surprise’ (more formally, it is the negative log-probability of an outcome). Here, ‘a fish out of water’ would be in a surprising state (both emotionally and mathematically). A fish that frequently forsook water would have high entropy.
Here we have one of the most prestigious, prolific, and influential scientists in the world telling us, quite soberly, that a dead fish is merely a regular fish that has a lot of entropy. This is a good example of the way that reading about the free energy principle sometimes feels like a mind-blowing revelation of a deep truth about the structure of reality itself, and sometimes like an extremely jargon-intensive press release from the Department of Obvious Things.
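For what it's worth, the math behind the fish joke is genuinely simple: "surprise" is just the negative log-probability of a state. A quick sketch, with made-up probabilities for the fish's two possible situations:

```python
import math

# "Surprise" in Friston's sense is the negative log-probability of an
# outcome (its self-information). The probabilities below are invented.

def surprise(p):
    """Self-information of an outcome, in bits."""
    return -math.log2(p)

p_fish_in_water = 0.999   # typical state for a fish
p_fish_on_land  = 0.001   # atypical (and, eventually, fatal) state

print(round(surprise(p_fish_in_water), 4))  # → 0.0014 bits: unsurprising
print(round(surprise(p_fish_on_land), 4))   # → 9.9658 bits: very surprising
```

Entropy is then just surprise averaged over time, which is why a fish that "frequently forsook water" counts as a high-entropy fish.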
According to free-energy theorists, living systems at all scales are bounded by Markov blankets, or abstract information membranes that separate inner states from outer ones. In free-energy models of cognition, the job of Bayesian inference is to get a good predictive grip on the actual causes of sensory impressions that enter the Markov blanket from the outside. In a Kantian way, the organism doesn’t have any direct access to those causes – it only has access to the sense impressions that they cause. So its job is to constantly make informed guesses about what’s causing those impressions. If it swigs a glass of water and tastes salt, then it “guesses” that the water contains salt. Even though it can’t be 100% sure this guess is true, it can act as if it were true, and see what happens next. In this way, the interior of a Markov blanket is always updating its models of the world, trying to maximize evidence for its single most important model prediction: that it exists.
Where culture and religion come into this is that a number of free-energy theorists think that the entire biological world is Markov blankets all the way up and down, from the tiniest cells to the biggest human societies. This means that, just as the brain is a Bayesian inference machine trying to maximize evidence for its own existence (and thus avoiding states of hypothermia, glucose shortage, oxygen deprivation, and so forth), a nation-state (for example) is also a Bayesian inference machine trying to maximize evidence for its own existence.
So when a nation-state perceives conditions that are incompatible with its core model – such as hordes of invaders massing at the borders with spears, siege engines, and thick Scandinavian accents – it acts to alter the external world in such a way that the incompatible data is “corrected” to fit the model. For instance, it sends its army to the border to kick the invaders’ butts. The existential threat eliminated, the nation’s own existence, as a prior hypothesis, has been comfortingly corroborated.
Now, active inference is the key to maintaining low-entropy conditions all the way up and down the Great Chain of Markov Blankets. When a bacterium perceives that its glucose levels are low, it does not correct its “model” to encompass low-glucose states as being compatible with its own existence. Instead, it whips its little flagellum around, or whatever, and goes out and devours something. And a social group does the equivalent: it goes and kills the invaders, or signs the trade treaty, or whatever.
So we can think of optimizing internal models on the basis of external evidence as the rough equivalent of the search for objective truths, or what some philosophers call “mind-independent” truth. When groups of scientists sit at lab benches and try to pinch and poke the universe into giving up its physical secrets, they are – in a sense – the instruments of our society’s attempts to neutrally update its priors. As scientists, we’re ideally trying to figure out what’s actually out there, beyond our Markov blanket(s) – independent of any of our preferences, beliefs, or ideologies, no matter how dearly held.
But this is waiting for our beliefs to be informed by the external world. That isn’t active inference. Active inference is more like holding fast to beliefs, even to the point of altering the world in order to fit the beliefs. So, at the level of culture, the equivalent isn’t science, but religion and religion-like phenomena, like nationalism and binding symbolic commitments such as marriage.
Faith-Based Predictive Coding
Think about it: by definition, a nation-state does not exist objectively the way that, like, calcium carbonate does. It isn’t an objective cause of sense impressions. Instead, it’s a kind of ideal that – by its very nature! – induces people to act in ways that alter the world so as to corroborate the prior hypothesis that it exists. This is why symbols are so important for countries, religions, and marriages: they condense the entire predictive package into a high-density semiotic nexus that isn’t descriptive, but prescriptive.
An example: the American flag doesn’t represent anything objective in the external universe, the way that the chemical formula for calcium carbonate does. It is not a descriptive sign. Instead, the American flag is prescriptive – it calls for active inference. Grammatically, its encoded content is an imperative: “Believe that the United States exists, and act in ways that cause this belief to meet with good model evidence.”
(Admittedly, this language is not as poetic as Lincoln’s first inaugural.)
To wrap up, let me happily provide fodder for the critique that the free energy principle tries to explain everything, and so ends up explaining nothing: I think the free energy principle, if applied to human cultural systems, sheds some important light on the differences between ideological orientations.
Roughly, the ideological system we call “liberalism” is one that has, by and large, focused more on the goal of updating cultural priors to match the probable causes of external data. This is why science has been, and in some key ways always will be, a liberal enterprise: it depends on scientists insulating themselves from ideology, religious beliefs, partisan identity, and every single other source of motivated cognition. Liberalism is deeply entwined with our collective attempt to maximize the descriptive accuracy of our statements about the objective world.
On the other hand, conservatism – in which category I’m including most forms of religion – is the ideological expression of the drive to alter the data to match the model prediction that the social system will persist or live on. This is why conservatives care so much about symbols, like the flag and national holidays. Again, these symbols are not descriptive. They’re prescriptive. They are condensed, semiotic motivations for action, not neutral descriptions or references to any objective state of affairs. They have what followers of philosopher Elizabeth Anscombe call a world-to-mind direction of fit: instead of trying to conform ideas to external reality, they conform reality to the ideas.
In other words, cultural active inference is a conservative thing (by and large).
This is why marriage, for example, seems to be largely the ideological preserve of conservative values. The promises spouses make to each other in wedding vows are constantly being falsified, in big and small ways. Instead of caring for a spouse with a nasty flu, you go out for a wild night with your bro-friends from college. Instead of forsaking all others, you harbor a sweet crush on your cute officemate. And those are just the innocuous examples. The fact that the ideals of marriage are never perfectly met, and are often completely contradicted by reality, means that the liberal side of the human continuum is often quite skeptical of it.
Thus, the Atlantic magazine – a centrist vehicle for an essentially liberal ideology – is prone to running articles that argue that marriage is a bum deal for almost everyone and should be radically reconsidered. But conservative traditions everywhere, from Jewish Orthodoxy to conservative Protestantism to traditionalist Hinduism, emphasize marriage as a strongly normative value. This is partly because their worldviews already emphasize active inference, or – to put it bluntly – “ignoring evidence in favor of prior ideals.”
This strategy sounds insane to the scientifically minded set. The scientistic mindset asserts that the whole point of cognition is to get a better grip on the objective truth about the world, our preferences and hopes be damned. And this is absolutely correct when it comes to science.
The thing is, human society would not work at all without the active inference strategy of prioritizing ideals over evidence in certain domains. To pick an example, no one would ever have founded the United States, because back in 1773 the objective evidence would have unequivocally shown that there was no such thing as the United States. If the Founders had simply adjusted their models to match the objective facts, the Revolution would never have ignited. Poof! – no new country. In order to create the country, they had to move to a prescriptive epistemology. Same thing for Gandhi’s rebellion against British control, or Mexican independence from Spain, or even the establishment of a local school district.
This selective way in which prescriptive ideals have – must have – primacy over descriptive facts in human cognition simply reflects Friston’s argument that the purpose of cognition, biologically speaking, is not the acquisition of true beliefs, but the maximization of model evidence for the existence of the cognizing entity. That is, we don’t think strictly in order to become more knowledgeable, but in order to survive and adapt in a difficult world. The creation of symbolic identities and ideals is just a uniquely human manifestation of that function.
I’m not saying liberal people don’t get married or feel patriotic, or that conservatives never become scientists. They do, and they do. Often. What I’m saying is that the ideological poles can usefully be seen, at the level of averages, as expressions of two crucial tendencies, both of which any society – and any living system – needs: on the one hand, to update models to better fit reality, and on the other, to update reality to better fit the models.
Without either one of those tendencies, you know what comes next, don’t you?
A lot of entropy.