
How to Avoid the Tribal Trap of Stories: A Scientific Approach to the Way We Think

The real difference between us and chimpanzees is the mysterious glue that enables millions of humans to cooperate effectively. This mysterious glue is made of stories.
—Yuval Noah Harari

In Sapiens: A Brief History of Humankind, Yuval Noah Harari describes the stories we tell and believe as the precursors of the so-called cognitive revolution that occurred in our species. Harari explains that our unique ability to create and share fiction—an ability which sets us apart from other animals—allowed us to cooperate and organize into societies. We are constantly surrounded by tales, some of which are more obvious to us than others. As Harari argues, “any large scale human cooperation … is rooted in common myths that exist only in people’s collective imagination,” whether the collective imagination in question is that of a religion, a state or even a business or corporation.

Shared beliefs in god(s), customs, laws and the value of money provide some of the foundations of the communities we know today. The power of stories—and of the belief we hold in them—lies in their ability to shape the way we think. Just as the story of the tortoise and the hare reminds us that slow and steady wins the race, shared stories lead to shared morals. They provide us with a framework within which we can parse new inputs and which helps us to make decisions.

There is ample evidence in cognitive science and psychology that people use generalizations and simplifications—which are analogous to storytelling—to develop understanding. This kind of abstract information processing system in our brains is often referred to as a mental model.

Mental models are internal representations of external reality—constructed on the basis of personal experiences and perceptions—which individuals use to filter new information. They can be seen as the unconscious lens through which we view the world. This concept, first proposed by the psychologist Kenneth Craik in 1943, has since gathered much evidence in its favor. It helps us understand the process of human reasoning. Drawing on Collins and Gentner’s 1987 work, most psychology researchers agree that people use analogical thinking (thinking based on images, comparisons, metaphors, etc.) as a cognitive process by which to “generate predictions about what should happen in various situations in the real world.”


Under this definition, Harari’s ensembles of stories can be seen as shared mental models, which created a common view of the world, thereby allowing people to organize themselves into societies. What could go wrong with these shared mental models? The tendency of many to follow dogmatic religious scriptures or nationalistic movements is a sign that we still often forget the messages embodied in our stories. Delving into how mental models are developed and adopted can offer us ways of understanding societal questions of morality, progress and human well-being.

The Power of Stories

Stories are persuasive. They play a critical role in shaping our mental models. The best communicators, teachers and preachers often use analogies to create relatable connections between things and to explain complex issues. We can see this demonstrated in art, in which images and figures of speech often prove much more powerful than straightforward prose. Religious scripture is also full of symbolism. Tales that stand the test of time—and whose messages we remember—are, more often than not, metaphorical.

Mental models are tools people use to represent or predict cause-and-effect relations. They are dynamic and constantly evolving, and—if everything goes well—they improve with learning and experience. Because of our cognitive limitations, however, our views of the world cannot account for every detail of reality and are therefore incomplete. There is just too much information out there for us to process, so we are obliged to unconsciously filter it. A predefined, shared set of stories through which to view the world is therefore a compelling tool, which helps fill the gaps in our understanding.

As Harari points out, collective representations of the world, driven by myths and stories, can bring millions, if not billions, of complete strangers together. The most obvious example of this is religion, which has been able to create communities unbounded by geographical location or language. To put it crudely, no matter what is objectively true, it is likely that believing in a horrible fiery hell will lead people to do fewer things that they think will send them there. In this case, the story of hell not only provides a shared morality, it leads people out of a violent state of nature. Religion provided a shared interpretation of reality, a mental model, which created order and laid the foundations of civil society, at a time without widespread education or today’s technology.

Stories can be useful, even if they are wrong. To paraphrase evolutionary biologist Bret Weinstein, speaking on The Joe Rogan Experience podcast, in conversation with Jordan Peterson about truth:

Porcupines can throw their quills. It’s not true. However, if you live near porcupines and you imagine that porcupines can throw their quills, you’ll give them some space. If you don’t, you may—realizing that they can’t throw their quills—get really close to one and it may wheel around and nail you with a porcupine quill, which can be extremely dangerous, because they are microscopically designed to move in from where they puncture you over time, and they can puncture a vital organ, or you can get an infection. So the person who believes that a porcupine can throw their quills has an advantage that isn’t predicated on the fact that this is actually a literal truth, right?

According to Weinstein, the story of quill-throwing porcupines serves a beneficial purpose to those who use it to interpret their environment. But how far can a wrong model actually get you? Predefined mental models have their limits. Blind faith in stories leads to forgetting why you believe in them in the first place. When the messages of the stories are lost, the mental models of individuals can become de-calibrated and derailed from the desired outcomes of understanding and moral sense. This accounts for religious fundamentalism and literal readings of holy texts, and it can also apply to other types of tribalism.

As Harari points out, “states are rooted in common national myths.” In France, towards the end of the nineteenth century, schools taught the country’s ‘national story’ (récit national), which is peopled by heroes from Vercingetorix, King of the Gauls, to Joan of Arc and Napoleon, and replete with origin myths. The intent in molding history in this way was to unify the nation. Although the French state has evolved towards championing universal values, there are still those who would rather focus on returning France to the Gauls. Forms of extreme nationalism put the tribe ahead of its morals, showing that these types of stories are also liable to be misread.

Vercingetorix throws down his arms at the feet of Julius Caesar, painting by Lionel Royer (1899), used in history textbooks in French schools despite debate among historians as to whether the scene actually took place.

Improving Our Mental Models

Although they certainly make life more interesting and fun, stories tend to be the precursors of tribalistic attitudes. Neither moral nor societal progress can come from the notion of the tribe, so we face the considerable challenge of making our mental models more resilient against the unwanted byproducts of stories. The inner workings of the mind remain a mystery to us, but we do know quite a lot about other types of models. These models exist on computer screens rather than in our brains, yet such scientific models follow the same logic as mental ones. Like mental models, they are manageable simplifications of complex concepts, based on known assumptions, which are used to represent a portion of reality in order to make predictions. They can describe and help us understand a wide range of phenomena: from how water flows in a river to the global climate. Being, by definition, incomplete representations, they can also never be perfect.

As the statistician George Box famously stated, “all models are wrong, but some are useful.” Our comprehension of reality may be flawed, but we do have the tools—now more than ever—to improve upon it. As an environmental scientist, I believe we can draw many lessons by constructing models which represent the natural world, and these lessons can be applied to our mental models, too.

Garbage In, Garbage Out

When we base our interpretation of reality on the wrong information (input), we are bound to arrive at conclusions (output) of very little value or meaning, no matter how good the model is that we are using. Before any simulation is carried out, a critical first step is to check that our data have not been corrupted and do not present any gaps. Another best practice is to make sure not only that the data are up to date, but also that they provide the appropriate input for their intended application.
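In modeling practice, these checks can be made concrete before any simulation is run. The sketch below is a generic illustration of pre-simulation input validation; the record format, the thresholds and the function name are all invented for this example, not taken from any particular modeling tool.

```python
from datetime import date

def validate_input(records, max_age_years=5):
    """Basic 'garbage in' checks on a time series of (year, value) pairs.

    A generic sketch of the three checks mentioned in the text:
    gaps, corruption and staleness; not any real model's requirements.
    """
    problems = []
    years = [y for y, _ in records]
    # Gaps: every year between the first and the last should be present.
    missing = sorted(set(range(min(years), max(years) + 1)) - set(years))
    if missing:
        problems.append(f"gaps in coverage: {missing}")
    # Corruption: values should be numbers, not None or strings.
    if any(not isinstance(v, (int, float)) for _, v in records):
        problems.append("non-numeric values present")
    # Staleness: the newest record should be reasonably recent.
    if date.today().year - max(years) > max_age_years:
        problems.append("data may be out of date")
    return problems

# A series with a hole in it fails the gap check.
print(validate_input([(2019, 0.9), (2021, 1.1)], max_age_years=100))
```

Only data that pass all three checks should ever reach the model; a clean bill of health here says nothing about the model itself, which is the subject of the next section.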

A well-known case of misinformation in climate science is the claim that global temperatures have not increased since 1998, which is taken as evidence that climate change is not happening (as this 2006 Telegraph article argues). Because 1998 was an exceptionally warm year, temperatures did appear to remain relatively stable until around 2012. The issue here is twofold. The first problem was the focus on short-term trends: examining a longer period revealed worrying, prolonged warming. The second was the choice of which data to examine: observations of a pronounced hiatus in warming were soon disputed because they did not account for the Arctic, which warms much faster than the rest of the planet. For more details, see this recent article by the National Oceanic and Atmospheric Administration.
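The short-window trap can be illustrated with synthetic numbers. In this sketch the data are invented (a fixed warming rate plus random noise), not real temperature records; the point is only that a trend that is clear over decades can be masked by noise inside a ten-year window.

```python
import random

random.seed(42)

# Synthetic "annual anomalies": a steady warming trend of
# 0.02 degrees/year plus year-to-year noise. Illustrative
# numbers only, not real climate data.
years = list(range(1980, 2020))
temps = [0.02 * (y - 1980) + random.gauss(0, 0.1) for y in years]

def trend(years, temps):
    """Ordinary least-squares slope, in degrees per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    cov = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    var = sum((y - my) ** 2 for y in years)
    return cov / var

# A short window can hide the trend behind noise...
short = trend(years[-10:], temps[-10:])
# ...while the full record recovers it reliably.
full = trend(years, temps)
print(f"10-year trend: {short:+.3f} deg/yr, 40-year trend: {full:+.3f} deg/yr")
```

The forty-year slope lands close to the true 0.02 degrees per year, while the ten-year slope is dominated by chance and can point anywhere, which is exactly why cherry-picking a window starting at an unusually warm year misleads.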


This example shows how strongly the data we select shape the way we think. You may have a perfect understanding of physics or of how the climate works and yet reach entirely wrong conclusions from the wrong information. We live in an Information Age, with more data available to us than ever before in human history. However, this also means that the growing volume of fake news can corrupt our thought processes. We have to exercise quality control over the information we feed into our mental models. Having the right data, however, does not in itself guarantee that we will obtain good results. Scientists often draw different conclusions from the same data.

The Model is Wrong

What if our information is correct, but our understanding of it is flawed? Let’s return to the porcupine example. Evidently, the model influenced by the story that porcupines can throw their quills is wrong. It is useful in some instances—when it is right for the wrong reasons—and this makes it appealing. Because of the intrinsic flaw in its parameters, however, the porcupine model is likely to lead to unexpected and counterproductive outcomes, when used in new situations. Imagine if the porcupine-fearing people were suddenly invaded by dangerous flying animals. Their understanding of reality might lead them to stay close to porcupines, in the hopes that the creatures would repel the invaders with their airborne projectiles. In such a case, this misleading model would have ceased to be useful.

Let us take another example: the phenomenon of flat earthers. Assuming that such people actually believe what they assert, their model of a flat Earth works perfectly well for them on a day-to-day basis. At the human scale, the distinction between a flat and a spherical Earth makes no difference to their lives. Flat earthers are also remarkable in their ability to fit new information to their model. The model has a breaking point, however: we have never yet seen a flat earther in space.

Accepting wrong models because they are useful leaves us vulnerable to unintended mistakes. The challenge is to detect when our models are wrong. The only way to find that out is to question those models.

Mutability

Scientists use wrong models with incomplete data all the time. What sets science apart is its ability to test itself, correct its errors and improve. A model should never be set in stone. It must be constantly probed and analyzed, using new information, and appropriately adjusted, if necessary. This not only allows the model to adapt to changing contexts, but makes it more resilient against errors. An iterative process of this kind is fundamental: it enables us to detect and correct mistakes and therefore achieve progress.
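The iterative loop described here, predict, compare with new observations, adjust, can be sketched as a toy calibration. All numbers are invented for illustration:

```python
# Toy illustration of iterative model correction: estimate an unknown
# rate by repeatedly comparing predictions with new observations and
# nudging the parameter in the direction of the error.

true_rate = 0.7          # the "reality" the model tries to capture
estimate = 0.0           # the model's initial (wrong) parameter
learning_rate = 0.5

observations = [true_rate] * 20   # new data arriving over time

for obs in observations:
    prediction = estimate
    error = obs - prediction            # probe the model against reality
    estimate += learning_rate * error   # adjust; never set in stone

print(f"final estimate: {estimate:.4f}")  # converges toward 0.7
```

A model frozen at its initial guess stays wrong forever; one that is probed against each new observation closes the gap step by step, which is the essence of the self-correcting process described above.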

One undeniable trait of stories is their ability to be passed on from generation to generation. While some ancient tales are still relevant, there is no guarantee that a centuries-old story will be applicable in today’s environment or at any time in the future. Religions generally fall short of providing a satisfactory mental model, for this reason. Religious doctrines are, by definition, rarely questioned—questioning them can even be considered taboo. Taking the doctrines for granted is often even regarded in a positive light, as an act of faith. A model that is thousands of years old is likely to exhibit dangerous flaws, especially if one is not aware of the conditions in which it was created.

Make Your Own Model

Using someone else’s model is usually a bad idea. It is difficult to fully understand the underlying assumptions of a model if you haven’t developed it yourself. Of course, you can borrow from different places to create your own framework. But if you build your understanding of the world from the ground up, it’s easier to know how your model was constructed, what purpose it serves and therefore how it should be used. Dogmatically following a political, ideological or religious set of principles leaves you prone to failure in these regards.

Take, for example, allegiance to a political party. Political parties have become a source of convenient, well-wrapped thought packages. It is remarkable—and unlikely—that people divide perfectly along partisan lines on a wide range of completely independent topics. This has particularly drastic results in two-party systems, like that of the United States. The Republican and Democratic parties have their own sets of stories, myths, heroes and key historical victories, which they use to spur a sense of emotional belonging among their adherents. Tribal politics, however, tends to lead people to fight for their side, rather than consider what they really think—to the point that it has become entirely possible to predict their positions on any given topic. This happens because we rely too heavily on models that are not our own, a practice which cannot be conducive to a constructive, healthy society.

Humans are not straightforward creatures: our minds are more complex than any computer model we can build. However, we have come to realize how important stories are in shaping the way we think. Stories are so ingrained in our psyches that it has become difficult to recognize that some of them even exist, or are having an impact on us. It is essential that we strive to do so. Our tendency to sacralize sets of stories of all kinds is a significant threat, leading to counterproductive results and human suffering. Becoming more conscious and aware of our thought processes—including their flaws—is not an easy task. However, the scientific method can help guide us in this endeavor. In Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Steven Pinker argues that knowledge can be used to enhance human flourishing. Questioning how and why we know what we know—or what our mental models are made of—is bound to enhance our understanding of reality and help us make better decisions, which promote human well-being.

