What if they’re not crazy? Belief in conspiracy theories may be normal

At least since the publication of Richard Hofstadter’s “The Paranoid Style in American Politics,” belief in conspiracy theories has been seen as an aberrational, fringe phenomenon. But what if it isn’t? A 2014 study by Eric Oliver and Thomas Wood of the University of Chicago found that “half of the American public consistently endorses at least one conspiracy theory.” So such belief isn’t just found on the fringe. Now a new paper by a team at the University of Antwerp, led by Sander Van de Cruys, argues that it’s not so aberrational either. “The motto of the conspiracist, ‘Do your own research,’ may seem ludicrous to scientists,” they write. But a close look at the information-seeking process itself finds that it’s not fundamentally different from what scientists themselves do in making discoveries — especially when it comes to the importance of “aha moments.”

This isn’t to say there’s no flawed thinking involved in embracing conspiracy theories — but flawed thinking is itself normal among human beings, including scientists. So the question isn’t whether conspiracy theorists’ thinking is sound, but rather to what extent it isn’t, and how it differs from the thinking of people who don’t embrace conspiracy theories. This entire approach offers some good news by opening the door to better and more open-minded ways of engaging with conspiracy theories and their followers. To learn more about this paper — which is titled “Insight in the conspiracist’s mind” — I interviewed Van de Cruys, a postdoctoral researcher at the Antwerp Social Lab. Our email exchange has been edited for clarity and length.

The conventional view is that conspiracy theories and those who embrace them are aberrational. Your paper argues that they’re not, and offers a specific model for why. But first you go into some of the problems with that conventional view that recent research has brought to light. What are those problems?

The key problem is both moral and scientific. These “aberrational accounts” paint a picture of some humans (those who believe conspiracy theories) that is not supported by psychological science, namely that they are mere passive recipients of whatever (mis)information they are bombarded with. The metaphor of (mis)information as viruses in an infodemic reinforces this view of passive, gullible people who are powerless against “infection” by “contagious” misinformation.

The epidemiological analysis of the spread of misinformation that flows from this and the efforts to come up with “misinformation vaccines” (so-called prebunking) are not without value. But psychologically speaking, it is a simplification that hampers us in understanding unusual beliefs that people actively (re)construct, and that generally hang together in networks of beliefs (like conspiracy theories) rather than being built from a pile of isolated, debunkable pieces of misinformation. 

“Their search for alternative answers is genuine, and is born from the fact that their experiences are excluded from official knowledge-gathering in society.”

The immediate impetus was that I talked to and read about conspiracy believers who were smarter than conventional theories made them out to be (those theories present conspiracists as cognitively flawed beings). Their search for alternative answers is genuine, and is born from the fact that their experiences are excluded from official knowledge-gathering in society. One could say they are very empiricist in this: They conclude that bedrock facts are denied by policy makers and intellectual elites. It is easy to ridicule the theories they come up with to explain this denial, but what drives them is this curious discrepancy and the sense of insight that comes from discovering subjectively plausible patterns behind it.

So the moral problem that flows from the aberrational view is that it reinforces the sense of exclusion that is at the very root of people’s autonomous search for alternative answers in conspiracy theories.

You write that “The motto of the conspiracist, ‘Do your own research,’ may seem ludicrous to scientists.” But your paper explores “the information-seeking activities (‘research’) that conspiracists do engage in,” focusing on the role of “aha” moments specifically. So what are “aha” moments?

If the “viral” social transfer view of how people adopt misbeliefs is incomplete, we wondered: What other model do we have of how beliefs are formed? The classical competing view is that of normative reason, namely that we adopt beliefs when they are justified and true. But this is equally implausible psychologically. What this view does right is to acknowledge that humans are epistemic agents, meaning they actively search for information and construct their own mental models of the world. But we are limited creatures, limited in our experiences, in what we can do to explore the world and in the resources we use to build our models of the world. There is no way we can live up to the tall order, or indeed afford the luxury, of only forming justified, true beliefs. We’d all be dead before we could make the fully resolved judgments and actions. 

So while ultimate-ground truth is unavailable for us mortals, we do have experiences that guide us in our search toward uncovering the structure of our world. We have curiosity that urges us to actively seek out particular pieces of information, and such epistemic quests are often marked by “aha” experiences, when we suddenly get an insight that “clicks” together pieces of already existing beliefs and new information. When this happens, beliefs seem to be reinforced and they take on new confidence, as empirical research on “aha” experiences shows.

Can you give me an example of how that works?

“There is no way we can live up to the tall order, or afford the luxury, of only forming justified, true beliefs. We’d all be dead before we could make the fully resolved judgments and actions.” 

In the lab, we create “aha” experiences by presenting people with a puzzle, usually visual (think of the Dalmatian dog hidden figure) or verbal, for example: “Breakfast was excellent because the thread was sticky.” These puzzles create uncertainty, leading to an active search for a solution. Eventually, people will suddenly discover a restructuring of the input that provides a satisfying explanation for the puzzle and resolves the uncertainty experienced (they see the Dalmatian, or they understand that spiderweb is the key concept to “reorganize” the puzzle sentence above). In the best cases, this comes with an intense positive feeling, and a sense of truth that cannot be unseen: The Dalmatian will be one’s experience now, rather than the disorganized black-and-white patches one experienced before finding the solution.

Why are “aha” moments epistemically important?

Both the seeking out of information, experienced as curiosity, and the “aha” experience are limited by our particular mental models and the evidence we can gather, so they are not foolproof. The “aha” experience will depend on a so-called top-down hypothesis one has learned and can apply to the perceptual inputs. If I have experienced Dalmatian dogs before, I will be more likely to get to the solution in those distorted images.

Even though they are often reliable indicators of actual states of affairs, they remain subjective. False insights are possible and can be elicited in the lab as well. We all, scientists included, know those instances of an “aha” experience that later turned out to be wrong. These may be rare, precisely because it is pretty hard to backtrack once you’ve had this sense that you have a clear picture of the situation. “Aha” experiences feel like a distinct endpoint. One could say that science is characterized not by more frequent insight experiences, but by counterintuitively taking “aha” experiences as starting points rather than endpoints. 

So why are “aha” moments useful in helping us understand conspiracist thinking as normal — or at least as normal-ish?

Because this shows that part of what drives the scientist and the conspiracy builder is the same. It is not that the conspiracy believer “falls for” a conspiracy theory, while the rest of us (notably scientists) “discover” our beliefs through insights and evidence. We both discover our beliefs. We usually reserve the word “discovery” for tech breakthroughs or Nobel Prize-worthy scientific work. But especially for infovores like ourselves, who need to explore and gain insight into the structure of our world before we can get what we need (whether that is food, social status, sex or something else entirely), the sense of discovery or insight is as vital as those other biological needs.

If conspiracists are driven by insights, they’re not so strange. Or as I like to put it: Scientists are not so strange. They are far from the only ones with a need for epistemic agency and discovery. This is all based on the underlying theory of the Bayesian brain, which says that we’re all proto-scientists, continually trying to capture and predict the structure of our environment, but with a strong bias towards our pre-existing models.
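To make that “Bayesian brain” picture concrete, here is a minimal illustrative sketch (an editor’s toy example, not anything from the paper; the hypothesis and all probability values are invented) of Bayes’ rule updating a belief, showing why a strong prior barely moves even when the evidence does:

```python
# Toy illustration of Bayesian belief updating (invented numbers, not from the paper).
# Different observers can see the same evidence and end up in very different places,
# because the posterior is anchored by the prior.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1.0 - prior_h)
    return joint_h / (joint_h + joint_not_h)

# Hypothetical hypothesis H: "officials are hiding something."
# A clue E that is somewhat better explained by H (0.7) than by not-H (0.3).
for prior in (0.05, 0.50, 0.95):  # skeptic, agnostic, committed believer
    print(f"prior {prior:.2f} -> posterior {posterior(prior, 0.7, 0.3):.2f}")

# prior 0.05 -> posterior 0.11
# prior 0.50 -> posterior 0.70
# prior 0.95 -> posterior 0.98
```

On these toy numbers, the identical clue barely moves the skeptic or the committed believer while swinging the agnostic substantially; iterated over many clues, each person’s pre-existing model channels which restructurings are even available to them.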

“One could say that science is characterized not by more frequent insight experiences, but by counterintuitively taking ‘aha’ experiences as starting points rather than endpoints.” 

Our “aha” experiences are the only thing we can rely on in our individual quests for knowledge. They are metacognitive signs that our model-building is going well, that we have succeeded in resolving uncertainty about the world using our own thinking and actions. So the normal cycle is to experience curiosity when we come across uncertainty we feel can be resolved (as in the puzzles described above), to forage for new information and search our models, and, when we are lucky, to find cognitive closure or uncertainty-reduction in the insight experience, when we manage to restructure our impressions of the world.

The models that conspiracy thinkers start out with will differ from other people’s models, but within this thinking-frame, particular information-seeking and restructuring can lead to “aha” experiences, meaning that they suddenly reduce uncertainty relative to a prior state of uncertainty.

You note that “aha” moments “have properties that can be exploited by conspiracy theories, such as the potential for false but seemingly grounded conclusions.” What are some examples of how those moments can lead us astray?

We already mentioned that “aha” experiences do not necessarily imply complete truth, but empirical research shows that they come with a confidence-boost for that content. Interestingly, recent research suggests this can carry over to co-occurring content. Researchers used sentences like “ithlium is the lightest of all metals,” where solving the anagram (“ithlium” unscrambles to “lithium”) was used to create an “aha” experience. People subsequently tend to judge the whole statement as more accurate when they experience an “aha” for it, even though the “aha” is strictly unrelated to the truth of the sentence.

We also mentioned that “aha” experiences provide a sense of clarity or understanding that works as an endpoint in one’s information search. They have what philosopher C. Thi Nguyen has called a thought-terminating effect. They mark the moment when uncertainty is resolved, relieving you from the need for further thinking, such as trying another hypothesis or collecting additional information. 

Another interesting property is that an “aha” experience depends on something the individual has inferred or generated themselves. I might have told you that there is a Dalmatian dog in the image, but you have to reconstruct it yourself to really experience the “aha.” We know from experimental psychology that people remember things better when they have contributed to or generated them themselves, and also value such things more. It creates ownership of ideas, and that is of course important in conspiracy beliefs as well.

The “aha” experience also projects into the world what was clearly a construction of your own mind. You might say the Dalmatian is the truth of the image, because the image is created from an actual distorted photograph of a Dalmatian — but the image wasn’t sufficient evidence of that until you constructed the Dalmatian. We see that people sometimes, with great confidence, will discover things in these kinds of images that are not there, in the sense that they weren’t in the photograph that the image is derived from. All these properties help explain the “stickiness” of conspiracy beliefs that are formed by insight experiences.

How do conspiracy theories and those who spread them exploit this?

Anytime conspiracy believers use questions, partial cues or “mysteries,” like the Q-drops we see in QAnon, to recruit people instead of mere statements of “fact,” they rely on people’s curiosity and invite people to do their own searching for “truth,” collecting clues and generating the “plot” behind them. These challenges present information in a subtle, non-directive way that does not feel patronizing or manipulative (compare this to the “fact checks” that are often used, with limited effect, to debunk conspiracy theories).

“Anytime conspiracy believers use questions, cues or ‘mysteries,’ like the Q-drops we see in QAnon, instead of mere statements of ‘fact,’ they rely on people’s curiosity and invite people to do their own searching for ‘truth.'”

These tactics aren’t specific to conspiracy theories: As a student, I was encouraged to generate questions myself which had the central pieces of information as part of the to-be-constructed answers, instead of making summaries or regular notes. Knowledge acquisition is (re)constructive in this way, so it is no wonder that conspiracy circles have also evolved to use these techniques for reliable belief change.

Conspiracy theorists seem to go a step further, however, by prepackaging the insight (or answer) into the question. The prototypical form is the internet meme, often used in modern conspiracy milieus. Such memes often rely on some kind of expectation-violation that attracts attention (curiosity): The meme poses a question, but also has a pre-engineered resolution, a conclusion to be drawn by the meme consumer. It requires a cognitive contribution, albeit with minimal effort, from this consumer. So an idea is planted, but the individual does most of the planting themselves. We can only truly convince ourselves.

While the phenomenon of the “aha” moment is relatively easy to grasp, you write that “At the core of our account is the role of epistemic arcs to explain the pull of conspiracy thinking,” and that’s a less obvious concept. So what is an epistemic arc?

The epistemic arc is a concept we use to emphasize that “aha” moments don’t stand on their own, but start with a level of curiosity which creates the urge to act and seek more information to reduce uncertainty, which in turn can result in an “aha” experience, understood as a sudden, unexpected resolution of uncertainty. This is rooted in computational, mechanistic theories of what causes curiosity and “aha” experiences on the subpersonal level, which are the subject of active empirical study in psychology, neuroscience and artificial intelligence.
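One simple way to picture such an arc, offered here purely as an illustrative sketch under an editor’s own assumptions (the hypotheses and probabilities below are invented; this is not the paper’s formal model), is to track an agent’s uncertainty over competing explanations as Shannon entropy: curiosity corresponds to high but seemingly reducible entropy, and the “aha” to a sudden, large drop.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A toy epistemic arc over three candidate explanations of a puzzle
# (probability values invented for illustration).
arc = [
    ("curiosity: puzzle posed",     [0.34, 0.33, 0.33]),  # near-maximal uncertainty
    ("clue found, still ambiguous", [0.45, 0.35, 0.20]),  # slow, effortful progress
    ("restructuring: 'aha'",        [0.95, 0.03, 0.02]),  # sudden collapse of uncertainty
]

previous = None
for label, probs in arc:
    h = entropy_bits(probs)
    note = "" if previous is None else f"  (reduced by {previous - h:.2f} bits)"
    print(f"{label:30s} H = {h:.2f} bits{note}")
    previous = h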

What’s a specific example?

Epistemic arcs can be short, as in the example of the memes discussed above. Little effort is necessary here to resolve uncertainty. But they can also be longer, which means more effort, more information seeking and hypothesizing is needed, so uncertainty persists longer. There is more risk that people will prematurely break off such arcs. 

Think of a detective novel, or better yet a literary novel. While the detective novel will often gradually resolve uncertainty and give you clear frames of thinking that lead to limited, well-defined regions of uncertainty, for a literary novel you might not know whether you’ll be able to comprehend and resolve uncertainty. Here, curiosity becomes crucial: If the author can sustain the feeling that the uncertainty created will also be resolved (one can read this as building trust by enabling the reader to close other arcs), the reader will continue to “work” to resolve uncertainty.


Our impression is that conspiracy theories tend to shape people’s beliefs through shorter epistemic arcs, always requiring some epistemic agency, but the challenges remain low-threshold so the insights are accessible to many. Again, everything depends on already existing mental models of reality: They will make uncertainty manageable or not.

The information-seeking that conspiracy thinkers conduct is often belittled: It’s just internet searches to dredge up supposed evidence that supports already fully-formed beliefs. But we know very little about these acts and the epistemic arcs in which they feature, because much of this takes place in the secluded context of home internet use, using proprietary systems. There is interesting scientific research to be done on the methods and effects of online search in conspiracy believers.

You have some suggestions about how to test this explanation of conspiracist thinking.

One route is the exploratory one: Looking at conspiracy thinkers’ information-seeking strategies, either in natural settings or in controlled lab settings where we give people semi-structured search tasks and look at how they go about it. A second route is to do more ethnographic studies into what role information-seeking practices and epistemic emotions like curiosity and “aha” moments play in the development of conspiracy views. This research can rely on in-depth interviews, but also on content analysis of social media conversations in conspiracy channels. 

Experimental research will also be necessary to establish our explanation of how conspiracy beliefs are formed. We’re working on integrating challenges or puzzles in stories that contain conspiracy accounts of an event, versus control stories that present official accounts. Participants can solve these challenges on their own, leading to an “aha” moment that will reveal crucial information on the conspiracy or control story. We examine whether such embedded puzzle-solving and “aha” experiences will lead to higher judged plausibility of the stories — both kinds of stories, or only the conspiracy version.

In a correlational study, we recruit people who are high or low in conspiracy beliefs and look at whether conspiracy believers tend to have stronger “aha” experiences, even with materials not linked to conspiracy theories. The goal is to characterize conspiracy theorists’ epistemic sensitivities and how they might differ from nonbelievers.

How does your account connect with the broader observation that conspiracy theories tend to be embraced by people who feel socially excluded?

We propose to look at the epistemic dimension of this social exclusion or injustice. A common core in social exclusion is the sense that one’s actual lived experiences have no place in society, while one’s autonomous knowledge-building based on those experiences is disparaged. Conventional wisdom holds that this should be left to epistemic authorities like doctors, scientists, lawyers, politicians, engineers, etc. Social, existential lack of autonomy is hence mirrored in epistemic lack of autonomy. People deprived of epistemic agency and subjective insight in their own life will reassert it, in ways that of course also explain the exclusion. 

This doesn’t mean people will correctly discover the specific causes for their exclusion. As journalist Sebastian Milbank recently wrote: “The specifics of conspiracy theories are nonsense, but they flourish because the generalities — that we’re governed by unaccountable elites, whose interests are served by global rules and organizations — seem unassailably true.”

“People deprived of epistemic agency and subjective insight in their own life will reassert it, in ways that also explain the exclusion. That doesn’t mean they will correctly discover the specific causes for their exclusion.”

So we make a distinction when it comes to the peddlers of conspiracy theories in media or politics. They generally don’t do so from a sense of exclusion: A dominant group can sometimes claim the underdog position as a way to attract adherents from actually disadvantaged groups. As a psychologist, my focus is on the formation rather than the instrumentalization of conspiracy theories.

Your paper takes on a number of possible objections. The fourth one seems most central to me: How we “reconcile the ubiquitous idea that conspiracists have a fixed, closed mind, with the dynamic practices of world-building and discovering” that you identify as a “core feature and attraction of conspiracy thinking.”

One way to respond is to look at how dynamic conspiracy theories actually are. Because conspiracy theorists come across as extremely stubborn, it is often assumed that their thinking is maximally rigid, but there is little actual research that follows conspiracy thinkers or their theories for a longer time to see if this holds true. So we don’t really know.

Another way to respond is to emphasize that discoveries require new observations, but not necessarily a lot of change in one’s prior beliefs. Such new evidence can create a new sense of insight, showing the goodness of an explanation. Note that what counts as relevant evidence is in itself dependent on your models of the world.

Without denying the differences between scientists and conspiracy theorists, philosopher of science Thomas Kuhn has argued that so-called puzzle-solving is the default activity of scientists: Fitting new evidence within known frameworks and working out the details of a more or less fixed paradigm. Paradigm shifts are comparatively rare, and only induced by repeated, persistent violations of the paradigmatic assumptions. 

“It’s possible to see conspiracy theory as a ‘pirate science’ that’s stuck in the puzzle-solving phase, with its own conferences, its own publications, its own channels for evidence evaluation.”

It’s possible to see conspiracy theory as a “pirate science” that’s stuck in the puzzle-solving phase. We see that conspiracy circles have their own conferences, their own publications, their own channels for evidence evaluation. We tend to see an elaborate ploy in this — these are things they set up to give themselves more respectability than is warranted — but our analysis suggests there’s an authenticity to this social puzzle-solving. Let’s not forget that science has also had periods of stagnation, where dead ends were pursued that later turned out to be pseudoscientific or proto-scientific.

In your conclusion, you discuss some implications for how conspiracism can be dealt with as a social challenge. What would you suggest?

First, if conspiracists do not engage in harmful behavior, we should let them be. There’s already a strong sense among conspiracy theorists that there’s a well-organized thought police, so organized efforts to fight these ideas have little chance to succeed. 

Also, given that the label of “conspiracy theory” is sometimes used by politicians to dismiss legitimate concerns, as scientists we should avoid getting drawn into this. We can use our expertise in doing research to counter the inaccurate claims of conspiracy theorists when they risk getting adopted more broadly in society, but we should maintain clear independence from ruling parties, whether in industry or politics. For example, we can investigate the roots of epistemic and social exclusion in society, and we can be more receptive and inclusive to experiences of people who have little voice in society.

A well-functioning democratic society can deal with minority dissent, even if it gets a bit weird. Research has shown that conspiracy theories become more than a small minority view in dysfunctional societies with higher corruption. This evidence is merely correlational, so it’s theoretically possible that higher conspiracy beliefs cause corruption, but most scholars agree that the other way around is much more plausible. The COVID pandemic illustrated this: Urgency led policy makers to skimp on some civic freedoms, creating a surge of conspiracy theories.

Governments typically try to regain trust through an emphasis on transparency, but these efforts are often quite shallow. Making information available is not the same as giving people insight. We have an abundance of information in modern society, but it’s not disclosed in accessible ways or adapted to the models of listeners. In our increasingly scientifically and technologically advanced society, more and more aspects of our lives are governed by processes opaque to most citizens. This reduces our epistemic autonomy and creates a democratic deficit. Note that modern conspiracy theories often center on new technologies such as 5G or microchips or vaccines. There is no doubt these technologies have brought us great advances in health, safety and comfort. But when we fail to get people on board in our knowledge-creating system, and to allow them to get insights within this system, they will search for insight elsewhere.

This is an immense task for science communicators, educators and other experts in society, as the complexity of our knowledge only increases while we compete against low-threshold, readymade but flawed insights. If we succeed in building epistemic arcs into our science communication, as the best YouTube explainer videos do, we may be able to rebuild trust in our common knowledge-creation system of science and society. Not all expertise needs to be disclosed in this way (following all of it would entail actually becoming an expert); the aim is just to rebuild trust, and the sense that this knowledge has been responsibly built up, by showing that it is in principle accessible and comprehensible.

Finally, we can try to use the “aha” experiences that conspiracists exploit to draw people in, to instead invite people out. We can use non-directive communication, such as curiosity-evoking questions and surmountable challenges, instead of simple statements of fact, to debunk false information. This approach may be more effective with people who are already suspicious of knowledge “passed down from above,” such as conspiracy thinkers. More fundamentally, there is a worry that “one just can’t talk to conspiracists.” But a more Socratic approach of asking questions, thinking along further — saying, “Let’s suppose this is true …” — as well as discussing information-seeking methods instead of the particular contents of beliefs, might give both sides a better idea of the concerns at play. 
